Stats
34K views 408 downloads 51 bookmarked
Posted
13 Oct 2017

Comments and Discussions


Brilliant work! I like it very much and tried to use it in my own solution.
But after I referenced the OuelletConvexHullCpp.dll assembly from my C# project, it failed at runtime because the assembly is not signed with a strong name (while my project is and requires it). I had no luck post-signing your assembly with a tool.
So could you please sign the assemblies for easy use in other solutions, or provide information on how to sign them correctly?





Sorry for the delay. I was just browsing and realized that there was a message, but I never saw the notification for it.
I don't know how to sign an assembly, so I can't help you much on this. But you have all the code; you should be able to do it yourself, I think. The C++ project I did for the Convex Hull is the only C++ project I have done in 15 years. My C++ knowledge is quite old and partly gone from my memory.
Sorry for not being able to help you more on this one.
Good luck,
Eric
modified 16Mar18 7:24am.





Thanks, and no worries. Same with my old C++ knowledge here. I guess I'll just have to install VS C++ and dig a little deeper then.





Great implementation. Easy to wrap. 5 minutes (even less). 1 for explanations on site.





Thanks for your vote and your feedback.
I'm not sure I understand what you mean by "1 for explanation on site"...
Do you mean:
- It would have been better if I had added a section on how to install/use the code in your own project?
- Or that comments are missing in my code (they are almost completely absent) and it would be nice if I added some to make things a little bit clearer?
modified 11Dec17 12:12pm.





The "1 for explanation" was a mistake... I don't know what I meant... I wanted to say the explanation is great, and I gave it a 5; if I could, I would have given more.
Thank you very much





Fantastic work, so great that I would like to include some of your code in an open source project I am working on. Would you be willing to release this, or just the Ouellet AVL v2 code, under an MIT license? Naturally I'd be happy to credit you and include a link back to this article (or anywhere else you like).
Thanks,
Adam





No problem with me.
You can take any portion of the code without any problem. I share it to be used.
I would be grateful if you could mention my name or a link to this article somewhere in your code.
Thanks and good luck!
... and if you agree, let me know when you have something that works!





When you write:

protected override bool IsGoodQuadrantForPoint(Point pt)
{
    if (pt.X > this.RootPoint.X && pt.Y > this.RootPoint.Y)
    {
        return true;
    }
    return false;
}

you could just write:

protected override bool IsGoodQuadrantForPoint(Point pt)
{
    return pt.X > this.RootPoint.X && pt.Y > this.RootPoint.Y;
}
degski
modified 20Nov17 18:18pm.





Yes! Thanks!
I will modify my code accordingly.
It should not affect performance, because the compiler should produce the same code in both cases, but it is shorter with the same clarity. It is just better.
I just sent myself an email at work and will do it in the next few days. It should be visible before the end of the week.





You write: "This advantage comes from the fact that we are using quadrants, and also, because quadrant Convex Hull points are always sorted.".
I'm thick, could you please explain what you are doing exactly (cross product?) to perform a Quick Discard? In particular, what does the "... because quadrant Convex Hull points are always sorted ..." have to do with the quick discard?
Have a good day,
degski





Hello degski.
It is because of the quadrants and because points are always sorted (whether an array or an AVL tree is used to store convex hull points):
In the second pass (2/3) of the algorithm, we try to insert each point at its proper place (to see whether it could become a convex hull point). Within each quadrant, hull points are sorted by X coordinate.
Note about using the X coordinate:
You can use X alone to know where to insert. If a new convex hull point is found, you have to check and/or invalidate in both directions starting at the newly inserted point.
When you want to find the proper insertion place for the current point (say point "P"), you first find the proper quadrant. Within that quadrant, you start a dichotomy (binary search) to find the insertion place. At each iteration, you get closer to the exact insertion point, which greatly increases your chances of invalidating "P" (in general use, only a small fraction of points are convex hull points). Statistically, each iteration raises the chance of rejecting "P" with a simple comparison, where no calculation is required at all. Without quadrants or dichotomy, you could not discard an invalid point that cheaply. I'm not sure, but I suspect that any other algorithm would require some calculation to know whether a point is inside or outside the hull. Because you know the limits (given by the quadrant) and because of the sorting (moving toward the proper place while increasing invalidation chances), you gain the ability to quick discard.
Hope it is a little bit clearer. I will think tonight about a way to explain it more clearly; if I find one, I will potentially add it to the article.
Also, you can look at the code in "OuelletConvexHull", in "QuadrantSpecific[1-4].cs", at the line with the comment "// No calc needed". That "if" is the one used to quickly reject. Perhaps it is easier to understand by looking at the code?
Hope it helps?





I purposely never looked at your code, just to have a clean slate (in my mind) as to how I was gonna approach an implementation.
What I understood (I assumed you implied), and therefore implemented, is to quickly discard a point based on the cross product of the new point and the two points defining the quadrant, i.e. is it out(side) of the triangle formed by the quadrant's defining points and the root point of the quadrant... This avoids doing a search for a whole lot of points (what you call dichotomy, I would call it std::lower_bound in C++ STL parlance).
Your code doesn't seem to be doing the above; could you please confirm?
degski
modified 20Nov17 17:52pm.





I'm using the cross product, but not for quick discard.
Quick discard is accomplished by a quick comparison between the point to be added and the current point in the dichotomy search.
For example, in quadrant 1, say max X = 10 at (10, 0) and max Y = 10 at (0, 10); I took the origin as the quadrant root for simplicity. Suppose the existing hull point in the center of the container is p(8, 6) and the point to add is q(6, 6). If q.X <= p.X and q.Y <= p.Y, then we can easily discard q because it can't be a hull point. No cross product is necessary at all. As you can also infer, this is due to the quadrant; otherwise it wouldn't be possible. We know where the point is located in space with regard to its quadrant's hull points (inside or outside). It is also required to have the points ordered (a must for dichotomy anyway), but the ordering is also what lets the algorithm merge toward the potential insertion point while increasing the chances of discarding the point without having to compute a cross product.
The cross product is only mandatory when the point to insert falls inside the rectangle defined by the two neighbors between which it would be inserted. Otherwise no cross product is necessary; a simple comparison suffices.
About dichotomy: dichotomy requires a set of ordered points. You pick the middle one and compare; higher goes to one side, lower to the other. You then pick the middle point of the side where the comparison brings you, and repeat until you reach the destination. This is what brings the log(h) into the complexity. In fact, if you do not use a tree and decide to go with an array (or an array equivalent like std::vector), the algorithm will not be exactly log(h), because you have to account for moving array items to make room for the newly added point. For that reason, a balanced-tree-based container is preferred for managing convex hull points.
modified 20Nov17 23:17pm.





Eric Ouellet wrote: Quick discard is accomplished by a quick comparison between the point to be added and the current point in the dichotomy search.
I think I get it now: while you are doing the binary search, you compare the new point with the probed point and discard it (if possible) before you have even found the lower_bound.
What I'm talking about is to (potentially) discard the point (by calculating the cross product) even before starting a search at all. The cross product is cheap and doesn't trash the cache, contrary to the search.
My cross product (technically the perp dot product) check is implemented like so:

bool is_hull_point ( const value_type & a_, const value_type & t_, const value_type & b_ ) const noexcept {
    return ( b_.x - a_.x ) * ( t_.y - a_.y ) < ( b_.y - a_.y ) * ( t_.x - a_.x );
}

I would imagine the left and the right sides (of the < sign) being executed in parallel.
degski
modified 21Nov17 10:53am.





Quick discard is what I explained.
Trying to discard the point using the cross product is not quick discard. An optimization using the cross product with the first and last points of a quadrant (the quadrant limits) is what I call PCZB.
It is possible to use both: PCZB first, and then quick discard for points that PCZB did not eliminate. Quick discard could then be used while iterating in the dichotomy. But if you use PCZB before quick discard, then statistically there is less chance that quick discard will be a real advantage. I wonder whether PCZB alone is better than PCZB + quick discard; I'm not sure. You are probably right that calculating PCZB costs less than using quick discard to reject points. It could be enough, and possibly more efficient in most cases, to do only PCZB. Tests with both generators (circle and throw-away) could be used to get an idea of the differences and of which optimization, or mix of them, is better. But they would only show statistics based on predefined cases, which could differ from real user usage.
I'm pretty busy (work and home) and do not have time to run those tests, but the ideas are there. If you ever find a better way to optimize, it would be great to share it, and I would really like to see the results.
About the line of code... Wow! I think it is great. I don't know anything about "const noexcept"; I have to update my C++, but I prefer to invest time in my C# over my C++. About parallelism, I'm not sure; I don't know CPU architecture well enough. It would require 2 ALUs, I think (but perhaps ALUs don't handle doubles? Perhaps in parallel... ???). But it should be cheap compared to the dichotomy, because the dichotomy increases the risk of having to fetch memory that is not cached in L1/L2.
I have to go...
Talk to you later...





Although that guy is a putz about the Wikipedia reference, when doing a Google search for "Convex Hull algorithm", this article appears on the first page of results. That means this should get plenty of exposure and traffic.
Also, you might want to consider adding C# to the tags listed for the article.





Thanks Rick!
I just added the C# tag. You are right, I forgot it, and it is pretty important.
Just FYI, I put a lot of emphasis on Wikipedia because that is where I started in 2013, when I had to implement a Convex Hull algorithm in the project I was working on. Knowing each and every algorithm is important in order to choose the one you want to use and/or implement.





Quite apart from the outstanding performance of the Ouellet algorithm itself shown in the benchmarks, I'm really happy to read this highly elaborated and well-structured project article, which teaches me a great deal of helpful information and considerations.
Even if I happened not to use this algorithm and its code in my tiny project (actually, I'm going to use it), just reading the pages of your convex hull project helps me tremendously.
I searched for 'convex hull algorithm in C#' and found the link to the first version of this project. As a newbie to the field of geometric algorithms, I think I'm lucky to have found this performant, ready-to-use algorithm project first, before getting tired of reading too-light or too-heavy pages on the many algorithms available on the internet, and before spending too much time trying to find a C# library or porting one to C#.
The excellent diagrams and easy-to-understand text on the old project page are really impressive.
And the careful and thorough tests, analysis, and detailed, informative considerations in the Optimization parts are just superb.
Especially the Optimization section of this page is a crazily exciting and informative read for me. There were so many things to consider and test!
I'm learning. It feels like what I'm seeing is a very good model of how to develop a practical algorithm and to 'present' it to others who want to get to know and use it.
So I really appreciate your hard work, the project, and the article.
I hope this algorithm will be listed on the Wiki page soon and utilized by many.





Wow! Thanks a lot again. I'm really happy you appreciated it.





Another fine article
Just because the code works, it doesn't mean that it is good code.





Thanks again





I found this article really fascinating and will try out your algorithm, so thanks for putting this together. I just have a short comment on the Voronoi/Delaunay diagram speed and storage requirements that you posted in your article.
This Wikipedia article (Voronoi diagram - Wikipedia[^]) claims that a Voronoi diagram in d dimensions containing n points requires O(n^(0.5*d)) storage space.
As for the speed of the Delaunay diagram, there exists a divide-and-conquer algorithm that in the best-case scenario can run in O(n log log n), which is ridiculously fast (see here: Delaunay triangulation - Wikipedia[^]). This will get you the convex hull for free.
As for improving speed by using lower-level compilers, it might be worth looking into what professional companies use in their code, such as what Matlab does for its matrix multiplications:
performance - Why is MATLAB so fast in matrix multiplication? - Stack Overflow[^].
Kenneth





Thanks a lot, Kenneth, for your support.
I made some corrections (currently in draft), but I'm not sure about one thing:
For the Voronoi diagram: if d dimensions containing n points require O(n^(0.5*d)) storage space, then when dimensions = 2 we get n to the power 1, which is n. But with a formula like that, it is surely not an in-place algorithm (swapping points in place). Only an in-place algorithm can claim to take O(n) space, I think. What did I get wrong? Do you know? It can't be n without being "in place"?
About MATLAB, I don't know. But I would construct a dependency graph where the leaves would be independent equations, and recurse bottom-up using multithreading. At least, for what I know about matrices, which is very little.





In a Voronoi diagram, each point needs to store a polygon of its influence and a binary tree of its closest neighbors. So I read O(n) as the number of class-object instances stored for each point.
As for Matlab, the matrix calculation is just an example, but it does have convex hull algorithms as well. Matlab uses a Delaunay diagram as input:
Convex hull - MATLAB[^]. If you want other people to use your algorithm, you should test against other professional packages.







 Best C++ Article of October 2017 : First Prize
