|
Hi,
how about this:
1. start with "infinite" values for dLeft, dTop, dRight, dBottom;
2. consider the rectangle from (Xref-dLeft, Yref-dTop) to (Xref+dRight, Yref+dBottom);
3. now iterate over all segments and reduce dLeft, dTop, dRight, dBottom so they just match your criteria
this should meet a), b) and d);
you can amend step 3 to try to get c) and/or e).
Refinement: when you have a choice of how to keep a segment outside your rectangle, provide either a recursion (might be too costly) or a backtracking algorithm to analyze the different alternatives.
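For illustration, a minimal Python sketch of steps 1-3, with two big simplifications: segments are reduced to their endpoints (real segments would need clipping against the current rectangle), and the choice from the refinement is resolved greedily instead of by backtracking:

```python
import math

def shrink_rect(xref, yref, points):
    """Greedy sketch of steps 1-3: start with 'infinite' margins
    (dLeft, dTop, dRight, dBottom) around (xref, yref) and, for every
    obstacle point that falls inside the current rectangle, shrink one
    side.  Points stand in for whole segments here, which is a big
    simplification -- real segments would need clipping."""
    left = top = right = bottom = math.inf
    for x, y in points:
        dx, dy = x - xref, y - yref
        # inside (xref-left, yref-top)..(xref+right, yref+bottom)?
        if -left < dx < right and -top < dy < bottom:
            # The refinement's choice point: push the point out
            # horizontally or vertically.  Greedy rule: keep the larger
            # remaining margin (a backtracking version would explore
            # both branches instead).
            if abs(dx) >= abs(dy):
                if dx < 0:
                    left = -dx
                else:
                    right = dx
            else:
                if dy < 0:
                    top = -dy
                else:
                    bottom = dy
    return left, top, right, bottom
```

Whether the greedy choice loses much against full backtracking depends entirely on the scene; for dense scenes the processing order of the obstacles also matters.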
|
|
|
|
|
Yes, this was my first thought when I bumped into this ugly problem.
I already tried it and it does not work properly; that's why I came here for help.
Most of the time I have lines starting exactly from (Xref,Yref), or crossing that point, or so close to it that the resulting rectangle is too small - the closeness thing: if I were just a little bit off (Xref,Yref) the rectangle might be bigger.
The second issue: granularity. All point coordinates are double precision - I did a "step up" based on (Xmax-Xmin)/something big - but it looks very ugly when you zoom in on the scene to fit the result.
And the third issue: performance - I have roughly 30K-40K lines per scene and I have to find ~700 enclosed rectangles. I've managed to break the scene into smaller pieces, but it still takes a lot of time.
I observed that selecting the "proper" Xref/Yref point makes all the difference - the more free space around it, the better the result (actually this sounds like a problem in itself) ... but I cannot find a way to select it (for now I'm fiddling manually with the coordinates).
Edit: "inside" the scene means inside the culling polygon around the scene
modified on Sunday, December 27, 2009 9:54 AM
|
|
|
|
|
I have given it some thought; this is definitely no easy problem. I'm starting to think that there is no polynomial algorithm for this...
It would have been easy if your lines were true lines rather than segments... but they aren't.
But there might be shortcuts. What more is known about those line segments? Do they, on average, cross most of the scene, or are they mostly tiny? Are they usually connected to each other or usually separated?
How much can we relax condition c)?
|
|
|
|
|
I know this is a hard one ... I've been trying to find a workable solution for a week now.
So ... they are segments indeed - nothing to do about that. But any idea for true lines is also welcome. (I've tried something where each polygon is just an intersection of half-planes plus a lot of orientation vectors, but nothing has come of it yet ...)
What do these segments represent? Shapes - 2D slices through complex mechanical 3D objects or assemblies, manually retouched (rightfully or not). Or, in the worst case, a series of slices one over the other in the same spot (actually this is more often the case - I have very few simple shapes). There is nothing specific about them, in the sense that they are either closed polygons or not, concave/convex or not, or undefinable. They are somehow grouped (per initial object), but don't follow any logic in the order of definition. I found simple rectangles defined by 4 segments in 1,2,3,4 counterclockwise order starting from the top as often as in 1,3,2,4 order (side - opposite side). Arcs/circles/curves are defined as a fragmented series of segments, but not necessarily in order. Don't get me started on the more complex shapes. But I did manage to order them so they follow a logical order (ccw from Ox), and where they intersect I broke each of them in two, so I can have only adjacent polygons and not crossing ones. (The actual definitions for these shapes come from some dxf files - if it matters.) But I still have single segments not connected to anything else.
As for condition c) ... that is the biggest problem: I need the biggest rectangle where I can fit something without touching anything else.
Where I can be a little flexible is the (Xref,Yref) point. This is currently the geometric center of the culling polygon that encloses the original object.
|
|
|
|
|
Hi,
here is another idea:
1. before drawing the segments, create an imaginary grid, where each cell has a certain height and width, say 64 pixels.
2. place the segments, and mark the grid cells that get (partially) covered by them.
3. search for an unmarked grid cell, and then look for the largest rectangle of empty grid cells containing it;
4. only then enlarge the rectangle using actual coordinates instead of grid indices.
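Steps 1-3 can be sketched in Python; the rasterizer here just samples along each segment (a crude stand-in for a proper line-drawing routine, and the names are made up), while the empty-rectangle search uses the classic largest-rectangle-in-histogram trick row by row:

```python
def mark_cells(segments, cols, rows, cell):
    """Step 1+2: mark the grid cells touched by each segment, by
    sampling points along it (a crude rasterizer, two samples per cell,
    good enough for a sketch)."""
    blocked = [[False] * cols for _ in range(rows)]
    for (x0, y0), (x1, y1) in segments:
        n = max(1, int(2 * max(abs(x1 - x0), abs(y1 - y0)) / cell))
        for i in range(n + 1):
            t = i / n
            cx = int((x0 + t * (x1 - x0)) / cell)
            cy = int((y0 + t * (y1 - y0)) / cell)
            if 0 <= cx < cols and 0 <= cy < rows:
                blocked[cy][cx] = True
    return blocked

def largest_empty_rect(blocked):
    """Step 3: area (in cells) of the largest all-empty rectangle,
    via max-rectangle-in-histogram per row -- O(rows * cols) overall."""
    cols = len(blocked[0])
    heights = [0] * cols          # consecutive empty cells above, per column
    best = 0
    for row in blocked:
        for c in range(cols):
            heights[c] = 0 if row[c] else heights[c] + 1
        stack = []                # (start_col, height), heights increasing
        for c, h in enumerate(heights + [0]):
            start = c
            while stack and stack[-1][1] >= h:
                start, sh = stack.pop()
                best = max(best, sh * (c - start))
            stack.append((start, h))
    return best
```

Step 4 (growing the rectangle back to real coordinates) is where the granularity complaint below bites: the coarser the cell, the more slack is left to recover.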
|
|
|
|
|
If I were thinking in bitmaps/pixels this might work. But I don't have pixels, I don't have a moveto/lineto/floodfill algorithm, I only have coordinates ... a math approach is what I'm hoping for ... if one exists ... however complicated that might be ... I will think about performance after that.
Think of this problem as an "inverse"/"reverse" convex hull problem. There you have to find the smallest polygon that encloses a scene; I need the biggest free one inside the scene (sort of, anyway).
|
|
|
|
|
Hi,
third, partial and final attempt:
assume an unknown point px,py is inside the rectangle you want;
if you were to apply the transformation:
x' = 1/(x-px)
y' = 1/(y-py)
then (px,py) would map to infinity, and you would really be looking for a convex hull.
Maybe what you should do is try a couple of (px,py) points in the neighbourhood of your point of interest.
BTW: the problem of course now is that your shapes all look different, so it may be more difficult to locate their extreme points.
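A tiny sketch of the transformation on plain point lists. One caveat worth stating up front: this per-axis reciprocal is only a heuristic, not a true geometric inversion, and points on the axes through (px,py) blow up, so they are skipped here:

```python
def reciprocal_map(points, px, py, eps=1e-12):
    """Apply x' = 1/(x-px), y' = 1/(y-py) to a list of (x, y) points.
    Points near (px, py) fly out toward infinity, which is the whole
    idea; eps guards the divisions.  Points on the vertical or
    horizontal line through (px, py) are dropped."""
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        if abs(dx) < eps or abs(dy) < eps:
            continue
        out.append((1.0 / dx, 1.0 / dy))
    return out
```

After mapping, a convex hull of the images corresponds (roughly) to the obstacles nearest to (px,py) in each direction, which is what makes the trial-points idea plausible.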
|
|
|
|
|
I've tested this just now using memory-mapped bitmaps. It works, in the sense that it finds the biggest rectangle (actually all the rectangles, then I choose the fittest) ... But this walking pixel by pixel along a segment (grid cell is 1x1) ... arghh. I still hope there is a math solution to this.
|
|
|
|
|
Hmm,
I am having a problem visualizing the problem. I'm looking at some examples of problems of this type on Wolfram|Alpha:
Pappus graph[^]
The problem in the example seems to have the same characteristics as your problem, and it has a purely mathematical solution. My conclusion is that you need some sort of graph algorithm.
Can you give a simple example using geometric terms, so I can visualize the problem better? I would like to help...
|
|
|
|
|
Okay, I have a vague idea of what you are trying to do. Here is how I would attack it.
Daniel.Sturza wrote: a) have the interior completely free (no segment can cross its boundaries - might touch them thou);
Use the 'Bentley–Ottmann algorithm':
Bentley–Ottmann algorithm[^]
Find the intersection points, dump these into a point graph (a tree graph, using an algorithm which indexes the roots and children, etc.); for each graph that does not cross a segment, add it to an 'exclusion graph'.
Next Find the max area of space occupied by exclusion graph (interior completely royalty free):
Distance transform[^]
-or-
Shoelace formula[^]
The input would be the exclusion graphs. For each graph in the stack, find the largest area and add it to an 'area graph'. Here you might also need to jump to step b) first to find the correct rotation, or merge this into one step.
Daniel.Sturza wrote: b) have his sides parallel to Ox and Oy axes;
Rotating calipers algorithm[^]
This algorithm does some rotating, so I'm not sure if it is good for your (Ox, Oy) points. What are Ox,Oy? The algorithm is good for any poly or hull.
Daniel.Sturza wrote: c) have the biggest area;
This should have been found in step a.
Daniel.Sturza wrote: d) have the center closest to a specific point (Xref,Yref);
Jump-and-Walk algorithm[^]
This is a triangulation algorithm; my guess is that it can be adapted to your needs.
Daniel.Sturza wrote: e) have a certain aspect ratio H/W.
No idea how the aspect ratio would come in. I would be using vector graphics of some sort, so the hull algorithm you are using should do this for you?!?
Then again this is all brainstorming and it might be more like 'day dreaming'. Ha,ha,ha. Hope this gives you some ideas...
Another bizarre approach might be to create a Voronoi diagram of all points, then for each segment connect the points; if the segment passes through an area described by the Voronoi diagram, then remove that area from the diagram. After all points have been connected you will be left with the areas excluded by the segments, and then you can find the max rectangle.
Voronoi Diagram[^]
-And-
Largest empty rectangle[^]
A Fast Algorithm for Finding Maximal Empty Rectangles for Dynamic FPGA Placement[^]
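The "largest empty rectangle" idea, restricted to point obstacles inside a bounding box, can be brute-forced in a few lines. This is a sketch only, nowhere near the efficiency of the algorithms in the links above (candidate sides are drawn from point coordinates and the box edges):

```python
def largest_empty_rect_points(points, xmin, ymin, xmax, ymax):
    """Brute-force largest axis-aligned empty rectangle among point
    obstacles in a bounding box.  Every maximal empty rectangle has
    each side supported by a point or a box edge, so those coordinates
    are the only candidates worth trying.  Roughly O(n^4) candidate
    rectangles times O(n) emptiness checks -- fine for tiny inputs."""
    xs = sorted({xmin, xmax, *(p[0] for p in points)})
    ys = sorted({ymin, ymax, *(p[1] for p in points)})
    best, best_rect = 0.0, None
    for i, x0 in enumerate(xs):
        for x1 in xs[i + 1:]:
            for j, y0 in enumerate(ys):
                for y1 in ys[j + 1:]:
                    if (x1 - x0) * (y1 - y0) <= best:
                        continue          # cannot beat the current best
                    # empty means: no obstacle strictly inside
                    # (touching the boundary is allowed)
                    if all(not (x0 < px < x1 and y0 < py < y1)
                           for px, py in points):
                        best = (x1 - x0) * (y1 - y0)
                        best_rect = (x0, y0, x1, y1)
    return best, best_rect
```

Extending this from point obstacles to segment obstacles is exactly the hard part of the original question; the cited FPGA-placement paper tackles the efficient version.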
~TheArch
modified on Monday, December 28, 2009 3:31 PM
|
|
|
|
|
How to visualize my problem ... hmm ... if you have some CAD knowledge, you have definitely bumped into this problem: you have a limited work space crammed with objects. You have to find a location for a new one (kind of). My problem is that I can't touch anything that is already there. It's not an optimization problem (how can I rearrange my objects in placement or in insertion order so the next one will fit) ... it's simply: where can I fit it?
The particularity is that the new object is not yet fully defined: it will grow to the maximum extent that is available to it (hence the aspect ratio).
Or think simpler: you suddenly have an idea, you want to write it down, but you only have an already written piece of paper available. You don't know beforehand how much you need to write down (you'll figure that out along the way); you can write bigger or smaller, but you need to write down the whole idea. For any human this is as easy as "hello" ... we don't even think about how to do it, but we do it. I am trying to put this in code ...
For now I've managed to break all my objects into pieces defined only by convex polygons (some graphs involved here) - I still have to decide what to do with those single floating segments ...
But you gave me a few ideas ... I'm optimistic ...
I'll be checking in when I have some results.
Edit: Ox and Oy -> X and Y axes
|
|
|
|
|
Why are you using line segments and these complex conditions? A space partition should do the trick - at least approximately.
|
|
|
|
|
My simple approach would be a greedy algorithm finding y-monotone polygons, based on a scanline/sweepline technique. This should meet conditions a) and b), while conditions c), d), e) are approximately evaluated in a second stage of the algorithm with the help of a score/ranking based on Delaunay triangulations (q.v. the Voronoi hint).
Of course it's a greedy algorithm, but it should perform well, given that it is probably NP-hard to find the best solution.
|
|
|
|
|
Here is how I would do it:
1) Add line segments to the extents of your workspace
2) Find the intersections of all line segments, and put them into a directed graph as vertices
3) For each line segment, add the endpoint of that line segment to the graph (if not already there)
4) Add bidirectional edges to the graph to indicate line segments that only connect adjacent intersections or endpoints
5) From each line segment endpoint, draw a fictitious segment horizontally until you hit another segment. Add the waypoint at the intersection, as well as a directed edge from the segment endpoint to the intersection, followed by a directed edge from that intersection to the next adjacent point on the line going in both directions.
6) Repeat step #5 in the vertical direction
Whew! OK, now the setup is complete. All you have to do now is the actual work:
1) Find all polygons in the graph. Do this by traversing the nodes by repeatedly taking the adjacent edge in the same direction (i.e., at each vertex, find the next edge that is clockwise from the one you arrived on, and take it to the next vertex). If you arrive at a vertex with no outgoing edges, there is no polygon that meets your starting criteria, so move on. Otherwise, push the resulting polygon onto a queue/stack.
2) For each polygon in your queue/stack, compute the largest inscribed rectangle. If this rectangle has a larger area than your current largest, swap them out. If they are the same, test your d) criterion, followed by your e) criterion, to determine which one to use.
Now you have the largest rectangle that meets your criteria (although I don't even want to imagine the asymptotic performance of this algorithm)
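The selection rule in step 2 (area first, then criterion d, then e) can be sketched as a tie-breaking sort key, with rectangles as (x0, y0, x1, y1) tuples. The function name and signature are mine, not from the post:

```python
import math

def pick_rect(rects, xref, yref, target_ratio):
    """Pick the winning rectangle: largest area first; ties go to the
    center closest to (xref, yref); remaining ties to the aspect
    ratio H/W closest to target_ratio.  Assumes non-degenerate
    rectangles (x1 > x0)."""
    def key(r):
        x0, y0, x1, y1 = r
        w, h = x1 - x0, y1 - y0
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        return (-(w * h),                          # bigger area first
                math.hypot(cx - xref, cy - yref),  # then nearer center
                abs(h / w - target_ratio))         # then better ratio
    return min(rects, key=key)
```

The hard part, computing the largest inscribed rectangle per polygon, is deliberately left out here; this only shows how the criteria compose once you have the candidates.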
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
I am engaged in research and development of a new article in a series of articles on algorithms. I plan to use the Big 'O' Algorithm Analyzer for .NET featured in my article here at The Code Project.
Big O Algorithm Analyzer for .NET
I recently read the MSDN article by Dr. James McCaffrey on the QICT pairwise testing utility he wrote for .NET. In his article he encourages the reader to try different algorithms for weighting the pairs found by the algorithms used in the tool.
I thought this would be a great application to put under test with the Big 'O' tool, to demonstrate how to instrument code in an application and additionally show some methods of improving the algorithms for given data sets.
Dr. James' article can be found here:
Pairwise Testing with QICT
I would like to ask anyone who would like to write a different algorithm as described in the article to post it here, or to discuss different approaches which seem like they would be an asset to the existing code base.
I will also demonstrate the difference between a good Knuth shuffle and a bad one; assume the Perl code below:
Bad Knuth:
sub shuffle {
    my @array = @_;
    my $length = scalar(@array);
    for (my $i = 0; $i < $length; $i++) {
        my $r = int(rand() * $length);
        @array[$i, $r] = @array[$r, $i];
    }
    return @array;
}
Good Knuth:
sub shuffle {
    my @array = @_;
    my $length = scalar(@array);
    for (my $i = $length - 1; $i > 0; $i--) {
        my $r = int(rand() * ($i + 1));
        @array[$i, $r] = @array[$r, $i];
    }
    return @array;
}
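For comparison, here is a rough Python translation of both versions, plus the counting argument for why the first cannot be uniform (function names are mine):

```python
import random

def bad_shuffle(a):
    """The 'bad Knuth' above: swap each position with a uniformly
    random one.  For n elements there are n**n equally likely swap
    sequences landing on n! permutations; since n**n is generally not
    divisible by n!, some permutations must come up more often."""
    a = list(a)
    n = len(a)
    for i in range(n):
        r = random.randrange(n)
        a[i], a[r] = a[r], a[i]
    return tuple(a)

def good_shuffle(a):
    """The 'good Knuth' (Fisher-Yates): walk down from the end,
    swapping position i with a random position r <= i.  Exactly n!
    equally likely swap sequences, one per permutation."""
    a = list(a)
    for i in range(len(a) - 1, 0, -1):
        r = random.randrange(i + 1)
        a[i], a[r] = a[r], a[i]
    return tuple(a)
```

With 3 elements the bad version has 27 swap sequences mapped onto 6 permutations; 27 is not divisible by 6, so no RNG can make it uniform, which is exactly what a frequency count over many runs shows.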
|
|
|
|
|
Okay I'm stumped. Would anyone like to tell me why this is a bad question?
I'm not trolling my article.
I am trying to get ideas to better solve the sorting/complexity algorithm as posed in MSDN.
I got down voted twice on this post. I'd like to know why so in the future I don't make the same mistake.
|
|
|
|
|
As an independent observer, i.e. I know nothing about the QICT Knuth shuffle algorithm, I am as stumped as you are. Your previous post seems quite reasonable to me, so I can only assume the downvoting was by one or more of the many people who seem to stalk the forums downvoting for no reason. I have noticed certain people automatically downvote articles without giving a reason; it's just something we have to live with. I think a question was raised as to tracking these types but Chris commented that it would be too much effort for little gain.
I suggest you go ahead with your article and see what happens.
[edit]I gave you a 5 to try and even the score, let's hope someone else follows suit.[/edit]
|
|
|
|
|
Thanks...
Maybe down voting should require a comment, with the option to make it anonymous to protect the innocent... I will write it up as a suggestion.
I will also download the code and place the exact algorithm in the thread to better illustrate the problem.
|
|
|
|
|
I don't see why it should be anonymous. If you think something is not up to standard then you should have the courage to put your name and comments behind it.
|
|
|
|
|
Lately I've been re-writing some graph algorithms to operate on adjacency-matrix graphs where the matrix is implemented as an array of ints/longs (for speed reasons)
Algorithms such as determining the size of a strongly connected sub-graph (somewhat like DFS, except the visiting order is different) and perfect graph coloring in chordal graphs.
They demonstrate uses of many bit hacks, such as x &= (x - 1).
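As a tiny illustration of the kind of bit hack meant here, a sketch (in Python for brevity) that walks the set bits of an adjacency-matrix row with x &= x - 1:

```python
def neighbors(row_bits):
    """Return the indices of the set bits of an adjacency-matrix row
    stored as an int, i.e. the neighbors of one vertex."""
    x = row_bits
    out = []
    while x:
        low = x & -x                     # isolate the lowest set bit
        out.append(low.bit_length() - 1) # its index
        x &= x - 1                       # clear the lowest set bit
    return out
```

The loop runs once per set bit rather than once per column, which is the whole point of storing rows as machine words.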
Would anyone like to see an article about this?
(and is this the right place to ask this question?)
|
|
|
|
|
harold aptroot wrote: Would anyone like to see an article about this?
(and is this the right place to ask this question?)
yes, sure. and no (Article Writing, Lounge).
|
|
|
|
|
Thanks Luc, next time I'll ask it in the Lounge
|
|
|
|
|
But he's asking if anyone wants an article about this particular algorithm. Not everyone wastes time in The Lounge.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
And how about the bitwise version of Algorithm X? Is that interesting enough?
|
|
|
|
|
A float has only 24 bits that need to be divided, and one of them is "usually 1", so it requires fewer iterations than a full 32-bit number (or even 64, which is a disaster).
Doubles shouldn't really be faster than ints though (edit: ok, yes, it is possible in some cases, but it depends, see below); something else is going on here. Note that the time a division takes also depends on the magnitude of the result for ints, and on the roundness of the divisor for floats. This makes it harder to accurately compare the speed of integer division and floating point division.
As if that isn't bad enough yet there is also SSE vs good ol' FPU, and with Java you really have no clue which one you're getting, so I suggest you use C++ with assembly subroutines to take full control of what you're testing.
Hope this helps
edit2: forgot source, Agner Fog's optimization resources[^]
modified on Monday, December 21, 2009 11:36 AM
|
|
|
|
|