|
Yes, rectangles are allowed to be turned 90 degrees. I don't know the possible sizes; they can be anything.
Is there any way to calculate the coordinates of the rectangles inside the big rectangle?
What do you mean by a greedy approach? Is there any existing algorithm that could help me?
|
|
|
|
|
Greedy is when a series of locally optimal choices always leads to a global optimum.
Which is not the case here. It would have been for some sizes of small rectangles.
But basically, this sucks. Does it absolutely have to be optimal?
Also, are the coordinates not-too-big integers?
If they are, what you could try (but I don't guarantee it's the best way) is a limited form of Branch & Bound, combined with a heuristic to determine which choice to test first (to test "probable" cases first), and bound a subtree when you can determine that, due to earlier choices, it would be completely impossible to find a solution in it (such as when the biggest open space is not big enough to hold the biggest unplaced small rectangle).
It will still suck for anything but tiny instances of the problem. There would be an enormous number of choices at each node - the total number of possible placements of all rectangles that have not been placed yet. The heuristic might be something like "try the biggest rectangle first", combined with a sane way to place it (e.g. near an edge).
I don't even see a way to do real B&B on this...
It's stupid though: there are many overlapping subproblems (you could use memoization, but you'd get a huge lookup table - maybe try something like "memoization only for the first k levels of recursion"), and this algorithm would likely take longer than the estimated age of the universe on any interesting instance of the problem.
If a non-optimal (but "good") result is also OK, you could use basically the same algorithm but with far fewer choices (e.g. use a heuristic to make only the k "probably best" choices at each node) - you can still expect major suckage.
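The non-optimal variant could be sketched roughly like this (all names are hypothetical; this assumes integer coordinates, a simple boolean grid for occupancy, a "biggest rectangle first" heuristic, and first-fit bottom-left placement with both orientations tried - i.e. the "k probably best choices" idea with k = 1 and no backtracking):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PackingSketch
{
    // Greedy heuristic sketch: place each rectangle (biggest first) at the
    // lowest-leftmost free position, trying both orientations. Rectangles
    // that fit nowhere are skipped, so the result may be non-optimal.
    public static List<(int X, int Y, int W, int H)> Pack(
        int bigW, int bigH, IEnumerable<(int W, int H)> rects)
    {
        bool[,] used = new bool[bigW, bigH];
        var placed = new List<(int X, int Y, int W, int H)>();
        foreach (var r in rects.OrderByDescending(t => t.W * t.H))
        {
            foreach (var (w, h) in new[] { (r.W, r.H), (r.H, r.W) })
            {
                if (TryPlace(used, bigW, bigH, w, h, out int x, out int y))
                {
                    placed.Add((x, y, w, h));
                    break;   // placed in this orientation, stop trying
                }
            }
        }
        return placed;
    }

    // Scan bottom-left to top-right for the first position where a
    // w-by-h rectangle fits entirely in free cells, then mark it used.
    static bool TryPlace(bool[,] used, int bigW, int bigH, int w, int h,
                         out int x, out int y)
    {
        for (y = 0; y + h <= bigH; y++)
            for (x = 0; x + w <= bigW; x++)
                if (Free(used, x, y, w, h))
                {
                    for (int i = x; i < x + w; i++)
                        for (int j = y; j < y + h; j++)
                            used[i, j] = true;   // occupy the cells
                    return true;
                }
        x = y = -1;
        return false;
    }

    static bool Free(bool[,] used, int x, int y, int w, int h)
    {
        for (int i = x; i < x + w; i++)
            for (int j = y; j < y + h; j++)
                if (used[i, j]) return false;
        return true;
    }
}
```

Real Branch & Bound would explore several candidate placements per rectangle and backtrack when the bound fails; this sketch simply commits to the first fit.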
I hope this helped anyway... though I don't think that's likely... sorry.
Note: I apologize for all errors and falsities in this post; there are bound to be some, but I don't know what they are yet.
|
|
|
|
|
Yeah, you're helping, and thanks for that.
Anyway, I don't think the coordinates will be bigger than 32000 (maybe not bigger than 10000). I've thought of a similar approach with recursion and memoization, but I'm not sure how to do it (I think I'll run into memory problems).
And yeah, a non-optimal but "good" result should be OK. I'm not familiar with Branch & Bound, but I'll research it and try to learn something.
And one more thought... should I try placing every rectangle two times (normal and rotated 90 degrees) in the big rectangle and use memoization? Then I can determine how many small rectangles there can be for that algorithm to execute in about 10 seconds.
And I also have one more problem... what's the best way to remember which parts of the big rectangle I've already used? I don't know how to do it... should I try some custom class? Please tell me if you have any ideas.
And one more time, thanks for your big help!
|
|
|
|
|
xx77abs wrote: And one more thought... should I try placing every rectangle two times (normal and rotated 90 degrees) in the big rectangle and use memoization? Then I can determine how many small rectangles there can be for that algorithm to execute in about 10 seconds.
I don't really know what you mean by that - could you explain it in more detail?
xx77abs wrote: And I also have one more problem... what's the best way to remember which parts of the big rectangle I've already used? I don't know how to do it... should I try some custom class? Please tell me if you have any ideas.
I have a couple of ideas, but none of them are really good:
1) directly use the positions of the rectangles to calculate intersections (slow)
2) use a bitarray (too much memory)
3) hybrid: store big rectangles as rectangles, but use bitarrays to store the small rectangles (justification: in general the small rectangles will end up close to each other, since using small rectangles leaves small gaps). Very complex though, and it may not be worth the effort.
You're welcome, but I have the nagging feeling that there is a better solution - I just don't know what it is.
|
|
|
|
|
harold aptroot wrote: I don't really know what you mean by that - could you explain it in more detail?
I meant that I'll try to make a recursive function that implements memoization and tries to find the best possible solution. Then I'll run it with various numbers of elements and determine up to what size the recursive function works OK (not too slow). And if that's OK with the person who requested the program from me - problem solved.
harold aptroot wrote: I have a couple of ideas, but none of them are really good:
1) directly use the positions of the rectangles to calculate intersections (slow)
2) use a bitarray (too much memory)
3) hybrid: store big rectangles as rectangles, but use bitarrays to store the small rectangles (justification: in general the small rectangles will end up close to each other, since using small rectangles leaves small gaps). Very complex though, and it may not be worth the effort.
You're welcome, but I have the nagging feeling that there is a better solution - I just don't know what it is.
1. Yes, I think that would be too slow
2. Yes, it would take too much memory if the big rectangle's maximum size is 32000x32000. But if it is 10000x10000, then memory consumption would be about 100 MB (which I can spare). So this is maybe a good solution (I have to contact the person who requested the program...).
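For what it's worth, a bit per cell costs less than that: a System.Collections.BitArray over a 10000x10000 grid takes roughly 12.5 MB (the ~100 MB figure corresponds to one byte per cell). A minimal sketch of option 2, with hypothetical names:

```csharp
using System;
using System.Collections;

class OccupancyGrid
{
    readonly BitArray bits;
    readonly int width;

    public OccupancyGrid(int width, int height)
    {
        this.width = width;
        // One bit per cell: 10000 x 10000 cells is ~12.5 MB,
        // versus ~100 MB for one byte per cell.
        bits = new BitArray(width * height);
    }

    // True if no cell of the w-by-h region at (x, y) is occupied.
    public bool IsFree(int x, int y, int w, int h)
    {
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++)
                if (bits[j * width + i]) return false;
        return true;
    }

    // Mark every cell of the w-by-h region at (x, y) as occupied.
    public void Occupy(int x, int y, int w, int h)
    {
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++)
                bits[j * width + i] = true;
    }
}
```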
3. Well, I'm not even sure how to implement this
Yeah, I too have a feeling that there's a better solution - maybe an existing algorithm...
|
|
|
|
|
Ok, then, I don't have much to add.. I hope someone else knows a better way
|
|
|
|
|
I've worked on a similar problem before, though not exactly like yours. I would suggest that you google "rectangle packing problem" and read some of the papers in the results. Maybe you can modify one of those algorithms to work for your case.
|
|
|
|
|
Thanks, I'll read that
|
|
|
|
|
Hi everybody,
I have a tool that reads a file. That file has, for example, the following content:
wvtnfhxyz1hdxyz1fdxyz1ejxyz1dhxyz1dxyz1eeaa1oeys
I would like to search for repeated character combinations like the bold characters. The character combination xyz1 is repeated 6 times.
I would like to return a top 5 of the most repeated character combinations. Something like this:
xyz1 : 6 times
aa5 : 5 times
ab4 : 3 times
ab : 2 times
66 : 2 times
This is the code that reads the file:
string path = openFileDialog1.FileName;
try
{
    using (FileStream fs = File.OpenRead(path))
    {
        byte[] b = new byte[1024];
        UTF7Encoding temp = new UTF7Encoding(true);
        int bytesRead;
        while ((bytesRead = fs.Read(b, 0, b.Length)) > 0)
        {
            // only decode the bytes actually read, not the whole buffer
            textBox1.Text += temp.GetString(b, 0, bytesRead);
        }
    }
}
Thanks!
|
|
|
|
|
This question has already been answered in the C# forum. Please follow the guidelines.
|
|
|
|
|
That's right, but someone replied on my topic that it is better to place the topic here in Algorithms.
|
|
|
|
|
Hi,
here is an idea:
1. read all the text into a single string (File.ReadAllText)
2. have a main loop traversing the text, taking the substring of length 2 at each position
3. now call a recursive method that uses String.IndexOf to locate and count extra occurrences of that substring
4. when found, for each occurrence increase the substring length by 1 and recurse to 3
side effect: aaaa will be reported as three overlapping "aa" and two overlapping "aaa"
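A rough sketch of those steps (names are made up; overlapping occurrences are counted, matching the side effect noted above, and growing stops as soon as a substring no longer repeats):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class RepeatFinder
{
    // Counts occurrences of pattern in text, overlapping ones included,
    // by repeatedly calling IndexOf from one past the previous hit.
    static int CountOccurrences(string text, string pattern)
    {
        int count = 0;
        for (int i = text.IndexOf(pattern); i >= 0;
             i = text.IndexOf(pattern, i + 1))
            count++;
        return count;
    }

    // Start with length-2 substrings at every position and grow each one
    // for as long as it still occurs more than once in the text.
    public static List<KeyValuePair<string, int>> TopRepeats(string text, int top)
    {
        var counts = new Dictionary<string, int>();
        for (int i = 0; i + 2 <= text.Length; i++)
        {
            for (int len = 2; i + len <= text.Length; len++)
            {
                string sub = text.Substring(i, len);
                if (counts.ContainsKey(sub)) break;   // already handled earlier
                int c = CountOccurrences(text, sub);
                if (c < 2) break;                     // no repeat: stop growing
                counts[sub] = c;
            }
        }
        // Most frequent first; among ties, prefer the longest combination.
        return counts.OrderByDescending(kv => kv.Value)
                     .ThenByDescending(kv => kv.Key.Length)
                     .Take(top)
                     .ToList();
    }
}
```

On the sample string from the question this reports xyz1 with a count of 6.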
|
|
|
|
|
Thanks for your support. Can I find an example somewhere?
|
|
|
|
|
you're welcome.
khalidelmeknesi wrote: an example?
I don't know.
Luc Pattyn
Have a look at my entry for the lean-and-mean competition; please provide comments, feedback, discussion, and don’t forget to vote for it! Thank you.
Local announcement (Antwerp region): Lange Wapper? Neen!
|
|
|
|
|
I have found this example, but it is not 100% good. Could you please have a look?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string _givenString = "wvtnfhxyz1hdxyz1fdxyz1ejxyz1dhxyz1dxyz1eeaa1oeys";
            Dictionary<string, int> _codes = new Dictionary<string, int>();
            int _startingPoint = 0;
            while (_startingPoint < _givenString.Length)
            {
                for (int j = 2; j < _givenString.Length; j++)
                {
                    for (int i = _startingPoint; i < _givenString.Length && i + j < _givenString.Length; i++)
                    {
                        if (!_codes.ContainsKey(_givenString.Substring(i, j)))
                        {
                            _codes.Add(_givenString.Substring(i, j), 1);
                        }
                        else
                        {
                            _codes[_givenString.Substring(i, j)] += 1;
                        }
                    }
                }
                _startingPoint++;
            }
            List<KeyValuePair<string, int>> _sortedCodes = SortDictionary(_codes);
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine(_sortedCodes[i].Key + " : " + _sortedCodes[i].Value + " times");
            }
            Console.ReadKey();
        }

        public static List<KeyValuePair<string, int>> SortDictionary(Dictionary<string, int> data)
        {
            List<KeyValuePair<string, int>> result =
                new List<KeyValuePair<string, int>>(data);
            result.Sort(
                delegate(
                    KeyValuePair<string, int> first,
                    KeyValuePair<string, int> second)
                {
                    return second.Value.CompareTo(first.Value);
                }
            );
            return result;
        }
    }
}
modified on Thursday, September 17, 2009 8:15 AM
|
|
|
|
|
Sorry, I don't read unformatted code; you should use PRE tags as you did before.
|
|
|
|
|
I have edited the code block.
|
|
|
|
|
khalidelmeknesi wrote: is not 100 % good
that is quite a diagnosis. Take some white pills every day for the next couple of weeks or months.
I haven't studied the code in detail, as I don't like it at first sight. I see three nested loops, so its performance will scale terribly. I see one dictionary that probably ends up containing all substrings of all starting points and all lengths, so it is quadratic. This is a brute-force approach, nowhere near what I suggested initially.
|
|
|
|
|
You should be able to solve this with a suffix tree[^].
1. build the suffix tree
2. sort the list of suffixes
3. compare every entry with the adjacent ones in the list to find how many times that string is repeated - if it's repeated, then the next few entries in the sorted list will begin with the same string
It's only a draft idea, and it surely needs to be refined and completed when implementing, but it should work.
Keep an eye on memory usage.
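A minimal sketch of that idea (hypothetical names; a plain sorted suffix list rather than a real suffix tree, so building it is quadratic-ish and memory-hungry for big inputs - a proper suffix tree or suffix array construction would do much better):

```csharp
using System;
using System.Linq;

class SuffixCounter
{
    readonly string text;
    readonly int[] suffixes;   // suffix start indices, sorted lexicographically

    public SuffixCounter(string text)
    {
        this.text = text;
        // Steps 1 and 2: build the list of suffixes and sort it.
        suffixes = Enumerable.Range(0, text.Length)
            .OrderBy(i => text.Substring(i), StringComparer.Ordinal)
            .ToArray();
    }

    // Step 3: suffixes that begin with the same string are adjacent in
    // sorted order, so the occurrence count is the length of that run.
    public int Count(string pattern)
    {
        int count = 0;
        foreach (int start in suffixes)
        {
            bool match = start + pattern.Length <= text.Length
                         && text.Substring(start, pattern.Length) == pattern;
            if (match) count++;
            else if (count > 0) break;   // the contiguous run has ended
        }
        return count;
    }
}
```

A full solution would walk the sorted list once, comparing each entry with its neighbour to enumerate all repeated prefixes, instead of querying one pattern at a time.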
Good luck.
2+2=5 for very large amounts of 2
(always loved that one hehe!)
modified on Sunday, September 13, 2009 3:37 AM
|
|
|
|
|
khalidelmeknesi,
I'll throw my 2 cents worth into the pot, for what it's worth (probably about 2 cents).
Use a sort like the Mix Software C Database Toolchest C-BTree routine to create a database (a tree).
Read the file line by line. For each line, ping-pong through it: from the front of the line with two-character strings, then from the back with three-character strings, then from the front with four-character strings, etc., until a full-sized line is finally entered. For each such string, add an entry to the tree containing the string, its length, and an Item (an unsigned long) with a value of 1, indicating no duplicates. If the sort detects a duplicate, it will not add a new entry but return an error; if so, read the existing entry's Item, increment the count, and re-save the updated Item. As you update the Items, save the highest values found in a short array of, as you said, "some top 5" counts, so you know what to look for at the end.
When the end of the input file is reached, close it, then read each entry in the tree and delete the entry if its Item is a value that is not 0 and less than the top 5 or so counts you saved. For entries in the top 5 or so, create a new entry that starts with the Item count subtracted from the maximum saved count, converted to a fixed-length string with a leading 1 (i.e. for an entry of "abc" with an Item count of 45 and a maximum count of 50, start the string with 105; note that all these created count strings must be the same length), then append the entry string ("abc"), giving "105abc", and set the new Item to a value of 0. Once this new entry is created, delete the original entry. If a tree entry is found that contains an Item with a count of 0 (an entry with a header string), ignore it.
When you have finished modifying all of the tree entries, the tree will contain only the top 5 or so counts with the associated string snippets, all neatly sorted with the highest count first and, within each count, the snippets in ascending sequence. Note: to report the count, form the value 1xxx where xxx is the maximum count, and subtract the leading string value; i.e. in the example above, take the constant 150 (1 prefixed to the maximum value) and subtract 105 (the heading string) to get a count of 45 for string "abc". (The highest values would have a header string of 100, and thus the count would be reported as 50.)
When you are done, delete the tree file. (Note: this tree file will be HUGE as it is being built.)
Note: Mix software still has this excellent database available as of my last check about a year ago. I originally purchased the program and the source in 1984 or so and have recently converted it to compile under the latest Visual Studio. I also created export/import functions that could read old databases (where ints were 16 bits) and create new databases (where ints are 32 bits).
Piece of cake.
Dave.
|
|
|
|
|
|
I would make a big-big-big database containing the faces of all the world's people then ask Rajesh for a monkey (you know, the recognition task) ...
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
|
|
kabirbdboy wrote: actually my supervisor said me first find three webside on face recognition.
If you can't find three websites with information on face recognition, you need to change majors. Good advice, take it.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|