|
In general, it is not doable: consider what happens when the "large lookup table" you describe in your post is filled with randomly generated unique entries. Your target alphabet easily accommodates this, so the result satisfies each of your criteria 1 through 5. Yet the only way to "crack" this "pattern" would be to record the entire lookup table.
|
|
|
|
|
Clearly it's a very basic algorithm if each character is changed individually. I would say it is just a shift cipher of some sort.
First convert the characters to hex:
a  b  c  d  e  f  g  a  1  2  3  4
61 62 63 64 65 66 67 61 31 32 33 34
Now find out what values you need to add to those to get your resulting encryption. As I only have two values to work with, here is a sample:
61 62 63 64 65 66 67 61 31 32 33 34
+ -10 -57
--------------------------------------------------
57 0A
Fill in the rest and look for a pattern. It could be a repeating key, e.g. -10 +5 +23 -57.
Interestingly (and maybe coincidentally), the second "a" is a reduction of the previously calculated "a".
So the algorithm could be...
1. start with a byte array of size [256] that contains all -10 values
2. loop each character
3. convert character to hex (iHex)
4. convert iHex to decimal (iDec)
5. encrypted hex (eHex) equals: iHex + bytearray[iDec]
6. set new bytearray value: bytearray[iDec] = eHex (ready for next use)
7. output eHex to result string
8. process next character
...of course, that is just one possibility that matches the results you have given.
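The eight steps above can be sketched directly in code. This is only a guess at the scheme, not the confirmed algorithm: a 256-entry state table starts at -10 everywhere, each output byte is the input plus its current table entry (following the worked example, where adding -10 to 0x61 gives 0x57), and the table entry is then replaced by the output so a repeated character encrypts differently the second time.

```python
def encrypt(text):
    # Step 1: state table of 256 entries, all starting at -10.
    state = [-10] * 256
    out = []
    for ch in text:                     # step 2/8: loop over each character
        i = ord(ch)                     # step 3/4: character's byte value
        e = (i + state[i]) & 0xFF       # step 5: apply the current table entry
        state[i] = e                    # step 6: remember output for next use
        out.append(f"{e:02X}")          # step 7: emit as two hex digits
    return " ".join(out)
```

With this sketch, `encrypt("a")` yields `57` as in the sample, and a second `a` produces a different byte because the table entry has changed.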
I may or may not be responsible for my own actions
|
|
|
|
|
First off, thanks to both Moreno Airoldi and musefan for providing such detailed and well-thought-out replies; this is the kind of help internet forums/boards are all about.
I didn't realize how beneficial it would be to provide additional sample values, else I would have given more than just the few I did. I am going to check out both your recommendations, but it will be a day or so before I can get to them (that's why my own reply to each of you has been so slow coming), so in the meantime here are more examples that might actually scream a pattern to you two, who are clearly more versed in patterns.
I tried to provide a sampling without going overboard. The below, restricted to a, b & c (in upper & lower case) for just the first 3 positions, comes to 18 rows; showing all 12 positions for even a few characters would be a lot.
Anyway, if anything about the pattern below jumps out at you, please let me know; otherwise I'll let you know what comes of testing your suggestions.
Thanks
Position, unencrypted character, encrypted value
1 a 57
2 a 24
3 a 2A
1 A 77
2 A 04
3 A 0A
1 b 54
2 b 27
3 b 29
1 B 74
2 B 07
3 B 09
1 c 55
2 c 26
3 c 28
1 C 75
2 C 06
3 C 08
|
|
|
|
|
Well obviously the same character in upper and lower case in the same position differs by (hex) 20. This strongly implies that the ANSI value of the character is being used in a relatively unsophisticated way (upper and lower case are separated by hex 20 in ANSI). The difference can be either way; to me this suggests an xor. However, it is not totally trivial as a->b->c is not just +1 each time:
(ASCII value a 61 b 62 c 63)
Slot 1 a 57 b 54 c 55
Slot 2 a 24 b 27 c 26
Slot 3 a 2A b 29 c 28
All numbers posted here are in the range 06-77. I would guess that it is a 7-bit algorithm (working on 00-7F) operating by XORing the input character with some function of the position. Let's look at the XOR values for those slots:
Slot 1 a 36 b 36 c 36
Slot 2 a 45 b 45 c 45
Slot 3 a 4B b 4B c 4B
Looks like I'm onto something there. All that remains is to determine the XOR parameter, which is a function of position. There isn't enough information in 3 slots, and the pattern 36->45->4B does not immediately jump out at me as being meaningful. The encrypted value of 'aaaaaaaaaaaa' should provide all the XOR keys, and hopefully that is enough to identify how they are calculated.
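The XOR analysis above is easy to check in code. A small sketch using the sample values from the earlier post: for each slot, XOR the plain character with its encrypted value; if every character in a slot yields the same byte, that byte is the slot's XOR key.

```python
# (plain char, encrypted value) pairs per slot, taken from the posted table.
samples = {
    1: [('a', 0x57), ('b', 0x54), ('c', 0x55)],
    2: [('a', 0x24), ('b', 0x27), ('c', 0x26)],
    3: [('a', 0x2A), ('b', 0x29), ('c', 0x28)],
}
keys = {}
for slot, pairs in samples.items():
    xors = {ord(c) ^ e for c, e in pairs}
    assert len(xors) == 1          # every character agrees on one key per slot
    keys[slot] = xors.pop()
```

Running this gives keys 36, 45, 4B for slots 1-3, confirming the analysis.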
|
|
|
|
|
You beat me to it!
You are right: your suggestion of encrypting "aaaaaaaaaaaa" should be enough to solve this.
I think there is no pattern as it is just a random set of chars used to provide the XOR values, which should start with "6EK........."
I think the OP has been lucky that such a simple algorithm was decided on
If my jokes make me laugh, then I have already succeeded with 100% of my target audience
|
|
|
|
|
Well he said it was a mind exercise/puzzle, so I guess it's supposed to be solvable by a person in a reasonable time.
You may well be right that the XOR key is simply a 12 character pass code/encryption key.
Encrypting 'aaaaaaaaaaaa' (or the equivalent with byte zeroes) is often useful for key analysis in simple encryption schemes. I had to modify the first version of my stream encryptor because by throwing a stream of zeroes at it, it would dispense the key. (Real-life binary messages, particularly if you're using my socket library as well, often have quite a few zeroes in known places - for example, if you request an EXE or DLL download you can rely on 16 or more consecutive zeroes in places in the header; and 4-byte length-encoded messages will generally be possible to generate with three zeroes in known positions.)
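The "dispense the key" point can be illustrated with a toy XOR stream cipher (a deliberately weak sketch, not the poster's actual encryptor): feeding it a run of zero bytes leaks the repeating key verbatim, because 0 XOR k = k.

```python
def xor_stream(data, key):
    # Toy stream cipher: XOR each byte with the repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"6EK"                          # the key bytes hypothesised earlier
leaked = xor_stream(bytes(12), key)   # "encrypt" twelve zero bytes
```

The ciphertext of the zero stream is exactly the key repeated, which is why known zeroes in headers are so dangerous for schemes like this.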
|
|
|
|
|
Hello everybody,
I am trying to disprove the claim:
the time complexity of merging K AVL trees is O(the sum of their node counts)
I tried to calculate the complexity of merging K AVL trees, but there could be more than one algorithm.
How do I disprove the claim?
Thanks
|
|
|
|
|
If you handle every node in every tree, the complexity will be at least O(the sum of their node counts).
So one way to disprove this is to come up with a merging algorithm that avoids handling every node. One way to avoid handling every node is to reuse the existing structure of the AVL trees.
This can be done in trivial cases where the ranges of key values for the trees don't overlap. If they DO overlap (which is most likely), then you have a much more difficult problem, because you have to "weave" one tree into another, while maintaining the tree's balanced property.
You may be able to find subtrees whose key values don't overlap, and merge these. Using bigger building blocks like this avoids handling every node, and reduces the time complexity of your algorithm. After merging, you will probably have to do a little rearranging to restore the balanced property.
But it seems very hard, and in most cases, probably impossible.
|
|
|
|
|
If you diligently handle every node, I think the time will be O(N*Log2 N), where N is the sum of all the node counts.
|
|
|
|
|
To merge K trees in O(N), where N is the sum of node counts, you must never search on insertion: all your insertions must complete in O(1). In other words, on each step you must insert the new root node, and maybe do a rotation or two. However, the new root could come from any of the K trees that you are merging, so the time to find that node would be O(K) (perhaps you could do O(Log2 K), but regardless of how you do it, K will be a factor in it). That's why I think the best you can do is O(N*Log2 K).
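One concrete way to realise the O(N*Log2 K) bound discussed above (a sketch, not the only possible algorithm): take the in-order traversal of each AVL tree as a sorted sequence, do a K-way heap merge in O(N log K), then rebuild one balanced tree from the sorted result in O(N).

```python
import heapq

def merge_k_sorted(lists):
    # K-way merge via a heap of size K: each of the N pops costs O(log K).
    return list(heapq.merge(*lists))

def build_balanced(sorted_keys):
    # Rebuild a height-balanced BST (as nested (key, left, right) tuples)
    # from already-sorted keys in O(N): the middle element becomes the root.
    if not sorted_keys:
        return None
    mid = len(sorted_keys) // 2
    return (sorted_keys[mid],
            build_balanced(sorted_keys[:mid]),
            build_balanced(sorted_keys[mid + 1:]))
```

For example, merging the traversals [1,4], [2,5], [3,6] gives [1,2,3,4,5,6], and rebuilding yields a balanced tree, all in O(N log K) overall.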
|
|
|
|
|
I liked that idea. Let's say we've got K trees, each with one node.
We know that no comparison-based algorithm can sort better than Θ(K log K), so sorting the nodes into an array and then inserting them into an empty tree will take at least Θ(K log K).
|
|
|
|
|
Who can tell me more about Eye-like Algorithm?
|
|
|
|
|
|
I don't want Google; I want to hear the opinion of our folks here in CP on this subject.
|
|
|
|
|
I've never heard the term, but back in the 80s I worked at a place that was doing (among other things) research on computer vision.
One innovation (by Stan Sternberg) was the use of Lissajous curves (http://en.wikipedia.org/wiki/Lissajous_curve[^] ) to adaptively trace the outline of an object like a real eye.
The algorithm would sample a single point on the curve at regular intervals and slightly adjust the Lissajous parameters to adaptively fit the edge of the object. (E.g. if it hit the interior of the object, it would expand the curve accordingly, and if it hit the background it would contract.)
For some tasks it was orders of magnitude faster than traditional algorithms that looked at every pixel.
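The adaptive-tracing idea described above can be sketched in a few lines. This is only an illustration of the principle, not Sternberg's actual algorithm: sample one point per step on a Lissajous-style curve (here the degenerate case of an ellipse), grow the curve when the sample lands inside the object, shrink it when it lands outside; the `disc` object and all parameter values are made up for the example.

```python
import math

def trace_outline(inside, steps=2000, growth=0.01):
    # Adaptively fit the curve's amplitudes to an object's edge, touching
    # only one sample point per step instead of scanning every pixel.
    A, B = 1.0, 1.0
    for i in range(steps):
        t = 0.1 * i
        x, y = A * math.sin(t), B * math.cos(t)
        if inside(x, y):
            A += growth; B += growth   # hit the interior: expand the curve
        else:
            A -= growth; B -= growth   # hit the background: contract it
    return A, B

# Hypothetical object: a disc of radius 5 centred at the origin.
disc = lambda x, y: x * x + y * y <= 25.0
```

After enough steps the amplitudes settle at the object's radius, having sampled far fewer points than a full-image edge detector would.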
|
|
|
|
|
Finally, someone with an answer.
|
|
|
|
|
This is a bit of a 'name that method' query - I'm sure there must be a way to do this!
Say I have a set of 2D points with data values attached to them. Let's say the data is 0-100. If I colour the points in the ranges 0-19, 20-30, 31-69, 70-80 and 81-100, I will get five or more regions (which may include "islands" of data; there may be several of each colour, and they are not necessarily connected).
I want to display those regions as a set of contiguous polygons - how can I find the polygons?
I could grid the data and calculate iso-lines, but I was wondering if I could search each point and "assign" it to a polygon somehow (like a flood fill on the points?).
My data is ordered in a grid-ish manner (by "column") if that helps any.
Suggested approaches welcome.
Thanks!
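The "flood fill on the points" idea can be sketched as connected-component labelling, assuming the points can be snapped to a regular grid (the grid layout and 4-connectivity are assumptions for the example): assign each cell its colour class, then flood-fill to label connected same-class regions, each of which becomes one polygon.

```python
from collections import deque

def label_regions(classes):
    """classes: 2D list of colour-class ids; returns a 2D list of region labels."""
    rows, cols = len(classes), len(classes[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Breadth-first flood fill over 4-connected same-class neighbours.
            labels[r][c] = next_label
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and classes[ny][nx] == classes[y][x]):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels
```

Islands come out naturally: a "yellow" patch inside "green" simply gets its own region label, and the boundary cells of each region can then be traced into a polygon.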
|
|
|
|
|
Kyudos,
your problem statement is not totally clear to me. A figure would be helpful.
You can form polygons around your point clouds by means of the convex hull construction (the tightest convex polygon that includes all points); but since you mention unconnected islands, convexity is too strong a requirement. You could resort to alpha shapes, a generalization of the convex hull (see http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html).
This suggestion only allows you to process each class independently; it will not guarantee that the polygons are nested. That requires a more general approach.
Another answer is provided by the Voronoi diagram, a tiling of the plane where every site is associated with the region of points closer to it than to any other site (http://en.wikipedia.org/wiki/Voronoi_diagram). Coloring the diagram will give you the desired polygon set (in reality, some discrete form of iso-lines).
Maybe the specific arrangement of your data points can provide shortcuts to the solution...
|
|
|
|
|
Thanks for the reply. I created a couple of figures that may help.
Area[^]
Area Detail[^]
I'd want a polygon for each of the coloured areas, but bearing in mind that there may be no areas of any given colour, or several unconnected areas that may be islanded (e.g., there could be a "yellow" island in the middle of the "green").
Seems like alpha shapes might do the trick - will look into it.
|
|
|
|
|
Ah, the images sure help. I didn't have a clue what your description was about, but now I see.
Alas I don't know the official algorithm(s) that would do exactly this for you, however it resembles the flood fill problem, so I would read up on that, and study the A* algorithm.
Luc Pattyn [My Articles] Nil Volentibus Arduum
The quality and detail of your question reflects on the effectiveness of the help you are likely to get. Please use <PRE> tags for code snippets, they improve readability. CP Vanity has been updated to V2.3
|
|
|
|
|
Ah, this is quite different from what I had imagined (I thought of much sparser points, irregularly arranged)!
There is a much simpler way then.
Consider every rectangular cell (or are they quasi-rectangular?) in the grid and "cut" it with a plane at altitude 20 (say). You test every edge of the cell for intersection with this plane, just by detecting a change of sign in the Z coordinate minus 20. Then linear interpolation gives you the X and Y coordinates of the intersection point.
The Z test is such that you'll get an even number of intersections. When you have two of them, join them with a line segment; when you have four, join them with two line segments using a nearest-neighbor rule.
After doing that, you will obtain a nice polygonal decomposition of your domain - in reality, a C1 continuous approximation of the level curves. This takes two hours to implement.
If you need polylines rather than isolated line segments, it is possible (and not so difficult) to follow paths in the grid: every time you find an intersection in a cell, it is joined to a second intersection point in the same cell; that other intersection point also belongs to the neighboring cell, and so on.
Is that clear? If you need more details, please ask.
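The per-cell procedure described above (essentially marching squares) can be sketched as follows; the pairing of crossings here is the naive two-or-four case and the nearest-neighbor rule for the four-crossing case is omitted for brevity.

```python
def contour_segments(z, level):
    """z: 2D list of samples on a regular grid; returns line segments
    ((x1, y1), (x2, y2)) approximating the level curve at the given level."""
    segs = []
    rows, cols = len(z), len(z[0])
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Corners of the cell, in order around its boundary.
            corners = [(j, i, z[i][j]), (j + 1, i, z[i][j + 1]),
                       (j + 1, i + 1, z[i + 1][j + 1]), (j, i + 1, z[i + 1][j])]
            crossings = []
            for k in range(4):
                (x0, y0, z0), (x1, y1, z1) = corners[k], corners[(k + 1) % 4]
                if (z0 - level) * (z1 - level) < 0:  # sign change on this edge
                    t = (level - z0) / (z1 - z0)     # linear interpolation
                    crossings.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            # An even number of crossings per cell; join them pairwise.
            for a, b in zip(crossings[::2], crossings[1::2]):
                segs.append((a, b))
    return segs
```

A single peak on a 3x3 grid, cut at half its height, produces one crossing segment in each of the four cells around the peak; chaining such segments cell-to-cell gives the polylines.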
|
|
|
|
|
Ooops, did I say C1 continuity? No, it's only C0. (C1 is achievable by using a more elaborate interpolation scheme inside the cells, giving you smooth curves.)
|
|
|
|
|
Very good post. I had the same thought (that it was a few points scattered in space). The array certainly appears to be close enough to a grid to use a slightly modified version of normal grid-based contour finding.
|
|
|
|
|
I'm trying to construct a database of all the data types defined by the Win32 API.
I'm using the DIA SDK to read pdb files in order to extract the type information.
My question is the following:
The recursive nature of data type declarations makes it very difficult to flatten them into a traditional row and column structure.
Am I doing it the wrong way? Is there an easier structure I could use for my database?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Hi Richard,
I'd suggest you google "tree in database" and read some of the articles, maybe this one: Implementing a Tree Structure with Database[^]
|
|
|
|