

Sorry, I didn't see your reply; I was messing with a test app. For the issue, I have knocked up a test app with some test data in a list box.





No need to say sorry. It's not a chat here.
If you already have a list box, you may use my suggestion. Then you can identify the device strings by querying the item data (e.g. write all strings for a specific device to a file).





After a bit of fiddling, I have a list box I can add the data to:
111111116:11:520810120000a8
Now, this is a sample returned from the unit. The serial number I am trying to sort on is 11111111. I would like an opinion on whether using an array of strings the size of the total number of records on the unit is a good idea, bearing in mind I have no idea what PC it is going to be run on. Here's hoping what I sent is good enough (one file)! (Before you say it, I know full well Hope & Pray are not viable options!)
Glenn





If you add such strings to a list box, you already have an array of strings. You can use the Find() method to find strings that begin with a specific serial number, and you can implement a sort function to sort by the serial numbers.
I'm using C++ and don't have experience with .NET. Using MFC or Windows API functions, each item of a list box can be assigned a user data value. When this item data is set to the serial number, the list can be searched and sorted by these values, which is much faster than searching/sorting using the strings (item text).
Performance depends on the maximum number of items. You might raise a new question about this; maybe someone else has experience with large list boxes. I think a few thousand items should be no problem.





Thanks. In the actual application I am updating the RTB on the fly, which doesn't seem to be causing any problems yet, but I will look at changing to a list box. The issue I am having at the moment is how to split the data up if I need to (I wish I could call the guy testing it and ask him; several problems involving time zones, company politics, project ownership, pointy-haired business practice, etc.). Really I just need to read the first 8-ish bytes (8 in spec, 7 on the dev. board),
and if they are different from ones already read, create a separate store for them.
After some thought (& coffee):
Step 1: use the total number of readings to create a string array.
2: read the RTB string, splitting with Substring on each \n.
3: read the first 8 (or 7) bytes; if the serial is new, create a new list, if not, add to the existing one.
4: repeat 2 & 3 until the total number of strings from step 1 is reached.
5: save out each of the populated lists and free up resources. Sorry about this reply; I got grabbed for something else in the office!
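For what it's worth, those steps boil down to grouping lines by their serial prefix. A minimal sketch in C++ (since the thread mixes languages; the function name and the 8-character prefix length are assumptions from the description, and the same map-of-lists idea maps directly onto a .NET Dictionary(Of String, List(Of String))):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Split the raw buffer on '\n' and group records by their serial-number
// prefix (first 8 characters here; 7 on the dev. board).
std::map<std::string, std::vector<std::string>>
groupBySerial(const std::string& raw, std::size_t serialLen = 8) {
    std::map<std::string, std::vector<std::string>> groups;
    std::istringstream in(raw);
    std::string line;
    while (std::getline(in, line)) {
        if (line.size() < serialLen) continue;  // skip short/garbage lines
        groups[line.substr(0, serialLen)].push_back(line);
    }
    return groups;
}
```

Each key is then one device, and step 5 is just writing each vector out to its own file.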
Glenn





I like the ability to downvote; I do it myself on occasion. Then again, these are valid questions to ask; one would actually assume that information to be part of the question.
The reason I'm making a point of it is that you're not a respectable downvoter. You're a two-voter! The easy way of coloring someone gray, without losing a point yerself.
Grow a spine, and tell the man why you think his answer sucks.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]





Hi guys. I'll ask my question quickly; I just need some help, and thanks for any.
My question is this: I have made a BFS solver class that solves the 8-puzzle with the BFS algorithm.
What I'm doing is like this:
1 - Node class >> contains some info.
2 - BFS solver class:
2-1: a queue is needed.
2-2: enqueue the first node (the initial state) as its own parent.
2-3: check if it is the goal or not (if it's not, go on to the next step).
2-4: find the empty tile and see which tiles can be exchanged with it.
2-5: any such exchange makes a child node of that parent.
2-6: dequeue the parent node.
2-7: check if each child node of the parent is in the queue (if it is not, then enqueue the child node and put a parent label on it). Jump to (2-3).
Is that algorithm working well? I have tested it and it really does work fine. The problem is that it doesn't work if I play around too much with the tiles and shuffle them. It works fine on small problems like {1,2,3,5,4,6,0,7,8} or something like that,
but it gives me a StackOverflow error on big problems like {8,0,4,2,3,1,6,7,5}. I don't know what I should do to prevent this error, but I do know that my code otherwise works fine.
I'm writing my code in C#.NET, using a Node class and a BFS solver class.
I can put up some parts of my code if you need to see what I'm doing.
I'll appreciate any help.
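For reference, the enqueue/dequeue loop described in the question, with a visited set so that each state is enqueued at most once, might be sketched like this (C++ rather than C#, to match the rest of the thread; encoding the board as a 9-character string is my own assumption):

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <unordered_set>
#include <utility>

// Minimal BFS for the 8-puzzle over board states encoded as 9-char strings
// ('0' = empty tile). Returns the number of moves to reach goal, or -1.
// The visited set is the important part: without it the queue explodes.
int solveBfs(const std::string& start, const std::string& goal) {
    static const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    std::queue<std::pair<std::string, int>> q;
    std::unordered_set<std::string> visited{start};
    q.push({start, 0});
    while (!q.empty()) {
        auto [board, depth] = q.front(); q.pop();
        if (board == goal) return depth;
        int z = (int)board.find('0'), r = z / 3, c = z % 3;
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr > 2 || nc < 0 || nc > 2) continue;
            std::string next = board;
            std::swap(next[z], next[nr * 3 + nc]);
            if (visited.insert(next).second)  // enqueue each state only once
                q.push({next, depth + 1});
        }
    }
    return -1;  // unreachable (half of all permutations are unsolvable)
}
```

With the visited set, even hard instances stay within the 181,440 reachable states of the 8-puzzle.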
modified 26Sep12 4:58am.





The problem is that breadth-first search (BFS) tries ALL sequences of length N before it tries any of length N+1. And it stores all of these intermediate attempts, resulting in your stack overflow for a deep solution.
An approach more specific to the problem would give better results, e.g.
for (int tile = 0; tile <= 8; ++tile)
    placeTileInCorrectPosition(tile, board);
"Microsoft - Adding unnecessary complexity to your work since 1987!"





I'm sorry, but what should I do with this piece of code? I didn't get it. Can you explain more?
This is an iteration that goes 8 times through the board and places each tile in its correct position; I honestly didn't get it.





The function will generate the sequence of moves to place tile n in the correct position without disturbing the tiles < n, which have already been correctly placed.
This is more focused than blindly trying one solution after another, and will work faster and without a stack overflow.
Unfortunately, writing this function takes a little more work than BFS. But it's doable.
"Microsoft - Adding unnecessary complexity to your work since 1987!"





This is a good solution.
You mean with tile < n that, if I had a puzzle with these tiles
{{1,2,4}
{3,5,0}
{6,8,7}}
it changes the 4 with the 3, the 8 with the 7, and the 0 with the 8? Is that what you mean?
And can you point me to a source or website that has already done this? Thanks for answering my stupid question.





The function would generate the sequence of moves that would place 3 in the position 4 is in, while leaving 1 and 2 in their correct positions. This is because 3 is the next tile to be placed in its correct position.
I don't have source code, but it would be a good exercise for you to write it. Googling "9 puzzle solution" gets lots of hits, and one of them probably has some source code. Similar results can be obtained searching for "15 puzzle solution", which is another common form of this puzzle type.
"Microsoft - Adding unnecessary complexity to your work since 1987!"





According to your description of that method, it will swap the 4 and the 3 with each other and not touch the second tile, because it is already in its correct place. I have made something; could you take a look at it? Is that what you mean?
[[^]]
But I think it can't find the solution (path), because it should swap the first and second tiles.
I have read something about the Manhattan distance; can I use this method in BFS solving?
It tells me all the steps a puzzle needs for sorting. For example, this one:
{{4,5,6}
,{3,1,8}
,{2,7,0}}
needs 2+3+3+1+1+1+1+2 = 14 steps for sorting.
A B
{{4,5,6} {{4,5,6}
,{3,1,8} ,{3,1,0}
,{2,0,7}} ,{2,7,8}}
A: 1+2+3+3+1+1+1+1+1 = 14
B: 2+3+3+1+1+1+1+1+1 = 14
But that doesn't mean we can sort the puzzle in 14 steps; it just
gives me the cost or something like that. I think this algorithm will help me find the path faster (I'm just worried about the stack overflow and the memory), but I didn't test it yet.
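The Manhattan cost computed by hand above (14 for that board) can be written as a small function (assuming the goal {{1,2,3},{4,5,6},{7,8,0}}; this is the admissible heuristic usually paired with A* or best-first search rather than plain BFS):

```cpp
#include <cassert>
#include <cstdlib>

// Sum of Manhattan distances of each tile from its goal position, for a
// 3x3 board stored row-major with 0 = empty tile and goal {{1,2,3},{4,5,6},{7,8,0}}.
int manhattan(const int board[9]) {
    int sum = 0;
    for (int pos = 0; pos < 9; ++pos) {
        int tile = board[pos];
        if (tile == 0) continue;                 // the blank is not counted
        int goalPos = tile - 1;                  // tile t belongs at index t-1
        sum += std::abs(pos / 3 - goalPos / 3)   // row distance
             + std::abs(pos % 3 - goalPos % 3);  // column distance
    }
    return sum;
}
```

Fed to A* as the heuristic, this expands far fewer nodes than plain BFS, which directly addresses the memory worry.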
I have read something about BFS that says BFS searching requires exponential memory. What does that mean? Does it mean I should declare or define a placeholder, or something like a memory area, for it to contain the nodes or the calculations instead of ordinary memory? Is what I have said right, or did I just make it up myself?
modified 27Sep12 8:45am.





I am wondering if anyone knows of a more efficient algorithm to perform a bit-interleave/de-interleave that takes 256 bits and interleaves every 64th bit in the output. For example, I want:
output = (bit[ 0] << 255) | (bit[ 64] << 254) | (bit[128] << 253) | (bit[192] << 252)
| (bit[ 1] << 251) | (bit[ 65] << 250) | (bit[129] << 249) | (bit[193] << 248)
| ...
| (bit[63] << 3) | (bit[127] << 2) | (bit[191] << 1) | (bit[255])
So far, I have the following algorithms (presented in untested C++):
__uint16 *data = new __uint16[32 / sizeof(__uint16)];
__uint64 *result = new __uint64[32 / sizeof(__uint64)]();
for (int resIdx = 0; resIdx < 4; ++resIdx) {
    for (int dataIdx = resIdx; dataIdx < 16; dataIdx += 4) {
        __uint64 temp = data[dataIdx];
        temp |= temp << 24;
        temp |= temp << 12;
        temp &= 0x000F000F000F000FULL;
        temp |= temp << 6;
        temp |= temp << 3;
        temp &= 0x1111111111111111ULL;
        result[resIdx] |= temp << (dataIdx >> 2);
    }
}
__uint64 *data = new __uint64[32 / sizeof(__uint64)];
__uint16 *result = new __uint16[32 / sizeof(__uint16)];
for (int dataIdx = 0; dataIdx < 4; ++dataIdx) {
    for (int resIdx = dataIdx; resIdx < 16; resIdx += 4) {
        __uint64 temp = data[dataIdx] >> (resIdx >> 2);
        temp &= 0x1111111111111111ULL;
        temp |= temp >> 3;
        temp |= temp >> 6;
        temp &= 0x000F000F000F000FULL;
        temp |= temp >> 12;
        temp |= temp >> 24;
        result[resIdx] = (__uint16)temp;
    }
}
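For what it's worth, the shift-and-mask ladder above can be isolated and checked against a naive bit loop. A sketch in standard C++ (using uint64_t rather than the compiler-specific __uint64; spread4 is a name I made up):

```cpp
#include <cassert>
#include <cstdint>

// Spread the 16 bits of v so that bit i of v lands at bit 4*i of the
// result -- the same shift/mask ladder used in the interleave above.
uint64_t spread4(uint16_t v) {
    uint64_t x = v;
    x |= x << 24;
    x |= x << 12;
    x &= 0x000F000F000F000FULL;  // one nibble every 16 bits
    x |= x << 6;
    x |= x << 3;
    x &= 0x1111111111111111ULL;  // one bit every 4 bits
    return x;
}

// Naive reference: place bit i at position 4*i, one bit at a time.
uint64_t spread4Naive(uint16_t v) {
    uint64_t x = 0;
    for (int i = 0; i < 16; ++i)
        if (v & (1u << i)) x |= 1ULL << (4 * i);
    return x;
}
```

Each 64-bit output word is then the OR of four spread words shifted left by 0..3, which is what the nested loops accumulate.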
Sounds like somebody's got a case of the Mondays
Jeff







My problem goes something like this. I have an array of 5 elements, a[5], for storing the 5 nodes of a network. The network is randomly formed by a random number generator. I have the time of travel for each path of the network. All the nodes may not be connected to each other; the network and the time of travel of each path are as the user has assigned. Now I have to check the elements of each array and assign penalties as per the conditions below. My main task is to find a network path traveling each node of the network only once, such that no visited node is repeated. It is similar to the TSP, but the difference is that the TSP assumes it is possible to travel from each node to every other node, whereas in my case the network is predefined and nodes are not connected to every other node.
a) If the network stored is 2-2-3-4-5, then I find that a[1] = a[2], so I will have to assign a penalty (type 1), as the network has a path 2-2, which is not possible because the start and end of the path are the same. At the same time, for the rest of the paths (node 2 to 3, node 3 to 4, node 4 to 5) I will have to calculate the total time of travel by the time I finally reach node 5.
b) There is another penalty if a path does not exist in the network given by the user; I will have to assign a penalty (type 2) for that kind of condition. For example, if 1-2-4-3-5 is a network to be checked, suppose all the paths are possible except that the path 4-3 doesn't exist in the real network provided by the user. For such cases I will have to calculate the sum of the travel times for the paths which are possible and also assign the penalty (type 2).
c) There is also a condition that the end point of the network must be node 5. So if the end node of the randomly generated network is not node 5, I will have to assign a penalty (type 3) and at the same time calculate the sum of the rest of the possible paths. For example, in the network 3-5-2-1-4 the end node is 4, so I will have to assign the penalty (type 3).
d) Since I am not allowed to visit the same node twice, I will have to assign a penalty (type 4) if the network has repetition of nodes. For example, if the network is 2-4-3-1-4, node 4 occurs twice, so I will have to assign the penalty (type 4) and also calculate the sum of the travel times of the other paths.
Considering all the above penalty conditions, I have to check the networks. I am really not able to work out any logic for how to start. I need some hints on how to do it.





Sorry, I see nothing difficult in this problem statement. Or is it just that you have never written a program? Or are you asking how to represent the network topology and path costs?
For a), scan the array and check if two consecutive indexes are the same.
For b), scan the array and check if every pair of indexes is linked in the user-supplied topology (you need some function that will query the topology and tell you if a link exists).
For c), the condition is a[5] != 5.
For d), use a double loop: the outer loop from i=1 to 4, the inner loop from j=i+1 to 5. In the body of the inner loop, test a[i] == a[j].
Accumulating the path costs is also obvious once you have a query function per network link.
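Put together, those four checks might look like this (a hypothetical sketch in C++; the 6x6 cost matrix with -1 meaning "no link", the struct, and all names are my own assumptions, not the poster's code):

```cpp
#include <cassert>

// Penalty counts and accumulated travel time for one candidate path a[5]
// over a user-supplied topology. cost[i][j] >= 0 is the travel time of
// link i->j; -1 means the link does not exist. Nodes are numbered 1..5.
struct PathCheck { int p1 = 0, p2 = 0, p3 = 0, p4 = 0, time = 0; };

PathCheck checkPath(const int a[5], const int cost[6][6]) {
    PathCheck r;
    for (int i = 0; i < 4; ++i) {
        if (a[i] == a[i + 1]) { ++r.p1; continue; }  // (a) same node twice in a row
        int c = cost[a[i]][a[i + 1]];
        if (c < 0) ++r.p2;                           // (b) link absent from topology
        else r.time += c;                            // sum travel time of real links
    }
    if (a[4] != 5) ++r.p3;                           // (c) path must end at node 5
    for (int i = 0; i < 4; ++i)                      // (d) any node visited twice
        for (int j = i + 1; j < 5; ++j)
            if (a[i] == a[j]) ++r.p4;
    return r;
}
```

The cost matrix is the "query function per network link": one lookup answers both "does the link exist?" and "how long does it take?".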





Note: I see an interesting "resonance" between this question, and Roger Wright's question below: "A Modelling Question;" which I had not read before I made notes today about this "problem space," to post later on CP, when I got home !
For some reason today, while riding the elevators in a shopping mall that has three sets of two public-use elevators (one set at each end of the mall and one in the middle) ...
The idea came to me that it would be an interesting programming challenge (of a type I had never taken on before) to create an elevator-use optimization solution. Note that I have never studied differential equations, and I am not familiar with queue optimization algorithms, etc.
Here's how I framed the problem:
1. Given #n sets of elevators, where the number in each set can vary from #1 to #n:
a. defining "set" to mean elevators that are adjacent, and that any button press on a floor that requests an elevator to go up, or down: simultaneously shares that request with every other elevator in its set that is not in use currently: i.e., stopped at one floor with the door closed, and no requests pending (that might be an unrealistic constraint ?).
2. Assuming all elevators serve the same number of floors:
3. Assuming that the following timestamped data, tagged with a unique ID for each elevator, is generated and sent to a central computer:
a. for every elevator the time it starts moving, and what floor it starts moving from.
b. for every elevator the time it stops moving, after being in motion, and the floor it stops on.
c. on every floor of the building, when someone presses an elevator up, or down, button (from outside the elevator) to request service:
d. data tagged by unique elevator ID containing the time of requestservice button press, and the requested direction, up, or down.
e. inside the elevator: when any floor choice button is pressed: data tagged by unique elevator ID containing the time of floorchoice button press, and the destination floor chosen. So any one moment in time you have a complete list of all floors to be stoppedat by elevator #n.
4. When any elevator starts or stops moving, this is recorded, tagged, and timestamped as in the above.
So, imagining we have all this incoming timestamped information, and that some, or all, elevators are in use, some moving up, or down, some stopped.
The problem to be solved:
1. given a new request for service, up, or down, on floor #n of elevator #n:
2. and, given the context of pending requests and states of every other elevator in the set of which elevator #n is contained:
The desired result: to dispatch the elevators most efficiently, so they serve the most number of people in the smallest amount of time.
I'm not looking for "answers" by asking the question here: I am just looking for a few "pointers" to direct my initial study of the type of scheduling optimization this particular example represents.
Frankly, I don't have a clue about how to approach this kind of problem right now (no formal computer science courses for me, unfortunately).
Obviously one could make this example much more complex by taking into account real-world factors, like most requests originating from the ground floor based on some pattern (in a hotel, for example, most requests originate from the ground and/or check-in floor up to, and a certain amount of time past, check-in time).
You could consider recording the exact times of elevator #n's door opening and closing, and figure out that if the interval is short enough that no one could have gotten on or off, then someone hit the close-door button immediately for whatever reason.
But all that type of complexity I don't want to even consider until I reach some understanding of basic optimization problems.
thanks, Bill
p.s. A difference I think I see between Roger Wright's simulation problem (if I can even begin to interpret it correctly), and the one I've described here is:
There are "unknowns" (see Roger's response here describing his inability to monitor pumpstates in realtime):[^]) in Roger's complex system of pumps, wells, and flows; while, in the problem described here, there are no similar "unknowns:"
For every elevator #n: its current position; whether it is idle stopped on one floor; or moving up or down to some other floor; and, the number of floors it must stop at before reaching the top or bottom floor depending on which way its moving: is known precisely.
"When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images." - Niels Bohr





I've thought about this problem (while waiting for elevators!).
A few random comments: the standard algorithms used by existing elevators seem to be non-optimal. A cluster of floors where an elevator has been requested will slow down multiple elevators, making everyone wait unnecessarily. This is because when the first elevator stops at the first floor in the cluster, the remaining elevators get bogged down by the next floors in the cluster.
It would be more optimal for a single elevator to handle the cluster, while the others continue down with no delays.
It's hard to optimize for the number of people, because there's no way of knowing exactly how many people are on a particular elevator. We can make a guess, however. When an elevator stops at a requested floor, we can assume at least one person got on. But we don't know how many got off.
Using artificial intelligence may optimize the algorithm better than any "blind" approach, that doesn't take historical use patterns into account.
"Microsoft - Adding unnecessary complexity to your work since 1987!"





Hi Alan, thanks for responding. I've been exceptionally busy and thrown offtrack by some hardware problems in the last week, so have not kept up with this thread.
I think that's a good point about using some form of AI technique, or even some form of statistical analysis (factor analysis).
If you had a three-month data set of all the kinds of data I described above, it would seem you could factor out use patterns: such as heavy traffic down from all upper floors, or any floors above coffee-shop level, or ground floors, or the checkout floor, in the AM hours, etc.
Near check-in time you'd expect ground floor to check-in floor (if separate) to be heavy, followed by lots of traffic of people going up to their rooms, and so forth.
Similarly, hotels often have special events at noon, or in the evening, involving large numbers of people; it's easy to anticipate when heavy elevator-use periods related to those events may be?
And, I wonder if the kind of "fuzzy" algorithms developed by Japanese device makers might be useful here?
I wonder how long it will be before elevator makers put small videocams with a wide-angle lens on each elevator and, from the pictures being fed back into a central dispatching computer, roughly estimate the size of the crowd waiting on a particular floor? Maybe it's already being done?
best, Bill
"When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images." - Niels Bohr





Hello,
I'm a complete newbie to the field of cryptographic algorithms themselves, having always relied on third-party libraries and code in the past. Now that I'm starting to poke around with them by myself, I have finally managed to throw together some sort of tiny cryptographic library in VB.NET. However, I'm concerned that... quite frankly... it's a bit rubbish. Sorry about the long code block here:
Imports System.Numerics

Public Class BlumBlumShub
    Private _x As BigInteger
    Private _p As BigInteger
    Private _q As BigInteger
    Private _m As BigInteger
    Private _pow As New BigInteger(2)

    Public Function NextNumber() As BigInteger
        _x = BigInteger.ModPow(_x, _pow, _m)
        Return _x
    End Function

    Public Sub New(ByVal seed As BigInteger)
        _p = BigInteger.Parse("32416190071")
        _q = BigInteger.Parse("32416185031")
        _x = seed
        _m = BigInteger.Multiply(_p, _q)
    End Sub
End Class

Public Class RandomBitStream
    Private _b As BlumBlumShub

    Public Function ReadByte() As Byte
        Dim num As Byte = 0
        For i = 1 To 8
            ' Each parity bit must be weighted by a power of two
            ' (the original Math.Pow(i, 2) gave i squared, which is wrong)
            If _b.NextNumber().IsEven Then
                num = num Or CByte(1 << (i - 1))
            End If
        Next
        Return num
    End Function

    Public Sub New(ByVal seed As BigInteger)
        _b = New BlumBlumShub(seed)
    End Sub
End Class

Public Class BlumXor
    Private _bitSrc As RandomBitStream
    Private _key As BigInteger

    Public Sub Cipher(ByRef message As Byte())
        _bitSrc = New RandomBitStream(_key)
        For i = 0 To message.Length - 1
            message(i) = _bitSrc.ReadByte() Xor message(i)
        Next
    End Sub

    Public Function ByteToStr(ByVal inByte As Byte) As String
        Return inByte.ToString().PadLeft(3, "0"c)
    End Function

    Public Sub New(ByVal keyStr As String)
        Dim encoder As New System.Text.ASCIIEncoding()
        Dim keyBytes As Byte() = encoder.GetBytes(keyStr)
        _key = BigInteger.Parse(String.Join("", Array.ConvertAll(Of Byte, String)(keyBytes, New System.Converter(Of Byte, String)(AddressOf ByteToStr))))
    End Sub
End Class
And that is that. Firstly, I think my implementation of the Blum Blum Shub PRNG is off; secondly, I'm not entirely sure that I should be using the parity of the numbers generated to give me random bits; and thirdly, I'm not so sure about my use of Xor, or about how I'm generating a seed integer for the PRNG from a string. I welcome the input and criticism of any cryptographers or mathematicians out there, plus anyone else who can see problems with my code.
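For reference, the core Blum Blum Shub recurrence x = x^2 mod M can be checked against the textbook toy parameters p = 11, q = 23, seed 3 (far too small for real use; a C++ sketch, and the class name is my own):

```cpp
#include <cassert>
#include <cstdint>

// Blum Blum Shub step: x <- x^2 mod M, where M = p*q for primes p, q
// that are both congruent to 3 mod 4. In practice only the parity
// (or a few low bits) of each x is used as output, as the VB code does.
struct Bbs {
    uint64_t x, m;
    Bbs(uint64_t seed, uint64_t p, uint64_t q) : x(seed), m(p * q) {}
    uint64_t next() { x = (x * x) % m; return x; }
};
```

With these toy parameters the sequence from seed 3 is 9, 81, 236, 36, 31, ..., which makes a handy sanity check for any implementation.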
SixOfTheClock
A programming language is to a programmer what a fine hat is to one who is fond of fancy garden parties. Just don't try wearing any .NET language on your head. Some of them are sharp.
modified 14Sep12 6:30am.





Hi all,
I am trying to write a Windows Forms program to communicate with a CNC machine from the 1980s.
The machine uses a 6802 microprocessor and communicates with a desktop computer running a DOS program.
The communication is in ASCII characters and is in the following format:
4 spaces, a 3-digit value, and a checksum character; or 3 spaces, 4 digits, and a checksum character.
So a total of 7 characters plus a checksum character.
This is a part of the original message (space represented as '-' and the checksum character in bold):
----853>---1370%----248>---1204'------00------11----7887----2156----2552---2346#---2346#---1031#
I already tried some things, like adding all the decimal ASCII values and taking them MOD various values,
e.g. for 853: 32+32+32+32+56+53+51 MOD some value, then adding 30 or 31 to make sure the result is in the range of printable characters, but no luck.
Note that 853 and 248 have the same checksum; so do 2346 and 1031.
I hope someone recognizes the checksum algorithm or can help me find out how the checksum character is calculated.
Thanks,
Groover
0200 A9 23
0202 8D 01 80
0205 00
modified 9Sep12 16:14pm.





Easy:
XOR the 7 ASCII values together.
"    853>"
32^32^32^32^56^53^51 = 62 ">"
"    248>"
32^32^32^32^50^52^56 = 62 ">"
"   2346#"
32^32^32^50^51^52^54 = 35 "#"
"   1031#"
32^32^32^49^48^51^49 = 35 "#"
"   1370%"
32^32^32^49^51^55^48 = 37 "%"
"   1204'"
32^32^32^49^50^48^52 = 39 "'"
"      00"
32^32^32^32^32^32^48 = 48 "0"
"      11"
32^32^32^32^32^32^49 = 49 "1"
"    7887"
32^32^32^32^55^56^56 = 55 "7"
"    2156"
32^32^32^32^50^49^53 = 54 "6"
"    2552"
32^32^32^32^50^53^53 = 50 "2"
Alan.
Alan.





Thank you, Alan.
I wrote a function in my program and it works perfectly.
Groover,
0200 A9 23
0202 8D 01 80
0205 00





I have a group of points that I need to plot on a graph, and I need to calculate the line of best fit (slope and intercept) in a programming language.
For example:
x: 0 1 2 3 4
y: 40 41 45 41 47
At the moment I am using, and have implemented, the least squares algorithm
described here: http://en.wikipedia.org/wiki/Least_squares[^]
I was wondering whether anyone knows of a computationally more efficient algorithm with close to the same accuracy, and could explain how the algorithm works?
Thanks in advance





A decent implementation of linear fitting by LSQ should run in O(n) time and O(1) space.
One pass over the data, accumulating sum(x), sum(y), sum(x*x) and sum(x*y) then crunch those four (and n) to get the answer.
If you're just doing a simple linear fit, DON'T get sucked into all the matrix stuff. Waaaaay overkill.
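That single-pass accumulation looks something like this (a sketch; the function name is assumed, and the test values are the example points from the question):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// One pass over the points accumulating sum(x), sum(y), sum(x*x), sum(x*y),
// then solve the 2x2 normal equations for slope and intercept.
// O(n) time, O(1) space -- no matrices needed for a simple linear fit.
void linearFit(const double* x, const double* y, std::size_t n,
               double& slope, double& intercept) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    intercept = (sy - slope * sx) / n;
}
```

For the example points this gives slope 1.4 and intercept 40.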
Cheers,
Peter
[edit]forgot sum(y)[/edit]
Software rusts. - Simon Stephenson, ca 1994. So does this signature. - me, 2012
modified 5Sep12 2:43am.





So, a quick maths check.
If I want to store the results of an iterative algorithm for every pixel in an image (fractals!), then I think it's a dumb thing to start with, but I want to check before I even start bothering to write the class that would store the results.
Assume the image size is 1000 x 1000, so 1 million pixels.
The algorithm is an iteration of Zn+1 = Zn^2 + Z0, up to a maximum of, say, 10,000 times.
Each iteration results in a complex number.
A complex number is a structure with 2 doubles (let's assume it occupies 128 bits for simplicity).
Theoretically this could result in 10,000 x 1,000,000 complex number instances.
This is 1 x 10^10 instances; at 128 bits each, that's 1.28 x 10^12 bits.
Which is:
1.6 x 10^11 bytes
1.56 x 10^8 K bytes
152,588 M bytes
149 G bytes
That would seem a dumb thing to do
But I'm assuming that my maths is correct and there aren't memory optimisations I know nothing about.
Mike





Does your algorithm depend on the pixel colour or the pixel position?
What I'm thinking is that, if it depends on the pixel colour alone, then your iterations for two pixels with identical colours should yield the same results. If you come up with a way of storing the results in bulk for pixels with the same colours, then your memory consumption would most likely go way down.
Other than that, after a quick look at your numbers as displayed above, they seem to be correct.
Full-fledged Java/.NET lover, full-fledged PHP hater.
Full-fledged Google/Microsoft lover, full-fledged Apple hater.
Full-fledged Skype lover, full-fledged YM hater.
modified 3Sep12 15:22pm.





Pixel position: it's converted to a complex number, which is then stuffed into the iterative equation, and that results in a long (or short) list of complex numbers, Z0 to Zn. n could be anywhere up to the limiting number (10k in the example I gave), but could be much, much lower (50) or much, much higher.
The problem arises because the way I'm going through this 'rite of passage' (doing Mandelbrot images) is to try and create a pluggable set of classes that could be used to implement any fractal set (Mandelbrot, Julia, Newton, etc.) and use any of the many colourising methods (escape time, distance estimators, etc.). Adding a new set (e.g. Burning Ship, Phoenix) or a new colourising method (e.g. escape angle) therefore simply entails creating a new class that does that specific job and implements the relevant interface.
The consequence of this is that the key class, which manages and calls the FractalSet iterators and then calls the colouriser to get a colour for the complex number (pixel) under investigation, has no way of knowing what data that colouriser needs; the classic escape time needs only the actual number of iterations, while the distance estimators need things like the last 2 (or more) results of the iteration.
So the above question came from my first idea, which was to store them all while the entire image was iterated pixel by pixel. Bad idea!
I think my way of doing this will be to change the colouriser interface: have a method that takes the complete set of results for a single iteration cycle, processes it as appropriate (turning potentially thousands of complex numbers into something much smaller) and builds the data set it needs; plus a method to manipulate them once all iterations have been completed; and finally a method that spits out the image.
Thanks for the confirmation.
Mike





MikeMadBadger wrote: I think my way of doing this will be to change the colouriser interface and have a method that takes the complete result set of results for a single iteration, processes it as appropriate and builds the data set it needs
Yeah, if you can do that (I thought storing all the values was a requirement or spec or mandatory), it definitely beats having to store tons of values.
Now, in terms of speed, I have no idea what the gain would be, but I'm also betting it would be faster than storing the calculations (which, come to think of it, in your particular case would only be a cache of some sort).
MikeMadBadger wrote: Thanks for the confirmation.
I doubt I was of much help, but you're welcome
Full-fledged Java/.NET lover, full-fledged PHP hater.
Full-fledged Google/Microsoft lover, full-fledged Apple hater.
Full-fledged Skype lover, full-fledged YM hater.





Andrei Straut wrote: I thought storing all the values was a requirement or spec or mandatory
The closest I have to a spec. is the jumbled nonsense in my head and its many and frequent lurches from bad to poor design (an improvement of sorts!).
Thanks again.
Mike





Even if you wanted the results of all iterations, why would you need the results of all iterations for all pixels at the same time? Surely the pixels are independent?





Absolutely correct, I don't need them all at the same time.
That was my gut reaction to the problem of using pluggable colourisers: the Imager class, which does the looping through the pixels, asks a FractalSet to iterate its equation and send back the results; it would then manipulate them and finally send the manipulated data to the colouriser. However, the Imager doesn't know what data the colouriser needs; it could be as simple as the iteration count and the max iterations, or it could be the average of all iterations, the escape angle, etc.
The conclusion is that I need to send the results of each iteration to the colouriser immediately after each iteration has finished. The colouriser can then do whatever manipulations it wants and store only the data it needs, rather than the Imager storing everything and only invoking the colouriser once all iterations have finished.
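One way to sketch that push-style interface (hypothetical names throughout; C++ for consistency with the rest of the thread, though the project itself is in a .NET language):

```cpp
#include <cassert>
#include <complex>

// Hypothetical push-style colouriser: the Imager feeds it every iterate
// of one pixel as it is produced, so nothing has to be stored globally.
struct Colouriser {
    virtual void beginPixel() = 0;
    virtual void onIterate(std::complex<double> z) = 0;  // called per iteration
    virtual int endPixel() = 0;                          // produce e.g. a colour index
    virtual ~Colouriser() = default;
};

// Classic escape-time colouriser: keeps only a count, not the iterates.
struct EscapeTime : Colouriser {
    int count = 0;
    void beginPixel() override { count = 0; }
    void onIterate(std::complex<double>) override { ++count; }
    int endPixel() override { return count; }
};

// Iterate z = z^2 + c for one pixel, pushing each iterate to the colouriser.
int iteratePixel(std::complex<double> c, int maxIter, Colouriser& col) {
    col.beginPixel();
    std::complex<double> z = 0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        col.onIterate(z);
        if (std::norm(z) > 4.0) break;  // escaped: |z| > 2
    }
    return col.endPixel();
}
```

A distance estimator or escape-angle colouriser would implement the same interface but keep, say, the last two iterates; the Imager never needs to know which.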
I think that'll work, we'll soon see...
Mike





Noooo
This question is on the news page. Really! Must be a slow news day.
Ah well, my dumb question for the world to peruse and giggle quietly to itself.





What use could it be to keep all iterates for all pixels? The main reason I can think of is to render an animation showing the evolution of convergence, and to be able to play it forwards but also backwards. (If only forwards, your machine will be fast enough to recompute the iterates on the fly; if just the final result matters, there is no reason to store all iterates.)
If your purpose is that kind of animation, a solution can be to MPEG-compress the rendered images, achieving decent storage amounts even for large frame counts.





See my answer to harold atropot; hopefully that answers your question. Essentially, I don't need to store them all; that was just a really poor way of solving a problem (so poor it wouldn't have worked).
Creating animations is a much later (if ever) addition to this project; right now it's just about me learning how to use interfaces, design patterns, etc.
Thanks,
Mike





Mike,
Do you need to keep all 10,000 iterations leading up to your final image? I thought the idea with fractals was to loop back through the formula until you either settle on a value or not, and then paint the pixel (or not). Wouldn't you just need the final iteration, or maybe an array with your decision (paint/don't paint) for each pixel? That would reduce the size of your data by a factor of 10k.
Paul





The colouriser needs all the iterations for a given pixel (= a complex) since there are n ways to colourise a fractal, some of which manipulate all the values of Zn achieved during an iterative cycle.
However, the colouriser doesn't need all iterations for all pixels at the same time, so I am now sending all iterations for one pixel to the colouriser, it can then do whatever it needs, store its necessary data (number of iterations, escape angle, whatever) and then the cycle starts again for the next pixel. I think this is quite close to your suggestion?
Mostly the colourisers store their data in List(Of T) objects since they can cope better than arrays with changing dimensions.
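In Python (rather than the VB.NET of the project) that per-pixel cycle might look like this sketch; the orbit list exists only for one pixel at a time, and the hypothetical colouriser reduces it to a single stored statistic:

```python
def iterates_for(c, max_iter=100):
    """Collect the orbit of 0 under z -> z^2 + c for one pixel."""
    zs = []
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        zs.append(z)
        if abs(z) > 2.0:
            break
    return zs

class AverageMagnitudeColouriser:
    """Example statistic: the mean |Zn| over a pixel's orbit."""
    def __init__(self):
        self.values = []                 # one number per pixel, not per iterate
    def colour(self, zs):
        self.values.append(sum(abs(z) for z in zs) / len(zs))

col = AverageMagnitudeColouriser()
for c in (0j, 1 + 1j):                   # two sample "pixels"
    orbit = iterates_for(c)              # exists only while this pixel is processed
    col.colour(orbit)                    # reduced to one stored value
print(len(col.values))                   # one value per pixel
```

Whether the buffer is a list or an array is then an implementation detail of the colouriser, not the imager.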
Mike





MikeMadBadger wrote: I am now sending all iterations for one pixel to the colouriser, it can then do whatever it needs, store its necessary data (number of iterations, escape angle, whatever) and then the cycle starts again for the next pixel. I think this is quite close to your suggestion?
Yes, I was thinking in terms of storing only what is absolutely necessary for each pixel.
Good luck.
Paul





You may want to check out www.FractalForums.com.
The people that wrote mandelbulb (mandelbox?) hang out there and may be a source of programming know-how.
I'm user panzerboy there, I've written a few plugins for FractalExtreme. I use that to make zoom movies which I post on YouTube as CommandLineCowboy.
FractalExtreme is somewhat limited in its colouring; it's strictly iteration-count indexing into a 228-element palette table. I often do multiple renders with different palettes and mappings, layering the results. If you check out my latest videos you'll see that you can still accomplish quite a lot with simple iteration-count colouring.
Your 'average of all iterations' comment intrigues me. In Fractal Extreme you can hold Shift and Ctrl and click on the initial Mandelbrot set to see the 'iteration paths'. Values in the Mset spiral inwards around a central point, which would be your average value. The values outside the set with high iteration counts buzz around a point before going outside the window. Again, that average would be the central point, only slightly altered by the last few iterations. So I'm thinking you may not need all the iterations to find the axis of the spirals.





Jeremy David Thomson wrote: Your 'average of all iterations' comment intrigues me
Then I was probably a little misleading; I was just trying to pick an example of why you might want anything other than the classic 'IterationCount / MaxIterations', which is what most folk doing this 'rite of passage' project would recognise. More practical examples might have been standard deviation or other more complex statistical analyses, many (all?) of which have been done by someone at some point, to varying degrees of success and aesthetic value.
Jeremy David Thomson wrote: You may want to check out www.FractalForums.com
That I have discovered, though I have only scratched the surface at this point. It seems one of the better (or at least more broad ranging) places to go, though there are literally millions of places to get little bits and pieces of useful information.
Thanks for your time.






Zipping wouldn't work due to the very nature of fractals: they're all about chaos and unpredictability, whereas compression algorithms are all about detecting patterns and predictability of data sequences. The two don't mesh. Ever.
P.S.: for the same reason, restricting yourself to lower precision won't work either. In fact, with a precision of only about 53 bits (the length of the mantissa in a double), doing more than about 500 iterations will yield no meaningful data. It may still look nice when visualized (which is usually the point of fractal programs), but you are effectively looking at rounding errors. I would go so far as to say you need an arbitrary-precision math library for more than 100 iterations.





I'm a bit in a hurry right now, but I'd like to point out that for 10,000 iterations you need arbitrary-precision math! The rounding errors from each iteration grow exponentially, and so will the size you need to store each resulting value at sufficient precision.
Unless of course you're fine with effectively looking at rounding errors after a few hundred iterations.
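A quick experiment supporting this (my own illustrative setup: c = -1.9 is a chaotic real parameter, and 60 decimal digits stands in for "exact"): iterate z -> z^2 + c once in doubles and once at high precision, and watch the two orbits drift apart.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60      # ~200 bits: effectively exact at this orbit length
c_f = -1.9                  # chaotic real parameter; orbit stays bounded
c_d = Decimal("-1.9")       # the double can't represent -1.9 exactly, which
                            # seeds the divergence measured below
z_f = 0.0                   # double-precision orbit
z_d = Decimal(0)            # high-precision reference orbit
err_early = err_late = 0.0
for n in range(1, 61):
    z_f = z_f * z_f + c_f
    z_d = z_d * z_d + c_d
    if n == 5:
        err_early = abs(z_f - float(z_d))   # still tiny after a few steps
    if n == 60:
        err_late = abs(z_f - float(z_d))    # orbits have fully decorrelated

print(err_early, err_late)
```

After 60 iterations the double-precision orbit bears essentially no relation to the exact one, even though both remain bounded and "look" plausible.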





Stefan_Lang wrote: for 10000 iterations you need arbitrary precision math
Good point, well made. << Heads off to investigate >>
To be fair, at this point it's more a proof of concept: can I design a set of classes that allow you to plug any fractal set into any colourising method (with any mapping technique to boot)? It's not intended to be a finished product; why would the world need another fractal generator?
Thanks,
Mike





I had some time to get an estimate for the accumulation of the precision-based error, and found that its upper limit is about machine_precision * 4^iterations, and the average error would be around machine_precision * 2^iterations. That means the maximum error approaches 1 after about 26 iterations for double values (52-bit mantissa, machine precision 2^-52), and the average error approaches 1 after about 52 iterations.
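The iteration count falls straight out of the bound: find the smallest n with machine_precision * 4^n >= 1, i.e. roughly half the mantissa width. A one-liner check (with the 2^-52 machine epsilon of an IEEE double this gives 26; a 56-bit mantissa would give 28 by the same formula):

```python
def iterations_until_saturation(eps):
    """Smallest n such that eps * 4**n >= 1 (worst-case error reaches 1)."""
    n = 0
    err = eps
    while err < 1.0:
        err *= 4.0
        n += 1
    return n

print(iterations_until_saturation(2.0 ** -52))   # IEEE double epsilon
```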
If you intend to store intermediate results, you might consider approaching this problem from the other side:
1. partition your intervals according to your machine precision, e. g. 2^16x2^16 points
2. for each point, store 1 if after one iteration the divergence criterion is met, or 0 if not. This is the iteration counter.
3. repeat for each point with a value of 0:
3.1 convert the result after one iteration to the resolution of your point array (i. e. check which pixel is closest)
3.2 check the iteration counter for the point corresponding to that value: if it is not 0, store this value+1. Else your counter is still 0.
3.3 when you're done with all points that still have a counter of 0, start all over again, for at most as many times as appropriate to your initial resolution (e. g. no more than 16 times if you started at 2^16x2^16)
This is a reversal of the actual algorithm, deliberately adding in the error (in step 3.1) that you would otherwise get when calculating at a given resolution (in this case 16 bit).





Thanks for spending time on this; I'm going to have to sit down with a large coffee (or 8) before I get my head around it. Separately, I found GNU MP and some .NET wrappers. When I get there I'll have a decent look at both options.
Thanks again,
Mike





From here: http://mrob.com/pub/muency/accuracy.html
The commonly-seen views of the Mandelbrot Set have been challenged by arguments based on techniques like error-term analysis (see Propagation of uncertainty). They show that if you want to get an accurate view of a lemniscate Z_N you need to use at least N digits in all your calculations. The result is that most views of the Mandelbrot Set that people see on computers are (in theory) completely inaccurate or even "fictitious".
However, except for certain specific purposes (notably using iteration to trace an external ray), the problem is much smaller. The overwhelming amount of experimental evidence shows that ordinary Mandelbrot images (plotting e.g. the dwell for each point on a pixel grid) are indistinguishable from equivalent results produced by exact calculation. The images look the same to the human eye regardless of how many digits you use, as long as the number of digits is sufficient to distinguish the coordinates of the parameter value C.
This is because the round-off errors added by each step in the iteration tend to cancel out as they would if randomly distributed, rather than being systematically biased in one certain direction. See Systematic error, "Systematic versus random error".
Not that this means the discussion about errors is without value, especially since the above explanation of why you might choose to ignore them relies on an assumed internal cancellation of errors through randomness. Indeed the above validates the discussion, but perhaps it helps put into context how much effort one might want to put into higher accuracy versus, say, better functionality.
Mike





Interesting. I have to admit I was a bit doubtful of my own line of argumentation, since I've seen incredibly detailed pictures from the Mandelbrot Set as early as 25 years ago, and I doubt many of them (if any) were calculated using arbitrary precision. Nor did their smoothness indicate anything even close to the error levels that error propagation theory would suggest.
Then again, I've seen some fixed-point 16-bit implementations that were clearly useless for anything but calculating the full picture (at a low resolution, preferably); zooming in pretty quickly revealed the huge errors this limited accuracy produced.
In any case, you should make sure that when you zoom in to some interesting area, your accuracy is still some orders of magnitude above the distance between pixels, or else you'll get the same kind of artifacts I mentioned in the previous paragraph.
P.S.: I considered how to model the cancelling out: the systematic error based on machine precision has a uniform distribution. Iterating the calculation is like adding independent variables (up to 4 times in one iteration step), resulting in a distribution that looks more like a normal distribution. The expected error will be 0 on average, but what is of greater interest is the likelihood that the error exceeds some value that makes the result unusable (an error about the size of 1, or greater). Unfortunately my knowledge of probability theory is somewhat rusty, but I suppose if you can determine that likelihood, and it is on the order of no more than 1/(number of pixels), then you should still get good results for visualization purposes.
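That likelihood can at least be eyeballed with a tiny Monte Carlo sketch (all numbers here are mine, exaggerated so the effect is visible): model each iteration's rounding error as an independent uniform step and count how often the accumulated sum gets anywhere near the worst case.

```python
import random

random.seed(1)
eps = 1e-6        # exaggerated "machine precision"
iters = 1000      # error contributions per trial
trials = 10000

worst_case = eps * iters         # if every error pushed the same way
exceed = 0
for _ in range(trials):
    err = sum(random.uniform(-eps, eps) for _ in range(iters))
    if abs(err) > worst_case / 10:   # even a tenth of worst case is rare
        exceed += 1
print(exceed / trials)
```

The accumulated error sits near eps * sqrt(iters), orders of magnitude below the eps * iters worst case, which is exactly the cancellation effect the quoted page describes.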




