

My problem goes something like this. I have an array a[5] for storing the 5 nodes of a network. The network is formed by a random number generator, and I have the travel time for each path in the network; not all nodes are connected to each other. The network and the travel time of each path are as the user has assigned them. Now I have to check the elements of each array and assign penalties according to the conditions below. My main task is to find a path that travels through each node of the network exactly once, so that no visited node is repeated. It is similar to the TSP, but the difference is that the TSP assumes it is possible to travel from each node to every other node, whereas in my case the network is predefined and the nodes are not connected to every other node.
a) If the stored network is 22345, then I find that a[1] = a[2], so I have to assign a penalty (type 1), because the network contains the path 2→2, which is not possible (the start and end of the path are the same). At the same time, for the remaining paths (node 2 to 3, node 3 to 4, node 4 to 5) I have to calculate the total travel time accumulated when I finally reach node 5.
b) There is another penalty condition: if a path is not a possible path in the network given by the user, I have to assign a penalty (type 2). For example, if 12435 is a network to be checked, and all the paths are possible except that the path 4→3 doesn't exist in the real network provided by the user, then for such cases I have to calculate the sum of the travel times of the paths that are possible and also assign the penalty (type 2).
c) There is also a condition that I need the end point of the network to be node 5. So if the randomly generated network does not end at node 5, I have to assign a penalty (type 3) and at the same time calculate the sum of the remaining possible paths. For example, in the network 35214 the end node is 4, so I have to assign the penalty (type 3).
d) Since I am not allowed to visit the same node twice, I have to assign a penalty (type 4) if the network repeats a node. For example, in the network 24314 node 4 occurs twice, so I have to assign the penalty (type 4) and also calculate the sum of the travel times of the other paths.
Considering all the above penalty conditions, I have to check the networks. I am really not able to come up with the logic for how to start. I need a hint on how to do it.





Sorry, I see nothing difficult in this problem statement. Or maybe it's just that you have never written a program? Or are you asking how to represent the network topology and path costs?
For a), scan the array and check whether two consecutive entries are the same.
For b), scan the array and check that every consecutive pair of nodes is linked in the user-supplied topology (you need some function that will query the topology and tell you if a link exists).
For c), the condition is a[5] != 5.
For d), use a double loop: the outer loop from i=1 to 4, the inner loop from j=i+1 to 5. In the body of the inner loop, test a[i] == a[j].
Accumulating the path costs is also straightforward once you have a query function per network link.
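Putting the four checks together, something like this (a Python sketch rather than the OP's language; the list-based tour, the dict-based topology and the penalty values are all invented for illustration):

```python
# Sketch of the four checks (the tour is a list of node labels, the
# user-supplied topology is a dict mapping (from_node, to_node) ->
# travel time, and the penalty values are placeholders).
PENALTY = {1: 1000, 2: 1000, 3: 1000, 4: 1000}  # one value per penalty type

def evaluate(tour, cost, end_node=5):
    total = 0
    for u, v in zip(tour, tour[1:]):
        if u == v:
            total += PENALTY[1]      # a) consecutive repeat, e.g. 2->2
        elif (u, v) in cost:
            total += cost[(u, v)]    # link exists: accumulate travel time
        else:
            total += PENALTY[2]      # b) link missing from the topology
    if tour[-1] != end_node:
        total += PENALTY[3]          # c) tour must end at node 5
    if len(set(tour)) != len(tour):
        total += PENALTY[4]          # d) some node visited twice
    return total
```

For example, with travel times for links 1→2, 2→3, 3→4 and 4→5, the tour 12345 just sums the four times, while 22345 collects the type 1 penalty (and, since node 2 repeats, the type 4 penalty too).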





Note: I see an interesting "resonance" between this question and Roger Wright's question below, "A Modelling Question," which I had not read before I made notes today about this "problem space," to post later on CP when I got home!
For some reason today, while riding the elevators in a shopping mall that has three sets of two public-use elevators (one set at each end of the mall and one in the middle) ...
The idea came to me that it would be an interesting programming challenge (of a type I had never taken on before) to create an elevator-use optimization solution. Note that I have never studied differential equations, and I am not familiar with queue optimization algorithms, etc.
Here's how I framed the problem:
1. Given n sets of elevators, where the number in each set can vary from 1 to n:
a. defining "set" to mean elevators that are adjacent, such that any button press on a floor requesting an elevator to go up or down is simultaneously shared with every other elevator in its set that is not currently in use, i.e., stopped at one floor with the door closed and no requests pending (that might be an unrealistic constraint?).
2. Assuming all elevators serve the same number of floors:
3. Assuming that the following timestamped data, tagged with a unique ID for each elevator, is generated and sent to a central computer:
a. for every elevator, the time it starts moving and the floor it starts moving from;
b. for every elevator, the time it stops moving, after being in motion, and the floor it stops on;
c. on every floor of the building, when someone presses an elevator up or down button (from outside the elevator) to request service:
d. data tagged by unique elevator ID containing the time of the request-service button press and the requested direction, up or down;
e. inside the elevator, when any floor-choice button is pressed: data tagged by unique elevator ID containing the time of the floor-choice button press and the destination floor chosen. So at any one moment in time you have a complete list of all floors to be stopped at by elevator #n.
4. When any elevator starts or stops moving, this is recorded, tagged and timestamped as above.
So, imagine we have all this incoming timestamped information, and that some or all elevators are in use, some moving up or down, some stopped.
The problem to be solved:
1. given a new request for service, up or down, on floor #n:
2. and given the context of pending requests and the states of every other elevator in the set that contains elevator #n:
The desired result: to dispatch the elevators most efficiently, so they serve the greatest number of people in the smallest amount of time.
I'm not looking for "answers" by asking the question here: I am just looking for a few "pointers" to direct my initial study of the type of scheduling optimization this particular example represents.
Frankly, I don't have a clue about how to approach this kind of problem right now (no formal computer science courses for me, unfortunately).
Obviously one could make this example much more complex by taking into account real-world factors, such as most requests originating from the ground floor according to some pattern (in a hotel, for instance, most requests originate from the ground and/or check-in floor from check-in time up to a certain amount of time past it).
You could consider recording the exact times of elevator #n's door opening and closing, and figure out that if the interval is short enough that no one could have gotten on or off, then someone must have hit the close-door button immediately for whatever reason.
But I don't want to even consider that type of complexity until I reach some understanding of basic optimization problems.
thanks, Bill
p.s. A difference I think I see between Roger Wright's simulation problem (if I can even begin to interpret it correctly) and the one I've described here is:
There are "unknowns" in Roger's complex system of pumps, wells, and flows (see Roger's response describing his inability to monitor pump states in real time); while, in the problem described here, there are no similar "unknowns":
For every elevator #n, its current position, whether it is idle (stopped on one floor) or moving up or down to some other floor, and the number of floors it must stop at before reaching the top or bottom floor (depending on which way it's moving), are all known precisely.
"When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images." Niels Bohr





I've thought about this problem (while waiting for elevators!).
A few random comments: the standard algorithms used by existing elevators seem to be non-optimal. A cluster of floors where an elevator has been requested will slow down multiple elevators, making everyone wait unnecessarily. This is because when the first elevator stops at the first floor in the cluster, the remaining elevators get bogged down by the next floors in the cluster.
It would be more optimal for a single elevator to handle the cluster, while the others continue down with no delays.
It's hard to optimize for the number of people, because there's no way of knowing exactly how many people are on a particular elevator. We can make a guess, however: when an elevator stops at a requested floor, we can assume at least one person got on. But we don't know how many got off.
Using artificial intelligence may optimize the algorithm better than any "blind" approach that doesn't take historical use patterns into account.
"Microsoft - Adding unnecessary complexity to your work since 1987!"





Hi Alan, thanks for responding. I've been exceptionally busy and thrown off-track by some hardware problems in the last week, so have not kept up with this thread.
I think that's a good point about using some form of AI technique, or even some form of statistical analysis (factor analysis).
If you had a three-month data set of all the kinds of data I described above, it would seem you could factor out use patterns: such as heavy downward traffic in the AM hours from all upper floors, or any floors above coffee-shop level, or the ground floors, or the check-out floor, etc.
Near check-in time you'd expect ground-floor to check-in-floor traffic (if separate) to be heavy, followed by lots of traffic of people going up to their rooms, and so forth.
Similarly, hotels often have special events at noon or in the evening involving large numbers of people; it's easy to anticipate when heavy elevator-use periods related to those events may occur.
And I wonder if the kind of "fuzzy" algorithms developed by Japanese device makers might be useful here?
I wonder how long it will be before elevator makers put small video cams with a wide-angle lens on each elevator and, from the pictures fed back into a central dispatching computer, roughly estimate the size of the crowd waiting on a particular floor? Maybe it's already being done?
best, Bill
"When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images." Niels Bohr





Hello,
I'm a complete newbie to the field of cryptographic algorithms themselves, having always relied on third-party libraries and code in the past. Now that I'm starting to poke around with them myself, I finally managed to throw together some sort of tiny cryptographic library in VB.NET. However, I'm concerned that... quite frankly... it's a bit rubbish. Sorry about the long code block here:
Imports System.Numerics

Public Class BlumBlumShub
    Private _x As BigInteger
    Private _p As BigInteger
    Private _q As BigInteger
    Private _m As BigInteger
    Private _pow As New BigInteger(2)

    Public Function NextNumber() As BigInteger
        _x = BigInteger.ModPow(_x, _pow, _m)
        Return _x
    End Function

    Public Sub New(ByVal seed As BigInteger)
        _p = BigInteger.Parse("32416190071")
        _q = BigInteger.Parse("32416185031")
        _x = seed
        _m = BigInteger.Multiply(_p, _q)
    End Sub
End Class

Public Class RandomBitStream
    Private _b As BlumBlumShub

    Public Function ReadByte() As Byte
        Dim num As Byte = 0
        For i = 1 To 8
            num += Math.Pow(i, 2) * If(_b.NextNumber().IsEven, 1, 0)
        Next
        Return num
    End Function

    Public Sub New(ByVal seed As BigInteger)
        _b = New BlumBlumShub(seed)
    End Sub
End Class

Public Class BlumXor
    Private _bitSrc As RandomBitStream
    Private _key As BigInteger

    Public Sub Cipher(ByRef message As Byte())
        _bitSrc = New RandomBitStream(_key)
        For i = 0 To message.Length - 1
            message(i) = _bitSrc.ReadByte() Xor message(i)
        Next
    End Sub

    Public Function ByteToStr(ByVal inByte As Byte) As String
        Return inByte.ToString().PadLeft(3, "0"c)
    End Function

    Public Sub New(ByVal keyStr As String)
        Dim encoder As New System.Text.ASCIIEncoding()
        Dim keyBytes As Byte() = encoder.GetBytes(keyStr)
        _key = BigInteger.Parse(String.Join("", Array.ConvertAll(Of Byte, String)(keyBytes, New System.Converter(Of Byte, String)(AddressOf ByteToStr))))
    End Sub
End Class
And that is that. Firstly, I think my implementation of the BlumBlumShub PRNG is off, secondly I'm not entirely sure that I should be using the parity of numbers generated to give me random bits and thirdly I'm not so sure about my use of Xor or how I'm generating a seed integer for the PRNG from a string. I welcome the input and criticism of any cryptographers or mathematicians out there, plus anyone else that can see problems with my code.
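For comparison, here is a rough Python sketch of the textbook Blum Blum Shub construction the code above is aiming at: the state is squared modulo M each step, the least-significant bit of each state is the output bit, and bytes are assembled with power-of-two weights. The two moduli are taken from the VB.NET constructor above; the seed squaring and the bit-extraction convention are just one common approach, not a reference implementation:

```python
# Textbook-style Blum Blum Shub sketch (not a vetted implementation).
P = 32416190071   # the two moduli hard-coded in the VB.NET code
Q = 32416185031
M = P * Q

def bbs_bits(seed):
    x = pow(seed, 2, M)          # common convention: x0 = seed^2 mod M
    while True:
        x = pow(x, 2, M)         # x_{n+1} = x_n^2 mod M
        yield x & 1              # output the least-significant bit

def next_byte(bits):
    # assemble 8 bits into a byte with power-of-two weights (note the
    # VB code weights bit i by i^2, which cannot produce all byte values)
    b = 0
    for _ in range(8):
        b = (b << 1) | next(bits)
    return b
```

Being deterministic for a given seed, the stream can be regenerated for decryption exactly as the `BlumXor` class above does with its key.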
SixOfTheClock
A programming language is to a programmer what a fine hat is to one who is fond of fancy garden parties. Just don't try wearing any .NET language on your head. Some of them are sharp.
modified 14-Sep-12 6:30am.





Hi all,
I am trying to write a Windows Forms program to communicate with a CNC machine from the 1980s.
The machine uses a 6802 microprocessor and communicates with a desktop computer running a DOS program.
The communication is in ASCII characters and is in the following format:
4 spaces, a 3-digit value and a checksum character; or 3 spaces, 4 digits and a checksum character.
So a total of 7 characters plus a checksum character.
This is a part of the original message (spaces not shown; checksum characters were in bold):
853>1370%248>1204'00117887215625522346#2346#1031#
I already tried some things, like adding up all the decimal ASCII values and taking MOD against all possible values,
e.g. for 853: (32+32+32+32+56+53+51) MOD some value, then adding 30 or 31 to make sure the result is in the range of printable characters, but no luck.
Note that 853 and 248 have the same checksum, as do 2346 and 1031.
I hope someone recognizes the checksum algorithm or can help me find out how the checksum character is calculated.
Thanks,
Groover
0200 A9 23
0202 8D 01 80
0205 00
modified 9-Sep-12 16:14pm.





Easy,
XOR the 7 ASCII values together.
"    853>"
32^32^32^32^56^53^51 = 62 ">"
"    248>"
32^32^32^32^50^52^56 = 62 ">"
"   2346#"
32^32^32^50^51^52^54 = 35 "#"
"   1031#"
32^32^32^49^48^51^49 = 35 "#"
"   1370%"
32^32^32^49^51^55^48 = 37 "%"
"   1204'"
32^32^32^49^50^48^52 = 39 "'"
"      00"
32^32^32^32^32^32^48 = 48 "0"
"      11"
32^32^32^32^32^32^49 = 49 "1"
"    7887"
32^32^32^32^55^56^56 = 55 "7"
"    2156"
32^32^32^32^50^49^53 = 54 "6"
"    2552"
32^32^32^32^50^53^53 = 50 "2"
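In code, the rule is a one-liner (a Python sketch; `field7` is the 7-character, space-padded field and the function name is invented):

```python
# XOR the 7 ASCII values of the field together; the result is the
# checksum character itself.
def checksum(field7):
    c = 0
    for ch in field7:
        c ^= ord(ch)
    return chr(c)
```

For example, checksum("    853") gives ">" and checksum("   1031") gives "#", matching the message.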
Alan.





Thank you Alan,
I wrote a function in my program and it works perfectly.
Groover,
0200 A9 23
0202 8D 01 80
0205 00





I have a group of points that I need to plot on a graph, and I need to calculate the line of best fit (slope and intercept) in a programming language.
For example:
x: 0  1  2  3  4
y: 40 41 45 41 47
At the moment I am using, and have implemented, the Least Squares algorithm from here: http://en.wikipedia.org/wiki/Least_squares
I was wondering whether anyone knows of a computationally more efficient algorithm with close to the same accuracy, and could explain how that algorithm works?
Thanks in advance





A decent implementation of linear fitting by LSQ should run in O(n) time and O(1) space.
One pass over the data, accumulating sum(x), sum(y), sum(x*x) and sum(x*y) then crunch those four (and n) to get the answer.
If you're just doing a simple linear fit, DON'T get sucked into all the matrix stuff. Waaaaay overkill.
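That single pass might look like this (a Python sketch; the closed-form solution of the two normal equations is slope = (n*Sxy - Sx*Sy) / (n*Sxx - Sx*Sx) and intercept = (Sy - slope*Sx) / n):

```python
# One-pass least-squares line fit: accumulate n, sum(x), sum(y),
# sum(x*x) and sum(x*y), then solve the normal equations directly.
def linear_fit(points):
    n = sx = sy = sxx = sxy = 0
    for x, y in points:
        n += 1
        sx += x
        sy += y
        sxx += x * x
        sxy += x * y
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

For the data in the question (x = 0..4, y = 40, 41, 45, 41, 47) this gives slope 1.4 and intercept 40.0.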
Cheers,
Peter
[edit]forgot sum(y)[/edit]
Software rusts. - Simon Stephenson, ca 1994. So does this signature. - me, 2012
modified 5-Sep-12 2:43am.





So, a quick maths check.
If I want to store the results of an iterative algorithm for every pixel in an image (fractals!), then I think it's a dumb thing to start with, but I want to check before I even start bothering to write the class that would store the results.
Assume image size is 1000 x 1000, so 1 million pixels
Algorithm is an iteration of Zn+1 = Zn^2 + Z0 up to a maximum of say 10,000 times
Each iteration results in a complex number
A complex number is a structure with 2 doubles (let's assume it occupies 128 bits for simplicity)
Theoretically this could result in 10,000 x 1,000,000 complex number instances
This is 1 x 10^10 instances; at 128 bits each, that's 1.28 x 10^12 bits
Which is:
1.6 x 10^11 bytes
1.56 x 10^8 K bytes
152,588 M bytes
149 G bytes
That would seem a dumb thing to do
But I'm assuming that my maths is correct and that there aren't memory optimisations I know nothing about.
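As a quick cross-check of the arithmetic above (a Python one-liner; 16 bytes per complex value, GiB = 1024^3 bytes):

```python
# Storage needed to keep every iterate of every pixel:
# pixels * iterations complex values at 16 bytes (two doubles) each.
def storage_gib(pixels, iterations, bytes_per_value=16):
    return pixels * iterations * bytes_per_value / 1024 ** 3

# 1000 x 1000 image, 10,000 iterations -> roughly 149 GiB
```

Which confirms the ~149 GB figure in the post.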
Mike





Does your algorithm depend on the pixel colour or pixel position?
What I'm thinking is that, if it depends on the pixel colour alone, then your iterations for two pixels with identical colours should yield the same results. If you come up with a way of storing the results in bulk for pixels with the same colour, then your memory consumption would most likely go way down.
Other than that, after a quick look at your numbers as displayed above, they seem to be correct.
Full-fledged Java/.NET lover, full-fledged PHP hater.
Full-fledged Google/Microsoft lover, full-fledged Apple hater.
Full-fledged Skype lover, full-fledged YM hater.
modified 3-Sep-12 15:22pm.





Pixel position: it's converted to a complex number, which is then stuffed into the iterative equation, and that results in a long (or short) list of complex numbers, Z0 to Zn. n could be anywhere from 0 to the limiting number (10k in the example I gave), but could be much, much lower (50) or much, much higher.
The problem arises since the way I'm going through this 'rite of passage' (doing Mandelbrot images) is to try and create a pluggable set of classes that could be used to implement any fractal set (Mandelbrot, Julia, Newton, etc.) and use any of the many colourising methods (escape time, distance estimators, etc.). Adding a new set (e.g. Burning Ship, Phoenix) or a new colourising method (e.g. escape angle) then simply entails creating a new class that does that specific job and implements the relevant interface.
The consequence of this is that the key class, which manages and calls the FractalSet iterators and then calls the colouriser to get a colour for the complex number (pixel) under investigation, has no way of knowing what data the colouriser needs: the classic escape time needs only the actual number of iterations, while the distance estimators need things like the last 2 (or more) results of the iteration.
So the above question came from my first idea, which was to store them all while the entire image was iterated pixel by pixel - bad idea!
I think my way of doing this will be to change the colouriser interface: add a method that takes the complete set of results for a single iteration, processes it as appropriate (turning potentially thousands of complex numbers into something much smaller) and builds the data set it needs; plus a method to manipulate the data once all iterations have been completed; and finally a method that spits out the image.
Thanks for the confirmation.
Mike





MikeMadBadger wrote: I think my way of doing this will be to change the colouriser interface and have a method that takes the complete result set of results for a single iteration, processes it as appropriate and builds the data set it needs
Yeah, if you can do that (I thought storing all the values was a requirement or otherwise mandatory), it definitely beats having to store tons of values.
Now, in terms of speed, I have no idea what the gain would be, but I'm betting it would be faster than storing the calculations (which, come to think of it, in your particular case would only be a cache of sorts).
MikeMadBadger wrote: Thanks for the confirmation.
I doubt I was of much help, but you're welcome
Full-fledged Java/.NET lover, full-fledged PHP hater.
Full-fledged Google/Microsoft lover, full-fledged Apple hater.
Full-fledged Skype lover, full-fledged YM hater.





Andrei Straut wrote: I thought storing all the values was a requirement or spec or mandatory
The closest thing I have to a spec is the jumbled nonsense in my head and its many and frequent lurches from bad to poor design (an improvement of sorts!).
Thanks again.
Mike





Even if you wanted the results of all iterations, why would you need the results of all iterations for all pixels at the same time? Surely the pixels are independent?





Absolutely correct, I don't need them all at the same time.
That was my gut reaction to the problem of using pluggable colourisers: the Imager class, which does the looping through the pixels, asks a FractalSet to iterate its equation and send back the results; it would then manipulate them and finally send the manipulated data to the colouriser. However, the Imager doesn't know what data the colouriser needs; it could be as simple as the iteration count and the max iterations, or it could be the average of all iterations, the escape angle, etc.
The conclusion is that I need to send the results of each pixel's iteration to the colouriser immediately after that iteration has finished. The colouriser can then do whatever manipulations it wants and store only the data it needs, rather than the Imager storing everything and only invoking the colouriser once all iterations have finished.
I think that'll work, we'll soon see...
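That streaming arrangement might look something like this (a Python sketch; all class and method names here are invented for illustration, not Mike's actual classes):

```python
# The Imager never stores iterates: it hands each pixel's full orbit
# to the colouriser as soon as that pixel finishes, and the
# colouriser keeps only the data it needs.
class EscapeTimeColouriser:
    def __init__(self, max_iter):
        self.max_iter = max_iter
        self.counts = []                 # the only data this scheme keeps

    def consume(self, orbit):
        self.counts.append(len(orbit))   # classic escape time: just the count

    def colour(self, pixel_index):
        return self.counts[pixel_index] / self.max_iter

def render(width, height, iterate, colouriser):
    for py in range(height):
        for px in range(width):
            orbit = iterate(px, py)      # full orbit for ONE pixel only
            colouriser.consume(orbit)    # ... then the orbit is discarded
```

A distance-estimator colouriser would simply keep the last two orbit values instead of the count; the Imager never needs to know the difference.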
Mike





Noooo
This question is on the news page. Really! Must be a slow news day.
Ah well, my dumb question for the world to peruse and giggle quietly to itself.





What use could it be to keep all iterates for all pixels? The main reason I can think of is to render an animation showing the evolution of convergence, and to be able to play it forwards but also backwards. (If only forwards, your machine will be fast enough to recompute the iterates on the fly; if just the final result matters, there is no reason to store all iterates.)
If your purpose is that kind of animation, a solution can be to MPEG-compress the rendered images, achieving decent storage amounts even for large frame counts.





See my answer to harold atropot; hopefully that answers your question - essentially I don't need to store them all, that was just a really poor way of solving a problem (so poor it wouldn't have worked).
Creating animations is a much later (if ever) addition to this project, right now it's just about me learning how to use interfaces, design patterns etc.
Thanks,
Mike





Mike,
Do you need to keep all 10,000 iterations leading up to your final image? I thought the idea with fractals was to loop back through the formula until you either settle on a value or not, and then paint the pixel (or not). Wouldn't you just need the final iteration, or maybe an array with your decision (paint/don't paint) for each pixel? That would reduce the size of your data by a factor of 10k.
Paul





The colouriser needs all the iterations for a given pixel (= a complex number), since there are n ways to colourise a fractal, some of which manipulate all the values of Zn reached during an iterative cycle.
However, the colouriser doesn't need all iterations for all pixels at the same time, so I am now sending all iterations for one pixel to the colouriser; it can then do whatever it needs, store its necessary data (number of iterations, escape angle, whatever), and then the cycle starts again for the next pixel. I think this is quite close to your suggestion?
Mostly the colourisers store their data in List(Of T) objects, since they cope better than arrays with changing dimensions.
Mike





MikeMadBadger wrote: I am now sending all iterations for one pixel to the colouriser, it can then do whatever it needs, store its necessary data (number of iterations, escape angle, whatever) and then the cycle starts again for the next pixel. I think this is quite close to your suggestion?
Yes, I was thinking in terms of storing only what is absolutely necessary for each pixel.
Good luck.
Paul





You may want to check out www.FractalForums.com.
The people who wrote Mandelbulb (Mandelbox?) hang out there and may be a source of programming know-how.
I'm user panzerboy there; I've written a few plugins for FractalExtreme. I use that to make zoom movies, which I post on YouTube as CommandLineCowboy.
FractalExtreme is somewhat limited in its colouring; it's strictly iteration-count indexing into a 228-element palette table. I often do multiple renders with different palettes and mappings, layering the results. If you check out my latest videos you'll see that you can still accomplish quite a lot with simple iteration-count colouring.
Your 'average of all iterations' comment intrigues me. In FractalExtreme you can hold Shift and Ctrl and click on the initial Mandelbrot set to see the 'iteration paths'. Values in the M-set spiral inwards around a central point, which would be your average value. The values outside the set with high iteration counts buzz around a point before going outside the window. Again, that average would be the central point, only slightly altered by the last few iterations. So I'm thinking you may not need all the iterations to find the axis of the spirals.





Jeremy David Thomson wrote: Your 'average of all iterations' comment intrigues me
Then I was probably a little misleading; I was just trying to pick an example of why you might want anything other than the classic 'IterationCount / MaxIterations', which is what most folk doing this 'rite of passage' project would recognise. More practical examples might have been standard deviation or other more complex statistical analyses - many (all?) of which have been done by someone at some point, to varying degrees of success and aesthetic value.
Jeremy David Thomson wrote: You may want to check out www.FractalForums.com
That I have discovered, though I have only scratched the surface at this point. It seems one of the better (or at least more broad-ranging) places to go, though there are literally millions of places to get little bits and pieces of useful information.
Thanks for your time.






Zipping wouldn't work, due to the very nature of fractals: they're all about chaos and unpredictability, whereas compression algorithms are all about detecting patterns and predictability in data sequences. The two don't mesh. Ever.
P.S.: for the same reason, restricting yourself to lower precision won't work either. In fact, with a precision of only about 55 bits (which I think is the usual length of the mantissa in a double), doing more than about 500 iterations will yield no meaningful data. It may still look nice when visualized (which is usually the point of fractal programs), but you are effectively looking at rounding errors. I would go so far as to say you need an arbitrary-precision math library for more than 100 iterations.





I'm a bit in a hurry right now, but I'd like to point out that for 10,000 iterations you need arbitrary-precision math! The rounding errors from each iteration grow exponentially, and so will the size you need to store each resulting value at sufficient precision.
Unless of course you're fine with effectively looking at rounding errors after a few hundred iterations.





Stefan_Lang wrote: for 10000 iterations you need arbitrary precision math
Good point, well made. << Heads off to investigate >>
To be fair, at this point it's more a proof of concept: can I design a set of classes that allow you to plug any fractal set into any colourising method (with any mapping technique to boot)? It's not intended to be a finished product; why would the world need another fractal generator?
Thanks,
Mike





I had some time to get an estimate for the accumulation of the precision-based error, and found that its upper limit is about machine_precision * 4^iterations, and the average error would be around machine_precision * 2^iterations. That means the maximum error approaches 1 after about 28 iterations for double values (55-56 bit mantissa), and the average error approaches 1 after about 56 iterations.
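Those figures can be sanity-checked directly (a Python check; the 56-bit mantissa is the poster's assumption, not the IEEE 754 figure):

```python
# Error grows like eps * 4^n (worst case) or eps * 2^n (average),
# so it reaches 1 after m/2 and m iterations for an m-bit mantissa.
import math

eps = 2.0 ** -56                 # machine precision under this assumption
worst = math.log(1 / eps, 4)     # iterations until eps * 4^n ~ 1
avg = math.log(1 / eps, 2)       # iterations until eps * 2^n ~ 1
```

This gives roughly 28 and 56 iterations respectively, matching the estimates above.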
If you intend to store intermediate results, you might consider approaching this problem from the other side:
1. Partition your intervals according to your machine precision, e.g. 2^16 x 2^16 points.
2. For each point, store 1 if after one iteration the divergence criterion is met, or 0 if not. This is the iteration counter.
3. Repeat for each point with a value of 0:
3.1 Convert the result after one iteration to the resolution of your point array (i.e. check which pixel is closest).
3.2 Check the iteration counter for the point corresponding to that value: if it is not 0, store that value + 1. Otherwise your counter is still 0.
3.3 When you're done with all points that still have a counter of 0, start all over again, for at most as many times as appropriate to your initial resolution (e.g. no more than 16 times if you started at 2^16 x 2^16).
This is a reversal of the actual algorithm, deliberately adding in the error (in step 3.1) that you would otherwise get when calculating at a given resolution (in this case 16 bits).





Thanks for spending time on this - I'm going to have to sit down with a large coffee (or 8) before I get my head around it. Separately, I found GNU MP and some .NET wrappers. When I get there I'll have a decent look at both options.
Thanks again,
Mike





From here: http://mrob.com/pub/muency/accuracy.html
The commonly-seen views of the Mandelbrot Set have been challenged by arguments based on techniques like error-term analysis (see Propagation of uncertainty). They show that if you want to get an accurate view of a lemniscate ZN you need to use at least N digits in all your calculations. The result is that most views of the Mandelbrot Set that people see on computers are (in theory) completely inaccurate or even "fictitious".
However, except for certain specific purposes (notably using iteration to trace an external ray), the problem is much smaller. The overwhelming amount of experimental evidence shows that ordinary Mandelbrot images (plotting e.g. the dwell for each point on a pixel grid) are indistinguishable from equivalent results produced by exact calculation. The images look the same to the human eye regardless of how many digits you use, as long as the number of digits is sufficient to distinguish the coordinates of the parameter value C.
This is because the round-off errors added by each step in the iteration tend to cancel out, as they would if randomly distributed, rather than being systematically biased in one direction. See Systematic error, "Systematic versus random error".
Not that this means the discussion about errors is without value, especially since the above explanation of why you might choose to ignore them relies on an assumed internal cancellation of errors through randomness. Indeed the above validates the discussion, but perhaps helps put into context how much effort one might want to put into higher accuracy vs., say, better functionality.
Mike





Interesting. I have to admit I was a bit doubtful of my own line of argumentation, since I saw incredibly detailed pictures of the Mandelbrot Set as early as 25 years ago, and I doubt many of them (if any) were calculated using arbitrary precision. Nor did their smoothness indicate anything even close to the error levels that error-propagation theory would suggest.
Then again, I've seen some fixed-point 16-bit implementations that were clearly useless for anything but calculating the full picture (at a low resolution, preferably) - zooming in pretty quickly revealed the huge errors this limited accuracy produced.
In any case, you should make sure that when you zoom in to some interesting area, your accuracy is still some orders of magnitude above the distance between pixels, or else you'll get the same kind of artifacts I mentioned in the previous paragraph.
P.S.: I considered how to model the cancelling out: the systematic error based on machine precision has a uniform distribution. Iterating the calculation is like adding independent variables (up to 4 times in one iteration step), resulting in a distribution that looks more like a normal distribution. The expected error will be 0 on average, but what is of greater interest is the likelihood that the error exceeds some value that makes the result unusable (an error of about 1, or greater). Unfortunately my knowledge of probability theory is somewhat rusty, but I suppose if you can determine that likelihood, and it is on the order of no more than 1/(number of pixels), then you should still get good results for visualization purposes.





Try doing the computation 1 pixel at a time, and discarding results from all the iterations except 0, n-1 and n, where n is the total number of iterations in the computation so far. This would reduce memory usage for complex numbers to just 48 bytes!
EDIT: Probably not 48 bytes, but 3 complex numbers need to be stored. This could be any amount, depending on the precision used.





Hello everyone
I need to generate a PDF417 bar code and do the reverse operation.
The problem is that the text I need to encode is in Arabic, like "أ ب ت ث ... ".
I found a lot of SDKs and online programs that help with generating and reading PDF417 bar codes, but none of them support the Arabic language.
Could anyone help me with that? And do I need to build a program for the whole PDF417 encoding, which would take time?





I did not downvote this, but someone probably did so because you already posted this question in the C# forum.
Having been a member on this site for almost 4 years, you should know not to do that...
Soren Madsen





I guess it's an algorithm sort of question, or maybe not, but this is the closest approximation I can find...
I spent many years of my life creating mathematical models of systems, but all of them were more or less continuous functions: missile guidance, targeting, filtering, that sort of thing. But I'm trying to model a discontinuous system at the moment, and I haven't a clue how to start. The current problem: I have a series of lift stations, each containing a wet well (a hole in the ground that receives liquid from upstream sources at unpredictable times) and a pair of pumps that switch on at preset levels to empty each well. The linear parts I can figure out, knowing such things as the pump flow rates, head pressures and frictional pipeline losses. But how do I model the discrete on/off times for each pump in order to maintain optimal transport rates without overflowing any well in the line?
Can anyone suggest a link or two that demonstrates how this is typically done? I'm thinking some kind of state machine model with discrete time intervals, but I could be completely off the mark.
Will Rogers never met me.





Hi Roger,
What you are looking for is "discrete event simulation". The basic idea is that the system runs off a queue of future events, sorted in time order, and picks them off and handles them one at a time. The "clock" jumps from one event to the next, which is where the "discrete time" bit comes from. Consider me filling a tub with a bucket.
Event 1, t=0: Bucket under tap, turn tap on. It takes 5 secs to do this, so schedule event 2, time 5.
Event 2, t=5: Tap running. Tap runs at 5 gpm, bucket holds 10 gal, so filling takes 120 secs. Schedule event 3, time 125.
Event 3, t=125: Bucket full, turn tap off. 2 secs => event 4 @ t=127.
Event 4, t=127: Carry bucket to tub and tip in. 10 secs => event 5 @ t=137.
Event 5, t=137: Is the tub full yet? ...
These events could easily be interleaved with you filling a different tub from a different tap. Things get interesting when we interact, by sharing, say, the tap. Queuing gets involved then. (btw, discrete event simulation is *the* way to do Monte Carlo queueing problems.)
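The event loop above can be sketched in a few lines. This is a minimal illustration, not a package: the event names and constants are made up, but the rates and durations follow the bucket example (5 gpm tap, 10 gal bucket).

```python
import heapq

# Minimal discrete-event simulation: the event queue is a heap keyed on
# time, and the "clock" simply jumps from one event to the next.

TAP_ON_SECS = 5    # time to put the bucket under the tap and open it
TAP_RATE_GPM = 5   # tap flow rate in gallons per minute
BUCKET_GAL = 10    # bucket capacity in gallons
TAP_OFF_SECS = 2
CARRY_SECS = 10

def simulate():
    events = []            # priority queue of (time, description)
    log = []               # what happened, and when

    def schedule(t, what):
        heapq.heappush(events, (t, what))

    schedule(0, "bucket under tap, turn tap on")
    while events:
        t, what = heapq.heappop(events)
        log.append((t, what))
        if what == "bucket under tap, turn tap on":
            schedule(t + TAP_ON_SECS, "tap running")
        elif what == "tap running":
            fill_secs = BUCKET_GAL / TAP_RATE_GPM * 60
            schedule(t + fill_secs, "bucket full, turn tap off")
        elif what == "bucket full, turn tap off":
            schedule(t + TAP_OFF_SECS, "carry bucket to tub and tip in")
        elif what == "carry bucket to tub and tip in":
            schedule(t + CARRY_SECS, "is the tub full yet?")
        # the last event schedules nothing, so the queue drains and we stop
    return log
```

Running `simulate()` yields events at t = 0, 5, 125, 127 and 137 seconds (10 gal at 5 gpm takes 120 s to fill). Interleaving a second tub is just a matter of scheduling its events into the same queue.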
Having said all that, I'm sure:
(a) your googlefu is at least as good as mine.
(b) there are free packages out there.
The hard work is in describing the system to the point where your model is complete enough to hang together. Conservation of matter is always a good starting point.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Peter_in_2780 wrote: The hard work is in describing the system to the point where your model is complete enough to hang together.
That's what I'm trying to grasp, I think. Six stations, 12 pumps, two pumps don't output as much as 2x 1 pump, one pump starts at level 1, the other at level 2, but both shut down at level 0... It's not as simple as writing a differential equation for an electrical circuit.
Will Rogers never met me.





The good thing about simulation is that if it blows up, no physical damage is done! (Just as well, given some of the simulations I've run in the past!)
At each event, you need to update the state of the world (pump 1 is running, so the level in tank 23 is going down at 1000gpm, the pressure in the pipe at point X is ...) then predict what "nonlinear" events are going to happen and when (tank 23 will reach lower limit switch level in 18 minutes, tank 28 will start filling at 1000gpm in 7 minutes ...) then plug them in as future events. All the continuous stuff (like solving DEs) is hidden in the 'prediction' phase of event handling.
I must admit, the first few serious simulations I wrote, the system behaviour stuff was hardcoded. The event handling skeleton and utility functions were reused, and slowly morphed into a more general purpose beast that could actually be described as a 'package'. Sadly, it's all faded into history. Last seen in the bucket "things I might port from Fortran77 to C".
The size of your system is NOT an issue for getting a simulation running. If you can model one station, then adding five more (even with different parameters) is trivial.
If you want to continue this conversation offline, feel free.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Thanks, Peter. I've spent much of the evening doing some paper model building, and I might just take you up on the offer. I've done a bunch of state transition stuff in the past, but it's been a long time. The prime rule was: that which can be observed can be controlled; that which can't, can't. So now I'm looking at what can be observed directly, what can be inferred from those observations, and what state variables to control using that information. It's unfortunate that we have no means by which to observe directly whether a pump is running, just crude floats that are either on or off. If I could access actual pump states, I could do so much more for prediction and control, but the best I can hope for is to learn that, after a level was reached and failed to subside within a set period of time, the pump did not respond. That will have to do for now, but it gives me a great tool for arguing that we should add more monitoring circuitry!
What I think I can do, though, is to model the proper operation condition, and use that to simulate different flows into and out of various stations in order to optimize the levels of the floats and perhaps, identify pumps that may be under or over sized. Later, if they let me add more monitoring, I can extend the model to failure prediction, and that's my end goal. I am a hardware weenie, after all. I really think it would be better to call an emergency crew out before the thing overflows, rather than waiting until a high level alarm is sounded just minutes before raw sewage starts running over the top.
I'll email you if I get stuck. Thanks again for the offer!
Will Rogers never met me.





Right now I know how to get the mouse location, show the mouse, and handle left click, right click and such. I am wondering if anyone has a formula for an AI for a path, e.g.
o = your sprite
- = wall, can't pass through
' = floor, can pass through
* = path way for the sprite
C = the clicked spot the path should go to
Here's my example:
o'''''
'*'''
'****C
I hope you understand: basically I want it so that when I click a certain location it will go there, and if there is an object in its way it will use a formula to get around it.
All this is in 2D, by the way.
And if you have the formula, can you please make it as simple as possible?






Ty, A* seems the most simple, but it still is a lot and will take a while to understand.
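For reference, A* on a 4-connected grid is shorter than its reputation suggests. This is a minimal sketch, not a tuned implementation: '#' marks an impassable cell, anything else is walkable, and the grid, start and goal below are illustrative (the wall position is made up, since the original map's wall characters did not survive).

```python
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: an admissible heuristic on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, position)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            path = [pos]                 # walk back to rebuild the path
            while pos in came_from:
                pos = came_from[pos]
                path.append(pos)
            return path[::-1]
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = pos
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # no route exists

grid = ["o....",
        ".#...",
        "....C"]
path = astar(grid, (0, 0), (2, 4))   # list of (row, col) steps
```

The returned list is the sequence of cells the sprite should walk through, steering around the wall.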







So I'm working on a side project at the moment dealing with computer vision, and I find myself needing to identify circles of an unknown size in an image. I've found a lot of information online about using the Hough Transform for circles, and MANY variations of that transform. Is there anything else out there that can be used for this purpose? I'm looking for something else that is quicker than the Hough Transform, and I am willing to sacrifice some accuracy to achieve this.
Please note that I am not looking for a library or tool to do this for me (like OpenCV), I've found plenty of them, and they all use the Hough Transform. I'm looking for an actual algorithm or related research.
Be The Noise





AFAIK Hough is the best available. When the circles are prominent, i.e. have quite some thickness, you could reduce the resolution of your image so that the thickness of the circle(s) becomes, say, 2 pixels; that should provide quite a performance improvement.
And of course image processing is a field where you can efficiently apply multithreading, as well as gain performance by putting locality of reference first (i.e. deal with bands or small areas, not entire images at once).
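To make the cost concrete, here is a brute-force sketch of the circle Hough transform: every edge point votes for all centre/radius combinations it could lie on, and the best-supported (cx, cy, r) wins. The image size, radius range and 5-degree angular step are illustrative; real implementations vote only along the local gradient direction instead of sweeping all angles, which is one of the main speed-ups.

```python
import math

def hough_circles(edges, width, height, r_min, r_max):
    # edges: list of (x, y) edge-pixel coordinates
    acc = {}
    for (x, y) in edges:
        for r in range(r_min, r_max + 1):
            for deg in range(0, 360, 5):
                a = math.radians(deg)
                cx = int(round(x - r * math.cos(a)))
                cy = int(round(y - r * math.sin(a)))
                if 0 <= cx < width and 0 <= cy < height:
                    acc[(cx, cy, r)] = acc.get((cx, cy, r), 0) + 1
    return max(acc, key=acc.get)      # best-supported (cx, cy, r)

# Synthetic check: edge points sampled from a radius-10 circle at (20, 20)
pts = [(round(20 + 10 * math.cos(math.radians(d))),
        round(20 + 10 * math.sin(math.radians(d))))
       for d in range(0, 360, 10)]
best = hough_circles(pts, 40, 40, 8, 12)
```

The triple loop (edge points x radii x angles) is exactly why the unknown-radius case is slow; shrinking the image cuts both the number of edge points and the radius range, which is where the resolution-reduction advice pays off.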





Thanks for the suggestions! I'll definitely try reducing the resolution and utilizing more multithreading (this is for a mobile app, so the benefits of multithreading aren't THAT great). I've been playing with blur and color changes as well to speed things up.
Be The Noise





