

Interesting. I have to admit I was a bit doubtful of my own line of argumentation, since I've seen incredibly detailed pictures of the Mandelbrot set as early as 25 years ago, and I doubt many of them (if any) were calculated using arbitrary precision. Nor did their smoothness indicate anything even close to the error levels that error propagation theory would suggest.
Then again, I've seen some 16-bit fixed-point implementations that were clearly useless for anything but calculating the full picture (preferably at a low resolution); zooming in pretty quickly revealed the huge errors this limited accuracy produced.
In any case, you should make sure that when you zoom in to some interesting area, your numeric precision is still some orders of magnitude finer than the distance between pixels, or else you'll get the same kind of artifacts I mentioned in the previous paragraph.
P.S.: I considered how to model the cancelling out: the systematic error based on machine precision has a uniform distribution. Iterating the calculation is like adding independent variables (up to 4 times in one iteration step), resulting in a distribution that looks more like a normal distribution. The expected error will be 0 on average, but of greater interest is the likelihood that the error exceeds some value that makes the result unusable (an error about the size of 1, or greater). Unfortunately my knowledge of probability theory is somewhat rusty, but I suppose if you can determine that likelihood and it is on the order of no more than 1/(number of pixels), then you should still get good results for visualization purposes.
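For what it's worth, the cancellation can be checked empirically by running the same orbit in two precisions and comparing. A rough Python sketch (the sampling region, the iteration count, and the use of double precision as the "exact" reference are all my own choices for illustration):

```python
import numpy as np

# Hedged experiment: iterate z -> z^2 + c in single precision and in double
# precision (treated as "exact"), for points inside the main cardioid so
# the orbits stay bounded, and look at how large the accumulated error gets.
rng = np.random.default_rng(0)
errors = []
for _ in range(200):
    # sample c with |c| <= 0.2, well inside the main cardioid
    r, phi = rng.uniform(0, 0.2), rng.uniform(0, 2 * np.pi)
    c64 = complex(r * np.cos(phi), r * np.sin(phi))
    c32 = np.complex64(c64)
    z64, z32 = 0j, np.complex64(0)
    for _ in range(100):
        z64 = z64 * z64 + c64
        z32 = z32 * z32 + c32          # stays complex64 throughout
    errors.append(abs(complex(z32) - z64))

errors = np.array(errors)
print(errors.max())   # far below 1, i.e. visually harmless for these points
```

For points near the set's boundary the picture is different (the dynamics there amplify perturbations), which is consistent with the artifacts appearing only when zooming deep.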





Try doing the computation 1 pixel at a time, and discarding results from all the iterations except 0, n-1 and n, where n is the total number of iterations in the computation so far. This would reduce the memory used for complex numbers to just 48 bytes!
EDIT: Probably not 48 bytes, but 3 complex numbers need to be stored. This could be any amount, depending on the precision used.
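As a sketch of the idea (the function name is mine): per pixel you only ever hold three complex values, namely c, the current z, and z from the previous iteration, regardless of image size:

```python
# Hedged sketch of the "one pixel at a time" scheme: only three complex
# values live at any moment (c, the current z, and z from iteration n-1),
# instead of whole-image arrays of intermediate results.
def escape_time(c, max_iter=256):
    z = 0j
    z_prev = 0j              # z at iteration n-1, kept per the suggestion
    for n in range(max_iter):
        z_prev, z = z, z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

print(escape_time(0j))       # interior point: never escapes
print(escape_time(1 + 1j))   # exterior point: escapes within a few steps
```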





Hello everyone
I need to generate a PDF417 bar code and do the reverse operation.
But the problem is that the text I need to encode is in Arabic, like "أ ب ت ث ...".
I found a lot of SDKs and online programs that help with generating and reading PDF417 bar codes, but none of them support the Arabic language.
Could anyone help me with that? And do I need to build a program for the whole PDF417 encoding, which would take time?





I did not downvote this, but someone probably did so because you already posted this question in the C# forum. Having been a member on this site for almost 4 years, you should know not to do that...
Soren Madsen





I guess it's an algorithm sort of question, or maybe not, but this is the closest approximation I can find...
I spent many years of my life creating mathematical models of systems, but all of them were more or less continuous functions: missile guidance, targeting, filtering, that sort of thing. But I'm trying to model a discontinuous system at the moment, and I haven't a clue how to start. The current problem: I have a series of lift stations, each containing a wet well (a hole in the ground that receives liquid from upstream sources at unpredictable times) and a pair of pumps that switch on at preset levels to empty each well. The linear parts I can figure out, knowing such things as the pump flow rates, head pressures and frictional pipeline losses. But how do I model the discrete on/off times for each pump in order to maintain optimal transport rates without overflowing any well in the line?
Can anyone suggest a link or two that demonstrates how this is typically done? I'm thinking some kind of state machine model with discrete time intervals, but I could be completely off the mark.
Will Rogers never met me.





Hi Roger, what you are looking for is "discrete event simulation". The basic idea is that the system runs off a queue of future events, sorted in time order, and picks them off and handles them one at a time. The "clock" jumps from one event to the next, which is where the "discrete time" bit comes from. Consider me filling a tub with a bucket:
Event 1, t=0: Bucket under tap, turn tap on. It takes 5 secs to do this, so schedule event 2 at t=5.
Event 2, t=5: Tap running. The tap runs at 5 gpm and the bucket holds 10 gal, so it takes 120 secs to fill. Schedule event 3 at t=125.
Event 3, t=125: Bucket full. Turn tap off, 2 secs => event 4 at t=127.
Event 4, t=127: Carry bucket to tub and tip it in, 10 secs => event 5 at t=137.
Event 5, t=137: Is the tub full yet? ...
These events could easily be interleaved with you filling a different tub from a different tap. Things get interesting when we interact, by sharing, say, the tap; queueing gets involved then. (Btw, discrete event simulation is *the* way to do Monte Carlo queueing problems.)
Having said all that, I'm sure: (a) your googlefu is at least as good as mine. (b) there are free packages out there. The hard work is in describing the system to the point where your model is complete enough to hang together. Conservation of matter is always a good starting point.
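That event-queue skeleton is small enough to sketch in Python (all names are illustrative; the 120-second fill time follows from 10 gal at 5 gpm; heapq keeps future events in time order):

```python
import heapq

# Minimal discrete-event skeleton: a heap of (time, seq, handler) tuples;
# the clock jumps from one popped event to the next, and handlers may
# schedule further events.
class Sim:
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._seq = 0     # tie-breaker so equal times pop deterministically

    def schedule(self, delay, handler):
        self._seq += 1
        heapq.heappush(self.queue, (self.now + delay, self._seq, handler))

    def run(self):
        while self.queue:
            self.now, _, handler = heapq.heappop(self.queue)
            handler(self)

trace = []

def tap_on(sim):
    trace.append(("tap on", sim.now))
    sim.schedule(120.0, bucket_full)   # 10 gal at 5 gpm -> 120 s

def bucket_full(sim):
    trace.append(("bucket full", sim.now))

sim = Sim()
sim.schedule(0.0, tap_on)
sim.run()
print(trace)   # [('tap on', 0.0), ('bucket full', 120.0)]
```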
Cheers, Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Peter_in_2780 wrote: The hard work is in describing the system to the point where your model is complete enough to hang together.
That's what I'm trying to grasp, I think. Six stations, 12 pumps; two pumps running together don't output twice as much as one pump; one pump starts at level 1, the other at level 2, but both shut down at level 0... It's not as simple as writing a differential equation for an electrical circuit.
Will Rogers never met me.





The good thing about simulation is that if it blows up, no physical damage is done! (Just as well, given some of the simulations I've run in the past!) At each event, you need to update the state of the world (pump 1 is running, so the level in tank 23 is going down at 1000 gpm, the pressure in the pipe at point X is ...), then predict what "nonlinear" events are going to happen and when (tank 23 will reach its lower limit switch level in 18 minutes, tank 28 will start filling at 1000 gpm in 7 minutes ...), then plug them in as future events. All the continuous stuff (like solving DEs) is hidden in the 'prediction' phase of event handling.
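For a single tank, that prediction step can be as small as this sketch (the function and figures are illustrative, using the 1000 gpm / 18 minute example above):

```python
# Hedged sketch of the 'prediction' step: given the current level and net
# flow, when will the tank hit a target level (e.g. its low-limit switch)?
def time_to_level(level_now, level_target, net_flow_gpm):
    """Minutes until level_target is reached; None if moving away from it."""
    delta = level_target - level_now
    if net_flow_gpm == 0 or (delta > 0) != (net_flow_gpm > 0):
        return None                    # level never reaches the target
    return delta / net_flow_gpm

# Tank draining at 1000 gpm from 20,000 gal toward a 2,000 gal low limit:
print(time_to_level(20000, 2000, -1000))   # -> 18.0 minutes
```

The returned time is what you would schedule as the next future event for that tank.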
I must admit, the first few serious simulations I wrote, the system behaviour stuff was hardcoded. The event handling skeleton and utility functions were reused, and slowly morphed into a more general purpose beast that could actually be described as a 'package'. Sadly, it's all faded into history. Last seen in the bucket "things I might port from Fortran77 to C".
The size of your system is NOT an issue for getting a simulation running. If you can model one station, then adding five more (even with different parameters) is trivial.
If you want to continue this conversation offline, feel free.
Cheers, Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012





Thanks, Peter. I've spent much of the evening doing some paper model building, and I might just take you up on the offer. I've done a bunch of state transition stuff in the past, but it's been a long time. The prime rule was: that which can be observed can be controlled; that which can't, can't. So now I'm looking at what can be observed directly, what can be inferred from those observations, and what state variables to control using that information. It's unfortunate that we have no means by which to observe directly whether a pump is running, just crude floats that are either on or off. If I could access actual pump states, I could do so much more for prediction and control, but the best I can hope for is to learn that, after a level was reached and failed to subside within a set period of time, the pump did not respond. That will have to do for now, but it gives me a great tool for arguing that we should add more monitoring circuitry!
What I think I can do, though, is model the proper operating condition, and use that to simulate different flows into and out of various stations in order to optimize the levels of the floats and perhaps identify pumps that may be under- or over-sized. Later, if they let me add more monitoring, I can extend the model to failure prediction, and that's my end goal. I am a hardware weenie, after all. I really think it would be better to call an emergency crew out before the thing overflows, rather than waiting until a high-level alarm sounds just minutes before raw sewage starts running over the top.
I'll email you if I get stuck. Thanks again for the offer!
Will Rogers never met me.





Right now I know how to get the mouse location, show the mouse, handle left click, right click and such. I am wondering if anyone has a formula for an AI for a path, e.g.:
o = your sprite, walls = can't pass through, ' = floor (can pass through), * = the path for the sprite, C = the clicked location for the path to go to
heres my example
o'''''
'*'''
'****C
I hope you understand. Basically, I want it so that if I tell it to go to a certain location it will go there, and if there is an object in its way it will use a formula to get around it.
All this is in 2D, by the way.
And if you have the formula, can you please make it as simple as possible?






Ty, A* seems the simplest, but it's still a lot and will take a while to understand.
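A* is less code than it first looks. A hedged Python sketch on a grid like the one above ('#' marks a wall here, '.' is floor; all names are mine):

```python
import heapq

# A* on a 2-D grid with 4-way movement: returns a list of (row, col)
# steps from start to goal, or None if the goal is unreachable.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])

    def h(p):                       # Manhattan distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from = {start: None}
    g = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:             # reconstruct the path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'
                    and cost + 1 < g.get(nxt, float('inf'))):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    return None

grid = ["o....",
        ".#...",
        ".#..C"]
path = astar(grid, (0, 0), (2, 4))
print(path)
```

The Manhattan heuristic never overestimates on a 4-neighbour grid, which is what guarantees the result is a shortest path.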







So I'm working on a side project at the moment dealing with computer vision, and I find myself needing to identify circles of an unknown size in an image. I've found a lot of information online about using the Hough Transform for circles, and MANY variations of that transform. Is there anything else out there that can be used for this purpose? I'm looking for something else that is quicker than the Hough Transform, and I am willing to sacrifice some accuracy to achieve this.
Please note that I am not looking for a library or tool to do this for me (like OpenCV), I've found plenty of them, and they all use the Hough Transform. I'm looking for an actual algorithm or related research.
Be The Noise





AFAIK Hough is the best available. When the circles are prominent, i.e. fairly thick, you could reduce the resolution of your image so the thickness of the circle(s) becomes, say, 2 pixels; that should give quite a performance improvement.
And of course image processing is a field where you can efficiently apply multithreading, as well as gain performance by putting locality of reference first (i.e. deal with bands or small areas, not entire images at once).
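The resolution reduction might look like this Python/NumPy sketch (block maximum rather than mean, so a thin circle outline stays connected; names are mine, and a binary image as a NumPy array is assumed):

```python
import numpy as np

# Hedged sketch: halve the resolution of a binary image by taking the
# maximum over each 2x2 block, so any foreground pixel in a block survives.
def downsample2(img):
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]        # crop to even dimensions
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[0, 1, 0, 0],
                [0, 0, 0, 0],
                [1, 1, 0, 0],
                [1, 1, 0, 1]])
small = downsample2(img)
print(small)   # [[1 0]
               #  [1 1]]
```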





Thanks for the suggestions! I'll definitely try reducing the resolution and try to utilize more multithreading (this is for a mobile app, so the benefits of multithreading aren't THAT great). I've been playing with blur and color changes as well to speed things up.
Be The Noise





You should also consider using the GPU for such tasks, which can improve performance a lot.






The erosion operator (http://en.wikipedia.org/wiki/Erosion_%28morphology%29) can detect circles faster than the Hough Transform.
You have to know the size in advance, though, although you can do N searches for N different diameters. (You'd have to do N searches for different sized circles using the Hough Transform also.)
Are the circles drawn as just the circumferences, or are they filled in?
"Microsoft  Adding unnecessary complexity to your work since 1987!"





Very nice, thanks! I'll check it out and let you know if it works out. While the sizes of the circles change a bit, it's not too bad to just go through a few diameters.
The circles are filled, though I could do an edge detection to get rid of the fill if needed.
Be The Noise





The Wikipedia article makes it look harder than it is. Erosion (binary) can be easily implemented using only shifts and ANDs.
To recognize a circle:
1. Take an arc that's half the circle's circumference, and divide it into N segments. Each segment is a short vector.
2. For each vector, shift the image by that vector and AND it with the original image.
3. When you're done, pixels will remain only at the regions that were at the center of (at least) a circle of the original size.
4. Starting at the higher diameters will enable you to remove them first, so you can recognize the smaller diameters later.
"Microsoft  Adding unnecessary complexity to your work since 1987!"





Haha, you must've been reading my mind! This makes it much easier to implement. Thanks!
Be The Noise





Looking at this again, I realized Step 2 could be misinterpreted:
"2. For each vector, shift the image by that vector and AND it with the original image."
By "original image", I mean the image before the shift.
So,
foreach (vector in Vectors)
{
    previousImage = image.copy(); // must be a copy, not a reference to 'image'
    image.shiftBy (vector);
    image.andWith (previousImage);
}
And all remaining pixels in 'image' are contained within (at least) a circle of the given radius.
"Microsoft  Adding unnecessary complexity to your work since 1987!"





Hello,
When it comes to image processing tasks, I would say that it is much easier to discuss when there are a few sample pictures available (if there are no confidentiality restrictions, of course). Talking about circles ... in some cases you can simplify things a lot by finding stand-alone blobs/objects in a picture and then doing further shape analysis on those ...




