|
Assume there is a vertical cylinder with diameters as follows:
1 cm at the base
3 cm at a height of 2 cm
2 cm at a height of 4 cm
1 cm at a height of 7 cm
and so on.
Now, what would be the heights of the differently colored fluid columns stacked one over the other (assuming there are no other forces)? The columns sit on top of each other and can be moved by drag and drop, so the calculation for each column depends on the column below it.
It's not necessary to use a volumetric analysis; the solution can be 2D, so an area-based calculation works as well.
The function should take as parameters the volume/area of each column and the point at which it starts.
This is required for a graphical application in which the movement of a liquid is shown in 2D.
I am looking for a faster alternative to "for" or "while" loops. It needs to be very fast so that smooth fluid motion can be achieved; my current algorithm is terribly slow.
My current solution looks like:
while (volumeToAdjust > 0)
{
    volumeCovered = Min(volumeToAdjust, volumePossibleInCurrentWidth);
    if (volumeCovered >= volumePossibleInCurrentWidth)   // segment filled completely
        height += heightForCurrentWidth;
    else
        height += CalculateHeightForThisVolumeInCurrentWidth(volumeCovered);
    volumeToAdjust -= volumeCovered;
    // move on to the next width segment here, otherwise the loop never terminates
}
This solution works fine for a single column of fluid, but with multiple columns and multiple widths the program slows down. Any better ideas?
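One way to avoid the per-frame loop over segments is to precompute, whenever the geometry changes, a table of cumulative volume and cumulative height at each segment boundary; a volume-to-height query then becomes one binary search plus one interpolation. Here is a minimal C sketch under that assumption; the segment table values and all the names (`segCumVolume`, `heightForVolume`, etc.) are made up for illustration, using the 1 cm / 3 cm / 1.5 cm areas only as example numbers.

```c
/* Hypothetical segment table, precomputed once when the geometry changes
 * (not once per frame). Entry i holds the cumulative volume and cumulative
 * height at the bottom of segment i; the last entry is the top boundary. */
#define NSEG 3
static const double segCumVolume[NSEG + 1] = { 0.0, 2.0, 8.0, 12.5 };
static const double segCumHeight[NSEG + 1] = { 0.0, 2.0, 4.0, 7.0 };
static const double segArea[NSEG]          = { 1.0, 3.0, 1.5 }; /* per-segment cross-section */

/* Map a fill volume (on top of the volume already below this column)
 * to a height, with a binary search instead of a linear segment walk. */
double heightForVolume(double baseVolume, double volume)
{
    double target = baseVolume + volume;
    int lo = 0, hi = NSEG;                 /* find the segment containing 'target' */
    while (lo + 1 < hi) {
        int mid = (lo + hi) / 2;
        if (segCumVolume[mid] <= target) lo = mid; else hi = mid;
    }
    /* linear interpolation inside the found segment */
    return segCumHeight[lo] + (target - segCumVolume[lo]) / segArea[lo];
}
```

Each stacked column then only needs the cumulative volume of the columns below it as `baseVolume`, so dragging a column to a new position costs one O(log n) lookup per column rather than a loop over all segments.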
|
|
|
|
|
Do the diameters change, or can they be hard-coded?
What happens in between the values you gave? Is the diameter constant or tapered?
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
|
|
|
|
|
Yes, the diameters can change. Also, the volume of each fluid column and their sequence can be changed by drag and drop; that is why I need an algorithm that does this the fastest way.
The diameters are not tapered; each section has a constant diameter.
|
|
|
|
|
http://www.prime-numbers.org
Tadeusz Westawic
An ounce of Clever is worth a pound of Experience.
|
|
|
|
|
That list is not complete.
|
|
|
|
|
Of course it is, they all are.
Tadeusz Westawic
An ounce of Clever is worth a pound of Experience.
|
|
|
|
|
No it isn't; it only covers the primes less than 10 billion. The complete list is infinite - Euclid proved that over two thousand years ago.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Hi everybody,
I am in a critical position. Can you please tell me how to encrypt and decrypt plain text using ECC (Elliptic Curve Cryptography) in C#? I need the step-by-step process, if not the complete source code.
Can anyone help me, please?
Thanks in advance for any reply.
|
|
|
|
|
I need someone to mow the lawn, paint the house and wash the car. plzzzzzzzzzzzzzzz
|
|
|
|
|
People here will help you with problems, not do your work or more likely homework for you.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
I badly need an algorithm for imaging car number plates on roads - that is, how police cameras capture a car's number for issuing fines, given that the sizes of number plates differ.
Thanks.
|
|
|
|
|
And what has your literature search turned up so far?
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
A question about the K-Means algorithm for images:
How do I code a K-Means algorithm for image segmentation in Matlab?
And how do I plot the color histogram in Matlab?
Thank you
|
|
|
|
|
|
I'm puzzled why WndProc uses a sequential search (a switch..case statement) to find the handler for a message. Shouldn't it at least use a binary search for efficiency? Sequential search has O(n) time complexity and binary search has O(log n); the difference grows considerably with n.
The same puzzle applies to looking up a function's address in the virtual table for runtime polymorphism.
|
|
|
|
|
Big-O notation gives the growth law, not the absolute cost; there is at least one constant involved, and that constant tends to be smaller for simpler algorithms. I suggest you create some implementations and test and compare them for a series of reasonable values of n.
|
|
|
|
|
Why is this important? It is a very small part of any program and will have negligible effect on the overall performance.
|
|
|
|
|
IMO both VTABLE lookup and Windows message identification, although small in size, are needed at a high frequency (every virtual method call, every message in an overridden WndProc), so they may very well be relevant to overall performance.
|
|
|
|
|
Possibly, but I suspect the most pressing need for most apps is to make the actual business logic as efficient as possible.
|
|
|
|
|
Certainly we should focus on the bigger problems in our app, but this problem can be solved in a simple way; it doesn't take much time or brainpower. So why not do it? Binary search is simple enough that you could easily write it in assembly; in C it only takes about ten statements, doesn't it?
|
|
|
|
|
I don't see what your point is any more. If you think you can write better code for the WndProc function than using a switch block then go ahead and do it. Nobody here is trying to stop you.
|
|
|
|
|
Yippee!! My opinion is identical to yours. I just think these lookups may be used at high frequency and may not be small in number. An app that handles 100+ kinds of messages is common, as is a class with 100+ virtual functions. Assume the count is 128: the average cost of sequential search is (1 + 128) * (128 / 2) / 128 == 64.5 comparisons, while for binary search it is log2(128) == 7.
A big difference!!
|
|
|
|
|
acelandin wrote: my opinion is just as identical as yours
apparently not.
acelandin wrote: so the average time cost of ...
That is completely wrong, as it ignores reality: a non-linear search takes more code to execute and has lower "locality of reference", possibly resulting in a smaller caching advantage. That is why I suggested you write actual code and try it, rather than come up with unrealistic theories.
|
|
|
|
|
To implement what? A new way of processing messages in WndProc along the lines I proposed? I've already done it. I created an array whose elements are structs with two parts: the message id and the address of the message handler function. The array can be sorted at compile time and is static at run time, just like the traditional switch..case. Then in WndProc I use a binary search with the message id as the key to find the handler's address.
Why did you say there is a non-linear access problem? Sorry, I don't understand.
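For reference, the structure described above might look like the following sketch. The handler names are made up, plain typedefs stand in for the Win32 types so it compiles anywhere, and the message values (WM_DESTROY == 0x0002, WM_PAINT == 0x000F) are the ones defined in winuser.h; a real WndProc would pass the usual hwnd/wparam/lparam arguments through and fall back to DefWindowProc on a miss.

```c
#include <stddef.h>

/* Stand-ins for the Win32 types, so the sketch is self-contained. */
typedef unsigned int UINT;
typedef long LRESULT;
typedef LRESULT (*MsgHandler)(void);

/* The table must be sorted by message id; like the cases of a switch
 * block, it is fixed at compile time and static at run time. */
struct MsgEntry { UINT msg; MsgHandler handler; };

static LRESULT onDestroy(void) { return 0; }  /* hypothetical handlers */
static LRESULT onPaint(void)   { return 0; }

static const struct MsgEntry msgMap[] = {
    { 0x0002 /* WM_DESTROY */, onDestroy },
    { 0x000F /* WM_PAINT   */, onPaint   },
};

/* Binary search with the message id as key, as described in the post. */
static MsgHandler findHandler(UINT msg)
{
    int lo = 0, hi = (int)(sizeof msgMap / sizeof msgMap[0]) - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (msgMap[mid].msg == msg) return msgMap[mid].handler;
        if (msgMap[mid].msg < msg)  lo = mid + 1;
        else                        hi = mid - 1;
    }
    return NULL; /* not in the map: defer to DefWindowProc */
}
```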
|
|
|
|
|
Note: this post contains mostly speculation and theory, and I'm not in the mood to analyze less-important things such as register access stalls, or even to benchmark anything. But really, that is your task, not mine. It's also quite early in the morning and I just woke up, so there will be mistakes/errors/clear lack of reasoning/etc - use the information in this post at your own risk.
Really?
Let's assume for a minute that all values are equally likely to occur (not true in practice)
Now consider the conditional jumps here, in the sequential version predicting "not found" would be more than 50% accurate for all but the last two entries. In the binary search code, none of the relevant jumps can be predicted at all (every path down the "tree" is equally likely, as assumed).
It depends on your CPU architecture how much this matters, on a Core2 a misprediction costs you 15 cc's. In the binary search way you will have 3.5 mispredictions on average so 52.5 extra cc's due to branch misprediction. In the sequential search code you only need 1 mispredicted branch (the one that ends your search*) giving you 15 extra cc's due to misprediction.
A 36.5 cc's difference doesn't look too bad, but that's only from branch misprediction.
So let's look at the memory access pattern. Suppose the array is not cached (otherwise we'd have little to talk about). In the binary search way, the first couple of accesses will be a cache miss, and in addition to being a cache miss they're also bringing in a piece of data you probably won't need again soon. That sounds as bad as it is. In the sequential scan you're bringing in blocks of data you probably will need, and the CPU will notice your sequential access pattern and start prefetching the next block even before you need it.
* - this sounds lame, since it means putting your most-likely entries near the end would be better than putting them in the beginning to make sure the "not found" prediction isn't changed to "found" by your CPU who's trying to be clever and thereby messing up the upper bound of 1 misprediction. Whether it's better to put those at the beginning and predict "found" depends on how likely it is that the first entries are used compared to the rest. In the binary search we could unbalance the search tree to make the more likely cases faster than the unlikely cases.
And of course, benchmark!
|
|
|
|