|
i have a couple of image processing fns which input and output BYTEs but need to convert the input to doubles for the internals because even floats run out of precision well within reasonable parameters.
i'd love to be able to not have to use 24 bytes per pixel, but...
modified 17-Jul-14 13:24pm.
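A minimal sketch of that round trip, assuming 8-bit channels in a flat buffer (hypothetical helper names, not Chris's actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Widen 8-bit samples to double for the internal math, then clamp and
// round back to bytes on the way out.
using BYTE = std::uint8_t;

std::vector<double> toDoubles(const std::vector<BYTE>& src) {
    std::vector<double> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = static_cast<double>(src[i]);   // each 1-byte sample becomes 8 bytes
    return dst;
}

std::vector<BYTE> toBytes(const std::vector<double>& src) {
    std::vector<BYTE> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) {
        double v = std::round(src[i]);
        if (v < 0.0)   v = 0.0;                 // clamp back into the byte range
        if (v > 255.0) v = 255.0;
        dst[i] = static_cast<BYTE>(v);
    }
    return dst;
}
```

Three channels of double per pixel is where the 3 × 8 = 24 bytes comes from.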
|
|
|
|
|
Chris Losinger wrote: i'd love to be able to not have to use 24 bytes per pixel
er, bits?
But, as we all know - palettized images suck when it comes to performing any real manipulation on them, since you're stuck with looking up the 24-bit values anyway.
Alpha-blending images with a palette? :shudder: Uhhhrgh! Please don't make me do that again. Recomputing a new palette is more effort than the image processing itself.
|
|
|
|
|
no, BYTEs.
doubles are 8 bytes each. 3x8 = 24 bytes.
|
|
|
|
|
Chris Losinger wrote: but need to convert the input to doubles for the internals
Of course. Must be about time for bed for me...
Those images sure do get pretty big, pretty quick, at that rate of memory consumption.
|
|
|
|
|
For pallet(e)s, all you need is scrub wood and nails. Palettes on the other hand...
|
|
|
|
|
Ennis Ray Lynch, Jr. wrote: I wrote a hurricane damage simulator
I didn't know you worked on Sim City.
|
|
|
|
|
Mine isn't nearly as accurate. Wind fields in real life are surprisingly non-deterministic and literally change with the phase of the moon.
|
|
|
|
|
Ennis Ray Lynch, Jr. wrote: and literally change with the phase of the moon. So do I! Oh no... It's a full moo OOOOOOOOOOH!!! *Howl*
It's an OO world.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
|
|
|
|
|
|
But phases of the moon are VERY deterministic.
|
|
|
|
|
Ennis Ray Lynch, Jr. wrote: Most of my work four places is good enough and I wrote a hurricane damage simulator.
Well, how much precision do you need to say $15 billion (or whatever the number is in billions of dollars)?
Marc
|
|
|
|
|
You are not supposed to point out flaws in my argument!
|
|
|
|
|
Someone, somewhere, is thinking "I am a hurricane damage simulator".
cheers
Chris Maunder
|
|
|
|
|
Sean been at the hamsters' sunflower hooch again?
|
|
|
|
|
I work on stuff where 15 digits is marginally OK, using plenty of trickery.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
|
|
|
|
|
I'll have my math float away, thank you!
It's an OO world.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
|
|
|
|
|
Is that anything like a root beer float?
|
|
|
|
|
Depends what I need to do - for some things I don't need either; others need precision and also accuracy, in which case there's no point getting the wrong answer instantly; for others I just need an estimate, but I need it quickly.
|
|
|
|
|
For most of my work, 4-byte floats are too inaccurate. For some of my work, doubles barely suffice. In all of my work, speed matters. Conclusion: I need both, and I need it yesterday!
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
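Where exactly the 4-byte floats give out can be pinned down: a float's 24-bit significand stops counting whole numbers at 2^24, while a double's 53-bit significand carries on to 2^53.

```cpp
// The largest magnitudes at which each type can still represent every
// integer exactly (these follow from the 24-bit and 53-bit significands).
const float  FLOAT_INT_LIMIT  = 16777216.0f;        // 2^24
const double DOUBLE_INT_LIMIT = 9007199254740992.0; // 2^53
```

Past those limits, adding 1 can be silently rounded away.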
|
|
|
|
|
Sometimes you can solve or reduce floating-point problems by scaling up - always recording your prices in pennies rather than dollars, for example, and your hurricane strength in metric butterflies.
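A quick sketch of the pennies idea (hypothetical helpers; metric butterflies left as an exercise): keep the running totals in integer cents, where addition is exact, and convert to dollars only at the edge.

```cpp
#include <cstdint>

// Totals stay in integer cents; integer addition is exact.
std::int64_t sumCents(const std::int64_t* cents, int n) {
    std::int64_t total = 0;
    for (int i = 0; i < n; ++i) total += cents[i];
    return total;
}

// Any rounding is confined to the final conversion for display.
double toDollars(std::int64_t cents) {
    return static_cast<double>(cents) / 100.0;
}
```

Summing the same prices as doubles drifts, because 0.1 isn't exactly representable in binary.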
|
|
|
|
|
4 base-10 digits would generally be OK if the internal representation were base 10 also. Even with all the digits an IEEE 754 64-bit float provides, weirdness from the inability to represent 0.1 precisely in binary has an obnoxious habit of leaking into userspace.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
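The 0.1 leak mentioned above is trivial to trigger: 0.1 + 0.2 lands one ulp away from the double nearest 0.3. A tolerance-based compare is the usual (imperfect) patch:

```cpp
#include <cmath>

// 0.1 has no finite binary expansion, so exact equality on derived
// values fails; compare within a tolerance instead.
bool nearlyEqual(double a, double b, double tol = 1e-9) {
    return std::fabs(a - b) <= tol;
}
```

Choosing a sensible tolerance is itself problem-dependent, which is rather the point of the complaint.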
|
|
|
|
|
Both. In my experience, when I've needed more than just a few digits of precision, it's because I'm doing a complex calculation, and that's usually when I've needed both precision and speed.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
|
|
|
|
I would rather use strings, and convert to accurate precision using Unidex.
|
|
|
|
|
A modern 64-bit CPU should be able to do double precision in the same time it takes to do single. The only difference is how many bytes (8 vs. 4) are needed to store the answer. Personally, I'd use double precision to the bitter end and then format to any desired width. BTW, my Garmin GPS uses single, but military GPS systems use double for coffee-mug precision (a thimble if the antenna would fit) and quad to position (station) satellites.
If you find yourself in a fair fight, your tactics suck. - John Steinbeck
|
|
|
|
|
True, in most cases such accuracy is unnecessary and thus just CPU cycles wasted. But there are situations where the accuracy could make for easier accomplishments later.
Around 2000 the AutoCAD product changed the way it stored polylines (non-regular polygons). The old way was to save each line/arc segment in series as doubles for the XYZ values. This had the drawback that the further you moved away from 0,0,0, the worse the accuracy became, to the point where such a polygon was displayed as disjoint vectors. The "new" method used a start point, then a length, angle and "bulge" for each vector - this made computation a lot faster, and the polygon itself didn't lose accuracy with distance from the origin. BUT it had a secondary inaccuracy, in that its interaction with other objects became prone to errors - which in turn made things like hatching (filling the space between vectors) very problematic.
Anyhow, there are quite a few ways people have tried to get both accuracy and speed from these figures. As an example: http://keithbriggs.info/mpfs.html
So it seems it's something which just always needs to be chosen on a per-problem basis. Similar to the speed-vs-memory trade-off of using a BST vs. a hash table.
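The distance-from-origin problem in the old format can be demonstrated directly: near 1e15, adjacent doubles are 0.125 apart, so a millimetre-scale offset on a coordinate that far out simply vanishes (offsetVertex is a hypothetical helper, not AutoCAD's API):

```cpp
// Place a vertex at coord + delta. Far from the origin the spacing
// between representable doubles exceeds small deltas, which are then
// rounded away -- the disjoint-vector effect described above.
double offsetVertex(double coord, double delta) {
    return coord + delta;
}
```

Storing relative quantities (length, angle, bulge) keeps the deltas small, which is exactly why the new format didn't degrade with distance.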
|
|
|
|