|
Read this - http://en.wikipedia.org/wiki/Hamming_weight[^]
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
Stop telling God how big your troubles are; tell your troubles how big God is!
|
|
|
|
|
Thank you for the answer. This is the answer I was looking for.
Thanks to the others for their answers as well.
|
|
|
|
|
There are several algorithms listed here[^]. For a 32-bit integer, you probably want "Counting bits set, in parallel[^]":
unsigned int CountBitsSet(unsigned int v)  // unsigned avoids implementation-defined right shifts
{
    v = v - ((v >> 1) & 0x55555555);                 // each 2-bit field holds the count of its 2 bits
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333);  // each 4-bit field holds the count of its 4 bits
    return ((v + (v >> 4) & 0x0F0F0F0F) * 0x01010101) >> 24; // sum the four byte counts into the top byte
}
Just don't ask me to explain how it works!
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Richard Deeming wrote: Just don't ask me to explain how it works!
How, how indeed... http://en.wikipedia.org/wiki/Hamming_weight[^]
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
Stop telling God how big your troubles are; tell your troubles how big God is!
|
|
|
|
|
It first sums pairs of adjacent bits, then sums adjacent 2-bit groups, then does the same with the nibbles, and finally it cheats and sums the bytes with a multiplication.
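Those steps can be traced in a quick sketch (Python here just for illustration; the masks are the same 32-bit constants as in the C version):

```python
def count_bits_set(v):
    """32-bit SWAR population count, mirroring the C version above."""
    v &= 0xFFFFFFFF                                  # treat v as an unsigned 32-bit word
    v = v - ((v >> 1) & 0x55555555)                  # each 2-bit field holds the count of its 2 bits
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333)   # each 4-bit field holds the count of its 4 bits
    v = (v + (v >> 4)) & 0x0F0F0F0F                  # each byte holds the count of its 8 bits
    return ((v * 0x01010101) & 0xFFFFFFFF) >> 24     # the multiply sums the four bytes into the top byte

print(count_bits_set(0b1011))        # 3
print(count_bits_set(0xFFFFFFFF))    # 32
```

The multiply works because 0x01010101 adds the word to itself shifted by 0, 8, 16, and 24 bits, so the top byte ends up holding the sum of all four byte counts.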
|
|
|
|
|
If you're using GCC (C or C++), there's __builtin_popcount.
|
|
|
|
|
Member 11055093 wrote: I've been told I can do it using space as much as I need.
Fastest way with unlimited space would be a lookup table.
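As a sketch of the table idea (Python for illustration; an 8-bit table and four lookups here, though with truly unlimited space you could index the whole 32-bit word directly):

```python
# A 256-entry table, built once; a 32-bit count is then four lookups.
BYTE_COUNTS = [bin(i).count("1") for i in range(256)]

def popcount32(v):
    return (BYTE_COUNTS[v & 0xFF] + BYTE_COUNTS[(v >> 8) & 0xFF] +
            BYTE_COUNTS[(v >> 16) & 0xFF] + BYTE_COUNTS[(v >> 24) & 0xFF])
```

The usual trade-off: an 8-bit table is 256 entries, a 16-bit table is 65,536, and a full 32-bit table would be 4 billion entries for a single lookup.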
|
|
|
|
|
It depends on your abstract model. With the usual model, you can't do any better than O(n) (so o(n) is not happening, or did you write a lower-case o by accident?): on a plain old Turing machine, you obviously have to read every bit.
But this problem is in NC. You could sum n/2 pairs of bits, then n/4 pairs of 2-bit numbers, then n/8 pairs of nibbles, etc., and you're done in log n steps, with each step itself taking logarithmic time (adders), which is sometimes not counted, all with a polynomial number of processing elements.
Similarly, in "broadword computing" you would say that you can compute this in O(log n) broadword steps, using the same construction, but now every layer is a couple of steps (mask and add), with the addition counted as 1 step instead of as a circuit of depth O(log n).
Practically, on 32-bit words but "pretending 32 is not a constant", you can still use the same construction for an O(log n) algorithm, possibly with a multiplication trick to do several sums at once (already shown in other answers), or lookup tables (with as much space as you need, you could cheat terribly and precompute any mapping from 32-bit integers to anything, making it a single step), or, if available, the popcnt instruction.
For arrays you could use a pshufb-based trick[^] (pshufb is awesome).
|
|
|
|
|
I am new here, and I have not found a forum for Python, so I figured I'd ask anyway.
I am trying to come up with an algorithm where you have several x-axis points, like
d = [-5, -3.5, -2.8, -0.6, 1.2, 3.4, 5.6], and where you prompt the user to enter a certain point; the program should give the closest point to the value entered by the user.
Thank you.
|
|
|
|
|
This has nothing to do with Python, or any other language. It's a simple matter of sorting the initial values into order and then searching for the two points closest to the one entered by the user. It could be sped up with a binary chop (Google for that).
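For instance, a sketch of that approach using Python's bisect module (function and variable names here are my own):

```python
import bisect

def closest_point(points, x):
    """Return the value in points nearest to x."""
    pts = sorted(points)
    i = bisect.bisect_left(pts, x)          # x would sit between pts[i-1] and pts[i]
    candidates = pts[max(i - 1, 0):i + 1]   # the (at most two) neighbours of x
    return min(candidates, key=lambda p: abs(p - x))

d = [-5, -3.5, -2.8, -0.6, 1.2, 3.4, 5.6]
print(closest_point(d, 1.0))   # 1.2
```

After the one-time sort, each lookup is O(log n), which is exactly the "binary chop" speed-up.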
|
|
|
|
|
I know about the sorting part; I don't need to Google it. I am looking for the acquisition part (capturing that nearest number). Maybe you can enlighten me.
|
|
|
|
|
I suggested a solution in my previous reply.
|
|
|
|
|
You can lead a horse to water...
|
|
|
|
|
I think Richard was suggesting that you Google "binary chop", not sorting.
Try a search for "python binary search closest value"; the first hit that came up for me (in Google) was an answer to the same homework question.
|
|
|
|
|
I think he meant "binary search". (Binary chop is a debugging pattern.)
|
|
|
|
|
|
|
This makes me sad as well.
Do you mean finger trees?
|
|
|
|
|
No, only the finger search algorithm!
|
|
|
|
|
Excuse me, first posted in the lounge, but Bill suggested I post here:
I have a range of values (voltage) over time (thousands of minutes, one value per minute). I am trying to chart these. Determining the length of my Y axis is quite a problem for me. If I take a minimum and maximum, and use that as the axis height, one or two zero values result in all the others being scrunched up at the top of the chart. If I remove zeroes, it looks much better, and for a chart, they aren't very important, I'll give all real values in a tabular report.
What I would like to do is determine the average height of the band of data points: roughly the space between the moving average of the low points and that of the high points. To do that, I figure I would need a median series, so I could determine a smoothed series of points above and below the median, and make my Y axis 's' higher and 's' lower than those.
How do people normally do this?
|
|
|
|
|
Are you thinking of "standard deviation"?
|
|
|
|
|
Yes, I just can't place it quite right. The standard deviation during the morning could be radically different from the afternoon's, so I would need a curve of standard deviations, not a single figure.
|
|
|
|
|
What about calculating average / standard deviation for hourly sets of data?
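As a sketch of that idea (the hourly grouping, the k multiplier, and the zero-filtering are all assumptions you'd tune for your data):

```python
import statistics

def axis_band(values, per_hour=60, k=2.0):
    """Y-axis limits from hourly mean +/- k*std, ignoring zero dropouts.
    Assumes one value per minute, grouped into hours."""
    lows, highs = [], []
    for start in range(0, len(values), per_hour):
        chunk = [v for v in values[start:start + per_hour] if v != 0]
        if len(chunk) < 2:
            continue                      # not enough real data in this hour
        m = statistics.mean(chunk)
        s = statistics.stdev(chunk)
        lows.append(m - k * s)
        highs.append(m + k * s)
    return min(lows), max(highs)
```

The overall axis is then the widest of the hourly bands, so a morning/afternoon difference widens the axis instead of scrunching the data, and the zero dropouts never touch it.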
|
|
|
|
|
For my old computer I need an assembler which is able to take assembled code from a library and link it together in the smallest possible combination.
One 'speciality' of the old CDP1802 processor will force me to write the assembler and linker myself. There are two types of branching instructions: long branches and short branches. Long branches use full 16 bit addresses, but will cause timing issues with the graphics chip. This is an ancient hardware bug.
This is the reason why I must use short branches with short 8-bit addresses. The upper 8 bits are just assumed to remain the same as in the instruction's address. This way memory is segmented into 256-byte blocks. It's not a very strict segmentation, as code can run across the boundaries without any consequences; you just can't loop back with a short branch, and long branches can't be used.
The linker will have to puzzle together snippets of code and data with this in mind. At the same time I must be sure that memory usage is as low as possible in the end. My old computer has only 4k RAM, and more than 16k is quite unusual.
The only thing I can think of is to make a memory map of each possible combination and take the one which needs the least amount of memory. There are easily hundreds of small code snippets to be linked and blindly testing every combination will be very slow and inefficient.
First thought: build a tree with only the valid options and then find the branch with the lowest byte count. This is already better than brute force, but I hope there is a still more elegant algorithm for this.
The language is JavaScript. that of Mordor, which I will not utter here
I hold an A-7 computer expert classification, Commodore. I'm well acquainted with Dr. Daystrom's theories and discoveries. The basic design of all our ship's computers are JavaScript.
|
|
|
|
|
Assuming that you subdivide the code into N snippets, each terminated by an unconditional jump (e.g. procedures), you could test each possible sequence out of the N! possibilities.
Note that the maximum savings in bytes that you could achieve are the number of jumps that may be converted from 16-bit form to 8-bit form. If this number is smaller than the length of the smallest code snippet, you would not be able to use any sort of pruning of the search tree, but would be forced to evaluate all N! leaves of the tree, which might take a long time...
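When pruning is possible (i.e. when the byte count of a partial layout can never decrease as snippets are appended), the tree search can be organized as a branch-and-bound; here is a sketch in Python, with the cost function left as a stand-in for your real short/long-branch accounting:

```python
def best_layout(snippets, layout_cost):
    """Branch-and-bound over snippet orderings.
    snippets: list of snippet ids.
    layout_cost(partial): byte count of a partial ordering; it must be
    non-decreasing as the ordering grows, or the pruning is unsound."""
    best = {"cost": float("inf"), "order": None}

    def extend(order, remaining):
        cost = layout_cost(order)
        if cost >= best["cost"]:
            return  # prune: this subtree cannot beat the best complete layout
        if not remaining:
            best["cost"], best["order"] = cost, order
            return
        for i, s in enumerate(remaining):
            extend(order + [s], remaining[:i] + remaining[i + 1:])

    extend([], list(snippets))
    return best["order"], best["cost"]
```

In the worst case this still visits all N! orderings, as noted above, but any early good layout cuts off whole subtrees, so the order in which snippets are tried matters a lot in practice.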
Borland's Turbo Assembler (for x86 processors) had an option whereby it attempted to optimize (conditional) jumps:
1. All jumps were written without qualifiers.
2. The assembler would make multiple passes through the code, applying the following algorithm:
a. If a jump target was within +127/-128 bytes, output a short (2-byte) jump.
b. If an unconditional jump target was outside that range, output a 3-byte jump.
c. If a conditional jump target was outside that range, output a 5-byte sequence: a jump on the inverted condition over the following jump (2 bytes), then an unconditional jump to the target (3 bytes).
This was applied in a loop until either no more jumps could be optimized or a predetermined number of loops was reached. Typically, only 2-3 loops were necessary.
In addition to the automatic method given above, I would try to write each procedure so that the jumps are all 8-bit forms. Optimizing a procedure by hand is likely to be much easier than attempting global optimization.
I hope that this helps.
|
|
|
|