For those new to message boards please try to follow a few simple rules when posting your question.
Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
Be specific! Don't ask "can someone send me the code to create an application that does 'X'?" Pinpoint exactly what it is you need help with.
Keep the subject line brief, but descriptive. eg "File Serialization problem"
Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
Be careful when including code that you haven't made a typo. Typing mistakes can become the focal point instead of the actual question you asked.
Do not remove or empty a message if others have replied. Keep the thread intact and available for others to search and read. If your problem was answered, edit your message and add "[Solved]" to the subject line of the original post, and cast an approval vote for the answer or answers that really helped you.
If you are posting source code with your question, place it inside <pre></pre> tags. We advise you also check the "Encode "<" (and other HTML) characters when pasting" checkbox before pasting anything inside the PRE block, and make sure "Use HTML in this post" check box is checked.
Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
Please do not post links to your question into an unrelated forum such as the lounge. It will be deleted. Likewise, do not post the same question in more than one forum.
Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
No advertising or soliciting.
We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
I am working on a bignum library and have been checking out some of the functions. I would like to know if the times I am seeing are good, bad, or indifferent.
To test big numbers, I used the RSA challenge file.
The times are in seconds and hundredths.
The initialize time is setting high priority and allocating all of virtual memory.
The read and split time is the time to read the file (one time) and tokenize it.
The edit time is for deleting unneeded lines in the file.
To get any measurable difference in time between the different RSA instances, I had to loop 100,000 times (as you can see, the read/split time and the edit time are the default .01 second), so those times are listed as 100K*xxx; i.e. "RSA-2048 100K*DTB conversion time: 2.06" means 20.6 microseconds for any single loop.
A single RSA instance includes the following data which is verified during the conversion (the first RSA instance is used as this example):
The edit deletes the blank lines and the Status line. The initialize, read, and edit are only done one time.
The conversion time for each RSA instance includes validating that only decimal digits are present in the data, that the correct number of digits is present, and that the digit sum matches. I load the number as a radix 10 value and convert it to binary, and verify that the result has the correct number of bits. I save this binary value over the decimal digits so I can load it later without conversion. I also load this binary value to ensure correct operation with radix 1. Each RSA instance is processed 100,000 times and then the time is calculated.
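The validation steps described above can be sketched in Python (big integers built in); the digit string, digit count, digit sum, and bit length here are a small stand-in example, not a real RSA challenge number:

```python
def convert_and_check(digits, n_digits, digit_sum, n_bits):
    """Validate a decimal digit string against its published metadata,
    then convert it to a binary (integer) value and check the bit count."""
    assert digits.isdigit() and len(digits) == n_digits   # decimal digits only
    assert sum(map(int, digits)) == digit_sum             # digit sum matches
    value = int(digits, 10)                               # radix-10 load
    assert value.bit_length() == n_bits                   # correct bit count
    return value

# Small stand-in value (not a real RSA number):
v = convert_and_check("255", 3, 12, 8)
print(v)  # → 255
```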
The SQRT times include loading the binary value and taking the SQRT and getting its remainder, and then squaring the SQRT and adding the remainder and comparing the result with the RSA instance value. Each RSA instance is processed 100,000 times and then the time is calculated.
With all of the above operations as described, are the times at all reasonable, i.e. 20.6 microseconds for DTB conversion and validation of a 2048 bit binary number and 63.6 microseconds to load and take the SQRT and validate (SQRT**2 + remainder == semi prime) for the 2048 bit binary value?
Time to initialize: 0.06
Time to read and split RSA.TXT: 0.01
Time to edit RSA.TXT: 0.01
RSA-576 100K*DTB conversion time: 0.49
RSA-640 100K*DTB conversion time: 0.56
RSA-704 100K*DTB conversion time: 0.59
RSA-768 100K*DTB conversion time: 0.79
RSA-896 100K*DTB conversion time: 0.98
RSA-1024 100K*DTB conversion time: 1.13
RSA-1536 100K*DTB conversion time: 1.54
RSA-2048 100K*DTB conversion time: 2.06
Time to DTB convert 100K*RSA.TXT: 8.34
RSA-576 SQRT Time - 100k*RSA.TXT: 1.46
RSA-640 SQRT Time - 100k*RSA.TXT: 2.51
RSA-704 SQRT Time - 100k*RSA.TXT: 2.71
RSA-768 SQRT Time - 100k*RSA.TXT: 2.12
RSA-896 SQRT Time - 100k*RSA.TXT: 2.40
RSA-1024 SQRT Time - 100k*RSA.TXT: 2.80
RSA-1536 SQRT Time - 100k*RSA.TXT: 5.69
RSA-2048 SQRT Time - 100k*RSA.TXT: 6.36
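As a rough point of comparison (not the MASM code, just one data point from a different implementation), Python's built-in `math.isqrt` on a fixed full-width 2048-bit value can be measured the same way, including the SQRT**2 + remainder check:

```python
import math
import timeit

n = (1 << 2047) | 0xDEADBEEF           # a fixed full-width 2048-bit value
loops = 10_000
total = timeit.timeit(lambda: math.isqrt(n), number=loops)
print(f"isqrt(2048-bit): {total / loops * 1e6:.1f} microseconds per call")

s = math.isqrt(n)
r = n - s * s
assert s * s + r == n and n < (s + 1) ** 2   # SQRT**2 + remainder reconstructs n
```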
No, I prefer to roll my own, but my functions process only positive integers. I do not use C++ or .NET; I am a MASM (5,6,7,8,9) programmer. OBTW, I read the class documentation but could not find a SQRT function; did I miss it somewhere?
If you are interested, I can email you the RSA.txt file (if you do not already have it) and you could implement and test the SQRT function and report the timings from the big integer class. My timings were taken on an HP Pavilion dv7-6c23cl - 6GB memory, quad AMD, 2.5GHz, 32 bit assembly, console application.
What I was looking for was anyone's WAG about how long the square roots of 576 bit to 2048 bit integers should take.
I am looking for help with the following problem:
For my final BSc project I receive data and need to decide whether a threshold has been crossed (the data IS NOT BOOLEAN.. I get data about the velocity of an object).
The definition: if the incoming data stays above the threshold for at least 1 sec, the threshold has been crossed.
It is of course complicated because I need to filter out "noise".
I thought of using an "ALPHA FILTER" or "X/Y decision" or "Average" and others.. but I am sure that someone has done a thesis on this and there is a proven algorithm to handle it.
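One simple combination of the ideas above is an alpha (exponential) filter followed by a "held above threshold for 1 second" rule. The sketch below assumes a fixed sample interval; `sample_dt` and `alpha` are placeholder values to tune against the real sensor:

```python
def crossed(samples, threshold, sample_dt=0.1, alpha=0.3, hold_s=1.0):
    """Return True if the filtered signal stays above `threshold`
    for at least `hold_s` seconds."""
    filtered, run = 0.0, 0.0
    for v in samples:
        filtered = alpha * v + (1 - alpha) * filtered   # alpha filter smooths noise
        run = run + sample_dt if filtered > threshold else 0.0
        if run >= hold_s:
            return True                                 # above threshold long enough
    return False
```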
I'm looking for an algorithm that tracks duplicate values in a two-dimensional array, in order to draw complex objects composed of rectangles, squares, triangles, hexagons, etc.
The constraints are:
- I specify how many numbers must not be duplicated
- The vertices of the objects must contain duplicate numbers, except for those specified above
- Objects should be placed symmetrically
In the example you can see the duplicate values 0, 10, 15, 18, 19, 25*, 35, 45, 50 and the three individual numbers. (* shared between two objects)
The numbers 16, 27, 46 are the three numbers that I requested to be unduplicated.
Of course, different combinations of objects matching the given condition can be traced.
I am looking for a fast algorithm; my current processing times are biblical!
I have been working with C# using the FlickrNET API in order to create a slideshow that shows images from Flickr based on a single word search term.
This has been easy enough to implement thus far, but the images shown in the slideshow are sometimes repetitive, i.e. they show 10 of the same thing taken at different angles by a single user.
As I am a bit of a newbie to coding more generally, I was looking for general advice or pointers on the best way to randomize these images according to their other related tags. So one way this might work is: if someone searched "London" it could show everything with the tag "London", but use the other tags to organize the images so they are more diverse.
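One way to get that diversity is to bucket the results by their first tag other than the search term, then interleave the buckets so consecutive images differ. This is a language-neutral sketch (FlickrNET itself is C#); the photo dicts and their `tags`/`search_term` fields are hypothetical stand-ins for the real API results:

```python
from collections import defaultdict
from itertools import zip_longest

def diversify(photos):
    """Interleave photos so consecutive items come from different tag groups."""
    buckets = defaultdict(list)
    for p in photos:
        # Key on the first tag other than the search term, if any.
        extra = [t for t in p['tags'] if t != p['search_term']]
        buckets[extra[0] if extra else ''].append(p)
    ordered = []
    for group in zip_longest(*buckets.values()):    # round-robin across groups
        ordered.extend(p for p in group if p is not None)
    return ordered
```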
9 = 1001 in binary, so the function GetNumOfBinary(9) = 2.
I know I can do it in o(n) time by converting the number to binary and examining it digit by digit.
I've been told I can do it using as much space as I need.
How can I do it? (It seems impossible, because I need to check every digit no matter which way I do it, so it will still be o(n).)
It depends on your abstract model. With the usual model, you can't do any better than O(n) (so o(n) is not happening, or did you write a lower case o by accident?) - obviously on a plain old Turing machine, you're going to have to read every bit.
But this problem is in NC. You could sum n/2 pairs of bits, then n/4 pairs of "2-bit numbers", then n/8 pairs of nibbles, etc., and you're done in log n steps, with each step also taking logarithmic time (adders) (sometimes not counted), all with a polynomial number of processing elements.
Similarly, in "broadword computing" you would say that you can compute this in O(log n) broadword steps, using the same construction, but now every layer is a couple of steps (mask and add), with the addition counted as 1 step instead of as a circuit of depth O(log n).
Practically, on 32-bit words but "pretending 32 is not a constant", you can still use the same construction for an O(log n) algorithm, possibly with a multiplication trick to do several sums at once (already shown in other answers), or lookup tables, or (with as much space as you need, you could cheat terribly and compute any mapping from 32-bit integers to anything in a single step), or, if available, the popcnt instruction.
For arrays you could use a pshufb-based trick[^] (pshufb is awesome).
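The mask-and-add construction described above, for 32-bit words, looks like this (the constants are the classic SWAR masks, and the final multiplication sums the four per-byte counts into the top byte):

```python
def popcount32(x: int) -> int:
    """Count set bits in a 32-bit word in O(log 32) mask-and-add steps."""
    x &= 0xFFFFFFFF
    x = x - ((x >> 1) & 0x55555555)                  # sum adjacent bit pairs
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # sum 2-bit fields
    x = (x + (x >> 4)) & 0x0F0F0F0F                  # sum nibbles into bytes
    return (x * 0x01010101) >> 24 & 0xFF             # multiply trick: add the 4 bytes

print(popcount32(9))  # 9 = 0b1001 → 2
```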
This is nothing to do with Python, or any other language. It's a simple matter of sorting the initial values into order and then searching for the two points closest to the one entered by the user. It could be sped up by binary chop (Google for that).
I think Richard was suggesting that you google "Binary chop" not sorting.
Try a search of "python binary search closest value" - the first hit that came up for me (in Google) was an answer to the same homework question
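The "binary chop" approach suggested above is a few lines in Python using the standard `bisect` module; on a sorted list it finds the closest value in O(log n):

```python
import bisect

def closest(sorted_vals, target):
    """Return the element of a sorted list closest to `target`."""
    i = bisect.bisect_left(sorted_vals, target)   # first index with value >= target
    if i == 0:
        return sorted_vals[0]
    if i == len(sorted_vals):
        return sorted_vals[-1]
    before, after = sorted_vals[i - 1], sorted_vals[i]
    return before if target - before <= after - target else after

print(closest([1, 4, 9, 16, 25], 11))  # → 9
```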
Excuse me, first posted in the lounge, but Bill suggested I post here:
I have a range of values (voltage) over time (thousands of minutes, one value per minute). I am trying to chart these. Determining the length of my Y axis is quite a problem for me. If I take a minimum and maximum, and use that as the axis height, one or two zero values result in all the others being scrunched up at the top of the chart. If I remove zeroes, it looks much better, and for a chart, they aren't very important, I'll give all real values in a tabular report.
What I would like to do is determine the average height of the band of data points, sort of the space between the moving average of the low points and that of the high points. I figure that to do that I would need a median series, so I could determine a smoothed series of points above and below the median, and make my Y axis 's' higher and 's' lower than those.
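A minimal version of that idea: drop the zero dropouts, take a rolling median to smooth the band, and size the Y axis a margin `s` beyond it. The window size and margin below are placeholder values to tune against the real data:

```python
from statistics import median

def axis_range(values, window=15, s=0.5):
    """Return (y_min, y_max) for the axis: a rolling median band
    of the non-zero readings, widened by margin `s` on each side."""
    real = [v for v in values if v != 0]               # ignore zero dropouts
    smoothed = [median(real[max(0, i - window):i + 1]) # rolling median
                for i in range(len(real))]
    return min(smoothed) - s, max(smoothed) + s
```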
For my old computer I need an assembler which is able to take assembled code from a library and link it together in the smallest possible combination.
One 'speciality' of the old CDP1802 processor will force me to write the assembler and linker myself. There are two types of branching instructions: long branches and short branches. Long branches use full 16 bit addresses, but will cause timing issues with the graphics chip. This is an ancient hardware bug.
This is the reason why I must use short branches with short 8-bit addresses. The upper 8 bits are simply assumed to remain the same as in the instruction's address. This way memory is segmented into 256-byte blocks. It's not a very strict segmentation, as the code can run across the boundaries without any consequences; you just can't loop back with a short branch, and long branches can't be used.
The linker will have to puzzle together snippets of code and data with this in mind. At the same time I must be sure that memory usage is as low as possible in the end. My old computer has only 4k RAM, and more than 16k is quite unusual.
The only thing I can think of is to make a memory map of each possible combination and take the one which needs the least amount of memory. There are easily hundreds of small code snippets to be linked and blindly testing every combination will be very slow and inefficient.
First thought: build a tree with only valid options and then find the branch with the lowest byte count. This is already better than brute force, but I hope there is a more elegant algorithm for this.
Assuming that you subdivide the code into N snippets, each terminated by an unconditional jump (e.g. procedures), you could test each possible sequence out of the N! possibilities.
Note that the maximum savings in bytes that you could achieve are the number of jumps that may be converted from 16-bit form to 8-bit form. If this number is smaller than the length of the smallest code snippet, you would not be able to use any sort of pruning of the search tree, but would be forced to evaluate all N! leaves of the tree, which might take a long time...
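The brute-force N! search can be sketched as below. The snippet table and the cost model are deliberately simplified illustrations (one extra byte for each branch forced into long form, and a short branch allowed only when source and target fall in the same 256-byte page); a real linker would model branch positions within each snippet:

```python
from itertools import permutations

def best_order(snippets, branch_to):
    """snippets: {name: length_in_bytes}; branch_to: list of (src, dst) names.
    Try every ordering and return (cost, order) with the lowest byte count."""
    best = None
    for order in permutations(snippets):
        addr, pos = {}, 0
        for name in order:                 # lay snippets out in this order
            addr[name] = pos
            pos += snippets[name]
        # One extra byte per branch whose ends land in different 256-byte pages.
        cost = pos + sum(0 if addr[s] // 256 == addr[d] // 256 else 1
                         for s, d in branch_to)
        if best is None or cost < best[0]:
            best = (cost, order)
    return best
```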
Borland's Turbo Assembler (for x86 processors) had an option whereby it attempted to optimize (conditional) jumps:
1. All jumps were written without qualifiers.
2. The assembler would make multiple passes through the code, applying the following algorithm:
a. If a jump target was within +127/-128 bytes, output a short (2-byte) jump.
b. If an unconditional jump target was outside that range, output a 3-byte jump.
c. If a conditional jump target was outside that range, output a 5-byte sequence: a jump over the following jump (2 bytes), then an unconditional jump to the target (3 bytes).
This was applied in a loop until either no more jumps could be optimized or a predetermined number of loops was reached. Typically, only 2-3 loops were necessary.
In addition to the automatic method given above, I would try to write each procedure so that the jumps are all 8-bit forms. Optimizing a procedure by hand is likely to be much easier than attempting global optimization.
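The multi-pass relaxation described above can be sketched as follows. This is a toy model, not Turbo Assembler's actual code: the item/label structures are invented for illustration, and the sizes follow the x86 example (2-byte short, 3-byte long, 5-byte conditional sequence). Starting every jump in its longest form and only ever shrinking guarantees the loop converges:

```python
def relax(items, labels, max_passes=32):
    """items: list of {'len': n} code chunks or {'jump': label, 'cond': bool}.
    labels: maps a label name to the index of the item it points at
    (len(items) means 'end of code'). Returns the final size of each item."""
    # Start pessimistic: every jump in its longest form.
    size = [it['len'] if 'len' in it else 5 for it in items]
    for _ in range(max_passes):
        addr = [0]
        for s in size:
            addr.append(addr[-1] + s)            # recompute layout this pass
        changed = False
        for i, it in enumerate(items):
            if 'jump' not in it:
                continue
            offset = addr[labels[it['jump']]] - (addr[i] + 2)
            if -128 <= offset <= 127:
                new = 2                          # short form reaches
            else:
                new = 5 if it['cond'] else 3     # jcc-over-jmp, or plain long jmp
            if new != size[i]:
                size[i] = new
                changed = True                   # addresses shifted; run another pass
        if not changed:
            break
    return size
```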
You have a misunderstanding about this -- the CLR doesn't understand any languages -- it's more like the languages understand the CLR, but even that is a misleading description.
Furthermore, most members here on CP (me anyway) are just ordinary developers who do not know (or care) anything about how the deep internals work. You would probably need to contact the developers at Microsoft to gain the level of detail you desire.
The things you are asking about are way beyond what is required for day-to-day development of commercial and enterprise applications. If you truly want to understand how it all works, you will likely need a doctorate degree.
I can't see anything wrong with trying to understand things that have already been developed successfully; I need your help, and if you already know it then you can just share it, which will increase your knowledge level as well, Mr PIEBALDconsult. And I need to tell you one thing: Sir Isaac Newton didn't have a doctorate degree when he discovered gravitational force, which means that a doctorate degree is not at all necessary to become as great in knowledge as Sir Isaac Newton.
Hi friends, need help!! I am an absolute beginner and need advice. The algorithm below is an extract from a textbook, but when I try to apply it and solve the problem on paper I see that it will fail, as I get a remainder of 0 for every number I key in up to 8 (I took the number 8 as an example and applied the steps below)... please advise... I also found that applying this algorithm to the number 2 results in 0 as well, and if it is 0 then the number is not prime, so how is this algorithm correct?
2) read the number num
3) i <-- 2, flag <-- 1
4) repeat steps 4 through 6 until i < num or flag = 0
5) rem <-- num mod i
6) if rem = 0 then flag <-- 0 else i <-- i + 1
7) if flag = 0 then print "number is not prime" else print "number is prime"
8) stop
In these steps, if I use the number 2 as an example, it results in 0, which results in the number being non-prime.
Ok, now it makes more sense. The problem here is that 2 is a special case, and they did not handle it. Simply add a step: if the number is two, it is prime.
So in short, yes, the book is wrong and you are right.
edit: ok now it makes less sense, why are most of the steps gone again? What I wrote above should still apply though.
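A minimal sketch of the book's trial-division test with the missing special case added (my own rendering in Python, not the book's exact steps):

```python
def is_prime(num):
    """Trial division as in the book, with 2 handled as a special case."""
    if num < 2:
        return False          # 0 and 1 are not prime
    if num == 2:
        return True           # the special case the book forgot
    i, flag = 2, 1
    while i < num and flag == 1:
        if num % i == 0:
            flag = 0          # found a divisor: not prime
        else:
            i += 1
    return flag == 1

print(is_prime(2), is_prime(99), is_prime(97))  # True False True
```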
Thank you, I was editing so the steps could be read properly... At least I know that I was not wrong. I appreciate your help; it gives me confidence that people online can help me with my problems while I am learning.
So, assuming the number 2 is not properly handled in these steps: if we move through the sequence and find another number that is divisible with remainder 0, we should in that case consider the result correct, and therefore 99 would not show up as prime when the algorithm runs. Is that correct?