|
|
Member 4194593 wrote: The third table is just a list of the prime numbers to be considered (all prime numbers less than 300 decimal digits), but saved in a special way as a multi-level directory structure as described below.
A quick consultation with the revered Dr Riemann indicates that there are over 10^297 primes less than 10^300. Since there are somewhere on the order of 10^80 atoms in the known universe, your compression scheme must achieve a minimum compression ratio in excess of 10^217 : 1. Can't see that happening in a hurry... Sometimes "big data" is just too big.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Peter,
I was waiting for someone to figure that out. Check with Ravi; I proposed this to him in an email in March. The following is the second half of the original email to him. Too bad this would not work; I think the actual algorithm would work except for the fact that the file requirements are insane:
<pre>
------------------------------------------------------------------------------------------
The following discussion is why this algorithm (the part above this point) should be
published in Algorithms on April 1st without this lower section - let the wrecking crew of
experts in CP determine what is so wrong with the solution (the algorithm is correct but
just will not work).
------------------------------------------------------------------------------------------
Now, the number of primes with up to 300 digits is approximately:
((10^300) / ln(10^300)) = (1.447 * (10^297))
The max size of a prime number in that range is 52 DWORDS or 26 QWORDS, which would convert
to 232 characters, so the total size of all path names describing all of these prime
numbers will be:
(232 * 1.447 * (10^297)) = (3.36 * (10^299)) BYTES
The number of terabytes to contain this data would be:
(3.36 * (10^299) / (10^12)) = (3.36 * (10^287)) TB
The number of 4 TB drives to contain this total size would thus be:
((1/4) * 3.36 * (10^287)) = (8.4 * (10^286)) drives
I don't think that Seagate has enough material to create that many drives. Penrose, in his
book "The Emperor's New Mind", estimated that the number of particles in the visible
universe was 10^80, and even if that problem were solved, you would still have the
problem of how much time it would take to write out these prime number directories to
the drives. So much for having the prime numbers available to use as trial multipliers to
see if the product matches the semiprime.
Lest you think it would make a difference (in the count and total size of the prime
numbers) if you only saved the prime numbers that lie between 10^200 and 10^300:
((10^300) / ln(10^300) - (10^200) / ln(10^200)) =
(1.447 * (10^297) - 2.171 * (10^197)) =
(1.447 * (10^297))
or
(1.447 * (10^297)) = (1.447 * (10^297))
Saving only the primes with 200 to 300 digits would not even make a visible dent in the
count, because the actual calculation subtracts a 197-digit number from a 297-digit
number, leaving a 297-digit number with a diminished lowest part (the last 196 digits if
a borrow was needed, else 197 digits), while the most significant 100 (or at least 99)
digits are unchanged. Only if you displayed the result in its full 297-digit decimal
form (instead of in truncated scientific notation) would you be able to see the
difference.
Another way to visualize this is to see how big a block of (8.4 * (10^286)) 4 TB drives
would be. Now my drives are (in inches) about 1.75 * 4.75 * 7. The diagonal from the
center of a drive to any corner is:
(((1.75 / 2)^2 + (4.75 / 2)^2 + (7 / 2)^2)^(1 / 2)) = 4.32 inches
A slightly more compact arrangement would be to stack the drives 4 wide and 2 deep,
almost making them a cube. This groups the drives in blocks of 8, giving a total number
of blocks of:
((8.4 / 8) * (10^286)) = (1.05 * (10^286)) blocks
The diagonal from the center of a block to any corner is:
(((1.75 * 4 / 2)^2 + (4.75 * 2 / 2)^2 + (7 / 2)^2)^(1 / 2)) = 6.86 inches
The dimensions (in number of blocks) of a cube with the same number of blocks in each
row, column, and layer would be:
((1.05 * (10^286))^(1/3)) = (2.19 * (10^95))
The diagonal from the center of this cube to any corner would be:
(6.86 * 2.19 * (10^95)) = (1.50 * (10^96)) inches
((6.86 * 2.19 / 12) * (10^95)) = (1.25 * (10^95)) feet
((6.86 * 2.19 / (12 * 5280)) * (10^95)) = (2.37 * (10^91)) miles
The radius of a sphere that would enclose this block of drives would then be:
(2.37 * (10^91)) miles.
The radius of the sphere that would enclose our solar system would be equal to the
orbit size of the planetoid Pluto (I guess I should be politically correct and not call
it a planet anymore), which is:
4.583 billion miles
or
(4.583 * (10^9)) miles
How much bigger would the radius of the sphere enclosing the block of drives be relative
to the radius of the sphere enclosing the solar system? To compare two spheres for "which
one is bigger, and by how much", consider comparing a baseball with a basketball. You
compare their height or width differences, not their circumference or volume
differences (whether you are comparing diameters or radii), so try:
((radius of drive-enclosing sphere) / (radius of solar-system-enclosing sphere))
((2.37 * (10^91)) / (4.583 * (10^9))) = (5.2 * (10^81))
It would be (5.2 * (10^81)) times the radius of our solar system, and, furthermore, you
would be filling that space with 4 TB drives instead of the almost complete vacuum that
exists there today.
Where would you get that much material to build the drives? Maybe we could see some
quantum sharing of particles between (8.4 * (10^286)) different drives, all at the same
time, without dropping any recorded bits??? Let's not even consider the power
requirements, or the write time to populate the directory entries before the factoring
algorithm starts.
</pre>
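The back-of-the-envelope figures above can be double-checked with a short Python script (double precision is enough, since only the leading digits and the exponent matter):

```python
import math

# Prime Number Theorem estimate: pi(x) ~ x / ln(x), for x = 10^300
x = 10.0 ** 300
prime_count = x / math.log(x)      # about 1.447e297 primes below 10^300

# 232 path-name characters per prime, as in the directory scheme above
total_bytes = 232 * prime_count    # about 3.4e299 bytes
total_tb = total_bytes / 1e12      # about 3.4e287 TB
drives_4tb = total_tb / 4          # about 8.4e286 four-terabyte drives

print(f"primes   : {prime_count:.3e}")
print(f"bytes    : {total_bytes:.3e}")
print(f"4 TB HDDs: {drives_4tb:.3e}")
```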
Dave.
|
|
|
|
|
Hi,
I want to make a simple algorithm for F1 Challenge. Basically, I want to make an external program that takes my race results, and at the end of the season I get "contract offers" depending on how well I did. I already know how I want it to work; I just want to know if there's already an algorithm similar to this?
|
|
|
|
|
This hardly needs an algorithm. It is just a matter of adding up all the points and sorting the drivers and teams into descending order by points.
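As a minimal sketch of that tally-and-sort approach (driver names, results, and the points table are all made up for illustration):

```python
from collections import defaultdict

# Assumed scoring table for finishing positions 1..10 (FIA-style)
POINTS = {1: 25, 2: 18, 3: 15, 4: 12, 5: 10, 6: 8, 7: 6, 8: 4, 9: 2, 10: 1}

# Each race result is a list of driver names in finishing order (hypothetical data)
season_results = [
    ["Alice", "Bob", "Carol"],
    ["Bob", "Alice", "Carol"],
    ["Alice", "Carol", "Bob"],
]

# Add up each driver's points across the season
totals = defaultdict(int)
for race in season_results:
    for position, driver in enumerate(race, start=1):
        totals[driver] += POINTS.get(position, 0)

# Sort drivers by points, highest first, to produce the final standings
standings = sorted(totals.items(), key=lambda item: item[1], reverse=True)
print(standings)  # [('Alice', 68), ('Bob', 58), ('Carol', 48)]
```

The "contract offers" step would then just map a driver's rank (or total) onto whatever offer tiers you define.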
|
|
|
|
|
I use Carbonite backup service on my main machine. Carbonite claims that they are able to transmit only changes in a file to their servers so that the whole file needn't be transmitted again when there are changes to that file.
How are they doing this? Wouldn't they need to compare the changed file against a complete copy of the original? And to do this across the net would require transmitting the entire file anyway, right?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Well, I guess only Carbonite could give a definitive answer. But it is possible they are keeping a copy of the file's directory information so they can just compare disk segments, and they back up by copying the segments directly from disk. Then on the next incremental backup they only need to copy any segments that have been added or changed since the previous backup.
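One common way to get this effect (used by tools like rsync; whether Carbonite actually works this way is an assumption) is to split the file into fixed-size chunks, hash each chunk, and upload only the chunks whose hashes differ from what the server already has. A minimal sketch:

```python
import hashlib

CHUNK_SIZE = 4096  # chunk size chosen arbitrarily for the sketch


def chunk_hashes(data: bytes) -> list:
    """Hash each fixed-size chunk of the file."""
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]


def changed_chunks(old_hashes: list, new_data: bytes) -> list:
    """Return the indices of chunks that must be re-uploaded."""
    new_hashes = chunk_hashes(new_data)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]


# Only the hash list has to be kept (or exchanged), so the full original
# file never needs to cross the wire again.
original = b"A" * 10000
server_hashes = chunk_hashes(original)

modified = b"A" * 5000 + b"B" + b"A" * 4999  # one byte changed, in chunk 1
print(changed_chunks(server_hashes, modified))  # [1]
```

Note that this fixed-block scheme misses the case where bytes are inserted and everything shifts; rsync handles that with rolling checksums, at the cost of more computation.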
|
|
|
|
|
That is likely to miss changes to files that do updates in place with mapped files...
|
|
|
|
|
Which is why I added the initial sentence to my response.
|
|
|
|
|
I developed a lossless, key-based, encryption algorithm that can work in any modern programming language and it has a 1-to-1 relationship between the unencrypted values and the encrypted values as long as the key remains the same and the minimum values and maximum values are known. The key can also be of virtually limitless size or of a very limited size. The algorithm uses carefully calculated mathematics to give the appearance of gibberish until it is decoded with the proper key or combination of keys. Other than that, I cannot reveal anything, lest I lose any intellectual property right opportunities that I have to the algorithm. How much is this encryption algorithm potentially worth if I patent it?
|
|
|
|
|
How many keys are you using? (I mean public/private.)
How do you exchange keys?
Are there default values for the key size?
Did you run any performance tests on real-life data?
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
|
|
|
|
Daniel Mullarkey wrote: How much is this encryption algorithm potentially worth if I patent it? Not much, since there are already high quality encryption methods available at no cost.
|
|
|
|
|
Well, to address both replies, the core function headers look something like this:
Public Function Encode(ByVal UnencodedValue As Integer, ByVal KeyValue As Integer, ByVal MinimumValue As Integer, ByVal MaximumValue As Integer) As Integer
Public Function Decode(ByVal EncodedValue As Integer, ByVal KeyValue As Integer, ByVal MinimumValue As Integer, ByVal MaximumValue As Integer) As Integer
Keep in mind that UnencodedValue and EncodedValue can range anywhere from MinimumValue to MaximumValue and that the KeyValue can range anywhere from MinimumValue + 2 to MaximumValue.
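For illustration only — this is not the poster's algorithm, just a trivial (and cryptographically worthless) stand-in with the same shape of signature — here is how a keyed 1-to-1 mapping on a bounded integer range can be built with modular arithmetic:

```python
def encode(value: int, key: int, min_value: int, max_value: int) -> int:
    """Keyed 1-to-1 mapping on [min_value, max_value] via modular addition.
    A toy example with the same parameters as the posted signatures."""
    size = max_value - min_value + 1
    return min_value + (value - min_value + key) % size


def decode(value: int, key: int, min_value: int, max_value: int) -> int:
    """Inverse of encode for the same key and range."""
    size = max_value - min_value + 1
    return min_value + (value - min_value - key) % size


# Every value in the range maps to a distinct encoded value and back again
lo, hi, key = 0, 255, 37
roundtrip_ok = all(decode(encode(v, key, lo, hi), key, lo, hi) == v
                   for v in range(lo, hi + 1))
distinct = len({encode(v, key, lo, hi) for v in range(lo, hi + 1)})
print(roundtrip_ok, distinct)  # True 256
```

The 1-to-1 property itself is easy to achieve; real block ciphers get it too, with far stronger mixing. Anything as simple as the sketch above falls to a handful of known plaintexts.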
|
|
|
|
|
Encryption is not only (and not mainly) about hiding your data behind numbers. It can be clever, but that's not nearly enough... You have to think about the data exchange between the side encrypting and the side decrypting the data. Every encryption scheme has helpers - the keys - and those helpers have to be passed (or already be present) on both sides. Your method signatures tell us nothing about that - from this alone we can't tell how good your encryption is. It also tells us nothing about its speed of execution...
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
modified 23-Mar-14 10:49am.
|
|
|
|
|
Well, I can say that the core methods involve 10 lines of code: one with a conditional (if...then) recursion that at best occurs only once per use, and the rest involving conditional (if...then) mathematics at three levels, making it very fast. Besides, ever heard of lossless compression? Well, I might as well have created perpetual lossless, key-based encryption. Keep in mind that the encryption algorithm is symmetric, so the very idea of having public keys would be a bad idea. The core of my encryption algorithm is essentially a block cipher in which the unencrypted value does not grow or shrink in size when encrypted. Also, keep in mind that it can not only encrypt bulk data, it can also use bulk keys. And one day I will show that I can deliver the goods. I just have to find a way to afford to pay LegalZoom $1,000 to patent it.
"In cryptography, a block cipher is a deterministic algorithm operating on fixed-length groups of bits, called blocks, with an unvarying transformation that is specified by a symmetric key. Block ciphers are important elementary components in the design of many cryptographic protocols, and are widely used to implement encryption of bulk data."
--Quoted from Wikipedia from Block cipher
|
|
|
|
|
I'm not really interested in the details of your code; what I'm trying to tell you is that your core methods are just the core... It's a very good thing that it performs well, but recursion always turns on a red light with me... That kind of method needs intensive testing with different kinds of data...
Daniel Mullarkey wrote: encryption algorithm is symmetrical According to your previous post - with the method signatures - that's true only if you have the key, min, and max values... So you still have to consider how you share that info between the parties...
So before you spend that $1,000, try to fill all the gaps (and maybe more than I listed). You might look into existing protocols for info on how they are built...
http://en.wikipedia.org/wiki/Cryptographic_protocol[^]
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
modified 23-Mar-14 11:04am.
|
|
|
|
|
All encryption is lossless? You have to be able to undo it back to the original :P
|
|
|
|
|
I suspect you really need to study the principles of encryption. Any encryption system worth its salt should be language neutral, and able to handle any stream of bytes rather than just integers.
|
|
|
|
|
Bytes can be transformed into integers easily enough using any modern programming language and since I have already succeeded with strings and characters, bytes are on their way. :8)
|
|
|
|
|
Daniel Mullarkey wrote: Bytes can be transformed into integers easily enough using any modern programming language And how do you decide how many bytes make up each integer? There is no inherent correspondence between the two types. In reality, any encryption/decryption system works purely on a stream of bytes; the actual values of individual groups have no relevance.
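The grouping problem is easy to see in code: the same four bytes give different integers depending on how many bytes you group and which byte order you assume (a neutral illustration, not anyone's actual scheme):

```python
data = bytes([0x01, 0x02, 0x03, 0x04])

# Grouped as two 16-bit integers vs one 32-bit integer: different values
pairs = [int.from_bytes(data[i:i + 2], "big") for i in (0, 2)]
whole = int.from_bytes(data, "big")
print(pairs)  # [258, 772]
print(whole)  # 16909060

# Byte order matters too: same bytes, different integer
print(int.from_bytes(data, "little"))  # 67305985
```

So any byte-to-integer conversion is a convention the encoder and decoder must agree on in advance; it adds nothing to the security of the scheme itself.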
|
|
|
|
|
Would you believe that I used conversion type functions?
|
|
|
|
|
I would believe anything, but what does that have to do with producing an efficient and valid encryption algorithm?
|
|
|
|
|
Nothing, in and of itself. What makes my encryption algorithm unique is that, unlike other encryption algorithms, the sets of all unencrypted values, encrypted values, and key values are electronically indistinguishable from one another, while each unencrypted value and its corresponding encrypted value are still uniquely different from one another.
modified 5-Apr-14 10:21am.
|
|
|
|
|
So you keep telling us, but that proves nothing. Your algorithm needs to be tested to destruction by experts in the field, before anyone is going to pay you any money for it.
|
|
|
|
|
That point I will concede. Nonetheless, I was planning on licensing it out, rather than selling the patent wholesale.
|
|
|
|