I made this library so that 128-bit (or larger) integers can be calculated. Frankly, I wrote it mostly to satisfy myself. The range of 32-bit and 64-bit integers felt too narrow to me. Of course, that range is enough for most purposes, but I wanted a wider one. Most people know that the largest built-in type is a 64-bit integer, and they say a 128-bit integer is not possible. I used to think the same.
Then I studied logic circuits and found that this kind of arithmetic can be expressed in Boolean algebra. In Boolean logic, the sum bit s and the carry-out c' of a full adder are expressed as follows:
s = (a xor b) xor c
c' = (a and b) or ((a xor b) and c)
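The sum and carry equations above can be turned directly into a ripple-carry adder over little-endian bit arrays. Here is a minimal sketch of that idea (a hypothetical helper written for this article, not the library's actual code):

```csharp
public static class RippleCarry
{
    // Ripple-carry addition over two little-endian bit arrays of equal
    // length, applying the full-adder equations bit by bit:
    //   s  = (a xor b) xor c
    //   c' = (a and b) or ((a xor b) and c)
    public static bool[] Add(bool[] a, bool[] b)
    {
        var sum = new bool[a.Length];
        bool carry = false;
        for (int i = 0; i < a.Length; i++)
        {
            bool halfSum = a[i] ^ b[i];                // a xor b
            sum[i] = halfSum ^ carry;                  // s = (a xor b) xor c
            carry = (a[i] & b[i]) | (halfSum & carry); // carry into the next bit
        }
        return sum; // the final carry out is discarded (wrap-around overflow)
    }
}
```

For example, adding 5 (stored as { true, false, true, false }) and 3 ({ true, true, false, false }) yields { false, false, false, true }, which is 8.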
I then implemented binary addition, and since that made the rest easier, subtraction, multiplication, and division as well. I also wrote the bit-shift logic, both left shift and right shift. The library stores its bits in little-endian order, so it is compatible with all of C#'s integer types. I am posting this article, along with the code I wrote, in the hope of getting your advice.
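With little-endian storage (bit 0 at index 0), the shift operations reduce to moving bits between array indices. The following is a minimal sketch of that approach, again hypothetical rather than the library's actual shift code:

```csharp
public static class BitShift
{
    // Left shift on a little-endian bit array: bit i moves to index i + n,
    // and the vacated low bits become false (zero).
    public static bool[] ShiftLeft(bool[] bits, int n)
    {
        var result = new bool[bits.Length];
        for (int i = bits.Length - 1; i >= n; i--)
            result[i] = bits[i - n];
        return result;
    }

    // Logical right shift: bit i moves to index i - n, high bits become false.
    public static bool[] ShiftRight(bool[] bits, int n)
    {
        var result = new bool[bits.Length];
        for (int i = 0; i + n < bits.Length; i++)
            result[i] = bits[i + n];
        return result;
    }
}
```

With 3 stored as { true, true, false, false }, ShiftLeft(bits, 1) yields { false, true, true, false }, that is 6, the same result as 3 << 1 on a built-in int.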
There is a class diagram below. It is quite messy and contains unused members; they will be removed in later updates or releases. :=/
Using the code
You can define your own n-bit integer like this...
IntNBit int512 = new IntNBit(512);
In other words:
new IntNBit( BIT COUNT YOU WANT TO USE );
IntNBit defines many operators. I will show them next.
You can use the + operator like this; it works the same as with a built-in int:
int512 = int512 + 1024;
You can use the += operator like this:
int512 += 1024;
The increment operator (++) is also available.
You can compare with other integer variables:
if(int512 > 1024)...
So you can use the IntNBit class easily. In addition, almost all operators are compatible with C#'s built-in integer types. However, there is a serious bug: an exception occurs when you try to compute with two integers of different bit sizes. As a temporary workaround, I added a way to generate numbers of different sizes from a string of bits. This will be fixed when I update this article later.
How to generate an IntNBit instance from a string of bits
public static IntNBit ParseBitString(string bitString, int bitCounts);
This function generates an instance of IntNBit from a string of bits. For example, to generate a 512-bit integer from the bit string "001010111", call it as follows:
IntNBit myInteger = IntNBit.ParseBitString( "001010111", 512);
And you can generate a string of bits with this function:
public string ToBitString()
This function is faster than ToString() because it only appends characters to a string, one per bit. The ToString() function, on the other hand, must internally perform complex operations to produce a decimal number, so it is very slow. It is the most annoying function to optimize.
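To see why the decimal conversion is so costly, here is a sketch of the schoolbook double-and-add method that such a conversion has to perform (hypothetical code, not the library's implementation): every input bit forces a pass over every decimal digit produced so far.

```csharp
public static class BitsToDecimal
{
    // Converts a little-endian bit array to a decimal string: starting
    // from the most significant bit, multiply the decimal digits
    // accumulated so far by two and add the current bit. Each of the
    // n input bits requires a pass over all decimal digits, so the
    // cost grows roughly quadratically in the bit count.
    public static string Convert(bool[] bits)
    {
        var digits = new System.Collections.Generic.List<int> { 0 }; // little-endian decimal digits
        for (int i = bits.Length - 1; i >= 0; i--)   // walk from the most significant bit
        {
            int carry = bits[i] ? 1 : 0;
            for (int d = 0; d < digits.Count; d++)
            {
                int v = digits[d] * 2 + carry;
                digits[d] = v % 10;
                carry = v / 10;
            }
            if (carry > 0) digits.Add(carry);
        }
        var sb = new System.Text.StringBuilder();
        for (int d = digits.Count - 1; d >= 0; d--)
            sb.Append((char)('0' + digits[d]));
        return sb.ToString();
    }
}
```

For example, the bits of 13 stored little-endian as { true, false, true, true } convert to the string "13".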
You can also access the bit array directly, but you must follow the read/write rules below. If you don't, your software will not run correctly or will throw an exception.
int512.InternalRegister.RsGates[BIT NUMBER].Input.State = WHAT YOU WANT TO WRITE;
bool bitValue = int512.InternalRegister.RsGates[BIT NUMBER].Output.State;
How easy is that! But...
Many supporting classes were written to implement the IntNBit class. Some of them are not currently useful, but I have not removed them because they might be needed later. Below I show one of the three included samples.
Integer 128 Virtualization Speed Meter
Points of Interest
My code stores the individual bits of a 512-bit or 128-bit integer in an array. When the number is rendered in a form people can read, the bit array is not assembled into a native integer; the code simply joins characters one by one, driven by simulated bit operations. Please look forward to future updates: I plan to implement simulated floating-point operations, and I hope to build a simulated FPU that can compute floating-point numbers of more than 256 bits. Your praise and a word of advice give me strength and vitality. Thank you.
P.S. I also wrote a copyright text; please ignore it. It is only decoration. :=)