
Neural Cryptography

29 Aug 2009, CPOL
This article presents a new cryptography algorithm based on neural networks. Here, you can find some theory and a demo project.

1. Abstract

There are many cryptography algorithms nowadays, some more secure than others. All of them can be split into symmetric and asymmetric cryptography. In symmetric cryptography, the sender and receiver use a shared key to encode and decode the plain text. In asymmetric algorithms, each user has a private and a public key. Both methods have advantages and disadvantages in terms of speed and level of security; I will just say that symmetric algorithms are a lot faster than asymmetric ones, but they do need a shared key. How can two parties agree on a shared key over a public channel while protecting it against opponents? There are many ways to do this, more or less effective, but I want to offer a rather new one: neural cryptography.

2. Requirements

  • Familiarity with Delphi
  • Some math knowledge about basic neural networks

3. Introduction

There are two guys, Alex and Boris, and an insecure channel (for example, ICQ). They want to send some top-secret information to each other. They can't use RSA-like asymmetric algorithms, and they can't meet to create a secret key for their messages. What should they do? The answer is to build neural networks, one for each. Then, they should synchronize their networks, and the weights will be the secret key. This article implements a basic neural cryptography algorithm and a demo project written in Delphi to show its fantastic effect. It should be interesting for developers, cryptanalysts, and anyone who wants to make their communication secure.

4. Background

This part explains the neural network theory you need to understand Section 5.

[Figure 1: a simple feed-forward network with input vector X, weights W, a hidden layer of units σ, and output τ]

Here is a simple neural network of the type developed by Rosenblatt in 1958. It consists of an input vector X, a hidden layer of units σ, weight coefficients W between the input vector and the hidden layer, and an activation rule that counts the result value τ. Let's call such a neural network a neural machine. It can be described by three parameters: K, the number of hidden neurons; N, the number of input neurons connected to each hidden neuron; and L, the maximum absolute value of a weight, so that every weight lies in {-L..+L}. Two partners have the same neural machines. To synchronize them, they should execute the following algorithm:

[Figure 2: the synchronization scheme — both partners generate the same random input vector, count their output values, exchange the outputs over the public channel, update the weights when the outputs agree, and repeat until the machines are synchronized]

To count the output value, we use a simple method:

    σ_k = sgn( Σ_{j=1..N} w_k,j · x_k,j ),   k = 1..K
    τ = σ_1 · σ_2 · ... · σ_K
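For readers who want to experiment outside Delphi, the output computation can be sketched in a few lines of Python (illustrative only; the function and variable names are my own, and the convention sgn(0) = +1 is an assumption, since the article does not specify how a zero sum is handled):

```python
import numpy as np

def tpm_output(w, x):
    """Output of a Tree Parity Machine.

    w, x: integer arrays of shape (K, N), one row per hidden neuron.
    Returns (tau, sigma): the machine's output and the hidden-unit signs.
    """
    local_fields = (w * x).sum(axis=1)          # sum_j w[k, j] * x[k, j]
    sigma = np.where(local_fields >= 0, 1, -1)  # sgn, with sgn(0) = +1 assumed
    tau = int(sigma.prod())                     # tau is the product of all sigma_k
    return tau, sigma
```

For example, with K = 2 hidden neurons of N = 2 inputs each, weights [[1, -1], [2, 1]] and inputs [[1, 1], [-1, 1]] give sigma = [1, -1] and tau = -1.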

How do we update the weights? We update the weights only if the output values of the neural machines are equal. There are three different rules:

Hebbian learning rule:

    w_i+ = g( w_i + x_i · τ · Θ(σ_i, τ) · Θ(τ_A, τ_B) )

Anti-Hebbian learning rule:

    w_i+ = g( w_i - x_i · τ · Θ(σ_i, τ) · Θ(τ_A, τ_B) )

Random-walk learning rule:

    w_i+ = g( w_i + x_i · Θ(σ_i, τ) · Θ(τ_A, τ_B) )

Here, Θ is a special function: Θ(a, b) = 0 if a ≠ b, else Θ(a, b) = 1. The g(...) function keeps the weights in the range -L..+L, x is the input vector, and w is the weights vector. After the machines are synchronized, their weights are equal, and we can use them to construct a shared key. In [3], there is a lot of information about attacks on this algorithm; with suitably chosen parameters (in particular, a large enough L), a passive attacker is very unlikely to synchronize in time, which is what makes the key exchange secure in practice.
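The Hebbian rule with the article's Θ and g can be sketched in Python as follows (an illustration under my own naming, not the article's Delphi code; the other two rules differ only in the sign of the increment and, for the random walk, in dropping the factor τ):

```python
import numpy as np

def hebbian_update(w, x, sigma, tau_a, tau_b, L):
    """One Hebbian step for one machine; w, x have shape (K, N), sigma shape (K,).

    Theta(tau_a, tau_b): the update happens only when both outputs agree.
    Theta(sigma_k, tau): only hidden units that voted with the output move.
    g(...): np.clip keeps every weight inside {-L .. +L}.
    """
    if tau_a != tau_b:
        return w                      # Theta(tau_a, tau_b) = 0: no update
    mask = (sigma == tau_a)[:, None]  # Theta(sigma_k, tau), one flag per hidden unit
    return np.clip(w + x * tau_a * mask, -L, L)
```

Note how the clipping implements g: a weight already at +L that would be pushed to L+1 simply stays at +L.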

5. Implementation

In this section, I describe how to program neural machines and show how to use the Delphi unit NeuroCrypt.pas.

The main object in this unit is tTPM (TPM stands for Tree Parity Machine, as in [3]). It contains two vectors: H and W. H is used for internal operations while counting the result value; W contains the weights. There are also four integer fields: K, L, N, and TPOutput. Here is the interface of the tTPM, tVector, and tInputVector types:

type
  tVector = array of integer;

  tTPM = object
    w, h: tVector;        // weights and internal hidden-unit values
    K, N, L: integer;     // machine parameters
    TPOutput: integer;    // last counted output value
    procedure InitAll;
    procedure CountResult(X: tVector);
    procedure UpdateWeight(X: tVector);
    procedure RandomWeight;
    function VectorValue: integer;
  end;

  tInputVector = object
    X: tVector;
    procedure FormRandomVector(K, N: integer);
  end;

I think the names of the procedures speak for themselves. You can also download the source code from this site, so there is no need to explain every line; here I just show how to use it. First of all, we initialize the neural machine's parameters:

var
  A: tTPM;
~~~~~~~~~~~~~~~~~~~~~
with A do
begin
  K := SpinEdit1.Value;  // number of hidden neurons
  N := SpinEdit2.Value;  // inputs per hidden neuron
  L := SpinEdit3.Value;  // weight range {-L..+L}
  InitAll;
  RandomWeight;          // start from random secret weights
end;

Then, in every iteration, we should produce a random input vector, count the output value, and send it to partner B, who does the same. In the demo, both machines live in the same program:

var
  Inp: tInputVector;
~~~~~~~~~~~~~~~~~~~~~
Inp.FormRandomVector(B.K, B.N);  // the input vector is public and common to both machines
A.CountResult(Inp.X);
B.CountResult(Inp.X);
if A.TPOutput = B.TPOutput then  // update the weights only when the outputs agree
begin
  A.UpdateWeight(Inp.X);
  B.UpdateWeight(Inp.X);
end;
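As a sanity check, the whole synchronization loop can be mirrored in a short, self-contained Python simulation (illustrative only; the names, the parameter values, and the SHA-256 key-derivation step at the end are my own additions, not part of NeuroCrypt.pas):

```python
import hashlib

import numpy as np

def sync_demo(K=3, N=4, L=3, max_steps=50000, seed=1):
    """Synchronize two Tree Parity Machines with the Hebbian rule.

    Returns (steps, weights) once both weight matrices are equal,
    or (None, None) if max_steps is exceeded.
    """
    rng = np.random.default_rng(seed)
    wa = rng.integers(-L, L + 1, size=(K, N))  # A's secret weights
    wb = rng.integers(-L, L + 1, size=(K, N))  # B's secret weights
    for step in range(max_steps):
        x = rng.choice([-1, 1], size=(K, N))   # public random input vector
        sa = np.where((wa * x).sum(axis=1) >= 0, 1, -1)
        sb = np.where((wb * x).sum(axis=1) >= 0, 1, -1)
        ta, tb = int(sa.prod()), int(sb.prod())
        if ta == tb:  # only the outputs travel over the public channel
            wa = np.clip(wa + x * ta * (sa == ta)[:, None], -L, L)
            wb = np.clip(wb + x * tb * (sb == tb)[:, None], -L, L)
        if np.array_equal(wa, wb):
            return step + 1, wa
    return None, None

steps, w_shared = sync_demo()
if steps is not None:
    # Hash the synchronized weights into a fixed-length shared key
    shared_key = hashlib.sha256(w_shared.tobytes()).hexdigest()
```

For small parameters like these, synchronization typically takes at most a few hundred iterations; an eavesdropper who sees only x, ta, and tb cannot tell which hidden units were updated.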

Here is a screenshot of the demo project based on NeuroCrypt:

[Screenshot: the NeuroCrypt demo application]

6. Future Work

  • Solve the problem of man-in-the-middle attacks, where an attacker can change the messages
  • Build an add-on for an ICQ/MSN client to secure messages
  • Add file transfer procedures

7. References

  • [1] N. Prabakaran, "A New Technique on Neural Cryptography with Securing of Electronic Medical Records in Telemedicine System", Department of Mathematics, Anna University, 2008
  • [2] Wenwu Yu, "Cryptography based on delayed chaotic neural networks", Department of Mathematics, Southeast University, 2006
  • [3] Andreas Ruttor, "Neural Synchronization and Cryptography", PhD thesis, Bayerische Julius-Maximilians-Universität Würzburg, 2006

8. History

  • 18th August, 2009: Initial post.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)