ReInventing Neural Networks
Neural Networks can do a lot of amazing things, and in this series you'll learn how to build one from the ground up. You might be surprised how easy it is to develop one from scratch!
The Full Series
- Part 1: We create the whole NeuralNetwork class from scratch.
- Part 2: We create an environment in Unity in order to test the neural network within that environment.
- Part 3: We make a great improvement to the neural network already created by adding a new type of mutation to the code.
Introduction
Before you start, I want you to know that I'm switching to YouTube:
Over the past two decades, Machine Learning has become one of the mainstays of information technology and, with that, a rather central, albeit usually hidden, part of our lives. With ever increasing amounts of data becoming available, there is good reason to believe that smart data analysis will become even more pervasive as a necessary ingredient for technological progress. Recently, Neural Networks have been taking over much of the Machine Learning scene, yet I noticed there was a lack of tutorials explaining how to implement a Neural Network from scratch, so I thought I should write one!
Background
In this article, you're going to learn the core fundamentals of Neural Networks, how to implement one in pure C#, and how to train it using genetic mutation. You need basic C# programming knowledge and a basic understanding of Object Oriented Programming before going through this article. Note that the Neural Network is going to learn through unsupervised learning/mutation; supervised learning/backpropagation is not covered in this article. It is, however, one of my top priorities.
Understanding Neural Networks
If you have no idea how neural networks work, I suggest you watch this video made by 3Blue1Brown:
I did not make that video, and I don't think I'll ever be able to explain Neural Networks in a more intuitive way. Note that, in the video, the Sigmoid activation function was introduced, but I'm going to use "Leaky ReLU" instead because it is faster to train and avoids the dead-neuron problem that the plain ReLU has. Furthermore, the bias explained in the video is going to be represented as a neuron that always has a value of 1.
You can also watch the video posted earlier at the beginning of the article. I made that one, and it's using Leaky ReLU as an activation function.
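To make the difference concrete, here is a minimal illustrative sketch of the activation functions mentioned above. The function names are mine and the snippet assumes it lives inside a class with using System; available; only the x / 20 slope for negative inputs matches the implementation used later in this article.
// Illustrative only: the activation functions discussed above.
// Sigmoid squashes any value into the range (0, 1).
static double Sigmoid(double x) { return 1.0 / (1.0 + Math.Exp(-x)); }
// Plain ReLU zeroes negative inputs, which is what causes "dead neurons".
static double ReLU(double x) { return x >= 0 ? x : 0; }
// Leaky ReLU keeps a small slope for negative inputs (x / 20 here, as used later in this article).
static double LeakyReLU(double x) { return x >= 0 ? x : x / 20; }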
Using the Code
Let's assume we want to make a neural network like this one:
It would be good practice to split the network into a group of sections, like this:
This way, a neural network is just an array of NeuralSections. In the NeuralNetwork.cs script, we should first import the needed namespaces:
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
Then, let's declare the NeuralNetwork class:
public class NeuralNetwork
{
...
}
The NeuralNetwork class should start with:
public UInt32[] Topology // Returns the topology in the form of an array
{
get
{
UInt32[] Result = new UInt32[TheTopology.Count];
TheTopology.CopyTo(Result, 0);
return Result;
}
}
ReadOnlyCollection<UInt32> TheTopology; // Contains the topology of the NeuralNetwork
NeuralSection[] Sections; // Contains all the sections of the NeuralNetwork
Random TheRandomizer; // The Random instance used to mutate the NeuralNetwork
Then, the constructor should ensure that the inputs are valid, initialize TheRandomizer, set the topology of the network, initialize all the sections of the network, and construct each of these sections:
/// <summary>
/// Initiates a NeuralNetwork from a Topology and a Seed.
/// </summary>
/// <param name="Topology">The Topology of the Neural Network.</param>
/// <param name="Seed">The Seed of the Neural Network.
/// Set to 'null' to use a Timed Seed.</param>
public NeuralNetwork (UInt32[] Topology, Int32? Seed = 0)
{
// Validation Checks
if (Topology.Length < 2)
throw new ArgumentException("A Neural Network cannot contain less than 2 Layers.",
"Topology");
for (int i = 0; i < Topology.Length; i++)
{
if(Topology[i] < 1)
throw new ArgumentException("A single layer of neurons must contain,
at least, one neuron.", "Topology");
}
// Initialize Randomizer
if (Seed.HasValue)
TheRandomizer = new Random(Seed.Value);
else
TheRandomizer = new Random();
// Set Topology
TheTopology = new List<uint>(Topology).AsReadOnly();
// Initialize Sections
Sections = new NeuralSection[TheTopology.Count - 1];
// Set the Sections
for (int i = 0; i < Sections.Length; i++)
{
Sections[i] = new NeuralSection(TheTopology[i], TheTopology[i + 1], TheRandomizer);
}
}
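As a quick usage illustration (the seed value here is arbitrary), a 2-2-1 network like the one used for the XOR example at the end of this article could be created like this:
// Create a network with 2 input neurons, 2 hidden neurons and 1 output neuron.
// A fixed seed makes the run reproducible; pass 'null' to seed from the clock instead.
NeuralNetwork Network = new NeuralNetwork(new uint[] { 2, 2, 1 }, 12345);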
Another constructor that can clone NeuralNetworks should be present in order to make training of offspring possible:
/// <summary>
/// Initiates an independent Deep-Copy of the Neural Network provided.
/// </summary>
/// <param name="Main">The Neural Network that should be cloned.</param>
public NeuralNetwork (NeuralNetwork Main)
{
// Initialize Randomizer
TheRandomizer = new Random(Main.TheRandomizer.Next());
// Set Topology
TheTopology = Main.TheTopology;
// Initialize Sections
Sections = new NeuralSection[TheTopology.Count - 1];
// Set the Sections
for (int i = 0; i < Sections.Length; i++)
{
Sections[i] = new NeuralSection (Main.Sections[i]);
}
}
Then, there is the FeedForward function. It takes the input array, makes sure it is valid, passes it through all the sections, and returns the output of the final section:
/// <summary>
/// Feed Input through the NeuralNetwork and Get the Output.
/// </summary>
/// <param name="Input">The values to set the Input Neurons.</param>
/// <returns>The values in the output neurons after propagation.</returns>
public double[] FeedForward(double[] Input)
{
// Validation Checks
if (Input == null)
throw new ArgumentException("The input array cannot be set to null.", "Input");
else if (Input.Length != TheTopology[0])
throw new ArgumentException
("The input array's length does not match the number of neurons
in the input layer.", "Input");
double[] Output = Input;
// Feed values through all sections
for (int i = 0; i < Sections.Length; i++)
{
Output = Sections[i].FeedForward(Output);
}
return Output;
}
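For illustration, feeding two arbitrary values through the 2-2-1 network created in the earlier example could look like this:
// The input array's length must match the input layer (2 neurons here).
double[] Output = Network.FeedForward(new double[] { 0, 1 });
// The output layer has a single neuron, so Output holds exactly one value.
Console.WriteLine(Output[0]);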
Now, we need to give the developer the ability to mutate NeuralNetworks. The Mutate function just mutates each section independently:
/// <summary>
/// Mutate the NeuralNetwork.
/// </summary>
/// <param name="MutationProbablity">The probability that a weight is going to be mutated.
/// (Ranges 0-1)</param>
/// <param name="MutationAmount">
/// The maximum amount a mutated weight would change.</param>
public void Mutate (double MutationProbablity = 0.3, double MutationAmount = 2.0)
{
// Mutate each section
for (int i = 0; i < Sections.Length; i++)
{
Sections[i].Mutate(MutationProbablity, MutationAmount);
}
}
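Combined with the copy constructor shown earlier, this is all a simple hill-climbing loop needs; a minimal sketch (the variable names are mine, not part of the class) looks like this:
// Clone the current best network and mutate only the clone,
// so the original is kept if the clone turns out to perform worse.
NeuralNetwork Child = new NeuralNetwork(BestNetwork);
// Each weight has a 30% chance of being replaced by a new random value in [-2, 2].
Child.Mutate(0.3, 2.0);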
This is how the NeuralNetwork class should look right now:
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
public class NeuralNetwork
{
public UInt32[] Topology // Returns the topology in the form of an array
{
get
{
UInt32[] Result = new UInt32[TheTopology.Count];
TheTopology.CopyTo(Result, 0);
return Result;
}
}
ReadOnlyCollection<UInt32> TheTopology; // Contains the topology of the NeuralNetwork
NeuralSection[] Sections; // Contains all the sections of the NeuralNetwork
Random TheRandomizer; // The Random instance used to mutate the NeuralNetwork
/// <summary>
/// Initiates a NeuralNetwork from a Topology and a Seed.
/// </summary>
/// <param name="Topology">The Topology of the Neural Network.</param>
/// <param name="Seed">The Seed of the Neural Network.
/// Set to 'null' to use a Timed Seed.</param>
public NeuralNetwork (UInt32[] Topology, Int32? Seed = 0)
{
// Validation Checks
if (Topology.Length < 2)
throw new ArgumentException("A Neural Network cannot contain less than 2 Layers.",
"Topology");
for (int i = 0; i < Topology.Length; i++)
{
if(Topology[i] < 1)
throw new ArgumentException
("A single layer of neurons must contain, at least, one neuron.", "Topology");
}
// Initialize Randomizer
if (Seed.HasValue)
TheRandomizer = new Random(Seed.Value);
else
TheRandomizer = new Random();
// Set Topology
TheTopology = new List<uint>(Topology).AsReadOnly();
// Initialize Sections
Sections = new NeuralSection[TheTopology.Count - 1];
// Set the Sections
for (int i = 0; i < Sections.Length; i++)
{
Sections[i] = new NeuralSection
(TheTopology[i], TheTopology[i + 1], TheRandomizer);
}
}
/// <summary>
/// Initiates an independent Deep-Copy of the Neural Network provided.
/// </summary>
/// <param name="Main">The Neural Network that should be cloned.</param>
public NeuralNetwork (NeuralNetwork Main)
{
// Initialize Randomizer
TheRandomizer = new Random(Main.TheRandomizer.Next());
// Set Topology
TheTopology = Main.TheTopology;
// Initialize Sections
Sections = new NeuralSection[TheTopology.Count - 1];
// Set the Sections
for (int i = 0; i < Sections.Length; i++)
{
Sections[i] = new NeuralSection (Main.Sections[i]);
}
}
/// <summary>
/// Feed Input through the NeuralNetwork and Get the Output.
/// </summary>
/// <param name="Input">The values to set the Input Neurons.</param>
/// <returns>The values in the output neurons after propagation.</returns>
public double[] FeedForward(double[] Input)
{
// Validation Checks
if (Input == null)
throw new ArgumentException("The input array cannot be set to null.", "Input");
else if (Input.Length != TheTopology[0])
throw new ArgumentException
("The input array's length does not match the number of neurons
in the input layer.",
"Input");
double[] Output = Input;
// Feed values through all sections
for (int i = 0; i < Sections.Length; i++)
{
Output = Sections[i].FeedForward(Output);
}
return Output;
}
/// <summary>
/// Mutate the NeuralNetwork.
/// </summary>
/// <param name="MutationProbablity">
/// The probability that a weight is going to be mutated. (Ranges 0-1)</param>
/// <param name="MutationAmount">The maximum amount a mutated weight would change.
/// </param>
public void Mutate (double MutationProbablity = 0.3, double MutationAmount = 2.0)
{
// Mutate each section
for (int i = 0; i < Sections.Length; i++)
{
Sections[i].Mutate(MutationProbablity, MutationAmount);
}
}
}
Now that we have implemented the NeuralNetwork class, it's time to implement the NeuralSection class:
public class NeuralNetwork
{
...
private class NeuralSection
{
...
}
...
}
Each NeuralSection should contain these member variables:
private double[][] Weights; // Contains all the weights of the section where [i][j]
// represents the weight from neuron i in the input layer
// to neuron j in the output layer
private Random TheRandomizer; // Contains a reference to the
// Random instance of the NeuralNetwork
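Purely as an illustration of that layout, a section with 2 input neurons and 3 output neurons would allocate its weights like this:
// 2 input neurons + 1 bias neuron = 3 rows; each row holds one weight per output neuron.
// Weights[0][j] and Weights[1][j] connect input neurons 0 and 1 to output neuron j,
// while Weights[2][j] is the bias weight feeding output neuron j.
double[][] Weights = new double[2 + 1][];
for (int i = 0; i < Weights.Length; i++)
    Weights[i] = new double[3];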
The NeuralSection class should also contain two constructors:
/// <summary>
/// Initiate a NeuralSection from a topology and a seed.
/// </summary>
/// <param name="InputCount">The number of input neurons in the section.</param>
/// <param name="OutputCount">The number of output neurons in the section.</param>
/// <param name="Randomizer">The Ransom instance of the NeuralNetwork.</param>
public NeuralSection(UInt32 InputCount, UInt32 OutputCount, Random Randomizer)
{
// Validation Checks
if (InputCount == 0)
throw new ArgumentException
("You cannot create a Neural Layer with no input neurons.", "InputCount");
else if (OutputCount == 0)
throw new ArgumentException
("You cannot create a Neural Layer with no output neurons.", "OutputCount");
else if (Randomizer == null)
throw new ArgumentException
("The randomizer cannot be set to null.", "Randomizer");
// Set Randomizer
TheRandomizer = Randomizer;
// Initialize the Weights array
Weights = new double[InputCount + 1][]; // +1 for the Bias Neuron
for (int i = 0; i < Weights.Length; i++)
Weights[i] = new double[OutputCount];
// Set random weights
for (int i = 0; i < Weights.Length; i++)
for (int j = 0; j < Weights[i].Length; j++)
Weights[i][j] = TheRandomizer.NextDouble() - 0.5f;
}
/// <summary>
/// Initiates an independent Deep-Copy of the NeuralSection provided.
/// </summary>
/// <param name="Main">The NeuralSection that should be cloned.</param>
public NeuralSection(NeuralSection Main)
{
// Set Randomizer
TheRandomizer = Main.TheRandomizer;
// Initialize Weights
Weights = new double[Main.Weights.Length][];
for (int i = 0; i < Weights.Length; i++)
Weights[i] = new double[Main.Weights[0].Length];
// Set Weights
for (int i = 0; i < Weights.Length; i++)
{
for (int j = 0; j < Weights[i].Length; j++)
{
Weights[i][j] = Main.Weights[i][j];
}
}
}
Now comes the FeedForward function that does all the propagation magic:
/// <summary>
/// Feed input through the NeuralSection and get the output.
/// </summary>
/// <param name="Input">The values to set the input neurons.</param>
/// <returns>The values in the output neurons after propagation.</returns>
public double[] FeedForward(double[] Input)
{
// Validation Checks
if (Input == null)
throw new ArgumentException
("The input array cannot be set to null.", "Input");
else if (Input.Length != Weights.Length - 1)
throw new ArgumentException("The input array's length
does not match the number of neurons in the input layer.", "Input");
// Initialize Output Array
double[] Output = new double[Weights[0].Length];
// Calculate Value
for (int i = 0; i < Weights.Length; i++)
{
for (int j = 0; j < Weights[i].Length; j++)
{
if (i == Weights.Length - 1) // If this is the bias neuron
Output[j] += Weights[i][j]; // Its value is always 1, so just add the weight
else
Output[j] += Weights[i][j] * Input[i];
}
}
// Apply Activation Function
for (int i = 0; i < Output.Length; i++)
Output[i] = ReLU(Output[i]);
// Return Output
return Output;
}
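To make the loop above concrete: for a section with 2 input neurons, the pre-activation value of output neuron j is

Output[j] = Weights[0][j] * Input[0] + Weights[1][j] * Input[1] + Weights[2][j]

The last term comes from the bias row; its "input" is always 1, so its weight is simply added. The sum is then passed through the activation function.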
As we have done in the NeuralNetwork class, there should be a Mutate function in the NeuralSection class too. Note that when a weight is selected for mutation, it is replaced with a completely new random value in the range [-MutationAmount, MutationAmount] rather than being nudged from its current value:
/// <summary>
/// Mutate the NeuralSection.
/// </summary>
/// <param name="MutationProbablity">The probability that
/// a weight is going to be mutated. (Ranges 0-1)</param>
/// <param name="MutationAmount">The maximum amount a Mutated Weight would change.
/// </param>
public void Mutate (double MutationProbablity, double MutationAmount)
{
for (int i = 0; i < Weights.Length; i++)
{
for (int j = 0; j < Weights[i].Length; j++)
{
if (TheRandomizer.NextDouble() < MutationProbablity)
Weights[i][j] = TheRandomizer.NextDouble() *
(MutationAmount * 2) - MutationAmount;
}
}
}
Finally, we need to add our activation function to the NeuralSection class. It is named ReLU in the code, but since negative inputs are divided by 20 instead of being zeroed, it is actually the Leaky ReLU variant mentioned earlier:
/// <summary>
/// Puts a double through the activation function ReLU.
/// </summary>
/// <param name="x">The value to put through the function.</param>
/// <returns>x after it is put through ReLU.</returns>
private double ReLU(double x)
{
if (x >= 0)
return x;
else
return x / 20;
}
This way, the script should end up looking like this:
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
public class NeuralNetwork
{
public UInt32[] Topology // Returns the topology in the form of an array
{
get
{
UInt32[] Result = new UInt32[TheTopology.Count];
TheTopology.CopyTo(Result, 0);
return Result;
}
}
ReadOnlyCollection<UInt32> TheTopology; // Contains the topology of the NeuralNetwork
NeuralSection[] Sections; // Contains all the sections of the NeuralNetwork
Random TheRandomizer; // The Random instance used to mutate the NeuralNetwork
private class NeuralSection
{
private double[][] Weights; // Contains all the weights of the section
// where [i][j] represents the weight from neuron i in the
// input layer to neuron j in the output layer
private Random TheRandomizer; // Contains a reference to the Random instance
// of the NeuralNetwork
/// <summary>
/// Initiate a NeuralSection from a topology and a seed.
/// </summary>
/// <param name="InputCount">The number of input neurons in the section.</param>
/// <param name="OutputCount">The number of output neurons in the section.</param>
/// <param name="Randomizer">The Ransom instance of the NeuralNetwork.</param>
public NeuralSection(UInt32 InputCount, UInt32 OutputCount, Random Randomizer)
{
// Validation Checks
if (InputCount == 0)
throw new ArgumentException
("You cannot create a Neural Layer with no input neurons.", "InputCount");
else if (OutputCount == 0)
throw new ArgumentException
("You cannot create a Neural Layer with no output neurons.", "OutputCount");
else if (Randomizer == null)
throw new ArgumentException
("The randomizer cannot be set to null.", "Randomizer");
// Set Randomizer
TheRandomizer = Randomizer;
// Initialize the Weights array
Weights = new double[InputCount + 1][]; // +1 for the Bias Neuron
for (int i = 0; i < Weights.Length; i++)
Weights[i] = new double[OutputCount];
// Set random weights
for (int i = 0; i < Weights.Length; i++)
for (int j = 0; j < Weights[i].Length; j++)
Weights[i][j] = TheRandomizer.NextDouble() - 0.5f;
}
/// <summary>
/// Initiates an independent Deep-Copy of the NeuralSection provided.
/// </summary>
/// <param name="Main">The NeuralSection that should be cloned.</param>
public NeuralSection(NeuralSection Main)
{
// Set Randomizer
TheRandomizer = Main.TheRandomizer;
// Initialize Weights
Weights = new double[Main.Weights.Length][];
for (int i = 0; i < Weights.Length; i++)
Weights[i] = new double[Main.Weights[0].Length];
// Set Weights
for (int i = 0; i < Weights.Length; i++)
{
for (int j = 0; j < Weights[i].Length; j++)
{
Weights[i][j] = Main.Weights[i][j];
}
}
}
/// <summary>
/// Feed input through the NeuralSection and get the output.
/// </summary>
/// <param name="Input">The values to set the input neurons.</param>
/// <returns>The values in the output neurons after propagation.</returns>
public double[] FeedForward(double[] Input)
{
// Validation Checks
if (Input == null)
throw new ArgumentException
("The input array cannot be set to null.", "Input");
else if (Input.Length != Weights.Length - 1)
throw new ArgumentException("The input array's length
does not match the number of neurons in the input layer.", "Input");
// Initialize Output Array
double[] Output = new double[Weights[0].Length];
// Calculate Value
for (int i = 0; i < Weights.Length; i++)
{
for (int j = 0; j < Weights[i].Length; j++)
{
if (i == Weights.Length - 1) // If this is the bias neuron
Output[j] += Weights[i][j]; // Its value is always 1, so just add the weight
else
Output[j] += Weights[i][j] * Input[i];
}
}
// Apply Activation Function
for (int i = 0; i < Output.Length; i++)
Output[i] = ReLU(Output[i]);
// Return Output
return Output;
}
/// <summary>
/// Mutate the NeuralSection.
/// </summary>
/// <param name="MutationProbablity">The probability
/// that a weight is going to be mutated. (Ranges 0-1)</param>
/// <param name="MutationAmount">The maximum amount a Mutated Weight would change.
/// </param>
public void Mutate (double MutationProbablity, double MutationAmount)
{
for (int i = 0; i < Weights.Length; i++)
{
for (int j = 0; j < Weights[i].Length; j++)
{
if (TheRandomizer.NextDouble() < MutationProbablity)
Weights[i][j] = TheRandomizer.NextDouble() *
(MutationAmount * 2) - MutationAmount;
}
}
}
/// <summary>
/// Puts a double through the activation function ReLU.
/// </summary>
/// <param name="x">The value to put through the function.</param>
/// <returns>x after it is put through ReLU.</returns>
private double ReLU(double x)
{
if (x >= 0)
return x;
else
return x / 20;
}
}
/// <summary>
/// Initiates a NeuralNetwork from a Topology and a Seed.
/// </summary>
/// <param name="Topology">The Topology of the Neural Network.</param>
/// <param name="Seed">The Seed of the Neural Network.
/// Set to 'null' to use a Timed Seed.</param>
public NeuralNetwork (UInt32[] Topology, Int32? Seed = 0)
{
// Validation Checks
if (Topology.Length < 2)
throw new ArgumentException
("A Neural Network cannot contain less than 2 Layers.", "Topology");
for (int i = 0; i < Topology.Length; i++)
{
if(Topology[i] < 1)
throw new ArgumentException
("A single layer of neurons must contain, at least, one neuron.", "Topology");
}
// Initialize Randomizer
if (Seed.HasValue)
TheRandomizer = new Random(Seed.Value);
else
TheRandomizer = new Random();
// Set Topology
TheTopology = new List<uint>(Topology).AsReadOnly();
// Initialize Sections
Sections = new NeuralSection[TheTopology.Count - 1];
// Set the Sections
for (int i = 0; i < Sections.Length; i++)
{
Sections[i] = new NeuralSection
(TheTopology[i], TheTopology[i + 1], TheRandomizer);
}
}
/// <summary>
/// Initiates an independent Deep-Copy of the Neural Network provided.
/// </summary>
/// <param name="Main">The Neural Network that should be cloned.</param>
public NeuralNetwork (NeuralNetwork Main)
{
// Initialize Randomizer
TheRandomizer = new Random(Main.TheRandomizer.Next());
// Set Topology
TheTopology = Main.TheTopology;
// Initialize Sections
Sections = new NeuralSection[TheTopology.Count - 1];
// Set the Sections
for (int i = 0; i < Sections.Length; i++)
{
Sections[i] = new NeuralSection (Main.Sections[i]);
}
}
/// <summary>
/// Feed Input through the NeuralNetwork and Get the Output.
/// </summary>
/// <param name="Input">The values to set the Input Neurons.</param>
/// <returns>The values in the output neurons after propagation.</returns>
public double[] FeedForward(double[] Input)
{
// Validation Checks
if (Input == null)
throw new ArgumentException("The input array cannot be set to null.", "Input");
else if (Input.Length != TheTopology[0])
throw new ArgumentException("The input array's length
does not match the number of neurons in the input layer.", "Input");
double[] Output = Input;
// Feed values through all sections
for (int i = 0; i < Sections.Length; i++)
{
Output = Sections[i].FeedForward(Output);
}
return Output;
}
/// <summary>
/// Mutate the NeuralNetwork.
/// </summary>
/// <param name="MutationProbablity">The probability
/// that a weight is going to be mutated. (Ranges 0-1)</param>
/// <param name="MutationAmount">The maximum amount a mutated weight would change.
/// </param>
public void Mutate (double MutationProbablity = 0.3, double MutationAmount = 2.0)
{
// Mutate each section
for (int i = 0; i < Sections.Length; i++)
{
Sections[i].Mutate(MutationProbablity, MutationAmount);
}
}
}
Now that we have our implementation ready, we have to try it on something simple. The XOR function will do as a proof of concept. If you don't know what an XOR function is, here is what you should expect:
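0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0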
Because a picture is worth a thousand words, here is how the training process should work:
If we turn that flow chart into code, this is what we should end up with:
using System;
namespace NeuralXOR
{
class Program
{
static void Main(string[] args)
{
int Iteration = 0; // Current Training Iteration
NeuralNetwork BestNetwork = new NeuralNetwork
(new uint[] { 2, 2, 1 }); // The best network currently made
double BestCost = double.MaxValue; // The cost that the
// best network achieved
double[] BestNetworkResults = new double[4]; // The results that the
// best network calculated
double[][] Inputs = new double[][] // This represents the possible
// inputs or the training dataset
{
new double[] { 0, 0 },
new double[] { 0, 1 },
new double[] { 1, 0 },
new double[] { 1, 1 }
};
double[] ExpectedOutputs = new double[] { 0, 1, 1, 0 }; // This represents
// the expected outputs from the optimum NeuralNetwork
while (true) // Keep Training forever
{
NeuralNetwork MutatedNetwork = new NeuralNetwork(BestNetwork); // Clone the
// current
// best network
MutatedNetwork.Mutate(); // Mutate the clone
double MutatedNetworkCost = 0;
double[] CurrentNetworkResults = new double[4]; // The results that the mutated
// network calculated
// Calculate the cost of the mutated network
for (int i = 0; i < Inputs.Length; i++)
{
double[] Result = MutatedNetwork.FeedForward(Inputs[i]);
MutatedNetworkCost += Math.Abs(Result[0] - ExpectedOutputs[i]);
CurrentNetworkResults[i] = Result[0];
}
// Does the mutated network perform better than the current best one?
if (MutatedNetworkCost < BestCost)
{
BestNetwork = MutatedNetwork;
BestCost = MutatedNetworkCost;
BestNetworkResults = CurrentNetworkResults;
}
// Print only every 20,000th iteration in order to speed up the training process
if (Iteration % 20000 == 0)
{
Console.Clear(); // Clear the current console text
for (int i = 0; i < BestNetworkResults.Length; i++) // Print the best
// truth table
{
Console.WriteLine(Inputs[i][0] + "," +
Inputs[i][1] + " | " + BestNetworkResults[i].ToString("N17"));
}
Console.WriteLine("Cost: " + BestCost); // Print the best cost
Console.WriteLine("Iteration: " + Iteration); // Print the current
// Iteration
}
// An iteration is done
Iteration++;
}
}
}
}
Run it, and you'll get this:
Points of Interest
I was really surprised when I found out that only 20,000 iterations could reduce the cost of the XOR function to under 0.03. I expected it to take more time, but it simply didn't need to. If anybody has any questions or just wants to talk about anything, you can leave a comment below. Keep an eye out for a future article that explains how you can use this implementation to make cars learn how to drive all by themselves in Unity using reinforcement learning. I'm working on something similar to this project made by Samuel Arzt:
Update on 11th December 2017
I finished the second article and it is currently waiting for submission. You can have a look at the demo video. It looks a bit creepy, but... Here you go:
Another Update on 11th December 2017
Looks like Part 2 is submitted, guys. Have fun! And... don't think for a second that we're done here. My current target is to implement 3 Crossover operators to make evolution a bit more efficient and offer the developer more diversity. After that, Backpropagation is next.
Update on 20th February 2018
Part 3 is up and running! It shows a substantial improvement over the system discussed in Parts 1 and 2. Tell me what you think!