How to make a Neural Network in Unity in 11 lines of code using C#

Ever wanted to make a Neural Network in Unity using C#? Now you can do that in just eleven lines of code (excluding brackets) using my newly released open source library called MicrogradCSharp.

Introduction

Neural Networks are the backbone of many modern Artificial Intelligence algorithms and libraries. If you are trying to learn Neural Networks, you will notice that most tutorials are written in evil Python code, which is annoying if you, like me, prefer C# and Unity.

A few weeks ago I found a tutorial by Andrej Karpathy, who's one of the main characters in the Artificial Intelligence industry. Among other things, he has worked on Tesla's self-driving cars. The tutorial, The spelled-out intro to neural networks and backpropagation: building micrograd, is about how to build a scalar-valued automatic differentiation (autograd) engine.

Why do you need an autograd engine when making Neural Networks? If you have studied Neural Networks you know you have to train them using backpropagation to update the weights and biases, which is a tricky algorithm where the chain rule links derivatives together in loooooong chains. Autograd will figure out these derivatives for you, so you don't have to bother with them ever again!
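
To see what that looks like in practice, here's a minimal sketch using the library's Value type. The data and grad fields and the Backward() method all appear in the training code later in this article; I'm assuming Value also overloads the * operator the way micrograd does.

//A tiny computation graph: c depends on a and b
Value a = new(2f);
Value b = new(3f);
Value c = a * b + b; //c = 2*3 + 3 = 9 (assumes Value overloads * like micrograd)

c.Backward(); //Walks the graph backwards and applies the chain rule at each step

//dc/da = b = 3, dc/db = a + 1 = 3
Debug.Log($"a.grad: {a.grad}, b.grad: {b.grad}");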

Andrej Karpathy's open source implementation in Python of his YouTube tutorial is called Micrograd. I translated it into C#, and my open source implementation is thus called MicrogradCSharp! I've also added some extra functionality: for example, you can pick which transfer function to use in each layer when creating a Neural Network (Karpathy's version hard-codes tanh or relu). I will add more functionality in the future.
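
To show what that looks like, here's a sketch of the MLP constructor from the XOR example below with the hidden layer swapped to relu. Note that Value.AF.ReLU is my assumption about the enum member's name; only Value.AF.Tanh and Value.AF.Linear appear in the code below.

//One activation function per layer: a relu hidden layer and a linear output layer
//(Value.AF.ReLU is an assumed name, check the library for the exact enum members)
MLP nn = new(2, new int[] { 3, 1 }, new Value.AF[] { Value.AF.ReLU, Value.AF.Linear });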

XOR Gate

A common "Hello world" example when creating Neural Networks is the XOR gate. You want to train a Neural Network to understand the following table:

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0
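
In plain C#, the XOR gate is just "the inputs differ," shown here only to make the target function concrete:

//The target function the Neural Network will learn to approximate
static float Xor(float a, float b) => a != b ? 1f : 0f;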

For example, if you feed the Neural Network a 1 and a 1, you want it to spit out a 0. A small Neural Network is enough for this task: 2 input nodes, 3 nodes in the middle layer, and 1 output node. It will look like this (each layer has a bias node):

[Image: the XOR network with 2 input nodes, 3 hidden nodes, and 1 output node]

The code to train and test such a Neural Network looks like this:


MicroMath.Random.Seed(0);

Value[][] inputData = Value.Convert(new[] { new[] { 0f, 0f }, new[] { 0f, 1f }, new[] { 1f, 0f }, new[] { 1f, 1f } });
Value[] outputData = Value.Convert(new[] { 0f, 1f, 1f, 0f });

//2 inputs, 3 neurons in the middle layer with tanh activation function, 1 output with linear activation function
MLP nn = new(2, new int[] { 3, 1 }, new Value.AF[] { Value.AF.Tanh, Value.AF.Linear });

//Train
for (int i = 0; i <= 100; i++)
{
    Value loss = new(0f);

    for (int j = 0; j < inputData.Length; j++)
    {
        loss += Value.Pow(nn.Activate(inputData[j])[0] - outputData[j], 2f); //MSE loss function
    }

    Debug.Log($"Iteration: {i}, Network error: {loss.data}");

    nn.ZeroGrad(); //Reset all gradients before backpropagation
    loss.Backward(); //The notorious backpropagation

    foreach (Value param in nn.GetParameters()) //Update weights and biases
    {
        param.data -= 0.1f * param.grad; //Gradient descent with 0.1 learning rate
    }
}

//Test
for (int j = 0; j < inputData.Length; j++)
{
    Debug.Log("Wanted: " + outputData[j].data + ", Actual: " + nn.Activate(inputData[j])[0].data);
}

...which is 11 lines of code if you exclude the brackets, the Test part, and the Random.Seed call (you only need that to recreate the exact same Neural Network, since the weights are initialized with random numbers). When I ran this Neural Network I got the following results:


Input 1  Input 2  Wanted  Actual
0        0        0       -0.01414913
0        1        1        0.988658
1        0        1        0.9845721
1        1        0       -0.01676899

The output is very close to the 0s and 1s we wanted - they will never be exactly 0 and 1. 
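
If you need hard 0s and 1s, you can threshold the raw output yourself. Here's a sketch; the 0.5 cutoff is my own choice, not something the library dictates:

//Turn the network's raw outputs into hard 0/1 predictions by thresholding at 0.5
for (int j = 0; j < inputData.Length; j++)
{
    float raw = nn.Activate(inputData[j])[0].data;
    int predicted = raw > 0.5f ? 1 : 0;

    Debug.Log($"Wanted: {outputData[j].data}, Predicted: {predicted}");
}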

Try it yourself

The code is available as open source on GitHub, so you can try it for yourself in Unity: MicrogradCSharp.
