# Train the Neuron (Neural Network)

In the previous lesson, the neuron we created wasn’t able to correctly identify the vertical image, so our goal now is to train the neuron so that it can perform that classification correctly.

You will learn:

- How to calculate the error
- What “training a neuron” means
- How to adjust the weights
- How to train the neuron

## 1. Calculate the error

Calculating the error is a crucial operation: it tells us how wrong the neuron was in its detection. It’s not enough to say that the neuron wrongly classified a “Vertical” image as a “Horizontal” one.

Instead, we need to be able to tell how wrong its classification was. If, for example, the result of our activation function is 0.1 instead of 0, we can consider it an error, but if the result is 0.81, it’s a bigger error.
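The comparison above can be sketched directly. This is a minimal illustration, assuming (as in the lesson) that a vertical image has label 0:

```python
# Sketch of the idea above: the label of a vertical image is 0,
# so the error is simply the activation output minus that label.
label = 0

small_error = 0.1 - label   # an almost correct prediction: small error
large_error = 0.81 - label  # a much more wrong prediction: large error

print("Errors:", small_error, large_error)
```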

And that’s exactly our measure.

We take the output of the activation function, in our case 0.81, and subtract the label of our image from it. We labeled vertical as 0, so we subtract 0.

The error is: 0.81.

```python
# 5. Error
error = result - 0
print("Error", error)
```

## 2. What does “training a neuron” mean?

The concept of training a neuron is really simple: it’s nothing more than adjusting the value of each weight so that the final output correctly classifies the images we give to the neuron.

At the beginning we gave the weights arbitrary values. I chose to give them all a value of 0.5.

Our initial weight:

```python
weights = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
convolution = sum(img_flattened * weights)

# 3. Activation function
result = sigmoid(convolution)
```

With these weights we got a result of 0.81.

What if we chose different weights? Would we get a different result?

We definitely would. The only problem is that we can never know at the beginning which weights to choose. We can only find out by trying, and then adjust them once we know the error.
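As a quick demonstration of this point, here is a minimal sketch comparing two different weight initializations on the same input. The 3x3 “vertical line” image below is a hypothetical stand-in for the lesson’s image file, already normalized and flattened:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical 3x3 vertical-line image (ones in the middle column),
# already normalized to 0/1 and flattened, standing in for the real input.
img_flattened = np.array([0, 1, 0,
                          0, 1, 0,
                          0, 1, 0], dtype=float)

# Two different weight initializations...
weights_a = np.full(9, 0.5)
weights_b = np.full(9, -0.5)

# ...give two different outputs for the same image.
result_a = sigmoid(np.sum(img_flattened * weights_a))
result_b = sigmoid(np.sum(img_flattened * weights_b))

print(result_a, result_b)
```

With the positive weights the output lands above 0.5 (classified “Horizontal”, which is wrong), while the negative weights happen to land below 0.5; we had no way of knowing that in advance.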

## 3. How we adjust the weights

Adjusting the weights is the core operation of training a neuron.

Now it will also become clear why we chose the sigmoid as the activation function.

Let’s take a look at the two images below.

The key feature of the sigmoid is that it has a steep slope around the output value of 0.5 and a much shallower slope as the output gets closer to 0 or 1.

In the pictures above, the value in the first one is 0.81, while in the second one it is 0.98.
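The slope property described above can be checked numerically. This is a small sketch using the lesson’s `sigmoid` and `sigmoid_der` functions, evaluated at an input of 0 (where the output is exactly 0.5) and at an input of 4 (where the output is close to 1):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_der(x):
    return sigmoid(x) * (1 - sigmoid(x))

# The slope is steepest where the output is 0.5 (input 0)...
print(sigmoid(0.0), sigmoid_der(0.0))   # output 0.5, slope 0.25

# ...and much shallower where the output approaches 1 (input 4).
print(sigmoid(4.0), sigmoid_der(4.0))
```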

If we multiply the slope (which we get by calculating the derivative of the sigmoid) by the error, we obtain a value that will be the “adjustment” of the weights.

This method makes a big adjustment when the error is big, and a really small adjustment when the error is small.

```python
adjustment = error * sigmoid_der(result)
print("Adjustment", adjustment)
```

After establishing how much we should adjust the weights, it’s time to make the corrections to the original weights.

We first multiply the adjustment value by the input (the flattened image), and then we subtract the result from the weights.

```python
weights -= np.dot(img_flattened, adjustment)
print("Weights", weights)
```

At this point we have adjusted the weights for the first time. We can say that we completed one round of training.
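A single round of training, as described above, can be sketched as one function. The function name `training_round` and the 3x3 test image are ours, not the lesson’s; the steps inside (forward pass, error, adjustment, weight update) follow the lesson:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_der(x):
    return sigmoid(x) * (1 - sigmoid(x))

def training_round(img_flattened, weights, label):
    """One round of training: forward pass, error, adjustment,
    weight update. Returns the updated weights and this round's error.
    (Helper name is hypothetical, not from the lesson.)"""
    result = sigmoid(np.sum(img_flattened * weights))
    error = result - label
    adjustment = error * sigmoid_der(result)
    return weights - img_flattened * adjustment, error

# Hypothetical 3x3 vertical image (label 0), initial weights all 0.5.
img = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)
w = np.full(9, 0.5)

w, e1 = training_round(img, w, label=0)
w, e2 = training_round(img, w, label=0)
print(e1, e2)  # the error shrinks from one round to the next
```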

## 4. How to train the neuron?

At this point the neuron we created is ready for training.

We only need to put part of the code in a loop so that it can be trained over and over again.

In the `for i in range(100)` loop in the code below, we can choose for how many rounds we want to train it. On each training round we will notice that the error decreases and the output gets closer and closer to the correct prediction. In our case, after just 5 iterations the neuron was able to correctly establish that our input is a vertical image.

```python
import numpy as np
import cv2

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_der(x):
    return sigmoid(x) * (1 - sigmoid(x))

# 1. Input
# 1.1 Image preprocessing
img = cv2.imread("images/vertical.png", cv2.IMREAD_GRAYSCALE)
img = img / 255
img_flattened = img.flatten()

# 2. Weights
weights = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])

for i in range(100):
    print("Round: ", i + 1)
    convolution = sum(img_flattened * weights)

    # 3. Activation function
    result = sigmoid(convolution)

    # 4. Output
    # Vertical 0
    if result < 0.5:
        print("Vertical")
    # Horizontal 1
    elif result > 0.5:
        print("Horizontal")

    # 5. Error
    error = result - 0
    print("Error", error)

    # 6. Adjustment
    adjustment = error * sigmoid_der(result)
    print("Adjustment", adjustment)

    weights -= np.dot(img_flattened, adjustment)
    print("Weights", weights)
    print()
```
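If you don’t have the `images/vertical.png` file at hand, the same training loop can be tried with a hypothetical 3x3 vertical-line image hardcoded in place of the `cv2.imread` call. This is a self-contained sketch, not the lesson’s exact script, but it follows the same steps and shows the error shrinking round after round:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_der(x):
    return sigmoid(x) * (1 - sigmoid(x))

# Hypothetical stand-in for cv2.imread("images/vertical.png"):
# a 3x3 vertical-line image, already normalized and flattened.
img_flattened = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)

weights = np.full(9, 0.5)
errors = []
for i in range(5):
    result = sigmoid(np.sum(img_flattened * weights))
    error = result - 0                      # label of a vertical image is 0
    adjustment = error * sigmoid_der(result)
    weights -= img_flattened * adjustment   # weight update
    errors.append(error)

print("Errors per round:", errors)

# After training, the output falls below 0.5: classified as vertical.
final_result = sigmoid(np.sum(img_flattened * weights))
print("Vertical" if final_result < 0.5 else "Horizontal")
```

With this toy image the error decreases every round, and after 5 rounds the output drops below the 0.5 threshold, matching the behaviour described above.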

Hi there, I’m the founder of Pysource.
