In this lesson we’re going to simplify the neuron by using functions in our code, but first we’ll quickly review the neuron on the whiteboard to visually explain the functions we’re introducing.
Nothing new is going to be added; the only reason we use functions is to simplify the code and make it easier to understand.

In this lesson you will learn:

  1. What are feedforward and backpropagation?
  2. Simplify the code by adding functions

1. What are Feedforward and Backpropagation?

Feedforward is the procedure where we give an input and weights to the neuron (the weights are an array with the same size as the input), and it returns an output.

This is the drawing I did on the whiteboard to explain the feedforward operation.

In our example we used the image “vertical.png” as input. The image corresponds to this array: [0, 1, 0, 0, 1, 0, 0, 1, 0].
For this image we generated these weights: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5].
By multiplying the input by the weights, summing the products, and then applying the sigmoid activation function, we got the value 0.81.
All of this process, which started from the image “vertical.png” and ended at the final value 0.81, is the feedforward.
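Numerically: only the three pixels with value 1 contribute to the sum, so the weighted sum is 0.5 + 0.5 + 0.5 = 1.5, and sigmoid(1.5) ≈ 0.81.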

The code for the feedforward is really simple. It takes the inputs and the weights, multiplies them, and then the activation function gives us the output.

def feed_forward(input, weights):
    # Weighted sum of the input and the weights
    convolution = np.dot(input, weights)
    # The sigmoid activation squashes the sum into the 0 to 1 range
    result = sigmoid(convolution)
    return result
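
To check the whiteboard numbers, we can call the function on the vertical image. This is a minimal sketch: the sigmoid helper comes from the previous lesson, and the standard logistic function is assumed here.

import numpy as np

def sigmoid(x):
    # Assumed: the standard logistic function from the previous lesson
    return 1 / (1 + np.exp(-x))

image = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0])  # "vertical.png"
weights = np.full(9, 0.5)

print(feed_forward(image, weights))            # ~0.81 (0.8176 unrounded)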

Backpropagation is the procedure that trains the neuron. It starts from the output (the result we get after the feedforward operation) and returns updated weights.
Training the neuron literally means changing the values of the weights so that the feedforward operation gives the correct output.

This is the drawing I did on the whiteboard to explain the backpropagation operation.

In our example we have a result of 0.81, which we got from the feedforward operation.
Starting from this value, we calculate the error by subtracting the label from the result. The label for the vertical image is 0, while for the horizontal image it is 1. Here we’re showing the example of the vertical image, so we subtracted 0 from 0.81 to get the error 0.81.
By multiplying the error by the sigmoid derivative of the result, we get the adjustment value, which in our case is 0.17.
Then we multiply the adjustment 0.17 by the input [0, 1, 0, 0, 1, 0, 0, 1, 0] and we get [0, 0.17, 0, 0, 0.17, 0, 0, 0.17, 0].
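
To see where 0.17 comes from, assuming the sigmoid_der helper from the previous lesson computes sigmoid(x) * (1 - sigmoid(x)): adjustment = 0.81 × sigmoid(0.81) × (1 - sigmoid(0.81)) ≈ 0.81 × 0.21 ≈ 0.17.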

Finally we update the weights. We subtract the adjustment [0, 0.17, 0, 0, 0.17, 0, 0, 0.17, 0] from the current weights [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]. Our updated weights will be: [0.5, 0.33, 0.5, 0.5, 0.33, 0.5, 0.5, 0.33, 0.5].

This entire procedure, starting from the output 0.81 and ending with the updated weights [0.5, 0.33, 0.5, 0.5, 0.33, 0.5, 0.5, 0.33, 0.5], is called backpropagation.

The function for backpropagation is this one:

def back_propagation(result, label, input):
    # Error: how far the output is from the label
    error = result - label
    # Adjustment: the error scaled by the sigmoid derivative of the result
    adjustment = error * sigmoid_der(result)
    # Spread the adjustment over the input pixels
    weights_update = np.dot(input.T, adjustment)
    return weights_update
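
Continuing with the same image and weights as before, we can verify the rest of the whiteboard numbers. The sigmoid_der helper is assumed to be the usual sigmoid derivative from the previous lesson.

def sigmoid_der(x):
    # Assumed: the derivative of the sigmoid from the previous lesson
    return sigmoid(x) * (1 - sigmoid(x))

result = feed_forward(image, weights)        # ~0.81
update = back_propagation(result, 0, image)  # label 0 for the vertical image
print(update)                                # ~[0, 0.17, 0, 0, 0.17, 0, 0, 0.17, 0]
print(weights - update)                      # ~[0.5, 0.33, 0.5, 0.5, 0.33, 0.5, 0.5, 0.33, 0.5]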

2. Simplify the code by adding functions

Now it should be clear to you what feedforward and backpropagation are.

The inside of the neuron from the previous lesson, from line 37 to line 59, can be simplified into just two lines by using the feed_forward and back_propagation functions.

for i in range(100):
    print("Round: ", i + 1)

    temporary_weights = weights.copy()
    for image, label in zip(dataset, labels):
        convolution = sum(image * weights)
        print("Image")
        print(image)

        # 3. Activation function
        result = sigmoid(convolution)

        # 5. Error
        error = result - label
        print("Label", label)
        print("Error", error)

        # 6. Adjustment
        adjustment = error * sigmoid_der(result)
        print("Adjustment", adjustment)

        temporary_weights -= np.dot(image, adjustment)

    weights = temporary_weights.copy()
    print("Weights", weights)
    print()

This is how the neuron will look:

for i in range(100):
    print("Round: ", i + 1)

    # Feedforward: get the outputs for the whole dataset
    result = feed_forward(dataset, weights)
    # Backpropagation: update the weights
    weights -= back_propagation(result, labels, dataset)
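
For completeness, here is a sketch of the setup this loop expects. The names dataset, labels, and weights come from the previous lesson; the exact pixel values of the horizontal image are an assumption here, since only the vertical one appears above.

import numpy as np

dataset = np.array([
    [0, 1, 0, 0, 1, 0, 0, 1, 0],  # "vertical.png" (from the lesson)
    [0, 0, 0, 1, 1, 1, 0, 0, 0],  # a horizontal image (assumed pixels)
])
labels = np.array([0, 1])         # 0 = vertical, 1 = horizontal
weights = np.full(9, 0.5)

for i in range(100):
    result = feed_forward(dataset, weights)
    weights -= back_propagation(result, labels, dataset)

print(feed_forward(dataset, weights))  # the first output drifts toward 0, the second toward 1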