Create Layer Class
We’re going to see in this lesson:
- What is a layer
- How to create a Layer class
- How to train the layer
4.7.1 What is a Layer
In this lesson we will see how to build a Layer starting from a single neuron.
In the image below, I’ve enclosed the neuron in a green rectangle; we can call this the output layer.
A layer can contain multiple neurons, and multiple layers can be fully connected to each other. In this example we have a single layer with just one neuron, which takes the input image and returns the output. For now the neuron behaves much as it did before, but wrapping it in a layer will make it simpler to connect layers together later.
4.7.2 Create a Layer class
Until now we did everything with simple code and functions, but as we go further and things become more complex, we need to create a class.
Once we define what a layer is with a class, we can later create multiple objects from that same class with a single line of code each. That means we will be able to build a multi-layer neural network with just a few lines of code.
Let’s see how to create the class.
In the class we want to put the three essential elements of the neuron: the weights, the feed-forward function, and the back-propagation function.
We start by defining the __init__ method, which runs automatically when we create the object. In it we generate the random weights.
import numpy as np


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def sigmoid_der(x):
    return sigmoid(x) * (1 - sigmoid(x))


class Layer:
    def __init__(self, input_size, output_size):
        print("Initializing layer")
        self.weights = np.random.rand(input_size, output_size)
        print("Weights")
        print(self.weights)
So if we now create an object called “output_layer”, we use the Layer class and define the input size and the output size. The input size is the number of inputs the layer takes: since we’re working with 3×3 images (9 pixels in total), the input size is 9. And since we want to output a single number from 0 to 1, the output size is 1.
So to create a layer we need this simple code:
output_layer = Layer(9, 1)
We will see this when we run the code:
Initializing layer
Weights
[[0.95241898]
 [0.35959828]
 [0.73596849]
 [0.44134561]
 [0.1163296 ]
 [0.75176321]
 [0.22947842]
 [0.13646342]
 [0.76401306]]
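The shape of the printed weight matrix follows directly from the arguments: np.random.rand(input_size, output_size) gives one row per input pixel and one column per output neuron. A quick standalone check (the values will differ from the ones above, since they are random):

```python
import numpy as np

# Same call the Layer makes for Layer(9, 1)
weights = np.random.rand(9, 1)
print(weights.shape)  # one weight per pixel, one output neuron
```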
Now we need to complete the class by adding to it the other two functions we need to train the neuron: feed_forward and back_propagation.
class Layer:
    def __init__(self, input_size, output_size):
        print("Initializing layer")
        self.weights = np.random.rand(input_size, output_size)
        print("Weights")
        print(self.weights)

    def feed_forward(self, input):
        weighted_sum = np.dot(input, self.weights)
        output = sigmoid(weighted_sum)
        return output

    def back_propagation(self, input, output, labels):
        error = output - labels
        # output is already sigmoid(weighted_sum), so its derivative
        # with respect to the weighted sum is output * (1 - output)
        adjustment = error * output * (1 - output)
        weights_update = np.dot(input.T, adjustment)
        self.weights -= weights_update
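Although connecting layers properly is left for later, a minimal sketch can already show what this class makes possible: two Layer objects chained by feeding the output of one into the next. The names hidden_layer and output_layer and the size 4 are illustrative choices, not part of the lesson:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class Layer:
    def __init__(self, input_size, output_size):
        self.weights = np.random.rand(input_size, output_size)

    def feed_forward(self, input):
        return sigmoid(np.dot(input, self.weights))

# Sketch: chain two layers (proper multi-layer training comes later)
hidden_layer = Layer(9, 4)   # 9 pixels in, 4 hidden neurons
output_layer = Layer(4, 1)   # 4 hidden values in, 1 output

sample = np.random.rand(1, 9)           # one flattened 3x3 image
hidden = hidden_layer.feed_forward(sample)
result = output_layer.feed_forward(hidden)
print(result.shape)  # (1, 1)
```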
4.7.3 Train the Layer
The class is done. Now we can test it by training the layer to detect the images. We will use the same dataset as in the previous lesson, containing two images: a vertical line and a horizontal line.
First, we load the dataset.
import cv2  # needed to load the images

# 1. Input
# 1.1 Image preprocessing
img = cv2.imread("images/vertical.png", cv2.IMREAD_GRAYSCALE)
img = img / 255
img_flattened = img.flatten()

img2 = cv2.imread("images/horizontal.png", cv2.IMREAD_GRAYSCALE)
img2 = img2 / 255
img2_flattened = img2.flatten()

# Input
dataset = np.array([img_flattened, img2_flattened])
labels = np.array([[0, 1]]).T
print("Input")
print(dataset)
print("Labels")
print(labels)
Then we create the layer:
# Output layer
output_layer = Layer(9, 1)
Finally we train the layer by looping and calling the feed_forward and back_propagation functions of our output_layer object, then we print the result.
# Train
for i in range(100):
    output = output_layer.feed_forward(dataset)
    output_layer.back_propagation(dataset, output, labels)

print("Final weights")
print(output_layer.weights)
print("Output")
print(output)
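To see what the trained layer has learned, we can feed it an image after training. Here is a self-contained sketch of the whole process that uses in-code 3×3 arrays as stand-ins for the two image files (the pixel values are illustrative, and a seed is set only to make the run reproducible):

```python
import numpy as np

np.random.seed(42)  # illustrative: makes the random weights reproducible

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class Layer:
    def __init__(self, input_size, output_size):
        self.weights = np.random.rand(input_size, output_size)

    def feed_forward(self, input):
        return sigmoid(np.dot(input, self.weights))

    def back_propagation(self, input, output, labels):
        error = output - labels
        # output is already sigmoid(weighted_sum), so its derivative
        # with respect to the weighted sum is output * (1 - output)
        adjustment = error * output * (1 - output)
        self.weights -= np.dot(input.T, adjustment)

# In-code stand-ins for the two training images
vertical = np.array([[0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0]])
horizontal = np.array([[0, 0, 0],
                       [1, 1, 1],
                       [0, 0, 0]])
dataset = np.array([vertical.flatten(), horizontal.flatten()], dtype=float)
labels = np.array([[0, 1]]).T  # vertical -> 0, horizontal -> 1

output_layer = Layer(9, 1)
for i in range(100):
    output = output_layer.feed_forward(dataset)
    output_layer.back_propagation(dataset, output, labels)

# After training, the horizontal image should score close to 1
prediction = output_layer.feed_forward(horizontal.flatten())
print(prediction)
```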