Conv2D Layer | Computer Vision with Keras p.3
In this lesson we look at the Conv2D layer: what it is, what it is for, and how to create a convolutional layer with Keras. Before moving on, I recommend that you read the previous lesson: Build a Sequential model | Computer Vision with Keras p.2
In computer vision, we use the convolution operation to extract information from images or to manipulate them; a typical use is image blurring. In a convolutional neural network the operation becomes more complex, because we use it to extract features from images; later in this lesson we will see what these features are.
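To make the operation concrete, here is a minimal sketch of a "valid" convolution in plain NumPy (a small synthetic array and an averaging kernel, not the author's code), the same sliding-window idea a Conv2D layer uses:

```python
import numpy as np

# A small synthetic grayscale "image"
img = np.arange(36, dtype=np.float32).reshape(6, 6)

# 3x3 averaging kernel: convolving with it blurs the image
kernel = np.ones((3, 3), dtype=np.float32) / 9.0

# "Valid" convolution: slide the 3x3 window over the image,
# multiply element-wise, and sum
h, w = img.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)

print(out.shape)  # (4, 4): the output is smaller than the input
```

Note that the 6×6 input shrinks to 4×4: with a 3×3 kernel and no padding, the output loses one pixel on each side.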
Prepare the image for the Conv2D Layer
Before creating the Conv2D layer we must prepare the image. As in the previous lesson, we use OpenCV for image manipulation; as a first step, we convert the image from BGR to grayscale with this code:
import cv2
# Load img
img = cv2.imread("dog.jpg", cv2.IMREAD_GRAYSCALE)
This step converts the color image to grayscale. Normally all 3 channels (BGR) are used, but in this case we use only one, to keep the Conv2D operations less complex.
Now we resize the image so that it has fewer pixels and is easier for the neural network to process; 224 x 224 px is a common choice:
img = cv2.resize(img, (224, 224))
Finally, we take the image's height and width:
height, width = img.shape
Conv2D Layer with Keras
Before creating the Conv2D layer we need to import the necessary Keras modules and create the Sequential model:
import keras
from keras import layers
...
model = keras.Sequential()
Now we have everything needed to create the Conv2D layer; we can insert the code to create it and show the model details.
model.add(layers.Conv2D(input_shape=(height, width, 1), filters=64, kernel_size=(3, 3)))
model.summary()
As you can see in the image below, calling model.summary() prints each layer's type, output shape, and number of parameters.
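As a sanity check, the parameter count the summary reports for this layer can be reproduced by hand: each of the 64 filters has 3 × 3 × 1 kernel weights, plus one bias per filter:

```python
# Parameter count for Conv2D(filters=64, kernel_size=(3, 3))
# on a single-channel input: kernel weights + one bias per filter
kernel_h, kernel_w, in_channels, n_filters = 3, 3, 1, 64
params = kernel_h * kernel_w * in_channels * n_filters + n_filters
print(params)  # 640
```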
Going into even more detail about the parameters, we can access the single layer and get its weights:
# Access the layer's parameters
filters, biases = model.layers[0].get_weights()
We get this result:
Highlighted in red is the shape of the filters; in simpler words, it means that there are 64 filters of size 3×3. To be precise:
kernel_size=(3, 3) is the size of each filter, the small window that slides over the image
filters=64 is the number of filters generated
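For reference, Keras stores a Conv2D kernel as a 4-D tensor of shape (kernel_height, kernel_width, input_channels, filters). A sketch with a dummy NumPy array of the same shape shows how to pull out one filter as a 3×3 "image":

```python
import numpy as np

# Dummy weights with the same shape Keras uses for this layer:
# (kernel_h, kernel_w, input_channels, filters)
weights = np.zeros((3, 3, 1, 64), dtype=np.float32)

# Extract the first filter for the first (only) input channel
first_filter = weights[:, :, 0, 0]

print(weights.shape)       # (3, 3, 1, 64)
print(first_filter.shape)  # (3, 3)
```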
Show the filters
We can now show what we saw in theory, but before printing the filters we have to do some processing with OpenCV to enlarge them: at only 3×3 px they would be too small to see. This is the final result:
In the image I only opened 4 windows; in reality there are 64.
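One way to do the enlargement is nearest-neighbor upscaling; this sketch uses plain NumPy (the filter values below are purely illustrative, not learned weights), and the result could then be displayed with cv2.imshow:

```python
import numpy as np

# One hypothetical 3x3 filter (illustrative values only)
f = np.array([[-1.0, 0.0, 1.0],
              [-2.0, 0.0, 2.0],
              [-1.0, 0.0, 1.0]])

# Rescale the values into 0..255 so they can be shown as pixels
f_norm = (f - f.min()) / (f.max() - f.min()) * 255.0
f_img = f_norm.astype(np.uint8)

# Nearest-neighbor upscale: repeat each pixel in a 50x50 block,
# turning the 3x3 filter into a visible 150x150 image
big = np.kron(f_img, np.ones((50, 50), dtype=np.uint8))
print(big.shape)  # (150, 150)
```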
What does this represent?
These small windows represent the features of the image and together form a map of its features. Intuitively, we can say that kernel_size represents the complexity of each filter, and the number of filters tells how many features will be considered.
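Since the layer above uses no padding, each filter also shrinks the 224 × 224 input slightly: every filter produces one feature map, whose size follows from the usual "valid" convolution arithmetic:

```python
# Output spatial size for a convolution with no padding ("valid"):
# out = in - kernel + 1
in_size, kernel_size = 224, 3
out_size = in_size - kernel_size + 1
print(out_size)  # 222
# So the Conv2D layer maps (224, 224, 1) -> (222, 222, 64):
# one 222x222 feature map per filter
```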