In this new tutorial series we’re going to learn how to create the Snapchat “Face swap” filter.

In this first tutorial we will learn how to:

  1. Get the external boundaries of the face
  2. Extract the face from the image

Below you will find the full source code and a quick explanation. A more detailed, step-by-step explanation is in the video.

We first import the libraries.

import cv2
import numpy as np
import dlib

We then load the image, convert it to grayscale and create a mask (a black image with the same size as the original image).

img = cv2.imread("bradley_cooper.jpg")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = np.zeros_like(img_gray)
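
A small optional guard (not in the original code) is worth adding right after the cv2.imread line, because cv2.imread does not raise an error when the file is missing; it silently returns None and the script only fails later at cv2.cvtColor.

# Optional guard, to be placed right after cv2.imread:
# cv2.imread returns None if the file is missing or unreadable
if img is None:
    raise FileNotFoundError("Could not load bradley_cooper.jpg, check the path")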

At this point we detect the face and its facial landmarks, which we will then use to find the external boundaries of the face.

# Load dlib's face detector and the pre-trained 68-point facial landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = detector(img_gray)
for face in faces:
    landmarks = predictor(img_gray, face)
    landmarks_points = []
    for n in range(0, 68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        landmarks_points.append((x, y))

        #cv2.circle(img, (x, y), 3, (0, 0, 255), -1)
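
If we want to check what the detector found, we can also draw the bounding box of the face; this is just an optional visualization sketch and the rectangle is not used anywhere else in the code.

    # Optional: draw the detected face rectangle (visualization only)
    x1, y1 = face.left(), face.top()
    x2, y2 = face.right(), face.bottom()
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)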

We find the boundary surrounding the face.
To do this we need to find the convex hull of the facial landmarks.

    points = np.array(landmarks_points, np.int32)
    convexhull = cv2.convexHull(points)
    #cv2.polylines(img, [convexhull], True, (255, 0, 0), 3)
    cv2.fillConvexPoly(mask, convexhull, 255)
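
At this stage the mask is white (255) inside the face region and black everywhere else. If you want to inspect it outside the OpenCV windows, you can save it to disk; this line and the file name are just an optional addition, not part of the original code.

    # Optional: write the mask to disk to inspect the filled convex hull
    cv2.imwrite("face_mask.jpg", mask)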

Once we have found the convex hull (which covers the area of the face) and drawn it on the mask, we can apply the mask to the original image to extract the face and show everything on the screen.

    face_image_1 = cv2.bitwise_and(img, img, mask=mask)
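
If you also need just the rectangular region that contains the face, a common follow-up (sketched here as an optional extra, not something the original code does) is to crop to the bounding rectangle of the convex hull.

    # Optional sketch: crop the extracted face to the bounding rectangle of the hull
    (x, y, w, h) = cv2.boundingRect(convexhull)
    face_crop = face_image_1[y: y + h, x: x + w]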

cv2.imshow("Image 1", img)
cv2.imshow("Face image 1", face_image_1)
cv2.imshow("Mask", mask)
cv2.waitKey(0)
cv2.destroyAllWindows()

Downloads: