Blur Faces in real time with Opencv, Mediapipe and Python
Imagine having to hide the faces of the people who appear in your video. Doing it manually with video editing programs can become a problem if there are too many frames to be edited. What solution can be found?
We can blur faces with OpenCV, Mediapipe and Python in just a few lines of code, and in real time. We can do it in 3 steps:
- Identify the face location before the blur
- Take out the face, blur it, and put the face back in the frame
- Take out the face, blur it, and put the face back in the video
Identify face location before the blur
As always, we must make sure that OpenCV and the Mediapipe library are installed. This can be done with “pip install”, and the command is the same on every operating system. We then need to extract the position and all the landmark points of the face, so that we know, frame by frame, which area must be hidden. We have already addressed this topic in a previous article, and I invite you to read it: Facial Landmarks Detection | with Opencv, Mediapipe and Python.
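For reference, the install commands look like this (on PyPI the packages are named opencv-python and mediapipe):

```shell
pip install opencv-python mediapipe numpy
```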
To make the lesson easier to follow, I have also prepared a file called “facial_landmarks.py” for this project. You can find everything in the download link of this article.
Take out the face, blur it, and put the face back in the frame
To blur faces in real-time with OpenCV, Mediapipe, and Python, we need to import the Mediapipe, NumPy and OpenCV libraries, and the facial_landmarks.py file.
import cv2
import mediapipe as mp
import numpy as np
from facial_landmarks import FaceLandmarks

# Load face landmarks
fl = FaceLandmarks()
We need to derive the face contour from the outermost landmark points. OpenCV's convexHull() function helps us with this.
# 1. Face landmarks detection
landmarks = fl.get_facial_landmarks(frame)
convexhull = cv2.convexHull(landmarks)
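cv2.convexHull() returns the smallest convex polygon enclosing all the landmark points. To illustrate the idea without OpenCV, here is a minimal sketch using Andrew's monotone chain algorithm (this is only an illustration; it is not necessarily the algorithm OpenCV uses internally):

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain: return the hull vertices of a 2D point set."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return np.array(pts)

    def cross(o, a, b):
        # z-component of the cross product (OA x OB); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1])

# A square with one interior point: the hull keeps only the corners
pts = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 2]])
print(len(convex_hull(pts)))  # 4
```

Interior landmarks (nose, lips, etc.) are discarded; only the outer boundary of the face remains.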
Drawing the contour as a line, this is the result.
We have to create the mask, using the coordinates of the face, to extract the part of the frame that interests us.
# 2. Face blurring
mask = np.zeros((height, width), np.uint8)
# cv2.polylines(mask, [convexhull], True, 255, 3)
cv2.fillConvexPoly(mask, convexhull, 255)
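cv2.fillConvexPoly() sets every pixel inside the polygon to 255. For a convex polygon the test is simple: a pixel is inside if it lies on the same side of every edge. A NumPy-only sketch of the idea (for illustration; the real function rasterizes far more efficiently):

```python
import numpy as np

def fill_convex_poly(shape, vertices, value=255):
    """Rasterize a convex polygon: same idea as cv2.fillConvexPoly.
    shape is (height, width); vertices is an (N, 2) array of (x, y) points."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]            # pixel coordinates
    crosses = []
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        # cross product of the edge vector with the pixel vector
        crosses.append((x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0))
    crosses = np.stack(crosses)
    # inside = same sign with respect to every edge (either vertex order works)
    inside = np.all(crosses >= 0, axis=0) | np.all(crosses <= 0, axis=0)
    return np.where(inside, value, 0).astype(np.uint8)

square = np.array([[1, 1], [4, 1], [4, 4], [1, 4]])
m = fill_convex_poly((6, 6), square)
print(int((m == 255).sum()))  # 16 — the 4x4 square, edges inclusive
```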
This is the result.
We now have everything we need to extract the face and apply the blur. Again, we’ll use an OpenCV function for image blurring: blur().
# Extract the face
frame_copy = cv2.blur(frame_copy, (27, 27))
face_extracted = cv2.bitwise_and(frame_copy, frame_copy, mask=mask)
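cv2.blur(frame, (27, 27)) is a normalized box filter: every output pixel is the mean of its 27×27 neighborhood. A small NumPy sketch of the same operation (edge handling is simplified to zero padding here, whereas OpenCV reflects pixels at the border by default):

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with a k x k kernel on a grayscale uint8 image.
    Edges are zero-padded, unlike cv2.blur's default reflected border."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad)
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / (k * k)).astype(np.uint8)

flat = np.full((9, 9), 100, dtype=np.uint8)
print(box_blur(flat, 3)[4, 4])  # 100 — a flat region is unchanged by averaging
```

The larger the kernel (here 27×27), the stronger the blur and the less recognizable the face.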
This is the result
We must now invert the mask and take only the background excluding the face.
# Extract background
background_mask = cv2.bitwise_not(mask)
background = cv2.bitwise_and(frame, frame, mask=background_mask)
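For 8-bit images, cv2.bitwise_not(mask) is simply 255 minus the mask, and cv2.bitwise_and(img, img, mask=mask) keeps the pixels where the mask is non-zero and zeroes the rest. The same logic in plain NumPy, on a tiny 2×2 example:

```python
import numpy as np

mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)
img = np.array([[10, 20], [30, 40]], dtype=np.uint8)

inverted = 255 - mask                      # same as cv2.bitwise_not(mask)
kept = np.where(mask > 0, img, 0)          # cv2.bitwise_and(img, img, mask=mask)
dropped = np.where(inverted > 0, img, 0)   # background selected by the inverted mask

print(kept.tolist())     # [[0, 20], [30, 0]]
print(dropped.tolist())  # [[10, 0], [0, 40]]
```

Together, `kept` and `dropped` cover every pixel exactly once, which is what lets us merge them back in the next step.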
As you can see from the detail of the image, the background is perfectly visible, but in place of the face there is only black. That is the empty space where we will put the blurred face in the next step.
The last step is to merge the two masks, and we will do it simply with cv2.add()
# Final result
result = cv2.add(background, face_extracted)
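cv2.add() performs saturating addition: values clip at 255 instead of wrapping around as NumPy's `+` does on uint8. Here the two images are disjoint (the face region is zero in the background and vice versa), so the addition simply stitches them together. A NumPy sketch of the same behavior:

```python
import numpy as np

def saturating_add(a, b):
    """uint8 addition that clips at 255, like cv2.add (a plain a + b would wrap)."""
    return np.clip(a.astype(np.int32) + b.astype(np.int32), 0, 255).astype(np.uint8)

background = np.array([[50, 0], [0, 50]], dtype=np.uint8)   # face region zeroed
face = np.array([[0, 200], [200, 0]], dtype=np.uint8)       # background zeroed
print(saturating_add(background, face).tolist())  # [[50, 200], [200, 50]]

# Saturation vs wrap-around: uint8 arithmetic would give 200 + 100 = 44
print(saturating_add(np.uint8(200), np.uint8(100)))  # 255
```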
Take out the face, blur it, and put the face back in the video
The whole procedure above works on a single image, but as we know, in OpenCV a video is just a sequence of images. With a few changes, we can blur faces in real-time with OpenCV, Mediapipe and Python.
Let’s first change the source of data acquisition
cap = cv2.VideoCapture("person_walking.mp4")
and all the code we have just seen goes into a loop.
while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, None, fx=0.5, fy=0.5)
    frame_copy = frame.copy()
    height, width, _ = frame.shape
    ...
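The per-frame logic can be summarized as a single pure function over arrays. The following NumPy-only sketch combines the three steps (blur, masked extraction, merge) on a grayscale frame; landmark detection and the mask construction are assumed to be done elsewhere, and the simple `box_blur` loop stands in for cv2.blur:

```python
import numpy as np

def blur_face(frame, mask, k=27):
    """Blur the whole frame, keep the blurred pixels inside the mask,
    keep the original pixels outside, then merge the two disjoint images.
    frame: (H, W) uint8 grayscale; mask: (H, W) uint8, 255 on the face."""
    pad = k // 2
    padded = np.pad(frame.astype(np.float64), pad)   # zero-padded box blur
    blurred = np.zeros_like(frame, dtype=np.float64)
    h, w = frame.shape
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred = (blurred / (k * k)).astype(np.uint8)

    face_extracted = np.where(mask > 0, blurred, 0)  # blurred face only
    background = np.where(mask > 0, 0, frame)        # everything else
    return face_extracted + background               # regions are disjoint

frame = np.full((8, 8), 200, dtype=np.uint8)
frame[4, 4] = 255                                    # one bright pixel on the face
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 255
out = blur_face(frame, mask, k=3)
print(out[0, 0], out[4, 4])  # 200 206 — background untouched, detail smoothed away
```

In the real loop, this per-frame work is followed by cv2.imshow() and cv2.waitKey() to display the result, and detection quality depends on Mediapipe finding the face in each frame.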