Keeping our identities hidden on social media platforms is next to impossible these days. However, if ever needed, we can always blur our faces in a video to be more discreet. In this Answer, we will look at how we can implement this in Python using different libraries.
Computer vision is a significant domain within artificial intelligence, centered on enabling machines to comprehend and interpret visual data from images or videos. It involves developing algorithms and methodologies to efficiently process, analyze, and derive meaningful insights from visual information. At the same time, computer vision encompasses areas such as object detection, image segmentation, face recognition, and motion tracking. Its practical applications span diverse industries, including:
Robotics
Autonomous vehicles
Healthcare
Surveillance
In this implementation, we aim to blur faces in a video while preserving the background. The process involves the following steps, sketched in code right after this list:
Detect faces in each frame of the video using a pre-trained face detector.
Apply Gaussian blur to the detected face regions, making them less identifiable.
Create an oval-shaped mask for each face to blend the original and blurred faces.
Merge the original and blurred faces using the masks to achieve face blurring.
Display the video with the blurred faces and preserved background.
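Before looking at the full program, here is a minimal sketch of the first two steps on a single image. It is only illustrative: the image path people.jpg, the output file name, and the blur kernel size are assumptions, and no mask is applied yet.

import cv2
import dlib

# Hypothetical input image containing one or more faces
image = cv2.imread("people.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 1: detect faces with dlib's pre-trained frontal face detector
detector = dlib.get_frontal_face_detector()
for face in detector(gray):
    x, y, w, h = face.left(), face.top(), face.width(), face.height()
    # Step 2: blur the rectangular face region (kernel size chosen arbitrarily)
    image[y:y+h, x:x+w] = cv2.GaussianBlur(image[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("blurred.jpg", image)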
In our program, we use multiple libraries to implement this functionality. Let's see how we can install these libraries in Python (a short verification snippet follows the installation commands).
OpenCV
pip install opencv-python
dlib
pip install dlib
NumPy
pip install numpy
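Note that installing dlib builds native code, so it may additionally require CMake and a C++ compiler on your system. A quick, optional way to confirm that all three libraries installed correctly is to import them and print their versions:

import cv2
import dlib
import numpy as np

print("OpenCV:", cv2.__version__)
print("dlib:", dlib.__version__)
print("NumPy:", np.__version__)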
import cv2
import dlib
import numpy as np
face_detector = dlib.get_frontal_face_detector()

cap = cv2.VideoCapture("https://player.vimeo.com/external/434418689.sd.mp4?s=90c8280eaac95dc91e0b21d16f2d812f1515a883&profile_id=165&oauth2_token_id=57447761")

while True:

    ret, frame = cap.read()

    if not ret: break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = face_detector(gray)

    for face in faces:
        x, y, w, h = face.left(), face.top(), face.width(), face.height()

        blurred_face = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)

        mask = np.zeros_like(blurred_face)
        center = (w // 2, h // 2)
        axes = (w // 2, h // 2)
        angle = 0
        color = (255, 255, 255)
        cv2.ellipse(mask, center, axes, angle, 0, 360, color, -1)

        frame[y:y+h, x:x+w] = cv2.bitwise_and(frame[y:y+h, x:x+w], cv2.bitwise_not(mask)) + cv2.bitwise_and(blurred_face, mask)


    cv2.imshow('Eye Tracking', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Line 1: Import the OpenCV library, which is a popular computer vision library for various image and video processing tasks.
Line 2: Import the dlib library, a C++ library commonly used for face detection and facial landmark detection tasks.
Line 3: Import the NumPy library, which is used for numerical computations and array operations in Python.
Line 4: Create a face detector using the dlib.get_frontal_face_detector() function. This initializes the face detection model from the dlib library.
Line 6: Create a VideoCapture object named cap to capture frames from a video. The video is loaded from the specified URL.
Line 8: Start an infinite loop to continuously read frames from the video.
Line 10: Read the next frame from the video capture; the variable ret will be True if a frame is successfully read.
Line 12: Check if ret is False, which means there are no more frames to read from the video. If so, break out of the loop.
Line 13: Convert the current frame to grayscale using the cv2.cvtColor function. Grayscale images are often used for faster processing in computer vision tasks.
Line 15: Detect faces in the grayscale frame using the previously created face_detector. The face_detector returns a list of rectangles representing the faces' bounding boxes.
Line 17–29: Loop through each detected face in the faces list and perform the following operations (restated as a standalone helper function right after this list):
Get the coordinates (x, y), width (w), and height (h) of the face bounding box.
Apply Gaussian blur to the face region using cv2.GaussianBlur to create a blurred version of the face.
Create a binary mask using np.zeros_like with the same size as the blurred face.
Draw a filled white ellipse on the mask, representing the fully blurred region.
Combine the original face with the blurred face using the mask to create a partially blurred effect.
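To make these per-face operations easier to follow, the loop body can also be written as a small helper function. This is only a restatement of the logic shown above; the function name blur_face and the default kernel size are our own choices.

import cv2
import numpy as np

def blur_face(frame, x, y, w, h, kernel=(51, 51)):
    # Blur the rectangular face region
    roi = frame[y:y+h, x:x+w]
    blurred = cv2.GaussianBlur(roi, kernel, 0)

    # White, filled ellipse marking the area to replace with blurred pixels
    mask = np.zeros_like(roi)
    cv2.ellipse(mask, (w // 2, h // 2), (w // 2, h // 2), 0, 0, 360, (255, 255, 255), -1)

    # Keep original pixels outside the ellipse, blurred pixels inside it
    frame[y:y+h, x:x+w] = cv2.bitwise_and(roi, cv2.bitwise_not(mask)) + cv2.bitwise_and(blurred, mask)
    return frame

Inside the main loop, each detected face would then be handled with a single call such as blur_face(frame, x, y, w, h).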
Line 32: Display the frame with detected faces and the applied blur, with the window title "Eye Tracking."
Line 34–38: Wait for a key event; if the pressed key is 'q', break out of the loop. After the loop ends, release the video capture and close all OpenCV windows.
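The same loop works with other video sources as well. Assuming a webcam or a local video file is available, only the argument passed to cv2.VideoCapture needs to change:

# Default webcam (index 0 is an assumption; adjust if several cameras are attached)
cap = cv2.VideoCapture(0)

# Or a hypothetical local video file
# cap = cv2.VideoCapture("my_video.mp4")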