Understanding Facial Recognition with Deep Learning

In this new tutorial, you will learn how to harness dlib to recognize faces in your personal photos.


By David Clinton

In this article we are going to break down facial recognition with deep learning in a simple way. After reading the article, you will be able to:

  • Relate facial recognition to deep learning
  • Define facial recognition and explain how it works
  • Write a Python program to see face recognition in action

Note: This article assumes that you have basic knowledge of coding in Python.

Introduction

Facial recognition has progressed to become one of the most practical and widely used techniques in human identification technology. Among techniques such as signatures, hand geometry, and voice identification, facial recognition is often preferred because of its contactless nature. The new applications that enable facial recognition are built on deep learning technology.

Before going further, we should understand what deep learning is. Have you ever heard of Artificial Intelligence? It is not our topic today, but as the word "artificial" suggests, Artificial Intelligence is the simulation of human intelligence by computer applications.

Deep learning is a branch of artificial intelligence: it simulates the way the human brain analyzes data and creates patterns in order to make decisions.

Definition of Facial Recognition

Facial recognition is a technology for identifying human beings by analyzing their faces in pictures, video footage, or in real time.

Until recently, facial recognition was a difficult problem in computer vision. The introduction of deep learning techniques, which can learn from large datasets of faces and analyze rich, complex facial images, has made it far more tractable, enabling the technology to become efficient and eventually even better than human vision at recognizing faces.

How Facial Recognition Works

The procedure we are going to use for facial recognition is simple to follow. Our aim here is to develop a neural network that gives a set of numbers that represent a face, which are referred to as face encodings.

Say you have two images of the same person: the neural network should produce similar outputs for the two images, showing that they are of the same person.

Conversely, when you have images of two different people, the neural network should produce very different outputs, showing that they are of two different people.

The neural network therefore has to be trained to automatically identify faces and compute encodings that capture these similarities and differences.
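To make this concrete, here is a minimal sketch of the idea using NumPy, with tiny made-up 3-number vectors standing in for dlib's real 128-number face encodings:

```python
import numpy as np

# Made-up 3-number "encodings" standing in for dlib's 128-number face encodings
person_a_photo1 = np.array([0.10, 0.20, 0.30])
person_a_photo2 = np.array([0.12, 0.19, 0.31])  # same person, different photo
person_b_photo1 = np.array([0.90, 0.10, 0.50])  # a different person

# Euclidean distance between encodings: small for the same person,
# large for different people
same_person_distance = np.linalg.norm(person_a_photo1 - person_a_photo2)
different_person_distance = np.linalg.norm(person_a_photo1 - person_b_photo1)

print(same_person_distance < different_person_distance)  # True
```

Comparing encodings then reduces to thresholding this distance, which is exactly what the TOLERANCE value later in this tutorial does.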

To simplify our work, we will access a pre-built trained model from dlib.

This tutorial is supposed to give us a basic idea of how facial recognition works. We will use the following steps:

  1. Use the trained model from dlib to identify faces in an image.
  2. Measure face layout for the identified faces.
  3. Compute face encoding using output from steps 1 and 2.
  4. Compare the face encodings of identified faces with those of unidentified faces.

Now, let us start building.

You will need the following in the same project folder before you get started. I have created a folder for this project and named it face-recognition.

  1. Get pictures in JPEG format. Ensure each image contains only one face (no group photos). Create another folder under face-recognition and name it images.
  2. Get different images of the same people and place them in a new folder under face-recognition named test. This folder will contain different images of the same people that appear in images.
  3. Make sure you have Python 2.7 and pip installed. Python 2.7 and pip can be installed using Anaconda 2 (a Python distribution) here.
  4. Once you are done installing Python 2.7 and pip, install dlib in the terminal using the command below.

Windows

pip install --user numpy imageio dlib

Mac, Linux

sudo pip install --user numpy imageio dlib      

Paperspace Gradient

Run the first cell in the provided notebook to complete installs. You can create your own by placing this Github URL in the Advanced Options > Workspace URL field during Notebook creation, or by forking the provided notebook to your own Gradient team-space.

Note: Regardless of your platform, you can clone the repo to your machine to quickly complete setup. Be sure to still run the first cell in the Notebook if you set up this way.
  5. Now it’s time to download the existing trained models for facial recognition. You will need two models: the first predicts the layout and position of a face, while the second analyzes faces and produces face encodings. Use the following steps to download the files you need.
  • Download dlib_face_recognition_resnet_model_v1.bz2 here and shape_predictor_68_face_landmarks.dat.bz2 here. (use wget for Gradient)
  • Once the download is successful, extract the .dat files from the downloaded .bz2 archives.
  • Go ahead and copy the extracted files named dlib_face_recognition_resnet_model_v1.dat and shape_predictor_68_face_landmarks.dat into the main project folder (in our case, it is named face-recognition).
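As an optional sanity check before writing any recognition code, you can confirm the models and folders are in place. This is a small sketch that assumes the filenames and folder names used above:

```python
import os

# Files and folders this tutorial expects inside the face-recognition folder
REQUIRED = [
    'shape_predictor_68_face_landmarks.dat',
    'dlib_face_recognition_resnet_model_v1.dat',
    'images',
    'test',
]

def missing_setup_files(folder='.'):
    # Return the names from REQUIRED that are not present in the given folder
    return [name for name in REQUIRED if not os.path.exists(os.path.join(folder, name))]

if __name__ == '__main__':
    missing = missing_setup_files()
    if missing:
        print('Missing before you can run face.py: ' + ', '.join(missing))
    else:
        print('All set - models and image folders are in place')
```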

Now that we have everything set up, it is time to start coding. This is always the fun part, right?


Coding!

Open up your folder using your preferred text editor (I will be using VS Code for this tutorial).

Under the folder, add a new file and name it face.py. In this face.py file we will write all the code that will match the faces in the two folders, images and test. You can either execute this script from the terminal, or follow along in the notebook if you chose to clone the repo. Cloning the repo gives you access to a ready-made face.py file, as well as sample images.

Step 1: Configure and start: we import the appropriate libraries and set up the objects/parameters for our facial recognition.

import dlib
import imageio
import numpy as np
import os
# Get Face Detector from dlib
# This allows us to detect faces in image
face_detector = dlib.get_frontal_face_detector()
# Get Pose Predictor from dlib
# This allows us to detect landmark points in faces and understand the pose/angle of the face
shape_predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
# Get the face recognition model
# This is what gives us the face encodings (numbers that identify the face of a particular person)
face_recognition_model = dlib.face_recognition_model_v1('dlib_face_recognition_resnet_model_v1.dat')
# This is the tolerance for face comparisons
# The lower the number - the stricter the comparison
# To avoid false positives (i.e. faces of different people matching), use a lower value
# To avoid false negatives (i.e. faces of the same person not matching), use a higher value
# 0.5-0.6 works well
TOLERANCE = 0.6

Step 2: Obtain facial encodings from a JPEG: we write a function that accepts an image path and returns the picture's face encodings.

# Using the neural network, this function takes an image and returns its face encodings
def get_face_encodings(path_to_image):
   # Load image using imageio
   image = imageio.imread(path_to_image)
   # Face detection is done with the use of a face detector
   detected_faces = face_detector(image, 1)
   # Get the faces' poses/landmarks
   # The code that calculates face encodings will take this as an argument
   # This enables the neural network to produce similar encodings for the same person's face regardless of camera angle or face placement in the picture
   shapes_faces = [shape_predictor(image, face) for face in detected_faces]
   # Compile the face encodings for each face found and return
   return [np.array(face_recognition_model.compute_face_descriptor(image, face_pose, 1)) for face_pose in shapes_faces]

Step 3: Face-to-face comparisons: we write a function that compares a given face encoding to a collection of known face encodings. It returns an array of boolean (True/False) values indicating whether a match occurred.

# This function takes a list of known face encodings and a single face encoding to compare against them
def compare_face_encodings(known_faces, face):
   # Find the difference between each known face and the given face (that we are comparing)
   # Calculate the norm of the difference with each known face
   # Return an array with True/False values based on whether or not a known face matched the given face
   # A match occurs when the (norm) difference between a known face and the given face is less than or equal to the TOLERANCE value
   return (np.linalg.norm(known_faces - face, axis=1) <= TOLERANCE)
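As a quick sanity check, the comparison logic can be exercised on its own with made-up low-dimensional encodings (real dlib encodings have 128 numbers); the function below is a self-contained copy of the one above:

```python
import numpy as np

TOLERANCE = 0.6

def compare_face_encodings(known_faces, face):
    # True where the Euclidean distance to a known face is within TOLERANCE
    return (np.linalg.norm(known_faces - face, axis=1) <= TOLERANCE)

# Made-up 3-number encodings standing in for real 128-number dlib encodings
known_faces = np.array([
    [0.1, 0.2, 0.3],   # close to the query face -> should match
    [5.0, 5.0, 5.0],   # far from the query face -> should not match
])
query_face = np.array([0.1, 0.25, 0.3])

print(compare_face_encodings(known_faces, query_face))  # [ True False]
```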

Step 4: Look for a match: we write a function that takes a list of known face encodings, a list of person names (corresponding, in order, to those encodings), and a face to match. It uses the function from Step 3 to retrieve the name of the person whose face matches the given one.

# This function returns the name of the person whose image matches with the given face (or 'Not Found')
# known_faces is a list of face encodings
# names is a list of the names of people (in the same order as the face encodings - to match the name with an encoding)
# face is the face we are looking for
def find_match(known_faces, names, face):
   # Call compare_face_encodings to get a list of True/False values indicating whether or not there's a match
   matches = compare_face_encodings(known_faces, face)
   # Return the name of the first match
   count = 0
   for match in matches:
       if match:
           return names[count]
       count += 1
   # Return not found if no match found
   return 'Not Found'
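To see the matching logic in isolation, here is a self-contained sketch using made-up encodings and hypothetical names ('alice', 'bob'); the zip-based loop is equivalent to the counter-based one above:

```python
import numpy as np

TOLERANCE = 0.6

def compare_face_encodings(known_faces, face):
    return (np.linalg.norm(known_faces - face, axis=1) <= TOLERANCE)

def find_match(known_faces, names, face):
    # Return the name paired with the first matching known face, or 'Not Found'
    matches = compare_face_encodings(known_faces, face)
    for name, match in zip(names, matches):
        if match:
            return name
    return 'Not Found'

# Made-up encodings and hypothetical names
known_faces = np.array([[0.1, 0.2, 0.3], [5.0, 5.0, 5.0]])
names = ['alice', 'bob']

print(find_match(known_faces, names, np.array([0.1, 0.25, 0.3])))  # alice
print(find_match(known_faces, names, np.array([9.0, 9.0, 9.0])))   # Not Found
```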

Step 5: Obtain face encodings for all of the photos in the images subfolder

# Get path to all the known images
# Filtering on .jpg extension - so this will only work with JPEG images ending with .jpg
image_filenames = filter(lambda x: x.endswith('.jpg'), os.listdir('images/'))
# Sort in alphabetical order
image_filenames = sorted(image_filenames)
# Get full paths to images
paths_to_images = ['images/' + x for x in image_filenames]
# List of face encodings we have
face_encodings = []
# Loop over images to get the encoding one by one
for path_to_image in paths_to_images:
   # Get face encodings from the image
   face_encodings_in_image = get_face_encodings(path_to_image)
   # Make sure there's exactly one face in the image
   if len(face_encodings_in_image) != 1:
       print("Please change image: " + path_to_image + " - it has " + str(len(face_encodings_in_image)) + " faces; it can only have one")
       exit()
   # Append the face encoding found in that image to the list of face encodings we have
   face_encodings.append(face_encodings_in_image[0])

Step 6: Identify the recognized faces in every image in the test subfolder (one by one)

# Get path to all the test images
# Filtering on .jpg extension - so this will only work with JPEG images ending with .jpg
test_filenames = filter(lambda x: x.endswith('.jpg'), os.listdir('test/'))
# Get full paths to test images
paths_to_test_images = ['test/' + x for x in test_filenames]
# Get list of names of people by removing the .jpg extension from the image filenames
names = [x[:-4] for x in image_filenames]
# Iterate over test images to find match one by one
for path_to_image in paths_to_test_images:
   # Get face encodings from the test image
   face_encodings_in_image = get_face_encodings(path_to_image)
   # Make sure there's exactly one face in the image
   if len(face_encodings_in_image) != 1:
       print("Please change image: " + path_to_image + " - it has " + str(len(face_encodings_in_image)) + " faces; it can only have one")
       exit()
   # Find match for the face encoding found in this test image
   match = find_match(face_encodings, names, face_encodings_in_image[0])
   # Print the path of test image and the corresponding match
   print(path_to_image, match)

Once you are done with Steps 1 to 6, you can run the code from your terminal using the following commands.

cd {PROJECT_FOLDER_PATH}
python face.py

You will get output like the following:

('test/1.jpg', 'Sham')
('test/2.jpg', 'Not Found')
('test/3.jpg', 'Traversy')
('test/4.jpg', 'Maura')
('test/5.jpg', 'Mercc')

The name next to the filename is the name of the individual who matched the specified face. Please keep in mind that this may not work on all photos.

Now that you have seen how it's done, try it with your own photographs as a next step. Use pictures with the person's face fully visible for the best results with this code.
