Facial Recognition SPA for BNK48 Idol group using React and face-api.js

That’s why an idol group with 30–50 members is a good size for testing (not too few, not too many). And we can easily find their portrait photos, with faces at various angles, on the internet, especially on their Facebook fan pages.

What are we going to do?

In this project, we will make a Single Page App with React and the face-api.js library to detect and recognize idol faces.

Vincent did all the hard parts for us: his API comes with pre-trained face detection, face landmark, face alignment, and face recognition models, so we don’t have to train the models ourselves.

We don’t even need to write a deep learning model in TensorFlow.

Indeed, you don’t really need to know how deep learning or CNNs work to make this App.

All you need is at least a basic grasp of JavaScript and React.

If you can’t wait and want to see how it looks, visit my demo page here.

And the complete repo of the App is here.

The code we’re going to write in this tutorial is a simpler version, but don’t worry, I will share it in another repo too.

A Brief Explanation of Facial Recognition Systems

If you already know how they work, or don’t really care, you can skip directly to the coding part.

Now, imagine you go to a government office and ask for a copy of a personal document.

The officer behind the counter will usually ask you to prove who you are.

You show them your ID card.

She looks at your name and photo, then checks your face to make sure you are the person you claim to be.

Likewise, a facial recognition system should already have your name stored together with your reference facial information.

Then, when you feed it another photo to identify, the system will first try to detect whether any face is present in the image; at this step, the Face Detection Network does the work.

The model I use in this project is the Tiny Face Detector, for its tiny size and mobile friendliness.

(The API also provides SSD MobileNet and MTCNN face detectors, but let’s forget about them for now.)

Back to our system.

Once a face (or faces) is detected, the face detector model returns a bounding box for each face, telling us where the face is in the image.

We then use the Face Landmark Network to mark 68 face landmark points and use the alignment model to make sure the face is centered before feeding it to the Face Recognition Network.

The Face Recognition Network is another neural network (a ResNet-34-like network, to be precise) that returns a Face Descriptor (a feature vector containing 128 values) that we can use to compare faces and identify the person in the image.

Just like a fingerprint, the Face Descriptor is a unique value for each face.

Face Descriptors of the same person from different image sources should be very close when we compare them.

In this project we use Euclidean distance for the comparison.

If the distance is less than the threshold we set, we determine that they are likely to be the same person (the lower the distance, the higher the confidence).

Usually, the system stores the Face Descriptor of each person as a reference, together with his or her name as a label.

When we feed in a query image, the system compares the Face Descriptor of the new image with all the reference descriptors and identifies the person with the lowest distance.

If none of the comparisons is lower than the threshold, the person is identified as Unknown.
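The matching step described above can be sketched in plain JavaScript. This is an illustration only, with made-up short descriptors (real face-api.js descriptors hold 128 values); the library ships its own matcher, which we will use later.

```javascript
// Euclidean distance between two descriptors of equal length
function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Compare a query descriptor against every named reference and keep
// the closest one; fall back to 'Unknown' above the threshold.
function identify(queryDescriptor, references, threshold = 0.5) {
  let best = { label: 'Unknown', distance: Infinity };
  for (const { label, descriptor } of references) {
    const distance = euclideanDistance(queryDescriptor, descriptor);
    if (distance < best.distance) best = { label, distance };
  }
  return best.distance < threshold ? best.label : 'Unknown';
}
```

The lower the distance, the more confident the match; a query far from every reference comes back as Unknown.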

Let the Coding Begin!

There are two functions we want this App to perform. One is to identify an idol from an input image file, and the other uses live video as input.

Let’s start by creating the app with create-react-app, installing react-router-dom, and starting the App.

```
npx create-react-app react-face-recognition
cd react-face-recognition
npm i react-router-dom
npm start
```

Open your browser and go to http://localhost:3000/. If you see the starting page with the React logo, you’re good to continue.

Now open the project folder with any code editor you like. You should see a folder structure like this.

```
react-face-recognition
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   ├── favicon.ico
│   ├── index.html
│   └── manifest.json
└── src
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    └── serviceWorker.js
```

Now go to src/App.js and replace the code with the following one.

src/App.js

All we have here is importing the Home component and creating one route to "/" as our landing page.
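The embedded gist with the actual file is not shown here; a hedged reconstruction of src/App.js, assuming react-router-dom v4 and the history package (the deploy section later refers to createHistory()), might look like this:

```javascript
import React from 'react';
import { Router, Route } from 'react-router-dom';
import createHistory from 'history/createBrowserHistory';
import Home from './views/Home';

const history = createHistory();

// One route to "/" rendering the Home component
const App = () => (
  <Router history={history}>
    <Route exact path="/" component={Home} />
  </Router>
);

export default App;
```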

We will create this component very shortly.

Let’s start by creating a new folder src/views, with a new file Home.js inside it. Then put the code below in the file and save.

src/views/Home.js

We just create two links: Photo Input linking to "localhost:3000/photo" and Video Camera linking to "localhost:3000/camera".

If everything goes right, you should see something like the image below on the landing page.

Landing Page

Face API

Before we continue making new pages, we want to install face-api.js and create our API file to connect React with the library.

Now go back to the console and install the library.

```
npm i face-api.js
```

The library comes with TensorFlow.js and all the components we want, except the model weights.

If you don’t know what they are, model weights are neural network weights that have been trained on a large dataset, in this case thousands of human face images.

Since many smart people have already trained the models for us, all we need to do is grab the necessary weights and put them in our project manually.

You will find all the weights of this API here.

Now let’s make a new folder public/models to hold all the model weights, then download all the necessary weights below into it. (As I said, we will use the Tiny Face Detector model for this project, so we don’t need the SSD MobileNet and MTCNN models.)

Necessary Models

Make sure you have all the weights under the public/models folder as shown below; our models will not work without the proper weights.

```
react-face-recognition
├── README.md
├── node_modules
├── package.json
├── .gitignore
└── public
    ├── models
    │   ├── face_landmark_68_tiny_model-shard1
    │   ├── face_landmark_68_tiny_model-weights_manifest.json
    │   ├── face_recognition_model-shard1
    │   ├── face_recognition_model-shard2
    │   ├── face_recognition_model-weights_manifest.json
    │   ├── tiny_face_detector_model-shard1
    │   └── tiny_face_detector_model-weights_manifest.json
    ├── favicon.ico
    ├── index.html
    └── manifest.json
```

Now go back and create a new folder for the API, src/api, with a new file face.js inside it.

What we want to do is load the models, create a function that feeds an image to the API and returns all face descriptions, and compare descriptors to identify faces.

We will export these functions and use them in React components later on.

src/api/face.js

There are two important parts in this API file. The first is loading the models and weights with the function loadModels(). We only load the Tiny Face Detector model, the Face Landmark Tiny model, and the Face Recognition model at this step.

The other part is the function getFullFaceDescription(), which receives an image blob as input and returns full face descriptions. This function uses the API function faceapi.fetchImage() to fetch the image blob into the API. Then faceapi.detectAllFaces() takes that image and finds all the faces in it, .withFaceLandmarks() plots the 68 face landmarks, and .withFaceDescriptors() returns the face features as a Float32Array of 128 values.

It’s worth mentioning that I use an inputSize of 512 pixels for image input, and will use 160 pixels for video input later on, as recommended by the API.
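The gist embed with the file’s code is not reproduced here, so below is a minimal sketch of what src/api/face.js can look like based on the description above; the MODEL_URL constant and the scoreThreshold value are my assumptions.

```javascript
import * as faceapi from 'face-api.js';

// Where the downloaded weights live (see public/models above) — assumed path
const MODEL_URL = process.env.PUBLIC_URL + '/models';

export async function loadModels() {
  // Load only the three models this project needs
  await faceapi.loadTinyFaceDetectorModel(MODEL_URL);
  await faceapi.loadFaceLandmarkTinyModel(MODEL_URL);
  await faceapi.loadFaceRecognitionModel(MODEL_URL);
}

export async function getFullFaceDescription(blob, inputSize = 512) {
  // 512 px for still images; 160 px for video frames
  const options = new faceapi.TinyFaceDetectorOptions({
    inputSize,
    scoreThreshold: 0.5, // assumed value
  });
  const img = await faceapi.fetchImage(blob);
  return faceapi
    .detectAllFaces(img, options)
    .withFaceLandmarks(true) // true = use the tiny landmark model
    .withFaceDescriptors();
}
```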

Now I want you to save the image below to a new folder src/img and name it test.jpg. This will be the test image for our App. (In case you don’t know, she is Cherprang, a member of BNK48, by the way.)

Save this image as src/img/test.jpg

Now let’s make a new file, src/views/ImageInput.js.

This will be the view component that inputs and displays our image file.

src/views/ImageInput.js

At this point, this component only displays the test image src/img/test.jpg and starts loading the API models into your browser, which will take a few seconds. After that, the image is fed to the API to get the full face descriptions. We store the returned fullDesc in state for later use, and we can also see its details with console.log.

But before that, we have to import the ImageInput component into our src/App.js file and create a new Route for /photo. Here we go.

src/App.js with new Route and Component

Now, if you go to the landing page http://localhost:3000 and click Photo Input, you should see the photo displayed. If you check your browser’s console, you should see the Full Face Description of this image, like this.

Face Detection Box

As you can see, the description contains all the face information we need in this project, including descriptor and detection. Inside detection is the box information: coordinates such as x, y, top, bottom, left, right, height, and width.

The face-api.js library comes with a function to draw face detection boxes using an HTML canvas, which is really nice. But since we are using React, why don’t we draw the face detection box with CSS? Then we can manage the box and the recognition display all in the React way.

What we want to do is use the detection’s box information to overlay a face box on top of the image. We can also display the name of each face the App recognizes later on. This is how I add drawBox to the ImageInput component. Let’s add an input tag as well, so that we can change the input image.

src/views/ImageInput.js

Using inline CSS in React, we can position all the face boxes overlaying the image like this.

If you try a photo with more faces, you will see more boxes as well.
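To make the overlay idea concrete, here is a hypothetical helper (not taken from the original code) that turns a detection box into the inline style for an absolutely positioned div; it assumes the image is rendered at its natural pixel size inside a position: relative container.

```javascript
// Map a detection box ({ x, y, width, height } in image pixels) to an
// inline style object for an absolutely positioned overlay <div>.
function boxStyle(box) {
  return {
    position: 'absolute',
    border: '2px solid #4fc3f7', // box color is arbitrary
    left: Math.round(box.x),
    top: Math.round(box.y),
    width: Math.round(box.width),
    height: Math.round(box.height),
  };
}
```

In the component you would render one such div per detection, e.g. `<div style={boxStyle(detection.box)} />`, plus a p tag for the name later on.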

Facial Recognition

Now comes the fun part. To identify a person, we need at least one reference image from which to extract the 128-value feature vector, or descriptor.

The API has a class, LabeledFaceDescriptors, to create a label from each person’s descriptors and name. These labels are fed to the API together with the query descriptor to match the person. But before that, we want to prepare a profile of names and descriptors.

Face Profile

We already have one reference image for Cherprang. So, let’s use its descriptor to make a profile.

What we want to do now is create a new folder and JSON file, src/descriptors/bnk48.json. This file will contain each member’s name and the descriptors from their reference photos. Here is the first sample file, with only one descriptor from the image.
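The sample gist is not reproduced here; as a rough illustration, the profile could be shaped like this (the key, display name, and descriptor values are made up, and a real descriptor holds 128 floats, not the three shown):

```json
{
  "cherprang": {
    "name": "Cherprang",
    "descriptors": [
      [-0.1489, 0.0725, 0.0571]
    ]
  }
}
```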

Sample face profile

If we have photos of all the members, we can add descriptors and names one by one to complete our face profile.

And you know what? I already made one. I used 5–10 photos of each member to create this complete face profile. So you can just download this file and replace src/descriptors/bnk48.json, easy peasy.

(Sorry, I use Thai and hiragana as display names.) The file size is around 1 MB for all the members, which is not bad for our test App. But in the real world, you might want to store the face profiles in a database, so that you don’t have to worry about file size anymore.

Face Matcher

Next, we want to create labeledDescriptors and a faceMatcher for the face recognition task. Now go back to src/api/face.js and add the function below to your file.

src/api/face.js: add function createMatcher

This function receives the face profile (the JSON file) as input and creates labeledDescriptors from each member’s descriptors, with their name as the label. Then we can create and export a faceMatcher built from those labels.

You might notice that we configure maxDescriptorDistance as 0.5. This is the Euclidean distance threshold that determines whether a reference descriptor and a query descriptor are close enough to call a match. The API default is 0.6, which is good enough for general cases, but I found 0.5 more precise and less error-prone, as some idols’ faces are quite similar. It’s up to you how you tune this parameter.
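To show what the threshold does, here is a pure-JavaScript sketch of what the faceMatcher effectively computes (the real code wraps this in faceapi.LabeledFaceDescriptors and faceapi.FaceMatcher); the profile shape is a hypothetical mirror of the JSON file, with made-up short descriptors.

```javascript
// Euclidean distance between two descriptors of equal length
function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// For each member, the distance is the minimum over all of that
// member's reference descriptors; matches farther than
// maxDescriptorDistance come back as Unknown.
function createMatcher(profile, maxDescriptorDistance = 0.5) {
  const members = Object.values(profile); // [{ name, descriptors }, ...]
  return function findBestMatch(queryDescriptor) {
    let best = { label: 'Unknown', distance: Infinity };
    for (const { name, descriptors } of members) {
      for (const ref of descriptors) {
        const distance = euclideanDistance(queryDescriptor, ref);
        if (distance < best.distance) best = { label: name, distance };
      }
    }
    return best.distance <= maxDescriptorDistance
      ? best
      : { label: 'Unknown', distance: best.distance };
  };
}
```

Lowering maxDescriptorDistance from 0.6 to 0.5 simply shrinks the region in which a query is accepted as a known member.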

Since our function is ready, let’s go back to src/views/ImageInput.js to finish our code. Here is the final version.

Final code for ImageInput.js

In this final code, we import the createMatcher function from face.js and create a faceMatcher with the face profile we prepared. Inside the function handleImage(), after we get fullDesc from the image, we map over the descriptors and find the best match for each face. We then use a p tag and CSS to display the best match under each face detection box. Just like this.

Faces detected and recognized correctly

If you have already downloaded the complete face profile, you can try changing the image to this one. I hope you see all the faces detected with the correct matches!

Try this image

Live Video Input

This section will guide you through using live video as input to face-api.js with react-webcam.

Let’s start by installing the library.

```
npm i react-webcam
```

Again, before making the new view component, we need to add one more Route for video input in src/App.js.

We will create the VideoInput component very shortly.

Add VideoInput Component and Route

Video Input Component

Let’s create a new file src/views/VideoInput.js, place all the code below in the file, and save. This is the complete code for this component (no more step by step; explanation below).

The whole face detection and recognition mechanism is the same as in the ImageInput component, except that the input is a screenshot captured from the webcam every 1500 ms. I set the screen size to 420×420 pixels, but you can try smaller or larger (a larger size takes more time to process face detection).

Inside the function setInputDevice, I simply check whether the device has one camera or two (or more). If there is only one camera, our App assumes it’s a PC and captures from the webcam with facingMode: user; if there are two or more, it might be a smartphone, so we capture with the back camera using facingMode: { exact: ‘environment’ }.

I use the same function to draw the face detection box as in the ImageInput component.

Actually, we could factor it out into a separate component, so that we don’t have to repeat it twice.

Now our App is done.

You can test VideoInput with your face, but it will likely identify you as Unknown, or sometimes mistakenly identify you as some idol. This is because the system will recognize any face whose Euclidean distance to a reference is less than 0.5.

Conclusion & Lessons Learned

The App can detect and recognize idol faces quite accurately, but errors still happen sometimes. This may be because the subject is not facing the camera directly, their face is tilted, or the photo was edited by some other app. Some idols look alike, which sometimes confuses the App. I found that an idol’s face can look different across different sources or lighting setups. Idols with glasses or heavy make-up can confuse our App as well. I have to admit the system is not perfect and still has room to improve.

I’ve tested with Chrome and Safari, and it works fine on PC. I assume it should work with IE or Firefox as well. Testing on an Android smartphone works well for both Image Input and Video Input, but react-webcam doesn’t work on the iPhone due to a security issue, which I’m still looking for a way to work around. Older phones tend not to work properly with TensorFlow, as it requires enough computing power to run the neural networks.

Deploy to GitHub Pages

You can deploy this App to any static hosting, but this section will guide you through deploying this React App to GitHub Pages, with a trick or two. You will need a GitHub account; if you don’t have one, go make one. It’s free.

First of all, let’s install the gh-pages library.

```
npm i gh-pages
```

Then we need to add { basename: process.env.PUBLIC_URL } inside createHistory() in src/App.js, like this.

Now go to your GitHub and create a new repository with the App name, in our case react-face-recognition, then copy the git URL to add to our project later.

Next, open package.json and add "homepage" with your GitHub account and App name, like this.

```
"homepage": "http://YOUR_GITHUB_ACCOUNT.github.io/react-face-recognition"
```

Don’t close the package.json file just yet, because we will add predeploy and deploy command lines under "scripts", like this.

"scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject", "predeploy": "npm run build", "deploy": "gh-pages -d build"}Now you can save the file and go back to your console terminal, then run git commands to upload code to your Github repository and runnpm run deploy to deploy to Github Pages.

The page should be published with URL that you set ashttp://YOUR_GITHUB_ACCOUNT.

github.

io/react-face-recognitiongit add .

git commit -m "make something good"git remote add origin https://github.

com/YOUR_GITHUB_ACCOUNT/react-face-recognition.

gitgit push -u origin masternpm run deployYou can check Github Page of this tutorial here, and also complete repo.

I hope you enjoy my tutorial and try making a React facial recognition app of your own. If you find this tutorial quite simple and want to see a more complete version, please visit my demo page here, and also the repo.
