Build a Machine Learning Model in your Browser using TensorFlow.js and Python

We will be using the ml5.js library in order to work with PoseNet.


ml5.js is a library built on top of TensorFlow.js, along with p5.js, another library that makes it easier to access the webcam in the browser.


ml5.js aims to make machine learning approachable for a broad audience of artists, creative coders, and students. The library provides access to machine learning algorithms and models in the browser with a simple syntax, building on top of TensorFlow.js.


For example, you can create an image classification model with MobileNet using ml5.js in under 5 lines of code. It's this simplicity of ml5.js that makes it so good for quick prototyping in the browser, and that is why we are also using it for our project.
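Such a classifier might look like this. This is a sketch following the classic ml5.js 0.x image classification API; the image element id is an assumption, and it needs to run in a browser with ml5.js loaded:

```javascript
// Load the MobileNet image classifier, then classify an image on the page.
// Assumes an <img id="image"> element exists in the HTML.
const classifier = ml5.imageClassifier('MobileNet', () => {
  classifier.classify(document.getElementById('image'), (err, results) => {
    if (err) return console.error(err);
    console.log(results); // an array of { label, confidence } predictions
  });
});
```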

Let’s get back to PoseNet.

Create a new file index.html and add the code below.
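A minimal version of that page might look like this. The specific CDN URLs and version numbers are assumptions; use whichever ml5.js and p5.js releases you prefer:

```html
<html>
  <head>
    <!-- p5.js and ml5.js loaded from their CDNs -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/p5.min.js"></script>
    <script src="https://unpkg.com/ml5@0.4.3/dist/ml5.min.js"></script>
  </head>
  <body>
    <!-- Our PoseNet code goes in this file -->
    <script src="posenet.js"></script>
  </body>
</html>
```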

This will create a basic HTML web page and load the necessary files. ml5.js and p5.js are loaded through their official URLs, while posenet.js is the file where we will write the code for working with PoseNet.

Now, we will write the JavaScript code for working with PoseNet.

Create a new file posenet.js in the same folder as index.html.

Here are the steps needed to make this work:

1. Load the PoseNet model and capture video from your webcam
2. Detect key points in body joints
3. Display the detected body joints
4. Draw the estimated skeleton of the body

Let's start with step 1.

Step 1: Load the PoseNet model and capture video from your webcam

We will load PoseNet using ml5.js. At the same time, p5.js enables us to capture video from the webcam using just a few lines of code.
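A sketch of this step, following the standard ml5.js PoseNet example (the variable names video and poses are assumptions carried through the rest of the article; this runs inside a browser page that loads p5.js and ml5.js):

```javascript
let video;
let poseNet;
let poses = [];

function setup() {
  createCanvas(640, 480);
  // p5.js: create a video element capturing from the webcam
  video = createCapture(VIDEO);
  video.size(width, height);

  // ml5.js: load PoseNet and tell it to work on the video input
  poseNet = ml5.poseNet(video, modelReady);
  // Store every newly detected pose in the poses variable
  poseNet.on('pose', (results) => {
    poses = results;
  });
  // Hide the raw video element; we will draw the feed on the canvas instead
  video.hide();
}

function modelReady() {
  console.log('Model Loaded');
}
```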

The most important things to note in the above code block are:

- createCapture(VIDEO): a p5.js function that is used to create a video element by capturing video through the webcam
- ml5.poseNet(video, modelReady): we use ml5.js to load the PoseNet model. By passing in the video, we are telling the model to work on video input
- poseNet.on(): this function is executed whenever a new pose is detected
- modelReady(): when PoseNet has finished loading, we call this function to display the model's status

Step 2: Detect key points in body joints

The next step is to detect the poses.

You might have noticed in the previous step that we are saving every detected pose in the poses variable by calling poseNet.on(). This function runs continuously in the background. Whenever a new pose is found, it gives the location of the body joints in the following format:

- 'score' refers to the confidence of the model
- 'part' denotes the body joint/key point that is detected
- 'position' contains the x and y position of the detected part

We do not have to write code for this part since it is generated automatically.
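For illustration, a single detected key point has roughly this shape (the values shown here are made up):

```javascript
{
  score: 0.98,                       // confidence of the model
  part: 'nose',                      // which body joint was detected
  position: { x: 253.4, y: 121.7 }   // x and y location in the frame
}
```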

Step 3: Display the detected body joints

We know the detected body joints and their x and y locations. Now, we just need to draw them over the video to display them. We've seen that PoseNet gives us a list of detected body joints, each with a confidence score and its x and y location. We will use a threshold of 20% confidence (keypoint.score > 0.2) in order to draw a key point. Here is the code to do this.
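A sketch of that function, based on the standard ml5.js PoseNet example (it relies on the poses variable and the p5.js drawing functions):

```javascript
// Draw a small ellipse over every key point detected with >20% confidence
function drawKeypoints() {
  for (let i = 0; i < poses.length; i++) {
    const pose = poses[i].pose;
    for (let j = 0; j < pose.keypoints.length; j++) {
      const keypoint = pose.keypoints[j];
      if (keypoint.score > 0.2) {
        fill(255, 0, 0);
        noStroke();
        ellipse(keypoint.position.x, keypoint.position.y, 10, 10);
      }
    }
  }
}
```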

Step 4: Draw the estimated skeleton of the body

Along with the key points or body joints, PoseNet also detects the estimated skeleton of the body. We can use the poses variable to draw the skeleton.
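A sketch of the skeleton-drawing function, again following the standard ml5.js PoseNet example (each skeleton entry is a pair of connected key points):

```javascript
// Draw lines between every pair of connected key points in the skeleton
function drawSkeleton() {
  for (let i = 0; i < poses.length; i++) {
    const skeleton = poses[i].skeleton;
    for (let j = 0; j < skeleton.length; j++) {
      const partA = skeleton[j][0];
      const partB = skeleton[j][1];
      stroke(255, 0, 0);
      line(partA.position.x, partA.position.y,
           partB.position.x, partB.position.y);
    }
  }
}
```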

Here, we looped over the detected skeleton and created lines joining the key points. The code is fairly straightforward again.

Now, the last step is to call the drawSkeleton() and drawKeypoints() functions repeatedly, along with the video feed that we are capturing from the webcam. We can do that using the draw() function of p5.js, which is called directly after setup() and executes repeatedly.
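A minimal sketch of that loop, assuming the video, drawKeypoints(), and drawSkeleton() definitions from the earlier steps:

```javascript
// p5.js calls draw() repeatedly after setup()
function draw() {
  // Render the current webcam frame on the canvas
  image(video, 0, 0, width, height);
  // Overlay the detected key points and skeleton
  drawKeypoints();
  drawSkeleton();
}
```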

Next, go to your terminal window, navigate into your project folder, and start a Python server:

python3 -m http.server

Then go to your browser and open the following address:

http://localhost:8000/

Voila! Your PoseNet should be nicely detecting your body pose (if you have followed all the steps correctly).

Here is how my model looks:

End Notes

You can see why I love TensorFlow.js. It is incredibly effective and doesn't even require you to worry about complex installation steps while building your models.

TensorFlow.js shows a lot of promise for making machine learning more accessible by bringing it to the browser. And at the same time, it has advantages like data privacy, interactivity, etc. This combination makes it a very powerful tool to keep in a data scientist's toolbox, especially if you want to deploy your machine learning applications.

In the next article, we will explore how to apply transfer learning in the browser and deploy your machine learning or deep learning models using TensorFlow.js.
The project that we did with PoseNet can be taken even further to build a pose recognition application by just training another classifier over it.

I encourage you to go ahead and try that! Post in the comments below if you have built something interesting. All the code for this article is available on GitHub.
