Weight Imprinting on the Coral Edge TPU

is equivalent to maximizing the inner product xᵀp(y), which is equal to the cosine similarity since we have unit-length vectors: for unit vectors, ‖x − p(y)‖² = 2 − 2 xᵀp(y), so minimizing the distance maximizes the inner product.
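This equivalence is easy to check numerically; a minimal NumPy sketch, with arbitrary random vectors standing in for an embedding and a proxy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)
p = rng.normal(size=8)
x /= np.linalg.norm(x)  # project both vectors onto the unit sphere
p /= np.linalg.norm(p)

# For unit vectors: ||x - p||^2 = 2 - 2 * <x, p>,
# so minimizing the squared distance maximizes the inner product,
# which for unit vectors coincides with the cosine similarity.
dist_sq = np.sum((x - p) ** 2)
inner = np.dot(x, p)
assert np.isclose(dist_sq, 2 - 2 * inner)
```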

To close the circle, we insert the inner product into the NCA loss of the previous section:

L(x) = −log( exp(xᵀ p(y)) / Σ_c exp(xᵀ p(c)) )

If we compare this to the loss of a general softmax classifier,

L(x) = −log( exp(w_yᵀ x + b_y) / Σ_c exp(w_cᵀ x + b_c) ),

we can see that these two are very similar except for the bias term. This point is crucial.

We derived the equation of the classification model by starting with metric learning.

Furthermore, this means we can either train a classifier and learn the weights using a cross-entropy loss, or we can simply choose good proxies.

And that is precisely what Weight Imprinting does: we set proxies for specific classes.

Now we go back to a general classification network.

The fully connected layer at the end is a matrix where each column can be considered as a proxy.

Instead of learning the fully connected layer, or in other words learning the weights within the matrix, we can directly construct a matrix with proxies.

Each column of the matrix is then a proxy representing one class.
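As a sketch of this idea, with random unit vectors standing in for real embeddings (all names and dimensions here are illustrative):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(1)

# One unit-length embedding per class, used directly as a proxy
proxies = normalize(rng.normal(size=(3, 16)))

# "Imprinting": the proxies become the columns of the final FC layer
W = proxies.T  # shape (embedding_dim, num_classes)

# Classifying a new embedding = taking inner products with every column
x = normalize(rng.normal(size=16))
scores = x @ W  # cosine similarities to each class proxy
predicted_class = int(np.argmax(scores))
```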

Weight Imprinting Scheme.

Source: https://arxiv.org/abs/1712.07136

The image above visualizes the underlying network structure.

The embeddings are calculated using a pre-trained network.

In the paper, the authors use an Inception V1.

Then the embeddings are normalized to unit length to map them to the unit sphere.

Now, these normalized embeddings are either used directly for classification or imprinted as new weights to extend your model.

The image below shows two unit spheres in a 2-dimensional space.

The colored dots indicate the imprinted weights.

On the left side, we have three imprinted weights, on the right side, a fourth one was added.

The lines within the circles are the decision boundaries, which lie exactly halfway between two dots.

So adding a new imprinted weight vector adds a dot and alters the decision boundaries.
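In 2D this is easy to reproduce; a small sketch with hand-picked angles (all values here are illustrative):

```python
import numpy as np

def unit(theta_deg):
    """Unit vector on the circle at the given angle."""
    t = np.deg2rad(theta_deg)
    return np.array([np.cos(t), np.sin(t)])

# Three imprinted weights, one row per class
W = np.stack([unit(90), unit(210), unit(330)])

x = unit(45)  # a query embedding on the unit circle
before = int(np.argmax(W @ x))  # closest proxy is the one at 90 degrees -> class 0

# Imprint a fourth weight: a single new row, no retraining needed
W = np.vstack([W, unit(30)])
after = int(np.argmax(W @ x))   # the new 30-degree proxy is now closer -> class 3
```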

Visualization of decision boundaries.

Source: https://arxiv.org/abs/1712.07136

If we have more data, this method can be extended by two main strategies: average embedding and fine-tuning.

Average embedding uses the average over multiple embeddings.

However, this only makes sense for unimodal data.

For example, cubes with different colors are not unimodal.

As stated in the paper, averaging over augmented versions of an image does not improve performance.

This is probably because the embedding extractor was itself trained with data augmentation and should therefore be invariant to these transformations.
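A sketch of the average-embedding strategy, with random unit vectors standing in for the embeddings of several examples of one class; note the re-normalization at the end, since the mean of unit vectors is generally shorter than unit length:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical unit-length embeddings of five examples of one class
embs = rng.normal(size=(5, 16))
embs /= np.linalg.norm(embs, axis=1, keepdims=True)

# Average first, then re-normalize to get back onto the unit sphere
mean = embs.mean(axis=0)
proxy = mean / np.linalg.norm(mean)
```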

The other option is to fine-tune the network.

Then the fully connected layer is initialized with the calculated proxies.

From a theoretical standpoint, this method has a few advantages compared to the regular training of a neural network.

First and foremost, it can be used for Low-Shot Learning.

While training a Neural Network requires a large amount of data, this method often works with just one image per class.

Another advantage that you should be aware of is its flexibility.

This method allows you to add new classes easily whenever you want or need to.

Building an Edge TPU powered security camera

Introduction

Inspired by the teachable machine project, we think that Weight Imprinting is an exciting technology to build smart security cameras.

While integrating a dedicated Neural Network based detection model into a system can guarantee high predictive power, it also makes the system very inflexible from a user perspective.

Adapting the system to new use-cases requires data gathering, re-training, and re-deploying of the model.

These steps are not only time-consuming but also require expert knowledge.

Using Weight Imprinting, on the other hand, lets the system easily adapt to new use-cases while still ensuring system accuracy.

To give you an idea, here are a few possible use-cases where such a system could be used:

Detect if someone enters your apartment.

Detect if your parking lot is being blocked.

Detect if your pets are leaving the house.

We wanted to build a system which is not only easy to configure but which is also integrated into an existing eco-system.

That’s why our camera is compatible with Apple HomeKit.

Before getting more into detail about the project let’s get a rough overview of the exciting piece of hardware we are using throughout the project: the Coral Edge TPU.

Coral Edge TPU

The Edge TPU comes in two flavors.

You can either buy a self-contained development board or a USB accelerator (we’ll be using the latter), which connects to existing systems like a Raspberry Pi or a PC.

The magic behind the Edge TPU is a custom-designed ASIC by Google which enables high-performance inference at a competitive price point ($80).

Below is a performance chart.

As one can see, the Edge TPU easily outperforms Desktop CPUs.

Taking into account the low price point this is a pretty attractive package.

Edge TPU Performance.

Source: https://coral.withgoogle.com/tutorials/edgetpu-faq/

The Edge TPU only supports TensorFlow.

To run your models, you need to do three things:

1. Train your model using TensorFlow. Be aware: during Beta, Coral only supports a few network architectures.

2. Convert and quantize the model to the tflite format.

3. Finally, compile it using the web-based Edge TPU model compiler.

Having to upload a model to a Google server before being able to use it on the device may be a deal-breaker for some.

Hopefully, Google will provide an offline converter in the future.

Using the Edge TPU

Setting up the Coral on a Raspberry Pi is very simple. Just run the install script and you are done:

wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz
tar xzf edgetpu_api.tar.gz
cd python-tflite-source
bash ./install.sh

Keep in mind that not all hardware architectures are supported. When trying to install the package on a Raspberry Pi Zero W, we got a “platform not supported” error.

The Python API is in an early development stage.

Only the most crucial things are implemented.

The documentation is somewhat scarce.

Luckily, doing Weight Imprinting on Coral is straightforward! The Python library offers a high level of abstraction.

You only need to write a couple of lines of code.

A toy example is outlined below.

# Imprint weights and save model
from edgetpu.learn.imprinting.engine import ImprintingEngine

train_dict = <YOUR TRAINING DATA>
engine = ImprintingEngine(<YOUR EMBEDDING EXTRACTOR>)
label_map = engine.TrainAll(train_dict)
engine.SaveModel("classify_model.tflite")

# Now use it on new pictures
from edgetpu.classification.engine import ClassificationEngine

engine = ClassificationEngine("classify_model.tflite")
predictions = engine.ClassifyWithImage(<IMAGE>)

The first step is to load an appropriate embedding extractor into the ImprintingEngine.

You can download a MobileNetV1 based extractor here.

Next, we preprocess our training data by creating a dictionary where keys are class names and values are lists of flattened NumPy arrays of resized images.
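A sketch of that preprocessing step, using a stand-in loader with random pixels instead of real camera frames (the file names, the 224x224 input size, and the loader itself are assumptions for illustration):

```python
import numpy as np

def load_resized(path, size=(224, 224)):
    """Stand-in for decoding an image file and resizing it to the
    extractor's input resolution; returns random RGB pixels here."""
    rng = np.random.default_rng(abs(hash(path)) % (2 ** 32))
    return rng.integers(0, 256, size=size + (3,), dtype=np.uint8)

# Keys are class names, values are lists of flattened image arrays
train_dict = {
    "pedelec":    [load_resized("pedelec_1.jpg").flatten()],
    "background": [load_resized("bg_1.jpg").flatten(),
                   load_resized("bg_2.jpg").flatten()],
}
```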

Imprinting the weights is just one line of code: engine.TrainAll().

The resulting and saved model can then be consumed by the ClassificationEngine to make predictions on pictures.

Right now, one can only create a new classification model based on an embedding extractor.

Extending an existing classifier with new classes using Weight Imprinting is not possible.

A Coral powered security camera

Our goal was to come up with a prototype which only requires a few components:

Raspberry Pi 3 B+
Raspberry Pi v2.1 CSI Camera
Coral Edge TPU USB Accelerator
(Optional) for the enclosure: Makerbeam aluminum profiles and some custom-designed 3D-printed parts

To set up the camera, you only need to clone the repo linked at the end of this article.

Install the necessary Python packages and start the application.

Enter the device key displayed in the terminal to add the camera to your iOS home app.

Next, open the web GUI and provide some examples of things it should detect.

Also, don’t forget to add some background pictures where no alarm should be triggered.

Finally, click on the imprint weights button and the system is ready to go! Below you can see the teaching process in action.

In this example, we trained the system to detect if our favorite Pedelec gets taken out of the office.

Imprinting new weights using the Web GUI

Now let’s check if the system works as intended.

As you can see, the alarm is triggered as soon as the Pedelec leaves the field of view, and a message is pushed to our iPhone.

If the model does not work as intended: no worries! Collect some additional examples and imprint the weights again.

The Edge TPU powered security camera in action

Final thoughts and practical evaluation

We tested the camera on a couple of other tasks and were pretty impressed by the overall accuracy.

The system does a great job of detecting approaching people, open windows, or blocked parking spots.

Weight Imprinting is of course not the solution for every machine learning task.

But if flexibility and on-device training are vital to you, then take a look at it: it is worth investigating! We’ll publish the source code on GitHub in the coming days.

Also, feel free to share your own Edge TPU based projects in the comment section below.
