Intelligent, realtime and scalable video processing in Azure

René Bremer · Jun 18

1. Introduction

In this tutorial, an end to end project is created in order to do intelligent, realtime and scalable video processing in Azure.

In this project, a capability is created that can detect graffiti and identify wagon numbers using videos of trains.

Properties of the project are as follows:
- Intelligent algorithms to detect graffiti and identify wagon numbers
- Realtime and reliable way of processing videos from edge to cloud
- Scalable for exponential growth of number of videos
- Functional project that can be optimized to any video processing capability

The architecture of the project can be depicted as follows:

1. Architecture overview

In this blog, this architecture is realized in 4 steps and a conclusion as follows:

2. Setup Azure Cognitive Services to detect graffiti on trains and OCR to identify wagon numbers
3. Setup Azure Functions for parallel processing
4. Visualize output using Power BI (optional)
5. Setup IoT Edge architecture (optional)
6. Conclusion

In the next chapter, Azure Cognitive Services will be deployed.

These services will be used to detect graffiti and to identify wagon numbers on trains using OCR.

2. Setup Azure Cognitive Services

Azure Cognitive Services are a set of APIs that can be infused in your apps.

They contain intelligent algorithms for speech recognition, object recognition in pictures and language translation.

The models are mostly pretrained and can be integrated “off the shelf” in your project.

In this project, two APIs will be used:
- Custom Vision, which will be used to detect graffiti on trains. This model needs pictures of trains with and without graffiti to learn. This step can be seen as "adding the last custom layer in the neural network of an image recognition model that was already trained in Azure Cognitive Services".
- Computer Vision OCR, which will be used to identify wagon numbers on trains. This model does not require training and can be taken off the shelf.

In the remainder of this chapter, the following steps will be executed:

2a. Train and deploy Custom Vision API to detect graffiti
2b. Deploy OCR Computer Vision API

And the following part of the architecture is realized:

2. Cognitive services to detect graffiti and identify wagon number

2a. Train and deploy Custom Vision API to detect graffiti

Go to the Custom Vision website and sign in with your Azure AD credentials.

Once you are logged in, create a Custom Vision project with properties "classification" and "multiclass (single tag per image)", see also below.

2a1. Create Custom Vision API project

Then download the images in the folder CognitiveServices/CustomVisionImages from the following git project: https://github.com/rebremer/realtime_video_processing.git

As a first step, add the graffiti pictures with the tag graffiti to your project.

Secondly, add the no_graffiti pictures with the tag graffiti and then NEGATIVE to your project.

Then train the model using the fast track, see also below.

2a2. Train Custom Vision API project

Once you have trained the model, you can test it by clicking on "Quick Test" and then selecting an image from the test folder of the git project that was downloaded earlier.
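Besides the Quick Test in the portal, the prediction endpoint can also be called programmatically. Below is a minimal sketch using the Python requests library; the region, project id, published iteration name and prediction key are placeholders, and the v3.0 URL format is an assumption — the exact values and URL can be copied from the "Prediction URL" tab of your Custom Vision project.

```python
# Minimal sketch: classify a local image with the Custom Vision prediction API.
# Endpoint, project id, iteration name and key below are placeholders; the
# exact prediction URL is shown in the Custom Vision portal.
import requests

ENDPOINT = "https://westeurope.api.cognitive.microsoft.com"  # assumption: your region
PROJECT_ID = "<your project id>"
ITERATION = "<your published iteration name>"
PREDICTION_KEY = "<your prediction key>"

url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
       f"/classify/iterations/{ITERATION}/image")
headers = {
    "Prediction-Key": PREDICTION_KEY,
    "Content-Type": "application/octet-stream",
}

with open("test_image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=f.read())
response.raise_for_status()

# Each prediction carries a tagName (e.g. "graffiti") and a probability
for prediction in response.json()["predictions"]:
    print(prediction["tagName"], prediction["probability"])
```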

2b. Deploy OCR Computer Vision API

Go to the resource group that was created in step 2a to deploy your OCR Computer Vision API.

Click on the add button and type “Computer Vision” in the search box.

Select F0 as pricing tier.

After you have deployed your Computer Vision API, the resource group will look as follows.

2b1. Resource group after Custom Vision API and Computer Vision for OCR are deployed

In the next chapter, the APIs will be used to detect graffiti and identify wagon numbers from videos.
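Before moving on, a quick sanity check of the OCR API can be done by posting a picture of a wagon number directly to the ocr endpoint. A minimal sketch with the Python requests library is shown below; the region and subscription key are placeholders for your own deployment, and the v2.0 API version is an assumption (check the quickstart page of your resource).

```python
# Minimal sketch: run OCR on a local picture of a wagon number.
# Region and subscription key are placeholders for your own deployment.
import requests

REGION = "westeurope"                      # assumption: region of your resource
SUBSCRIPTION_KEY = "<key of Computer Vision>"

url = f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0/ocr"
headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Content-Type": "application/octet-stream",
}

with open("wagon_number.jpg", "rb") as f:
    response = requests.post(url, headers=headers,
                             params={"detectOrientation": "true"}, data=f.read())
response.raise_for_status()

# The OCR result is a nested structure of regions, lines and words
for region in response.json().get("regions", []):
    for line in region["lines"]:
        print(" ".join(word["text"] for word in line["words"]))
```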

3. Parallel video processing

Once a new video is uploaded (synchronized) to Azure Blob Storage, it shall immediately be processed as follows:
- Azure Blob Storage has a trigger that executes a simple Azure Function that sends a message to an Azure Queue
- The Azure Queue has a trigger that executes an advanced Azure Function that 1) retrieves the video from the blob storage account, 2) takes a frame of the video every second using OpenCV and 3) detects graffiti on the frame, identifies the wagon number and writes the results to a csv file

The Azure Queue step is necessary to be able to process videos in parallel.

In case the blob trigger directly triggers the advanced Azure function, videos are only processed serially.

The parallel video processing architecture is depicted below.

3.1. Parallel video processing

In the remainder of this chapter, the following steps will be executed:

3a. Install preliminaries for Azure Function with docker
3b. Create Azure Storage account with blob containers and queue
3c1. (Optional) Create docker image for Azure Function Blob trigger
3c2. Deploy Azure Function Blob trigger
3d1. (Optional) Create docker image for Azure Function Queue trigger
3d2. Deploy Azure Function Queue trigger
3e. Run test with video

And the following part of the architecture is realized:

3.2. Steps in blog plotted on Architecture. Parallel video processing in bold as next step

The details of the parallel video processing capability can be found in picture 3.1.

3a. Install preliminaries for Azure Function with docker

In order to create frames from videos, an Azure Function with OpenCV is needed.

For that purpose, an Azure Function with Python is used, based on a docker image in which the OpenCV dependencies are preinstalled.

To do this, the following preliminaries need to be installed:
- Install Visual Studio Code
- Install Azure Functions Core Tools version 2.x
- Install the Azure CLI. This blog requires Azure CLI version 2.0 or later. Run az --version to find the version you have.
- (Optional, in case you want to create your own image) Install Docker
- (Highly recommended) Before you run the commands in this blog, execute the commands in this tutorial first

3b. Create Azure Storage account with blob containers and queue

An Azure Storage account is needed to upload the videos to and to run the Azure Queue service on which the Azure Function will trigger.

Open Visual Studio Code, open a new terminal session and execute the following commands:

az login
az storage account create -n <stor name> -g blog-rtvideoproc-rg --sku Standard_LRS
az storage container create -n videoblob --account-name <stor name>
az storage container create -n pics --account-name <stor name>
az storage container create -n logging --account-name <stor name>
az storage blob upload -f Storage/ImageTaggingLogging.csv -c logging -n ImageTaggingLogging.csv --account-name <stor name> --type append
az storage queue create -n videoqueue --account-name <stor name>

Make sure that a globally unique name is chosen for <stor name> as the storage account name.

3c1. (Optional) Create docker image for Azure Function Blob trigger

In this step, a simple Azure Function is created that is triggered when a new video is added to the storage account.

The name of the video is then extracted and added to the storage queue that was created in step 3b.
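Conceptually, this function is only a few lines long. Below is a minimal sketch of the idea; the actual code is taken from the github project in the steps that follow, and the binding names myblob and msg are assumptions that must match function.json.

```python
# Sketch of BlobTrigger/__init__.py: forward the name of a newly uploaded
# video to the queue. Assumes function.json binds "myblob" to the videoblob
# container and "msg" to the videoqueue queue as output.
import json
import logging
import azure.functions as func

def main(myblob: func.InputStream, msg: func.Out[str]) -> None:
    video_name = myblob.name.split("/")[-1]          # strip container prefix
    logging.info("New video uploaded: %s", video_name)
    msg.set(json.dumps({"filename": video_name}))    # enqueue for processing
```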

Open Visual Studio Code, create a new terminal session and execute the following commands (select python as runtime when prompted):

func init afpdblob_rtv --docker
cd afpdblob_rtv
func new --name BlobTrigger --template "Azure Blob Storage trigger"

Subsequently, open Visual Studio Code, select "File", select "Open Folder" and then the directory afpdblob_rtv that was created in the previous command, see also below:

3c1. Azure Function Blob trigger

In this project, replace the content of the following files:
- BlobTrigger/__init__.py
- BlobTrigger/function.json
- Dockerfile
- requirements.txt

with the content of the github project https://github.com/rebremer/realtime_video_processing/tree/master/AzureFunction/afpdblob_rtv/.

The next step is to build the docker image and publish it to a public Docker Hub repository.

Alternatively, a private Azure Container Registry (ACR) can also be used, but then make sure credentials are set.

Execute the following commands to publish to docker hub:

docker login
docker build --tag <your dockerid>/afpdblob_rtv .
docker push <your dockerid>/afpdblob_rtv:latest

3c2. Deploy Azure Function Blob trigger

In this step, the docker image is deployed as an Azure Function. In case you skipped part 3c1 to create your own docker image, you can replace <your dockerid> with bremerov, that is, bremerov/afpdblob_rtv:latest.

Execute the following commands:

az appservice plan create --name blog-rtvideoproc-plan --resource-group blog-rtvideoproc-rg --sku B1 --is-linux
az functionapp create --resource-group blog-rtvideoproc-rg --os-type Linux --plan blog-rtvideoproc-plan --deployment-container-image-name <your dockerid>/afpdblob_rtv:latest --name blog-rtvideoproc-funblob --storage-account <stor name>
az functionapp config appsettings set --name blog-rtvideoproc-funblob --resource-group blog-rtvideoproc-rg --settings remoteStorageInputContainer="videoblob" `
AzureQueueName="videoqueue" `
remoteStorageAccountName="<stor name>" `
remoteStorageAccountKey="<stor key>"
az functionapp restart --name blog-rtvideoproc-funblob --resource-group blog-rtvideoproc-rg

When the function is deployed correctly, it is created as follows in the portal.

3c2.1 Azure Function Blob trigger deployed correctly

When you click on Blob Trigger, you can see the code that is part of the docker image.

As a final step, add Application Insights (see screenshot) and follow the wizard.

This enables you to see logging in the Monitor tab.

As a test, find the video Video1_NoGraffiti_wagonnumber.MP4 in the git project and upload it to the blob storage container videoblob using the wizard (or from the command line, see the sketch below), see below.
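Alternatively to the portal wizard, the same upload can be done with the Azure CLI used earlier, for example (assuming the video file is in your working directory):

az storage blob upload -f Video1_NoGraffiti_wagonnumber.MP4 -c videoblob -n Video1_NoGraffiti_wagonnumber.MP4 --account-name <stor name>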

3c2.2 Upload blob

After the video is uploaded, the Azure Function is triggered by the blob trigger and a json file is added to the Azure queue videoqueue, see below.

3c2.3 Json file with video name added to queue

3d1. (Optional) Create docker image for Azure Function Queue trigger

In this step, an advanced Azure Function is created that is triggered when a message is sent to the Azure queue that was created in step 3b.
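The heart of this advanced function is grabbing one frame per second from the video with OpenCV, as described at the beginning of this chapter. Below is a minimal, simplified sketch of that part; the actual implementation, including the calls to the Cognitive Services APIs and the csv logging, lives in QueueTrigger/__init__.py of the github project, and the function name and fallback fps here are illustrative.

```python
# Sketch: grab one frame per second from a video with OpenCV.
import cv2

def frames_per_second(video_path, pictures_per_second=1):
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25            # fall back if fps unknown
    step = max(1, int(round(fps / pictures_per_second)))
    count, frames = 0, []
    while True:
        success, frame = capture.read()
        if not success:                                  # end of video
            break
        if count % step == 0:
            # encode the frame as jpg so it can be sent to the Cognitive Services APIs
            _, buffer = cv2.imencode(".jpg", frame)
            frames.append(buffer.tobytes())
        count += 1
    capture.release()
    return frames
```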

Open Visual Studio Code, create a new terminal session and execute the following commands (select python as runtime when prompted):

func init afpdqueue_rtv --docker
cd afpdqueue_rtv
func new --name QueueTrigger --template "Azure Queue Storage trigger"

Subsequently, open Visual Studio Code, select "File", select "Open Folder" and then the directory afpdqueue_rtv that was created in the previous command, see also below:

3d1.1 Azure Function Queue trigger

In this project, replace the content of the following files:
- QueueTrigger/__init__.py
- QueueTrigger/function.json
- Dockerfile
- requirements.txt

with the content of the github project https://github.com/rebremer/realtime_video_processing/tree/master/AzureFunction/afpdqueue_rtv/.

The next step is to build the docker image and publish it to a public Docker Hub repository.

Alternatively, a private Azure Container Registry (ACR) can also be used, but then make sure credentials are set.

Execute the following commands to publish to docker hub:

docker login
docker build --tag <your dockerid>/afpdqueue_rtv .
docker push <your dockerid>/afpdqueue_rtv:latest

3d2. Deploy Azure Function Queue trigger

In this step, the docker image is deployed as an Azure Function. In case you skipped part 3d1 to create your own docker image, you can replace <your dockerid> with bremerov, that is, bremerov/afpdqueue_rtv:latest.

Execute the following commands:

az functionapp create --resource-group blog-rtvideoproc-rg --os-type Linux --plan blog-rtvideoproc-plan --deployment-container-image-name <your dockerid>/afpdqueue_rtv:latest --name blog-rtvideoproc-funqueue --storage-account <stor name>
az functionapp config appsettings set --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg --settings `
remoteStorageAccountName="<stor name>" `
remoteStorageAccountKey="<stor key>" `
remoteStorageConnectionString="<stor full connection string>" `
remoteStorageInputContainer="videoblob" `
AzureQueueName="videoqueue" `
remoteStorageOutputContainer="pics" `
region="westeurope" `
cognitiveServiceKey="<key of Computer Vision>" `
numberOfPicturesPerSecond=1 `
loggingcsv="ImageTaggingLogging.csv" `
powerBIConnectionString=""
az functionapp restart --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg

When the function is deployed correctly, it is created as follows in the portal.

3d2.1 Azure Function Queue trigger deployed correctly

Again, select to add Application Insights (see top screenshot); you can select the same Application Insights resource that was created for the blob trigger.

Application Insights can be used to see the logging of the QueueTrigger in the monitor tab.

In case the Azure Function Queue trigger ran successfully, the message in the Azure queue is processed and the frames of the video can be found in the pics container, see below.

3d2.2 Videos logged in frames

Also, the logging can be found in the file logging/ImageTaggingLogging.csv.

In the next part, the output is visualized in Power BI.

4. (Optional) Visualize output

Power BI aims to provide interactive visualizations and business intelligence capabilities with an interface simple enough for end users to create their own reports and dashboards. In this blog, it is used to create a streaming dashboard that creates alerts when graffiti is detected, accompanied by the wagon number.

In the remainder of this chapter, the following steps will be executed:

4a. Install preliminaries for Power BI
4b. Create streaming dataset
4c. Create dashboard from tile
4d. Add Power BI link to Azure Function

And the following part of the architecture is realized:

4. Steps in blog plotted on Architecture. Visualize output in bold as next step

Notice that it is not necessary to visualize the output in order to do the final step of this blog, the IoT Hub setup.

4a. Install preliminaries for Power BI

In this blog, all datasets and dashboards will be created in Power BI directly; it is therefore not necessary to install Power BI Desktop. Go to the following link to create an account: https://powerbi.microsoft.com

4b. Create streaming dataset

Once you are logged in, go to your workspace, select Create and then Streaming dataset. This streaming dataset is pushed from your Azure Function Queue trigger.

4b1. Create streaming dataset

Select API {} in the wizard and then add the following fields (the fields can also be found in __init__.py of the Azure Function Queue trigger in the method publishPowerBI()):

location (Text)
track (Text)
time (DateTime)
trainNumber (Text)
probGraffiti (Number)
caption (Text)
sasPictureTrainNumber (Text)
sasPictureGraffiti (Text)
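Once the push URL is copied in step 4d, publishing a row to this streaming dataset is a single REST call. Below is a minimal sketch of what a publishPowerBI()-style method can do, assuming the field names above; the example values are purely illustrative.

```python
# Sketch: push one row to the Power BI streaming dataset via its push URL.
# The push URL is copied from the dataset's API info in step 4d.
import datetime
import requests

push_url = "<Power BI push URL>"
row = {
    "location": "Rotterdam",                       # example values
    "track": "1",
    "time": datetime.datetime.utcnow().isoformat(),
    "trainNumber": "84 746 4924 159-6",
    "probGraffiti": 0.87,
    "caption": "graffiti detected",
    "sasPictureTrainNumber": "<sas url>",
    "sasPictureGraffiti": "<sas url>",
}
response = requests.post(push_url, json=[row])     # the API expects a list of rows
response.raise_for_status()
```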

4c. Create dashboard from tile

In the next step, a live dashboard is created based on the streaming dataset that is automatically refreshed once new data comes in.

First, create a report and a tabular visual. Simply add all fields to this table. Subsequently, select "Pin visual" to create a live dashboard of the visual, see also below.

4c1. Create streaming dataset

This way, multiple visuals can be created in a report and published to the same dashboard.

See below for an example dashboard.

4c2. Example dashboard

4d. Add Power BI link to Azure Function

Finally, the Power BI push URL needs to be added to your Azure Function Queue trigger such that data can be published.

Click on the … of your streaming datasets, select API info and copy the URL, see below.

4d1. API info

Subsequently, add the Power BI push URL to your Azure Function Queue trigger and restart the function, see below.

az functionapp config appsettings set --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg --settings `
powerBIConnectionString="<Power BI push URL>"
az functionapp restart --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg

Remove the video Video1_NoGraffiti_wagonnumber.MP4 and upload it again to the videoblob container of your blob storage account.

This will push data to your Power BI dashboard.

5. Setup IoT Edge architecture

Azure Blob Storage on IoT Edge is a light-weight, Azure-consistent module which provides local block blob storage.

With tiering functionality, the data is automatically uploaded from your local blob storage to Azure.

This is especially useful in scenarios where 1) the device (e.g. a camera) has limited storage capability, 2) lots of devices and data have to be processed and 3) there is intermittent internet connectivity. In this blog, a camera is simulated on an Ubuntu VM that uses Blob on Edge.

In the remainder of this chapter, the following steps will be executed:

5a. Create IoT Hub and Ubuntu VM as Edge device
5b. Add module Blob Storage to Edge device
5c. Simulate camera using Edge device

And the following part of the architecture is realized:

5. Steps in blog plotted on Architecture. IoT Hub Edge in bold as next step

5a. Install preliminaries for Azure Blob Storage on IoT Edge

In order to use Azure Blob Storage on IoT Edge, the following commands need to be run (for more detailed information, see here):

az extension add --name azure-cli-iot-ext
az vm create --resource-group blog-rtvideoproc-rg --name blog-rtvideoproc-edge --image microsoft_iot_edge:iot_edge_vm_ubuntu:ubuntu_1604_edgeruntimeonly:latest --admin-username azureuser --generate-ssh-keys --size Standard_DS1_v2
az iot hub create --resource-group blog-rtvideoproc-rg --name blog-rtvideoproc-iothub --sku F1
az iot hub device-identity create --hub-name blog-rtvideoproc-iothub --device-id blog-rtvideoproc-edge --edge-enabled

Run the following command to retrieve the key:

az iot hub device-identity show-connection-string --device-id blog-rtvideoproc-edge --hub-name blog-rtvideoproc-iothub

And add this key to your VM using the following command:

az vm run-command invoke -g blog-rtvideoproc-rg -n blog-rtvideoproc-edge --command-id RunShellScript --script "/etc/iotedge/configedge.sh '<device_connection_string from previous step>'"

When your IoT Hub and edge device are created correctly, you should see the following in the portal.

5b. Add module Blob Storage to Edge device

In this step, the Blob Storage module is installed on the edge device.

Select your edge device and follow the steps in the tutorial "Deploy the Azure Blob Storage module to devices" on docs.microsoft.com, using the Azure Portal. In this, use the following Container Create Options:

{
  "Env": [
    "LOCAL_STORAGE_ACCOUNT_NAME=localvideostor",
    "LOCAL_STORAGE_ACCOUNT_KEY=xpCr7otbKOOPw4KBLxtQXdG5P7gpDrNHGcrdC/w4ByjMfN4WJvvIU2xICgY7Tm/rsZhms4Uy4FWOMTeCYyGmIA=="
  ],
  "HostConfig": {
    "Binds": [
      "/srv/containerdata:/blobroot"
    ],
    "PortBindings": {
      "11002/tcp": [{"HostPort": "11002"}]
    }
  }
}

and the following "set module twin's desired properties":

{
  "properties.desired": {
    "ttlSettings": {
      "ttlOn": true,
      "timeToLiveInMinutes": 30
    },
    "tieringSettings": {
      "tieringOn": true,
      "backlogPolicy": "OldestFirst",
      "remoteStorageConnectionString": "<your stor conn string>",
      "tieredContainers": {
        "localvideoblob": {
          "target": "videoblob"
        }
      }
    }
  }
}

If everything is deployed successfully, the following should be visible in the portal.

5b. Blob on Edge successfully deployed

Also, you can run the following commands from the CLI to see if everything is installed correctly:

ssh azureuser@<public IP of your Ubuntu VM>
sudo systemctl status iotedge
journalctl -u iotedge
cd /srv/containerdata
ls -la

If everything is deployed successfully, a camera simulator that uploads a file to your local blob storage can be run in the next part.

5c. Simulating camera using Edge device

In the final part of this blog, we will use a camera simulator that will put a file on the Edge device.

As a first step, you need to open inbound port 11002 of your Ubuntu VM.

Find the Network Security Group (NSG) of your VM and add port 11002, see also below.

5c1. Add port 11002 to NSG

Run the code from the github project in CameraSimulator/CameraSimulater.py.

In this project, replace the IP address of your Ubuntu VM and the location of the video file you want to upload.
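The essence of the simulator is a plain blob upload against the local endpoint of the edge device. Below is a minimal sketch, assuming the azure-storage-blob 2.x SDK (BlockBlobService) and the local account name and key from the Container Create Options in step 5b; the actual simulator code is in the github project.

```python
# Sketch: upload a video to the local blob storage module on the edge device.
# Account name/key are the LOCAL_STORAGE_ACCOUNT_* values from step 5b; the IP
# address is the public IP of the Ubuntu VM (port 11002 was opened in the NSG).
from azure.storage.blob import BlockBlobService

connection_string = (
    "DefaultEndpointsProtocol=http;"
    "BlobEndpoint=http://<public IP of your Ubuntu VM>:11002/localvideostor;"
    "AccountName=localvideostor;"
    "AccountKey=<LOCAL_STORAGE_ACCOUNT_KEY from step 5b>;"
)
blob_service = BlockBlobService(connection_string=connection_string)
blob_service.create_blob_from_path(
    "localvideoblob",                        # local container, tiered to videoblob
    "Video1_NoGraffiti_wagonnumber.MP4",     # blob name on the edge device
    "<path to the video file>",
)
```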

This simulator uploads a video and triggers everything that was done in this tutorial, that is, 1) the video is synced to the storage account since auto-tiering is enabled, 2) the blob trigger and queue trigger functions process the video, 3) Cognitive Services are invoked to detect graffiti and identify the wagon number and 4) the results are pushed to the Power BI dashboard, see also below.

5c2. End result project

6. Conclusion

In this blog, an end to end project was created in order to do intelligent, realtime and scalable video processing in Azure. In this project, a capability was created that can detect graffiti and identify wagon numbers using videos of trains. The following Azure services were used:
- Cognitive Services as intelligent algorithms to detect graffiti on trains (Custom Vision API) and OCR to identify wagon numbers (Computer Vision API)
- Azure Functions with Python and docker to process videos in realtime in a scalable way
- Azure Blob Storage and edge computing to process videos reliably from edge to cloud
- Power BI to visualize the output using streaming data in dashboards

See also the architecture of the project depicted below:

6. Intelligent, realtime and scalable video processing in Azure

