Can Edge Analytics Become a Game Changer?

By Sciforce, software solutions based on science-driven information technologies.

One of the major IoT trends for 2019, constantly mentioned in rankings and articles, is edge analytics.

It is considered to be the future of sensor handling, and in some cases it is already preferred over conventional cloud processing.

First of all, let's go deeper into the idea.

Edge analytics refers to an approach to data collection and analysis in which an automated analytical computation is performed on data at a sensor, network switch, or another device instead of sending the data back to a centralized data store.

What this means is that data collection, processing, and analysis are performed on-site at the edge of a network in real-time.

You might have read dozens of similar articles speculating about the necessity of some new technique, like "Does your project need Blockchain? No!" Is edge analytics yet another such gimmicky term? The truth is, it really is a game-changer.

At present, organizations operate millions of sensors as they stream endless data from manufacturing machines, pipelines, and all kinds of remote devices.

This results in an unmanageable accumulation of data, an estimated 73% of which will never be used.

Edge analytics is believed to address these problems by running the data through an analytics algorithm as it's created, at the edge of a corporate network.

This allows organizations to set parameters on which information is worth sending to a cloud or an on-premise data store for later use, and which isn't.
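As a toy illustration of such parameters, the sketch below forwards only readings that deviate sharply from a recent baseline and reduces everything else to a summary. The z-score threshold and window size are made-up values; real deployments would tune them per sensor.

```python
import statistics

# Hypothetical threshold: how many standard deviations away from the
# recent baseline a reading must be to count as "worth sending".
ANOMALY_Z_SCORE = 3.0

def select_for_upload(readings, window=60):
    """Pick anomalous readings to upload; summarize the rest locally."""
    uploads = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        z = abs(readings[i] - mean) / stdev
        if z >= ANOMALY_Z_SCORE:
            uploads.append((i, readings[i]))
    # Everything not uploaded is reduced to a compact summary, so raw
    # values never leave the edge.
    summary = {"count": len(readings),
               "mean": statistics.fmean(readings),
               "max": max(readings)}
    return uploads, summary
```

A steady sensor that spikes once would thus cause a single upload plus a small summary record, instead of streaming every raw value to the cloud.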

Overall, edge analytics offers the following benefits:

Reduced latency of data analysis: it is more efficient to analyze data on faulty equipment and shut it down immediately than to wait for the data to reach a central analytics environment.

Scalability: the accumulation of data increases the strain on the central data analytics resources, whereas edge analytics can scale the processing and analytics capabilities by decentralizing to the sites where the data is collected.

Increased security due to decentralization: keeping devices on the edge gives control over the IP and protects data transmission, since it's harder to bring down an entire network of distributed devices with a single DDoS attack than a centralized server.

Reduced bandwidth usage: edge analytics offloads backend servers and delivers analytics capabilities in remote locations by transmitting metadata instead of raw data.

Robust connectivity: edge analytics potentially ensures that applications are not disrupted in case of limited or intermittent network connectivity.

Reduced expenses: edge analytics minimizes bandwidth, scales operations, and reduces the latency of critical decisions.

The connected physical world is divided into locations: geographical units where IoT devices are deployed.

In an Edge architecture, such devices can be of three types according to their role: Edge Gateways, Edge Devices, and Edge Sensors and Actuators.

Edge Devices are general-purpose devices that run full-fledged operating systems, such as Linux or Android, and are often battery-powered.

They run the Edge intelligence, meaning they run computations on data they receive from sensors and send commands to actuators.

They may be connected to the Cloud either directly or through the mediation of an Edge Gateway.

Edge Gateways also run full-fledged operating systems, but as a rule, they have an unconstrained power supply and more CPU power, memory, and storage.

Therefore, they can act as intermediaries between the Cloud and Edge Devices and offer additional location management services.

Both types of devices forward selected subsets of raw or pre-processed IoT data to services running in the Cloud, including storage services, machine learning, or analytics services.

They receive commands from the Cloud, such as configurations, data queries, or machine learning models.

Edge Sensors and Actuators are special-purpose devices connected to Edge Devices or Gateways directly or via low-power radio technologies.
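The three roles above can be sketched as a minimal object model. The class and method names here are illustrative (the article only defines the roles, not an API), and the threshold-based forwarding is a stand-in for real edge intelligence:

```python
from dataclasses import dataclass

@dataclass
class EdgeSensor:
    """Special-purpose device: produces raw readings."""
    name: str

    def read(self) -> float:
        # Stand-in for a real measurement (e.g. a temperature in °C).
        return 21.5

@dataclass
class EdgeDevice:
    """Runs the edge intelligence: computes on data from its sensors."""
    sensors: list
    threshold: float = 30.0

    def process(self) -> list:
        readings = [s.read() for s in self.sensors]
        # Only values crossing the threshold are worth forwarding upstream.
        return [r for r in readings if r > self.threshold]

@dataclass
class EdgeGateway:
    """Unconstrained power and CPU: mediates between devices and the Cloud."""
    devices: list

    def forward_to_cloud(self) -> list:
        batch = []
        for device in self.devices:
            batch.extend(device.process())
        return batch  # in practice, an MQTT or HTTPS upload
```

In this sketch a gateway aggregating unremarkable readings forwards nothing at all, which is exactly the bandwidth saving the architecture aims for.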

A four-level edge analytics hierarchy.

While edge analytics is only paving its way into next-generation technology, deep learning, a branch of machine learning that learns multiple levels of representation through neural networks, has already been around for several years.

Will deep learning algorithms applied to edge analytics yield more efficient and more accurate results? In fact, an IDC report predicts that by 2019, all effective IoT efforts will merge streaming analytics with machine learning trained on data lakes, marts, and content stores, accelerated by discrete or integrated processors.

By applying deep learning to edge analytics, devices could be taught to better filter unnecessary data, saving time, money, and human resources.
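A minimal sketch of this kind of filtering, assuming a `detect` callable that returns `(label, score)` pairs for a frame; the confidence cut-off is a made-up value that would be tuned per deployment:

```python
# Hypothetical cut-off: only frames in which the model sees something
# with at least this confidence are kept for further processing/upload.
CONFIDENCE_THRESHOLD = 0.6

def frames_worth_keeping(frames, detect):
    """Filter frames using a detection model running at the edge.

    `detect(frame)` is assumed to return a list of (label, score) pairs.
    """
    kept = []
    for frame in frames:
        detections = detect(frame)
        if any(score >= CONFIDENCE_THRESHOLD for _, score in detections):
            kept.append((frame, detections))
    return kept  # everything else is discarded on-device
```

The point is that the expensive decision (is this frame interesting?) happens on the device, so empty footage never consumes uplink bandwidth.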

One of the most promising domains of integrating deep learning and edge analytics is in computer vision and video analytics.

The underlying idea is that edge analytics implements distributed structured video data processing, and takes each moment of recorded data from the camera and performs computations and analysis in real-time.

Once the smart recognition capabilities of a single camera are increased, and camera clustering enables data fusion and cloud processing, surveillance efficiency increases drastically while workforce requirements are reduced.

Deep learning algorithms integrated into frontend cameras can extract data from a human, vehicle, and object targets for recognition and incident detection purposes, significantly improving the accuracy of video analytics.

At the same time, shifting analytics processing from backend servers into the cameras themselves can provide end users with more relevant real-time analysis, detecting anomalous behavior and triggering alarms during emergency incidents without relying on backend servers.

This also means that ultra-large-scale video analysis and processing can be achieved for projects such as safe cities, where tens of thousands of video streams must be handled in real-time.

Edge computers are not just a new trend: they are a powerful tool for a variety of AI-related tasks.

While Raspberry Pi has long been the gold standard for single-board computing, powering everything from robots to smart home devices, the latest Raspberry Pi 4 takes Pi to another level.

This edge computer has a PC-comparable performance, plus the ability to output 4K video at 60 Hz or power dual monitors.

Its competitor, the Intel® Movidius™ Myriad™ X VPU, has a dedicated neural compute engine for hardware acceleration of deep learning inference at the edge.

Google Coral adds to the competition offering a development board to quickly prototype on-device ML products with a removable system-on-module (SoM).

In our experiments, we used them as a part of a larger computer vision project.

Human detection is a process similar to object detection; in real-world settings, it takes raw images from (security) cameras and puts them into the camera buffer for processing by the detector&tracker.

The latter detects human figures and sends the processed images to the streamer buffer.

Therefore, the whole process of human detection can be divided into three threads: camera, detector&tracker, and streamer.
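The three-thread pipeline above can be sketched with standard-library queues. This is a minimal illustration, not our production code: the frame source, the fake detection box, and the output sink all stand in for a real `cv2.VideoCapture` loop, a detector/tracker, and a network streamer.

```python
import queue
import threading

camera_buffer = queue.Queue(maxsize=8)    # raw frames from the camera
streamer_buffer = queue.Queue(maxsize=8)  # processed frames for streaming
STOP = object()  # sentinel that shuts the pipeline down

def camera_thread(frames):
    # Stand-in for a capture loop reading from a webcam.
    for frame in frames:
        camera_buffer.put(frame)
    camera_buffer.put(STOP)

def detector_tracker_thread():
    while True:
        frame = camera_buffer.get()
        if frame is STOP:
            streamer_buffer.put(STOP)
            break
        # Real code would run detection every N frames and track in
        # between; here we emit a fake "person" bounding box.
        boxes = [(10, 10, 50, 80)]
        streamer_buffer.put((frame, boxes))

def streamer_thread(out):
    while True:
        item = streamer_buffer.get()
        if item is STOP:
            break
        out.append(item)  # stand-in for pushing to a streaming sink

def run_pipeline(frames):
    out = []
    threads = [threading.Thread(target=camera_thread, args=(frames,)),
               threading.Thread(target=detector_tracker_thread),
               threading.Thread(target=streamer_thread, args=(out,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

Bounded queues give natural backpressure: if the detector falls behind, the camera thread blocks instead of exhausting memory on a small device.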

As the detector, we used ssdlite_mobilenet_v2_coco from the TensorFlow Object Detection API, which is the fastest model available (1.8 sec per image).

As the tracker, we used MedianFlow Tracker from the OpenCV library, which is also the fastest tracker (30–60 ms per image).

To compare how different devices handle real-time object detection, we tested human detection from two web cameras on the Coral Dev Board and the Coral Accelerator, against a desktop CPU with the Coral Accelerator and a Raspberry Pi with the same Accelerator:

Coral Accelerator — Edge TPU Accelerator v.1.0, model WA1
Coral Dev Board — Edge TPU Dev Board v.1.0, model AA1
Raspberry Pi — Raspberry Pi 3 Model B Rev 1.2
Desktop CPU — Intel Core i7-4790
WebCam — Logitech C170 (max width/height — 640×480, framerate — 30/1; we used these parameters)

As it turned out, the desktop CPU showed the lowest inference time and the highest fps, while the Raspberry Pi demonstrated the lowest performance. Another experiment addressed a more general object detection task: we used this method for model conversion for the Coral Dev Board and Accelerator, and one of the demo scripts for object detection.

We compared the performance of the Coral Dev Board and Accelerator against the Neural Compute Stick 2.

For the latter, we used the OpenVINO native model-optimization converter and this model+script.

Our experiments showed that the Coral Dev Board had the lowest inference time, while the Intel Neural Compute Stick 2's inference time was more than four times higher. These experiments confirm the potential of modern edge devices, which show performance comparable to desktop CPUs.
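Device comparisons like the ones above boil down to a small timing harness. The sketch below is illustrative rather than our actual benchmarking code; `infer` stands in for any model's per-frame inference call, and the warm-up pass absorbs one-time initialization costs:

```python
import time

def benchmark(infer, frames, warmup=3):
    """Measure mean per-frame latency and FPS for an inference callable."""
    # Warm-up: the first runs often include model loading / compilation.
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    latency = elapsed / len(frames)
    return {"latency_s": latency, "fps": 1.0 / latency}
```

Running the same harness with the same frames on each device is what makes latency/FPS figures comparable across a desktop CPU, a Raspberry Pi, and a Coral board.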

 Deep learning can boost accuracy, turning video analytics into a robust and reliable tool.

Yet, its accuracy usually comes at the cost of power consumption.

Power balancing is an intricate task based on improving the performance of edge devices, introducing dedicated video processing units, and keeping neural networks small.

Besides, as only a subset of data is processed and analyzed in the edge analytics approach, a share of raw data is discarded, and some insights might be missed.

Therefore, there is a constant tradeoff between a thorough offline collection of data and prompt real-time analysis.

Edge analytics is thus an exciting area of great potential, but it should not be viewed as a full replacement for central data analytics.

Both can and will supplement each other in delivering data insights and add value to businesses.


Reposted with permission.

Bio: Sciforce is a Ukraine-based IT company specializing in the development of software solutions based on science-driven information technologies.

We have wide-ranging expertise in many key AI technologies, including Data Mining, Digital Signal Processing, Natural Language Processing, Machine Learning, Image Processing and Computer Vision.
