In this special guest feature, Tim Miller from One Stop Systems discusses the importance of transforming raw data into real-time, actionable intelligence using HPC at the edge.
Today, companies across a wide breadth of industries are able to collect massive amounts of data.
The emerging challenge is to transform this raw data into real-time, actionable intelligence.
Historically, mining large data sets for business intelligence has been the realm of centralized datacenters processing non-real-time data to support non-real-time business decision making.
The imperative now is to move this processing closer to where the data is sourced and apply high-performance computing technologies at the edge, so real-time insights can drive business actions.
An important development assisting this effort is the emergence of GPU-accelerated database technology from companies like OmniSci and Kinetica.
With tools such as these, users can interactively query, visualize and power data science workflows over billions of records.
The massively parallel processing of GPUs alongside traditional CPU compute provides extraordinary performance at scale, returning query results in milliseconds even on tables with many billions of rows.
In cases where time and location matter, the computational power of GPUs can process streaming data as it arrives and, in conjunction with geospatial data, use the GPU's graphics power for interactive visualization.
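To illustrate the query pattern these GPU databases accelerate, here is a toy sketch of a data-parallel filter-and-aggregate over a columnar table. NumPy's vectorized operations stand in for GPU kernels; the column names and the "suspicious flow" predicate are illustrative assumptions, not the API of OmniSci, Kinetica, or any real product.

```python
# Toy sketch: the per-column filter + aggregate pattern a GPU database
# parallelizes across thousands of cores. NumPy vectorized ops stand in
# for GPU kernels; column names and thresholds are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # a real deployment would query billions of rows

# Columnar "table" of simulated network flow records
bytes_sent = rng.integers(0, 1_500_000, size=n)
latency_ms = rng.exponential(scale=20.0, size=n)

# Vectorized predicate: evaluated across all rows at once, no row loop
suspicious = (bytes_sent > 1_000_000) & (latency_ms > 100.0)

# Aggregate over only the filtered rows, again fully vectorized
count = int(suspicious.sum())
total_bytes = int(bytes_sent[suspicious].sum())
print(f"{count} suspicious flows, {total_bytes} bytes")
```

The key design point is that predicate evaluation and aggregation touch each column as one contiguous array, which is exactly the memory layout that lets a GPU apply the same operation to millions of values in parallel.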
The application spaces for real time decision making leveraging massive streaming data sets are varied and wide ranging.
In telecommunications, massive telco data sets are mined for network insights to optimize operational efficiency.
Cyber security and fraud detection utilize the ability to interactively analyze and visualize network and log data to uncover anomalies and suspicious behavior.
In military applications, real-time analysis and visualization on hundreds of streaming, historical, and geospatial data feeds allow instant identification and reaction to threats by land, sea, or air.
Autonomous vehicles generate huge volumes of telematics data which, in addition to operating the vehicle, can be leveraged by fleet managers and even insurance companies to drive real time decision making.
Medical applications simultaneously ingest, analyze, and visualize medical IoT data for real-time monitoring and diagnostics.
To realize the real time decision making capability of these tools, an emerging set of edge optimized high performance computing technologies is required.
HPC edge deployment moves specialized, high-performance, accelerated computing resources into the field, near the data source, for real-time data analysis.
Systems can react to real-time changes in the environment without sending the new data back to a centralized datacenter.
Avoiding data movement over relatively slow or insecure networks to remote datacenters also provides significant benefits in cost, responsiveness, and security.
Where rapid response is critical, from autonomous vehicles to personalized medicine to threat detection, the benefits are clear.
The three hardware elements required to achieve these results are high-throughput data acquisition, high-capacity, low-latency data storage, and high-performance compute acceleration.
Ideally, the building blocks for each of the functions utilize the latest high performance technology including CPUs, GPUs, NVMe storage, and high speed data acquisition technology that are all interconnected with PCIe Gen 4.
Additionally, this technology needs to be designed for deployment outside traditional datacenter environments at the edge.
These environments are often harsh and rugged, and in many cases, solutions must meet unique criteria for shock and vibration, humidity, altitude, and large operating temperature ranges.
At the edge, acquisition data rates can be extremely high and often require a range of high speed data capture hardware.
Multiple FPGAs, frame grabbers, video capture cards, industrial I/O, and smart NICs can capture data at hundreds of Gbps.
Systems that need the flexibility to support this wide range of data acquisition can do so by providing a large number of PCIe slots interconnected with high speed PCIe switches.
The high speed data must move from acquisition to persistent storage supporting the same high speed throughput rate, while simultaneously moving data to the compute engines and archiving systems.
PCIe features allow simultaneous multicasting of data to multiple subsystems using RDMA transfers, avoiding system memory bottlenecks without adding network protocol latency.
Direct PCIe attached NVMe storage devices scale from 10TB to 1PB capacities to handle these requirements.
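A quick back-of-envelope calculation shows why capacities in this range matter at edge acquisition rates. The 100 Gbps figure and the 10 TB to 1 PB range come from the text above; the retention times below are derived estimates using decimal units, not vendor specifications.

```python
# Back-of-envelope sizing for edge capture storage (decimal units).
# Assumes a sustained 100 Gbps acquisition rate, per the text above;
# retention figures are derived estimates, not vendor specs.
GBPS = 100                 # sustained acquisition rate, gigabits per second
ingest_gb_s = GBPS / 8     # 100 Gbps -> 12.5 GB/s

def retention_seconds(capacity_tb: float) -> float:
    """Seconds of full-rate capture a given NVMe capacity can absorb."""
    return capacity_tb * 1000 / ingest_gb_s

print(f"per hour:   {ingest_gb_s * 3600 / 1000:.0f} TB")           # 45 TB/hour
print(f"10 TB pool: {retention_seconds(10) / 60:.1f} minutes")     # ~13.3 min
print(f"1 PB pool:  {retention_seconds(1000) / 3600:.1f} hours")   # ~22.2 h
```

In other words, even a 10 TB pool buffers only minutes of full-rate capture, which is why these systems must move data onward to compute and archive subsystems concurrently rather than capture first and process later.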
The high performance compute subsystem forms the last major element in these solutions.
Depending on the scale of computing power required, the compute elements range from single compute nodes to several interconnected building blocks typically housing multiple PCIe GPU or FPGA accelerators.
The compute functions include GPU accelerated AI, data analysis and query tasks using GPU database tools and visualization.
One Stop Systems, Inc. has developed a full product line of HPC edge solutions that meet all the criteria for deploying big data analytics at the edge to transform raw data into actionable intelligence.
For more information about its AI on the Fly® PCIe Gen4 product portfolio, visit www.
Disclaimer: This article may contain forward-looking statements based on One Stop Systems, Inc.’s current expectations and assumptions regarding the company’s business and the performance of its products, the economy, and other future conditions and forecasts of future events, circumstances and results.
About the Author
Tim Miller is Vice President of Strategy at One Stop Systems.
Tim has over 33 years of experience in high tech operations, management, marketing, business development, and sales.
He was previously CEO of Dolphin Interconnect Solutions and CEO and founder of StarGen, Inc.
Tim holds a Bachelor of Science in Engineering from Cornell University, a Master of Business Administration from Wharton, and a Master’s in Computer Science from the University of Pennsylvania.