When Milliseconds Matter: Optimize Real-Time Data Sources Before It’s Too Late

In this special guest feature, Sam Mahalingam, Chief Technology Officer, at Altair Knowledge Works, believes that when milliseconds matter, it’s important to optimize real-time data sources before it’s too late.

With more than 20 years of experience in software development, software architecture, technical management, and project management, Mahalingam focuses on shaping current products and identifying new products and solutions to ease cloud adoption and mobile strategy for Altair customers, both in simulation lifecycle management and high-performance computing lifecycle management.

Data drives business.

And regardless of the sector or the scope of your business, chances are you’re storing vast quantities of data.

Business applications including security, network communications, fraud detection, retail purchase activities, energy utilization, and production processes are all producing an endless stream of data to monitor and track.

Additionally, that data is being captured through a wide array of sources, including IoT sensors, and is being delivered via an even larger variety of message queues such as Kafka, Solace, ActiveMQ, RabbitMQ, and more.
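To make that concrete, here is a minimal sketch of what consuming one of those real-time feeds can look like, assuming the kafka-python client; the topic name, broker address, and JSON message shape are illustrative placeholders, not a reference implementation.

```python
# Minimal sketch: consume real-time telemetry from a Kafka topic.
# Assumes the kafka-python client; topic, broker, and message fields
# are hypothetical examples.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                                # hypothetical topic
    bootstrap_servers="localhost:9092",                 # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",                         # only new events matter here
)

for message in consumer:
    event = message.value                               # e.g. {"vehicle_id": ..., "fuel_rate": ...}
    # hand the event off to the monitoring / visualization layer here
    print(event.get("vehicle_id"), event.get("fuel_rate"))
```

The same consumption pattern applies whichever message queue sits in the middle; only the client library changes.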

Operators, analysts, and users … the people at the core of your business need to be able to make decisions based on this flood of real-time data.

That means they need visualization tools that allow them to compile all of the information, access it quickly, and compare it against historical data.

Companies with a large number of assets, such as those with fleets of vehicles, are finding these toolsets crucial to monitoring their business.

They’ve taken to building “mission control” facilities where they can track each vehicle’s movement, its fuel economy, and the efficiency of each driver.

When you have thousands of vehicles deployed all over the country, it’s important to have all of that information centralized.

Trying to gather meaningful insights from several different databases with a whole team of analysts at the helm is far more difficult and not nearly as efficient.

Additionally, when it comes to spotting anomalies, you often don’t know what you are looking for until you find it.

Going back to the fleet of vehicles, let’s say an operator notices that vehicles in a particular region are deviating from the optimal route.

From there (with the right tools and data) the analyst can drill down and find out whether the drivers are engaging in fraudulent activity, such as delivering packages to places they’re not supposed to, or whether the new route is a result of ongoing road construction.

If it turns out to be the latter, a better path can be planned for the drivers for the duration of the construction, saving the company a considerable amount of money and time.
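As a rough illustration of how a monitoring layer might surface that kind of deviation automatically, the following sketch flags GPS pings that fall too far from every waypoint on a planned route; the waypoints, the sample ping, and the 2 km threshold are illustrative assumptions.

```python
# Minimal sketch: flag vehicles straying from their planned route.
# Waypoints, the sample GPS ping, and the 2 km threshold are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def off_route(ping, planned_route, threshold_km=2.0):
    """True if a GPS ping is farther than threshold_km from every planned waypoint."""
    return all(
        haversine_km(ping[0], ping[1], wp[0], wp[1]) > threshold_km
        for wp in planned_route
    )

planned_route = [(41.88, -87.63), (41.90, -87.65), (41.95, -87.68)]  # hypothetical waypoints
ping = (41.99, -87.75)                                               # latest GPS reading
if off_route(ping, planned_route):
    print("Vehicle deviating from planned route -- flag for operator review")
```

The flag only raises the question; it is still the operator who drills down to decide whether the cause is fraud, construction, or something else entirely.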

The goal of these data-driven solutions is to facilitate intelligent information-backed decisions that improve operations.

To enable this, companies need to act quickly.

Operators need to be able to find and identify these anomalies and tackle the small problems before they become much larger issues.

All too often, companies build business intelligence (BI) systems that give high-level visibility into the previous day’s performance.

That’s no longer adequate.

Waiting until the end of the day, or even worse the end of the week, instead of identifying and addressing issues in real time will put you behind your competitors.

Time matters.

For some industries it’s nanoseconds, for others milliseconds or seconds, but hours or days are rarely good enough.

  The financial services industry has been using this type of technology for many years.

In particular, firms engaged in high-frequency trading of equities, bonds, and foreign exchange have developed methods for monitoring massive amounts of real-time data effectively.

To provide a sense of scale, the U.S. markets are making on the order of 9.5 billion trades every day on 14 exchanges and in over 40 dark pools (private exchanges) for equities alone.

In order to trade profitably and maintain compliance with the myriad regulations they are subject to, banks have adopted streaming analytics technologies that are fully capable of handling such high volume, high-velocity data and putting it on the screen in ways that humans can understand.

The real-time data visualizations that provide the front end for these trading systems enable traders, the humans in the loop, to comprehend their order flows, spot shortfalls and compliance issues, and develop responses to threats and opportunities efficiently, in minutes instead of hours or days.

  We’re not at the point where companies can simply set thresholds and forget about the rest.

Automation and machine learning algorithms drastically cut the time it takes to parse these vast stores of information, but operators must still drive the process.

Without the right visualization tools, they may not spot trends, clusters, or outliers hidden in all their data until it’s too late.

This is fundamentally different than the traditional BI approach, which involves collecting large amounts of data, cleansing and normalizing it, and then delivering reports and (often) static dashboards to present findings.
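As a concrete illustration of the kind of automated pre-filtering that still leaves operators driving the process, the sketch below applies a rolling z-score check that surfaces candidate outliers for a human to inspect; the window size, the warm-up length, and the 3-sigma cutoff are illustrative assumptions rather than production tuning.

```python
# Minimal sketch: surface candidate outliers for an operator to inspect.
# Rolling z-score over recent readings; window, warm-up, and the 3-sigma
# cutoff are hypothetical choices.
from collections import deque
from statistics import mean, stdev

class RollingAnomalyFlag:
    def __init__(self, window=100, cutoff=3.0):
        self.readings = deque(maxlen=window)   # recent history only
        self.cutoff = cutoff

    def check(self, value):
        """Return True if the new value looks unusual versus recent history."""
        is_anomaly = False
        if len(self.readings) >= 10:           # need some history before judging
            mu, sigma = mean(self.readings), stdev(self.readings)
            is_anomaly = sigma > 0 and abs(value - mu) / sigma > self.cutoff
        self.readings.append(value)
        return is_anomaly

flag = RollingAnomalyFlag()
for reading in [10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 9.8, 10.0, 10.1, 10.2, 25.0]:
    if flag.check(reading):
        print("Anomalous reading flagged for review:", reading)
```

A flag like this narrows the search; the visualization layer is what lets the operator decide whether the outlier is signal or noise.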

Remember, operators are not only trying to spot small problems before they become big issues, but also to identify causal links on which they can take action.

Having the ability to manipulate data visualizations to alter perspectives on the fly and drill down for a more granular view of the information is critical.

Unearthing vital information is particularly important to oil exploration companies, which equip their drilling rigs with a variety of sensors measuring temperature, torque, friction, pressure, location, the volume of production, and rate of penetration.

All of this data streams in real time into a central control room over a low-latency message bus and is then processed by a stream processing application.

Engineers use high-density data visualizations to monitor the various data points for all the rigs in operation and look for evidence of potential failures.

As the engineers spot anomalies, they can reorient their visualizations to eliminate “noise” and determine whether what appears to be an outlier is, in fact, an issue with a particular type of bit, a single field or region, or type of sensor.

They can identify causal links that onsite drilling and maintenance teams can use to develop action plans.
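For illustration, a stream-processing stage like the one described might look roughly like the following sketch, which tracks per-rig, per-sensor readings and emits alerts for the control-room dashboards; the message shape, field names, and operating limits are assumptions, not an actual rig-monitoring implementation.

```python
# Minimal sketch: a stream-processing stage for rig sensor data.
# Message shape, field names, and the limits table are hypothetical.
from collections import defaultdict, deque

LIMITS = {"temperature": 150.0, "pressure": 900.0, "torque": 30.0}  # hypothetical operating limits
history = defaultdict(lambda: deque(maxlen=500))  # keyed by (rig_id, sensor), kept for drill-down

def process(event):
    """Consume one message from the bus; return an alert dict or None."""
    key = (event["rig_id"], event["sensor"])
    history[key].append(event["value"])           # retained so operators can drill into recent history
    limit = LIMITS.get(event["sensor"])
    if limit is not None and event["value"] > limit:
        return {"rig": event["rig_id"], "sensor": event["sensor"], "value": event["value"]}
    return None

# Example: feed a few bus messages through the stage.
for msg in [
    {"rig_id": "R-17", "sensor": "temperature", "value": 120.0},
    {"rig_id": "R-17", "sensor": "pressure", "value": 955.0},
]:
    alert = process(msg)
    if alert:
        print("ALERT:", alert)
```

Because the per-rig, per-sensor history is retained, the same stage can feed the kind of regrouping by bit type, field, or sensor that engineers use to separate real issues from noise.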

   The ability to remove noise and change perspective is critical to engineers who are working with so many moving parts.

With real-time data visualization tools, operators can spot problems on the fly that once required full system diagnostics.

Quickly spotting anomalies can help highlight potential issues in the field before they turn into costly machine malfunctions down the road.

  Of course, all of this is for naught if you can’t get these tools into the right hands.

Operators shouldn’t need a Ph.D. or extensive coding experience to use and understand their visual analytics system.

You want the power to spot these anomalies in the hands of the people who know your business best.

Therefore, self-service is critical.

When fractions of a second matter, you don’t have time for reports to be drafted or for meticulous lines of code to be written to construct a query.

You need operators on the front lines, unencumbered, with the ability to drill down deep and spot these issues before they impact your bottom line.

Sign up for the free insideBIGDATA newsletter.
