insideBIGDATA Latest News – 3/25/2020

In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning.

Our industry is constantly accelerating, with new products and services being announced every day.

Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting.

Our massive industry database is growing all the time so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

BrainChip and Socionext Provide a New Low-Power Artificial Intelligence Platform for AI Edge Applications

BrainChip Holdings Ltd (ASX: BRN), a leading provider of ultra-low power, high-performance AI technology, announced that Socionext Inc., a leader in advanced SoC solutions for video and imaging systems, will offer customers an Artificial Intelligence Platform that includes the Akida SoC, an ultra-low power, high-performance AI technology.


BrainChip has developed an advanced neural networking processor that brings artificial intelligence to the edge in a way that existing technologies cannot.

This innovative, event-based, neural network processor is inspired by the event-based nature of the human brain.

The resulting technology is high performance, small, ultra-low power and enables a wide array of edge capabilities that include local inference and incremental learning.

Socionext has played an important role in the implementation of BrainChip’s Akida IC, which required the engineering teams from both companies to work in concert.

BrainChip’s AI technology provides a complete ultra-low power AI Edge Network for vision, audio, and smart transducers without the need for a host processor or external memory.

The need for AI in edge computing is growing, and Socionext and BrainChip plan to work together in expanding this business in the global market.

“Our neural network technology enables ultra-low power AI technology to be implemented effectively in edge applications,” said Louis DiNardo, CEO of BrainChip.

“Edge devices have size and power consumption constraints that require a high degree of integration in IC solutions.

The combination of BrainChip’s technology and Socionext’s ASIC expertise fulfills the requirements of edge applications.

We look forward to working with Socionext in commercial engagements.”

IBM Advances Watson’s Ability to Understand the Language of Business

IBM (NYSE: IBM), a leader in artificial intelligence for business, announced several new IBM Watson technologies designed to help organizations begin identifying, understanding and analyzing some of the most challenging aspects of the English language with greater clarity, for greater insights.

The new technologies represent the first commercialization of key Natural Language Processing (NLP) capabilities to come from IBM Research’s Project Debater, the only AI system capable of debating humans on complex topics.

For example, a new advanced sentiment analysis feature is designed to identify and analyze idioms and colloquialisms for the first time.

Phrases like ‘hardly helpful’ or ‘hot under the collar’ have been challenging for AI systems because they are difficult for algorithms to spot.

With advanced sentiment analysis, businesses can begin analyzing such language data with Watson APIs for a more holistic understanding of their operation.
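
As a rough illustration, the snippet below is a minimal sketch of requesting sentiment analysis through the Watson Natural Language Understanding Python SDK. The credentials, service URL, and sample text are placeholders, and whether the new idiom handling surfaces through this exact call is an assumption.

```python
# Minimal sketch: document-level sentiment via Watson NLU (ibm-watson SDK).
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version="2019-07-12", authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")           # placeholder endpoint

# Idiomatic phrases like these are exactly what the new feature targets.
response = nlu.analyze(
    text="The support team was hardly helpful; by the end I was hot under the collar.",
    features=Features(sentiment=SentimentOptions()),
).get_result()

print(response["sentiment"]["document"])  # e.g. {'score': -0.8, 'label': 'negative'}
```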

Further, IBM is bringing technology from IBM Research for understanding business documents, such as PDFs and contracts, into its AI models.

“Language is a tool for expressing thought and opinion, as much as it is a tool for information,” said Rob Thomas, General Manager, IBM Data and AI.

“This is why we believe that advancing our ability to capture, analyze, and understand more from language with NLP will help transform how businesses utilize their intellectual capital that is codified in data.”

Red Hat Accelerates Petabyte-Scale Object Storage for Cloud-Native Workloads

Red Hat, Inc., a leading provider of open source solutions, announced the general availability of Red Hat Ceph Storage 4 to deliver simplified, petabyte-scale object storage for cloud-native development and data analytics.

Red Hat Ceph Storage 4 is based on the Nautilus version of the Ceph open source project.

With enhanced scalability and new, simplified operational features, Red Hat Ceph Storage 4 helps enable organizations across a wide variety of industries, such as financial services, government, automotive and telecommunications, to better support application development, data analytics, artificial intelligence (AI), machine learning (ML) and other emerging workload capabilities.

  “Scalability is imperative to our customers as they seek a competitive advantage with their vast volumes of data,” said Sarangan Rangachari, vice president and general manager, Storage, Red Hat.

“However, the power of scale is lost if performance capabilities can’t match it.

Red Hat Ceph Storage 4 significantly raises the bar on object storage scalability, performance, and simplicity, enabling our customers to grow their businesses and improve operating efficiency.”

Hazelcast Speeds Time-to-Market for Operationalization of Machine Learning in Enterprise Applications

Hazelcast, a leading in-memory computing platform, announced the easiest way to deploy machine learning (ML) models into ultra-low latency production with its support for running native Python- or Java-based models at real-time speeds.

The latest release of the event stream processing engine, Hazelcast Jet, now helps enterprises unlock profit potential faster by accelerating and simplifying ML and artificial intelligence (AI) deployments for mission-critical applications.

Recent research shows that 33% of IT decision-makers see ML and AI as the greatest opportunity to unlock profits; however, 86% of organizations are having difficulty managing these technological advances.

From its recent integration as an Apache Beam Runner to the new features announced today, Hazelcast Jet continues to simplify how enterprises can deploy ultra-fast stream processing to support time-sensitive applications and operations pertaining to ML, edge computing and more.

“With machine learning inferencing in Hazelcast Jet, customers can take models from their data scientists unchanged and deploy within a streaming pipeline,” said Greg Luck, CTO of Hazelcast.

“This approach completely eliminates the impedance mismatch between the data scientist and data engineer since Hazelcast Jet can handle the data ingestion, transformation, scoring and post-processing.”
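
To make that ingest-transform-score-post-process flow concrete, here is a conceptual Python sketch of scoring events inside a streaming pipeline. It is not the Hazelcast Jet API (Jet pipelines are defined in Java, with support for native Python models); the event source and model loader are hypothetical stand-ins.

```python
# Conceptual sketch: run an unchanged model inside a stream pipeline.
import json

def load_model():
    # Stand-in for a data scientist's model, used unchanged.
    return lambda features: 1.0 if features.get("amount", 0) > 100 else 0.0

def pipeline(lines, model):
    for line in lines:                          # ingestion: one JSON event per line
        event = json.loads(line)
        features = {"amount": event["amount"]}  # transformation
        event["score"] = model(features)        # scoring
        if event["score"] > 0.5:                # post-processing / filtering
            yield event

model = load_model()
stream = ['{"id": 1, "amount": 250}', '{"id": 2, "amount": 40}']
for scored in pipeline(stream, model):
    print(scored)  # -> {'id': 1, 'amount': 250, 'score': 1.0}
```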

Alluxio Unveils Structured Data Service, Enabling Structured Data Applications to Interact Up to 5x Faster with Data

Alluxio, the developer of open source cloud data orchestration software, announced the availability of Alluxio Structured Data Service (SDS), featuring a data Catalog Service and Transformation Service, two new major architectural components of its Data Orchestration Platform.

Data engineers, architects and developers can now spend fewer resources storing data and more time delivering data to analytical compute engines.

As users and enterprises leverage widely-available analytics engines such as Presto, Apache Spark SQL or Apache Hive, they often run into inefficient data formats and face performance challenges.

Typically, those engines consume structured data from different databases as “tables” consisting of “rows” and “columns,” rather than as “offsets” and “lengths” in files or objects.

This gap creates multiple challenges and inefficiencies, such as mappings or creating converted copies of the data.

With this announcement, users benefit from a more simplified data platform that enables connections to different catalogs for access to structured data, with fewer copies and pipelines and more compute-optimized data.

“Alluxio now provides just-in-time data transform of data to be compute-optimized, independent of the storage format for OLAP engines, such as Presto and Apache Spark,” said Haoyuan Li, founder and CTO, Alluxio.

“These schema-aware optimizations are made possible with the new Alluxio Catalog Service which abstracts the widely-used Apache Hive Metastore, so regardless of how the data was initially stored – CSV and text formatted files, for example – the data is now transformed into the generally recognized compute-optimized parquet format.

Almost every organization has a surprising amount of data in CSV or other text formats and this removes the manual work to make that data more usable.

A second type of transformation will coalesce many smaller files, enabling the data to be combined into fewer files, which is more efficient to process for SQL engines.

And a third type of transformation, newly available in our Enterprise Edition, sorts table columns, adding to the efficiency of queries.”
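
For a sense of what the first two transformations do, here is a hedged sketch using pyarrow directly; Alluxio’s Transformation Service performs the equivalent work inside its own platform, and the file paths below are purely illustrative.

```python
# Sketch: convert text-formatted data to Parquet, then coalesce small files.
import pyarrow as pa
import pyarrow.csv as pv
import pyarrow.parquet as pq

# 1. Transform a CSV file into the compute-optimized Parquet format.
table = pv.read_csv("events.csv")  # illustrative input path
pq.write_table(table, "events.parquet")

# 2. Coalesce many small Parquet files into one, which SQL engines scan faster.
small_files = ["part-0.parquet", "part-1.parquet", "part-2.parquet"]
combined = pa.concat_tables(pq.read_table(f) for f in small_files)
pq.write_table(combined, "combined.parquet")
```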

Striim Bolsters Cloud Security, Adds Advanced Partitioning and New Data Pipeline Manageability Features for Streaming Data Integration to the Cloud

Striim, provider of an enterprise-grade platform for streaming data integration to the cloud, has introduced a broad set of enhancements to its on-prem-to-cloud data integration offerings that strengthen platform and data security, improve data accountability, speed application development, increase performance, and enable greater flexibility and extensibility.

Striim 3.9.8 introduces new security features for cloud integration use cases that enable more fine-grained and robust use of passwords and encryption to provide end-to-end data protection, particularly in hybrid cloud environments.

“Moving and deploying cloud-native applications is part of next generation digital transformation.

More often than not, these new applications require data feeds from existing on-prem operational systems.

As enterprises embrace their cloud strategy, they face challenges around data security, data accountability, scalability and real time data synchronization,” said Alok Pareek, founder and EVP of Products at Striim.

“With this release, Striim is raising the bar on enabling world-class protection of data streaming at scale, across both on-premises systems and cloud environments for digital transformation.”

TigerGraph Continues Product Innovation with Newest “Graph for All” Release

TigerGraph, the scalable graph database for the enterprise, unveiled TigerGraph 3.0, which delivers the power of scalable graph database and analytics to everyone — including non-technical users.

The announcement comes as companies of all sizes – from emerging startups to global Fortune 1000 businesses – continue to build forward-looking applications with TigerGraph.

During the past four months alone, more than 1,000 developers have harnessed the power of graph to build applications on top of TigerGraph Cloud, the company’s graph database-as-a-service.

“Our mission at TigerGraph is to uncover meaningful, actionable, real-time insights from data – insights that can make a real difference in people’s lives – and to make scalable graph analytics available to everyone,” said Dr. Yu Xu, CEO and founder, TigerGraph.

“Companies across a broad array of industries are upgrading to TigerGraph to unlock real value from connected data.

Banks and other financial services organizations are preempting fraud, while healthcare companies use graph data to improve the patient wellness journey.

TigerGraph’s work in advanced graph analytics has been validated by market recognition, next-generation customer applications and steady product innovation – and we expect 2020 to be even better.”

Run:AI Announces General Availability of its K8s-based Deep Learning Virtualization Platform

Run:AI, a company virtualizing AI infrastructure, announced the general availability of its Kubernetes-based deep learning virtualization platform.

Now supporting Kubernetes-based infrastructures, Run:AI’s solution enables IT departments to set up and manage the critical AI infrastructure that data science teams need, providing control and visibility while maximizing hardware utilization and development velocity.

Data science workloads often need ‘greedy’ access to multiple computing resources such as GPUs for hours on end, but instead face bottlenecks and long experimentation times.

Typically, data scientists are statically allocated a few GPUs each, with those expensive hardware resources sitting idle when not used.

IT departments struggle to allocate the right amount of resources to data science teams, suffering from poor visibility and a lack of control.

Data scientists, meanwhile, either have more GPU capacity than they can currently use, or are limited when they try to run large experiments.

Instead of statically assigning GPUs to data scientists, Run:AI creates a pool of GPU resources, and will automatically and elastically “stretch” a workload to run over multiple GPUs if they’re available.

Important jobs can be given guaranteed quotas, and Run:AI’s software will elastically and automatically scale the workloads to the available hardware based on defined priorities.

To simplify workflows, Run:AI’s virtualization platform plugs into Kubernetes with a single line of code.

The platform’s powerful visibility tools enable companies to understand how their GPU resources are being used by their data science teams, helping with infrastructure scaling and identifying bottlenecks.
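
As a toy illustration of the pooling idea (not Run:AI’s actual scheduler or API), the sketch below honors guaranteed quotas first and then elastically stretches workloads over the remaining idle GPUs by priority.

```python
# Toy sketch: quota-then-priority allocation from a shared GPU pool.
def allocate(pool_size, jobs):
    """jobs: dicts with 'name', 'quota' (guaranteed GPUs), and 'priority'."""
    allocation = {}
    free = pool_size
    # Pass 1: satisfy guaranteed quotas.
    for job in jobs:
        granted = min(job["quota"], free)
        allocation[job["name"]] = granted
        free -= granted
    # Pass 2: stretch workloads over idle GPUs, highest priority first.
    ranked = sorted(jobs, key=lambda j: j["priority"], reverse=True)
    i = 0
    while free > 0 and ranked:
        allocation[ranked[i % len(ranked)]["name"]] += 1
        free -= 1
        i += 1
    return allocation

jobs = [
    {"name": "training-run", "quota": 2, "priority": 10},
    {"name": "notebook", "quota": 1, "priority": 1},
]
print(allocate(8, jobs))  # -> {'training-run': 5, 'notebook': 3}
```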

“About six months ago, we decided to build our scheduler as a plug-in to Kubernetes,” said Dr. Ronen Dar, CTO and co-founder of Run:AI.

“This approach was based on the widespread adoption of containers as the de-facto platform for AI workloads.

Containers are portable, light, and are a good fit for experiments that need to run for days or weeks on end.

Building our powerful platform to simply plug in to Kubernetes makes it seamless to install and requires no additional training or change to a data scientist’s workflows.”

Algorithmia Expands ML Infrastructure Options with VMWare Integration

Algorithmia announced the availability of Algorithmia Enterprise on VMWare, a fully integrated solution for connecting, deploying, scaling, and managing ML models in the cloud or behind the firewall.

Now, customers can run Algorithmia on their existing VMWare infrastructure, in their data center, with the lowest latency and highest security required for their ML-enabled applications.

Machine learning has the most impact on a company’s core line of business applications that are often behind the firewall, particularly in regulated industries like financial services, insurance, health care, and laboratory sciences.

For ML infrastructure to serve those industries, an on-premise product is a requirement.

Furthermore, a major concern for enterprises conducting machine learning is the security implications of moving data (often customer, financial, or other sensitive info) between systems.

It can also be expensive and difficult to move, so building and running ML models close to the data source is a preferred practice as it reduces costs, increases iteration speed, and satisfies security, compliance, and privacy requirements that many businesses have.

Algorithmia Enterprise on VMWare addresses these concerns with an on-premise platform that allows data scientists and ML engineers to easily automate the DevOps, management, and deployment of AI/ML models, while providing unparalleled protection for sensitive and proprietary data.

By choosing VMWare as its preferred on-premise infrastructure, Algorithmia is enabling enterprises to achieve their full AI/ML potential.
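
As a minimal sketch, calling a deployed model through the Algorithmia Python client pointed at an on-premise endpoint might look like the following; the API address, key, and algorithm path are placeholders.

```python
# Sketch: invoke a deployed model via the Algorithmia Python client.
import Algorithmia

# Placeholder key and a hypothetical on-premise Enterprise endpoint.
client = Algorithmia.client("YOUR_API_KEY", "https://algorithmia.internal.example.com")

algo = client.algo("your_org/credit_risk_model/1.0.0")  # hypothetical model path
result = algo.pipe({"income": 72000, "debt": 18000}).result
print(result)
```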

“From the early days at Algorithmia, we knew that multi-cloud was critical to enabling our customers’ success, given the vastly different infrastructure choices one could make,” said Hernan Alvarez, VP of Product at Algorithmia.

“So we focused on getting the foundation right as we knew the speed and quality of the deployment experience is a crucial advantage for customers.

By delivering on a multi-cloud platform that has UX, feature, and operational parity, we solve these problems and deliver on that promise.”

Rockset Releases Query Lambdas for Developers Building Real-Time Data Applications

Rockset, the real-time database in the cloud, announced the release of Query Lambdas, an industry-first capability that runs developers’ queries in response to events, enabling developers to build data applications faster than ever before.

Modern real-time data applications — such as customer 360, inventory management, fraud detection and personalization — help companies stay ahead of their competition by realizing massive internal efficiencies and increasing customer satisfaction.

Rockset, built by the team behind RocksDB and Facebook’s online data platform, is focused on increasing developer velocity for teams building modern data applications.

  Rockset is the real-time database in the cloud that stores and indexes real-time data from transactional databases and event streams, with schema-free JSON documents and declarative SQL over REST.

It is used for building applications that make intelligent decisions on real-time data.
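
As a hedged sketch, invoking a saved Query Lambda over REST with parameters might look like the following; the endpoint path, payload shape, and parameter names are assumptions for illustration rather than Rockset’s exact contract.

```python
# Sketch: execute a saved query over REST and iterate its JSON results.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
URL = ("https://api.rs2.usw2.rockset.com/v1/orgs/self"
       "/ws/commons/lambdas/topProducts/versions/1")  # hypothetical Query Lambda

resp = requests.post(
    URL,
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json={"parameters": [{"name": "category", "type": "string", "value": "toys"}]},
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)  # each row is a schema-free JSON document
```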

  “The team at Standard is always looking to increase the accuracy of the computer vision platform and add new features to the product.

We need to be able to drive product improvements from conception to production rapidly, and that involves being able to run experiments and analyze real-time metrics quickly and simply,” said Tushar Dadlani, computer vision engineering manager at Standard Cognition, in a recent case study.

“Using Rockset in our development environment gives us the ability to perform ad hoc analysis without a significant investment in infrastructure and performance tuning.

We have over two thirds of our technical team using Rockset for their work, helping us increase the speed and agility with which we operate.”

Deque Brings Machine Learning to Accessibility Testing

Deque Systems, a leading software company specializing in digital accessibility, continues to redefine automated accessibility testing by leveraging Machine Learning technology in its axe Pro beta.

In an industry first, Deque has successfully integrated Machine Learning technology to perform powerful visual analyses within axe Pro’s automated and intelligent guided testing, which significantly reduces the amount of manual work required to identify and fix accessibility issues.

Catching these issues quickly and easily is a crucial step to ensure that websites and apps are accessible to all people, including those with disabilities.

“Much of accessibility testing involves determining whether digital content is accurately conveyed to assistive technologies and the users who rely on them to access that content,” comments Preety Kumar, CEO, Deque Systems.

“By leveraging Machine Learning technology, we’ve continued to automate many legacy manual testing efforts, drastically reducing testing costs and making better use of a developer’s time.”

Information Builders Omni-HealthData Leveraging Data and Analytics to Understand Social Determinants of Health

Information Builders Inc. (IBI), a leading data and analytics company, announced new capabilities with its healthcare information management platform, Omni-HealthData, that enable better understanding and use of social determinants of health (SDOH) in providing quality patient care and integrated services.

Omni-HealthData brings together clinical and SDOH data within a geospatial context that helps identify individuals and populations for care interventions and delivers insights for strategic healthcare initiatives.

  “It’s clear to anyone who works in healthcare that there are opportunities to better manage SDOH by making relevant information more discoverable and actionable,” said Krishna Venugopal, SVP and chief technology officer at Community Care of North Carolina, an organization dedicated to improving the health and quality of life of all North Carolinians by building and supporting better community-based healthcare delivery systems.

“Our work with Information Builders and Omni-HealthData to ingest and maintain unitary and integrated healthcare data creates a more comprehensive view of our patients’ needs, allowing us to provide optimal care.”

Databricks Delivers Security and Scalability Enhancements to Accelerate Enterprise Deployments

Databricks, a leader in unified data analytics, announced new features within its platform that provide deeper security controls, proactive administration and automation across the data and ML lifecycle.

  As data teams enable analytics and machine learning (ML) applications across their organizations, they require the ability to securely leverage data at massive scale.

Doing this can be complex and risky, especially when operating in a multi-cloud environment.

Security is fragmented, which makes corporate access policies difficult to extend, administration is reactive and inefficient, and devops processes like user management or cluster provisioning are manual and time consuming.

Databricks’ Unified Data Analytics Platform addresses these challenges by helping organizations bring all their users and data together in a simple, scalable and secure service that can leverage the native capabilities of multiple clouds.

“The biggest challenge for organizations today is selecting an enterprise platform that can handle all of your data and all of the people that interact with it – today and in the future,” said David Meyer, senior vice president of Product Management at Databricks.

“Databricks is the only platform that has successfully achieved the massive scale and simplicity that enables enterprises to make data, business analytics and machine learning pervasive enterprise-wide.

We’re committed to preserving this for our customers, regardless of if and how their cloud strategies evolve over time.

These new features are a great example of how we’re doing that.”

Nutanix Brings Invisible Infrastructure to Big Data and Analytics

Nutanix, a leader in enterprise cloud computing, announced it extended the Nutanix platform with new features for big data and analytics applications, as well as unstructured data storage.

These capabilities, part of Nutanix Objects 2.0, include the ability to manage object data across multiple Nutanix clusters for achieving massive scale, increased object storage capacity per node, and formal Splunk SmartStore certification.

The enhancements add to a cloud platform that is already optimized for big data applications, delivering performance and incredible scale while also reducing cost by putting otherwise idle resources to work.

Big data workloads demand cloud environments that can efficiently manage extremely large volumes of unstructured data, as well as deliver the high performance necessary to analyze the data in real-time to extract business insight.
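
Assuming the object store exposes an S3-compatible endpoint (the usual pattern for stores certified with Splunk SmartStore), writing and listing machine data might look like this boto3 sketch; the endpoint, credentials, bucket, and key are placeholders.

```python
# Sketch: write to and list an S3-compatible object store with boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                   # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="smartstore-bucket",
              Key="indexes/main/bucket-0001.tsidx",
              Body=b"...")  # dummy index payload
print(s3.list_objects_v2(Bucket="smartstore-bucket").get("KeyCount"))
```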

With companies reliant on business data to create personalized customer experiences, IT teams often struggle with siloes, complexity, and operational inefficiencies.

Options currently available do not offer secure, end-to-end solutions to run big data applications that can easily scale.

“Digital transformation requires web-scale storage for enterprise workloads.

Object storage is rapidly becoming the storage of choice for next gen and big data applications.

As object storage makes the leap from the cloud to the datacenter and mission critical workloads, economics must be balanced with performance,” said Amita Potnis, research director in IDC’s Storage team.

“Nutanix is known for flexibility and simplicity.

Multi-cluster support and certification with Splunk SmartStore with Nutanix Objects will allow for massive scale at the right price and performance that these workloads require.”

Domo Shines a Light on Dark Data with New Augmented Capabilities in the Business Cloud

Domo (Nasdaq: DOMO) announced it is making it even easier to get BI leverage at cloud scale in record time through new augmented capabilities in the Domo Business Cloud.

 In a new Dimensional Research study sponsored by Domo, 92% of individuals surveyed said they’ve made decisions in the past three months without having all the information they wanted, with most reporting that data is just too hard to access.

And while 77% of respondents reported they know “dark data” across their organizations goes unused, 88% of people said they struggle to access data that is outside their control.

“We believe that moving fast and using great data is what will define great companies in the cloud era; however, getting the right data and making it usable has been too difficult and time consuming,” said Catherine Wong, chief product officer and EVP of Engineering.

“We’ve introduced these new intelligent capabilities into the Business Cloud to augment how data is accessed and leveraged with the goal of turbo-charging the speed at which companies can innovate and move their business forward.”

Eventador Shatters the Barrier Between Streaming Data and Applications

Eventador.io, the streaming data engine for building killer apps, unveiled the Eventador Platform version 2.0, the first end-to-end, produce-to-consume stream processing platform that gives companies the ability to quickly and easily build applications from the firehose of streaming data.

With Eventador, data science, developer, and data engineering teams can unlock new value and power from data streams in their models, applications, and ETL flows.

Eventador Platform v2.0 solves the complex problem of providing a queryable, time-consistent state of streams via materialized views.

Views are defined using ANSI-SQL, are automatically indexed and maintained, and arbitrarily queried via RESTful endpoints.

Users can query by secondary key, perform range scans, and utilize a suite of common operators against these views.
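
As a hedged illustration, a point lookup by secondary key and a range scan against such a view’s RESTful endpoint might look like the following; the host, path, and parameter names are hypothetical.

```python
# Sketch: query a materialized view over REST by key and by range.
import requests

BASE = "https://api.eventador.example/v2/views/order_totals"  # hypothetical endpoint

# Point lookup by a secondary key.
print(requests.get(BASE, params={"customer_id": "c-42"}).json())

# Range scan over a key interval.
print(requests.get(BASE, params={"from": "2020-03-01", "to": "2020-03-25"}).json())
```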

With the new release, the Eventador Platform removes the need for additional database, web server, load balancing, or other complex infrastructure, which means developing streaming applications is now faster and less costly than with current, often piecemeal stream processing systems.

This not only provides organizations the lowest possible TCO for their streaming platforms, but also increases innovation and access to new revenue streams with faster application time-to-market.

“Companies have adopted Apache Kafka as the de facto data bus for streaming data,” said Kenny Gorman, Co-founder and CEO of Eventador, “By using the Eventador Platform, customers no longer need to provision expensive, often slow database infrastructure and can instead query streaming data directly using materialized views.

The Eventador Platform is a bespoke solution, not a simple add-on to Kafka, that solves the disconnect between streaming data and applications.”

Sign up for the free insideBIGDATA newsletter.
