Could Rogue AI Services Become the New Tool for Harvesting Data and Distributing Malware?

In this special guest feature, Ramesh Mahalingam, CEO of Vizru Inc., discusses how organizations need to adopt a cohesive framework for embedding AI within the enterprise.

The resulting guardrail simultaneously prioritizes regulatory compliance, data privacy, and security to mitigate the risks of rogue AI services while maximizing revenues.

More importantly, it helps companies to avoid two detrimental outcomes.

Vizru Inc. provides a no-code, autonomous application development and digital transformation platform that allows users to build AI-based business automation apps in minutes, over existing cloud and on-premises infrastructure.

There’s an imminent danger associated with Artificial Intelligence (AI) systems that, if left unchecked, could potentially negate its prized advantages of automation and acceleration.

This danger centers on how the industry functions as a whole and, as a result, it often goes unnoticed by organizations until it’s too late.

Most AI vendors require years (or decades) of proprietary enterprise data to build cognitive models involving machine learning and natural language processing (NLP).

This practice leaves organizations susceptible to rogue AI systems harvesting data and distributing malware, which jeopardizes data’s competitive value and security, respectively.

Consequently, security and compliance officers are skeptical, AI initiatives stall, and instead of transforming core business processes, these technologies are only deployed for fringe use cases.

Organizations can avoid these issues by adopting a cohesive framework for embedding AI within the enterprise.

The resulting guardrail simultaneously prioritizes regulatory compliance, data privacy, and security to mitigate these risks while maximizing revenues.

More importantly, it helps companies to avoid two detrimental outcomes:

Consequence #1: Data Harvesting

Data harvesting typifies the concerns about rogue AI systems because it’s foundational to common perceptions of AI.

It’s an immediate consequence of the data quantities AI vendors require.

It’s also enabled by the belief that the more data used to train AI, the more accurate its models are.

In reality, it is the relevance and quality of the data organizations have that really improve these models’ performance.

Moreover, it’s often the line of business—as opposed to IT—driving the need for AI.

Without IT’s control over their data, organizations are at the mercy of vendors (who may have ulterior motives) for their data integrity.

Data harvesting occurs when unscrupulous vendors slice and dice organizations’ data any way they want, then resell that data for their own profit.

Once they have an organization’s data, vendors can analyze, compute, and create new metrics from them.

This data is easily repackaged and sold to others, negatively impacting the initial organization in multiple ways: it not only loses the competitive value of its data assets, but it also risks a number of regulatory compliance violations regarding data privacy and security.

In healthcare, an organization submitting claims information to AI vendors can unwittingly enable them to sell the health reports to one customer, the mental health data to another, and the demographic data to yet another customer.

Each of these acts devalues the initial organization’s data while increasing risk for non-compliance and data privacy issues.

Consequence #2: Distributing Malware

The potential to distribute malware is a particularly formidable result of rogue AI systems.

This hazard stems from the inherent security issues of receiving information back from vendors after AI models have been trained, and it is best explained by an analogy to app marketplaces for phone operating systems.

One well-known vendor has a low barrier to entry for the apps it makes available to customers.

It’s not uncommon for those apps to contain memory leaks and potential legal issues because there’s no framework to vet them before they appear in the store.

Conversely, the barrier to entry of its competitor is much higher, because the latter’s framework verifies apps’ integrity prior to making them available.

Without a similar framework, organizations can receive malware in files returned from vendors.

Companies may access the cloud for optical character recognition services, for instance, and receive infected files.

Preserving the security of internal environments is essential, yet point solutions for AI may expose those environments to malware or even ransomware attacks.

With so many AI tools accessible in the cloud, organizations must diligently guard their assets to prevent security compromises with compounding regulatory consequences.

A Cohesive AI Framework

A cohesive AI framework is the most effective means of preventing malware distribution and data harvesting.

Leveraging the framework, organizations define data governance policies and implement controls to enforce them, such as tokenizing PII or encrypting data before it leaves the enterprise.
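
As a rough illustration, and not a description of any particular vendor’s product, the sketch below shows one way such a control might look in practice: PII fields are replaced with opaque tokens before a record is sent to an external AI service, and the token-to-value mapping never leaves the enterprise. The field names, the tokenize_record helper, and the keying scheme are all hypothetical assumptions for the example.

    import hashlib
    import hmac
    import os

    # Hypothetical list of fields treated as PII under the organization's policy.
    PII_FIELDS = {"name", "ssn", "date_of_birth", "address"}

    # Secret key kept inside the enterprise; tokens cannot be reversed without it.
    TOKEN_KEY = os.environ.get("PII_TOKEN_KEY", "change-me").encode()

    def tokenize_value(value: str) -> str:
        """Replace a PII value with a deterministic, non-reversible token."""
        digest = hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()
        return f"tok_{digest[:16]}"

    def tokenize_record(record: dict) -> tuple[dict, dict]:
        """Return a sanitized copy of the record plus the local token-to-value map."""
        sanitized, mapping = {}, {}
        for field, value in record.items():
            if field in PII_FIELDS and isinstance(value, str):
                token = tokenize_value(value)
                sanitized[field] = token
                mapping[token] = value  # stays on-premises, never sent to the vendor
            else:
                sanitized[field] = value
        return sanitized, mapping

    # Only the sanitized record would be submitted to the external AI service.
    claim = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis_code": "J45.909"}
    safe_claim, local_map = tokenize_record(claim)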

This means organizations no longer have to rely solely on vendor integrity to prevent malware infections, as they’ll have their own detection and isolation methods in their frameworks.
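
For the detection and isolation side, a minimal sketch is shown below, assuming files returned by a vendor (for example, from a cloud OCR service) are first staged in a quarantine area, checked against the file type the organization actually requested, and then scanned with whatever anti-malware tooling it already runs. The quarantine path and the av-scan command are placeholders, not real products.

    import subprocess
    from pathlib import Path

    # Hypothetical quarantine location inside the enterprise; nothing leaves it
    # until it passes whatever scanner the organization already operates.
    QUARANTINE_DIR = Path("/srv/ai-framework/quarantine")

    # Magic-byte prefixes for the file types this workflow expects back from the vendor.
    EXPECTED_SIGNATURES = {
        "pdf": b"%PDF",
        "png": b"\x89PNG",
    }

    def quarantine_vendor_file(payload: bytes, name: str, expected_type: str) -> Path:
        """Stage a vendor-returned file and reject it if it isn't the type we asked for."""
        signature = EXPECTED_SIGNATURES[expected_type]
        if not payload.startswith(signature):
            raise ValueError(f"{name} does not look like a {expected_type}; rejecting it")
        QUARANTINE_DIR.mkdir(parents=True, exist_ok=True)
        path = QUARANTINE_DIR / name
        path.write_bytes(payload)
        return path

    def scan_before_release(path: Path) -> bool:
        """Run the organization's own scanner (here, a placeholder 'av-scan' command)."""
        result = subprocess.run(["av-scan", str(path)], capture_output=True)
        return result.returncode == 0  # only release the file into internal systems if clean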

This guardrail shifts the entire means of accessing AI from an isolated, external process to a more internal one, reducing the chances of harvesting data and distributing malware.

It prevents AI systems from going rogue and instead holds them accountable to the same enterprise protocols and procedures as every other IT system.

The ability to access external resources for AI (or any other purpose) in a manner consistent with that of internal resources is essential to their long-term enterprise value.

Safely Deploying AI

Incorporating AI into the enterprise holds tremendous benefits, but all too often data harvesting and malware distribution concerns curb actual deployments.

These issues are substantially mitigated by a cohesive AI framework that internalizes controls for accessing AI services, regardless of where they are. This guardrail also minimizes the threat of rogue AI systems by centralizing how they’re incorporated within the enterprise, and it is imperative for safely implementing AI in core production settings.
