People don’t trust AI. We need to change that.


Susannah Shattuck · Feb 19

This past week, I had the pleasure of attending and speaking at THINK, IBM’s annual customer and developer conference, which brought over 25,000 attendees to San Francisco.

Of course, it’s next to impossible to sum up an event of that scale in a simple blog post — but I’d like to share a few key ideas that stuck out to me from the conversations and sessions in which I took part.

One theme in particular was the focus of my time at THINK: the question of our trust in artificially intelligent systems.

To put it bluntly, most people don’t trust AI — at least, not enough to put it into production.

A 2018 study conducted by The Economist found that 94% of business executives believe that adopting AI is important to solving strategic challenges; however, the MIT Sloan Management Review found in 2018 that only 18% of organizations are true AI “pioneers,” having extensively adopted AI into their offerings and processes.

This gap illustrates a very real usability problem that we have in the AI community: people want our technology, but it isn’t working for them in its current state.

And I believe that lack of trust is one of the chief causes of this problem.

There are some very good reasons why people don’t trust AI tools just yet.

For starters, there’s the hot button issue of bias.

Recent high-profile incidents have garnered significant media attention, helping to make machine learning bias a household term.

Organizations are justifiably hesitant to implement systems that might end up producing racist, sexist, or otherwise biased outputs down the line.

Here’s the thing: AI is biased by design.

As Brandon Purcell, Principal Analyst at Forrester, so eloquently put it during our THINK panel discussion on AI fairness and explainability, machine learning models are called “discriminators” for a reason.

There’s intentional, necessary bias (for example, weighting peer-reviewed papers more heavily than Wikipedia articles in a question-answering system), and then there’s unintentional, harmful bias, such as building a facial recognition model that can’t recognize people of color.

But “bad bias” is a difficult problem to solve in algorithmic decision-making systems, in part because we haven’t been able to eliminate it from human decision-making systems.

We live in a biased world, full of biased data, which will train biased models.

The author (right), Brandon Purcell of Forrester Research (middle), and Rohan Vaidyanathan of IBM (left) discuss AI fairness and explainability at THINK 2019.

In addition to the issue of bias, another key topic that came up as a barrier to trust was explainability — or lack thereof.

There has been much discussion, particularly after GDPR went into effect, of explainability in the context of understanding how a given model has arrived at an individual decision.

This type of explainability can be very difficult to achieve in more complex machine learning systems that employ “black box” techniques like neural networks or XGBoost models.

It is also very important, from both a regulatory and an ethical standpoint.
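To make that concrete, here is a minimal sketch of one common post-hoc approach to per-decision explanations: perturb each feature of a single input and measure how much the model’s output moves. The model, feature names, and data below are invented for illustration, and real tooling (LIME, SHAP, and similar libraries) is far more rigorous, but the underlying intuition is the same.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# A stand-in "black box" model trained on synthetic data (purely illustrative).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "tenure", "balance"]  # hypothetical names
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def explain_prediction(model, X_background, x, feature_names, n_samples=200):
    """For each feature, swap in random values drawn from the background data
    and measure the average shift in the predicted probability. Features that
    move the prediction the most are the most influential for this decision."""
    rng = np.random.default_rng(0)
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    impact = {}
    for j, name in enumerate(feature_names):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(X_background[:, j], size=n_samples)
        shifted = model.predict_proba(perturbed)[:, 1]
        impact[name] = float(np.mean(np.abs(shifted - base)))
    return base, impact

base_score, contributions = explain_prediction(model, X, X[0], feature_names)
print(f"Predicted probability of the positive class: {base_score:.2f}")
for name, score in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: average impact {score:.3f}")
```

Even a rough explanation like this gives a business owner something to interrogate: which inputs actually drove this particular decision.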

I would argue, however, that there is another element of explainability that is just as important to building trust in AI models, and that is the ability to understand the impact of an AI system — to be able to connect model outputs to tangible outcomes.

Having an audit trail for machine learning models is critical for understanding how those models are performing over time.

Without the ability to audit their models’ function (and ensure consistent benefit, or know when to make corrections in the event of degrading performance), many organizations will rightly be hesitant to place real trust in any AI system.
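As a sketch of what such an audit trail could look like in practice (the schema and field names here are my own illustration, not any particular product’s), logging every scoring request with its inputs, output, model version, and timestamp is enough to replay decisions later and to notice when performance starts to degrade:

```python
import json
import time
import uuid

def log_prediction(log_file, model_version, features, prediction, confidence):
    """Append one scoring event to a JSON-lines audit log (illustrative schema)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "confidence": confidence,
        "actual": None,  # filled in later, once the real outcome is known
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

def accuracy_over_time(log_file):
    """Replay the audit log and compute accuracy on records with known outcomes."""
    hits, total = 0, 0
    with open(log_file) as f:
        for line in f:
            record = json.loads(line)
            if record["actual"] is not None:
                total += 1
                hits += int(record["prediction"] == record["actual"])
    return hits / total if total else None

# Example usage with made-up values:
log_prediction("audit.jsonl", "credit-risk-v3",
               {"income": 52000, "tenure": 4}, "approve", 0.87)
```

Once the actual outcomes are joined back in, the same log supports drift detection, fairness checks, and the kind of “why did the model decide that back in March?” questions that auditors and regulators ask.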


The last challenge to AI adoption that came up again and again at THINK was the skill gap that many organizations are facing.

In a 2018 study from IBM, 63% of respondents cited a lack of technical skills as a barrier to AI implementation.

Through my own conversations with customers, I have learned that most organizations either don’t have a dedicated data science team, or they have a very small team that is overextended.

According to Deloitte, by 2024, the United States is projected to face a shortage of 250,000 data scientists, based on current supply and demand.

For companies without dedicated resources, business owners end up being responsible for AI projects.

Without tools to help them understand how these systems are actually working, they struggle to take experiments out of the lab and into production.

It’s no wonder that the MIT Sloan Management Review’s 2018 research report found that the majority of organizations — 82% of those surveyed — had failed to adopt AI beyond pilot or proof-of-concept projects.

So, people don’t trust AI — and I think our top priority as technologists should be to change that, as quickly as possible.

There are a few reasons why this lack of trust in AI systems is so troubling to me.

Firstly, I keep returning to the MIT Sloan study.

There are companies that have made the leap to AI adoption at scale: those 18% of organizations classified as “pioneers.” These are companies with a deep bench of data scientists, who have the resources to build their own solutions to the three barriers I identified above.

And with the help of AI, these companies stand to dominate the market.

I am a staunch believer in the importance of healthy competition as a safeguard for customer and employee interests.

And so, I want to help as many companies as possible keep up with the changing tide.

I want to ensure that a wide variety of businesses are able to use the same tools that a handful of large corporations are already leveraging.

If we don’t help the little guys trust AI, they will be swallowed whole by the market.

Secondly, I fear that if we don’t succeed in building models and systems that are worthy of widespread trust, we will never be able to realize the full positive potential of AI (sure, you can call me a techno-idealist).

I believe that the real promise of AI is that of leveling the playing field — of disrupting the economies of scale that have dominated the market for hundreds of years.

But the playing field cannot be leveled if the majority of people and organizations don’t want to play.

The skeptics in the back will be quick to point out that democratization of technology has produced plenty of bad results, too — I realize that it’s not all roses and ponies when it comes to widespread adoption.

But I would argue that the risk of bad actors causing tremendous harm is higher if we keep our technology closed and opaque.

If we don’t provide everyone with the tools they need to make sense of AI systems, to make smart decisions about when and where to use those systems, we are leaving more room for those with malicious intent to take advantage of people’s lack of understanding.

So how do we solve this urgent problem of lack of trust in AI?

It starts with addressing the sources of mistrust.

The people I spoke with at THINK have a few ideas.

To tackle the issue of bias, datasets designed to eliminate blind spots in training data, like the Diversity in Faces dataset, are a good start.

Tools like AI Fairness 360 help data scientists identify bias in their models.

And expertise in checking for and mitigating bias, from groups like the Algorithmic Justice League, is essential to putting these tools to work effectively.

The Diversity in Faces dataset includes 10 facial coding methods, offering a jumping off point for researchers working on facial recognition models.

Image via IBM Research.
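To give a sense of what the bias-checking tools mentioned above actually compute, here is a minimal sketch of one widely used fairness metric, disparate impact: the ratio of favorable-outcome rates between an unprivileged group and a privileged group. The data below is invented, and toolkits such as AI Fairness 360 implement this metric and many others with far more care, but the core calculation is simple:

```python
import numpy as np

def disparate_impact(predictions, groups, unprivileged, privileged):
    """P(favorable outcome | unprivileged) / P(favorable outcome | privileged).
    A common rule of thumb (the 'four-fifths rule') flags values below 0.8."""
    rate_unpriv = predictions[groups == unprivileged].mean()
    rate_priv = predictions[groups == privileged].mean()
    return rate_unpriv / rate_priv

# Made-up model outputs (1 = favorable decision) and group labels, for illustration only.
preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(f"Disparate impact: {disparate_impact(preds, group, 'b', 'a'):.2f}")  # 0.50 here
```

A single number like this doesn’t prove a model is unfair, but it turns a vague worry into something a team can track, threshold, and act on.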

Technology that increases the legibility of AI systems is also a must — the product I work on, Watson OpenScale, is focused on this issue.

We particularly need tools designed for non-technical or semi-technical audiences, tools that bring transparency to AI models in a language that business owners can understand.

And smart regulation will be key to driving work in these areas.

GDPR in the EU and the forthcoming CCPA in California are already demanding higher levels of explainability from “algorithmic decision-making” systems.

Regulations around the level of bias allowed in a system might push technologists to develop new methods to ensure fairness in AI systems.

Lack of trust in AI systems is a symptom of larger problems that we desperately need to solve.

The technical capabilities of the narrow AI we have today are rapidly advancing, much faster than our ability to manage the risks they carry.

Researchers are already developing models that they don’t feel comfortable sharing for fear of malicious use.

If the AI community doesn’t focus our efforts on solving the problems with this technology today, someone else will solve them for us tomorrow — in the form of extreme, innovation-limiting regulations or, much worse in my opinion, a complete loss of faith and disavowal of AI by the general public.

The good news is that we’re discussing the issue of mistrust in AI at industry events like THINK.

I am made hopeful by the way I have seen the technology community rally around challenges like bias, explainability, and legibility in the past year.

And I want to encourage wider participation in these discussions, from technical and non-technical stakeholders alike.

It will take a village to build AI systems that everyone can trust.
