Explainable AI: The Key to Responsibly Adopting AI in Medicine

In this special guest feature, Niv Mizrahi, CTO & Co-Founder of Emedgene, discusses a field of technology that is constantly rising in importance: explainable (or interpretable) AI, and specifically how it has become key to responsibly adopting AI in medicine.

Emedgene is a genomics company using AI to automatically interpret genetic data so that health organizations can scale personalized care to wider populations.

An expert in machine learning and big data, Niv has led Emedgene’s development from idea to a mature solution used by leading genomics labs.

Who has an algorithm problem? A recent entry in the ranks of algorithm scandals is the Apple credit card.

Unfortunately for Apple, a tweetstorm was ignited when Ruby on Rails creator David Heinemeier Hansson claimed that despite filing joint tax returns with his wife, the Apple Card gave him a credit limit 20 times higher than hers.

Apple’s co-founder Steve Wozniak also joined the conversation, saying he received 10 times the credit his wife did.

Goldman Sachs, the bank issuing the card on behalf of Apple, claimed the algorithm was vetted for bias by a third party and doesn’t even use sex as an input, but the New York Department of Financial Services (NYDFS) has launched an investigation.

The Apple algorithm fiasco comes on the heels of Amazon scrapping a recruiting algorithm that favored men, before it was incorporated into the company’s hiring practices.

When Google was accused of racist autocomplete queries, Google’s vice president of news, Richard Gingras, said: “As much as I would like to believe our algorithms will be perfect, I don’t believe they ever will be.”

Algorithms in healthcare: The problem with inaccurate models

Incorporating algorithms in healthcare carries an even greater responsibility, as people’s lives and health are at stake.

In a recent publication in Science, researchers demonstrated that an algorithm widely used to identify and help patients with complex health needs introduces a racial bias that reduces the number of black patients identified for extra care by more than half.

Moreover, the decision-making of clinicians may be impacted by the algorithmic recommendations.

In a study of algorithms in the clinic, the accuracy of a model that identifies carcinomas significantly impacted pathologist performance.

When the model was accurate, pathologists were more accurate as well.

When the model was inaccurate, their performance dropped.

How can we responsibly adopt AI in healthcare?

One way to raise confidence in an AI model’s recommendation and raise awareness of its biases is to incorporate explainable AI (XAI) models.

Such models attempt to answer the question “why did the model predict that?” in a way that is clearly understood by users.

One of the EU’s recently published seven requirements for trustworthy AI is transparency, which specifically calls for explainable AI systems and for making humans aware that they are interacting with AI, as well as of its capabilities and limitations.

When DARPA, the U.S. Defense Advanced Research Projects Agency, launched a program funding XAI, it cited the need for warfighters to understand, trust, and effectively manage their artificially intelligent machine partners.

The goal is to make it easy for humans to understand the key features driving AI results, as well as to detect error or bias.

How do we design good explainable AI models?

Explainable models are often secondary models that attempt to shed light on the original model’s features.

Most of the work in XAI revolves around retrofitting approximate models over the more complex original models.
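
To make this concrete, here is a minimal sketch of that retrofitting approach, assuming a generic tabular classification task rather than any particular clinical system: a shallow, human-readable decision tree is trained to mimic the predictions of a more complex black-box model. The data, feature names, and model choices are illustrative placeholders.

```python
# A minimal sketch of a post-hoc surrogate explanation (illustrative only):
# a shallow, human-readable decision tree is fit to mimic a black-box
# model's predictions. The dataset and feature names are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a tabular clinical dataset.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# The "complex original model" we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels,
# so it approximates the model's behavior rather than the underlying data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate agreement with black-box predictions: {fidelity:.2f}")

# The tree's rules are the human-readable "explanation".
print(export_text(surrogate, feature_names=feature_names))
```

The point of the fidelity check is that a surrogate explanation is only as trustworthy as its agreement with the model it claims to explain.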

But does this answer a user’s need for justification and assurance? When we turn to social science research on what makes explanations useful, we arrive at a more nuanced understanding.

In the social sciences, explanations are described as contrastive: a user should understand why X was selected over Y.

Humans don’t require a complete explanation; they are adept at selecting the facts that support the decision.

Probabilities don’t matter as much as causality: a statistical representation on its own will be less satisfying than one accompanied by a causal explanation.

Lastly, explanations are social, and represent a transfer of knowledge.

When viewed from the social science perspective, can we redefine our XAI requirements for healthcare practitioners? All models today are intended for use with a human in the loop.

How can we best transfer knowledge, explain causality rather than just probability, and focus on the key facts necessary for decision-making? Perhaps the best XAI models are the ones that overlay the causal decision-making factors on the original model’s recommendation, allowing us to apply both pre- and post-modeling approaches to achieve our goals.
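
As a rough sketch of what such an overlay might look like, assuming a tabular model and entirely hypothetical feature names, the snippet below reports a case-level recommendation together with the few features whose ablation most changes that prediction, a crude stand-in for the causal factors a clinician would want to see.

```python
# A minimal sketch of overlaying per-case decision factors on a model's
# recommendation: each feature is ablated (replaced by its population mean)
# and the resulting shift in predicted probability is reported alongside
# the prediction. Data and feature names are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)
feature_names = ["age", "marker_a", "marker_b", "history", "dose"]  # made up

model = RandomForestClassifier(random_state=1).fit(X, y)

case = X[:1].copy()                          # one record to explain
baseline = model.predict_proba(case)[0, 1]   # probability of the positive class
means = X.mean(axis=0)

# Rank features by how much neutralizing each one moves this case's score.
contributions = []
for i, name in enumerate(feature_names):
    perturbed = case.copy()
    perturbed[0, i] = means[i]
    delta = baseline - model.predict_proba(perturbed)[0, 1]
    contributions.append((name, delta))
contributions.sort(key=lambda item: abs(item[1]), reverse=True)

label = "positive" if baseline >= 0.5 else "negative"
print(f"Recommendation: {label} (p={baseline:.2f})")
print("Key factors for this case:")
for name, delta in contributions[:3]:
    print(f"  {name}: {delta:+.2f} probability shift if set to the average")
```

The specific attribution method matters less here than the presentation: the explanation travels with the recommendation instead of living in a separate report.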

Regardless of our specific implementation, explainable models should be used to make it easier for medical professionals to utilize AI in clinical workflows.

In a 2019 survey of pathologists, 75% were concerned about AI errors, despite an overall positive attitude towards adoption of the technology in their workflows.

Explainable or interpretable systems will increase clinicians’ trust in the results provided by the model, as well as give them tools to assess the model’s outputs.

Explainable models also advance human-machine collaboration.

They allow the AI to perform tasks that it’s better suited for, like collecting and structuring information or extracting patterns from massive amounts of data.

Meanwhile, humans can focus on extracting meaning from the data collected.

Finally, we should have a method of assessing explainable models to ensure their usefulness to clinicians.

Wilson et al. defined a concise model they call “the three Cs of interpretability.” These include completeness, correctness, and compactness: the explanation should be as succinct as possible.
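
As a rough sketch, and assuming one possible way to operationalize these criteria for a surrogate decision tree (fidelity for correctness, leaf count for compactness, and confidently covered cases for completeness), the snippet below scores an explanation on the three Cs. The thresholds, data, and metric definitions are illustrative assumptions, not definitions taken from Wilson et al.

```python
# A toy illustration of scoring a surrogate explanation against the
# "three Cs", under assumed operationalizations: correctness as agreement
# with the black box, compactness as the number of leaf rules, and
# completeness as the share of cases the surrogate covers with high
# confidence. These definitions are one possible reading, not canonical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

correctness = surrogate.score(X, black_box.predict(X))   # fidelity to the black box
compactness = surrogate.get_n_leaves()                   # number of rules
confidence = surrogate.predict_proba(X).max(axis=1)
completeness = (confidence >= 0.9).mean()                # confidently covered cases

print(f"Correctness (fidelity):  {correctness:.2f}")
print(f"Compactness (leaves):    {compactness}")
print(f"Completeness (coverage): {completeness:.2f}")
```

None of these numbers is standard; the point is simply that each of the three Cs can be measured rather than asserted.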

We would add one additional point: explanations should be actionable, so that medical professionals reviewing the algorithmic output and the accompanying explanation have a clear, and hopefully improved, path to clinical decisions.
