Machine Learning Explainability vs Interpretability: Two concepts that could help restore trust in AI

And what do they actually mean for those of us doing data mining, analysis, and science in 2019?

In the context of machine learning and artificial intelligence, explainability and interpretability are often used interchangeably. While they are very closely related, it's worth unpicking the differences, if only to see how complicated things can get once you start digging deeper into machine learning systems.

Interpretability is about the extent to which a cause and effect can be observed within a system. Or, to put it another way, it is the extent to which you are able to predict what is going to happen, given a change in input or algorithmic parameters. It's being able to look at an algorithm and go: yep, I can see what's happening here.

Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms. It's easy to miss the subtle difference between this and interpretability, but consider it like this: interpretability is about being able to discern the mechanics without necessarily knowing why. Explainability is being able to quite literally explain what is happening.

Think of it this way: say you're doing a science experiment at school. The experiment might be interpretable insofar as you can see what you're doing, but it is only really explainable once you dig into the chemistry behind what you can see happening.

That might be a little crude, but it is nevertheless a good starting point for thinking about how the two concepts relate to one another.

If 2018's techlash has taught us anything, it's that although technology can certainly be put to dubious uses, there are plenty of ways in which it can produce poor, even discriminatory, outcomes with no intention of causing harm.

As domains like healthcare look to deploy artificial intelligence and deep learning systems, questions of accountability and transparency become particularly important. If we're unable to deliver improved interpretability, and ultimately explainability, in our algorithms, we'll seriously limit the potential impact of artificial intelligence. Which would be a shame.

But aside from the legal and professional considerations, there's also an argument that improving interpretability and explainability matters even in more prosaic business scenarios. Understanding how an algorithm actually works can help to better align the activities of data scientists and analysts with the key questions and needs of their organization.

While questions of transparency and ethics may feel abstract to the data scientist on the ground, there are, in fact, a number of practical things that can be done to improve an algorithm's interpretability and explainability.

The first is to improve generalization. This sounds simple, but it isn't that easy. When you consider that most machine learning engineering is about applying algorithms in a very specific way to uncover a certain desired outcome, the model itself can feel like a secondary element; it's simply a means to an end. By shifting this attitude to consider the overall health of the algorithm, and the data on which it is running, you can begin to set a solid foundation for improved interpretability.

This should be obvious, but it's easily missed.
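To make the point about generalization concrete, here is a minimal sketch of one way to sanity-check it, assuming a scikit-learn-style workflow; the dataset and model are placeholders for illustration rather than anything the article prescribes. Comparing training accuracy against cross-validated accuracy gives a quick read on whether the model has learned a pattern that holds up on unseen data.

```python
# A minimal sketch of checking how well a model generalizes, assuming a
# scikit-learn-style workflow. The dataset and model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# Accuracy on the training data alone...
train_score = model.fit(X, y).score(X, y)

# ...and under 5-fold cross-validation, which estimates performance on unseen data.
cv_scores = cross_val_score(model, X, y, cv=5)

# A large gap between the two suggests the model has memorized the training set
# rather than learned a generalizable (and therefore more interpretable) pattern.
print(f"Training accuracy:        {train_score:.3f}")
print(f"Cross-validated accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```

If the cross-validated score sits well below the training score, the model's apparent behavior on the training data tells you little about how it will behave in the wild, which undermines any attempt at interpretation before it even starts.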
