Four Approaches to Explaining AI and Machine Learning

We used a simple model to demonstrate these differences to make a point: even a model with just two variables can't be explained by LOCO, PI, or LIME as accurately as ZAML explainability can. If an explainability technique has a hard time explaining the simplest of "toy" models, it may be unwise to trust it on a real-world application where large dollar amounts are at stake.

LOCO, LIME, and PI all have shortcomings with respect to accuracy, consistency, or speed. LOCO and PI work by analyzing one feature at a time, so as the number of model variables goes up, the algorithms become slower and more expensive to run. LOCO, LIME, and PI look only at the inputs and outputs of the model, which means they have access to much less information than explainers such as ZAML's that look at the internal structure of the model. Probing the model externally (i.e., through its inputs and outputs) is an imperfect process that can lead to mistakes and inaccuracies; the first sketch below shows how such a probe works and why its answers depend on arbitrary choices. So is analyzing refitted and/or proxy models, as LOCO and LIME require, instead of analyzing your final model. We believe that analyzing the real model is essential: if you don't, you open yourself up to a lot of risk.

When attempting to explain advanced ML models, it's important that your explanations capture the effect of a feature holistically, in relation to other features. Univariate analysis, as typically employed with LOCO and PI, will not properly capture these feature interactions and correlation effects, so accuracy suffers again; the second sketch below illustrates this with two correlated features.

Explainers should also calculate feature explanations from a global, and not just a local, perspective. Let's say you built a model to understand why Kobe Bryant was a great basketball player.
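To make the "probing from the outside" point concrete, here is a minimal, simplified sketch of how a LIME-style explainer operates: it samples perturbed inputs around one point, queries the black-box model, and fits a proximity-weighted linear proxy to the responses. This is not ZAML's method and not the actual LIME library; the function name, the toy black-box model, the sampling scale, and the proximity kernel are all illustrative assumptions.

```python
# A simplified sketch of an input/output "probing" explainer (LIME-style):
# perturb the input, query the black-box model, fit a local linear surrogate.
# All names and parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_weights(predict_fn, x, n_samples=500, scale=0.1, seed=0):
    """Fit a proximity-weighted linear proxy to the black-box model around x."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in a neighborhood of x and query the model.
    Z = x + scale * rng.normal(size=(n_samples, x.shape[0]))
    preds = predict_fn(Z)
    # Weight samples by closeness to x (an RBF-style kernel).
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # local "explanation": one weight per feature

# Stand-in black-box model with an interaction term, for illustration only.
black_box = lambda Z: np.sin(Z[:, 0]) + Z[:, 0] * Z[:, 1]
print(local_surrogate_weights(black_box, np.array([0.5, -1.0])))
```

Because the explanation comes from a proxy fitted to sampled inputs and outputs, changing the sampling scale, kernel, or random seed can change the reported weights, which is exactly the kind of inaccuracy that external probing invites.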
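To illustrate the univariate point, here is a small sketch, using synthetic data and scikit-learn, of a PI-style one-feature-at-a-time analysis applied to two highly correlated features. The data, model choice, and variable names are assumptions made for illustration, not anything from a production system.

```python
# A sketch of why one-feature-at-a-time (PI-style) analysis can understate
# importance when features are correlated. Synthetic data for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # x2 is nearly a copy of x1
y = x1 + 0.1 * rng.normal(size=n)     # the signal lives in the shared component

X = np.column_stack([x1, x2])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
baseline = r2_score(y, model.predict(X))

def permutation_drop(col):
    """PI-style score: shuffle one column, measure the loss in R^2."""
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return baseline - r2_score(y, model.predict(Xp))

print("drop from permuting x1 alone:", permutation_drop(0))
print("drop from permuting x2 alone:", permutation_drop(1))

# Shuffling the rows breaks both features' link to y at the same time;
# this joint effect is larger than either single-feature score suggests.
Xp_both = rng.permutation(X, axis=0)
print("drop from permuting both together:", baseline - r2_score(y, model.predict(Xp_both)))
```

Each single-feature score understates what happens when the correlated pair is disturbed together, which is precisely the kind of interaction and correlation effect a univariate analysis misses.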
