Explainable AI or Halting Faulty Models ahead of Disaster

How can machine learning systems be deployed without disaster likely ensuing? Explainable artificial intelligence, or XAI, seeks to answer this question.

It aims to provide explanations that are understandable for humans in order to ultimately increase trust in the produced models.

Based on the idea that humans generalize well even in unknown situations, a user can check such an explanation and see whether the model uses the right cues to infer its decision.

If it does, it will probably also classify similar, albeit unknown, situations correctly, which creates trust.

A novel addition to this new field of research, called anchors, seems particularly promising (find the paper, "Anchors: High-Precision Model-Agnostic Explanations", here).

It was proposed by the authors of the LIME approach and patches several shortcomings of its predecessors.

Anchors provides local explanations for any black-box classifier, regardless of the underlying technology and algorithm.

Each explanation is valid for a single selected prediction and takes the form of an IF-THEN rule, each of whose predicates fixes one feature of the input instance. A rule for the Titanic data might, for example, read: IF sex = female AND class = first THEN PREDICT survived.

Therefore, each result provides clear coverage, i.e., it states for which other instances it is valid and with which probability.

This article examines two different models that were trained on the Titanic training dataset.

The anchors algorithm is then used to explain why, and based on which features, these black boxes predict a passenger's survival or death.

Before going into detail with the tutorial's implementation, anchors' mode of operation and its components are briefly outlined: at its core, the algorithm deploys a perturbation-based strategy.

That means that the observed or explained instance gets perturbed, i.e., its feature values are changed according to some application-specific policy.

The resulting data instances resemble neighbors of the initial instance.

This way, feature importances and contributions can be determined systematically by evaluating the perturbed instances with the model.
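As a rough illustration of such a perturbation step, the following is a minimal sketch assuming a tabular instance represented as a plain Object[] and a policy that replaces randomly chosen feature values with values drawn from the training data; the names and the policy are illustrative and not taken from anchorj.

```java
import java.util.List;
import java.util.Random;

public class PerturbationSketch {

    private static final Random RANDOM = new Random();

    /**
     * Creates a neighbor of the explained instance by replacing each feature
     * value, with the given probability, by a value sampled from the training data.
     */
    static Object[] perturb(Object[] instance, List<Object[]> trainingData, double perturbProbability) {
        Object[] neighbor = instance.clone();
        for (int feature = 0; feature < neighbor.length; feature++) {
            if (RANDOM.nextDouble() < perturbProbability) {
                Object[] randomRow = trainingData.get(RANDOM.nextInt(trainingData.size()));
                neighbor[feature] = randomRow[feature];
            }
        }
        return neighbor;
    }
}
```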

Understandably, querying the model this often is expensive.

Also, there can be no exhaustive search in the case of continuous or sufficiently complex models.

Reinforcement learning and its multi-armed bandits (MABs) provide a solution to this problem.

They help to significantly reduce the number of samples required by using stochastic exploration approaches.
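To give an intuition for the statistical idea (the paper relies on a KL-LUCB-style bandit algorithm; the following is only a simplified, illustrative sketch with hypothetical names), a candidate rule's precision can be estimated from perturbed samples, stopping as soon as a Hoeffding-style confidence bound around the estimate is tight enough:

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class PrecisionEstimationSketch {

    /**
     * Estimates a candidate rule's precision by repeatedly drawing perturbed
     * instances that satisfy the rule and checking whether the model still
     * predicts the rule's label. Sampling stops once a Hoeffding-style
     * confidence bound around the estimate drops below epsilon.
     */
    static double estimatePrecision(Supplier<Object[]> sampleSatisfyingRule,
                                    Function<Object[], Integer> model,
                                    int ruleLabel,
                                    double epsilon,
                                    double delta,
                                    int maxSamples) {
        int matches = 0;
        int samples = 0;
        while (samples < maxSamples) {
            Object[] perturbed = sampleSatisfyingRule.get();
            if (model.apply(perturbed) == ruleLabel) {
                matches++;
            }
            samples++;
            // With probability 1 - delta, the true precision lies within
            // +/- bound of the running estimate (Hoeffding's inequality).
            double bound = Math.sqrt(Math.log(2.0 / delta) / (2.0 * samples));
            if (bound < epsilon) {
                break;
            }
        }
        return samples == 0 ? 0.0 : (double) matches / samples;
    }
}
```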

The above process, algorithmic improvements and several add-ons are implemented by our newly released anchorj Java application.

anchorj was designed as a high-performance alternative to the original authors' proof-of-concept (see here) and can be found on GitHub.

It constitutes the first open-source Java anchors implementation and is licensed under the BSD 3-Clause License.

Thus, it can be used freely with minimal restrictions.

Open collaboration and discussions on this open-source GitHub project are more than welcome.

We produce explanations for two data instances from the training set that exhibit distinct attributes and whose explanations can easily be comprehended by users.

Humans can most probably make an educated guess about why the Countess of Rothes survived the Titanic disaster, while third-class passenger Mr. Dooley did not (see here for more info).

Using this knowledge, users can validate the models when given explanations of a model's functioning.

The provided .csv files containing all data instances are easily loaded by using our implementation.
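For readers who want to see what such loading boils down to, here is a generic sketch in plain Java, assuming a comma-separated file with a header row; it is not the anchorj loading API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class CsvLoadingSketch {

    /** Reads all data rows (skipping the header) from a simple comma-separated file. */
    static List<String[]> loadInstances(String path) throws IOException {
        return Files.lines(Paths.get(path))
                .skip(1)                          // skip the header row
                .map(line -> line.split(","))     // naive split; real CSVs may need quoting support
                .collect(Collectors.toList());
    }
}
```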

Further, models are used that are able to predict outcomes for these instances.

These will later be explained by using anchorj.

One is a pre-trained GBM imported from H2O/R and the other is a random forest.

However, it really does not matter which kind of model is used.

Anchors is able to deal with any type of model, as long as its predict/classify function is accessible.

Using anchorj is as simple as defining the model and the instance to be explained.

All other parameters can be configured optionally.
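As a minimal, self-contained sketch of what this model-agnostic setup looks like conceptually (the stand-in model, the feature layout, and the variable names are our own illustrations, not anchorj's actual API, which is documented on GitHub):

```java
import java.util.function.Function;

public class ModelAgnosticSketch {

    public static void main(String[] args) {
        // A trivial stand-in model: predicts survival (1) for female passengers, death (0) otherwise.
        // Any real model (GBM, random forest, ...) can be wrapped the same way,
        // since only its predict/classify function needs to be accessible.
        Function<Object[], Integer> classify =
                instance -> "female".equals(instance[1]) ? 1 : 0;

        // The instance to be explained: class, sex, age (a purely illustrative feature layout).
        Object[] passenger = {"3rd", "male", 32.0};

        System.out.println("Predicted label: " + classify.apply(passenger));
        // An anchors implementation such as anchorj would now repeatedly call `classify`
        // on perturbed variants of `passenger` to construct an explaining rule.
    }
}
```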

The results are visualized below (anchor rules for the imported model and for the random forest).

Explaining the explanation: a result's first part consists of its predicates, i.e., the conditions, and the prediction, specifying for which instances it is valid.

After this, the precision and coverage are stated.

In our case, coverage refers to the share of entries the rule holds for.

The last rule, for example, covers 33% of the instances, meaning 33% are male, third-class and so forth.

In these cases, the result is 100% precise, meaning for these passengers the prediction is the same with a 100% probability.
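To make these two measures concrete, the following sketch computes coverage and precision of a rule over a dataset, with the rule represented simply as a predicate; the names are illustrative and not part of anchorj.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class CoveragePrecisionSketch {

    /** Coverage: the share of all instances the rule's predicates apply to. */
    static double coverage(List<Object[]> data, Predicate<Object[]> rule) {
        long covered = data.stream().filter(rule).count();
        return (double) covered / data.size();
    }

    /** Precision: the share of covered instances for which the model predicts the rule's label. */
    static double precision(List<Object[]> data, Predicate<Object[]> rule,
                            Function<Object[], Integer> model, int ruleLabel) {
        List<Object[]> covered = data.stream().filter(rule).collect(Collectors.toList());
        if (covered.isEmpty()) {
            return 0.0;
        }
        long matching = covered.stream().filter(x -> model.apply(x) == ruleLabel).count();
        return (double) matching / covered.size();
    }
}
```

For the last rule above, coverage would evaluate to roughly 0.33 and precision to 1.0.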

These results show that both models have learned mostly correct associations.

Both models come to their decisions by “thinking” that female first-class passengers likely survive, while male passengers do not.

However, the imported model takes into account more specific features that are probably harder to generalize.

In combination with the low accuracy and coverage, this points to a faulty model which probably generalizes poorly.

On the contrary, the random forest takes features into account that we would actually expect to be present in an explanation.

Its explanations also exhibit a high coverage, indicating it has learned generally valid relationships.

Nonetheless, both models fail to take the passengers' names into account.

We would expect longer, aristocratic names to be an indicator of survival.

The knowledge we gained from these explanations could be used to amend the random forest model's training process until, ultimately, the explanations are satisfactory and build trust.

Anchors and anchorj aim to make machine learning productively viable by providing the means to detect faulty models before they are deployed.

They close the gap between opportunities created by machine learning technology and the associated risks.

In our example, we have shown how a specific model can be validated or refuted by including humans into the loop.

Anchors' application is not limited to this type of problem.

It can, for example, be used by global explainers to explain a larger part of the model.

Such algorithms and various other features are included in our implementation.

See you on GitHub!

Bio: Tobias Goerke is an IT-Consultant and XAI researcher at the viadee Consulting AG, Germany. He recently finished his M.Sc. in Information Systems.
