Why the ‘why way’ is the right way to restore trust in AI

Explaining explainability can be challenging.

Those steps are necessary, but far from enough. Some decision trees might just as well be a forest (picture courtesy Alain Briançon).

Data scientists’ designs must be built in terms of business impact, not model impact.

If you can’t put a dollar or market-impact measurement on your results, you have not done enough for quality and explainability.
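As a minimal sketch of what a dollar measurement could look like, the function below turns a targeting model's confusion-matrix counts into a net dollar figure. The function name, the cost parameters, and the example numbers are all illustrative assumptions, not figures from the article.

```python
# Hypothetical illustration: translate a model's confusion-matrix counts into
# a dollar figure, so results are reported as business impact, not model impact.

def dollar_impact(true_pos, false_pos, false_neg,
                  value_per_win, cost_per_contact, cost_per_miss):
    """Net dollar impact of a targeting model's decisions (assumed cost model)."""
    revenue = true_pos * value_per_win                   # customers correctly targeted
    wasted = (true_pos + false_pos) * cost_per_contact   # every contact costs money
    missed = false_neg * cost_per_miss                   # opportunity cost of misses
    return revenue - wasted - missed

# Example with assumed numbers: 120 wins worth $500 each, 500 contacts at $3
# each, and 40 missed customers costing $50 each in lost goodwill.
impact = dollar_impact(120, 380, 40, 500.0, 3.0, 50.0)  # 60000 - 1500 - 2000
```

The point of the sketch is that every term is in dollars, so the same report reads identically to a data scientist and a P&L owner.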

Explainability must be designed as a quality metric AND an audit method.

Model quality is important.

Every model deployed should be measured against more than a dozen technical performance metrics for model quality.

The impact of missing data, data-source quality, expected or affected customers, and the usage of variables and features, along with the lifecycle of deployment and the valuation of impact, should all be part of the design from the get-go.
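The passage above can be made concrete with a small, stdlib-only sketch that computes a handful of quality metrics for a deployed classifier, including the share of missing inputs. The specific metrics chosen and the helper name `quality_report` are assumptions for illustration; a production system would track many more.

```python
# Illustrative sketch: a per-model quality report combining technical metrics
# with a data-quality signal (missing-input rate), attached at deployment time.

def quality_report(y_true, y_pred, n_missing_inputs, n_inputs):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "missing_input_rate": n_missing_inputs / n_inputs,  # data-quality signal
    }

# Toy evaluation: 6 labeled outcomes, 3 of 60 input fields missing.
report = quality_report([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0],
                        n_missing_inputs=3, n_inputs=60)
```

Keeping the data-quality numbers in the same report as the accuracy numbers is what lets an auditor later ask why a model shipped despite, say, a high missing-input rate.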

While Subject Matter Expert reviews are the first line of defense for ensuring the rationale behind AI-powered decisions is sound, the explanation must live outside the language of AI geeks.

Explaining is quality control on the Cerebri Values CX platform (picture courtesy Alain Briançon).

A sound level of understanding needs to be adapted to CEOs, IT, SMEs, and, of course, customers.

For CEOs and CTOs, understanding the impact of missing and imputed data, of protected and private attributes, and of the balance of multiple KPIs should be integral to the rollout of AI across their businesses.

Care needs to be taken to ensure model training is unbiased and fair (whether it is supervised, unsupervised, or uses reinforcement learning) and avoids forcing a fit to preconceived behaviors.

Large-scale organizations often gather customer data in one “customer journey per function”, and then analyze these journeys for insights that drive engagement and financial results.

However, true customer journeys cut across sales, marketing, support, and all other functions that touch the customer.

That means that any AI-driven decisions impact multiple departments and P&L centers.

Goodwill within an organization is important as well.

Robbing Q4 product sales to boost Q3 service sales might be acceptable if everyone knows about it.

Audit means inspection and traceability of decisions.

It implies a user-friendly interface integrated with normal AI operation.

Designing for explainability can require trading away some performance.

Data scientists must strive for best-in-class design, then pull back performance just enough to provide explainability.

Focusing on explainability as a quality metric has additional benefits that compensate for potential performance issues, especially when dealing with systems that leverage customer journeys, in contrast with factor-based or demographic-based systems, which only look at static variables.

Explaining is inherently a causal interaction.

New techniques are emerging to deal with causality, which in turn improve the performance of models based on customer journeys.

They include Shapley analysis, do-calculus, interventional logic, counterfactual analysis, Granger causality, and graph inference.

These techniques can be used for feature engineering and can improve modeling significantly.
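Of the techniques listed, Shapley analysis is the easiest to sketch end to end. The brute-force implementation below computes exact Shapley values by enumerating feature coalitions; the coalition game, the feature names ("recency", "frequency", "spend"), and the synergy bonus are toy assumptions. Real systems approximate this (for example, via the SHAP library) because exact enumeration is exponential in the number of features.

```python
# Exact Shapley-value attribution by brute force over feature coalitions.
# Toy sketch: the value function and feature names below are assumptions.
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley value of each feature under coalition value function value_fn."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
    return phi

# Toy coalition game: each feature contributes a fixed amount, plus a synergy
# bonus when "recency" and "frequency" appear together (numbers are assumed).
base = {"recency": 4.0, "frequency": 2.0, "spend": 1.0}

def v(coalition):
    total = sum(base[f] for f in coalition)
    if {"recency", "frequency"} <= coalition:
        total += 2.0  # synergy, split equally between the pair by symmetry
    return total

phi = shapley_values(list(base), v)
```

By the efficiency property, the attributions sum exactly to the value of the full coalition, which is what makes Shapley analysis attractive for explaining a score to a non-technical stakeholder: every point of the prediction is accounted for.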

There are significant benefits in building explainability and interpretability into an AI system.

Alongside helping to address business pressures, adopting good practices around accountability and ethics improves confidence in AI, thus hastening the deployment of CLS applications.

An enterprise will be in a stronger position to foster innovation and move ahead of its competitors in developing and adopting new AI-driven capabilities and insights.

For AI to be adopted thoroughly, the backlash against the obvious abuses of privacy by social media (in the Western world) and the ‘slap happy’ approach to data security must be worked out.

To succeed in the long term, AI must be impact/outcome centric.

That means stakeholder-explanation centric.

Above all, AI must be customer-centric, and that means explanation embedded from the beginning.

“Why? Because I am your customer, that is why.”

Bio: Dr. Alain Briançon, Chief Technology Officer and VP Data Science of Cerebri AI, is a serial entrepreneur and inventor (over 250 patents worldwide) with vast experience in data science, enterprise software, and the mobile space.

