Artificial intelligence (AI) in banking: The double-edged sword

Defining AI could be another can of worms.

At one end, we have the popular definition of AI as machines that reflect human intelligence; while this definition is correct, it is hardly a practical explanation.

At the other end, we have technical explanations where AI is used as an umbrella term for a combination of techniques such as machine learning (ML), deep learning (DL), statistics, mathematics, and other advanced analytical techniques.

The industry is in dire need of a workable explanation and classification of AI, one that would improve the quality of discussions around AI and its associated risks.

The Defense Advanced Research Projects Agency (DARPA), the standard-bearer for cutting-edge innovation in AI, has come up with a novel way of explaining AI systems by their ability to process information.

DARPA defines AI as the “programmed ability to process information”.

This notional intelligence to process information is measured across four categories: perceiving, learning, abstracting, and reasoning.

The evolution of AI is explained in three waves: handcrafted knowledge, statistical learning, and contextual adaptation. This framework can be applied to understand the use of AI in the financial industry.

Wave 1: Handcrafted Knowledge

Handcrafted knowledge models are rule-based AI models where humans define the structure of knowledge and machines explore the specifics.

Rule-based engines are rudimentary AI applications that enable reasoning over ‘narrowly defined problems’, but they are nonetheless AI as per DARPA.

Take, for example, the way banks calculate Basel II regulatory capital, where a pre-determined capital formula is applied to a set of pre-defined products.

If there is new information such as a new product then the rule needs to be updated.
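To make the idea concrete, here is a minimal sketch of such a rule-based engine in Python. The product categories, risk weights, and flat 8% ratio are simplified illustrations in the spirit of the Basel II standardised approach, not a faithful implementation of any bank's methodology.

```python
# A minimal rule-based "first wave" engine: humans hand-craft the rules,
# the machine merely applies them. Risk weights are illustrative only.
RISK_WEIGHTS = {
    "sovereign_bond": 0.00,
    "residential_mortgage": 0.35,
    "corporate_loan": 1.00,
}
CAPITAL_RATIO = 0.08  # the flat 8% minimum capital ratio

def regulatory_capital(product: str, exposure: float) -> float:
    """Capital = exposure x risk weight x 8%, per the hand-crafted rule set."""
    if product not in RISK_WEIGHTS:
        # A new product has no rule; the engine cannot learn one,
        # so a human must update the rule set.
        raise KeyError(f"No rule defined for product: {product}")
    return exposure * RISK_WEIGHTS[product] * CAPITAL_RATIO

print(regulatory_capital("residential_mortgage", 1_000_000))  # 28000.0
```

The failure on an unknown product is the point: the engine does exactly what was hand-crafted and nothing more, so every new product requires a human to update the rules.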

These AI models do not have any learning capability and handle uncertainty very poorly.

These first wave AI models are still relevant in emerging and accelerating risks such as cybersecurity.

Recently, a system called ‘Mayhem’, based on first wave principles, solved a decade-old security challenge in DARPA’s Cyber Grand Challenge (CGC).

Banks have handcrafted knowledge AI models in abundance, but very few recognise them as models, let alone as AI.

They are usually considered ‘deterministic tools’ or ‘software’.

Governance and risk management of these models are not explicit; they are usually handled under the umbrella of technology risk.

The inherent risk of first wave AI models, as explained by their ability to handle uncertainty, is very high.

However, these models exist in a highly controlled environment as they do not cope very well with the dynamics of the natural world.

The realised risk of first wave AI models can be very low, as they are applied to narrowly defined problems, which reduces uncertainty to begin with.

Nonetheless, model usage can pose catastrophic risks if these models are making material decisions.

(Image courtesy: DARPA)

Wave 2: Statistical Learning

Statistical learning models are capable of learning within a ‘defined problem domain’.

For example, the domain could be language processing or visual pattern recognition.

The complexity of the problem is represented by data.

Richer data will yield more information, requiring more complex (non-linear) learning algorithms to represent it.

For example, if you have 10 pictures of dogs you can develop a simple algorithm that learns from 10 pictures and identifies a dog with reasonable accuracy.

However, if you have a thousand pictures with different breeds of dog you can develop a much more complex model, which can identify not just the dog but the breed as well.
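As a rough sketch of that contrast, the toy example below uses scikit-learn on synthetic ‘image feature’ vectors (real pictures would be overkill here): a simple linear model suffices for the small dog/not-dog problem, while a richer multi-breed dataset supports a more complex, non-linear model. All names and numbers are illustrative assumptions.

```python
# Illustrative only: synthetic feature vectors stand in for real dog pictures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Small, simple problem: 10 pictures, dog vs not-dog -> a simple linear model.
X_small = rng.normal(size=(10, 5))      # 10 samples, 5 hand-picked features
y_small = np.array([0, 1] * 5)          # 0 = not a dog, 1 = dog
simple_model = LogisticRegression().fit(X_small, y_small)

# Bigger, richer problem: 1,000 pictures across 5 breeds -> a more complex,
# non-linear model can represent the extra information in the data.
X_big = rng.normal(size=(1000, 5))
y_big = rng.integers(0, 5, size=1000)   # breed labels 0..4
complex_model = RandomForestClassifier(n_estimators=100).fit(X_big, y_big)

print(simple_model.predict(X_small[:1]))   # dog or not
print(complex_model.predict(X_big[:1]))    # which breed
```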

Statistical learning AI models possess nuanced classification and prediction capabilities but have minimal reasoning ability and no contextual capability.

The financial industry is rife with statistical learning models.

Credit rating models that determine the probability of an individual or a company defaulting, and anti-money laundering models that estimate the propensity for money laundering, are just two examples.

Most of these existing statistical learning AI models are simpler representations of the real world, following Occam’s razor.
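In that Occam’s razor spirit, a typical probability-of-default (PD) model can be sketched as a plain logistic regression over a handful of borrower features. The features and synthetic data below are hypothetical, chosen only to illustrate the style of model, not any bank’s actual methodology.

```python
# A hypothetical, minimal probability-of-default (PD) model in the
# Occam's razor spirit: few features, a simple logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Synthetic borrower features (all assumptions, not real bank data):
X = np.column_stack([
    rng.normal(60, 15, n),      # income, in thousands
    rng.uniform(0.0, 0.6, n),   # debt-to-income ratio
    rng.integers(0, 30, n),     # years of credit history
])
# Synthetic labels: a higher debt-to-income ratio drives default here.
y = (X[:, 1] + rng.normal(0, 0.1, n) > 0.4).astype(int)

pd_model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[45.0, 0.5, 3]])          # one new applicant
print(pd_model.predict_proba(applicant)[0, 1])  # estimated probability of default
```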

Some of the newer models, used in modelling conduct, fraud, etc., are beginning to exploit big data and complex concepts such as Natural Language Processing (NLP).

The inherent risk of second wave models, as explained by their ability to handle uncertainty, is high.

They cope reasonably well with uncertainty but are dependent on data that represents that uncertainty.

More complex algorithms, which use big data, are statistically impressive but individually unreliable.

They are prone to inherent biases that can be exploited.

Any autonomy granted to these models needs to be monitored and governed, as maladaptation and unwanted behaviour are possible.

Overall, the realised risk of these AI models can be very high as the uncertainty of the problem domain can vary by large margins.

As with the first wave, the use of statistical learning models can pose catastrophic risks depending on the materiality of the decisions they make.

(Image courtesy: DARPA)

Wave 3: Contextual Adaptation

Contextual adaptation AI models explain their decisions; they are systems that construct contextual explanatory models of real-world phenomena.

The key here is explainability and automation.

Generative models create explanations that provide context to the decisions and probabilities.

This is cutting edge research under the banner of Explainable Artificial Intelligence (XAI).

This is important in dealing with complex decisions such as ones involving ethical dilemmas.

Contextual adaptation is critical in reducing model risk and, ultimately, decision risk, as most decisions are expected to be automated in the future.

The inherent and realised risks of the third wave AI models are very high due to the immaturity of the field.

While explainability is important for understanding AI decisions, it is absolutely critical for managing their risks.
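Full third wave explanations remain research, but a first approximation can be sketched with today’s tools. The example below uses scikit-learn’s permutation importance to rank which (hypothetical) features drove a model’s decisions; it is a crude stand-in for XAI, not the generative, context-building explanations DARPA describes.

```python
# A crude explainability sketch: rank which features drive a model's
# decisions via permutation importance. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 400
feature_names = ["income", "debt_to_income", "years_history"]

X = np.column_stack([
    rng.normal(60, 15, n),
    rng.uniform(0.0, 0.6, n),
    rng.integers(0, 30, n),
])
y = (X[:, 1] > 0.4).astype(int)  # by construction, only debt_to_income matters

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The ranking should expose debt_to_income as the real decision driver.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Rankings like this answer ‘what mattered’, not ‘why’; closing that gap is precisely what the third wave targets.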

(Image courtesy: DARPA)

In summary…

The financial industry is already using advanced techniques discussed in the first and second waves of AI models to make decisions or to inform decisions.

Any concerns raised about the use of AI in banking need to be measured and specific.

A blanket call on the risks posed by AI models overlooks the fact that banks have already built significant capabilities to handle most of the risks discussed under the first and second waves.

Regulation needs to catch up, and the pace of policy-setting and guidance needs to quicken, to alleviate any inertia in implementing AI.

There is a considerable risk of either hindering the progress of AI use in banking or failing to appropriately recognise the risks that AI models bring to it.

The AI governance efforts need to start with the appropriate identification and classification of AI models.

The DARPA method discussed here may not be the final solution, but it is certainly a starting point.

Finally, AI is meant to reflect human intelligence.

The more scared we are of it, the scarier it looks.

It is time to approach AI risks head-on and promote the use of AI in the banking industry.
