Model Risk Management in the Age of AI

77% of financial services industry professionals expect AI/ML to be extremely important to their business by 2022, while only 16% currently employ AI/ML models.

Clearly, financial services organizations have strong incentives to take advantage of AI and ML capabilities, and yet models still aren't being deployed, which points to a bottleneck in the model deployment process.

Could it be that they're focusing too much on development and ignoring the criticality of ModelOps? Model validation is required across all regulated industries, but FinServ institutions in particular face significant regulatory compliance mandates from the federal government, placing yet another roadblock on their path to AI success.

Because these same institutions rely on thousands of models every day, they typically must staff large teams across their model risk management programs, including dedicated groups of model validators.

The Criticality of ModelOps

ModelOps refers to the process of enabling data scientists, data engineers, and IT operations teams to collaborate and scale models across an organization.

This drives business value by getting models into production faster and with greater visibility, accountability and control.

Most of the resources financial services organizations spend on AI initiatives specifically support model development.

This isn’t to say organizations are completely ignoring ModelOps.

Instead, they appear to be treating ModelOps as an afterthought rather than a continuous cycle of deployment, governance and monitoring that serves as the keystone of the AI/ML model lifecycle.

It’s true that AI/ML models are only as good as the data and simulations used to develop them.

However, this has driven many enterprises to prioritize DataOps before taking ModelOps capabilities into consideration.

Organizations that serialize their AI investments, focusing strictly on DataOps before engaging ModelOps capabilities, will lose time and ground to competitors in the AI race.

They also miss important opportunities for AI/ML initiatives to significantly inform and continuously improve DataOps.

Enterprises should recognize that AI/ML processes will be both a consumer and a producer of data in the long run.

Allow AI/ML investments to drive models into the business immediately while providing continuous feedback to the DataOps process.

Model Validation Challenges

Model validation is nothing new for FinServ organizations, which must ensure their predictive models adhere to an ever-expanding variety of consumer protection and recession-proof regulatory acts such as CCAR, FCRA, FILA, and others, as well as the model risk management requirements set forth by regulators, such as SR 11-7 in the U.S.

Many financial institutions have poured countless resources into model validation efforts to meet the requirements of regulatory standards.

Thanks to recent advancements in AI and ML technology, and the much-publicized hype surrounding the AI industry over the past several years, the number of AI and ML models and projects is growing rapidly.

Unfortunately, this also means AI/ML models require more frequent updates, which necessitates additional model validation.

To compound the issue, the complexity of models produced by black-box AI programs creates significant interpretability challenges for organizations that wish to achieve explainable AI.

This can lead to model bias if organizations do not understand why their models are making certain predictions.

Banks and financial enterprises are under increasing pressure to reduce operational expenses (OPEX), especially given the current state of the global market.

That said, banks cannot take shortcuts when it comes to model validation.

Given the mounting media focus on AI bias, it's also critical to ensure that models are evaluated from an ethical fairness perspective.

Automating Model Validation to Reduce OPEX

Due to the strict regulatory requirements surrounding FinServ model validation, 100% automation is not feasible; human validation will still be required to ensure models conform to regulations and standards.

That said, automating facets of the model validation process provides a host of benefits, provided the business has a highly functioning ModelOps team that enables a seamless and efficient handoff of models from first-line (development) to second-line (validation) teams.
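As a minimal sketch of what such partial automation might look like, the Python snippet below runs a few repeatable pre-validation checks before a model is handed to second-line reviewers. The metric names, thresholds, and the run_automated_checks helper are illustrative assumptions, not a prescribed workflow or a specific vendor API.

```python
# Illustrative sketch: automated pre-validation checks that run before a model
# is handed from the first-line (development) team to second-line validators.
# All thresholds, metric names, and data structures here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ValidationReport:
    model_id: str
    checks: dict = field(default_factory=dict)

    @property
    def passed(self) -> bool:
        return all(self.checks.values())

def run_automated_checks(model_id: str, metrics: dict) -> ValidationReport:
    """Apply simple, repeatable checks; anything that fails is escalated
    to a human validator rather than being auto-approved."""
    report = ValidationReport(model_id)
    # Statistical performance gate (e.g., AUC on a holdout set).
    report.checks["min_auc"] = metrics.get("auc", 0.0) >= 0.70
    # Stability gate (e.g., population stability index against training data).
    report.checks["max_psi"] = metrics.get("psi", 1.0) <= 0.10
    # Fairness gate (e.g., demographic parity difference between groups).
    report.checks["fairness"] = abs(metrics.get("parity_diff", 1.0)) <= 0.05
    return report

if __name__ == "__main__":
    report = run_automated_checks(
        "credit_risk_v42", {"auc": 0.78, "psi": 0.04, "parity_diff": 0.02}
    )
    # Only models that clear every automated gate proceed straight to
    # second-line review; failures are flagged for deeper human validation.
    print(report.model_id, "ready for second-line review:", report.passed)
```

The point of a gate like this is not to replace human validators but to let them spend their time on the models and checks that actually need judgment.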

For many FinServ organizations, models guide hundreds of decisions and automate both human and machine-driven operations.

A single business unit may deploy dozens of models simultaneously.

Model development teams, often referred to as “model factories,” use an array of data science tools and techniques to generate and update each model.

This strategy of AI operations is difficult to scale to dozens, much less hundreds, of simultaneously deployed models, which necessitates automating model validation to drive AI at scale.

Once a model has been deployed, it must be continuously monitored.

Unlike conventional software, models decay over time, so performance must be tracked along three key dimensions: statistical, technical, and business.

If any of the model’s metrics fall outside pre-set goals and parameters, the best practice is to automate the model updating and approval process.

This allows a newly optimized version of the model to quickly return to production for further monitoring and assessment.
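A minimal sketch of this kind of threshold-based monitoring is shown below; the specific metrics, limits, and the retraining trigger are illustrative assumptions rather than a reference implementation.

```python
# Illustrative sketch: threshold-based monitoring across statistical,
# technical, and business metrics. Names and limits are hypothetical.
THRESHOLDS = {
    "auc": ("min", 0.70),            # statistical: predictive power
    "latency_ms_p95": ("max", 200),  # technical: serving performance
    "approval_rate": ("min", 0.30),  # business: downstream outcome
}

def check_model_health(observed: dict) -> list[str]:
    """Return the list of metrics that have drifted outside their limits."""
    breaches = []
    for metric, (direction, limit) in THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            breaches.append(f"{metric}: missing")
        elif direction == "min" and value < limit:
            breaches.append(f"{metric}: {value} < {limit}")
        elif direction == "max" and value > limit:
            breaches.append(f"{metric}: {value} > {limit}")
    return breaches

def monitor(model_id: str, observed: dict) -> None:
    breaches = check_model_health(observed)
    if breaches:
        # In practice this would kick off the automated retrain-and-approve
        # pipeline; here we simply report the breach.
        print(f"{model_id}: retraining triggered ->", breaches)
    else:
        print(f"{model_id}: healthy")

monitor("credit_risk_v42",
        {"auc": 0.66, "latency_ms_p95": 140, "approval_rate": 0.35})
```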

With automatic updates to model metadata, management can reproduce the model at any point in its life cycle.

This is particularly important in industries or functional areas that require clear model interpretability and rigorous adherence to regulations, such as the financial services industry.
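As an illustration of the kind of metadata that supports this, the sketch below appends a versioned record (training-data hash, hyperparameters, validation metrics) to a simple registry file; the field names and registry format are assumptions, not a specific ModelOp schema.

```python
# Illustrative sketch: recording versioned model metadata so that any point in
# the model's life cycle can be reproduced and audited. Field names are
# assumptions, not a particular vendor's schema.
import hashlib
import json
from datetime import datetime, timezone

def record_model_version(model_id: str, training_data: bytes,
                         hyperparams: dict, metrics: dict,
                         registry_path: str = "model_registry.jsonl") -> dict:
    """Append an immutable metadata record for one model version."""
    entry = {
        "model_id": model_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "hyperparameters": hyperparams,
        "validation_metrics": metrics,
    }
    with open(registry_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_model_version(
    "credit_risk_v42",
    training_data=b"...raw training extract bytes...",
    hyperparams={"max_depth": 6, "learning_rate": 0.1},
    metrics={"auc": 0.78, "psi": 0.04},
)
```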

Partially automating model validation enables enterprises to do more with less when it comes to the OPEX surrounding ModelOps.

With so many opportunities to streamline, it's easy to see why institutions would do well to adopt automated model validation.

Companies looking into automating the model validation process should engage a ModelOps expert to guide them in the process.

Most enterprises should start small by applying automation to the most repetitive functions of model development, beginning with the 30-40% of models that are least complicated.

From there they can work with the ModelOps expert to build out automated capabilities that are optimized for their business goals and operations.

Analytical models in FinServ will only continue to grow in number and complexity.

Automating validation of the least complex models frees existing staff to concentrate on the most complex (and thus most error-prone) models that require their full attention.

FinServ enterprises can’t make model validation an afterthought in their AI program and strategy.

Development and tooling are crucial to taking advantage of AI and ML capabilities, but they are only a small portion of the processes required to get AI models into the business.

About the Author

Stu Bailey is the Co-Founder and Chief AI Architect of ModelOp.

He is a technologist and entrepreneur who has been focused on analytic and data intensive distributed systems for over two decades.

Stu is the founder and most recently Chief Scientist of Infoblox (NYSE:BLOX).

While bringing successful products to market over the years, Stu has received several patents and awards and has helped drive emerging standards in analytics and distributed systems control.

During his six years as technical lead for the National Center for Data Mining, Stu pioneered some of the very first analytic applications to utilize model interchange formats.
