
What is Model Risk Management (and why it’s important)

January 21, 2021

"When the last financial crisis hit in 2008, Model Risk Management (SR11-7) guidelines did not exist. When COVID hit in 2020, (the banking industry) got the benefit of MRM." This is one of the many insights Agus Sudjianto, EVP Corporate Model Risk at Wells Fargo shared with the audience at the recent Global Artificial Intelligence Virtual Conference as part of the Model Risk Management Round Table along with Professor Nikola Gradojevic (University of Guelph), Josh Bottum (Arrikto), David Van Bruwaene (Fairly AI), Stuart Maiden (Fairly AI).



AI has a significant hold over today’s society in a way that affects every one of us, not just the banking industry. Yet many AI models are not nearly as refined as they could be. When AI controls everything from hiring processes to advertising to manufacturing, one would expect it to be held to a higher standard. Model risk management (MRM) is how that standard is met, and the process by which changes are made.


At its core, any AI can be boiled down to a process. Google wants to show users the search results that satisfy them and keep them engaged with the platform. A Tesla wants to get someone from point A to point B safely while abiding by traffic laws. AI in video games wants to simulate a believable player. There is always a desired result and an actual result. Before any changes to a model take place, actual results are compared against expected results. The goal is for the AI to produce correct results at a given frequency and to a given accuracy. If these thresholds are breached, a change may need to be made to the model to maintain its integrity.
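To make this concrete, here is a minimal sketch of such a threshold check, comparing actual results against expected results and flagging the model for review on a breach. The 95% threshold and all names here are illustrative, not prescriptive.

    def accuracy(predictions, actuals):
        # Fraction of predictions that match the observed outcomes.
        correct = sum(p == a for p, a in zip(predictions, actuals))
        return correct / len(predictions)

    def needs_review(predictions, actuals, threshold=0.95):
        # True when measured accuracy has fallen below the agreed threshold.
        return accuracy(predictions, actuals) < threshold

    # Example: 3 of 4 recent predictions were correct -> 75% < 95%, so flag it.
    if needs_review([1, 0, 1, 1], [1, 0, 0, 1]):
        print("Accuracy threshold breached: escalate for model review.")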


At this point, both the data and the model are analyzed. Are outliers causing odd results? Are there external factors? Is the model itself biased or inefficient? Taking the example of a self-driving car, there are many reasons it may fail at its task or perform poorly. Perhaps it was simply manufactured poorly. Perhaps the AI is making poor decisions. Or perhaps every time the model was tested, there was bad weather or severe traffic on the road, and this was reflected in the data. Whatever the case, these questions are investigated and answered. Finally, changes are made, external factors are accounted for, and the model is tested again. This cycle repeats until the desired accuracy is met.
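One simple diagnostic at this stage is an outlier screen on the inputs. The sketch below uses a basic z-score rule; the 2.5-standard-deviation cutoff is a common but arbitrary choice, and the readings are made up for illustration.

    from statistics import mean, stdev

    def flag_outliers(values, z_cutoff=2.5):
        # Return values more than z_cutoff standard deviations from the
        # mean, e.g., sensor readings gathered in bad weather that may be
        # skewing the model's observed performance.
        mu, sigma = mean(values), stdev(values)
        return [v for v in values if abs(v - mu) / sigma > z_cutoff]

    readings = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.1, 50.0, 112.7]
    print(flag_outliers(readings))  # -> [112.7]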


Three lines of defense


While the above process sounds simple enough, in reality MRM requires a complex network of teams and professionals. The European Commission’s draft regulation on AI requires that high-risk models be independently validated and audited, and the potential harm caused by faulty AI provides enough incentive to strictly monitor every model regardless of imposed regulations. The network of teams responsible for MRM typically looks something like this:


The first line of defense consists of model owners, model users, and model developers. They are the most familiar with the behind-the-scenes processes that allow the AI to work and are typically the first to be contacted by other teams when something goes wrong. Members of the first line may not be experts in current internal policies and external regulations, but they are required to build to those requirements. This may include setting risk thresholds or preventing discrimination by using algorithms that comply with their country’s existing laws, such as the Equal Opportunity Act.


The second line of defense is where an AI model is stress tested to ensure it fits within various thresholds; it typically consists of a Model Validation team, a Model Governance team, and an Oversight committee. Validation is the process of testing a product or system to make sure it does what it is intended to do, that is, that the model is fit for purpose. In the case of an AI model, a variety of inputs are tested to search for any outliers that may be cause for concern, and to double-check that the model’s results line up with expectations. The role of the Model Governance team is to ensure proper governance processes are in place.
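A highly simplified picture of such a stress test: perturb a model’s inputs with random noise and confirm its outputs stay within an agreed tolerance. The noise level, tolerance, and toy model below are all stand-ins that a real validation team would set per model.

    import random

    def stress_test(model, inputs, noise=0.05, tolerance=0.10, trials=100):
        # Perturb each input by up to +/- noise and report the fraction of
        # trials whose output stays within tolerance of the baseline.
        baseline = model(inputs)
        in_bounds = 0
        for _ in range(trials):
            perturbed = [x * (1 + random.uniform(-noise, noise)) for x in inputs]
            if abs(model(perturbed) - baseline) <= tolerance * abs(baseline):
                in_bounds += 1
        return in_bounds / trials

    # A toy linear "model" standing in for the real thing:
    toy_model = lambda xs: 0.4 * xs[0] + 0.6 * xs[1]
    print(stress_test(toy_model, [100.0, 200.0]))  # 1.0 for this robust toy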


The Compliance team also sits in the second line of defense at many financial institutions. While validation ensures that an AI model fulfills its purpose, the Compliance team checks whether it follows government regulations and compliance standards. The logic behind decisions is checked to protect against legal trouble, and by proxy (because most laws are in place to protect the public), this also ensures fairness and non-bias in a model’s logic.
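One concrete fairness check of this kind is the "four-fifths rule" used in US employment and fair-lending practice: the favorable-outcome rate for any group should be at least 80% of the rate for the most-favored group. Below is a minimal sketch; the group labels and decisions are hypothetical.

    def disparate_impact_ratio(outcomes_by_group):
        # Ratio of the lowest group approval rate to the highest; a value
        # below 0.8 is a common red flag under the four-fifths rule.
        rates = {g: sum(d) / len(d) for g, d in outcomes_by_group.items()}
        return min(rates.values()) / max(rates.values())

    decisions = {
        "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 75.0% approved
        "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
    }
    print(disparate_impact_ratio(decisions))  # 0.5 -> below 0.8, investigate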


The third line of defense is Internal Audit. The role of Internal Audit is to ensure expected processes and procedures are followed diligently and accurately. Read more about Internal Audit’s role in auditing AI/ML in our blog post: Who is in charge of auditing AI/ML models? The important gatekeepers of Responsible AI.


Summary


Model risk management is an important process that prevents, detects, and mitigates financial and ethical risks, and keeps the complex decision making of artificial intelligence just and fair. MRM protects businesses by preventing discrimination lawsuits and PR disasters, and protects the public by preventing discrimination based on arbitrary factors such as name or race. Artificial intelligence is an inevitable part of our future. It opens gateways to efficient automation and extremely fast data management, and it has the potential to ramp up the productivity of society to unprecedented levels. It is important that a technology so powerful and integral to our society is kept in check while still in its infancy, lest its risks grow to rival its benefits.


Download Fairly AI’s whitepaper on Model Risk Management

"Would You Risk This Happening to Your Firm?"



About FAIRLY


FAIRLY’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models-to-market. We offer an award-winning AI Governance, Risk and Compliance SaaS solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, thus streamlining AI development, validation, audit and documentation processes across the three lines of defense for financial institutions around the world. Visit us at https://www.fairly.ai or follow us on Twitter and LinkedIn.


Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly AI can help