September 2, 2020 - Kaila Cappello

A Guide to Model Governance for Financial Institutions: Q&A with Kaila Cappello

Financial institutions (FIs) use models to address a variety of business needs. Models translate concepts and data into concrete outputs that inform and streamline management decisions. 

For all their benefits, introducing models into the management structure also introduces a new source of risk. When models are not properly constructed, they cannot effectively fulfill the purpose for which they were intended. To mitigate this risk, financial institutions must implement model governance that ensures data and modeling remain intact, accurate, reliable, and effective. 

Kaila Cappello, Technical Account Manager for DataVisor, shared more on model governance frameworks in a recent interview. Here’s what she had to say regarding the role of model governance for financial institutions.

What Is Model Governance?

All models are created using specific data sets. To ensure the quality and reliability of the data used, FIs are legally required to establish model governance processes. 

Kaila explains:

“Model governance ensures that a model is operating in the way it is intended and evaluates the efficacy and limitations of a model. Through governance processes, FIs can better understand how to interpret and apply the model results to inform their decision-making.” 

In doing so, they minimize the risk that incorrect insights from model output lead to ill-informed decisions.

Use Cases for Model Governance in FIs

Businesses across many industries use models to aid in rapid decision-making. Specifically for financial institutions, these models and the decisions they support may include (but are not limited to):

  • Credit scoring models to predict creditworthiness, defaults, or delinquencies
  • Interest rate risk models
  • Pricing models to determine asset values
  • Credit application approvals
  • Fraudulent transaction identification

For each of these and other use cases, FIs require an evaluation framework that can support the reliability of the results the model generates. This framework also helps to support trustworthy decision-making and reduce the potential negative impacts associated with inaccurate or incomplete model data. 

For example, FIs may be able to minimize losses associated with loans or credit based on credit scoring models. Modeling also helps to promote fair lending and discourages discriminatory lending practices.  

How Model Governance Frameworks Are Developed

FI internal model risk management (MRM) teams are typically the ones charged with creating model governance processes. They review the model complexity, risks associated with the model, and the impact of the model on business processes based on how the model will be used. 

When constructing the framework, MRM teams take into account the model’s end-to-end development, testing, and implementation processes, along with the following factors:

  • Data sources and data quality
  • Model methodology (DataVisor’s unsupervised machine learning (UML) falls into this category)
  • Test plan
  • Performance evaluation criteria
  • Ongoing monitoring to account for data changes or performance shifts
  • Implementation
  • Change control processes

The specific use cases of the model will inform the specific metrics and procedures that need to be met for the model evaluation.
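To make the factors above concrete, they could be captured in a simple model-inventory record that an MRM team maintains per model. The following sketch is purely illustrative; the field names and values are assumptions, not a DataVisor or regulatory schema.

```python
from dataclasses import dataclass, field

# Hypothetical model-inventory record capturing governance factors an
# MRM team might track for each model. All names are illustrative.
@dataclass
class ModelGovernanceRecord:
    model_name: str
    use_case: str                      # e.g. "credit scoring"
    data_sources: list[str]            # data sources and quality notes
    methodology: str                   # e.g. "unsupervised machine learning"
    test_plan: str                     # reference to the test plan document
    monitoring_metrics: list[str]      # metrics tracked in ongoing monitoring
    change_log: list[str] = field(default_factory=list)  # change control history

record = ModelGovernanceRecord(
    model_name="fraud-detection-v1",
    use_case="fraudulent transaction identification",
    data_sources=["core banking transactions", "device fingerprints"],
    methodology="unsupervised machine learning",
    test_plan="TP-2020-014",
    monitoring_metrics=["false positive rate", "detection rate"],
)
print(record.use_case)  # fraudulent transaction identification
```

A record like this keeps each model's use case tied to the metrics and procedures its evaluation must meet.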

How DataVisor’s UML Models Benefit Model Risk Management for FIs

Kaila notes that because unsupervised machine learning is still relatively new to many financial institutions, MRM teams place heavy emphasis on the model methodology. “A lot of FIs still default to supervised machine learning, where you’re training a model on certain sets of data,” Kaila explains. “Our processes with UML and AI are completely different. There’s no training, per se, since we are not using labels as input.”

Because of UML’s unique, untrained approach, FIs can expect a number of benefits from using DataVisor in their model risk management and governance processes:

No sensitivity to outliers or data skews

Traditionally, understanding how a model reacts to outliers or data skews is a major focus of model governance. However, UML isn’t sensitive to these data changes because the data is contained in a dynamic profile that’s observed over time.

An example of this would be the introduction of a new model of iPhone. Many data models might pick up on a new device being used to conduct financial activity and deem it suspicious, but UML would account for this and make it part of the dynamic profile. UML is inherently more flexible and resilient than “trained” models because it leverages global data to consider common values and similarities as opposed to only the values that are currently known.
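The dynamic-profile idea can be sketched in a few lines: instead of flagging any device a single user has never used before, compare the device against what is being observed across the whole population, so a newly released phone that many users adopt at once is not treated as suspicious. Everything below (function names, the 1% threshold) is an invented illustration, not DataVisor's implementation.

```python
from collections import Counter

# Illustrative "dynamic profile": device types seen across the whole
# user population, updated as events arrive.
global_device_counts = Counter()

def observe(device: str) -> None:
    """Record a device sighting in the global profile."""
    global_device_counts[device] += 1

def is_suspicious(device: str, min_global_share: float = 0.01) -> bool:
    """A device unseen for one user is only suspicious if it is also
    rare across the population (assumed 1% threshold)."""
    total = sum(global_device_counts.values())
    if total == 0:
        return True
    return global_device_counts[device] / total < min_global_share

# A newly released phone quickly becomes common globally...
for _ in range(50):
    observe("iPhone 12")
for _ in range(950):
    observe("iPhone 11")

print(is_suspicious("iPhone 12"))   # False: 5% of global traffic
print(is_suspicious("UnknownOS"))   # True: never seen globally
```

The key design point is that the profile is population-wide and updated continuously, so a shift that affects many users at once reads as a new normal rather than an outlier.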

Performs consistently over time

Because UML uses a dynamic data profile, little maintenance is required to sustain model accuracy over time. There's also no re-training required, unlike with supervised models. Ongoing performance monitoring is usually performed to ensure value consistency.
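A value-consistency check of this kind can be very simple in spirit: compare the risk-score distribution from a baseline period (such as a POC) against current production scores and flag any large shift. The tolerance and the mean-comparison approach below are assumptions for illustration, not a DataVisor monitoring procedure.

```python
from statistics import mean

def scores_consistent(baseline: list[float], current: list[float],
                      max_mean_shift: float = 0.05) -> bool:
    """Flag a shift if the average risk score moved more than the
    assumed tolerance between the two periods."""
    return abs(mean(baseline) - mean(current)) <= max_mean_shift

# Hypothetical score samples from a POC and from later production data.
baseline_scores = [0.10, 0.12, 0.11, 0.90, 0.09]
current_scores  = [0.11, 0.13, 0.10, 0.88, 0.10]

print(scores_consistent(baseline_scores, current_scores))  # True
```

In practice a monitoring team would compare full distributions rather than means, but the idea is the same: production output should look like what was validated.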

Kaila recalls that DataVisor performed a POC for a client in 2018, then evaluated the most recent production data in 2020: "It was pretty much the same as what we saw in the POC two years ago," she observes.

Though DataVisor’s UML model doesn’t require data training, Kaila notes that DataVisor does go through a fine-tuning process with each client around their specific business logic. “We want to understand if we’re detecting any groups of false positives or signals. For example, we might be getting an odd signal from a specific ISP and need to drop the values associated with that ISP. These are very specific to each client, which is why we take this extra step before implementing the algorithm and letting it do its work.”

Doesn’t require external data sources

DataVisor’s model uses clustering across internal data to find patterns, such as many similar transactions originating from the same IP addresses.

Using this data, DataVisor outputs a risk score from 0 to 1 indicating how risky a user is, along with the reasons the risk was detected relative to other users. 
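As a toy illustration of the clustering idea, users who share an IP address can be grouped into a cluster, and a user's score in [0, 1] can grow with the size of the coordinated cluster they fall into. The grouping rule and scoring formula here are invented for illustration and are not DataVisor's actual model.

```python
from collections import defaultdict

# Toy IP-based clustering: users sharing one IP form a cluster, and a
# user's risk score (0 to 1) grows with cluster size.
events = [
    ("user_a", "203.0.113.5"),
    ("user_b", "203.0.113.5"),
    ("user_c", "203.0.113.5"),
    ("user_d", "198.51.100.7"),
]

clusters = defaultdict(set)
for user, ip in events:
    clusters[ip].add(user)

def risk_score(user: str) -> float:
    """Score in [0, 1]: larger shared-IP clusters look more coordinated."""
    size = max((len(m) for m in clusters.values() if user in m), default=1)
    return 1.0 - 1.0 / size   # lone user -> 0.0, large cluster -> near 1.0

print(round(risk_score("user_a"), 2))  # 0.67 (cluster of 3)
print(round(risk_score("user_d"), 2))  # 0.0  (alone on its IP)
```

Because the cluster membership itself explains the score, this style of output naturally carries the "why" alongside the "how risky."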

Extensive documentation of model development process

To create DataVisor’s UML model, each feature uses a unique algorithm to identify patterns within specific data sets (e.g., email naming patterns, usernames, etc.). Each feature includes multiple layers of engineering that feed into the clustering algorithm to create the model. The algorithm reviews data inputs against multiple sources and dimensions to identify risks with higher accuracy compared to supervised machine learning.

Extensive documentation of how the model functions provides assurance to FIs that the model is accurate and working the way in which it is intended. 

Talk with our experts to see DataVisor’s machine learning models in action with a quick demo.

About Kaila Cappello
Kaila is a Technical Account Manager at DataVisor. She has 7 years of experience in technical account support and data-driven solutions implementations.