
SR 11-7 Compliance: 3 Core Elements for Model Validation

An SR 11-7 compliant validation framework includes 3 core elements: an evaluation of conceptual soundness, ongoing monitoring, and outcomes analysis.

By Alex Niu October 18, 2018


About Alex Niu
Alex Niu is Director of Solution Engineering at DataVisor. He brings a decade of experience in the financial industry to his role, with a focus on risk management analytics. He was previously Director of Decision Science at American Express, where he led a team of data scientists developing and implementing advanced machine learning solutions.


Model validation is a defined set of processes and activities intended to verify that models perform as expected, in line with their design objectives and business uses, and to identify potential limitations and assumptions and assess their possible impact. Whether you are a data scientist or a risk analyst, model validation is critical not only for meeting regulatory expectations, such as those outlined in the SR 11-7 guidance, but also for working with confidence that your results are accurate. Systematic validation procedures help banks understand both internally built models and vendor products, along with their capabilities, applicability, and limitations.

By failing to validate models, banks run the increased risk of regulatory criticism, fines, and penalties.

I’ve had the good fortune of working with several banks over the last few years, guiding them as they develop machine learning models to serve their customers better and enhance risk management. Armed with information on what to look for when conducting a reliability assessment of a given model, banks can practice due diligence and remain regulatory compliant.

Three Key Elements of SR 11-7

So what does an effective and comprehensive validation framework look like? According to the SR 11-7 guidance issued by the Federal Reserve, here are its three core elements:

• Evaluation of conceptual soundness, including developmental evidence: This step assesses the quality of the model’s design and construction. It entails a review of the documentation and empirical evidence supporting the methods used and the variables selected for the model. A lack of documentation is a red flag. The documentation should capture all assumptions about data, usage, purpose, criticality, validation, redundancy, and so on, as well as the data sources and the business case. Documentation and testing should convey an understanding of model limitations and assumptions, and validation should ensure that judgment exercised in model design and construction is carefully considered.
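One way to make the documentation expectations above concrete is to keep them as a structured record that can be checked mechanically. The sketch below is illustrative only; the `ModelCard` class, its field names, and the sign-off rule are assumptions of this example, not anything prescribed by SR 11-7:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model documentation record (fields are illustrative)."""
    name: str
    purpose: str
    data_sources: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)
    limitations: list = field(default_factory=list)

    def missing_sections(self):
        # A lack of documentation is a red flag: flag any empty section.
        return [s for s in ("data_sources", "assumptions", "limitations")
                if not getattr(self, s)]

card = ModelCard(
    name="card_fraud_score_v1",
    purpose="Score card-not-present transactions for fraud risk",
    data_sources=["transaction_log", "device_fingerprint"],
    assumptions=["Chargeback labels arrive within 90 days"],
    limitations=[],  # undocumented limitations should block sign-off
)
print(card.missing_sections())  # -> ['limitations']
```

A validator could require `missing_sections()` to be empty before a model moves to production.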

• Ongoing monitoring, including process verification and benchmarking: This step is essential to evaluate whether changes in products, exposures, activities, clients, or market conditions necessitate adjustment, redevelopment, or replacement of the model, and to verify that any extension of the model beyond its original scope is valid. Process verification checks that all model components are functioning as designed. It includes verifying that internal and external data inputs remain accurate, complete, and consistent with the model’s purpose and design. Sensitivity analysis and other checks for robustness and stability should be repeated periodically, with a frequency based on the model category. They can be as useful during ongoing monitoring as they are during model development. If models only work well for certain ranges of input values or market conditions, they should be monitored to identify situations where those constraints are approached.
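As a rough illustration of the sensitivity analysis mentioned above, the sketch below re-scores a case with one input perturbed and records the change in output. The `fraud_score` function is a made-up stand-in for a deployed model, and the perturbation steps are arbitrary choices for this example:

```python
def fraud_score(amount, velocity):
    # Stand-in for a deployed scoring model; a real validation would
    # call the bank's or vendor's actual model here.
    return min(1.0, 0.0002 * amount + 0.05 * velocity)

def sensitivity(model, base_inputs, param, rel_steps=(-0.1, -0.05, 0.05, 0.1)):
    """Perturb one input by relative steps and return the score deltas.

    Large or erratic deltas flag instability worth investigating.
    """
    base_score = model(**base_inputs)
    deltas = {}
    for step in rel_steps:
        perturbed = dict(base_inputs, **{param: base_inputs[param] * (1 + step)})
        deltas[step] = model(**perturbed) - base_score
    return deltas

deltas = sensitivity(fraud_score, {"amount": 1200, "velocity": 3}, "amount")
for step, delta in sorted(deltas.items()):
    print(f"{step:+.0%} amount -> score change {delta:+.4f}")
```

Repeating this periodically, per the model’s monitoring schedule, gives a simple trail showing whether the model’s behavior near its input boundaries has drifted.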

• Outcomes analysis: This analysis is a comparison of model outputs to corresponding actual outcomes. It helps to evaluate model performance by establishing expected ranges for those actual outcomes in relation to the intended objectives, and by assessing the reasons for observed variation between the two. If outcomes analysis produces evidence of poor performance, the bank should take action to address those issues with the vendor. Outcomes analysis typically relies on statistical tests or other quantitative measures. It can also include expert judgment to check the intuition behind the outcomes and confirm that the results make sense. This analysis should involve a range of tests, because any individual test has weaknesses. Models with long forecast horizons should be back-tested, but given the amount of time it would take to accumulate the necessary data, that testing should be supplemented by evaluation over shorter periods.
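One simple quantitative form of outcomes analysis is a band-level calibration check: sort cases by predicted score, split them into bands, and compare each band’s mean predicted probability to its actual outcome rate. This is a minimal sketch; the function name, band count, and tolerance are illustrative choices, not anything prescribed by SR 11-7:

```python
def outcomes_analysis(scored, n_bands=5, tolerance=0.10):
    """Flag score bands whose predictions diverge from actual outcomes.

    `scored` is a list of (predicted_probability, actual_outcome) pairs,
    with actual_outcome in {0, 1}. Returns (band_index, mean_predicted,
    actual_rate) for each band whose gap exceeds `tolerance`.
    """
    scored = sorted(scored)
    size = len(scored) // n_bands
    flagged = []
    for i in range(n_bands):
        # The last band absorbs any remainder from integer division.
        band = scored[i * size:] if i == n_bands - 1 else scored[i * size:(i + 1) * size]
        mean_pred = sum(p for p, _ in band) / len(band)
        actual_rate = sum(y for _, y in band) / len(band)
        if abs(mean_pred - actual_rate) > tolerance:
            flagged.append((i, round(mean_pred, 3), round(actual_rate, 3)))
    return flagged

# Deterministic synthetic data: each score group is perfectly calibrated,
# so no band should be flagged.
data = []
for p, n in [(0.05, 100), (0.2, 100), (0.5, 100), (0.8, 100), (0.95, 100)]:
    k = int(p * n)
    data += [(p, 1)] * k + [(p, 0)] * (n - k)
print(outcomes_analysis(data))  # -> []
```

Flagged bands would then feed the follow-up steps described above: statistical testing, expert review, and, for vendor models, escalation to the vendor.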

Bringing it All Together

At DataVisor, we understand the importance of SR 11-7 compliance and collaborate with our customers to provide model validation for fraud models. DataVisor’s Unsupervised Machine Learning platform uses a transparent algorithm and offers end-to-end processing with robust documentation.

Our platform is built to handle fraud prevention and follows industry best practices in theory, methodology, and performance.

So the next time you evaluate a vendor, be sure to ask the right questions and cover all the ground, because your organization’s growth depends on it.

