MODEL RISK

Model Risk Management in the Age of AI: What the CBI Expects

For years, model risk management in Irish financial services meant managing the risk of quantitative models used in credit scoring, market risk, and capital calculations. The frameworks were well-established, the expectations were understood, and the validation processes were mature.

Artificial intelligence has changed that picture fundamentally. AI models — particularly machine learning models — behave differently from traditional quantitative models. They are more complex, less transparent, and more sensitive to changes in the data they are trained on. They can produce outputs that are difficult to explain, and they can fail in ways that are hard to predict. For regulated firms, this creates a new and urgent challenge: how do you apply rigorous model risk management disciplines to systems that were not designed with those disciplines in mind?

The Central Bank of Ireland is asking exactly that question. And it expects your firm to have a credible answer.

What the CBI Expects: The Core Principles
The CBI's model risk expectations are not new. The principles of independent validation, documentation, governance, and ongoing monitoring have been embedded in supervisory guidance for years. What has changed is the scope of their application. The CBI now expects these principles to apply to AI and machine learning models with the same rigour as they apply to traditional quantitative models.

Each expectation, and what it means in practice:
Model Inventory:
A complete, documented register of all AI and ML models in use, including third-party models and models embedded in vendor software.
Risk Classification:
Each model classified by risk tier, with high-risk models subject to enhanced validation and oversight.
Independent Validation:
High-risk AI models validated by a function independent of the model development team, with documented evidence of testing for accuracy, robustness, fairness, and explainability.
Ongoing Monitoring:
Continuous monitoring of model performance against defined thresholds, with clear escalation procedures when performance degrades.
Human Oversight:
Documented processes for human review and override of AI-driven decisions, particularly in high-stakes areas such as credit, claims, and fraud.
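The ongoing-monitoring expectation above can be sketched in code. This is a minimal illustration, not a CBI-prescribed mechanism: the function name, the record fields, and the 0.75 accuracy floor are all illustrative assumptions, and a real framework would feed a documented escalation procedure rather than return a string.

```python
def check_model_performance(model_id: str, recent_accuracy: float,
                            threshold: float = 0.75) -> dict:
    """Compare a model's recent accuracy to its defined floor and
    return a monitoring record, flagging any breach for escalation."""
    breached = recent_accuracy < threshold
    return {
        "model_id": model_id,
        "recent_accuracy": recent_accuracy,
        "threshold": threshold,
        "breached": breached,
        # In practice a breach would trigger the firm's documented
        # escalation procedure (e.g. notify the model risk function).
        "action": "escalate_to_model_risk" if breached else "none",
    }

record = check_model_performance("credit_score_v3", recent_accuracy=0.71)
```

The point is not the code itself but the discipline it encodes: a defined threshold, a recorded comparison, and a predetermined action when performance degrades.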


The Challenge of Explainability
One of the most significant challenges AI creates for model risk management is explainability. Traditional credit scoring models could be interrogated relatively easily: you could trace a decision back through the model's logic and explain why a particular outcome was reached.

Many AI models, particularly deep learning models, do not work that way. They produce outputs based on patterns in data that can be extraordinarily difficult to articulate in plain language. This creates a direct tension with the CBI's expectation that firms can explain AI-driven decisions to customers, to internal governance bodies, and to the regulator.
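For inherently interpretable models, a per-decision explanation can be read straight off the model's structure. A minimal sketch, assuming a simple linear scoring model with hypothetical feature names and weights (complex models typically require dedicated post-hoc techniques such as SHAP or LIME to approximate the same thing):

```python
# Illustrative weights for a hypothetical linear credit-scoring model.
weights = {"income": 0.4, "missed_payments": -1.2, "tenure_years": 0.1}

def explain_decision(applicant: dict) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, largest
    absolute effect first, as the basis for a plain-language reason."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

reasons = explain_decision(
    {"income": 3.0, "missed_payments": 2.0, "tenure_years": 5.0})
# The first entry identifies the dominant driver of the decision,
# which can then be phrased in plain language for the customer.
```

For a linear model this decomposition is exact; the regulatory difficulty arises precisely because deep learning models admit no such direct decomposition.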

The EU AI Act addresses this directly. For high-risk AI systems, which include AI used in credit scoring, insurance underwriting, and employment decisions, the Act requires that firms be able to provide meaningful explanations of automated decisions. This is not a future obligation; it applies now to systems already in use.

Bias and Fairness: A Supervisory Priority
The CBI and EIOPA have both signalled that bias in AI models is a specific supervisory concern for 2026, particularly in areas where AI-driven decisions could disadvantage certain groups of customers. This includes:

Credit decisioning: AI models that systematically disadvantage applicants based on protected characteristics, even indirectly through proxy variables.
Insurance pricing: AI-driven pricing models that introduce discriminatory bias into premium calculations.
Claims processing: AI systems that apply different standards to different groups of policyholders.
Addressing this requires more than a one-time bias test at model deployment. It requires ongoing monitoring, with defined metrics, thresholds, and escalation procedures. Firms that cannot demonstrate this level of ongoing oversight are exposed to significant supervisory risk.
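One common way to express such a defined metric with a threshold is selection-rate disparity between groups. The sketch below is illustrative only: the group labels, data, and the 0.8 ("four-fifths") tolerance are assumptions, not figures from the CBI or EIOPA, and a real programme would monitor several metrics on live decision data.

```python
def selection_rate_ratio(decisions: dict[str, list[int]]) -> float:
    """Ratio of the lowest to the highest group approval rate.
    1.0 means parity; values below a set tolerance trigger review."""
    rates = [sum(d) / len(d) for d in decisions.values()]
    return min(rates) / max(rates)

# Hypothetical approval outcomes (1 = approved) for two customer groups.
decisions = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 1]}

ratio = selection_rate_ratio(decisions)  # 0.5 / 0.75
needs_review = ratio < 0.8               # breach of the assumed tolerance
```

Run on a schedule against live decisions, with the breach flag wired to a documented escalation path, this is the kind of ongoing oversight the supervisors are asking firms to evidence.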

Third-Party AI: A Gap Many Firms Are Missing
Perhaps the most significant gap in AI model risk management at many Irish firms is the treatment of third-party AI. Many organisations use AI models embedded in vendor software (credit scoring engines, fraud detection systems, AML monitoring tools) without applying the same level of documentation and validation that they would apply to internally developed models.

"The regulatory obligation does not transfer to the vendor. If your firm uses an AI model to make or inform a regulated decision, your firm is responsible for demonstrating that the model is governed, validated, and monitored to the required standard — regardless of who built it." — CBI Regulatory & Supervisory Outlook 2026

This means firms need to identify every AI model in use, including those embedded in third-party software; obtain sufficient documentation from vendors to enable independent validation; and, where documentation is not available, assess whether the model can continue to be used in a regulated context.

Where to Start
For most firms, the starting point is a comprehensive AI model inventory. You cannot manage what you have not mapped. A well-structured inventory, covering model purpose, risk classification, validation status, and ownership, gives you the foundation for everything else.
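A minimal sketch of what one inventory entry might capture, covering the fields listed above. The field names, tiers, and example entry are illustrative assumptions, not a CBI-prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ModelInventoryEntry:
    model_id: str
    purpose: str       # e.g. "credit decisioning", "fraud detection"
    risk_tier: str     # e.g. "high", "medium", "low"
    third_party: bool  # embedded in vendor software?
    validated: bool    # independent validation completed?
    owner: str         # accountable business owner

inventory = [
    ModelInventoryEntry("fraud_detect_v2", "fraud detection", "high",
                        third_party=True, validated=False,
                        owner="Head of Financial Crime"),
]

# High-risk models without completed independent validation are the
# first remediation priority.
gaps = [m.model_id for m in inventory
        if m.risk_tier == "high" and not m.validated]
```

Even a structure this simple makes the priority question answerable: which high-risk models lack validation evidence today?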

From there, the priority is to identify your highest-risk models and ensure they have the validation evidence, explainability documentation, and monitoring frameworks the CBI will expect to see. This is not a theoretical exercise. It is the practical work of building a governance posture that will withstand regulatory scrutiny.

Questions to Ask About Your AI Models Today
Do you have a complete inventory of every AI model your firm uses, including those in third-party software?
Has each model been risk-classified, and are your highest-risk models subject to independent validation?
Can you explain, in plain language, how each of your high-risk AI models reaches its outputs?
Do you have ongoing bias monitoring in place for AI systems used in credit, pricing, or claims?
Is there a documented human oversight and override process for AI-driven decisions?

References
1. Central Bank of Ireland. (2026). Regulatory & Supervisory Outlook 2026. centralbank.ie
2. European Parliament. (2024). Regulation (EU) 2024/1689, The EU AI Act. eur-lex.europa.eu
3. EIOPA. (2023). Artificial Intelligence Governance Principles. eiopa.europa.eu

Liz Bancroft-Turner
Founder & Managing Director, Tolt Innovations
With over 25 years of experience leading technology and transformation programmes across the UK, US, Europe, and Asia for organisations including Credit Suisse, HSBC, Barclaycard, AIB, EY, Accenture, and Microsoft, Liz founded Tolt Innovations to help Irish regulated financial services firms achieve CBI supervisory readiness.