Why AI Governance is Not an IT Problem

As financial services firms grapple with the implications of the EU AI Act and the Central Bank of Ireland's increasing focus on technology risk, a common mistake is emerging: treating AI governance as an IT problem.

It is an understandable error. Artificial intelligence is a complex technology, and its implementation is often led by IT departments and data science teams. But to delegate the governance of AI to the IT function is to fundamentally misunderstand the nature of the risk. AI risk is not primarily a technological risk; it is a business risk with significant regulatory, reputational, and conduct implications.

When an AI model used for credit scoring is found to be biased, the consequence is not a server outage; it is a breach of the Consumer Protection Code, a supervisory review by the CBI, and a loss of customer trust. When an AI-powered investment recommendation tool fails, the issue is not a software bug; it is a potential violation of MiFID conduct of business rules.

This is why the board and senior management cannot afford to see AI governance as something that happens in the server room. It must be owned at the highest levels of the organisation, as a core component of the firm's overall risk management framework.

The Three Lines of Defence Model for AI
The well-established Three Lines of Defence model for risk management provides a clear and effective framework for assigning ownership of AI governance:

1. First Line: The Business - Owns and manages the risks associated with the AI systems it uses. Responsible for day-to-day risk management, controls, and ensuring that AI systems are used appropriately and ethically.

2. Second Line: Risk & Compliance - Sets the firm's AI risk appetite and governance framework, and provides independent oversight and challenge to the first line. Responsible for ensuring the firm's use of AI aligns with regulatory requirements.

3. Third Line: Internal Audit - Provides independent assurance to the board that the AI governance framework is designed effectively and operating as intended. Tests the controls and provides an objective assessment of the firm's AI risk posture.

In this model, the IT department is a critical partner, but it is not the owner of AI risk. IT's role is to ensure the security, robustness, and resilience of the underlying technology infrastructure. But governance and compliance must be owned by the business and the risk functions, with ultimate accountability resting with the board.

Questions for the Board
For board members and senior executives at regulated financial services organisations, the shift to this model requires asking a different set of questions about AI:

Board-Level Questions on AI
1. Not just "what can this technology do?" but "what are the risks if it goes wrong?"
2. Not just "who is building the models?" but "who is validating them independently?"
3. Not just "is the system secure?" but "is the system fair, explainable, and auditable?"
4. Not just "what is our IT strategy for AI?" but "what is our AI risk appetite, and who owns it?"

Treating AI governance as a business-wide responsibility, owned by the board and managed through the three lines of defence, is no longer simply a matter of best practice. With the EU AI Act now in force and the CBI's supervisory focus intensifying, it is a regulatory necessity. The firms that make this shift now will be the ones that can innovate with confidence, knowing that their use of AI is not just powerful, but also safe, fair, and compliant.

Liz Bancroft-Turner
Founder & Managing Director, Tolt Innovations
Liz has over 25 years of experience leading technology and transformation programmes across the UK, US, Europe, and Asia for organisations including Credit Suisse, HSBC, Barclaycard, AIB, EY, Accenture, and Microsoft. She founded Tolt Innovations to help Irish regulated financial services firms achieve CBI supervisory readiness.