Apply the proven Three Lines of Defense model to your AI governance program for robust oversight and accountability.
The Three Lines of Defense model is one of the most durable frameworks in risk governance. Originally developed for financial services, it has been adopted across industries as a way to organize accountability, oversight, and assurance functions. For AI governance, it provides a structural answer to a question that many organizations struggle with: who is responsible for governing AI, and how do those responsibilities relate to each other?
The answer is not simple, because AI governance cuts across organizational boundaries in ways that traditional risk categories do not. A single AI system might be built by technology, deployed by a business unit, monitored by risk management, and audited by internal audit — with legal, compliance, and procurement all playing roles in between. Without a clear accountability structure, governance becomes diffuse, duplicative, and ultimately ineffective.
The Three Lines model provides that structure.
The First Line — Business Ownership and Operational Control
The first line of defense consists of the business units and functions that own, develop, and deploy AI systems. In the AI context, this includes the business sponsors of AI initiatives, the technology and data science teams that build and operate AI systems, and the owners of the business processes in which AI systems are embedded.
First-line accountability for AI governance means that business owners are responsible for ensuring their AI systems are governed appropriately — not that they delegate governance to a central function and consider the matter closed. This includes conducting or commissioning risk assessments before deployment, implementing required controls, maintaining documentation, monitoring system performance, and escalating issues through defined channels.
In practice, the first line often needs support to discharge these responsibilities effectively. AI governance functions in the second line typically provide the policies, frameworks, tools, and guidance that enable first-line teams to govern AI without becoming governance specialists themselves.
The Second Line — AI Governance and Risk Oversight
The second line of defense provides independent oversight of first-line AI governance activities. In most organizations, this function sits within risk management, compliance, or a dedicated AI governance office.
Second-line responsibilities in AI governance include developing and maintaining the AI governance framework (policies, standards, procedures, and risk taxonomy); operating the AI inventory and risk register; conducting or overseeing AI risk assessments; reviewing and approving high-risk AI deployments; monitoring compliance with governance requirements; and reporting AI risk to senior management and the board.
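To make the inventory concrete, here is a minimal sketch of what a single inventory record might capture. The field names and risk tiers are illustrative assumptions for this sketch, not a prescribed schema, and real inventories typically live in GRC tooling rather than code.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    # Illustrative tiers; the second line defines the actual risk taxonomy.
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One entry in the AI inventory and risk register (illustrative)."""
    system_id: str
    name: str
    first_line_owner: str                       # accountable business owner
    risk_tier: RiskTier
    last_risk_assessment: date | None = None    # None = never assessed
    approved_for_deployment: bool = False       # second-line approval status
    open_issues: list[str] = field(default_factory=list)
```

Even this skeletal record surfaces the questions the second line must be able to answer for every system: who owns it, how risky is it, when was it last assessed, and has it been approved.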
The second line is also typically responsible for regulatory intelligence — tracking the evolving AI regulatory landscape and translating new requirements into governance obligations. In a period of rapid regulatory development (EU AI Act, NIST AI RMF, OSFI guidance, ISO/IEC 42001), this function is critical.
Critically, the second line must maintain independence from the first line. It cannot both own AI systems and provide independent oversight of them. Where this boundary is blurred — as it sometimes is in organizations where the Chief Data Officer or Chief Technology Officer owns both AI development and AI governance — the oversight function is compromised.
The Third Line — Internal Audit
The third line of defense provides independent assurance to the board and senior management that AI governance is operating as intended. Internal audit's role is not to govern AI — that responsibility belongs to the first and second lines — but to assess whether governance is effective and to identify gaps, weaknesses, and control failures.
For internal audit functions, AI governance presents new challenges. Traditional audit methodologies were not designed to assess AI systems, and many audit teams lack the technical expertise to evaluate model risk, algorithmic fairness, or agentic AI controls. Building AI audit capability — through training, specialist hiring, or co-sourcing arrangements — is a priority for organizations with material AI exposure.
AI audit coverage should include the governance framework itself (are policies adequate and current?), the AI inventory (is it complete and accurate?), the risk assessment process (are assessments being conducted, and are they rigorous?), high-risk AI systems (are controls operating effectively?), and the second-line oversight function (is it independent and effective?).
Common Failure Patterns
Organizations that struggle with AI governance under the Three Lines model typically exhibit one or more of four recurring failure patterns.
First-line abdication occurs when business units treat AI governance as a second-line responsibility — submitting AI systems for review but taking no ownership of ongoing governance. The result is a second-line function overwhelmed with governance work it was never designed to perform, and first-line teams with no accountability for AI outcomes.
Second-line capture occurs when the second-line function becomes so embedded in AI development decisions that it loses its independence. This is particularly common in organizations where AI governance is housed within the technology function. When the same team that advises on AI design is also responsible for approving AI deployments, the oversight function is structurally compromised.
Third-line avoidance occurs when internal audit defers AI governance assurance because of technical complexity. The result is a governance program that has never been independently tested — and board assurance that is based on management representation rather than audit evidence.
Governance gaps at the boundaries occur when accountability for AI systems that cross organizational lines — vendor-supplied AI, AI embedded in enterprise software, AI used by one business unit but owned by another — falls between the lines. Clear ownership assignment for every AI system in the inventory is the remedy.
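Continuing the illustrative inventory sketch above, one way to operationalize that remedy is a periodic completeness check that flags every system with no named first-line owner. This is an assumed control design, not a standard; the point is that ownership gaps become detectable once the inventory is explicit.

```python
def find_ownership_gaps(inventory: list[AISystemRecord]) -> list[str]:
    """Return the IDs of inventory entries with no named first-line owner.

    Boundary-crossing systems (vendor-supplied AI, AI embedded in
    enterprise software, cross-unit deployments) are the most likely
    to surface here, because accountability falls between the lines.
    """
    return [
        record.system_id
        for record in inventory
        if not record.first_line_owner.strip()
    ]
```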
Putting the Model into Practice
Implementing the Three Lines model for AI governance requires explicit design, not organic emergence. Organizations should define first-line AI governance responsibilities in role descriptions and governance policies. They should establish second-line AI governance functions with clear mandates, resources, and independence. They should build internal audit AI capability and establish a risk-based AI audit program. And they should create the escalation paths, reporting structures, and information flows that connect the three lines into a coherent oversight system.
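As one illustration of an explicit escalation path, the hypothetical routing below (again reusing the inventory record sketched earlier) sends issues on high-risk or unapproved systems to stronger oversight forums. The forums and thresholds are assumptions made for this sketch; real escalation paths are set by governance policy, with tooling at most enforcing them.

```python
def escalation_target(record: AISystemRecord, issue: str) -> str:
    """Route a first-line issue to an oversight forum (illustrative).

    High-risk systems escalate past the second line to senior
    management; unapproved systems go to second-line review; the
    rest are logged in the risk register for routine monitoring.
    """
    if record.risk_tier is RiskTier.HIGH:
        return f"{record.system_id}: escalate '{issue}' to senior management"
    if not record.approved_for_deployment:
        return f"{record.system_id}: route '{issue}' to second-line review"
    return f"{record.system_id}: log '{issue}' in the risk register"
```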
The Three Lines model does not eliminate AI risk. No governance structure does. But it creates the organizational conditions under which AI risk can be identified, owned, managed, and assured — which is the foundation of defensible AI governance.