Governing complex IT and AI: a board-level assurance checklist

Authored by: Erika Eliasson-Norris

The board’s relationship with technology has fundamentally shifted. Where once IT was delegated downward as an operational matter, boards now find themselves accountable for systems they may not fully understand—from legacy infrastructure groaning under modern demands to AI applications whose decision-making logic remains opaque even to their developers.

This creates an accountability trap. Directors cannot simply defer to management on technology governance when shareholders, regulators, and the public expect board-level oversight of cyber resilience, data ethics, and algorithmic fairness. Yet without the right assurance framework, boards risk oscillating between rubber-stamping technical presentations they don’t comprehend and micromanaging details they shouldn’t touch.

The solution lies not in directors becoming technologists, but in boards asking the right questions and demanding the right assurance.

The assurance gap

During an inquiry-level investigation I conducted for a mid-sized financial services firm, the board had received quarterly cyber security updates for three years. The papers were detailed, the presentations polished. Yet when a significant data incident occurred, directors discovered they’d been reviewing metrics that measured activity—not effectiveness. They knew how many patches had been deployed. They had no idea whether critical vulnerabilities remained unaddressed.

This isn’t unusual. Technology governance often suffers from what I call “assurance theatre”—impressive-looking reports that create an illusion of control without delivering genuine oversight.

A practical checklist

Effective technology governance requires boards to move beyond receiving information to actively testing the robustness of what they’re told. Consider this framework:

Strategic alignment. Does the board understand which technology investments are genuinely strategic versus simply necessary? Can directors articulate how IT and AI capabilities connect to the business model and competitive position? If asked, could each director explain the company’s three most significant technology dependencies?

Risk comprehension. Has the board identified its crown jewels—the data, systems, and algorithms that would cause existential damage if compromised or lost? More importantly, does the board know what would happen if these failed? In one FTSE 250 company where I served, we discovered the board had sophisticated cyber security oversight but had never systematically considered operational resilience if core systems went down for forty-eight hours.

Governance architecture. Who at board level has genuine competence to challenge management on technology decisions? A technology committee can be valuable, but only if it has the expertise and authority to provide real assurance rather than becoming an echo chamber. The board must also clarify decision rights: which technology matters require board approval, which need board notification, and which sit firmly with management.

AI-specific oversight. For AI applications, boards need distinct assurance. What training data is being used, and has bias been tested? Who is accountable when an algorithm makes a consequential decision? What governance exists around model updates and version control? These aren’t theoretical questions—they’re becoming regulatory requirements and litigation risks.

Assurance testing. Boards should periodically test their own understanding. Can directors explain the company’s disaster recovery plans without looking at papers? Do they know which external parties have access to sensitive systems? Have they satisfied themselves that management isn’t simply reporting what they want to hear?

Making it work

This level of oversight doesn’t require monthly board papers on IT. It requires structured assurance built into existing governance rhythms. Annual strategy sessions should include technology dependencies. Risk committees should regularly stress-test management’s assumptions. Board evaluations should honestly assess directors’ collective technology literacy.

Most crucially, boards must create an environment where management feels comfortable escalating technology concerns early. In my experience, the gravest technology failures don’t occur because of sophisticated attacks or complex technical failures. They occur because warning signs were visible but never reached board level until crisis struck.

Directors who take technology governance seriously don’t need to understand code. They need to understand accountability, ask penetrating questions, and demand assurance frameworks that are genuinely fit for purpose. That’s not a technical challenge. It’s a governance one.

———

Erika Eliasson-Norris is CEO of Beyond Governance, a multi-award-winning corporate governance consultancy and the Governance Assessor to the Post Office Horizon IT Inquiry. A Chartered Governance Professional, she is the author of The Secret Diary of a Company Secretary and has held senior governance positions at FTSE companies.