I have been writing about this issue for over three years. Major research conducted by NACD has just verified the major gaps, opportunities and risks that impact a board’s fiduciary responsibilities.
Their recent report finds that although 95% of directors acknowledge AI's future impact on their businesses, only 28% say AI is a regular feature of their board's conversations. Three key findings from the report, while not surprising, stood out:
1.) Dedicated AI governance is essential. Boards, expecting a significant impact on their businesses from AI, must assess their readiness and prepare for effective oversight.
2.) Boards must approach AI oversight as an organization-wide strategic imperative.
3.) Boards must not treat AI simply as an IT operational issue or focus exclusively on AI's risks. As AI stakeholders, boards must ensure the safe and ethical development and use of AI, aligning their governance of the technology with the company's core values, purpose, and mission, and taking measures to mitigate potential harm.
I have been running a number of private educational sessions for board directors and C-level executives, educating them on AI governance and equipping them with the right questions a board director should be leading with.
I do find that many people training others in AI have never designed and built an AI model or sustained an AI environment, yet are donning AI leadership hats. I believe it is important that boards ensure all AI training for directors comes from an authoritative, pragmatic, and experienced position; otherwise, more risks are incurred.
AI is a highly nuanced field, and board directors need to know the detailed questions to ask and to ensure evidence is produced when overseeing AI applications deemed high risk.