
Creating the Board's AI Workplan

May 28, 2023

For me, reading and learning about artificial intelligence (AI) has been like drinking from a firehose. I could choose to ignore it and get a nice glass of water from the filtered jug in my fridge. But that wouldn’t end well.

Instead, I’ve been learning that boards today aren’t yet seeing AI as an important education topic for their meetings or board retreats. There’s some curiosity around ChatGPT, but otherwise little sense of urgency. In my view, that’s a mistake.

Does it get your attention when companies like Apple and Samsung ban the use of ChatGPT in the workplace? What do they know that we don’t?

As I write this, I’ve just come from a board meeting where I raised a question about the use of AI in the film and music industries. There was familiarity with the term 'AI' based on what’s appeared in the media, but we haven’t yet started to bring the story home.

Yet a couple of local movie projects on the summer schedule are now on hold because of the Hollywood writers’ strike, where one of the big issues is fear of job loss due to AI in the writers’ room. Because AI threatens the livelihood of writers, there may be no scripts this summer – and without scripts, no work for actors, architects, designers, film crews, or anyone else those productions employ. And that’s just one industry.

What’s happening in your world?

Maybe it’s time we all realize – regardless of what industry we’re in – that we need to start a conversation about AI to plan for our collective futures.

 

The Burning Platform

The use of AI will no doubt impact your organization differently than mine. Still, let me suggest that savvy directors reading this article can help create the conditions for their boards to understand and participate in the AI conversation.

Let’s face it. We don’t know what we don’t know. But there’s no need for undue alarm. If we think of the glass as half full, we can appreciate what may be possible with an intentional approach and a curious question or two.

So, what should be our next right step? It’s best to start by reaching common ground on the basics to ensure everyone is building on the same foundation. Do we all understand the basic tenets of what AI is all about?

Could we set a goal to draft an AI policy over the next year – one that is specific to our own board, our own organization, and our own industry? Could we agree to leverage the strategic opportunities that might emerge as we do this work?

 

How do we get there?

AI education for the board needs a home – preferably within whichever board committee is charged with oversight of strategy and risk.

From there, it would be helpful to identify a champion to drive the bus over the next twelve to eighteen months – a director who is curious about AI, its opportunities, and its risks. Might you be the savvy director who steps up to coordinate the board’s AI education and policy efforts?

It may feel new, but AI has been quietly working in the background of our lives for some time now. The reality is that most of us are late to the party, although no doubt boards that are familiar with the disrupter role will be more comfortable with the topic.

Boardroom discussions of AI today are roughly where cybersecurity discussions were ten years ago. But AI’s momentum is much faster, so we can’t take long to get our collective minds around the risks involved.

Right now, AI risk discussions tend to reside in the IT department. As with cybersecurity in the past, we haven’t yet appreciated the need for an enterprise-wide approach to artificial intelligence. In my view, that means accepting a lot of risk that we haven’t begun to understand from a board oversight perspective. Are we comfortable with that?

AI isn’t a fad, but there’s a lot of fiction out there. We need to get to the real risks – and the opportunities. We need a way to screen out those who hype AI just for the social media hits.

Here’s what I know to be true – boards are legally responsible for oversight of privacy and culture. In the age of AI, how do we monitor and guide in those areas?

From people smarter than me …

AI’s strategic possibilities will likely be as transformative as the Internet in the late ’90s – maybe even more so. And yes, the board needs a workplan for its role in overseeing AI risk in the organization – but also a plan for the opportunities the new technology presents.

Right now, the AI madness seems out of control. The firehose of information on AI is open full throttle. Our boards would benefit from a more controlled approach, so we can catch up in ways that are relevant to what we do.

An intentional change management approach in our boardrooms may help us appreciate the opportunities in front of us. If the board understands it, others will too. The leadership leverage is immense.

This article doesn’t pretend to have all the answers. It doesn’t even have all the questions. Here at DirectorPrep, we’ll try to help by sharing curated information we’ve gleaned from others. (Check out the Resources section at the end of the article.) We’ll continue to ask relevant questions in the hopes of distilling the information we come across into a practical narrative that’s helpful without being intimidating.

 

You’re not alone.

It’s clear that AI needs to be on the board agenda. It needs to be considered seriously as part of the ‘G’ in ESG (Environmental, Social and Governance).

AI doesn’t need to be confined within the IT department, although in a large organization the Chief Information Officer (CIO) might be the one who takes responsibility for implementation and management.

The article AI in the Boardroom from the UK Institute of Directors (IoD) includes many relevant questions within a framework that supports board education about the risks and opportunities of AI.

And that’s what savvy directors do – build their skills, prepare well, ask great questions, collaborate with others, think independently, and use their governance courage to get important topics on the board’s agenda.

The same article provides background that may offer comfort that you and your board aren’t alone on this journey:

  • A 2022 members’ survey revealed that 80% of boards didn’t have a process in place to audit their use of AI. They said they didn’t know what questions to ask.
  • Board governance of AI at project inception is important because boards don’t want to have to unravel ethical issues later, when there may be reputational and cost impacts.
  • There’s a gap between board governance and organizational use of AI – over 86% of businesses already use some form of AI without their board being aware of it.
  • AI can amplify existing bias in human decisions. Safeguards are needed to prevent AI from perpetuating existing bias in the culture.
  • AI governance should be rooted in the core ethical values of the business.
  • An AI risk and governance model requires a framework that boards can use as a blueprint.

 

Next steps

Here are a few suggestions I’ve come across so far. I’m trying to put them to work as a director.

  • Engage with your board chair offline about AI and your interest in raising awareness.
  • Find one good article on the board’s role in AI that resonates with you in practical language. Share it with your fellow directors.
  • Collaborate with management to organize an introductory board presentation on the current use of AI in your industry sector. Keep the detail at a high level appropriate for the board. Prepare discussion questions in advance.
  • Find other directors to join you in a working group to prepare an AI workplan for the next 12-18 months. Hint: people interested in cybersecurity are a good place to start.
  • Look for examples of AI board policies. Be patient as there are only a few available at present. See one example from Jackie Lyons in the Resources section below.
  • As you develop your AI workplan and policy, be on the lookout for external resources and experts who could present to your board on relevant AI topics and facilitate a follow-up discussion.
  • In your working group, start drafting your board’s first AI policy while gathering relevant context for your organization.
  • Learn as you go.

 

Your takeaways:

  • The board is legally responsible for privacy and culture. It’s past time for directors to have AI discussions from a board oversight perspective.
  • Pursue progress over perfection. Your board just needs to get started with its AI learning.
  • Identify a board champion to lead the AI education workplan. If AI interests you and you find yourself drawn to the articles in the Resources section, you’re probably an excellent candidate to champion your board’s AI education efforts.
  • Bias is the number one inherent risk in the output of AI tools. How will you mitigate that risk? How will you keep your organization’s information secure and private?
  • Identify the stakeholders who may be impacted by the organization’s use of AI.
  • Development of the board’s AI policy and guardrails is an iterative process that management, staff, and stakeholders will appreciate.

 

Resources:

We’ve curated more resources than you need at this point. Our purpose is to introduce you to some of the better AI content available. You can use them to start forming your own views and to guide AI discussions in the boardroom.

 

Thank you.

Scott

Scott Baldwin is a certified corporate director (ICD.D) and co-founder of DirectorPrep.com – an online hub with hundreds of guideline questions and resources to help directors prepare for their board role.

 

We Value Your Feedback: Share your suggestions for future Savvy Director topics.

 
