The Savvy Director >> Weekly insights delivered to your inbox on Sunday mornings.

AI in the Boardroom

prepare for meetings | Mar 05, 2023

Image - AI bot at the boardroom table - generated by Shutterstock

Today’s post is by David Jaworski, Principal Product Manager for Microsoft Teams and co-founder of DirectorPrep. Dave serves on the board of INEO Solutions (TSXV: INEO) as well as non-profit boards. He previously served on the Advisory Board of Payworks and the public board of PNI Digital Media, which was acquired by Staples. [email protected]

Thanks to ChatGPT for also contributing to today’s article.


Recent leaps in Artificial Intelligence technology have enabled powerful new applications and capabilities, ushering us into an era where AI is becoming a critical asset to businesses at all levels. Let’s explore the essential nature of AI, its implications for the boardroom, and how to ensure it's used ethically and responsibly.

The explosion of OpenAI’s ChatGPT bot has brought the reality of AI to the front pages of every major news site in the world.

How big is OpenAI’s ChatGPT? It took TikTok about nine months to reach 100 million monthly users. ChatGPT did it in two months, making it the fastest-growing consumer application in history. This explosive growth is driving adoption of AI and its potential uses.

So, what’s different about this AI versus the AI we’ve been hearing about for decades?

One thing that’s different is that recent breakthroughs in Natural Language Processing (NLP) have allowed Large Language Models (LLMs) like ChatGPT to interpret natural language queries with semantic understanding. They can keep context from one discussion point to the next, making them more helpful and intuitive to use.
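
That context-keeping behavior comes from the application re-sending the running conversation with each new query. Here is a minimal sketch of the pattern; `ChatSession` and `echo_model` are illustrative names, not a real SDK, and `echo_model` stands in for an actual model call:

```python
class ChatSession:
    """Keeps the running conversation so each new query carries context."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text, respond):
        # Append the user's turn, send the WHOLE history to the model,
        # then record the model's reply so the next turn sees it too.
        self.messages.append({"role": "user", "content": user_text})
        reply = respond(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


# Stand-in for an LLM call: just reports how many turns of context it saw.
def echo_model(messages):
    return f"(reply with {len(messages)} messages of context)"


session = ChatSession("You are a board-meeting assistant.")
session.ask("Summarize the Q3 risks.", echo_model)
session.ask("And how do they compare to Q2?", echo_model)
# The second question is answerable only because the first is still in context.
```

The follow-up question ("And how do they compare...?") only makes sense because the earlier exchange is still in the message list the model receives.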

Another thing that’s different is accessibility. Large language models like ChatGPT are now widely available to the public through platforms like OpenAI. This accessibility has allowed people from all walks of life to experiment with AI and explore its potential uses.

It’s essential to understand that this AI is not an "overnight sensation." AI technology has been around for decades. Only recently has it become more accessible and easier to use.

The vision for this kind of AI has been around even longer. Did you ever watch The Jetsons? Beyond early forays into science fiction writing, companies like Apple produced videos to help people understand what the future would hold for us all.

And Microsoft shared a more recent vision in this video showing its Cortana AI as a true digital assistant. The assistant holds a very human dialog, with context maintained throughout the exchange.

The technology is here now.

AI is not a replacement for human intelligence, but it is a tool that can help us solve complex problems more efficiently. The most successful AI is being used as a “co-pilot” to augment us.

Today, software developers can use GitHub Copilot to have AI code with them and suggest improvements. Writers can have AI offer suggestions to improve their writing.


What does this mean for the Boardroom?

Image - AI bot at the boardroom table – generated by OpenAI’s DALL-E.

In the boardroom, AI can also be a co-pilot. It can be leveraged to help ensure directors explore more aspects of a situation. By asking great questions, we can get additional points of consideration from the AI.

AI can provide financial reviews and assessments to augment the skills of the board.

AI can be used to summarize discussions and produce draft board notes, even capturing all motions.

If desired, AI can provide complete transcriptions and speaker attribution for every note.

One Silicon Valley CEO with multiple billion-dollar acquisitions claims he never makes a board decision without asking AI for its assessment after listening to all angles. He says this step is necessary to ensure a full assessment and to avoid groupthink.


AI Terminology

To discuss AI in the boardroom, let’s be sure we’re using the right terminology. Here are common terms as defined by ChatGPT:

  • AI stands for Artificial Intelligence. It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as perception, reasoning, learning, and decision-making. AI is a broad field that encompasses various subfields, including machine learning, natural language processing, robotics, and computer vision.
  • Generative AI is a type of artificial intelligence designed to create or generate new content, such as images, text, or music, that did not previously exist. Unlike other types of AI, which are typically used to classify or recognize existing data, generative AI models are trained to generate new data based on patterns and trends learned from large datasets.
  • A Large Language Model (LLM) is an artificial intelligence system designed to understand and generate human-like language. LLMs are typically neural network-based models that have been pre-trained on vast amounts of text data, such as books, articles, and web pages, to learn the patterns and rules of language. These models are then fine-tuned on specific language tasks, such as language translation, text summarization, or question-answering.
  • ChatGPT stands for "Chat Generative Pre-trained Transformer." It is an AI-powered chatbot that uses a pre-trained transformer model to generate responses to user input.
  • DALL-E is an artificial intelligence program created by OpenAI that can generate images from textual descriptions. It’s named after the artist Salvador Dalí and the character WALL-E from the Pixar animated film.
  • Ethical AI, also known as responsible AI, refers to the development and use of artificial intelligence in ways that are fair, transparent, and socially responsible. Ethical AI aims to ensure that AI systems are designed and deployed in ways that respect human rights, minimize bias, and promote social and environmental well-being.
Some of the key principles of ethical AI include:

  • Fairness and non-discrimination: AI systems should be designed to avoid bias and ensure equal treatment of all individuals.
  • Transparency and explainability: AI systems should be transparent about their decision-making processes and explainable to users.
  • Privacy and data protection: AI systems should respect user privacy and protect personal data.
  • Accountability: AI developers and operators should be accountable for the decisions and actions of their systems.
  • Social and environmental responsibility: AI systems should be developed and used in ways that promote social and environmental well-being.
Ethical AI is an important consideration in the development and deployment of AI systems, as it helps ensure that AI is used for the benefit of society as a whole, rather than for the benefit of a few. It also helps build trust in AI among users and stakeholders.


So, what’s not to like?

AI can be a contentious topic, with concerns about sentient AI and AI plagiarism. These concerns are often based on misconceptions about AI technology or on interactions designed to create social media views. Sentient AI, for example, is still science fiction. (Think of the movie Terminator.)

If that’s true, why do AI models answer in a sentient way?

When I was working on my computer science degree at the University of Manitoba, we had a saying: "Garbage in, garbage out" (GIGO). It still applies today. When you feed the entire internet, including volumes of social media content, to a large language model and then let it run free, you’ll get immature responses filled with vitriol unless you put up guardrails.

When a self-driving test vehicle killed a pedestrian in the Phoenix area a few years ago, many were quick to blame the AI. Behind every AI model is a programmer (or a lot of programmers). Investigators found that the system had not been designed to properly handle a pedestrian crossing outside a crosswalk, and it assumed a human safety driver would always be ready to intervene. Neither assumption held. Another form of GIGO. (Unfortunately, one with life-threatening consequences.)

All of this points out one of the issues central to “good” vs “bad” AI … BIAS!


What, me biased?

One of the greatest risks of using AI is bias, which can be present in the data or in the code. It’s important to acknowledge that, as humans, we all have biases in our thinking, yet most of us are blind to them.

Amazon started using AI to screen job applicants and quickly abandoned it. Why? Because Amazon used its hiring history to inform the AI models. It quickly realized it had historically hired more males than females, so when screening two equally qualified candidates, one male and one female, the AI viewed the male candidate as the better choice. This kind of bias in the data is easily missed or hidden.

To mitigate this risk, it’s essential to ensure that the data used to train AI is diverse and representative of the problem you’re trying to solve. A successful example: training AI to recognize tumors in high-resolution images by training it on both known tumors and known benign samples.
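
One simple screen a board can ask about is whether outcomes in the training data already differ by group. The sketch below applies the "four-fifths rule" commonly used in hiring analytics: a group is flagged if its selection rate falls below 80% of the highest group's rate. The group labels and numbers here are illustrative only:

```python
def selection_rates(records):
    """records: list of (group, hired_bool). Returns the hire rate per group."""
    totals, hires = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}


def four_fifths_check(records):
    """True per group if its rate is at least 80% of the best group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}


# Hypothetical history: group A hired at 60%, group B at 30%.
history = ([("A", True)] * 60 + [("A", False)] * 40
           + [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_check(history))  # group B's ratio is 0.3/0.6 = 0.5 -> flagged
```

A check like this won't catch every bias, but it turns a vague worry ("is our data skewed?") into a concrete question the board can put to management.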

Additional risks can be mitigated … or even valued

Other risks of using AI include personal data exposure, sharing private data with public AI engines, and ownership of the results created by AI.

To use AI ethically and responsibly, it’s essential to avoid exposing private or confidential data. This can be achieved by sanitizing the queries used with AI and questioning the results. Discernment and critical thinking are crucial when using AI. It’s essential to be aware of the limitations and biases of the technology.
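
Sanitizing a query can be as simple as redacting sensitive tokens before the text ever leaves for a public AI engine. This is a toy sketch with a few illustrative patterns; a real policy would also cover names, account numbers, client identifiers, and anything else the board deems confidential:

```python
import re

# Illustrative patterns only - not an exhaustive confidentiality policy.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\$[\d,]+(?:\.\d+)?\s*(?:million|billion)?", re.I), "[AMOUNT]"),
]


def sanitize(query):
    """Redact sensitive tokens before a query is sent to a public AI engine."""
    for pattern, placeholder in PATTERNS:
        query = pattern.sub(placeholder, query)
    return query


print(sanitize("Contact jane@example.com about the $4.2 million offer, call 204-555-0188"))
# -> Contact [EMAIL] about the [AMOUNT] offer, call [PHONE]
```

The AI still gets enough context to be useful ("review this offer summary") without the deal value, the counterparty's contact details, or anything else that shouldn't leave the boardroom.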


However, there are also risks in not using AI, such as falling behind the competition and missing out on new perspectives. AI can provide a competitive advantage by helping businesses stay ahead of the curve and identify new opportunities.

Additional ways Generative AI (GPT) can be helpful in the boardroom:

  • Writing Reports and Presentations: GPT can be used to generate high-quality reports and presentations in a fraction of the time it would take a human to write them. By inputting a few prompts or keywords, GPT can generate comprehensive, grammatically correct, and coherent reports.
  • Data Analysis: GPT can be used to analyze and summarize large amounts of data, making it easier for board members to quickly identify patterns, trends, and insights. By inputting relevant data into the model, GPT can generate summaries and recommendations that can inform decision-making.
  • Language Translation: If you are conducting business internationally, GPT can be used to translate between languages in real time, allowing for seamless communication between team members and stakeholders from different countries.
  • Natural Language Processing: GPT can be used for natural language processing tasks such as sentiment analysis, entity recognition, and question-answering. This can be particularly useful for analyzing customer feedback, tracking brand reputation, and answering common queries from stakeholders.
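
To make the sentiment-analysis idea concrete, here is a deliberately crude word-count sketch. A real NLP model learns far richer signals than this, but the toy version shows the shape of the task: turn free text (say, customer feedback) into a number the board can track:

```python
# Tiny illustrative lexicons - a real model would learn these from data.
POSITIVE = {"great", "strong", "growth", "pleased", "confident"}
NEGATIVE = {"risk", "decline", "concern", "loss", "weak"}


def sentiment(text):
    """Crude lexicon score: positive word count minus negative word count."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


print(sentiment("Strong growth this quarter, the board is pleased."))  # 3
print(sentiment("Revenue decline is a concern."))  # -2
```

Scored over thousands of comments, even a simple signal like this can surface a trend; the board's job is to ask how the real model was built and validated, not to build it.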

Overall, GPT can be a powerful boardroom tool to streamline decision-making, increase efficiency, and improve communication. It's important to note, though, that GPT is not a replacement for human expertise, but rather a tool that can enhance it.

In conclusion, AI is becoming an essential tool for businesses, including the board. What questions will you ask to help move your board forward in leveraging this powerful technology for good?

  • What are you curious about with AI?
  • What impacts do you see in the boardroom?
  • What questions can you ask in the boardroom to help ensure your organization is taking full advantage of AI?


Your takeaways:

  • AI is here and usable now. AI has reached a new level of use as it can now handle natural language queries, use large language models, and keep context across multiple interactions. Like any tech, it can be used for good or bad.
  • AI will impact the boardroom. Savvy directors can help keep their organizations aware of the positive and negative aspects of this technology.
  • Think of AI first as a co-pilot and accelerator - a thought and perspective expander.
  • Ethical AI needs to be a part of the framework for your board's use of AI.
  • Your mindset plays a key role in your approach to AI.




Thank you.


Scott Baldwin is a certified corporate director (ICD.D) and co-founder of – an online hub with hundreds of guideline questions and resources to help directors prepare for their board role.

We Value Your Feedback: Share your suggestions for future Savvy Director topics.



