Image - AI bot at the boardroom table - generated by Shutterstock
Today’s post is by David Jaworski, Principal Product Manager for Microsoft Teams and co-founder of DirectorPrep. Dave serves on the board of INEO Solutions (TSXV: INEO) as well as non-profit boards. He previously served on the Advisory Board of Payworks and the public board of PNI Digital Media which was acquired by Staples. [email protected] LinkedIn: https://linkedin.com/in/DaveJaworski
Thanks to ChatGPT for also contributing to today’s article.
Recent leaps in Artificial Intelligence technology have enabled powerful new applications and capabilities, ushering us into an era where AI is becoming a critical asset to businesses at all levels. Let’s explore the essential nature of AI, its implications for the boardroom, and how to ensure it's used ethically and responsibly.
The explosion of OpenAI’s ChatGPT bot has brought the reality of AI to the front pages of every major news site in the world.
How big is OpenAI’s ChatGPT? It took TikTok nine months to get to 100 million monthly users. ChatGPT did that in just two months! It’s the fastest-growing consumer application in history. This explosive growth is driving adoption of AI and its potential uses.
So, what’s different about this AI versus the AI we’ve been hearing about for decades?
One thing that’s different is that recent breakthroughs in Natural Language Processing (NLP) have allowed Large Language Models (LLMs) like ChatGPT to interpret natural language queries with genuine semantic understanding. They can keep context from one discussion point to the next, making them more helpful and intuitive to use.
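For the technically curious, that context-keeping can be sketched in a few lines. A chat model is itself stateless; continuity comes from the client resending the full conversation transcript with every turn. The `fake_model` function below is a hypothetical stand-in for a real AI service, included only to make the mechanism visible.

```python
# Minimal sketch of how a chat LLM "keeps context": the client resends the
# whole conversation history on every request. The model is stateless;
# continuity comes from the transcript. `fake_model` is a made-up stand-in
# for a real API call, purely illustrative.

def fake_model(messages):
    # A real LLM would generate a reply conditioned on every prior turn.
    # Here we just show that earlier turns are visible to the "model".
    seen = " | ".join(m["content"] for m in messages if m["role"] == "user")
    return f"(model saw: {seen})"

def chat(history, user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a board advisor."}]
chat(history, "Summarize the Q3 risks.")
reply = chat(history, "Which of those affects our audit committee?")
print(reply)
```

Because the first question rides along with the second, the model can resolve “those” without the user repeating themselves.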
Another thing that’s different is accessibility. Large language models like ChatGPT are now widely available to the public through platforms like OpenAI. This accessibility has allowed people from all walks of life to experiment with AI and explore its potential uses.
It’s essential to understand that this AI is not an "overnight sensation." AI technology has been around for decades; only recently has it become more accessible and easier to use.
The vision for this kind of AI has been around even longer. Did you ever watch The Jetsons? Beyond early forays into science fiction writing, companies like Apple produced videos to help people understand what the future would hold for us all.
And Microsoft shared a more recent vision in this video showing its Cortana AI as a true digital assistant. The assistant has a very human dialog with context being kept throughout the exchange.
The technology is here now.
AI is not a replacement for human intelligence, but it is a tool that can help us solve complex problems more efficiently. The most successful AI is being used as a “co-pilot” to augment us.
Today, software developers can use GitHub Copilot to have AI code alongside them and suggest improvements. Writers can have AI offer suggestions to improve their writing.
Image - AI bot at the boardroom table – generated by OpenAI’s DALL-E.
In the boardroom, AI can also be a co-pilot. It can help ensure directors explore more aspects of a situation. By asking great questions, we can draw additional points of consideration from the AI.
AI can provide financial reviews and assessments to augment the skills of the board.
AI can be used to summarize discussions and produce draft board notes, even capturing all motions.
One Silicon Valley CEO with multiple billion-dollar acquisitions claims he never makes a board decision without asking AI for its assessment after hearing out all the angles. He says this step is necessary to ensure a full assessment and to avoid groupthink.
AI can be a contentious topic, with concerns about sentient AI and AI plagiarism. These concerns are often based on misconceptions about AI technology or on interactions designed to generate social media views. Sentient AI, for example, is still science fiction. (Think of the movie Terminator.)
If that’s true, why do AI models sometimes answer as though they were sentient?
When I was working on my computer science degree at the University of Manitoba, we had a saying: “Garbage in, garbage out” (GIGO). It still applies today. When you feed the entire internet, including volumes of social media content, to a large language model and then let it run free, you’ll get immature responses filled with vitriol unless you put up guardrails.
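A guardrail, at its simplest, is a check that sits between the model and the user. The sketch below uses a made-up keyword blocklist purely to illustrate the idea; production guardrails rely on trained classifiers and policy layers, not simple word matching.

```python
# Toy illustration of a "guardrail": screen model output against a blocklist
# before it reaches the user. BLOCKLIST is hypothetical; real systems use
# trained content classifiers rather than keyword matching.

BLOCKLIST = {"insult", "slur"}  # illustrative disallowed terms

def guarded(raw_response: str) -> str:
    lowered = raw_response.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[response withheld: failed content check]"
    return raw_response

print(guarded("Here is a balanced summary of the proposal."))
print(guarded("That idea is an insult to common sense."))
```

The point is structural: the raw model output is never trusted on its own; a separate layer decides what is allowed through.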
When an AI-driven car killed a pedestrian in the Phoenix area a few years ago, many were quick to blame the AI. But behind every AI model is a programmer (or a lot of programmers). Investigators found that the safety driver was distracted and failed to react in time. The system’s designers had assumed that warnings to the driver, growing more intense as the probability of an accident increased, would be enough; they never considered a driver who couldn’t respond. Another form of GIGO. (Unfortunately, one with life-threatening consequences.)
All of this points to one of the issues central to “good” vs. “bad” AI … BIAS!
One of the greatest risks of using AI is bias, which can be present in the data or in the coding. It’s important to acknowledge that, as humans, we all have biases in our thinking, yet most of us are blind to them.
Amazon started using AI to screen job applicants and quickly abandoned it. Why? Because Amazon used its hiring history to train the model, and that history skewed heavily male. So when screening two equally qualified candidates, one male and one female, the AI rated the male candidate as the better choice. This kind of bias in the data is easily missed or hidden.
To mitigate this risk, it’s essential to ensure that the data used to train an AI is diverse and representative of the goal you’re after. A successful example is training AI to recognize tumors in high-resolution images by training it on both confirmed tumors and confirmed benign samples.
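One practical safeguard is auditing a dataset’s balance before it shapes a model. The sketch below uses invented hiring data to show how even a simple count can surface the kind of skew that tripped up Amazon; real data audits go much deeper than a single label.

```python
# Sketch: audit the label balance of historical training data. A heavily
# skewed distribution (like the hiring history in the Amazon example) is a
# warning sign that a model trained on it will inherit the skew.
# The data here is entirely made up for illustration.
from collections import Counter

historical_hires = ["male"] * 80 + ["female"] * 20  # illustrative only

counts = Counter(historical_hires)
total = sum(counts.values())
for label, n in counts.items():
    share = n / total
    print(f"{label}: {share:.0%}")
    if share > 0.7:
        print(f"  warning: '{label}' dominates the training data")
```

A check like this doesn’t remove bias, but it forces the question “is this history the future we want to automate?” before the model is ever built.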
Other risks of using AI include personal data exposure, sharing private data with public AI engines, and ownership of the results created by AI.
To use AI ethically and responsibly, it’s essential to avoid exposing private or confidential data. This can be achieved by sanitizing the queries used with AI and questioning the results. Discernment and critical thinking are crucial when using AI. It’s essential to be aware of the limitations and biases of the technology.
Image - AI bot at the boardroom table – generated by OpenAI’s DALL-E.
However, there are also risks associated with not using AI, such as falling behind the competition and failing to gain new perspectives. AI can provide a competitive advantage by helping businesses stay ahead of the curve and identify new opportunities.
There are additional ways generative AI tools like GPT can be helpful in the boardroom as well.
Overall, GPT can be a powerful boardroom tool to streamline decision-making, increase efficiency, and improve communication. It's important to note, though, that GPT is not a replacement for human expertise, but rather a tool that can enhance it.
In conclusion, AI is becoming an essential tool for businesses, including the board. What questions will you ask to help move your board forward in leveraging this powerful technology for good?
Thank you.
Scott
Scott Baldwin is a certified corporate director (ICD.D) and co-founder of DirectorPrep.com – an online hub with hundreds of guideline questions and resources to help directors prepare for their board role.
We Value Your Feedback: Share your suggestions for future Savvy Director topics.