In the last couple of weeks, DirectorPrep has released ChatDPQ™, our brand new AI-powered resource, custom-designed for board directors like you and me. It’s a tool that provides easy, time-saving access to high-quality insights on anything to do with boards and board work. ChatDPQ replaces the DirectorPrep Questions App (DQA) and, just like the DQA, it’s available exclusively to DirectorPrep members.
I learned from DirectorPrep co-founder Dave Jaworski that ChatDPQ is built around Microsoft’s Responsible Artificial Intelligence framework, ensuring it was developed with an approach that’s ethical, transparent, and accountable.
That got me thinking … What should board directors know about Responsible AI? What questions can they ask to reassure themselves that, if AI is used in their organization, it’s done in a way that’s ethical, transparent, and accountable?
For the board, governance is often a matter of scale. Directors of small-scale organizations might assume that AI isn’t a relevant issue for them. Those same directors might be surprised to learn that AI is already having an impact — if not directly on their own organization, then certainly within their industry.
Artificial Intelligence has been imagined for decades (remember HAL 9000 in the 1968 movie 2001: A Space Odyssey?), and it’s been a reality in some industries for several years now — especially those with copious data and a need for sophisticated analytics. Using AI for predictive maintenance, tracking customer behavior, or automating manual processes didn’t seem particularly problematic from a risk point of view.
But the arrival of generative AI, and the rise of large language model platforms like OpenAI’s ChatGPT and Google’s Gemini, propelled the technology to the forefront of public consciousness and set executives and boards wondering about AI opportunities and risks.
By now, many organizations could be using AI in their operations without the board’s knowledge. They might have acquired or developed their own AI tools, or their employees may be using publicly available AI tools to generate content for reports, news releases, marketing material, and the like. Even if an organization isn’t using AI itself, it’s quite likely that others within its industry, as well as third-party providers, are.
AI is a transformational technology with huge potential. At the same time, it comes with significant risks, and it’s risk that we’re focusing on in this article. As directors, it’s our fiduciary duty to ensure that the use of AI is safe, that it’s aligned with the company’s core values, purpose, and strategy, and that measures are in place to prevent harm.
The term Responsible AI (RAI) refers to designing, developing, and deploying AI with good intentions: to empower employees and businesses, and to impact customers and society fairly. It allows organizations to engender trust and scale AI with confidence.
RAI is the primary way of mitigating AI-associated risks. Before exploring RAI principles, let’s make sure we have a handle on just what those risks are.
AI risks continue to evolve as the technology becomes ever more sophisticated and widely adopted. For directors who aren’t immersed in the technology (which is most of us), AI risk can be grouped into the following categories.
Performance Risk. AI algorithms that ingest real-world data and preferences can learn and imitate human biases and prejudices, giving rise to errors, biased outputs, and instability.
Security Risk. All automated systems have security risks, and AI is no different. Security risks include adversarial attacks, cyber intrusion, and privacy breaches.
Control Risk. AI control risks include lack of human agency, rogue AI, unintended consequences, and unclear accountability.
Enterprise Risk. AI’s objectives may be misaligned with the organization’s core values, purpose, and strategic goals, causing reputational harm, impaired financial performance, and legal and compliance issues.
Economic and Societal Risk. Adoption of AI impacts jobs and requires new skill sets, giving rise to job displacement, increased inequality, and concentration of power. There’s also a risk of misinformation, manipulation, and surveillance.
Over the last few years, companies that develop and use AI have worked to create a set of RAI principles to foster a positive impact on individuals and society while respecting privacy and minimizing harm.
Board directors needn’t be technology experts to use these principles to oversee AI risk. There are a number of RAI frameworks out there, and they’re all based on principles similar to Microsoft’s, namely fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. You can’t go too far wrong if your AI-related questions are rooted in these principles.
Fairness. AI systems should treat all people fairly, eliminating bias based on age, gender, ethnicity, etc. They should provide service and allocate resources and opportunities in a way that limits disparities and minimizes the potential for stereotyping, demeaning, or erasing demographic groups.
For the board, promoting the fairness principle involves understanding the scope of AI systems and how they’ll be used, ensuring that processes are in place to identify bias, and learning about the datasets on which AI systems are trained.
Reliability and Safety. AI systems should perform consistently, prevent harm or unintended consequences, and be constantly monitored to troubleshoot issues and improve systems.
For the board, promoting reliability and safety involves understanding the organization’s level of AI maturity, ensuring rigorous design and testing, inquiring about feedback mechanisms, and requiring periodic AI audits.
Privacy and Security. AI systems should incorporate robust data protection measures and enable users to control their personal information.
For the board, oversight of privacy and security includes ensuring compliance with all relevant data protection, privacy, and transparency laws, regulations, and standards, and keeping up with the regulatory environment as it evolves.
Inclusiveness. AI systems should be accessible to all users, including people with disabilities. The goal is to help bridge the digital divide.
For the board, inclusiveness involves ensuring compliance with laws on accessibility and inclusiveness, and inquiring about how system design reflects inclusiveness principles and standards.
Transparency. AI systems should be understandable to users. When an AI system makes a decision, there should be a clear explanation of how it was made. Users should be informed that they are interacting with an AI system or using a system that generates or manipulates image, audio, or video content that might appear to be authentic. Scandals can emerge when content is not clearly labeled as being AI-generated.
For the board, concerns about transparency include how employees are trained to interpret AI outputs and how users are informed about their use of AI systems and AI-generated content.
Accountability. There should be clear responsibility for systems performance and mechanisms to address issues, including adverse impacts on people, organizations, and society. AI systems should include capabilities for informed human oversight, control, and intervention.
The board needs to think about its own role in AI governance and accountability — this will vary according to the organization’s scale and purpose. One key decision is to identify which board committee, if any, will be mandated to oversee AI. Some boards might establish a new committee to do so, while others decide to delegate it to an existing committee such as audit or risk. Still others may leave AI oversight to the entire board.
The board also needs to be very clear in its direction to management about its commitment to RAI principles. Directors should satisfy themselves that responsibility for AI is being adequately managed, coordinated, and communicated within the organization. Development of a policy on the acceptable uses of AI is a key step in this direction — there are samples and templates available online.
RAI principles can be a jumping-off point for a savvy director’s questions. Here are just a few board-level questions that might be useful the next time AI is on the agenda.
General AI Questions
Accountability Questions
Privacy and Security Questions
Fairness Questions
Transparency Questions
Thank you.
Scott
Scott Baldwin is a certified corporate director (ICD.D) and co-founder of DirectorPrep.com – an online membership with practical tools for board directors who choose a growth mindset.
We Value Your Feedback: Share your suggestions for future Savvy Director topics.