
Responsible AI for Board Directors

Mar 17, 2024

In the last couple of weeks, DirectorPrep has released ChatDPQ™, our brand new AI-powered resource, custom-designed for board directors like you and me. It’s a tool that provides easy, time-saving access to high-quality insights on anything to do with boards and board work. ChatDPQ replaces the DirectorPrep Questions App (DQA) and, just like the DQA, it’s available exclusively to DirectorPrep members.

I learned from DirectorPrep co-founder Dave Jaworski that ChatDPQ is built around Microsoft’s Responsible Artificial Intelligence framework, which means it was developed with an approach that’s ethical, transparent, and accountable.

That got me thinking … What should board directors know about Responsible AI? What questions can they ask to reassure themselves that, if AI is used in their organization, it’s done in a way that’s ethical, transparent, and accountable?

For the board, governance is often a matter of scale. Directors of small-scale organizations might assume that AI isn’t a relevant issue for them. Those same directors might be surprised to learn that AI is already having an impact — if not directly on their own organization, then certainly within their industry.

 

The Rise of Generative AI

Artificial Intelligence has been imagined for decades (remember HAL 9000 in the 1968 movie 2001: A Space Odyssey?), and it’s been a reality in some industries for several years now — especially those with copious data and a need for sophisticated analytics. Using AI for predictive maintenance, tracking customer behavior, or automating manual processes didn’t seem particularly problematic from a risk point of view.

But the arrival of generative AI, and the rise of large language model platforms like OpenAI’s ChatGPT and Google’s Gemini, propelled the technology to the forefront of the public consciousness, leaving executives and boards wondering about AI opportunities and risks.

By now, many organizations could be using AI in their operations without the board’s knowledge. An organization might have acquired or developed its own AI tools, or employees may be using publicly available AI tools to generate content for reports, news releases, marketing material, and the like. Even if the organization isn’t using AI itself, it’s quite likely that others in its industry, as well as third-party providers, are.

AI is a transformational technology with huge potential. At the same time, it comes with significant risks — and it’s risk that we’re focusing on in this article. As directors, we have a fiduciary duty to ensure that the use of AI is safe, that it’s aligned with the company’s core values, purpose, and strategy, and that measures are in place to prevent harm.

The term Responsible AI (RAI) refers to designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society — allowing organizations to engender trust and scale AI with confidence.

RAI is the primary way to mitigate AI-associated risks. Before exploring RAI principles, let’s make sure we have a handle on just what those risks are.

 

Mitigating AI Risks

AI risks continue to evolve as the technology becomes ever more sophisticated and widely adopted. For directors who aren’t immersed in the technology (which is most of us), AI risk can be grouped into the following categories.

Performance Risk. AI algorithms that ingest real-world data and preferences can learn and imitate the biases and prejudices in that data, giving rise to errors, skewed outputs, and instability.

  • For the board, it’s important to understand the potential for bias. Bias can creep in through the tools, the methodology, and the underlying assumptions, and AI algorithms can amplify any biases already present in the data.

Security Risk. All automated systems have security risks, and AI is no different. Security risks include adversarial attacks, cyber intrusion, and privacy breaches.

  • For the board, data protection within a secure environment needs to be top-of-mind. Directors should realize that, when people interact with publicly accessible AI platforms, there’s no guarantee of data privacy. That means when employees use these platforms, there’s a risk of confidential information leaking into what is essentially a public forum.

Control Risk. AI control risks include lack of human agency, rogue AI, unintended consequences, and unclear accountability.

  • For the board, it’s important to be mindful that AI is capable of inventing facts, a phenomenon known as hallucination. Hallucinations don’t happen often, but when they do, people can take the invented information at face value and use it to make decisions. Human intervention is a critical factor in mitigating hallucination risk: real people need to be actively involved and capable of intervening when needed.

Enterprise Risk. AI’s objectives may be misaligned with the organization’s core values, purpose, and strategic goals, causing reputational harm, impaired financial performance, and legal and compliance issues.

  • For the board, strategic and cultural alignment of AI is an ongoing concern. AI needs to be more than just a shiny new toy — it should be of strategic importance and well-integrated into the business model and the corporate culture.

Economic and Societal Risk. Adoption of AI impacts jobs and requires new skill sets, giving rise to job displacement, increased inequality, and concentration of power. There’s also a risk of misinformation, manipulation, and surveillance.

  • For the board, concerns about jobs and training needs are paramount, both within the organization and across its industry.

 

Principles of Responsible AI

Over the last few years, companies that develop and use AI have worked to create a set of RAI principles to foster a positive impact on individuals and society while respecting privacy and minimizing harm.

Board directors needn’t be technology experts to use these principles to oversee AI risk. There are a number of RAI frameworks out there, and they’re all based on principles similar to Microsoft’s, namely fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. You can’t go too far wrong if your AI-related questions are rooted in these principles.

Fairness. AI systems should treat all people fairly, eliminating bias based on age, gender, ethnicity, etc. They should provide service and allocate resources and opportunities in a way that limits disparities and minimizes the potential for stereotyping, demeaning, or erasing demographic groups.

For the board, promoting the fairness principle involves understanding the scope of AI systems and how they’ll be used, ensuring that processes are in place to identify bias, and learning about the datasets on which AI systems are trained.

Reliability and Safety. AI systems should perform consistently, prevent harm or unintended consequences, and be constantly monitored to troubleshoot issues and improve systems.

For the board, promoting reliability and safety involves understanding the organization’s level of AI maturity, ensuring rigorous design and testing, inquiring about feedback mechanisms, and requiring periodic AI audits.

Privacy and Security. AI systems should incorporate robust data protection measures and enable users to control their personal information.

For the board, oversight of privacy and security includes ensuring compliance with all relevant data protection, privacy, and transparency laws, regulations, and standards, and keeping up with the regulatory environment as it evolves.

Inclusiveness. AI systems should be accessible to all users, including people with disabilities. The goal is to help bridge the digital divide.

For the board, inclusiveness involves ensuring compliance with laws on accessibility and inclusiveness, and inquiring about how system design reflects inclusiveness principles and standards.

Transparency. AI systems should be understandable to users. When an AI system makes a decision, there should be a clear explanation of how it was made. Users should be informed that they are interacting with an AI system or using a system that generates or manipulates image, audio, or video content that might appear to be authentic. Scandals can emerge when content is not clearly labeled as being AI-generated.

[Image: Responsible AI in the Boardroom. AI-generated image from Shutterstock.]

For the board, concerns about transparency include how employees are trained to interpret AI outputs and how users are informed about their use of AI systems and AI-generated content.

Accountability. There should be clear responsibility for system performance, along with mechanisms to address issues, including adverse impacts on people, organizations, and society. AI systems should include capabilities for informed human oversight, control, and intervention.

The board needs to think about its own role in AI governance and accountability, which will vary according to the organization’s scale and purpose. One key decision is to identify which board committee, if any, will be mandated to oversee AI. Some boards might establish a new committee for the purpose, while others delegate the task to an existing committee such as audit or risk. Still others may leave AI oversight to the full board.

The board also needs to be very clear in its direction to management about its commitment to RAI principles. Directors should satisfy themselves that responsibility for AI is being adequately managed, coordinated, and communicated within the organization. Development of a policy on the acceptable uses of AI is a key step in this direction — there are samples and templates available online.

 

RAI Questions for The Savvy Director

RAI principles can be a jumping-off point for a savvy director’s questions. Here are just a few board-level questions that might be useful the next time AI is on the agenda.

General AI Questions

  • How will AI impact our industry?
  • Which business units and functions currently use AI tools?
  • What type of tools are used and how are they used?
  • Are we building our AI tools in-house, or procuring them? If procured, from whom?
  • What framework are we using to implement AI technology responsibly?
  • What policies and controls do we have to safeguard AI models against risks, misuse, and unauthorized use?
  • How do we independently verify that AI systems are operating in a manner consistent with policies and objectives?
  • Is the management team aware of how and when employees are using publicly available AI tools to complete tasks?
  • What future uses of AI are we currently exploring?
  • What are AI’s potential impacts on the organization’s strategy and business model?
  • What are the gaps that hinder our adoption of AI?

Accountability Questions

  • How can we properly govern and monitor this powerful technology?
  • Who on the management team is accountable for AI?
  • Is there a management-level committee focused on AI?
  • What AI metrics and information would the board like to see, and how often?
  • Which board committee should provide primary oversight of AI, if any?
  • How will the board continue to be educated on AI?
  • Do we need AI expertise at the board table to oversee our strategy?

Privacy and Security Questions

  • Is our use of AI violating anyone’s privacy?
  • What are the concerns with our planned use of AI with respect to laws and regulations?
  • How do we keep up with compliance and legal issues that AI initiatives might pose?

Fairness Questions

  • Are our AI systems making accurate, bias-aware decisions? How do we know?
  • Have we considered the responsible use and societal impact of AI technology?
  • How does AI align with our organizational values, purpose, and corporate culture?

Transparency Questions

  • How should we communicate to stakeholders about our AI use?
  • How are we disclosing the use of AI systems and AI-generated content to users?

 

Your takeaways:

  • AI has huge potential, but it comes with risk. Boards must ensure that AI is used safely and that it’s aligned with the organization’s core values and ethical standards.
  • Using AI gives rise to performance, security, control, enterprise, economic, and societal risks.
  • Responsible AI involves developing and using AI with good intention to empower employees and businesses, and fairly impact customers and society. The RAI principles are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
  • Directors can use RAI principles as a jumping-off point to address the risks posed by AI.
  • Make sure directors and leaders have some knowledge of AI, its uses and its risks, and ensure that there’s relevant training for all employees.

 


Thank you.

Scott

Scott Baldwin is a certified corporate director (ICD.D) and co-founder of DirectorPrep.com – an online membership with practical tools for board directors who choose a growth mindset.

 

We Value Your Feedback: Share your suggestions for future Savvy Director topics.

 
