How Mature Is Your Responsible AI Program?

Approximately 55% of all organizations overestimate the maturity of their responsible artificial intelligence (RAI) program — the structures, processes, and tools that help organizations ensure that their AI systems work in the service of good while transforming their businesses. This is according to a new global survey conducted by consulting group BCG GAMMA.

In addition, less than half of the organizations that reported achieving AI at scale have a fully mature RAI implementation.

The finding is particularly important because an organization cannot achieve true AI at scale without ensuring that it is developing AI systems responsibly.

Stages of Responsible AI maturity

The survey, which was one of the first to assess the maturity of RAI implementations, collected data from senior executives at more than 1,000 large organizations. It found that organizations fell into four distinct stages of RAI maturity: lagging (14%), developing (34%), advanced (31%), and leading (21%).

An organization's stage reflects its progress in addressing seven generally accepted dimensions of RAI, including fairness and equity, data and privacy governance, and human plus AI. This last dimension ensures that AI systems are designed to empower people, preserve human authority over those systems, and safeguard people's well-being.

Steven Mills, BCG GAMMA's chief ethics officer and a coauthor, noted, "The results were surprising in that so many organizations are overly optimistic about the maturity of their responsible AI implementation. While many organizations are making progress, it's clear the depth and breadth of most efforts fall behind what is needed to truly ensure responsible AI."

Responsible AI is more than risk mitigation

Although C-suite executives and boards of directors are concerned with the organizational risks posed by an AI system failure, the survey finds that businesses are not pursuing RAI simply to mitigate those risks. Instead, leading organizations recognize that RAI is an opportunity to realize significant business benefits.

"Increasingly, the smartest organizations I'm talking to are moving beyond risk to focus on the significant business benefits of RAI, including brand differentiation, improved employee recruiting and retention, and a culture of responsible innovation — one that's supported by the corporate purpose and values," said Sylvain Duranton, BCG GAMMA's global leader and a coauthor.

Other key findings include:

  • Organizations' RAI programs typically neglect three RAI dimensions — fairness and equity, social and environmental impact mitigation, and human plus AI — because they are difficult to address.
  • Most organizations that are in the leading stage of RAI maturity have both an individual and a committee guiding their RAI strategy.
  • An organization's region is a better predictor than its industry of its RAI maturity.
  • Some regions are clearly more mature, on average, than others; Europe and North America have the highest RAI maturity.
  • Organizations in different industries engage in RAI for different reasons; the public sector, for example, is less focused on business benefits, compared with the industrial goods and automotive industries.

Best practices for reaching Responsible AI maturity

RAI leaders consistently have policies and processes that are fully deployed across their organizations covering all seven RAI dimensions. At these leading organizations, several key markers are indicative of broader RAI maturity.

  • Both the individuals responsible for AI systems and the business processes that use these systems adhere to their organization’s principles of RAI.
  • The requirements and documentation of AI systems’ design and development are managed according to industry best practices.
  • Biases in historical data are systematically tracked, and mitigating actions are proactively deployed in case issues are detected.
  • Security vulnerabilities in AI systems are evaluated and monitored in a rigorous manner.
  • The privacy of users and other people is systematically preserved in accordance with data use agreements.
  • The environmental impact of AI systems is regularly assessed and minimized.
  • All AI systems are designed to foster collaboration between humans and machines while minimizing the risk of adverse impact.

Organizations that do not follow these practices, or have not fully deployed them, are most likely not leading in RAI and should examine their RAI efforts more closely. Even those that do should continue to probe their efforts for further opportunities to improve.

Article published by icrunchdata