By Patrick A. Lee and Jonathan Dambrot
The era of AI has begun with startling speed. AI and machine learning are increasingly driving business decisions and activities, and pressures continue to mount—from customers, regulators, and other stakeholders—for greater transparency into how these data-driven technologies and algorithms are being used, monitored, and managed. In particular, these stakeholders want to understand how companies are addressing the risks associated with AI systems—risks such as algorithmic bias in healthcare scoring and access to healthcare services; job application vetting, recruiting, and hiring practices; loan and credit decisions; privacy violations; cybersecurity; disinformation and deepfakes; worker monitoring; and, more recently, the risks posed by generative AI.
Despite the explosive growth in the use of AI systems and increasing concerns about the risks these systems pose, many organizations have yet to implement robust AI governance processes. In a recent global survey of more than 1,000 executives by BCG and MIT Sloan Management Review, an overwhelming majority—84 percent—said that responsible AI should be a top management priority. Yet, just 16 percent of their companies have mature programs for achieving that goal.1 Notably, a recent KPMG survey found that relatively few C-suite executives are directly involved in, or responsible for, strategies to manage AI risk and data/model governance, including establishing new processes or procedures (44 percent), reviewing AI risks (23 percent), and developing and/or implementing governance to mitigate AI risk (33 percent).2
Given the legal and reputational risks posed by AI, many companies may need to take a more rigorous approach to AI governance, including (i) monitoring and complying with the patchwork of rapidly evolving AI legislation; (ii) implementing emerging AI risk management frameworks; (iii) securing AI pipelines against adversarial threats; and (iv) assessing their AI governance structure and practices to embed the guardrails, culture, and compliance practices that will help drive trust and transparency in tandem with the transformational benefits of AI. The goal is often referred to as “ethical” or “responsible” AI—that is, making AI systems transparent, fair, secure, and inclusive. Below, we offer comments on these four areas of board focus.
Monitoring and complying with evolving AI legislation.
In addition to general data privacy laws and regulations, we are now seeing the emergence of AI-specific laws, regulations, and frameworks globally. For example, the EU’s Artificial Intelligence Act appears to be on the path to becoming law, perhaps by the end of 2023. The act may set a precedent for future risk-based regulatory approaches: it would classify AI systems according to their risk levels and ban or regulate them accordingly. There is no similar legislative framework in the U.S.; however, in October 2022, the White House released the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which could be the basis for future AI legislation. While nonbinding, the Blueprint identifies five principles “to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” A number of other countries are developing similar nonbinding principles or frameworks. Finally, various federal regulators have proposed AI-specific regulations, and there is a growing patchwork of AI-specific state and local laws. Monitoring and complying with evolving AI legislation and regulation will be a key priority for companies over the next year.