New regulations and initiatives on AI indicate an end to self-regulation for the development, deployment and use of AI systems. As AI becomes increasingly integral to organizations, proactively developing and maintaining risk management and governance frameworks for AI systems is the best way to prepare for compliance with upcoming regulation.
Regulatory Policies and Initiatives
To date, AI systems have been affected mainly by regulation of the data market. The General Data Protection Regulation (GDPR) was a critical piece of legislation for AI systems, and further regulatory policies in the same direction are now in motion. On 2 March 2021, the state of Virginia enacted yet another comprehensive consumer data protection law, the “Virginia Consumer Data Protection Act”. This new legislation (which is expected to go into effect on 1 January 2023) creates consumer rights and imposes security and assessment requirements on businesses similar to those of the GDPR.
Data is undoubtedly an important part of AI systems - but it is not the only one. The European Commission released its proposal for an AI legal framework on 21 April 2021 that is expected to have a greater impact on AI applications than the aforementioned policies. The obligations for organizations will ultimately depend on the level of risk an AI application creates: providers of “limited-risk” AI systems will need to adhere to transparency obligations, whereas providers of “high-risk” AI systems will need to comply with several strict requirements.
In the United States, a national AI policy continues to take shape. Similar in spirit to the EU proposal, the “Algorithmic Accountability Act” was proposed in 2019. Although the bill did not pass, it would have mandated reviews of the costs and benefits of AI systems with respect to AI risks. The bill is expected to be reintroduced in the near future, as it continues to enjoy broad support in both the research and policy communities. Furthermore, the U.S. Federal Trade Commission (FTC) released a strict set of guidelines on “truth, fairness, and equity” in AI. It has warned organizations that their artificial intelligence should not result in racial or gender bias. The FTC bluntly stipulated that a failure to keep these biases in check may result in “deception, discrimination – and an FTC law enforcement action”.
The evolving policies around AI systems have a number of consequences for the business world. On the one hand, for companies that are about to embrace AI technologies, there is growing uncertainty as to how, or whether, they should use AI. This could affect their decision to adopt AI technology, which might in turn hurt their competitiveness and market share in the coming years.
On the other hand, for companies that have already adopted AI technology, the uncertain regulatory environment could pose return-on-investment challenges. For instance, an R&D team could spend months developing new AI capabilities, only to find them non-compliant with upcoming regulation.
Last but not least, regulatory differences across countries could hamstring AI development in some regions and distort the international competitive landscape. For instance, many European officials have raised concerns that Europe will fall further behind the United States and China in the field of AI once the recently proposed EU regulations take shape.
How should we prepare?
A plethora of roadmaps and guidelines published by countries and independent organizations signal the shape of the upcoming regulatory frameworks on AI. Following the tangible actions described therein is the best chance companies have of ensuring that their AI systems do not conflict with future policies and laws. More specifically, companies should:
- conduct “algorithmic impact assessments”, i.e. identify, catalogue, and describe the risks of their AI systems, and assess how such risks can be mitigated or addressed. Documentation needs to be in place that clearly captures all risk aspects of AI systems. In this way, organizations will be able to comply with new and upcoming AI policies.
- develop internal accountability frameworks. Different stakeholders need to participate in the risk assessment process for AI systems - not only data scientists. For instance, business owners, upper management and lawyers may have different incentives with respect to the company’s AI system, which will eventually shape intra-company alignment and consensus on its characteristics.
- prepare independence frameworks around their AI systems. Especially for “high-risk” AI systems, it is paramount that organizations turn to a neutral third party, i.e. an outside AI counsel, that will run an in-depth assessment of the technical and governance aspects of the AI systems.
- continuously review and monitor their AI systems. “Edge cases”, i.e. situations for which similar data was scarce or non-existent during the training phase of an AI system, might occur at any point in time. This could expose organizations to unpredictable risks. Furthermore, the underlying mathematical models typically change as the systems are retrained, which means that AI risks need to be re-assessed continuously.
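To make the last point concrete, the sketch below shows one simple way continuous monitoring can flag potential edge cases: record basic statistics of a numeric input feature at training time, then flag incoming values that fall far outside that training distribution. This is an illustrative example, not a prescribed method; the function names and the z-score threshold are assumptions, and real monitoring pipelines typically use richer, multivariate drift and outlier detection.

```python
import statistics

def fit_profile(training_values):
    """Record simple statistics of a numeric feature seen during training."""
    return {
        "mean": statistics.mean(training_values),
        "stdev": statistics.pstdev(training_values),
        "min": min(training_values),
        "max": max(training_values),
    }

def flag_edge_cases(profile, new_values, z_threshold=3.0):
    """Flag inputs that lie far outside the training distribution."""
    flagged = []
    for v in new_values:
        z = abs(v - profile["mean"]) / profile["stdev"] if profile["stdev"] else 0.0
        if z > z_threshold or v < profile["min"] or v > profile["max"]:
            flagged.append(v)
    return flagged

# Hypothetical feature values seen during training vs. in production:
profile = fit_profile([10, 12, 11, 13, 12, 11, 10, 12])
print(flag_edge_cases(profile, [11, 12, 45]))  # 45 lies far outside the training range
```

Flagged inputs would then feed back into the risk re-assessment process described above, since they indicate the system is operating on data it was never trained for.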
All of this requires high-end expertise that many companies lack. It is likely that third-party support will be needed. As regulatory events unfold, such experts will become scarcer, meaning that it is best to think about this early on.