In the fast-moving world of artificial intelligence (AI), regulation is evolving to keep pace. That may be a tough challenge, but we’ve seen a lot of activity and developments in the UK and EU in recent months. Effective regulation is key to ensuring a responsible approach to AI and maintaining customer trust. As we saw in KPMG’s recent Customer Experience Excellence report, trust is one of the key pillars in any positive customer relationship – as AI increasingly becomes embedded in how companies operate, the trust question becomes critical.
EU and UK regulatory developments
It was timely, therefore, that our latest ‘Future of’ session for TMT covered the topic of AI regulation. We began by reminding ourselves of the current state of play. The EU has drawn up an AI Act which is very close to finalisation, although it has not yet been enacted. In the UK, the government has published a white paper setting out its approach, which has been through a consultation phase. The paper establishes five key principles.
As Usman Wahid, partner at KPMG Law, colourfully described it, if we think in terms of a technology or software development project, the EU’s approach is akin to traditional waterfall development while the UK is going down a more agile route. The EU’s Act establishes the structure from which regulation will flow, whereas the UK will not be creating a separate set of laws or regulations for AI but will instead integrate it into existing regulatory frameworks.
The EU’s Act will come into effect in three phases (at 3, 6 and then 24 months after it comes into force) and its chief feature is to place AI use cases into three categories of risk: unacceptable (which will therefore be prohibited), high (which includes areas such as using AI to assist in credit scoring) and limited (where the focus will be more on transparency that AI is involved).
In the UK, the principles that will guide regulatory approaches are safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. These will overlay existing key legislation such as the Consumer Protection Act and the Data Protection Act, and will fall under the remit of some of our principal regulators to oversee and enforce.
Creating a unified and coherent UK approach
To help bring this about, a new body has been created – the Digital Regulation Cooperation Forum (DRCF) – which brings together four regulators with a role in regulating AI: the CMA, FCA, ICO and Ofcom. We were delighted that Kate Jones, CEO of the DRCF, joined our session – she described the DRCF as the “connective tissue” between the regulators, who are working closely through the forum to “bring the pieces of the jigsaw together so we get a coherent whole”. A lot of work has already been done, as Kate outlined. For example, the CMA has conducted an initial review of foundation models; the FCA has been actively examining the use of AI in financial services; the ICO has published an AI and data protection toolkit; and Ofcom is preparing to implement the Online Safety Act.
The DRCF’s work is about establishing how the government’s principles can be applied across regulatory frameworks in a consistent way, looking at the interplays and inter-connections – and also recognising the differences. As Kate observed, “fairness” in a human rights or data protection context might have a different meaning to that in a competition law scenario, for example.
Finding the balance
Undoubtedly, there will be issues to resolve. We were also joined by Gina Neff, Executive Director at Cambridge University’s Minderoo Centre for Technology and Democracy. The Centre recently published a white paper on AI policy exploring areas where there is a danger of things “falling between the cracks”. Employment law is one example: if an AI tool is used to help with hiring processes but has discriminatory features, who bears the legal liability? There are also important questions relating to intellectual property – how can businesses protect their existing IP in the context of generative AI, and when they create new products and services using foundation models? As Gina said, “We need clarity on many areas, because clarity helps business make decisions. If we can get the balance right, AI can spark new products and services that will improve markets and delight customers.”
In my view, the word Gina used here – balance – is really key, because getting this right is all about establishing the appropriate balance between strong ethical approaches and the encouragement of innovation. In fact, embedding an ethical and responsible approach will not only help businesses maintain their crucially important compliance with rules and legislation, but it should also provide the solid foundations from which creativity and innovation can follow.
Data, trust and transparency
Data is a case in point. Isabel Simpson, partner at KPMG Law, highlighted the vital importance of complying with data protection and privacy rules when developing and deploying AI. What data is going into the AI engine? Do customers know about and consent to this? Is the data being used for a secondary purpose rather than the one it was originally collected for? “The ethical use of data will get you a long way towards compliance with rules and guidance. It is key to establishing trust,” Isabel said.
We also heard from a software vendor – Heath Ramsey, VP of Product Management at ServiceNow. Heath stressed the importance of transparency. “We recognise that we need to be very clear about how we’re using our models and data, and about the points where AI or machine learning has been injected into our products. Transparency builds trust and understanding amongst our customers.”
Heath also outlined that ServiceNow has a responsible AI framework, along with mandatory training for staff that is regularly updated.