Nat Gross, Partner | 5 min read

In the fast-moving world of artificial intelligence (AI), regulation is evolving to keep pace. That is a tough challenge, but we have seen a great deal of activity in the UK and EU in recent months. Effective regulation is key to ensuring a responsible approach to AI and to maintaining customer trust. As KPMG’s recent Customer Experience Excellence report showed, trust is one of the key pillars of any positive customer relationship – and as AI becomes increasingly embedded in how companies operate, the trust question becomes critical.

EU and UK regulatory developments

It was timely, therefore, that our latest ‘Future of’ session for TMT covered the topic of AI regulation. We began by reminding ourselves of the current state of play. The EU has drawn up an AI Act which is very close to finalisation, although it has not yet been enacted. In the UK, the government has published a white paper setting out its approach, which has been through a consultation phase. The paper establishes five key principles.

As Usman Wahid, partner at KPMG Law, colourfully described it, if we think in terms of a technology or software development project, the EU’s approach is akin to traditional waterfall while the UK is going down a more agile route. The EU’s Act establishes the structure from which regulation will flow, whereas the UK will not create a separate set of laws or regulations for AI but will instead integrate it into existing regulatory frameworks.

The EU’s Act will come into effect across three phases (of 3, 6 and then 24 months), and its chief feature is to place AI use cases into three categories of risk: unacceptable (which will therefore be prohibited), high (which includes areas such as using AI to assist in credit scoring) and limited (where the focus will be more on transparency that AI is involved).
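To make that tiering concrete, here is a minimal sketch in Python of how a business might record its use cases against the three categories. It is purely illustrative: the RiskTier names and descriptions are simplified summaries, and the example use cases (other than credit scoring, which is named above as high risk) are our own assumptions, not text from the Act.

    from enum import Enum

    class RiskTier(Enum):
        # Simplified, illustrative summary of the three categories described above.
        UNACCEPTABLE = "prohibited outright"
        HIGH = "strict obligations apply"
        LIMITED = "transparency obligations apply"

    # Hypothetical triage of example use cases (assumptions for illustration).
    use_cases = {
        "social scoring of citizens": RiskTier.UNACCEPTABLE,
        "AI-assisted credit scoring": RiskTier.HIGH,
        "customer-facing chatbot": RiskTier.LIMITED,
    }

    for use_case, tier in use_cases.items():
        print(f"{use_case}: {tier.name} ({tier.value})")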

In the UK, the five principles that will guide regulatory approaches are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. These will overlay existing key legislation such as the Consumer Protection Act or the Data Protection Act, and will fall under the remit of some of our principal regulators to oversee and enforce.

Creating a unified and coherent UK approach

To help bring this about, a new body has been created – the Digital Regulation Cooperation Forum (DRCF), which brings together four regulators for the purpose of regulating AI: the CMA, FCA, ICO and Ofcom. We were delighted that Kate Jones, CEO of the DRCF, joined our session. She described the DRCF as the “connective tissue” between the regulators, which are working closely through the forum to “bring the pieces of the jigsaw together so we get a coherent whole”. A lot of work has already been done, as Kate outlined. For example, the CMA has conducted an interim review of foundation models; the FCA has been actively examining the use of AI in financial services; the ICO has published an AI and data protection toolkit; and Ofcom is preparing to implement the Online Safety Act.

The DRCF’s work is about establishing how the government’s principles can be applied across regulatory frameworks in a consistent way, looking at the interplays and inter-connections – and also recognising the differences. As Kate observed, “fairness” in a human rights or data protection context might have a different meaning to that in a competition law scenario, for example.

Finding the balance

Undoubtedly, there will be issues to resolve. We were also joined by Gina Neff, Executive Director at Cambridge University’s Minderoo Centre for Technology and Democracy. The Centre recently published a white paper on AI policy in which it explored various areas where there is a danger of things “falling between the cracks”. Employment law is one example: if an AI tool is used to help with hiring processes but has some discriminatory features, who bears the legal liability for that? There are also important questions relating to intellectual property: how can businesses protect their existing IP in the context of generative AI, and when they create new products and services using foundation models? As Gina said, “We need clarity on many areas, because clarity helps business make decisions. If we can get the balance right, AI can spark new products and services that will improve markets and delight customers.”

In my view, the word Gina used here – balance – is really key, because getting this right is all about striking the appropriate balance between strong ethical approaches and the encouragement of innovation. Embedding an ethical and responsible approach will not only help businesses maintain their crucially important compliance with rules and legislation; it should also provide the solid foundations on which creativity and innovation can build.

Data, trust and transparency

Data is a case in point. Isabel Simpson, partner at KPMG Law, highlighted the vital importance of complying with data protection and privacy rules when developing and deploying AI. What data is going into the AI engine? Do customers know about and consent to this? Is the data being used for a secondary purpose rather than the one it was initially collected for? “The ethical use of data will get you a long way towards compliance with rules and guidance. It is key to establishing trust,” Isabel said.

We also heard from a software vendor – Heath Ramsey, VP of Product Management at ServiceNow. Heath stressed the importance of transparency. “We recognise that we need to be very clear about how we’re using our models and data, and about the points where AI or machine learning has been injected into our products. Transparency builds trust and understanding amongst our customers.”

Heath also noted that ServiceNow has a responsible AI framework in place, along with mandatory training for staff that is regularly updated.

Keeping fit for the future

Clearly, ethics and compliance are on the minds of many as they pursue the AI agenda. One development to watch is the DRCF’s launch this spring of an AI and Digital Hub, through which innovators can apply for advice and guidance on any aspect of AI regulatory compliance and receive a single response covering the remits of all four regulators under the DRCF umbrella.

More broadly, as Usman Wahid encouraged, businesses should focus on ensuring they understand what rules and requirements are out there already and what’s coming. “Make an inventory of everywhere in your business where you use AI or semi-autonomous AI such as machine learning. Look at your governance framework and controls around this – and then map it to the regulatory landscape to see where you need to adjust or uplift your approach.”
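As a rough illustration of that advice, here is a minimal sketch in Python of what such an inventory might look like. The AIUseCase structure, the example entries and the gap-checking heuristic are hypothetical assumptions on our part, not a KPMG or DRCF tool; the cited rules are real UK instruments used only as examples.

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        # One entry in a hypothetical AI inventory.
        name: str
        description: str
        governance_controls: list[str] = field(default_factory=list)
        applicable_rules: list[str] = field(default_factory=list)

        def gaps(self) -> list[str]:
            # Crude illustrative heuristic: if no controls are recorded at all,
            # every applicable rule is flagged as a gap needing an uplift.
            return [] if self.governance_controls else self.applicable_rules

    # Hypothetical example entries, mapped to the regulatory landscape.
    inventory = [
        AIUseCase(
            name="CV screening model",
            description="Machine learning model that ranks job applicants",
            governance_controls=["bias audit", "human review of rejections"],
            applicable_rules=["Equality Act 2010", "UK GDPR"],
        ),
        AIUseCase(
            name="marketing copy generator",
            description="Generative AI drafting customer emails",
            applicable_rules=["UK GDPR", "CAP Code"],
        ),
    ]

    for uc in inventory:
        status = "covered" if not uc.gaps() else "gaps: " + ", ".join(uc.gaps())
        print(f"{uc.name}: {status}")

Even a simple record like this makes the mapping exercise visible: each use case is tied to the rules that touch it, and any entry without documented controls stands out as somewhere the approach may need to be adjusted or uplifted.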

The AI whirlwind continues – and getting the regulatory compliance piece right is fundamental to progressing on the journey.

Watch our Future of AI event here