David Wren – I think we have to start from the perspective that AI is probably correct and that's remarkable but we're bored with that now.
Emily Salathiel – Have I got the right people using AI and what does that need to look like now and in the future?
John Georgiou – A lot of groups have built and developed AI policies but have yet to really apply them to tax.
Amar Thakrar – AI presents a significant opportunity for tax functions, so it's critical and crucial that tax functions adopt AI safely and within the broader risk environment of their organisation.
Amar Thakrar – So David, starting with you, if we think about the risks in AI, how do they apply to tax and what should tax leaders be thinking about?
David Wren – I think we have to start from the perspective that AI is probably correct, and that's remarkable.
David Wren – You know, the idea a few years ago that you could ask an agent a question and get something that was more likely right than wrong is amazing. But we're bored with that now. And so we now need to really think about how we manage that "probably".
David Wren – And "probably" is a range: everything from slightly better than 50% right, up to absolute certainty of being correct.
David Wren – So when we think about it in a context of tax and legal and regulatory environment, we really need to know that our AI is not just more likely to be right than wrong, but actually that we have absolute confidence in that answer, that we can explain it, that we're going to be accountable for it, that we really understand what we're doing with it.
David Wren – And that means there are some use cases that are emerging very quickly, I think, in tax that work really well. Any time that you can contain a problem, you can set the parameters around it and you can box it up.
David Wren – Those use cases are working really well. We're getting great results across data and really that's kind of enhancement to what we see with data and other things.
David Wren – And then we start to see agents working together, the idea of agents reviewing other agents and working together to give us a better outcome and a better insight into what is there.
David Wren – Having said that, I think we're a long way away from removing humans in the loop.
David Wren – There's always going to be that need in a tax environment that we can look at the final answer, that someone with real expertise can review that, make sure that it's correct, that we're not just relying purely on the AI.
David Wren – And so the questions we really have to ask ourselves are, how do we wrap those guardrails? How do we wrap those controls around the AI tools?
David Wren – And how do we make sure that we are properly equipped to review what comes out and to explain that, if we have to, to a tax authority?
Amar Thakrar – Thank you. Really insightful. Emily, a question for you. Lots of tax leaders are thinking about their operating model. When you bring AI into the context of the operating model, how does that thinking change?
Emily Salathiel – Yeah, no, it's a great question. And I think with any operating model change, you've got to think about doing that safely, as you talked about, and reducing the risk.
Emily Salathiel – So very much when I think about AI, I look at it across all the different layers, as with any target operating model. So thinking about your processes, for example, making sure those are documented.
Emily Salathiel – As you mentioned, David, we think about that across the broader stakeholder organisation. So what are you linking into broader policies?
Emily Salathiel – And then thinking about people, a really important lens to think through is, have I got the right people using AI? Do they have the right training?
Emily Salathiel – And what does that need to look like now and in the future? And then I think you talk about guardrails, super important to think about governance.
Emily Salathiel – And I think particularly with tax, where outcomes are often very binary, you actually do need to think about the differences across taxes.
Emily Salathiel – So, for example, employment tax. If you're thinking about using AI as an employer, you might take the data for, say, a PAYE Settlement Agreement, where the data you're getting comes from an expenses system. You can use AI for that, and you've got a human in the loop there. But when an employee is using it, say they've got their payslip and they're putting questions and queries on it to a chatbot, that's quite different. So what's the governance you need around that? Because they probably haven't got that ability to think about hallucinations, or how to filter that output.
Emily Salathiel – And then I think it's thinking about the technologies we've talked about. So a lot of my clients are grappling, do I build that internally? And I know John, you're going to talk a bit more about this. Do I do that externally?
Emily Salathiel – How do I manage some of that a bit more broadly? And then thinking about the broader service delivery model. Do I manage that myself within my team? Do I manage it at a global level?
Emily Salathiel – Do I manage it a bit more locally? So thinking through all those different aspects.
Amar Thakrar – Well, clearly a lot to think about there. David, same question for you. What's your take on it?
David Wren – I think the people aspect of what Emily is saying is really interesting. You know, we get to work with brilliant people here at KPMG and our clients all the time.
David Wren – Really knowledgeable people, people who've got a huge history in tax and working through kind of tax issues. Equally, most of those people have got 20, 30 years left of their careers.
David Wren – Slightly scary statistic in its own right. And so how do we really kind of engage with all of our people? How do we work with our clients to upskill around AI?
David Wren – What is it we can best use it for right now and maybe five years in the future? But what risks are there?
David Wren – And how do we stop people from taking the AI results and just applying them and actually really start thinking about the new critical skills that are going to be needed to review something that's been generated by AI?
Amar Thakrar – So John, it would be great to bring you into the conversation now. So thinking about all of the risks around AI and tax specifically, what should a tax leader be thinking about to mitigate and manage these challenges?
John Georgiou – Thanks Amar. If I were a tax leader, the first thing I'd want to do is to really build an AI inventory for the tax function.
John Georgiou – So I'd want to understand what AI is currently being used in the tax function, but also what AI is being used in wider finance and people processes on which the tax function is dependent.
John Georgiou – So that's really, really important. Know what you're dealing with first. I'd then want to build a framework for managing the risks related to AI within tax.
John Georgiou – So, for example, is it clear within the organisation who is responsible, who is accountable for the management of AI related risks within the tax function?
John Georgiou – And what is the risk appetite for using AI within tax?
John Georgiou – There are certain tax processes where it would be completely appropriate for AI to be involved, but there are other tax processes where you'd want a greater level of human involvement in the process and perhaps less reliance on AI.
John Georgiou – So understanding that risk appetite is really, really important within the tax function. A lot of groups have built and developed AI policies, but have yet to really apply them to tax.
John Georgiou – So that's something else I'd want to be doing as a tax leader, tapping into the broader AI governance and policies within the organisation, but ensuring the appropriate application to my tax function.
John Georgiou – I think more broadly, it's going to be really, really important for tax leaders to be able to risk assess the AI technology that they're implementing.
John Georgiou – And the risks do differ by AI technology, right? So if you're looking at generative AI, for example, the risks are more centred around hallucinations.
John Georgiou – Whereas with more machine learning focussed transactional processing, the risks are more heavily related to data accuracy, reliability, the risks that your data changes over time and therefore your model becomes less effective.
John Georgiou – So being able to risk assess the AI is absolutely critical because the next step is then to put the appropriate safeguards and controls in place to manage those risks.
John Georgiou – And obviously those controls will vary depending on the specific risks identified within that risk assessment process.
John Georgiou – As a tax leader, I'd want to have some key performance indicators in place so I can measure the success of the AI and its accuracy. And that's really, really important.
John Georgiou – A, to be able to demonstrate to a tax authority that the AI is working, but also it helps to reinforce the business case around the use of AI.
John Georgiou – If you can demonstrate that the AI that you've implemented is actually doing what you anticipated it was going to be doing, then I think that would encourage the greater adoption of AI within the tax function, but also the broader finance function.
John Georgiou – So that's really, really key.
John Georgiou – And here at KPMG, we have built a trusted AI framework, which we are helping organisations apply to tax so that tax functions can implement AI safely.
Amar Thakrar – Thanks, John. So a lot to do, but it feels like being proactive will go a long way here. Absolutely.
Emily Salathiel – Certainly the point you make around data, and thinking about key performance indicators, KPIs, is super important.
Emily Salathiel – I think particularly when it comes to data as well, you know, think about the input data and what you're getting, where you're getting that from, different sources, and then think about how you use the output data. So who's consuming that? So things like tax authorities.
Emily Salathiel – Do you find that's a one-time piece, because you're looking at an audit or a business risk review, or is that more on an ongoing basis?
Emily Salathiel – And again, how do you measure the key performance indicators for the success of that? And actually, that's going to keep evolving as AI does. I mean, we know how fast-paced it is.
Emily Salathiel – So actually what you're measuring now might not be what you're looking to measure in the future.
Amar Thakrar – Well, thank you, John, Emily and David. My key takeaways are the impact this has on people and how we need to think about people at the heart of this.
Amar Thakrar – The impact on data, data going into the AI models, data coming out of the AI models and where that's going. But also thinking about the wider finance organisation
Amar Thakrar – and how AI really fits in, in terms of the tax context to what's going on in the broader organisation and in finance, and especially aligning from a risk perspective.
Amar Thakrar – So I think clearly lots of thinking to do there.