KPMG interviewed leaders from 300 companies across industries from all over the world. More than 70% believe that AI will have a positive impact on their company and plan to implement AI solutions within the next two years. At the same time, 92% of respondents are aware of the risks related to AI solutions, listing cybersecurity, data leaks, and copyright or intellectual property issues as the most serious. Read our summary of the risks and challenges that AI solutions bring.

In simple terms, an artificial intelligence model is a “brain” with highly specialized training focused on one niche activity. To train that brain, you need a good algorithm, a high-quality, balanced set of data, and well-established processes in case you need to quickly retrain the model because it failed to deliver the expected results. Generative artificial intelligence takes it a bit further – it is a set of algorithms capable of generating text, graphics, video, or other types of content.

Biggest AI adoption barriers: lack of experts, lack of money, and no clear business strategy

Our study revealed that 77% of business leaders from large companies consider generative AI the most relevant emerging technology they have ever seen, ranking it above other current top technologies like 5G or extended reality. 71% of these leaders plan to introduce their first generative AI-based solutions within two years, believing they will boost their company’s productivity.

Adam Vytlačil, Customer & Digital Manager at KPMG, believes generative AI can be successfully applied in areas where “bots” can take over routine tasks, draft answers, or gather information for a human operator. “Companies have been using virtual assistants to provide their customers with 24/7 support for some time now. Current generative AI models are capable of understanding customers’ questions better and providing answers that are closer to how a human would respond. This greatly improves the quality of such tools, which used to be rather poor, and expands their use as well.”

However, despite the current AI hype train, most leaders of large companies do not feel ready to implement it. Right now, companies simply lack the experts capable of implementing these new AI solutions – only 1% of our respondents said they already have such experts; the rest still plan to train or hire the required talent (or both). The other two top barriers standing in the way of AI, our study shows, are development costs and unclear business plans.

“Czech companies are not ready to use AI models systematically or efficiently – most managers still have no idea which departments could implement AI, especially the generative type, into their work, or how they could use it. We’ve had clients come up with over 100 potential use cases they hadn’t even thought about before. So, the first thing every business should do is put together a list of all possible uses of AI across the entire company – without it, you can’t efficiently manage the related risks or prepare for the upcoming AI Act,” says Ondřej Michalák, KPMG’s expert on the responsible use of artificial intelligence.

Up your cybersecurity game and keep an eye on the data

There are plenty of possibilities when it comes to using AI, but new opportunities come with new risks too – and leaders are well aware of those. The majority (77%) believe they can deal with them. Let’s take a closer look at what these new risks entail.

Potential data leaks and misuse

Most generative AI models combine their own training data with data provided by users. This is how the models learn and build up their knowledge base. The service provider can then use data acquired from one user to answer someone else’s questions, which creates a risk of sensitive data being leaked. Furthermore, for tools that operate as external services (which, for now, is most of them), licence agreements change frequently. Data are also typically stored on the provider’s servers, and users often have no idea how they might be used further.

One way to limit the risk of your data being misused is to anonymize it by removing or masking all personal or sensitive information. However, even that doesn’t guarantee the complete safety of your data: studies show that algorithms can re-identify a person even from anonymized data. And even anonymized inputs give such tools at least a vague idea of what issues your company is dealing with, making your new strategy or plans not so secret anymore.
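As a purely illustrative sketch of that first step – not a production anonymizer, and with made-up patterns and placeholder tags – masking obvious personal data before a prompt ever leaves your infrastructure could look something like this in Python:

```python
import re

# Hypothetical example: naive regex-based redaction of obvious personal
# data before a prompt is sent to an external AI service. Real
# anonymization needs far more than this (names, addresses, context).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matched personal data with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a reply to jan.novak@example.com, phone +420 601 234 567."
print(redact(prompt))
# Draft a reply to [EMAIL REDACTED], phone [PHONE REDACTED].
```

Real-world anonymization has to go much further than pattern matching, which is exactly why, as the studies above suggest, even anonymized data is never fully safe.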

Garbage in, garbage out

Anyone working with AI should keep in mind that what they’re using is not a finished product – it’s a tool that they help shape and train every day. A tool whose output will only ever be as good as the input, which is why AI models need to be fed good-quality data.
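To illustrate what feeding a model good-quality data can mean in practice, here is a minimal, hypothetical hygiene pass (the record format and thresholds are invented for the example) that filters obviously bad records before they ever reach a model:

```python
def clean_training_records(records: list[dict]) -> list[dict]:
    """Drop empty, near-empty, and duplicate records before training."""
    seen = set()
    cleaned = []
    for record in records:
        text = (record.get("text") or "").strip()
        if len(text) < 20:   # skip empty and near-empty entries
            continue
        if text in seen:     # skip exact duplicates that skew training
            continue
        seen.add(text)
        cleaned.append({**record, "text": text})
    return cleaned

sample = [
    {"text": "How do I reset my password if I lost my phone?"},
    {"text": "How do I reset my password if I lost my phone?"},  # duplicate
    {"text": "ok"},                                              # too short
]
print(len(clean_training_records(sample)))  # 1
```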

That’s why KPMG decided to offer all our staff training on generative AI as part of our Digital and Data Foundations program, designed to provide an overview of AI development and teach people how to use it to create credible content. Consider providing similar training to your employees, too.

Cybersecurity

Cybercriminals can use generative AI to create more convincing phishing attacks or to generate access codes to break into your internal systems. Another risk is so-called data poisoning, where a targeted tool is deliberately fed incorrect or harmful information, corrupting the output the AI produces.
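One narrow, illustrative defence against poisoned training files (the file names and hash in this sketch are invented) is to ingest data only from vetted sources, for example by pinning files to known checksums:

```python
import hashlib

# Purely illustrative: accept a training file only if its SHA-256
# digest matches a manifest of vetted sources. This does not stop
# poisoning at the data-collection stage, only tampering afterwards.
TRUSTED_MANIFEST = {
    "faq_answers.jsonl":
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_vetted(path: str) -> bool:
    """Return True only if the file's digest matches the manifest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    name = path.rsplit("/", 1)[-1]
    return TRUSTED_MANIFEST.get(name) == digest

# Usage: skip any candidate file that fails the check instead of
# letting it into the training pipeline.
```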

The good thing is you can prepare for these types of risks too. How? Set up general directives for the use of AI and make sure they are tailored to your tool of choice.

Misleading information

Generative AI models need enormous amounts of data to train on, which means they inevitably also learn from less trustworthy sources and unqualified users. Your AI training for employees should therefore put a strong emphasis on double- and triple-checking all AI outputs. This will help you avoid spreading misleading or incorrect information that could damage your company’s credibility.

Don’t forget about data governance

Data governance plays a key role in ensuring ethical, safe, and responsible use of data within AI systems. If you want to use your own AI models, efficient data governance should be on your to-do list.

Start by building a specialized team responsible for your data, and make sure to include people from different departments (like IT, legal, compliance, and others).

Copyright and intellectual property

Who owns content that was processed by AI? That’s a million-dollar question with no definitive answer yet. Many factors come into play, from the T&Cs of your tool of choice to how you then use the content it creates. Defining exactly how much you would have to edit a piece of text, for instance, before you can claim it as your own creation is very difficult.

One way to significantly reduce the risk of breaching copyright law is to train AI models on your own original data or on data free of third-party copyright.

AI models can greatly boost productivity, but they also make us more vulnerable – which is why you need to be certain that the AI technology you employ works as intended and creates the best possible solutions for your needs. Want to make sure your company is covered? Our Responsible AI services are here to help.


This year, KPMG conducted an online survey of 300 participants from large companies (with a turnover of 1+ billion dollars) around the world to get their take on generative artificial intelligence. The participating companies operate in a wide range of fields and markets. The first survey took place in March 2023, shortly after the world was swept up in the ChatGPT craze. Later, in June, KPMG surveyed the same participants again to see if and how their attitude towards generative AI had changed over time. Read the full survey here.