There are plenty of possibilities when it comes to using AI, but new opportunities come with new risks too – and leaders are well aware of those. The majority (77%) believe they can deal with them. Let’s take a closer look at what these new risks entail.
Potential data leaks and misuse
Most generative AI models combine their own training data with data provided by users; this is how they learn and build up their knowledge base. The service provider can then use data acquired from one user to answer another user’s questions, which creates a risk of sensitive data being leaked. Furthermore, most of these tools operate (for now, at least) as external services, and their licence agreements change frequently. Data is also often stored on the provider’s servers, and users rarely have any insight into how it might be used further.
One way to limit the risk of your data being misused is to anonymize it by removing or hiding all personal or sensitive information. However, even that doesn’t guarantee complete safety. Studies show that algorithms can re-identify a person even in anonymized data, which can give such tools at least a vague idea of what issues your company is dealing with and make your new strategy or plans not quite so secret anymore.
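To make the idea more concrete, here is a minimal, hypothetical sketch of the simplest form of this approach: masking obvious identifiers in a prompt before it leaves your environment. The regex patterns and the redact_prompt helper are illustrative assumptions, not part of any specific tool, and a real deployment would rely on a dedicated PII-detection solution covering far more cases.

```python
import re

# Illustrative patterns for a few common identifiers (assumption: a real
# deployment would use a proper PII-detection library, not hand-written regexes).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    is sent to an external generative AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise the complaint from jane.doe@example.com, "
              "who called us on +420 601 123 456.")
    print(redact_prompt(prompt))
    # -> Summarise the complaint from [EMAIL], who called us on [PHONE].
```

Keep in mind that this kind of masking is only a first line of defence; as noted above, seemingly anonymized data can still be re-identified when combined with other information.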
Garbage in, garbage out
Anyone working with AI should keep in mind that what they’re using is not a finished product – it’s a tool that they help shape and train every day. A tool whose output will only ever be as good as the input, which is why AI models need to be fed high-quality data.
That’s why KPMG decided to offer all our staff training on generative AI as part of our Digital and Data Foundations program, which is designed to provide an overview of AI development and teach people how to use it to create credible content. Consider providing similar training to your employees, too.
Cybersecurity
Cybercriminals can use generative AI to create more convincing phishing attacks or to generate access codes to break into your internal systems. Another risk is so-called data poisoning, where a targeted tool is deliberately fed incorrect or harmful information, corrupting the output the AI produces.
The good thing is you can prepare for these types of risks too. How? Set up general directives for the use of AI and make sure they are tailored to your tool of choice.
Misleading information
Generative AI models need enormous amounts of data to train on, which means they will inevitably draw on data from less trustworthy sources or unqualified users. Your AI training for employees should therefore put a strong emphasis on double- and triple-checking all AI outputs. This will help you avoid spreading misleading or incorrect information that could damage your company’s credibility.
Don’t forget about data governance
Data governance plays a key role in ensuring the ethical, safe, and responsible use of data within AI systems. If you want to use your own AI models, effective data governance should be on your to-do list.
Start by building a specialized team responsible for your data, and make sure to include people from different departments (like IT, legal, compliance, and others).
Copyright and intellectual property
Who owns content that has been processed by AI? That’s a million-dollar question with no definite answer yet. Many factors come into play, from the T&Cs of your tool of choice to how you then use the content it creates. Defining exactly how much you would have to edit a piece of text, for instance, to claim it as your own creation is very difficult.
One way to significantly reduce the risk of breaching copyright law is to train AI models on your own original, non-copyrighted data.
AI models can greatly boost productivity, but they also make us more vulnerable – which is why you need to be certain that the AI technology you employ works as intended and creates the best possible solutions that fit your needs. Want to make sure your company is covered? Our Responsible AI is here to help.
This year, KPMG conducted an online survey with 300 participants from large companies (with a turnover of 1+ billion dollars) around the world to get their take on generative artificial intelligence. The companies that participated work in a wide range of fields and markets. The first survey took place in March 2023, shortly after the world was overtaken by the ChatGPT craze. Later, in June, KPMG surveyed the same participants again to see if and how their attitude towards generative AI models had changed over time. Read the full survey here.