Managing risks in large language models
Large language models (LLMs) such as GPT-3 and Copilot have become increasingly popular for efficieint translation, chatbots and content generation. However, as with any technology, this type of artificial intelligence can open up new attack surfaces.
Consider the following potential vulnerabilities in both the training phase and production phase.
1
An LLM can only perform as well as it is trained to do, based on select datasets provided by AI engineers. These datasets create risk for:
2
Once an LLM has been deployed, users throughout an organization can typically access it for a wide array of purposes. Look out for risks such as:
3
To mitigate risks throughout the LLM lifecycle, consider the following high-level approach for security testing:
This kind of testing is an important part of cybersecurity, as threats to LLMs can result in additional attack vectors against related application programming interfaces (APIs) and networks. Conducting a thorough penetration test for LLM applications at each phase of the lifecycle can help you identify vulnerabilities, improve your security posture, and mitigate risks.
KPMG offers end-to-end security testing as an outcome-based managed service, helping you consistently validate controls while minimizing remediation efforts. That’s because business transformation is not a fixed destination; it’s an ongoing journey. With managed services, we help you continually evolve your business functions to keep up with ever-changing targets, while driving outcomes like cost reduction, resilience, and stakeholder trust.
Social engineering
A fatal flaw in cybersecurity
Preventing broken trust
Managed services can help fill critical role of application security
Application security as a culture
How to counter agile adversaries