Artificial Intelligence (AI) promises to transform our lives, helping to make us more efficient, productive, healthier and innovative. This exciting technology is already being used across the private and public sectors, harnessing the power of data to improve forecasting, make better products and services, reduce costs, and free workers from routine administrative work.

However, as with any emerging technology, there are risks. The widespread and unregulated use of this technology raises concerns about its impact on human rights and personal privacy. This is especially true for generative AI (GenAI), which uses deep-learning algorithms and powerful foundation models, trained on massive quantities of unlabeled data, to generate new content.

This paper investigates the privacy implications of the widespread adoption of AI. It aims to uncover what this means for businesses and outlines the key steps organizations can take to utilize AI responsibly. By staying informed about the privacy implications of AI adoption and taking proactive steps to mitigate risks, companies can harness this technology’s power while safeguarding individuals’ privacy.

Explore the five key steps that can help companies build trust in AI


1. Identify the regulatory frameworks that apply

Legislators, policymakers and regulators consistently stress aligning AI systems with recognized standards. It is therefore essential to identify which regulatory frameworks apply to your business, decide which you will align with and plan how your AI will be deployed. Create a baseline for AI usage that satisfies the varying regimes and streamline your AI development and AI-related business activities accordingly.
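To make the idea of a cross-regime baseline concrete, here is a minimal sketch in Python. The regime names and control labels are hypothetical placeholders, not a statement of what any law actually requires; the point is simply that the baseline is the union of the controls demanded by every regime that applies.

    # Minimal sketch: derive a single AI-usage baseline from several regimes.
    # Regime names and control labels are illustrative placeholders only.
    REGIME_CONTROLS = {
        "EU_AI_Act": {"risk_classification", "dpia", "human_oversight", "logging"},
        "GDPR": {"dpia", "lawful_basis", "data_subject_rights"},
        "US_state_privacy": {"opt_out_handling", "notice_at_collection"},
    }

    def baseline(applicable_regimes):
        """Union of the controls required by every applicable regime."""
        controls = set()
        for regime in applicable_regimes:
            controls |= REGIME_CONTROLS[regime]
        return sorted(controls)

    print(baseline(["EU_AI_Act", "GDPR"]))
    # ['data_subject_rights', 'dpia', 'human_oversight', 'lawful_basis',
    #  'logging', 'risk_classification']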




2. Embed privacy by design across the AI lifecycle

Assess the impact on privacy and address compliance issues at the ideation stage, and throughout the AI lifecycle, through a systematic privacy impact assessment (PIA) or data protection impact assessment (DPIA). Privacy by Design, as outlined in the ISO 31700 Privacy by Design standard and the KPMG Privacy by Design Assessment Framework, can help organizations build privacy into AI systems.


Even if you believe your system only uses anonymized or non-personal data, privacy risks can emerge, including re-identification from training data sets (and even from the models themselves) and downstream impacts on individuals and communities when non-personal data is used to train models. A robust assessment will also include security and privacy threat modeling across the AI lifecycle, and stakeholder consultation where appropriate. Consider broader privacy issues such as data justice (how fairly people are treated in the way you use their data) and Indigenous data sovereignty (the rights of Indigenous peoples to govern data about their communities, peoples, lands and resources).
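To illustrate the re-identification point, the sketch below is a simplified first screen for re-identification risk: it flags groups of records that share the same quasi-identifiers but are too small to be safely "anonymous" (a basic k-anonymity check). The column names, data and threshold are hypothetical.

    import pandas as pd

    # Minimal sketch: screen an "anonymized" training set for quasi-identifier
    # groups small enough to enable re-identification. Values are hypothetical.
    df = pd.DataFrame({
        "age_band":  ["30-39", "30-39", "40-49", "40-49", "40-49"],
        "postcode3": ["M4B",   "M4B",   "K1A",   "K1A",   "V5K"],
        "diagnosis": ["a",     "b",     "a",     "c",     "b"],
    })

    QUASI_IDENTIFIERS = ["age_band", "postcode3"]
    K = 2  # any group smaller than this is a re-identification risk

    group_sizes = df.groupby(QUASI_IDENTIFIERS).size()
    risky = group_sizes[group_sizes < K]
    print(f"{len(risky)} quasi-identifier group(s) fall below k={K}:")
    print(risky)  # here: the single ("40-49", "V5K") record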




3. Scrutinize the models you build or buy

Assess the privacy risks associated with developing in-house AI solutions or using public models that train on public data. Be sure these models adhere to newly developed AI and ethical standards, regulations, best practices and codes of conduct that operationalize the requirements (e.g. NIST and ISO standards, regulatory guidance). This applies whether you are the developer or a client developing, acquiring or integrating an AI system.


If you are a client, ask the developer for documentation to support their PIA and related AI privacy risk assessments, and conduct your own assessments as well. If they can't provide this documentation, consider another provider. In many jurisdictions, including the UK and the EU, a PIA/DPIA is already a legal requirement and provides a baseline into which AI considerations should be baked. The PIA/DPIA should address initial AI use and design considerations (e.g. problem statement, no-go zones, etc.). Focus on articulating the necessity and proportionality of the data collection, as well as consent.




4. Audit and test your AI systems

If you are a developer of AI systems or a third-party vendor of AI, you should assure clients and regulators that you have taken the necessary care to build trustworthy AI. One way to do this is through an audit against recognized standards, regulatory frameworks and best practices, including an algorithmic impact assessment.

For example, test the AI system with scripts that reflect real-world scenarios and gather user feedback, to help ensure its effectiveness, reliability, fairness and overall acceptance before deployment. Such testing can help surface biased outcomes before they reach end users, as in the sketch below.
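As one hedged example of such a test script, the sketch below compares positive-outcome rates across groups in a held-out test set, a simple demographic-parity screen. The group labels, decisions and 10% tolerance are illustrative assumptions, not a complete fairness methodology.

    # Minimal sketch: compare positive-outcome rates across groups before
    # deployment. Groups, decisions and the tolerance are illustrative.
    def approval_rates(records):
        """records: iterable of (group, decision) pairs, decision in {0, 1}."""
        totals, positives = {}, {}
        for group, decision in records:
            totals[group] = totals.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + decision
        return {g: positives[g] / totals[g] for g in totals}

    test_decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                      ("group_b", 1), ("group_b", 0), ("group_b", 0)]

    rates = approval_rates(test_decisions)
    gap = max(rates.values()) - min(rates.values())
    print(f"outcome rates: {rates}, parity gap: {gap:.2f}")
    if gap > 0.10:  # tolerance chosen for illustration only
        print("Gap exceeds tolerance -- investigate before deployment.")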




5. Explain how AI impacts end users

Be prepared to answer questions and manage the preferences of individuals impacted by your development or use of AI systems. Organizations that want to use AI for automated decision-making should be able to explain in plain language how it can impact their end users.


Explainability is the capacity to articulate why an AI system reached a particular decision, recommendation or prediction. Consider developing documented workflows to identify and explain what data was used, how it was applied to the end user and how the end user can contest or challenge the use of AI for decision-making purposes.
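A minimal sketch of such a workflow, in Python, is shown below: every automated decision emits a structured record of what data was used, the outcome, a plain-language reason and a channel for contesting it. All field names, identifiers and contact details are hypothetical.

    import json
    from datetime import datetime, timezone

    # Minimal sketch: a record emitted alongside each automated decision so it
    # can later be explained and contested. All fields are illustrative.
    def decision_record(subject_id, inputs_used, outcome, reason):
        return {
            "subject_id": subject_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": "risk-model-2024-01",      # hypothetical identifier
            "inputs_used": inputs_used,                  # what data was applied
            "outcome": outcome,
            "plain_language_reason": reason,
            "contest_channel": "privacy@example.com",    # how to challenge it
        }

    record = decision_record(
        subject_id="subject-123",
        inputs_used=["income_band", "payment_history"],
        outcome="declined",
        reason="Payment history over the last 12 months was below the threshold.",
    )
    print(json.dumps(record, indent=2))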


