What’s Your aIQ?

How solid guardrails can help you scale AI faster

Essential elements of responsible AI

This article is the third in our ongoing "What’s Your aIQ?" series, which provides KPMG insights on the most pressing issues on clients’ minds as we navigate the AI era. Our insights are grounded in lessons from our own internal AI transformation journey, so the solutions we share are practical, tested, and add tangible value.

Diving headlong into AI without safeguards is risky business: Inaccurate or biased content could erode your organization’s reputation and brand. Less-than-robust data privacy measures could expose sensitive intellectual property. A breach involving customer data could result in legal or regulatory repercussions.

These risks are just the tip of the iceberg. As more companies look to AI to transform their organizations for the long term, strong guardrails allow them to move quickly and confidently from AI use cases and pilots to holistic AI integration, without taking on unnecessary risk. In this article, we discuss the essential elements of responsible AI, emphasizing the importance of ethics, data privacy, and governance so that companies can derive value from AI systems with the assurance that those systems are secure, fair, and reliable.

Build trust with your teams and clients by: 

1. Keeping AI fair and square

Trusted AI is all about ensuring fairness—for your workforce, within your products and services, and in your wider influence across society. Regular audits and thorough reviews can help organizations spot and address biases that might manifest in AI models and algorithms. Don’t just talk the talk; mandate continuous training on responsible and ethical AI for everyone from new practitioners to seasoned pros, making it a fundamental part of your culture. Organizations should strive to ensure that their AI systems align with their core values, such as transparency, inclusivity, and equity. This will raise the likelihood that their AI output is fair, inclusive, and reflective of the full spectrum of society. Finally, adopting a “human in the loop” approach ensures that human judgment plays a critical role in the AI lifecycle, thus driving fairness and accuracy from design to deployment.
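
To make the idea of a bias audit concrete, the sketch below shows one simple check such a review might include: comparing the rate of favorable model outcomes across groups and flagging large gaps for human review. It is a minimal, hypothetical Python illustration, not a description of KPMG's audit methodology; the group labels, sample data, and 0.10 threshold are assumptions made for the example.

```python
# Minimal illustrative sketch of one fairness check an AI audit might include:
# comparing selection (positive-prediction) rates across groups and flagging
# gaps above a chosen threshold. Group names, sample data, and the 0.10
# threshold are hypothetical, not a KPMG standard.

from collections import defaultdict

def selection_rates(predictions):
    """predictions: list of (group_label, model_decision) pairs,
    where model_decision is 1 for a favorable outcome and 0 otherwise."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in predictions:
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(predictions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
              ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    gap = demographic_parity_gap(sample)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # hypothetical review threshold
        print("Gap exceeds threshold -- route to human review.")
```

A real audit would look at many metrics beyond this single gap, but even a simple check like this, run regularly and reviewed by a human, illustrates how guardrails can be built into the AI lifecycle rather than bolted on afterward.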

Lessons from the KPMG aIQ journey:

At KPMG LLP (KPMG), we have established a multi-faceted approach—including a transformed organizational structure, dedicated workstreams, and new ways of working—that provides the guardrails to ensure AI is implemented safely and securely:

  • Built on 10 sustainable principles, our Trusted AI framework guides us in developing AI systems that are not only ethical, but also sustainable and inclusive.
  • Our Trusted AI Council leads the charge and continuously updates policies to keep up with evolving technology capabilities and regulatory shifts.
  • A dedicated Head of Trusted AI manages internal governance processes and controls and oversees the Trusted AI Council.
  • Trusted AI training is mandatory for all members of our KPMG firm.
  • Keeping humans “in the loop” for oversight and fact checking ensures that our AI implementations combine the best of human and artificial intelligence.

2. Doubling down on data integrity

As AI systems become more integral to business operations, maintaining data integrity and hygiene is essential. Stringent data management encompasses risk assessments and robust governance frameworks that ensure data is used responsibly. Educating the workforce on data privacy further reduces the risk of unintentionally exposing sensitive information and intellectual property, and helps keep implementations secure. Proactive adaptation to data privacy requirements is key to complying with evolving laws, such as the EU AI Act, and maintaining high ethical standards.
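
As a concrete illustration of the kind of guardrail this implies, the sketch below redacts a few obvious categories of sensitive data from text before it would be sent to an external AI service. It is a minimal, hypothetical Python example, not a description of KPMG's controls; the patterns, placeholder tokens, and the redact_for_ai function are assumptions for illustration, and real data privacy programs go well beyond simple pattern matching.

```python
# Minimal illustrative sketch of a pre-submission privacy guardrail: redact
# obvious sensitive patterns before text is sent to an external AI service.
# The patterns, placeholder tokens, and redact_for_ai() are hypothetical
# examples; production controls would be far more comprehensive.

import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_for_ai(text: str) -> str:
    """Replace matches of known sensitive patterns with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
    print(redact_for_ai(prompt))
    # -> "Summarize the complaint from [EMAIL REDACTED], SSN [US_SSN REDACTED]."
```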

Lessons from the KPMG aIQ journey:

From data integrity to mindfulness about potential biases, all KPMG AI initiatives strive to be fair and inclusive. We expect our entire workforce to demonstrate its dedication to these principles, supported by the following:

  • Internal policies like the Responsible Use policy set the standard for AI applications, ensuring they are both ethical and compliant.
  • Training programs are customized for employees based on their level in the organization, AI maturity, and responsibilities, and include courses covering essential information on data integrity, statistical validity, and model accuracy.
  • Courses on ensuring inclusivity help employees recognize and correct biases in data sets, algorithms, and model outputs.
  • Regular audits and updates of AI models ensure they remain free of bias and adhere to evolving ethical standards.

3. Championing AI governance

Effective AI governance requires strong leadership and collaboration across various stakeholders. Organizations should consider active participation in industry groups and initiatives aimed at shaping global AI governance frameworks. Leadership in AI governance also involves internal structures to oversee AI deployment, such as a cross-functional council comprising members from diverse functions including risk, legal, compliance, and IT. A clear responsible use policy that demands strict adherence to ethical AI practices is a critical imperative as AI becomes more ubiquitous. 

Lessons from the KPMG aIQ journey:

At KPMG, we believe that AI governance needs to be addressed beyond the four walls of our organization. To this end, we collaborate with industry leaders, think tanks, academic institutions, and others to develop common standards that will help us all reach the ideal of responsible AI. Our internal efforts include:

  • Robust data governance practices at the heart of our operations, addressing risks related to data integrity, confidentiality, and permissible use.
  • Collaborations with organizations that contribute to the development of shared standards for ensuring that AI is used responsibly and ethically.
  • A Trusted AI Council that plays a crucial role in reviewing and guiding AI policies, ensuring they align with both internal ethical standards and external regulatory requirements.
  • Readiness for proactive adaptation to regulatory changes, both in the US and globally, to ensure continuous compliance.

With a solid framework and guardrails in place, the journey toward an AI-forward future promises to be a fruitful one. A focus on data integrity and compliance solidifies trust in AI applications, making them both effective and ethical. By implementing a “human in the loop” approach, organizations can ensure that human judgment remains integral throughout the AI lifecycle. Collaboration on AI governance is equally vital, for example through proactive involvement in global AI governance initiatives designed to establish common standards and principles. Through a shared commitment to fostering trust in AI use, we can all look forward to a future where productivity enhancement and profitability gains can be achieved rapidly, securely, and with confidence.

