Over the past few years, Artificial Intelligence (AI) has transformed from an abstract idea into part of daily life. AI technologies including chatbots, smart speakers, and virtual assistants are now such a routine part of everyday activities that most of us can hardly remember life before them.

      AI adoption continues to accelerate. A recent survey led by the University of Melbourne in collaboration with KPMG, “Trust, attitudes and use of Artificial Intelligence: A global study 2025”,1 shows that over 70 percent of organizations report plans to implement AI in the next two years. Organizations are leveraging the power of AI to help improve data-based predictions, optimize products and services, scale innovation, and enhance productivity. From diagnosing illnesses, to detecting fraud, to screening resumes, AI is now reshaping some of our most critical industries, including healthcare, insurance, and government services.

      The promise of AI is clear: it offers smarter decision-making, greater efficiency, and lower costs. The rewards, however, are not without risks. If not implemented responsibly, AI can carry systemic inequalities into new systems, deepening disadvantages for the most vulnerable parts of society. Underpinning AI growth with design and governance principles that address social fairness should now be an imperative for public and private institutions alike.

      Barriers to fairness in AI

      The AI dilemma we are facing today is not whether AI is good or bad, but how it can be designed and governed for fairer outcomes. While most organizations are rapidly adopting AI, arguably far fewer are addressing the question of AI fairness, despite growing concerns.

      Most AI systems are built on historical data, which means the same societal patterns and inputs that shaped the past are also shaping the future. This could perpetuate inequities for disadvantaged groups through common applications of AI, including, for example, a healthcare system trying to reduce diagnostic gaps; an insurer automating claims processing; a public agency hiring frontline staff; or a financial institution working to improve credit access for new customers. As AI expands across industries, addressing its biases becomes increasingly important.

      The survey2 reveals that three out of five people remain wary about trusting AI. While AI can level the playing field with its capabilities, adoption without considering fairness and equity can undermine AI’s credibility. In sectors like insurance and healthcare, where customer trust is essential, strong design and robust governance that check for AI bias can make the difference between an institution being seen as innovative and being seen as running faulty systems that put its reputation at risk.

      Fairness in AI cannot and will not emerge on its own. It will only exist as the result of deliberate choices made throughout the AI lifecycle. This includes how problems are framed, how algorithms and models are trained, and how results are monitored and adjusted over time, all with human oversight.

      Unless the question of social fairness in AI is intentionally addressed, existing divisions in society will be reinforced. In that case, the rapid pace and reach of AI, often seen as its greatest strengths, become significant liabilities.


      AI biases

      How does AI reflect and amplify inequities that already exist in our systems? The root of the issue lies in how AI models learn: largely from historical patterns and proxies rather than from neutral, independent data. As these patterns often include deeply embedded social and institutional disparities, the models can unknowingly reproduce them. These AI systems are not malicious; they’re simply replicating what they’ve learned.

      The “Trust, attitudes and use of Artificial Intelligence” report3 shows that commercial organizations and governments are the institutions people trust least to develop, use, and govern AI. This distrust is not without some justification:

      • In healthcare, diagnostic models trained primarily on datasets from white male patients have produced less accurate outcomes for women and people of color, leading to misdiagnoses and delayed treatment.
      • In insurance, AI tools used to assess claims or calculate premiums have correlated postal/zip codes with risk profiles, adversely affecting individuals from low-income or racialized communities even when their personal behavior suggests otherwise.
      • In government, AI has been deployed to streamline decision-making from screening job applicants to prioritizing social benefits. However, when location, education, or employment history are used as proxies for performance, the result can be biased. Recruiting models might deprioritize applicants from certain neighborhoods, inadvertently excluding the very communities the public institution is meant to serve.
      • In financial services, where AI-driven fraud detection and credit scoring are now standard, unintended harms are emerging. Transactions from new immigrants or individuals with limited credit histories can be flagged as high-risk, not because of behavior, but because of a lack of sound historical data and “recognized” patterns. Loan approvals have been delayed or denied without clear explanations, eroding trust in the very systems meant to support all customers.

      What makes addressing AI fairness more complex is that the decision-making logic is often opaque, with biases effectively “baked into” the resulting decisions.

      When an AI model produces a result like denying a mortgage, assigning a lower insurance payout, or recommending fewer healthcare interventions, there may be no clear human-readable explanation behind it. This lack of transparency makes it harder to intervene, challenge decisions, or even recognize when something has gone wrong.

      Without thoughtful design and meaningful governance, these models can reinforce social disadvantages under the guise of objectivity. When fairness isn’t embedded at the design stage, into how problems are framed, data is selected, and models are trained, even well-intentioned systems can produce skewed or unbalanced outcomes.

      As AI systems increasingly shape decisions about who gets a job, who receives a cancer diagnosis, who gets access to a mortgage, and who obtains the right end-of-life care, the AI fairness challenge grows more urgent.


      Start at the beginning

      Fairness in AI isn’t just about writing better code; it’s about embedding safeguards into the way systems are planned, built, and used. The challenge isn’t just how to build smarter systems; it’s how to build ones that are fairer from the start. Designing AI for fairness begins long before deployment, at the strategy table, with questions about purpose and people.

      Fairness must also be embedded in how systems are designed from the outset. This includes how problems are framed, data is selected, and models are trained. Organizations need clear oversight of their AI models: they must understand who is accountable when a model fails and, most importantly, when human intervention is needed.

      Many organizations are now introducing solutions to address this need, including fairness frameworks, bias-testing protocols, and interpretability tools, to gain greater insight into their AI systems and make them more transparent, accountable, and fair.

      Trust will follow

      The research4 shows that most people would be more willing to trust an AI system when assurance mechanisms are in place. A key factor in building more responsible AI is acknowledging, rather than denying, the existence of bias. An AI system can then be built, tested, and deployed with stronger ethics checks and human oversight, so that over time its outcomes become fairer and more trustworthy.

      Success is not determined by technical interventions alone: checks and balances, continuous tracking, and testing. It is also a question of leadership, of making conscious decisions to ensure systems are developed with fairness paramount. Models also need to be built by more diverse teams and with more cross-functional thinking, from data scientists and compliance leads to HR directors and community advocates. The more inclusive and collaborative the input, the greater the opportunity for AI to reduce social disadvantages rather than amplify them.

      Keeping it human

      Increasingly, regulations governing AI include provisions to check for and mitigate bias in AI systems. For example, the European Union’s AI Act5 mandates risk assessments, data governance, bias detection, and conformity assessments for high-risk AI, while the U.S. relies on existing laws, including the Federal Trade Commission Act and the Fair Credit Reporting Act. Indeed, KPMG’s “Trust in AI” survey shows that most people (over 70 percent) expect AI to be regulated to ensure these systems are fair, transparent, and will not discriminate against certain groups.

      Ensuring fairness as AI scales is not just about greater regulatory control; it’s also about maintaining the involvement of the humans who are creating and using it. Human judgment, control, and decision making are even more crucial in an AI-enabled world. AI might be taking on many roles that humans used to carry out, but it should not be deciding by itself who gets jobs, mortgages and care or who benefits from the efficiencies it brings.

      To support organizations in building AI systems that promote fairness and reduce harm, several design and governance principles are critical:


      Embed governance early
      Establish oversight structures that clarify responsibilities, align with ethical standards, and allow for responsive course correction.
      Mitigate bias at every stage
      Source representative data, stress-test models for disparate impacts, and avoid over-reliance on sensitive proxies like postal/zip codes or education level. Regular bias auditing helps surface and mitigate the worst effects; a simple disparate-impact check is sketched after this list.
      Ensure transparency and explainability
      Equip developers and business users with tools such as model cards (which document a model’s purpose, limitations, and performance), fairness indicators, interpretability techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), and interactive visualization tools like the What-If Tool. These solutions make it easier to interrogate and understand AI decisions, whether in insurance, healthcare, hiring, or public services; a minimal SHAP example is also sketched after this list.
      Engage stakeholders meaningfully
      Involve workforce representatives, domain experts, AI practitioners, and diverse communities throughout the lifecycle to ground AI decisions in lived experience and practical context.
      Upskill the workforce
      Pair AI adoption with job redesign, critical thinking training, and digital fluency programs so employees can use AI as an enabler, not a replacement.
      Implement assurance mechanisms
      Integrate independent assessments, audits, and monitoring processes throughout the AI lifecycle to verify that systems meet fairness and compliance standards. These mechanisms help build trust by providing evidence that AI is working as intended and that identified risks are addressed.
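
      To make the bias stress-testing principle above concrete, the following is a minimal sketch of a disparate-impact check. The group column, outcome column, synthetic data, and the four-fifths (0.8) benchmark are illustrative assumptions rather than a prescribed method; a real review would use an organization's own protected attributes, outcomes, and thresholds.

```python
# Minimal disparate-impact sketch on synthetic data (hypothetical columns and threshold).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's selection rate."""
    selection_rates = df.groupby(group_col)[outcome_col].mean()
    return selection_rates / selection_rates.max()

# Synthetic example: 1 = approved, 0 = denied.
applications = pd.DataFrame({
    "postal_region": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":      [1,   1,   0,   1,   0,   0,   0,   1],
})

ratios = disparate_impact_ratio(applications, "postal_region", "approved")
print(ratios)
# Flag groups falling below the illustrative four-fifths (0.8) benchmark for human review.
print("Needs review:", list(ratios[ratios < 0.8].index))
```

      A check like this does not prove or disprove discrimination; it simply surfaces gaps in outcomes that warrant human investigation.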
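
      Similarly, for the transparency and explainability principle, here is a minimal sketch of how SHAP values can reveal which features drive a model's predictions. The model, the synthetic “risk score”, and the feature names (including the postal_region_code proxy) are hypothetical; the same pattern applies to LIME or the other interpretability tools mentioned above.

```python
# Minimal SHAP sketch: show which (synthetic) features drive a model's risk scores.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["income", "years_at_address", "postal_region_code"]  # hypothetical features
X = rng.normal(size=(500, 3))
# Synthetic "risk score" that leans heavily on the postal-region proxy.
y = 0.2 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global view: mean absolute contribution of each feature to the predictions.
importance = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(feature_names, importance), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.3f}")
```

      If a sensitive proxy dominates the ranking, that is a prompt for review rather than an automatic verdict; interpretability tools surface questions that humans still need to answer.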



      Conclusion

      Ultimately, the true promise of AI lies in how well its intelligence and efficiency are balanced with fairness and accountability — especially for those most impacted by its outcomes.

      Confidence in AI will come from how its systems are regulated, designed, and governed, as well as from the ability of humans to understand them and intervene to address the biases they retain.

      Embedding fairness into every stage of AI development, from how problems are defined to how data is chosen and how models are trained, is a huge challenge. However, as AI increasingly influences access to jobs, financial services, healthcare, and more, there is also an opportunity. Training it to be fair could be a giant leap forward for social fairness and equity.

      How KPMG can help

      There’s no single AI roadmap, but there are frameworks that help. KPMG’s Trusted AI framework guides organizations through the responsible design, deployment, and monitoring of AI systems. It emphasizes fairness, transparency, and accountability, not as afterthoughts but as core design principles. With the right guardrails, AI doesn’t just improve efficiency and effectiveness; it actively enables fairer and more accountable decisions.


      1Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919

      2Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919

      3Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919

      4Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919

      5European Commission “AI Act enters into force”, August 1st, 2024



      Our people

      Silvia Gonzalez-Zamora

      Partner, Management Consulting, Global Social Sustainability Leader

      KPMG in Canada

      Steve Chase

      Global Head of AI and Digital Innovation, KPMG International & Vice Chair – Artificial Intelligence & Digital Innovation

      KPMG in the U.S.

      Bryan McGowan

      Global and US Trusted AI Leader

      KPMG in the U.S.