
Historically, the creation and delivery of products and services have been framed by the well-known “fast, cheap, and good” triangle: the idea that only two of the three attributes can be achieved at the same time.

      With the rise of agentic and generative AI, this long‑standing assumption is being challenged. Is the trade‑off still valid, or are we entering an era where speed, low cost, and high quality can coexist?


      Creation Is Easy. Validation Is Hard.

      If the triangle is being reshaped, quality is where the tension becomes most visible.

Author: Alexander Zagnetko, Manager, Process Organization and Improvement

      The time required for quality assurance and validation of generative AI outputs is growing rapidly. Common quality risks in generative AI systems include:

      • Model drift over time
      • Hallucinated or fabricated outputs
      • Embedded or amplified biases
      • Limited transparency in training data and decision logic
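The first of these risks, drift, is the most mechanically detectable. As a minimal illustrative sketch (the data and tolerance below are invented for the example, not drawn from any real deployment), a team can score a model on a fixed evaluation set at regular intervals and raise an alert when accuracy falls too far below its deployment baseline:

```python
# Minimal drift check (illustrative): score the model on a fixed
# evaluation set at regular intervals and alert when accuracy drops
# below a tolerance relative to the baseline.

def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def drift_alert(baseline_acc, current_acc, tolerance=0.05):
    """Flag drift when accuracy falls more than `tolerance` below baseline."""
    return (baseline_acc - current_acc) > tolerance

labels       = [1, 0, 1, 1, 0, 1, 0, 0]
week_0_pred  = [1, 0, 1, 1, 0, 1, 0, 1]   # 7/8 correct at deployment
week_12_pred = [1, 0, 0, 1, 1, 1, 0, 1]   # 5/8 correct three months later

baseline = accuracy(week_0_pred, labels)
current = accuracy(week_12_pred, labels)
print(drift_alert(baseline, current))  # True: well past the 5-point tolerance
```

The same loop generalises to any scalar quality metric; the essential point is that the evaluation set stays fixed, so any change in the score reflects the model, not the data.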

In many cases, the accuracy of generated content remains inconsistent or simply poor. Misinformation can arise not only from inherent model limitations or design flaws but also from intentional manipulation by malicious actors.

      When AI Trains on AI

These quality challenges are not static; they are compounding. Trusted sources of information and media are degrading, largely due to the increasing volume of AI‑generated content riddled with errors. As a result, new AI models are increasingly trained on already corrupted data. This creates a vicious cycle in which errors and fabricated narratives grow exponentially, often supported by references to sources that are themselves incorrect yet appear plausible.

“When AI systems increasingly learn from AI‑generated data, errors don’t just persist; they scale.”
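The compounding effect can be made concrete with a toy model (illustrative only; the rates below are invented, not empirical). Assume each new model generation trains on a mix of clean data and the previous generation's output, so it inherits a share of the previous errors on top of its own base error rate:

```python
def error_rate(base_error: float, synthetic_share: float, generations: int) -> float:
    """Toy model of compounding error when models train on model output.

    Each generation's training mix is `synthetic_share` AI-generated text
    (carrying the previous generation's error rate) and the rest clean data.
    The model also introduces its own `base_error` on top.
    """
    error = base_error  # generation 0 trains on clean data only
    for _ in range(generations):
        inherited = synthetic_share * error  # errors absorbed from AI-generated data
        error = base_error + (1 - base_error) * inherited
    return error

# The larger the AI-generated share of the training mix, the faster errors compound:
for share in (0.1, 0.5, 0.9):
    print(f"synthetic share {share:.0%}: error after 5 generations = "
          f"{error_rate(0.05, share, 5):.3f}")
```

In this toy model, as the synthetic share approaches 100%, the equilibrium error rate approaches 100% as well: with no clean data left in the mix, nothing anchors the system back to ground truth.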

      The Paradox of Expertise in an AI‑Driven World

      One could argue that professionals with broad, interdisciplinary knowledge will become increasingly valuable. Such individuals are better equipped to detect errors in generated content and perform meaningful fact‑checking across domains ranging from history and economics to science and technology.

      At the same time, there is a growing risk that human cognitive abilities themselves begin to erode. Just as widespread calculator use reduced the need for mental arithmetic, heavy reliance on generative and agentic AI may weaken our ability to analyse information independently, verify facts, identify trusted sources, apply constructive scepticism, or think critically.

      The Limits of Human Oversight at Scale

      As agentic systems increasingly communicate directly with one another using machine‑level protocols, meaningful human validation becomes practically impossible.

With an exponentially growing share of internet content becoming readable primarily, or exclusively, by machines, traditional concepts such as human‑in‑the‑loop oversight are increasingly difficult to maintain. Gartner, for example, predicts that by 2028, 90% of B2B buying interactions will be mediated by AI agents.

      In such an environment, humans will simply be unable to assess, validate, or meaningfully oversee the information flows and decisions produced by billions of interacting agents.

      The Myth of Effortless Quality

      As more organisations rely on AI, even employees who genuinely want to perform rigorous quality assurance often lack the time and space to do so.

      Common leadership assumptions include:

      • AI automatically improves quality
      • Speed does not require trade‑offs
      • Reduced cost implies reduced risk

Many business leaders assume that AI simultaneously reduces costs, accelerates delivery, and improves quality. This belief frequently leads to pressure on teams to produce outputs as quickly as possible, leaving little room for verification or correction.


      What Comes Next?

      The question is no longer whether AI can deliver faster and cheaper outcomes. The more pressing issue is whether organisations and societies are developing the skills, governance models, and cultural discipline required to preserve quality, trust, and independent thinking along the way.

      That is the real triangle of the AI age.

      Not fast, cheap, and good.

      But speed, scale, and trust.



      A Pragmatic Path Forward

      Companies are under pressure to move faster while maintaining high standards. In this environment, the most valuable professionals may not be those who rely on AI most heavily, but those who can question it effectively.

      Businesses can take several practical steps to use AI more responsibly:


      • Prioritise transparency:

        Document where and how AI is used, including known limitations and risks.

      • Build validation into workflows:

        Treat fact‑checking and quality assurance as mandatory steps, not optional add‑ons.

      • Balance speed with accuracy:

        Avoid equating faster delivery with better outcomes.

      • Monitor for drift and degradation:

        Regularly review AI performance and retrain or recalibrate when needed.

      • Protect critical thinking skills:

        Train teams to challenge outputs, verify sources, and apply informed scepticism.

      • Consider “AI-free” skills assessments for new hires:

        Evaluate candidates’ ability to think and solve problems without AI support, ensuring a strong baseline of independent reasoning and judgment.
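The second step, building validation into workflows, can be enforced mechanically rather than by policy alone. A minimal sketch (the check names and the `Draft` type are hypothetical, invented for this example): a publish step that refuses to release output until every mandatory check has been recorded.

```python
from dataclasses import dataclass, field

# Hypothetical check names; real checks would call reviewers or verification tools.
MANDATORY_CHECKS = ("fact_check", "source_review", "bias_scan")

@dataclass
class Draft:
    text: str
    checks_passed: set = field(default_factory=set)

def record_check(draft: Draft, name: str) -> None:
    """Record that a mandatory check was completed for this draft."""
    if name not in MANDATORY_CHECKS:
        raise ValueError(f"unknown check: {name}")
    draft.checks_passed.add(name)

def publish(draft: Draft) -> str:
    """Refuse to ship output until every mandatory check has been recorded."""
    missing = set(MANDATORY_CHECKS) - draft.checks_passed
    if missing:
        raise RuntimeError(f"cannot publish, missing checks: {sorted(missing)}")
    return draft.text

draft = Draft("AI-generated market summary")
for check in MANDATORY_CHECKS:
    record_check(draft, check)
print(publish(draft))  # releases the text only once all checks are recorded
```

The design choice matters more than the code: when skipping validation raises an error instead of merely violating a guideline, speed pressure can no longer silently erase the quality step.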


Learn more in these studies:

      • AI Governance Principles for Boards
      • Global AI Pulse Q1 2026
      • Trust, attitudes and use of artificial intelligence
      • KPMG Global tech report 2026
      • The age of Intelligence
      • AI value depends on AI security



      Contact us

Should you wish to receive more information on how we can help your business, or to arrange a meeting for a personal presentation of our services, please contact us.


      Alexander Zagnetko

      KPMG Global AI Initiative Coordinator

      KPMG in Slovakia


      Book a free consultation

      Submit your enquiry and connect with KPMG professionals.


