2026 is shaping up to be the year Artificial Intelligence (AI) takes its place as a critical component of business infrastructure, marking a significant shift from experimental tool to core business asset. AI initiatives are inherently early in their innovation cycle, and assessments of the new risks they bring are often treated as low priority. However, as AI becomes embedded in the systems and processes that organizations rely on, it is fast becoming a target for disruption, manipulation, and exploitation. The speed and breadth of these developments create a growing imperative to assess the impact on overall organizational resilience.

      The KPMG Global tech report 2026 reveals just how fast AI is being integrated into business operations. Confidence is high: half of participating organizations reported they are now deploying AI use cases more broadly, with 68 percent expecting a good return on their investment by the end of this year. That is a 44 percent increase over last year, reflecting a stronger understanding of how to derive value. Eighty-eight percent said they are investing in the development of agentic AI systems in particular, looking to accelerate and, ultimately, reshape functions and workflows. Day to day, employees increasingly rely on AI tools and features to draft content, summarize meetings, analyze data and generate code, either through licensed enterprise access or unmonitored private online subscriptions where governance may be limited.

      Rapid adoption of AI without governance, and without assessment of the advancing levels of operational risk, can open organizations up to serious threats. Exposures can be significant, ranging from potential quality failures in AI components in the development supply chain to a proliferation of new vulnerabilities that undermine existing cyber defenses. AI models themselves are becoming a target, as they represent new intellectual property being developed by organizations. Concerns extend beyond technology risk, pointing to a need for stronger protections for AI‑driven operations and outcomes.

      Critical reliance on agentic systems

      Agentic systems use Large Language Models (LLMs) to power software agents that work together to execute multi-step tasks, often applying machine-learning techniques to optimize processes and outcomes. Their advance is well illustrated in software development, where developers are becoming architects and managers of systems that now perform the coding, testing and other tasks that previously dominated their work.
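The multi-step pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation: the `call_llm` planner is a stub standing in for a hosted model API, and the tool names are hypothetical.

```python
# Minimal sketch of an agentic loop: a planner (here a stub standing in
# for an LLM API call) proposes the next step, the agent executes it with
# a registered tool, and the loop repeats until the planner signals "done".
from typing import Callable

def call_llm(goal: str, history: list[str]) -> str:
    """Stub planner: returns the name of the next step, or 'done'."""
    plan = ["write_code", "run_tests", "done"]  # illustrative fixed plan
    return plan[len(history)] if len(history) < len(plan) else "done"

# Registry of tools the agent is allowed to invoke (hypothetical names).
TOOLS: dict[str, Callable[[], str]] = {
    "write_code": lambda: "code written",
    "run_tests": lambda: "tests passed",
}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):            # hard cap bounds runaway loops
        step = call_llm(goal, history)
        if step == "done":
            break
        history.append(TOOLS[step]())     # execute the chosen tool
    return history

print(run_agent("ship feature"))  # -> ['code written', 'tests passed']
```

Even in this toy form, the loop shows why dependency grows quickly: each step's outcome feeds the next decision, so a compromised planner or tool taints everything downstream.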

      Opportunity for compromise and disruption increases as dependency grows. A growing body of research on ‘poisoning’ cyberattacks against frontier LLMs (which underpin agentic systems) shows that even small amounts of rogue data can introduce error and disrupt operations. Recent cybersecurity alerts on ‘distillation attacks’ describe adversaries’ rising ability to mimic legitimate agent-creation workflows, making the surrounding cloud-based ecosystem susceptible to hard-to-detect compromise. Such attacks target the novel IP encoded in models – exfiltrating valuable artifacts (weights, training data, system prompts) for resale and using compromised models to enhance common cyberattack tooling.[1] Recent reports of suspected government-backed espionage using agentic AI[2] with minimal human oversight underscore the potential for a sharp increase in both the volume and precision of adversarial campaigns.

      Prioritize advancing dependencies

      Such disclosures, alongside escalating ambitions to innovate with AI, signal an urgent need to anticipate and shore up growing dependencies. Current cyber-risk analyses may underestimate AI risk because the impact is still poorly understood and adversaries’ use of agentic capabilities is hard to detect on networks. Governance around how AI systems are developed and embedded is also nascent: tools to monitor use remain immature, as does understanding of what should be monitored. KPMG incident-response analysts are beginning to surface ‘unidentifiable’ tools within networks, while noting faster compromise and more high-value targeting – indicators consistent with agentic techniques for credential and data theft. In our view, coordination between systems teams and corporate risk management is needed to elevate visibility, guide acceptable levels of risk, and assign the clear accountability needed to manage these evolving threats.

      Human-centric foundations

      KPMG developed its Trusted AI framework to foster the organization-wide conversations needed for the challenges ahead. With AI agents increasingly working alongside employees in hybrid workflows, it emphasizes the characteristics to be preserved from employee-driven processes today: being values-driven, human-centric, and trustworthy. Priority should be given to the ethical judgements organizations want these systems to uphold, so they adhere to corporate values and codes of conduct. Systems should, for example, be developed with wrap-around controls that ensure they act within the rules and even report themselves when they break them, creating opportunities to surface compromise and error. Developers may not be able to rely on model training alone for such assurances. Foundational considerations for developing these controls include:

      • Purposeful deliberation on what must be preserved across processes, systems, and data governance, to inform the development of controls and assurance mechanisms.
      • Articulation of policies that set parameters for decision-making and reliability thresholds, to guide the design of AI-assisted workflows.
      • Identification of meaningful, auditable evidence of the assurance and explainability needed to evaluate whether AI agents behave as intended.

      Review cyber defenses

      AI agents are becoming new targets: they are often granted corporate identities and can be prompted to spin up other digital identities. Disparate systems can be developed to interact with each other and across departments, supported by a fast-evolving ecosystem of new services, app builders and related tools for working with LLMs. Introducing such components into core systems brings great potential for new vulnerabilities that can cause disruption or serve as a portal into wider networks. Cyber defenses should be tuned to this evolving risk landscape and assure:

      • Progression of AI asset inventories and monitoring systems that elevate visibility for cybersecurity teams across the entire supply chain of components.
      • Rollout of zero-trust architectures, or least-privilege access controls, for AI agents, mirroring the relevant controls that apply to employees.
      • Defensive AI capabilities that match the nature and velocity of changing attacks.
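The least-privilege point above can be made concrete with a deny-by-default check for agent identities, mirroring the role-based controls applied to employees. Agent names and action strings below are hypothetical examples, not a prescribed schema.

```python
# Illustrative least-privilege check for AI agent identities: each agent
# holds an explicit allow-list of actions, and anything not granted is
# denied by default — including requests from unknown identities.
AGENT_GRANTS: dict[str, set[str]] = {
    "report-drafter": {"read:finance_data", "write:draft_reports"},
    "code-assistant": {"read:repo", "write:pull_request"},
}

def is_allowed(agent_id: str, action: str) -> bool:
    """Deny by default: unknown agents and ungranted actions both fail."""
    return action in AGENT_GRANTS.get(agent_id, set())

# Usage: granted actions pass; everything else is refused.
assert is_allowed("report-drafter", "read:finance_data")
assert not is_allowed("report-drafter", "write:pull_request")  # not granted
assert not is_allowed("unknown-agent", "read:repo")            # unknown identity
```

Because agents can be prompted to spin up further identities, the deny-by-default posture matters: an identity that was never provisioned gets nothing, rather than inheriting ambient access.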

      Looking ahead

      Recognition of AI’s strategic value is influencing the development of unified programs and, in some cases, the thinking around the controls that underpin corporate policy and culture. The Chief AI Officer role is becoming more common. This lays the groundwork for the visibility and coordination needed to prioritize management of the changing risk landscape, preserve resilience and, crucially, facilitate the opportunities being pursued with AI.

      KPMG’s Cybersecurity practice is ready to help you move forward with your AI journey.



      [1] https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use
      [2] Zscaler report

      Our people

      Jordan Barth

      Managing Director and Cyber Resilience Leader

      KPMG in the U.S.

      Oisín Fouere

      Global Head of Cyber Response

      KPMG International