As AI Agents Scale Across the Enterprise, Who Is Accountable for Their Decisions?

In April 2026, KPMG US addresses a question enterprises can no longer defer as AI agent deployment accelerates: as AI agents scale across operations and functions, who is accountable for their decisions and how do leaders stay in control? KPMG’s position is that governance must become operational—embedding human validation, decision rights, and accountability into daily workflows—if AI agents are to scale without amplifying risk.

April 24, 2026
CENTRAL QUESTION

As AI agents spread across the enterprise, who is actually accountable for their decisions—and how do leaders stay in control at scale?

This question is emerging because AI agents are no longer confined to isolated pilots or technical teams. More than half of organizations are now actively deploying agents, and those agents are increasingly coordinating work across functions, routing information, and supporting shared decision‑making. As a result, accountability is no longer theoretical; it shows up in everyday operations.

The shift has been fast. In 2024, AI agents were largely exploratory. Today, they are embedded in operations, technology, and cross‑functional workflows, even as organizations accelerate AI investment. The risk is not that agents act without rules—it is that organizations scale them faster than they define who owns outcomes when humans and agents work together. 

Why It’s Harder Than It Looks

Accountability breaks down because most organizations were not designed for hybrid decision‑making between people and machines. Traditional governance models assume either full human control or tightly scoped automation. AI agents sit uncomfortably between those two models, operating with autonomy inside boundaries set by people.

As agents take on broader responsibilities, leaders must reconcile speed with oversight. Too much friction slows deployment and frustrates teams; too little oversight erodes trust and raises risk. Finding the right balance requires clarity on validation, escalation, and responsibility—areas that often lag behind technical deployment.

The Evidence

1. 54% of organizations are actively deploying AI agents today, compared with 12% in 2024, according to the KPMG Q1 2026 AI Pulse. Source: Fortune

2. 73% use AI agents to automate workflows spanning multiple functions, increasing the need for shared accountability. Source: Fortune

3. 53% rely on agents to route information and decisions between teams, while 51% use them as shared knowledge bases or unified dashboards. Source: Fortune

4. 63% now require human validation of AI agent outputs, up from 22% in Q1 2025, reflecting rising governance expectations. Source: Fortune

5. 91% of leaders say data security, privacy, and risk will influence AI strategies over the next six months, making governance a prerequisite for scale. Source: Fortune
News
Investment and AI Agent Deployment Surge as Execution Becomes the Differentiator
Capital continues to flow into AI, with organizations projecting average AI spending of $207 million over the next 12 months, nearly double the figure from the same period last year, according to the KPMG US Q1 AI Quarterly Pulse.

KPMG’s Answer

KPMG’s position is that sustained AI value depends on making governance operational, not aspirational. 

As AI agents scale, organizations must define accountability as clearly as they define technical performance. That means specifying who sets boundaries, who validates outcomes, and who intervenes when agents operate across teams.

The data shows that leaders are already moving in this direction, with a sharp increase in requirements for human validation of agent outputs. This reflects a broader realization: trust, control, and accountability are not barriers to scale—they are enablers of it. Organizations that embed these controls into daily workflows move faster over time because teams trust the system.

Without this foundation, AI agents amplify complexity instead of reducing it. Decisions move faster, but ownership becomes diffuse, making it harder to manage risk or sustain momentum.

What This Means for You

Define accountability before expanding agent scope. Leaders should clearly document who owns outcomes when AI agents route decisions, automate workflows, or coordinate across functions.

Make governance part of day‑to‑day work, not a separate layer. Embedding validation, escalation, and oversight into workflows builds trust and allows AI agents to scale without slowing the business.
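To make this concrete, the pattern of embedding validation, escalation, and a named owner into an agent workflow can be sketched in a few lines of code. This is an illustrative sketch only, not a KPMG tool or framework; every name here (`AgentDecision`, `process`, the confidence threshold) is a hypothetical stand-in for whatever controls an organization actually uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentDecision:
    agent_id: str
    action: str
    confidence: float
    owner: str  # the named human accountable for this outcome

def process(decision: AgentDecision,
            validate: Callable[[AgentDecision], bool],
            escalate: Callable[[AgentDecision], None],
            threshold: float = 0.8) -> str:
    """Route an agent decision through a human validation gate.

    Low-confidence or rejected decisions are escalated to the owner
    rather than executed, so oversight is part of the workflow itself,
    not a separate review layer bolted on afterward.
    """
    if decision.confidence < threshold:
        escalate(decision)      # uncertain: send to the accountable owner
        return "escalated"
    if not validate(decision):  # human validation gate on the output
        escalate(decision)
        return "rejected"
    return "approved"
```

The point of the sketch is structural: every decision object carries an owner, and the only paths out of `process` are explicit approval, rejection, or escalation, so ownership never becomes diffuse as throughput grows.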

Steve Chase
Global Head & US Vice Chair – AI & Digital Innovation, KPMG LLP

Thank you!

Thank you for contacting KPMG. We will respond to you as soon as possible.

Contact KPMG

Use this form to submit general inquiries to KPMG. We will respond to you as soon as possible.
All fields with an asterisk (*) are required.

Job seekers

Visit our careers section or search our jobs database.

Submit RFP

Use the RFP submission form to detail the services KPMG can help assist you with.

Office locations

International hotline

You can confidentially report concerns to the KPMG International hotline

Press contacts

Do you need to speak with our Press Office? Here's how to get in touch.

Headline