AI Agents Are Moving Into Production — Are Your People Ready to Direct Them?
In April 2026, KPMG US answers a question senior leaders must resolve as AI agent deployment moves into daily operations: what does workforce readiness mean when people are expected to manage and direct AI agents? KPMG’s position is that AI outcomes now hinge on whether organizations rapidly build the skills, role clarity, and accountability people need to apply judgment and take responsibility for results in human‑led, AI agent‑enabled work.
What does workforce readiness mean when people are expected to manage and direct AI agents?
This question is pressing because AI agents have moved past experimentation and into production, and their impact is showing up in how work is coordinated, not just how tasks are automated. Today, 54% of organizations are actively deploying AI agents, and leaders increasingly expect people to oversee them rather than step aside.
The data also shows a widening tension: the enterprise wants speed, but the workforce needs readiness. More than half of leaders (57%) now expect people to manage and direct AI agents, and employee response is beginning to align, with 55% of employees reporting some level of adoption or integration. That is a meaningful baseline—but it also means a large share of the workforce is still catching up as agents spread into real workflows and real decisions.
Why It’s Harder Than It Looks
Workforce readiness is hard because it is not just training; it is a shift in responsibility. When people “use” a tool, the burden is limited to adoption. When people direct AI agents, the burden expands to judgment, accountability, and knowing when human validation is required.
KPMG’s Answer
KPMG’s position is that “workforce readiness” now means something specific: people must be able to direct AI agents, apply judgment, and take responsibility for results, because the limiting factor is no longer the technology.
As I said in the AI Pulse release, “The limiting factor isn’t the technology, it’s whether people have the skills to direct AI, apply judgment and take responsibility for results.”
That shift changes what leaders should measure and prioritize. If the operating expectation is that humans will manage agents, then readiness is demonstrated when employees can adopt agents into workflows while maintaining appropriate oversight and accountability. The AI Pulse data signals that leaders recognize this reality: 87% prioritize upskilling and reskilling ahead of hiring or job redesign, and the talent profile is moving toward adaptability and continuous learning over pure technical skills—especially in entry-level roles.
The consequence of treating readiness as optional is predictable: organizations will scale agent deployment faster than they scale responsible use. When that happens, adoption plateaus, resistance hardens, and the organization loses momentum at the moment it most needs coordinated human-led execution.
Make “directing AI agents” a defined capability, not an implied expectation. Start by naming what employees are responsible for when agents are embedded in workflows—what they validate, what they escalate, and what they own end-to-end—so accountability is clear as adoption grows.
Invest in readiness where resistance is coming from. Address skills gaps with upskilling plans that match the 57% expectation that people will manage agents, and address job security concerns directly, because the data shows both are central barriers to adoption.