Behaviors Behind High-Impact AI Use
UT Austin and KPMG analysis of 1.4 million interactions shows how employees achieve sophisticated AI collaboration
NEW YORK and AUSTIN, Texas — A landmark study of 1.4 million real workplace interactions with artificial intelligence reveals teachable differences between routine and sophisticated AI use that offer organizations a concrete road map for identifying and scaling high-impact AI capability.
The joint study by KPMG LLP, the U.S. audit, tax, and advisory firm, and the McCombs School of Business at The University of Texas at Austin identifies distinct, observable patterns in how high‑impact users frame problems, guide AI reasoning, and apply AI across complex tasks; KPMG is applying these findings internally and in its work for clients. The study was published today in Harvard Business Review.
The researchers spent eight months studying KPMG LLP’s back-office operations, analyzing how people use AI at work. The most successful users, the “sophisticated” users in the study, were not those who simply used AI most frequently or who had the best technical skills; rather, they were the ones who excelled at engaging with AI to frame problems, direct the model’s approach to tasks, and apply AI across their work.
What Is Sophisticated AI Use?
To move beyond assumptions about what “good” AI use looks like, KPMG LLP collaborated with Zach Kowaleski, Nick Hallman, and Jaime Schmidt, faculty members in McCombs’ Shulkin Department of Accounting, to analyze behavioral signals embedded in real-world AI interactions, evaluating more than 30 characteristics of prompt behavior across months of usage data, including task complexity, prompting techniques, and iteration patterns.
“We weren’t looking for power users in the abstract,” said Schmidt, McCombs professor of accounting and director of the C. Aubrey Smith Center for Auditing Education & Research. “We were looking for people who had figured out how to think with the model, not just ask it questions.”
What separated the best users wasn’t experience or technical know-how. Instead, the analysis surfaced consistent differences in how a small group of sophisticated users engaged with AI over time.
What Sophisticated AI Behavior Actually Looks Like
Sophisticated users treated AI as a reasoning partner, shaping how it approached problems by asking the model to assume a certain role or perspective; providing concrete direction and examples; showing the AI how to reason through a task; requiring the model to explain how it got to a response; and offering ongoing feedback. Rather than accepting first outputs, they refined the model’s work over multiple exchanges and applied it to their most complex and ambitious tasks.
They also set boundaries, specified structure, articulated clear objectives, and delegated cognitively demanding tasks across brainstorming, analysis, technical guidance, and problem-solving. For these users, AI was being used as a general cognitive tool, not a narrow productivity aid.
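As an illustration only, the behaviors described above (role assignment, concrete examples, explicit reasoning requirements, and clear structure) can combine into a single well-framed prompt. The sketch below is a hypothetical template; the wording and the analyst scenario are assumptions, not drawn from the study:

```python
# Hypothetical prompt template illustrating the engagement patterns described
# above: assigning a role, giving a concrete output example, and requiring the
# model to show its reasoning. The scenario is illustrative, not from the study.

ROLE = "You are a senior financial analyst reviewing quarterly variance reports."

TASK = (
    "Identify the three largest drivers of the variance between forecast and "
    "actual operating expenses, and explain the reasoning behind each."
)

EXAMPLE = (
    "Example of the output format I want:\n"
    "1. Driver: <name> | Impact: <amount> | Reasoning: <one short paragraph>"
)

CONSTRAINTS = (
    "Show your work step by step before giving the final ranking, "
    "and flag any assumption you had to make about the data."
)

def build_prompt(role: str, task: str, example: str, constraints: str) -> str:
    """Assemble the sections into one prompt string, separated by blank lines."""
    return "\n\n".join([role, task, example, constraints])

prompt = build_prompt(ROLE, TASK, EXAMPLE, CONSTRAINTS)
print(prompt)
```

A routine user might send only the one-line task; the sophisticated pattern is the surrounding scaffolding, which the user then refines over multiple exchanges.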
Crucially, these behaviors left visible, measurable patterns that organizations can observe. Sophisticated use correlated strongly with four signals: how often users return to AI, how persistently they refine outputs, how ambitious their initial requests are, and how intentionally they select tools or models.
“The gap between routine and sophisticated AI use is not hidden in prompts themselves, but in patterns of engagement. And once those patterns are visible, they become possible to recognize, discuss, and scale,” said Anu Puvvada, KPMG Studio Leader, who led the research for the firm. “Iteration enables ambition, ambition drives strategic tool choice, and repeated success reinforces engagement.”
How KPMG Leveraged the Insights to Upskill Employees
KPMG undertook a firmwide training and enablement effort to help employees build the more sophisticated skills and behaviors identified in the research. Approximately 5% of users consistently demonstrated these behaviors across months of usage data, providing a clear, data-backed signal of what effective human-AI collaboration looks like in practice.
“We realized early on that access to AI alone doesn’t drive better outcomes, a challenge many organizations are still grappling with,” said Steve Chase, global head of AI and Digital Innovation at KPMG. “That’s why we put a deliberate set of AI‑enabled tools, training programs, and routines in place to make effective behaviors visible and expected, and to teach better problem framing, stronger supervision of AI, and purposeful iteration.”
For KPMG, these insights have been translated into a set of AI-First behaviors, supported by practical playbooks, training, and peer-led champion networks. By embedding these research-backed behaviors into the firmwide learning ecosystem — through the firm’s aIQ Learning Academy, role-based skills development, and hands-on practice — more of KPMG’s workforce can move from routine prompting to higher-impact human‑AI collaboration, using AI as a thinking partner to brainstorm, refine, and validate work through intentional iteration.
These same insights now inform how KPMG employees work with clients: helping them define what effective AI use looks like within their own organizations, build role‑aligned capabilities, and enable leaders to scale sophisticated human‑AI collaboration as part of everyday work.
Q&A
| Question | Answer |
| --- | --- |
| What was the purpose and scope of the KPMG and McCombs School of Business AI study? | The landmark study analyzed 1.4 million real workplace interactions from 2,597 unique users over eight months to identify the teachable differences between routine and sophisticated AI use, creating a road map for scaling high-impact AI capability. |
| According to the study, what distinguishes a "sophisticated" AI user? | Sophisticated AI users are not defined by technical skill or frequency of use, but by their patterns of engagement. They treat AI as a "reasoning partner" to think with, rather than just a tool to get answers from. |
| What specific behaviors did sophisticated users demonstrate when interacting with AI? | They shaped how the AI approached problems by asking it to assume a role, providing concrete examples, and requiring it to explain its reasoning. They also delegated complex, cognitively demanding tasks and refined the AI's output over multiple iterations rather than accepting the first result. |
| Can sophisticated AI use be measured? If so, how? | Yes. By evaluating more than 30 characteristics of prompt behavior and usage patterns, the study found that sophisticated AI use correlated strongly with four measurable signals: 1) how often users return to AI, 2) how persistently they refine outputs, 3) the ambition of their initial requests, and 4) their intentionality in selecting tools or models. |
| How did KPMG leverage these research insights to upskill its employees? | KPMG created a firmwide training effort to make these effective behaviors visible and expected. This included developing "AI-first behaviors" supported by practical playbooks, peer-led champion networks, and integrating training into their "aIQ Learning Academy" to foster higher-impact human-AI collaboration. |
| How do these findings now inform KPMG's work with its clients? | The same insights are used to help clients define what effective AI use looks like within their own organizations, build role-specific capabilities, and enable their leaders to scale sophisticated collaboration as part of everyday work. |
Data Tables
How Users Engage with AI: Behavioral Signals in Practice
Frequency of Use
| Metric | Min | Max |
| --- | --- | --- |
| Number of Conversations | 1.00 | 46.00 |
| GPT Usage Dates | 1.00 | 30.00 |
Flexibility
| Metric | Min | Max |
| --- | --- | --- |
| Number of Models Used | 1.00 | 3.00 |
| Copilot Usage Dates | - | 27.00 |
Ambition
| Metric | Min | Max |
| --- | --- | --- |
| Average length of first prompt | 26.00 | 48,670.75 |
Persistence
| Metric | Min | Max |
| --- | --- | --- |
| Iteration in Conversation | 1.00 | 45.00 |
Media Contacts
KPMG: Olivia Weiss (oweiss@kpmg.com) or Alyssa Mora (alyssamora@kpmg.com)
UT Austin: Judie Kinonen (judie.kinonen@mccombs.utexas.edu)
Explore more
KPMG LLP Launches Tax AI Accelerator Program
Program is designed to help corporate tax departments build practical AI skills and integrate generative AI into daily operations.
AI at Scale: How 2025 Set the Stage for Agent-Driven Enterprise Reinvention in 2026
Insights from the KPMG Q4 AI Pulse Survey reveal business leader priorities that will drive agents into the enterprise in 2026, led by continued investment in the technology and critical foundations, from the workforce to cybersecurity.
KPMG 2026 Perspectives: Local Insights from New York
New York City business leaders are entering 2026 with near-universal confidence in their own growth, but with a 10-point gap between confidence in their companies' growth prospects and confidence in the city itself.