

      AI is no longer a workplace trend; popular tools such as Microsoft’s Copilot have eclipsed traditional methods and established themselves as the primary operating system of work. Global organisations are already proving that, when thoughtfully designed, solutions backed by artificial intelligence can be both sophisticated and embedded neatly into the flow of work.

      Other businesses are rethinking their entire strategy, boldly resisting non-human generation and banking on brainpower to make authentic appeals to clients. One thing is evident: inviting AI into the office without the right infrastructure and skills can unleash communication issues, put sensitive data at risk and create an identity crisis for staff.

      The question is: what does it mean to be human in a team leveraging AI – and what core components are needed for firms to create opportunity instead of unleashing chaos?


      Humans as orchestrators

      When KPMG created Spark, a conversational AI learning assistant, its careful curation enabled an entirely new system for learning. Spark was a strategic and intentional decision, designed to support human judgment rather than replace it. Accessible within Microsoft Teams, it was built to feel intuitive and purposeful, providing personalised learning recommendations to thousands of employees.

      What made Spark particularly powerful for our organisation as its first adopter was that it could ‘speak KPMG’. Built with a jargon-busting glossary of more than 700 acronyms and key phrases, Spark was an example of how learning assistants can be built to deeply connect with the organisation’s language and culture.

      A defining part of Spark’s story was the ambition and strategy behind it. Rather than treating AI as a ‘plug-and-play’ tool, our team architected its purpose from the start, deciding the tools it would integrate with and how it would interact. In creating Spark, the team effectively became its orchestrator, maintaining harmony between human and machine. In AI-human teams, it’s essential to have:


      • Orchestrators with a strong mission.

        Leaders need to define a mission-led purpose for new projects and work. Before deploying AI, ask: “What problems are we solving? How does this advance our strategy?”

      • Trust baked in, not bolted on.

        Building with trustworthy AI principles, such as privacy safeguards and embedded governance, develops trust early on and maximises confidence.


      A strong learning infrastructure

      Having a strong foundational understanding of AI enables teams to question, validate and interpret data. When 2 in 3 employees rely on AI without evaluating its outputs, the organisation’s collective “brain” risks becoming passive rather than critical. KPMG’s global study on Trust, attitudes and use of Artificial Intelligence shows that 66% of people use AI daily, but only 46% trust it. Fewer than 40% have received training. Over half of employees in the same report admitted to mistakes caused by AI, making structured upskilling an urgent priority.

      Rather than approaching AI literacy as a one-off development opportunity, an AI-human team should be underpinned by a strategic, multi-layered journey. In the case of Spark, the technology doesn’t just launch courses – it creates a centralised learning ecosystem inside Teams, streamlined by role and career path to encourage lifelong learning. Our team also focused on psychological safety and governance to ensure that people felt comfortable to ask questions and challenge suggestions. To set a good standard, AI-human teams must have:

      • Clear organisational change

        Setting out the rules, governance and workflows that humans need helps them to use AI harmoniously.

      • Robust learning solutions

        Adding targeted courses to tech stacks can build confidence and analytical capabilities.

      Creatives and critical thinkers

      As AI builds its web of capability, the question becomes: why invest in human skills like critical thinking and creativity? The answer lies in what AI cannot do. Rather than trying to match what AI can achieve, the real edge is in bringing depth through judgment and context from lived experiences that AI sometimes struggles to replicate at scale.

      Rather than leaving Spark as a static tool, a cross-functional team of learning specialists and linguists stepped in to teach it vital keywords. The team also applied critical thinking and empathy to design conversational flows. Instead of generic prompts, they created scenario-based guidance, such as “How do I use AI responsibly in client proposals?”, to reflect realistic challenges employees face. To achieve solutions that reflect the real world, it’s critical for teams to embrace:

      • Empowered thinkers

        At the root of successful teams are critical thinkers, backed by AI literacy and relevant prompts, to validate, challenge and elevate AI’s output.

      • Inquisitive creatives

        In an AI-human team, humans can use LLMs to push back on and refine already-established ideas.

      Collaboration without boundaries

      Historically, rigid teams made more operational sense – employees were separated into specialist groups, with clearly defined access to specific data. These structures slow down progress in many of today’s departments, where AI thrives on access to rich and diverse datasets. The same principle also applies to human teams – collaboration is a strategic advantage, and businesses must dismantle silos to create shared spaces for innovation.

      According to the KPMG Trust and AI report – based on external market research across organisations – half of employees say they now use AI instead of collaborating with peers or supervisors, and one in five report reduced communication, interaction and co‑creation at work. As AI adoption accelerates, collaborators need new scaffolding to ensure that outputs are visible, auditable and reflect diverse perspectives. AI-human teams should consider:

      • Cross-functional squads

        Joined-up teams can use tools to create transparent, cross-functional environments and seek to leverage new minds outside the immediate team.

      • Adaptive learning models

        By providing a diverse range of data across teams, models can learn more and gain a broader understanding of unique situations.


      The bottom line

      In an age where “learning how to learn” has been identified as an essential skill, organisations require a combination of learning infrastructure, human empathy and cross-functional visibility to ensure harmony between people and generative tools. The teams that succeed will be those who embrace AI with courage and challenge, whilst never losing sight of the human edge. Redefine roles, invest in complementary skills, break down silos and avoid the trap of blind adoption. These aren’t optional steps; they’re the foundation for resilience and innovation in an AI-driven world. If your AI strategy doesn’t include a human strategy, it’s incomplete.

