

      We’re living in what I call the ‘careless age’: an era defined by loneliness, overstretched public services, and crumbling safety nets. One in three adults in the UK reports feeling lonely often or always. This isn’t because people have stopped caring but because the scale of need has outstripped what humans alone can deliver. Ageing populations, shrinking workforces, and increasingly complex health and social challenges have created a care gap that traditional models simply cannot fill.

      Loneliness is rising not because empathy has vanished, but because our systems lack the capacity to reach everyone at the right depth. For millions, the alternative to AI-assisted care is often no care at all. That stark reality forces us to ask: is synthetic care better than no care?

      Jonathan Orritt

      Director - Data & AI

      KPMG in the UK


      The rise of agentic AI: scaling empathy, not replacing humanity

      Imagine a world where AI agents can proactively check on elderly citizens during heatwaves, coordinate transport for vulnerable individuals, or monitor health data to trigger timely interventions. Enter agentic AI: autonomous systems capable of reasoning, outreach, and collaboration. These aren’t just chatbots; they are digital teammates, assistants and carers designed to act, support and process information.

      This isn’t the erosion of humanity but the scaling of empathy. By handling routine outreach and monitoring, Agentic AI frees up human professionals to focus on what truly matters, applying complex judgment and emotional support. In other words, AI takes care of the predictable so humans can deliver the profound.


      How could it work? The invisible infrastructure of care

      Agentic AI is powered by new technical protocols that transform passive tools into active participants:



      Model Context Protocol (MCP)

      Think of this as the universal translator that connects AI agents to real-world resources including patient records, weather forecasts and logistics systems. MCP enables agents to access and interpret data across countless platforms, much like a human professional using multiple applications.
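      The idea of one uniform interface over many resources can be sketched in code. This is an illustrative toy only, not the real MCP SDK: the `ContextBroker` class and resource names below are invented to show how an agent might read heterogeneous sources (records, forecasts) through a single access pattern.

```python
# Toy sketch of the "universal translator" idea behind MCP.
# All names here are hypothetical; the real protocol defines its own
# resource discovery and transport layers.
from typing import Callable, Dict


class ContextBroker:
    """Registers named resources and serves them to agents uniformly."""

    def __init__(self) -> None:
        self._resources: Dict[str, Callable[[str], dict]] = {}

    def register(self, name: str, fetch: Callable[[str], dict]) -> None:
        # Each resource is just a callable returning structured data.
        self._resources[name] = fetch

    def get(self, name: str, query: str) -> dict:
        if name not in self._resources:
            raise KeyError(f"unknown resource: {name}")
        return self._resources[name](query)


broker = ContextBroker()
broker.register("weather", lambda q: {"location": q, "heat_alert": True})
broker.register("records", lambda q: {"citizen_id": q, "age": 84, "lives_alone": True})

# The agent combines resources much as a human would open two applications.
weather = broker.get("weather", "Leeds")
record = broker.get("records", "C-1042")
at_risk = weather["heat_alert"] and record["age"] >= 75
print(at_risk)  # True for this illustrative data
```

      The point is the uniformity: the agent never cares whether a resource is a patient record, a forecast, or a logistics feed, only that it answers the same kind of request.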


      Agent-to-Agent (A2A) and Agent Communication Protocol (ACP)

      These protocols allow AI agents to “talk” to each other, share information, and delegate tasks. A health agent can coordinate with a logistics agent to arrange transport for a vulnerable individual, while a scheduling agent books appointments. This creates a sophisticated dance of digital collaboration with agents working together like a well-coordinated human team.
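      The delegation pattern described above can be sketched as a simple message-passing loop. This is a hedged illustration of the pattern, not an implementation of A2A or ACP: the agent names, message fields, and bus are all invented for this example.

```python
# Toy message bus showing one agent delegating a task to another,
# mirroring the health-agent-to-logistics-agent hand-off in the text.
# Everything here is hypothetical scaffolding, not a real protocol.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Message:
    sender: str
    recipient: str
    task: str
    payload: dict


class Bus:
    """Routes messages between registered agents and keeps an audit log."""

    def __init__(self) -> None:
        self.agents: Dict[str, "Agent"] = {}
        self.log: List[Message] = []

    def deliver(self, msg: Message) -> None:
        self.log.append(msg)
        self.agents[msg.recipient].handle(msg)


class Agent:
    def __init__(self, name: str, bus: Bus) -> None:
        self.name, self.bus = name, bus
        bus.agents[name] = self

    def send(self, recipient: str, task: str, payload: dict) -> None:
        self.bus.deliver(Message(self.name, recipient, task, payload))

    def handle(self, msg: Message) -> None:  # overridden by subclasses
        pass


class HealthAgent(Agent):
    def check(self, citizen: dict) -> None:
        # Detects a need and delegates, rather than acting alone.
        if citizen["heat_risk"]:
            self.send("transport", "arrange_pickup", {"citizen": citizen["id"]})


class TransportAgent(Agent):
    def __init__(self, name: str, bus: Bus) -> None:
        super().__init__(name, bus)
        self.pickups: List[str] = []

    def handle(self, msg: Message) -> None:
        if msg.task == "arrange_pickup":
            self.pickups.append(msg.payload["citizen"])


bus = Bus()
health = HealthAgent("health", bus)
transport = TransportAgent("transport", bus)
health.check({"id": "C-1042", "heat_risk": True})
print(transport.pickups)  # ['C-1042']
```

      The audit log on the bus hints at why such protocols matter for governance: every delegation is a recorded, inspectable event rather than an opaque internal decision.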


      These protocols enable a shift from transactional interactions to relational care, where systems can remember, recognise, and respect individual needs.


      The controversy of synthetic empathy

      Critics argue that relying on AI for care risks losing the human touch. But the truth is more nuanced. Synthetic empathy doesn’t replace human compassion; it augments it. The real danger lies not in using AI, but in deploying it without trust and the right governance in place.

      Trust and safety must be explicit: high-risk scenarios (for example, self-harm or suicide risk) require robust safeguards, including reliable detection, immediate escalation to trained professionals, and hard hand-offs where automation ends and humans take over.

      That’s why frameworks like KPMG’s Trusted AI Framework and Citizen Experience Excellence (CEE) are critical: trust is the foundation of any successful care initiative. Integrity stands as the number one pillar: if citizens do not trust the organisation or the service (including AI), they simply will not engage. Building this trust requires more than technical excellence; it demands transparency, accountability, and a genuine commitment to ethical principles. The Trusted AI Framework provides ethical guardrails rooted in being values-driven, human-centric, and trustworthy, while the CEE framework brings practical pillars, led by integrity and supported by resolution, expectations, time and effort, personalisation, and empathy.

      This ensures that even synthetic interactions feel human-centred and trustworthy, and that outcomes are continuously monitored and improved rather than assumed. Recommended signals to enable this continuous monitoring include:

      • Human outcomes: resolution achieved, time‑to‑intervention, appropriate escalations.
      • Experience metrics (CEE‑aligned): perceived empathy, clarity, effort.
      • Safety and reliability: false‑negative/false‑positive rates in risk detection, near‑miss reviews.
      • Preference matching: channels, timing, language and carer preferences honoured.
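      To make the third signal concrete, here is a minimal sketch of how false-negative and false-positive rates for a risk detector might be computed from labelled outcomes. The data format is an assumption for illustration; real monitoring would draw on reviewed case records.

```python
# Sketch of the "safety and reliability" signal: error rates for a
# risk-detection model, computed from (predicted, actual) outcome pairs.
# The input shape is hypothetical.
from typing import Dict, List, Tuple


def detection_rates(cases: List[Tuple[bool, bool]]) -> Dict[str, float]:
    """cases: list of (predicted_risk, actual_risk) booleans."""
    fn = sum(1 for p, a in cases if a and not p)   # missed real risks
    fp = sum(1 for p, a in cases if p and not a)   # false alarms
    positives = sum(1 for _, a in cases if a)
    negatives = sum(1 for _, a in cases if not a)
    return {
        "false_negative_rate": fn / positives if positives else 0.0,
        "false_positive_rate": fp / negatives if negatives else 0.0,
    }


cases = [(True, True), (False, True), (True, False), (False, False)]
rates = detection_rates(cases)
print(rates)  # {'false_negative_rate': 0.5, 'false_positive_rate': 0.5}
```

      In a care setting the two error types carry very different costs: a false negative is a vulnerable person missed, which is why near-miss reviews belong alongside the raw rates.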

      A proof of concept: Hippocratic AI in action

      This isn’t just theoretical. During recent heatwaves, Hippocratic AI deployed generative AI agents that proactively called thousands of at-risk individuals. These agents didn’t just make automated calls; they held genuine conversations, screened for heat stroke symptoms, and arranged transport to cooling centres.

      The results? Thousands of citizens were checked on, with 10% identified as needing urgent help. Satisfaction score: 9/10. Crucially, what made this service trustworthy was its transparency, clear identification, and alignment with trusted frameworks, ensuring citizens knew who was contacting them and why.

      Lives were potentially saved, not by replacing humans, but by filling the gaps an overwhelmed system couldn’t cover. Citizens didn’t care that the voice was synthetic. What mattered was that someone recognised their needs, respected their time, and remembered them.


      The UK opportunity: building an empathetic infrastructure

      In the UK, public services are exploring Intelligent Government models. By embracing Agentic AI within trusted frameworks, we can close the care gap and ensure that every citizen is remembered, recognised, and respected. For KPMG, the imperative is clear: leveraging agentic AI not as a substitute for human connection, but as a powerful means to scale it. The question is no longer “if” we adopt AI in care, but “how” we do so responsibly.

      Agentic AI, deployed within trusted frameworks, offers a practical solution to overstretched systems. By embedding trust, ethics, and citizen-centric design into these systems, we can create an infrastructure of empathy that ensures no one is left behind. In a world where capacity is collapsing, synthetic care isn’t second best; it’s the bridge that saves lives and restores dignity. The future of care isn’t about choosing between humans and machines; it’s about combining their strengths to build a society where everyone is truly cared for.

      If you’re interested in how trusted, agentic AI can improve outcomes and deliver more empathetic services for your customers, patients or citizens, contact Jonathan Orritt.




