Critics argue that relying on AI for care risks losing the human touch. The truth is more nuanced: synthetic empathy doesn't replace human compassion; it augments it. The real danger lies not in using AI, but in deploying it without trust and the right governance in place.
Trust and safety must be explicit: high‑risk scenarios (for example, self‑harm or suicide risk) require robust safeguards—reliable detection, immediate escalation to trained professionals, and hard hand‑offs where automation ends and humans take over.
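To make that hand-off boundary concrete, here is a minimal sketch of what such an escalation policy could look like in code. Everything in it is an illustrative assumption rather than a reference implementation: the risk tiers, the confidence threshold, and the routing labels are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """Hypothetical risk tiers produced by an upstream detection model."""
    LOW = 1
    ELEVATED = 2
    CRITICAL = 3


@dataclass
class Assessment:
    risk: RiskLevel
    confidence: float  # model confidence in [0, 1]; threshold below is illustrative


def route(assessment: Assessment) -> str:
    """Hard hand-off policy: automation ends the moment risk is high
    or the model is unsure, and a trained human takes over."""
    if assessment.risk is RiskLevel.CRITICAL:
        return "escalate_immediately"  # e.g. page an on-call professional
    if assessment.risk is RiskLevel.ELEVATED or assessment.confidence < 0.8:
        # Uncertainty is treated as risk: a false hand-off is cheaper
        # than a missed one, so the policy minimises false negatives.
        return "warm_transfer_to_human"
    return "continue_automated_support"
```

The design choice worth noting is that low confidence routes to a human even when the detected risk is low: the automation's authority extends only as far as its certainty.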
That's why frameworks like KPMG's Trusted AI Framework and Citizen Experience Excellence (CEE) are critical: trust is the foundation of any successful care initiative. Integrity stands as the number one pillar: if citizens do not trust the organisation or the service (including AI), they simply will not engage. Building this trust requires more than technical excellence; it demands transparency, accountability, and a genuine commitment to ethical principles. The Trusted AI Framework provides ethical guardrails rooted in being values-driven, human-centric, and trustworthy, while the CEE framework brings practical pillars: integrity in the lead, supported by resolution, expectations, time & effort, personalisation, and empathy.
This ensures that even synthetic interactions feel human-centred and trustworthy, and that outcomes are continuously monitored and improved rather than assumed. Recommended signals to enable this continuous monitoring include the following (a capture sketch follows the list):
- Human outcomes: resolution achieved, time‑to‑intervention, appropriate escalations.
- Experience metrics (CEE‑aligned): perceived empathy, clarity, effort.
- Safety and reliability: false‑negative/false‑positive rates in risk detection, near‑miss reviews.
- Preference matching: channels, timing, language, and carer preferences honoured.
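As a sketch of how these signals might be captured per interaction, the record below groups fields along the four categories above. All field names and types are assumptions made for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InteractionSignals:
    """Illustrative per-interaction record for the signals listed above.
    Field names, types, and scales are assumptions for this sketch."""
    # Human outcomes
    resolution_achieved: bool
    time_to_intervention_s: Optional[float]  # None if no escalation occurred
    escalation_appropriate: Optional[bool]   # judged in after-the-fact review
    # Experience metrics (CEE-aligned), e.g. 1-5 post-interaction survey scores
    perceived_empathy: Optional[int] = None
    clarity: Optional[int] = None
    effort: Optional[int] = None
    # Safety and reliability
    risk_flag_raised: bool = False
    risk_flag_correct: Optional[bool] = None  # set during near-miss review
    # Preference matching, e.g. {"channel": True, "language": True}
    preferences_honoured: dict = field(default_factory=dict)
```

Aggregated over many interactions, `risk_flag_raised` compared against `risk_flag_correct` yields the false-positive and false-negative rates named above, while the survey fields track whether interactions actually feel empathetic and low-effort rather than merely being assumed to.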