Who is hallucinating?
As recently as 2023 (remember that ChatGPT was only released on 30 November 2022), the Cambridge Dictionary selected “hallucinate” as its word of the year. Despite the chaos around the world, this wasn’t because of a sudden rise in the number of people experiencing a compelling sense of a false reality. Instead, it reflected a new reality: the tendency of AI models to generate inaccurate, nonsensical or fabricated information and present it as though it were factual.
Operational risks associated with AI range from mildly embarrassing to very costly. Recent examples include:
• In January 2025, Virgin Money issued an apology after a customer was reprimanded by an AI-powered chatbot for using the word “virgin”.
• In 2024, a tribunal ordered Air Canada to honour a discount that its customer support chatbot had invented.
• Courier company DPD disabled part of its customer support chatbot after it swore at a customer and described its own company as the “worst delivery service company in the world”.
• WestJet’s chatbot mistakenly sent a customer a link to a suicide prevention website.