
      Artificial intelligence is no longer a future concept in Canada’s public sector—it is already being used daily by public servants, often informally and without the guardrails required to protect citizen data, public trust, and institutional integrity. KPMG in Canada research from fall 2025 reveals a widening gap between bottom‑up AI usage and top‑down readiness: while fewer than one‑quarter of public sector organizations have formally adopted AI, nearly half of public servants are already using AI tools in their work, with half relying on publicly available platforms.

      This report explores what that gap means for governments across Canada, why responsible AI adoption has become urgent, and how public sector leaders can move from experimentation to trusted, sovereign, and value‑driven AI at scale. Drawing on KPMG survey findings and public‑sector use cases, it outlines a practical path forward focused on governance, data sovereignty, workforce enablement, and public confidence.

      Hear from Michael Klubal, National Industry Leader, Infrastructure, Government and Healthcare, on how AI should support human judgment and improve citizen experience, not erode confidence.


      AI adoption in the Canadian public sector remains uneven and fragmented. Only 22 per cent of organizations report having implemented AI tools, while roughly another third are piloting or experimenting. At the same time, nearly half of public servants report using AI in their day‑to‑day work—most often to summarize information, draft or edit documents, conduct preliminary research, and generate ideas.

      This disconnect reflects a broader challenge: AI is being pulled into the workplace by employees seeking productivity gains, rather than pushed strategically by institutions with clear policies, controls, and investment plans.

      With public sector employees already adopting AI tools to carry out their job responsibilities, public sector organizations must accelerate their deployment of formal AI adoption policies. Strong governance, oversight, and training are essential to balance innovation with accountability.
      Michael Klubal

      National Industry Leader, Infrastructure, Government and Healthcare

      Without coordinated action, governments risk creating a shadow AI environment—one that exposes sensitive data, undermines consistency, and erodes trust.

      Public servants clearly see the potential of AI, but conviction remains shallow. While nearly 80 per cent say AI is important for improving productivity and operational efficiency, only a small fraction view it as “extremely important.” This suggests that many employees recognize AI’s theoretical value but have yet to see it meaningfully embedded in how work gets done.

      Operational efficiency has emerged as the dominant measure of return on investment, with governments focusing AI investment on areas such as cybersecurity, finance, and administrative automation. Most expect returns within one to three years—an ambitious timeline that will be difficult to achieve without addressing foundational gaps in data quality, governance, and skills.

      AI’s value proposition in the public sector is not about replacing judgment or automating policy decisions. It is about freeing capacity, improving service consistency, and enabling employees to focus on higher‑value work that directly benefits Canadians.

      One of the most striking findings from KPMG’s research is the prevalence of AI anxiety. Among public servants in organizations that have implemented AI, nearly 70 per cent say concern is widespread, with worries ranging from job security and ethical dilemmas to privacy, deepfakes, and misinformation.

      These concerns are compounded by low levels of AI literacy and fluency. The vast majority of respondents rate both as low across the public service, and many report having little or no training to help them understand how AI systems work, where they can fail, and how to use them responsibly.

      Addressing AI anxiety requires more than technical controls. It demands transparent communication, visible leadership, and a sustained investment in workforce education that empowers employees rather than alienates them.


      90% of respondents agree that investment in AI education and training is required.

      Modern AI tools rely on and are granted access to vast amounts of organizational data. Without clearly communicated data policies, strategies, and training, organizations face significant challenges and risks when implementing AI.
      Ven Adamov

      Partner, National Leader, Data and analytics risk services

      Few issues resonate more strongly in Canada’s public sector than data sovereignty. An overwhelming majority of public servants believe citizen data must be stored within Canada, and many fear that public trust will erode if sensitive information is subject to foreign jurisdictions.


      93% feel the need to safeguard Canadian data and intellectual property, and 94% feel personal citizen information must be stored in Canada.


      85% feel all levels of government should collaborate to incentivize Canadian data centre construction, and 91% feel we need to think seriously about how to make that data storage sustainable.


      80% feel we lack the necessary data practitioners to help shape policy and develop the frameworks necessary to protect privacy and ensure responsible data use.

      As governments increasingly rely on cloud infrastructure and AI models, questions of control, jurisdiction, and accountability move to the forefront. When data is processed or stored outside Canada, it may be subject to foreign laws, disclosure requirements, or security risks beyond the government’s direct control.

      Digital sovereignty is not about rejecting global technology providers; it is about making deliberate choices that align with Canadian values, legal frameworks, and public expectations. For AI, this means:

      • Clear data residency requirements

      • Strong contractual and governance controls with vendors

      • Investment in Canadian AI infrastructure

      • Transparent communication with citizens about how data is used and protected

      Trust, once lost, is difficult to regain. Data sovereignty is therefore not just a technical consideration, but a cornerstone of public legitimacy.

      While challenges are real, there are already meaningful examples of AI delivering value across Canadian governments:

      • Service delivery and digital assistants

        Several federal and provincial departments have deployed AI‑enabled chatbots to handle routine citizen inquiries, reducing call‑centre volumes and improving response times while escalating complex cases to human agents.

      • Fraud detection and compliance

        AI and advanced analytics are being used to identify anomalies in benefits administration, tax compliance, and procurement, helping governments detect potential fraud earlier and allocate investigative resources more effectively.

      • Document processing and records management

        Machine learning tools are supporting the classification, redaction, and retrieval of large volumes of documents, particularly in areas such as access‑to‑information requests and regulatory filings.

      • Transportation and infrastructure planning

        Municipal and provincial governments are using AI‑driven models to optimize traffic flows, predict infrastructure maintenance needs, and improve capital planning decisions.

      • Public safety and emergency management

        Predictive analytics and AI‑assisted modelling are being explored to support emergency response planning, wildfire risk assessment, and disaster preparedness.

      These examples demonstrate that AI can enhance—not replace—public service, provided it is implemented responsibly and aligned with clear policy objectives.

      To move from fragmented experimentation to trusted AI at scale, public sector leaders should focus on five priorities:

      • Establish clear AI governance

        Define accountability, oversight, and decision‑rights for AI across the organization, including ethical review and risk management.

      • Strengthen data foundations

        Invest in data quality, clearly communicated data strategies, and defined roles and responsibilities to ensure AI systems are built on reliable inputs.

      • Embed security and privacy by design

        Treat privacy, cybersecurity, and compliance as core design requirements, not afterthoughts.

      • Enable the workforce

        Deliver practical AI literacy and fluency training that helps employees understand both the potential and the limitations of AI.

      • Build and maintain public trust

        Communicate openly with Canadians about how AI is used, why it is used, and how their data is protected.

      Governments have a unique opportunity to demonstrate AI’s positive potential by leading with responsibility, transparency, and purpose.


      Moving now, leading responsibly

      Artificial intelligence is already embedded in the day‑to‑day work of Canada’s public servants. The question is no longer whether AI will influence the public sector, but whether governments will shape that influence deliberately—or allow it to evolve without the safeguards, trust, and accountability Canadians expect.

      KPMG’s perspective is clear: the greatest risk facing public sector AI today is not moving too fast, but moving without intention. Informal adoption, fragmented pilots, and unclear governance create more exposure than a well‑designed, responsibly scaled AI program ever will.

      Public sector leaders—deputy ministers, CIOs, and senior municipal executives—have a narrow window to act. By establishing clear AI governance now, investing in data foundations and workforce capability, and making sovereignty and trust non‑negotiable design principles, governments can unlock productivity gains while strengthening public confidence. This is not a technology transformation alone. It is an institutional one.

      Those that move decisively will:

      • Regain control over how AI is used across their organizations
      • Reduce privacy, security, and reputational risk
      • Enable employees to use AI confidently and responsibly
      • Deliver faster, more consistent, and more citizen‑centric services

      Those that delay risk falling into a permanent state of reactive governance—responding to incidents rather than shaping outcomes.

      KPMG believes responsible AI can and should become a defining strength of Canada’s public sector. With the right choices today, governments can move from pilots to platforms, from anxiety to confidence, and from experimentation to trusted impact—for the benefit of Canadians.

      The time to lead is now.

