A district welfare officer scans a list of households flagged by an AI system as high risk for benefit fraud. The model has drawn on vast datasets, including land records, tax filings, subsidy histories, and patterns of electricity consumption, to prioritise cases for review. While the output is clear, the reasoning behind it is not. Some selections appear obvious, but others raise difficult questions. Acting solely on the system’s recommendations could result in deserving families losing essential support. Ignoring it, however, risks allowing genuine misuse to go unchecked. In that moment, the officer is not just making an administrative decision; he is navigating the complex balance between efficiency, fairness, and trust in the use of AI.

      This is not a distant or hypothetical scenario. Governments across the world are already experimenting with artificial intelligence to support decisions in welfare delivery, regulatory oversight, and public service provision. The scenario highlights a central challenge that AI introduces into public governance: citizen trust must evolve alongside technological capability, not follow it.

      Trust in government technology has never been defined only by uptime, dashboards or interface design. While these elements matter, the deeper question in a democracy is whether citizens believe that the state exercises its power fairly. AI does not change that expectation; it intensifies it.

      From ‘AI for ALL’ to ‘AI You Can Trust’

      As artificial intelligence begins to guide welfare targeting, regulatory oversight and service delivery in India, that question becomes even more pertinent. AI can help governments work faster and make more informed decisions, but when people cannot see or understand how those decisions are made, the system can start to feel distant or opaque. The shift, therefore, is not just from ‘digital systems’ to ‘AI-enabled systems.’ It is from ‘AI for All’ to something more demanding: ‘AI you can trust.’

      India’s early articulation of ‘AI for All’ through NITI Aayog’s National Strategy for AI (NSAI) was aspirational and forward-looking.1 It placed artificial intelligence at the centre of inclusive growth across health, agriculture, education, smart cities and mobility. That framing was important: it signalled an ambition to adopt AI with trust and governance at its core.

      The India AI Mission and the development of national AI governance guidelines have taken the conversation from the policy level to the architecture level. The underlying recognition is simple but critical: trust must be designed into these systems, rather than pursued only as a by-product of efficiency gains.

      AI governance: From ambition to architecture

      India’s approach to governing AI is gradually taking shape around a set of practical principles. A key recognition is that ethics, transparency and accountability cannot be treated as optional features; they are fundamental to building AI systems that people can trust. The articulation of the responsible AI Sutras in the India AI Governance Guidelines – trust, human-centric design, innovation over restraint, fairness and equity, accountability, explainability and safety – reflects a desire to remain technology neutral while still being normatively clear.

      Equally important is the need for a whole-of-government approach. AI systems often rely on shared data infrastructure and interoperable platforms that cut across departmental boundaries. Fragmented governance could lead to uneven safeguards and potential blind spots. A coordinated model, supported by central oversight and common standards, is therefore essential for effective risk management.

      One essential acknowledgement is that the underlying technologies and their capabilities are still evolving. Their impact on large-scale adoption, the environment and human values is not yet fully understood. A phased roadmap spanning institution building, standard setting and regulatory frameworks would preserve room to adapt without stifling technological innovation.

      A trust-first AI state

      The articulation of the MANAV vision by the Hon’ble Prime Minister of India at the India AI Impact Summit adds an important political dimension to this discussion.3 By framing AI around values such as morality, accountability, national sovereignty, accessibility and legitimacy, the emphasis shifts from what technology can do to what it means for people. That shift is significant, because public trust is rarely built through technical documents alone; it strengthens when political leadership repeatedly signals that dignity, inclusion and citizens’ rights remain central.

      The real test now lies with ministries and state governments. Translating high-level principles into day-to-day administrative practice requires careful choices. These include procurement frameworks, model validation processes, audit mechanisms, human oversight structures, and accessible grievance redressal mechanisms. It also requires civil servants who are equipped, and comfortable enough, to question, interpret and challenge technological outputs. It demands clarity on who is accountable when automated outputs influence high-stakes outcomes.

      In our view, three shifts could determine whether India becomes a ‘trust-first’ AI state:

      • First, capability must keep pace with ambition. Public institutions need in-house expertise, not necessarily to build every model, but to interrogate models intelligently.

      • Second, risk management must be proportionate. Not every AI use case carries the same stakes. A chatbot summarising information does not demand the same scrutiny as an algorithm influencing eligibility or enforcement. Governance frameworks should recognise that difference.

      • Third, communication with citizens must be more deliberate. If AI influences public services, people deserve to know when it is being used and how they can seek recourse. Transparency cannot remain abstract; it must be visible and understandable.

      India stands at a pivotal moment where it can pursue innovation while upholding accountability. Success will not be defined by the extent of AI adoption, but by the integrity with which it is integrated into everyday governance. Trust cannot be established through policy intent alone; it must be built through consistent and visible practice. For AI to scale responsibly in the public sector, systems need to be clearly explainable, decisions must be open to challenge, and outcomes should remain aligned with citizen interests. Only then can AI move beyond administrative efficiency and serve as a credible and trusted instrument of public value.

      Authors

       

      Abhishek Verma

      Partner and Lead, Digital Government Advisory

      KPMG in India

      Shashikant Shukla

      Partner

      KPMG in India
