A district welfare officer scans a list of households flagged by an AI system as high risk for benefit fraud. The model has drawn on vast datasets, including land records, tax filings, subsidy histories, and patterns of electricity consumption, to prioritise cases for review. While the output is clear, the reasoning behind it is not. Some selections appear obvious, but others raise difficult questions. Acting solely on the system’s recommendations could result in deserving families losing essential support. Ignoring it, however, risks allowing genuine misuse to go unchecked. In that moment, the officer is not just making an administrative decision; he is navigating the complex balance between efficiency, fairness, and trust in the use of AI.
This is not a distant or hypothetical scenario. Governments across the world are already experimenting with artificial intelligence to support decisions in welfare delivery, regulatory oversight, and public service provision. The scenario highlights a central challenge that AI introduces into public governance: citizen trust must evolve alongside technological capability, not follow it.
Trust in government technology has never been defined only by uptime, dashboards, or interface design. While these elements matter, the deeper question in a democracy is whether citizens believe that the state exercises its power fairly. AI does not change that expectation; it intensifies it.