Education is a high-impact, high-sensitivity sector. Decisions influenced by Artificial Intelligence (AI), such as learning progression, assessment outcomes or access to employment opportunities, can have lifelong consequences for stakeholders, especially learners. At the same time, India’s education ecosystem is characterised by varying institutional capacity, learner preparedness, language contexts and levels of digital maturity. Alongside this are real risks: misuse of learner data and breaches of privacy; AI tutoring that over-personalises and replaces critical human mentorship; models trained on global datasets misaligned with India’s linguistic, cultural and pedagogical diversity; and exclusion from paywalled platforms, which widens inequality.

      Global policy guidance and empirical research have raised similar concerns. International frameworks on the ethical adoption of AI underline the need for strong human-rights protections and caution against surveillance risks, especially for vulnerable populations [1]. Policy analysis has also emphasised privacy, bias and autonomy risks in education use cases [2]. Academic research has linked excessive screen time to cognitive and behavioural harms [3].

      Against this backdrop, ‘one-size-fits-all’ regulatory models built for homogeneous systems or smaller markets cannot be directly adopted. Instead, India’s response should be an enablement-first, ‘light-but-tight’ approach, i.e. encourage experimentation while tightly guarding the public interest. This is consistent with the National Education Policy, which promotes technology for personalisation and access while embedding safeguards, and with India’s national AI strategy, which prioritises foundational infrastructure and locally relevant datasets.

      A fit-for-purpose AI framework for education should cover the entire value chain, from curriculum design, content creation, formative assessment, tutoring and learning analytics to administrative systems (admissions, credentialing), teacher professional development and labour market linkages. In our view, such a framework revolves around five areas:

      • Risk-based differentiation

        Not all AI applications in education require the same level of oversight. Personalised learning tools or AI-assisted content creation pose fundamentally different risks from automated assessment engines or AI-supported admissions decisions. Regulation should be calibrated accordingly, with enhanced scrutiny reserved for applications that materially affect learner outcomes, and should factor in explainability and human-in-the-loop options.

      • Innovation through controlled experimentation

        Education-specific sandboxes can allow institutions, researchers, ed-tech companies and public bodies to test AI solutions under regulatory oversight. Such environments can accelerate safe innovation while producing evidence for national scale-up.

      • Minimum trust standards and registries

        Baseline requirements could include documentation of data sources, model limitations, performance benchmarks and human oversight mechanisms, particularly for systems deployed in publicly funded institutions. A public registry of such AI systems could improve transparency and accountability (a minimal illustrative sketch of a registry entry follows this list).

      • Capacity building

        Effective regulation depends on the capacity of those implementing and overseeing it. Teachers, administrators, assessors and regulators require foundational AI literacy to interpret system outputs, question recommendations, safeguard student well-being and exercise informed judgement.

      • Alignment with public interest objectives

        AI adoption in education should advance national priorities such as inclusion, quality and mobility. Regulatory design should actively encourage solutions that address language diversity, regional disparities and access challenges. We should learn from global practice but evolve India-specific norms that reflect linguistic diversity, equity goals and local pedagogy.
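
      To make the ‘minimum trust standards and registries’ point concrete, the sketch below shows what a single registry entry might capture, written as a small Python data structure. The field names (data_sources, known_limitations, performance_benchmarks, human_oversight, risk_tier) and the example values are our own illustrative assumptions, not a prescribed national schema.

```python
# Minimal, illustrative sketch of a registry entry for an AI system deployed
# in a publicly funded educational institution. All names and values are
# hypothetical; a real registry schema would be defined by the regulator.
# Requires Python 3.9+ (built-in generic type hints).
from dataclasses import dataclass, field, asdict
import json

# Illustrative risk tiers, mirroring the "risk-based differentiation" idea above.
RISK_TIERS = ("low", "medium", "high")


@dataclass
class RegistryEntry:
    """One hypothetical entry in a public registry of deployed AI systems."""
    system_name: str
    provider: str
    deploying_institution: str
    purpose: str                                        # e.g. "formative assessment"
    risk_tier: str                                       # one of RISK_TIERS
    data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    performance_benchmarks: dict[str, float] = field(default_factory=dict)
    human_oversight: str = ""                            # who reviews or can override outputs
    languages_supported: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"risk_tier must be one of {RISK_TIERS}")


# Example entry (all values invented for illustration).
entry = RegistryEntry(
    system_name="ExampleTutor",
    provider="Example EdTech Pvt Ltd",
    deploying_institution="A publicly funded university",
    purpose="AI-assisted formative assessment",
    risk_tier="high",
    data_sources=["Licensed textbook corpus", "Anonymised past assessments"],
    known_limitations=["Limited coverage of low-resource Indian languages"],
    performance_benchmarks={"grading_agreement_with_faculty": 0.87},
    human_oversight="Faculty review all grades before release",
    languages_supported=["English", "Hindi"],
)

# A registry could publish entries as JSON for transparency and audit.
print(json.dumps(asdict(entry), indent=2))
```

      Publishing entries in a machine-readable form such as this would make it easier for regulators, institutions and researchers to see what is deployed where and under which oversight arrangements.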

      Across India, educational institutions are integrating AI in ways that reflect an enablement-oriented philosophy. These initiatives include AI-driven learning analytics that identify student learning gaps while preserving faculty authority; language and speech technologies that expand access across multiple Indian languages; and the embedding of AI into academic programmes with modules on ethics and responsible use. A notable example is a leading institution that allows students to use generative AI tools in exams and assignments but requires them to submit the prompts used. This shifts evaluation from rote memory to prompt quality, reasoning and critical thinking, reflecting a system that wants to create active innovators rather than passive consumers.

      An enablement-first approach can be operationalised through pragmatic policy instruments, such as:

      • Tiered regulatory obligations linked to the risk profile of AI applications

      • Education-specific AI sandboxes run with states

      • Public funding tied to defined trust standards

      • Registries of AI systems deployed in publicly funded educational institutions

      • Incentives and grants for solutions supporting language inclusion, low-bandwidth access, open datasets (like AI Kosh) and auditable AI systems

      Many of these measures can be implemented through guidelines, funding conditions and institutional mandates without requiring immediate legislation. Over the next 12 to 24 months, India has an opportunity to shape AI governance in education by publishing baseline trust standards for high-risk AI education use cases, piloting state-level regulatory sandboxes and investing in capacity building.

      The objective is not to eliminate risk, which is neither feasible nor desirable, but to ensure that AI in education remains transparent, accountable and aligned with public purpose. For India, the right path is pragmatic and contextual, i.e. build open standards and local datasets, experiment prudently across the whole education value chain and uphold the highest trust standards. Enable innovation, but with fairness, accountability and national purpose at the core.

      [1] Recommendation on the Ethics of Artificial Intelligence, UNESCO, November 2021

      [2] Vincent-Lancrin, S. and R. van der Vlies (2020), “Trustworthy artificial intelligence (AI) in education: Promises and challenges”, OECD Education Working Papers, No. 218, OECD Publishing, Paris, https://doi.org/10.1787/a6c90fa9-en

      [3] Children born early at risk from too much screen time, Stanford Digital, Erin Digitale, October 2021

      Co-authored with Deewakar Gupta, Technical Director, G&PS, KPMG in India
