
AI for children: Navigating the legal and regulatory terrain in India

20 May 2025

by TMT Team

Google has reportedly launched a child-oriented version of its Gemini AI chatbot, Gemini AI Kids, marking a significant development in the integration of generative artificial intelligence (‘Gen-AI’) into early education.[1] The chatbot is being rolled out to users below the age of thirteen through supervised accounts on the Family Link app.

The introduction of artificial intelligence (‘AI’) products and services to children raises critical concerns regarding child safety, data privacy, psychological well-being, and regulatory oversight in India, particularly given the absence of a dedicated legal framework governing AI and children’s digital rights.

Regulatory framework for AI and children’s data in India

AI operates on the premise of data collection and analysis, making it crucial to address data protection concerns when engaging with AI technologies. India’s primary data protection statute, the Digital Personal Data Protection Act, 2023 (‘DPDP Act’), explicitly recognizes the importance of protecting children’s personal data in digital ecosystems. To this end, Section 9 of the DPDP Act mandates that Data Fiduciaries (entities which determine the purposes and means of processing) obtain ‘verifiable consent’ of parents or legal guardians before processing the personal data of children, i.e., individuals below eighteen years of age. The Act, however, stops short of prescribing an age-gating requirement.

The DPDP Act explicitly restricts Data Fiduciaries from processing personal data in a manner likely to harm a child’s well-being, and from undertaking behavioural monitoring of children or targeted advertising directed at them. Courts in India have consistently held that a child’s well-being extends beyond physical care to include moral, ethical and emotional development.[2] While many applications implement certain safeguards, such as opting not to use children’s interactions for future model training, these measures are self-regulatory in nature and may not specifically cater to the well-being of each individual child. Further, in many cases, platform disclaimers expressly acknowledge that users (particularly children) may still be exposed to inappropriate content, underscoring the limitations of existing controls.

The DPDP Act further imposes enhanced obligations on certain Data Fiduciaries that may be notified as Significant Data Fiduciaries (‘SDFs’) based on, amongst other criteria, the volume and sensitivity of personal data they process. Given that AI chatbots interacting with children may collect and process personal data at scale, they may be categorised as SDFs and be required to comply with enhanced requirements, including exercising due diligence to verify that algorithmic software (such as AI tools) deployed by them is not likely to cause harm to the rights of children while processing their personal data.

While the DPDP Act and the draft Digital Personal Data Protection Rules, 2025 (‘Draft Rules’) prescribe the methodology for obtaining parental consent in a ‘verifiable manner’, they do not, as is typical of data privacy legislation, provide a broader vision for securing children from potentially harmful outputs, or address the long-term psychological impact and emotional safety of minors engaging with generative AI models. Certain safeguards do exist under other laws, including the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and the Bharatiya Nyaya Sanhita, 2023, such as requirements to inform users not to upload content harmful to children, technology-based measures to detect child sexual abuse material, measures by online gaming intermediaries to safeguard children, content classification by publishers of online curated content, and offences against children. However, a comprehensive framework that regulates AI systems, with measures for age-gating, specification of risks and harms, and risk-based classification with associated restrictions and limitations, remains absent.

Conclusion

Globally, certain regulatory bodies have begun to define the contours of AI safety in relation to minors. In the United States, California is advancing the Leading Ethical AI Development for Kids Act, legislation that would compel AI systems interacting with children to clearly disclose their non-human nature, prevent over-reliance on AI for serious emotional issues, and undergo periodic safety audits. The European Union’s AI Act goes further, prohibiting AI systems that exploit the vulnerabilities of children and treating several applications likely to affect them, such as AI used in education, as ‘high-risk’, thereby subjecting them to enhanced transparency, accountability, and human oversight requirements.

While Indian legislation (such as the mandatory reporting obligations under the POCSO Act[3]) and guidelines issued to schools and educational institutions on cyberbullying encourage the reporting of cybercrimes, bullying and other incidents, they do not establish a robust framework with an active regulator equipped to tackle the growing issues associated with emerging technologies such as Gen-AI. Amidst a global consensus on protecting children, the DPDP Act and other legislation must recognize children as a vulnerable group and provide a framework that protects their digital rights while balancing innovation.

[This article is authored by the TMT Team at Lakshmikumaran & Sridharan Attorneys]

 

[1] ‘Google Plans to Roll Out Gemini A.I. Chatbot to Children Under 13’, The New York Times.

[2] Sheoli Hati v. Somnath Das, AIR 2019 SC 3245

[3] Sections 19 and 20, Protection of Children from Sexual Offences Act, 2012.
