Written by Mar Negreiro.
Children are intensive users of digital tools, including those powered by artificial intelligence (AI). Generative AI – AI that can create new content such as text, images, videos and music – is becoming increasingly sophisticated, making it difficult to distinguish user-generated content from AI-generated (synthetic) content. Without proper supervision, these tools may carry risks for children, whose cognitive capacities are still developing. The need to strengthen generative AI literacy for children, educators and parents is therefore becoming increasingly important, along with the need for greater efforts by industry and enhanced implementation of AI legislation, including monitoring indicators.
The first generation of digital natives growing up with AI
Children and teenagers are avid internet users. Most children in the EU use their smartphones daily, and do so from a much younger age than a decade ago. Often, however, the online environments children access were not originally designed for them. Some countries, such as Australia, have passed laws to prevent children under the age of 16 from using social media platforms. At the same time, younger children have little difficulty bypassing the age requirements set by services such as social media platforms. Likewise, children are already using AI embedded in apps, toys, virtual assistants, games and learning software. A 2024 survey conducted in the United Kingdom (UK) showed that 77.1 % of 13- to 18-year-olds had used generative AI, and findings suggest that they are twice as likely as adults to use it. The most common uses are getting help with homework and seeking entertainment. The UK, under its AI opportunities action plan, favours the implementation of AI in schools, provided it is used under supervision.
As with other digital technologies, the most popular AI tools are not adopting specific measures to adapt their features to under-age users, aside from setting a minimum age for use. Anthropic's Claude, for instance, does not allow users under the age of 18 to use its services. ChatGPT requests parental consent for users aged between 13 and 18. Google has recently adapted its Gemini AI chatbot by lowering the minimum age requirement from 18 to 13 years (for student accounts only) and adopting additional protection measures, such as excluding those young users' data from its AI model training. Brazil adopted a similar protection measure by banning social media platform X from training its AI on children's personal data.
Opportunities and challenges
Opportunities
Generative AI may bring many potential benefits, for instance when AI-driven tools are integrated into education: AI can help children develop a sense of curiosity and innovation, encouraging them to ask questions, to experiment, and to find solutions to real-world problems.
When designing AI tools for learning purposes, providers of generative AI may influence children's interactions positively, for example by guiding children in developing their writing skills rather than writing for them. Integrating AI into education could also enhance accessibility for students with disabilities by supporting diverse learning needs.
Challenges
Conversely, if not implemented adequately, generative AI may interfere with a child's learning and school development. UNICEF highlights that the way children interact with AI has both physiological and psychological implications. The recently formed international Beneficial AI for Children Coalition, involving multiple stakeholders, has committed to putting forward guidelines to evaluate impact and mitigate risks.
The following are some key challenges associated with generative AI.
Synthetic reality
The 2024 edition of the World Economic Forum's Global Risks Report ranks disinformation as the most serious risk the world may face in the next 2 years – a risk likely to increase with the rise in synthetically generated content. In one survey, even expert linguists incorrectly perceived 62 % of AI-generated content as human-created. Children are particularly vulnerable to synthetic content such as deepfakes and, because of their still-developing cognitive abilities, can be manipulated more easily. A Massachusetts Institute of Technology Media Lab study has shown that 7‑year‑olds tend to attribute real feelings and personality to AI agents. Generative AI may also be used for malicious purposes against children, including cyberbullying and online grooming. The increase in AI‑generated child sexual abuse material online is already a growing challenge for law enforcement.
Reduced critical thinking
Significant concerns centre on the potential consequences of AI-assisted learning for students' research, writing and argumentation skills: generative AI's capacity for data analysis and automation could reduce students' cognitive skills, in particular their critical thinking and problem solving. However, some research advocates integrating AI into learning tools precisely to strengthen critical thinking and problem solving, as this would help students develop the analytical skills needed for a technology-driven future.
Digital divides and AI literacy
According to UNESCO, AI literacy entails the skills and knowledge required to use AI tools effectively in everyday life, with an awareness of the risks and opportunities associated with them. Incorporating AI literacy into education is therefore essential for building the foundational understanding and skills needed to bridge the digital divide and foster inclusion. Despite its pivotal role in learning development, AI literacy is still more commonly taught at secondary schools and universities than at primary schools. From a gender perspective, the Organisation for Economic Co-operation and Development (OECD) highlights that AI may exacerbate gender disparities if gender equality issues are not addressed adequately when AI tools are trained. Moreover, AI tools are mainly trained on the world's three most spoken languages (Chinese, English and Spanish), making AI less safe for speakers of low-resource languages (those for which only limited linguistic data are available for training AI models), since AI tools are less precise in those languages. Educational stakeholders are likely to play a key role in tackling these concerns by preparing teachers for the ethical use of AI and by adapting curricula.
Current EU action and way forward
The EU seeks to secure children's online rights. The Digital Services Act (DSA) requires digital platforms to prioritise children's safety and privacy and to protect them from illegal content. The measures to be put in place include age verification and a ban on advertising based on the profiling of minors. The EU's Better Internet for Kids (BIK+) strategy seeks to boost digital literacy, provide awareness-raising material, information and educational resources, and create a safer internet environment for young people. Yet neither the Council recommendation on key competences for lifelong learning (last updated in 2018) nor the Digital Decade policy programme 2030 includes AI literacy as a specific competence, and only a few Member States have introduced AI competences in their school curricula. The European Commission's ethical guidelines on the use of AI and data in teaching are meant as a tool for educators.
The EU's recently adopted AI Act is the world's first comprehensive AI law. It sets uniform rules to create a single market for trustworthy AI applications that fully respect fundamental rights, including children's rights. It has entered into force and will apply in full from 2 August 2027. The act classifies AI systems as high risk in some areas of education, such as access or admission to education, the evaluation of learning outcomes, the assessment of appropriate educational levels, and the detection of prohibited behaviour by students during tests. General provisions will also benefit children once implemented, such as the requirements to watermark deepfakes and other AI-generated material, and to inform children when they are interacting with an AI system.
The General Data Protection Regulation (GDPR) states that children merit specific protection, as they may be less aware of the risks and consequences involved in the disclosure and processing of their personal data. With the rise of AI, calls for greater data literacy are therefore growing.
Overall, a significant gap needs to be addressed to avoid AI-related digital divides, and additional research is necessary to fully understand the future implications of children's use of generative AI. For instance, no common EU-level indicators or statistics are currently available under the Digital Decade compass or the digital economy and society index (DESI) dashboard. The next review of the Digital Decade policy programme, envisaged for June 2026, might offer an opportunity for the European Commission to introduce target indicators in this area.
Read this ‘at a glance’ note on ‘Children and generative AI’ in the Think Tank pages of the European Parliament.