Launch of Our New Report: Children’s Perspectives on AI and Online Safety
Today, we launched our new research report exploring the opportunities and risks that Artificial Intelligence (AI) presents for children across diverse global contexts.
AI is often described as a double-edged sword. On the one hand, it offers enormous potential for education, creativity, productivity and even emotional support. On the other hand, it can accelerate risks, including online child sexual exploitation and abuse. This report set out to better understand this balance, grounded not only in legal and technical debates, but in the lived experiences and voices of children themselves.
What We Did
The study combined three components:
- Desk review mapping the current evidence on AI and online sexual exploitation of children
- In-depth interviews with six leading AI and technology experts
- Primary research with children and stakeholders across five contexts: Cambodia, Nepal, the Philippines, Kenya, and children displaced from Ukraine consulted in Poland and Slovakia
In total, we conducted focus groups with 155 children and young people and 87 stakeholders, using child-centred and participatory methods such as drawing exercises, voting activities, and the co-creation of an “AI rulebook.”
What We Found
Children are active and frequent users of AI tools such as ChatGPT, Gemini and Siri, often on a daily basis. They use AI primarily for educational support, creative activities, and entertainment. Notably, some children described using AI chatbots for emotional support and advice.
While children generally see AI as a neutral and helpful tool that makes life “faster” and “easier,” important gaps in understanding remain. Many children are unsure how AI works, how their data is used, or how AI differs from social media platforms. Discussions revealed mixed feelings: excitement about AI’s possibilities, alongside concerns about misinformation, deepfakes, privacy, addiction, scams, and over-reliance.
Strikingly, there was a disconnect between children’s discussions and the strong emphasis in existing literature on AI-facilitated sexual exploitation risks. While children were aware of certain risks, these were not always top of mind, and in some cases appeared difficult to discuss in group settings.
At the same time, children were clear: they do not want to disengage from AI. Instead, they want guidance, clearer safeguards, and stronger protection systems that allow them to benefit from AI safely.
The report also highlights an important gap: while much attention focuses on AI as a risk factor, participants had far less to say about how AI could actively be used to prevent harm and strengthen child protection.