Nothing About Us Without Us: Shaping Our Digital Childhood
Position Paper by Children and Young People in the Philippines on
Our Rights in the Age of Artificial Intelligence
Why We Are Speaking Up
We are a generation growing up alongside artificial intelligence. AI is not an abstract idea for us. It is something we interact with almost every day. It influences how we study, communicate, create, play, and express ourselves. As a result, AI is shaping our childhoods in ways no previous generation has experienced.
For many of us, AI has become a “study buddy,” simplifying lessons when classroom explanations aren’t enough. Others see it as a creative partner that helps us design videos, edit photos, or organise school projects. Some of us use AI for emotional support: to rant, express our frustrations, or ask questions we might be embarrassed to ask adults. As one participant shared, “When I have no one to talk to, I turn to AI. It listens without judging.”
But AI also confuses us, exposes us to risks, overwhelms us, and sometimes makes us feel unsafe. We worry about harmful content, violations of privacy, and the blurring of what is real and what is AI-generated. As one of us said, “The quality is so high that if we don’t look closely, we won’t notice the AI signs.” Another added, “You can barely tell which is real and which is not anymore.”
We are raising our voices because AI is part of our lives now, whether adults are ready for that reality or not. And because it affects us so deeply, we want and deserve to be part of the conversation about how AI is used, governed, and designed.
How AI Is Already Part of Our Everyday Lives
Across the consultations, we realised that we, children and young people all over the country, use AI in similar ways. In school, we turn to tools like ChatGPT, Gemini, Notion AI, Grammarly, and others to simplify difficult topics, summarise long modules, translate texts, and improve our writing. When teachers give topics without full explanations, we use AI to research on our own so we don’t fall behind.
AI is also part of our creative lives. We use CapCut and Canva to edit performance videos and make presentations. Some of us use photo editors or filters to enhance school projects or create digital art. For children with diverse learning needs, AI makes education more accessible by reading text aloud, simplifying complex lessons, converting speech to text, and providing visual learning aids. Because of this, some of us see AI as a powerful tool for inclusive education.
Outside academics, AI is woven into our daily routines. It listens when we rant, motivates us when we are stressed, and gives advice when we feel lost. It helps us plan our days, organise tasks, or even reflect on personal problems. In moments when we have no one to talk to, AI sometimes feels like a companion.
Some of us shared that our siblings or friends talk to AI every day, to the point that it affects their daily behaviour at home. Some already prefer chatting with AI over real people, and others have developed deep feelings for AI characters and now struggle to relate to real people because “no one can meet the standards set by AI.”
AI is not “the future” for us; it is our present.
What AI Helps Us With
Because AI is readily available and responsive, it enables many of us to learn faster and more independently. It provides instant explanations for concepts we struggle with and allows us to prepare school materials quickly, especially when we are overwhelmed with deadlines or juggling school with advocacy, part-time work, or family responsibilities.
AI also supports the way we express ourselves. Some of us use AI to seek the attention and belonging we need. We share our emotions with AI because it feels safer than opening up to other people. It can be comforting to have a space that listens without judgment. It also boosts our productivity by helping us organise our thoughts and create materials quickly. AI makes multitasking more manageable.
For children with disabilities or learning difficulties, AI can be a life-changing tool. Tools that read aloud, simplify text, visualise concepts, or convert speech to writing allow us to learn in ways that suit our needs.
Some of us also see AI as a partner in our advocacy work. AI helps in promoting children’s rights, reminding us of our capabilities and motivating us to face challenges. For others, AI is a way to “work smarter, not harder.”
Used responsibly, AI makes us feel more capable, creative, and confident.
How AI Can Hurt Us
But alongside these benefits, we face harms that cannot be ignored.
One of our biggest concerns is over-reliance. Many of us have noticed how easy it is to depend on AI before even trying to think for ourselves. Some of us no longer attempt to write or brainstorm on our own because AI can do it faster. Many worry that this over-reliance will harm our futures, not just our learning today. We fear entering the workforce with weaker critical thinking, communication, and problem-solving skills, which could lead to “poor career prospects,” “difficulty competing,” or feeling unprepared for jobs where creativity, judgment, and human insight are essential.
When we face exams that do not allow AI or tasks that require our own ideas, we sometimes feel unprepared or anxious. We also worry that too much dependence on AI makes learning feel empty – as if the process itself matters less because AI can do it for us.
Another major concern is safety. Some of us encounter sexual or violent content even while using AI for school. Ads with nudity appear on educational platforms. AI-generated violent scenes confuse us because they look so real. Manipulated or “nudified” photos are becoming common. A few of us have experienced harassment when our pictures were taken and shared without permission.
These harms do not affect us equally. Girls and LGBTQI+ youth told us they experience more harassment, more body-shaming, and more risks of their photos being sexualised or stolen. In BARMM, Muslim girls stressed how AI-generated or edited images violate their cultural and religious expectations of modesty. Because AI can easily manipulate photos, we become especially vulnerable.
These experiences leave us feeling violated, scared, or ashamed, even if no physical contact ever happened.
Misinformation is also a constant problem. AI can give inconsistent or wrong answers. Deepfakes and fake content spread quickly, and even adults, including our own parents, struggle to identify what is real. AI chatbots can also generate harmful words or dangerous suggestions if not used carefully, and anonymity online emboldens offenders. The pressure to keep up with AI’s speed adds to our burnout.
Privacy is another major worry. We often don’t understand how our data is collected, stored, or used. Even when we delete things, we fear that someone, somewhere, still has access to them. Some of us feel uneasy that AI can infer personal details based on past prompts, making us wonder what information we may have unknowingly revealed.
For many of us, AI feels like a “double-edged sword.” On one hand, it gives us a sense of safety because it does not judge us, does not get angry, and listens anytime. But on the other hand, every message we send becomes part of a dataset we do not fully understand. We may share personal stories, emotions, or identifiable information without grasping how these will be used. AI’s comfort can make us forget the risks, lowering our guard even as the system quietly harvests sensitive data.
Many of us shared that AI also affects our mental and emotional well-being. Social media, powered by AI, often shows idealised lifestyles and beauty standards that pressure us to compare ourselves with others. Online negativity and hate speech, especially toward LGBTQI+ youth, drain our self-esteem.
What Makes Us Feel Safe, or Unsafe, Online
From our discussions, we feel safest when we have guidance from trusted adults – parents, teachers, relatives – who talk to us openly about risks and help us navigate the digital world. We feel more secure when platforms automatically filter harmful content, protect our privacy, and give us clear reporting tools that actually work. We feel safer in digital spaces that are inclusive, respectful, and aware of the unique experiences of children from different backgrounds, including those of us in marginalised or conflict-affected communities.
We feel unsafe when we navigate the online world alone, especially when curiosity leads us to unsafe spaces. Weak age verification makes it easy for us to access content meant for adults or for predators to find us. We feel unsafe when platforms ignore reports or take too long to respond. We feel exposed when our data is collected without clear explanations. We also feel unsafe when adults themselves lack digital or AI literacy, because it means they cannot support or protect us.
A safe online space, for us, is one where harmful content is automatically filtered out, predators are blocked, privacy is respected, and children can explore, learn, and connect without fear.
Our Stand
We believe that AI is inevitable, and it will continue to grow. But harm is not inevitable. We have the right to use AI in ways that support our education, creativity, and well-being. We also have the right to be protected from AI-enabled abuse, manipulation, misinformation, discrimination, and privacy violations. Most importantly, we have the right to be heard. We should be part of decisions about AI because we are the ones who live with it every day.
We want adults, tech companies, and the government to recognise that keeping children safe online is a shared responsibility. None of us can do this alone, not children, not parents, not teachers, not policymakers. Everyone must work together.
What We Expect from Adults, Tech Companies, and Government
We demand that the government create clear, child-focused AI policies that recognise our rights. We want laws that treat AI-generated sexual abuse materials and deepfakes as serious crimes. We also want digital and AI literacy taught in schools, so that children and adults can understand how AI works and how to stay safe. In remote regions like BARMM, we hope for child protection policies that reflect our cultural and community realities. In urban areas like NCR, we call for stronger enforcement of anti-cyberbullying laws and faster responses to online harassment. In the Visayas, we ask for stricter rules on AI image generation, deepfakes, and the misuse of AI in art and creative spaces. In regions across Mindanao, we ask for accessible reporting systems and enhanced digital literacy programs for families.
We urge the National Privacy Commission to require platforms to explain data collection in language children can understand and to ensure that our data is handled responsibly and transparently. High-risk AI systems used in schools and social media should be monitored strictly.
We call upon tech companies to design platforms that are safe by default, not only when settings are adjusted. Harmful prompts should be blocked, and AI should not be allowed to generate sexualised content involving minors. We want fast responses to reports, child-friendly spaces, and genuine engagement with young people, especially those who are often excluded from decision-making. Finally, we challenge parents, teachers, and community leaders to guide us without shaming or punishing us. Learn about AI with us. Talk to us. Support us when we face risks. Create environments where we feel safe to share our experiences and ask for help.
Join us!
We are almost 300 children and young people from across the Philippines. We speak not as one group, but as children from NCR, Luzon, Visayas, Mindanao, and BARMM. We live in different areas, each with different realities, but we share the same digital space.
We use AI every day, for learning, creating, coping, and exploring. We know its benefits and its dangers. We are not asking for AI to disappear from our lives; we are asking for it to be used and managed in ways that respect our rights and protect our well-being.
We ask adults to treat us not just as users of technology but as partners in shaping it.
We ask tech companies to value our safety over profit.
We ask the government to prioritise children’s rights in every AI policy.
We want a digital world where we can learn, express ourselves, and build our futures without fear.
AI is inevitable. Harm is not. Let us build an online world that protects our dignity, nurtures our potential, and listens to our voices.
This position paper is based on the national consultation conducted by Terre des Hommes Netherlands in the Philippines with almost 300 children and young people from different regions across the country, including NCR, Luzon, Visayas, Mindanao, and BARMM.
AI tools were used only to help consolidate and organise the large volume of inputs gathered.
All analyses, perspectives, insights, concerns, and recommendations in this paper come directly from children and young people themselves. They reflect our lived experiences, our understanding of AI, and our calls for a safer, more inclusive digital world.