
Young people are embracing AI and want to learn more about using it safely and responsibly. But they, and their parents and carers, urgently need more support.

Everyone’s talking about AI, and almost all (97%) children and young people aged 8 to 17 have used AI in some form. With many children and young people using AI online on a weekly, and even daily, basis, it’s important they have the skills and confidence to make safe choices when using smart technology. We are exploring the impact of AI on all of our lives, looking at how AI can be used for good as well as the risks it poses, and giving advice and guidance that empowers people to use AI safely and responsibly.

There is much enthusiasm among young people about AI. 80% have seen AI used in positive ways and 58% think AI makes their life better. They are positive about the future too, with 73% of young people feeling that knowing how to use AI will help them in their future careers. But young people of all ages, and their parents and carers, also have real concerns. While 41% of young people think AI can be a great source of emotional support, just as many (45%) are worried about people their age getting really close to AI, like it’s a friend. Over half (52%) of parents and carers are also worried about their child relying heavily on AI for emotional support or help with personal issues.

When it comes to studying, we know that 73% of young people find AI useful. But we also know that 61% of 8 to 12-year-olds and 73% of 13 to 17-year-olds think that people their age rely heavily on AI for studying and homework, and 50% of young people have even seen people their age using AI to do their school or homework for them. Our research suggests that this heavy use of AI in studying is giving rise to issues of trust between schools and pupils, with over half (53%) of young people worried that their school may think that they used AI for their work when they didn’t.

Inappropriate and potentially harmful content made using AI is also a major source of concern for young people and their parents and carers. 60% of young people are worried about someone using AI to make inappropriate pictures of them and 65% of parents and carers are worried about this for their own children. 12% of 13 to 17-year-olds have seen people their age using AI to make sexual pictures or videos of other people and, even among younger children, 14% of 8 to 12-year-olds have seen people their age using AI to make rude or inappropriate pictures or videos of other people. This is an area that requires urgent attention. There is a clear need for the providers of this technology to address this risk, but we also need to look at how we can educate and support young people themselves to act safely and responsibly.

As they go about their online lives, young people have questions about AI and are keen to learn more and equip themselves to use AI safely and responsibly. They worry about transparency, for example: 60% worry about not being able to tell if something is real or made by AI, and 75% think it is getting harder to tell. But they also want to learn, with over half (51%) asking for more lessons at school about how to use AI safely and responsibly.

Our research shows loud and clear that parents and carers are playing a critical role as the primary source of advice and support for young people when it comes to AI. Families are willing to have important conversations: 74% of young people would talk to a parent or carer if they were worried about AI and 72% of parents and carers feel confident talking to their child about the safe and responsible use of AI. But parents and carers urgently need more support and resources too. Less than one in five (19%) have set rules or guidelines for how their child can use AI at home and only 13% know where to go for advice or support if they are worried about their child’s use of AI.

We would like to encourage important discussions both at home and among wider stakeholders about the full breadth of measures we can take to support and protect young people in the context of AI. This includes:

  • continuing to improve and adequately resource online safety education
  • providing parents and carers with the information and resources they need to support their children
  • improving routes to report potentially harmful or illegal content made using AI
  • building better protections into AI technology

AI is part of everyday life for all of us, including young people, whether in their studying and schoolwork, tools for everyday living, online gaming, online interactions with each other, or seeking advice and emotional support.

AI is also relevant for young people who do not use it directly: their peers will likely be using it, which may affect them in turn, and they will see AI content and services proliferating around them. Young people’s real-life experience of AI is invaluable, and we must create opportunities to listen to and learn from their perspectives. This Safer Internet Day and going forward, we must champion their ideas about how we can best support them to make safe choices about this smart technology.

About this research

This research was commissioned and funded by Nominet to support Safer Internet Day 2026. The summary above is taken from the report by the UK Safer Internet Centre for Safer Internet Day 2026, written by Will Gardner OBE, Director of the UK Safer Internet Centre.

Childnet led on devising the study, with input from Nominet. The research was carried out by Opinium in November 2025. Opinium conducted two surveys.

The first surveyed 2,018 children aged 8 to 17 in the UK; the second surveyed 2,000 parents and carers of children aged 8 to 17 in the UK. The data from both surveys has been weighted to be nationally representative.

Childnet also consulted its Digital Leaders, Digital Champions and its Youth Advisory Board, aged 8 to 18, in November and December 2025, and ran focus groups with young people in primary and secondary schools in May and June 2025.


 

Will Gardner OBE

CEO Childnet International

Will Gardner is the CEO of children’s charity Childnet International. Will joined Childnet in 2000 and was appointed CEO in 2009. He is a Director of UKSIC, a partnership between Childnet, the Internet Watch Foundation and the SWGfL, and as part of UKSIC organises Safer Internet Day in the UK. He is also an Executive Board member of the UK Council for Internet Safety and chairs the Early Warning Working Group of helplines, hotlines and law enforcement. Will also sits on Facebook’s Safety Advisory Board and Twitter’s Trust and Safety Council.

In his time at Childnet Will has led national and international projects, and has led the development of Childnet’s range of award-winning internet safety programmes and resources aimed at children, parents, carers, teachers and schools.

Will was awarded an OBE in the 2018 New Year Honours list for his work in the field of children’s online safety.