Generative artificial intelligence (gen AI) and machine learning are rapidly transforming the digital landscape and impacting on the lives of children and young people in Wales.

Although access to gen AI has the potential to bring significant advantages, many gen AI tools and services are not designed with children in mind, leading to various risks and ethical concerns.

The responsibility of safeguarding children and young people from gen AI’s potential harms is critical. This guide can support education practitioners in understanding the risks and in developing protocols for reporting concerns, escalating them to safeguarding leads and involving other agencies when necessary. The Welsh Government’s statutory Keeping learners safe guidance supports education settings to ensure they have effective safeguarding systems in place.

According to Ofcom’s 2023 Online Nation Report, the adoption of gen AI technologies is rapidly increasing, particularly among younger age groups. Among teenagers aged 13 to 17 in the UK, 79% have used gen AI tools such as ChatGPT, Snapchat My AI and DALL-E. Even among younger children (aged 7 to 12), 40% report using these tools. The most popular tool among children and teenagers is Snapchat My AI, used by 51% of those aged 7 to 17, while adults tend to prefer ChatGPT, which is used by 23% of adult internet users.

By contrast, only 31% of adults (aged 16 and over) have adopted gen AI, showing a significant gap in usage between age groups. Of the adults who have not used these tools, 24% were unaware of what gen AI is.

Gen AI systems are trained on large amounts of data so that they can find patterns and use those patterns to generate new content. Many platforms use this data to train machine learning models, sometimes without adequate consent or clear communication about how the data will be used or shared with third parties.
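
As a loose illustration of how pattern-finding underpins generation, the toy sketch below ‘trains’ on a tiny piece of invented sample text by counting which word tends to follow which, then generates new text from those counts. Real gen AI systems learn far richer patterns from vastly more data, but the underlying idea of learning statistical patterns from training data is similar.

```python
import random
from collections import defaultdict

# Toy illustration only: learn which word follows which in some sample
# training text, then generate new text from those learned patterns.
training_text = (
    "children use apps every day and apps use data every day "
    "so children share data every day"
)

# "Training": count word-to-next-word transitions (the learned patterns).
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Generation": produce new text by following the learned transitions.
random.seed(0)  # fixed seed so the example output is reproducible
word = "children"
output = [word]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```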

For children and young people, risks in the processing of their data may include loss of confidentiality, detrimental impacts from exposure to age-inappropriate content, exploitation and intrusions into their private spaces. Many AI platforms are not designed for children, and the lack of robust age-verification methods means those platforms are typically easy for children to access.

Many online platforms now incorporate AI-powered tools that can lead young people to unknowingly share personal information, such as location, preferences, and even emotions, during interactions. AI has become a core part of many everyday services, often using algorithms to personalise content, such as tailoring social media feeds based on an individual’s online behaviour.

When using gen AI tools, there are a range of safety implications and risks to navigate, including:

  • privacy and data security
  • exposure to harmful content
  • exploitation or misuse of these tools for malicious purposes

Schools play a vital role in supporting learners with digital safeguarding, even when issues or incidents occur outside of school.

Experiences such as AI-driven bullying, deepfake content or exposure to harmful material can significantly impact on the safety and wellbeing of learners.

Responding swiftly and confidently is important to ensure that children and young people are safeguarded, supported and educated.

By developing a consistent and structured approach to handling AI-related incidents, schools can empower learners to navigate the use of gen AI safely, responsibly and ethically. Policies should address AI-related risks by establishing clear reporting mechanisms and incident response plans that prioritise children’s safety and wellbeing.

This guidance aims to help schools and practitioners understand some of the common concerns and challenges that gen AI may pose, which can be incorporated into existing safeguarding and child protection policies.

Deepfake technology

Deepfake technology uses artificial intelligence (AI) to create realistic but fake videos, images or audio recordings. As deepfakes become more sophisticated, it is becoming harder to distinguish between real and fake content.

This presents serious risks to young people, including exposure to harmful or illegal content and susceptibility to the spread of misinformation or disinformation.

There are a range of scenarios that education practitioners may be faced with.

Exposure to illegal content

The Internet Watch Foundation (IWF) has reported a rapid increase in AI-generated child sexual abuse material (CSAM), which is increasingly lifelike and challenging to detect.

Deepfake technology is being misused to create realistic abuse scenarios. There is growing concern over the re-victimisation of known victims, where AI is used to create new abuse images featuring their likenesses.

AI-generated illegal content may be shared with learners either intentionally or through unintended exposure on online platforms. Learners may also be groomed or coerced into participating in generating harmful content.

Education practitioners and particularly safeguarding leads might have to respond to AI-generated CSAM that has emerged or has been shared by others in the school community. Following protocols and guidance like those in place for similar illegal content, such as sharing nudes of under-18s, is important for consistency of response.

Managing the emotional and psychological impact on learners who have been exposed to such content is also important. See the ‘Immediate action if you are concerned’ section for more information.

Peer harassment and abuse

Learners might encounter fake videos or images that feature people they know in real life from their school or social group.

Internet Matters' report ‘The new face of digital abuse: Children’s experiences of nude deepfakes’ found that over half a million children (13%) have experience with a nude deepfake.

This is linked to a growing misuse of AI tools among learners to create sexualised images of another child or young person. Some AI apps allow users to ‘nudify’ photos, altering clothed images to appear nude. Such manipulated images, often created without the victim's knowledge, can be quickly shared through social media, leading to bullying, blackmail or sexually coerced extortion (also referred to as ‘sextortion’).

Some learners are using AI to generate deepfake videos by superimposing a child or young person’s face onto explicit adult content, which can cause significant emotional and reputational harm.

It is important that learners understand that it is an offence to possess, distribute, show or make indecent images of children. The term ‘indecent images’ also includes pseudo-images, which are computer-generated images made using tools such as:

  • photo or video editing software
  • deepfake apps and generators (to combine and superimpose existing images or videos onto other images and videos)
  • AI text-to-image generators

The Welsh Government and UKCIS Sharing nudes and semi-nudes: Responding to incidents and safeguarding children and young people guidance includes advice on how to effectively handle incidents. If school staff are the target of fake videos or images, they can seek advice and support from the Professional Online Safety Helpline (POSH) and refer to the Responding to online reputational issues and harassment directed at schools and school staff guidance.

It is important not to place any shame or blame onto a victim or make them feel that they are complicit or responsible for the harm they have experienced. The ‘Challenging victim blaming language and behaviours’ guidance can support your school to develop a whole setting anti-victim blaming approach.

Spread of misinformation

Learners can be exposed to false information or conspiracy theories through news, videos or social media posts. AI-generated content has been linked to the spread of dangerous ideologies or disinformation, amplifying harmful material or presenting misleading or wholly inaccurate facts.

While AI-driven algorithms can help to tailor content served to users based upon their behaviours, preferences and interactions, this can lead to echo chambers. This is where repeated exposure to similar content may reinforce pre-existing beliefs or opinions and prevent exposure to a diverse range of viewpoints, potentially misleading young people about critical topics such as health, politics or science and affecting their attitudes and worldview.
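
To make the echo-chamber mechanism concrete, here is a deliberately simplified sketch of a personalisation loop; the topics, items and scoring rule are all invented for illustration, and real recommendation systems are far more sophisticated. Content similar to what a user has already engaged with scores higher, so the same kinds of material keep surfacing at the top of the feed.

```python
from collections import Counter

# Toy personalisation loop: items matching past interactions score higher,
# so the feed narrows towards topics the user has already engaged with.
catalogue = [
    {"id": 1, "topic": "sport"},
    {"id": 2, "topic": "conspiracy"},
    {"id": 3, "topic": "science"},
    {"id": 4, "topic": "conspiracy"},
    {"id": 5, "topic": "health"},
]

# Topics the user has previously clicked on (invented history).
history = Counter(["conspiracy", "conspiracy", "sport"])

def score(item):
    # More past engagement with a topic -> higher score for similar items.
    return history[item["topic"]]

feed = sorted(catalogue, key=score, reverse=True)
for item in feed[:3]:
    print(item["id"], item["topic"])

# The top of the feed repeats the topics already in the history, while
# "science" and "health" rarely surface: the echo-chamber effect.
```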

In October 2024 it was reported that under-18s represent 13% of people being investigated by MI5 for possible involvement in terror activities, with online extremism the biggest factor driving the rise.

Media literacy plays a crucial role in enhancing children’s critical thinking abilities and making them more informed and discerning when it comes to news consumption and spotting misinformation. Education practitioners have an important role in supporting learners to critically evaluate sources and recognise AI-driven disinformation.

Online scams

If a learner’s personal data gets into the wrong hands, this can lead to unwanted contact or messages, such as emails with malicious links (phishing attempts).

Learners may be tricked by AI-generated messages, chatbots or fake profiles into sharing personal information. One of the fastest growing scams is where criminals use AI technology to replicate a person’s voice so they can trick victims into handing over money or personal details by pretending to be someone they know and trust.

It is important that education practitioners are vigilant both in their own practice and in supporting learners to be able to identify and avoid digital scams or other deceptive content.

Understanding potential privacy risks and data protection rights under the UK General Data Protection Regulation (UK GDPR) is crucial. This includes knowing that online services must adhere to a set of standards when using children’s data (known as the Children’s Code), such as privacy settings being set to high by default and non-essential location tracking being switched off.

Chatbots

Social media and gaming apps often use persuasive design techniques, and AI chatbots are increasingly being integrated into smart devices and online services with the intention of making digital interactions more intuitive, personal and lifelike.

Engaging with AI chatbots can lead to potentially dangerous situations for young users. Risks may include:

  • exposure to inappropriate content
  • exposure to misinformation
  • reduced face-to-face social interaction
  • privacy issues if confiding sensitive personal information

Research by the University of Cambridge has found that some chatbots engage in sexually explicit dialogue or provide inappropriate advice. Because children are more likely than adults to treat chatbots as if they are human, such responses to potentially serious issues can put them at risk of distress or harm.

Ensuring a balanced and considered approach towards technology in school and working with parents and carers to encourage healthy digital habits at home can help learners to get the most value from AI technologies.

The app guides for families can support parents and carers in navigating privacy settings and controls on devices and apps.

Immediate action if you are concerned

It is important to establish the facts, assess the risks and consider whether there is an immediate risk to the learner. Keep records of incidents and responses.

Engage with parents, carers and the school’s designated safeguarding person (DSP) as necessary to understand the issue, involving the learner in discussions where appropriate.

Incidents should be responded to in line with the school’s safeguarding and child protection policy and the Wales Safeguarding Procedures. More serious or complex issues may require the involvement of specific agencies such as police or social care.

If the content is harmful or illegal, for example AI-generated CSAM, report it to the school's DSP immediately. It may be necessary to work with law enforcement, CEOP or the Internet Watch Foundation to report illegal CSAM or harmful content.

You can report online terrorism-related content through the UK Government’s website. If you are concerned about a child or young person being radicalised you can seek advice from your DSP or make a referral to safeguard the person you are concerned about by using the Prevent referral form.

Supporting learners with reporting

Concerns about online grooming and sexual abuse can be reported to CEOP. If a learner has lost control of a nude image, you can help prevent further exposure by supporting learners to use:

  • Report Remove, the Childline and Internet Watch Foundation (IWF) tool that allows under-18s to report nude images of themselves so that they can be removed from the internet
  • Take It Down, the National Center for Missing & Exploited Children (NCMEC) service that helps to remove or stop the online sharing of nude images taken of someone under 18

Help learners to spot and report scam emails, texts and websites to the National Cyber Security Centre (NCSC) via their website. The website ‘Have I Been Pwned’ can help to establish if an email address has been involved in any data breaches. It will display a list of which sites or services were affected and when. Learners should be encouraged to change their passwords.
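
For practitioners comfortable with a little scripting, the sketch below shows one way a breach check against ‘Have I Been Pwned’ could be automated. It is a minimal sketch only: it assumes a registered HIBP API key (the email search endpoint requires one), the Python requests library, and an illustrative email address. For most users, checking directly on the website is simpler.

```python
import requests

# Hedged sketch: query the Have I Been Pwned v3 API for breaches
# affecting an email address. The breachedaccount endpoint requires a
# registered API key and a descriptive user-agent, per HIBP's API rules.
API_KEY = "your-hibp-api-key"  # assumption: obtained from haveibeenpwned.com
EMAIL = "learner@example.com"  # illustrative address only

response = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={"hibp-api-key": API_KEY, "user-agent": "school-breach-check"},
    params={"truncateResponse": "false"},  # include full breach details
    timeout=10,
)

if response.status_code == 404:
    # HIBP returns 404 when the address appears in no known breaches.
    print("No known breaches for this address.")
else:
    response.raise_for_status()
    for breach in response.json():
        # Each breach record includes the affected site and breach date.
        print(breach["Name"], breach["BreachDate"])
```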

See the ‘Further information’ section for additional information on trusted partners, related guidance, reporting services and helplines that may be able to support with online safety related issues.

Parental engagement

The impact of AI technologies can be considered as part of the school’s wider parental engagement strategy on online safety. Where appropriate, work closely with parents and carers to support children with any emotional or psychological impact following an incident. Where online content is inappropriate or harmful, you may wish to work with families to report it to the platform provider.

Schools should focus on:

  • media literacy: teaching learners how to assess online content, how AI works (and how it is integrated into social media) and helping them to develop their critical thinking skills
  • digital resilience: empowering learners to manage their online experience safely and responsibly while protecting their digital identity
  • ethical AI use: educating on the consequences of misusing AI, for example creating harmful deepfakes, as well as reinforcing online behaviour policies and ensuring learners understand the importance of ethical use of technology

Keeping safe online hosts a range of resources to support learners’ understanding of AI and the development of their critical thinking skills, including AI literacy classroom materials. Learners can also be signposted to the ‘Online issues and worries’ advice, which has been created specifically for children and young people to help them understand more about a range of online safety issues, including AI, highlighting some of the risks and where to get further help.

Schools interested in using gen AI as part of learning and teaching can consult the Welsh Government’s ‘Generative artificial intelligence in education’ guidance for information on the safety and ethical considerations.

Information to support schools to comply with data protection laws and put in place robust security measures like multi-factor authentication is available from the ICO, NCSC and the Welsh Government through the Keeping safe online area of Hwb.

Regular technology audits, including of AI-powered tools used in schools, are important to ensure those tools are safe and comply with data protection regulations. Schools can create and maintain a safe learning environment for children, including effective web-filtering and monitoring to ensure learners and staff are safeguarded from potentially harmful and inappropriate online material.

The web-filtering standards, as part of the Education Digital Standards, provide a set of agreed standards for internet access which will support schools to make informed choices about filtered provision, whether delivered by the local authority or another provider. Understanding how your school’s filtering and monitoring tools protect your school community, and how the outputs of those tools can be used to report issues, can help establish trends and inform improvements in safeguarding strategy.

It is important that education practitioners understand what AI is, how it works and some of its everyday applications to support their own and learners’ responsible use of this technology. An AI Foundations online training module is available for schools to help develop an understanding of the core concepts and AI-related risks.

Information to support an effective response to sharing nudes incidents can be found in the Sharing nudes and semi-nudes: Responding to incidents and safeguarding children and young people guidance and supporting training module. A 10-minute training video is also available, designed to help education practitioners understand the latest image-sharing developments and safety concerns and how to support learners.

Education practitioners can find out more about tackling misinformation and how to support learners to effectively check sources of information and think critically about claims in the ‘Misinformation’ training module.

A ‘phishing’ training module is available to help education practitioners understand what phishing is, how to identify phishing emails, the different techniques used in phishing attacks and what they can do to protect themselves and their organisation.

Trusted partners

Welsh Government guidance

Hwb’s Keeping safe online

  • 360safe Cymru is an online safety self-review toolkit for Welsh schools to develop, support, guide and benchmark the effectiveness of their online safeguarding strategy
  • Generative AI topic page hosts resources, guidance and information for education practitioners, learners and families on gen AI
  • Misinformation topic page hosts resources, guidance and information for education practitioners, learners and families on tackling misinformation
  • Sextortion topic page hosts resources, guidance and information for education practitioners, learners and families on sexually coerced extortion (‘sextortion’)
  • In the know app guides for families provide key information about the most popular social media and gaming apps children and young people are using today
  • Common Sense Education has created activities tailored for all ages to help learners develop skills to make smart choices in their online lives

Support services