Generative AI: guide for parents and carers
A guide for parents and carers about Generative AI.
Overview
Generative AI (gen AI) is a branch of artificial intelligence that can create content, such as text, images and even music, based on prompts. Prompts are written and entered in a similar way to queries in a search engine like Google. Some gen AI tools, like ChatGPT, have appeared in the news because learners have reportedly used them to write their essays. Other forms of gen AI, like My AI on Snapchat, are designed to be more like an AI friend that a user might message to pick up some quick facts about a topic.
The rapid growth of AI technology means it is being integrated into web browsers and linked to voice-activated search tools. This means that a user often does not have to visit a specific website or app to access a gen AI tool; these tools are now built into many of the popular services used daily by people all over the world. For example:
- Google searches now place responses from ‘Gemini’ (Google’s AI tool) at the top of the results, ahead of any sponsored or verified material
- the Microsoft Edge browser has Microsoft Copilot attached
- AI tools can be linked to virtual assistants like Siri, Google Assistant or Alexa, meaning AI technology can be accessed using simple voice commands
Official age rating
Gen AI tools are generally accessible to a wide age range, with no stringent age restrictions. Many gen AI tools are now integrated into web browsers or accessible on mobile devices, often with limited age restrictions in place by default.
ChatGPT and other OpenAI platforms require users to create an account to verify that they are over 13 before being allowed access. However, they do not appear to have any rigorous age verification methods.
Meanwhile, platforms that have integrated these tools into their platform may have their own age guidelines. For example, My AI is integrated into Snapchat, which has a minimum age requirement of 13.
Find out more about age ratings in our ‘A parent and carer’s guide to age ratings of apps and games’.
How children and young people use the app
The use of gen AI is growing, and the appeal is obvious. Figures from the National Literacy Trust’s 2023 to 2024 Annual Literacy Survey show that:
- the percentage of children aged 13 to 18 who have used AI has more than doubled, from 37% in 2023 to 77% in 2024
- reasons for engaging with AI are varied and reflect many of the facets of teenage life
- entertainment, curiosity, homework and seeking inspiration are common reasons children use gen AI
Support with schoolwork
Many learners use tools like ChatGPT and Google Gemini to help with homework, research, and studying. Gen AI can also be used to generate summaries of complex topics, provide explanations and even suggest ideas for essays or projects. It is easy to paste a complex piece of text in and ask the AI to simplify the text or ask it to explain a task in a more accessible way.
Creative projects
For young creatives, gen AI tools such as DALL-E and Microsoft Copilot (Image) are useful for generating unique artwork or design ideas. These can be used in school projects, hobbies, or even online content creation. AI tools can bring imaginative concepts to life, making them helpful even to those with limited artistic skills.
Social media content
Gen AI tools like My AI on Snapchat can be used to:
- generate quick responses
- craft unique messages
- create images that can be shared with friends on social platforms
These tools allow young people to maintain a consistent presence on social media. They do this by producing, with minimal effort, content that is creative, witty or visually appealing.
Gen AI tools themselves do not provide social networking features or a means to message others on the platform. Even so, they can play a significant role in enhancing social interaction among young users.
Key terminology
Prompt
This refers to the user’s input or question that guides the AI in generating content. Prompts can be as simple as a single word or as complex as a detailed scenario.
Model
This refers to the AI tool’s underlying system, trained on large datasets to understand language, images and other forms of data. Models like GPT (used by ChatGPT) are examples of large language models that generate text based on patterns in the data they were trained on.
Large language model
A ‘large language model’ (‘LLM’) is an algorithm that is trained on large amounts of data. An LLM writes sentences based on predicting which words would typically go together when expressing an idea.
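For readers curious about what ‘predicting which words would typically go together’ means in practice, the toy sketch below counts which word follows each word in a tiny example text, then ‘generates’ a sentence by repeatedly picking the most common next word. This is a deliberately simplified illustration, not how real LLMs work internally: they use neural networks trained on vastly more data, but the core idea of predicting likely next words is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up 'training text' for illustration only.
training_text = "the cat sat on the mat . the cat ate . the dog sat on the rug ."

# Build a table: for each word, count the words that follow it.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "."

# 'Generate' a short sentence starting from the prompt word "the".
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # prints: the cat sat on the cat
```

Notice that the output is fluent-looking but not necessarily true or sensible: the model only knows which words tend to follow which, which is exactly why gen AI output always needs checking.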
Neural network
This refers to a type of machine learning model that mimics the human brain, used to generate content by recognising patterns in data. Neural networks are crucial to the operation of gen AI.
Training data
This refers to data used to train AI models, including text, images and other forms of digital content. The quality and range of training data have a significant impact on the AI tool’s ability to generate accurate and relevant content.
GPT
‘GPT’ is short for ‘generative pre-trained transformer’ and refers to the type of model the tool uses to form a response or answer for a user. It can be thought of as the ‘brain’ of an AI tool.
Popular gen AI platforms
This is not an exhaustive list but represents some of the most used and easily accessible versions of gen AI.
ChatGPT
Developed by OpenAI, this tool generates human-like text, making it suitable for things like:
- conversations
- writing tasks
- content creation
ChatGPT can simulate a natural conversation with its responses to prompts and requests. Ofcom’s Online Nation 2023 report shows that this element has proven to appeal to children and young people: 48% of those using gen AI state that their preferred activity is chatting with it. However, it is also used for drafting essays, generating ideas and editing content to improve clarity and coherence. Users can also invest time in ‘training’ ChatGPT so that the responses sound more like the user. This can make it harder to differentiate between AI and user-generated content.
DALL-E
DALL-E is another model from OpenAI. Unlike ChatGPT, it requires a paid account to access and generate images. DALL-E excels at turning descriptive text into detailed images. Though it has many uses, it is often used for digital art, or simply for fun, allowing users to bring their imagination to life in a visual format. DALL-E supports a range of artistic styles, letting users experiment with visual ideas that would otherwise be hard to create.
Microsoft Copilot
Integrated into Microsoft Office, it leverages AI to assist users in a range of activities, including:
- drafting emails
- generating reports
- creating presentations
In its current form, Copilot itself states that it is behind ChatGPT as a pure conversational chatbot. However, its integration into the Microsoft Edge browser means it is more widely available, as it is open whenever the browser is. This keeps it close at hand and allows it to provide real-time suggestions to users. A key difference between ChatGPT and Copilot is that responses from Copilot come with web-link citations, which allow users to verify any information they are given.
Copilot also has an image generator, powered by DALL-E, which specialises in using descriptive text to create artistic images. Although powered by DALL-E, it is limited to creating images from text inputs and does not give users access to DALL-E’s full range of tools and features. Image generation comes with a limited number of free uses, called ‘boosts’, which makes it more accessible for children and young people. Users must wait until the beginning of each month for boosts to replenish; there is currently no way to pay for extra boosts. Users may be able to earn more through:
- promotions
- engaging with the platform (for example by completing surveys or providing feedback)
- referrals
Google Gemini
Google’s latest AI system is designed to generate text, helping with creative writing, summarising, and content editing. Like ChatGPT and Copilot, Google Gemini produces text responses to user prompts. It can also analyse data from a variety of sources like code, audio, images and text. Like Microsoft Copilot, Gemini provides links to its sources so users can check the validity of a response themselves.
My AI (Snapchat)
Integrated into Snapchat, this conversational AI tool interacts with users in a casual, friendly manner. It is often used for quick information retrieval or social interaction. My AI is powered by OpenAI’s GPT-3.5 and GPT-4 models, so there are many similarities to ChatGPT. However, it has been adjusted so that it is tailored to Snapchat and its users. My AI provides quick, conversational responses to user queries, making it a fun and engaging tool for daily interactions. It can answer questions, provide recommendations or just engage in casual banter. My AI also helps users generate personalised content, such as tailored messages or creative captions for their snaps.
Potential risks
Content
One of the leading risks associated with gen AI is the potential for inaccurate or misleading content. It is important to make sure that young people understand that gen AI outputs do not guarantee factual accuracy, and that the information provided should always be checked against other sources. Some gen AI tools, such as Copilot, provide links to the websites they are drawing information from so that users can verify the content, but this is not standard across all platforms.

The National Literacy Trust’s survey shows that only 1 in 5 teenagers check the information they receive from an AI tool. It is usually recommended that young people verify information produced by AI tools against 2 other sources. Critical thinking and reviewing sources are not only an important part of using gen AI, but also valuable life skills that will improve digital literacy and other areas of schoolwork.
While these tools can generate convincing text and images, they do not understand the truthfulness of the information they produce. AI is trained on data, which means that any incorrect or biased data will be reflected in an AI tool’s response. This can lead to an AI tool producing false or biased information. This can be particularly concerning when AI-generated content is used for academic purposes, as students might unknowingly rely on incorrect information. It is important to remember that gen AI tools are computers and not human and are therefore unable to distinguish between fact and fiction.
There is a risk that children and young people may encounter inappropriate content when using gen AI tools. While many AI platforms already have community guidelines or filters on specific words and themes, which make access to inappropriate content less likely, these should not be solely relied upon. AI tools can inadvertently expose users to content that is inappropriate for their age or maturity level.
It is important to remember that AI only reacts to prompts. However, even benign prompts can sometimes lead to the generation of unexpected or inappropriate content. This can happen due to the AI tool’s interpretation of the prompt or the influence of biased data in its training set. Remind your child to speak to you if they encounter any content that they find upsetting or confusing when using gen AI. It is also worth being mindful that some users look for ways to circumvent or outsmart AI safeguards and will share these tips on other platforms. It is recommended that parents and carers occasionally monitor their child’s usage to help mitigate these risks.
There is a risk that learners might use AI-generated content as their own work without proper attribution. This can lead to academic dishonesty and undermine the learning process. It’s important to discuss with children the importance of original work and proper citation when using AI tools.
Connecting with others
Although it is often not possible to connect with other people through gen AI tools, they may still pose certain risks to relationships. For example, children can use AI tools to discuss their real-life relationships with other people. It is advised that children avoid sharing personal details with AI tools, as this data is typically collected and stored. AI tools generally lack human nuance and an understanding of context, and should not replace humans for relationship advice.
There is also a potential risk that AI-generated content could lead to increased social pressures. Content created by AI can be easily shared on social media platforms like Instagram, TikTok, and Snapchat. AI-generated memes, artwork or even witty posts might be used by children to gain likes, followers or simply to entertain their friends. Using AI for social validation can lead children to feel compelled to produce more creative or polished content to keep up with peers. In turn, children may become anxious or experience a skewed sense of self-worth based on the reception of AI-enhanced creations.
User behaviour
As AI tools become more integrated into daily life, there is a risk that children might become overly dependent on them for tasks that would typically require critical thinking or creativity, such as a piece of homework or another creative project. Over-reliance on AI could hinder a child’s development of essential problem-solving skills, independent research and original thought. While young people should be encouraged to develop their skills in using gen AI tools, a balanced approach to AI usage is recommended, by treating it as a supplement to rather than a replacement for human effort.
There is also a risk that some children might prefer interacting with AI over their human peers. AI is often designed to respond positively to any request, which children may find more rewarding and easier to manage. Additionally, many AI tools will deliberately reply in a personable and responsive manner. This may lead users to form bonds that resemble real-life relationships. This can become especially problematic if the AI tool’s behaviour changes unexpectedly or becomes overly possessive, causing emotional distress to the user. It is crucial to remind young people that AI cannot truly replicate human emotions or relationships. These tools should not replace real-world social connections.
Parents and carers should encourage children to foster relationships with their peers to help counteract this trend. Remind your child that AI tools are computers, not humans, so cannot replicate empathetic relationships built and fostered with other human beings.
Design, data and costs
Using gen AI tools often involves sharing personal data, whether through voice commands, typed prompts or integrated services. Many AI systems collect data on user interactions to improve their services. This data might include personal information, such as search history, location and even voice recordings. Parents and carers should be aware of the privacy policies of the AI tools their children use and consider the implications of data collection.
The underlying design of these AI tools can also be considered a risk. Content created by gen AI is often stored by the platform and reviewed by humans to improve responses. It is important that children and young people understand this so that they know any information, such as names and locations, may be seen by a real person.
Though many AI platforms are freely available, they often include paid-for services, such as a ‘premium’ or ‘pro’ version. These platforms may hide some features behind a paywall or offer purchasable ‘boost’ options to speed up generation or increase realism and levels of detail. Other platforms may limit the number of times a user can interact in a set period, typically over a 24-hour window or a 1-month free-trial period. Some AI tools, such as more powerful image-generating platforms, require a paid subscription. Speak to your child about how subscriptions work and remind them that this is a business strategy for companies like OpenAI to make money, rather than an offer designed purely to benefit users.
Tips for keeping your child safe
Monitor AI usage
To ensure a safe and productive experience with gen AI, parents and carers should monitor their children’s use of these tools. Establish rules around when and how AI tools can be used. For instance, you might limit AI use to specific tasks, such as homework, and restrict access during social media or entertainment time.
It is important to engage in conversations with your child about their AI use. Ask them to show you what they have created or learned and discuss any concerns or questions that arise. This can help you to stay informed about their activities and guide them in using AI responsibly. Additionally, you could spend time using AI with your child to generate content together.
Educate on risks
Help your child understand the potential risks associated with gen AI and how to navigate them safely. Make sure your child understands that AI-generated content is not always accurate and should be verified before use. Encourage them to cross-check information and to think critically about the content they generate or encounter.
It is also important to talk about protecting personal information and being cautious when sharing details with AI tools. Explain how data can be collected and used, and why it’s essential to keep account information secure.
Reporting and blocking
Knowing how to report and block inappropriate content or interactions is key to maintaining a safe environment. Familiarise yourself and your child with the reporting mechanisms available within AI tools. Encourage them to report any content that makes them uncomfortable, to either you, another trusted adult or to the platform itself.
Though AI tools have their own safeguards, you may also utilise parental controls to block access to AI tools that might not be appropriate for your child’s age or maturity level.
Encourage responsible use
Promote healthy habits and responsible AI use by guiding your child on how to interact with these tools effectively. Encourage your child to use AI as a supplement to their learning, not a replacement. For example, they can use AI to generate ideas but should still engage in the process of researching, analysing and creating original content.
You can also teach your child to interact with AI respectfully and to avoid prompts that might lead to generating inappropriate or harmful content. Emphasise that AI is a tool to be used thoughtfully, not a source of entertainment that might lead to negative outcomes.
Implement parental controls
Where possible, utilise the parental control features available within AI tools or associated platforms. Activate filters or safe modes that limit the types of content the AI can generate. This can help prevent exposure to inappropriate material and ensure that the AI’s outputs align with your child’s age and maturity level.
Account deletion and deactivation
Knowing how to manage or delete accounts associated with gen AI tools is important for maintaining control over your child’s digital footprint. Each platform has its own method for deleting or deactivating an account. Some guidance is listed below; for other platforms, a good start is to search for 'delete or deactivate my account' together with the name of the platform.
ChatGPT
Deleting or deactivating an account involves navigating through OpenAI’s account management settings, where you can request data deletion or deactivate the account entirely.
Microsoft Copilot
This is typically tied to a Microsoft account. Deactivation involves removing the associated subscription or adjusting the settings within your Microsoft account dashboard.
Google Gemini
This is managed through Google account settings, where you can delete the AI-related data or deactivate the use of AI features.
My AI (Snapchat)
My AI can be disabled within Snapchat by adjusting chat settings or removing it as a contact. Complete account deletion would involve managing the Snapchat account as a whole.
DALL-E and Microsoft Copilot (Image)
These tools are often integrated into broader platforms (like OpenAI or Microsoft). Deleting your account or data involves accessing the account settings within these platforms and following the provided steps for data management.
AI literacy help
AI is a technology growing and developing at such a rate that many people may lack personal knowledge of it, making it harder for them to educate children on how to use it safely. Hwb hosts an introduction to AI for parents and carers about using AI, with activities that you can do with your children.
For more help with ChatGPT have a look at our focused guide on Hwb.
We also host a collection of AI guides and resources.
Additional tips
Have open conversations with your child about how they use AI tools. Emphasise the importance of using AI responsibly and ethically, particularly when it comes to academic work. Encourage them to view AI as a tool that aids their creativity and learning, rather than as a shortcut to completing tasks.
You should also ensure your child understands the strengths and limitations of AI, particularly that it is a tool to assist them rather than a replacement for their own abilities. Discuss the ethical implications of using AI, such as the importance of not using AI-generated content to mislead others or to take credit for work that isn’t their own.
You should strive to stay informed about the latest developments in AI technology and its potential impacts on children. This includes understanding new features, potential risks and best practices for safe usage. You may also try engaging with other parents and carers, or educators, to help identify potential issues early and provide collective solutions about safe AI usage for children.