
5. Potential risks

Content

One of the leading risks associated with gen AI is the potential for inaccurate or misleading content. It is important to make sure that young people understand that gen AI outputs do not guarantee factual accuracy, and that the information provided should always be checked against other sources. Some gen AI tools, such as Copilot, provide links to the websites they are drawing information from so that users can verify the content, but this is not standard across all platforms. The National Literacy Trust’s survey shows that only 1 in 5 teenagers check the information they receive from an AI tool. It is usually recommended that young people verify information produced by AI tools against 2 other sources. This is a great life skill for young people to develop: critical thinking and reviewing sources are an important part of using gen AI, and they also strengthen digital literacy and other areas of schoolwork.

While these tools can generate convincing text and images, they do not understand the truthfulness of the information they produce. AI is trained on data, which means that any incorrect or biased data will be reflected in an AI tool’s responses, and can lead to the tool producing false or biased information. This is particularly concerning when AI-generated content is used for academic purposes, as students might unknowingly rely on incorrect information. It is important to remember that gen AI tools are computers, not humans, and are therefore unable to distinguish between fact and fiction.

There is a risk that children and young people may encounter inappropriate content when using gen AI tools. While many AI platforms already have community guidelines or filters on specific words and themes, which make access to inappropriate content less likely, these should not be solely relied upon. AI tools can inadvertently expose users to content that is inappropriate for their age or maturity level.

It is important to remember that AI only reacts to a prompt, so inappropriate content is most likely to appear when it is requested. However, even benign prompts can sometimes lead to the generation of unexpected or inappropriate content, due to the AI tool’s interpretation of the prompt or the influence of biased data in its training set. Remind your child to speak to you if they encounter any content that they find upsetting or confusing when using gen AI. It is also worth being mindful that some users look for ways to circumvent or outsmart AI safeguards and will share these tips on other platforms. It is recommended that parents and carers occasionally monitor their child’s usage to help mitigate these risks.

There is a risk that learners might use AI-generated content as their own work without proper attribution. This can lead to academic dishonesty and undermine the learning process. It’s important to discuss with children the importance of original work and proper citation when using AI tools.

Connecting with others

Although it is often not possible to connect with other people through AI tools, gen AI can still pose risks to relationships. For example, children may use AI tools to discuss their real-life relationships with other people. It is advised that children avoid sharing personal details with AI tools, as this data is typically collected and stored. AI tools generally lack the human nuance and context of a relationship, and should not replace people as a source of relationship advice.

There is also a potential risk that AI-generated content could lead to increased social pressures. Content created by AI can be easily shared on social media platforms like Instagram, TikTok, and Snapchat. AI-generated memes, artwork or even witty posts might be used by children to gain likes, followers or simply to entertain their friends. Using AI for social validation can lead children to feel compelled to produce more creative or polished content to keep up with peers. In turn, children may become anxious or experience a skewed sense of self-worth based on the reception of AI-enhanced creations.

User behaviour

As AI tools become more integrated into daily life, there is a risk that children might become overly dependent on them for tasks that would typically require critical thinking or creativity, such as homework or other creative projects. Over-reliance on AI could hinder a child’s development of essential problem-solving skills, independent research and original thought. While young people should be encouraged to develop their skills in using gen AI tools, a balanced approach is recommended: treat AI as a supplement to, rather than a replacement for, human effort.

There is also a risk that some children might prefer interacting with AI over their human peers. AI is often designed to respond positively to any request, which children may find more rewarding and easier to manage. Additionally, many AI tools will deliberately reply in a personable and responsive manner. This may lead users to form bonds that resemble real-life relationships. This can become especially problematic if the AI tool’s behaviour changes unexpectedly or becomes overly possessive, causing emotional distress to the user. It is crucial to remind young people that AI cannot truly replicate human emotions or relationships. These tools should not replace real-world social connections.

Parents and carers should encourage children to foster relationships with their peers to help counteract this trend. Remind your child that AI tools are computers, not humans, and so cannot replicate the empathetic relationships built and fostered with other human beings.

Design, data and costs

Using gen AI tools often involves sharing personal data, whether through voice commands, typed prompts or integrated services. Many AI systems collect data on user interactions to improve their services. This data might include personal information, such as search history, location and even voice recordings. Parents and carers should be aware of the privacy policies of the AI tools their children use and consider the implications of data collection.

The underlying design of these AI tools can also be considered a risk. Content created by gen AI is often stored by the platform and reviewed by humans to improve responses. It is important that children and young people understand this so that they know any information, such as names and locations, may be seen by a real person.

Though many AI platforms are freely available, they often include paid-for services, such as a ‘premium’ or ‘pro’ version. These platforms may hide some features behind a paywall or offer purchasable ‘boost’ options to speed up generation or increase realism and levels of detail. Other platforms may limit the number of times a user can interact within a set period, such as a 24-hour window, or restrict full access to a 1-month free trial. Some AI tools, such as more powerful image-generating platforms, require a paid subscription. Speak to your child about how subscriptions work and remind them that this is a business strategy for companies like OpenAI to make money, rather than an offer designed primarily to benefit users.
