
In our first instalment, we explored the importance of media literacy in identifying misinformation. We discussed how critical thinking skills can help us decide if information is credible, trustworthy and fair. In this second instalment, we are looking at generative artificial intelligence (gen AI) and its connection to misinformation.

Gen AI refers to AI systems that can create new content such as text, images, audio and video based on the data they have been trained on.

Examples include deepfake videos and AI-generated art, as well as chatbots or AI assistants such as ChatGPT, Gemini or Copilot. While gen AI has many beneficial applications, it is equally capable of causing harm unless responsible AI practices are adopted.

Gen AI can only learn from the sources it is trained on, so it takes on the biases, misinformation and problematic content of the original material. A ‘hallucination’ occurs when an AI system presents an incorrect or fabricated response as though it were factual information.

Some gen AI systems may produce offensive and harmful content, sometimes based on user prompts.

Gen AI tools can produce convincing fake content, making it easier for people with bad intentions to spread disinformation. For example, deepfake videos can manipulate public perception by depicting individuals saying or doing things they never did. Similarly, AI-generated text can create false news articles that appear legitimate.

To help learners think critically about AI-generated content, encourage them to question the authenticity of everything they see online and use a range of strategies.

Be sceptical

Be aware that AI bias and misinformation can occur due to unreliable or unrepresentative training data and that gen AI systems can be vulnerable to misuse.

Analyse the quality

AI-generated content may have subtle inconsistencies or errors. Look for unnatural language patterns, odd image distortions and audio glitches. Deepfake detection software can analyse videos for signs of manipulation.

Verify the source

Check if the content comes from a reputable source and if it can be verified elsewhere. Be cautious if it’s from unknown or suspicious websites or it’s highly sensational content on social media.

By teaching learners how to spot and challenge AI-generated misinformation, we can create a more informed and resilient society.

Look out for the next instalment in our media literacy series, where we will explore how to stay alert to online scams.

To develop your own knowledge on this topic, as well as that of your learners and their families, there are a variety of resources available on Hwb.

For schools

For learners and their families

  • ‘Online issues and worries: Generative AI’ includes information and a short ‘What is AI?’ video designed to support learners with their digital wellbeing and increase their awareness of where they can get help.
  • ‘Generative AI: guide for parents and carers’ includes information about how gen AI is increasingly integrated into smart devices, highlighting potential risks associated with apps popular with children and young people.