Media literacy: generative AI
Find out more about generative AI and its role in the spread of misinformation.
In our first instalment, we explored the importance of media literacy in identifying misinformation. We discussed how critical thinking skills can help us decide if information is credible, trustworthy and fair. In this second instalment, we are looking at generative artificial intelligence (gen AI) and its connection to misinformation.
How gen AI works
Gen AI refers to AI systems that can create new content such as text, images, audio and video based on the data they have been trained on.
Examples include deepfake videos, AI-generated art, and chatbots or AI assistants such as ChatGPT, Gemini and Copilot. While gen AI has many beneficial applications, it is equally capable of causing harm unless responsible AI practices are adopted.
AI bias explained
Gen AI can only learn from its sources, so it takes on the biases, misinformation and problematic content of the original material. A ‘hallucination’ occurs when an AI system presents an incorrect or fabricated response as factual information.
Some gen AI systems may produce offensive and harmful content, sometimes based on user prompts.
The impact of gen AI on disinformation
Gen AI tools can produce convincing fake content, making it easier for people with bad intentions to spread disinformation. For example, deepfake videos can manipulate public perception by depicting individuals saying or doing things they never did. Similarly, AI-generated text can create false news articles that appear legitimate.
Identifying AI-generated content
To help learners think critically about AI-generated content, encourage them to question the authenticity of everything they see online and use a range of strategies.
Be sceptical
Be aware that AI bias and misinformation can occur due to unreliable or unrepresentative training data and that gen AI systems can be vulnerable to misuse.
Analyse the quality
AI-generated content may have subtle inconsistencies or errors. Look for unnatural language patterns, odd image distortions and audio glitches. Deepfake detection software can analyse videos for signs of manipulation.
Verify the source
Check if the content comes from a reputable source and if it can be verified elsewhere. Be cautious if it comes from an unknown or suspicious website, or if it’s highly sensational content on social media.
By teaching learners how to spot and challenge AI-generated misinformation, we can create a more informed and resilient society.
Look out for the next instalment in our media literacy series, where we will explore how to stay alert to online scams.
Resources for further learning
To develop your own knowledge of this topic, as well as that of your learners and their families, explore the variety of resources available on Hwb.
For schools
- ‘Generative artificial intelligence in education: Opportunities and considerations for schools in using generative artificial intelligence (AI)’ is designed to support schools to reflect on the possible benefits, potential risks, and safety and ethical considerations of gen AI.
- ‘Generative AI: keeping learners safe online’ supports schools to understand current safeguarding concerns relating to gen AI and embed these as part of safeguarding practice and policy. It was developed with the UK Safer Internet Centre.
- The ‘AI foundations: training module for education practitioners’ explores what gen AI is, how it might affect teaching and learning, and how to support learners to use it safely and responsibly.
- Classroom materials on the social and ethical impacts of AI and the importance of using this technology safely and responsibly have been developed through our partnership with Common Sense Education.
For learners and their families
- ‘Online issues and worries: Generative AI’ includes information and a short ‘What is AI?’ video designed to support learners with their digital wellbeing and increase their awareness of where they can get help.
- ‘Generative AI: guide for parents and carers’ includes information about how gen AI is increasingly integrated into smart devices, highlighting potential risks associated with apps popular with children and young people.