Schools and settings need to be prepared to address the changes presented by new technologies like generative artificial intelligence (sometimes referred to as ‘generative AI’ or ‘gen AI’). This includes understanding and harnessing the opportunities this technology provides and considering how to mitigate any risks.

It is key that practitioners develop the skills and knowledge to use this technology to support learners to thrive. If schools and settings choose to use generative AI tools, it is essential to ensure all use is safe, ethical and responsible.

Artificial Intelligence (AI) describes technologies that can perform tasks which usually need human intelligence. This could include learning, reasoning, problem-solving, perception, decision-making and speech recognition.

AI already exists in everyday technology such as text-to-speech tools, translation apps and predictive text. However, there has been a huge rise in access to and use of generative AI tools. This has led to significant interest in and concerns about how generative AI could be used, or misused, in education and with our learners.

Generative AI tools create content including text, images, music and videos in response to prompts from the user. These tools can:

  • answer questions
  • analyse information
  • remember the responses they have previously provided
  • respond to the user in a human-like way

They do this by drawing on the huge range of data sources they are trained on. This allows them to learn from the data and generate new content based on those sources.
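The prompt-and-response behaviour described above, including the way tools appear to remember previous answers, can be sketched in a few lines of Python. The `generate` function here is a hypothetical stand-in for a real generative AI service, not an actual model: the point of the sketch is that earlier turns are resent with each new prompt, which is how the tool can refer back to its previous responses.

```python
# Minimal sketch of a prompt/response loop with conversation memory.
# `generate` is a hypothetical stand-in for a real generative AI service:
# it returns a canned reply, but shows how the full conversation history
# is passed in so the tool can "remember" earlier turns.

def generate(history):
    """Pretend model call: produces a reply based on the latest prompt."""
    last_prompt = history[-1]["content"]
    return f"[model reply to: {last_prompt!r}]"

def chat(history, prompt):
    """Append the user's prompt, call the model, store and return the reply."""
    history.append({"role": "user", "content": prompt})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat(history, "Explain photosynthesis simply."))
print(chat(history, "Now write an example question for pupils."))
# By the second call, `history` already holds the first exchange,
# so a real model would be able to build on its earlier answer.
```

The key design point is that the "memory" lives in the conversation history the user's software resends, not in the model itself, which is also why prompts can end up stored and reused by the service provider.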

How generative AI can support education

Used responsibly, generative AI offers opportunities and benefits that can enhance the role of practitioners in supporting their learners. We are clear, however, that it cannot replace the fundamental role of practitioners in supporting and inspiring learners to learn and reach their potential.

The use of generative AI in learning should be purposeful, as with all tools in the classroom. Its use should ultimately support learners to become ambitious, capable learners; enterprising, creative contributors; ethical, responsible citizens; and healthy, confident individuals.

When used responsibly, safely and purposefully, generative AI tools have the potential to reduce workload and to support with a variety of tasks such as:

  • supporting the development of content for use in learning
  • assisting with some routine administrative functions
  • helping provide more personalised learning experiences
  • supporting lesson planning, marking, feedback and reporting
  • providing contexts to support development of critical thinking skills
  • supporting school-level curriculum and assessment design

Working with the sector to understand the benefits of AI

The extent to which generative AI tools will support education is an area of ongoing debate. Innovative practice in Wales and internationally is showing real promise. However, further exploration is needed to understand how AI can be used most effectively and what impact it has. Generative AI’s ability to reduce workload is fundamentally linked to how well these opportunities can be realised in a safe, effective and reliable way.

We have started a national conversation with the teaching sector about how best to draw on the opportunities of AI for learning and teaching, while mitigating the risks. This has involved practical exploration of opportunities to understand the potential of AI tools. Conversations with practitioners and leaders through the National Network for Curriculum Implementation and the Curriculum for Wales Policy Group will continue to inform our priorities.

We are continuing to work with schools, education partners, stakeholders and experts to understand emerging evidence, positive practice and approaches to AI in education. This is key to ensuring the support, resources and guidance we provide to schools on AI in education is practical and relevant. Wider work considering the implications of AI across the public sector will also continue to contribute to the thinking about the role of AI within education.

As generative AI continues to advance and influence the world we live in, it is crucial to provide all learners with the opportunity to develop the skills and knowledge they will need to thrive in the future. AI literacy, which includes being able to think critically, engage with evidence and information, and use creativity, will be fundamental. The skills integral to the four purposes, which should be developed across all Areas of Learning and Experience in the Curriculum for Wales, are focused on building these dispositions.

Schools should ensure all learners have opportunities to develop the integral skills across their learning to help them prepare to live and work in an increasingly AI-enabled society. Equity of access should be at the centre of these opportunities, developed in ways that embed inclusive practices that respect the abilities of all learners.

Developing these skills will support learners to engage responsibly with generative AI tools. This includes asking meaningful questions, evaluating information and evidence to understand their reliability and bias when reaching decisions, and developing learners’ awareness and understanding of the world around them.

The Digital Competence Framework already goes beyond the use of specific technologies and tools. It focuses on the dispositions learners need to harness these technologies and tools most effectively. This includes using technology to amplify their own creativity, using computational thinking to help problem-solve in the real world, being data and information literate, and becoming a conscientious digital citizen.

The responsibility of safeguarding children and young people from potential online harm is critical. As AI technologies continue to evolve, so must our thinking on digital safeguarding. Generative AI has the potential to exacerbate online safety issues with increasing sophistication and speed, including concerns around deepfakes, misinformation and online exploitation. This highlights the importance of learners developing critical thinking skills. These skills are essential for evaluating the credibility of online information, recognising potential dangers, and making informed decisions about online interactions.

Our Keeping safe online area provides schools with access to a range of support on emerging digital resilience issues, including the risks of generative AI.

As generative AI evolves, it is important to equip practitioners with the skills and knowledge needed for effective integration and navigation of the technologies. Professional learning should help practitioners understand different AI tools and how to use them to enhance their practice.

Schools should support practitioners in building competence and confidence with generative AI through ongoing professional learning opportunities. This learning should respect levels of experience and diverse backgrounds, enabling all practitioners to gain the skills necessary for AI integration.

Our Professional learning area on Hwb offers materials to help practitioners develop their understanding of generative AI. Resources will continue to be updated, and further opportunities for independent and collaborative learning around the use of generative AI will be offered.

Practitioners should remain aware of the potential risks of generative AI. Professional learning should emphasise strategies for evaluating outputs and ensuring digital safety in schools and support practitioners to use generative AI technologies safely, responsibly and ethically.

The relationship between digital technologies and education has been changing and developing for many years. This change is being further accelerated and amplified by the rapid development and widespread access to generative AI tools.

Integrating generative AI tools into education presents many opportunities. However, their use must prioritise safety, responsibility, ethics, trust and inclusivity. Before integrating generative AI, schools should perform a thorough risk analysis, similar to the process used for other digital tools, software or services. This is essential to evaluate the suitability of generative AI, identify potential risks and ensure a safe and secure learning environment.

Schools should continue to prioritise learners’ safety, security and well-being when considering how best to respond and adapt to emerging technologies in education.

Schools should consider the following when planning to use generative AI.

Age ratings

The age ratings of generative AI tools must be considered before using them. Age ratings can vary and some tools are only designed for use by over-18s. Many generative AI tools are not designed for education.

Practitioners must be aware of age ratings and any safety concerns when considering the use of these tools with their learners. Consideration should be given to the inclusivity of materials for learners who may have additional learning needs.

Accessibility and digital inclusion

Practitioners should ensure that if generative AI is used in a lesson all learners are equipped with the necessary tools to participate on an equitable basis. As the prevalence of paid-for generative AI tools increases, it is important to ensure that these do not create inequity of access to digital services for learners.

As with all digital technologies, schools will need to be aware of broader accessibility issues, particularly for those with additional learning needs. Schools should continue to consider the use of speech and other technologies to improve the accessibility of any digital tools used for learning and teaching for all learners. Practitioners should be considerate of language, mindful that at present generative AI tools are more easily available in the English language than in the Welsh language.

Bias, discrimination, stereotyping and harmful content

Generative AI tools may reflect and amplify the biases and stereotypes that exist in the data upon which they are trained. As a result, some generative AI systems may produce content that can be offensive and harmful.

Some generative AI systems may generate content inconsistent with the values and ethos of schools in Wales, such as promoting violence, hate or misinformation. Alongside the risk of overt biases, it is important for practitioners to consider the risk of subtle biases within AI tools. Schools should be aware of this risk and monitor outputs for any signs of bias, discrimination, stereotyping or harmful content.

Schools may wish to focus on learners’ development of critical thinking and digital literacy skills to support them to engage critically with the outputs of these tools. It is important that learners and practitioners proactively question and challenge the assumptions and possible implications of the content generated by AI systems.

Schools should encourage learners and practitioners to report any content produced by generative AI tools that causes concern.

Content accuracy and reliability

Generative AI, which is trained on large amounts of data, can still produce content that is inaccurate or unreliable. At times, the content it generates is not based on real data but is instead fabricated or invented. This can be problematic as although the content is not based on factual information, it can appear both plausible and convincing.

It is important that, when using generative AI, outputs are examined for accuracy and reliability to ensure the overall quality and integrity of the content.

Data protection and privacy

Generative AI tools may save and use the information their users provide, such as the prompts or questions they ask. Schools should check and consider how AI services and their Large Language Models (LLMs) are trained and which data they collect. As with using any digital tools, schools must comply with data protection laws.

Schools must be transparent about how data is used and with whom it is shared. Schools and practitioners must not share any personally identifiable information (PII) or other confidential or sensitive information with a generative AI tool or service. Critically, learner data must never be entered into generative AI tools.
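As a practical illustration of the rule above, a school might screen prompts for obvious personally identifiable information before anything is sent to an external tool. This is a minimal sketch only: the patterns below (an email address and a long run of digits) are illustrative assumptions and are nowhere near an exhaustive or authoritative PII detector, so such a check could complement, never replace, staff judgement and policy.

```python
import re

# Illustrative (not exhaustive) patterns for obvious PII in a prompt.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b(?:\d[ -]?){10,}\b"),      # long digit runs (e.g. phone numbers)
]

def contains_pii(prompt: str) -> bool:
    """Return True if the prompt matches any of the simple PII patterns."""
    return any(p.search(prompt) for p in PII_PATTERNS)

def safe_to_send(prompt: str) -> bool:
    """Block prompts that appear to contain PII before they reach an AI tool."""
    return not contains_pii(prompt)

assert safe_to_send("Write a quiz about the water cycle.")
assert not safe_to_send("Email jane.doe@example.com the class marks.")
```

A check of this kind sits on the school's side, before any data leaves the network; it does not make the external tool itself compliant, which is why checking the provider's terms and data handling remains essential.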

Ethical use of AI

Generative AI has many applications. As it is increasingly integrated into everyday tools and services, keeping pace with the ethical standards and the potential for misuse can be a challenge.

Engaging in open conversations about the possibilities and limitations of AI technologies should go hand in hand with discussing the importance of ethical and responsible use. Social and ethical impacts of AI can be explored through digital citizenship. AI literacy forms a significant part of this including the core concepts, skills and attitudes needed to become responsible and ethical users of digital technologies.

When used in schools, AI should be used honestly and transparently, with clear expectations supporting the integrity of learning. Learners and teachers should know when AI is used in creating content or assisting with tasks, ensuring everyone understands its role. This helps build trust and ensures that learners are aware of the strengths and limitations of AI, allowing them to use it responsibly and effectively.

Environmental impacts

The integration of AI in education offers significant opportunities to contribute to environmental goals by reducing reliance on paper and streamlining operations. However, the increased network capacity and energy consumption required for AI technologies can amplify the environmental footprint of digital infrastructure. To address this, it is essential to adopt energy-efficient AI systems and ensure their use is both strategic and sustainable. By carefully balancing AI's potential benefits with its environmental impact, schools can harness its power while minimising resource demand and supporting long-term sustainability.

Intellectual property (IP)

Many generative AI tools will use data inputted by users to further train and refine their models. As such, it is important that schools check terms of use before deploying solutions with AI built in.

Learners own the intellectual property rights to original content that they themselves have authored. Learners’ work must not be used to train generative AI models unless the school has obtained consent or a copyright exemption applies. Consent must be obtained from the parent or legal guardian of learners under the age of 18 or learners who do not have capacity to consent.

Qualifications and non-examined assessments

In recognition of the concerns about the use of generative AI tools in non-examined assessments, Qualifications Wales are working with awarding bodies to consider the implications of AI for our qualifications and examinations system. This includes exploring the potential opportunities, ethics, practical considerations, and risks of AI in this context.

Regulation of AI

The Online Safety Act 2023 imposes new legal duties and responsibilities on platforms to recognise their duty of care to their users. It also places greater responsibility on service providers to safeguard children and young people.

While the Online Safety Act does not directly refer to AI, it does mention automated tools and bots. This means that if a regulated provider uses AI within its services, the AI elements fall within its scope.

Additionally, AI chatbot providers must comply with rules on preventing users from accessing harmful content.

It is not yet fully understood where there may be gaps in regulation to protect children and young people from possible harm caused by AI-generated content, in English and Welsh. The Welsh Government will continue to work with Ofcom as they take forward work on how the Online Safety Act will impact the regulation of services that use AI technologies. This will be done alongside working with the UK Government on broader AI legislation, expected to place requirements on those working to develop the most powerful AI models.