Online issues and worries: Generative AI
Information for young people to understand what AI is and some risks to be aware of.
3. AI and algorithms
An algorithm is a set of instructions that tells a computer what to do. Online platforms use algorithms that rely on AI to learn from your activity and suggest new content.
How algorithms learn
Have you ever watched video game playthroughs on YouTube? After you finish watching those videos, you might see recommendations for other online gaming videos, or other content from the same creator. This is because the algorithm has learned you like that kind of content.
Algorithms gather information to suggest new content in different ways.
Content that you (and others) view or follow
One of the biggest things algorithms learn from is the content you watch. So, if you watch videos about crafts, you’ll probably see suggestions about other crafting videos.
If you watch content from the same creator a lot, the algorithm will show you similar creators. But it does this by comparing your activity with other people’s activity. So, you might sometimes get strange suggestions.
For example, if someone watches the same crafting creator as often as you do, but they also watch a lot of motorbike content, you might see a suggestion for motorbike videos too. That’s because the algorithm is learning from everyone, not just you.
In some cases, this can lead to strange or inappropriate content being suggested to you. The best thing you can do if this happens is to report any content that breaks the rules and tell the algorithm you don’t want to see that content.
To do this on YouTube when you’re watching a video:
- tap the 3 dots in the top right corner
- select ‘Not interested’ and ‘Don’t recommend this channel’
For a regular video, you’ll see recommendations on your homepage or under the video you’re watching. You can mark those videos in the same way by tapping the 3 dots next to the title.
Other platforms have similar features you can use to tell the algorithm that you don’t like that content.
But remember that you’re teaching the algorithm when you do this. Like someone learning something new at school, the algorithm needs to practise a few times before it gets things right. So, you might still see content you don’t want to see and might need to remind the algorithm that you’re not interested.
Platforms like YouTube also let you turn recommendations off so that the algorithm doesn’t learn from you at all. See how to do this on YouTube by turning off and deleting your Watch History.
Content you interact with
Interacting with content includes liking, sharing, commenting and viewing photos or videos. The algorithm can’t actually tell whether you like or dislike content, so it assumes that if you interact with something, you like it.
This can cause problems if you accidentally click on a video you don’t want to see or comment on something that has made you angry. To the algorithm, that click or comment looks like a sign that you enjoyed the video.
So, if you see a video that is spreading misinformation and try to inform people in the comments that the information is wrong, you might start seeing more of the same misinformation.
That’s why it’s best to avoid commenting or reacting. Instead, use the ‘not interested’ tool available on platforms. And, if someone’s content is spreading misinformation, report it to the platform so that it can be reviewed and removed.
Remember that AI isn’t actually smart and can only learn from what humans tell it. Because algorithms use AI, an algorithm can only know as much as you tell it. So, make sure you tell it when you don’t like content by:
- marking videos as ‘not interested’
- unfollowing creators who spread negativity
- reporting content that could cause harm
- blocking users or creators you don’t want to see
It’s also important to watch lots of different types of content. This helps the algorithm suggest a wide range of viewpoints and might even help you learn something new. It also helps you avoid something called an echo chamber.
What is an echo chamber?
Imagine you’re walking down a long, empty corridor. As loud as you can, you shout, “I hate broccoli!” Your voice bounces off the corridor walls, and you can hear 2 or 3 echoes of “I hate broccoli” all the way down the corridor. Only, you think it’s other people agreeing with you.
Someone in a nearby room hears different people (you and your echoes) saying “I hate broccoli” from the corridor. They agree, so they go into the corridor and also shout, “I hate broccoli!” Their voice echoes 2 or 3 times too.
This happens a few more times until all you can hear in the corridor is hundreds of people shouting about how much they hate the vegetable. So, when someone new walks by, all they can hear is hatred for broccoli. They might think “wow, something must be really wrong with broccoli if so many people hate it!”
In reality, it might only be a handful of people shouting their distaste, but the echoes make it sound like far more people share the same belief. This makes it a lot easier for others to join in and agree.
This is kind of how echo chambers work online, and they can often be a lot more harmful than disliking a vegetable.
How echo chambers are created
Echo chambers can happen in a number of ways, including:
- always watching the same type of content
- continuing to watch content from a creator who says things you don’t agree with
- following creators who share harmful views
- ignoring harmful content instead of reporting it
- interacting with harmful content (for example, by commenting on it) instead of reporting or blocking it
Algorithms might also suggest content based on the habits of other people in your demographic (people of a similar age, gender or location to you).
For example, if a teen boy named Ben only watches TV or movie compilation videos on his chosen platform, it might seem strange that he suddenly gets a recommendation for a video showing people fighting. Ben has never watched those types of videos before, after all. However, the algorithm might think Ben would like the content because some other young men have watched it.
This kind of suggestion can spread harmful content and pull more people into echo chambers. If Ben accidentally watched that video, or if he added a comment about how wrong it was, the algorithm might think it made a good suggestion and bring more of the same content to his feed.
The impact of echo chambers
The biggest problem with echo chambers is that when you’re in one, you might not know it. As a kid or teen, you’re still figuring out who you want to be and what your beliefs are. If you fall into an echo chamber without knowing it, you might find yourself becoming angry about the world and believing things that are really harmful to both you and others.
Remember that in an echo chamber, beliefs seem more common and more widely held than they actually are – just like the echoes in the corridor. That makes it hard to recognise when you’re stuck in one.
Some signs can include:
- feeling angry towards a type of person or group of people
- feeling depressed or anxious about yourself (such as about your appearance, talents or experiences) but blaming it on someone else; for example, one known echo chamber belief is that women having rights makes the world harder for men, which is not true
- everyone in the comments section saying the same thing, or people targeting anyone who says something different with cruel or angry messages
- not being able to remember the last time you saw a video or post with a different point of view
Remember that you can control your feed. If you think you’re in an echo chamber, turn off recommendations, delete your watch history and try to find content with different points of view.
You can also:
- report content that is hateful or harmful to the platform (this can help moderators remove the content)
- mark content you don’t want to see by tapping the 3 dots on it and selecting ‘I don’t want to see this’ or ‘not interested’