
Instagram is a social media app predominantly used to share photos and videos, although other features include messaging, live-streaming and video chats. Filters and other photo-enhancing tools can be used to customise an image before sharing, and the use of hashtags enables users to make connections with the other 1.452 billion users worldwide. Instagram accounts are often highly curated, with images and videos heavily edited, and accounts with large followings are used to influence other users through product marketing and endorsement.

Instagram now sits alongside Facebook, WhatsApp and Messenger under the parent company Meta. Meta has replaced Facebook as the leading brand in this group, and Meta branding is likely to become increasingly visible across all of these apps.

The minimum age for Instagram users is 13; however, the platform does not have any rigorous age verification methods.

All accounts set up by users under 16 will default to private. This option will only be offered if users have provided their correct date of birth, showing them as under 16. All other accounts will default to a public setting.

Find out more about age ratings in our ‘A parent and carer’s guide to age ratings of apps and games’.

Instagram is a popular social app based on photo and video sharing that allows young people to connect with their friends and family as well as follow their favourite celebrities, leaders and influencers. It is especially compelling for young people as the number of likes and followers they receive can provide a sense of approval, popularity and acceptance. Instagram ‘Feeds’ essentially act as a highlights reel that amplifies the good and allows users to share the best and most attractive versions of themselves. Although this could be damaging for their well-being and mental health, interacting positively with the app and the people they know through its various features can be very fun and satisfying for young people.

One of the most common criticisms of Instagram is that it is very image based. It is a space in which many users spend time using effects and filters to heavily edit their photos and videos before sharing, projecting an image of perfection to the world. Such preoccupation with image and status can have a negative impact on users’ well-being and can lead to low self-esteem. Overexposure to highly edited content can also have a negative impact on body image, and constantly scrolling through photos and videos of other people looking great and having fun can be detrimental to mental health.

Negative comments placed on users’ photos and videos can further impact well-being, as young people often share their images to seek validation from other users on the platform and delete those that do not achieve many ‘Likes’. Talk to your child about how many Instagram profiles are curated to appear perfect and are not a true reflection of the people behind them or their lives, and explain that validation, or the lack of it, from ‘Likes’ and followers is not a realistic indication of their beauty or value as a person.

Instagram offers a ‘Sensitive content control’ which allows users to manage the amount of sensitive content they see within the app. Sensitive content is content that goes against Instagram’s recommended guidelines and can include violent or sexually explicit content, as well as content that promotes products or procedures that may be regulated. This control is set to ‘Standard’ by default, so it is recommended that you restrict it further for younger users.

Users should be aware that Meta AI is an AI chatbot similar to ChatGPT or Snapchat's My AI. You can ask Meta AI any question, and it will try to provide an answer. However, as a 'Large Language Model' (LLM), Meta AI may sometimes produce misinformation or biased responses. This happens because LLM-based chatbots generate answers based on the likelihood of words appearing together, without verifying the accuracy of their sources. You should encourage your child to fact-check any statements made by Meta AI to ensure that the answers they receive are factual and true.

Meta AI can also generate images upon user request. Meta AI has several safeguards to prevent the generation of inappropriate images and watermarks any images it generates, making it easier to identify whether an image was created by Meta AI. However, there is a possibility that Meta AI can misinterpret a prompt and generate an image which may be upsetting or confusing for your child. It is important that you let your child know they should speak to you if Meta AI generates an inappropriate or upsetting image.

As Instagram is such a popular platform, with over a billion users worldwide, it is possible that your child is making connections with people they know as well as people they don’t. As with other social networks, it is possible for users to set up fake accounts, pretending to be someone else. Encourage your child to question whether they really know the person behind an Instagram profile before making contact. To protect young people from unwanted contact from adults, Instagram has a feature which prevents adults from sending direct messages to under-18s who do not follow them. Accounts belonging to under-16s will not appear in ‘Explore’, ‘Reels’ or ‘Accounts suggested for you’ for adults whom Instagram has flagged as showing potentially suspicious behaviour. Speak to your child about the risks of connecting with strangers and explain the importance of not sharing any personal or identifiable information on their profile or within chats. Encourage them to tell you if they have been asked more personal questions or to chat privately using a different app.

Parents and carers can additionally use Instagram’s built-in parental supervision tools to help protect their children on Instagram. The parental control tools allow parents and carers to see who their child follows on Instagram, and who follows their child. It is advised that you speak with your child about adding parental supervision to their account. Ensure that your child knows that parental supervision is to help you keep them safe, rather than a way for you to breach their privacy. For instructions on setting up parental supervision, see the ‘Managing teen accounts’ section of this guide.

The ‘Add Yours’ feature allows creators to solicit submissions for reels from their fans and followers. Whilst this feature is designed to increase engagement between the creator and their fans, it can also be used as a shaming prompt. Creators can specify what submissions they are looking for in the prompt, which can be fun but also potentially upsetting. As a creator-selected reel is published and viewable by all the creator’s followers, your child’s content could be viewed by a much wider audience, which, depending on the context of the ‘Add Yours’ prompt, could result in online bullying. It is advised that you speak with your child and help them avoid prompts from negative creators or influencers, as they may not have good intentions with follower submissions.

The ‘Nicknames’ feature on Instagram allows users to give each other nicknames in group chats and personal chats. Nicknames are designed to be fun and allow users to give each other more personalised names in chats. However, nicknames may also be used derogatorily to hurt other users, particularly in group chat settings. You should make sure your child is aware of this possibility if they decide to allow other users to give them nicknames. For instructions on managing nicknames, see the ‘Managing interactions and content’ section of this guide.

The Instagram profile that young people create to show the world who they are can be integral to their sense of self. Users can be targeted with negative comments and have their content shared in ways which may humiliate them. This can have a real impact on mental health and well-being. Whilst Instagram has community guidelines in place to limit hurtful contact, bullying and trolling on the platform are not uncommon. To limit exposure to harmful commentary and content, explore the privacy and ‘Limits’ settings on the app. Encourage your child to talk with you if they experience bullying through the platform and ensure they know how to report and block users who behave inappropriately. On each individual post, users can also hide the ‘Like count’ and turn off comments to prevent others from making negative remarks and to limit the pressure to get a high number of likes. These features can be enabled in the advanced settings on Instagram.

The platform has also launched a ‘Safety notices’ feature, which encourages teens to be cautious in conversations with adults they are already connected to. This feature will notify young people when an adult who has been exhibiting potentially suspicious behaviour is interacting with them in direct messages (DMs) and will give them the option to block or report.

The main commercial risk of Instagram is the use of influencers to promote goods or brands. Whilst influencers must disclose when they have been paid to advertise an item, young people can feel a strong pull to copy them. This is especially so when famous celebrities or influencers that your child already idolises, values and follows persuade their following to purchase a particular item. Product tagging in reels also means that viewers can now buy the products promoted by clicking on the tag, which may lead to additional spending via the platform. Speak to your child about the role of influencers and remind them that influencers have not personally bought all of the goods they promote.

Meta has updated its targeted advertising policy on Instagram, meaning that adverts can only be targeted to users under 18 based on their age and location and not on their gender, interests or activity. Users will also be able to manage their advert topic control within ‘Ad preferences’ in the settings menu.

Meta is launching ‘Meta Verified’ for Instagram and Facebook, a subscription service that provides users with a blue badge to indicate the authenticity of the user. Young people may be attracted to having a blue badge on their profile, as this is typically conflated with popularity. Meta Verified for Instagram costs £11.99 on either iOS or Android. We recommend having a discussion with your child about making decisions with money, reminding them that social media subscriptions are a way for social media companies to make money without offering many user benefits.

Users should also be aware that Meta’s ‘Accounts Centre’ has introduced ‘cross-posting’, which allows individual posts to be shared across Instagram and Facebook. This means that users are now able to post to both social media accounts at once. Users who have used the same email address for both accounts have found that cross-posting is enabled by default. It is recommended that users unlink their accounts through the settings menu. Advice on how to do this is in the ‘Managing privacy’ section of this guide.

The ‘Read receipts’ feature on Instagram lets users know when another user has seen or read their message. Although it can be reassuring to see that a message has been seen, read receipts are known to pressure users to respond quickly. They can also make users anxious if a message has been read but not responded to. You should help your child turn off read receipts so they can continue to message their friends in their own time and without feeling pressured or watched. Steps to do this are in the ‘Managing interactions and content’ section of this guide.

Instagram is a popular platform and when used responsibly, it can be enjoyable. Remind your child that most of the content posted is staged to look its best and does not reflect real life.

Meta has created a dedicated Teen privacy centre to help teen users manage their privacy on all Meta platforms.

Meta has published a new parents’ guide for Instagram that provides information on the latest safety tools and privacy settings.

Instagram has a dedicated Family Centre to help families navigate online experiences together.

Instagram has introduced a number of new features designed to protect young people from sextortion, a form of online extortion in which offenders threaten to share nude or semi-nude images unless demands are met. The new features are enabled by default for any accounts belonging to under-18s.