Safeguarding or overstepping? Australia’s social media ban for under 16s

Eve Shvartsman and Meygol Menhaji 

Doomscrolling, clout culture, misinformation, online radicalisation, and data privacy: these are just some items from a long list of concerns surrounding the use of social media in 2025. Recent cultural reflections, such as Netflix’s Adolescence and The Social Dilemma, underscore the increasingly complex role that digital platforms play in adolescent development.

In Australia, 84% of children aged 8–12 are reported to have used at least one social media platform, with 40% using their own personal accounts. Of these users, only 13% were correctly identified by platforms as being under the minimum age requirement of 13. The growing use of technology and social media amongst children highlights the urgent need to examine new government policies aimed at regulating young Australians online.

Understanding the changes

In Australia, the Online Safety Act 2021 provides the primary framework for promoting online safety, empowering the eSafety Commissioner to handle complaints, manage harmful content, enforce removal notices, and coordinate national efforts against online harm.

In response to growing concerns about social media's impact on young Australians, the government passed the Online Safety Amendment (Social Media Minimum Age) Act 2024 (‘the Act’). The Act raises the minimum age for creating social media accounts from 13 to 16 and aims to hold platforms accountable for protecting children online. It applies to services that allow users to interact, post, and share content, such as Instagram, Facebook, Snapchat, X, and TikTok, and requires them to take ‘reasonable steps’ to prevent under-16s from creating accounts, with penalties of up to $49.5 million for non-compliance.

The new age restrictions are expected to come into force by December 2025, giving platforms time to develop and implement age-verification systems. As the first law of its kind worldwide to lift the minimum age for social media use to 16, the move marks a bold step — but one that raises questions about its real-world impact on young Australians, and whether the benefits will outweigh the risks.

Impact on child rights

Under international laws that Australia is bound by, including the Convention on the Rights of the Child and the International Covenant on Civil and Political Rights, children are guaranteed rights such as freedom of expression, access to information, education, play and leisure, and protection from discrimination. While these rights are not absolute and can be limited, any restriction must serve the best interests of the child. The Australian government argues the social media ban is a proportionate measure to protect children, but human rights groups like UNICEF and the Human Rights Law Centre disagree. They warn the blanket ban undermines children’s rights and could harm their wellbeing, calling it overly restrictive and beyond what is necessary to address online risks.

This raises the question: does the Act strike the right balance between safeguarding children online and respecting their rights, or does it overreach in a way that risks doing more harm than good?

Rushed and unrefined guidelines

The Bill passed swiftly through both houses of Parliament in just nine days in November 2024, with the public given only one business day to make submissions. The Greens labelled the process ‘rushed and reckless’, and key terms of the Bill, such as what constitutes ‘reasonable steps’ for social media platforms, were left unclear, casting doubt on how the ban will be enforced and whether it can meet its objectives.

The limited submission window left many stakeholders unable to contribute meaningfully, and some declined to respond at all. The Australian Psychological Society, for instance, did not make a submission, citing the Bill’s far-reaching implications for child development, mental health and society at large, and the impossibility of properly assessing such a complex issue in a single day. Widespread criticism over the lack of consultation with key groups, including young people,2 Indigenous communities, parents and mental health professionals, highlights a deeper concern: children were given very limited opportunity to provide input on matters directly affecting their lives, in direct contradiction to their right to be heard under Article 12 of the Convention on the Rights of the Child.

Mental health and cyberbullying

The World Health Organisation estimates that one in seven adolescents currently experiences a mental health condition, and the Australian Institute of Health and Welfare reports that the prevalence of depression or anxiety amongst 15–34-year-olds surged from around 9% in 2009 to 22% in 2022.

There is a growing body of evidence linking adolescent social media use with increased rates of mental illness. Adolescence is a critical period for brain development, which makes young people particularly susceptible to the negative effects of online environments. Social stress, especially in forms such as cyberbullying, has a disproportionate impact on mental health during this vulnerable stage.

The eSafety Commissioner’s 2021 report notes that 44% of Australian young people have faced negative online experiences, with 15% enduring threats or abuse. However, the ban will not fully shield children from these dangers: the Explanatory Memorandum notes that messaging apps, online gaming services and ‘services with the primary purpose of supporting the health and education of end-users’ will be exempt from the age restrictions. While these exempt spaces remain open to harms such as cyberbullying, researchers believe that reduced exposure to online forums will help ease the prevalence of cyberbullying and, in turn, improve young people’s mental health outcomes.

Despite these potential harms, social media can benefit children and adolescents by enhancing communication and social connection. This is especially relevant for minority groups, including Indigenous children, LGBTQIA+ communities, and individuals with communication disabilities,3 for whom social media plays a vital role in building community, reducing social isolation and enabling self-expression. Limiting access to these platforms may reduce opportunities for peer support and access to mental health resources, ultimately undermining protective factors for well-being. In doing so, a social media ban may unintentionally deepen existing inequities.

Privacy concerns

The ban will require platforms to develop and implement new age assurance mechanisms to verify users’ ages. Under the newly inserted s63DB of the Act, platforms are prohibited from collecting any government ID, prompting them to rely on ‘reasonable alternatives’ to comply with the Act. These ‘reasonable alternatives’ will be informed by the government’s three-phase age assurance technology trials. The possible implementation of new age-assurance technologies raises significant privacy and data concerns relating to large-scale data collection, especially as the mechanisms for capturing and storing personal data remain vague and undefined.

Importantly, any information collected for the purposes of age checks will need to be destroyed to comply with s63F of the Act, or risk breaching s13 of the Privacy Act 1988. However, prior to this destruction, the widespread collection and storage of data still raises serious privacy concerns. The 2024 decision of Ireland’s Data Protection Commission to fine Meta €251 million over a 2018 data breach that exposed the personal data of millions of users illustrates how major platforms can mishandle sensitive data, and reinforces the need for strong and specific data use protections.

Knowledge blocking or protection?

Social media bans risk narrowing access to information, confining knowledge within the limits of the school curriculum, and stifling exposure to diverse opinions and knowledge. Exposure to, and dialogue surrounding, movements like #MeToo, Black Lives Matter, and School Strike 4 Climate are likely to become more restricted, hindering identity and value development, as these movements rely on social media for support, visibility, and widespread engagement that traditional news outlets alone cannot offer. Restricting access may also delay the development of digital literacy, leaving children more vulnerable and less prepared to navigate the online world once they do gain access.

On the other hand, while social media can be a valuable space for activism and learning, it also exposes users to misinformation and harmful content. Platforms like Instagram, Facebook and X are increasingly shifting away from independent fact-checking toward community-based moderation, raising serious concerns about the reliability of the content they host. Studies show that young Australians are turning to social media as a primary source of news, yet media literacy education in schools has not kept pace. As hate speech, misinformation, and toxic commentary proliferate, many young users are left without the critical skills needed to navigate and interpret the content they encounter online.

A double-edged sword?

In countries like Norway, children under 13 already need parental consent to use platforms like Instagram or TikTok — and now, lawmakers are pushing to raise the age limit to 15 in a bid to better protect young users online. These laws, which can be bypassed through parental consent, have been widely flouted, with the Norwegian Media Authority estimating that 53% of 9-year-olds, 58% of 10-year-olds, and 72% of 11-year-olds use social media despite the restrictions.

This suggests that a more relaxed approach may not be enough; perhaps a stricter ban is exactly what is needed to effectively protect young users from online harm. However, questions arise as to whether this approach oversteps, going beyond the best interests of the child by infringing on children's rights and limiting access to valuable online spaces for community, activism and information. With doubts about its effectiveness and the lack of consultation with key stakeholders, the ban’s broader impact on young Australians remains uncertain.

Eve Shvartsman and Meygol Menhaji were interns with the Australian Human Rights Institute in Term 1, 2025.

Footnotes

1. See: Submissions 8, 16, 19, 59, 104 to the Online Safety Amendment (Social Media Minimum Age) Bill 2024 https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Environment_and_Communications/SocialMediaMinimumAge/Submissions
2. See: Submissions 9, 48, 59, 104 to the Online Safety Amendment (Social Media Minimum Age) Bill 2024 https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Environment_and_Communications/SocialMediaMinimumAge/Submissions
3. See: Submissions 8, 16, 64, 105 to the Online Safety Amendment (Social Media Minimum Age) Bill 2024 https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Environment_and_Communications/SocialMediaMinimumAge/Submissions