Venture Insights - REPORT: Social media restrictions can work but need education

BRIEF: Social media age restrictions can work, but need an education strategy

Abstract: Growing evidence of harms to young people from social media has ignited public concern. In response, Australia passed the Online Safety Amendment (Social Media Minimum Age) Act 2024, a groundbreaking piece of legislation that bans children under 16 from holding accounts on certain social media platforms, including Snapchat, TikTok, Facebook, Instagram, and X. The law makes Australia the first country to implement such a stringent age restriction on social media access. A similar private member’s bill has been presented to the New Zealand Parliament.

Social media age restrictions are controversial, raising difficult issues around privacy, human rights, free speech, and enforcement. However, these issues can be addressed if regulators are prepared to accept second-best solutions on age verification, with a view to improving performance over time. The history of smoking regulation suggests that age verification needs to be complemented by a long-term commitment to parental and other education, based mainly on advertising campaigns. This could be supported by a levy on social media advertising revenues.

The ANZ debate so far

The push for stricter social media regulation in Australia and New Zealand emerged amid growing global concern about the impact of digital platforms on youth mental health. The primary motivation for these laws is to protect children from online harms, including exposure to inappropriate content, cyberbullying, and online predators, all seen as threats to psychological and emotional well-being.

Proponents of regulation cite evidence linking social media use to negative impacts on sleep, stress, and attention, particularly during critical developmental stages. Arguably, these measures align with obligations under the Convention on the Rights of the Child (CRC), specifically Articles 17 and 19, which call for protecting children from harmful content and maltreatment.

The Australian government had been grappling with online safety for years. The eSafety Commissioner, established in 2015 as the world’s first online safety regulator, played a pivotal role in researching and addressing online harms, including cyberbullying and image-based abuse. 

By 2024, public sentiment and bipartisan political support had created momentum for legislative action. Earlier efforts, such as South Australia’s proposal to ban children 13 and under from social media, and a national cabinet review linking online content to harmful behaviours, set the stage for a national approach.

In May 2024, Prime Minister Anthony Albanese announced plans to legislate a minimum age for social media, initially considering an age between 14 and 16. The Coalition, led by Opposition Leader Peter Dutton, advocated a cutoff at 16, aligning with Albanese’s eventual decision. The rapid progression of the Online Safety Amendment (Social Media Minimum Age) Bill 2024, introduced on November 21, 2024, and passed within eight days, reflected the urgency felt by policymakers, despite criticism of its rushed timeline.

Controversially, YouTube has already been exempted from the requirements on the grounds that it is used for educational purposes. This has prompted pushback from other platforms, including TikTok, which has launched an advertising campaign arguing that it offers similar benefits to YouTube.

New Zealand has also seen significant debate around a proposed social media ban for under-16s, spearheaded by National MP Catherine Wedd through her Social Media Age-Appropriate Users Bill. Introduced on May 6, 2025, the bill aims to protect young people from online harms like bullying, inappropriate content, and addiction, mirroring Australia’s recent legislation.

Public sentiment leaned in favour: a December 2024 1News Verian poll found 68% support for a ban. Prime Minister Christopher Luxon strongly supported the initiative, emphasising its importance for child safety and seeking bipartisan support, while the Education Minister was tasked with exploring legislative options.

Legislative provisions

The Australian Online Safety Amendment (Social Media Minimum Age) Act 2024 amends the Online Safety Act 2021, requiring “age-restricted social media platforms” to take “reasonable steps” to prevent users under 16 from holding accounts. Platforms face fines of up to A$49.5 million for systemic non-compliance. The law applies to both new and existing accounts, with no exemptions for parental consent and no “grandfathering” arrangements for current users under 16.

The legislation does not prescribe specific age verification methods, leaving platforms to develop their own systems within a 12-month implementation period concluding in December 2025. The eSafety Commissioner will issue regulatory guidelines on what constitutes “reasonable steps”, informed by an age assurance technology trial that began in January 2025. Platforms are prohibited from using personal data collected for age verification for other purposes without explicit, voluntary consent, and such data must be destroyed after use. A review of the law’s effectiveness is mandated within two years of its implementation.

Catherine Wedd’s Social Media Age-Appropriate Users Bill, introduced on May 6, 2025, aims to restrict social media access for New Zealanders under 16 to protect them from online harms such as bullying, inappropriate content, and addiction. Modelled on Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, the bill places the responsibility on social media platforms to implement mandatory age verification, requiring them to take “all reasonable steps” to ensure users are at least 16 before granting access. Non-compliance could result in fines of up to NZ$2 million. The bill applies to platforms enabling social interaction, such as TikTok, X, Facebook, and Instagram, but exempts education-focused platforms like Google Classroom and health apps like Headspace. It also provides for regulatory oversight, a review after three years, and defences for providers that use reasonable verification measures, with consideration for user privacy in the verification process.

Controversies and criticisms

Critics have attacked these proposals on several grounds:

  • Privacy Concerns: Digital rights advocates warn that age verification technologies, such as biometrics or ID-based systems, could lead to overcollection of personal data, increasing the risk of misuse or breaches. The requirement to destroy data after use and the prohibition on repurposing it aim to mitigate these risks, but skepticism remains about enforcement and about compliance by tech companies with a history of privacy violations.
  • Enforcement Challenges: The lack of clarity on what constitutes “reasonable steps” has raised concerns about the law’s feasibility. Experiences in other jurisdictions, such as Utah and Louisiana, where VPN use surged to bypass age restrictions, suggest enforcement may be difficult. Tech companies, including Meta and Snap, have criticised the rushed legislative process and argued that age verification technology is not yet mature enough to enforce the ban across numerous platforms.
  • Impact on Marginalised Youth: Youth advocates argue that the ban could harm marginalised groups, such as LGBTQIA+, neurodivergent, or migrant teens, who rely on social media for community and support. A blanket ban risks isolating these groups, potentially exacerbating mental health challenges rather than alleviating them.
  • Human Rights Concerns: Human rights advocates note that the ban may infringe on children’s rights to access information and participate in society, as outlined in the CRC. Critics argue that less restrictive alternatives, such as improved content moderation or parental controls, should be prioritised.
  • Potential for Underground Activity: Some critics have warned that the ban may drive young users to less regulated platforms or encourage the use of VPNs and fake accounts, potentially exposing them to greater risks.
  • Global Precedent and Free Speech: The law has drawn international attention, with countries like Norway and the UK considering similar measures. However, X has raised concerns about its compatibility with international human rights treaties, while industry groups like NetChoice argue it could suppress free speech, citing successful legal challenges in the U.S.
  • Industry Concerns About Implementation: Meta and Snap have committed to compliance in Australia, but have raised concerns about unclear guidelines and the burden of implementing age verification across multiple apps. There is also concern about the maturity of age verification technology and processes, and whether a staged implementation may be necessary. The Digital Industry Group Inc (DIGI), which represents major platforms in Australia, has called for a delay until the age assurance trial provides clarity.

Why does this matter?

Public support for intervention remains strong, driven by parental concerns about online safety, though youth advocates and digital researchers argue the law overlooks the benefits of social media, such as education and self-expression. In our view, the evidence of social media harms is significant enough to warrant alarm. The analogy here is smoking, which was progressively restricted as evidence of harm accumulated, culminating in bans on selling tobacco products to minors many years ago.

Smoking regulation took decades to implement fully, and for much of that time it was resisted with arguments similar to those now made by critics of social media age restrictions. These objections must be listened to and addressed, but that does not mean a start cannot be made.

Extending the smoking analogy, social media age restrictions should be seen as a process, not a once-and-for-all intervention. Initial efforts will not be fully effective, just as early efforts to restrict access to tobacco were not. Regulators should therefore be prepared to accept second-best solutions, with the aim of improving them over time. Success will depend on the effectiveness of age verification technologies and on the ability to enforce compliance without compromising privacy or access to beneficial online spaces.

Finally, reducing smoking, especially amongst minors, depended on a multi-pronged strategy of restrictions, education, and taxation. It did not rely simply on restricting access. 

This suggests that technology-based age restrictions are not going to be enough. A multi-year commitment to parental and other education, based mainly on advertising campaigns, will be needed to complement these restrictions and make them effective in the long term. This could be supported by a levy on social media advertising revenues.

About Venture Insights

Venture Insights is an independent company providing research services to companies across the media, telco and tech sectors in Australia, New Zealand, and Europe.

For more information go to ventureinsights.com.au or contact us at contact@ventureinsights.com.au.