
The Australian Government has enacted the Online Safety Amendment (Social Media Minimum Age) Act 2024, a world-first piece of legislation that establishes a new regulatory framework – the Social Media Minimum Age (SMMA) scheme – to prevent children under 16 from holding accounts on designated social media platforms.
This legislation is an important international test case. The experience gained, and the resolution of any legal challenges, will inform social media regulation globally.
The legislation amends the Online Safety Act 2021 by inserting a new Part 4A, which imposes a core obligation on providers of “age-restricted social media platforms” (ARSMPs) to take “reasonable steps” to prevent “age-restricted users” (Australians under 16) from creating or maintaining accounts.
Enforcement of this primary obligation, which applies to both new and existing accounts, is scheduled to commence on 10 December 2025. Non-compliance exposes corporations to large civil penalties, with the eSafety Commissioner empowered to seek fines of up to 150,000 penalty units, currently equivalent to AUD $49.5 million.
Implementation is not defined by the Act itself but is delegated to the eSafety Commissioner, whose Social Media Minimum Age Regulatory Guidance serves as the primary compliance document. This guidance mandates a “layered approach” to age assurance and explicitly rejects simple self-declaration of age as a sufficient standalone measure. This compliance regime is co-regulated by the Office of the Australian Information Commissioner (OAIC), which oversees new, stringent privacy provisions in Part 4A.
There remain issues to clarify. While the SMMA framework represents a globally significant regulatory experiment, its effectiveness and legal stability depend on managing the legal ambiguity in its scope (the “significant purpose” test). The Act’s stringent data privacy mandate (Section 63F), which requires the immediate destruction of personal information after an age check, may contradict the eSafety Commissioner’s safety guidance, which requires platforms to prevent blocked users from re-registering.
Outstanding issues will be addressed in a mandated independent review of the new Part 4A, which is scheduled to occur by late 2027. In the meantime, legal challenges on a number of grounds may emerge, helping to clarify the limits and operation of the scheme.
The primary legal instrument is the Online Safety Amendment (Social Media Minimum Age) Act 2024. This legislation is not a standalone Act; rather, it is a formal amendment to Australia’s principal online safety framework, the Online Safety Act 2021.
The Act was introduced in response to rising public and governmental concern regarding the detrimental effects of social media use on the mental health and development of children. The government’s stated rationale, articulated by Prime Minister Anthony Albanese, is to protect childhood, “let kids be kids”, and provide parents with greater control and support in managing their children’s digital experiences. The legislation aims to mitigate risks associated with cyberbullying, exposure to harmful content, and addictive platform designs.
The amendment’s primary function is the insertion of a new Part 4A – Social media minimum age into the Online Safety Act 2021. This new Part establishes the entire SMMA scheme.
The key provisions of Part 4A are:
To facilitate this age-based reform, the legislation also makes a necessary consequential amendment to the Age Discrimination Act 2004.
The legislation progressed from Bill to Act with notable speed in late 2024, followed by a 12-month period dedicated to building the regulatory and technical framework.
Legislative Phase (2024):
Regulatory & Enforcement Phase (2025):
The timeline reveals a “legislate first, build framework later” strategy. The Act commenced on 11 December 2024, but its central obligation (Section 63D) was subject to a delayed enforcement clause (Section 63E), allowing up to 12 months for compliance.
This 12-month grace period was a regulatory on-ramp. The government had successfully passed the law, but had not yet defined the mechanism for compliance (“reasonable steps”) or confirmed its technical feasibility.
This period was therefore essential for the government and its agencies to retroactively construct the entire compliance ecosystem, including the Age Assurance Technology Trial (AATT) to prove technical feasibility and the eSafety and OAIC guidance to define the rules.
For affected entities, this means the regulatory guidance from the Commissioners is arguably more significant for day-to-day compliance than the Act itself.
The SMMA scheme does not apply a simple list of banned services. Instead, it creates a new legal definition: “age-restricted social media platform” (ARSMP).
The criteria for determining if a service is an ARSMP are outlined in the Online Safety Act 2021 and referenced by the new Part 4A. A service is captured if it meets specific conditions, primarily:
The Minister for Communications retains the power to specify in legislative rules that a particular electronic service is an ARSMP, even if the service would not otherwise clearly meet the definition.
The Minister for Communications announced a list of affected platforms on 5 November 2025, after an evaluation conducted by the eSafety Commissioner. However, the Minister also described the list as “dynamic”, so other platforms may be added at a later stage.
Based on the “significant purpose” test and official government communications, the law is clearly intended to cover the major social media platforms that are central to the public debate on youth mental health.
Platforms explicitly named in the 5 November government announcement are:
YouTube has challenged its inclusion on a number of grounds. Primarily, it argues that the scheme could limit Australia’s implied freedom of political communication. This legal principle constrains what governments can do to limit speech about political and government matters. In addition, YouTube questions its identification as a social media platform, and argues that its late inclusion violates procedural fairness requirements. It remains to be seen whether YouTube (Google) will follow up with legal action.
The Online Safety Act 2021 also grants the Minister the power to exempt specific classes of services via legislative rules. On 29 July 2025, the Minister exercised this power by making the Online Safety (Age-Restricted Social Media Platforms) Rules 2025. These Rules specify that services are not ARSMPs if their “primary purpose” falls into one of the following categories:
This “purpose-based” distinction creates some ambiguity. The Act captures services with a “significant purpose” of social interaction, while the Rules exempt services with a “primary purpose” of gaming or messaging. These are two different and potentially conflicting legal standards. A “significant” purpose is merely an important or notable one, whereas a “primary” purpose is the dominant or main one.
A service can, and many do, have a primary purpose of gaming (e.g., Roblox) or messaging (e.g., Discord) while simultaneously having a significant purpose of social interaction. This creates a critical legal fault line. A platform like Discord or Roblox could argue it is exempt under the Rules, while the eSafety Commissioner could argue it is captured under the Act. While child safety advocates like the Alannah & Madeline Foundation broadly welcomed the exemptions, they noted that these services are not inherently “safe” and that risks still occur there. This ambiguity effectively creates a large, exempt category of online services where children can continue to interact socially, significantly weakening the practical scope of the “ban.”
The 2024 Act does not impose a strict liability “ban” on underage users. The core legal obligation, articulated in Section 63D, is for ARSMPs to take “reasonable steps” to prevent them from having accounts.
This “reasonable steps” standard is a flexible, objective, and principles-based test that is common in regulation. The specifics of what constitutes “reasonable” are not defined within the Act itself. This obligation is comprehensive, applying both to preventing the creation of new accounts and to detecting and deactivating existing accounts held by users under 16.
This distinction is legally paramount. While the law is referred to as a “ban” in some public discourse, the legislation uses “reasonable steps”. This is a tacit acknowledgment that a perfect technical ban is impossible, given the wide availability of circumvention tools like VPNs and the ease of falsifying information. By choosing this standard, the legislature has shifted the legal burden for platforms from a technical one (achieve a 100% perfect block) to a legal and compliance one (create, document, and maintain a defensible, robust process that aligns with regulatory expectations).
The Act explicitly delegates the power to define “reasonable steps” to the eSafety Commissioner. Following consultations, the Commissioner released the “Social Media Minimum Age Regulatory Guidance” on 16 September 2025. This 55-page document is the single most important resource for platform compliance.
The guidance is principles-based, not technologically prescriptive. It requires platforms to implement measures that are reliable, robust and effective and that, critically, do not remain static: platforms are expected to continuously monitor and improve their systems.
Key expectations outlined in the guidance include:
The most critical directive from the eSafety Commissioner is that “self-declaration alone… is not considered sufficient to meet the legal obligation”. This is reinforced by the eSafety Commissioner’s February 2025 report, which confirmed that self-declared age would no longer be acceptable and that stronger methods must be used.
This directive effectively ends the long-standing industry practice of relying on a simple checkbox or date-of-birth field, which has been widely recognised as loosely enforced.
Instead, the guidance and the government’s technical trial mandate a “layered approach” (also known as “Successive Validation”). This model is designed to minimise friction for the vast majority of legitimate, adult users while escalating the level of “age assurance” for users who are flagged as high-risk or potentially underage.
The government’s 12-month regulatory runway was used to commission the independent Age Assurance Technology Trial (AATT) to determine the real-world viability of these technologies in Australia. The Final Report, published on 31 August 2025, provides the technical foundation for the eSafety Commissioner’s guidance.
The trial analysed the trade-offs of the three primary age assurance methods:
Synthesising the eSafety guidance with the AATT findings reveals the only logical compliance model, often referred to as a “risk-based waterfall.” The eSafety Commissioner has stated it would be “unreasonable” to “reverify everyone’s age”. Therefore, platforms will seek to assure the age of Australia’s 20 million adult users with “little interruption”, reserving high-friction checks for high-risk cases.
This “Successive Validation” model will likely operate as follows:
This “waterfall” is the only practical model that satisfies the eSafety Commissioner’s seemingly contradictory demands: be robust and effective, but do not rely on self-declaration, and do not force ID checks on all users.
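As a rough illustration only, the sketch below shows how a risk-based waterfall of this kind might be structured in code. All function names, signals and thresholds here are hypothetical; neither the Act nor the eSafety guidance prescribes any particular implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AgeCheckResult(Enum):
    LIKELY_ADULT = auto()        # low-friction path: no further checks
    NEEDS_VERIFICATION = auto()  # escalate to a higher-assurance method (e.g. an ID check)
    UNDER_16 = auto()            # account creation blocked / existing account deactivated


@dataclass
class AgeSignals:
    """Hypothetical low-friction signals a platform might already hold."""
    declared_age: int            # self-declared age: collected, but never sufficient alone
    estimated_age: float | None  # e.g. facial age estimation, where available and consented
    inferred_age: float | None   # e.g. inference from account history and behaviour


def successive_validation(signals: AgeSignals, minimum_age: int = 16, buffer: int = 2) -> AgeCheckResult:
    """Illustrative risk-based waterfall: friction escalates only for flagged users."""
    # Step 1: a self-declared age under 16 ends the process immediately.
    if signals.declared_age < minimum_age:
        return AgeCheckResult.UNDER_16

    # Step 2: users claiming to be 16+ are checked against low-friction signals,
    # because self-declaration alone is not considered sufficient.
    estimates = [e for e in (signals.estimated_age, signals.inferred_age) if e is not None]
    if not estimates:
        return AgeCheckResult.NEEDS_VERIFICATION

    # Step 3: clearly adult users pass with little interruption; borderline or
    # apparently underage users are escalated to a higher-assurance check rather
    # than being blocked on the strength of an estimate alone.
    if min(estimates) >= minimum_age + buffer:
        return AgeCheckResult.LIKELY_ADULT
    return AgeCheckResult.NEEDS_VERIFICATION
```

The design choice reflected in this sketch is the one the guidance points towards: the great majority of adult users clear the low-friction checks with little interruption, while higher-friction verification is reserved for the small cohort flagged as borderline or high-risk.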
The SMMA scheme is a dual-regulatory regime, with enforcement split between two powerful government bodies:
This dual-regulatory structure is deeply interconnected. The eSafety Commissioner’s guidance explicitly states that any steps taken by a platform “will not be ‘reasonable’ unless” the platform also complies with its privacy obligations under Part 4A and the Privacy Act 1988. This makes privacy compliance, as defined by the OAIC, a prerequisite for safety compliance.
To clarify these new, stricter obligations, the OAIC published its Privacy Guidance on Part 4A (Social Media Minimum Age) of the Online Safety Act 2021 in October 2025. This guidance mandates that platforms adopt a “Privacy by Design” approach and recommends they undertake Privacy Impact Assessments (PIAs) before implementing an age assurance method.
The guidance categorises the data being handled into three types:
The legislative core of these new privacy protections is Section 63F of the Online Safety Act 2021. This section was introduced to assuage public fears about data harvesting and is significantly stricter than existing Australian privacy law. It imposes two ironclad rules:
The OAIC’s guidance confirms that this destruction mandate is more stringent than the existing Australian Privacy Principle (APP) 11.2. APP 11.2 permits de-identification or retention for ancillary business needs. Section 63F, by contrast, requires the total destruction of “inputs” (like selfies or ID scans) and “outputs” (the age token).
This stringent mandate was a political “kill switch”. It was designed to counter widespread criticism that the law would force all Australians to hand over sensitive ID, creating a “honeypot” for data breaches. Reports from the AATT found “concerning evidence” that some vendors were already “over-anticipating” regulatory needs and retaining data unnecessarily. Section 63F was Parliament’s solution: an ironclad guarantee that data must be destroyed.
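To illustrate what the Section 63F destruction mandate implies in practice, the sketch below assumes a hypothetical third-party age assurance provider and shows the only artefact a platform could retain: the bare pass/fail outcome, with the input document and any derived age token discarded as soon as the check completes. It is a sketch under those assumptions, not a prescribed design.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgeCheckOutcome:
    """The only record retained after the check: a bare pass/fail decision.

    Deliberately no date of birth, no ID details, no reusable age token:
    Section 63F requires both the inputs and the derived outputs to be destroyed.
    """
    passed: bool


def check_age_and_destroy(id_document_scan: bytearray, verifier) -> AgeCheckOutcome:
    """Run an age check, then ensure the input does not persist.

    `verifier` stands in for a hypothetical age assurance provider; the Act does
    not name or mandate any particular vendor, method or API.
    """
    try:
        is_16_or_over = verifier.verify_minimum_age(id_document_scan, minimum_age=16)
        return AgeCheckOutcome(passed=is_16_or_over)
    finally:
        # Destroy the input in place; nothing is written to logs, caches or storage.
        for i in range(len(id_document_scan)):
            id_document_scan[i] = 0
```

Because nothing identifying survives the check, the platform retains no record with which to recognise a previously blocked user, which is the practical root of the re-registration loophole discussed below.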
In its submission to the Parliamentary inquiry, TikTok argues that the Act creates a “compliance loophole” that directly pits the Act’s safety and privacy goals against each other. As articulated in TikTok’s submission, the “compliance loophole” functions as follows:
This scenario potentially creates a “dual penalty trap” for platforms. The privacy provision (S 63F) directly undermines the safety provision (S 63D) and the eSafety Commissioner’s explicit guidance that platforms must prevent the “re-registration” of blocked users.
Parliament did not accept this argument during the debate on passage of the Act, but it remains an issue to watch.
The biggest legal threat to the scheme is the claim, advanced by YouTube, that it breaches Australia’s implied freedom of political communication. If that argument has merit, the scheme is open to constitutional challenge, and such a challenge would be an important test case. Given Google’s legal and financial resources, mounting a challenge would be straightforward (though winning it would not necessarily be).
Closer to home, the Australian Human Rights Commission (AHRC) has expressed “serious reservations” about the entire scheme. The AHRC critiques the law as a “blunt instrument” that disproportionately interferes with the fundamental rights of children and young people, including:
Furthermore, the AHRC notes that platforms will inevitably be forced to “assure” the age of all users, not just children, creating systemic privacy risks for every Australian.
But the rights afforded to children are qualified by their relative immaturity – a principle well established in common law and legislation – and this makes the Commission’s claims arguable at best. The stringent rule around data destruction also arguably addresses the privacy concern.
The Commission’s sentiment is nonetheless shared by platform operators like Snap, which stated it will comply with the law but must “fundamentally disagree with it”, and by many free speech advocates. It remains to be seen whether legal challenges will bring the scheme down or modify it significantly.
As established, the law is enforced by two co-regulators: the eSafety Commissioner for the “reasonable steps” safety obligation and the OAIC for the Section 63F privacy obligations.
A critical component of the enforcement regime is that the Act places the “evidential burden” on the platform provider. In any civil penalty proceeding, the platform bears the burden of proving that it did take reasonable steps, or, alternatively, that no reasonable steps could have been taken to comply.
This is a high legal threshold for platforms to meet and makes comprehensive compliance documentation essential. The eSafety Commissioner is equipped with information-gathering powers to monitor compliance and can issue formal notices regarding non-compliance.
The penalties for non-compliance are severe, reflecting the government’s political will and providing a significant deterrent.
These parallel, multimillion-dollar penalties confirm the high-stakes nature of the “dual penalty trap”. Platforms must navigate the contradictory demands of Section 63D and Section 63F, knowing that a compliance failure in either direction carries catastrophic financial risk.
To manage the public rollout and prepare families for the 10 December 2025 change, the government has launched a major public awareness campaign titled “For The Good Of”.
This final section provides a high-level summary of the critical challenges for platforms subject to the SMMA scheme and offers a forward-looking perspective on the law’s future.
Affected entities must navigate a complex, expensive, and legally contradictory regulatory environment. The primary challenges are:
Given the evidential burden and the legal ambiguities, a platform’s primary goal must be to build a defensible compliance regime:
The 2024 Act inserts Section 239B into the Online Safety Act 2021, which mandates an independent review of the operation of Part 4A. This review must be completed within two years of the enforcement date (i.e., by 10 December 2027), with a report tabled in Parliament.
This mandated review is a tacit acknowledgment by Parliament that this “world-first” law is a regulatory experiment. The 2025–2027 enforcement period will effectively serve as a “live trial.” All stakeholders will use this period to gather evidence. Platforms will undoubtedly document every instance of the Section 63F loophole to prove the law is unworkable as written. Simultaneously, the eSafety Commissioner will gather data on circumvention rates, and privacy advocates and the AHRC will gather evidence on the law’s privacy impact on all Australians.
The current law is, therefore, “Version 1.0”. The 2027 review is the true battleground where the shape of “Version 2.0” will be contested. The primary strategic objective for affected operators is to manage risk in the 2025–2027 period, and to collect data and other evidence to inform the subsequent review and any changes or improvements that follow.
Venture Insights is an independent company providing research services to companies across the media, telco and tech sectors in Australia, New Zealand, and Europe.
For more information go to ventureinsights.com.au or contact us at contact@ventureinsights.com.au.