Venture Insights - Australia’s Social Media Minimum Age Scheme: What You Need to Know

EXPLAINER: Australia’s Social Media Minimum Age Scheme

Executive Summary

The Australian Government has enacted the Online Safety Amendment (Social Media Minimum Age) Act 2024, a world-first piece of legislation that establishes a new regulatory framework – the Social Media Minimum Age (SMMA) scheme – to prevent children under 16 from holding accounts on designated social media platforms. 

This legislation is an important international test case. The experience gained, and the resolution of any legal challenges, will inform social media regulation globally.

The legislation amends the Online Safety Act 2021 by inserting a new Part 4A, which imposes a core obligation on providers of “age-restricted social media platforms” (ARSMPs) to take “reasonable steps” to prevent “age-restricted users” (Australians under 16) from creating or maintaining accounts.

Enforcement of this primary obligation, which applies to both new and existing accounts, is scheduled to commence on 10 December 2025. Non-compliance exposes corporations to large civil penalties, with the eSafety Commissioner empowered to seek fines of up to 150,000 penalty units, currently equivalent to AUD $49.5 million.

Implementation is not defined by the Act itself but is delegated to the eSafety Commissioner, whose Social Media Minimum Age Regulatory Guidance serves as the primary compliance document. This guidance mandates a “layered approach” to age assurance and explicitly rejects simple self-declaration of age as a sufficient standalone measure. This compliance regime is co-regulated by the Office of the Australian Information Commissioner (OAIC), which oversees new, stringent privacy provisions in Part 4A.

Issues remain to be clarified. While the SMMA framework represents a globally significant regulatory experiment, its effectiveness and legal stability depend on resolving the ambiguity in its scope (the “significant purpose” test). The Act’s stringent data privacy mandate (Section 63F), which requires the immediate destruction of personal information after an age check, may also contradict the eSafety Commissioner’s safety guidance, which requires platforms to prevent blocked users from re-registering.

Outstanding issues will be addressed in a mandated independent review of the new Part 4A, which is scheduled to occur by late 2027. In the meantime, legal challenges on a number of grounds may emerge, helping to clarify the limits and operation of the scheme.

The Legislative Framework

The Online Safety Amendment (Social Media Minimum Age) Act 2024

The primary legal instrument is the Online Safety Amendment (Social Media Minimum Age) Act 2024. This legislation is not a standalone Act; rather, it is a formal amendment to Australia’s principal online safety framework, the Online Safety Act 2021.

The Act was introduced in response to rising public and governmental concern regarding the detrimental effects of social media use on the mental health and development of children. The government’s stated rationale, articulated by Prime Minister Anthony Albanese, is to protect childhood, “let kids be kids”, and provide parents with greater control and support in managing their children’s digital experiences. The legislation aims to mitigate risks associated with cyberbullying, exposure to harmful content, and addictive platform designs.

The amendment’s primary function is the insertion of a new Part 4A – Social media minimum age into the Online Safety Act 2021. This new Part establishes the entire SMMA scheme.

The key provisions of Part 4A are:

  • Section 63B: This section outlines the new Part’s objective: “to reduce the risk of harm to age-restricted users from certain kinds of social media platforms”.
  • Section 63D: This section contains the central civil penalty provision and the core legal obligation for platforms. It states: “A provider of an age-restricted social media platform must take reasonable steps to prevent age-restricted users having accounts with the age-restricted social media platform”.
  • Section 5 (Definitions): The 2024 Act also amends Section 5 of the Online Safety Act 2021 to insert a new definition for “age-restricted user”, which is defined as “an Australian child who has not reached 16 years”.

To facilitate this age-based reform, the legislation also makes a necessary consequential amendment to the Age Discrimination Act 2004.

Legislative Timeline and Key Dates for Compliance

The legislation progressed from Bill to Act with notable speed in late 2024, followed by a 12-month period dedicated to building the regulatory and technical framework.

Legislative Phase (2024):

  • 21 November 2024: The Online Safety Amendment (Social Media Minimum Age) Bill 2024 was introduced into the Federal Parliament.
  • 26 November 2024: The Senate Environment and Communications Legislation Committee reported on the Bill, following an exceptionally short inquiry period that received around 15,000 submissions.
  • 29 November 2024: The Bill was passed by both Houses of Parliament.
  • 10 December 2024: The Act received Royal Assent.
  • 11 December 2024: The Act commenced, formally inserting Part 4A into the Online Safety Act 2021.

Regulatory & Enforcement Phase (2025):

  • 29 July 2025: The Minister for Communications issued two critical legislative instruments:
    • The Online Safety (Age-Restricted Social Media Platforms) Rules 2025, which specified the types of services excluded from the scheme.
    • The Online Safety (Day of Effect of Social Media Minimum Age) Instrument 2025, which formally set the enforcement date.
  • 31 August 2025: The final report for the government-commissioned Age Assurance Technology Trial (AATT) was released, providing the technical basis for compliance.
  • 16 September 2025: The eSafety Commissioner released the binding Social Media Minimum Age Regulatory Guidance, defining the “reasonable steps” platforms must take.
  • October 2025: The Office of the Australian Information Commissioner (OAIC) published its complementary Privacy Guidance on Part 4A.
  • 19 October 2025: The “For The Good Of” public awareness campaign was launched to prepare Australians for the change.
  • 5 November 2025: The Minister for Communications announced the initial list of affected platforms.
  • 10 December 2025: This is the critical compliance date. The core obligation in Section 63D takes effect, and the eSafety Commissioner’s power to enforce penalties begins.

The timeline reveals a “legislate first, build framework later” strategy. The Act commenced on 11 December 2024, but its central obligation (Section 63D) was subject to a delayed enforcement clause (Section 63E), allowing up to 12 months for compliance. 

This 12-month grace period was a regulatory on-ramp. The government had successfully passed the law, but had not yet defined the mechanism for compliance (“reasonable steps”) or confirmed its technical feasibility.

This period was therefore essential for the government and its agencies to build out the entire compliance ecosystem after the fact, including the AATT to prove technical feasibility and the eSafety and OAIC guidance to define the rules.

For affected entities, this means the regulatory guidance from the Commissioners is arguably more significant for day-to-day compliance than the Act itself.

Identifying “Age-Restricted Social Media Platforms”

The “Significant Purpose” Test

The SMMA scheme does not apply a simple list of banned services. Instead, it creates a new legal definition: “age-restricted social media platform” (ARSMP).

The criteria for determining if a service is an ARSMP are outlined in the Online Safety Act 2021 and referenced by the new Part 4A. A service is captured if it meets specific conditions, primarily:

  • The “sole purpose, or significant purpose” of the service is to enable social interaction between two or more end-users.
  • The service allows end-users to link to, or interact with, some or all other end-users.
  • The service allows end-users to post material on the service.

The Minister for Communications retains the power to specify in legislative rules that a particular electronic service is an ARSMP, even if the provider might otherwise dispute that it meets the definition.

The Minister for Communications announced a list of affected platforms on 5 November 2025, after an evaluation conducted by the eSafety Commissioner. However, the Minister also described the list as “dynamic”, so other platforms may be added at a later stage.

Affected Platforms

Based on the “significant purpose” test and official government communications, the law is clearly intended to cover the major social media platforms that are central to the public debate on youth mental health.

Platforms explicitly named in the 5 November government announcement are:

  • Meta Platforms: Facebook, Instagram, and Threads.
  • ByteDance: TikTok.
  • Snap Inc: Snapchat.
  • X Corp: X (formerly Twitter).
  • Google: YouTube.
  • Others: Reddit and the livestreaming platform Kick.

YouTube has challenged its inclusion on a number of grounds. Primarily, it argues that the scheme could infringe Australia’s implied freedom of political communication, a constitutional principle that constrains governments’ ability to burden speech about political and government matters. In addition, YouTube questions its identification as a social media platform and argues that its late inclusion violates procedural fairness requirements. It remains to be seen whether YouTube (Google) will follow up with legal action.

Exemptions

The Online Safety Act 2021 grants the Minister the power to exempt specific classes of services via legislative rules. On 29 July 2025, the Minister exercised this power by making the Online Safety (Age-Restricted Social Media Platforms) Rules 2025. These Rules specify that services are not ARSMPs if their “primary purpose” falls into one of the following categories:

  • Messaging, email, voice calling, or video calling (e.g., WhatsApp, Messenger).
  • Playing online games (e.g., Roblox, Steam).
  • Education (e.g., Google Classroom).
  • Health services.
  • Professional networking and development.

This “purpose-based” distinction creates some ambiguity. The Act captures services with a “significant purpose” of social interaction, while the Rules exempt services with a “primary purpose” of gaming or messaging. These are two different and potentially conflicting legal standards. A “significant” purpose is merely an important or notable one, whereas a “primary” purpose is the dominant or main one.

A service can, and many do, have a primary purpose of gaming (e.g., Roblox) or messaging (e.g., Discord) while simultaneously having a significant purpose of social interaction. This creates a critical legal fault line. A platform like Discord or Roblox could argue it is exempt under the Rules, while the eSafety Commissioner could argue it is captured under the Act. While child safety advocates like the Alannah & Madeline Foundation broadly welcomed the exemptions, they noted that these services are not inherently “safe” and that risks still occur there. This ambiguity effectively creates a large, exempt category of online services where children can continue to interact socially, significantly weakening the practical scope of the “ban.”

The “Reasonable Steps” Obligation and Age Assurance

The “Reasonable Steps” Requirement

The 2024 Act does not impose a strict-liability “ban” on underage users. The core legal obligation, articulated in Section 63D, is for ARSMPs to take “reasonable steps” to prevent such users from having accounts.

This “reasonable steps” standard is a flexible, objective, and principles-based test that is common in regulation. The specifics of what constitutes “reasonable” are not defined within the Act itself. This obligation is comprehensive, applying both to preventing the creation of new accounts and to detecting and deactivating existing accounts held by users under 16.

This distinction is legally paramount. While the law is referred to as a “ban” in some public discourse, the legislation uses “reasonable steps”. This is a tacit acknowledgment that a perfect technical ban is impossible, given the wide availability of circumvention tools like VPNs and the ease of falsifying information. By choosing this standard, the legislature has shifted the legal burden for platforms from a technical one (achieve a 100% perfect block) to a legal and compliance one (create, document, and maintain a defensible, robust process that aligns with regulatory expectations).

The eSafety Commissioner’s Regulatory Guidance

The Act explicitly delegates the power to define “reasonable steps” to the eSafety Commissioner. Following consultations, the Commissioner released the “Social Media Minimum Age Regulatory Guidance” on 16 September 2025. This 55-page document is the single most important resource for platform compliance.

The guidance is principles-based, not technologically prescriptive. It requires platforms to implement measures that are reliable, robust, and effective and, critically, that do not remain static: platforms are expected to continuously monitor and improve their systems.

Key expectations outlined in the guidance include:

  • Detecting and deactivating existing underage accounts with care and clear communication.
  • Preventing re-registration or circumvention by users whose accounts have been removed.
  • Taking a “layered approach” to age assurance to minimise user friction and provide choice.
  • Avoiding reliance on self-declaration alone, which is deemed insufficient.
  • Continuously monitoring, improving, and transparently reporting on the effectiveness of these measures.

The most critical directive from the eSafety Commissioner is that “self-declaration alone… is not considered sufficient to meet the legal obligation”. This position was foreshadowed in the eSafety Commissioner’s February 2025 report, which confirmed that self-declared age would no longer be acceptable and that stronger methods must be used.

This directive effectively ends the long-standing industry practice of relying on a simple checkbox or date-of-birth field, which has been widely recognised as loosely enforced.

Instead, the guidance and the government’s technical trial mandate a “layered approach” (also known as “Successive Validation”). This model is designed to minimise friction for the vast majority of legitimate, adult users while escalating the level of “age assurance” for users who are flagged as high-risk or potentially underage.

The Age Assurance Technology Trial (AATT)

The government’s 12-month regulatory runway was used to commission the independent Age Assurance Technology Trial (AATT) to determine the real-world viability of these technologies in Australia. The Final Report, published on 31 August 2025, provides the technical foundation for the eSafety Commissioner’s guidance.

  • Headline Finding: The trial’s main conclusion is that age assurance “can be done in Australia privately, efficiently and effectively”. It found no substantial technological limitations to implementation.
  • Key Finding: The report stressed that there is “no ‘one-size-fits-all’ solution”. This finding directly underpins the eSafety Commissioner’s mandate for a “layered” or “waterfall” model.

The trial analysed the trade-offs of the three primary age assurance methods:

  • Age Verification: Using official records (e.g., identity documents) to verify a date of birth.
    • Pros: Provides the strongest, most definitive level of assurance.
    • Cons: Carries significant privacy risks and can exclude individuals who lack access to identity documents.
  • Age Estimation: Using biometrics (e.g., a selfie) to derive a probabilistic age estimate.
    • Pros: Delivers speed, convenience, and can be highly privacy-preserving (e.g., data is deleted immediately after the check).
    • Cons: It is “probabilistic,” not definitive, and therefore has error rates (e.g., incorrectly blocking 16-year-olds or approving 15-year-olds).
  • Age Inference: Using contextual or behavioural signals (e.g., account tenure, “likes,” engagement patterns, web history) to infer a user’s age.
    • Pros: Offers a “low-friction” check that is invisible to the user.
    • Cons: Risks embedding bias and, if used persistently, can lead to “digital profiling,” undermining user autonomy.
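
These trade-offs can be summarised in code. The sketch below is a minimal, illustrative data model; the class name and the labels are our own shorthand, not terminology defined by the AATT:

```python
from enum import Enum

class AgeAssuranceMethod(Enum):
    # (what the method examines, kind of result, burden on the user)
    VERIFICATION = ("official records", "definitive", "high friction")
    ESTIMATION = ("biometrics, e.g. a selfie", "probabilistic", "medium friction")
    INFERENCE = ("behavioural signals", "probabilistic", "no visible friction")

    def __init__(self, basis: str, assurance: str, friction: str) -> None:
        self.basis = basis          # what the method examines
        self.assurance = assurance  # definitive vs. probabilistic result
        self.friction = friction    # burden imposed on the user

# A platform weighing up checks might reason over these properties:
for method in AgeAssuranceMethod:
    print(f"{method.name}: uses {method.basis}; {method.assurance}; {method.friction}")
```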

Synthesising the eSafety guidance with the AATT findings reveals the only logical compliance model, often referred to as a “risk-based waterfall.” The eSafety Commissioner has stated it would be “unreasonable” to “reverify everyone’s age”. Therefore, platforms will seek to assure the age of Australia’s 20 million adult users with “little interruption”, reserving high-friction checks for high-risk cases.

This “Successive Validation” model will likely operate as follows:

  • Step 1 (Low-Friction Inference): The vast majority of adult users will be “passively” assured using Age Inference. Platforms will use software they already employ – originally developed for marketing – to analyse behavioural signals like engagement, content interaction, and account tenure to conclude they are over 16. This is the low-friction base layer.
  • Step 2 (Medium-Friction Estimation): If the inference model is uncertain, or if an account is flagged by reports from other users (a feature TikTok, for example, is developing), the user will be escalated to Age Estimation. They will be “pinged” and required to use an age assurance app, likely involving a biometric selfie analysis, to gain a probabilistic “over 16” check.
  • Step 3 (High-Friction Verification): If a user is blocked by Step 2 but contests the decision, they will be escalated to the final, high-friction layer: Age Verification. At this stage, they will be offered the option to prove their age with an identity document. This cannot be the only option; the Act explicitly forbids compelling users to use government-issued identification (including Digital ID), meaning a “reasonable alternative” must always be offered.

This “waterfall” is the only practical model that satisfies the eSafety Commissioner’s seemingly contradictory demands: be robust and effective, but do not rely on self-declaration, and do not force ID checks on all users.
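
In code, the waterfall reduces to a simple escalation chain. The sketch below is a minimal illustration under stated assumptions: the function names, thresholds, and stubbed scores are hypothetical, not values drawn from the guidance or the AATT report.

```python
# Hypothetical confidence thresholds; a real deployment would tune these
# against the measured error rates of each age assurance vendor.
INFERENCE_THRESHOLD = 0.95
ESTIMATION_THRESHOLD = 0.90

def infer_age_score(user_id: str) -> float:
    """Step 1: passive inference from behavioural signals such as
    account tenure and engagement patterns. Stubbed for illustration."""
    return 0.97  # placeholder score

def estimate_age_score(user_id: str) -> float:
    """Step 2: biometric age estimation (e.g. a selfie check),
    triggered only when inference is uncertain. Stubbed."""
    return 0.85  # placeholder score

def verify_with_document(user_id: str) -> bool:
    """Step 3: document-based verification, which must be offered
    alongside a reasonable alternative, since the Act forbids
    compelling users to present government-issued ID. Stubbed."""
    return True  # placeholder result

def assure_over_16(user_id: str) -> bool:
    # Step 1: low-friction inference, invisible to the user.
    if infer_age_score(user_id) >= INFERENCE_THRESHOLD:
        return True
    # Step 2: medium-friction estimation for uncertain or flagged accounts.
    if estimate_age_score(user_id) >= ESTIMATION_THRESHOLD:
        return True
    # Step 3: high-friction verification, only if the user contests a block.
    return verify_with_document(user_id)
```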

Co-Regulation, Data Protection, and Section 63F

Respective Roles of the eSafety Commissioner and the OAIC

The SMMA scheme is a dual-regulatory regime, with enforcement split between two powerful government bodies:

  • The eSafety Commissioner: As the primary regulator for the Online Safety Act 2021, the Commissioner is responsible for enforcing the “reasonable steps” obligation under Section 63D. The Commissioner monitors compliance, investigates platforms, and issues notices and penalties related to safety.
  • The Office of the Australian Information Commissioner (OAIC): The OAIC acts as the privacy co-regulator, specifically overseeing compliance with the new, stringent privacy and data-handling provisions introduced in Section 63F of Part 4A.

This dual-regulatory structure is deeply interconnected. The eSafety Commissioner’s guidance explicitly states that any steps taken by a platform “will not be ‘reasonable’ unless” the platform also complies with its privacy obligations under Part 4A and the Privacy Act 1988. This makes privacy compliance, as defined by the OAIC, a prerequisite for safety compliance.

The OAIC’s Privacy Guidance on Part 4A

To clarify these new, stricter obligations, the OAIC published its Privacy Guidance on Part 4A (Social Media Minimum Age) of the Online Safety Act 2021 in October 2025. This guidance mandates that platforms adopt a “Privacy by Design” approach and recommends they undertake Privacy Impact Assessments (PIAs) before implementing an age assurance method.

The guidance categorises the data being handled into three types:

  • “Inputs”: Information provided by the user (e.g., selfies, ID documents).
  • “Outputs”: The result of the check (e.g., a binary “yes/no” age token).
  • “Existing data”: Pre-existing metadata used for inference (e.g., account tenure).

The legislative core of these new privacy protections is Section 63F of the Online Safety Act 2021. This section was introduced to assuage public fears about data harvesting and is significantly stricter than existing Australian privacy law. It imposes two ironclad rules:

  • Purpose Limitation (S 63F(1)): An entity that holds personal information collected for SMMA compliance must not use or disclose that information for any other purpose. This explicitly forbids using age-check data for commercial purposes like advertising or profiling. This can only be bypassed with “voluntary, informed, current, specific and unambiguous consent” from the user for that secondary use.
  • Mandatory Destruction (S 63F(3)): An entity must destroy the personal information “after using or disclosing it for the purposes for which it was collected”.

The OAIC’s guidance confirms that this destruction mandate is more stringent than the existing Australian Privacy Principle (APP) 11.2. APP 11.2 permits de-identification or retention for ancillary business needs. Section 63F, by contrast, requires the total destruction of “inputs” (like selfies or ID scans) and “outputs” (the age token).
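
To make the mandate concrete, here is a minimal sketch of an age check that destroys both the “input” (a selfie) and the “output” (the result token) as soon as the compliance purpose is served. All names are invented for the example; the destruction step is the point.

```python
from dataclasses import dataclass

@dataclass
class AgeCheckData:
    selfie: bytes | None          # "input": biometric sample from the user
    over_16: bool | None = None   # "output": binary result of the check

def estimate_over_16(selfie: bytes) -> bool:
    # Placeholder for a privacy-preserving biometric estimation call.
    return len(selfie) > 0

def destroy(data: AgeCheckData) -> None:
    # s 63F(3) requires total destruction: unlike APP 11.2, neither
    # de-identification nor retention for business needs suffices.
    data.selfie = None
    data.over_16 = None

def run_age_check(selfie: bytes) -> bool:
    data = AgeCheckData(selfie=selfie)
    data.over_16 = estimate_over_16(data.selfie)  # the sole permitted use
    result = data.over_16
    destroy(data)  # destroy immediately after the purpose is fulfilled
    return result
```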

This stringent mandate was a political “kill switch”. It was designed to counter widespread criticism that the law would force all Australians to hand over sensitive ID, creating a “honeypot” for data breaches. Reports from the AATT found “concerning evidence” that some vendors were already “over-anticipating” regulatory needs and retaining data unnecessarily. Section 63F was Parliament’s solution: an ironclad guarantee that data must be destroyed. 

A compliance trap for platforms?

In its submission to the Parliamentary inquiry, TikTok argued that the Act creates a “compliance loophole” that directly pits the Act’s safety and privacy goals against each other. The loophole functions as follows:

  • An individual attempts to register and provides a date of birth indicating they are under 16.
  • The platform, to comply with Section 63D (“reasonable steps”), blocks the registration.
  • Now, Section 63F(3) (the “destruction mandate”) requires the platform to “delete the information about that individual’s self-declared age” because its purpose (the check) is complete.
  • The same individual then attempts to register again, using the same email address, but this time provides a false date of birth (e.g., 21 years old).
  • Because the platform was legally required by Section 63F(3) to delete the record of the first attempt, it “does not have the historical information available to question the accuracy of the information being provided”.

This scenario potentially creates a “dual penalty trap” for platforms. The privacy provision (S 63F) directly undermines the safety provision (S 63D) and the eSafety Commissioner’s explicit guidance that platforms must prevent the “re-registration” of blocked users.
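
A stripped-down sketch makes the trap visible. The registration logic below is hypothetical; the commented-out line marks exactly the record that Section 63F(3) forbids the platform from keeping:

```python
from datetime import date

MIN_AGE = 16

def age_on(dob: date, today: date) -> int:
    # Whole years elapsed between dob and today.
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)

def attempt_registration(email: str, claimed_dob: date, today: date) -> bool:
    if age_on(claimed_dob, today) < MIN_AGE:
        # s 63D: block the under-16 registration.
        # blocked_emails.add(email)  # <- forbidden: s 63F(3) requires the
        # self-declared age data to be destroyed once the check is done,
        # so no record of this failed attempt may persist.
        return False
    return True

today = date(2025, 12, 10)
# Attempt 1: truthful DOB (age 14) -> blocked, and the record destroyed.
print(attempt_registration("kid@example.com", date(2011, 5, 1), today))  # False
# Attempt 2: same email, false DOB (claiming 21) -> accepted, because no
# trace of the first attempt survives to contradict the new claim.
print(attempt_registration("kid@example.com", date(2004, 5, 1), today))  # True
```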

Parliament did not accept this argument during the debate on passage of the Act, but it remains an issue to watch.

Broader Human Rights and Privacy Concerns 

The biggest legal threat to the scheme is that it is, as YouTube alleges, a breach of Australia’s implied freedom of political communication. If so, it is open to legal challenge, and such a challenge would be an important test case. Given Google’s legal and financial resources, it would be well placed to mount a challenge (though not necessarily to win it).

Closer to home, the Australian Human Rights Commission (AHRC) has expressed “serious reservations” about the entire scheme. The AHRC critiques the law as a “blunt instrument” that disproportionately interferes with the fundamental rights of children and young people, including:

  • Freedom of Expression and Access to Information (Article 13, Convention on the Rights of the Child).
  • Freedom of Association and Peaceful Assembly (Article 15, CRC).

Furthermore, the AHRC notes that platforms will inevitably be forced to “assure” the age of all users, not just children, creating systemic privacy risks for every Australian. 

But the rights afforded to children are qualified by their relative immaturity – a principle well established in common law and legislation – and this makes the Commission’s claims arguable at best. And the stringent rule around data destruction arguably addresses the privacy issue.

The Commission’s sentiment is nonetheless shared by platform operators like Snap, which stated it will comply with the law but must “fundamentally disagree with it”, and by many free speech advocates. It remains to be seen whether a legal challenge can bring the scheme down or modify it significantly.

Enforcement, Penalties, and Public Awareness

Civil Penalties and the Evidentiary Burden

As established, the law is enforced by two co-regulators: the eSafety Commissioner for the “reasonable steps” safety obligation and the OAIC for the Section 63F privacy obligations.

A critical component of the enforcement regime is that the Act places the “evidential burden” on the platform provider. In any civil penalty proceeding, the platform bears the burden of proving that it did take reasonable steps, or, alternatively, that no reasonable steps could have been taken to comply. 

This is a high legal threshold for platforms to meet and makes comprehensive compliance documentation essential. The eSafety Commissioner is equipped with information-gathering powers to monitor compliance and can issue formal notices regarding non-compliance.

The penalties for non-compliance are severe, reflecting the government’s political will and providing a significant deterrent.

  • Failure on “Reasonable Steps” (eSafety): A breach of the Section 63D obligation by a corporation carries a maximum civil penalty of 150,000 penalty units. This figure is consistently cited by the government and in legal analyses as equivalent to AUD $49.5 million.
  • Failure on Privacy (OAIC): A breach of Section 63F (either the purpose limitation or the destruction mandate) is “taken to be… an interference with the privacy of the individual for the purposes of the Privacy Act 1988”. This exposes the platform to the Privacy Act’s own penalty scheme, which carries maximum fines of up to AUD $50 million.
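
For reference, the AUD figure for the eSafety penalty follows from the value of a Commonwealth penalty unit, which has been $330 since 7 November 2024 (the amount is periodically indexed, so the dollar equivalent may change over time): 150,000 units × $330 = AUD $49.5 million.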

These parallel penalties, each approaching AUD $50 million, confirm the high-stakes nature of the “dual penalty trap”. Platforms must navigate the contradictory demands of Section 63D and Section 63F, knowing that a compliance failure in either direction carries catastrophic financial risk.

Awareness Campaign

To manage the public rollout and prepare families for the 10 December 2025 change, the government has launched a major public awareness campaign titled “For The Good Of”.

  • Timeline: The campaign launched on Sunday, 19 October 2025.
  • Channels: It is being aired across television, radio, billboards, and on social media platforms themselves.
  • Key Messages: The campaign’s messaging is designed to frame the law in a positive, protective light:
    • Framing: It is a “delay, not a ban”.
    • Rationale: It is for the “good of” young people, protecting their mental health and development and allowing them to “be kids”.
    • Responsibility: The campaign emphasises that the onus is on platforms to comply, not on children or parents. There are no penalties for users.
  • Resources: The campaign directs the public to a central information hub, esafety.gov.au/for-the-good-of, which hosts digital information kits, fact sheets, and community resources translated into multiple languages.

Concluding Analysis and Recommendations for Affected Entities

This final section provides a high-level summary of the critical challenges for platforms subject to the SMMA scheme and offers a forward-looking perspective on the law’s future.

Key Compliance Challenges and Legal Ambiguities

Affected entities must navigate a complex, expensive, and legally contradictory regulatory environment. The primary challenges are:

  • Definitional Ambiguity: The “significant purpose” (Act) vs. “primary purpose” (Rules) test creates a legal grey area. Platforms in the “social gaming” (e.g., Roblox) and “community messaging” (e.g., Discord) spaces face significant uncertainty, as they are arguably “exempt” under one standard but “covered” under the other.
  • Technical and Financial Burden: The eSafety Commissioner’s rejection of “self-declaration alone” imposes a substantial new technical and financial cost. Platforms must design, implement, and maintain a complex, AI-driven “waterfall” system that blends inference, estimation, and verification technologies.
  • The “Privacy Paradox”: The legislative contradiction between the safety mandate (Section 63D) and the data destruction mandate (Section 63F) must be carefully managed to avoid penalties approaching AUD $50 million under either regime.

Demonstrating “Reasonable Steps”

Given the evidential burden and the legal ambiguities, a platform’s primary goal must be to build a defensible compliance regime:

  • Embrace the Guidance as Law: The eSafety Commissioner’s Social Media Minimum Age Regulatory Guidance must be treated as the primary compliance text. Platforms must build a “layered” or “waterfall” model and must be able to demonstrate that they do not rely on “self-declaration alone”.
  • Document All Compliance Efforts: Because the “evidential burden” rests with the platform, meticulous documentation is necessary. This includes all technical assessments of AATT-trialled technologies, the rationale for the design of the “waterfall” model, and records of ongoing monitoring and system improvements.
  • Mandate Privacy Impact Assessments (PIAs): The OAIC’s guidance recommends conducting a PIA. In this high-risk environment, this should be considered non-negotiable. A PIA is essential to demonstrate a “privacy by design” approach and to formally document the platform’s strategy for navigating the Section 63F data destruction requirements.
  • Seek Joint Regulatory Clarity: The Section 63F contradiction is a legislative flaw. Platforms should, individually or as an industry, actively engage both the eSafety Commissioner and the OAIC to seek joint, public guidance or a safe-harbour provision that resolves this “dual penalty trap.”

Next Steps: The Mandated Two-Year Legislative Review

The 2024 Act inserts Section 239B into the Online Safety Act 2021, which mandates an independent review of the operation of Part 4A. This review must be completed within two years of the enforcement date (i.e., by 10 December 2027), with a report tabled in Parliament.

This mandated review is a tacit acknowledgment by Parliament that this “world-first” law is a regulatory experiment. The 2025–2027 enforcement period will effectively serve as a “live trial.” All stakeholders will use this period to gather evidence. Platforms will undoubtedly document every instance of the Section 63F loophole to prove the law is unworkable as written. Simultaneously, the eSafety Commissioner will gather data on circumvention rates, and privacy advocates and the AHRC will gather evidence on the law’s privacy impact on all Australians.

The current law is, therefore, “Version 1.0.” The 2027 review is the true battleground where “Version 2.0” will be fought over. The primary strategic objective for affected operators is to manage risk in the 2025–2027 period while collecting the data and evidence needed to shape the subsequent review and any changes that follow.

About Venture Insights

Venture Insights is an independent company providing research services to companies across the media, telco and tech sectors in Australia, New Zealand, and Europe.

For more information go to ventureinsights.com.au or contact us at contact@ventureinsights.com.au.