The social media ban just changed what it’s actually for — and almost nobody noticed

I’ve been tracking Australia’s social media regulation landscape for a long time. Not just since the ban passed in November 2024 — but through the age assurance technology trials, the industry code consultations, the evidence debates, the Summit that wasn’t really a Summit. Every few months something happens that brings the public conversation back to this space. This week was one of those moments. But what landed in the news cycle wasn’t the most important thing that happened. So I want to explain what was.

Image: a girl using a phone, overlaid with a digital casino slot machine (generated with AI).

What everyone is talking about

This week, eSafety published its first compliance report on Australia’s Social Media Minimum Age obligation. Five platforms — Facebook, Instagram, Snapchat, TikTok and YouTube — are under investigation for potential non-compliance. The Commissioner is moving into an enforcement stance. Fines of up to $49.5 million are on the table.

That’s the story most outlets covered. It’s a real story. But it’s the surface.

What happened underneath

Six days before that report landed, the Minister for Communications quietly registered a new legislative instrument — the Online Safety (Age-Restricted Social Media Platforms) Amendment Rules 2026 (F2026L00370, 25 March 2026) — that adds two new conditions to the definition of an age-restricted social media platform. To fall under the ban, a platform must now also have either or both of:

  • A recommender feature: algorithms that select and display content based on a user’s account information
  • A logged-in feature: endless-feed features, feedback features such as likes and upvotes, or time-limited features such as disappearing stories

In plain language: infinite scroll, algorithmic recommendation, and social feedback loops are now formally written into the legal definition of what makes a platform harmful to children.
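
For readers who find it easier to see the structure in code, here is a minimal sketch of how the amended test composes. It is an illustration only: the field names are mine, and the summary of the Act's pre-existing conditions is a paraphrase, not the instrument's drafting.

    # Illustrative sketch only: field names and the paraphrase of the Act's
    # existing conditions are mine, not the legislation's drafting.
    from dataclasses import dataclass

    @dataclass
    class Platform:
        purpose_is_social_interaction: bool  # existing condition (paraphrased)
        users_can_post_and_interact: bool    # existing condition (paraphrased)
        has_recommender_feature: bool        # new: content selected from account information
        has_logged_in_feature: bool          # new: endless feed, likes/upvotes, disappearing stories

    def is_age_restricted(p: Platform) -> bool:
        meets_existing_definition = (
            p.purpose_is_social_interaction and p.users_can_post_and_interact
        )
        # The March 2026 amendment adds a design test: either or both new features.
        meets_design_test = p.has_recommender_feature or p.has_logged_in_feature
        return meets_existing_definition and meets_design_test

The point is the shape of the test: the new conditions narrow the definition to platforms built around these design features, rather than social media in general.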

This attracted almost no media coverage. It should have. Because it signals something fundamental — the intellectual foundation of the ban has quietly shifted.

Two trials that influence everything

To understand why this matters, you need to know what else happened this week.

On 24 March, a New Mexico jury found Meta had violated state consumer protection law — finding 75,000 individual violations and ordering $375 million in penalties. The case arose from an undercover operation in which investigators created accounts posing as users under 14, who then received explicit material and were contacted by adults seeking similar content. The jury found Meta knowingly engaged in unfair and deceptive trade practices and exploited users’ lack of knowledge. A second phase in May will consider ordering Meta to change its platforms.

Then, in the same week, a Los Angeles jury found Meta and YouTube liable in a landmark addiction case. The plaintiff — now 20 — began using YouTube at six and Instagram at nine. The jury found that design choices including infinite scroll were made deliberately to maximise engagement in developing brains, borrowing from the behavioural techniques of poker machines and the cigarette industry. Meta was found 70% responsible, Google 30%. TikTok and Snap settled before the trial began.

Two separate juries. Two separate legal theories. Two separate verdicts. Both pointing at the same thing: these platforms were designed to exploit users, and the companies knew it.

The Australian legislative instrument and the US jury verdicts are, in effect, saying the same thing in the same week.

This is a design problem. The harm is in the architecture.

Why this matters for the ban

The Australian social media ban was built on a different argument entirely. It was passed on a mental health narrative — driven substantially by Jonathan Haidt’s Anxious Generation thesis that social media is the primary cause of the youth mental health crisis. That causal claim was already being contested in the peer-reviewed literature at the time of enactment.

I know this because in May 2025, my colleagues and I published analysis in The Conversation predicting exactly the compliance failures eSafety has now confirmed — and we were drawing on a literature that had been raising these concerns for years.

Most recently, a major longitudinal study published in the Journal of Public Health this month — Cheng et al., following 25,629 adolescents across three years — found no evidence that social media use predicted later anxiety or depression in either girls or boys. That is among the strongest findings the literature has produced on this question.

And yet eSafety is escalating enforcement of a ban whose foundational causal claim remains unestablished. That is a significant governance concern.

But here is what the March 2026 rule changes: by writing recommender algorithms and endless-feed features into the legal definition, the Minister has effectively acknowledged that the mental health narrative was never quite the right framing. The harm is in the design — the deliberate engineering of compulsive use. Arguably, that causal claim no longer needs to carry the full weight of the ban’s legitimacy. The government has moved on from it. Without saying so.

eSafety’s own data confirms the point

If design is the problem and accounts are merely the delivery mechanism, we would expect the harm measures to be unchanged by an accounts-based ban. That is exactly what the compliance report shows.

Buried on page 15, in the complaints section: there has been no discernible drop in cyberbullying and image-based abuse complaints from children under 16 in January and February 2026 compared to the same period in 2025.

That is the direct harm measure. The one the ban was designed to move. It hasn’t moved.

Because the harm is in the design. And the design hasn’t changed.

The legislation that should have been passed

Here is where I get genuinely frustrated. And I think the public should too.

Four days before the social media ban passed through parliament — a bill rushed through in 48 hours, with a 24-hour public submission period, in the last sitting week before a federal election — independent Member for Goldstein Zoe Daniel introduced the Online Safety Amendment (Digital Duty of Care) Bill 2024.

I have been watching this space for long enough to recognise good policy design when I see it. Daniel’s bill was good policy design.

It required large platforms to conduct and publish risk assessments of their recommender systems and algorithmic systems specifically. It required risk mitigation plans that included changing design features, testing algorithmic systems, and modifying recommender systems. It required annual transparency reports covering design features and children’s access metrics. It gave researchers access to platform data — something academics working in this space have sought for years. It allowed users to opt out of engagement-based recommender systems and targeted advertising. It made key personnel personally liable for failures.

And it set penalties proportionate to revenue: the greater of 100,000 penalty units or 10% of annual turnover. For Meta globally that figure would be in the billions. For TikTok Australia — with revenue of $679 million in 2024 — it would be approximately $68 million. Compare that to the ban’s flat cap of $49.5 million, which represents roughly four weeks of TikTok’s local revenue. As I’ve said publicly: for the largest companies, the calculation is not whether to comply but whether the cost of genuine compliance exceeds the cost of the fine.
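
For a sense of scale, here is a rough back-of-the-envelope comparison of the two regimes. The only figure not quoted above is the Commonwealth penalty unit value, which I have assumed at $330 (the rate from late 2024).

    # Rough comparison of the two penalty regimes, using the figures above.
    # The penalty unit value is an assumption ($330, the Commonwealth rate
    # from late 2024); the revenue figure is the one quoted in the text.
    PENALTY_UNIT_AUD = 330
    tiktok_au_revenue_2024 = 679_000_000  # AUD

    # Daniel's bill: the greater of 100,000 penalty units or 10% of turnover
    duty_of_care_max = max(100_000 * PENALTY_UNIT_AUD, 0.10 * tiktok_au_revenue_2024)

    # The ban's flat cap
    ban_flat_cap = 49_500_000

    print(f"Duty of Care maximum: ${duty_of_care_max:,.0f}")      # ~$67,900,000
    print(f"Ban cap in weeks of TikTok AU revenue: "
          f"{ban_flat_cap / (tiktok_au_revenue_2024 / 52):.1f}")  # ~3.8 weeks

On those numbers, the ban’s cap is less than a month of TikTok’s local revenue, while the lapsed bill’s formula scales with the size of the company.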

Daniel’s bill lapsed at dissolution on 28 March 2025 when the federal election was called. She lost her seat in Goldstein.

What the political record shows

The ban that passed instead was never really about the evidence. Academic researcher Amanda Third’s chapter in The Public Child (Palgrave, 2025), drawing on FOI correspondence between the South Australian Premier’s office and Jonathan Haidt, documents that the Social Media Summit — jointly hosted by the SA and NSW Premiers in October 2024 — was explicitly designed to “build momentum and support for national legislation to enforce a minimum age for access to social media.” Not to gather evidence. Not to deliberate. To build political momentum for a decision already made.

The eSafety Commissioner, meanwhile, repeatedly declined to endorse the proposal, pointing instead to the suite of design-focused regulatory work already underway — including the very framework that Daniel’s bill would have legislated.

The ban passed. Daniel’s bill lapsed. And now, fifteen months later, the government has quietly written two of Daniel’s core concepts — recommender features and endless-feed features — into a ministerial instrument, without the transparency requirements, without the proportionate penalties, without researcher data access, without personal liability for executives, and without any public acknowledgment of what it is doing.

The Duty of Care that’s still waiting

There is one more piece to this picture. The government completed consultation on a Digital Duty of Care in December 2025 — three days before the ban took effect. That consultation closed. The legislation has not been introduced.

The Duty of Care is the instrument that would actually address the design harm problem. It would require platforms to take reasonable steps to prevent foreseeable harms, shifting responsibility from individuals to platforms. It is the instrument the Commissioner’s regulatory work was always pointing toward.

It is sitting unintroduced while the accounts-based ban is being enforced.

The consequences nobody planned for

Guardian Australia’s technology reporter Josh Taylor has documented several unintended consequences of the ban that reinforce the design argument. Most striking: teenagers who have managed to bypass age checks are no longer given the safety features platforms built specifically for teen accounts — because their account now appears to belong to an adult.

The ban has inadvertently stripped the most vulnerable users of the very protections designed for them. Taylor also revealed that the federal government’s anti-vaping campaign targeting teenagers had to be diverted away from the banned social media platforms to gaming and audio platforms — on the same day research found vaping could cause cancer. These are not teething problems. They are structural consequences of an accounts-based approach that doesn’t touch the underlying architecture.

What this means for children

I want to be clear about something. I am not saying the ban is simply wrong. Children have been exposed to genuine harms on these platforms — harms that two US juries have now confirmed the companies knew about and chose not to adequately address.

But children also have digital rights — to participate, access information, connect, learn and create. Australia has ratified the UN Convention on the Rights of the Child, and the Committee’s General Comment No. 25 affirms those rights explicitly in digital environments.

The slot machine architecture of social media is a genuine harm to children. The evidence — now including two jury verdicts and a growing body of peer-reviewed research — supports that framing. But children who turn 16 tomorrow will walk from total exclusion into unrestricted access to the same unreformed platforms, with no graduated pathway, no enhanced digital literacy, and no legal requirement on platforms to have changed the design features that caused the harm in the first place.

The ban delayed the exposure. It did not address the cause.

The week everything converged

In the same week: a legislative rule acknowledged design harm. Two US juries found liability for platform design and content failures. A compliance report showed the harm measure hasn’t moved. And a major peer-reviewed study confirmed the mental health causal claim the ban was built on remains unestablished.

The intellectual foundation of the ban has shifted — from an unproven mental health argument to a design harm argument the evidence actually supports. That shift is real and it matters.

But the instrument that would have acted on it lapsed when parliament was dissolved for an election the ban was designed to win, and its sponsor lost her seat.

I’ve been watching this space for a long time. This week, everything that was always true about it became undeniable. I hope the public — and policymakers — are paying attention.

