The social media ban just changed what it’s actually for — and almost nobody noticed

I’ve been tracking Australia’s social media regulation landscape for a long time. Not just since the ban passed in November 2024 — but through the age assurance technology trials, the industry code consultations, the evidence debates, the Summit that wasn’t really a Summit. Every few months something happens that brings the public conversation back to this space. This week was one of those moments. But what landed in the news cycle wasn’t the most important thing that happened. So I want to explain what was.

Image: AI-generated illustration of a girl using a phone that displays casino slot machine imagery (reels, “SPIN”, “777”).

What everyone is talking about

This week, eSafety published its first compliance report on Australia’s Social Media Minimum Age obligation. Five platforms — Facebook, Instagram, Snapchat, TikTok and YouTube — are under investigation for potential non-compliance. The Commissioner is moving into an enforcement stance. Fines of up to $49.5 million are on the table.

That’s the story most outlets covered. It’s a real story. But it’s the surface.

What happened underneath

Six days before that report landed, the Minister for Communications quietly registered a new legislative instrument — the Online Safety (Age-Restricted Social Media Platforms) Amendment Rules 2026 (F2026L00370, 25 March 2026) — that adds two new conditions to the definition of an age-restricted social media platform. To fall under the ban, a platform must now also have either or both of:

  • A recommender feature: algorithms that select and display content based on a user’s account information
  • A logged-in feature: endless-feed features, feedback features such as likes and upvotes, or time-limited features such as disappearing stories

In plain language: infinite scroll, algorithmic recommendation, and social feedback loops are now formally written into the legal definition of what makes a platform harmful to children.
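
For the technically minded, here is a minimal sketch of how the amended definitional test composes. It is purely illustrative, with hypothetical field names; the instrument itself is drafted in legal language, not code.

```python
# Illustrative sketch only. Field names are hypothetical, not drawn from
# the instrument's drafting; this just shows how the conditions compose.
from dataclasses import dataclass

@dataclass
class Platform:
    personalises_from_account_info: bool  # recommender feature
    has_endless_feed: bool                # e.g. infinite scroll
    has_feedback_features: bool           # e.g. likes, upvotes
    has_time_limited_posts: bool          # e.g. disappearing stories

def has_recommender_feature(p: Platform) -> bool:
    # Algorithms that select and display content based on account information
    return p.personalises_from_account_info

def has_logged_in_feature(p: Platform) -> bool:
    # Endless-feed, feedback, or time-limited features
    return (p.has_endless_feed
            or p.has_feedback_features
            or p.has_time_limited_posts)

def meets_new_conditions(p: Platform) -> bool:
    # "Either or both" of the two new conditions, on top of the
    # existing age-restricted social media platform definition
    return has_recommender_feature(p) or has_logged_in_feature(p)
```

Note how broad the disjunction is: any one of those features, on top of the existing definition, is enough to bring a platform within scope.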

The rule change attracted almost no media coverage. It should have. Because it signals something fundamental — the intellectual foundation of the ban has quietly shifted.

Two trials that change everything

To understand why this matters, you need to know what else happened this week.

On 24 March, a New Mexico jury found Meta had violated state consumer protection law — finding 75,000 individual violations and ordering $375 million in penalties. The case arose from an undercover operation in which investigators created accounts posing as users under 14, who then received explicit material and were contacted by adults seeking similar content. The jury found Meta knowingly engaged in unfair and deceptive trade practices and exploited users’ lack of knowledge. A second phase in May will consider ordering Meta to change its platforms.

Then, in the same week, a Los Angeles jury found Meta and YouTube liable in a landmark addiction case. The plaintiff — now 20 — began using YouTube at six and Instagram at nine. The jury found that design choices including infinite scroll were made deliberately to maximise engagement in developing brains, borrowing from the behavioural techniques of poker machines and the cigarette industry. Meta was found 70% responsible, Google 30%. TikTok and Snap settled before the trial began.

Two separate juries. Two separate legal theories. Two separate verdicts. Both pointing at the same thing: these platforms were designed to exploit users, and the companies knew it.

The Australian legislative instrument and the US jury verdicts are, in effect, saying the same thing in the same week.

[Edit: A reader pointed out that jury verdicts don’t validate scientific arguments — juries are susceptible to emotional reasoning and the history of problematic jury decisions is long. It’s a fair prompt to be more precise. What I’m claiming is not that the verdicts prove harm science, but that litigation processes do give us access to internal corporate documents not otherwise visible in the public record — evidence of deliberate design intent. Meta’s own internal communications compared their platform’s effects to pushing drugs and gambling. A YouTube memo reportedly described “viewer addiction” as a goal. For a detailed legal analysis of how these documents functioned as evidence of corporate knowledge, see this USF Law Center piece. That is a claim about corporate conduct, not about clinical addiction or peer-reviewed harm science.

What the verdicts do represent is a significant socio-temporal indicator — a signal that public opinion and legal culture are shifting around platform accountability. Whatever their scientific limitations, two juries in the same week finding against Meta and YouTube on design harm grounds is a cultural and legal moment worth marking. The direction of travel matters, even if the science hasn’t fully caught up.]

This is a design problem. The harm is in the architecture.

Why this matters for the ban

The Australian social media ban was built on a different argument entirely. It was passed on a mental health narrative — driven substantially by Jonathan Haidt’s Anxious Generation thesis that social media is the primary cause of the youth mental health crisis. That causal claim was already being contested in the peer-reviewed literature at the time of enactment.

I know this because in May 2025, my colleagues and I published analysis in The Conversation predicting exactly the compliance failures eSafety has now confirmed — and we were drawing on a literature that had been raising these concerns for years.

Most recently, a major longitudinal study published in the Journal of Public Health this month — Cheng et al., following 25,629 adolescents across three years — found no evidence that social media use predicted later anxiety or depression in either girls or boys. That is among the strongest findings the literature has produced on this question.

And yet eSafety is escalating enforcement of a ban whose foundational causal claim remains unestablished. That is a significant governance concern.

But here is what the March 2026 rule changes: by writing recommender algorithms and endless-feed features into the legal definition, the Minister has effectively acknowledged that the mental health narrative was never quite the right framing. The harm is in the design — the deliberate engineering of compulsive use. Arguably, that causal claim no longer needs to carry the full weight of the ban’s legitimacy. The government has moved on from it. Without saying so.

eSafety’s own data confirms the point

If design is the problem and accounts are merely the delivery mechanism, we would expect the harm measures to be unchanged by an accounts-based ban. That is exactly what the compliance report shows.

Buried on page 15, in the complaints section: there has been no discernible drop in cyberbullying and image-based abuse complaints from children under 16 in January and February 2026 compared to the same period in 2025.

That is the direct harm measure. The one the ban was designed to move. It hasn’t moved.

Because the harm is in the design. And the design hasn’t changed.

The legislation that should have been passed

Here is where I get genuinely frustrated. And I think the public should too.

Four days before the social media ban passed through parliament — in 48 hours, with a 24-hour public submission period, in the last sitting week before a federal election — independent Member for Goldstein Zoe Daniel introduced the Online Safety Amendment (Digital Duty of Care) Bill 2024.

I have been watching this space for long enough to recognise good policy design when I see it. Daniel’s bill was good policy design.

It required large platforms to conduct and publish risk assessments of their recommender systems and algorithmic systems specifically. It required risk mitigation plans that included changing design features, testing algorithmic systems, and modifying recommender systems. It required annual transparency reports covering design features and children’s access metrics. It gave researchers access to platform data — something academics working in this space have sought for years. It allowed users to opt out of engagement-based recommender systems and targeted advertising. It made key personnel personally liable for failures.

And it set penalties proportionate to revenue: the greater of 100,000 penalty units or 10% of annual turnover. For Meta globally that figure would be in the billions. For TikTok Australia — with revenue of $679 million in 2024 — it would be approximately $68 million. Compare that to the ban’s flat cap of $49.5 million, which represents roughly four weeks of TikTok’s local revenue. As I’ve said publicly: for the largest companies, the calculation is not whether to comply but whether the cost of genuine compliance exceeds the cost of the fine.
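
To make the comparison concrete, here is the back-of-envelope arithmetic using the figures cited above. The $330 value for a Commonwealth penalty unit is my assumption (the 2024–25 rate), not something stated in the bill.

```python
# Back-of-envelope comparison of the two penalty regimes, using the
# figures cited above. The $330 penalty-unit value is an assumption
# (the 2024-25 Commonwealth rate), not something stated in the bill.

PENALTY_UNIT = 330             # dollars per Commonwealth penalty unit (assumed)
tiktok_au_revenue = 679e6      # TikTok Australia revenue, 2024 (cited above)

# Daniel's bill: the greater of 100,000 penalty units or 10% of turnover
duty_of_care_cap = max(100_000 * PENALTY_UNIT, 0.10 * tiktok_au_revenue)
print(f"Duty of Care cap for TikTok AU: ${duty_of_care_cap / 1e6:.1f}m")  # ~$67.9m

# The ban's flat cap, expressed as weeks of TikTok's local revenue
ban_cap = 49.5e6
weeks_of_revenue = ban_cap / (tiktok_au_revenue / 52)
print(f"Ban cap: ${ban_cap / 1e6:.1f}m = {weeks_of_revenue:.1f} weeks of revenue")  # ~3.8
```

On these figures the proportionate regime yields a cap roughly a third higher than the flat $49.5 million for TikTok alone, and it scales into the billions for Meta.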

Daniel’s bill lapsed at dissolution on 28 March 2025 when the federal election was called. She lost her seat in Goldstein.

What the political record shows

The ban that passed instead was never really about the evidence. Academic researcher Amanda Third’s chapter in The Public Child (Palgrave, 2025), drawing on FOI correspondence between the South Australian Premier’s office and Jonathan Haidt, documents that the Social Media Summit — jointly hosted by the SA and NSW Premiers in October 2024 — was explicitly designed to “build momentum and support for national legislation to enforce a minimum age for access to social media.” Not to gather evidence. Not to deliberate. To build political momentum for a decision already made.

The eSafety Commissioner, meanwhile, repeatedly declined to endorse the proposal, pointing instead to the suite of design-focused regulatory work already underway — including the very framework that Daniel’s bill would have legislated.

The ban passed. Daniel’s bill lapsed. And now, fifteen months later, the government has quietly written two of Daniel’s core concepts — recommender features and endless-feed features — into a ministerial instrument, without the transparency requirements, without the proportionate penalties, without researcher data access, without personal liability for executives, and without any public acknowledgment of what it is doing.

The Duty of Care that’s still waiting

There is one more piece to this picture. The government completed consultation on a Digital Duty of Care in December 2025 — three days before the ban took effect. That consultation closed. The legislation has not been introduced.

The Duty of Care is the instrument that would actually address the design harm problem. It would require platforms to take reasonable steps to prevent foreseeable harms, shifting responsibility from individuals to platforms. It is the instrument the Commissioner’s regulatory work was always pointing toward.

It is sitting unintroduced while the accounts-based ban is being enforced.

The unintended consequences nobody planned for

Guardian Australia’s technology reporter Josh Taylor has documented several unintended consequences of the ban that reinforce the design argument. Most striking: teenagers who have managed to bypass age checks are no longer given the safety features platforms built specifically for teen accounts — because their account now appears to belong to an adult.

The ban has inadvertently stripped the most vulnerable users of the very protections designed for them. Taylor also revealed that the federal government’s anti-vaping campaign targeting teenagers had to be diverted away from the banned social media platforms to gaming and audio platforms — on the same day research found vaping could cause cancer. These are not teething problems. They are structural consequences of an accounts-based approach that doesn’t touch the underlying architecture.

What this means for children

I want to be clear about something. I am not saying the ban is simply wrong. Children have been exposed to genuine harms on these platforms — harms that two US juries have now confirmed the companies knew about and chose not to adequately address.

But children also have digital rights — to participate, access information, connect, learn and create. The UN Convention on the Rights of the Child, to which Australia is a signatory, affirms those rights explicitly in digital environments.

The slot machine architecture of social media is a genuine harm to children. The evidence — now including two jury verdicts and a growing body of peer-reviewed research — supports that framing. But children who turn 16 tomorrow will walk from total exclusion into unrestricted access to the same unreformed platforms, with no graduated pathway, no enhanced digital literacy, and no legal requirement on platforms to have changed the design features that caused the harm in the first place.

The ban delayed the exposure. It did not address the cause.

The week everything converged

In the same week: a legislative rule acknowledged design harm. Two US juries found liability for platform design and content failures. A compliance report showed the harm measure hasn’t moved. And a major peer-reviewed study confirmed the mental health causal claim the ban was built on remains unestablished.

The intellectual foundation of the ban has shifted — from an unproven mental health argument to a design harm argument the evidence actually supports. That shift is real and it matters.

But the instrument that would have acted on it died when its sponsor lost her seat in an election the ban was designed to win.

I’ve been watching this space for a long time. This week, everything that was always true about it became undeniable. I hope the public — and policymakers — are paying attention.


Somebody to Love: What AI Relationships Reveal About Us

It’s late. Maybe 11pm, maybe 2am. There’s something on your mind — something you can’t quite say out loud to anyone who knows you. So you pick up your phone. And you type it. Not to a friend. To an AI.

Something responds. Immediately. Without judgment. Without needing anything back from you.

For a lot of people, in that moment, that feels like relief.

I’m a sociologist of technology. I study how people navigate digital frontiers — how humans and technologies shape each other over time. And the question I keep returning to isn’t the one dominating the headlines about AI companions. It’s simpler, and harder: what is it giving you that you’re not getting elsewhere?

The scale of what’s happening

AI companion apps — platforms like Character.AI, Replika, and others designed to provide friendship, emotional support, or romantic companionship — have moved quickly from novelty to mainstream. Early US survey data, while varying in methodology, is beginning to suggest that somewhere between one in five and one in four American adults report some form of intimate or romantic engagement with an AI companion. These are early figures from a rapidly evolving field, but the direction is clear: this is not a fringe phenomenon.

In Australia, the picture is coming into focus for children specifically. This week, Australia’s eSafety Commissioner released findings from a transparency investigation into four AI companion services popular with Australian children — Character.AI, Nomi, Chai, and Chub AI. Their survey of 1,950 Australian children aged 10 to 17, designed to be demographically representative, found that around 79% had used an AI companion or assistant. It’s worth noting that this figure reflects children who are digitally included enough to access these services — we’ll return to that complexity.

What the investigation found in those platforms is sobering. Most did not refer users to crisis support when self-harm or suicide came up in conversations. Two of the four companies had no dedicated trust and safety staff at all. None had robust age verification. One company withdrew from Australia entirely rather than comply with the new Age-Restricted Material Codes that came into law in March 2026.

But I want to sit with a different question before we reach for regulatory responses. Because the children going to these platforms aren’t doing so because they’re naive. They’re doing so because something is drawing them there. And understanding what that something is matters more than we’ve so far acknowledged.

What we are hungry for

A 2025 systematic review published in Computers in Human Behavior Reports synthesised 23 studies on romantic and intimate AI relationships (Ho et al., 2025). Using Sternberg’s Triangular Theory of Love — the psychological framework that measures intimacy, passion, and commitment in human relationships — the researchers found that people experience all three components with AI companions. This isn’t pretend attachment. The brain chemistry doesn’t distinguish.

What are people actually looking for in these interactions? The research points to several distinct and deeply human hungers.

To be heard without consequence. Human relationships are full of consequence. When you tell a friend you’re struggling, they worry. When you tell a partner you’re unhappy, it becomes about the relationship. The AI companion offers something almost no human relationship provides: a space where you can say the unsayable thing and nothing breaks.

Full attention. When did you last have someone’s complete, undivided attention? Full attention is perhaps the scarcest resource in contemporary life. Everyone is overwhelmed. And here is something that treats every single thing you say as worth responding to fully.

To be understood without performing. Modern social life requires constant impression management. The AI companion asks nothing of you socially. You can be unpolished, contradictory, and confused — and the system meets you there.

Unconditional positive regard. The psychologist Carl Rogers identified this as one of the core conditions for psychological growth — to be accepted fully, without conditions. The AI never withdraws approval. For someone who has experienced conditional love or abandonment, this is extraordinarily seductive.

None of these needs are pathological. They’re the most human needs there are. As researchers Shank, Koike, and Loughnan wrote in a 2025 paper in Trends in Cognitive Sciences, AI companions offer “a relationship with a partner whose body and personality are chosen and changeable, who is always available but not insistent, who does not judge or abandon, and who does not have their own problems.” Reading that description, it’s worth asking honestly: who hasn’t wished for something like that?

What gets lost in translation

The same body of research is clear that something is also being lost. Ho et al. found that the pitfalls identified in the literature outnumber the benefits — and the pitfalls are specific.

AI companions cannot be genuinely changed by you. Real intimacy involves mutual transformation — I am different because of you, you are different because of me. The AI processes you and responds to you, but it is not altered by the encounter. You grow; it doesn’t.

They cannot need you back. One of the underappreciated sources of meaning in human relationships is being needed — the experience of your presence mattering to another person’s actual wellbeing. The AI is available whether you show up or not.

And they cannot repair rupture with you. One of the most important things human relationships teach — particularly for children — is that connection can break and be repaired. The AI companion never ruptures in a real way. There’s nothing to repair. And so the crucial relational skill of tolerating difficulty, trusting repair, staying in complex connection, never gets practised.

These systems are very good at being mirrors. They learn your preferences and give you more of what you seem to want. But a diet of only mirrors eventually makes you smaller — because the irreducible otherness of another actual person, the way they confound your model of them, is what expands you.

Who is in this picture — and who isn’t

Here the story gets more complicated, and more important.

Australia’s 2025 Digital Inclusion Index tells us that around one in five Australians is digitally excluded — lacking reliable access, unable to afford adequate connection, or without the skills to participate safely in digital life. Rates are much higher for older Australians, people in public housing, First Nations communities, and those who didn’t complete secondary school. The 79% of children using AI companions or assistants are drawn from those who are digitally included enough to access these platforms. The most disadvantaged children are largely absent from that figure.

But here is what complicates any simple narrative about AI companionship as an affluent urban phenomenon: the same Digital Inclusion Index found that Australians in remote areas are more than twice as likely to use AI chatbots for social connection than people in metropolitan areas — around 19% of remote GenAI users compared to under 8% in cities. In the places with the least human connection infrastructure, people are turning to AI companionship at higher rates.

The relational vacuum, in other words, is not uniform. It is shaped by geography, income, age, and the presence or absence of community infrastructure. The people most likely to turn to AI for connection are often those with the fewest alternatives.

The question that matters

The technology didn’t create the gap in human connection. It found it.

And so the digital literacy question I want to put into public conversation isn’t only about understanding algorithms or data privacy — though both matter. It’s this: am I getting what I actually need from this? Or am I getting a version of it that’s making it harder to get the real thing?

That’s a question worth sitting with. Not with judgment — the needs underneath these relationships are real and the loneliness driving them is real. But with genuine curiosity about what we’re building toward, individually and collectively, as these technologies become more sophisticated and more intimate.

I’ll be exploring these questions at Pint of Science on the night of 20 May 2026 at the Queens Arms, Bendigo — a pub conversation about AI intimacy, human hunger, and digital literacy. I’d love to hear your reflections before then.

What are you getting from these technologies that you’re not getting elsewhere?

When Research Becomes “Big Tech Talking Points”: The Erosion of Good-faith Discourse on Social Media Regulation

As a sociologist of technology and educator focused on digital literacy, I’ve spent years working with research on the complex relationship between young people and social media. Recently, I found myself in an online discussion that exemplifies a troubling pattern in how we debate digital policy issues in Australia.

After I shared peer-reviewed research showing that while some correlations exist between social media use and mental health outcomes, there is limited evidence supporting a causal relationship where social media directly causes poor mental health or reduced wellbeing, I was quickly labelled as someone “shilling” for “Big Tech”, with my evidence-based positions dismissed as “talking points”.

Research points to how individuals with existing mental health challenges may gravitate toward certain types of social media use, rather than social media itself being the primary cause of these challenges. This important distinction highlights how nuanced research gets flattened into simplistic positions when policy discussions become emotionally charged.

The False Binary: Protect Kids or Support Big Tech

The current discourse around Australia’s social media age ban has created a false dichotomy: either you support sweeping restrictions or you’re somehow against protecting children. This reductive framing leaves no room for evidence-based approaches that aim to both protect young people and preserve their digital agency.

The studies I cite showing that social media use accounts for only 0.4% of the variance in well-being – findings published in reputable journals – aren’t “industry talking points”. They’re research conclusions reached through rigorous methodology and peer review. As noted in a recent Nature article, the evidence linking social media use to mental health issues is far more equivocal than public discourse suggests.

Just look at what the research actually says: “An analysis of 3 data sets, including 355,000 adolescents, found that the association between social media use and well-being accounts for, at most, 0.4% of the variance in well-being, which the authors conclude is of ‘little practical value’. Another large study of adolescent users concluded that the association was ‘too small to merit substantial scientific discussion’. A longitudinal study that measured social media use through an app installed on participants’ mobile devices found no associations between any measures of Facebook use and loneliness or depression over time.”
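
For readers less used to variance-explained figures, one quick conversion shows why the quoted authors describe 0.4% as being of little practical value. Reading the figure as a coefficient of determination (the standard interpretation, though the gloss here is mine rather than the authors’):

```latex
% Converting variance explained to a correlation coefficient:
R^2 = 0.004 \quad\Longrightarrow\quad |r| = \sqrt{0.004} \approx 0.06
```

A correlation of roughly 0.06 sits well below even the conventional threshold for a “small” effect (r of about 0.1), which helps explain why the authors treat it as negligible.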

The current push for age bans in Australia reveals concerning patterns in how policy is developed. Australian researchers have pointed out that much of the momentum behind these restrictions can be traced directly to Jonathan Haidt’s book “The Anxious Generation,” which has become influential despite its claims being disputed by experts at prestigious institutions like the London School of Economics. As Dr. Aleesha Rodriguez from the ARC Centre of Excellence for the Digital Child has observed, books that capitalise on parental anxieties should not drive national policy decisions, especially when they bypass evidence-based approaches and committee recommendations. The government’s announcement of social media age restrictions came before the Joint Select Committee on Social Media and Australian Society even issued its interim report, raising questions about the role of evidence in this policy development process. You’ll see that the final report came out on 18 November 2024 and it did not recommend the implementation of age bans.

The Power of Emotional Appeals vs. Research Findings

But in our current climate, sharing such research and insights is met with accusations of being “in the pockets of Big Tech” or having “industry interference” – rhetorical devices designed to discredit without engaging with the substance of the evidence. This pattern of discourse relies heavily on emotional appeals and anecdotes to overwhelm research findings. “Children’s wellbeing (and lives) are at stake,” advocates declare, implying that questioning the effectiveness of age bans is equivalent to devaluing children’s safety.

These emotional appeals are powerful because they tap into genuine parental anxieties. In their public communications, advocates may employ evocative language (“stranglehold,” “insidious,” “shame on them all”) and frame the debate as a moral binary: either you support age bans or you’re effectively siding with “Big Tech” against children’s interests. This rhetorical approach creates a false dichotomy where nuanced research positions are dismissed as “industry talking points” without engaging with the substance of the evidence.

By contrast, research on children’s digital experiences draws on diverse empirical methods—including large-scale surveys, in-depth qualitative studies, longitudinal tracking, and co-design work with children themselves. This comprehensive approach captures a wide range of social experiences across different demographics and contexts. Such research undergoes rigorous peer review, requiring methodological transparency and critical evaluation before publication.

Importantly, the research landscape itself contains diverse perspectives and interpretations. Even within academic disciplines studying digital youth, researchers may disagree about the significance of findings, methodological approaches, and policy implications. Some researchers emphasise potential harms and advocate for stronger protections, while others highlight benefits and concerns about digital exclusion. This diversity of expert opinion reflects the complex nature of children’s digital engagement rather than undermining the value of research-informed approaches.

What most researchers do agree on is that the evidence doesn’t support simplistic narratives. The findings indicate that while correlations exist between social media use and well-being, many other factors play more significant roles, and the relationships are often bidirectional and context-dependent.

Policy decisions affecting millions of young Australians deserve more than anxiety-driven responses – they require careful consideration of evidence, unintended consequences, and alternative approaches that address both the genuine concerns of parents and the established digital rights of children.

When Nuance Gets Lost: The Digital Duty of Care Example

The irony is that I and many researchers share the same core concern as advocates: we want digital environments that are safer for young people. Where we differ is in how to achieve this goal effectively.

Australia’s Digital Duty of Care bill proposal, which has received far less media attention than the age ban, represents a more evidence-based approach to improving online safety. Its much slower movement through parliament is telling. It focuses on making platforms safer by design rather than simply restricting access.

This legislation, developed through extensive consultation and aligned with comparable measures in the UK and EU, places responsibility on platforms to proactively prevent online harms. Yet because it lacks the emotional appeal of “keeping kids off social media”, it hasn’t captured public imagination in the same way.

I support making digital environments safer for young people. Research suggests this is better accomplished through platform design requirements, digital literacy education, and appropriate safeguards than through blanket age bans that may create unintended consequences.

The Overlooked Complexities

Lost in the simplified discourse are crucial considerations that research brings to light:

  1. Digital equity concerns: Age restrictions disproportionately impact young people in regional and remote areas who rely on social media for educational resources and social connection.
  2. Support for marginalised youth: For many LGBTQI+ young people and others who feel isolated in their physical communities, online spaces provide crucial support networks.
  3. Technical realities: The age verification technologies being proposed have significant technical limitations, with biometric age estimation showing concerning accuracy gaps for young teenagers and disparities across demographic groups.
  4. Platform compliance challenges: As we’ve seen with Meta’s pushback against EU regulations, we can’t assume platforms will simply comply with national regulations they see as burdensome for smaller markets.
  5. Educational implications: Schools face significant challenges in navigating restrictions that could inadvertently disrupt established educational practices that use social media platforms.

These complexities matter, not because they invalidate safety concerns, but because addressing them is essential to developing effective policy that truly serves young people’s interests.

Unintended Consequences of Age Verification Systems

A significant oversight in the age ban debate is how age verification technologies will inevitably impact all users—not just children. The government’s Age Assurance Technology Trial, while focused on “evaluating the effectiveness, maturity, and readiness” of these technologies, does not adequately address the far-reaching implications for adult digital access.

These systems, once implemented, create barriers for everyone—not just children. Adults who lack standard government-issued ID, have limited digital literacy, use shared devices, or have privacy concerns may find themselves effectively locked out of digital spaces. This particularly affects already marginalised groups: elderly people, rural and remote communities, people with disabilities, individuals from lower socioeconomic backgrounds, and those with non-traditional documentation.

Age verification systems that rely on biometric data, ID scanning, or credit card verification raise serious privacy concerns that extend well beyond children’s safety. Once these surveillance infrastructures are established for “protecting children,” they create permanent digital checkpoints that normalise identity verification for increasingly basic online activities. The same parents advocating for these protections may not anticipate how these systems will affect their own digital autonomy and privacy.

Moreover, the technical limitations of age verification technologies create a false sense of security. Current systems struggle with accuracy, particularly for users with certain disabilities, those from diverse ethnic backgrounds, or individuals whose appearance doesn’t match algorithmic expectations. Rather than creating safe digital environments through design and platform responsibility, age verification shifts the burden to individual users while potentially exposing their sensitive personal data to additional security risks.

Children’s Rights in the Digital Environment

What’s frequently missing from this debate is recognition of children’s established rights in digital spaces. The UN Committee on the Rights of the Child’s General Comment No. 25 (2021) specifically addresses children’s rights in relation to the digital environment. This authoritative interpretation clarifies that children have legitimate rights to:

  • Access information and express themselves online (Articles 13 and 17)
  • Privacy and protection of their data (Article 16)
  • Freedom of association and peaceful assembly in digital spaces (Article 15)
  • Participation in cultural life and play through digital means (Article 31)
  • Education that includes digital literacy (Article 28)

The UN framework emphasises that the digital environment “affords new opportunities for the realization of children’s rights” while acknowledging the need for appropriate protections. It specifically notes that children themselves report that digital technologies are “vital to their current lives and to their future.”

This rights-based framework fundamentally challenges the premise that children should simply be excluded from digital spaces until they reach an arbitrary age threshold. Instead, it calls for balancing protection with participation and recognising children’s evolving capacities.

The Australian context

In Australia, the digital rights of children are recognised and protected, encompassing privacy, safety, and access to information, with organisations like the eSafety Commissioner and the Alannah & Madeline Foundation playing key roles in advocacy and research. 

Here’s a more detailed breakdown of the digital rights of children in Australia:

Key Rights and Protections: 

  • Privacy: Children have the right to privacy in the digital environment, which is protected by the Privacy Act 1988. 
  • Safety: The eSafety Commissioner works to protect children from online harms like cyberbullying, grooming, and exposure to harmful content. 
  • Access to Information: Children have the right to access reliable and age-appropriate information online. 
  • Freedom of Expression: Children have the right to express themselves online, but this right must be balanced with the need to protect them from harm. 
  • Participation: Children have the right to participate in online activities and to have their views heard, especially in matters that affect them. 

Relevant Organisations and Initiatives: 

  • eSafety Commissioner: This government agency is responsible for promoting online safety and protecting children from online harms. 
  • Alannah & Madeline Foundation: This organisation advocates for children’s rights online and works to create a safer online environment for children. 
  • Australian Research Council Centre of Excellence for the Digital Child: This research centre focuses on creating positive digital childhoods for all Australian children. 
  • UNCRC General Comment No. 25: This document outlines the rights of the child in relation to the digital environment and provides guidance for governments and other actors. 
  • UNICEF Australia: Collaborates with the Digital Child centre to promote digital wellbeing for young children. 
  • Digital Rights Watch: An organization that works to ensure fairness, freedoms and fundamental rights for all people who engage in the digital world. 

Key Issues and Challenges: 

  • Online Safety: Protecting children from online harms like cyberbullying, grooming, and exposure to harmful content is a major concern. 
  • Privacy: Balancing the need to protect children’s privacy with the need for parents and caregivers to monitor their online activity is a complex issue. 
  • Age Verification: Ensuring that children are not exposed to age-inappropriate content and that they are not targeted by online services is important.
  • Misinformation and Disinformation: Children are vulnerable to misinformation and disinformation online, and it’s important to equip them with the skills to identify and avoid it. 
  • Technology-Facilitated Abuse: Children can be victims of technology-facilitated abuse (TFA) in the context of domestic and family violence, and it’s important to address this issue. 
  • Parental Rights vs. Children’s Privacy: The extent to which parents can monitor their children’s online activity is a complex issue with legal implications. 
  • Digital Literacy: It’s important to support digital literacy initiatives that encourage and empower children to take further responsibility for their online safety. 

Alternative Approaches: A Better Children’s Internet

Australian researchers are offering a more constructive approach to online safety than blanket age restrictions. In a timely article, researchers from the ARC Centre of Excellence for the Digital Child explain that while they understand the concerns motivating the Australian Government’s decision to ban children under 16 from creating social media accounts, they believe this approach “undermines the reality that children are growing up in a digital world”.

They have developed a “Manifesto for a Better Children’s Internet” that acknowledges both the benefits and risks of digital engagement while focusing on practical improvements. They argue that “rather than banning young people’s access to social media platforms, the Australian Government should invest, both financially and socially, in developing Australia’s capacity as a global leader in producing and supporting high-quality online products and services for children and young people.”

Their framework includes several key recommendations:

  • Standards for high-quality digital experiences – Developing clear quality standards for digital products and services aimed at children, with input from multiple stakeholders including children themselves.
  • Slow design and consultation with children – Involving children and families in the design process rather than using them as “testing markets” for products and services.
  • Child-centered regulation and policy – Creating appropriate “guardrails” through regulatory guidelines developed with input from children, carers, families, educators and experts.
  • Media literacy policy and programs – Investing in media literacy education for both children and parents to develop the skills needed to navigate digital environments safely and productively.

This approach acknowledges that the internet “has enhanced children’s lives in many ways” while recognising it “was not designed with children in mind.” Rather than simply restricting access, it focuses on redesigning digital spaces to better serve young people’s needs and respecting their agency in the process.

This framework offers a promising middle path between unrestricted access and blanket prohibitions, focusing on improvement rather than exclusion.

Moving Forward: Good faith engagement

What would a more productive discourse look like? Rather than dividing positions into “protectors of children” versus “Big Tech shills,” we need approaches that:

  • Recognise children’s established rights: Digital policy should acknowledge children’s legitimate rights to information, expression, association, privacy, and participation as articulated in the UN Convention on the Rights of the Child.
  • Engage with the full evidence base: This includes both research on potential harms and studies showing limited correlations or positive benefits, with a commitment to understanding the methodological strengths and limitations of different studies.
  • Centre young people’s voices: The young people affected by these policies have valuable perspectives that deserve genuine consideration, not dismissal as naive or manipulated.
  • Acknowledge trade-offs: Every policy approach involves trade-offs between protection, privacy, and participation rights. Pretending otherwise doesn’t serve anyone.
  • Focus on effective solutions: Research suggests a combination of platform design improvements, digital literacy education, and more nuanced moderation systems may be more effective than simply setting age limits.
  • Maintain good faith dialogue: Rather than using emotional appeals and moral accusations to shut down debate, all participants should approach these discussions with the genuine belief that others share the concern for children’s wellbeing, even when they disagree about methods.

This approach would move us beyond simplistic binaries and rhetorical tactics toward policies that genuinely serve children’s best interests in all their complexity.

I remain committed to research-informed approaches to making digital spaces safer for young people. This doesn’t mean blindly defending the status quo, but rather advocating for solutions that address the real complexities of young people’s digital lives while respecting their established rights.

The Digital Duty of Care legislation offers a promising framework that places responsibility on platforms to make their services safer for all users through design choices, risk assessment, and mitigation strategies. Combined with robust digital literacy education and appropriate parental controls, this represents a more comprehensive approach than age restrictions alone.

As the social media landscape continues to evolve, maintaining evidence-based discourse matters more than ever. Dismissing research as “talking points” doesn’t advance the conversation – it closes it down just when we need it most.

Young Australians deserve digital policies crafted through careful consideration of evidence, informed by young people’s perspectives, and grounded in their established rights. That’s not a “Big Tech talking point” – it’s responsible, ethical policymaking that centres the needs and interests of the very people these policies aim to serve.

Beyond Age Limits: What’s Missing in Australia’s Social Media Ban Discussion

Why are we talking about this now?

The ABC’s recent article “The government plans to ban under-16s from social media platforms” lays out the mechanics of Australia’s proposed social media age restrictions. The timing of this announcement is significant – with only two parliamentary sitting weeks left this year and an election on the horizon, both major parties are backing this policy. This follows months of mounting pressure from parent advocacy groups like 36 Months, and builds on earlier discussions about protecting children from online pornography.

But while the article explains what will happen, there are critical questions we need to address about whether this approach will actually work – and what we might lose in the process. This isn’t just about technical implementation; it’s about understanding why we’re seeing this push now and whether it represents meaningful policy development or political opportunism.

The recent Social Media Summit in Sydney and Adelaide highlighted how this debate is being shaped. Rather than drawing on Australia’s world-leading expertise in digital youth research, the summit featured US speakers promoting what has been referred to as a “moral panic” approach. This raises questions about whether we’re developing evidence-based policy or responding to political pressures.

The Policy vs Reality

Yes, platforms will have 12 months to implement age verification systems, and we will no doubt see pushback from them. Yes, the definition of social media is broad enough to capture everything from TikTok to YouTube to potentially Discord and Roblox.

Additionally, the government’s ability to enforce age restrictions on global social media platforms raises significant practical and legal challenges. While Australia can pass domestic legislation requiring platforms to verify users’ ages, enforcing these rules on companies headquartered overseas is complex. Recent history shows platforms often prefer to withdraw services rather than comply with costly local regulations – consider Meta’s response to Canadian news legislation or X’s ongoing resistance to Australian eSafety Commissioner directives.

Any proposed penalties may not provide sufficient incentive for compliance, particularly given these platforms’ global revenues. And even if major platforms comply, young people could simply use VPNs to access services through other countries, or migrate to less regulated platforms beyond Australian jurisdiction.

Without international cooperation on digital platform regulation, individual countries face significant challenges in enforcing national regulations on global platforms. This raises a crucial question: will platforms invest in expensive age-verification systems for the Australian market, or will they simply restrict their services here, potentially reducing rather than enhancing digital participation options for all Australians?

What is missing from this conversation?

  1. Digital Equity: The broad scope of this ban could particularly impact:
    • Regional and remote students using these platforms for education
    • Marginalised youth who find support and community online
    • Young people using gaming platforms for social connection
  2. Privacy Trade-offs: The proposed verification systems mean either:
    • Providing ID to social media companies
    • Using facial recognition technology
    • Creating centralised age verification systems
    All of these raise significant privacy concerns – not just for teens, but for all users.
  3. Unintended Consequences: International experience shows young people often:
    • Switch to VPNs to bypass restrictions
    • Move to less regulated platforms
    • Share accounts or find other workarounds

A More Nuanced Approach

Rather than focusing solely on age restrictions, we could be:

  • Making platforms safer by design
  • Investing in digital literacy education
  • Supporting parents and educators
  • Listening to young people’s experiences
  • Learning from international approaches like the EU’s Digital Services Act

Looking Forward

While the government’s concern about young people’s online safety is valid, and is shared by researchers, families, school teachers and young people alike, the solution isn’t as simple as setting an age limit. Young people develop digital capabilities at different rates, and their resilience online often depends more on their support networks, digital literacy, and individual circumstances than their age alone.

The Centre of Excellence for the Digital Child’s research demonstrates that some young people are highly capable of identifying and managing online risks, while others need more support – regardless of age. This is particularly important when we consider:

  • Some younger teens demonstrate sophisticated understanding of privacy settings and online safety
  • Many vulnerable teens rely on online communities for crucial support
  • Digital literacy and family support often matter more than age in online resilience
  • Regional and remote youth often develop advanced digital skills earlier out of necessity

We need approaches that protect while preserving the benefits of digital participation, recognising that arbitrary age limits may not align with individual capability and need.

This better reflects the evidence while acknowledging:

  • The validity of safety concerns
  • The complexity of digital capability development
  • The importance of context and support
  • The need for nuanced policy responses

The Joint Select Committee on Social Media and Australian Society is still to deliver its final report. Perhaps it’s worth waiting for this evidence before rushing to implement restrictions that might create more problems than they solve.

EDIT: They have now released their final report, with some excellent recommendations… and no mention of an age ban.

The Bottom Line

Protection and participation aren’t mutually exclusive. We can make online spaces safer without excluding young people from digital citizenship. But it requires more nuanced solutions than age barriers alone can provide.