Between Promise and Peril: The AI Paradox in Family Violence Response

By Dr. Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures, School of Education, La Trobe University

When Smart Systems Meet Human Stakes

The integration of artificial intelligence into our legal system presents a profound paradox. The same AI tools promising unprecedented efficiency in predicting and preventing family violence can simultaneously amplify existing biases and create dangerous blind spots.

This tension between technological promise and human care, support and protection isn’t theoretical; it is playing out in real time across legal systems worldwide. Through my involvement in last year’s AuDIITA Symposium, specifically its theme on AI and family violence, I took part in discussions that highlighted the high-stakes applications of AI in family violence response. I found that the question isn’t whether AI can help, but how we can ensure it enhances rather than replaces human judgment in these critical contexts.

The Capabilities and the Gaps

Recent advances in AI for family violence response show remarkable technical promise:

  • Researchers have achieved over 75% accuracy in distinguishing between lethal and non-lethal violence cases using AI analysis of legal documents (see the illustrative classifier sketch after this list)
  • Machine learning systems can identify patterns in administrative data that might predict escalation before it occurs
  • Natural language processing tools can potentially identify family violence disclosures on social media platforms
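
To make the first of these capabilities a little more concrete, the sketch below shows the general shape of such a document-classification pipeline: vectorise the free text of case files, train a supervised classifier on expert-coded outcomes, and measure accuracy on held-out cases. It is a minimal illustration using scikit-learn and entirely invented data, not the pipeline used in the research cited above.

```python
# Illustrative sketch only: not the cited researchers' actual method.
# All "documents" below are invented stand-ins; real studies use
# de-identified case files with outcomes coded by domain experts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

documents = [
    "respondent breached intervention order and made repeated threats to kill",
    "history of strangulation and access to weapons noted in police report",
    "escalating stalking behaviour following separation from the applicant",
    "threats to harm children if the applicant left the relationship",
    "single verbal argument reported, no prior police attendance",
    "property damage during a dispute, no threats or physical harm recorded",
    "breach of order by text message contact, no violence alleged",
    "neighbour reported shouting, parties separated without injury",
]
labels = ["lethal-risk"] * 4 + ["non-lethal"] * 4  # coded by experts, not by the model

X_train, X_test, y_train, y_test = train_test_split(
    documents, labels, test_size=0.25, stratify=labels, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),    # simple linear classifier
)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Even at this toy scale, the point stands: the model can only learn patterns that are present in whatever the documents happen to record, which is exactly where the implementation gaps discussed below begin.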

But these impressive capabilities obscure a troubling implementation gap. What happens when these systems encounter the messy reality of human services?

The VioGén Warning

Spain’s VioGén system offers a sobering case study. Despite being hailed as a world-leading predictive tool for family violence risk, its flaws led to tragic outcomes—with at least 247 women killed after being assessed, many after being classified as “low” or “negligible” risk.

The system’s failures stemmed from multiple factors:

  • Victims were often too afraid or ashamed to provide complete information
  • Police accepted algorithmic recommendations 95% of the time despite lacking resources for proper investigation
  • The algorithm potentially missed crucial contextual factors that human experts might have caught
  • Most critically, the system’s presence seemed to reduce human agency in decision-making, with police and judges deferring to its risk scores even when other evidence suggested danger

Research revealed that women born outside Spain were five times more likely to be killed after filing family violence complaints than Spanish-born women. This suggests the system inadequately accounted for the unique vulnerabilities of immigrant women, particularly those facing linguistic barriers or fears of deportation.

The Cultural Blind Spot

This pattern of leaving vulnerable populations behind reflects a broader challenge in technology development. Research on technology-facilitated abuse has consistently shown how digital tools can disproportionately impact culturally and linguistically diverse women, who often face a complex double-bind:

  • More reliant on technology to maintain vital connections with family overseas
  • Simultaneously at increased risk of technological abuse through those same channels
  • Often experiencing unique forms of technology-facilitated abuse, such as threats to expose culturally sensitive information

For AI risk assessment to work, it must explicitly account for how indicators of abuse and coercive control manifest differently across cultural contexts. Yet research shows even state-of-the-art systems struggle with this nuance, achieving only 76% accuracy in identifying family violence reports that use indirect or culturally specific language.

Beyond Algorithms: The Human Element

What does this mean for the future of AI in family violence response? My research suggests three critical principles must guide implementation:

1. Augment, Don’t Replace

AI systems must be designed to enhance professional judgment rather than constrain it or create efficiency dependencies. This means creating systems that:

  • Provide transparent reasoning for risk assessments
  • Allow professionals to override algorithmic recommendations based on contextual factors
  • Present information as supportive evidence rather than definitive judgment

2. Design for Inclusivity from the Start

AI systems must explicitly account for diversity in how family violence manifests across different communities:

  • Include diverse data sources and perspectives in development
  • Build systems capable of recognising cultural variations in disclosure patterns
  • Ensure technology respects various epistemologies, including Indigenous perspectives

3. Maintain Robust Accountability

Implementation frameworks must preserve professional autonomy and expertise:

  • Ensure adequate resourcing for human assessment alongside technological tools
  • Create clear guidelines for when algorithmic recommendations should be questioned
  • Maintain transparent review processes to identify and address algorithmic bias

Victoria’s Balanced Approach

In Victoria and across Australia, there is encouraging evidence of a balanced approach to AI in legal contexts. While embracing technological advancements, Victorian courts have shown appropriate caution around AI use in evidence and maintained strict oversight to ensure the integrity of legal proceedings.

This approach—maintaining human oversight while allowing limited AI use in lower-risk contexts—aligns with what research suggests is crucial for successful integration: preserving professional judgment and accountability, particularly in cases involving vulnerable individuals.

The Path Forward

As we navigate the next wave of technological transformation in legal practice, we face a critical choice. We can allow AI to become a “black box of justice” that undermines transparency and human agency, or we can harness its potential while maintaining the essential human elements that make our legal system work.

Success will require not just technological sophistication but careful attention to institutional dynamics, professional practice patterns, and the complex social contexts in which these technologies operate. Most critically, it demands recognition that in high-stakes human service contexts, technology must serve human needs and judgment rather than constrain them.

The AI paradox in law is that the very tools promising to make our systems more efficient also risk making them less just. By centering human dignity and professional judgment as we develop these systems, we can navigate between the promise and the peril to create a future where technology truly serves justice.


Dr. Alexia Maddox will be presenting on “The AI Paradox in Law: When Smart Systems Meet Human Stakes – Navigating the Promise and Perils of Legal AI through 2030” at the upcoming 2030: The Future of Technology & the Legal Industry Forum on March 19, 2025, at the Grand Hyatt Melbourne.

Thinking Through Meta’s Fact-Checking Changes: What It Means for Australia

Please note, this blog is being actively updated as position pieces and insightful commentary arise. Last update 10 January 5pm AEST.

When I saw Mark Zuckerberg’s announcement yesterday about Meta ending their third-party fact-checking program in favour of a community-based system, my first thought was naturally about its implications for Australia given that many of my colleagues over the years have researched the Australian media sphere and misinformation on social media.

My second thought was: what is this agenda really about? This scepticism about Meta’s motives is shared by major advocacy organisations. Common Sense Media, a leading voice on kids’ digital wellbeing, issued a scathing response, describing the changes as a ‘transparent attempt to curry favour with incoming political power brokers’ and pointing to Meta’s recent role in killing key federal legislation to protect kids online through ‘flanks of lobbyists and the promise of a new data center in Louisiana’ (Common Sense Media, 2025). Listening to Zuckerberg, what I heard amongst all the Silicon Valley speak was something that wasn’t included in the written statement, and I think it may be the key.

At about 4 minutes in Mark drops the following very telling spin: “Finally we are going to work with president Trump to push back on governments around the world that are going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever increasing number of laws institutionalising censorship and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in this country. The only way we can push back on this global trend is with the support of the US government. And that’s why it’s been so difficult in the past 4 years when even the US government has pushed for censorship. By going after us and other American companies it has emboldened other governments to go even further.”

I could give you an analysis of this statement, but I think it speaks for itself once you strip away the spin: the European Digital Services Act is intended to provide positive outcomes for people, and while it does constrain what Meta can do, maybe that is a good thing. You can see the European Commission’s response here, pushing back on the framing of content moderation requirements as censorship, a framing that ran right through Meta’s statement as delivered by Zuckerberg (did whoever wrote this piece really do it with a straight face?).

In some insightful commentary, Daphne Keller, Director, Program on Platform Regulation at Stanford Cyber Policy Center posts on LinkedIn that Zuckerberg’s open declaration of Meta’s antagonistic stance towards EU regulators may well encourage an equal and opposite response from regulators, cultivating their worst crackdown tendencies and marginalising those who wish to be careful.

Also, there is clearly a fundamental conflict between the Trump administration’s approach to technology regulation, Silicon Valley’s claims of innovation, the power of the ‘tech demagogues’ and any meaningful duty of care towards platform users (let alone acknowledgement of legislation in different national jurisdictions). Let us not forget Elon Musk and the kitchen sink meme upon Trump’s election win. Meta also likely feels the need to reposition itself, given its history of banning Trump during the attack on the US Capitol. This analysis by writers for PolitiFact, one of the US third-party fact-checking organisations, while depressing, is insightful on this aspect of the situation.

However, commentary from prominent social media researchers danah boyd and Siva Vaidhyanathan speaks to the personal motivations at play here, pointing to a wobbling spinning top of desires: political alignment, a quest for power, motivations not connected to money, and perhaps an outsized or cartoonish expression of competitive masculinity within the techbro elite. This is where the commentary gets personal and starts to take in the charismatic style of social media CEOs such as Mark Zuckerberg and Elon Musk, whose companies appear to function more like personal playthings for their various ambitions.

As media requests started coming in and discussions began among my colleagues, I moved the scope of this discussion away from a ‘culture shift’, the term of the day, and focused specifically on what this change means for how Australians access and share credible information about political issues on social media.

Understanding the Change

Currently in Australia, Meta partners with fact-checking organisations including AFP and AAP FactCheck. These organisations provide structured, methodical verification of claims that circulate on Meta’s platforms, helping establish a baseline of credible information that can inform public discussion. There is also RMIT Lookout, which is accredited by the International Fact-Checking Network (IFCN) based at Poynter.

While Meta frames fact-checking as something that can be readily replaced by community input, the reality of professional fact-checking involves complex verification processes, collaborative networks, and sophisticated tools. Professional fact-checkers have established relationships with deepfake detection experts and digital forensics specialists who can be quickly consulted on complex cases. Until recently, they also had access to Meta’s CrowdTangle tool, which allowed them to track and analyse how content spreads across the platform. These kinds of editorial decisions require not just expertise and established processes, but access to tools and expert networks that community moderation will struggle to replicate consistently.

The shift to a Community Notes system represents a significant change from this professional approach. Meta currently partners with certified fact-checkers through the non-partisan IFCN, which, as this open letter to Zuckerberg shows, requires all fact-checking partners to meet strict non-partisanship standards. Instead of this reputable, standards-based verification approach, the new system would rely on user communities to identify and provide context for potential misinformation.

This shift reflects a concerning pattern identified in recent research. A study published in Social Media + Society shows that platforms consistently prioritise managing content visibility over ensuring information accuracy (Cotter et al., 2022). By focusing on how content is displayed rather than verifying its accuracy, platforms treat misinformation primarily as a visibility problem rather than an information quality challenge. This approach fundamentally misunderstands the complexity of fact-checking and verification processes.

Recent research from the Prosocial Design Network offers insight into Community Notes’ effectiveness in addressing the visibility issue: while they can reduce retweets of flagged posts by 50-60%, their delayed appearance (usually after 80% of reshares have occurred) means they only reduce overall sharing of misleading posts by about 10%. The system shows promise but faces inherent scalability challenges due to its reliance on volunteers (Prosocial Design Network, 2025).
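
The arithmetic behind that gap is worth spelling out. As a rough back-of-envelope calculation, assuming the figures above apply uniformly:

```python
# Rough back-of-envelope estimate, assuming the Prosocial Design Network
# figures apply uniformly: if ~80% of reshares happen before a note becomes
# visible, a 50-60% reduction can only act on the remaining ~20% of sharing.
share_before_note = 0.80           # fraction of reshares before the note appears
reduction_once_visible = 0.55      # midpoint of the reported 50-60% reduction
overall_reduction = (1 - share_before_note) * reduction_once_visible
print(f"approximate overall reduction: {overall_reduction:.0%}")  # roughly 10%
```

In other words, even a strongly effective note has limited impact if it arrives after most of the sharing has already happened.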

However, as The Advocate reports, the shift to Community Notes comes alongside broader changes to content moderation policies that go beyond just managing misinformation. These changes also include significant alterations to hate speech policies, raising concerns about protections for vulnerable communities (Wiggins, 2025).

The shift from professional fact-checking to community moderation represents more than just a change in process – it signals a fundamental retreat from platform responsibility for maintaining safe, credible information environments and changes how online information is verified and controlled. By replacing expert systems with user-led tools like Community Notes, Meta is effectively transferring responsibility for information quality from trained professionals to its user base – a shift that raises serious questions about the future of truth and accountability in our digital public spaces.

What is the Community Notes system?

The Community Notes system on X operates through a specific process: users who meet initial eligibility criteria (having accounts at least six months old, verified phone numbers, and no recent rule violations) can contribute contextual notes to any post. However, the ability to rate notes requires users to first demonstrate consistent, thoughtful rating behaviour that earns them “rating impact.” Notes only become visible when rated ‘helpful’ by enough users who have previously disagreed in their note-rating patterns – a unique approach designed to surface consensus across different viewpoints.
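
To illustrate the core of this consensus mechanism in very reduced form, the toy sketch below encodes the intuition that a note only surfaces when raters who normally disagree with each other both mark it helpful. It is not X’s actual ranking algorithm, which is based on a matrix-factorisation model over the full rating history; the names, thresholds and data structures here are all invented for illustration.

```python
# Toy illustration of the consensus idea described above, NOT X's actual
# ranking algorithm. A note becomes visible only when it is rated "helpful"
# by raters who have historically DISAGREED with each other.
from itertools import combinations

def historically_disagree(rater_a, rater_b, past_ratings, threshold=0.5):
    """True if two raters disagreed on at least `threshold` of the notes
    they have both rated before."""
    shared = set(past_ratings[rater_a]) & set(past_ratings[rater_b])
    if not shared:
        return False
    disagreements = sum(
        past_ratings[rater_a][note] != past_ratings[rater_b][note] for note in shared
    )
    return disagreements / len(shared) >= threshold

def note_becomes_visible(helpful_raters, past_ratings, required_pairs=1):
    """Toy visibility rule: the note needs 'helpful' ratings from at least
    `required_pairs` pairs of raters who usually disagree with each other."""
    cross_viewpoint_pairs = sum(
        historically_disagree(a, b, past_ratings)
        for a, b in combinations(helpful_raters, 2)
    )
    return cross_viewpoint_pairs >= required_pairs

# Hypothetical rating histories: note id -> "helpful" / "not helpful".
past_ratings = {
    "rater_one":   {"n1": "helpful", "n2": "not helpful", "n3": "helpful"},
    "rater_two":   {"n1": "not helpful", "n2": "helpful", "n3": "not helpful"},
    "rater_three": {"n1": "helpful", "n2": "helpful", "n3": "helpful"},
}

# rater_one and rater_two usually disagree, so their joint "helpful" ratings
# on a new note signal consensus across viewpoints and surface the note.
print(note_becomes_visible(["rater_one", "rater_two"], past_ratings))    # True
print(note_becomes_visible(["rater_one", "rater_three"], past_ratings))  # False
```

The structural vulnerabilities discussed below follow directly from this design: whatever manipulates the apparent disagreement between raters also manipulates which notes become visible.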

As Queensland University of Technology’s Dr Tim Graham points out, this consensus-based approach is fundamentally different from professional fact-checking: ‘Community Notes is billed as a panacea… but when you get into the nitty-gritty the system fails to get a consensus most of the time. [Consensus] is a fundamental misreading of truth and how fact checking works’ (ABC News, 2025).

The system’s design, while aimed at preventing bias, creates additional structural challenges. Coordinated groups can potentially game the system by deliberately creating artificial disagreement patterns in their rating histories to control which notes become visible. Furthermore, the system’s reliance on volunteer labour means coverage tends to skew toward viral political content while technical misinformation or regional issues often lack sufficient qualified raters. The absence of expertise verification also means that authoritative-sounding but subtly inaccurate notes can gain visibility if they appeal to multiple viewpoints.

Research highlights significant limitations: analysis from The Washington Post found only 7.7% of proposed notes actually appeared on posts, while the Center for Countering Digital Hate found 74% of accurate notes on misleading political posts never reached the consensus needed for display. The system faces particular challenges with timing – notes typically take several hours to achieve consensus and become visible. As Dr Graham notes, ‘The damage is already done in an hour or two, once you get into five hours, a day, two days, everyone moves on’ (ABC News, 2025).

As Meta looks to emulate X’s (formerly Twitter) Community Notes system, the results so far reveal clear strengths and weaknesses. While notes excel at correcting clear factual errors like misattributed images or incorrect statistics, they struggle with more nuanced claims or context-dependent situations. The system has shown vulnerabilities including susceptibility to coordinated action by groups of users, inconsistent coverage across different types of content, and varying quality of notes that sometimes lean more toward opinion than fact. During fast-moving events where rapid fact-checking is crucial, these limitations become particularly apparent.

Meta’s proposed Community Notes feature represents both opportunity and risk. While Daphne Keller sees positive potential in this approach – which builds on successful models of social curation like Wikipedia – she raises crucial concerns about its implementation. Meta’s decision to use Community Notes as a replacement for professional fact-checking, rather than a complement to it, while simultaneously reducing other safeguards against hate speech, puts enormous pressure on the system to perform. This strategic choice, Keller argues, could put Meta and this model into the firing line and may discourage other platforms from experimenting with similar collaborative moderation tools, even as the need for innovative approaches to content moderation grows.

The effectiveness of Meta’s implementation will ultimately depend on:

  • The diversity and representativeness of contributors, including robust systems to prevent domination by any particular viewpoint or group
  • Technical safeguards against manipulation by coordinated groups
  • Significantly faster response times to emerging misinformation than currently seen on X
  • Clear accountability measures and transparency about note visibility decisions
  • Robust mechanisms to verify expertise and maintain quality in specialised topic areas

So Does Fact-Checking Matter?

The Prosocial Design Network’s research reveals that fact-checking is just one tool in a broader kit of misinformation interventions. Their evidence review suggests that other approaches, such as accuracy prompts and pre-bunking, can be more effective than fact-checking alone in reducing misinformation spread (Prosocial Design Network, 2025). This raises an important question about Meta’s shift away from professional fact-checking: how much does fact-checking actually matter?

This is an interesting empirical question: does third-party fact-checking matter, in the sense that it shapes how news and social media audiences perceive and assess the credibility and authenticity of information? We know from the impacts of misinformation surrounding COVID-19 vaccination, its role in exacerbating political polarisation, and the increasing prevalence of AI-generated content online that we WANT it to matter. But does fact-checking actually persuade people that the information or content they are consuming is, or is not, factual and credible?

Recent research published in Digital Journalism (Carson et al., 2022) found that third-party fact-checking can actually decrease trust in news stories – a concerning “backfire effect” that suggests we need to carefully consider how fact-checking is implemented. The study, which examined Australian news consumers, found that when readers were presented with a fact-check of a political claim, their trust in the original news story decreased, regardless of their political leanings or the media outlet involved.

Carson et al.’s research demonstrates that news audiences may not clearly distinguish between a politician’s false claims within a news story and the news reporting itself. This means that when a fact-check identifies a false claim, readers’ distrust can spread to the entire story and news outlet, rather than being limited to the politician making the false statement. This finding is particularly relevant as Meta shifts away from professional fact-checkers to a community-based system.

Meta’s shift away from fact-checking comes alongside deeply concerning changes to content moderation policies. As documented by the Platform Governance Archive, Meta has significantly rewritten its Community Guidelines, removing crucial protections against hate speech and reframing these rules as “hateful conduct” policies. I think Matt Schneider articulates the concerns this raises best in his LinkedIn post on the topic. He argues that these changes explicitly permit previously restricted content, particularly harmful speech targeting gender, sexual orientation, and minority groups. Most alarming is his observation that the policy now explicitly permits “allegations of mental illness or abnormality when based on gender or sexual orientation” and allows comparisons of women to “household objects or property” (Schneider, 2025).

These policy changes have dire implications for vulnerable communities. According to Platformer’s January 2025 reporting, Meta has explicitly removed protections against dehumanising speech targeting transgender people, women, and immigrants. The platform now allows posts denying trans people’s existence, comparing them to objects rather than people, and making allegations of mental illness based on gender identity or sexual orientation. This shift comes at a particularly dangerous time – when over 550 anti-LGBTQ+ bills were introduced in state legislatures last year in the US, 40 became law, and hate crimes against LGBTQ+ people reached record levels, with more than 2,800 incidents reported in 2023 alone.

These changes represent more than just a technical policy shift – they signal a troubling retreat from platform responsibility that could have serious consequences for vulnerable communities. This context is crucial – the move away from professional fact-checking isn’t happening in isolation, but as part of a broader and potentially harmful shift in how Meta approaches content moderation and platform governance, seemingly prioritising political expediency over user safety and dignified public discourse.

A Pattern of Platform Responsibility

This isn’t the first time Meta has attempted to dodge platform responsibility. As documented in WIRED’s investigation of Facebook’s response to the 2016 election crisis (Thompson & Vogelstein, 2018), the company has a pattern of initially denying accountability for content moderation issues, only acknowledging responsibility after significant pressure. While Meta continues to invoke Section 230 protections and claim it’s ‘just a platform,’ history shows that its algorithmic choices and content moderation policies actively shape public discourse.

The current retreat from professional fact-checking echoes previous instances where Facebook prioritised growth and engagement over safety and accuracy. Just as the company eventually had to acknowledge its role in election misinformation, Meta needs to recognise that with its unprecedented reach comes unprecedented responsibility. The solutions to fake news, AI-generated content, deepfakes, and hate speech cannot come from community moderation alone – they require platform-level commitment and investment.

Beyond “Free Speech”

Using the euphemism of a ‘cultural shift’ to justify kiboshing years of work building up fact-checking, Zuckerberg says the company’s focus is now on increasing ‘speech’. However, free speech exists within an ecosystem of other rights and responsibilities. Meta’s announcement focuses heavily on reducing restrictions in the name of free expression, but as researchers at Cornell’s CAT Lab note, the ability to participate meaningfully in online spaces involves more than just the freedom to speak – it requires the freedom to form connections and engage in collective action without fear of harassment or intimidation (Matias & Gilbert, 2024).

While Meta frames these changes as expanding free speech, the real challenge is ensuring everyone can participate meaningfully in online discourse. When misinformation spreads unchecked, or when harassment goes unmoderated, it can effectively prevent certain groups from participating in public debate. Free expression isn’t just about removing restrictions – it’s about creating an environment where all voices can be heard and verified information can reach its audience.

The Challenge of Shared Information

Meta’s move reflects what researchers identify as a “marketplace of ideas” approach where platforms “prioritise free speech and more speech to correct the record” (Cotter et al., 2022). While this might seem reasonable, it creates practical challenges for public discussion. When different groups encounter radically different versions of political information and news, it becomes harder to have meaningful discussions about important issues.

Meta’s shift toward more “personalised” political content could create information asymmetries, where users see vastly different versions of political discussions based on their existing views and engagement patterns. This could make it harder for users to encounter diverse perspectives or verify claims across different communities, especially during election periods. Public discussion requires some degree of shared information – when different groups of voters are seeing fundamentally different versions of political issues, it becomes more challenging to engage in informed debate.

The Australian Context

As the authors of the open letter from fact-checking organisations around the world observe, Meta’s plan to end the fact-checking program in 2025 applies, for now, only to the United States. They note, however, that Meta runs similar programs in more than 100 countries covering diverse political systems and stages of development. Hauntingly, they suggest that if Meta decides to stop the program worldwide, it is almost certain to result in real-world harm in many places. For now, we will likely have some comparative case studies from which to observe the impacts of professional versus community-led models of fact-checking within the Facebook environment, but this may change at any point.

In the Australian context, timing matters. With our federal election approaching, changes to how information is verified on Meta’s platforms could affect how Australians access and share political information. While we have strong fact-checking institutions like RMIT Lookout, which operates independently, Meta’s platforms play a distinct and significant role in how many Australians encounter and share political information. Australia’s concentrated media market means that changes to Meta’s platforms can have significant effects on what information reaches Australian audiences.

Beyond Content Moderation

Rather than just managing what content is visible, platforms need to support the infrastructure that helps people access and verify credible information. The challenges we face go beyond simple content filtering – they require a comprehensive approach to building resilient information ecosystems. This means:

  • Developing better systems to identify accurate information through a combination of automated detection, expert verification, and community input. These systems need to work proactively rather than reactively, identifying potential misinformation before it goes viral and providing real-time verification tools that users can access directly.
  • Supporting rather than undermining professional fact-checking by providing fact-checkers with better tools, resources, and platform access. This includes maintaining partnerships with accredited fact-checking organisations, ensuring transparent access to content spread data, and integrating fact-checking more deeply into platform architectures.
  • Creating tools that help bridge information divides by making verified information more accessible and engaging. This could include features that surface diverse perspectives from credible sources, tools that help users understand the context and history of viral claims, and systems that encourage cross-pollination of verified information across different communities.
  • Investing in digital literacy through both platform features and educational initiatives. This means building in-platform tools that help users evaluate information credibility, supporting external digital literacy programs, and developing resources that help users understand how information spreads online and how to verify claims they encounter.
  • Ensuring platform accountability through transparent reporting on content moderation decisions, clear appeals processes, and regular independent audits of platform practices. Without accountability measures, even the best systems can be undermined by inconsistent enforcement or political pressure.

This comprehensive approach recognises that effective content moderation isn’t just about removing harmful content – it’s about building an information environment that helps users make informed decisions and engage meaningfully with online discourse.

Looking Ahead

As we approach our federal election, these changes deserve careful attention. While Meta’s commitment to reducing over-enforcement of content moderation is understandable, we need to consider how changes to fact-checking systems might affect Australians’ ability to access credible information about political issues and engage in informed public discussion.

As Carson et al.’s 2022 study suggests, fact-checkers could state more clearly that they are checking a politician’s specific claim rather than the media coverage containing it. The authors also recommend that journalists more actively adjudicate false claims within their original reporting rather than relying solely on external fact-checkers.

The challenge isn’t just about determining what’s true or false. It’s about maintaining systems that help Australians access reliable information they can use to understand and discuss important political issues. Meta’s policy shift suggests they may be stepping back from this role just when clear, credible information is most needed.

The research evidence suggests that fact-checking, while important, needs to be implemented thoughtfully to avoid undermining trust in legitimate journalism. As we move forward, the key question isn’t just about free speech versus restriction – it’s about how we maintain the integrity of our shared democratic conversation in an increasingly fragmented digital landscape.

What can you do?

While some choose to opt out of social media platforms due to concerns about misinformation and toxicity, this isn’t always feasible or productive, especially as these challenges proliferate across multiple platforms. Given that I am an educator, it should come as no surprise that I think education is key and that I would emphasise the importance of building information and digital literacy skills and capabilities that work across all online environments.

Here are essential strategies for navigating online information:

  • Verify the source’s credibility: Check their track record, expertise, and potential biases
  • Watch for emotional manipulation: Be especially sceptical of content designed to provoke strong emotional reactions
  • Check dates and context: Old content is often recycled and presented as current news
  • Cross-reference information: Look for multiple reliable sources covering the same topic
  • Apply logical scrutiny: Ask yourself if the claim aligns with what you know about the person, organisation, or situation
  • Look for Community Notes: While not perfect, they can provide valuable additional context

This questioning is really the start point for critical thinking.

For AI-generated content and deepfakes specifically:

  • Watch for visual inconsistencies: Look closely at hands, teeth, backgrounds, and reflections
  • Check for unnatural movement in videos: Pay attention to lip synchronisation and eye movements
  • Be especially wary of crisis-related content: Deepfakes often exploit breaking news situations
  • Use reverse image searches: Tools like Google Lens can help identify manipulated or recycled images (see the sketch after this list)
  • Pay attention to audio quality: AI-generated voices often have subtle irregularities
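
To show in a small way what sits behind the reverse-image-search suggestion above, the sketch below compares perceptual ‘fingerprints’ of two images to flag near-duplicates, which is how recycled photos from old events get caught. It uses the third-party Python packages Pillow and imagehash, the file paths are placeholders, and it is a simplification of what tools like Google Lens do at scale.

```python
# Minimal sketch of the idea behind reverse image search: compare perceptual
# hashes ("fingerprints") rather than exact bytes, so crops, re-compression
# and small edits still match. Requires the third-party packages Pillow and
# imagehash (pip install Pillow imagehash). File paths are placeholders.
from PIL import Image
import imagehash

def looks_recycled(suspect_path: str, known_path: str, max_distance: int = 8) -> bool:
    """Return True if the suspect image is a near-duplicate of a known,
    older image (small Hamming distance between perceptual hashes)."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    known_hash = imagehash.phash(Image.open(known_path))
    return (suspect_hash - known_hash) <= max_distance  # Hamming distance

if __name__ == "__main__":
    # e.g. an image circulating about a breaking news event versus an
    # archived photo from an earlier, unrelated event.
    print(looks_recycled("viral_post.jpg", "archived_2019_photo.jpg"))
```

None of this replaces context and judgement, but it illustrates that verification tools are ultimately doing pattern-matching we can reason about ourselves.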

The challenge of identifying synthetic media is growing as AI technology becomes more sophisticated. This makes it crucial to develop a strong understanding of current events, public figures, and social issues over time. This contextual knowledge becomes our foundation for evaluating authenticity. However, it’s important to acknowledge that building these skills takes time – and that’s okay. Critical thinking and digital literacy are ongoing practices that we develop gradually, not a checklist we need to master overnight.

While individual skills are vital, we shouldn’t shoulder this burden alone. This is precisely why Meta’s shift away from professional fact-checking and reduced content moderation safeguards is concerning. Platforms have access to advanced detection tools, professional fact-checkers, and technical expertise that complement our personal verification efforts. At the same time, we need to acknowledge that AI agents are becoming integral to how we create and make sense of online content.

Rather than seeing AI solely as a threat or feeling overwhelmed by the need to become expert evaluators, we can approach this as a gradual learning process. This includes building our understanding of AI’s capabilities and limitations over time, learning to use these tools productively while maintaining critical awareness, and recognising that our digital literacy will evolve alongside these technologies. However, this individual growth needs to be supported by robust platform policies and professional fact-checking resources – not treated as a replacement for them. As platforms experiment with systems like Community Notes, they must recognise that effective content moderation requires a multi-layered approach combining institutional resources, professional fact-checkers, and community participation.

Beyond Age Limits: What’s Missing in Australia’s Social Media Ban Discussion

Why are we talking about this now?

The ABC’s recent article “The government plans to ban under-16s from social media platforms” lays out the mechanics of Australia’s proposed social media age restrictions. The timing of this announcement is significant – with only two parliamentary sitting weeks left this year and an election on the horizon, both major parties are backing this policy. This follows months of mounting pressure from parent advocacy groups like 36 Months, and builds on earlier discussions about protecting children from online pornography.

But while the article explains what will happen, there are critical questions we need to address about whether this approach will actually work – and what we might lose in the process. This isn’t just about technical implementation; it’s about understanding why we’re seeing this push now and whether it represents meaningful policy development or political opportunism.

The recent Social Media Summit in Sydney and Adelaide highlighted how this debate is being shaped. Rather than drawing on Australia’s world-leading expertise in digital youth research, the summit featured US speakers promoting what has been referred to as a “moral panic” approach. This raises questions about whether we’re developing evidence-based policy or responding to political pressures.

The Policy vs Reality

Yes, platforms will have 12 months to implement age verification systems, and we will no doubt see pushback from them on this. Yes, the definition of social media is broad enough to capture everything from TikTok to YouTube to, potentially, Discord and Roblox.

Additionally, the government’s ability to enforce age restrictions on global social media platforms raises significant practical and legal challenges. While Australia can pass domestic legislation requiring platforms to verify users’ ages, enforcing these rules on companies headquartered overseas is complex. Recent history shows platforms often prefer to withdraw services rather than comply with costly local regulations – consider Meta’s response to Canadian news legislation or X’s ongoing resistance to Australian eSafety Commissioner directives.

Any proposed penalties may not provide sufficient incentive for compliance, particularly given these platforms’ global revenues. Additionally, even if major platforms comply, young people could simply use VPNs to access services through other countries, or migrate to less regulated platforms beyond Australian jurisdiction.

Without international cooperation on digital platform regulation, individual countries face significant challenges in enforcing national regulations on global platforms. This raises a crucial question: will platforms invest in expensive age-verification systems for the Australian market, or will they simply restrict their services here, potentially reducing rather than enhancing digital participation options for all Australians?

What is missing from this conversation?

  1. Digital Equity: The broad scope of this ban could particularly impact:
    • Regional and remote students using these platforms for education
    • Marginalised youth who find support and community online
    • Young people using gaming platforms for social connection
  2. Privacy Trade-offs: The proposed verification systems mean either:
    • Providing ID to social media companies
    • Using facial recognition technology
    • Creating centralised age verification systems
    All of these raise significant privacy concerns – not just for teens, but for all users.
  3. Unintended Consequences: International experience shows young people often:
    • Switch to VPNs to bypass restrictions
    • Move to less regulated platforms
    • Share accounts or find other workarounds

A More Nuanced Approach

Rather than focusing solely on age restrictions, we could be:

  • Making platforms safer by design
  • Investing in digital literacy education
  • Supporting parents and educators
  • Listening to young people’s experiences
  • Learning from international approaches like the EU’s Digital Services Act

Looking Forward

While the government’s concern about young people’s online safety is valid, and is shared by researchers, families, school teachers and young people alike, the solution isn’t as simple as setting an age limit. Young people develop digital capabilities at different rates, and their resilience online often depends more on their support networks, digital literacy, and individual circumstances than their age alone.

The Centre of Excellence for the Digital Child’s research demonstrates that some young people are highly capable of identifying and managing online risks, while others need more support – regardless of age. This is particularly important when we consider:

  • Some younger teens demonstrate sophisticated understanding of privacy settings and online safety
  • Many vulnerable teens rely on online communities for crucial support
  • Digital literacy and family support often matter more than age in online resilience
  • Regional and remote youth often develop advanced digital skills earlier out of necessity

We need approaches that protect while preserving the benefits of digital participation, recognising that arbitrary age limits may not align with individual capability and need.

Such an approach better reflects the evidence while acknowledging:

  • The validity of safety concerns
  • The complexity of digital capability development
  • The importance of context and support
  • The need for nuanced policy responses

The Joint Select Committee on Social Media and Australian Society is still to deliver its final report. Perhaps it’s worth waiting for this evidence before rushing to implement restrictions that might create more problems than they solve.

EDIT: They have now released their final report, with some excellent recommendations… and no mention of an age ban.

The Bottom Line

Protection and participation aren’t mutually exclusive. We can make online spaces safer without excluding young people from digital citizenship. But it requires more nuanced solutions than age barriers alone can provide.