The Irreplaceable Human Skill: Why Generative AI Can’t Teach Students to Judge Their Own Work

A note to readers: I’m writing this in the thick of marking student submissions – the most grinding aspect of academic work. My brain fights against repetitive rote labour and goes on tangents to keep me entertained. What follows emerged from that very human need to find intellectual stimulation in the midst of administrative necessity.

There’s considerable discussion that what distinguishes us as creators and thinkers from Generative AI content production is creativity and critical thinking linked to innovation. But where does the hair actually split? Are we actually replaceable by robots, or will they atrophy our critical thinking skills by doing the work for us? Will we just get dumber and less capable of tying our own shoelaces, as most fear-based reporting suggests? I think we are asking the wrong questions.

Here is a look at what is actually going on, on the ground. A student recently asked me for detailed annotations on their assignment—line-by-line corrections marking every error. They wanted me to do the analytical work of identifying problems in their writing. This request highlights a fundamental challenge in education: the difference between fixing problems and developing the capacity to recognise them. More importantly, it reveals where the Human-Generative AI distinction becomes genuinely meaningful.

Could Generative AI theoretically teach students to judge their own work? Perhaps, through Socratic questioning or scaffolded self-assessment prompts. But that’s not how students actually use these tools. Or want to use them, apparently. A tech developer I spoke with, who works at a tutoring company using Generative AI in the teaching and learning process, mentioned that students got annoyed by the Socratic approach when they encountered it. So there goes that morsel of hope.

The Seductive Trap of Generative AI Writing Assistance

Students increasingly use Generative AI tools for grammar checking, expression polishing, and even content generation. These tools are seductive because they make writing appear better—more polished, more confident, more academically sophisticated. But here’s the problem: Generative AI tools are fundamentally sycophantic and don’t course-correct misapprehensions. They won’t tell a student their framework analysis is conceptually flawed, their citations are inaccurate, or their arguments lack logical consistency. Instead, they’ll make poorly reasoned content sound more convincing.

This creates a dangerous paradox: students use Generative AI to make their work sound rigorous and sophisticated, but this very process prevents them from developing the judgement to recognise what genuine rigour looks like. They can’t evaluate what they clearly don’t know – that their work isn’t conceptually aligned, coherently logical, or correctly interpreting sources – because the AI has dressed their half-formed understanding in authoritative-sounding language.

I have encountered several submissions across different subjects that exemplify this perfectly: beautifully written but containing fundamental errors in framework descriptions, questionable source citations, and confused theoretical applications. The prose was polished, the structure clear, but the content revealed gaps in understanding that no grammar checker could identify or fix. These students had learned to simulate the appearance of academic rigour without developing the capacity to recognise genuine scholarly quality.

Where the Hair Actually Splits

Generative AI can actually be quite “creative” in generating novel combinations of ideas, and it can perform certain types of critical analysis when clearly guided and bounded. What it fundamentally cannot do is develop the evaluative judgement to recognise quality, coherence, and accuracy in complex, contextualised work. It has no capacity for self-reflection and meaning-making (at the moment); we do.

The distinction isn’t between:

  • Generating creative output (which Generative AI can somewhat do)
  • Performing critical analysis (which generative AI can also somewhat do)

Rather, it’s between:

  • Creating sophisticated-looking content (which Generative AI increasingly excels at)
  • Judging the quality of that content in context (which requires human oversight and discernment)

Generative AI can produce beautifully written, seemingly sophisticated arguments that are conceptually flawed. It can create engaging content that misrepresents sources or conflates different frameworks. What it cannot do is step back and recognise “this sounds polished but the underlying logic is problematic” or “this citation doesn’t actually support this claim.”

The irreplaceable human skill isn’t creativity per se—it’s the capacity for metacognitive evaluation: the ability to assess one’s own thinking, to recognise when arguments are coherent versus merely convincing, to distinguish between surface-level polish and deep understanding.

What Humans Bring That AI Cannot

The irreplaceable human contribution to education isn’t information delivery—AI is increasingly able to do that pretty efficiently (although there is a lot of hidden labour in this). It’s developing the capacity for metacognitive evaluation in our students.

This happens through:

Exposure to expertise modelling: Students need to observe how experts think through problems, make quality judgements, and navigate uncertainty. This isn’t just about seeing perfect examples—it’s about witnessing the thinking process behind quality work.

Calibrated feedback loops: Human educators can match feedback to developmental readiness, escalating complexity as students build capacity. We recognise when to scaffold and when to challenge.

Critical engagement with authentic problems: Unlike AI-generated scenarios, real-world applications come with messy complexities, competing priorities, and value judgements that require human judgement, discernment and social intelligence.

Social construction of standards: Quality isn’t just individual—it’s negotiated within communities of practice. Students learn to recognise “good work” through dialogue, peer comparison, and collective sense-making.

Refusing to spoon-feed solutions: Perhaps most importantly, human educators understand when not to provide answers. When my student asked for line-by-line corrections, providing them would have created dependency rather than developing their evaluative judgement. The metacognitive skill of self-assessment can only develop when students are required to do the analytical work themselves.

The Dependency Problem

When educators provide line-by-line corrections or when students rely on Generative AI for error detection in thinking, writing or creating, we create dependency rather than capacity. Students learn to outsource quality judgement instead of developing their own ability to recognise problems.

The student who asked for detailed annotations was essentially asking me to do their self-assessment for them. But self-regulated learning—the ability to monitor, evaluate, and adjust one’s own work—is perhaps the most crucial skill we can develop. Without it, students remain permanently dependent on external validation and correction.

Teaching Evaluative Judgement in a Generative AI World

This doesn’t mean abandoning Generative AI tools entirely. Rather, it means being intentional about what we ask humans to do versus what we delegate to technology:

Use Generative AI for: Initial drafting, grammar checking, formatting, research organisation—the mechanical aspects of work.

Reserve human judgement for: Source evaluation, argument coherence, conceptual accuracy, ethical reasoning, quality assessment—the thinking that requires wisdom, not just processing.

In my own practice, I provide rubric-based feedback that requires students to match criteria to their own work. This forces them to develop pattern recognition and quality calibration. It’s more cognitively demanding than receiving pre-marked corrections, but it builds the evaluative judgement they’ll need throughout their careers.

The Larger Stakes

The question of human versus Generative AI roles in education isn’t just pedagogical—it’s about what kind of thinkers we’re developing. If students learn to outsource quality judgement to Generative AI tools, we’re creating a generation that can produce polished content but can’t recognise flawed reasoning, evaluate source credibility, or build intellectual capacity and critical reasoning skills.

This is why we need to build self-evaluative judgement in students – not just critical thinking and creative processes more broadly. The standard educational discourse about “21st century skills” focuses on abstract categories like critical thinking and creativity, but misses this more precise distinction: the specific metacognitive capacity to evaluate the quality of one’s own intellectual work.

This self-evaluative judgement operates laterally across disciplines rather than being domain-specific, and it’s fundamentally metacognitive because it requires thinking about thinking. It addresses the actual challenge students face in a Generative AI world: distinguishing between genuine understanding and polished simulation of understanding. A student might articulate sophisticated pedagogical concepts yet be unable to evaluate whether their own framework descriptions are accurate or their citations valid.

The unique human contribution isn’t delivering perfect feedback—it’s teaching students to become their own quality assessors. That capacity for self-evaluation, for recognising what makes work meaningful and rigorous, remains irreplaceably human.

In a world where Generative AI can make anyone’s writing sound professional, the ability to think critically about one’s own work becomes more valuable, not less. That’s the expertise that human educators bring to the table—not just knowing the right answers, but developing in students the judgement to recognise quality thinking when they see it, including in their own work.

The Tyranny of Academic Fluff: Why Word Limits Matter

Students push back hard against word constraints. They want room for elaborate introductions, extensive background sections, and careful hedging that transforms “Research shows X” into “It is important to note that extensive research clearly demonstrates that X may be considered significant in certain contexts.”

I’m done with it.

The Problem with Academic Padding

Every semester I read hundreds of assignments where students bury their insights under layers of unnecessary qualification and hyperbole. They write “It can be argued that this particular approach might potentially offer some benefits” instead of “This approach works.” They transform concrete evidence into abstract speculation.

This isn’t sophisticated analysis. It’s fear disguised as scholarship.

Students learn this defensive writing in response to academic culture that rewards hedging over clarity. But defensive writing serves no one. It asks readers to excavate meaning from prose designed to avoid commitment to any particular position.

When Embellishment Serves Purpose

Creative fiction earns its elaborate descriptions. When a novelist spends paragraphs on a character’s stream of consciousness, every word builds character depth and emotional resonance. Fiction writers choose vivid detail because it serves story and connection.

Academic writers often mistake ornamentation for sophistication. But their audience isn’t seeking emotional transport – they need information, analysis, and conclusions they can apply. Different purposes require different approaches to word choice.

The Reader’s Contract

Professional writing establishes an implicit contract with readers: your time invested will yield understanding proportional to effort required. Verbose academic prose violates this contract by demanding excessive cognitive load for minimal informational return.

Word limits force writers to honour this contract. When you can’t pad your argument, you must strengthen it. When you can’t hedge every claim, you must support claims with evidence. When you can’t elaborate endlessly, you must choose your most compelling points.

The Discipline of Constraint

Constraint breeds creativity. Poets working within sonnets discover language precision that free verse might not demand. Academic writers working within word limits develop clarity skills that unlimited space cannot teach.

Clarity takes work. It is the writer’s labour to do that work, not to lazily leave readers to wrestle with the prose. Offloading it is both an abdication of responsibility and a lost opportunity.

Students resist word limits because constraints feel restrictive. But constraint creates power. Every unnecessary word removed makes remaining words more impactful. Every redundant phrase eliminated sharpens the argument.

Professional Stakes for Educators

Education professionals write policy recommendations, grant applications, and research reports. Teachers in schools handle parent communications, behaviour management plans, and learning support documentation. None of these contexts tolerate verbose exploration of tangential considerations.

Principals need clear implementation strategies, not elaborate theoretical frameworks. Parents need actionable guidance about their child’s progress, not comprehensive literature reviews. Grant reviewers need compelling justifications, not exhaustive background summaries.

Preservice teachers who master concise communication develop professional advantages. Their policy recommendations get implemented. Their grant applications get funded. Their research gets cited. Teachers in schools who communicate clearly build stronger parent partnerships and more effective student support plans.

Beyond Academic Performance

Clear communication shapes democratic discourse. Citizens navigating complex policy decisions need accessible analysis, not impenetrable academic jargon. Teachers explaining educational approaches to parents need precision, not qualification-laden hedging.

The stakes extend beyond individual career success. Public understanding of educational issues depends partly on whether education professionals can communicate clearly with non-specialist audiences.

The Path Forward

Word limits teach editorial discipline. Students must choose their strongest evidence, eliminate weak arguments, and commit to defensible positions. This process transforms tentative scholars into confident professionals.

Yes, students initially struggle with constraints. They’ve learned that more words signal more effort, that elaborate qualification demonstrates intellectual sophistication. But professional communication rewards clarity over complexity, precision over padding.

Word limits aren’t punishment – they’re preparation for professional contexts where clear communication determines outcomes. Students who master this skill shape educational policy, influence public understanding, and serve their communities more effectively.

The constraint teaches compassion for readers and respect for language as a tool of connection rather than obfuscation.

When Research Becomes “Big Tech Talking Points”: The Erosion of Good-faith Discourse on Social Media Regulation

As a sociologist of technology and educator focused on digital literacy, I’ve spent years working with research on the complex relationship between young people and social media. Recently, I found myself in an online discussion that exemplifies a troubling pattern in how we debate digital policy issues in Australia.

After I shared peer-reviewed research showing that while some correlations exist between social media use and mental health outcomes, there is limited evidence supporting a causal relationship where social media directly causes poor mental health or reduced wellbeing, I was quickly labelled as someone “shilling” for “Big Tech”, with my evidence-based positions dismissed as “talking points”.

Research points to how individuals with existing mental health challenges may gravitate toward certain types of social media use, rather than social media itself being the primary cause of these challenges. This important distinction highlights how nuanced research gets flattened into simplistic positions when policy discussions become emotionally charged.

The False Binary: Protect Kids or Support Big Tech

The current discourse around Australia’s social media age ban has created a false dichotomy: either you support sweeping restrictions or you’re somehow against protecting children. This reductive framing leaves no room for evidence-based approaches that aim to both protect young people and preserve their digital agency.

When I cite studies showing that social media use accounts for only 0.4% of the variance in well-being – findings published in reputable journals – these aren’t “industry talking points”. They’re research conclusions reached through rigorous methodology and peer review. As noted in a recent Nature article, the evidence linking social media use to mental health issues is far more equivocal than public discourse suggests.

Just look at what the research actually says: “An analysis of 3 data sets, including 355,000 adolescents, found that the association between social media use and well-being accounts for, at most, 0.4% of the variance in well-being, which the authors conclude is of ‘little practical value’. Another large study of adolescent users concluded that the association was ‘too small to merit substantial scientific discussion’. A longitudinal study that measured social media use through an app installed on participants’ mobile devices found no associations between any measures of Facebook use and loneliness or depression over time.”
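
As a rough way to put that “0.4% of the variance” figure in perspective (my own back-of-the-envelope reading, assuming a simple bivariate association rather than the exact models used in those studies), variance explained converts to a correlation coefficient like this:

```latex
% Converting variance explained (R^2) into a correlation coefficient r,
% assuming a simple bivariate association (an illustrative simplification)
R^2 = 0.004 \quad\Longrightarrow\quad r = \sqrt{0.004} \approx 0.063
```

By conventional benchmarks, where a correlation of roughly 0.10 is usually treated as a “small” effect, an association of this size sits below what researchers typically regard as practically meaningful, which is consistent with the authors’ conclusion that it is of “little practical value”.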

The current push for age bans in Australia reveals concerning patterns in how policy is developed. Australian researchers have pointed out that much of the momentum behind these restrictions can be traced directly to Jonathan Haidt’s book “The Anxious Generation”, which has become influential despite its claims being disputed by experts at prestigious institutions like the London School of Economics. As Dr. Aleesha Rodriguez from the ARC Centre of Excellence for the Digital Child has observed, books that capitalise on parental anxieties should not drive national policy decisions, especially when they bypass evidence-based approaches and committee recommendations. The government’s announcement of social media age restrictions came before the Joint Select Committee on Social Media and Australian Society had even issued its interim report, raising questions about the role of evidence in this policy development process. The final report, released on 18 November 2024, did not recommend the implementation of age bans.

The Power of Emotional Appeals vs. Research Findings

But in our current climate, sharing such research and insights is met with accusations of being “in the pockets of Big Tech” or having “industry interference” – rhetorical devices designed to discredit without engaging with the substance of the evidence. This pattern of discourse relies heavily on emotional appeals and anecdotes to overwhelm research findings. “Children’s wellbeing (and lives) are at stake,” advocates declare, implying that questioning the effectiveness of age bans is equivalent to devaluing children’s safety.

These emotional appeals are powerful because they tap into genuine parental anxieties. In their public communications, advocates may employ evocative language (“stranglehold,” “insidious,” “shame on them all”) and frame the debate as a moral binary: either you support age bans or you’re effectively siding with “Big Tech” against children’s interests. This rhetorical approach creates a false dichotomy where nuanced research positions are dismissed as “industry talking points” without engaging with the substance of the evidence.

By contrast, research on children’s digital experiences draws on diverse empirical methods—including large-scale surveys, in-depth qualitative studies, longitudinal tracking, and co-design work with children themselves. This comprehensive approach captures a wide range of social experiences across different demographics and contexts. Such research undergoes rigorous peer review, requiring methodological transparency and critical evaluation before publication.

Importantly, the research landscape itself contains diverse perspectives and interpretations. Even within academic disciplines studying digital youth, researchers may disagree about the significance of findings, methodological approaches, and policy implications. Some researchers emphasise potential harms and advocate for stronger protections, while others highlight benefits and concerns about digital exclusion. This diversity of expert opinion reflects the complex nature of children’s digital engagement rather than undermining the value of research-informed approaches.

What most researchers do agree on is that the evidence doesn’t support simplistic narratives. The findings indicate that while correlations exist between social media use and well-being, many other factors play more significant roles, and the relationships are often bidirectional and context-dependent.

Policy decisions affecting millions of young Australians deserve more than anxiety-driven responses – they require careful consideration of evidence, unintended consequences, and alternative approaches that address both the genuine concerns of parents and the established digital rights of children.

When Nuance Gets Lost: The Digital Duty of Care Example

The irony is that I and many researchers share the same core concern as advocates: we want digital environments that are safer for young people. Where we differ is in how to achieve this goal effectively.

Australia’s proposed Digital Duty of Care bill, which has received far less media attention than the age ban, represents a more evidence-based approach to improving online safety. Its much slower movement through parliament is also telling. It focuses on making platforms safer by design rather than simply restricting access.

This legislation, developed through extensive consultation and aligned with comparable measures in the UK and EU, places responsibility on platforms to proactively prevent online harms. Yet because it lacks the emotional appeal of “keeping kids off social media”, it hasn’t captured public imagination in the same way.

I support making digital environments safer for young people. In line with the intention of this policy, research suggests this is better accomplished through platform design requirements, digital literacy education, and appropriate safeguards rather than blanket age bans that may create unintended consequences.

The Overlooked Complexities

Lost in the simplified discourse are crucial considerations that research brings to light:

  1. Digital equity concerns: Age restrictions disproportionately impact young people in regional and remote areas who rely on social media for educational resources and social connection.
  2. Support for marginalised youth: For many LGBTQI+ young people and others who feel isolated in their physical communities, online spaces provide crucial support networks.
  3. Technical realities: The age verification technologies being proposed have significant technical limitations, with biometric age estimation showing concerning accuracy gaps for young teenagers and disparities across demographic groups.
  4. Platform compliance challenges: As we’ve seen with Meta’s pushback against EU regulations, we can’t assume platforms will simply comply with national regulations they see as burdensome for smaller markets.
  5. Educational implications: Schools face significant challenges in navigating restrictions that could inadvertently disrupt established educational practices that use social media platforms.

These complexities matter, not because they invalidate safety concerns, but because addressing them is essential to developing effective policy that truly serves young people’s interests.

Unintended Consequences of Age Verification Systems

A significant oversight in the age ban debate is how age verification technologies will inevitably impact all users—not just children. The government’s Age Assurance Technology Trial, while focused on “evaluating the effectiveness, maturity, and readiness” of these technologies, does not adequately address the far-reaching implications for adult digital access.

These systems, once implemented, create barriers for everyone—not just children. Adults who lack standard government-issued ID, have limited digital literacy, use shared devices, or have privacy concerns may find themselves effectively locked out of digital spaces. This particularly affects already marginalised groups: elderly people, rural and remote communities, people with disabilities, individuals from lower socioeconomic backgrounds, and those with non-traditional documentation.

Age verification systems that rely on biometric data, ID scanning, or credit card verification raise serious privacy concerns that extend well beyond children’s safety. Once these surveillance infrastructures are established for “protecting children,” they create permanent digital checkpoints that normalise identity verification for increasingly basic online activities. The same parents advocating for these protections may not anticipate how these systems will affect their own digital autonomy and privacy.

Moreover, the technical limitations of age verification technologies create a false sense of security. Current systems struggle with accuracy, particularly for users with certain disabilities, those from diverse ethnic backgrounds, or individuals whose appearance doesn’t match algorithmic expectations. Rather than creating safe digital environments through design and platform responsibility, age verification shifts the burden to individual users while potentially exposing their sensitive personal data to additional security risks.

Children’s Rights in the Digital Environment

What’s frequently missing from this debate is recognition of children’s established rights in digital spaces. The UN Committee on the Rights of the Child’s General Comment No. 25 (2021) specifically addresses children’s rights in relation to the digital environment. This authoritative interpretation clarifies that children have legitimate rights to:

  • Access information and express themselves online (Articles 13 and 17)
  • Privacy and protection of their data (Article 16)
  • Freedom of association and peaceful assembly in digital spaces (Article 15)
  • Participation in cultural life and play through digital means (Article 31)
  • Education that includes digital literacy (Article 28)

The UN framework emphasises that the digital environment “affords new opportunities for the realization of children’s rights” while acknowledging the need for appropriate protections. It specifically notes that children themselves report that digital technologies are “vital to their current lives and to their future.”

This rights-based framework fundamentally challenges the premise that children should simply be excluded from digital spaces until they reach an arbitrary age threshold. Instead, it calls for balancing protection with participation and recognising children’s evolving capacities.

The Australian Context

In Australia, the digital rights of children are recognised and protected, encompassing privacy, safety, and access to information, with organisations like the eSafety Commissioner and the Alannah & Madeline Foundation playing key roles in advocacy and research. 

Here’s a more detailed breakdown of the digital rights of children in Australia:

Key Rights and Protections: 

  • Privacy: Children have the right to privacy in the digital environment, which is protected by the Privacy Act 1988. 
  • Safety: The eSafety Commissioner works to protect children from online harms like cyberbullying, grooming, and exposure to harmful content. 
  • Access to Information: Children have the right to access reliable and age-appropriate information online. 
  • Freedom of Expression: Children have the right to express themselves online, but this right must be balanced with the need to protect them from harm. 
  • Participation: Children have the right to participate in online activities and to have their views heard, especially in matters that affect them. 

Relevant Organisations and Initiatives: 

  • eSafety Commissioner: This government agency is responsible for promoting online safety and protecting children from online harms. 
  • Alannah & Madeline Foundation: This organisation advocates for children’s rights online and works to create a safer online environment for children. 
  • Australian Research Council Centre of Excellence for the Digital Child: This research centre focuses on creating positive digital childhoods for all Australian children. 
  • UNCRC General Comment No. 25: This document outlines the rights of the child in relation to the digital environment and provides guidance for governments and other actors. 
  • The Digital Child: A research and advocacy organisation focused on children’s digital rights and wellbeing. 
  • UNICEF Australia: Collaborates with the Digital Child centre to promote digital wellbeing for young children. 
  • Digital Rights Watch: An organisation that works to ensure fairness, freedoms and fundamental rights for all people who engage in the digital world.

Key Issues and Challenges: 

  • Online Safety: Protecting children from online harms like cyberbullying, grooming, and exposure to harmful content is a major concern. 
  • Privacy: Balancing the need to protect children’s privacy with the need for parents and caregivers to monitor their online activity is a complex issue. 
  • Age Verification: Ensuring that children are not exposed to age-inappropriate content and that they are not targeted by online services is important.
  • Misinformation and Disinformation: Children are vulnerable to misinformation and disinformation online, and it’s important to equip them with the skills to identify and avoid it. 
  • Technology-Facilitated Abuse: Children can be victims of technology-facilitated abuse (TFA) in the context of domestic and family violence, and it’s important to address this issue. 
  • Parental Rights vs. Children’s Privacy: The extent to which parents can monitor their children’s online activity is a complex issue with legal implications. 
  • Digital Literacy: It’s important to support digital literacy initiatives that encourage and empower children to take further responsibility for their online safety. 

Alternative Approaches: A Better Children’s Internet

Australian researchers are offering a more constructive approach to online safety than blanket age restrictions. In a timely article, researchers from the ARC Centre of Excellence for the Digital Child explain that while they understand the concerns motivating the Australian Government’s decision to ban children under 16 from creating social media accounts, they believe this approach “undermines the reality that children are growing up in a digital world”.

They have developed a “Manifesto for a Better Children’s Internet” that acknowledges both the benefits and risks of digital engagement while focusing on practical improvements. They argue that “rather than banning young people’s access to social media platforms, the Australian Government should invest, both financially and socially, in developing Australia’s capacity as a global leader in producing and supporting high-quality online products and services for children and young people.”

Their framework includes several key recommendations:

  • Standards for high-quality digital experiences – Developing clear quality standards for digital products and services aimed at children, with input from multiple stakeholders including children themselves.
  • Slow design and consultation with children – Involving children and families in the design process rather than using them as “testing markets” for products and services.
  • Child-centered regulation and policy – Creating appropriate “guardrails” through regulatory guidelines developed with input from children, carers, families, educators and experts.
  • Media literacy policy and programs – Investing in media literacy education for both children and parents to develop the skills needed to navigate digital environments safely and productively.

This approach acknowledges that the internet “has enhanced children’s lives in many ways” while recognising it “was not designed with children in mind.” Rather than simply restricting access, it focuses on redesigning digital spaces to better serve young people’s needs and respecting their agency in the process.

This framework offers a promising middle path between unrestricted access and blanket prohibitions, focusing on improvement rather than exclusion.

Moving Forward: Good faith engagement

What would a more productive discourse look like? Rather than dividing positions into “protectors of children” versus “Big Tech shills,” we need approaches that:

  • Recognise children’s established rights: Digital policy should acknowledge children’s legitimate rights to information, expression, association, privacy, and participation as articulated in the UN Convention on the Rights of the Child.
  • Engage with the full evidence base: This includes both research on potential harms and studies showing limited correlations or positive benefits, with a commitment to understanding the methodological strengths and limitations of different studies.
  • Center young people’s voices: The young people affected by these policies have valuable perspectives that deserve genuine consideration, not dismissal as naive or manipulated.
  • Acknowledge trade-offs: Every policy approach involves trade-offs between protection, privacy, and participation rights. Pretending otherwise doesn’t serve anyone.
  • Focus on effective solutions: Research suggests a combination of platform design improvements, digital literacy education, and more nuanced moderation systems may be more effective than simply setting age limits.
  • Maintain good faith dialogue: Rather than using emotional appeals and moral accusations to shut down debate, all participants should approach these discussions with the genuine belief that others share the concern for children’s wellbeing, even when they disagree about methods.

This approach would move us beyond simplistic binaries and rhetorical tactics toward policies that genuinely serve children’s best interests in all their complexity.

I remain committed to research-informed approaches to making digital spaces safer for young people. This doesn’t mean blindly defending the status quo, but rather advocating for solutions that address the real complexities of young people’s digital lives while respecting their established rights.

The Digital Duty of Care legislation offers a promising framework that places responsibility on platforms to make their services safer for all users through design choices, risk assessment, and mitigation strategies. Combined with robust digital literacy education and appropriate parental controls, this represents a more comprehensive approach than age restrictions alone.

As the social media landscape continues to evolve, maintaining evidence-based discourse matters more than ever. Dismissing research as “talking points” doesn’t advance the conversation – it closes it down just when we need it most.

Young Australians deserve digital policies crafted through careful consideration of evidence, informed by young people’s perspectives, and grounded in their established rights. That’s not a “Big Tech talking point” – it’s responsible, ethical policymaking that centres the needs and interests of the very people these policies aim to serve.

Navigating the Crossroads: GenAI, Youth Online Safety, and the Future of Web3

Do you feel like we’re at a crossroads in what the internet is and how we want it to be in the future? But really, I feel like we are down in the weeds, trying to thrash out the details on a minute-by-minute basis.

Artificial intelligence is argued to be reshaping our digital landscape, with its output usefully referred to as synthetic media. That stuff is surreal. But sometimes cool. Like, isn’t it funny that you could take this post and ask a GenAI tool to make it spookier, or turn it into a fairy tale? Please feel free.

There are some interesting questions that it gives rise to. For example, how much of our online content is actually going to have any link to our material realities, and at what point will it start consuming itself? … and us along with it.

Meanwhile governments continue to grapple with “old” media formats of Web 2.0 and protecting youth online (a risk versus harm debate as danah boyd usefully points out). The intersection of technology and society has never been more complex or consequential. As we stand at this pivotal point, let’s ensure that we are spicing up our opinions about policy and emerging tech trends with expert perspectives.

A shocking perspective, I know. It’s all very emotive, political and important to talk about keeping our kids safe online; however, I just wanted to flag a few things. In the debate around the social media ban for children currently being bandied around by the Australian government, I have appreciated the informed commentary by academics and advocates Tama Leaver, Johnathon Hutchinson and Justine Humphry. If you want a genuinely balanced perspective, they offer it. Just remember that children have digital rights too … and also that if the ban is not enforceable, what impact will it actually have?

For myself, I’ve spent the last year putting all my writing energy into a Web3 case study that unpacks what people care about in the online environment and what the implications of this are for the future of the internet. You’ll be able to read all about this from November in my forthcoming book “Insider and Outsider Cultures in Web3” with Emerald. It was a labour of love and is essentially my wrap-up of the last 10 years of research practice talking blockchain, crypto and decentralised technologies pushing at our digital frontiers.

More on this later; this is just a taster post to say ‘still kicking here’. But I’m probably a bit too busy looking at the impacts of GenAI tools in education and in our schools.