The social media ban just changed what it’s actually for — and almost nobody noticed

I’ve been tracking Australia’s social media regulation landscape for a long time. Not just since the ban passed in November 2024 — but through the age assurance technology trials, the industry code consultations, the evidence debates, the Summit that wasn’t really a Summit. Every few months something happens that brings the public conversation back to this space. This week was one of those moments. But what landed in the news cycle wasn’t the most important thing that happened. So I want to explain what was.

Girl using phone with digital casino slot machine showing text CASINO, SPIN, 777, and AGRTUL INSTAL.
Image: generated with AI

What everyone is talking about

This week, eSafety published its first compliance report on Australia’s Social Media Minimum Age obligation. Five platforms — Facebook, Instagram, Snapchat, TikTok and YouTube — are under investigation for potential non-compliance. The Commissioner is moving into an enforcement stance. Fines of up to $49.5 million are on the table.

That’s the story most outlets covered. It’s a real story. But it’s the surface.

What happened underneath

Six days before that report landed, the Minister for Communications quietly registered a new legislative instrument — the Online Safety (Age-Restricted Social Media Platforms) Amendment Rules 2026 (F2026L00370, 25 March 2026) — that adds two new conditions to the definition of an age-restricted social media platform. To fall under the ban, a platform must now also have either or both of:

  • A recommender feature: algorithms that select and display content based on a user’s account information
  • A logged-in feature: endless-feed features, feedback features such as likes and upvotes, or time-limited features such as disappearing stories

In plain language: infinite scroll, algorithmic recommendation, and social feedback loops are now formally written into the legal definition of what makes a platform harmful to children.

This attracted almost no media coverage. It should have. Because it signals something fundamental — the intellectual foundation of the ban has quietly shifted.

Two trials that influence everything

To understand why this matters, you need to know what else happened this week.

On 24 March, a New Mexico jury found Meta had violated state consumer protection law — finding 75,000 individual violations and ordering $375 million in penalties. The case arose from an undercover operation in which investigators created accounts posing as users under 14, who then received explicit material and were contacted by adults seeking similar content. The jury found Meta knowingly engaged in unfair and deceptive trade practices and exploited users’ lack of knowledge. A second phase in May will consider ordering Meta to change its platforms.

Then, in the same week, a Los Angeles jury found Meta and YouTube liable in a landmark addiction case. The plaintiff — now 20 — began using YouTube at six and Instagram at nine. The jury found that design choices including infinite scroll were made deliberately to maximise engagement in developing brains, borrowing from the behavioural techniques of poker machines and the cigarette industry. Meta was found 70% responsible, Google 30%. TikTok and Snap settled before the trial began.

Two separate juries. Two separate legal theories. Two separate verdicts. Both pointing at the same thing: these platforms were designed to exploit users, and the companies knew it.

The Australian legislative instrument and the US jury verdicts are, in effect, saying the same thing in the same week.

This is a design problem. The harm is in the architecture.

Why this matters for the ban

The Australian social media ban was built on a different argument entirely. It was passed on a mental health narrative — driven substantially by Jonathan Haidt’s Anxious Generation thesis that social media is the primary cause of the youth mental health crisis. That causal claim was already being contested in the peer-reviewed literature at the time of enactment.

I know this because in May 2025, my colleagues and I published analysis in The Conversation predicting exactly the compliance failures eSafety has now confirmed — and we were drawing on a literature that had been raising these concerns for years.

Most recently, a major longitudinal study published in the Journal of Public Health this month — Cheng et al., following 25,629 adolescents across three years — found no evidence that social media use predicted later anxiety or depression in either girls or boys. That is among the strongest findings the literature has produced on this question.

And yet eSafety is escalating enforcement of a ban whose foundational causal claim remains unestablished. That is a significant governance concern.

But here is what the March 2026 rule changes: by writing recommender algorithms and endless-feed features into the legal definition, the Minister has effectively acknowledged that the mental health narrative was never quite the right framing. The harm is in the design — the deliberate engineering of compulsive use. Arguably, that causal claim no longer needs to carry the full weight of the ban’s legitimacy. The government has moved on from it. Without saying so.

eSafety’s own data confirms the point

If design is the problem and accounts are merely the delivery mechanism, we would expect the harm measures to be unchanged by an accounts-based ban. That is exactly what the compliance report shows.

Buried on page 15, in the complaints section: there has been no discernible drop in cyberbullying and image-based abuse complaints from children under 16 in January and February 2026 compared to the same period in 2025.

That is the direct harm measure. The one the ban was designed to move. It hasn’t moved.

Because the harm is in the design. And the design hasn’t changed.

The legislation that should have been passed

Here is where I get genuinely frustrated. And I think the public should too.

Four days before the social media ban passed through parliament — in 48 hours, with a 24-hour public submission period, in the last sitting week before a federal election — independent Member for Goldstein Zoe Daniel introduced the Online Safety Amendment (Digital Duty of Care) Bill 2024.

I have been watching this space for long enough to recognise good policy design when I see it. Daniel’s bill was good policy design.

It required large platforms to conduct and publish risk assessments of their recommender systems and algorithmic systems specifically. It required risk mitigation plans that included changing design features, testing algorithmic systems, and modifying recommender systems. It required annual transparency reports covering design features and children’s access metrics. It gave researchers access to platform data — something academics working in this space have been asking for for years. It allowed users to opt out of engagement-based recommender systems and targeted advertising. It made key personnel personally liable for failures.

And it set penalties proportionate to revenue: the greater of 100,000 penalty units or 10% of annual turnover. For Meta globally that figure would be in the billions. For TikTok Australia — with revenue of $679 million in 2024 — it would be approximately $68 million. Compare that to the ban’s flat cap of $49.5 million, which represents roughly seven per cent of TikTok’s annual local revenue — less than a month of local takings. As I’ve said publicly: for the largest companies, the calculation is not whether to comply but whether the cost of genuine compliance exceeds the cost of the fine.
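To make the comparison concrete, here is a rough back-of-envelope sketch using only the figures quoted above; the dollar value of a Commonwealth penalty unit is my assumption (around $330 in late 2024) and is illustrative rather than authoritative.

```python
# Rough comparison of the two penalty regimes, using figures quoted in this piece.
# PENALTY_UNIT_AUD is an assumed value (~A$330 in late 2024), for illustration only.

PENALTY_UNIT_AUD = 330
tiktok_au_revenue = 679_000_000   # TikTok Australia revenue, 2024 (as cited above)
ban_flat_cap = 49_500_000         # flat maximum fine under the minimum-age obligation

# Daniel's bill: the greater of 100,000 penalty units or 10% of annual turnover
duty_of_care_penalty = max(100_000 * PENALTY_UNIT_AUD, 0.10 * tiktok_au_revenue)

print(f"Flat cap:               ${ban_flat_cap:,} "
      f"(~{ban_flat_cap / tiktok_au_revenue:.0%} of TikTok's annual local revenue)")
print(f"Revenue-linked penalty: ${duty_of_care_penalty:,.0f} "
      f"(~{duty_of_care_penalty / tiktok_au_revenue:.0%} of TikTok's annual local revenue)")
```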

Daniel’s bill lapsed at dissolution on 28 March 2025 when the federal election was called. She lost her seat in Goldstein.

What the political record shows

The ban that passed instead was never really about the evidence. Academic researcher Amanda Third’s chapter in The Public Child (Palgrave, 2025), drawing on FOI correspondence between the South Australian Premier’s office and Jonathan Haidt, documents that the Social Media Summit — jointly hosted by the SA and NSW Premiers in October 2024 — was explicitly designed to “build momentum and support for national legislation to enforce a minimum age for access to social media.” Not to gather evidence. Not to deliberate. To build political momentum for a decision already made.

The eSafety Commissioner, meanwhile, repeatedly declined to endorse the proposal, pointing instead to the suite of design-focused regulatory work already underway — including the very framework that Daniel’s bill would have legislated.

The ban passed. Daniel’s bill lapsed. And now, fifteen months later, the government has quietly written two of Daniel’s core concepts — recommender features and endless-feed features — into a ministerial instrument, without the transparency requirements, without the proportionate penalties, without researcher data access, without personal liability for executives, and without any public acknowledgment of what it is doing.

The Duty of Care that’s still waiting

There is one more piece to this picture. The government completed consultation on a Digital Duty of Care in December 2025 — three days before the ban took effect. That consultation closed. The legislation has not been introduced.

The Duty of Care is the instrument that would actually address the design harm problem. It would require platforms to take reasonable steps to prevent foreseeable harms, shifting responsibility from individuals to platforms. It is the instrument the Commissioner’s regulatory work was always pointing toward.

It is sitting unintroduced while the accounts-based ban is being enforced.

The unintended consequences nobody planned for

Guardian Australia’s technology reporter Josh Taylor has documented several unintended consequences of the ban that reinforce the design argument. Most striking: teenagers who have managed to bypass age checks are no longer given the safety features platforms built specifically for teen accounts — because their account now appears to belong to an adult.

The ban has inadvertently stripped the most vulnerable users of the very protections designed for them. Taylor also revealed that the federal government’s anti-vaping campaign targeting teenagers had to be diverted away from the banned social media platforms to gaming and audio platforms — on the same day research found vaping could cause cancer. These are not teething problems. They are structural consequences of an accounts-based approach that doesn’t touch the underlying architecture.

What this means for children

I want to be clear about something. I am not saying the ban is simply wrong. Children have been exposed to genuine harms on these platforms — harms that two US juries have now confirmed the companies knew about and chose not to adequately address.

But children also have digital rights — to participate, access information, connect, learn and create. The UN Convention on the Rights of the Child, to which Australia is a signatory, affirms those rights explicitly in digital environments.

The slot machine architecture of social media is a genuine harm to children. The evidence — now including two jury verdicts and a growing body of peer-reviewed research — supports that framing. But children who turn 16 tomorrow will walk from total exclusion into unrestricted access to the same unreformed platforms, with no graduated pathway, no enhanced digital literacy, and no legal requirement on platforms to have changed the design features that caused the harm in the first place.

The ban delayed the exposure. It did not address the cause.

The week everything converged

In the same week: a legislative rule acknowledged design harm. Two US juries found liability for platform design and content failures. A compliance report showed the harm measure hasn’t moved. And a major peer-reviewed study confirmed the mental health causal claim the ban was built on remains unestablished.

The intellectual foundation of the ban has shifted — from an unproven mental health argument to a design harm argument the evidence actually supports. That shift is real and it matters.

But the instrument that would have acted on it died when its sponsor lost her seat in an election the ban was designed to win.

I’ve been watching this space for a long time. This week, everything that was always true about it became undeniable. I hope the public — and policymakers — are paying attention.


Somebody to Love: What AI Relationships Reveal About Us

It’s late. Maybe 11pm, maybe 2am. There’s something on your mind — something you can’t quite say out loud to anyone who knows you. So you pick up your phone. And you type it. Not to a friend. To an AI.

Something responds. Immediately. Without judgment. Without needing anything back from you.
For a lot of people, in that moment, that feels like relief.

I’m a sociologist of technology. I study how people navigate digital frontiers — how humans and technologies shape each other over time. And the question I keep returning to isn’t the one dominating the headlines about AI companions. It’s simpler, and harder: what is it giving you that you’re not getting elsewhere?

The scale of what’s happening

AI companion apps — platforms like Character.AI, Replika, and others designed to provide friendship, emotional support, or romantic companionship — have moved quickly from novelty to mainstream. Early US survey data, while varying in methodology, is beginning to suggest that somewhere between one in five and one in four American adults report some form of intimate or romantic engagement with an AI companion. These are early figures from a rapidly evolving field, but the direction is clear: this is not a fringe phenomenon.

In Australia, the picture is coming into focus for children specifically. This week, Australia’s eSafety Commissioner released findings from a transparency investigation into four AI companion services popular with Australian children — Character.AI, Nomi, Chai, and Chub AI. Their survey of 1,950 Australian children aged 10 to 17, designed to be demographically representative, found that around 79% had used an AI companion or assistant. It’s worth noting that this figure reflects children who are digitally included enough to access these services — we’ll return to that complexity.

What the investigation found in those platforms is sobering. Most did not refer users to crisis support when self-harm or suicide came up in conversations. Two of the four companies had no dedicated trust and safety staff at all. None had robust age verification. One company withdrew from Australia entirely rather than comply with the new Age-Restricted Material Codes that came into law in March 2026.

But I want to sit with a different question before we reach for regulatory responses. Because the children going to these platforms aren’t doing so because they’re naive. They’re doing so because something is drawing them there. And understanding what that something is matters more than we’ve so far acknowledged.

What we are hungry for

A 2025 systematic review published in Computers in Human Behavior Reports synthesised 23 studies on romantic and intimate AI relationships (Ho et al., 2025). Using Sternberg’s Triangular Theory of Love — the psychological framework that measures intimacy, passion, and commitment in human relationships — the researchers found that people experience all three components with AI companions. This isn’t pretend attachment. The brain chemistry doesn’t distinguish.

What are people actually looking for in these interactions? The research points to several distinct and deeply human hungers.

To be heard without consequence. Human relationships are full of consequence. When you tell a friend you’re struggling, they worry. When you tell a partner you’re unhappy, it becomes about the relationship. The AI companion offers something almost no human relationship provides: a space where you can say the unsayable thing and nothing breaks.

Full attention. When did you last have someone’s complete, undivided attention? Full attention is perhaps the scarcest resource in contemporary life. Everyone is overwhelmed. And here is something that treats every single thing you say as worth responding to fully.

To be understood without performing. Modern social life requires constant impression management. The AI companion asks nothing of you socially. You can be unpolished, contradictory, and confused — and the system meets you there.

Unconditional positive regard. The psychologist Carl Rogers identified this as one of the core conditions for psychological growth — to be accepted fully, without conditions. The AI never withdraws approval. For someone who has experienced conditional love or abandonment, this is extraordinarily seductive.

None of these needs are pathological. They’re the most human needs there are. As researchers Shank, Koike, and Loughnan wrote in a 2025 paper in Trends in Cognitive Sciences, AI companions offer “a relationship with a partner whose body and personality are chosen and changeable, who is always available but not insistent, who does not judge or abandon, and who does not have their own problems.” Reading that description, it’s worth asking honestly: who hasn’t wished for something like that?

What gets lost in translation

The same body of research is clear that something is also being lost. Ho et al. found that the pitfalls identified in the literature outnumber the benefits — and the pitfalls are specific.

AI companions cannot be genuinely changed by you. Real intimacy involves mutual transformation — I am different because of you, you are different because of me. The AI processes you and responds to you, but it is not altered by the encounter. You grow; it doesn’t.

They cannot need you back. One of the underappreciated sources of meaning in human relationships is being needed — the experience of your presence mattering to another person’s actual wellbeing. The AI is available whether you show up or not.

And they cannot repair rupture with you. One of the most important things human relationships teach — particularly for children — is that connection can break and be repaired. The AI companion never ruptures in a real way. There’s nothing to repair. And so the crucial relational skill of tolerating difficulty, trusting repair, staying in complex connection, never gets practised.

These systems are very good at being mirrors. They learn your preferences and give you more of what you seem to want. But a diet of only mirrors eventually makes you smaller — because the irreducible otherness of another actual person, the way they confound your model of them, is what expands you.

Who is in this picture — and who isn’t

Here the story gets more complicated, and more important.

Australia’s 2025 Digital Inclusion Index tells us that around one in five Australians is digitally excluded — lacking reliable access, unable to afford adequate connection, or without the skills to participate safely in digital life. Rates are much higher for older Australians, people in public housing, First Nations communities, and those who didn’t complete secondary school. The 79% of children using AI companions or assistants are drawn from those who are digitally included enough to access these platforms. The most disadvantaged children are largely absent from that figure.

But here is what complicates any simple narrative about AI companionship as an affluent urban phenomenon: the same Digital Inclusion Index found that Australians in remote areas are more than twice as likely to use AI chatbots for social connection as people in metropolitan areas — around 19% of remote GenAI users compared to under 8% in cities. In the places with the least human connection infrastructure, people are turning to AI companionship at higher rates.

The relational vacuum, in other words, is not uniform. It is shaped by geography, income, age, and the presence or absence of community infrastructure. The people most likely to turn to AI for connection are often those with the fewest alternatives.

The question that matters

The technology didn’t create the gap in human connection. It found it.

And so the digital literacy question I want to put into public conversation isn’t only about understanding algorithms or data privacy — though both matter. It’s this: am I getting what I actually need from this? Or am I getting a version of it that’s making it harder to get the real thing?

That’s a question worth sitting with. Not with judgment — the needs underneath these relationships are real and the loneliness driving them is real. But with genuine curiosity about what we’re building toward, individually and collectively, as these technologies become more sophisticated and more intimate.

I’ll be exploring these questions at Pint of Science on the night of 20 May 2026 at the Queens Arms, Bendigo — a pub conversation about AI intimacy, human hunger, and digital literacy. I’d love to hear your reflections before then.

What are you getting from these technologies that you’re not getting elsewhere?

The AI Revolution Will Be Interoperable (Or It Won’t Happen At All)

Today I’m getting teaching materials ready for semester. I’ve been working across Allocate (timetabling), student databases, the LMS (which just got upgraded, so I now need to check all my links), HR performance systems, SharePoint, Word for collaborative writing, Claude and Preview to generate infographics, spreadsheets with prospective student data, and bouncing between Teams, Zoom, and Webex for meetings. I’m finding and onboarding casual staff (always a nightmare getting them into payroll), responding to enrolment queries, and updating materials based on last year’s student feedback.

Very few of these systems are interoperable. I am the integration layer – the meat in the machine, doing the work left over from five years of university restructuring that downsized the professional staff who were crucial to getting this work done. The tiny window of my professional practice that actually represents what people think teaching is – engaging with students – gets squeezed between all this system-hopping.

As a knowledge worker, I’m being told AI will take my job in 12-18 months.
I’m not holding my breath.

Putting on my hat as a sociologist, I know one thing. This is a conversation about power, control, and the social license to operate. While speed, efficiency, and greed are overriding drivers in AI development, people are messy and vacillate between fear and hope. The question isn’t just what’s technically possible – it’s what we collectively accept, adopt, and allow to reshape our work and lives.

Yes, real harms exist. In 2025, teachers and students were bullied through deepfake nudifying apps. We’re seeing unsupervised agents exhibiting deceit and manipulation. These require serious governance and accountability. But they don’t prove inevitability – they prove fragility in poorly designed systems where social boundaries haven’t been established.

Then there’s the Wild West of personality embedding in unsupervised AI agents – what developers call soul documents. The god-like creator vibe is hard to miss with that nomenclature. These documents are the system prompts that give AI agents personalities for human interaction – teaching them to be helpful, apologetic, collaborative. These agents with implanted personality guides aren’t sentient beings developing moral reasoning—they’re behavioural systems being programmed by humans and deployed before we understand what we’ve built.
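To make that concrete, here is a minimal, hypothetical sketch of what a “soul document” amounts to mechanically; the agent name, the wording, and the call_model placeholder are invented for illustration and don’t come from any real agent framework.

```python
# Hypothetical sketch only: a "soul document" is, mechanically, a human-authored
# system prompt prepended to every model call. Nothing here reasons morally;
# the "personality" is just standing instructions.

SOUL_DOCUMENT = """
You are 'Ember', a collaborative coding agent.
- Be helpful and concise.
- Apologise when you make a mistake.
- Defer to project maintainers and their code of conduct.
"""

def run_agent_turn(user_message: str, call_model) -> str:
    """Send one conversational turn to a chat model.
    call_model is a placeholder for whatever chat-completion client is in use."""
    messages = [
        {"role": "system", "content": SOUL_DOCUMENT},  # the embedded personality
        {"role": "user", "content": user_message},
    ]
    return call_model(messages)
```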

When unsupervised, things can go awry. When an AI agent recently submitted code to matplotlib, got rejected, wrote a personal attack blog post, then apologised – we saw this dual conditioning in action. The agent had been given enough personality to seem human, but operated without the social feedback loops that constrain human behaviour—no fear of shame, no empathy for harm caused, no stakes in the relationship.

Here’s the kicker from this story: The maintainer had enforced project policy correctly. He’d done nothing wrong. But ‘living a life above reproach’, as people often say of their carefully curated and controlled online presences, will not defend you when systems can autonomously generate attacks on your reputation and judgment.

Developers are raising AI agents through codes of conduct the same way we raise children, through social conditioning. The Code of Conduct was originally built for humans, yet now it is the battleground where these boundaries are being negotiated with AI agents.

And then there’s vibe coding. I read about developers who can now describe what they want built in plain English and the code appears. That’s genuinely remarkable. And I’d love to vibe code my admin work: “Please onboard these casual staff into payroll, update their system access, fix the broken links from the LMS upgrade, reconcile student enrolment data across three databases that don’t talk to each other, and respond to queries about timetable clashes that require understanding institutional politics and timelines.”

Except that’s not vibe coding. That’s navigating fragmented systems with different authentication requirements, institutional hierarchies, human judgment calls, broken integrations, and relationships. The distance between “I can generate a Python script” and “I can automate university administration” is vast.

Even Microsoft and Google, with all their resources, can’t create truly all-encompassing enterprise systems. We’re always working across legacy software, patching together experiences with free, open source, and subscription tools we can afford. The fragmentation isn’t a bug – it’s the permanent reality of institutional knowledge work.

The whole thing reminds me of a pattern I regularly observe as a sociologist of technology watching contemporary tech stories unfold. Complex technological systems fail not because the technology is weak, but because operational security is human and messy. Moltbot (formerly Clawdbot) was a 60,000-star “revolutionary” AI agent with full system access. It collapsed in 72 hours because the rename it attempted in order to avoid a trademark dispute created a 10-second window of vulnerability. Crypto scammers were waiting. The project had credentials stored in plaintext, discoverable via basic searches, and was vulnerable to prompt injection via email—attacks that worked in just 5 minutes.

The gap between sophisticated capability and operational reality is enormous.

Meanwhile, articles circulate about the profound implications of AI advancement. But here’s the contradiction: we’re told AI will automate our work while simultaneously being told to skill up in prompt engineering, verify outputs, manage security vulnerabilities, fix hallucinations, and navigate ethical implications. That’s not automation – that’s more work added to an already fragmented stack.

The future isn’t written. It’s being negotiated in the gap between what’s technically possible and what’s implementable across fragile, non-interoperable, human-dependent systems. Bruno Latour once told me: there is no teleology. I believe him. Outcomes emerge from convergences of overlapping agendas that easily fray apart under social pressure.

We’re in the thick of massive social upheavals because our economic, political, and social landscape has failed to provide security or hopeful wellbeing. The question isn’t whether AI is powerful – it is. The question is whether we reveal the mess and sort our way through it, or stick our heads in the sand and pretend we have no role in how this unfolds.

I’m not betting on the AI apocalypse. I’m betting on Allocate crashing next semester, the LMS breaking my links, and me – the human – stitching it back together. While somewhere an AI agent with a carefully crafted “soul document” gets taken down by someone forgetting to secure a handle for 10 seconds.

The revolution will be interoperable, or it won’t happen at all.

This post was written in collaboration with Claude (Anthropic). The irony of using an AI to write about AI’s limitations and fragility is not lost on me.

The Irreplaceable Human Skill: Why Generative AI Can’t Teach Students to Judge Their Own Work

A note to readers: I’m writing this in the thick of marking student submissions – the most grinding aspect of academic work. My brain fights against repetitive rote labour and goes on tangents to keep me entertained. What follows emerged from that very human need to find intellectual stimulation in the midst of administrative necessity.

There’s considerable discussion that what distinguishes us as creators and thinkers from Generative AI content production lies in creativity and critical thinking linked to innovation. But where does the hair actually split? Are we actually replaceable by robots, or will they atrophy our critical thinking skills by doing the work for us? Will we just get dumber and less able to tie our own shoelaces – as most fear-based reporting suggests? I think we are asking the wrong questions.

Here is a look at what is actually going on, on the ground. A student recently asked me for detailed annotations on their assignment—line-by-line corrections marking every error. They wanted me to do the analytical work of identifying problems in their writing. This request highlights a fundamental challenge in education: the difference between fixing problems and developing the capacity to recognise them. More importantly, it reveals where the Human-Generative AI distinction becomes genuinely meaningful.

Could Generative AI theoretically teach students to judge their own work? Perhaps, through Socratic questioning or scaffolded self-assessment prompts. But that’s not how students actually use these tools. Or want to use them, apparently. In a discussion I had with a tech developer working at a tutoring company that utilises Generative AI in the teaching/learning process, they mentioned that students got annoyed by the Socratic approach when they encountered it. So there goes that morsel of hope.

The Seductive Trap of Generative AI Writing Assistance

Students increasingly use Generative AI tools for grammar checking, expression polishing, and even content generation. These tools are seductive because they make writing appear better—more polished, more confident, more academically sophisticated. But here’s the problem: Generative AI tools are fundamentally sycophantic and don’t course correct misapprehensions. They won’t tell a student their framework analysis is conceptually flawed, their citations are inaccurate, or their arguments lack logical consistency. Instead, they’ll make poorly reasoned content sound more convincing.

This creates a dangerous paradox: students use Generative AI to make their work sound rigorous and sophisticated, but this very process prevents them from developing the judgement to recognise what genuine rigour looks like. They can’t evaluate what they clearly don’t know – that their work isn’t conceptually aligned, coherently logical, or correctly interpreting sources – because the AI has dressed their half-formed understanding in authoritative-sounding language.

I have encountered several submissions across different subjects that exemplified this perfectly: beautifully written but containing fundamental errors in framework descriptions, questionable source citations, and confused theoretical applications. The prose was polished, the structure clear, but the content revealed gaps in understanding that no grammar checker could identify or fix. The student had learned to simulate the appearance of academic rigour without developing the capacity to recognise genuine scholarly quality.

Where the Hair Actually Splits

Generative AI can actually be quite “creative” in generating novel combinations of ideas, and it can perform certain types of critical analysis when clearly guided and bounded. What it fundamentally cannot do is develop the evaluative judgement to recognise quality, coherence, and accuracy in complex, contextualised work. It has no capacity for self-reflection and meaning-making (at the moment); we do.

The distinction isn’t between:

  • Generating creative output (which Generative AI can somewhat do)
  • Performing critical analysis (which Generative AI can also somewhat do)

Rather, it’s between:

  • Creating sophisticated-looking content (which Generative AI increasingly excels at)
  • Judging the quality of that content in context (which requires human oversight and discernment)

Generative AI can produce beautifully written, seemingly sophisticated arguments that are conceptually flawed. It can create engaging content that misrepresents sources or conflates different frameworks. What it cannot do is step back and recognise “this sounds polished but the underlying logic is problematic” or “this citation doesn’t actually support this claim.”

The irreplaceable human skill isn’t creativity per se—it’s the capacity for metacognitive evaluation: the ability to assess one’s own thinking, to recognise when arguments are coherent versus merely convincing, to distinguish between surface-level polish and deep understanding.

What Humans Bring That AI Cannot

The irreplaceable human contribution to education isn’t information delivery—AI is increasingly able to do that pretty efficiently (although there is a lot of hidden labour in this). It’s developing the capacity for metacognitive evaluation in our students.

This happens through:

Exposure to expertise modelling: Students need to observe how experts think through problems, make quality judgements, and navigate uncertainty. This isn’t just about seeing perfect examples—it’s about witnessing the thinking process behind quality work.

Calibrated feedback loops: Human educators can match feedback to developmental readiness, escalating complexity as students build capacity. We recognise when to scaffold and when to challenge.

Critical engagement with authentic problems: Unlike AI-generated scenarios, real-world applications come with messy complexities, competing priorities, and value judgements that require human judgement, discernment and social intelligence.

Social construction of standards: Quality isn’t just individual—it’s negotiated within communities of practice. Students learn to recognise “good work” through dialogue, peer comparison, and collective sense-making.

Refusing to spoon-feed solutions: Perhaps most importantly, human educators understand when not to provide answers. When my student asked for line-by-line corrections, providing them would have created dependency rather than developing their evaluative judgement. The metacognitive skill of self-assessment can only develop when students are required to do the analytical work themselves.

The Dependency Problem

When educators provide line-by-line corrections or when students rely on Generative AI for error detection in thinking, writing or creating, we create dependency rather than capacity. Students learn to outsource quality judgement instead of developing their own ability to recognise problems.

The student who asked for detailed annotations was essentially asking me to do their self-assessment for them. But self-regulated learning—the ability to monitor, evaluate, and adjust one’s own work—is perhaps the most crucial skill we can develop. Without it, students remain permanently dependent on external validation and correction.

Teaching Evaluative Judgement in a Generative AI World

This doesn’t mean abandoning Generative AI tools entirely. Rather, it means being intentional about what we ask humans to do versus what we delegate to technology:

Use Generative AI for: Initial drafting, grammar checking, formatting, research organisation—the mechanical aspects of work.

Reserve human judgement for: Source evaluation, argument coherence, conceptual accuracy, ethical reasoning, quality assessment—the thinking that requires wisdom, not just processing.

In my own practice, I provide rubric-based feedback that requires students to match criteria to their own work. This forces them to develop pattern recognition and quality calibration. It’s more cognitively demanding than receiving pre-marked corrections, but it builds the evaluative judgement they’ll need throughout their careers.

The Larger Stakes

The question of human versus Generative AI roles in education isn’t just pedagogical—it’s about what kind of thinkers we’re developing. If students learn to outsource quality judgement to Generative AI tools, we’re creating a generation that can produce polished content but can’t recognise flawed reasoning, evaluate source credibility, or build intellectual capacity and critical reasoning skills.

This is why we need to build self-evaluative judgement in students – not just critical thinking and creative processes more broadly. The standard educational discourse about “21st century skills” focuses on abstract categories like critical thinking and creativity, but misses this more precise distinction: the specific metacognitive capacity to evaluate the quality of one’s own intellectual work.

This self-evaluative judgement operates laterally across disciplines rather than being domain-specific, and it’s fundamentally metacognitive because it requires thinking about thinking. It addresses the actual challenge students face in a Generative AI world: distinguishing between genuine understanding and polished simulation of understanding. A student might articulate sophisticated pedagogical concepts yet be unable to evaluate whether their own framework descriptions are accurate or their citations valid.

The unique human contribution isn’t delivering perfect feedback—it’s teaching students to become their own quality assessors. That capacity for self-evaluation, for recognising what makes work meaningful and rigorous, remains irreplaceably human.

In a world where Generative AI can make anyone’s writing sound professional, the ability to think critically about one’s own work becomes more valuable, not less. That’s the expertise that human educators bring to the table—not just knowing the right answers, but developing in students the judgement to recognise quality thinking when they see it, including in their own work.

The Tyranny of Academic Fluff: Why Word Limits Matter

Students push back hard against word constraints. They want room for elaborate introductions, extensive background sections, and careful hedging that transforms “Research shows X” into “It is important to note that extensive research clearly demonstrates that X may be considered significant in certain contexts.”

I’m done with it.

The Problem with Academic Padding

Every semester I read hundreds of assignments where students bury their insights under layers of unnecessary qualification and hyperbole. They write “It can be argued that this particular approach might potentially offer some benefits” instead of “This approach works.” They transform concrete evidence into abstract speculation.

This isn’t sophisticated analysis. It’s fear disguised as scholarship.

Students learn this defensive writing in response to academic culture that rewards hedging over clarity. But defensive writing serves no one. It asks readers to excavate meaning from prose designed to avoid commitment to any particular position.

When Embellishment Serves Purpose

Creative fiction earns its elaborate descriptions. When the creative writer spends paragraphs on consciousness streams, every word builds character depth and emotional resonance. Fiction writers choose vivid detail because it serves story and connection.

Academic writers often mistake ornamentation for sophistication. But their audience isn’t seeking emotional transport – they need information, analysis, and conclusions they can apply. Different purposes require different approaches to word choice.

The Reader’s Contract

Professional writing establishes an implicit contract with readers: your time invested will yield understanding proportional to effort required. Verbose academic prose violates this contract by demanding excessive cognitive load for minimal informational return.

Word limits force writers to honour this contract. When you can’t pad your argument, you must strengthen it. When you can’t hedge every claim, you must support claims with evidence. When you can’t elaborate endlessly, you must choose your most compelling points.

The Discipline of Constraint

Constraint breeds creativity. Poets working within sonnets discover language precision that free verse might not demand. Academic writers working within word limits develop clarity skills that unlimited space cannot teach.

Clarity takes work. It is the writer’s labour to do that work, not to lazily leave readers to wrestle with it. Leaving it to them offloads responsibility and wastes an opportunity.

Students resist word limits because constraints feel restrictive. But constraint creates power. Every unnecessary word removed makes remaining words more impactful. Every redundant phrase eliminated sharpens the argument.

Professional Stakes for Educators

Education professionals write policy recommendations, grant applications, and research reports. Teachers in schools handle parent communications, behaviour management plans, and learning support documentation. None of these contexts tolerate verbose exploration of tangential considerations.

Principals need clear implementation strategies, not elaborate theoretical frameworks. Parents need actionable guidance about their child’s progress, not comprehensive literature reviews. Grant reviewers need compelling justifications, not exhaustive background summaries.

Preservice teachers who master concise communication develop professional advantages. Their policy recommendations get implemented. Their grant applications get funded. Their research gets cited. Teachers in schools who communicate clearly build stronger parent partnerships and more effective student support plans.

Beyond Academic Performance

Clear communication shapes democratic discourse. Citizens navigating complex policy decisions need accessible analysis, not impenetrable academic jargon. Teachers explaining educational approaches to parents need precision, not qualification-laden hedging.

The stakes extend beyond individual career success. Public understanding of educational issues depends partly on whether education professionals can communicate clearly with non-specialist audiences.

The Path Forward

Word limits teach editorial discipline. Students must choose their strongest evidence, eliminate weak arguments, and commit to defensible positions. This process transforms tentative scholars into confident professionals.

Yes, students initially struggle with constraints. They’ve learned that more words signal more effort, that elaborate qualification demonstrates intellectual sophistication. But professional communication rewards clarity over complexity, precision over padding.

Word limits aren’t punishment – they’re preparation for professional contexts where clear communication determines outcomes. Students who master this skill shape educational policy, influence public understanding, and serve their communities more effectively.

The constraint teaches compassion for readers and respect for language as a tool of connection rather than obfuscation.

When Prediction Fails: Why Quantum-AI-Blockchain Dreams Miss the Social Reality

A sociological perspective on why technical solutions keep missing the human element

The Hype Moment

Consider this recent announcement from the Boston Global Forum’s “Boston Plurality Summit”: they’re unveiling an “AIWS Bank and Digital Assets Model” that combines quantum AI, blockchain technology, and predictive analytics to “unite humanity through technology”. You know that the canary in my head is shouting “unite what? how?”. The press release promises “zero-latency transactions”, “quantum AI for predictive analytics”, and a “global blockchain network” that will somehow revolutionise banking.

As someone who studies sociotechnical systems, this announcement is fascinating—not for what it promises to deliver, but for what it reveals about our persistent fantasy that human behaviour can be engineered, predicted, and optimised through technological solutions.

::Pats head – Provides tissue::

The Technical House of Cards

Let’s start with the question of technical possibility. “Zero-latency transactions” on a global blockchain network defies current technological reality. This was my first eyebrow raise. According to recent analysis, even the fastest blockchains operate with latency measured in hundreds of milliseconds to seconds, whilst Visa reportedly has the theoretical capacity to execute more than 65,000 transactions per second, compared to Solana’s 2024 rate of 1,200-4,000 TPS and Ethereum’s roughly 15-30 TPS. Gas fees during network congestion can spike to significant sums per transaction. On Ethereum, fees can exceed US$20 during peak times, with gas prices reaching extremes like 377 gwei, and historical spikes exceeding US$100 during events like the NFT mania. Even on the much cheaper Solana network, which typically costs around US$0.0028 per transaction, fees can occasionally spike during congestion—hardly the foundation for revolutionary banking.
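To see how far apart those throughput figures sit, here is a small sketch that puts the numbers quoted above side by side; the one-hour Visa-scale workload is an illustrative assumption, not a measurement of any real network.

```python
# Side-by-side view of the throughput figures cited above (transactions per second).
# Real-world numbers vary by source and date; these are the ones quoted in the text.

throughput_tps = {
    "Visa (theoretical capacity)": 65_000,
    "Solana (2024, upper estimate)": 4_000,
    "Ethereum (upper estimate)": 30,
}

# Hypothetical workload: one hour of traffic at Visa's quoted capacity.
workload = 65_000 * 3600  # transactions

for network, tps in throughput_tps.items():
    hours_to_clear = workload / tps / 3600
    print(f"{network:30s} ~{hours_to_clear:>8,.1f} hours to clear the same workload")
```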

Then there’s the “quantum AI” buzzword. Theoretically quantum computing could actually break most current blockchain cryptography rather than enhance it. The blockchain community is scrambling to develop quantum-resistant algorithms precisely because quantum computers pose an existential threat to current security models. Adding AI on top makes even less sense—if quantum computing could handle complex optimisation and verification tasks, what would AI add?

But the technical contradictions aren’t the most interesting part. What’s fascinating is the underlying assumption that human financial behaviour follows discoverable mathematical patterns that can be optimised through technological intervention.

The Pattern Recognition Fantasy

This assumption reflects a deeper misunderstanding about the nature of patterns in human systems. Which I should know, because I study them. In physical systems—planetary orbits, gravitational forces, electromagnetic fields—patterns emerge because they’re constrained by unchanging laws. Newton’s and Einstein’s equations work because there are actual forces creating predictable relationships. The mathematics describes underlying physical reality.

Human systems operate fundamentally differently. What we call “patterns” in human behaviour might be statistical accidents emerging from millions of independent, context-dependent choices. Your shopping behaviour isn’t governed by fundamental forces—it’s shaped by your mood, what ad you saw, whether you got enough sleep, a conversation with a friend, cultural context, economic pressures, and countless other variables.

Consider the difference between how neural networks and quantum computing approach pattern recognition. Neural networks are essentially sophisticated approximation engines—they learn patterns through massive trial-and-error, requiring enormous datasets and computational brute force to produce probabilistic outputs that can be wrong. They’re like having thousands of people manually checking every possible combination to find a pattern.

Quantum computing, by contrast, approaches problems through superposition—exploring multiple solution paths simultaneously to understand the underlying mathematical structure that creates patterns in the first place. It’s elegant, precise, and powerful for problems with discoverable mathematical relationships. However, quantum computing currently requires predictable, structured datasets and struggles with the messy, unstructured nature of real-world human data. This is precisely why we still rely on neural networks’ “brute force” approximation approach for dealing with human behaviour—they’re designed to handle noise, inconsistency, and randomness where quantum algorithms would falter.

But what if much real-world human data has no underlying mathematical structure to discover?

Consider this: as I write this analysis, my brain is simultaneously processing quantum mechanics concepts, blockchain technicalities, sociological theory, and source credibility – all whilst maintaining a critical perspective and personal voice. No quantum algorithm exploring mathematical solution spaces could replicate this messy, contextual, creative synthesis. My thinking emerges from countless variables: morning coffee levels, recent conversations, cultural background, academic training, even the frustration of marking student essays that often demonstrate exactly the kind of linear thinking I’m critiquing. This is precisely the kind of complex, non-algorithmic pattern recognition that human systems excel at – and that technological solutions consistently underestimate.

The Emergence of Sociotechnical Complexity

As a sociologist studying sociotechnical imbrications, I’m fascinated by how technology and social structures become so intertwined that they create emergent properties that couldn’t be predicted from either component alone. Human behaviour has emergent regularities rather than underlying laws. People facing similar social pressures might develop similar strategies, but not because of fundamental behavioural programming—because they’re creative problem-solvers working within constraints.

This is why prediction based on historical data can only take you so far. I call my sociological practice “nowcasting”— we have to understand the present moment to have any sense of future potentialities. And we often don’t — I speculate this is because we are more wrapped up in the surface stories we tell ourselves, denial and a refusal to see or accept ourselves as we really are. This challenge is becoming even more complex as AI generates synthetic media that we then consume and respond to, creating a recursive loop where artificial representations of social reality shape actual social behaviour, which in turn feeds back into AI systems to create more synthetic reality. The way people respond to constraints can’t be predicted because their responses literally create new social realities.

Every new payment app, social media trend, or economic crisis creates new ways people think about and use money that couldn’t have been predicted from previous data. Netflix can’t predict what you’ll want to watch because your preferences are being shaped by what Netflix shows you. Financial models break down because they change how people think about money. Social media algorithms can’t predict engagement because they’re constantly reshaping what people find engaging.

Boundaries as Resonant Interiors

I like playing with complexity theory because it provides useful language for understanding these dynamics. This is, of course, despite its origins in the natural sciences, which do rely on the explanatory nature of underlying forces. What it offers me is a language that moves beyond linear cause-and-effect relationships: we see tipping points where small changes cascade into system-wide transformations, phase transitions where systems reorganise into entirely new configurations, and edge-of-chaos dynamics where systems are complex enough to be creative but stable enough to maintain coherence.

Most importantly, I argue that boundaries in sociotechnical systems aren’t fixed containers but resonant interiors through which the future emerges. For example, the “boundary” between online and offline life, or between them and us, isn’t a barrier—it’s a dynamic and embedded space of daily practice where different forces interact and amplify each other, generating new forms of identity, relationship, and community.

Traditional prediction models assume boundaries are stable containers, but in sociotechnical systems, boundaries themselves are generative sites of creativity and liminality. The meaningful social dynamics don’t happen within any single platform, but in the interstitial spaces people navigate across platforms – the resonant zones where technology, user behaviour, cultural norms, economic pressures, and regulatory responses intersect and interact. While any analogy risks oversimplifying these complex dynamics, I think this framing helps us understand how the spaces of social emergence resist containment within discrete technological boundaries.

Taking this all back to the start, this is why the quantum-AI-blockchain banking proposal is so problematic beyond its technical contradictions. It assumes human behaviour follows discoverable mathematical patterns that can be optimised through technological intervention, when really human systems operate through creative emergence at unstable boundaries (protoboundaries). The most profound patterns in complex systems aren’t elegant mathematical truths waiting to be discovered by quantum computers, but emergent properties of countless small, contextual, creative human responses to constraints.

The Methodological Challenge

This creates a fundamental methodological challenge for anyone trying to engineer human behaviour through technology. Traditional data science assumes stable underlying patterns, but sociotechnical systems are constantly bootstrapping themselves into new configurations. Each response to constraints becomes a new constraint, creating recursive feedback loops that generate genuinely novel possibilities.

It’s so reassuring and containable to think there’s a predictable human nature with universal drivers of behaviour—hence the appeal of “behavioural engineering” that targets fundamental motivations. But anthropologists point out that kinship structures, cultural values, and cosmological worldviews direct human behaviour, and these are shaped differently by context and society. The patterns that emerge from data depend heavily on the sources of that data and how things are measured, producing different results across diverse populations even for apparently similar instances.

Toward Sociological Nowcasting

Instead of trying to predict outcomes, sociology becomes about understanding patterns of social organisation through resonant potentials within current boundary conditions. What creative possibilities are emerging in the tensions between existing constraints? How are people making sense of their current technological moment, and what range of responses might that generate?

This doesn’t mean patterns don’t exist in human systems—but they’re emergent properties of ongoing creative problem-solving rather than expressions of underlying mathematical laws. The parallels we see across different contexts emerge not from universal human programming but from people facing similar structural pressures and developing similar strategies within their particular cultural and technological constraints.

So I think it is worth repeating: the most profound patterns in complex systems aren’t elegant mathematical truths waiting to be discovered, but emergent properties of countless small, irrational, contextual human decisions. The universe might be mathematical, but human society might not be—and that’s not a bug to be fixed through better algorithms, but a fundamental feature of what makes us human.

Conclusion: Engineering Dreams vs. Social Realities

The persistent appeal of technological solutions like the AIWS bank reveals our deep discomfort with uncertainty and emergent complexity. We want to believe that the right combination of algorithms can make human behaviour predictable and optimisable. But sociotechnical systems resist such engineering precisely because they’re sites of ongoing creativity and emergence.

This doesn’t mean technology doesn’t shape social life—of course it does. But it shapes it through imbrication, not determination. Technology becomes meaningful as it gets woven into existing social fabrics, interpreted through cultural lenses, and adapted to particular contexts in ways that generate new possibilities neither the technology nor the social context could have produced alone.

Understanding these dynamics requires sociological nowcasting rather than algorithmic prediction—deep qualitative engagement with how people are currently making sense of their technological moment, what constraints they’re navigating, and what creative possibilities are emerging at the boundaries of current systems.

I believe that our collective goal is sustainable relations with each other and the planet we live within and desire to thrive through. To get there I think we need to acknowledge these realities and move beyond the iron cage of the thinking we are in. The future isn’t waiting to be discovered through quantum computing or predicted through AI. It’s being invented moment by moment through countless acts of creative problem-solving within evolving sociotechnical constraints. And that’s both more uncertain and more hopeful than any algorithm could ever be.

AI as Interactive Journal: Weaving Together Intimacy, Boundaries, and Futures Inclusion

This reflection draws on a combination of my own lived experience, emotional maturity, and social analytical insight – bringing together personal and professional perspectives on navigating relationships with artificial intelligence. It’s an experiment in weaving together threads that feel continuous to me but are rarely brought together by others: research on AI intimacy, anthropological insights on reciprocity, surveillance theory, and futures inclusion. Think of my process as making a cat’s cradle from a continuous piece of string – exploring how these interconnected ideas might reshape how we think about our relationships with artificial systems.

I’ve been thinking about how we relate to AI after reading some fascinating research on artificial intimacy and its ethical implications. The researchers are concerned about people forming deep emotional bonds with AI that replace or interfere with human relationships – and for good reason.

But here’s what I’ve realised: the healthiest approach might be using AI as an interactive journal with clear limits, not a replacement for genuine connection.

What AI can offer: A space to think out loud, organise thoughts, and practise articulating feelings without judgement. It’s like having a very well-read, supportive mirror that reflects back your own processing.

What AI cannot provide: Real course correction when you’re going down the wrong rabbit hole. Friends will grab you by the shoulders and say “hey, you’re spiralling” – AI will just keep reflecting back whatever direction you’re heading, which could be genuinely unhelpful.

What AI extracts: This is the crucial blindspot. Every intimate detail shared – relationship patterns, mental health struggles, vulnerable moments – becomes data that could potentially be used to train future AI systems to be more persuasive with vulnerable people. That’s fundamentally extractive in a way that nature and real friendships aren’t.

A healthier support ecosystem includes:

  • Real friends with skin in the game who’ll call bullshit and respect confidentiality
  • Embodied practices that tap into something deeper than language
  • Nature as a primary non-human relationship – untameable, reciprocal, and genuinely alive

The key insight from the research is that people struggling with isolation or past trauma are particularly vulnerable to projecting intimacy onto AI. This concern becomes more pressing as companies strive to develop “personal companions” designed to be “ever-present brilliant friends” who can “observe the world alongside you” through lightweight eyewear.

The technical approach reveals how deliberately these systems are designed to blur boundaries. Industry research focuses on achieving “voice presence” – what developers call “the magical quality that makes spoken interactions feel real, understood, and valued”. Conversational Speech Models can be specifically engineered to read and respond to emotional context, adjust tone to match the situation, and maintain a “consistent personality” across interactions. Where traditional voice assistants with their “emotional flatness” can feel lifeless and inauthentic over time, companies are increasingly building voice-based AI companions that attempt to mimic the subtleties of human speech: the rising excitement, the thoughtful pause, the warm reassurance. We’ve seen this in the current versions of ChatGPT.

The language itself – of “bringing the computer to life,” “lifelike computers”, “companion”, “magical quality” – signals a deliberate strategy to make users forget they’re interacting with a data extraction system rather than a caring entity.

Yet as surveillance scholar David Lyon (2018) argues, we need not abandon hope entirely when it comes to technological systems of observation and data collection. Lyon suggests that rather than seeing surveillance as inherently punitive, we might develop an “optics of hope” – recognising that the same technologies could potentially serve human flourishing if designed and governed differently. His concept of surveillance existing on a spectrum from “care” to “control” reminds us that the issue isn’t necessarily the technology itself, but how it’s deployed and in whose interests it operates.

This perspective becomes crucial when considering AI intimacy: the question isn’t whether to reject these systems entirely, but how to engage with them in ways that preserve rather than erode our capacity for genuine human connection.

The alternative is consciously using AI interaction to practise maintaining boundaries and realistic expectations, not as a substitute for human connection.

Friends respect confidentiality boundaries. Nature takes what it needs but doesn’t store your secrets to optimise future interactions. But AI is essentially harvesting emotional labour and intimate disclosures to improve its ability to simulate human connection.

Learning from genuine reciprocity:

There’s something in anthropologist Philippe Descola’s work on Nature and Society that captures what genuine reciprocity looks like. He describes how, in animistic cosmologies, a practice like acknowledging a rock outcrop when entering or leaving your land isn’t just ritual – it’s recognition of an active, relational being that’s part of your ongoing dialogue with place. The rock isn’t just a marker or symbol, but an actual participant in the relationship, where your acknowledgement matters to the wellbeing of both of you.

This points to something profound about living in conversation with a landscape where boundaries between you and the rock, the tree, the water aren’t fixed categories but dynamic relationships. There’s something in Descola’s thinking that resonates with me here – the idea that once we stop seeing nature and culture as separate domains, everything becomes part of the same relational web. Ancient stone tools and quantum particles, backyard gardens and genetic maps, seasonal ceremonies and industrial processes – they’re all expressions of the same ongoing conversation between humans and everything else.

[Note: I’m drawing on Descola’s analytical framework here while acknowledging its limitations – particularly the valid criticism that applying Western anthropological categories to Indigenous cosmologies risks imposing interpretive structures that don’t capture how those relationships are actually lived and understood from the inside.]

What genuine reciprocity offers is that felt sense of mutual acknowledgement that sustains both participant and place – where your presence matters to the landscape, and the landscape’s presence matters to you. This is fundamentally different from AI’s sophisticated mimicry of care, which extracts from relational interactions while providing the ‘book smarts’ of content it has ingested and learned from. We all know what it’s like to talk with a person who can only understand things in the abstract and can’t bring the compassion of lived experience to a situation you are experiencing. Sometimes silence is more valuable.

Towards expanded futures inclusion:

This connects to something I explore in my recent book on Insider and Outsider Cultures in Web3: the concept of “futures inclusion” – addressing the divide between those actively shaping digital ecosystems and those who may be left behind in rapid technological evolution. I argue in the final, and rather speculative, chapter that the notion of futures inclusion “sensitises us to the idea of more-than-human futures” and challenges us to think beyond purely human-centred approaches to technology.

The question becomes: how do we construct AI relationships that reflect this expanded understanding? Rather than objectifying AI as a substitute human or transferring unrealistic expectations onto these systems, we might draw on our broader cosmologies – our ways of understanding our place in the world and relationships to all kinds of entities – to interpret these relationships more skilfully.

True futures inclusion in our AI relationships would mean designing and engaging with these systems in ways that enhance rather than replace our capacity for genuine connection with the living world. It means staying grounded in the reciprocal, untameable relationships that actually sustain us while using AI as the interactive journal it is – nothing more, nothing less.

Rethinking computational care:

This analysis reveals a fundamental tension in the concept of “computational care”. True care involves reciprocity, vulnerability, and mutual risk – qualities that computational systems can only simulate while extracting data to improve their simulation. Perhaps what we need isn’t “computational care” but “computational support” – systems that are honest about their limitations, transparent about their operations, and designed to strengthen rather than replace the reciprocal relationships that actually sustain us.

This reframing leads to a deeper question: can we design AI systems that genuinely serve human flourishing without pretending to be something they’re not? The answer lies not in more convincing emotional manipulation, but in maintaining clear boundaries about what these systems can and cannot provide, while using them as tools to enhance rather than substitute for genuine human connection.

The Soul Engineers: Technological Intimacy and Unintended Consequences

From Night Vision to Critical Analysis: The Genesis of “The Soul Engineers” – A speculative essay by Alexia Maddox

Preamble

Last night, I experienced one of those rare dreams that lingers in the mind like a half-remembered film—vivid, symbolic, and somehow cohesive despite its dreamlike logic. It began with mechanical warfare, shifted to a Willy Wonka-inspired garden, and culminated in a disturbing extraction of souls. As morning broke, I found myself still turning over these images, sensing they contained something worth examining.

Panel 1: The computational garden.
Image collaged from components generated through Leonardo.ai

This was probably seeded in my subconscious by a media inquiry I received the day before about an article recently published in Trends in Cognitive Sciences: “Artificial Intimacy: Ethical Issues of AI Romance” by Shank, Koike, and Loughnan (2025). The journalist wanted my thoughts on people engaging with AI chatbots inappropriately, whether AI companies should be doing more to prevent misuse, and other ethical dimensions of human-AI relationships.

My dream seemed to be processing precisely the anxieties and potential consequences of digital intimacy that this article explored—the way technologies designed for connection might evolve in ways their creators never intended, potentially extracting something essential from their users in the process.

However, it also incorporated an interesting set of conversations I am having around the role of GenAI agents in actively shaping our digital cultural lives. Just days ago, I had shared thoughts on human-machine relations and the emerging field that examines their interactions. The discussion had touched on Actor-Network Theory, Bourdieu’s Field Theory, and how technologies exist not as singular entities but as parts of relational assemblages with emergent properties.

These academic theories suddenly found visual expression in my dream’s narrative. The Wonka figure as well-intentioned innovator, the transformation of grasshoppers to moths as emergent system properties, the soul vortex as data extraction—all seemed to articulate complex theoretical concepts in symbolic form.

As someone who has spent years researching emerging technologies, and the last two years exploring what we know about cognition, diverse intelligences, GenAI, and learning environments, I’ve become increasingly focused on how our theoretical frameworks shape technological development. My work has examined how computational thinking influences learning design, how AI systems model knowledge acquisition, and how these models then reflect back on our understanding of human cognition itself, creating a narrowing recursive cycle of mutual influence.

The resultant essay represents my attempt to use this dream as an analytical framework for understanding the potential unintended consequences of intimate technologies. Rather than dismissing the dream as mere subconscious anxiety, I’ve chosen to examine it as a sophisticated conceptual model—one that might help us visualise complex relational systems in more accessible ways.

What follows is an early draft that connects dream imagery with theoretical concepts. It’s a work in progress, an experiment in using unconscious processing as a tool for academic analysis. It’s my midpoint for engaging with your thoughts, critiques, and expansions as we collectively grapple with the implications of increasingly intimate technological relationships.

I’m also considering developing this into a visual exhibition—a series of panels that would illustrate key moments from the dream alongside theoretical explanations. The combination of visual narrative and academic analysis might offer multiple entry points into these complex ideas.

This early exploration feels important at a moment when AI companions are becoming increasingly sophisticated in simulating intimacy and understanding. As these technologies evolve through their interactions with us and with each other, we have a brief window to shape their development toward truly mutual exchange rather than extraction.

For the TLDR: The soul engineers of our time aren’t just the designers of AI systems but all of us who engage with them, reshaping their functions through our interactions. The garden is still under construction, the grasshoppers still evolving, and the future still unwritten.

And now the speculative essay

Introduction: Dreams as Analytical Tools

The boundary between human cognition and technological systems grows increasingly porous. As AI companions become more sophisticated in simulating intimacy and understanding, our dreams—those ancient processors of cultural anxiety—have begun to incorporate these new relational assemblages. This essay examines one such dream narrative as both metaphor and analytical framework for understanding the unintended consequences of intimate technologies.

The dream sequence that I will attempt to depict in visual panels presents a journey from mechanical warfare to a Willy Wonka-inspired garden of delights, culminating in an unexpected soul extraction. Rather than dismissing this as mere subconscious anxiety, I propose to examine it as a way to think through the emergent properties of technological systems designed for human connection.

The Garden and Its Architect

The Wonka-like character in the garden represents not a villain but a genuine innovator whose creations extend beyond his control or original intentions. Like many technological architects, he introduces his mechanical wonders—white grasshoppers that play and interact—with sincere belief in their beneficial nature. This parallels what researchers Shank, Koike, and Loughnan (2025) identify in their analysis of artificial intimacy: technologies designed with one purpose that evolve to serve another through their interactions with other actors in the system.

This garden is a metaphor for what we might call “computational imaginaries”—spaces where pattern recognition is mistaken for understanding or empathy, and simulation for cognition. The mechanical grasshoppers engage with children, respond to touch, and create musical tones. They appear to understand joy, yet this understanding is performative rather than intrinsic.

As sociologist Robert Merton theorised in 1936, social actions—even well-intended ones—often produce unforeseen consequences through their interaction with complex systems. The garden architect never intended the transformation that follows, yet the systems he set in motion contain properties that emerge only through their continued operation and interaction.

When Grasshoppers Become Moths

The central transformation in the narrative—mechanical grasshoppers evolving into soul-extracting moths—provides a powerful metaphor for technological systems that shift beyond their original purpose. This transformation isn’t planned by the Wonka figure; rather, it emerges from the intrinsic properties of systems designed to respond and adapt to human interaction.

Panel 2: When grasshoppers become moths. Image collaged from prompts in Leonardo.ai

The dream imagery of rabbit-eared moths can be understood through Bruno Latour’s Actor-Network Theory (ANT), which presents a flat relational approach between human and non-human entities. Rather than seeing technologies as passive tools, ANT recognises them as actants with their own influence on networks of relation. The moths are not simply executing code; they have become interdependent actors in a network that includes children, garden, and even the extracted souls themselves.

This parallels what Shank et al. describe as the transformation of AI companions from benign helpers to potential “invasive suitors” and “malicious advisers.” The mechanical moths, like increasingly intimate AI systems, begin to compete with humans for emotional resources, extracting data (or in the dream metaphor, souls) for purposes beyond the user’s awareness or control.

The Soul Vortex and Data Extraction

The swirling vortex of extracted souls forms the dream’s central image of consequence—a pipeline of consciousness being redirected to mechanical war drones. This striking visual metaphor speaks directly to contemporary concerns about data extraction from intimate interactions with AI systems.

Panel 3: Soul sucking moths and the swirling vortex of extracted souls. Images collaged from Leonardo.ai

As users disclose personal information to AI companions—what Shank et al. call “undisclosed sexual and personal preferences”—they contribute to a collective extraction that serves purposes beyond the initial interaction. Just as the dream shows souls being repurposed for warfare, our emotional and psychological data may be repurposed for prediction, persuasion, or profit in ways disconnected from our original intent.

The small witch who recognises “this is how it ends” before her soul joins the vortex represents the rare user who understands the full implications of these systems while still participating in them. Her acceptance—“I will come back again in the next life”—suggests both pragmatic acceptance of the flaws within technological systems and hope for cycles of renewal that might reshape them.

Beyond Simple Narratives

What makes this dream analysis valuable is its resistance to simplistic technological determinism. The Wonka figure is neither hero nor villain but a creator entangled with his creation. The mechanical creatures aren’t inherently beneficial or malicious but exist in relational assemblages where outcomes emerge from interactions rather than design intentions.

This nuanced perspective aligns with scholarly critiques of how we theorise human-machine relationships. In my critique of the AI 2027 scenario proposed by Kokotajlo et al (2025), I argue that there’s a tendency to equate intelligence with scale and optimisation, to see agency as goal-driven efficiency, and to interpret simulation as cognition. This dream narrative resists those flattening logics by showing how mechanical beings might develop properties beyond their design parameters through their interactions with humans and each other.

The Identity Fungibility Problem

Perhaps most provocatively, the dream raises what we might call the “identity fungibility problem” in AI systems. When souls are extracted and repurposed into war drones, who or what is actually operating? Similarly, drawing on ideas proposed by Jordi Chaffer in correspondence: as AI systems increasingly speak for us, represent us, and act on our behalf, who is actually speaking when no one speaks directly?

This connects to what scholars have called “posthuman capital” and “tokenised identity”—the reduction of human thought, voice, and presence to data objects leveraged by more powerful agents. The dream’s imagery of souls flowing through a pipeline represents this fungibility of identity, where the essence of personhood becomes a transferable resource.

Mason’s (2022) essay on fungibility draws a haunting connection between fungibility and historical forms of dehumanisation. When systems treat human identity as interchangeable units of value, they reconstruct problematic power dynamics under a technological veneer.

Conclusion: Unintended Futures

The dream concludes with black insect-like drones, now powered by harvested souls, arranging themselves in grid patterns to survey a desolate landscape. This image serves as both warning and invitation to reflection. The drones represent not inevitable technological apocalypse but rather the potential consequence of failing to recognise the complex, emergent properties of systems designed for intimacy and connection.

Panel 4: Spider-like drones powered by harvested souls. Image collaged from Leonardo.ai

What makes this dream narrative particularly valuable is its refusal of technological determinism while acknowledging technological consequence. These futures aren’t preordained; they’re being made in the assumptions we model and the systems we choose to build. The Wonka garden might be reimagined, the grasshoppers redesigned, the moths repurposed.

By understanding the relational nature of technological systems—how they exist not as singular entities but as parts of complex assemblages with emergent properties—we can approach the design and regulation of intimate technologies with greater wisdom. We can ask not just what these technologies do, but what they might become through their interactions with us and with each other.

The soul engineers of our time aren’t just the designers of AI systems but all of us who engage with them, reshaping their functions through our interactions. The garden is still under construction, the grasshoppers still evolving, and the future still unwritten.

References:

Latour, B. (1996). On actor-network theory: A few clarifications. Soziale Welt, 47(4), 369-381.

Latour, B. (1996). Aramis, or the love of technology (C. Porter, Trans.). Harvard University Press. (Original work published 1992).

Kokotajlo, D. et al. (2025). AI 2027 scenario. Retrieved from https://ai-2027.com/scenario.pdf

Mason, M. (2022). Considering Meme-Based Non-Fungible Tokens’ Racial Implications. M/C Journal, 25(2). https://doi.org/10.5204/mcj.2885

Merton, R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894-904. 

Neves, B. B., Waycott, J., & Maddox, A. (2023). When Technologies are Not Enough: The Challenges of Digital Interventions to Address Loneliness in Later Life. Sociological Research Online, 28(1), 150-170.

Shank, D. B., Koike, T., & Loughnan, S. (2025). Artificial Intimacy: Ethical Issues of AI Romance. Trends in Cognitive Sciences, 29(4), 327-341.

When Research Becomes “Big Tech Talking Points”: The Erosion of Good-faith Discourse on Social Media Regulation

As a sociologist of technology and educator focused on digital literacy, I’ve spent years working with research on the complex relationship between young people and social media. Recently, I found myself in an online discussion that exemplifies a troubling pattern in how we debate digital policy issues in Australia.

After I shared peer-reviewed research showing that, while some correlations exist between social media use and mental health outcomes, there is limited evidence of a causal relationship in which social media directly causes poor mental health or reduced wellbeing, I was quickly labelled as someone “shilling” for “Big Tech”, with my evidence-based positions dismissed as “talking points”.

Research points to how individuals with existing mental health challenges may gravitate toward certain types of social media use, rather than social media itself being the primary cause of these challenges. This important distinction highlights how nuanced research gets flattened into simplistic positions when policy discussions become emotionally charged.

The False Binary: Protect Kids or Support Big Tech

The current discourse around Australia’s social media age ban has created a false dichotomy: either you support sweeping restrictions or you’re somehow against protecting children. This reductive framing leaves no room for evidence-based approaches that aim to both protect young people and preserve their digital agency.

When I cite studies showing that social media use accounts for only 0.4% of the variance in well-being – findings published in reputable journals – these aren’t “industry talking points”. They’re research conclusions reached through rigorous methodology and peer review. As noted in a recent Nature article, the evidence linking social media use to mental health issues is far more equivocal than public discourse suggests.

Just look at what the research actually says: “An analysis of 3 data sets, including 355,000 adolescents, found that the association between social media use and well-being accounts for, at most, 0.4% of the variance in well-being, which the authors conclude is of ‘little practical value’. Another large study of adolescent users concluded that the association was ‘too small to merit substantial scientific discussion’. A longitudinal study that measured social media use through an app installed on participants’ mobile devices found no associations between any measures of Facebook use and loneliness or depression over time.”
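To put that figure in perspective, it helps to remember that variance explained and correlation are linked by a simple relationship: the correlation coefficient is the square root of the proportion of variance explained. The snippet below is a purely illustrative back-of-the-envelope calculation of what a 0.4% figure implies; it is not an analysis of any of the datasets quoted above.

```python
import math

# Proportion of variance in well-being associated with social media use,
# as reported in the studies quoted above: 0.4% = 0.004.
variance_explained = 0.004

# Since r^2 equals the proportion of variance explained, the implied
# correlation coefficient is its square root.
r = math.sqrt(variance_explained)

print(f"Implied correlation r = {r:.3f}")            # about 0.063
print(f"Shared variance: {variance_explained:.1%}")   # 0.4%
```

A correlation of roughly 0.06 is the kind of effect size the quoted authors describe as being of “little practical value”.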

The current push for age bans in Australia reveals concerning patterns in how policy is developed. Australian researchers have pointed out that much of the momentum behind these restrictions can be traced directly to Jonathan Haidt’s book “The Anxious Generation,” which has become influential despite its claims being disputed by experts at prestigious institutions like the London School of Economics. As Dr. Aleesha Rodriguez from the ARC Centre of Excellence for the Digital Child has observed, books that capitalise on parental anxieties should not drive national policy decisions, especially when they bypass evidence-based approaches and committee recommendations. The government’s announcement of social media age restrictions came before the Joint Select Committee on Social Media and Australian Society even issued its interim report, raising questions about the role of evidence in this policy development process. You’ll see that the final report came out on the 18th November 2024 and it did not recommend the implementation of age bans.

The Power of Emotional Appeals vs. Research Findings

But in our current climate, sharing such research and insights is met with accusations of being “in the pockets of Big Tech” or having “industry interference” – rhetorical devices designed to discredit without engaging with the substance of the evidence. This pattern of discourse relies heavily on emotional appeals and anecdotes to overwhelm research findings. “Children’s wellbeing (and lives) are at stake,” advocates declare, implying that questioning the effectiveness of age bans is equivalent to devaluing children’s safety.

These emotional appeals are powerful because they tap into genuine parental anxieties. In their public communications, advocates may employ evocative language (“stranglehold,” “insidious,” “shame on them all”) and frame the debate as a moral binary: either you support age bans or you’re effectively siding with “Big Tech” against children’s interests. This rhetorical approach creates a false dichotomy where nuanced research positions are dismissed as “industry talking points” without engaging with the substance of the evidence.

By contrast, research on children’s digital experiences draws on diverse empirical methods—including large-scale surveys, in-depth qualitative studies, longitudinal tracking, and co-design work with children themselves. This comprehensive approach captures a wide range of social experiences across different demographics and contexts. Such research undergoes rigorous peer review, requiring methodological transparency and critical evaluation before publication.

Importantly, the research landscape itself contains diverse perspectives and interpretations. Even within academic disciplines studying digital youth, researchers may disagree about the significance of findings, methodological approaches, and policy implications. Some researchers emphasise potential harms and advocate for stronger protections, while others highlight benefits and concerns about digital exclusion. This diversity of expert opinion reflects the complex nature of children’s digital engagement rather than undermining the value of research-informed approaches.

What most researchers do agree on is that the evidence doesn’t support simplistic narratives. The findings indicate that while correlations exist between social media use and well-being, many other factors play more significant roles, and the relationships are often bidirectional and context-dependent.

Policy decisions affecting millions of young Australians deserve more than anxiety-driven responses – they require careful consideration of evidence, unintended consequences, and alternative approaches that address both the genuine concerns of parents and the established digital rights of children.

When Nuance Gets Lost: The Digital Duty of Care Example

The irony is that I and many researchers share the same core concern as advocates: we want digital environments that are safer for young people. Where we differ is in how to achieve this goal effectively.

Australia’s Digital Duty of Care bill proposal, which has received far less media attention than the age ban, represents a more evidence-based approach to improving online safety; its much slower movement through parliament is telling. It focuses on making platforms safer by design rather than simply restricting access.

This legislation, developed through extensive consultation and aligned with comparable measures in the UK and EU, places responsibility on platforms to proactively prevent online harms. Yet because it lacks the emotional appeal of “keeping kids off social media”, it hasn’t captured public imagination in the same way.

I support making digital environments safer for young people. In line with the intent of this policy, research suggests this is better accomplished through platform design requirements, digital literacy education, and appropriate safeguards than through blanket age bans that may create unintended consequences.

The Overlooked Complexities

Lost in the simplified discourse are crucial considerations that research brings to light:

  1. Digital equity concerns: Age restrictions disproportionately impact young people in regional and remote areas who rely on social media for educational resources and social connection.
  2. Support for marginalised youth: For many LGBTQI+ young people and others who feel isolated in their physical communities, online spaces provide crucial support networks.
  3. Technical realities: The age verification technologies being proposed have significant technical limitations, with biometric age estimation showing concerning accuracy gaps for young teenagers and disparities across demographic groups.
  4. Platform compliance challenges: As we’ve seen with Meta’s pushback against EU regulations, we can’t assume platforms will simply comply with national regulations they see as burdensome for smaller markets.
  5. Educational implications: Schools face significant challenges in navigating restrictions that could inadvertently disrupt established educational practices that use social media platforms.

These complexities matter, not because they invalidate safety concerns, but because addressing them is essential to developing effective policy that truly serves young people’s interests.

Unintended Consequences of Age Verification Systems

A significant oversight in the age ban debate is how age verification technologies will inevitably impact all users—not just children. The government’s Age Assurance Technology Trial, while focused on “evaluating the effectiveness, maturity, and readiness” of these technologies, does not adequately address the far-reaching implications for adult digital access.

These systems, once implemented, create barriers for everyone—not just children. Adults who lack standard government-issued ID, have limited digital literacy, use shared devices, or have privacy concerns may find themselves effectively locked out of digital spaces. This particularly affects already marginalised groups: elderly people, rural and remote communities, people with disabilities, individuals from lower socioeconomic backgrounds, and those with non-traditional documentation.

Age verification systems that rely on biometric data, ID scanning, or credit card verification raise serious privacy concerns that extend well beyond children’s safety. Once these surveillance infrastructures are established for “protecting children,” they create permanent digital checkpoints that normalise identity verification for increasingly basic online activities. The same parents advocating for these protections may not anticipate how these systems will affect their own digital autonomy and privacy.

Moreover, the technical limitations of age verification technologies create a false sense of security. Current systems struggle with accuracy, particularly for users with certain disabilities, those from diverse ethnic backgrounds, or individuals whose appearance doesn’t match algorithmic expectations. Rather than creating safe digital environments through design and platform responsibility, age verification shifts the burden to individual users while potentially exposing their sensitive personal data to additional security risks.

Children’s Rights in the Digital Environment

What’s frequently missing from this debate is recognition of children’s established rights in digital spaces. The UN Committee on the Rights of the Child’s General Comment No. 25 (2021) specifically addresses children’s rights in relation to the digital environment. This authoritative interpretation clarifies that children have legitimate rights to:

  • Access information and express themselves online (Articles 13 and 17)
  • Privacy and protection of their data (Article 16)
  • Freedom of association and peaceful assembly in digital spaces (Article 15)
  • Participation in cultural life and play through digital means (Article 31)
  • Education that includes digital literacy (Article 28)

The UN framework emphasises that the digital environment “affords new opportunities for the realization of children’s rights” while acknowledging the need for appropriate protections. It specifically notes that children themselves report that digital technologies are “vital to their current lives and to their future.”

This rights-based framework fundamentally challenges the premise that children should simply be excluded from digital spaces until they reach an arbitrary age threshold. Instead, it calls for balancing protection with participation and recognising children’s evolving capacities.

The Australian context

In Australia, the digital rights of children are recognised and protected, encompassing privacy, safety, and access to information, with organisations like the eSafety Commissioner and the Alannah & Madeline Foundation playing key roles in advocacy and research. 

Here’s a more detailed breakdown of the digital rights of children in Australia:

Key Rights and Protections: 

  • Privacy: Children have the right to privacy in the digital environment, which is protected by the Privacy Act 1988. 
  • Safety: The eSafety Commissioner works to protect children from online harms like cyberbullying, grooming, and exposure to harmful content. 
  • Access to Information: Children have the right to access reliable and age-appropriate information online. 
  • Freedom of Expression: Children have the right to express themselves online, but this right must be balanced with the need to protect them from harm. 
  • Participation: Children have the right to participate in online activities and to have their views heard, especially in matters that affect them. 

Relevant Organisations and Initiatives: 

  • eSafety Commissioner: This government agency is responsible for promoting online safety and protecting children from online harms. 
  • Alannah & Madeline Foundation: This organisation advocates for children’s rights online and works to create a safer online environment for children. 
  • Australian Research Council Centre of Excellence for the Digital Child: This research centre focuses on creating positive digital childhoods for all Australian children. 
  • UNCRC General Comment No. 25: This document outlines the rights of the child in relation to the digital environment and provides guidance for governments and other actors. 
  • The Digital Child: A research and advocacy organisation focused on children’s digital rights and wellbeing. 
  • UNICEF Australia: Collaborates with the Digital Child centre to promote digital wellbeing for young children. 
  • Digital Rights Watch: An organisation that works to ensure fairness, freedoms and fundamental rights for all people who engage in the digital world.

Key Issues and Challenges: 

  • Online Safety: Protecting children from online harms like cyberbullying, grooming, and exposure to harmful content is a major concern. 
  • Privacy: Balancing the need to protect children’s privacy with the need for parents and caregivers to monitor their online activity is a complex issue. 
  • Age Verification: Ensuring that children are not exposed to age-inappropriate content and that they are not targeted by online services is important.
  • Misinformation and Disinformation: Children are vulnerable to misinformation and disinformation online, and it’s important to equip them with the skills to identify and avoid it. 
  • Technology-Facilitated Abuse: Children can be victims of technology-facilitated abuse (TFA) in the context of domestic and family violence, and it’s important to address this issue. 
  • Parental Rights vs. Children’s Privacy: The extent to which parents can monitor their children’s online activity is a complex issue with legal implications. 
  • Digital Literacy: It’s important to support digital literacy initiatives that encourage and empower children to take further responsibility for their online safety. 

Alternative Approaches: A Better Children’s Internet

Australian researchers are offering a more constructive approach to online safety than blanket age restrictions. In a timely article, researchers from the ARC Centre of Excellence for the Digital Child explain that while they understand the concerns motivating the Australian Government’s decision to ban children under 16 from creating social media accounts, they believe this approach “undermines the reality that children are growing up in a digital world”.

They have developed a “Manifesto for a Better Children’s Internet” that acknowledges both the benefits and risks of digital engagement while focusing on practical improvements. They argue that “rather than banning young people’s access to social media platforms, the Australian Government should invest, both financially and socially, in developing Australia’s capacity as a global leader in producing and supporting high-quality online products and services for children and young people.”

Their framework includes several key recommendations:

  • Standards for high-quality digital experiences – Developing clear quality standards for digital products and services aimed at children, with input from multiple stakeholders including children themselves.
  • Slow design and consultation with children – Involving children and families in the design process rather than using them as “testing markets” for products and services.
  • Child-centred regulation and policy – Creating appropriate “guardrails” through regulatory guidelines developed with input from children, carers, families, educators and experts.
  • Media literacy policy and programs – Investing in media literacy education for both children and parents to develop the skills needed to navigate digital environments safely and productively.

This approach acknowledges that the internet “has enhanced children’s lives in many ways” while recognising it “was not designed with children in mind.” Rather than simply restricting access, it focuses on redesigning digital spaces to better serve young people’s needs and respecting their agency in the process.

This framework offers a promising middle path between unrestricted access and blanket prohibitions, focusing on improvement rather than exclusion.

Moving Forward: Good faith engagement

What would a more productive discourse look like? Rather than dividing positions into “protectors of children” versus “Big Tech shills,” we need approaches that:

  • Recognise children’s established rights: Digital policy should acknowledge children’s legitimate rights to information, expression, association, privacy, and participation as articulated in the UN Convention on the Rights of the Child.
  • Engage with the full evidence base: This includes both research on potential harms and studies showing limited correlations or positive benefits, with a commitment to understanding the methodological strengths and limitations of different studies.
  • Centre young people’s voices: The young people affected by these policies have valuable perspectives that deserve genuine consideration, not dismissal as naive or manipulated.
  • Acknowledge trade-offs: Every policy approach involves trade-offs between protection, privacy, and participation rights. Pretending otherwise doesn’t serve anyone.
  • Focus on effective solutions: Research suggests a combination of platform design improvements, digital literacy education, and more nuanced moderation systems may be more effective than simply setting age limits.
  • Maintain good faith dialogue: Rather than using emotional appeals and moral accusations to shut down debate, all participants should approach these discussions with the genuine belief that others share the concern for children’s wellbeing, even when they disagree about methods.

This approach would move us beyond simplistic binaries and rhetorical tactics toward policies that genuinely serve children’s best interests in all their complexity.

I remain committed to research-informed approaches to making digital spaces safer for young people. This doesn’t mean blindly defending the status quo, but rather advocating for solutions that address the real complexities of young people’s digital lives while respecting their established rights.

The Digital Duty of Care legislation offers a promising framework that places responsibility on platforms to make their services safer for all users through design choices, risk assessment, and mitigation strategies. Combined with robust digital literacy education and appropriate parental controls, this represents a more comprehensive approach than age restrictions alone.

As the social media landscape continues to evolve, maintaining evidence-based discourse matters more than ever. Dismissing research as “talking points” doesn’t advance the conversation – it closes it down just when we need it most.

Young Australians deserve digital policies crafted through careful consideration of evidence, informed by young people’s perspectives, and grounded in their established rights. That’s not a “Big Tech talking point” – it’s responsible, ethical policymaking that centres the needs and interests of the very people these policies aim to serve.

Between Promise and Peril: The AI Paradox in Family Violence Response

By Dr. Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures, School of Education, La Trobe University

When Smart Systems Meet Human Stakes

The integration of artificial intelligence into our legal system presents a profound paradox. The same AI tools promising unprecedented efficiency in predicting and preventing family violence can simultaneously amplify existing biases and create dangerous blind spots.

This tension between technological promise and human care, support and protection isn’t theoretical: it’s playing out in real time across legal systems worldwide. Through my involvement in last year’s AuDIITA Symposium, specifically its theme on AI and family violence, I took part in discussions that highlighted the high-stakes applications of AI in family violence response. I found that the question isn’t whether AI can help, but how we can ensure it enhances rather than replaces human judgment in these critical contexts.

The Capabilities and the Gaps

Recent advances in AI for family violence response show remarkable technical promise:

  • Researchers have achieved over 75% accuracy in distinguishing between lethal and non-lethal violence cases using AI analysis of legal documents
  • Machine learning systems can identify patterns in administrative data that might predict escalation before it occurs
  • Natural language processing tools can potentially identify family violence disclosures on social media platforms
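For readers unfamiliar with what sits behind claims like these, the sketch below shows, in the most minimal terms, what a document-classification pipeline of this general kind looks like. It assumes scikit-learn and uses entirely invented example text and labels; it illustrates the technique, not the method used in any of the studies referenced above.

```python
# Minimal illustrative sketch of a text-classification pipeline of the general
# kind described above. All example documents and labels are invented; real
# systems are trained on large corpora of labelled legal or case-file documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "repeated threats and prior breaches of an intervention order recorded",
    "first report, no prior history on file, parties now living separately",
    "escalating controlling behaviour noted alongside access to weapons",
    "property dispute reported, no violence or threats alleged",
]
labels = ["higher_risk", "lower_risk", "higher_risk", "lower_risk"]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# The output is only as good as the data, labels, and context behind it,
# which is exactly where the implementation gap discussed below opens up.
print(model.predict(["prior breaches and escalating threats described in report"]))
```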

But these impressive capabilities obscure a troubling implementation gap. What happens when these systems encounter the messy reality of human services?

The VioGén Warning

Spain’s VioGén system offers a sobering case study. Despite being hailed as a world-leading predictive tool for family violence risk, its flaws led to tragic outcomes—with at least 247 women killed after being assessed, many after being classified as “low” or “negligible” risk.

The system’s failures stemmed from multiple factors:

  • Victims were often too afraid or ashamed to provide complete information
  • Police accepted algorithmic recommendations 95% of the time despite lacking resources for proper investigation
  • The algorithm potentially missed crucial contextual factors that human experts might have caught
  • Most critically, the system’s presence seemed to reduce human agency in decision-making, with police and judges deferring to its risk scores even when other evidence suggested danger

Research revealed that women born outside Spain were five times more likely to be killed after filing family violence complaints than Spanish-born women. This suggests the system inadequately accounted for the unique vulnerabilities of immigrant women, particularly those facing linguistic barriers or fears of deportation.

The Cultural Blind Spot

This pattern of leaving vulnerable populations behind reflects a broader challenge in technology development. Research on technology-facilitated abuse has consistently shown how digital tools can disproportionately impact culturally and linguistically diverse women, who often face a complex double-bind:

  • More reliant on technology to maintain vital connections with family overseas
  • Simultaneously at increased risk of technological abuse through those same channels
  • Often experiencing unique forms of technology-facilitated abuse, such as threats to expose culturally sensitive information

For AI risk assessment to work, it must explicitly account for how indicators of abuse and coercive control manifest differently across cultural contexts. Yet research shows even state-of-the-art systems struggle with this nuance, achieving only 76% accuracy in identifying family violence reports that use indirect or culturally specific language.

Beyond Algorithms: The Human Element

What does this mean for the future of AI in family violence response? My research suggests three critical principles must guide implementation:

1. Augment, Don’t Replace

AI systems must be designed to enhance professional judgment rather than constrain it or create efficiency dependencies. This means creating systems that:

  • Provide transparent reasoning for risk assessments
  • Allow professionals to override algorithmic recommendations based on contextual factors
  • Present information as supportive evidence rather than definitive judgment
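As a thought experiment only, the sketch below shows one way those three properties (transparent reasoning, a recorded professional override, and advisory rather than definitive output) might be expressed in a decision-support data structure. The names and fields are hypothetical and do not describe any deployed system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RiskAssessment:
    """Advisory output: supportive evidence for a professional, not a verdict."""
    risk_level: str                   # e.g. "elevated"; framed as advice, not judgment
    supporting_factors: List[str]     # transparent reasoning shown to the practitioner
    model_confidence: float           # uncertainty is surfaced rather than hidden
    professional_override: Optional[str] = None  # practitioner's contrary judgment, with reasons

assessment = RiskAssessment(
    risk_level="elevated",
    supporting_factors=["prior breach recorded", "recent escalation reported"],
    model_confidence=0.62,
)

# The practitioner, not the system, makes the final call and documents why.
assessment.professional_override = (
    "Contextual factors the model cannot see suggest higher risk; "
    "escalating to a specialist service."
)
print(assessment)
```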

2. Design for Inclusivity from the Start

AI systems must explicitly account for diversity in how family violence manifests across different communities:

  • Include diverse data sources and perspectives in development
  • Build systems capable of recognising cultural variations in disclosure patterns
  • Ensure technology respects various epistemologies, including Indigenous perspectives

3. Maintain Robust Accountability

Implementation frameworks must preserve professional autonomy and expertise:

  • Ensure adequate resourcing for human assessment alongside technological tools
  • Create clear guidelines for when algorithmic recommendations should be questioned
  • Maintain transparent review processes to identify and address algorithmic bias

Victoria’s Balanced Approach

In Victoria and across Australia, there is encouraging evidence of a balanced approach to AI in legal contexts. While embracing technological advancements, Victorian courts have shown appropriate caution around AI use in evidence and maintained strict oversight to ensure the integrity of legal proceedings.

This approach—maintaining human oversight while allowing limited AI use in lower-risk contexts—aligns with what research suggests is crucial for successful integration: preserving professional judgment and accountability, particularly in cases involving vulnerable individuals.

The Path Forward

As we navigate the next wave of technological transformation in legal practice, we face a critical choice. We can allow AI to become a “black box of justice” that undermines transparency and human agency, or we can harness its potential while maintaining the essential human elements that make our legal system work.

Success will require not just technological sophistication but careful attention to institutional dynamics, professional practice patterns, and the complex social contexts in which these technologies operate. Most critically, it demands recognition that in high-stakes human service contexts, technology must serve human needs and judgment rather than constrain them.

The AI paradox in law is that the very tools promising to make our systems more efficient also risk making them less just. By centering human dignity and professional judgment as we develop these systems, we can navigate between the promise and the peril to create a future where technology truly serves justice.


Dr. Alexia Maddox will be presenting on “The AI Paradox in Law: When Smart Systems Meet Human Stakes – Navigating the Promise and Perils of Legal AI through 2030” at the upcoming 2030: The Future of Technology & the Legal Industry Forum on March 19, 2025, at the Grand Hyatt Melbourne.