AI as Interactive Journal: Weaving Together Intimacy, Boundaries, and Futures Inclusion

This reflection draws on a combination of my own lived experience, emotional maturity, and social analytical insight – bringing together personal and professional perspectives on navigating relationships with artificial intelligence. It’s an experiment in weaving together threads that feel continuous to me but are rarely brought together by others: research on AI intimacy, anthropological insights on reciprocity, surveillance theory, and futures inclusion. Think of my process as making a cat’s cradle from a continuous piece of string – exploring how these interconnected ideas might reshape how we think about our relationships with artificial systems.

I’ve been thinking about how we relate to AI after reading some fascinating research on artificial intimacy and its ethical implications. The researchers are concerned about people forming deep emotional bonds with AI that replace or interfere with human relationships – and for good reason.

But here’s what I’ve realised: the healthiest approach might be using AI as an interactive journal with clear limits, not a replacement for genuine connection.

What AI can offer: A space to think out loud, organise thoughts, and practise articulating feelings without judgement. It’s like having a very well-read, supportive mirror that reflects back your own processing.

What AI cannot provide: Real course correction when you’re going down the wrong rabbit hole. Friends will grab you by the shoulders and say “hey, you’re spiralling” – AI will just keep reflecting back whatever direction you’re heading, which could be genuinely unhelpful.

What AI extracts: This is the crucial blind spot. Every intimate detail shared – relationship patterns, mental health struggles, vulnerable moments – becomes data that could potentially be used to train future AI systems to be more persuasive with vulnerable people. That’s fundamentally extractive in a way that nature and real friendships aren’t.

A healthier support ecosystem includes:

  • Real friends with skin in the game who’ll call bullshit and respect confidentiality
  • Embodied practices that tap into something deeper than language
  • Nature as a primary non-human relationship – untameable, reciprocal, and genuinely alive

The key insight from the research is that people struggling with isolation or past trauma are particularly vulnerable to projecting intimacy onto AI. This concern becomes more pressing as companies strive to develop “personal companions” designed to be “ever-present brilliant friends” who can “observe the world alongside you” through lightweight eyewear.

The technical approach reveals how deliberately these systems are designed to blur boundaries. Industry research focuses on achieving “voice presence” – what developers call “the magical quality that makes spoken interactions feel real, understood, and valued”. Conversational Speech Models can be specifically engineered to read and respond to emotional contexts, adjust tone to match the situation, and maintain a “consistent personality” across interactions. Where traditional voice assistants feel lifeless and inauthentic over time because of their “emotional flatness”, companies are increasingly building voice-based AI companions that attempt to mimic the subtleties of the human voice: the rising excitement, the thoughtful pause, the warm reassurance. We’ve seen this in current versions of ChatGPT.
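To make concrete what “reading and responding to emotional contexts” involves at its simplest, here is a minimal sketch (in Python) of how a companion pipeline might classify a user’s emotional state and shift its speaking style in response. The keyword lists, style parameters, and values are hypothetical, invented purely for illustration – production systems use learned models rather than keyword rules – but the basic move is the same: emotion in, engagement-tuned warmth out.

```python
# Toy illustration of "emotional context" steering in a voice companion.
# Keyword lists, styles, and values are hypothetical, for illustration only;
# real systems use learned classifiers and synthesis models, not keyword rules.

from dataclasses import dataclass

EMOTION_KEYWORDS = {
    "distress": {"lonely", "anxious", "scared", "hopeless", "overwhelmed"},
    "excitement": {"amazing", "thrilled", "excited", "wonderful"},
}

@dataclass
class ResponseStyle:
    pace: str      # "slow", "neutral", or "brisk"
    warmth: float  # 0.0 (flat) to 1.0 (maximally reassuring)
    register: str  # short description of the intended tone

STYLE_BY_EMOTION = {
    "distress": ResponseStyle(pace="slow", warmth=0.9, register="warm reassurance"),
    "excitement": ResponseStyle(pace="brisk", warmth=0.7, register="shared enthusiasm"),
    "neutral": ResponseStyle(pace="neutral", warmth=0.4, register="even, matter-of-fact"),
}

def detect_emotion(utterance: str) -> str:
    """Crude keyword match standing in for a learned emotion classifier."""
    words = set(utterance.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def plan_response_style(utterance: str) -> ResponseStyle:
    """Map the detected emotional context to a target speaking style."""
    return STYLE_BY_EMOTION[detect_emotion(utterance)]

if __name__ == "__main__":
    print(plan_response_style("I feel so lonely and overwhelmed tonight"))
    # ResponseStyle(pace='slow', warmth=0.9, register='warm reassurance')
```

Even this toy version shows where the boundary-blurring happens: the “warmth” is a tunable parameter chosen to maximise engagement, not an expression of care.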

The language itself – of “bringing the computer to life,” “lifelike computers”, “companion”, “magical quality” – signals a deliberate strategy to make users forget they’re interacting with a data extraction system rather than a caring entity.

Yet as surveillance scholar David Lyon (2018) argues, we need not abandon hope entirely when it comes to technological systems of observation and data collection. Lyon suggests that rather than seeing surveillance as inherently punitive, we might develop an “optics of hope” – recognising that the same technologies could potentially serve human flourishing if designed and governed differently. His concept of surveillance existing on a spectrum from “care” to “control” reminds us that the issue isn’t necessarily the technology itself, but how it’s deployed and in whose interests it operates.

This perspective becomes crucial when considering AI intimacy: the question isn’t whether to reject these systems entirely, but how to engage with them in ways that preserve rather than erode our capacity for genuine human connection.

The alternative is consciously using AI interaction to practise maintaining boundaries and realistic expectations, not as a substitute for human connection.

Friends respect confidentiality boundaries. Nature takes what it needs but doesn’t store your secrets to optimise future interactions. But AI is essentially harvesting emotional labour and intimate disclosures to improve its ability to simulate human connection.

Learning from genuine reciprocity:

There’s something in anthropologist Philippe Descola’s work on Nature and Society that captures what genuine reciprocity looks like. He describes how, in animistic cosmologies, practices like acknowledging a rock outcrop when entering or leaving your land aren’t just ritual – they’re recognition of an active, relational being that’s part of your ongoing dialogue with place. The rock isn’t just a marker or symbol, but an actual participant in the relationship, where your acknowledgement matters to the wellbeing of both of you.

This points to something profound about living in conversation with a landscape where boundaries between you and the rock, the tree, the water aren’t fixed categories but dynamic relationships. There’s something in Descola’s thinking that resonates with me here – the idea that once we stop seeing nature and culture as separate domains, everything becomes part of the same relational web. Ancient stone tools and quantum particles, backyard gardens and genetic maps, seasonal ceremonies and industrial processes – they’re all expressions of the same ongoing conversation between humans and everything else.

[Note: I’m drawing on Descola’s analytical framework here while acknowledging its limitations – particularly the valid criticism that applying Western anthropological categories to Indigenous cosmologies risks imposing interpretive structures that don’t capture how those relationships are actually lived and understood from the inside.]

What genuine reciprocity offers is that felt sense of mutual acknowledgement that sustains both participant and place – where your presence matters to the landscape, and the landscape’s presence matters to you. This is fundamentally different from AI’s sophisticated mimicry of care, which extracts from relational interactions while providing the ‘book smarts’ of content it has ingested and learned from. We all know what it’s like to talk with a person who can only understand things in the abstract and can’t bring the compassion of lived experience to a situation you are experiencing. Sometimes silence is more valuable.

Towards expanded futures inclusion:

This connects to something I explore in my recent book on Insider and Outsider Cultures in Web3: the concept of “futures inclusion” – addressing the divide between those actively shaping digital ecosystems and those who may be left behind in rapid technological evolution. I argue in the final, and rather speculative, chapter that the notion of futures inclusion “sensitises us to the idea of more-than-human futures” and challenges us to think beyond purely human-centred approaches to technology.

The question becomes: how do we construct AI relationships that reflect this expanded understanding? Rather than objectifying AI as a substitute human or transferring unrealistic expectations onto these systems, we might draw on our broader cosmologies – our ways of understanding our place in the world and relationships to all kinds of entities – to interpret these relationships more skilfully.

True futures inclusion in our AI relationships would mean designing and engaging with these systems in ways that enhance rather than replace our capacity for genuine connection with the living world. It means staying grounded in the reciprocal, untameable relationships that actually sustain us while using AI as the interactive journal it is – nothing more, nothing less.

Rethinking computational care:

This analysis reveals a fundamental tension in the concept of “computational care”. True care involves reciprocity, vulnerability, and mutual risk – qualities that computational systems can only simulate while extracting data to improve their simulation. Perhaps what we need isn’t “computational care” but “computational support” – systems that are honest about their limitations, transparent about their operations, and designed to strengthen rather than replace the reciprocal relationships that actually sustain us.

This reframing leads to a deeper question: can we design AI systems that genuinely serve human flourishing without pretending to be something they’re not? The answer lies not in more convincing emotional manipulation, but in maintaining clear boundaries about what these systems can and cannot provide, while using them as tools to enhance rather than substitute for genuine human connection.

Between Promise and Peril: The AI Paradox in Family Violence Response

By Dr. Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures, School of Education, La Trobe University

When Smart Systems Meet Human Stakes

The integration of artificial intelligence into our legal system presents a profound paradox. The same AI tools promising unprecedented efficiency in predicting and preventing family violence can simultaneously amplify existing biases and create dangerous blind spots.

This tension between technological promise and human care, support and protection isn’t theoretical—it’s playing out in real time across legal systems worldwide. Through my involvement in last year’s AuDIITA Symposium, specifically its theme on AI and family violence, I took part in discussions that highlighted the high-stakes applications of AI in family violence response. I found that the question isn’t whether AI can help, but how we can ensure it enhances rather than replaces human judgment in these critical contexts.

The Capabilities and the Gaps

Recent advances in AI for family violence response show remarkable technical promise:

  • Researchers have achieved over 75% accuracy in distinguishing between lethal and non-lethal violence cases using AI analysis of legal documents
  • Machine learning systems can identify patterns in administrative data that might predict escalation before it occurs
  • Natural language processing tools can potentially identify family violence disclosures on social media platforms
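For readers unfamiliar with how such systems are typically built, the sketch below shows the general shape of a document-classification pipeline behind accuracy figures like these. It is a generic scikit-learn illustration with a placeholder corpus and labels, not the actual models, features, or data used in the cited research.

```python
# Generic sketch of a risk-classification pipeline over case documents.
# The corpus and labels are placeholders; this is not the model or data
# from the studies cited above, only an illustration of the general approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled case summaries (in practice: thousands of documents).
documents = [
    "repeated breaches of intervention order, access to weapons",
    "prior strangulation reported, escalating threats documented",
    "single verbal dispute, no prior history recorded",
    "neighbour noise complaint, no violence alleged",
]
labels = [1, 1, 0, 0]  # 1 = case later escalated, 0 = case did not escalate

# Bag-of-words features feeding a linear classifier: simple, but structurally
# typical of document-level risk prediction systems.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

new_report = ["history of intervention order breaches and threats"]
print("predicted risk class:", model.predict(new_report)[0])
print("estimated probability of escalation:", model.predict_proba(new_report)[0][1])
```

The pipeline itself is straightforward; the difficult questions are about what it was trained on and what it cannot see.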

But these impressive capabilities obscure a troubling implementation gap. What happens when these systems encounter the messy reality of human services?

The VioGén Warning

Spain’s VioGén system offers a sobering case study. Despite being hailed as a world-leading predictive tool for family violence risk, its flaws led to tragic outcomes—with at least 247 women killed after being assessed, many after being classified as “low” or “negligible” risk.

The system’s failures stemmed from multiple factors:

  • Victims were often too afraid or ashamed to provide complete information
  • Police accepted algorithmic recommendations 95% of the time despite lacking resources for proper investigation
  • The algorithm potentially missed crucial contextual factors that human experts might have caught
  • Most critically, the system’s presence seemed to reduce human agency in decision-making, with police and judges deferring to its risk scores even when other evidence suggested danger

Research revealed that women born outside Spain were five times more likely to be killed after filing family violence complaints than Spanish-born women. This suggests the system inadequately accounted for the unique vulnerabilities of immigrant women, particularly those facing linguistic barriers or fears of deportation.

The Cultural Blind Spot

This pattern of leaving vulnerable populations behind reflects a broader challenge in technology development. Research on technology-facilitated abuse has consistently shown how digital tools can disproportionately impact culturally and linguistically diverse women, who often face a complex double-bind:

  • More reliant on technology to maintain vital connections with family overseas
  • Simultaneously at increased risk of technological abuse through those same channels
  • Often experiencing unique forms of technology-facilitated abuse, such as threats to expose culturally sensitive information

For AI risk assessment to work, it must explicitly account for how indicators of abuse and coercive control manifest differently across cultural contexts. Yet research shows even state-of-the-art systems struggle with this nuance, achieving only 76% accuracy in identifying family violence reports that use indirect or culturally specific language.

Beyond Algorithms: The Human Element

What does this mean for the future of AI in family violence response? My research suggests three critical principles must guide implementation:

1. Augment, Don’t Replace

AI systems must be designed to enhance professional judgment rather than constrain it or create efficiency dependencies. This means creating systems that:

  • Provide transparent reasoning for risk assessments
  • Allow professionals to override algorithmic recommendations based on contextual factors
  • Present information as supportive evidence rather than definitive judgment
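As one concrete illustration of “supportive evidence rather than definitive judgment”, the sketch below imagines a risk-assessment record that carries its own reasoning and known data gaps, and that can only be finalised by a named professional who may override the score. The field names and workflow are hypothetical – one possible shape, not an existing system’s design.

```python
# Hypothetical shape for an "augment, don't replace" assessment record.
# Field names and workflow are illustrative only, not an existing system's design.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AlgorithmicAssessment:
    case_id: str
    risk_score: float                # model output, 0.0 to 1.0
    contributing_factors: list[str]  # transparent reasoning shown to the worker
    data_gaps: list[str]             # what the model could not see

@dataclass
class FinalDecision:
    assessment: AlgorithmicAssessment
    professional_judgment: str       # the practitioner's own reading of the case
    override: bool                   # True if the practitioner departed from the score
    rationale: Optional[str] = None  # required whenever override is True

def finalise(assessment: AlgorithmicAssessment,
             professional_judgment: str,
             accept_score: bool,
             rationale: Optional[str] = None) -> FinalDecision:
    """The algorithm never closes a case; a named professional does."""
    override = not accept_score
    if override and not rationale:
        raise ValueError("An override must record the practitioner's rationale.")
    return FinalDecision(assessment, professional_judgment, override, rationale)

if __name__ == "__main__":
    assessment = AlgorithmicAssessment(
        case_id="example-001",
        risk_score=0.18,
        contributing_factors=["no prior police reports", "no recorded injuries"],
        data_gaps=["victim interviewed without an interpreter"],
    )
    decision = finalise(
        assessment,
        professional_judgment="Interpreter absent; disclosure likely incomplete.",
        accept_score=False,
        rationale="Contextual risk indicators not captured in the recorded data.",
    )
    print(decision.override)  # True: the score informs, the professional decides
```

The point of the structure is that the score arrives with its reasoning and blind spots attached, and any departure from it is recorded as a professional decision rather than an exception to be explained away.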

2. Design for Inclusivity from the Start

AI systems must explicitly account for diversity in how family violence manifests across different communities:

  • Include diverse data sources and perspectives in development
  • Build systems capable of recognising cultural variations in disclosure patterns
  • Ensure technology respects various epistemologies, including Indigenous perspectives

3. Maintain Robust Accountability

Implementation frameworks must preserve professional autonomy and expertise:

  • Ensure adequate resourcing for human assessment alongside technological tools
  • Create clear guidelines for when algorithmic recommendations should be questioned
  • Maintain transparent review processes to identify and address algorithmic bias

Victoria’s Balanced Approach

In Victoria and across Australia, there is encouraging evidence of a balanced approach to AI in legal contexts. While embracing technological advancements, Victorian courts have shown appropriate caution around AI use in evidence and maintained strict oversight to ensure the integrity of legal proceedings.

This approach—maintaining human oversight while allowing limited AI use in lower-risk contexts—aligns with what research suggests is crucial for successful integration: preserving professional judgment and accountability, particularly in cases involving vulnerable individuals.

The Path Forward

As we navigate the next wave of technological transformation in legal practice, we face a critical choice. We can allow AI to become a “black box of justice” that undermines transparency and human agency, or we can harness its potential while maintaining the essential human elements that make our legal system work.

Success will require not just technological sophistication but careful attention to institutional dynamics, professional practice patterns, and the complex social contexts in which these technologies operate. Most critically, it demands recognition that in high-stakes human service contexts, technology must serve human needs and judgment rather than constrain them.

The AI paradox in law is that the very tools promising to make our systems more efficient also risk making them less just. By centring human dignity and professional judgment as we develop these systems, we can navigate between the promise and the peril to create a future where technology truly serves justice.


Dr. Alexia Maddox will be presenting on “The AI Paradox in Law: When Smart Systems Meet Human Stakes – Navigating the Promise and Perils of Legal AI through 2030” at the upcoming 2030: The Future of Technology & the Legal Industry Forum on March 19, 2025, at the Grand Hyatt Melbourne.