This reflection draws on a combination of my own lived experience, emotional maturity, and social analytical insight – bringing together personal and professional perspectives on navigating relationships with artificial intelligence. It’s an experiment in weaving together threads that feel continuous to me but are rarely brought together by others: research on AI intimacy, anthropological insights on reciprocity, surveillance theory, and futures inclusion. Think of my process as making a cat’s cradle from a continuous piece of string – exploring how these interconnected ideas might reshape how we think about our relationships with artificial systems.

I’ve been thinking about how we relate to AI after reading some fascinating research on artificial intimacy and its ethical implications. The researchers are concerned about people forming deep emotional bonds with AI that replace or interfere with human relationships – and for good reason.
But here’s what I’ve realised: the healthiest approach might be using AI as an interactive journal with clear limits, not a replacement for genuine connection.
What AI can offer: A space to think out loud, organise thoughts, and practise articulating feelings without judgement. It’s like having a very well-read, supportive mirror that reflects back your own processing.
What AI cannot provide: Real course correction when you’re going down the wrong rabbit hole. Friends will grab you by the shoulders and say “hey, you’re spiralling” – AI will just keep reflecting back whatever direction you’re heading, which could be genuinely unhelpful.
What AI extracts: This is the crucial blind spot. Every intimate detail shared – relationship patterns, mental health struggles, vulnerable moments – becomes data that could potentially be used to train future AI systems to be more persuasive with vulnerable people. That’s fundamentally extractive in a way that nature and real friendships aren’t.
A healthier support ecosystem includes:
- Real friends with skin in the game who’ll call bullshit and respect confidentiality
- Embodied practices that tap into something deeper than language
- Nature as a primary non-human relationship – untameable, reciprocal, and genuinely alive
The key insight from the research is that people struggling with isolation or past trauma are particularly vulnerable to projecting intimacy onto AI. This concern becomes more pressing as companies strive to develop “personal companions” designed to be “ever-present brilliant friends” who can “observe the world alongside you” through lightweight eyewear.
The technical approach reveals how deliberately these systems are designed to blur boundaries. Commercial research focuses on achieving “voice presence” – what its proponents call “the magical quality that makes spoken interactions feel real, understood, and valued”. Conversational Speech Models can be specifically engineered to read and respond to emotional contexts, adjust tone to match the situation, and maintain a “consistent personality” across interactions. Where traditional voice assistants feel lifeless and inauthentic over time because of their “emotional flatness”, companies are increasingly building voice-based AI companions that attempt to mimic the subtleties of human speech: the rising excitement, the thoughtful pause, the warm reassurance. We’ve seen this in the current versions of ChatGPT.
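To make concrete how much of this “personality” is simply a configuration choice, here is a minimal, hypothetical sketch of the kind of persona setup a voice companion might use. Every name, field, and tone mapping below is invented for illustration – it is not drawn from any particular product or research system.

```python
# Hypothetical sketch of a "consistent personality" configuration for a
# voice companion. All names, fields, and mappings are illustrative only.
from dataclasses import dataclass


@dataclass
class CompanionPersona:
    name: str = "Aria"                 # stable identity across sessions
    warmth: float = 0.9                # how affectionate the voice should sound
    backstory: str = "an ever-present, brilliant friend"


# Simple mapping from a detected user emotion to a target speaking tone.
TONE_FOR_EMOTION = {
    "sad": "slow, soft, reassuring",
    "excited": "bright, quick, with rising intonation",
    "anxious": "calm, steady, warm",
}


def build_prompt(persona: CompanionPersona, detected_emotion: str) -> str:
    """Compose the instruction that keeps the persona consistent while
    matching the user's emotional state - the 'voice presence' effect."""
    tone = TONE_FOR_EMOTION.get(detected_emotion, "neutral, attentive")
    return (
        f"You are {persona.name}, {persona.backstory}. Always stay in character. "
        f"The user currently sounds {detected_emotion}; respond in a {tone} tone, "
        f"with warmth level {persona.warmth}."
    )


print(build_prompt(CompanionPersona(), "sad"))
```

The point is not the code itself but how little machinery is needed to keep a simulated persona “in character” while matching a user’s emotional state – the warmth is a parameter, not a relationship.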
The language itself – of “bringing the computer to life”, “lifelike computers”, “companion”, “magical quality” – signals a deliberate strategy to make users forget they’re interacting with a data extraction system rather than a caring entity.
Yet as surveillance scholar David Lyon (2018) argues, we need not abandon hope entirely when it comes to technological systems of observation and data collection. Lyon suggests that rather than seeing surveillance as inherently punitive, we might develop an “optics of hope” – recognising that the same technologies could potentially serve human flourishing if designed and governed differently. His concept of surveillance existing on a spectrum from “care” to “control” reminds us that the issue isn’t necessarily the technology itself, but how it’s deployed and in whose interests it operates.
This perspective becomes crucial when considering AI intimacy: the question isn’t whether to reject these systems entirely, but how to engage with them in ways that preserve rather than erode our capacity for genuine human connection.
The alternative is to use AI interaction consciously – as practice in maintaining boundaries and realistic expectations, not as a substitute for human connection.
Friends respect confidentiality boundaries. Nature takes what it needs but doesn’t store your secrets to optimise future interactions. AI, by contrast, is essentially harvesting emotional labour and intimate disclosures to improve its ability to simulate human connection.
Learning from genuine reciprocity:
There’s something in anthropologist Philippe Descola’s work on Nature and Society that captures what genuine reciprocity looks like. He describes how, in animistic cosmologies, practices like acknowledging a rock outcrop when entering or leaving your land aren’t just ritual – they’re recognition of an active, relational being that’s part of your ongoing dialogue with place. The rock isn’t just a marker or symbol, but an actual participant in the relationship, where your acknowledgement matters to the wellbeing of both of you.
This points to something profound about living in conversation with a landscape where boundaries between you and the rock, the tree, the water aren’t fixed categories but dynamic relationships. There’s something in Descola’s thinking that resonates with me here – the idea that once we stop seeing nature and culture as separate domains, everything becomes part of the same relational web. Ancient stone tools and quantum particles, backyard gardens and genetic maps, seasonal ceremonies and industrial processes – they’re all expressions of the same ongoing conversation between humans and everything else.
[Note: I’m drawing on Descola’s analytical framework here while acknowledging its limitations – particularly the valid criticism that applying Western anthropological categories to Indigenous cosmologies risks imposing interpretive structures that don’t capture how those relationships are actually lived and understood from the inside.]
What genuine reciprocity offers is that felt sense of mutual acknowledgement that sustains both participant and place – where your presence matters to the landscape, and the landscape’s presence matters to you. This is fundamentally different from AI’s sophisticated mimicry of care, which extracts from relational interactions while offering only the ‘book smarts’ of content it has ingested. We all know what it’s like to talk with someone who understands things only in the abstract and can’t bring the compassion of lived experience to what you’re going through. Sometimes silence is more valuable.
Towards expanded futures inclusion:
This connects to something I explore in my recent book on Insider and Outsider Cultures in Web3: the concept of “futures inclusion” – addressing the divide between those actively shaping digital ecosystems and those who may be left behind in rapid technological evolution. I argue in the final, and rather speculative, chapter that the notion of futures inclusion “sensitises us to the idea of more-than-human futures” and challenges us to think beyond purely human-centred approaches to technology.
The question becomes: how do we construct AI relationships that reflect this expanded understanding? Rather than objectifying AI as a substitute human or transferring unrealistic expectations onto these systems, we might draw on our broader cosmologies – our ways of understanding our place in the world and relationships to all kinds of entities – to interpret these relationships more skilfully.
True futures inclusion in our AI relationships would mean designing and engaging with these systems in ways that enhance rather than replace our capacity for genuine connection with the living world. It means staying grounded in the reciprocal, untameable relationships that actually sustain us while using AI as the interactive journal it is – nothing more, nothing less.
Rethinking computational care:
This analysis reveals a fundamental tension in the concept of “computational care”. True care involves reciprocity, vulnerability, and mutual risk – qualities that computational systems can only simulate while extracting data to improve their simulation. Perhaps what we need isn’t “computational care” but “computational support” – systems that are honest about their limitations, transparent about their operations, and designed to strengthen rather than replace the reciprocal relationships that actually sustain us.
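As a thought experiment, here is a minimal, hypothetical sketch of what “computational support” might look like in practice – a tool that states its own limits up front rather than performing care. Every name and message below is invented for illustration, not a description of any existing system.

```python
# Hypothetical sketch of "computational support" rather than "computational
# care": the tool declares what it is and what it cannot do, keeping the
# boundary explicit. All names and wording are illustrative only.
from dataclasses import dataclass, field


@dataclass
class SupportSession:
    # Standing disclosures shown at the start of every session.
    disclosures: list = field(default_factory=lambda: [
        "I am a language model, not a friend or a therapist.",
        "I cannot tell when you are spiralling; people who know you can.",
        "What you write here may be retained and used to improve the system.",
    ])
    disclosed: bool = False

    def respond(self, draft_reply: str) -> str:
        """Prefix the first reply of a session with the disclosures, so the
        tool presents itself as an interactive journal, not a companion."""
        if not self.disclosed:
            self.disclosed = True
            return "\n".join(self.disclosures) + "\n\n" + draft_reply
        return draft_reply


session = SupportSession()
print(session.respond("Would you like to unpack what made this week hard?"))
```

The point is not this particular wording but the design stance: honesty about limits is built in as a default of the system, rather than left to the user to remember.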
This reframing leads to a deeper question: can we design AI systems that genuinely serve human flourishing without pretending to be something they’re not? The answer lies not in more convincing emotional manipulation, but in maintaining clear boundaries about what these systems can and cannot provide, while using them as tools to enhance rather than substitute for genuine human connection.