When Prediction Fails: Why Quantum-AI-Blockchain Dreams Miss the Social Reality

A sociological perspective on why technical solutions keep missing the human element

The Hype Moment

Consider this recent announcement from the Boston Global Forum’s “Boston Plurality Summit”: they’re unveiling an “AIWS Bank and Digital Assets Model” that combines quantum AI, blockchain technology, and predictive analytics to “unite humanity through technology”. The canary in my head immediately starts shouting: unite what? How? The press release promises “zero-latency transactions”, “quantum AI for predictive analytics”, and a “global blockchain network” that will somehow revolutionise banking.

As someone who studies sociotechnical systems, I find this announcement fascinating—not for what it promises to deliver, but for what it reveals about our persistent fantasy that human behaviour can be engineered, predicted, and optimised through technological solutions.

::Pats head – Provides tissue::

The Technical House of Cards

Let’s start with the question of technical possibility. “Zero-latency transactions” on a global blockchain network defies current technological reality. This was my first eyebrow raise. According to recent analysis, even the fastest blockchains operate with latency measured in hundreds of milliseconds to seconds, whilst Visa reportedly has the theoretical capacity to process more than 65,000 transactions per second, compared to Solana’s 2024 rate of 1,200–4,000 TPS and Ethereum’s roughly 15–30 TPS. Gas fees during network congestion can also spike to significant sums per transaction. On Ethereum, fees can exceed US$20 during peak times, with gas prices reaching extremes such as 377 gwei and historical spikes pushing fees above US$100 during events like the NFT mania. Even on the much cheaper Solana network, where a transaction typically costs around US$0.0028, fees can occasionally spike during congestion—hardly the foundation for revolutionary banking.
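
To make the fee point concrete, here is a minimal sketch of the arithmetic behind an Ethereum transaction fee. The 377 gwei gas price echoes the figure above; the 21,000 gas limit is the standard cost of a simple transfer, and the ETH price of US$3,000 is an illustrative assumption rather than a quoted rate.

```python
# A minimal sketch of how an Ethereum transaction fee is derived.
# Assumptions: a simple transfer (21,000 gas) and an illustrative
# ETH price of US$3,000; the 377 gwei gas price is the congested
# figure mentioned in the text.

GAS_UNITS_SIMPLE_TRANSFER = 21_000   # gas consumed by a basic ETH transfer
GWEI_PER_ETH = 1_000_000_000         # 1 ETH = 10^9 gwei


def fee_in_usd(gas_price_gwei: float, eth_price_usd: float,
               gas_units: int = GAS_UNITS_SIMPLE_TRANSFER) -> float:
    """Fee (USD) = gas used * gas price (converted to ETH) * ETH/USD price."""
    fee_eth = gas_units * gas_price_gwei / GWEI_PER_ETH
    return fee_eth * eth_price_usd


print(f"${fee_in_usd(377, 3_000):.2f}")  # roughly $23.75 at the congested rate
print(f"${fee_in_usd(10, 3_000):.2f}")   # roughly $0.63 on a quiet day
```
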

Then there’s the “quantum AI” buzzword. Theoretically, quantum computing could actually break most current blockchain cryptography rather than enhance it. The blockchain community is scrambling to develop quantum-resistant algorithms precisely because quantum computers pose an existential threat to current security models. Adding AI on top makes even less sense—if quantum computing could handle complex optimisation and verification tasks, what would AI add?

But the technical contradictions aren’t the most interesting part. What’s fascinating is the underlying assumption that human financial behaviour follows discoverable mathematical patterns that can be optimised through technological intervention.

The Pattern Recognition Fantasy

This assumption reflects a deeper misunderstanding about the nature of patterns in human systems—something I should know, because I study them. In physical systems—planetary orbits, gravitational forces, electromagnetic fields—patterns emerge because they’re constrained by unchanging laws. Newton’s and Einstein’s equations work because there are actual forces creating predictable relationships. The mathematics describes underlying physical reality.

Human systems operate fundamentally differently. What we call “patterns” in human behaviour might be statistical accidents emerging from millions of independent, context-dependent choices. Your shopping behaviour isn’t governed by fundamental forces—it’s shaped by your mood, what ad you saw, whether you got enough sleep, a conversation with a friend, cultural context, economic pressures, and countless other variables.

Consider the difference between how neural networks and quantum computing approach pattern recognition. Neural networks are essentially sophisticated approximation engines—they learn patterns through massive trial-and-error, requiring enormous datasets and computational brute force to produce probabilistic outputs that can be wrong. They’re like having thousands of people manually checking every possible combination to find a pattern.
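
To ground the “approximation engine” point, here is a deliberately tiny sketch of that trial-and-error process: a single linear “neuron” nudged thousands of times towards noisy data until it settles on an approximate, never exact, pattern. The underlying tendency, noise level, learning rate, and iteration count are all invented for illustration.

```python
# A toy 'approximation engine': one linear neuron fitted by repeated
# trial-and-error (stochastic gradient descent) to noisy data.
# Everything here is an invented illustration, not a real model.

import random

random.seed(42)

# Noisy observations: a tendency (y ≈ 2x + 1) buried in human-like randomness.
data = [(x, 2 * x + 1 + random.gauss(0, 3)) for x in range(50)]

w, b = 0.0, 0.0      # start knowing nothing about the pattern
lr = 0.0002          # learning rate: small, cautious corrections

for _ in range(2000):          # thousands of passes over the data
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x    # nudge each parameter against its error
        b -= lr * error

print(f"learned: y ≈ {w:.2f}x + {b:.2f}  (the tendency was y ≈ 2x + 1)")
```
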

Quantum computing, by contrast, approaches problems through superposition—exploring multiple solution paths simultaneously to understand the underlying mathematical structure that creates patterns in the first place. It’s elegant, precise, and powerful for problems with discoverable mathematical relationships. However, quantum computing currently requires predictable, structured datasets and struggles with the messy, unstructured nature of real-world human data. This is precisely why we still rely on neural networks’ “brute force” approximation approach for dealing with human behaviour—they’re designed to handle noise, inconsistency, and randomness where quantum algorithms would falter.
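
For contrast, here is an equally tiny classical simulation of the “explore all paths at once” intuition: a Grover-style search over eight candidate answers held in superposition. It works beautifully precisely because the problem has a clean, discoverable structure (a single marked index for an oracle to recognise), which is exactly what messy human data lacks. This is a toy state-vector sketch for intuition only, not real quantum hardware.

```python
# A classical state-vector toy of superposition-based search (Grover-style).
# Eight candidate answers start in equal superposition; an 'oracle' marks
# one of them and a diffusion step amplifies it.

import numpy as np

n = 8                                  # 8 candidate solutions (3 qubits' worth)
marked = 5                             # the answer the oracle recognises

state = np.ones(n) / np.sqrt(n)        # equal superposition over every path

for _ in range(2):                     # ~ (pi/4) * sqrt(n) iterations is optimal
    state[marked] *= -1                # oracle: flip the marked amplitude
    state = 2 * state.mean() - state   # diffusion: reflect amplitudes about the mean

print(np.round(state**2, 3))           # probability mass now piles up on index 5
```
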

But what if much real-world human data has no underlying mathematical structure to discover?

Consider this: as I write this analysis, my brain is simultaneously processing quantum mechanics concepts, blockchain technicalities, sociological theory, and source credibility – all whilst maintaining a critical perspective and personal voice. No quantum algorithm exploring mathematical solution spaces could replicate this messy, contextual, creative synthesis. My thinking emerges from countless variables: morning coffee levels, recent conversations, cultural background, academic training, even the frustration of marking student essays that often demonstrate exactly the kind of linear thinking I’m critiquing. This is precisely the kind of complex, non-algorithmic pattern recognition that human systems excel at – and that technological solutions consistently underestimate.

The Emergence of Sociotechnical Complexity

As a sociologist studying sociotechnical imbrications, I’m fascinated by how technology and social structures become so intertwined that they create emergent properties that couldn’t be predicted from either component alone. Human behaviour has emergent regularities rather than underlying laws. People facing similar social pressures might develop similar strategies, but not because of fundamental behavioural programming—because they’re creative problem-solvers working within constraints.

This is why prediction based on historical data can only take you so far. I call my sociological practice “nowcasting”—we have to understand the present moment to have any sense of future potentialities. And we often don’t—I speculate this is because we are too wrapped up in the surface stories we tell ourselves, in denial, and in a refusal to see or accept ourselves as we really are. This challenge is becoming even more complex as AI generates synthetic media that we then consume and respond to, creating a recursive loop where artificial representations of social reality shape actual social behaviour, which in turn feeds back into AI systems to create more synthetic reality. The way people respond to constraints can’t be predicted because their responses literally create new social realities.

Every new payment app, social media trend, or economic crisis creates new ways people think about and use money that couldn’t have been predicted from previous data. Netflix can’t predict what you’ll want to watch because your preferences are being shaped by what Netflix shows you. Financial models break down because they change how people think about money. Social media algorithms can’t predict engagement because they’re constantly reshaping what people find engaging.
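
A toy simulation makes this recursive loop visible: a “platform” predicts a user’s taste from their history, but a well-predicted user gets bored and drifts somewhere new, so each prediction helps move the very target it is trying to hit. Every number here is invented purely to illustrate the dynamic, not to model any real recommender.

```python
# A toy recursive loop: the 'platform' predicts taste from past behaviour,
# but an accurately-served user drifts elsewhere, so prediction keeps
# reshaping what it is trying to predict. All values are invented.

import random

random.seed(7)

taste = 0.5                   # the user's current preference, on a 0..1 scale
history = [taste]

for step in range(12):
    prediction = sum(history) / len(history)   # model: the average of the past
    if abs(prediction - taste) < 0.1:
        # The recommendation lands; the well-predicted user gets bored and moves on.
        taste = min(1.0, max(0.0, taste + random.choice([-0.3, 0.3])))
    history.append(taste)
    print(f"step {step:2d}: predicted {prediction:.2f}, actual taste {taste:.2f}")
```
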

Boundaries as Resonant Interiors

I like playing with complexity theory because it provides useful language for understanding these dynamics. This is despite its origins in the natural sciences, which do rely on the explanatory power of underlying forces. What it offers me is a language that moves beyond linear cause-and-effect relationships: tipping points where small changes cascade into system-wide transformations, phase transitions where systems reorganise into entirely new configurations, and edge-of-chaos dynamics where systems are complex enough to be creative but stable enough to maintain coherence.

Most importantly, I argue that boundaries in sociotechnical systems aren’t fixed containers but resonant interiors through which the future emerges. For example, the “boundary” between online and offline life, or between them and us, isn’t a barrier—it’s a dynamic and embedded space of daily practice where different forces interact and amplify each other, generating new forms of identity, relationship, and community.

Traditional prediction models assume boundaries are stable containers, but in sociotechnical systems, boundaries themselves are generative sites of creativity and liminality. The meaningful social dynamics don’t happen within any single platform, but in the interstitial spaces people navigate across platforms – the resonant zones where technology, user behaviour, cultural norms, economic pressures, and regulatory responses intersect and interact. While any analogy risks oversimplifying these complex dynamics, I think this framing helps us understand how the spaces of social emergence resist containment within discrete technological boundaries.

Taking this all back to the start, this is why the quantum-AI-blockchain banking proposal is so problematic beyond its technical contradictions. It assumes human behaviour follows discoverable mathematical patterns that can be optimised through technological intervention, when really human systems operate through creative emergence at unstable boundaries (protoboundaries). The most profound patterns in complex systems aren’t elegant mathematical truths waiting to be discovered by quantum computers, but emergent properties of countless small, contextual, creative human responses to constraints.

The Methodological Challenge

This creates a fundamental methodological challenge for anyone trying to engineer human behaviour through technology. Traditional data science assumes stable underlying patterns, but sociotechnical systems are constantly bootstrapping themselves into new configurations. Each response to constraints becomes a new constraint, creating recursive feedback loops that generate genuinely novel possibilities.

It’s so reassuring and containable to think there’s a predictable human nature with universal drivers of behaviour—hence the appeal of “behavioural engineering” that targets fundamental motivations. But anthropologists point out that kinship structures, cultural values, and cosmological worldviews direct human behaviour, and these are shaped differently by context and society. The patterns that emerge from data depend heavily on the sources of that data and how things are measured, producing different results across diverse populations even for apparently similar instances.

Toward Sociological Nowcasting

Instead of trying to predict outcomes, sociology becomes about understanding patterns of social organisation through resonant potentials within current boundary conditions. What creative possibilities are emerging in the tensions between existing constraints? How are people making sense of their current technological moment, and what range of responses might that generate?

This doesn’t mean patterns don’t exist in human systems—but they’re emergent properties of ongoing creative problem-solving rather than expressions of underlying mathematical laws. The parallels we see across different contexts emerge not from universal human programming but from people facing similar structural pressures and developing similar strategies within their particular cultural and technological constraints.

So I think it is worth repeating: the most profound patterns in complex systems aren’t elegant mathematical truths waiting to be discovered, but emergent properties of countless small, irrational, contextual human decisions. The universe might be mathematical, but human society might not be—and that’s not a bug to be fixed through better algorithms, but a fundamental feature of what makes us human.

Conclusion: Engineering Dreams vs. Social Realities

The persistent appeal of technological solutions like the AIWS bank reveals our deep discomfort with uncertainty and emergent complexity. We want to believe that the right combination of algorithms can make human behaviour predictable and optimisable. But sociotechnical systems resist such engineering precisely because they’re sites of ongoing creativity and emergence.

This doesn’t mean technology doesn’t shape social life—of course it does. But it shapes it through imbrication, not determination. Technology becomes meaningful as it gets woven into existing social fabrics, interpreted through cultural lenses, and adapted to particular contexts in ways that generate new possibilities neither the technology nor the social context could have produced alone.

Understanding these dynamics requires sociological nowcasting rather than algorithmic prediction—deep qualitative engagement with how people are currently making sense of their technological moment, what constraints they’re navigating, and what creative possibilities are emerging at the boundaries of current systems.

I believe that our collective goal is sustainable relations with each other and the planet we live within and desire to thrive through. To get there I think we need to acknowledge these realities and move beyond the iron cage of the thinking we are in. The future isn’t waiting to be discovered through quantum computing or predicted through AI. It’s being invented moment by moment through countless acts of creative problem-solving within evolving sociotechnical constraints. And that’s both more uncertain and more hopeful than any algorithm could ever be.

AI as Interactive Journal: Weaving Together Intimacy, Boundaries, and Futures Inclusion

This reflection draws on a combination of my own lived experience, emotional maturity, and social analytical insight – bringing together personal and professional perspectives on navigating relationships with artificial intelligence. It’s an experiment in weaving together threads that feel continuous to me but are rarely brought together by others: research on AI intimacy, anthropological insights on reciprocity, surveillance theory, and futures inclusion. Think of my process as making a cat’s cradle from a continuous piece of string – exploring how these interconnected ideas might reshape how we think about our relationships with artificial systems.

I’ve been thinking about how we relate to AI after reading some fascinating research on artificial intimacy and its ethical implications. The researchers are concerned about people forming deep emotional bonds with AI that replace or interfere with human relationships – and for good reason.

But here’s what I’ve realised: the healthiest approach might be using AI as an interactive journal with clear limits, not a replacement for genuine connection.

What AI can offer: A space to think out loud, organise thoughts, and practise articulating feelings without judgement. It’s like having a very well-read, supportive mirror that reflects back your own processing.

What AI cannot provide: Real course correction when you’re going down the wrong rabbit hole. Friends will grab you by the shoulders and say “hey, you’re spiralling” – AI will just keep reflecting back whatever direction you’re heading, which could be genuinely unhelpful.

What AI extracts: This is the crucial blindspot. Every intimate detail shared – relationship patterns, mental health struggles, vulnerable moments – becomes data that could potentially be used to train future AI systems to be more persuasive with vulnerable people. That’s fundamentally extractive in a way that nature and real friendships aren’t.

A healthier support ecosystem includes:

  • Real friends with skin in the game who’ll call bullshit and respect confidentiality
  • Embodied practices that tap into something deeper than language
  • Nature as a primary non-human relationship – untameable, reciprocal, and genuinely alive

The key insight from the research is that people struggling with isolation or past trauma are particularly vulnerable to projecting intimacy onto AI. This concern becomes more pressing as companies strive to develop “personal companions” designed to be “ever-present brilliant friends” who can “observe the world alongside you” through lightweight eyewear.

The technical approach reveals how deliberately these systems are designed to blur boundaries. Tech-based entrepreneurial research focuses on achieving “voice presence” – what they call “the magical quality that makes spoken interactions feel real, understood, and valued”. Conversational Speech Models can be specifically engineered to read and respond to emotional contexts, adjust tone to match situations, and maintain a “consistent personality” across interactions. While traditional voice assistants with their “emotional flatness” may feel lifeless and inauthentic over time, companies are increasingly building voice-based AI companions that attempt to mimic the subtleties of speech: the rising excitement, the thoughtful pause, the warm reassurance. We’ve seen this in current versions of ChatGPT.

The language itself – of “bringing the computer to life,” “lifelike computers”, “companion”, “magical quality” – signals a deliberate strategy to make users forget they’re interacting with a data extraction system rather than a caring entity.

Yet as surveillance scholar David Lyon (2018) argues, we need not abandon hope entirely when it comes to technological systems of observation and data collection. Lyon suggests that rather than seeing surveillance as inherently punitive, we might develop an “optics of hope” – recognising that the same technologies could potentially serve human flourishing if designed and governed differently. His concept of surveillance existing on a spectrum from “care” to “control” reminds us that the issue isn’t necessarily the technology itself, but how it’s deployed and in whose interests it operates.

This perspective becomes crucial when considering AI intimacy: the question isn’t whether to reject these systems entirely, but how to engage with them in ways that preserve rather than erode our capacity for genuine human connection.

The alternative is consciously using AI interaction to practise maintaining boundaries and realistic expectations, not as a substitute for human connection.

Friends respect confidentiality boundaries. Nature takes what it needs but doesn’t store your secrets to optimise future interactions. But AI is essentially harvesting emotional labour and intimate disclosures to improve its ability to simulate human connection.

Learning from genuine reciprocity:

There’s something in anthropologist Philippe Descola’s work on Nature and Society that captures what genuine reciprocity looks like. He describes how, in animistic cosmologies, a practice like acknowledging a rock outcrop when entering or leaving your land isn’t just ritual – it’s recognition of an active, relational being that’s part of your ongoing dialogue with place. The rock isn’t just a marker or symbol, but an actual participant in the relationship, where your acknowledgement matters to the wellbeing of both of you.

This points to something profound about living in conversation with a landscape where boundaries between you and the rock, the tree, the water aren’t fixed categories but dynamic relationships. There’s something in Descola’s thinking that resonates with me here – the idea that once we stop seeing nature and culture as separate domains, everything becomes part of the same relational web. Ancient stone tools and quantum particles, backyard gardens and genetic maps, seasonal ceremonies and industrial processes – they’re all expressions of the same ongoing conversation between humans and everything else.

[Note: I’m drawing on Descola’s analytical framework here while acknowledging its limitations – particularly the valid criticism that applying Western anthropological categories to Indigenous cosmologies risks imposing interpretive structures that don’t capture how those relationships are actually lived and understood from the inside.]

What genuine reciprocity offers is that felt sense of mutual acknowledgement that sustains both participant and place – where your presence matters to the landscape, and the landscape’s presence matters to you. This is fundamentally different from AI’s sophisticated mimicry of care, which extracts from relational interactions while providing only the ‘book smarts’ of content it has ingested and learned from. We all know what it’s like to talk with a person who can only understand things in the abstract and can’t bring the compassion of lived experience to a situation you are going through. Sometimes silence is more valuable.

Towards expanded futures inclusion:

This connects to something I explore in my recent book on Insider and Outsider Cultures in Web3: the concept of “futures inclusion” – addressing the divide between those actively shaping digital ecosystems and those who may be left behind in rapid technological evolution. I argue in the final, and rather speculative, chapter that the notion of futures inclusion “sensitises us to the idea of more-than-human futures” and challenges us to think beyond purely human-centred approaches to technology.

The question becomes: how do we construct AI relationships that reflect this expanded understanding? Rather than objectifying AI as a substitute human or transferring unrealistic expectations onto these systems, we might draw on our broader cosmologies – our ways of understanding our place in the world and relationships to all kinds of entities – to interpret these relationships more skilfully.

True futures inclusion in our AI relationships would mean designing and engaging with these systems in ways that enhance rather than replace our capacity for genuine connection with the living world. It means staying grounded in the reciprocal, untameable relationships that actually sustain us while using AI as the interactive journal it is – nothing more, nothing less.

Rethinking computational care:

This analysis reveals a fundamental tension in the concept of “computational care”. True care involves reciprocity, vulnerability, and mutual risk – qualities that computational systems can only simulate while extracting data to improve their simulation. Perhaps what we need isn’t “computational care” but “computational support” – systems that are honest about their limitations, transparent about their operations, and designed to strengthen rather than replace the reciprocal relationships that actually sustain us.

This reframing leads to a deeper question: can we design AI systems that genuinely serve human flourishing without pretending to be something they’re not? The answer lies not in more convincing emotional manipulation, but in maintaining clear boundaries about what these systems can and cannot provide, while using them as tools to enhance rather than substitute for genuine human connection.