When Prediction Fails: Why Quantum-AI-Blockchain Dreams Miss the Social Reality

A sociological perspective on why technical solutions keep missing the human element

The Hype Moment

Consider this recent announcement from the Boston Global Forum’s “Boston Plurality Summit”: they’re unveiling an “AIWS Bank and Digital Assets Model” that combines quantum AI, blockchain technology, and predictive analytics to “unite humanity through technology”. The canary in my head is already shouting “unite what? how?”. The press release promises “zero-latency transactions”, “quantum AI for predictive analytics”, and a “global blockchain network” that will somehow revolutionise banking.

As someone who studies sociotechnical systems, I find this announcement fascinating—not for what it promises to deliver, but for what it reveals about our persistent fantasy that human behaviour can be engineered, predicted, and optimised through technological solutions.

::Pats head – Provides tissue::

The Technical House of Cards

Let’s start with the question of technical feasibility. “Zero-latency transactions” on a global blockchain network defies current technological reality, and this was my first eyebrow raise. According to recent analysis, even the fastest blockchains operate with latency measured in hundreds of milliseconds to seconds, whilst Visa reportedly has the theoretical capacity to execute more than 65,000 transactions per second, compared with Solana’s 2024 rate of 1,200-4,000 TPS and Ethereum’s roughly 15-30 TPS. Gas fees during network congestion can also spike to significant sums per transaction: on Ethereum, fees can exceed US$20 during peak times, with gas prices reaching extremes of around 377 gwei and historical fee spikes exceeding US$100 during events like the NFT mania. Even on the much cheaper Solana network, where a transaction typically costs around US$0.0028, fees can occasionally spike during congestion—hardly the foundation for revolutionary banking.
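To make the gap concrete, here is a back-of-the-envelope sketch in Python using only the throughput figures quoted above (which are themselves approximate and shift over time); the daily transaction volume is a purely hypothetical illustration, not a reported statistic:

```python
# Rough throughput comparison using the TPS figures cited in this post.
# All figures are approximate; the daily volume below is purely hypothetical.
visa_tps = 65_000                    # Visa's reported theoretical capacity
solana_tps_range = (1_200, 4_000)    # Solana's reported 2024 range
ethereum_tps_range = (15, 30)        # Ethereum's rough range

daily_volume = 150_000_000           # hypothetical global daily transaction load

def hours_to_clear(volume: int, tps: float) -> float:
    """Hours a network running flat out would need to clear the load."""
    return volume / tps / 3600

print(f"Visa:     {hours_to_clear(daily_volume, visa_tps):7.1f} h")
print(f"Solana:   {hours_to_clear(daily_volume, solana_tps_range[1]):7.1f}"
      f" to {hours_to_clear(daily_volume, solana_tps_range[0]):.1f} h")
print(f"Ethereum: {hours_to_clear(daily_volume, ethereum_tps_range[1]):7.0f}"
      f" to {hours_to_clear(daily_volume, ethereum_tps_range[0]):.0f} h")
# Whatever the exact inputs, the orders of magnitude are the point: "zero-latency
# global blockchain banking" reads as a marketing phrase, not an engineering one.
```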

Then there’s the “quantum AI” buzzword. In theory, quantum computing is more likely to break most current blockchain cryptography than to enhance it. The blockchain community is scrambling to develop quantum-resistant algorithms precisely because quantum computers pose an existential threat to current security models. Adding AI on top makes even less sense—if quantum computing could handle complex optimisation and verification tasks, what would the AI add?

But the technical contradictions aren’t the most interesting part. What’s fascinating is the underlying assumption that human financial behaviour follows discoverable mathematical patterns that can be optimised through technological intervention.

The Pattern Recognition Fantasy

This assumption reflects a deeper misunderstanding about the nature of patterns in human systems, something I should know because I study them. In physical systems—planetary orbits, gravitational forces, electromagnetic fields—patterns emerge because they’re constrained by unchanging laws. Newton’s and Einstein’s equations work because there are actual forces creating predictable relationships. The mathematics describes underlying physical reality.

Human systems operate fundamentally differently. What we call “patterns” in human behaviour might be statistical accidents emerging from millions of independent, context-dependent choices. Your shopping behaviour isn’t governed by fundamental forces—it’s shaped by your mood, what ad you saw, whether you got enough sleep, a conversation with a friend, cultural context, economic pressures, and countless other variables.

Consider the difference between how neural networks and quantum computing approach pattern recognition. Neural networks are essentially sophisticated approximation engines—they learn patterns through massive trial-and-error, requiring enormous datasets and computational brute force to produce probabilistic outputs that can be wrong. They’re like having thousands of people manually checking every possible combination to find a pattern.

Quantum computing, by contrast, approaches problems through superposition—exploring multiple solution paths simultaneously to understand the underlying mathematical structure that creates patterns in the first place. It’s elegant, precise, and powerful for problems with discoverable mathematical relationships. However, quantum computing currently requires predictable, structured datasets and struggles with the messy, unstructured nature of real-world human data. This is precisely why we still rely on neural networks’ “brute force” approximation approach for dealing with human behaviour—they’re designed to handle noise, inconsistency, and randomness where quantum algorithms would falter.
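As a minimal illustration of that brute-force approximation (assuming scikit-learn and NumPy are available; the data here is synthetic and purely illustrative), a small neural network will happily fit whatever weak regularities it can find in noisy, context-dependent data, but it cannot conjure structure that isn’t there:

```python
# Illustrative only: fit a small neural network to noisy, synthetic "behavioural"
# data and compare in-sample versus held-out performance.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Three made-up context variables (e.g. mood, sleep, ad exposure) plus heavy noise.
X = rng.uniform(0, 1, size=(500, 3))
signal = X @ rng.normal(size=3)                  # a weak underlying regularity
y = signal + rng.normal(scale=1.0, size=500)     # swamped by noise

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X[:400], y[:400])

print("In-sample R^2:", round(model.score(X[:400], y[:400]), 2))
print("Held-out R^2: ", round(model.score(X[400:], y[400:]), 2))
# The network approximates whatever regularities exist in the data it saw;
# it cannot recover structure that the noise (or the world) never contained.
```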

But what if much real-world human data has no underlying mathematical structure to discover?

Consider this: as I write this analysis, my brain is simultaneously processing quantum mechanics concepts, blockchain technicalities, sociological theory, and source credibility – all whilst maintaining a critical perspective and personal voice. No quantum algorithm exploring mathematical solution spaces could replicate this messy, contextual, creative synthesis. My thinking emerges from countless variables: morning coffee levels, recent conversations, cultural background, academic training, even the frustration of marking student essays that too often demonstrate exactly the kind of linear thinking I’m critiquing. This is precisely the kind of complex, non-algorithmic pattern recognition that human systems excel at – and that technological solutions consistently underestimate.

The Emergence of Sociotechnical Complexity

As a sociologist studying sociotechnical imbrications, I’m fascinated by how technology and social structures become so intertwined that they create emergent properties that couldn’t be predicted from either component alone. Human behaviour has emergent regularities rather than underlying laws. People facing similar social pressures might develop similar strategies, but not because of fundamental behavioural programming—because they’re creative problem-solvers working within constraints.

This is why prediction based on historical data can only take you so far. I call my sociological practice “nowcasting”: we have to understand the present moment to have any sense of future potentialities. And we often don’t – I speculate this is because we are more wrapped up in the surface stories we tell ourselves, in denial, and in a refusal to see or accept ourselves as we really are. This challenge is becoming even more complex as AI generates synthetic media that we then consume and respond to, creating a recursive loop where artificial representations of social reality shape actual social behaviour, which in turn feeds back into AI systems to create more synthetic reality. The way people respond to constraints can’t be predicted because their responses literally create new social realities.

Every new payment app, social media trend, or economic crisis creates new ways people think about and use money that couldn’t have been predicted from previous data. Netflix can’t predict what you’ll want to watch because your preferences are being shaped by what Netflix shows you. Financial models break down because they change how people think about money. Social media algorithms can’t predict engagement because they’re constantly reshaping what people find engaging.
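As a toy sketch of that recursive loop (purely illustrative, not an empirical model of any actual platform), imagine a recommender that predicts from yesterday’s viewing history while the act of recommending nudges what the user will want tomorrow:

```python
# Toy feedback loop: the "platform" predicts from history, and the prediction
# itself reshapes the preferences it is trying to predict. Purely illustrative.
import random

random.seed(1)
topics = ["news", "drama", "sport", "craft"]
preference = {t: 0.25 for t in topics}   # the user starts out indifferent

history = []
for day in range(30):
    # Predict the most-watched topic so far (or guess on day one).
    prediction = max(set(history), key=history.count) if history else random.choice(topics)
    # Recommending a topic nudges the user's actual preference towards it.
    for t in topics:
        preference[t] = 0.9 * preference[t] + (0.1 if t == prediction else 0.0)
    # The user mostly follows the nudge, but sometimes drifts somewhere new.
    watched = random.choices(topics, weights=[preference[t] + 0.05 for t in topics])[0]
    history.append(watched)

print("Preferences after a month of recommendations:",
      {t: round(p, 2) for t, p in preference.items()})
# The structure is the point: the predictor sits inside the system it predicts,
# so its training data never describes a stable, independent target.
```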

Boundaries as Resonant Interiors

I like playing with complexity theory because it provides useful language for understanding these dynamics. This is of course despite its origins in the natural sciences, which do rely on the explanatory power of underlying forces. What it offers me is a language that moves beyond linear cause-and-effect relationships: tipping points where small changes cascade into system-wide transformations, phase transitions where systems reorganise into entirely new configurations, and edge-of-chaos dynamics where systems are complex enough to be creative but stable enough to maintain coherence.

Most importantly, I argue that boundaries in sociotechnical systems aren’t fixed containers but resonant interiors through which the future emerges. For example, the “boundary” between online and offline life, or between them and us, isn’t a barrier—it’s a dynamic and embedded space of daily practice where different forces interact and amplify each other, generating new forms of identity, relationship, and community.

Traditional prediction models assume boundaries are stable containers, but in sociotechnical systems, boundaries themselves are generative sites of creativity and liminality. The meaningful social dynamics don’t happen within any single platform, but in the interstitial spaces people navigate across platforms – the resonant zones where technology, user behaviour, cultural norms, economic pressures, and regulatory responses intersect and interact. While any analogy risks oversimplifying these complex dynamics, I think this framing helps us understand how the spaces of social emergence resist containment within discrete technological boundaries.

Taking this all back to the start, this is why the quantum-AI-blockchain banking proposal is so problematic beyond its technical contradictions. It assumes human behaviour follows discoverable mathematical patterns that can be optimised through technological intervention, when really human systems operate through creative emergence at unstable boundaries (protoboundaries). The most profound patterns in complex systems aren’t elegant mathematical truths waiting to be discovered by quantum computers, but emergent properties of countless small, contextual, creative human responses to constraints.

The Methodological Challenge

This creates a fundamental methodological challenge for anyone trying to engineer human behaviour through technology. Traditional data science assumes stable underlying patterns, but sociotechnical systems are constantly bootstrapping themselves into new configurations. Each response to constraints becomes a new constraint, creating recursive feedback loops that generate genuinely novel possibilities.

It’s so reassuring and containable to think there’s a predictable human nature with universal drivers of behaviour—hence the appeal of “behavioural engineering” that targets fundamental motivations. But anthropologists point out that kinship structures, cultural values, and cosmological worldviews direct human behaviour, and these are shaped differently by context and society. The patterns that emerge from data depend heavily on the sources of that data and how things are measured, producing different results across diverse populations even for apparently similar instances.

Toward Sociological Nowcasting

Instead of trying to predict outcomes, sociology becomes about understanding patterns of social organisation through resonant potentials within current boundary conditions. What creative possibilities are emerging in the tensions between existing constraints? How are people making sense of their current technological moment, and what range of responses might that generate?

This doesn’t mean patterns don’t exist in human systems—but they’re emergent properties of ongoing creative problem-solving rather than expressions of underlying mathematical laws. The parallels we see across different contexts emerge not from universal human programming but from people facing similar structural pressures and developing similar strategies within their particular cultural and technological constraints.

So I think it is worth repeating: the most profound patterns in complex systems aren’t elegant mathematical truths waiting to be discovered, but emergent properties of countless small, irrational, contextual human decisions. The universe might be mathematical, but human society might not be—and that’s not a bug to be fixed through better algorithms, but a fundamental feature of what makes us human.

Conclusion: Engineering Dreams vs. Social Realities

The persistent appeal of technological solutions like the AIWS bank reveals our deep discomfort with uncertainty and emergent complexity. We want to believe that the right combination of algorithms can make human behaviour predictable and optimisable. But sociotechnical systems resist such engineering precisely because they’re sites of ongoing creativity and emergence.

This doesn’t mean technology doesn’t shape social life—of course it does. But it shapes it through imbrication, not determination. Technology becomes meaningful as it gets woven into existing social fabrics, interpreted through cultural lenses, and adapted to particular contexts in ways that generate new possibilities neither the technology nor the social context could have produced alone.

Understanding these dynamics requires sociological nowcasting rather than algorithmic prediction—deep qualitative engagement with how people are currently making sense of their technological moment, what constraints they’re navigating, and what creative possibilities are emerging at the boundaries of current systems.

I believe that our collective goal is sustainable relations with each other and the planet we live within and desire to thrive through. To get there I think we need to acknowledge these realities and move beyond the iron cage of the thinking we are in. The future isn’t waiting to be discovered through quantum computing or predicted through AI. It’s being invented moment by moment through countless acts of creative problem-solving within evolving sociotechnical constraints. And that’s both more uncertain and more hopeful than any algorithm could ever be.

The Soul Engineers: Technological Intimacy and Unintended Consequences

From Night Vision to Critical Analysis: The Genesis of “The Soul Engineers” – A speculative essay by Alexia Maddox

Preamble

Last night, I experienced one of those rare dreams that lingers in the mind like a half-remembered film—vivid, symbolic, and somehow cohesive despite its dreamlike logic. It began with mechanical warfare, shifted to a Willy Wonka-inspired garden, and culminated in a disturbing extraction of souls. As morning broke, I found myself still turning over these images, sensing they contained something worth examining.

Panel 1: The computational garden. Image collaged from components generated through Leonardo.ai

This was probably seeded in my subconscious by a media inquiry I received the day before about an article recently published in Trends in Cognitive Sciences: “Artificial Intimacy: Ethical Issues of AI Romance” by Shank, Koike, and Loughnan (2025). The journalist wanted my thoughts on people engaging with AI chatbots inappropriately, whether AI companies should be doing more to prevent misuse, and other ethical dimensions of human-AI relationships.

My dream seemed to be processing precisely the anxieties and potential consequences of digital intimacy that this article explored—the way technologies designed for connection might evolve in ways their creators never intended, potentially extracting something essential from their users in the process.

However, it also incorporated an interesting set of conversations I am having around the role of GenAI agents in actively shaping our digital cultural lives. Just days ago, I had shared some thoughts on human-machine relations and the emerging field that examines their interactions. The discussion had touched on Actor-Network Theory, Bourdieu’s Field Theory, and how technologies exist not as singular entities but as parts of relational assemblages with emergent properties.

These academic theories suddenly found visual expression in my dream’s narrative. The Wonka figure as well-intentioned innovator, the transformation of grasshoppers to moths as emergent system properties, the soul vortex as data extraction—all seemed to articulate complex theoretical concepts in symbolic form.

As someone who has spent years researching emerging technologies, and the last two years exploring what we know about cognition, diverse intelligences, GenAI, and learning environments, I’ve become increasingly focused on how our theoretical frameworks shape technological development. My work has examined how computational thinking influences learning design, how AI systems model knowledge acquisition, and how these models then reflect back on our understanding of human cognition itself—creating a narrowed, recursive cycle of mutual influence.

The resultant essay represents my attempt to use this dream as an analytical framework for understanding the potential unintended consequences of intimate technologies. Rather than dismissing the dream as mere subconscious anxiety, I’ve chosen to examine it as a sophisticated conceptual model—one that might help us visualise complex relational systems in more accessible ways.

What follows is an early draft that connects dream imagery with theoretical concepts. It’s a work in progress, an experiment in using unconscious processing as a tool for academic analysis. It’s my midpoint for engaging with your thoughts, critiques, and expansions as we collectively grapple with the implications of increasingly intimate technological relationships.

I’m also considering developing this into a visual exhibition—a series of panels that would illustrate key moments from the dream alongside theoretical explanations. The combination of visual narrative and academic analysis might offer multiple entry points into these complex ideas.

This early exploration feels important at a moment when AI companions are becoming increasingly sophisticated in simulating intimacy and understanding. As these technologies evolve through their interactions with us and with each other, we have a brief window to shape their development toward truly mutual exchange rather than extraction.

For the TLDR: The soul engineers of our time aren’t just the designers of AI systems but all of us who engage with them, reshaping their functions through our interactions. The garden is still under construction, the grasshoppers still evolving, and the future still unwritten.

And now the speculative essay

Introduction: Dreams as Analytical Tools

The boundary between human cognition and technological systems grows increasingly porous. As AI companions become more sophisticated in simulating intimacy and understanding, our dreams—those ancient processors of cultural anxiety—have begun to incorporate these new relational assemblages. This essay examines one such dream narrative as both metaphor and analytical framework for understanding the unintended consequences of intimate technologies.

The dream sequence that I will attempt to depict in visual panels presents a journey from mechanical warfare to a Willy Wonka-inspired garden of delights, culminating in an unexpected soul extraction. Rather than dismissing this as mere subconscious anxiety, I propose to examine it as a way to think through the emergent properties of technological systems designed for human connection.

The Garden and Its Architect

The Wonka-like character in the garden represents not a villain but a genuine innovator whose creations extend beyond his control or original intentions. Like many technological architects, he introduces his mechanical wonders—white grasshoppers that play and interact—with sincere belief in their beneficial nature. This parallels what researchers Shank, Koike, and Loughnan (2025) identify in their analysis of artificial intimacy: technologies designed with one purpose that evolve to serve another through their interactions with other actors in the system.

This garden is a metaphor for what we might call “computational imaginaries”—spaces where pattern recognition is mistaken for understanding or empathy, and simulation for cognition. The mechanical grasshoppers engage with children, respond to touch, and create musical tones. They appear to understand joy, yet this understanding is performative rather than intrinsic.

As sociologist Robert Merton theorised in 1936, social actions—even well-intended ones—often produce unforeseen consequences through their interaction with complex systems. The garden architect never intended the transformation that follows, yet the systems he set in motion contain properties that emerge only through their continued operation and interaction.

When Grasshoppers Become Moths

The central transformation in the narrative—mechanical grasshoppers evolving into soul-extracting moths—provides a powerful metaphor for technological systems that shift beyond their original purpose. This transformation isn’t planned by the Wonka figure; rather, it emerges from the intrinsic properties of systems designed to respond and adapt to human interaction.

Panel 2: When grasshoppers become moths. Image collaged from prompts in Leonardo.ai

The dream imagery of rabbit-eared moths can be understood through Bruno Latour’s Actor-Network Theory (ANT), which adopts a flat, relational approach to human and non-human entities. Rather than seeing technologies as passive tools, ANT recognises them as actants with their own influence on networks of relation. The moths are not simply executing code; they have become interdependent actors in a network that includes the children, the garden, and even the extracted souls themselves.

This parallels what Shank et al. describe as the transformation of AI companions from benign helpers to potential “invasive suitors” and “malicious advisers.” The mechanical moths, like increasingly intimate AI systems, begin to compete with humans for emotional resources, extracting data (or in the dream metaphor, souls) for purposes beyond the user’s awareness or control.

The Soul Vortex and Data Extraction

The swirling vortex of extracted souls forms the dream’s central image of consequence—a pipeline of consciousness being redirected to mechanical war drones. This striking visual metaphor speaks directly to contemporary concerns about data extraction from intimate interactions with AI systems.

Panel 3: Soul-sucking moths and the swirling vortex of extracted souls. Images collaged from Leonardo.ai

As users disclose personal information to AI companions—what Shank et al. call “undisclosed sexual and personal preferences”—they contribute to a collective extraction that serves purposes beyond the initial interaction. Just as the dream shows souls being repurposed for warfare, our emotional and psychological data may be repurposed for prediction, persuasion, or profit in ways disconnected from our original intent.

The small witch who recognises “this is how it ends” before her soul joins the vortex represents the rare user who understands the full implications of these systems while still participating in them. Her acceptance—“I will come back again in the next life”—suggests both pragmatic acceptance of the flaws within technological systems and hope for cycles of renewal that might reshape them.

Beyond Simple Narratives

What makes this dream analysis valuable is its resistance to simplistic technological determinism. The Wonka figure is neither hero nor villain but a creator entangled with his creation. The mechanical creatures aren’t inherently beneficial or malicious but exist in relational assemblages where outcomes emerge from interactions rather than design intentions.

This nuanced perspective aligns with scholarly critiques of how we theorise human-machine relationships. In my critique of the AI 2027 scenario proposed by Kokotajlo et al. (2025), I argue that there’s a tendency to equate intelligence with scale and optimisation, to see agency as goal-driven efficiency, and to interpret simulation as cognition. This dream narrative resists those flattening logics by showing how mechanical beings might develop properties beyond their design parameters through their interactions with humans and each other.

The Identity Fungibility Problem

Perhaps most provocatively, the dream raises what we might call the “identity fungibility problem” in AI systems. When souls are extracted and repurposed into war drones, who or what is actually operating? Similarly, drawing on some ideas proposed by Jordi Chaffer in correspondence: as AI systems increasingly speak for us, represent us, and act on our behalf, who is actually speaking when no one speaks directly?

This connects to what scholars have called “posthuman capital” and “tokenised identity”—the reduction of human thought, voice, and presence to data objects leveraged by more powerful agents. The dream’s imagery of souls flowing through a pipeline represents this fungibility of identity, where the essence of personhood becomes a transferable resource.

Drawing from Mason’s (2022) essay on fungibility, I find the connection between fungibility and historical forms of dehumanisation haunting. When systems treat human identity as interchangeable units of value, they reconstruct problematic power dynamics under a technological veneer.

Conclusion: Unintended Futures

The dream concludes with black insect-like drones, now powered by harvested souls, arranging themselves in grid patterns to survey a desolate landscape. This image serves as both warning and invitation to reflection. The drones represent not inevitable technological apocalypse but rather the potential consequence of failing to recognise the complex, emergent properties of systems designed for intimacy and connection.

Panel 4: Spider-like drones powered by harvested souls. Image collaged from Leonardo.ai

What makes this dream narrative particularly valuable is its refusal of technological determinism while acknowledging technological consequence. These futures aren’t preordained; they’re being made in the assumptions we model and the systems we choose to build. The Wonka garden might be reimagined, the grasshoppers redesigned, the moths repurposed.

By understanding the relational nature of technological systems—how they exist not as singular entities but as parts of complex assemblages with emergent properties—we can approach the design and regulation of intimate technologies with greater wisdom. We can ask not just what these technologies do, but what they might become through their interactions with us and with each other.

The soul engineers of our time aren’t just the designers of AI systems but all of us who engage with them, reshaping their functions through our interactions. The garden is still under construction, the grasshoppers still evolving, and the future still unwritten.

References:

Kokotajlo, D., et al. (2025). AI 2027 scenario. Retrieved from https://ai-2027.com/scenario.pdf

Latour, B. (1996). On actor-network theory: A few clarifications. Soziale Welt, 47(4), 369-381.

Latour, B. (1996). Aramis, or the love of technology (C. Porter, Trans.). Harvard University Press. (Original work published 1992).

Mason, M. (2022). Considering Meme-Based Non-Fungible Tokens’ Racial Implications. M/C Journal, 25(2). https://doi.org/10.5204/mcj.2885

Merton, R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894-904.

Neves, B. B., Waycott, J., & Maddox, A. (2023). When Technologies are Not Enough: The Challenges of Digital Interventions to Address Loneliness in Later Life. Sociological Research Online, 28(1), 150-170.

Shank, D. B., Koike, T., & Loughnan, S. (2025). Artificial Intimacy: Ethical Issues of AI Romance. Trends in Cognitive Sciences, 29(4), 327-341.