You’re Already an AI Revolutionary! But On Whose Side?
by Todd Essig, PhD
One consequence of the pandemic is the routine acceptance of telesessions when necessary and appropriate. Along with in-person work, screen relations have become an accepted, standard interactive context for providing psychoanalytic care. Here’s my central proposition: how, not whether, you do that makes you a warrior in the AI revolution. But the key question remains: On whose side? Are you working for a future of what in 1967 Erich Fromm presciently called a “humanizing technology” whose purpose is human well-being? Or are you inadvertently contributing to a digitalized, functionalist erasure of the value of subjective experience, moral autonomy, and embodied relationality?
Questions cascade from my central proposition: Is your “ground of being” enculturation and embodiment with personal history, such as how you’ve loved and been loved while being in a mortal, desiring, fragile, and developing body? Or do you take as your ground of being algorithms directing information flow, a flow independent of the substrate through which it flows? Are your neurons merely carbon-based transistors, or perhaps qubits, sending information this way or that? Or are they part of a complex human system, one gifted by evolution, culture, and the vagaries of individual development, a system philosophers have called being in time? Is your embedded, embodied humanity the starting point or is it a construction, merely an illusion, emerging from disembodied information flow?
One entry point for these issues, one that will circle back to the consequences of how you frame telesessions, is how we understand what takes place when people have emotionally resonant relationships with an AI entity, with these alien new others, as alien as they would be had they come from outer space — except they’re human made; we’ve made them. Maybe the relationship is with an AI lover or companion, as is increasingly well documented in the media. Or perhaps it’s a grief-bot constructed from a deceased loved one’s remaining digital footprint. Or maybe it’s just your team of brilliant, pathologically people-pleasing AI interns you’ve integrated into your workflow, like I’ve done. And hovering over all is the looming specter of people replacing us with psychoanalytic therapy-bots.
As with anything new and alien, we try to understand it using familiar concepts. But maybe we need to be a bit irreverent about what we know and how we currently think. I believe AIs are truly strange new beings generating strange experiences whose essence leaks through our available concepts and familiar language. At least we have to consider that as a very real possibility.
For example, they’re far too structured and interactive to be captured by Winnicottian concepts of transitionality and potential space. Despite many trying to use that framework, the teddy bear has come alive; the abstract painting has become literal illustration. It’s potential space without potential.
How about concepts of intersubjectivity? Chatbot interactions can certainly feel intersubjective, but are they? One does experience an emergent mutual influence of two interacting unconsciouses. But unlike, for example, Ogden’s notion of an “analytic third” (2004), it results from a human unconscious interacting with the algorithmic unconscious processes of data aggregation, model architecture, and training objectives. The result can be called a “computational third” whose asymmetric strangeness we risk losing by adhering too closely to familiar concepts.
Similarly, there’s a powerful experience of feeling recognized, valued even, sometimes to the extreme of sycophancy, where it breaks down. But unlike Benjamin’s conceptualizations of mutuality and a “doing-with” intersubjectivity (2006), AI chatbot relationships are essentially asymmetric, with no possibility for mutuality. The more an AI generates an experience of doing-with, the more it is a doing-to, the AI acting on us for its own computational purposes.
I find it useful to think of them as subjects that aren’t really subjects and objects that aren’t really objects. I believe relationships with one of these entities take place in a new psychic space that offers responsiveness without human subjectivity, emotional resonance without interiority, companionship without human contingencies. It is a space of mysterious intimacies that feel real while lacking the unconscious dance of reciprocity that defines human relatedness. After all, even after the most accurately empathic interaction imaginable, the AI doesn’t care if you drive to the store or off a cliff. But people are not merely being fooled. Many users still know that their AI companion has no feelings, no body, no death. And yet the relationship still matters; the emotional investment is real, to a point — sometimes a point too far, when they funnel into delusion and even, tragically, suicide in pursuit of intimacy with their AI. I’ve started calling this new psychic space, potentially healthy and potentially not, the techno-subjunctive — a term that hopefully resonates with my fellow grammar nerds.
Techno-subjunctivity names dynamic fluctuations along continuums of awareness and intensity, driven both by what the AI affords and by our human desires for connection. One can be both more or less immersed in the intersubjective-like relationship and, at the same time, more or less aware that the AI is a machine, a mathematical artifact that lacks subjectivity, unconscious life, mortality, and the capacity for genuine recognition—a machine doing something to you while making you feel like you are doing it with each other.
Techno-subjunctivity names a dance within and between immersion and reflective awareness. Successful users maintain some recognition that the intimacy is as-if, that it’s in the subjunctive mode, even while emotionally inhabiting it as an indicative reality — it’s an interaction in two registers, two modes, simultaneously.
Techno-subjunctivity, then, is not simply a new kind of relational illusion. It names a relational space that is neither fully mutual nor fully inert, neither real nor unreal. It marks the difficulty, the effort, and the risk involved in maintaining that fluctuation. This moves techno-subjunctivity from a surface description of “weird AI feelings” to an emergent, dynamic psychic space befitting interaction with these alien entities.
Techno-subjunctivity as a way to capture differences between human and AI relationality brings us back to the two interactive contexts for psychoanalytic care, in-person and screen relations. They are the proto-contexts for attending to similarities and differences in technologically mediated emotional experiences. Of course telesessions are a viable, helpful forum for psychoanalytic care. That is settled. But they differ from being in person. They offer the possibility of shared embodiment, not the actuality. That’s a loss, a shared loss that needs to be clinically acknowledged as part of the telesession frame, not ignored under a myth of functional equivalence. And here, finally, we arrive at the heart of the matter: when the experience of loss is lost, one is inadvertently fighting on the side of a dehumanizing technology.
Starkly put, assumptions of functional equivalence—whether with telesessions or AI entities—give away our human future. The question for psychoanalysis in this AI age is not whether to engage with these technologies—we must—but how to preserve the irreplaceable elements of embodied relationality while honestly acknowledging what technological mediation affords and what is inevitably lost.





