Response to Drs Levy and Gioia
by Todd Essig, PhD
I can count on one hand the days this year when I didn’t have some contact with my co-panelists through our CAI work. So, even before exchanging presentations we were already very familiar with the what and why of how we think. Today is pretty much an extension of our ongoing conversations and struggles to learn and think together.
I want to use a familiar clinical yardstick to put the likely future of AI into perspective: psychoanalytic treatments starting this year, barring rupture or premature termination, will likely still be ongoing when Artificial Superintelligence, so-called ASI (or AGI, Artificial General Intelligence), appears. These are systems more capable than humans in all areas of cognitive and intellectual life. To be clear, I’m saying that ASI, and all the radical disruptions it will bring to the economy, culture, self-experience, intimate relationships, and, yes, psychoanalytic care, is a question of when, not if; and it’s sooner than we realize.
But that’s technology. Our human future is not yet written. So much of what is at stake is still fundamentally undetermined. Basic questions await answers: Will an AI-transformed world primarily serve the interests of military power? Will corporate and personal profit dominate, as with social media, which was built to sell ads by maximizing user engagement and so became an accelerating, attention-gobbling machine? Or will human needs and values, what Fromm (1968) called “human well-being,” have pride of place? Here’s my central question, and the very center of the what and why of how I think: What can psychoanalysis do to support human well-being in the dawning AI age, and, conversely, what should we avoid doing?
What to do? What not to do? For this discussion I’m going to point to some places where my answers differ from those of my friends, from whom I’m learning so much. Hopefully, our differences will help you triangulate what’s at the center of your own AI-age thinking.
Amy’s presentation captured the affective atmosphere of living through a crumbling worldview in real time. She offered a tragic dialectic of human civilization always in motion—revering and destroying, creating and mourning—motion made inevitable by our “innovation drive.” She writes that we’re in a “transition from humanism to AI containment,” that we’re losing “a belief system built on the sanctity of human experience and human wisdom.” We differ here. To my mind, transitioning from humanism doesn’t necessarily entail losing the sanctity of human experience. In fact, I think it might be the only way to save it.
Humanism crowned the rational, self-contained individual: disembodying the mind, glorifying control and domination, and masking our dependencies beneath an illusion of autonomy. It gave us the logic that helped bring about ecological devastation and social fragmentation. It privileged thought over feeling, function over relationship. Humanism not only built psychoanalysis, it built the suffering psychoanalytic care tries to heal.
So yes, humanism’s passing deserves acknowledgment, but not a nostalgic mourning; it is more like mourning an abusive parent. Both bring celebration and relief as well as grief.
We may actually need an emerging AI worldview, an AI container, centered neither on heroic humanist individuals nor on profit and power (Fuchs, 2024). As I see it, our psychoanalytic challenge, and a tremendous opportunity for psychoanalysis to thrive, is to help shape what emerges, along with other communities of care: an AI container centered not on a humanist optimization of rational function, nor on profit and power, but on embodied, interconnected, vulnerable, and passionate life. Not on domination, but on embodied relationality. Not on information, but on care. In other words, I yearn for a future of sum ergo cogito: I am, therefore I think.
Instead of seeing AI as a human catastrophe, we can engage it as a historically unique opportunity to honor our creaturely needs, limits, and desires; our entanglement with all forms of life; and the fundamentally embodied nature of human experience and subjectivity. I believe our challenge is not just to mourn but to make sure that what emerges next serves human well-being rather than efficient disembodied functionality.
Which brings me to Fred’s illuminating presentation of the neuropsychoanalytic project to build artificial consciousness: a disembodied consciousness emerging from the functional consequences of information flow, independent of substrate. The claim is that if a system, regardless of whether it is organic like us or silicon-based, is self-organizing within a Markov blanket, minimizes prediction error (i.e., Friston’s variational free energy), and prioritizes its needs, then subjectivity will emerge. It will give a damn.
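For those who want the formalism behind that claim, here is a standard statement of Friston’s variational free energy; this is my gloss on the published formulation, not anything from Fred’s presentation:

$$
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s \mid o)\big] \;-\; \ln p(o)
$$

Here o stands for the system’s observations, s for its hidden states, q(s) for its internal approximate beliefs, and p for its generative model. Because F is an upper bound on surprise, −ln p(o), a system that minimizes F both drives its beliefs toward the true posterior and keeps its sensory states unsurprising; that, in functional terms, is the work the project claims underwrites subjectivity.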
I have questions about the narratives being built around this ambitious project. It’s offered as a salve for Amy’s vertiginous oscillations, an Archimedean platform for harmonious relationships with “alien forms of consciousness—or even superhuman intelligence.” Don’t worry about not being the smartest creatures on the planet; these aliens we’ve made are just like us, only better. Don’t be afraid of them.
Despite my deep respect for Fred and all I’m learning from him about the neuropsychoanalytic project, I couldn’t disagree more. I believe we’re at a crossroads. One path leads to a future of embodied relationality as the foundation for what it means to be human, the starting point for the new AI worldview container that is replacing the dying worldview we call humanism. The other reduces the meaning of being human to mere information flow, a functionalist reduction that can run in neurons, transistors, or qubits, or even, if you think it through, a near-infinite collection of equations on yellow legal pads. Of course, information flow is necessary for human consciousness, and simulating it can help us test certain neurobiological theories. But we are much more than information flow. An analogy: a dancer needs blood flow. Imagine a computer perfectly modeling every functional consequence of that blood flow—every heartbeat, every oxygen molecule. Would you say that computer is dancing? Or merely simulating one crucial aspect of what makes dance possible? Saying that AI information flow generates experience similar to your consciousness is either a category mistake or reflects the very dehumanization at the core of my dystopian fears. Simulating consciousness is just not the same as creating artificial consciousness, and confusing that difference is dangerous.
Basically, I believe anything that either underdevelops or undermines the foundational centrality of embodied relationality needs to be carefully avoided. As anybody who has ever nursed or been nursed by a loved one knows, embodied relationality deserves our reverence. And as anybody who has ever been at a formal party and danced like no one was watching knows, it also deserves to be treated with irreverence. We bring this both-and to the treatments we begin today. They will unfold in a world where these very qualities—our capacity for both sacred presence and playful spontaneity—will distinguish genuinely human care from its algorithmic simulations. Our task is to ensure that when our patients emerge from analytic care into an AI-transformed world, they value what it means to be truly seen, held, and recognized by another embodied, mortal being.
Resources
Fromm, E. (1968). The revolution of hope: Toward a humanized technology. Harper & Row.
Fuchs, T. (2024). Narcissistic depressive technoscience: Why modern man yearns to be replaced, fantasizes of being the one to do it, and how we can stay human instead. The New Atlantis, (76), 79–95.