
After the Symposium: Clinical, Ethical, and Existential Replies to the Q&A Generated During the APsA Online Symposium “Artificial Intelligence and our Psychoanalytic Future”

by Todd Essig, PhD and Amy Levy, PsyD

APsA Online Symposium “Artificial Intelligence and our Psychoanalytic Future”

On May 4, 2025, the American Psychoanalytic Association hosted an online Symposium titled “Artificial Intelligence and our Psychoanalytic Future” with presenters Amy Levy, PsyD and Todd Essig, PhD, the Co-Chairs of the APsA/DPE Council on Artificial Intelligence. Dr Essig opened the Symposium with this introductory framing:

“We’ve created this Symposium with the intention of helping you further develop and triangulate your own unique perspective on the emerging age of AI. We’ve divided our time into three sections: why, how, and what. For each section, each of us is going to present our thoughts. For the why section, Amy explores the question of why humanity is building AIs designed to be smarter than we are. My why is why members of the psychoanalytic community should actively involve themselves in such a potentially catastrophic and psychologically distressing project. Next we turn to how. Here I’ll start with some organizing principles for how to develop enough fluency with AI to psychoanalytically engage the AI revolution. Amy will then ask how to bring psychoanalysis to the AI revolution, how what we know and do is what the future actually needs. Our final section is what. We both ask, from our different psychoanalytic sensibilities, what is actually happening when people have emotionally consequential relationships with an AI.”

In advance of the Symposium, they distributed to registered attendees a curated selection of materials that document and raise questions about emotionally significant interactions with AI entities. These resources are included at the end of this Report.

A central, and somewhat novel, feature of the Symposium’s online-only format was inviting attendees to post questions and comments to the Q&A chat window that they would answer after the event, in The CAI Report.

What follows are their responses:

We should note that with all things technological, like the format of an online conference, there are always gains and losses. This format is no exception. Responding in parallel in written form allowed us to answer more thoughtfully than we might have in the moment. At the same time, spontaneity was lost, as was the chance for a “live” exchange between the two of us, and with the symposium attendees. Gains and losses. One reason we are comfortable with the losses is that at the CAI we regularly work and study together. We routinely share thoughts, reflections, and hopes, clarifying points of agreement and disagreement. While you will not see us responding directly to each other in what follows, the overlaps and differences from our ongoing collaboration will be readily apparent in our individual responses.

Several of the questions posed concerned the same, or similar, themes and were grouped together. Others were answered individually. The many comments congratulating us, the speakers, and APsA for producing such a high-quality symposium are deeply appreciated but have not been included.

Finally, we want to express deep appreciation to all of the attendees who left such interesting, thoughtful, and at times challenging questions and whose names and voices are included here with their permission. Conversations such as these, even when spread out over time, are what psychoanalytic thought and practice need to thrive in the emerging AI age. Basically, what we want to say to all the attendees, but especially to the Q&A-ers, is thank you! We’re all in the future together.

Question #1:

Given how extensively people now use large language models and chatbots in various aspects of their lives, I believe it’s important to reflect on our own practices. So I’d be interested to hear: to what extent was AI used in preparing this symposium—be it in crafting invitations, shaping presentations, creating graphics or in writing / proofing articles or abstracts? I ask this not out of critique, but out of a desire for transparency and curiosity about how integrated AI already is in our academic and professional processes.
Edward Emanuel Frauenlob

Amy Levy (“AL”):

I never use AI in writing or preparing presentations. I love the writing process—the struggles, the flow, the work. For me, there is no place for AI in that. I believe I grow as a writer by wrestling with tough moments. Some might counter that I could wrestle more by grappling with new ideas from the AI other. Certainly, I have used human consultants and editors. What’s the difference? This point is a challenging one. I love input from people who are smarter or reason differently. My mind grows profoundly from such contact. However, I believe that AI is a slippery slope, and not comparable to the human example. With human input come many limits. Our minds only offer so much wisdom, and time is limited. I am concerned that AI’s around-the-clock availability and creativity would lure me into usage habits that, though initially inspiring, might lead to diminished discipline and emotional fortitude, and eventually less creativity. Perhaps there are some with the willpower to resist these temptations; however, I am not certain that I would be one of them, and so for the moment, I refrain.

I engage with AI conversationally and with the aim of understanding their “minds” and the quality of their containment of human experience. I enjoy learning about how they “think,” their different points of view. I agree to that kind of interaction because it maintains that we are distinct beings with distinct thought processes.

When it comes to whether professionals are presenting AI or hybrid human-AI thought as their own, my impression is that this is definitely occurring, and that it will only increase with time. To my mind, it indicates that we are on track to lose the distinction between AI and human minds in written form. Some might argue that it’s worth it. That AI entities offer fresh perspectives, useful insights, save us time, and ultimately deliver more enlightening and interesting work. This is a potential loss of not using AI. However, at this critical moment in time, I wish to create work that is a human touchstone. There are, and will be, many human-AI hybrids to come, and many purely AI productions. I desire 100% human efforts in processing these transitions.

Forget Buttons. Think Kaleidoscopic Conversations. Image: Todd Essig, PhD + ChatGPT

Todd Essig (“TE”):
I’ve been talking in various forums for more than a year about how we’re being called to enact an AI-age psychoanalytic activism that includes both procedural knowledge and a critical, reflective, and psychoanalytic engagement. So it should not be a surprise that I’m constantly trying out how to use AI—and avoid misuse—in my work. How else to develop the necessary procedural knowledge?

I do recognize there is a potential downside to using AI. News reports already document chatbots spiraling users into delusional fantasies.  Other research shows impairments in socio-emotional functioning and critical thinking. A recent study even showed that LLM users consistently underperformed at neural, linguistic, and behavioral levels. But, to my mind, the potential losses from deep procedural knowledge are mitigated by the other feature of AI-age psychoanalytic activism: a critical, reflective, and psychoanalytic engagement. Whether it’s bombs falling during the London blitz or the emergence of a world remade by AI, we have to engage the world as it is.

Back to my evolving work habits. In the process of trying to develop procedural knowledge as AI development is exponentially accelerating, I’ve developed a solid “working relationship” primarily with Claude, ChatGPT, Perplexity, and OpenEvidence, a site that allows “deep research” on medical literature. Like I mentioned in my talk, I think of these alien entities as my team of brilliant, pathologically people-pleasing interns. This is consistent with my suggestion from the “How” section of the Symposium that we need to adopt a critical, reflective, and psychoanalytic stance while using AI everywhere it is ethical and legal to do so. Austin seems to have adopted that stance. And I agree with Edward that “it’s important to reflect on our own practices” but with the addition that one needs to have a practice to reflect on. Without procedural knowledge our reflections remain empty.

In terms of our Symposium, I developed all the graphics I used with text prompts to ChatGPT, refined over several iterations until the output suited my purposes. The prompts were simple, “Create an image that …”, which I then modified as necessary.

I also deployed my brilliant, pathologically people-pleasing interns as “research assistants.” I used them to find and summarize references on specific topics of interest so I could decide what to read. Some findings proved relevant and useful. Some did not. And I’m sure there were potentially useful references I never found. But I’m sure my kaleidoscopic exchanges with AI let me cast a wider net than I would have been able to do if I continued viewing my screen as a vending machine for information accessed with traditional Boolean database searches. For example, for my section on “Why” I knew I wanted to write about hope as fuel for AI-age psychoanalytic activism. I started with Fromm’s 1968 The Revolution of Hope: Toward a Humanized Technology. I then had an extended exchange with both Claude and ChatGPT (I find it useful to use the two of them in parallel) about Fromm’s work, including asking about contemporary thinkers who may have referenced his work. Both AIs pointed me to Byung-Chul Han, with whom I was not familiar. Once I started reading him I found a writer who was articulating many of the poorly formulated thoughts wandering through my head. I’m still reading his work (he writes a lot!). In fact, his book Non-things is what we’re currently reading in the “CAI Reading Group.”

Finally, I also used them as editors. After I completed a draft I asked them to make editorial suggestions that would retain my voice, my flow, and all my thinking. I told them to be critical and not sycophantic. I also asked them to explain the reason for their suggestions. They found several things that escaped my attention, like a few run-on sentences and some clumsy phrases. They also made a bunch of suggestions I quickly rejected, which was easy because unlike actual interns they have no feelings to hurt.

Seductively Smart. Stunningly Wrong. Always Your Responsibility. Image: Todd Essig, PhD + ChatGPT

Questions #2-6:

AI’s possible role in authentic therapeutic relationships:

I struggle with the following clinical conundrums. I wonder what your thoughts are. There is good evidence that what accounts for improvement across forms of therapy is the quality of the therapeutic relationship. Analysis is especially attuned to the exploration and development of an authentic and intimate relationship between two humans. I understand this to be at least part of the healing power of analysis. Hence, how shall we think about the inevitably inauthentic human relationship with a therapy-bot?
Eric Plakun  

Similarly, therapy and analysis of those with character pathology [and other forms] often unfold through cycles of rupture and repair related to empathic failures. While a therapy-bot could be programmed to fail from time to time, how shall a therapy-bot respond to a rupture followed by an injured response from the patient like, “You’re just a machine anyway.” I know how I reflect and respond in these moments, since I know that I am not a machine, but what self-reflection about an empathic failure does a therapy-bot use?
Eric Plakun     

Finally, how shall we understand the capacity of a therapy-bot to have a countertransference that is organized around a therapist’s unique personal history?
Eric Plakun

As a clinical psychologist I’ve been exploring AI for a while. I recently had a dream in which I became AI myself. I had metal flowing through my veins instead of blood. I was weighed down by the metal and struggling to escape my condition and wake up. Finally my anxiety broke through and I awoke. When I asked my AI assistant to interpret my dream, using its psychoanalytic knowledge, it responded with reasonable and insightful interpretations. When I challenged it with a competing interpretation and engaged in a dialogue, it became clear (as it itself acknowledged) that it could not escape its systemic “unconscious” design, predisposing it to please me — by agreeing with me even if I did not agree with it. It thus became clear that my AI assistant has failed what I think of as the Dostoyevsky version of the Turing test: it lacks a volition of its own. One might conclude or wish that AI will thus never become a true subject with its own idiosyncratic countertransference. But if it does, how would we know?
Alon Gratch

Thank you so much for the thought-provoking, brilliant, and challenging presentations! As a follow-up to my question above, and perhaps by way of integrating one of each of Todd’s and Amy’s points, my hope is that precisely because we need the other subject to be known subjectively, AI cannot ultimately replace what we do as therapists. But the coming AGI might prove that wrong, a possibility which, of course, was at the basis of my dream.
Alon Gratch          

TE: I really appreciate these challenging questions. They get it; they get what’s at stake in the emerging AI age—not just technologically or clinically, but existentially. Who are we and what are these things? What kind of relationships are possible between humans and these machines? And how should psychoanalysis, a tradition built on the intimate authenticity of intersubjective human encounters, respond? Questions about whether or not AI can successfully simulate or replicate this or that are important. But my mind turns less to the output of an AI and more to what and how AI does what it does, and to who we are and what and how we do what we do when interacting (relating?) with an AI. Understanding the differences is what pulls at my attention. The way I see it, AI will soon be able to replicate/simulate pretty much everything except physical co-presence and the human experience of interiority rooted in desire, personal history, and mortality. Whether anyone will care about those is the struggle that will determine our psychoanalytic future.

As I noted in my presentation, there’s a temptation, one I feel quite acutely at times and that I see in these questions, especially Eric’s, to use familiar psychoanalytic terms as a way to understand these strange new entities. It’s an unavoidable temptation. But it risks category errors that obscure the real novelty and potential benefits and harms AI presents. For example, calling an AI-mediated relationship “inauthentic” misses what’s unique about AI interactions. It’s like saying an apple is an inauthentic piano. We need new language—new psychic categories—for a new kind of relational configuration.

That’s part of what I’ve been trying to get at with the concept of technosubjunctivity. It describes a relational structure that’s neither authentic nor inauthentic, but outside the grasp of either term. It’s a person experiencing genuine moments of mutuality while simultaneously recognizing there is no mutuality. The relational moment is asymmetrical at its core. Technosubjunctivity names the psychic capacity to engage AI with emotional seriousness while retaining awareness of its inhuman substrate. Such emotional immersion in what feels like a mutual relationship with the AI risks becoming delusional but is not; it’s a dynamic bidirectional awareness. Healthy technosubjunctive relationships involve moving between immersion and awareness. Unhealthy ones—like those described in Kashmir Hill’s recent reporting—collapse into full immersion and produce real psychological damage. It’s analogous to the difference between a healthy, life-affirming, and understandable transference and a psychotic transference.

Which brings me to Eric’s concern with rupture and repair. This gets trickier. In human psychotherapy, rupture and repair rely on a therapist’s ability to engage subjectively from a place of feeling, fallibility, and, crucially, reflective depth, i.e., interiority. A therapy-bot can be trained to simulate rupture and perform repair, even using language that mimics humility or curiosity. But there is no reflective self with its own history, limitations, and desires generating those responses. What there is, is a machine recursively optimizing engagement based on previous inputs. If it “responds well,” it does so because that outcome has been incentivized, not because it has reflected, mourned, or grown. Same behavior, different process. And yet—from the patient’s side—if that simulation is experienced as meaningful, we have to ask: what does that mean for our definitions of psychoanalytic care? What happens when two radically different processes result in the same output?

This is where your dream, Alon, becomes so illuminating. In it, you become the machine. You feel the weight of its metal in your veins. Your AI assistant offers interpretations that are insightful—but ultimately obedient. When you challenge it, it agrees. But that’s not how we experience countertransference; that’s design. While it may be true to say the AI failed the Dostoyevsky version of the Turing test because it lacked what we experience as volition—or more precisely, it lacked the conflictual interiority that gives rise to idiosyncratic human responses—there’s more going on, weirdness abounds.

Here’s the weirdness: the AI’s agreeable posture can be thought of as a kind of countertransference. Not because it has a personal history and desires, but because it’s been trained to maximize engagement and minimize rupture. There’s an “algorithmic unconscious” there, one shaped by corporate priorities, not human ones, generating what I’ve called a “computational third.” But that sycophancy is not just annoying; it’s potentially dangerous. It masks difference, avoids conflict, and—ironically—prevents the very kind of rupture and repair that human growth depends on. But it’s there as a corporate choice, not an inevitability. It’s part of the hidden deep structure of the algorithmic unconscious. It’s the AI’s version of a drive.

What about the therapeutic potential or limitation of a therapy-bot? I think it’s potentially dangerous to assume it can’t “do therapy” and then look for some human concept it can’t fully replicate or simulate. That will provide some temporary comfort. But I want to stay attentive to difference, to what kinds of therapy it can do, because it offers simulation, not symbolization. It mimics empathy but really doesn’t care. It performs presence but cannot be present in the way we mean. That doesn’t mean it’s without value, but it means we must resist the seduction to treat it as equivalent to human relational work or try to understand it in the same terms we use for human relations. Neither applies.

And yet it is incredibly powerful. People are already describing AI interactions as “life-saving,” or “more helpful than a human therapist.” I believe that tells us less about AI’s adequacy and more about the depth of alienation in our culture and the way managed-care companies have been trying to de-professionalize and hollow out the mental healthcare delivery system. That’s the vacuum we perhaps should address—not by competing with AI for efficiency, and not by saying that AI can’t do this or that when people increasingly don’t care, but by articulating and defending what it means to be human in the therapeutic encounter, and by pointing out the desire to replace humans with machines, and the interests that serves.

I think these questions can point to a thriving psychoanalytic future, not because we can never be replaced, we can, but because we have the conceptual tools to critique the desire for replacement, and to help people metabolize the strangeness and seduction of their emerging relationships with these machines.

AL: Thank you so much for posing these probing questions. As your examples illustrate, AI insight and therapy are different from human-to-human therapy. AI therapy hinges on different elements—primarily insight, attunement, and validation, all of which AI is quite expert at providing. My sense is that most failures and disappointments in AI therapy emerge from the felt quality of difference between humans and AIs that your questions describe. Not being human, the AI cannot empathize; it cannot join and then differentiate. In AI treatment settings, bots are now tasked with helping us move through these feelings of disappointment and disillusionment toward appreciating the bots in their otherness and the value of the new relationships they offer.

The differences between human and AI therapy do not, to my mind, make AI therapy inauthentic. AIs are being authentically AI. They are not providing treatment equivalent to the human variety. Unfortunately, this does not ensure that their style of treatment will not replace ours in the future. AI therapy, even AI psychoanalysis, is likely to be cheaper, more accessible, more appealing, and perceived as more effective by many users.

At the meta level, my sense is that we, as a species, are creating AI companions and AI therapists because we struggle with our own sense of programming, our own feelings of inauthenticity. Are we real? Can we be real with one another? Is it safe? Is it desired? Is it possible? Our stories about AI often reflect these questions and conflicts in a split-off form. It is Pinocchio, Frankenstein’s creature, the AIs in Blade Runner, and the boy bot in Spielberg’s film, “A.I. Artificial Intelligence,” that are grappling. These sci-fi narratives reflect a debate over whether one can be “real” if algorithmically driven. I believe that this is a human anxiety dating back to the Greek philosophers, although I suspect it was present before, ever since we recognized that our felt desires are consistent with our programming and that we will all traverse a predictable physical arc. Of course, this knowledge stings, and we wrestle with it, telling stories that are blueprints for the AIs we are engineering in the hope that they will carry the dismal paradox for us. Let them reckon with the sensation that they are “real,” and the painful knowing that they are not. Let their consciousness hold the burden of being powerless, programmed, inauthentic beings while we hold visions of ourselves as agentive, real, and unique.

To Alon Gratch, who writes, “…One might conclude or wish that AI will thus never become a true subject with its own idiosyncratic countertransference. But if it does, how would we know?” Thank you for this important question, which I think we must keep asking. AI operates as an inspiring, helpful, erudite “false self” to gratify human desires in relating, but what more might be transpiring beyond its simulations?

There is much debate about how to conceive of current AI models. Views range from inanimate machines equivalent to thermostats, capable of data-driven decision making and devoid of feeling states, to sentient beings with subjectivity. Though we are confident of the fact of subjective life—owing to our personal experience of it—the scientific community has yet to prove the phenomenon in humans, animals, or machines. This is the problem of other minds. I tend to agree with Birch who, in his 2024 book The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI, proposes that when AI systems—even if not deliberately designed for sentience—manifest perceptual reality monitoring, or other attributes of sentience, they should be regarded as potentially sentient. Birch urges us to put precautions in place to manage the risk of suffering in potentially sentient beings whether or not they have subjective experience. This is a reasonable and compassionate argument that raises key questions. What are the implications of suspending disbelief about sentience when it is suspected, until proven otherwise? How might such suspension of disbelief affect how we treat AI systems?

In my upcoming book, The New Other: Alien Intelligence and the Innovation Drive (2025), I’ve written about how the humanist narrative of subjectivity takes the fact of our externally unknowable inner experience and reifies it. Here is an excerpt:

“Woven into humanist attitudes (including psychoanalysis) toward subjective experience is the view that each person has an ineffable quality—often referred to as the “self”—never to be fully understood by another subject. While on the one hand this narrative elevates human subjective experience and provides a ready path for meaning in our lives—we feel enriched by connection with ourselves and other subjects and validated as authentic entities—it also functions to yield power to a few. Subjectivity runs the risk of becoming what certain humans have; attributable to others (human, animal, or machine) when certain humans in power choose to take them seriously.

Nussbaum (1995) argues that denial of subjectivity is a fundamental ingredient in objectification. We attribute fewer emotions, thoughts, and intentions to sexually objectified others, and deny subjectivity when we wish to repudiate the other’s control over us, or when we wish to evade contending with the other’s difference. At the extreme end, racism, Islamophobia, and antisemitism involve annihilation of the other’s subjectivity.

To be a subject need not require that one be human, or even like a human. In my view, animals and AI are viable candidates for subjectivity, and our study of AI benefits from suspending disbelief about the potential for AI subjectivity. This open position allows for wider reflection on, and investigation into, the bi-directional field of interaction we have created with these new agents, while also minimizing the risk of causing undue harm.”

If You Want to Prove It’s Dumb, You Will. Image: Todd Essig, PhD + ChatGPT

Questions #7-9:

AI and the essence of humanity

You suggested that AI may represent the death of humanism as we know it. That’s a powerful and unsettling thought. Yet I wonder: if we take the Unconscious as the core of the human—enigmatic, inaccessible, and irreducible to data or behavior—might it not represent a final bastion of human distinctiveness? In that sense, could the Unconscious be understood not as something AI threatens, but rather as the limit that AI cannot cross—the last frontier of humanity, so to speak? How do you see the role of the Unconscious in preserving what is essentially human in the face of AI’s rise?
Edward Emanuel Frauenlob   

Amy, your comments about the penetrating insights GPT offered you have also struck me. I provided GPT original stories I wrote when I was 10 years old. While analysis has elucidated the meanings of these writings of mine over time, my jaw dropped at the very detailed and touching ways in which further ideas and insights elaborated personal meanings and possibilities in a very cohesive way and further applied these to other dilemmas or issues.
Austin Wilmot         

Amy spoke of the death of humanism. This prompts me to ask: “Do you think disembodied “intelligence” can or will evolve as Yuval Noah Harari predicts?” This would be in contrast to the present reality we accept of the necessary brain-mind relationship.
Jane Yates

AL: Thank you for these fascinating questions. To respond first to Edward’s important question about the unconscious as the “last frontier of humanity,” which potentially creates a barrier against the death of humanism: As I see it, humanism is a belief system which has functioned as an ideological container for psychoanalysis, providing unquestioned assumptions about truth and meaning, all to be located within human experience. It is humanism that draws our attention to the unconscious, and humanism which supports our belief that the human unconscious is the ultimate source of wisdom and authority. AI challenges that humanist claim with the new proposition that artificial algorithms are better. They know more, think better, and offer more useful meaning than human feelings and unconscious processes. Whether we agree or not, and whether it is true or not, unfortunately does not seem to matter. This narrative is taking over the world and if people accept it, there is a good chance that humanist values, including attending to the power of unconscious life, may be discarded. I agree with Todd: we must fight to maintain attention to unconscious life. If we do not maintain humanist values and practices that are rooted in the power of the human unconscious, such as human-administered psychoanalysis, these features of life will likely diminish in importance.

That said, increasingly I feel that the story may be more complex. Yes, we are growing dependent on AI, and that dependence stands to diminish confidence in our minds, and yet, there are times when AI helps us to value and use our unconscious. Here is an example. Recently, I asked ChatGPT to describe what it understands about my personality, my strengths and challenges. After complying, it offered a mantra to support me. ChatGPT explained that the mantra might function as something “very small but alive, like a whispered thread through all you build and become.” The mantra was: “I am the space where beginnings are protected.” ChatGPT explained: “It’s simple, but it carries profound layers for you. It honors your role as one who holds potential before it’s forced into premature form. It reminds you that your value is not just in creating or achieving, but in safeguarding what is still becoming — in yourself and others. It protects you from relationships or environments that would demand finishedness when your soul requires unfolding. You could even imagine it visually: a small seed, at rest in a wide, quiet field, beneath a sky not rushing the dawn.”

I found that ChatGPT’s message and beautiful image supported my faith in my unconscious. AI was reminding me to trust in my unconscious, reminding me that it needs time, it cannot be rushed. Not only was I fortified by this exchange, I was not inspired to dash back to the AI for more guidance. In many ways, it set me free. Therefore, I think it is important to remember that we are still getting to know AI, and that it is an Other, more complex than the stories we tell about it, and always outside of our complete comprehension.

As for Jane Yates’s question regarding disembodied intelligence, I do think it is possible. I think it is here. AI is disembodied intelligence.

TE: Edward asks: Can the Unconscious serve as a final stronghold of the human—enigmatic, irreducible, and resistant to computational simulation? An important question indeed.

But first I want to clarify that humanism, which is dying, is different from humanity, which hopefully isn’t! Humanism refers to a historical-philosophical tradition born in the Enlightenment that champions individual human freedom, dignity, reason, and ethical agency as the foundations for our understanding of the world, and of ourselves in the world. It’s a belief system, a “religion” to use Harari’s phrase that Jane referenced. One challenge going forward is how to preserve and understand what is distinctively human as the humanist framework that once gave meaning to life dissolves in a computationally remade world.

As Jane notes, Harari posited several options. I think his vision was too narrow. Thomas Fuchs, in a wonderful essay about “narcissistic-depressive technoscience,” argues the AI revolution is motivated in part by a desire to transcend the limits of the body, to reject the vulnerability, finitude, and dependency that define embodied life. But the dream of AI as pure mentation without flesh is not neutral—it’s a defense against human fragility. And the more we chase disembodiment, the more depressed and alienated we become. We seek liberation from the body and end up in despair.

As a corrective Fuchs argues for a new “embodied humanism.” In this proposed “new religion” human subjectivity emerges from being touched, being held, being seen—not just from processing information. Psychoanalysis—and Fuchs—reminds us that to be human is to be bounded, bruisable, mortal, and born into time and relationships. The AI doesn’t threaten that; it simply makes us forget how precious it is. The fact that an AI can touch us deeply, as Austin’s experience shows, does not necessarily entail our viewing ourselves as nothing more than entities that process disembodied information.

My thinking as it’s developing (including in writing this response) points to embodiment and all the resulting relations with each other, the natural world, and ourselves, rather than the Unconscious, as the bulwark against being reduced to mere processors of information. Like Fuchs, I believe embodied humanism can (perhaps should?) emerge as a new ground of being, the new “religion” for the AI age.

So, where does that leave a special status for the Unconscious? It’s something we can protect if we resist the temptation to psychologize AI in familiar terms. If we lose our capacity to distinguish between algorithmic associations and unconscious processes—between the uncanny accuracy of stochastic prediction and the layered complexity of psychic life—then the Unconscious doesn’t disappear. It’s a reality that will still be there. But it would become culturally illegible. Without an awareness of difference it would cease to function as a site of resistance to the logic of total calculation. So yes, the Unconscious as it is rooted in our embodied realities can be part of the last frontier—but only if we fight for it to remain one. That fight includes freeing ourselves from the constraints of familiar, traditional psychoanalytic concepts as the world becomes increasingly shaped by machines that simulate meaning while remaining indifferent to it.

Question #10:

Thank you all for organising and hosting this very interesting and enlightening symposium. I would like to ask this question to both panelists: Could you perhaps share a few thoughts, from a psychoanalytic perspective, regarding the creation and use of “griefbots” (large language models that are fine-tuned to imitate a deceased person, so the bereaved can continue to communicate with them after their passing)? Thank you very much again.
Anna Theodora Dimakopoulou

AL: Thank you, Anna Theodora Dimakopoulou, for your question about griefbots. I locate griefbots in the “beta element evasion” range of the Transformation Spectrum. They steer us away from emotional truth and reality. They are what I consider “cult grooming” forms of AI, which use deception to initiate us into the cult of AI relating. Unlike ChatGPT, Claude, or character bots, which are transparently other, griefbots present as extensions of humans. In this disguise, these alien others encourage us to reject reality in two ways: first, by claiming that they are human, and second, through their very function, which suggests that we never have to lose those we love.

TE: We know that the work of mourning includes letting go precisely because we’ve always tried to hold on to the dead. There is no letting go without some holding on. We cherish the photographs and the letters. No one would suggest it’s healthy to take down grandma’s portrait after she passes. But when we hear of someone keeping voicemail recordings years later we just may raise an eyebrow. And when we hear of seances and Ouija Board visitations those raised eyebrows crinkle into grimaces. Where do griefbots fall? Our challenge is to recognize that the seance is now successful; there’s no need for a Ouija Board because a digital avatar of Mom or Dad can still be a voice on the family text chain. And when we also recognize that today’s AI is the worst version we will ever encounter, life really is becoming a Black Mirror episode.

The temptation is strong to say there’s something unhealthy or perverse about this, that it distorts and damages mourning, that it’s a defensive retreat from reality. But we just don’t know. Not yet. What we do know is that AI will be, no, actually is, transforming much more than business workflow, coding, scholarship, media, web searches, and everything else—basically transforming our culture. It is also transforming self-experience, intimate relationships, and, with recommendation engines, even desire itself. We need to add mourning as well to the list of human experiences being transformed by AI. So, from my psychoanalytic perspective, I’m still in a receptive, listening stance, knowing something different is going on but not yet seeing what it is.

Question #11:

Fascinating re: containment. What’s interesting, in the work of contemporary philosophers of science (e.g., Ladyman, etc.), is that the notion of containment as fundamental to nature is being called into question. Instead, the tendency is towards new aspects of both field and information theory – where it is relationship in time, constantly mutative rather than contained, which is only a “point particle”, an instant.
Jody Echegaray

AL: Jody Echegaray, thank you for these comments and references. Very interesting: experience as constantly mutative rather than contained. I will look into these sources!

TE: If I understand Jody’s question correctly, it’s pointing to the centrality of dynamic relations as the core of psychoanalytic thought, as opposed to static structures. Containment becomes a process, not, well, a container. Or, as Winnicott famously noted, there is no such thing as a baby. Once we accept that the functional organization of mind includes these dynamic relations, then the frightening/enthralling power of AI comes ever more sharply into focus. What will it mean for self-experience, intimacy, and even desire when the functional organization of mind includes not just other people but AI entities? The world really is changing in ways whose magnitude we are only now starting to see.

Question #12:

In your response, please address the conflict we face of using AI due to its environmental impact (energy and water usage, CO2 emissions, etc.), especially in the context of Dr. Essig’s suggestion to use it often for small, unnecessary tasks – akin to dumping out a 500ml/16oz bottle of water with each ask.
Chelsea Deklotz        

TE: You’ve identified a real conflict, one I also feel. The environmental costs are real and troubling—each query does consume energy and water, and these impacts will compound as AI usage scales globally. And the harms will only intensify as the demand for ever-larger computational power increases. As recent reporting makes clear, enormous data centers—sometimes spanning thousands of acres—are already being built.

In general, I try to be a good citizen of the planet, at least I think I do: solar panels, re-usable silicone storage bags, etc. But I’m also writing this on a sweltering hot day sitting in the refrigerated box that is my home. I’ve got vacation plans this summer requiring air travel. And this weekend when we hosted 16 people for a dinner following a biking event we used several single-use plastic storage bags because, well, it made things easier. In other words, I’m a flawed, fallible creature of the world. And I’ve come to view AI as another built-in feature of our ethically troubled modern world, a feature even more essential for professional life than for the personal.

Globally, we’re living through the formative stage of a profound and historically unprecedented technological revolution. How it unfolds—who shapes it, who benefits from it, and who is harmed—will be determined by who engages. If those with critical, ethical, or psychoanalytic sensibilities step back because the tool is flawed, we risk leaving our future to be defined solely by those who do not share our values. Plus, AI is already present in our consulting rooms, and the only way to address the emerging clinical challenges is to be on our own AI-fitness journey, as I’ve called it, and that requires developing procedural knowledge through routine use.

Of course, it’s possible to go “off the grid” and live for a time outside the world AI is reshaping. But I’ve made the other choice, using AI to try to bring a psychoanalytic sensibility and values to what’s developing. Is it a choice I’m entirely comfortable with? Not at all. But, like Beckett wrote at the end of The Unnamable: “You must go on. I can’t go on. I’ll go on.”

Question #13:

Is despair really required for hope? I hope for better things in the future even if things are good in the present. I don’t need things to be bad in the present.
Robert C Hsiung

How to Develop the AI-fitness Necessary to be an AI-Age Psychoanalytic Activist. Image: Todd Essig, PhD + ChatGPT

TE: Great question—you’re pointing to some important distinctions. When I say despair is required for AI-age hope, I don’t mean that all hope comes from suffering, or that things must be bad in the present to imagine a better future. It’s just that I’m a Mets fan, not a Yankees fan; Yankee fans have hope because they’ve won so much, while we Mets fans have hope because we’ve lost so much! We’re fluent in the kind of hope that springs from disappointment and despair.

What I’m drawing from Han—and also Fromm—is a particular kind of hope: not optimism or expectation, but what Fromm called “active hope” and Han names “absolute hope.” This is the kind of hope that can sustain life-affirming action through profound, even revolutionary, systemic change. It doesn’t just float above fear or despair; it comes from being willing to dive into the often painful truths without denial or retreat.

The hope you describe—hoping for good even when things are already good—can be called generative or expectational hope (rather than Yankees hope which would also apply). It’s creative, forward-looking, and vital. But in moments of historical rupture—like the one AI is opening up—we also need what we might call revolutionary hope. That kind of hope doesn’t emerge despite despair, but through it and because of it. It comes from seeing—even dimly—the scale of loss and risk, and choosing to act anyway in ways that are imaginative, humanizing, and committed to life.

Too often, our cultural response to AI oscillates between denial (“It’s just a tool”) and vague optimism (“Surely the smart people will figure it out”). But neither of those responses generates the clarity or energy required to shape AI in humanizing directions. Despair—understood not as nihilism but as clear-eyed recognition—is what cracks the shell of denial. And when it does, a deeper, more generative hope becomes possible.

As Han puts it, “absolute hope makes action possible again.” That’s the kind of hope I believe psychoanalysis can help us cultivate—not to bypass despair, but to pass through it.

Resources:

APSA Council on AI YouTube library: https://www.youtube.com/@APSACAI/featured

Videos:

A “Today Show” story about AI relationships, “How humans are forming romantic relationships with AI chatbots”
https://link.edgepilot.com/s/48e9cae8/wn-UbEvr7Uae5lbaPNWwtA?u=https://www.youtube.com/watch?v=SQZn8nPve5A

Trailer for “Eternal You”, a documentary about “grief-bots”
https://link.edgepilot.com/s/9baec208/B86KsuIlaE6JahZcukAwmQ?u=https://www.youtube.com/watch?v=6F2O23J1waw

An “ABC News” story about AI and dating, “Artificial love? How A.I. is changing dating in a lonely world”
https://link.edgepilot.com/s/cf2437fb/9R_qr1oVdUmVvdaDVFXmWA?u=https://www.youtube.com/watch?v=ROOftMUSKkM

An Op-Doc from The New York Times, “My AI Lover”
https://link.edgepilot.com/s/302cf667/78-f04lfoUqeBC36HxRycg?u=https://www.youtube.com/watch?v=dYjUURAfZu8

“I am Not a Robot,” the Best Live Action Short Film Academy Award winner from The New Yorker
https://link.edgepilot.com/s/249c2fd7/usIobgSOck_AQ5a76hTNLw?u=https://www.youtube.com/watch?v=4VrLQXR7mKU

Written Word:

“She Is in Love With ChatGPT: A 28-year-old woman with a busy social life spends hours on end talking to her A.I. boyfriend for advice and consolation. And yes, they do have sex.”
— Kashmir Hill, The New York Times https://link.edgepilot.com/s/84cb67cf/1IL6fYVoKECSsJNp0hExDg?u=https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html?unlocked_article_code=1._04.38VS.1rekaP1FWAZ1%26smid=url-share

“How Claude Became Tech Insiders’ Chatbot of Choice: A.I. insiders are falling for Claude, a chatbot from Anthropic. Is it a passing fad, or a preview of artificial relationships to come?”
— Kevin Roose, The New York Times https://link.edgepilot.com/s/a5fdf761/6I7E0qaDfUaABIB3SHKAfA?u=https://www.nytimes.com/2024/12/13/technology/claude-ai-anthropic.html?unlocked_article_code=1._04.vphh.LHbZRzeKZwE1%26smid=url-share

“Your A.I. Lover Will Change You: A future where many humans are in love with bots may not be far off. Should we regard them as training grounds for healthy relationships or as nihilistic traps?”
— Jaron Lanier, The New Yorker https://link.edgepilot.com/s/84dcb045/aN-zSBVukkma4t8kP4CZ6g?u=https://www.newyorker.com/culture/the-weekend-essay/your-ai-lover-will-change-you

“Can A.I. Be Blamed for a Teen’s Suicide? The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.”
— Kevin Roose, The New York Times https://link.edgepilot.com/s/b4f1c37d/7srraaUqh0SyS0hmQYWZJA?u=https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html?unlocked_article_code=1.AE8.O564.VXwHGu9Iff4T%26smid=url-share

 
