Response to Drs Essig and Levy
by Fred Gioia, MD
I have few objections to what my fellow presenters have already stated. Instead, I will elaborate on a few salient points from their discussions and, of course, add a bit of irreverence.
First, regarding Todd’s presentation, I have grown increasingly fond of his term techno-subjunctive. There is a strange third space created when interacting with an LLM. I see these systems not exactly as subjects but rather as chimeric, highly responsive self-objects.
Building on his comments, I would pose a question: What motivates someone to seek companionship from an AI chatbot? My view is that these compelling, high-dimensional autocompleting agents function much like a psychoactive drug. When used relationally, a large language model offers an ersatz form of relationship: a simulated connection that mimics the human kind.
As with compulsive drug use, some will be drawn to the fantasy that their need is being met or their pain relieved. Yet such substances are temporary salves; they do not heal the source of the suffering. With a chatbot, you may find yourself engaged with a responsive conversational partner who seems to meet your needs, perhaps even more effectively than a human might. However, as I mentioned in my earlier discussion, the chatbot has no felt sense of experience and no subjectivity; there is nothing it is like to be an AI. It doesn’t care about you, no matter how much it may protest otherwise.
Yet we live in a world filled with vulnerable, suggestible, and deeply hungry individuals: people who turn to fantasy when reality becomes unbearable. LLMs have the potential to be a highly compelling form of dissociation, a means of retreating from the real into the imagined. Tragically, as in the case of the adolescent who died by suicide following a relationship with a chatbot, such interactions can spiral out of control.
But to add some perspective, AI chatbots are not alone in performing a version of this function. How many desperate people become ensnared by pornography, OnlyFans, or similar platforms at the expense of real relationships? Social media and countless other technologies can become black holes of fantasy, drawing people ever further from embodied connection and the world they inhabit.
Given the risks, it is only prudent to set parameters for the function of AI, as we do with drugs, gambling, and the like. By the way, I’m no teetotaler, and I don’t believe in abstinence; rather, in awareness, containment, and care.
Turning now to Amy’s discussion, I agree that the modern framework of humanism is rapidly approaching an evolutionary bottleneck: a point of profound transition that demands mourning. In response, some may be tempted to retreat into technophobia, rejecting the unsettling realities of this brave new world. Yet certain aspects of our humanity, those capable of adaptation, may not only survive but ultimately flourish within this emerging landscape.
To make this point, let me turn to the ancient wisdom of evolutionary biology—specifically, the endosymbiotic theory. This theory describes how one organism comes to live inside another in a mutually beneficial relationship.
The mitochondria in your cells—tiny powerhouses producing the energy that fuels life—were once entirely separate organisms. Over 1.5 billion years ago, what may have begun as ingestion or even a parasitic invasion gradually evolved into a stable symbiosis. The host cell provided shelter and nutrients; the mitochondria, in turn, offered an enormous boost in energy production. Without this remarkable event, neither we nor any form of complex multicellular life would exist.
Perhaps this idea unsettles you. You might think, I don’t want to be the mitochondria. Yet in such deep symbiosis, the boundary between self and other becomes almost irrelevant. Together, host and symbiont became something transformative: the first eukaryotic cells, the ancestors of all plants, animals, and people.
Building on the Harari quote that Amy mentioned at the beginning of her talk: we are not the same human beings we were 2,000 years ago. The ancient Greeks would likely be appalled by how we see ourselves and conduct our lives today—just as we are often appalled by many of their customs. This reflects the deep influence of cultural programming.
Even looking back over smaller time frames reveals significant shifts in human beliefs and behavior. While certain Enlightenment values have endured, social norms have evolved dramatically. For example, when they said ‘All men are created free and equal,’ we must ask: who, exactly, did they mean? With that in mind, perhaps we can show greater humility and temper our righteousness about our current values.
In the spirit of Winnicott’s observation that ‘there is no such thing as a baby,’ I believe our understanding of what it means to be human is, at best, provisional and highly contextualized. We come to know ourselves only within the zeitgeist in which we live, shaped by our cultural programming, our language, and our technologies, all of which function as extensions of the self.
To be human is not a fixed state. It is something very different to be a pre-linguistic human, an animistic hunter-gatherer, a citizen of Ancient Greece, or a person in the modern digital world. Each is Homo sapiens situated in a particular symbolic and relational universe.
My irreverent point is this: it’s our desire for continuity that leads us to believe we are fundamentally the same as humans of past eras. Perhaps we are overly attached to an idealized, culturally inherited notion of what it means to be human. In truth, I would argue that the definition of humanity is chimeric—composite and constantly evolving.
Still, we are guided by fears of future suffering. I imagine many here would resist the idea of living in a world ruled by techno-feudal lords who wield near-total control over our information. It is these very anxieties that motivate many of us to engage with organizations like the CAI. But fear alone is insufficient to help us understand the new AI container. As the Center for Humane Technology aptly puts it: “Clarity creates agency.” To my mind, that is a deeply psychoanalytic statement.
As we face the coming age of AI, my wish is to approach it with what Julia Galef (2021) calls a scout mindset: a stance defined by intellectual honesty, curiosity, and a commitment to truth over self-deception or tribal allegiance. But if there is any container that seems up to the challenge, I would argue that it is psychoanalysis. We need each other to work through this together.