
AI and Self-Erasure

Devil take a head in the air (1876) by Odilon Redon

by Richard B. Grose, PhD

Introduction

AI is everywhere these days, designed and marketed to help with everything. It has many faces: amazing, helpful, terrifying, consoling, powerful. In each of these aspects, there are elements of reality—a situationally appropriate assessment of the technology’s actual capabilities—but also of human imagination that fills in the blanks and goes beyond reality by attaching deep, often unconscious fears and wishes that derive their power from childhood traumas and experiences—what psychoanalysts call “transferences” (see www.verywellmind.com/transference-2671660). However powerful AI may actually be, some of its power is only in our minds and derives from our transferences to it. Many new technologies are genuinely powerful, capable of doing and achieving things we cannot, and encountering that sort of power invariably reminds us—if unconsciously—of early life when we were small and powerless in a big world of very powerful beings, adults. Thus, a powerful new technology can produce powerful transferences. Separating reality from transference is always difficult, and especially here.

One reason it is hard to separate the real—what AI can or will actually do—from its transferential implications can be found in the extremity of our needs. The climate crisis, autocracy, the threat of nuclear war, environmental degradation, social collapse, and to some even AI itself—with so many apocalypses pleading for our attention, we might be forgiven for hoping that AI can fix it all, investing in it an omnipotence to come to our rescue, just as our mothers may have done when we were children and needed help or comfort. A great need also tends to make us vulnerable to powerful transferences.

When the thought of the power of AI shuts down our thinking, a transference is arguably being taken—mistaken—as a reality. It is this important transference, the terrifying face of AI, that I want to discuss. This transference to someone (or something) fearsome rather than, say, benevolent or munificent can be felt in the way that knowledgeable people talk about AGI, Artificial General Intelligence, which is defined as an intelligence fully equal to that of humans. All of the world’s major AI labs are working hard to achieve AGI. Some experts are even imagining going beyond that to something referred to as superintelligent AI, which, some people assert, will be smarter, perhaps astronomically smarter, than humans. Often, when the prospect of AGI or superintelligence is mentioned, there is a pause, as if the speaker is momentarily overwhelmed. I imagine that the unspoken next thought is that once AI is “smarter” than we are, it will be justified in having power over us. Occasionally, a similar thought is actually spoken out loud: once we are no longer the most intelligent entity on Earth, we are finished. These spoken and unspoken thoughts express a transference that results in a confusion about means and ends.

Means and Ends

In principle no different from any technology that has preceded it, AI is a tool that can assist us in reaching our goals: writing a humorous birthday poem for a 5-year-old girl, creating a low-budget three-day plan for a visit to Paris, or writing an email inviting parents to the school bake sale, to cite three suggestions of help that AI offered recently when I opened a new document in Word. AI can also do many other things, such as generating a business plan for a new company and writing code. In all these cases, it furnishes a means to our ends.

However, when AGI is invoked, the means are quickly overtaken by the ends—the system, not us, is dictating what is planned, what is wanted. The implication is that when AI becomes AGI, then it will begin, or will be justified in beginning, to determine the ends for which the means are devised. Once it is as “intelligent” as we are, or more so, then it, not we, will be entitled—following this logic—to determine our ends. This confusion mixes up means and ends, introducing, as so often happens when AI is the topic, a sudden cessation of thinking. Why should an exceptional capacity to devise means give this technology—which, after all, is not human—permission to replace human ends with non-human ones?

The notion of abandoning human ends is ultimately a transferential submission to AI as a fantasy object. Submission of this sort is often a sign that a powerful transference is active. Children submit to the authority or the physical force of parental figures for a variety of reasons which are by and large developmentally normal and appropriate (so long as the adult is not abusing that authority). But when adults submit to a fantasy object, they are in the grip of a transference, a repetition of childhood experience. I view this submission as a self-erasure.

We can gain further clarity on this confusion of means and ends by thinking about the breathtaking capacities of AI: the instantaneous retrieval and organization of information on virtually any topic; its availability to engage as if it were another person on any topic while energetically expressing its apparent interest in helping; a powerful planning ability; immense capacities in mathematics, science, and coding. These are increasingly made available to or urged on us as a means to achieve our ends. However, when most people look for a partner, or when we raise our children, or pursue any number of other important drives or goals in our lives, we are aiming for very different things from what AI systems are designed to do. If AI-type capacities govern a person’s intimate choices and decisions, for instance, if we seek out a life partner based on their computational—non-human—capabilities, or if we raise children to resemble bots, we will have difficulty. Both in movies (Her, 2013) and in life, many people are forming powerful emotional attachments to and experiencing deep feelings for AI systems, agents, and avatars.

But I am describing something else: an unconscious response to non-human entities that are touted as, and that we accept as, “more intelligent than we are,” a response that drives us to abdicate subjective agency and self-determination. In this relationship with AI, we surrender our needs, and our ends cease to be central: we submit.

In so many discussions of artificial intelligence and superintelligence, the importance of human ends seems to vanish. I am suggesting that this confusion is indicative of a transference in the psychoanalytic sense—a willingness to abdicate self-responsibility and to cede power and control to entities that we endow with superior intelligence.

The vital question becomes: what is going on in this transference that results in this self-erasure? To help answer this, I propose to consider our transferential self-erasure before a technology in two important societal contexts: the emergence of American autocracy under Donald Trump, and a longer-term social-historical view of the impact of modernity on Western societies.

The political context of autocracy

First, let us consider the phenomenon of Donald Trump. I think it is not a coincidence that AI and Trump both had breakthrough years in 2016. In that year, of course, Trump was elected president of the United States for the first time. Regarding AI, as Google AI puts it, 2016 was “a watershed year” because “it marked the transition of the technology from a specialized academic field to a public phenomenon with major real-world implications.” Trump’s shocking electoral victory in November 2016 was preceded by Google DeepMind’s AlphaGo system defeating a human Go champion.

As a heuristic device, to better understand what self-erasure looks like, I want to note a similarity between these two ostensibly very different phenomena.

Since being re-elected to a second term, Trump’s effect on American politics—as of this writing—has been to push for a drastic downgrading of the consideration of human needs, which overlaps with but is separate from human rights, in the federal budget. There is not space in this brief essay to catalogue all of Trump’s attacks on the government’s care for human needs. Since January 20, 2025, the current administration’s actions have seriously, if not catastrophically, defunded medical research, universities, science, health care, environmental protection, climate policy, consumer protection, and education, to name only a few of the affected areas. This assault is part of a deliberate plan to radically reduce, if not erase, human needs from the purview of democratic governance. Such a plan and its intended outcomes cannot be implemented without allowance and support. This, I’m suggesting, is symptomatic of fundamentally the same impulse toward self-erasure as in our transferential relationship to AI.

My comparison of the terrifying reality of a permitted political assault on government with our transferential response to the advent of AGI is grounded in a societal readiness to see the self erased. Seen from a critical distance, autocratic politics and the submission to AI both depend on a certain self-erasure being acceptable in, even desired by, society. Thus, the second reason to look at them together is that both can be seen as reflecting the same fundamental exhaustion in society, both in effect saying, “you take over, this is too much for me.” Unless and until Trumpism and the allowance of democratic collapse and autocratic takeover are definitively denounced and opposed, and until AI applications and services are regulated and limited such that technological means cannot encroach on or subsume human ends, both phenomena are rightly seen as twin symptoms of an exhaustion that invites self-erasure.

This exhaustion, however, has a long history—in the 1880s, Nietzsche wrote about a similar social dynamic in On the Genealogy of Morals—a history that is important to understanding our predisposition to self-erasure in present-day technology, politics, and society.

The social-historical context of modernity

German social thought and sociology have long described the fraying of the social bonds in Western societies, producing the social conditions in which both Trump and AI are able to emerge and then seemingly to dominate. Here I propose only to sketch the long, convoluted social history leading to this moment.

Ferdinand Tönnies (1855-1936) published Gemeinschaft und Gesellschaft [Community and Society] in 1887, in which he sought to describe the process of modernity, the growing impersonality of European society, by isolating two ideal types that characterize that evolution. A Gemeinschaft [community] is seen as a small, rural social group in which work is viewed as a way of life, where relationships in the community are strong, private, and dependable. Tönnies uses words like “organic” and “real” to describe this pre-modern social group, in which there is no sharp distinction between ends and means. A horse does work but is also valued for itself.

A Gesellschaft [society] is seen as a large, urban social group in which work is seen as a transaction, where relationships are more temporary and governed by transitory common interests. Tönnies uses words like “imaginary” and “mechanical” to describe modern society, in which people are accustomed to making sharp distinctions between means and ends. A horse is valued for the work it can do to further the economic ends of its owner, and only for that.

Tönnies sometimes viewed his ideal types in developmental terms, regarding Gemeinschaft as reflecting the youth and Gesellschaft the maturity of a society. At the same time, he regarded them as ideal types, meaning they were abstractions that can never exist in pure form but that are useful in understanding complex social processes. His concepts, published in Germany in 1887, can also be seen playing out throughout American history: urbanization; the alienation experienced in large cities; the rise of capital; nostalgia for small-town life. Tönnies, who died in 1936, did not live to see the full effect of Hitler’s leadership of Germany, but his theory provides a basis for understanding the hold that Hitler had on the German people at that time: Gesellschaft can leave people psychologically stranded, lost in a world of money and transactions where the comfort and dependability of Gemeinschaft are only a distant memory.

Twenty-first-century America presents a comparable social situation, enabling Trump to succeed in doing the same thing as Hitler did in 1930s Germany. And in its capacity to provoke a regressive transferential relationship, AI, as I have described, can be seen as occupying a similar space. I posit that the self-erasure detectable in the fantasy of giving AI power to determine ends and not just means is best understood in this social context: as a transferential relinquishing of power to the seemingly more deserving entity. Thus, AI implicitly offers people who are lost in the psychic deserts of Gesellschaft a transferential solution, an overlord, perhaps even a father, although one who may end up destroying his children (as Hitler did to Germany). Tönnies gives us a first set of concepts to contextualize the transference (“you take over”) that AI can be seen to generate.

Max Weber (1864-1920) gives us a second set of relevant concepts. Known for The Protestant Ethic and the Spirit of Capitalism (1905), Economy and Society (1922), and two masterful late essays, Science as a Vocation (1917) and Politics as a Vocation (1919), Weber elaborated a view of modern life that implicitly built on the work of Tönnies, drawing out the psychological implications in the work of his slightly older contemporary. Weber wrote of the evolution from religions of salvation to the rationalization of life, with individuals now responsible for their own “salvation” in their lives on Earth. This implicit extension of Tönnies’s concepts captured the experience of people who move from a Gemeinschaft to a Gesellschaft social situation.

Weber had a tragic view of the demands that modern life places on people and their ability to deal with them. As life became more rationalized, in his view, the world lost something significant; he called this “the disenchantment of the world.” He saw individuals in modern society compelled to live in impersonal, bureaucratized structures that compel their obedience without providing a vital human connection. He called life in this situation “the iron cage.” Dying in 1920, Weber did not live to see the rise and fall of Hitler, a tyrannical maniac who offered Germans his own catastrophic way out of the iron cage that Weber imagined.

Conclusion

AI has exploded into our lives at virtually the same moment that autocracy has. Given their hidden similarities in the transferences they evoke, both can be understood through the lens of two social theories that predated but explained the West’s last, cataclysmic experience of autocracy, and that can give us a context within which to grasp what we are living through today.

AI, as I said, has many faces, from insistent helper to destroyer of jobs and transformer of the economy. Each face, or facet, will require its own individual and social response. I am writing about the “terrifying” AI, which terrifies, I am arguing, partly by evoking a transference. I have drawn the parallel to the autocratic politics of our moment. I have sketched the fundamental identity of the transferences the two evoke. There is much more to be said about this transference, but its fundamental characteristic in both situations is self-erasure.

With both autocracy and AI, it remains to be seen how extreme the transference will become. If American society concludes by fully accepting the autocrat and his demands, and if societies globally conclude by placing ultimate power in the hands of AI, then we will have to speak of a psychotic transference, with human societies relinquishing the minimum requirements for their health and survival. Then the situation will be dire indeed. As of this writing, the transference can be seen as neurotic and therefore amenable to interpretation and, hopefully, change.

 

References

Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press.

Nietzsche, F. (1967). On the Genealogy of Morals, trans. Walter Kaufmann. New York: Vintage Books.

Tönnies, F. (2002). Community and Society [Gemeinschaft und Gesellschaft], trans. and edited Charles P. Loomis. Mineola, New York: Dover Publications.

Weber, M. (1972). “Science as a Vocation” and “Politics as a Vocation,” in From Max Weber: Essays in Sociology, trans. and ed. with an introduction by H. H. Gerth and C. Wright Mills. New York: Oxford University Press.

Weber, M. (1958). The Protestant Ethic and the Spirit of Capitalism, trans. Talcott Parsons. New York: Charles Scribner’s Sons.

Weber, M. (1964). The Theory of Social and Economic Organization [taken from Economy and Society], trans. A. M. Henderson and Talcott Parsons. New York: The Free Press.

 

Alexander Stein