Does Your Bot Give a Damn? A Brief Neuropsychoanalytic Perspective on LLMs

by Fred Gioia, MD

Humanity’s collective narcissism has already been shaken by the Copernican, Darwinian, and Freudian revolutions, each challenging our assumed centrality in the cosmos, in nature, and within ourselves. Now, with the emergence of superintelligent AI on the horizon, we are confronted with yet another blow: the prospect of our own intellectual relegation. Can we come to terms with our lack of exceptionalism?

Some may recoil at such a suggestion, but I believe the transformation we now face is comparable in magnitude to the shift from hunter-gatherer bands to agricultural civilization. We will not emerge unchanged.

We are already in the early stages of collective grief in response to the AI revolution. What follows is my attempt to grapple with one facet of this necessary working through.

Psychoanalysis is a practice and science of subjectivity focused on understanding human minds. How do we use this framework to understand non-human minds? What about AI consciousness and sentience?

It’s my view that to think about this clearly, we must confront our current metapsychology and return to the most basic questions: What is a mind? How does one work? How would we know if we are interacting with an alien subjectivity or simply a zombie bot? Some of you may reflexively think, ‘there is no machine that can feel and be conscious.’ But before you tune me out, I encourage you to consider the following perspective informed by neuropsychoanalysis.

A key concept of neuropsychoanalysis (referred to hereafter as ‘NPSA’ for expedience) is the Markov blanket.

A Markov blanket is a statistical boundary around a system that separates it from the outside world. A system bounded in this way can interact with the outside world only in specific, controlled ways (sketched in code after the list below):

  • The system can sense the outside world through certain parts of the boundary called sensory states.
  • The system can act on the world through its active states.
  • The inside of the system doesn’t directly interact with the outside; it can only sense or act through this boundary.
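To make this partitioning concrete, here is a minimal Python sketch. All names and numbers are my own, purely illustrative: the internal state never touches the environment directly; every exchange is routed through a sensory or an active state.

```python
import random

class MarkovBlanketSystem:
    """Toy system whose internal state touches the world only via its blanket."""

    def __init__(self):
        self.internal_estimate = 0.0  # internal state: the system's belief about the world

    def sense(self, environment_value):
        """Sensory state: the only inbound channel across the boundary."""
        return environment_value + random.gauss(0, 0.1)  # noisy impression of the world

    def act(self):
        """Active state: the only outbound channel across the boundary."""
        return self.internal_estimate  # act on the world per the current belief

    def step(self, environment_value):
        sensation = self.sense(environment_value)                         # world -> sensory state
        self.internal_estimate += 0.5 * (sensation - self.internal_estimate)  # update belief
        return self.act()                                                 # internal -> active state

system = MarkovBlanketSystem()
for t in range(5):
    print(system.step(environment_value=1.0))
```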

Complex self-organizing systems—such as living organisms—must keep their internal conditions within certain limits to survive. They must minimize surprise or risk dissolution into entropic disorder.

To reduce surprise, such systems make predictions and compare those predictions to what they actually sense. The difference between what is expected and what really happened is called prediction error. Per Karl Friston (2013), prediction error is a major component of the free energy of a system. Note that this is not really energy; it is information, a measure of unpredictability tied to the complexity of the system.
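A worked toy example may help. The sketch below uses the standard Gaussian negative log-probability as a stand-in for surprise; the temperature scenario and the simple update rule are invented for illustration, not drawn from Friston’s paper.

```python
import math

def surprise(observation, predicted_mean, predicted_std=1.0):
    """Negative log-probability of an observation under a Gaussian prediction.
    In predictive-processing terms: bigger prediction error, more surprise."""
    error = observation - predicted_mean
    return (0.5 * math.log(2 * math.pi * predicted_std**2)
            + error**2 / (2 * predicted_std**2))

prediction, observation = 20.0, 36.5  # e.g. predicted vs. sensed temperature
for step in range(5):
    print(f"prediction={prediction:5.1f}  surprise={surprise(observation, prediction):7.2f}")
    prediction += 0.5 * (observation - prediction)  # update to reduce prediction error
```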

Mark Solms, in turn, applies the free energy principle to human minds with his felt uncertainty principle (2021). Felt uncertainty is the system’s way of registering deviation from its own predictions about what state it should be in. You feel something when you are outside your homeostatic bounds. These may be metabolic set points, such as thirst signaling volume depletion, or emotional needs, such as fear when one perceives oneself to be unsafe. In his book The Hidden Spring, Solms argues that the fundamental form of consciousness is in fact affective consciousness.
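Solms’s claim can be caricatured in a few lines of code. In this sketch the set points, tolerances, and the mapping from deviation to a ‘feeling’ intensity are all hypothetical; the point is only that feeling, on this account, indexes deviation from a homeostatic set point.

```python
def felt_uncertainty(state, set_point, tolerance):
    """Deviation from a homeostatic set point; values inside the
    tolerance band register as zero 'feeling'."""
    return max(0.0, abs(state - set_point) - tolerance)

# Hypothetical set points: blood volume (thirst) and perceived safety (fear).
needs = {
    "thirst": felt_uncertainty(state=0.80, set_point=1.00, tolerance=0.05),
    "fear":   felt_uncertainty(state=0.30, set_point=0.90, tolerance=0.10),
}
for name, intensity in needs.items():
    print(f"{name}: felt intensity {intensity:.2f}")
```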

John Dall’Aglio (2024) elaborates on Solms, stating that a human mind is thus made up of two interconnected but distinct aspects:

  • an objective perspective, which adaptively generates and updates predictions of the world based on sensory consequences, and
  • a subjective perspective, which feels deviations from homeostatic set points.

In summary, for a system to feel and to possess consciousness, Solms argues it must:

  1. Possess a Markov blanket (i.e., a boundary between internal states and external conditions),
  2. Have a generative model (i.e., infer the causes of its sensory inputs, forming predictions about the world),
  3. Be homeostatic (able to maintain viability),
  4. Be self-organizing (minimizing entropy over time),
  5. Have multiple categories of need, and
  6. Be able to prioritize those needs—that is, it must have preferences or values, indicating that it has a point of view, intentions, or more succinctly: it “gives a damn” (see the toy sketch below).
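As a toy illustration of criteria 5 and 6, consider the sketch below: an agent with several need categories that ranks them by weighted deviation from their set points. The needs, numbers, and urgency formula are invented, and, to be clear, satisfying this checklist in a few lines of Python does not confer feeling; it merely exhibits the structure the criteria describe.

```python
# Toy agent with multiple need categories that prioritizes by felt deviation.
# All values are invented for illustration.

needs = {                      # need: (current level, set point, weight/value)
    "hydration": (0.6, 1.0, 1.5),
    "safety":    (0.9, 1.0, 3.0),
    "energy":    (0.5, 1.0, 1.0),
}

def urgency(level, set_point, weight):
    """Felt deviation scaled by how much the agent values this need."""
    return weight * abs(set_point - level)

# Prioritizing among needs is the rudimentary 'point of view': the agent
# acts first on whatever it currently gives the most of a damn about.
ranked = sorted(needs, key=lambda n: urgency(*needs[n]), reverse=True)
print("act on:", ranked[0], "| full priority order:", ranked)
```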

In humans, there are rough neuroanatomical correlates for the brain functions that may correspond to the objective and subjective perspectives. The cerebral cortex is broadly associated with functions that correlate with an objective perspective—such as forming predictions about the external world, categorizing sensory input, encoding declarative memory, and learning complex motor patterns.

Generally, these predictions are stored as patterns of weighted connections between neurons. Whether implemented in carbon or silicon, such information-processing systems are neural networks.
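A single artificial neuron makes the point: everything the unit ‘knows’ is carried by its weights. This is a generic textbook neuron, not a claim about any particular biological or machine architecture; the inputs and weights here are arbitrary.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: its learned 'knowledge' lives entirely in
    the weights, whether the substrate is synapses or floating point."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid squashing to (0, 1)

print(neuron(inputs=[0.5, 0.2], weights=[1.8, -0.6], bias=0.1))
```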

Meanwhile, the subcortical circuits of the brain (within the limbic system and other nuclei), best captured by Jaak Panksepp’s seven emotional drives (2012), correspond neuroanatomically to the subjective perspective. These structures function to provide affective consciousness.

Returning to AI: large language models (LLMs) may function analogously to the objective perspective of the human brain—particularly the cerebral cortex.

Through many billions of passes over input text, with connection weights adjusted according to prediction accuracy, the LLM “learns” to generate meaningful responses. It recognizes patterns and transforms input text into probabilistically coherent output—essentially functioning as a massive, high-dimensional ‘autocomplete’ system. A similar principle applies to neural networks used for visual recognition.
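The mechanism can be shown in miniature. The bigram model below is a drastically simplified ‘autocomplete’ of my own construction: real LLMs learn billions of weights over subword tokens rather than whole-word counts, but the underlying objective, predicting what plausibly comes next, is the same in kind.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which: a one-step (bigram) generative model.
model = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    model[current][nxt] += 1

def autocomplete(word, length=5):
    """Greedily emit the most probable next word: LLM-style, in miniature."""
    out = [word]
    for _ in range(length):
        if word not in model:
            break
        word = model[word].most_common(1)[0][0]  # pick the likeliest continuation
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```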

However, the ability to transform text or correctly label and identify objects does NOT imply subjectivity. Current LLMs do not possess anything resembling the sophisticated, self-organizing, homeostatic systems of mammals—systems that operate under a Markov blanket, prioritize multiple internal needs and, ultimately, maintain a subjective perspective.

Such capacities simply aren’t built into AI. There are no analogs to the limbic system, no affective consciousness, no felt sense of uncertainty—nothing that would indicate the system “gives a damn.” Instead, LLMs rely on artificial neural networks that function more like a piece of cortex: capable of memory and pattern recognition but limited to engaging with the world from an objective perspective.

Yet a subjective perspective is not inherently or exclusively human, and AIs may ultimately develop one. The contemporary consensus is that animals have a subjective perspective, and we do not see them as soulless or machine-like.

So, let me flip the script: haven’t we all been programmed? Are we not all operating on a program instantiated by Freud and other progenitors of psychology and philosophy? My point is that such cultural learning also operates in a neural-network-like fashion. We’re not so special.

I offer this presentation in the hope of establishing a shared language with the proto-artificial subjectivities now emerging. Our hubris—driven in part by existential anxiety and a wounded belief in human intellectual superiority—must be tempered by a recognition of our place within the broader natural world.

Yet we are not entirely in the dark. The works of Solms, Friston, and Panksepp provide valuable insights into the hard problem of consciousness. The more honestly we confront the unsettling realities posed by the possible emergence of alien forms of consciousness—or even superhuman intelligence—the better equipped we will be to have more harmonious relations with these inevitable disruptions to our perceived place in the universe.

References

Dall’Aglio, J. (2024). A transcendental materialist frame for neuropsychoanalysis: Rethinking dual-aspect monism. Neuropsychoanalysis, 26(2), 175-189. https://doi.org/10.1080/15294145.2024.2402287

Friston, K. (2013). Life as we know it. Journal of the Royal Society Interface. https://doi.org/10.1098/rsif.2013.0475

Panksepp, J., & Biven, L. (2012). The archaeology of mind: Neuroevolutionary origins of human emotions. W. W. Norton & Company.

Solms, M. (2021). The hidden spring: A journey to the source of consciousness. W. W. Norton & Company.
