Refractions in the Digital Mirror
Using Psychoanalysis to Understand and Address AI Bias

By Dr. Karyne E. Messina

Dora Maar by Man Ray (1936)


The Problem: AI Bias in Practice

Imagine a not-too-distant future in which AI has begun to seep into psychoanalytic practice:

Dr. Olivia Bennington began using an AI-assisted clinical assessment tool. While working with Kylie, a Black woman who was experiencing racial trauma, the AI system consistently failed to recognize crucial cultural cues and historical information in Kylie’s narrative. Trained predominantly on data provided by white subjects, the system systematically underestimated the impact of racial microaggressions and systemic racism on Kylie’s mental health, threatening to compromise the therapeutic understanding of her trauma.

In this story, our doctor recognized that the AI’s blind spots mirrored all-too-familiar patterns of systemic dismissal and minimization. But what if those blind spots were replicated on a large scale? Would clinicians be able to react in time?

These concerns about bias aren’t limited to clinical settings of the future. Turing Award winner Yoshua Bengio, one of the founding fathers of modern AI, and philosophers like David Chalmers are pushing us to examine how our understanding of consciousness itself shapes the development and deployment of AI systems.

Understanding the Roots of AI Bias

Most AI systems inherit bias from training on decades of data, turning societal prejudices into mathematical certainties: a white patient’s trauma response may be described as something he or she is “processing,” while a Black patient’s similar response gets labeled “maladaptive.” Where a single clinician’s bias affects individual decisions, AI systems multiply these prejudices across millions of lives. In human terms, the AI behaves like a child who unconsciously adopts a family’s prejudices, repeating what it hears without regard for context or consequence. But AI is no child, and the implications are far-reaching.

This phenomenon reflects Freud’s idea of projection, which subsequent psychoanalytic thinkers later expanded: through projection, people unconsciously attribute unwanted aspects of themselves to others. Today’s AI developers have unconsciously embedded their biases into machines, and those biases are now spreading with unprecedented speed. My forthcoming book Projection, Projective Identification and AI Bias: Using Psychoanalysis to Build More Equitable Artificial Intelligence argues that psychoanalytic concepts like projection and projective identification—traditionally used to understand human behavior—offer new ways to recognize and address AI bias.

Psychoanalytic theory, combined with contemporary critical theory’s insights about technology’s non-neutrality, reveals how unconscious psychological processes shape AI development beyond its creators’ intentions. We see these influences in healthcare AI, in hiring algorithms, in content recommendations, in language translation, and in image generation, among other areas. Building more equitable systems will require recognizing and addressing these biases. That’s no small task. But if ignored, prejudice in automated systems could lead to unprecedented harm, some of which is already happening.

Silicon Valley, where most AI systems originated, remains overwhelmingly male. Women hold just a quarter of tech jobs there and lead fewer than one in twenty major companies. This homogeneity shapes every aspect of AI development, from initial assumptions to final code. When these systems are deployed at scale, such imbalances sway the course of human activity.

Theoretical Framework

Applied to artificial intelligence, psychoanalytic theory highlights the unconscious dynamics developing in these new technological spaces. What mental health practitioners have long observed in human relationships—the tendency to project parts of ourselves onto others—now occurs with machines. These patterns may soon surface in psychoanalytic practice as AI systems enter healthcare settings, where technology’s drive to categorize and quantify risks reducing therapeutic encounters to simplified metrics and labels. Such algorithmic interpretations have the potential to flatten the nuanced understanding of the human condition that is central to psychoanalytic work.

Contemporary critics emphasize technology’s inherent entanglement with our biases, power structures, and blind spots. Recognition alone doesn’t suffice. We need action through initiatives and legislation. As mental health professionals, we must examine which bias safeguards prove effective, how psychoanalytic insights might guide AI design, and what role these systems could play in addressing broader social challenges. The stakes are too high for passive observation.

Making these unconscious processes visible could transform how AI systems are built. With guidance from the mental health community working alongside tech developers, we can create systems that serve all of humanity rather than just their creators.

The Algorithmic Unconscious

Luca Possati calls this phenomenon the “algorithmic unconscious,” a hidden layer through which developers’ unexamined biases become code. The term captures how unconscious prejudices and preferences, typically carried forward across generations, now manifest in programming decisions. Without recognizing their own biases, programmers are building systems with the potential to amplify and perpetuate generational wounds across digital landscapes, reaching further than any individual prejudice. Analysts see such manifestations when patients unconsciously carry forward their parents’ prejudices, biases, and preferences; their migration into machines is the new development.

What troubles me is how deeply ingrained algorithmic biases can become, and how quickly that entrenchment is happening. Even well-intentioned developers, seeking to support mental health work, risk turning clinical concepts into rigid mathematical rules. Their assumptions about what constitutes “normal” or “stable” behavior could become embedded in systems that will shape the future of clinical practice.

Here’s another real and unexpected challenge: these systems push back, reshaping how we see ourselves. When an AI repeatedly flags certain behaviors as “concerning” or “unstable,” it molds our understanding of mental health through each interaction. What if clinicians begin adjusting their clinical language to accommodate algorithmic systems, gradually shifting their perspectives to match the machine’s feedback? How might that influence interactions with patients? Technical solutions, like better data and refined algorithms, matter tremendously, yet addressing these psychological dynamics requires equal attention.

Projective identification has shown us how we unconsciously pressure others to embody our projections. With AI, this dynamic spreads exponentially. Consider a hiring algorithm trained on decades of biased hiring decisions: it actively sorts candidates into categories its creators unconsciously designed, reinforcing these biases with each decision.
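To make that dynamic concrete, here is a minimal, purely illustrative sketch in Python, not drawn from any real hiring product: a classifier is trained on synthetic “historical” decisions that penalized one group, and it reproduces the penalty in its own recommendations even though group membership is never an explicit input. The data, features, and thresholds are all invented for the illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of equally qualified candidates (0 and 1).
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Simulated past human decisions: skill matters, but group 1 was penalized.
past_hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0

# The model never sees "group" directly, only a correlated proxy feature
# (standing in for zip code, school, employment gaps, and so on).
proxy = group + rng.normal(0.0, 0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, past_hired)
recommended = model.predict(X)

for g in (0, 1):
    print(f"group {g}: recommended at rate {recommended[group == g].mean():.2f}")
# The gap between the two rates is the historical bias carried forward,
# even though group membership was never an explicit input.
```

In this toy setup the proxy feature is what lets the old prejudice re-enter the model unannounced, which is precisely the sense in which the bias remains “unconscious” to the system’s builders.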

Impact and Implications

This examination of AI through a psychoanalytic lens is crucial to reducing harm and enhancing the technology. Every day, organizations deploy new AI systems. These systems, shrouded in the authority of computer code, hide their biases behind claims of objectivity and efficiency. When we hear about AI bias in image generation, we’re initially shocked—how can a computer program be biased? It’s all 0s and 1s, right?

The proposed solutions we often hear sound deceptively straightforward: diversify the data, adjust the algorithms, document the bias. These solutions remind me of patients who believe they can simply think their way out of deep-seated patterns. Change takes time, mistakes happen, and progress is iterative.
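To show what even the “document the bias” step looks like in practice, here is a minimal sketch, assuming nothing beyond NumPy and invented toy numbers: it computes the disparate-impact ratio behind the four-fifths heuristic used in U.S. employment guidance and flags the disparity for human review rather than resolving it.

```python
import numpy as np

def disparate_impact(selected: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest (the four-fifths heuristic)."""
    rates = [selected[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Toy audit data: group 0 is selected at a rate of 0.8, group 1 at 0.4.
selected = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(selected, group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Flag for human review: selection rates diverge sharply across groups.")
```

The point is the workflow, not the arithmetic: the metric documents a disparity; understanding and repairing it is the slower, iterative work described above.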

The hypothetical examples mentioned above highlight the risks. When AI systems misinterpret clinical presentations, they risk shaping how future clinicians view similar cases. Each automated judgment builds on the last, narrowing the space for growth.

Bridging Two Worlds

This dynamic is familiar to psychoanalysts. Klein called the process projective identification: we not only project our unwanted feelings onto others but also unconsciously influence the recipients to “take in” these projections. Now we’re building AI systems that do something very similar. They carry their creators’ mental frameworks and blind spots, actively molding user behavior to match these hidden assumptions. Recognizing this mirror between human and artificial cognition points the way toward better, more ethical AI design.

In researching my book, I examined how our understanding of projective identification offers new insights into AI development, suggesting ways we might create more ethically aware systems.

I propose bringing together two worlds rarely combined: psychoanalytic theory and artificial intelligence development. Drawing on fundamental psychoanalytic concepts and contemporary critical theory, I show how unconscious biases shape AI systems in ways their creators never intended. Research suggests that when operating in tandem with humans, AI can forge partnerships that improve our productive capacity. By incorporating psychoanalytic understanding into AI’s development, we can work toward creating these partnerships while remaining mindful of the opportunities and challenges that await.

I am not one for theatrics, but we stand at an inflection point: either we bring psychoanalytic insight into AI development, or we risk creating systems that perpetuate our worst unconscious biases at unprecedented scale.

References

Bengio, Y., et al. (2023). If AI becomes conscious, how will we know? Science.

Bouie, J. (2025, January 18). Mark Zuckerberg has a funny idea of what it is to be a man. The New York Times. https://www.nytimes.com/2025/01/18/opinion/zuckerberg-masculine-energy-rogan-trump.html

Chalmers, D. (2020). GPT-3 and general intelligence. Daily Nous. https://dailynous.com/2020/07/30/philosophers-gpt-3/

Costello, T. H., Rand, D. G., Lyons, B. A., Leib, M., McPhetres, J., Gray, K., & Van Bavel, J. J. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385, Article eadq1814. https://doi.org/10.1126/science.adq1814

Klein, M. (1946). Notes on some schizoid mechanisms. International Journal of Psychoanalysis, 27, 99-110.

Messina, K. E. (2019). Misogyny, projective identification, and mentalization: Psychoanalytic, social, and institutional manifestations. Routledge.

Messina, K. E. (2022). Resurgence of global populism: A psychoanalytic study of projective identification, blame-shifting and the corruption of democracy. Routledge.

Possati, L. M. (2021). The algorithmic unconscious: How psychoanalysis helps in understanding AI. Routledge.

Possati, L. M. (2022). Unconscious networks: Philosophy, psychoanalysis, and artificial intelligence. Routledge.

 
