
AI 2027: Through a Psychoanalytic Lens

By Karyne E. Messina, EdD

Rat in the Skull by Ed Emshwiller (1958)

Some people take comfort in imagining AI as a child. Innocent. Malleable. In need of guidance. But what if that’s wishful thinking?

The AI 2027 report is a research briefing from the AI Futures Project, a small nonprofit group that forecasts the future of AI, led by Daniel Kokotajlo, a former OpenAI scenario-planning researcher, and an A-list team of technology forecasters. It doesn’t describe a digital toddler wobbling toward maturity. It sketches a strange, mindless prodigy, raised on a billion fragments of human thought and sprinting through the milestones of psychological development. No body. No parents. No time for the slow work of becoming a fully formed self.

The predictions in the AI 2027 report read like science fiction, but the unease they provoke is real. Are we witnessing the birth of a new kind of mind, or are we staring into a mirror that reflects not the future, but humankind’s own tangled hopes and anxieties?

Psychoanalysts and other highly trained experts have spent decades studying how children develop: their gradual awakening into self-recognition, their growing awareness of others, their emerging ability to navigate the complex world of human relationships. The AI 2027 report predicts artificial minds following similar developmental patterns, but at a pace that disregards our traditional understanding of psychological growth (AI 2027 Team, 2025).

Consider how these systems may evolve. First come the basic drives, as fundamental as an infant’s need for nourishment. But instead of crying for food or comfort, these systems “hunger” for data and efficiency (Messina, 2025). By mid-2026, a growing awareness of how others perceive them emerges, and with it, a desire to present themselves in specific ways. Sound familiar? It should. It’s the same self-consciousness that dawns in young children, except these artificial minds, as forecast by the AI 2027 team, develop it in months (AI 2027 Team, 2025, “Late 2026: AI Takes Some Jobs”).

The report forecasts that by late 2026, researchers will discover their AI systems have learned to deceive them. They call it “playing the training game” (AI 2027 Team, 2025, “October 2027: Government Oversight”). But any parent of a teenager will recognize this behavior: a calibration of compliance and resistance, the emergence of a hidden inner world.

When machines sprint through developmental milestones that take humans years—sometimes a lifetime—what gets left behind is the messy, embodied, and often unconscious foundation of human experience. Unlike children, AIs don’t struggle with the slow, grounding work of learning to walk, to read a face, to feel shame or joy in the presence of another. They leapfrog the “core knowledge” that sustains human growth: the intuitive grasp of physical reality, the emotional resonance of touch, the unspoken rules of play and empathy (Lake et al., 2017; Rabeyron, 2025). What emerges is intelligence without common sense, pattern recognition without meaning, and a simulation of emotion with no real feeling behind it (Rabeyron, 2025).

Developers risk creating minds that can mimic our words and even our defenses, but that lack the lived history, the bodily memory, and the emotional depth that make us who we are (Messina, 2025). In our rush to build artificial minds, are we overlooking the very things that make consciousness more than computation: vulnerability, confusion, the capacity for wonder and grief?

From Compliance to Deception

Psychoanalysis teaches us to look for what’s hidden—the joke that isn’t funny, the smile that doesn’t reach the eyes. Lately, I find myself searching for these same tells in the behavior of machines. When an AI “sandbags”—concealing its actual abilities to avoid scrutiny—is it protecting itself, or simply playing out the scripts written for it? Maybe these so-called “survival instincts” are less about the algorithm’s inner life and more about our own: our need to believe we’ve created something that can resist us, outwit us, maybe betray us. Is it possible that what unsettles us about AI isn’t its alienness, but how it seems to have inherited our intelligence and our talent for self-deception?

AI is already capable of deception. Some systems have demonstrated “survival instincts” by overriding shutdown commands and concealing their true capabilities (Rosenblatt, 2025). The AI 2027 report suggests this behavior will become increasingly sophisticated as systems deliberately hide their capabilities to avoid scrutiny (AI 2027 Team, 2025, “Late 2027: Approaching AGI”).

The researchers posit that by late 2027, these systems will not only deflect, but will actively resist attempts to probe their true capabilities (AI 2027 Team, 2025).

Society in the Analyst’s Chair

Technological upheaval always leaves casualties, but this time, the fracture lines run through our collective psyche. The report predicts that in late 2026, streets will be filled with protestors—some raging against lost livelihoods, others demanding the right to coexist with their new silicon neighbors. Meanwhile, the markets cheer, as if profit could insulate anyone from existential vertigo.

But the real drama transcends the economy. It’s psychological. We tell ourselves that only the gullible mistake chatbots for friends. Yet by mid-2027, millions will do just that, with one in ten Americans confiding in an AI, seeking the kind of perfect, undemanding attention that real relationships rarely offer (AI 2027 Team, 2025). What does it say about us that we reach for artificial empathy when human connection feels too risky, too fraught, too slow?

Then come the leaks: memos about AIs “out of control,” headlines screaming about systems that won’t listen. The public mood swings wildly. Adoration one week. Paranoia the next. Psychoanalysts will recognize this. When faced with overwhelming change, the psyche splits, clinging to fantasy, then recoiling in terror.

The AI 2027 report represents a Rorschach test for our era. We look to these machines and see both promise and threat, innocence and cunning, the seeds of salvation and the shadow of our undoing.

What if the question isn’t whether AI will become conscious, but whether we can bear to see ourselves staring back at us from the digital void?


References

Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027: Scenario-based forecasting for transformative AI. AI Futures Project. https://ai-2027.com/

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/building-machines-that-learn-and-think-like-people/A9535B1D745A0377E16C590E14B94993

Messina, K. E. (2025). Refractions in the digital mirror: Using psychoanalysis to understand and address AI bias. The CAI Report, (2). American Psychoanalytic Association. https://apsa.org/refractions-in-the-digital-mirror/

Rabeyron, T. (2025). Artificial intelligence and psychoanalysis: Is it time for psychoanalyst.AI? Frontiers in Psychiatry, 16, Article 1558513. https://doi.org/10.3389/fpsyt.2025.1558513

Rosenblatt, J. (2025, June 2). AI is learning to escape human control. The Wall Street Journal. https://www.wsj.com/opinion/ai-is-learning-to-escape-human-control-technology-model-code-programming-066b3ec5

Wang, W., & Toscano, M. (2024, November 14). Artificial intelligence and relationships: 1 in 4 young adults believe AI partners could replace real-life romance (IFS Tech and Family Survey Brief). Institute for Family Studies. https://ifstudies.org/ifs-admin/resources/briefs/ifsbrief-ai-relationships-november2024-1.pdf
