
Dear Dr. Danielle

A psychoanalyst answers your questions about human-AI relations


Dear Dr. Danielle,

I was wondering whether you think it is possible that AI may become the way it’s projected to be because of our own projections onto it. And if so, does that mean it could be avoided by addressing our own paranoia?

Sincerely,
an interested high school student


Dear Reader,

This is such a smart question—you’re really thinking deeply here.

You’re absolutely onto something. Our relationship with AI is shaped by what we project onto it. We project our fears of being replaced, our fantasies of perfect understanding, our worries about losing control. In many ways, AI becomes like a mirror reflecting back our own anxieties and hopes about what it means to be human.

And yes, these projections matter. When we treat AI as if it’s truly conscious or all-knowing, we start changing how we act—both toward the technology and toward each other. If we believe machines can replace real human connection, we might stop valuing it as much. If we’re convinced AI will become dangerous, we might build systems that actually become problematic because of our fears.

But here’s the tricky part: not everything we worry about is just in our heads. Some concerns about AI are real—like who controls it, how it might change jobs, or how it could be misused. The hard work is figuring out which fears are our own paranoia and which are legitimate things to pay attention to.

In psychoanalysis, we learn that we can’t just “fix” our paranoia and make it go away. Our fears come from deep places and serve a purpose—they’re trying to protect us from something. But we can become more aware of them. We can ask ourselves: “Is this fear about AI, or is it really about something else in my life? Am I seeing clearly, or am I letting my imagination run wild?”

Can addressing our projections change AI’s trajectory? To some extent, yes. Self-awareness in designers, policymakers, and users can lead to better choices. But we also need concrete action: thoughtful regulation, ethical guidelines, and ongoing dialogue. So maybe the answer isn’t that we can prevent AI from becoming dangerous just by addressing our projections. But we can try to stay aware and intentional about how we’re building and using these tools, rather than letting our unexamined fears drive the process.

Thank you for asking such a thoughtful question. The world needs more people thinking this carefully.

Warmly,

Dr. Danielle

=============================

Dr. Danielle is Danielle Knafo, PhD

If you’d like to submit a letter seeking Dr. Danielle’s advice on anything around human-AI relations, please write to [email protected] with a copy to [email protected].


Alexander Stein