From the Editor

by Alexander Stein, PhD

Welcome to Issue 2 of The CAI Report!

Building on the thought-provoking content of the inaugural issue, we’re excited to offer a rich and varied array of articles.

In their Note from the Co-Chairs, Todd Essig and Amy Levy spotlight the CAI Survey Project, led by CAI member Leora Trub, PhD, and her team of research assistants in the Pace University Digital Media and Psychology Lab, which she directs. The survey, as Todd and Amy explain, is designed to take a snapshot of current attitudes toward and uses of artificial intelligence. Please participate by completing the survey.

Useful Reads — our recurring feature offering a brief overview, review, or response to a recent article, book, podcast, film, or theatre piece at the intersection of psychoanalysis and AI — returns with a fascinating entry. This one is by Robert C. Hsiung, MD, who responds to Victoria Warmerdam’s haunting 2023 film “Ik ben geen robot” (“I’m Not a Robot”), which just won the 2025 Academy Award for Best Live Action Short. Bob asks, “Have you ever wondered what it might be like to be an AI?” I invite you to read his piece, screen the film (there’s a link at the end of his post), and contemplate his prompt.

We’re thrilled to introduce a new segment: Dear Dr. Danielle. It’s an advice column with a twist: letter writers ask Dr. Danielle — the CAI’s own Danielle Knafo, PhD — for help understanding and navigating relationship issues with their technology. The first letter comes from a grieving widow who is coping with the death of her husband by engaging with bots, and who now feels guilty (and ashamed to admit) that the machines genuinely satisfy her needs for comfort and companionship (and more).

We hope the exchange between Dr. Danielle and this month’s letter writer, Guilty and Ashamed, inspires you to write in with your own questions or concerns. Letter writers remain anonymous, so let yourself be creative and unrestrained in asking for advice on anything around human-AI relations, which Dr. Danielle will dispense as a tech-savvy psychoanalyst. Danielle is eager to hear from you! Send your letters to info@danielleknafo.com (with a copy to apsa.cai.report@gmail.com).

Christopher Campbell returns with another entry in Frontiers in AI & Mental Health: Research & Clinical Considerations, a recurring column that focuses on developments in research and applications of artificial intelligence in mental health. This month’s article, Dr. ChatGPT? Assessing the Mental Health Care Capabilities of Chatbots, probes whether artificial intelligence can accurately interpret clinical scenarios and provide appropriate treatment recommendations, or even eventually function as a mental health care clinician. In this riveting and important piece, Christopher rigorously examines the complex issues these questions raise, drawing on cutting-edge research in AI-assisted medical training and psychodynamic diagnoses in psychiatry.

In another new segment of The Report, Artificial Intelligence & Psychoanalytic Research, Ilana Gratch, a PhD candidate in clinical psychology at Teachers College, delivers a fascinating, deeply researched exploration of multidisciplinary work bridging machine learning and psychoanalysis.

Todd Essig, founder and co-chair of the Council on Artificial Intelligence, reports From the Thick of It: Fathom’s Ashby Workshops, recently convened in Middleburg, Virginia. The Ashby Workshops were organized around the idea that “the whole of society has a stake in how artificial intelligence pans out, and so should have a say in the outcome,” and were designed to generate ideas and develop practical suggestions to guide policy and governance strategies. Todd’s report of his experience as an attendee and contributor, the only psychoanalyst at an invite-only event for 180 leaders from government, business, academia, and civil society, makes for eye-opening reading.

Our issue concludes with Refractions in the Digital Mirror—Using Psychoanalysis to Understand and Address AI Bias by Karyne Messina. Karyne unflinchingly unearths the roots of AI bias, explaining how AI systems encode the biases of decades of training data and, in turn, transform societal prejudices into algorithmic outputs. Not limiting herself to the problems and risks that computational bias introduces into the social fabric, she sets out a practical framework for creating interdisciplinary partnerships that incorporate psychoanalytic principles to develop more equitable and ethical AI.

Alexander Stein, PhD
CAI Report Editor