From the Editor
by Alexander Stein, PhD
Issue 9 | January 2026

The Draughtsman, an automaton by Pierre Jaquet-Droz, 1773 | Neuchâtel, Switzerland
The CAI Report launched in January 2025. This issue—No. 9—marks our first anniversary.
An anniversary, coinciding with the New Year, might invite reflections or predictions. Instead, a brief assessment—a state of The Report report—and a look ahead.
While The Report’s primary aim has been, and remains, to foreground psychoanalytic thinking about and approaches to issues relating to frontier AI—as I’d set out in my extended Editor’s Introduction in Issue 1—I’ve tried to curate issues representing a spectrum of diverse multidisciplinary expertise and views. The eight issues we’ve published over the past 12 months have included work from contributors who, while psychoanalytically minded and psychologically literate, come from an array of professions and disciplines, not only psychoanalysts or mental health clinicians.
But words are not the sole focus. AI’s near omnipresence in the social fabric is the consequence of many factors. One is a form of visual propaganda flooding the zone with sci-fi-inspired and anthropomorphized images designed to influence public perceptions of AI’s capabilities. Another involves technologists’ assertions that computational systems are capable of generative thought, creativity, and art-making that equals, or can supplant, human artistry.
I’ve sought ways to counter this in The Report. AI systems can generate and manipulate imagery. But art is an expression of human creativity and imagination, a rendering of artists’ emotions, lived embodied experience, trauma, and interiority. In keeping with the aim of emphasizing the humanistic aspects of the interface between people and technology, I select imagery, photography, and original digital art for use in The Report that visually represents, symbolizes, or expresses something of thematic relevance to a particular article. Achieving this has included presenting original works by living artists whom I contact to ask permission to publish with full credit, reproducing canonical photographs and art from the public domain, and drawing from the image bank of Better Images of AI, a non-profit collaboration of individuals and non-profit and academic institutions whose work amplifies “images that more realistically portray the technology and the people behind it” (see BIoAI’s Guide for Users and Creators to learn more about their guiding principles, ethos, and strategies).
Another meaningful feature of The Report introduced this year involves Bookshop.org — an online book marketplace and privately held certified B-Corp launched to support local, independent bookstores — which we recommend as a socially responsible alternative for purchasing books (more about the mission and organization here; an interview with Bookshop’s Founder and CEO Andy Hunter here).
Looking to the future, there are several features and initiatives to be introduced. These include interviews with notable or influential figures, change-makers, and expert voices in the AI space (if you know somebody who would be a compelling interviewee, please send your suggestion to [email protected]).
The impact of gen-AI applications and services on mental health — including diagnosis and treatment — education, science and research, childhood and adolescent development, sex and relationships, grief, government and the environment, society and politics, and a plethora of issues encompassing how we think, love, live, and relate is expanding at muzzle velocity. Be on the lookout for entries in The Report that dive into each of these topic areas.
Picking up on an initiative established last year but not yet implemented, we’re looking to roll out a series on problems and solutions in Impact Zones — areas where frontier AI systems and services are affecting, or will affect, people and society at scale. These may include children, adolescents, the elderly, marginalized social groups, mental health, ethics, law, cognitive liberty, business and work, the environment, and a host of other industrial, social, and economic areas.
Arts and culture are always ahead of the leading edge of technological innovation and social change — as the great musician and social activist Nina Simone once declared, “An artist’s duty as far as I’m concerned is to reflect the times.” You can expect pieces in future issues of The Report offering psychoanalytically inflected commentary on works of film, television, theater, art, music, and literature in which AI and its effects are the dominant subject.
Over the past year, CAI Report readership has expanded globally, and all indicators suggest we’re presenting content that’s striking a chord. If you’re a regular CAI Report reader, thank you; and if you’re just encountering The Report now, I hope you’ll find the content worth your valuable time. While there is no comment feature, reader responses and feedback are welcome; feel free to be in touch. And if you or someone you know is interested in writing for a future issue, please submit your manuscript (or idea for an article) to [email protected].
Now, here’s what you can look forward to in this issue:
Dear Dr. Danielle, our regular human-AI relationship advice columnist, returns to respond to a precociously bright high-school student who wrote in with a provocative and psychologically astute question.
As always, consider submitting your own letter to Dr. Danielle with a question about anything that excites, entices, worries, or confounds you regarding human-AI relations. If your letter is selected for publication, your identity will of course be completely anonymized. Write to [email protected] with a copy to [email protected].
That artists are ahead of the curve is not just an idea. Adrienne Rich (May 16, 1929 – March 27, 2012) was an American poet, essayist, and feminist, hailed as “a poet of towering reputation” and acknowledged as “one of the most widely read and influential poets of the second half of the 20th century.” In 1961, Rich published a poem succinctly titled “Artificial Intelligence.” Rodney Brooks, the renowned roboticist, emeritus Professor of Robotics at MIT, and former Director of the MIT Artificial Intelligence Laboratory, offered extensive and erudite techno-historical commentary on Rich’s poem in a blog post published on his website in November 2025, accessible here. In brief, Brooks notes that while Rich was not being prophetic or describing something nonexistent (computer chess programs had already been introduced by 1961), her poem “is [on] a completely different level of prescience and understanding of AI technology which is still valid today. [She] expresses pity on AI beings that will one day be forced to write poetry for us humans, having been trained on both the best and worst of all human written poetry, with no real discernment for what is good and what is bad. This is exactly what people are doing with large language models of today.”
In May 2022, Nicolle Zapien, PhD, a San Francisco-based psychoanalyst (and a co-chair of APsA’s Committee on Public Information), created and hosted a podcast titled Technology and the Mind, dedicated to exploring contemporary psychoanalytic ideas applied to consumer technology use cases. Each month, across 24 episodes in three seasons spread over three years, Dr. Zapien interviewed psychoanalysts, philosophers, educators, technologists, and researchers about the current and future impact of technology on our minds, our relationships, and society from a contemporary psychoanalytic perspective. The final episode—“Where are we now and where might we go from here?” [Season 3, Episode 5]—summarized the highlights of all the interviews and conversations and looked at how technology evolved across the lifespan of the show. I think her concluding assessments and thoughts, including ways to engage these issues more deeply in the future, are brilliant and thought-provoking, and they deserve a larger and perhaps different audience. At my invitation, she adapted the episode into an article, which I’m excited to publish in this issue. Not incidentally, the final episode, like all the others, is a must-listen, and I encourage everyone to tune in.
Frank J. Oswald served on the faculty of Columbia University’s master’s program in Strategic Communication for 15 years, where he created and taught “Ethical Decision Making for Communicators”; he is now an independent communications consultant, writer, and ethics educator. In “I Am Wally: AI as a Thinking Space, Not an Emotional Companion—Lessons from My Dinner with André,” Oswald draws on his areas of expertise, taking Louis Malle’s iconic 1981 film starring Wallace Shawn and André Gregory as a point of departure for a meditation on chat agents not as intimate partners but as non-judgmental interlocutors that foster the freedom to think aloud and be uncertain.
The CAI’s co-chair, Todd Essig, PhD, recently delivered a talk as part of a Harvard Faculty Seminar on ‘Knowledge Production and the University in the Age of AI.’ His article in this issue of The Report, “Will love for learning matter anymore? Understanding the complex psychology of AI relationality,” is adapted from that lecture and addresses common challenges that generative AI, specifically chatbots, presents to professors and psychoanalysts alike. Here, Essig probes what current and anticipated technological developments will mean for knowledge production and university life.
Extending ideas first presented to CAI Report readers in the March 2025 issue, Karyne Messina, Ed.D. continues her exploration of bias in this issue. In “The Digital Looking-Glass: Psychoanalysis, AI Consciousness, and the Ethical Imperative of Digital Mentalization,” Messina considers AI bias not as a technical malfunction but as a “socio-technical mirroring” in which algorithmic systems reflect the unexamined biases, cultural histories, and psychological projections embedded in the data from which they learn. She also introduces the concept of “Digital Mentalization”—a psychoanalytic-ethical framework that calls on technologists, policymakers, and users to recognize AI outputs as reflections of human mental life rather than autonomous cognition, thereby maintaining clear boundaries between simulation and subjectivity while preserving human accountability.
The issue concludes with a new feature of The CAI Report — Notable New Books — in which we spotlight the publication of books relating to psychoanalysis and artificial intelligence. This inaugural entry includes books by two members of the CAI. One is “The New Other: Alien Intelligence and the Innovation Drive” by Amy Levy, PsyD, from Karnac Books. The other is “Using Psychoanalysis to Understand and Address AI Bias: Refractions in the Digital Mirror” by Karyne Messina, EdD, forthcoming from Routledge in February 2026.
Thank you for reading.
================================
Another reminder that The CAI Report landing page has a link where you can enter your email address to sign up to get updates about new issues of The CAI Report, future CAI events, workshops, videos, resources about artificial intelligence and psychoanalysis, and more.



