From the Editor
by Alexander Stein, PhD

Decision Tree reversed by Rens Dimmendaal & Johann Siemens* | rensdimmendaal.com/ | www.unsplash.com/photos/EPy0gBJzzZU | betterimagesofai.org/ | https://creativecommons.org/licenses/by/4.0/
Welcome to the 5th Issue of The CAI Report.
In their Note from the Co-Chairs, Todd Essig and Amy Levy spotlight the CAI Workshop project. The aim of the project, led by psychiatrist, psychoanalyst, and CAI member Fred Gioia, MD, is to produce educational content at the intersection of psychoanalysis and AI, to help people, both in and outside the field, study and explore the implications of AI for the human psyche. Read Todd and Amy’s Note to learn more about the Workshop project, as well as details – and registration info – for the upcoming free online workshop, “Humans, Minds, and Brains: A Trialogue at the Edge of AI Possibility,” to be held on Sunday, July 20th from 12–2 PM Eastern / 9–11 AM Pacific.
Dear Dr. Danielle, our regular psychoanalytic advice column on all things involving relationships between humans and AI, is back! This time, the doctor responds to a letter from Jofi the AI/Dog Analyst. What’s an AI dog analyst, you ask? Read the letter, and Dr. Danielle’s engaging and insightful reply, to find out.
Dr. Danielle is on-call to respond to your questions and concerns regarding technology relationship issues. Send your letters, as always published anonymously, to [email protected] (with a copy to [email protected]).
Robert Hsiung, whose piece on Broad-Spectrum Psychoanalysis and Externalized Family Systems appeared in Issue 3, returns with another installment of his exploration at the boundary of human intelligence and machine intelligence. Here, in “The AI Pandemic: One Psychoanalyst’s Loss Will Be Another Psychoanalyst’s Gain,” he engages in ‘dialogue’ with ChatGPT to examine some of the intense anxieties many people are feeling about AI’s encroachment into the social fabric, and to see what might exist between anxious people’s tendencies to catastrophize and oversimplify.
Extending the theme of macrosocial anxieties around the unstoppable incursions of computational systems into our daily lives, Karyne Messina brings us an examination of the AI 2027 Report, a research study setting out forecasts on the future of AI. As Messina writes, “the predictions in the AI 2027 report read like science fiction, but the unease they provoke is real.”
In early May, Todd Essig and Amy Levy co-created and co-presented an American Psychoanalytic Association-sponsored online symposium titled “Artificial Intelligence and our Psychoanalytic Future.” The symposium was designed to help attendees further develop and triangulate perspectives on the emerging age of AI. A central and novel feature of the symposium’s online-only format was inviting attendees to post questions and comments in the Q&A chat window, which Todd and Amy would answer after the event for later publication in The CAI Report. The rich and fascinating result — “After the Symposium: Clinical, Ethical, and Existential Replies to the Q&A” — is in this issue.
A recent meeting of the CAI included a lively discussion catalyzed by a New York Times article on death avatars. Marsha Hewitt, whom readers will remember from her thought-provoking article in Issue 4 of The CAI Report, contributed a brief, learned disquisition on historical rituals and practices for communicating with the dead, including nineteenth-century séance spiritualism. I was captivated by her knowledge and perspectives and invited her to elaborate on those ideas in a piece for The Report. She accepted, and in “You Can’t Console a Video Clip: AI and the Metaphysics of Electronic Presence,” Hewitt leads us through the complex history of American spiritualism and its importance as a precedent to today’s “increasing plethora of technological ways of not only preserving relationships with the dead, but of managing, blunting or outright denying the pain of loss.” “Electronic immortality,” she writes, “is the contemporary technological version of managing mourning.”
A special note about the artwork accompanying Professor Hewitt’s article. The image is titled “Cremation” and was created by the renowned Polish artist and satirist Paweł Kuczyński, who uses surreal and imaginative illustrations to comment on social, political, and environmental issues. I have followed and admired Kuczyński’s work on social media for a long time and, since The CAI Report launched in January, have thought that it would be wonderful to feature his work. I contacted him and he generously granted permission to reproduce this image.
I consider it central to The CAI Report’s mission to elevate and celebrate what it is to be human in this AI age, and I hope to continue to do so not only through the words, ideas, expertise, and insights of our contributors, but also in the images (and, looking ahead, videos) we publish. This is consistent with the perspectives on original digital art I set out in my editor’s introduction to Issue 4 in May, where I introduced Better Images of AI, a non-profit collaboration of individuals, non-profits, and academic institutions that is creating a new repository of “images that more realistically portray the technology and the people behind it.” The image accompanying this post is from that repository and is explained below.
As also mentioned in Issue 4, please consider ordering books from Bookshop.org — a socially responsible online book marketplace alternative and privately-held certified B-Corp launched to support local, independent bookstores.
I invite you to help expand The CAI Report’s reach and audience by forwarding it to your social and professional networks and sharing it with other interested readers. The Report features psychoanalytically informed and inflected perspectives and commentary on the human and social dimensions of AI. But I’m keen to curate conversations that are diverse and multidisciplinary; contributors do not need to be psychoanalysts or member clinicians of ApsA. If you or someone you know is interested in writing for a future issue, please submit your manuscript (or idea for an article) to me at [email protected].
Finally, a reminder that The CAI Report landing page now has a link where you can enter your email address to sign up to get updates about future articles, events, and resources from the Council on Artificial Intelligence. Please do!
=======================
* Decision Tree reversed by Rens Dimmendaal & Johann Siemens. A window of three images. On the right is a photo of a big tree in a green field of grass under a bright blue sky. The two on the left are simplifications created with a decision tree algorithm. The work illustrates a popular type of machine learning model: the decision tree. Decision trees work by splitting the population into ever smaller segments. I try to give people an intuitive understanding of the algorithm. I also want to show that models are simplifications of reality, but can still be useful, or in this case visually pleasing. To create this I trained a model to predict pixel colour values, based on an original photograph of a tree.
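The splitting idea the artist describes can be sketched in code. What follows is a minimal, purely illustrative Python sketch, not the artist's actual pipeline (which trained a model on a real photograph): it approximates a tiny, made-up grayscale "photo" by recursively splitting it into ever smaller rectangular segments and painting each segment with its mean pixel value, the way a shallow decision tree simplifies an image.

```python
# Illustrative sketch only: approximate an image by recursive splitting,
# in the spirit of a decision tree. The 4x4 "photo" below is invented data.

def approximate(image, top, left, height, width, depth, out):
    """Fill out[top:top+height][left:left+width] with segment means."""
    if depth == 0 or (height == 1 and width == 1):
        # Leaf: paint the whole segment with its mean pixel value.
        total = sum(image[r][c]
                    for r in range(top, top + height)
                    for c in range(left, left + width))
        mean = total / (height * width)
        for r in range(top, top + height):
            for c in range(left, left + width):
                out[r][c] = mean
        return
    # Split the longer axis in half. (A real decision tree would instead
    # learn the split that best reduces colour-prediction error.)
    if height >= width:
        half = height // 2
        approximate(image, top, left, half, width, depth - 1, out)
        approximate(image, top + half, left, height - half, width, depth - 1, out)
    else:
        half = width // 2
        approximate(image, top, left, height, half, depth - 1, out)
        approximate(image, top, left + half, height, width - half, depth - 1, out)

# Toy 4x4 grayscale "photo": bright sky above, dark ground below.
photo = [
    [200, 200, 210, 220],
    [200, 210, 220, 230],
    [ 40,  50,  60,  70],
    [ 30,  40,  50,  60],
]

coarse = [[0] * 4 for _ in range(4)]
approximate(photo, 0, 0, 4, 4, depth=1, out=coarse)  # one split: two segments
```

With `depth=1` the image collapses into just two segments, a bright "sky" band and a dark "ground" band: a crude simplification of reality, yet still recognisable, which is exactly the point the artwork makes. Increasing `depth` yields progressively finer approximations.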