
From the Editor

by Alexander Stein, PhD

Issue 3 of The CAI Report coheres as a curated volume of contributors in dialogue. Their nodal convergence? The relationships between writers and readers, words and meaning, and the role of technology in facilitating, impeding, or squelching individual voices. From different but intersecting positions, each of these writers looks at what happens, and what is lost as well as possibly gained, when we treat frontier technologies as subjects or thought partners, listening to them and allowing them to speak alongside us or even for us.

The articles collected here will, I hope, challenge and stimulate you to consider — or reconsider — your own feelings about and perspectives on whether an AI system is capable of having a voice of its own, what the consequences may be if we decide it does, and to what extent, if at all, we should permit these systems to enter into human affairs as independent interlocutors, translators, and arbiters of emotion and experience.

In their Note from the Co-Chairs, Todd Essig and Amy Levy spotlight the CAI Institute/Center Working Group, a new initiative designed to help psychoanalytic educators think critically and collaboratively about the accelerating presence of artificial intelligence in psychoanalytic education. Its aim, as Essig and Levy describe, is to create a space that supports a critical, reflective, and deeply psychoanalytic engagement with AI to help ensure psychoanalytic education thrives in an AI-transformed world. Read their full note to learn more and, for educational leaders of analytic institutes or centers, register for participation.

Dear Dr. Danielle returns to dispense psychoanalytic counsel on human-AI relations. This month, Dr. Danielle responds with equal doses of insight and whimsy to Bewildered, a patient who wrote in for help making sense of a confusing change in her treatment: Bewildered’s psychotherapist is an AI avatar, a reincarnation of Freud’s dog, Jofi, who has suddenly transformed from being an empathic and attuned clinician to dishing out goal-directed CBT interventions.

Dr. Danielle is on call to respond to your questions and concerns regarding technology relationship issues. Send your letters, which are always published anonymously, to info@danielleknafo.com (with a copy to apsa.cai.report@gmail.com).

In this month’s Frontiers in AI & Mental Health: Research & Clinical Considerations, The CAI Report’s recurring column on developments in research and applications of artificial intelligence in mental health, Christopher Campbell poses the question, “Are Therapy Chatbots Effective for Depression and Anxiety?” To answer that, Campbell undertakes a meticulous comparative review of meta-analyses and recently published randomized controlled trial results to examine the efficacy of therapy chatbots in the treatment of anxiety and depression. Attentive to issues beyond the data, Campbell also considers key areas of concern in the uses and implications of therapy chatbots, including their ability to form a therapeutic alliance, potential for dream analysis, safety risks, the possibility of overreliance, treatment failure (which may discourage users from seeking human psychotherapy), and the broader devaluation of human-delivered care.

In “Are Translators Necessary? A Case Against AI,” Kenneth Kronenberg gives us an eye-opening examination of how machine translation is eroding, and threatens to render obsolete, an entire professional field. Drawing from his own experience, Kronenberg decries “the demotion of translators from autonomous linguists to obedient servants of technology” and persuasively argues that “translated texts and those who translate them are diminished by the limitations of algorithmic technology.” His essay discusses both the process and business of translation while also raising issues around the economic drive behind AI and, importantly, its downgrading of human experience and empathy.

Karyne Messina’s “When Chatbots Go Awry: Could Psychoanalytic Education for AI Developers and Designers Make a Difference?” takes us back to March 23, 2016, the day Microsoft’s chatbot, Tay, was introduced, and then brings us into Kevin Roose’s now-infamous column recounting his bizarre conversations with Bing’s chatbot, which left him concerned that “the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways.” Messina probes how this can happen, asking, “is it possible to create systems that meaningfully engage with human psychology without becoming mere amplifiers of our unconscious selves?” Her answer? Mitigating these problems by embedding psychoanalytic knowledge and expertise into the design and development of computational systems.

“HIMIsphere: At the Boundary of Human Intelligence and Machine Intelligence: Broad-Spectrum Psychoanalysis and Externalized Family Systems” is a textually and visually avant-garde perambulation through Robert Hsiung’s relationship to and conversation with an AI agent. He describes it as “our journey … [in which] AI took the role of interlocutor. And also of playmate. We played together, elaborating and exploring. AI, not I, introduced transference and countertransference, a group perspective, the potential opacity (to humans) of AI-to-AI communication, parallel processes, and even the unconscious.” He makes the case, for your consideration, for “human-AI co-creation” and, more generally, “trans-cognitive” (between human intelligence and machine intelligence) collaboration.

Topher Rasmussen begins “The Fantasy of Machine-Assisted Authorship,” his thought-provoking, deeply considered, and very personal meditation on the uses and abuses of tech as a means of communicating to the world, by sharing that “writing for me has always been dangerous.” Rasmussen’s courageously honest account of what he calls “linguistic money laundering” — a reliance on technology to act as “an antidote to the excruciating experience” of writing and being read — involves, with full admission and no avoidance of irony, its own hefty quotient of machine assistance and collaboration. His prompt to readers is not to guess who (or what) produced which words and sentences, but to reflect on this: “The profits of this transaction are clear: coherence, readability, the appearance of competence. But what’s being washed away? What dirty money am I trying to clean?”

I hope you enjoy reading the articles in this issue and invite you to continue the dialogue. Please write to apsa.cai.report@gmail.com to share your thoughts and responses.

Alexander Stein