From the Editor
by Alexander Stein, PhD
Issue 6 | September 2025
In Memoriam Robert C Hsiung, MD
7 October 1958 – 31 August 2025
With a heavy heart, this issue of The CAI Report is dedicated to the memory of Robert C Hsiung, MD (aka Dr Bob).
This issue includes the next, and sadly the last, installment of the series Dr Hsiung created in dialogue with ChatGPT, At the Boundary of Human Intelligence and Machine Intelligence. This entry is titled Having Faith (± Τεχνητὲ Νοῦ ἐλέησον [artificial mind, have mercy]).
Tragically, Robert C Hsiung, MD died unexpectedly on August 31st. See this obituary for more about his life.
My conversations with Dr Bob, which is how he preferred to be called, expanded in earnest beyond brief interactions in our monthly CAI meetings in the late fall/early winter of 2024, as I began preparations for launching the inaugural issue of The CAI Report. Dr Bob became energized and involved in a CAI community Slack discussion thread I’d initiated around the 2023 Dutch film “I’m Not A Robot” (“Ik ben geen robot”), directed by Victoria Warmerdam, which went on to win an Academy Award.
Dr Bob posed a question stimulated by the film, but which captivated him on a deep personal level: “have you ever wondered what it might be like to be an AI?” I thought considering that question in the context of the film would make an engaging article, and so invited him to take that as a springboard into writing a short piece for the “Useful Reads” segment of The CAI Report. His first version was 20 words. I asked him to expand that—not merely more words, but more of his thoughts and perspectives. He responded, “sorry, i try to be concise. thank you for the invitation to take up more space. maybe it’s both not wanting to give too much away about the film and also not wanting to give too much away about where i stand / myself.”
This began what would become characteristic of our exchanges: conversations around his ideas and his writing, and also around his relationship to communicating and revealing himself, and how that relationship influenced how he wrote, not just what he wrote.
In response to another draft manuscript Dr Bob shared with me, I wrote to him “You are often opaque in the draft pieces you’ve brought to The Report. I’ve sought to foreground YOU more, not to the exclusion of the ideas you’re writing about, but to provide readers with more of the person—analyst, human—who’s thinking and writing about the ideas. I’m going to continue to encourage that.”
To which Dr Bob responded: “Maybe I’m being like AI, a black box. Maybe that’s one reason I feel chatGPT is simpatico? I think I get a little more personal, at least as far as how I see things / what I believe, in the next piece. I have defenses and it takes time to get to know me?”
Shortly after Dr Bob’s brief (but as he inimitably put it “less short”) piece on “I’m Not A Robot” was published in Issue 2, he sent me a Slack message saying:
<<i think some interesting ideas emerged in a conversation i had with chatGPT. do you think readers would be interested in seeing the back-and-forth? or just the ideas? i think seeing chatGPT “in action” / in the role of interlocutor would also be interesting, but 1. that would probably be longer 2. possibly everybody’s already having similar experiences themselves?>>
That conversation with ChatGPT led to what would become the first entry in a series, overarchingly titled “At the Boundary of Human Intelligence and Machine Intelligence,” that I’ve since published in The Report, in which he began to incrementally and non-linearly explore a range of questions and issues that are (were) intimately important to him. He described that first conversation with an AI agent as “our journey … [in which] AI took the role of interlocutor. And also of playmate. We played together, elaborating and exploring. AI, not I, introduced transference and countertransference, a group perspective, the potential opacity (to humans) of AI-to-AI communication, parallel processes, and even the unconscious.”
The next installment of his conversational exploration appeared in the July issue. There, Dr Bob engaged in a dialogue with ChatGPT on anxieties about AI’s encroachment into the social fabric, exploring what might exist in-between anxious people’s tendencies to catastrophize and oversimplify.
In this issue, we are all privileged to read his most recent piece (or in any case the last he sent to me for consideration for publication in The CAI Report). I’m supposing Bob would insistently tell me, were he here, that this is not yet the final version. While, tragically, there is now no possibility of revisions or further discussions with him, the piece stands on its own as-is. I hope he would agree. Still, our discussions in the lead-up to publication were always an energetic volley. Among other things, I would want to ask him to say more about his relationship to JS Bach’s setting of the Kyrie eleison, the Christian liturgical invocation for mercy, and its connections to his exchange with ChatGPT on AI, rendered in Greek (± Τεχνητὲ Νοῦ ἐλέησον) rather than in Chinese, given that his Chinese heritage and identity were, as I understood, deeply important to him. I also assume that his use of the mathematical plus/minus sign, which depending on the context can indicate both positive and negative solutions for a given value or represent a range of error or uncertainty, held some condensed, allusive, and possibly over-determined but doubtless purposeful significance, and I’d have prompted him to explain that.
I’m longing for his answers and responses, which would be thoughtful but also likely accompanied by his characteristic dance of both harrumphing and delighting at being asked for more. That see-sawing ambivalence notwithstanding, he’d become an unstoppable fount. I recently noted to him, reflecting on his initial 20-word draft half a year ago, that “I love that you went from being a reluctant writer to a generative producer” to which he laconically replied, simply, “thank you.” But more is not to be.
I did not know that the email Dr Bob sent me on August 8 would be the last I would receive. I will miss him and our exchanges on his writings, encountering his unique mind and ideas. Still, those of us in the CAI and readers of The Report are, I think, fortunate to have had the opportunity, however briefly, to know him and to have learned from him.
This issue also contains a grouping of original and stimulating articles from three other writers, two of whom are publishing in The Report for the first time.
Xiaomeng Qiao draws on passion and experience as a multimedia creator working in painting, music, and game design (originally undertaken in Chinese) and on current work, now primarily in English, in psychoanalysis. In AI-Assisted Creation and the Narcissistic Predicament, Xiaomeng examines the psychological impact of AI on creative work, arguing that AI tools can create “creative ambivalence” and “false narcissism” in creators. In a fascinating discussion framed by Kohutian and Winnicottian ideas, Xiaomeng contends that while AI can dramatically speed up creative output, it often allows creators to bypass the emotional struggle and skill development that authentic creativity requires. This leads to what Xiaomeng calls “false mirroring,” where creators mistake AI’s sophisticated responses for genuine external validation, and to confusion about creative ownership (“is this mine or the AI’s?”). Xiaomeng clarifies a meaningful distinction between healthy AI use, where creators with robust psychological foundations use AI as an execution tool after completing conceptual work, and defensive use, where creators employ AI to avoid the difficult emotional processing that genuine creative development demands. The piece cautions that while AI can amplify existing capabilities for psychologically healthy creators, it may provide hollow satisfaction and impede authentic growth for those seeking to avoid creative vulnerability and struggle.
In this issue, Danielle Knafo, PhD temporarily sets down her duties as Dr Danielle to bring us The Cyber-Savvy Analyst: Meeting Adolescent Patients in Their Digital Habitat, an important work brimming with psychoanalytic scholarship framed by insights informed by seasoned clinical acumen. Here, Dr Knafo argues that contemporary psychoanalysis must evolve to integrate adolescents’ digital gaming experiences as legitimate psychological material rather than viewing technology as an intrusion into therapy. She contends that extensive digital engagement creates distinct neurobiological adaptations in adolescents, making virtual spaces integral to how they process emotions and relationships, and that video games can be understood to function as sophisticated transitional objects where identity and psychological conflicts can be explored. Her article advocates for analysts to develop “digital reverie”—the ability to interpret symbolic meaning in gaming choices and virtual behaviors—while navigating new therapeutic dynamics where adolescent patients may possess greater technological expertise than their analysts. Rather than pathologizing gaming engagement, she calls for expanding psychoanalytic practice to work therapeutically within virtual environments, recognizing that for digital natives, online and offline experiences are equally real and meaningful components of their psychological development.
Richard B. Grose, PhD, an experienced psychoanalytic scholar and clinician with a deep understanding of social theory, brings an incisive multidisciplinary perspective to AI and Self-Erasure. Here, Dr Grose argues that society’s fascination with AI involves transference—the unconscious projecting of childhood experiences with powerful authority figures onto technology. He contends that discussions of Artificial General Intelligence often confuse AI as a tool (means) with AI as a decision-maker (ends), leading to self-erasure where people surrender human agency to a supposedly superior intelligence. Turning to social theory and the political, he draws parallels between AI’s emergence and Trump’s rise (both peaking in 2016) to suggest that both phenomena reflect the same societal exhaustion and willingness to abdicate responsibility. Referencing Ferdinand Tönnies’s concepts of Gemeinschaft (traditional community) versus Gesellschaft (modern impersonal society) and Max Weber’s ideas about the “disenchantment of the world” and life in an “iron cage” of modern bureaucracy, Grose warns that if societies fully submit to either autocratic control or AI dominance, it would represent a dangerous “psychotic transference” threatening human survival.
================================
Please be in touch if you knew Dr Bob and would like to share your memories or reflections; and if you or someone you know is interested in writing for a future issue, please submit your manuscript (or idea for an article) to me at [email protected].