A Note From The CAI Co-Chairs
by Todd Essig, PhD and Amy Levy, PsyD
Psychoanalytic education is not immune to an essential tension observable globally, individually, and at every level in between: balancing the often-competing demands of regulation and innovation. Too much of either creates an imbalance. But when a productive and safe equilibrium is struck, good things happen, or at least that's the hope. The CAI currently has two projects underway that we hope will help educators, institutes, and colleagues find a way to strike that balance.
Given accelerating adoption by teachers, candidates, and students, the question of whether and how AI can be responsibly integrated into training, teaching, research, and administration grows more urgent every day. AI promises the moon, but real peril abounds. It brings not only the promise of instrumental productivity and deepened scholarship; it also invites cognitive offloading and reduces occasions to build rapport and trust with other people. Unregulated AI use in psychoanalytic education and practice carries real hazards, including confidentiality risks, loss of intimacy in treatment, and the erosion of analytic capacities.
The first project is our “Organizing Principles for Crafting AI Use Policies in Psychoanalytic Education.” The need to develop a cogent set of principles for AI use in psychoanalytic education grew out of our recognition of the regulatory challenge many institutes now face: how to foster innovation without doing harm or losing what makes psychoanalytic education such a unique experience, one that values and develops interiority as well as knowledge. By design, the document does not present specific policies. Rather, it offers organizing principles to help local organizations craft policies appropriate to their own needs and organizational culture.
AI brings an anxiety best assuaged through conversation with trusted colleagues. And we don't just see that anxiety in others; we experience it daily ourselves. We crafted these principles, rather than specific policy suggestions, to support institutes in having the conversations that will let each one develop a regulation/innovation balance suited to its own group.
You'll see when you read the document that we tried to ground our organizing principles in core psychoanalytic values, such as unconscious communication, genuine care, and tolerance for uncertainty. Fundamental to all of them is keeping the human encounter at the center of AI use. At the same time, we tried to guide institutes toward developing “AI Fitness”: an aspirational policy goal of ensuring that psychoanalytic communities, institutes, and individuals become thoughtful participants in this AI Age, able to engage with AI critically, creatively, and psychoanalytically from a foundation of direct experience with it.
The other project, part of the CAI Workshop series, will explore the regulation/innovation tension by convening a roundtable discussion on The Future of Psychoanalytic Education in the Age of AI among members of APsA leadership, institute directors, candidates, and CAI representatives. We're planning to launch it sometime in March 2026, but the exact date and time have not yet been set. Stay tuned!
Finally, speaking of regulation, here's a link to the announcement for the November 6, 2025, Digital Health Advisory Committee Meeting on the topic of Generative AI-Enabled Digital Mental Health Medical Devices. The focus was on therapy chatbots. There is room for public comment.