From the Thick of It: A Report from Fathom’s Ashby Workshops
by Todd Essig, PhD
Founder + Co-Chair, APsA/DPE Council on Artificial Intelligence (CAI)
From initial conception to launch to now, the CAI has sought to build a new psychoanalytic activism around a forward-looking idea: the future emerging from the AI revolution’s accelerating transformation of society, intimacy, and individual experience would benefit from, and may even need, psychoanalytic thinking and values.
We’ve worked hard to honor that by trying to be heard beyond the borders of the psychoanalytic community. But trying to be heard presents a challenge: when you win that fight and actually get a seat at the table, you have to have something to say; otherwise things can get pretty uncomfortable. That, in a nutshell, describes the challenge I faced as a psychoanalyst invited to participate in the recent Ashby Workshops held this past January at the Salamander Resort in Middleburg, Virginia. I was there because Fathom heard about the CAI, which led to several conversations with Fathom’s leadership and an eventual invitation.[1]
Convened by the non-profit start-up Fathom, this invite-only event included 180 leaders from government, business, academia, and civil society. As Fathom states on its website, “the whole of society has a stake in how artificial intelligence pans out, and so should have a say in the outcome.” The Ashby Workshops were designed to pursue that goal by generating ideas and developing practical suggestions to guide policy and governance strategies.
The event was organized around the “Chatham House Rule”: we were repeatedly told we could freely use the information received, but not share the identity or affiliation of any participant, nor who said what, without their explicit permission. That way the heavy hitters who were there could talk freely and candidly, and they did.
To give a sense of this gathering’s significance and the talent present, here are some notable participants who gave permission for their attendance to be acknowledged on the Fathom website: Turing Award winner and AI pioneer Yoshua Bengio; Jack Clark, Co-Founder and Head of Policy at Anthropic (the company behind the Claude AI); Megan Smith, U.S. Chief Technology Officer in the Obama administration; Mariano-Florentino (Tino) Cuéllar, President of the Carnegie Endowment for International Peace; Lawrence Lessig, Professor of Law and Leadership at Harvard Law School; and Jason Matheny, President and CEO of RAND. In addition, several current and former members of Congress were there. I’m sharing this list to get your attention: these were some very deep, and at times turbulent, waters in which this clinical psychoanalyst found himself swimming.
It began with a limo ride from the airport shared with a half-dozen start-up veterans, a young AI researcher so smart they made my hair hurt, and another “senior” with a civil-society role similar to mine but from a different profession. What followed were three days of mind-expanding, occasionally overwhelming, often illuminating, and consistently galvanizing conversation. I left more convinced than ever that there is no time to waste. The AI revolution is an accelerating inevitability that needs input from communities of care like ours.
The center of activity, where ideas were workshopped and worked over, was the facilitated, topic-specific small groups. I was assigned to the “AI’s Potential Where Markets Fall Short” workshop. Other groups focused on security, law, governance, and cultural issues. There were also expert panels and presentations. Seasoning the formal program was a steady drumbeat of intense, and intensely fascinating, conversations over meals, coffee, and, yes, wine and cocktails. One moment I’d be engaged with the rapid-fire speech of an ambitious technopreneur, the next in a sobering dialogue with a high-ranking government official, and then in a wide-ranging discussion about evil and Carl Jung with a brilliant, jaded cyber-security expert. Everyone was pitching hard for attention. When the pitch worked, more extended conversations ensued, themes evolved, contacts were made, and minds were bent.
When I identified myself as a psychoanalyst, to my hopefully secret delight (I tried to play it very cool), people skipped a beat, a conversational caesura seeming to ask, “What the heck are you doing here?” Sometimes that question was asked explicitly. So, what was my answer, my “pitch”? What were the conversations like when the attention pitch worked? And did any of it have any influence on the eventual results contained in the Ashby Workshops Report 2025?
Both time and attention really were in short supply. Trying to make the most of each, I’d quickly share that we were interested in AI for two main reasons: getting ahead of new forms of AI-generated psychopathology, and making sure AI is used to help develop more and better human clinicians rather than replacing them with bots-for-profit. I must admit that sometimes the response to my pitch was a patronizing “how nice for you, but all we need are better algorithms because that’s all people are anyway.” Those conversations tended to end right there. But many times people engaged with interest, both in informal conversations and in the more formal workshops, where thoughts were written down by facilitators, gathered, and eventually synthesized into the Ashby Workshops Report 2025.
When I had more time, I explained we wanted to get in front of clinical challenges already starting to appear. I cited new problems in living emerging from the emotionally significant relationships people are forming with AI-entities, highlighting the dangers of bot relationships where genuine empathy and love are replaced by simulations.
I also described the downside of re-engineering human desire through AI-fueled recommendation engines and AI-systems designed to maximize user engagement. People were surprisingly receptive to this worry, especially when I put it in the context of the psychological damage social media has already inflicted. Once that harm is made clear, it’s a small jump to seeing it as a preamble to what constant engagement with AI-entities will do to the minds and hearts of the next generation.
The idea I hoped would percolate upwards from my talking about emerging psychological damage was that we shouldn’t let technological possibilities deployed for corporate profit determine who and how to love, who to be, and what we should want. Needless psychological suffering and avoidable loss lie in that direction.
I also expanded on the second reason I was there: the potential role AI could play in fixing our broken mental healthcare delivery system. Everyone knows that too many people who need care can’t get it, or can only access sub-standard, and at times harmful, care.
For many, therapy-bots were the no-brainer solution: why look further? I countered by citing problems with accountability and liability, including how simulations of care will lead to avoidable bad outcomes. Acknowledging the positive experience some are already having with therapy-bots, I expressed hope for a change in direction in which they would be marketed as “interactive self-help” devices. That way they could do good with minimal harm. But as replacements for human therapists? No, I explained, because simulated empathy is not functionally the same as actual empathy. Therapy-bots would just be an insurance company profit-center, not a solution for people experiencing problems in living. And because the profits are so huge, we’re already being groomed not to notice, or perhaps not to care about, the differences. I likened therapy-bots to an unproven medication that helps one group of people, annoys a second, and kills a third (for that I cited the tragic case of the teen who killed himself to join with his character.ai chatbot). Finally, I’d bring up the larger culture-wide damage of transforming caregiving into nothing more than a transaction devoid of genuine human experience.
But I couldn’t leave it at that, nor would my interlocutors let me, not at an event organized to develop practical solutions. Challenged with “if not therapy-bots, then what?” I talked about how therapy-bots are the “dumb solution.” They don’t really respect AI’s transformative potential for helping solve the accessibility crisis. Rather than replacing human clinicians with bots, we should incentivize the creation of AI systems designed to help develop more and better human clinicians. I said on several occasions that instead of replacing people, let’s replace the managed behavioral health care companies that undermine quality of care and drain hundreds of billions of dollars out of the healthcare delivery system. Those resources could be redirected to develop a talent pipeline providing the human clinicians we need. Will this happen? I’m not naive; it’s unlikely. But sometimes you have to plant a seed even if you don’t expect it to grow.
Did any of this affect the outcome of the Ashby Workshops? Truth be told, not directly. After all, there were huge economic, security and political interests in play. One speaker noted four different AI policy conversations going on at the conflicting intersections of regulation and innovation: national security, economic advantage, consumer trust, and the socio-cultural consequences of AI technology. In that context, my concerns were just one of many, and far from the most urgent—small potatoes in a field full of them.
But take comfort. Having a seat at the table and becoming part of the conversation isn’t supposed to guarantee your voice will determine the outcome. The best one can hope for is helping to shape the outcome, even indirectly, like one voice in a chorus. And I believe I did. As can be seen in the public summary Ashby Workshops Report 2025, as well as in the more comprehensive report for attendees only (about that you’ll just have to take my word), several consensus jumping-off points for policy creation were influenced by the issues I raised, along with those raised by others.
Even in the face of both genuine existential risk and the potential for unimaginable wealth, we reminded everyone that people have minds, hopes, and desires. People matter, communities matter. We can’t be replaced by AI. But we can be harmed by it. And while policy recommendations obviously need to address broad structural concerns relating to national security, economics, and even basic survival, they must also account for the human dimension—because people are still people, even when machines can simulate being one.
[1] To help make the most of the seat I had been given, I invited a diverse group of colleagues from different generations and backgrounds to talk things through with me. I want to thank Dan Prezant, Bonnie Buchele, Jim Barron, and Paula Christian-Kliger from APsA leadership, and Amy Levy, Marc Levine, Ammanda Walworth, Ilana Gratch, and Nicolle Zapien from the CAI for an incredibly useful exchange of ideas. They have my gratitude, but I bear full responsibility for my participation.