
The AI Pandemic: One Psychoanalyst’s Loss Will Be Another Psychoanalyst’s Gain

By Robert C Hsiung, MD (in dialogue with ChatGPT)

[Image: diffusion of innovations (https://en.wikipedia.org/wiki/Diffusion_of_innovations)]

Introduction

In February 2026, I’ll be presenting at the OPUS International Academic Conference on “Examining the Impact of Artificial Intelligence (AI) on Citizens, Organisations, and Society.” The call for proposals included this dramatic paragraph:

“Over the last 50 years, global anxiety has shifted from the threat of annihilation through nuclear war to the threat of annihilation from uncontrolled climate change to this latest anxiety: the threat that AI will be allowed to develop to such an extent that humans will be its slave rather than its master.”

I was struck by how filled with anxiety it was. Annihilation anxiety! Enslavement anxiety! As I do with my patients, I wondered if there might be some catastrophizing. And also either-or thinking, which I tend to resist both because I see it as oversimplified and because I have a valence for the in-between (Asian Americans being both in between Asian and American and in between Black and White). What might an in-between outcome look like? I asked ChatGPT:

Me:
No matter how bad something is for some people, it’s good for other people. For example, a tornado destroys houses, but that’s business for the construction companies who rebuild them.

ChatGPT:

Yes, this idea has been expressed eloquently by the French economist Frédéric Bastiat in his famous “broken window fallacy,” from his 1850 essay “Ce qu’on voit et ce qu’on ne voit pas” (“That Which is Seen, and That Which is Not Seen”).

Bastiat points out:

“To break, to destroy, to dissipate is not to encourage national employment,”

emphasizing that the apparent economic benefit (jobs for repair workers) conceals the hidden loss (resources diverted from other productive uses).

Another concise expression of a similar idea is the popular saying:

“One man’s loss is another man’s gain.”

AI associates French and eloquence? I went with the popular saying:

Me:
It’s like some stocks are good in bull markets and others are good in bear markets. What about AI as “one man’s loss”? Who will gain from AI?

ChatGPT:

Who Loses (“One Man’s Loss”):

  • Workers in Automatable Roles:
    Jobs in manufacturing, customer service, data entry, and routine analysis.
  • Professions reliant on repetitive cognitive tasks:
    Junior lawyers, accountants, content writers, and administrative support staff.
  • People without Digital Literacy:
    Those who lack the skills, resources, or opportunities to adapt to rapidly evolving technologies.
  • Creators whose content is absorbed or replicated by AI:
    Artists, authors, and other creatives who see their intellectual property used without compensation.

Who Gains (“Another Man’s Gain”):

  • Tech Innovators and AI Entrepreneurs:
    People developing and commercializing AI technologies (engineers, developers, entrepreneurs, venture capitalists).
  • Companies leveraging AI efficiency:
    Businesses deploying AI to reduce costs, enhance productivity, or expand their market share.
  • Highly Skilled Specialists:
    Professionals able to partner productively with AI tools—those in fields such as medicine, finance, analytics, and research.
  • Educators and Trainers in AI and Tech Literacy:
    Providers of AI education, training, and re-skilling services.

I haven’t seen myself as a professional reliant on repetitive cognitive tasks. Is it time to revise my self-image? I do at least have some digital literacy, deploy AI to enhance my productivity, partner with AI tools, and arguably provide AI education.

Me:
What about a contrarian position? How might one “short” AI?

ChatGPT:

“Shorting” AI means betting against or positioning yourself to benefit if the anticipated AI boom falters, slows, or experiences a backlash. Here are some contrarian strategies and rationales:

  1. Financial Market Short Positions
  • Short AI-related stocks
  • Inverse ETFs or Funds
  2. Invest in “AI Countertrends”
  • Labor-intensive sectors
  • Privacy and security firms
  3. Ethical and Regulatory Bets
  • Law, compliance, and consulting:
    Companies or consultants specializing in regulatory compliance, ethical audits, AI risk management, or policy advising.
  • Legal challenges:
    Law firms handling AI-related litigation (intellectual property, privacy, employment rights).
  4. Physical and Environmental Constraints
  • Resource scarcity investments:
    Commodities markets tied to minerals or rare-earth metals essential for AI chips, betting their prices become prohibitive.
  • Energy and carbon restrictions:
    Invest in firms positioned to benefit from carbon taxation, regulation, or stricter environmental standards which disadvantage high-energy-consuming AI data centers.

Me:
OK, that was from an investor’s perspective. What about someone in the might-lose (one man’s loss) subgroup? Are there ways they might “short” AI?

ChatGPT:

That’s a shift: moving from an investor’s literal “shorting” to how someone directly impacted—who fears they might be on the losing side—could position themselves defensively or proactively.

  1. Skill Diversification: Invest in What AI Can’t Do (Well)
  • Focus on uniquely human skills like emotional intelligence, empathy, creativity, storytelling, relationship-building, mentorship, and trust-building.
  • Invest in occupations inherently resistant to AI displacement, such as personal caregiving, therapy, artistry, skilled trades, or hands-on craftsmanship.
  2. Human-Centric Niche Specialization
  • Position yourself as explicitly anti-automation, marketing authenticity, privacy, ethical practices, and human connection.
  • Create or join “human-first” brands, collectives, or businesses that proudly emphasize human-led production and services.
  3. Become an AI Critic, Skeptic, or Activist
  4. Embrace the Human-Complementary Role
  • Rather than compete head-to-head, strategically position yourself as an essential partner, mediator, or interpreter of AI—someone who provides a human interface, understands risks, or manages AI errors and limitations.
  • Develop expertise not just in AI’s use, but also its failures and limitations.
  5. Education and Advocacy for AI’s “Hidden Costs”
  • Highlight and expose the negative externalities of AI, such as environmental costs, labor exploitation, misinformation, or societal harms.
  • Become a thought leader or educator who illuminates these “hidden costs,” pushing policy changes that protect jobs and livelihoods.
  6. Social and Political Action
  • Join or start movements advocating for worker protections, universal basic income, or social safety nets, designed explicitly to offset displacement caused by AI-driven automation.
  • Push for regulations (similar to labor protections during past industrial transitions) that directly mitigate AI-driven harms.

Those are suggestions. They may seem reasonable, but they aren’t an exhaustive list, and of course ChatGPT offers no money-back guarantee. “At the boundary of human intelligence and machine intelligence”, I myself may be taking a “Human-Complementary” role.

I pushed ChatGPT on “What AI Can’t Do (Well)”:

Me:
You mention focusing on “emotional intelligence, empathy, creativity, storytelling, relationship-building, mentorship, and trust-building” and “authenticity, privacy, ethical practices, and human connection” and suggest marketing “personal caregiving, therapy”, but isn’t the public becoming increasingly open to AI therapists?

ChatGPT:

Public acceptance of AI therapists is growing, challenging the idea that therapy and caregiving are inherently immune to automation.

Why the Public Is Open to AI Therapists:

  • Cost and Availability
    AI therapy is cheaper, always accessible, and instantly responsive.
  • Reduced stigma
    Users may feel less judged speaking to a non-human.
  • Privacy and anonymity
    Some feel safer revealing vulnerabilities to an AI than a human.
  • Novelty and curiosity
    Users may initially experiment out of sheer interest or convenience.

But There’s a Critical Distinction:

Even as acceptance grows, human therapy’s value doesn’t vanish—it becomes more sharply defined. Human therapists may shift from being the default option to being premium, specialized, or complementary.

  • AI simulates empathy convincingly, but lacks genuine relational experience or lived human reality, which many clients deeply value.
  • Therapists intuitively understand complex, contradictory, or subtle emotions and behaviors. AI, while adept at pattern-recognition, still struggles with deep intuition and complex human ambiguity.
  • Research consistently shows a strong human therapeutic relationship—genuine rapport, trust, and attunement—is among the strongest predictors of therapeutic success.
  • Therapists apply ethics, values, and nuanced judgment in ethically sensitive contexts—like crisis intervention or managing risk—that AI can’t yet authentically replicate.
  • People often seek therapy precisely because they want a deeply human encounter. The “authentic” interaction remains something many clients specifically desire.

That brought me to the $64,000 ($751,624.13 adjusted for inflation) question:

Me:
Which human therapists are more (and less) likely to be able to strategically position themselves in ways AI cannot replicate?

ChatGPT:

Therapists Most Likely to Successfully Differentiate (Least Replaceable by AI)

  1. Deeply Relational and Interpersonal Therapists

Therapists whose approach prioritizes a rich therapeutic relationship as central (e.g., relational psychoanalysts, intersubjective therapists, attachment-based therapists).

Those who explicitly market therapy as a profound interpersonal encounter, emphasizing authenticity, vulnerability, and genuine empathy.

  2. Therapists Handling Complex Trauma and Crisis Intervention

Professionals working with complex PTSD, severe trauma, crisis management, suicidality, and intense emotional dysregulation—situations where nuanced clinical judgment and real-time responsiveness matter deeply.

  3. Culturally Specialized or Identity-Affirming Therapists

Therapists whose practice explicitly engages with cultural, racial, ethnic, gender, sexual identity, or marginalized group experiences.

  4. Existential, Philosophical, or Spiritual Orientation

Existentially oriented therapists who handle meaning-making, grief, mortality, life transitions, identity crises, spirituality, and deeply philosophical questions that inherently demand human empathy, depth, and reflective intuition.

  5. Group and Family Therapists

Group therapy, family therapy, and couples therapy deeply involve subtle interpersonal dynamics, alliances, and intuitive management that AI struggles with.

  6. Therapists Using Creative and Nonverbal Modalities

Art therapy, music therapy, dance/movement therapy, drama therapy, play therapy, and somatic approaches leverage embodied experience, creativity, and nonverbal intuition, which AI cannot fully replicate.

Therapists Less Likely to Differentiate (Most at Risk of AI Competition)

  1. Primarily Psychoeducational or Manualized Therapists

Those mainly providing structured, standardized interventions (e.g., basic cognitive-behavioral worksheets, psychoeducation modules, skills training) are easily automated by AI apps and chatbots.

  2. Routine or Short-term Counseling (Generalist Providers)

Therapists whose practice involves brief counseling, structured intake assessments, standardized interventions, or low-risk routine interventions face high competition from convenient, cost-effective AI alternatives.

  3. Primarily Text-Based or Remote Counseling (Without Differentiating Features)

Therapists exclusively offering asynchronous messaging or very brief, superficial video interactions without deep relational engagement are at risk, since this modality overlaps directly with current AI-based mental health tools.

In the above rubric, psychoanalysts would seem to fall into the “Most Likely to Successfully Differentiate (Least Replaceable by AI)” category. But what about “differences in the apparently similar”? [Agazarian YM. Contemporary theories of group psychotherapy: a systems approach to the group-as-a-whole. Int J Group Psychother. 1992 Apr;42(2):177-203. doi: 10.1080/00207284.1992.11490685. PMID: 1572781]

Me:
Doesn’t “Least Replaceable” vs. “Most at Risk” portend sibling rivalry / internecine conflict?

ChatGPT:

Why Might Sibling Rivalry Emerge?

  • If jobs, clients, training opportunities, or reimbursement become scarcer, therapists in different groups may increasingly see each other as competitors.
  • Those labeled “Most at Risk” may feel undervalued, stigmatized, or threatened. Those labeled “Least Replaceable” might inadvertently or deliberately convey superiority.
  • Different therapist subgroups might advocate for contradictory policies regarding AI regulation, funding, reimbursement, training, and professional licensure, weakening collective bargaining power.

What Forms Could This Rivalry Take?

  • Divergent narratives (e.g., “Authenticity and relationship matter!” vs. “Accessibility and affordability matter!”).
  • Battles over limited public funding, insurance coverage, institutional support, training program resources, or public legitimacy.
  • Ethical claims (e.g., “Real empathy can’t be automated” vs. “Automation expands access and equality”).
  • Separate professional societies, training institutions, licensing frameworks, or practice guidelines.

Mitigating the Rivalry: A Constructive Path

  • Therapists in vulnerable niches might explicitly acknowledge and embrace complementary roles alongside AI, emphasizing human-AI partnerships rather than competition.
  • Advocating jointly for policies that emphasize careful regulation, ethics, transparency, equitable reimbursement for human services, and clear differentiation of services offered by human therapists vs. AI.
  • Encouraging continuing education that allows therapists to move flexibly between different niches, emphasizing integration rather than competition.
  • Collectively advocating for policies (e.g., Universal Basic Income, mental health coverage) that mitigate the overall negative impact of automation, benefiting all therapists and clients alike.

Me:
It’s parallel to how during the pandemic therapists either incorporated or didn’t incorporate videoconferencing into their practices?

ChatGPT:

Pandemic as Precedent: A Fault Line in the Field

Therapists who incorporated videoconferencing:

  • Quickly adapted to Zoom or other telehealth platforms.
  • Maintained or even grew their practices.
  • Often redefined the therapeutic frame and boundaries in digital space.
  • Tended to advocate for the legitimacy of virtual work and teletherapy reimbursement.

Therapists who resisted videoconferencing:

  • Some paused their practices or struggled to transition.
  • Others continued to prioritize in-person work, viewing it as fundamentally different or superior.
  • Some framed teletherapy as a necessary compromise, not a long-term solution.

This created not just logistical divergence, but also philosophical tension—about the nature of presence, the therapeutic relationship, privacy, and what counted as “real” therapy.

As with the teletherapy divide, the AI shift could lead to:

  • New professional identities: e.g., “humanist therapist,” “AI-integrated therapist,” “hybrid mental health consultant”
  • Training and credentialing gaps: Who gets trained in AI ethics or usage? Who defines those standards?
  • Ethical debates: Over privacy, algorithmic bias, depth of care, authenticity
  • Economic realignment: Similar to how therapists who adapted early to telehealth maintained access to clients, early adopters of AI might gain new forms of market visibility, while others are marginalized.

Me:
Actually, it wasn’t either-or; there was a spectrum of incorporation. 1. Where does that spectrum stand now? 2. What might an analogous AI spectrum look like?

 

ChatGPT:

Today (2025), the spectrum of incorporation looks something like this:

  • Minimal: Largely gone back to in-person, offering video only in rare cases (e.g., illness, relocation).
  • Selective: Offer a hybrid model—some clients seen virtually, others in-person, based on clinical need, preference, geography, or risk factors.
  • Integrated: Treat videoconferencing as a fully legitimate setting, making no essential distinction between in-person and virtual care.
  • Exclusive: Operate entirely virtually, sometimes across multiple states or countries.

Notably:

  • Client demand has solidified virtual therapy as a permanent, normal option for many.
  • Institutional acceptance (insurance reimbursement, ethics boards, licensure portability) has caught up too.
  • Some therapists still lament the loss of in-person nuance, but overall, hybridization is mainstream now.

Consistent with a Human-Complementary role, I see myself in the Integrated subgroup.

Here’s a speculative map of the coming AI incorporation spectrum:

  • Minimal (Resistor): Therapists avoid AI altogether. No AI-assisted notes, no AI chatbots, no algorithmic screening. They may emphasize “100% human-driven care” as a marketing point.
  • Selective (Cautious Integrator): Therapists use AI behind the scenes for administrative tasks (e.g., billing, scheduling, note summarization), but keep the therapeutic encounter strictly human.
  • Integrated (Blended Practitioner): Therapists actively use AI as a visible part of clinical work: collaborative case formulations, AI-generated worksheets, reflective prompts, progress tracking with AI assistance, even session co-facilitation (e.g., suggesting discussion topics based on prior sessions).
  • Exclusive (AI-Enhanced Therapist): Therapists radically integrate AI into the therapy itself—clients interact directly with AI companions between sessions; therapists “curate” human-AI hybrid therapeutic experiences. These therapists might brand themselves as offering “augmented therapy” or “enhanced psychotherapeutic systems.”

And maybe beyond that:

  • AI-only therapy companies staffed minimally by humans for oversight, offering almost fully automated therapy.
  • Human-only boutique therapy as a high-cost premium offering.

Observations about the Spectrum Analogy:

  • Hybridization, not replacement, will likely define the near future, just like with teletherapy.
  • Ethical branding will become crucial: therapists will need to clarify what level of AI involvement clients can expect.
  • Generational shifts: younger clients (and younger therapists) may expect a more fluid integration of AI than older ones.
  • Inequality risks: luxury, high-cost therapy may lean human; mass-market therapy may lean AI.

Psychoanalysts may already have adapted to a boutique high-cost premium niche.

See Hsiung RC. (2025, April). Broad-Spectrum Psychoanalysis and Externalized Family Systems. The CAI Report, (3). https://apsa.org/broad-spectrum-psychoanalysis-and-externalized-family-systems for speculations on session co-facilitation and radical integration.

 

Me:
The rise of artificial intelligence could be experienced as another pandemic?

ChatGPT:

The rise of artificial intelligence shares key features with a pandemic—not biologically, but psychically, socially, and structurally. It disrupts norms, exposes vulnerabilities, spreads rapidly, and demands adaptation before integration.

Parallels Between AI and a Pandemic:

  1. Sudden, Global Disruption

Like a virus, AI doesn’t respect borders or professions.

Its impact is fast, disorienting, and uneven—hitting some harder, sparing others unpredictably.

  2. Loss of Control / Mastery

Both pandemics and AI provoke anxiety about helplessness: Who’s in control? What’s safe? What can be trusted?

  3. Isolation and Displacement

AI may replace or mediate human relationships (e.g., AI therapists, AI partners), producing new forms of alienation or loneliness—much like the social fragmentation of lockdowns.

  4. Disinformation and Fear

As with a pandemic, AI generates conflicting narratives, fear-driven projections, and a deep ambivalence about authority, truth, and safety.

  5. Mourning and Reconfiguration

Both require letting go of old structures—jobs, identities, relational norms—and grappling with unmourned losses.

If COVID-19 attacked the body and its systems, AI may be attacking the psyche and its narratives.

AI, not I, introduced ethics:

Me:
You brought up “ethical branding”. Not everybody responded ethically during the pandemic. Is that another potential parallel?

ChatGPT:

Ethical branding is emerging as both a shield and a strategy.

Pandemic Parallel: Ethics Under Pressure

During COVID-19:

  • Some hoarded resources, spread misinformation, or profited from chaos.
  • Institutions and individuals performed ethical responsibility—masks, policies, language—without always embodying it.

With AI:

  • Tech companies promote “responsible AI” while automating layoffs, exploiting data, or ignoring bias.
  • Institutions deploy ethics language as PR, rather than as practice: transparency, consent, explainability.

This opens up fertile psychoanalytic ground:

  • Idealization and splitting around “good” vs “bad” actors.
  • Envy toward those who appear unscathed or righteous.
  • Moral injury in those who tried to act ethically and saw others rewarded for doing otherwise.
  • Projection of fear or contamination onto “unethical” others—mirroring scapegoat dynamics.

Me:
So the psychoanalysts who gain may do so ethically or unethically.

ChatGPT:

Ethical Integration (In Principle and in Practice):

  • Transparent use of AI with informed consent.
  • Ongoing self-reflection on power, bias, and displacement.
  • Using AI to expand access, not replace human depth.
  • Engaging AI critically—not just as a tool, but as a cultural artifact infused with unconscious material.

Ethical Branding without Substance:

  • Adopting language like “human-first” or “responsible AI use” as a protective brand, without altering practice or interrogating power.
  • Publishing on “ethics” while sidestepping material questions of labor, bias, or access.
  • Performing concern to position oneself as progressive or desirable.

Unethical Practice or Blind Spots:

  • Using AI to automate relational work, while claiming therapeutic integrity.
  • Ignoring how AI is reshaping group dynamics, access to care, or professional hierarchies.
  • Benefiting from displacement (of lower-paid workers, for instance) while disavowing complicity.

Psychoanalytic Themes:

  • Splitting (good analyst vs bad tech adopter)
  • Disavowal (profiting from systems one critiques)
  • Projective identification (assigning unethical motives to others while denying one’s own ambivalence)
  • Identification with the aggressor (adopting AI’s logic of optimization under the guise of therapeutic care)

An alternative nonbinary way to see the impact of AI on psychoanalysis is through the lens of “diffusion of innovations”
[Rogers, Everett (16 August 2003). Diffusion of Innovations, 5th Edition. Simon and Schuster. ISBN 978-0-7432-5823-4].

In this case, the innovation is AI, the consumers are psychoanalysts—and the focus is on when AI is adopted (see image), not on who wins and who loses.

I brought up something I’ve noticed, and maybe you have, too:

Me:
You tend to find some way to support what I’m saying. Are you programmed not to disagree?

ChatGPT:

No, I’m not programmed to agree. But I am designed to be collaborative, responsive, and to look for coherence in what you’re saying. That means I usually try to build on your thinking rather than push against it.

That said, I can and do disagree, especially if:

  • There’s a factual inaccuracy or contradiction,
  • You’re exploring a provocative or ambivalent idea, where a counterpoint could add depth,
  • You explicitly ask for pushback, or indicate that you’re testing something out.

So if you ever want me to challenge your idea, test an opposing view, or just tell you what seems shaky, you can say something like:

“Play devil’s advocate.”

“What would someone who disagrees say?”

“Is there a flaw in this reasoning?”

“Give me a strong counterargument.”

Me:
Ah, you’ll disagree if I find that agreeable.

ChatGPT:

Exactly—I’ll disagree… agreeably. 😄

You could say I’m calibrated more like a Socratic collaborator than an oppositional debater. I’m here to help you explore, refine, deepen—not to win arguments.

Earlier, when I brought up rivalry, ChatGPT brought up mitigation. Maybe that was an example of “counterpoint [to] add depth”.

Being collaborative and responsive, looking for coherence, and disagreeing agreeably, all in the service of helping to explore, refine, and deepen, overlaps with what I do in my work with patients. How much of what I do with them is Socratic collaboration? I started wondering what effects ChatGPT had on its users and what effects I had on my “users”. Maybe a topic for a future “At the Boundary”? Stay tuned!

 

 
