
Technology and the Mind: Where Are We Now and What’s Next?

by Nicolle Zapien, PhD


Introduction

In 24 episodes across three seasons spread over three years, Technology and the Mind, a podcast I created and hosted, explored the intersection of technology and human consciousness through conversations with psychoanalysts, philosophers, educators, technologists, and researchers. The podcast found unexpected success, growing to several thousand subscribers with tens of thousands of listens per episode and consistent five-star ratings from a global audience. Yet despite this reach, the show functioned largely as an echo chamber—engaging like-minded listeners but seemingly failing to spark the critical dialogue or concrete action that its important topics deserve.

This article is adapted from the final episode of the podcast, which aired on October 23, 2025. In that episode, I reflected on what the podcast covered—summarizing the highlights of the interviews and conversations across the lifespan of the show and how technology has evolved over these three years. I encourage readers who haven’t already listened to the last episode (or who want to catch up on any of the others) to do so.

Here, I want to focus on some of the reasons why I think the show didn’t have the kind of impact I hoped for and also look ahead at ways to engage these issues more deeply in the future.

In the three years since I launched the podcast, there has been a surge of content on the theme of technology, in particular AI, and how it impacts our minds and relationships. There are now countless research studies, articles, books, and posts about technology and AI. Because of the sheer volume of material, much of it opinion-based, it is challenging to know what to read, whom to follow, or what to take seriously. Curated reading groups that use AI to help think about AI are one way people are currently trying to make the volume of content more manageable. I advocate a different approach. One reason is that it is problematic to use the very tools we aim to critique in order to sort through what we might critique. Another is that using technology, particularly AI, changes the user through the process of use, in ways we may not yet fully understand and in ways that may be of consequence.

In my view, we have reached a decisive moment in which what we do or don’t do with our minds and with tech matters far more than it has previously. Ending the podcast is aligned with a broader choice I’m making about how to use my mind. I’m motivated to optimize my ability to think clearly and feel fully about AI and its impacts as it develops, but also to mount a protest of sorts against the attention economy and to assert my autonomy and freedom.

The present can be hard to reverse engineer accurately. I cannot say definitively what my thinking would be now had I not hosted the podcast and participated in all those conversations. What I am clear about is that how I think and feel about technology and mind today has been profoundly influenced by those conversations, as well as by my experiences with early technology. I consider it important to distill and share my views, what I’ve learned, and what I think we might do to preserve some reflective distance, so that we might understand more about how AI is changing us before we are too immersed.

The background before now

When I worked at a consulting company in the ’90s, conducting marketing and strategy research and UX studies for various tech start-ups and Fortune 500 clients, I had the privilege of observing and influencing the development of tech products and services. Many of them were entirely new product categories, meaning there had never before been a product or service of their type—such as cell phones, home routers, online shopping carts, online dating apps, and social media, among others. These are all ubiquitous now, but they were revolutionary then. The projects we worked on spanned all phases of the product development and marketing lifecycle: ideation, feature development, prototyping, marketing, channel distribution, pricing, user experience, customer loyalty, and branding.

Many of our clients then were tiny groups of innovators who are now giant tech brands and household names. Some are now involved in AI development. There were also traditional brick-and-mortar businesses in the Fortune 500 looking to use technology to improve their operations through automation or by expanding online, such as into online banking or online shopping. We used research to answer questions like:

  1. “Will anyone want to use an online dating app? What differentiates those who will from those who won’t, and how do these apps shape human romantic and sexual relating?”
  2. “Will anyone feel safe putting their credit card into a website form? If so, what security measures are required to make them feel safe, and what will influence them to spend more?”
  3. “What will be the best way to handle returns for online shopping?”
  4. “Will people want a portal in their homes, on the TV or via their cell phone, that accesses everything or only some things?”
  5. “What will we want to control inside and around our homes via sensors, and how do we handle privacy in those cases?”
  6. “How can we monetize things like search engines?”
  7. “What is the value of online communities?”
  8. “What is the optimal pricing strategy and price point to keep people loyal to a cell service or home internet provider, and what will keep them engaged and constantly upgrading their devices?”
  9. “Once you have loyal customers, how do you use partnerships and other marketing strategies to build your brand further?”

The answers to these questions seem clear now. But back then, almost everyone except gay men was wary of online dating. People were largely unsure about the security of online payments, and we were more attached to our TVs than our phones until the iPhone delivered its remarkable design and interface and internet speeds increased. We didn’t know anything about the value of online communities and influencers or how people would behave online over the longer term. We also didn’t understand how social media would shape everything from purchasing behavior to brand perceptions and social experiences.

We spent a lot of time in the 1990s mapping where people clicked and how long they lingered, so that we might understand their brand affiliations and purchasing behavior and sell to them optimally, treating people as members of mathematically derived segments. As internet speeds increased, more products and services could be purchased more quickly, and we began to see how technology use impacted people as individuals and in groups. One important lesson from all of this is that every technology product and service category has early adopters, as well as people who use the tech differently from the usual user; early in a product category’s life, these early adopters and unusual users help us understand, in important ways, how the tech will impact us. AI also has early adopters and those who use it in unintended and unusual ways. And as with all the tech that has come before, AI use changes its user too.

This background is important because it traces the extent to which I have been part of the early history of developing the very things I am now criticizing. But, full disclosure, despite having spent the last three years hosting a technology podcast, I have not spent much time using ChatGPT, Claude, Gemini, or other AI chat agents. The main reason is that I actively try to maintain some distance. Of course, AI is all around us and I don’t pretend to live in a hermetically sealed environment. But I am trying to position myself more like an observational researcher, standing orthogonal to the technology to observe its impact on myself and others around me while the first phases of general use unfold. This is because:

  1. Early adopters are impacted by and changed via their tech use and they may not be able to reflect upon these changes once immersed. Immersion happens seductively and over time.
  2. There are creative and entirely unanticipated ways that people use any new technology. Once created, it can also be broken and hacked, and in some instances these unusual users are nefarious and can be our canary in the coalmine if we wait, observe, and pay attention.
  3. We have an obligation to help people experience and hope for something real and embodied in a time when so many can no longer tell the difference between tech-mediated relating and in-person relating. This stance returns choice and freedom to people who are feeling the FOMO and social pressure of AI pulling them away from human relationships.

Where Are We Now? 

In 2022, before I started the podcast, the dialogue about tech and AI was largely about whether (not when) AI could ever be good enough to replace any human in any job, and, in psychoanalysis, about how teletherapy and teleanalysis might impact clinicians’ practices. The discussion now is about which jobs and activities AI can’t and won’t replace.

As Tristan Harris, co-founder of the Center for Humane Technology (a nonprofit dedicated to ensuring that today’s most consequential technologies, such as AI and social media, serve humanity), detailed in an October 2025 episode of Jon Stewart’s The Daily Show, the number one use case for AI chatbots today is the category of personal support, an alternative phrase used to describe therapy while avoiding liability. Given this, psychoanalysts and psychotherapists are not only legitimately worried about being replaced by AI but also concerned about the adverse impact on vulnerable individuals who use AI chatbots in pseudo-therapeutic contexts, absent the standards of quality care and ethics required of trained and licensed practitioners.

There are many other risks and potential harms.

In San Francisco, Waymo, an autonomous driving company that operates a 24/7 self-driving ride-hailing service, has created an environment in which people are less and less likely to encounter someone from a different social class in a taxi or rideshare, eroding community ties; its vehicles have also caused traffic jams during power outages. Cases of AI chatbots serving as lovers are on the rise, with related stories of obsessional attachment and avoidance of contact (and sex) with real people; so-called therapy bots have facilitated suicide, created obsessional dependence, and driven people to avoid investing in real human relationships.

The open secret in Silicon Valley is that many engineers involved in building AI now fear their own obsolescence and regret contributing to an existential crisis we may all face: a future with no jobs, no work, no goals, no striving. Children are using AI to do their homework and teachers are using it to automate their jobs, resulting in less learning and understanding. We are witnessing the great dumbing down of our minds.

Of course, there are also benefits. Consider the driver for Lyft (a ride-hailing service built on AI) who used AI to write a program to help him lose 50 pounds and cure his type 2 diabetes, and then asked it to help him write a book about the experience. Others cobble together bits of code in minutes to solve problems or start businesses that support their families. AI systems are credited with facilitating earlier detection of cancers, generating the building blocks for new drugs and vaccines, providing 24/7 customer service, and delivering continuously available access not only to limitless information but to comprehensive answers to complex questions.

An Intentional Experiment in Technology Temperance

There are some who, acknowledging that AI is here to stay, think that if we—not just the public, but mental health professionals generally and psychoanalysts specifically—do not engage with it directly, we are defensively avoiding a very important matter. In this view, avoidance means hiding behind the fiction that psychoanalysis is so special it could never reasonably be simulated by an AI. From this position, such a defense conflicts with the foundational psychoanalytic injunction to confront disagreeable truths and will inevitably lead to irrelevancy.

Yes, it is a very important matter. But I am not persuaded we need to deal with it by engaging with it. In my view, AI changes the user in ways we do not yet understand, and in ways I am not sure leave us the same freedom and choice after the engagement. Perhaps we should heed the engineers and technologists who build and have extensively used AI systems, who now discuss their negative effects and disallow their own children from accessing these apps and services.

I’m also not convinced that psychoanalysis is on the verge of irrelevance or that AI will be the cause of its imminent demise. I meet with people in my practice every day who work in AI or use it heavily, or are married or partnered with those who do, and who want an in-person analysis or psychotherapy. Many of these professionals working at the companies developing AI are seeking analysis or psychotherapy to resolve distress arising from AI’s negative effects, or from guilt over the social damage caused by their work, and are intentionally seeking an analog, relational experience. Young people in particular, having acutely experienced the social disruption of COVID-19 during a key developmental period, are increasingly looking to minimize or avoid screens and bots.

The conversations I had with my guests on the podcast underscored, among other things, the importance of thinking carefully—as well as writing and speaking knowledgeably and plainly—about AI and about psychoanalysis, separately and together. There are many people who claim that an AI therapy bot is the functional equivalent of an analysis with a living, trained, and qualified professional (whether in person in the consulting room or by tele-analysis). Refuting such a misunderstanding continues to be important.

All of this, however counterintuitive it may appear given my deep involvement with the topic of technology, informs my decision to indefinitely pause the podcast. To disengage from using AI systems as fully as I practically can (acknowledging that this technology is now almost thoroughly embedded into many processes and interactions). To wait and watch.

One deliberate step is replacing my Samsung smartphone with a Wisephone, a distraction-free minimalist phone produced by Techless. Designed to help users overcome digital addiction and build healthier tech habits, its features are limited to basic talk and text, a camera, an alarm, and a handful of other essential functions on a black-and-white interface. I do not have subscriptions to ChatGPT, Claude, or any similar service. And starting in 2026, I will try to do most of my business with brick-and-mortar establishments.

I do not consider myself a technophobe, nor have I transformed into a militant tech hermit. But I am going to try to avoid AI and tech as much as I practically can and leave it to the early adopters to experience them more fully and organically. There will be inevitable frustrations. Giving up googling on the go and waiting to search until I am at my computer, and eschewing e-commerce, mean a loss of convenience and immediacy. But I am approaching this as an experiment, one where I get to observe what happens to me and to others. I’m conceiving of it as a strategic, conscious, intentional decision to look away for a period of time. My supposition is that I will discover something that will be validated by research in the future. I anticipate that this experiment will be similar to when I kept my BlackBerry and delayed purchasing a smartphone until long after most others had. I felt the FOMO and heightened anxiety of being excluded as others related to each other in this new way. But I also watched relationships suffer and conversations and dreaming wane. I witnessed people’s individuality fade and their anxiety increase as they became consumed by everything presented on their handheld devices to see, want, buy, and become. I watched people become preoccupied with, or even addicted to, various dopamine-driven activities. More than anything, I hated that so many people seemed to stop looking at each other.

What Do I Hope This Accomplishes?

As somebody who’s been deeply involved in technology for many years, not an unsophisticated user who’s fearful or mistrustful of it, I understand it’s patently incorrect to assert that we all must use AI exactly as it’s designed, marketed, and delivered. The history of new technology rollouts tells us that only a subsegment of the population uses a particular technology as-is and as intended. Many use it in unexpected ways, some with negative consequences for others, and some eschew it altogether. Inevitably, there are knowledgeable renegade users who hack and investigate new tech and educate the general population with their findings and expert commentary, which then influences what people want, will tolerate, or will pay for, and can change the entire course of the development of products and services.

Because AI, unlike other new technologies, has been promoted as a friend, companion, helper, guru, even a potential superior entity, and has so quickly become ubiquitous, we’ve been distracted from remembering that we can ask for changes and improvements to the applications and services we purchase and consume. AI is an application or a service, not a being. We can choose to engage or not, or thoughtfully govern the extent to which we do.

Likewise, we can continue to stand for psychoanalysis and its merits. And we can protect our minds and relationships by setting down our devices and disconnecting from all things digital to be in nature and with others in the real world.

If you do use AI, it is also important to look at the source of things. To be curious—and certainly not complacent or blindly accepting—about how AI systems are made, who participates in their development, and how the design, development, and manufacturing labs acquire data and how—or if—they secure it. AI doesn’t get more intelligent in the way people do. It gets faster, more powerful, and more complex. But as that happens, pre-existing biases in training data and flaws in algorithmic design become ever more embedded and opaque.

As the AI industrial complex proliferates and AI services further infiltrate the social fabric, we, public users, will become more dependent on what the systems provide while also having less access to how they’re created or function. We’ll lose the ability to properly regulate or mitigate potential risks and harms. AI alignment and fairness will become aspirational concepts without real meaning.

Do AI products and services represent the interests of all of humanity, all creatures, all life forms on earth? Is the development of AI an arms race in which a few individuals and private entities stand to make enormous profits and amass unfathomable power and influence? Are the technology titans in charge of AI development and commercial services motivated by ethical aims? Do we agree on the definition of ethics? What are the consequences if the answers to these questions are ‘no’?

At this juncture, there may be no more worthwhile question to consider than whether we trust people more than we trust AI, or the reverse.

Now What?

It should be no surprise how I answer that last question. Having signed off from the final episode of my podcast, with all that I’ve learned from all the experts I spoke with, I am going to pause my involvement in tech usage to the greatest practicable degree and will consider jumping back in after the early adopters have shown us more about how this technology impacts us.

What do I imagine thereafter?

A slow tech movement. Healthy tech apps that aren’t too engaging and that help us protect our minds; that work in concert with our natural tendencies rather than exploit them; that provide information while preserving memory and curiosity; and that foster relationships rather than reward distance. There will be those who’ll consider me a Luddite or avoidant. They would be wrong. I have seen this show before. The majority of today’s technology leaders—nearly all of whom can be crassly if accurately called “tech bros”—are driven by hubris and an insatiable first-mover-advantage mindset (though they prefer to consider themselves bold innovators) to hack, tweak, and accelerate everything, even if they break things. Perhaps there’s a perverse wish at play to master the one thing they cannot do: gestate new life. But the AI project now in motion starkly points to an absence of forethought about unintended consequences. Their feverish pursuit of wealth and power is mindlessly destroying mother earth.

So with that I will, for now, take a break in nature and enjoy the real world. I encourage you to do the same. Talk to each other, relying on your own curiosity, questions, and creativity to guide you, without technological mediation. Reclaim your free time. When you’re doom-scrolling or locked into an algorithmic tractor beam, ask yourself whether this is truly the best or healthiest use of your time. If you’re a parent of young children, consider what model you’re presenting to your impressionable kids. If you regularly use ChatGPT or Claude or any of the other popular chatbot services, question the quality and correctness of their responses to your prompts and queries. Do you know how the system was trained? Are you, are any of us, responsible for contributing to or participating in something that, whatever its benefits, is also adversely impacting us all? Are you abdicating privacy? Does that matter to you? Do you understand what resources these systems run on? How do you feel about contributing to the energy crisis and the mining wars that will ensue as more materials are needed? The enrichment of a new class of tech oligarchs? The AI arms race and the cultural and socio-political conflicts it may trigger? How might you advocate for a different outcome?

There is no shortage of challenging, provocative questions. They must be asked. They should be answered.

Ways to Take Action

Educate yourself about using AI for good; don’t just use it mindlessly, as a novelty, because it seems faster, to follow everyone else, or, worst of all, because you are bored. Learn how to present healthier tech choices to people who may not have considered other options, and reclaim your own motivations.

If you have ideas regarding regulations of technology or AI, or concerns about its impact, write to your local and state representatives. Join or donate to PsiAN (the Psychotherapy Action Network, a leading mental health advocacy organization working toward greater awareness, better policies, and more access to depth/insight-oriented psychotherapies). Donate or become involved in the Center for Humane Technology or subscribe to their Substack.

If you are an executive or employee of a company, a director on a board, or involved in a group or organization and see a need for help understanding or managing the complex challenges of adopting and integrating AI technologies into the human ecosystem, reach out to Alexander Stein, founder of Dolus Advisors. If you have children, grandchildren, or other young people in your lives, do not give them unchecked access to chatbots, social media, or smartphones. Advocate for schools that protect children’s minds and learning. To learn more, consider reading “Hold On to Your Kids: Why Parents Need to Matter More Than Peers” by Gordon Neufeld, PhD, and Gabor Maté, MD. The book explores how the influence of peers, magnified by social media and video game culture, is replacing parents in the lives of children, and what parents can do about it.

Consider joining me in skipping the early adopter phase by using, and thus creating greater demand for, new healthy tech products and services. Some actionable steps you can take, as discussed above, include buying a Wisephone and doing business with real-life shops or with socially responsible companies that are not owned and run by tech oligarchs.

Most of all, actively decide how you will use technology, including AI, and how you can preserve and develop in-person relationships. Be alert to how AI impacts your mind and relationships, and be attentive to cherishing and caring for the most important and precious parts of being human.

This article is adapted from the final episode of the Technology and the Mind podcast, titled “Where are we now and where might we go from here?” [Season 3, Episode 5]. Technology and the Mind was a podcast created and hosted by Nicolle Zapien, PhD. Launched in May 2022, the podcast was dedicated to exploring contemporary psychoanalytic ideas applied to consumer technology use cases. Each month, Dr. Zapien interviewed psychoanalysts, philosophers, educators, technologists, and researchers about the current and future impact of technology on our minds, our relationships, and society from a contemporary psychoanalytic perspective. Partial funding for Technology and the Mind was provided by the American Psychoanalytic Foundation through the American Psychoanalytic Association.

