GenAI’s Promethean Paradox
Why we want and fear the power of super-intelligence

Prometheus bound to a rock, his liver eaten by an eagle and his torch dropped from his hand | Engraving by Cornelis Cort, 1566
The myth of Prometheus is a timeless way to explain the phenomenological drive behind innovative technologies, as well as the dangers they entail. Prometheus was a Greek titan who stole fire from Zeus on behalf of humankind so they could create civilization, technology, and art. No good deed goes unpunished, and Prometheus was condemned to eternal punishment, chained to a rock where an eagle ate his liver each day, only for it to regenerate overnight. Eventually, Prometheus was freed by Hercules. It’s a story of the paradoxical nature of innovation—it advances our collective prosperity while punishing the individual protagonists.
The Promethean impulse is a useful provocation for exploring how generative artificial intelligence (GenAI) is affecting workers, work, and organizations. Although classical AI has been part of our lives for over a decade (online search, shopping recommendations, insurance premium calculations, sports betting, news reporting, etc.), the emergence of GenAI is producing heightened anxiety and uncertainty. As Geoff Ralston, former president of the storied startup accelerator Y Combinator, observes: over the past three years and whether we like it or not, GenAI has been driven into and disrupted our lives and work (Ralston, 2025).
What makes GenAI so extraordinary is how quickly and widely it has been adopted. Many employees are suddenly facing a Promethean paradox, and most are unequipped to deal with it. Previously, it was primarily people in research and development or executive functions who had to navigate the uncertainty, anxiety, and tradeoffs of innovation. But today, GenAI tools are omnipresent, mandated on employees’ work devices and accompanying us as passengers, observers, and even active copilots on our personal devices.
Evidence and analysis are emerging that illuminate how these technologies are being applied in organizations and spotlight their consequences and outcomes. The early signals are not as positive as the polished talking points delivered by savvy tech executives or the optimistic slogans of sales teams and advertising campaigns. In the reality of our day-to-day working lives, commercial analysis suggests just 5% of organizations see a benefit from GenAI (MIT NANDA, 2025), and workers spend nearly two hours a day dealing with low-quality, AI-generated “workslop” (Niederhoffer et al., 2025).
Meanwhile, psychologists are documenting emerging evidence that use of GenAI tools can dampen motivation (Wu et al., 2025), reduce team cohesion and individual well-being (Filippelli et al., 2026), and increase loneliness (Cremer & Koopman, 2024). That’s the point of entry for what I want to focus on here: looking at some of the overlooked aspects of GenAI’s troubling impact on people in organizations and considering how psychodynamic concepts can help relieve pain and increase gain.
Beyond being widely available, there are characteristics of how GenAI has been built and how we interact with it that are cognitively demanding. As a technology, it is exceptionally diffuse, unbounded, individually applied, and opaque in how it functions. We have but a prompt box to interact with this immaterial, seemingly magical machine that in seconds returns knowledge, voices, and images from times past, present, and future. That means every user is performing an innovation experiment—applying the technology in unique, unstandardized ways, with the hope, but no clear-cut certainty, of improving their work, and with little understanding of how the technology works or how it will respond to their prompts.
Consider yourself: do you use GenAI in your work or not? If yes, how? Some of you may have the intrinsic motivation to initiate exploration and adoption of AI tools in your work, providing you with a wealth of experiential data to interpret. Others may reject incorporating AI into their work yet are likely working with people who are grappling with what GenAI means for their work, identity, and employability. At this point, nearly everyone is contending in one way or another with the Promethean paradox.
Issues of social responsibility also come into play. Many people feel conflicted, or at least uncomfortable, about using a technology trained on low-quality knowledge, riddled with hidden biases, or skewed to the national values and ideology of the country in which the system was developed. In addition, many GenAI systems and services have been trained and developed by using—exploiting—data and work products in ways that neither acknowledge nor compensate the original and rightful creators and copyright holders, nor uphold the IP laws and protections of nations around the world. Thus, one of the implied but undocumented terms of service in agreeing to use these tools involves reconciling ourselves to our role in a Promethean bargain concerning fairness and equity.
The Promethean impulse inherently involves inner conflict: to create with fire while aware of its risks. But as the legend reminds us, the goodness of our intent may not be enough to avoid punishment. The dilemma has no simple resolution. By avoiding the fire, we might sacrifice progress and the potential to advance society and improve people’s lives. But in continuing to use these technologies, we must grapple with these issues and with ourselves. The GenAI era is still emerging, and the answers we seek may lie in the distance. In the meantime, how might we engage with the inner conflicts we face, or help others through theirs?
One way to cast light on the competing questions, concerns, hopes, and fears is through psychodynamic concepts like the id, ego, and superego. While modern neuroscience may have other technical explanations, this structural model of the psyche developed by Sigmund Freud in the 1920s is useful for exploring the techno-social dynamics we face during periods of large-scale societal innovation (Freud, 1923/1989). The 1920s was, after all, also a period of techno-shock in which great advancements in mechanization and telecommunications created their own new possibilities to help or harm society.
Here’s a simple framework to help you apply the id, ego, and superego model to your own relationship with and experiences of GenAI:
Id:
Seeks immediate gratification of basic desires, impulses, and urges without considering consequences or morality.
Some believe that the foundation of these platforms and applications is built on pillaged knowledge and unfair use of intellectual property by big tech (Heikkilä, M., 2024). Settlements and licensing agreements seem to confirm such claims. Yet, at scale, hundreds of millions of people use GenAI tools, functionally colluding in (or turning a blind eye to) the theft. Are we, the users, repressing our guilt for using stolen intelligence?
Ego:
Balances the id’s desires with the constraints of reality and the moral demands of the superego.
GPT stands for “generative pre-trained transformer” and is a type of large language model (LLM) widely used in generative AI chatbots. The transformer part of GPT deserves more inquiry regarding how it influences our concept of and attitude toward creation and theft. Is the transformer the element applied to the process that makes the output mine not stolen? Or is it a way to rationalize away complicity in the theft?
Superego:
A moral compass, striving for perfection and enforcing societal norms through guilt, shame, or moral judgment of the ego’s actions.
AI technologies and their implications are already here, yet we speak as if they are coming. Is the often abstract or theoretical level of the discussion a defense against taking responsibility for decisions within our own roles in organizational life?
These are examples of questions that might help you better understand your relationship with GenAI technology. After reflecting on your general response, consider whether your answers vary based on a specific GenAI service provider. The common practice is to treat all the services as interchangeable. Yet each has its own interaction model and narrative context. For example, some people project more trustworthiness onto Claude, the GenAI model from the U.S. company Anthropic, and less onto OpenAI’s ChatGPT.
In my work developing innovation capabilities in large, complex organizations, I’ve found that the human and performance impacts of GenAI adoption vary greatly. It would be illogical not to expect highly emotional, even regressive, responses to the adoption of GenAI. It is to be expected that most applications of AI will fail, or that an initial roll-out will be imperfect, yet according to the management consultancy McKinsey, only 11% of organizations in the US provide any kind of support for failure. While 85% of practitioners report fear as their biggest barrier to innovation, 89% of CEOs have no strategy for dealing with it (Furstenthal et al., 2022). For these reasons and others, I consider it very important to look more deeply at the impacts and implications for work and workers, not just to measure job destruction and creation. This research is just beginning.
I recently attended the American Psychological Association (APA) annual meeting with keen interest in new research and findings at the intersection of AI, psychology, and organizational behavior. I presented some of my own research on how emotions influence innovation outcomes as part of the Industrial and Organizational Psychology division. However, I was surprised to discover how far away psychologists, leadership development professionals, and executives at large are from understanding the implications of AI on our work and world.
That’s not a criticism but a call to action for more of us to share our observations, interpretations, and recommendations. The truth is, it’s hard to get close to the people, teams, and organizations developing or adopting AI at scale because, among other reasons, they are so consumed and at times overwhelmed with the tasks before them. Therefore, we can understand why mainstream psychology, academia, and journalism have yet to adequately grasp this next generation of socio-technological transition. Those professionals are too far from the fire in the forge, to return to the Promethean metaphor, just as the technologists and tech leaders are too distant from the rich, rigorous, and important perspectives and learning of psychologists, psychoanalysts, and other social scientists.
The good news is that there are some people who are in the room, close to the decision-makers and teams in organizations developing or adopting AI at scale. I learned this on the third day of the APA annual meeting when I shared my research on emotional gearing and techniques for developing better capabilities for professionals to navigate the behavioral and emotional challenges of innovation (Macfarlane, B., 2025). Suddenly, people working at the forefront of innovation and socio-techno transition came out of the shadows to speak with me. However, their stories and experiences have yet to be told, let alone documented.
An adaptation of Vancouver-based futurist William Gibson’s quote comes to mind: “The future of psychodynamic work in organizations is already here, it’s just not evenly distributed.” We need more spaces to give us a Winnicott-ian “holding environment” in which to be playful and curious to name, tame, and reframe the effects of generative AI on work, workers, and the workplace. I hope this brief article will be an invitation to others to share observations, experiences, and psychodynamic interpretations of what’s happening out there in the GenAI development labs, corporate adoption programs, and the day-to-day lives of workers.
References
Appel, G., Neelbauer, J., & Schweidel, D. A. (2023). Generative AI has an intellectual property problem. Harvard Business Review, 7. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
Boston Consulting Group. (2024). AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value. https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value
Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025). The GenAI Divide: State of AI in Business. MIT NANDA. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
Cremer, D. D., & Koopman, J. (2024). Research: Using AI at Work Makes Us Lonelier and Less Healthy. Harvard Business Review, 24. https://hbr.org/2024/06/research-using-ai-at-work-makes-us-lonelier-and-less-healthy
Furstenthal, L., Morris, A., & Roth, E. (2022). Fear factor: Overcoming human barriers to innovation. McKinsey & Co. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/fear-factor-overcoming-human-barriers-to-innovation
Filippelli, S., Popescu, I. A., Verteramo, S., Tani, M., & Corvello, V. (2026). Generative AI and employee well-being: Exploring the emotional, social, and cognitive impacts of adoption. Journal of Innovation & Knowledge, 11, 100844.
Freud, S. (1989). The ego and the id (1923). Tacd Journal, 17(1), 5-22.
Heikkilä, M. (2024). AI companies are finally being forced to cough up for training data. https://www.technologyreview.com/2024/07/02/1094508/ai-companies-are-finally-being-forced-to-cough-up-for-training-data
Macfarlane, B. (2025). The good, the bad, and the insightful in-between: Comparative analysis with the innovation leadership map as a way to psychodynamically illuminate the emotional gearing influencing innovation outcomes. Organisational and Social Dynamics, 25(1), 1-19. https://www.ingentaconnect.com/content/phoenix/osd/2025/00000025/00000001/art00001
Niederhoffer, K., Kellerman, G. R., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025). AI-Generated “Workslop” Is Destroying Productivity. Harvard Business Review. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
Norton Rose Fulbright. (2025). Infringement risk relating to training a generative AI system https://www.nortonrosefulbright.com/en-ca/knowledge/publications/ef8d8cce/infringement-risk-relating-to-training-a-generative-ai-system
Ralston, G. (2025). Accelerating Safely. Geoff’s Substack. https://geoffralston.substack.com/p/accelerating-safely
Wu, S., Liu, Y., Ruan, M., Chen, S., & Xie, X. Y. (2025). Human-generative AI collaboration enhances task performance but undermines human’s intrinsic motivation. Scientific Reports, 15(1), 15105. https://www.artificialintelligence-news.com/wp-content/uploads/2025/08/ai_report_2025.pdf





