
From the Editor

by Alexander Stein, PhD

Data is a Mirror of Us by Anne Fehres and Luke Conroy & AI4Media *
https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

In my Editor’s Introduction to Issue 1 of The CAI Report, published in January, I wrote that “Our mission, and my role as Editor, is to offer psychoanalytic perspectives on a broad range of issues involving AI’s uses and applications, and risks and benefits to individuals and in society.” I also affirmed my intention to “bring an analytic attitude – open-minded, questioning, keenly attentive – to opening AI’s black box, using domain-specific knowledge and expertise to illuminate what’s working and why, while furthering the principles of the likes of Edward R. Murrow in surfacing what’s not right and holding people in positions of power and influence accountable.”

Consistent with that posture, in a recent Council on Artificial Intelligence meeting I invited articles for The Report that unflinchingly examine and pointedly address the broader socio-political and ideological contexts in which AI systems, applications, and services are being designed and deployed.

The articles in this fourth issue of The CAI Report answer that call, each casting a critical eye on AI’s ever-increasing and, until recently, scarcely imaginable impact on, and incursion into, social structures, civic life, and human relations.

Todd Essig and Amy Levy begin their Note from the Co-Chairs by noting that “AI is not developing in a vacuum, separate from political, economic, and sociocultural forces” and that “the AI that will shape [our] future is being developed within specific—and increasingly authoritarian—ideological and institutional matrices.” Their Note is impassioned, powerful, and important. One reason among several that I think it’s important is that the relevance of their message extends far and wide, well beyond the boundaries of the Council on Artificial Intelligence. We must not, cannot, allow ourselves to deny or magically wish away the fact that all of us are profoundly affected by, as they write, “the present moment in our country pulling AI development away from ethical governance and respect for the dignity of the human subject and toward extraction, control, and social dominance.”

I encourage you to read their Note—which I am proud to publish—more than once, or in any case to pause and let their feelings and words register and stay with you before diving into the other articles in the issue.

I also wholeheartedly support their call to confront this moment in human history as members of a coalition of communities. And, extending that further, I invite you to share your knowledge and voices—The Report has already published pieces from authors who are not CAI members, and in upcoming issues I’m keen to include pluralistic, though always psychoanalytically informed and inflected, perspectives and commentary. Please feel free to send manuscripts to me at apsa.cai.report@gmail.com.

AI’s impact on education is a prominent focus of research and commentary. Andrea Valente, an educator, researcher, and academic consultant specializing in language education, linguistics, human communication, and cultural studies, offers a particularly compelling take on this topic. In “AI Educators: Enthusiasts, Entrepreneurs or Evangelists?” she examines how the term, concept, and function of “educator”—a position of critical importance in traditions of epistemology and culture since antiquity—is being transformed, and degraded, by digital technology and artificial intelligence. We are experiencing, as Dr. Valente explains, an “erosion of professional ethos, which implies trust, respect, and agency for educators,” as that ethos gives way to technocratic systems that promote a “shift from teacher-to-learner and therapist-to-patient-centered approaches” and favor “the mass production of goods, tools, and strategies.”

Understanding bias, in technology and in society, is a cornerstone of Karyne Messina’s body of work. In Issue 3, we published her look at how AI systems seem to have become, as she wrote, “amplifiers of our unconscious selves.” Here, in “From Chatbots to Cyber Ops: Klein’s Theories and the Age of AI Anxiety,” Messina applies the ideas of the influential 20th-century psychoanalyst Melanie Klein on paranoid-schizoid splitting and projective identification—primitive psychological defenses—to understanding AI bias.

Marsha Hewitt is a full professor in the Department for the Study of Religion at the University of Toronto and the Faculty of Divinity at Trinity College, where she teaches undergraduate and graduate courses in psychoanalytic theory and clinical practice, the psychoanalytic psychology of religion, dreams and visions, critical theory, and method and theory in the study of religion. Her erudite and lucidly written article, “Extended Minds, Ethical Agents, and Techno-Subjectivity in the Worlds of AI,” poses several critical questions to address the ethical and philosophical challenges presented by AI. Professor Hewitt suggests that “we must consider how the things we create also create us.” She argues that psychoanalysis is essential to that project because it provides “not only the emotional and mental space for critical thinking about the threats to our shared humanity posed by AI technologies” but “can also contribute to formulating effective ethical interventions in directing them in the promotion and service of human wellbeing.”

In this installment of Dear Dr. Danielle, our regular column offering psychoanalytic counsel on human-AI relations, Dr. Danielle responds to Digitally Distressed Doug, who writes that he feels “insulted, mocked and even tortured by Siri,” which causes him to become “infuriated and enraged.” He asks, “Why is it so easy to get angry at something that isn’t even human?” Dr. Danielle’s answer (her succinct diagnosis: “AI rage”), provided with characteristic empathy and insight, will resonate with many of us who share—or can identify with—Doug’s intense emotional reactions toward the technologies in our lives.

Dr. Danielle is on call to respond to your questions and concerns regarding technology relationship issues. Send your letters, which are always published anonymously, to info@danielleknafo.com (with a copy to apsa.cai.report@gmail.com).

Dr. Knafo does double duty in this issue, temporarily setting aside her Dear Dr. Danielle hat to offer a deeply thoughtful piece titled “Free Association in the Age of AI.” She explores compelling parallels and differences between psychoanalysis’ “fundamental rule,” that patients say “whatever comes to mind” in a stream of free associations, and the outputs of Large Language Models, suggesting that “understanding these differences and similarities helps us better appreciate both the unique qualities of human consciousness and the nature of artificial intelligence.”

Three additional notes before I leave you to read these articles.

The CAI Report landing page containing the table of contents to this and previous issues now has a link where you can enter your email address to sign up to get updates about future articles, events, and resources from the Council on Artificial Intelligence. Please do!

In keeping with this introduction and the Co-Chairs’ Note, I can think of no better issue of The CAI Report than this one to introduce Better Images of AI, a new source, and resource, for some of the images you’ll see as you explore this issue (including the one by Anne Fehres and Luke Conroy & AI4Media I’ve selected for this Introduction; see below for their description of the image, which seems both conceptually and aesthetically apt for what I’m trying to say here), and which I plan to tap for future ones too. Better Images of AI is a non-profit collaboration of individuals and non-profit and academic institutions. Their work is organized to redress the fact that the “predominance of sci-fi inspired and anthropomorphised images, and the lack of readily accessible alternative images or ideas, make it hard to communicate accurately about AI … add to the public mistrust of AI … and don’t encourage the necessary diversity of people to enter the AI workforce and address the AI talent gap.” Better Images of AI “creates a new repository of better images of AI that anyone can use” and includes “images that more realistically portray the technology and the people behind it and point towards its strengths, weaknesses, context and applications.”

To learn more about the ideas, ethos, and strategies guiding and driving Better Images of AI’s initiatives, I strongly encourage you to download and read their Guide for Users and Creators by Kanta Dihal, Lecturer in Science Communication at Imperial College London and Associate Fellow of the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and Tania Duarte, Founder of We and AI, a UK non-profit focusing on facilitating critical thinking and more inclusive decision making about AI through AI literacy. And then consider how to support and help amplify their important work. Get in touch with them here.

Lastly, many of the articles in The CAI Report list scholarly and academic references that include books. We all have options for where and how we add to our libraries. I want to recommend Bookshop.org. Bookshop is an online book marketplace launched to financially support local, independent bookstores and is a privately held certified B-Corp (more about the mission and organization here; Bookshop’s Founder and CEO Andy Hunter is interviewed here). It’s a wonderful and socially responsible alternative to the world’s current default book purveyor owned by the rapacious, free-press-suffocating, authoritarian-supporting Mr. Bezos.

I hope you enjoy reading this issue of The CAI Report and invite you to continue the dialogue. Please write to apsa.cai.report@gmail.com to share your thoughts and responses. And consider forwarding it through your social and professional networks and sharing it with other interested readers.

Issue 5 coming soon!

=======================

* The aim of this image is to highlight that data, in both its ‘raw’ and reinterpreted form in AI systems, has a human origin. While moving through different seemingly abstract systems, a recognition of this origin highlights the notion of “human in the loop” and pushes back against the notion of AI as a ‘black box’. This work explores this idea through the symbolism of the mirror and reflection. While there is a magic in looking at the output of an AI system, there is also a need to ground this in the realism that such outputs are a reflection of society – the good and the bad. AI4Media https://www.ai4media.eu/ funded this commission which was made possible with funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 951911. Creative direction by AIxDESIGN – aixdesign.co

Alexander Stein