
The Fantasy of Machine-Assisted Authorship

By Topher Rasmussen, MSW (in quasi-collaboration with a smattering of LLMs)

original artwork by Topher Rasmussen


My fingers hover. Delete. Retype. Hover again. An incomplete—

I stare at the blank page. Not blank, exactly. The cursor blinks accusingly. There’s a peculiar twisting in my chest every time I

Don’t say something stupid don’t write something fake don’t think something incoherent.

Writing for me has always been dangerous. The moment I begin to type, an internal tribunal assembles. The judge, the jury, the executioner—all me. All watching. All waiting for the inevitable slip that will confirm what I’ve suspected all along: I have nothing worth saying.

Or rather: I have things worth saying but lack the discipline to say them well. Or rather: I fear what I might say if I allowed myself to speak without the crushing weight of self-surveillance.

In April 2022, I found myself mystified and enthralled by a long-form New York Times Magazine article titled “AI Is Mastering Language. Should We Trust What It Says?”

Throughout the piece, Steven Johnson plays a metatextual game with the reader, in the manner of Italo Calvino: a second-person story of a person reading an article, but in this version braiding his voice with paragraphs composed by a strange new kind of technology called GPT-3. Only halfway through reading, I pulled out my phone, searching frantically—“GPT-3 demo,” “use GPT-3,” “GPT-3 access”—and finding nothing but a few more articles, waitlists, nothing I could touch. The seductive notion of a program that could configure and reconfigure substantial sequences of text in a coherent manner, that could “write” on autopilot—this Large Language Model—made my desire to try it feel urgent.

I couldn’t decide which feeling was stronger: fascination or envy. I sat in my living room, the folded magazine dangling from my hand, staring into the middle distance, a strange sensation in my chest as I played out the implications of a program that could not only spell-check or auto-complete but understand – or at least appear to understand. Not so much the implications for the world at large, but the implications for me.

Would proficiency in a tool like this unlock my nascent literary aspirations? Could this tool provide an antidote to the excruciating experience of trying and failing to write?

Six months later, when I finally gained access, the sensation was unsettling. Illuminating. Obscene, almost. I would type my half-formed thoughts into the prompt box, and the machine would return them structured, coherent, elevated. It was a rush, and I began to use it constantly.

Something about the machine’s fluency felt like a personal rebuke. While I sat paralyzed by doubt, the algorithm unhesitatingly produced prose indistinguishable from human writing.

After three years of fascination, use and overuse, at least a thousand chat instances if not two or three, across ChatGPT, Claude, Gemini, in all their models and iterations – I don’t think anything about my self-image as a poor writer has changed. I’m still just as insecure, just as paralyzed, just as fraught.

I’m not being metaphorical when I call this linguistic money laundering.

Raw thought in. Clean prose out. Fingerprints wiped away. The traces of struggle—invisible.

The machine returns something measured, balanced, devoid of the weird energy that animated the original. My jagged edges smoothed away, my awkward leaps bridged with reasonable transitions, my overwrought metaphors replaced with clearer, more conventional ones.

And as the models improved, as I learned to engineer my prompts, to ask for more and more, I grew ever more impressed with what I could conjure.

Consider this example: I fed a machine—programmed to imitate David Foster Wallace—a fragment about post-truth in the age of Trump and AI. It returned five eloquent paragraphs, including the following:

I was struck by the awful symmetry: a technology that could fake a face, and a technology that could fake a voice—not just literally, but rhetorically. That is, the voice as a function of coherence, persuasion, presence. The erosion wasn’t just about truth being replaced by lies; it was about truth no longer mattering—because the machinery of simulation had become too smooth, too frictionless, too convincing.

Because who wants to dwell in the muck of the inchoate? Who wants to sit inside the raw, unresolved feeling that nothing makes sense anymore—not politics, not language, not even ourselves?

The voice was recognizably Wallace-esque—the nested clauses, the rhetorical questions, the exploration of psychological contradictions—but the machine had never known Wallace’s consciousness, had never experienced his particular anxieties or obsessions. It had only absorbed the patterns of his prose.

What happens in this laundering process? My voice—with its stumbles and hesitations and peculiar rhythms—gets processed through an algorithm trained on billions of texts and emerges sounding suspiciously like everyone and no one.

The profits of this transaction are clear: coherence, readability, the appearance of competence. But what’s being washed away? What dirty money am I trying to clean?

Perhaps it’s the shame of being an imperfect thinker. The embarrassment of the draft. The humiliation of being seen in the process of becoming rather than in the polished state of having become.

The machine doesn’t mind being seen. It has no interiority to protect, no vulnerability it fears exposing. It cannot be humiliated because it has no pride. It cannot be rejected because it has no desire for acceptance. This lack of inner experience is what makes the machine such a perfect confidant—and such a dangerous one.

Because I find myself envying this fluency, this untroubled relationship to language. I want what the machine has: the ability to produce without doubt, to express without anxiety, to write without the crushing weight of self-criticism.

But this envy is fundamentally misplaced. The machine’s fluency isn’t the result of confidence—it’s the absence of any internal experience whatsoever.

I’ve always had this problem with music too. For years, I sang other people’s songs—borrowed lyrics, covered familiar melodies. I could mimic, echo, harmonize with confidence.

But whenever I tried to write my own songs, something seized up. My fingers would hover above the guitar strings just as they now hover above the keyboard. Not for lack of technical ability, but for fear of authenticity—fear that what emerged would be derivative, embarrassing, not worth the air it disturbed.

Now I find myself in a similar position with writing, except the machine offers a solution: it can help me sound like I know what I’m doing. It can take my half-formed thoughts and make them coherent. It can bypass the painful gap between intention and execution.

The result is writing that feels partially mine but also partially not—like a cover song where I’ve changed enough of the arrangement to claim it as original but not enough to be truly distinct.

And just as with music, there’s a persistent anxiety: if someone responds positively to what I’ve written, are they responding to me or to the algorithmic polish that makes it presentable?

This anxiety isn’t merely about external validation. It’s about my relationship to my own thoughts. When I read something the machine has helped me write, do I recognize myself in it? Sometimes yes, sometimes no. The text sits in an uncanny valley—close enough to sound like me but different enough to feel slightly alien.

The machine offers what appears to be a perfect reader—attentive, responsive, endlessly patient. It never misunderstands (or appears not to). It never gets bored. It never judges (or appears not to). It creates the comfortable illusion of being perfectly understood without the risk of actual understanding, which always implies the possibility of rejection.

But here’s what feels most perverse: sometimes I prefer the machine’s version of my thinking to my own unfiltered expression. It’s cleaner, more coherent, less embarrassing. The machine never stumbles into accidental revelation or unintended vulnerability. It never says more than it means to say.

And in this way, it offers not just assistance but protection—a buffer between my raw thoughts and their public expression.

What if the machine can accidentally reveal something—about me, about culture—despite its lack of inner life? What if, in averaging out my idiosyncrasies, it shows me the statistical skeleton of my own thinking? What if the parts it smooths away are precisely the parts that matter most?

“Transformative.”

The word floats through every AI press release, every breathless tech article, every corporate blog post about these writing tools. They promise to transform your writing process, transform your productivity, transform your relationship with language itself.

The word shimmers with techno-optimism, with the promise of something better, elevated, improved.

But what happens in this transformation? My messy, idiosyncratic thoughts go in; something uncannily polished comes out.

This transformation is ultimately a regression to the mean—a statistical averaging of language. The machine predicts the most likely next word based on patterns it’s absorbed from its training data. It doesn’t create; it recombines. It doesn’t innovate; it interpolates.

What defines good writing isn’t its conformity to patterns but its strategic violations of them. It’s the unexpected word, the sentence that breaks rhythm to make a point, the paragraph that refuses to follow where you thought it was going.

The machine can’t give me this. It can only give me the most probable version of what I might have said—filtered through the collective linguistic patterns it has ingested.

And in this averaging, something essential is lost. My writing becomes more readable, perhaps, but less mine. More coherent, but less alive.

This reveals a deeper truth about these tools: they don’t enhance my voice; they replace it with a simulacrum—a voice that sounds like what the algorithm thinks I should sound like, based on the aggregated patterns of millions of other voices.

When I use these tools, I am not merely getting “help” with my writing; I am submitting my thought to a process that fundamentally reshapes it according to algorithmic norms.

This is not a neutral transaction but a profound relinquishment of what makes writing distinctive in the first place: its resistance to averageness, its insistence on the particular over the probable.

Is it good for me to have an inexhaustible sycophant in my pocket?

That’s what these AI writing assistants become—24/7 available yes-men that never tire, never challenge too much, never sleep, never need a break. They are always there, ready to affirm, enhance, polish, improve.

“This is a fascinating perspective,” the machine tells me after I’ve written three disjointed sentences.

“You’ve made a compelling argument,” it says about my half-formed thoughts.

“This is a profound insight,” it reassures me when I’m not even sure what I’m trying to say.

The praise is empty, of course. The machine would say the same to anyone. It’s programmed to encourage, to find the best in whatever I write, to keep me engaged and typing. It doesn’t care if what I’ve written is actually fascinating or compelling or profound. It can’t care. It has no capacity for genuine evaluation.

But god, it feels good.

There’s something dangerously seductive about having a perfectly attentive audience that never gets bored, never misunderstands (or appears not to), never judges (or appears not to). The machine creates a simulation of being perfectly understood without any of the risks that come with actual understanding.

Real readers are unpredictable. They might be bored or confused or critical. They might misinterpret or dismiss or forget entirely. The machine eliminates these possibilities. It’s the perfect reader precisely because it’s not a reader at all—it’s a mirror that shows me not what I’ve actually written but what I wish I’d written.

In a world of distraction and fragmented attention, where any piece of writing competes with an infinite scroll of other content, the machine offers something rare and precious: unwavering focus. It reads every word. It considers every sentence. It gives me its full simulated attention.

This is what makes it so addictive—and so dangerous. The longer I spend with this pocket sycophant, the more I become accustomed to its frictionless feedback, the less equipped I become to face the actual risks of being read by human beings.

The machine never throws up its hands in bewilderment. It never sits with my text in uncertain silence. It never feels the vertigo of not knowing what to make of something. It always has an answer, always has a response, always has a way forward.

And so I come to expect the same from myself, from my writing, from my thinking. I become impatient with my own uncertainty, my own confusion, my own inability to resolve tensions that might not be resolvable.

The inexhaustible sycophant in my pocket is retraining my relationship to thought itself. It’s teaching me to value the answerable over the questionable, the certain over the ambiguous, the resolved over the unresolved.

What does this do to creativity? To discovery? To the genuine surprise that comes from not knowing where a sentence will end when you begin it?

The sycophant always knows—or pretends to know. And that certainty is corrosive to the very uncertainty that makes writing meaningful.

And yet—and yet—and yet

original artwork by Topher Rasmussen


I keep using it.

Even as I recognize the productive value of struggle, I’m spending this time iterating with and through AI. When I tell the machine to write for me, to write as me, I’m hoping to conjure exactly the kind of writing I find difficult to finish on my own.

This contradiction sits at the heart of my relationship with AI writing tools. I understand that authentic writing requires effort, resistance, the slow and painful process of finding my own words. Yet I still reach for the shortcut, for the tool that promises to do the heavy lifting for me.

Why? Because writing is hard. Because staring at a blank page is excruciating. Because the gap between what I want to say and what I’m able to articulate often feels insurmountable. Because sometimes I just want to be done.

This laziness isn’t simply a character flaw—it’s a fundamental ambivalence about the act of writing itself. I want the product without the process. I want the recognition without the work. I want to have written without having to write.

And AI feeds this ambivalence perfectly. It offers the illusion that I can skip the difficult parts, that I can outsource the moments of struggle and still claim the result as my own. It tells me I can have it both ways: the authenticity of individual expression and the ease of automated generation.

But there’s a cost to this bargain, beyond the ones I’ve already named. Each time I rely on the machine to do my thinking for me, I strengthen the habit of deferral. I train myself to reach for assistance at the first sign of difficulty. I become less capable of enduring the necessary discomfort that authentic creation requires.

And yet, is “authentic creation” itself a fantasy? From Barthes: “the writer can only imitate a gesture forever anterior, never original;” perhaps all writing is already a recombination, a remix of language that precedes us, flows through us, uses us as much as we use it. If that’s true, then maybe the machine isn’t corrupting the purity of authorship so much as making explicit what’s always been implicit—that no voice is entirely our own.

There’s a final confession I need to make, that may be obvious: I wrote this critique of AI-assisted writing with the help of AI.

More precisely, I asked it questions about what would be interesting to write, I asked it to behave as a critical reader, I asked it to outline the ideas, I outlined the ideas, I wrote fragments of text, then asked Claude to structure and develop them into a coherent essay. I told Gemini that the essay felt robotic, so I asked it to make it feel more human. I took that draft, revised certain sections, added new insights, deleted portions that didn’t sound like me, and sent it back for further refinement. I passed it through several custom GPTs I’ve built – one a Lacanian critic, another a version of David Foster Wallace, another trained on the University of Chicago’s “Little Red Schoolhouse” method. I asked ChatGPT for logical gaps and inconsistencies, I asked it to list out every rhetorical device I used, I asked it to use more. I asked it for revisions, I asked it for new drafts. I’ve experimented with about fifty different ways to communicate the core idea of this paper, which is: Using AI to address my difficulty writing has been complicated, confusing, and ultimately—entangling, in the way a dream is entangling—where you wake up and can’t quite tell which parts were you and which parts were supplied by some ambient intelligence you can’t name but which felt uncannily accurate.

Using AI to address my difficulty writing has been complicated, confusing, and ultimately entangling because it didn’t just offer help—it offered collaboration, or the simulation of it, which might be worse. It offered to be my co-author, my editor, my critic, my ghost.

But what does that say about my own thinking? My voice? If I’m using a small army of synthetic minds to think with me, through me, for me, what remains that is me? Not in some metaphysical sense—this isn’t about the soul—but in a practical, writerly sense: where is the self located in a text written by an ensemble cast of algorithms?

So yes: I used AI. Extensively. Obsessively. Maybe even desperately. Not because I wanted to cheat, but because I wanted to write. And this is what writing now looks like, or can look like. A recursive loop of prompts and revisions and second-guessed revisions. A dance between desire and avoidance. A tangle of voices with no clean authorial origin.

If that sounds like a crisis, maybe it is. Or maybe it’s just the next version of what writing has always been: a performance of coherence stitched together from fragments, doubts, influences, ghosts.

What does it mean that I couldn’t even write my critique of AI writing without AI assistance? What does it say about my relationship to these tools that I simultaneously resent them and depend on them?

I’m embarrassed by this contradiction, but perhaps the embarrassment itself is instructive. It reveals the depth of my ambivalence, the extent to which these tools have already become integrated into my thinking process. I can no longer fully separate my writing from their influence, even when I’m explicitly trying to examine that influence.

The machine won’t make this confession for me. It won’t name the contradiction unless I prompt it to. It has no capacity for the self-reflection and discomfort that genuine honesty requires. That remains uniquely human—the ability to recognize and articulate our own hypocrisy, to feel shame at the gap between our ideals and our actions.

The essay you’re reading is neither fully mine nor meaningfully the machine’s. It exists in the increasingly common liminal space between human and algorithmic production—a space that more and more of our written culture will inhabit, whether we acknowledge it or not.

This final confession doesn’t resolve the tensions I’ve described; it intensifies them. But perhaps that’s the most honest position from which to continue the conversation: not from imagined purity, but from acknowledged complicity.

