If you’ve spent any time reading AI-generated prose, you’ve seen what happened to the em dash. It used to do a specific job. It let you pivot without sounding stiff. It let you join two clauses without flattening the sentence. I used it a lot. Not because it looked “writerly,” but because it worked.
Now it’s everywhere.
Models correctly learned that em dashes often appear in strongly edited prose. Then they did what models do. They turned a real pattern into a reflex. Now the em dash appears every few paragraphs, whether the sentence needs it or not. Sometimes every few sentences. After a while, you stop reading it as a choice. You read it as residue.
That annoys me more than it probably should.
It annoys me because the mark used to tell you something. Not much, but something. It suggested a writer deciding pace, emphasis, and connection. Now it often suggests the opposite. It reads like a system reaching for one more averaged marker of “good writing.”
I spent part of a weekend stripping em dashes out of a book manuscript for exactly this reason. They now read, unfairly but undeniably, as a tell. That’s the part people miss when they talk about AI prose in broad ethical slogans. The problem is not just that the output can be generic. The problem is that it contaminates the signals readers once relied on to infer care, experience, and intent.
The em dash is just the obvious example because it’s easy to spot. The deeper problem is that AI is flattening all sorts of small cues in rhythm, transition, emphasis, and tone. Those cues were never perfect. But they helped you guess whether the person on the other end was thoughtful, pompous, careful, fraudulent, experienced, or scared. They helped you judge whether a mind was actually present in the prose.
We’re losing that.
Not because the machines are evil. And not because every use of AI is some moral failure. We’re losing it because these systems make it cheap to produce competent-looking prose at scale, and once that becomes possible, people will use it.
The result is more language that looks polished but tells you less about the mind behind it. That’s fine if all you need is filler. It’s a real loss if you’re trying to tell whether a person is actually there.