Writing in the Age of LLMs
In the last couple of years, I’ve written and reviewed several technical papers and blog posts. I often come across LLM-generated writing that feels slightly “off”—sometimes, to be honest, even uninviting. At the same time, I get tremendous value from using LLMs to draft early versions, summarize dense material, and rephrase messy thoughts.
This post details some of my thoughts on writing in a world where much of what we read is now machine-generated. First, I’ll lay out some common patterns of bad writing I see from LLM tools. Then, I’ll defend some writing habits that people often dismiss as “LLM-sounding” but are actually fine—even helpful—when used intentionally. Finally, I’ll share concrete rules and formulas I rely on in my own writing and in the prompts I use to guide LLMs.
Common Patterns of Bad Writing I See from LLM Tools
Here are the red flags I keep seeing—mostly from LLMs, but I suppose also from people trying to sound polished and “formal” in the wrong ways.
Empty “summary” sentences that pretend to conclude a thought
These often show up at the end of a paragraph and sound like:
- “By following these steps, we achieve better performance.”
- “By internalizing these principles, you can cut through the noise.”
Empty summary sentences feel conclusive but say nothing. I try to end with parting thoughts that offer something new—or at least something to chew on—but unfortunately, I haven’t found a reliable recipe for getting LLMs to write with that kind of substance.
Overuse of bullet points and outlines
LLMs often overuse bullet points, especially nested ones. Lists help when items are parallel and independent, but when ideas are connected or need context, a paragraph is usually better.
Flat sentence rhythm
When every sentence is the same length, the writing lacks rhythm and becomes harder to follow. Varied sentence lengths keep readers engaged. They help signal emphasis, guide attention, and control the pace.
Bad example: We recently launched a conversational AI feature that lets users ask questions in plain English and get responses based on their past activity and current session. The system searches a database of help articles, ranks the most relevant ones using a custom scoring function, and passes the top result into a language model to generate the final answer. We spent weeks optimizing each step to keep latency under 300 milliseconds, including caching, pruning irrelevant articles, and tuning prompt templates.
Good example: We just launched a new conversational AI feature. It answers user questions in plain language, using context from the current session. The system searches help articles, scores them with a custom ranking function, feeds the top result into a fine-tuned language model, and runs in under 300ms using caching, pruning, and prompt tuning techniques.
Not the right subject
Every sentence has a subject and a predicate. The subject tells us what the sentence is about; the predicate tells us what the subject is doing or what’s being said about it. Choosing the right subject helps keep the reader focused on the main idea. A common issue in LLM-generated writing is that it often picks the wrong subject. Consider two ways of making that point:
Bad example: Readers are better guided when the subject matches the main idea of the sentence.
Good example: Choosing the right subject keeps the writing clear and focused.
In the bad version, the subject is readers, even though the sentence is about sentence structure, not people. The good version keeps the subject aligned with the topic, making the writing more coherent and easier to follow.
Low information density
The intro below was generated by Gemini 2.5 Pro when asked to draft a blog post on writing in the age of LLMs:
As someone who writes, reviews, and deconstructs complex information for a living, I’ve developed a strong allergy to bad writing. And lately, a lot of that bad writing has a specific, synthetic flavor—the unmistakable scent of an LLM. This post is a guide to navigating the new world of writing, with or without LLM assistance. First, I’ll cover the true pitfalls of LLM-generated text—the red flags that make it feel sterile and unconvincing.
It sounds nice but says very little. The sentences are well-formed, but there’s no concrete insight, no framing, no momentum.
Vagueness
LLM writing often avoids specificity. It refers to ideas without defining them and makes claims without evidence. E.g., “Some experts say prompt engineering is becoming less important. The ability to simply prompt LLMs can have a major impact on productivity.” But who are the experts? What exactly is the impact? On what kind of work, and for whom? Without concrete references or clear stakes, the writing feels vague and insubstantial.
Overuse of demonstrative pronouns
LLM writing leans heavily on words like this, that, these, and those—often without a clear noun in sight. I make this mistake myself, and my advisor flags it every time in my writing (lol). For example: “This creates friction in production.” But what is this? If the noun isn’t in the same sentence or immediately before, the reference becomes vague and the point gets lost.
Fluency without understanding
Some writing sounds correct but doesn’t explain anything. This happens a lot when the writer—or the model—lacks awareness of what the audience actually knows. E.g., “LLMs use attention mechanisms to generate contextually appropriate responses.” While this may feel like a good sentence, it says nothing if the reader doesn’t already know what attention is or how it works.
Moreover, I find that LLMs make up terms that don’t exist, especially for technical content. I’ve seen an LLM write something like “We used GPT-4 for summarization, but it hallucinated details, so we added retrieval grounding.” What is “retrieval grounding”? This is not a term I’ve heard before.
In summary, LLMs can’t reliably distinguish what’s assumed knowledge and what needs explanation, so they often gloss over the hard parts, and a human writer has to fill that gap.
Writing Patterns People Flag as “LLM-Like,” But Are Actually Fine
I’m including this section because I’ve seen people overcorrect in response to LLM writing habits, cutting patterns that are actually helpful when used well. Some structures get labeled as “LLM-sounding” or flagged during review, even though they’re common and effective rhetorical tools. Just because something appears in model-generated text doesn’t make it bad writing. The goal isn’t to avoid sounding like a model; it’s to write with clarity, intention, and control.
Intentional repetition
The effectiveness of repetition depends on how it supports the idea. When it helps clarify or reinforce something complex, it adds value. Good writing also makes space for a bit of predictability—places where the reader can skim or settle—but repetition still needs to be purposeful.
Example: Vector databases store embeddings, or mathematical representations that capture semantic meaning in hundreds of dimensions. In other words, vector databases help find results that are “close” in meaning, not just exact text matches.
Signposting phrases
Phrases like “essentially,” “in short,” “the point is…” are fine if they’re followed by something useful. I like to use them when the writing gets dense, as a signpost helps the reader reorient.
Example: Essentially, instead of classifying the document as a whole, we classify each section independently.
Parallel structure
Sometimes readers see a repeated rhythm and assume it’s LLM-generated. But parallel structure organizes related ideas and makes sentences easier to follow.
Example: The system scales across inputs, stays responsive under load, and returns consistent results even with noisy prompts.
The rhythm supports clarity, and each clause delivers new information.
Section headings that echo a structure
E.g., “Why X fails,” “What to do instead,” “How to know if it worked.” These are clear and predictable, which is exactly what we want. Predictability isn’t a flaw when the content under each heading delivers on the heading’s promise.
Declarative openings
Starting a section with a bold claim or topic sentence can feel robotic if the writing doesn’t back it up. But when used to set expectations—and followed by evidence—such openings can help keep the reader grounded.
Example: LLM evaluations are hard to get right. Many rely on user-defined gold labels or vague accuracy metrics, which do not work for subjective or multi-step tasks.
Em dashes
Em dashes are great for inserting clarifying details, quick shifts, or sharp asides—without breaking the sentence. I love them. When used well, they add rhythm and emphasis. They help writing flow the way people actually talk.
How I Write with LLMs
My writing loop is built around one goal: keep the momentum going. I don’t want to get stuck staring at a blank screen or endlessly tweaking sentences that don’t quite land. Most of my writing, whether for a paper or a blog post, follows the same high-level loop: plan an outline (on paper or in my head), generate a draft, read what I wrote, critique it, and revise. The loop can run at different granularities—sometimes I work a sentence at a time; sometimes I write entire sections before editing.
Writing breaks down in different places for different people. Some stall in the planning phase, unsure how to turn ideas into structure. Others move fast through first drafts but get bogged down in revision. Personally, I tend to move quickly through the outline and get stuck on phrasing—how to say something clearly, not what I want to say. I’m usually sharper at critiquing than generating, which means I often rely on the LLM to help get past those sticking points.
My strategy is to identify where the slowdown is happening and hand off just enough of the task to the LLM to regain momentum. Here’s what that looks like in practice for me:
Narrate the story to the model
When I start writing (especially for something like a paper intro), I begin by “talking through” the structure as if I’m explaining it to a colleague. I paste that rough narrative into the LLM and ask it to generate a detailed outline. I don’t move forward until that outline feels structurally solid.
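The exact wording varies, but a prompt in this spirit might look like: “Here’s the story of the paper as I’d explain it to a colleague: [rough narrative]. Turn this into a detailed outline with one bullet per paragraph, each stating that paragraph’s main claim.”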
Write the paragraph myself, even if it’s rough
Once I have the outline, for every paragraph, I try to write the actual paragraph myself, even if it’s ugly. If I know what I want to say but can’t get the sentence out (unfortunately, this happens often), I’ll write a half-baked version and ask the LLM to help me finish it.
This post includes a real example. I typed: “In the last couple of years, I’ve written and reviewed several technical papers and blog posts. Something always feels slightly off, enough to make the writing quietly uninviting. At the same time, I feel like I get tremendous value from using LLMs to write…” And then just added: “finish it”. The model gave me a few completions. I picked the best one, made a small edit, and moved on.
Use scoped rewrite strategies during revision
When I re-read a sentence or paragraph that feels off, I don’t simply ask the model to “make it better.” I ask for something specific, usually that the LLM follow one of two rhetorical patterns.
The first is to put the subject and verb close together, at the beginning of the sentence. The second is SWBST: Somebody Wanted But So Then. It’s a basic storytelling structure—often taught in early writing education, but surprisingly effective in technical contexts because it conveys motivation, conflict, and resolution in a compact form. The “Somebody” is the actor, “Wanted” states the goal, “But” introduces the obstacle, “So” explains the response, and “Then” describes the outcome. In technical writing, this structure makes it easy to show how a decision was made or how a system evolved in response to a problem. E.g., consider this passage: “We used GPT-4 for summarization. We wanted fluent answers, but it hallucinated facts. So we added a retrieval step. Then we re-ranked outputs based on citation accuracy.” Each sentence does one job. The pattern is simple, but it makes the logic of a decision easy to follow.
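For instance, a scoped request might look like: “Rewrite this paragraph using the SWBST pattern. Keep every technical detail: [paragraph].” The narrower the instruction, the easier it is to judge whether the rewrite actually helped.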
Parting Thoughts
It’s now cheap to generate medium-quality text—and even high-quality text, when the scope is narrow and well-defined. But figuring out what to say, how to frame it, and when and how to go deep is still the hard part. That’s what takes judgment, and that’s what LLMs can’t do for me (yet).
Perhaps the most important mark of good writing, particularly in the age of LLM-generated text, is that the contribution is commensurate with the length. The reader walks away feeling their time was well spent, and this is the bar I strive to meet.