"AI slop" is a lazy term that means everything and therefore nothing. Let me try to be more precise.
After reviewing a few hundred AI-generated blog posts, some written by Branchpost and some by other tools, I've found five distinct patterns that make an AI post bad. They have different causes and require different interventions.
Pattern 1: The false confidence assertion
"This approach is always the right choice for production workloads."
No qualifier. No evidence. No "in our testing" or "in the cases we've seen." The model is trained to sound authoritative, and authority sounds like certainty.
Detection: Sentence patterns like "X is always", "X is never", "X is the best"; high-confidence language without citations.
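A minimal sketch of that check in Python. The phrase list and the hedge/citation exemptions are my own starting points, not a canonical list; tune them against your corpus.

```python
import re

# Seed patterns; grow this list from what your model actually produces.
ABSOLUTES = [
    r"\bis always\b",
    r"\bis never\b",
    r"\bis the best\b",
    r"\bthe only (?:way|choice|option)\b",
]
# Sentences that already hedge or cite get a pass.
EXEMPTIONS = r"\bin (?:our|my) (?:testing|experience)\b|\bin the cases\b|\[\d+\]"

def find_absolute_claims(text: str) -> list[str]:
    """Return sentences that assert without qualification or evidence."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [
        s for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in ABSOLUTES)
        and not re.search(EXEMPTIONS, s, re.IGNORECASE)
    ]
```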
Fix: A claims-to-avoid list in the publication context. Also: a reviewer who asks "says who?"
Pattern 2: The structure ghost
The post has all the right sections — introduction, three points, conclusion — but the sections don't build on each other. You could shuffle paragraphs 2 through 5 and no reader would notice.
This is the hardest pattern to prompt away because it's a structural property that shows up in aggregate, not in any single sentence. A quality eval that scores "does the conclusion refer back to the problem in the introduction" can catch it.
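A cheap lexical version of that eval, assuming you can already split a post into introduction and conclusion: score how much of the intro's vocabulary the conclusion echoes. A real eval would likely use an LLM judge; this is the zero-cost baseline, and the stopword list is an illustrative stub.

```python
import re

STOPWORDS = {"this", "that", "with", "have", "from", "they", "what",
             "your", "post", "will", "more", "when", "which"}

def content_words(text: str) -> set[str]:
    words = re.findall(r"[a-z']+", text.lower())
    return {w for w in words if len(w) > 3 and w not in STOPWORDS}

def callback_score(intro: str, conclusion: str) -> float:
    """Fraction of the intro's content words echoed in the conclusion.
    A crude proxy for 'does the conclusion refer back to the problem'."""
    intro_words = content_words(intro)
    if not intro_words:
        return 0.0
    return len(intro_words & content_words(conclusion)) / len(intro_words)
```

A score near zero is a strong structure-ghost signal; the useful threshold depends on your corpus.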
Structure ghosts pass a line-by-line read but fail a skimming read. Most developers skim first. A skimmable post with coherent structure gets read; a polished post with shuffle-safe paragraphs gets closed.
Pattern 3: The hedge avalanche
"It's worth noting that, while there are many perspectives on this, some experts suggest that it might be beneficial to consider…"
One hedge per sentence, times twelve sentences. The model is trained to avoid confident claims about uncertain topics; in writing, that training produces paralysis prose.
Detection: Counts of "it's worth noting", "some might argue", "it depends on your use case" per 1000 words.
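That count is trivial to implement. The phrase list below is a seed, not exhaustive; grow it from whatever your model actually produces.

```python
HEDGES = [
    "it's worth noting",
    "some might argue",
    "it depends on your use case",
]

def hedges_per_1000_words(text: str) -> float:
    # Normalize curly apostrophes so "it's" matches both forms.
    lowered = text.lower().replace("\u2019", "'")
    words = max(len(text.split()), 1)
    hits = sum(lowered.count(h) for h in HEDGES)
    return hits * 1000 / words
```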
Fix: Explicit tone guidance: "Make claims directly. If you're uncertain, say so once and move on."
Pattern 4: The assumed context skip
The post uses a term — say, "content schema" — that was in the prompt brief but isn't explained for the reader. The model knows the term is familiar to the person who wrote the prompt. It doesn't know the reader has never heard it.
This is a context leak. The model treats the generation context as shared knowledge with the reader.
Fix: Ask the model to identify terms that require definition before generating the post. Easier said than done, but a separate "find the jargon" call is surprisingly effective.
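A sketch of that pre-pass, with `call_llm` standing in for whatever chat-completion client you use (it's a placeholder, not a real API):

```python
JARGON_PROMPT = """List every term in the brief below that a reader \
outside this project would need defined. Return one term per line, \
nothing else.

Brief:
{brief}
"""

def find_jargon(brief: str, call_llm) -> list[str]:
    # call_llm: any function that takes a prompt string and returns
    # the model's text response.
    response = call_llm(JARGON_PROMPT.format(brief=brief))
    return [line.strip() for line in response.splitlines() if line.strip()]
```

Feed the returned terms back into the generation prompt with an instruction to define each one on first use.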
Pattern 5: The filler section
"## Conclusion\n\nIn this post, we've explored X, Y, and Z. As we've seen, these concepts are important for modern development workflows. We hope this has been helpful."
This is the model completing the expected shape of a blog post. It adds no information. It exists because blog posts are supposed to have conclusions.
Detection: Mechanical. Check whether the conclusion contains any sentence whose content isn't already in the post body. If it doesn't, it's filler.
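Here's one way to mechanize it. The 0.8 threshold is an arbitrary starting point, and the word-level containment test is a rough stand-in for real semantic matching:

```python
import re

def _normalize(text: str) -> str:
    return re.sub(r"[^a-z0-9 ]", " ", text.lower())

def conclusion_is_filler(body: str, conclusion: str,
                         threshold: float = 0.8) -> bool:
    """True when nearly every conclusion sentence only restates the body."""
    body_words = set(_normalize(body).split())
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", conclusion) if s.strip()]
    if not sentences:
        return True
    restated = sum(
        1 for s in sentences
        if set(_normalize(s).split()) <= body_words  # no new vocabulary
    )
    return restated / len(sentences) >= threshold
```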
Fix: Instruct the model to end on the last substantive point. No summary. No "we hope." Just stop.
The patterns that are hard to detect automatically:
- Accurate code that demonstrates an antipattern the post argues against
- A real benchmark misapplied to a different workload
- A reasonable claim made confidently that was true in 2022 but wrong today
These require a human who knows the domain. No generation pipeline catches them. That's why the PR review step matters.