
The Curious Case of Silver Elite
LLM sentences sometimes go off the rails.
On YouTube, creator Meredith Novaco posted a video dissecting Silver Elite by Dani Francis, a book that’s caused a stir in the weird little corner of the internet known as “BookTok.” The nickname suggests a broader reading community, but IRL, IMO, BookTok mostly revolves around romantasy (fantasy novels that follow romance-genre plot rules).
That’s probably more detail than you wanted already. But oh, there’s more.
Novaco suspects that Del Rey Books, an imprint of Penguin Random House, may have conjured an author and prompted a large language model (LLM) to write a novel based on elements from Suzanne Collins’s The Hunger Games (you’ve heard of it; the series sold over 100 million copies and counting), and Rebecca Yarros’s Fourth Wing (a 2023 viral hit that sold 2.7 million copies upon release).
Her theory: Del Rey allegedly told a team of staffers to prompt an LLM over and over until they had something publishable. (Allegedly! Don’t sue us, Del Rey.) The team may have fed those two blockbuster titles into the model; the resulting similarities in plot, character names, and key events are simply too on the nose to be coincidence.
On top of this, Novaco points out, “Dani Francis” appears to be a ghost from the machine—no photos, no social media presence, no public appearances, not even a LinkedIn page. To say that’s unusual for a viral BookTok author is like saying the Library of Alexandria was a decent little book nook.
These points alone would’ve won me over to Novaco’s theory. But what truly sealed the bindings was her sentence analysis—the way she shows where LLMs lose the plot. Predictive text can lead a language model’s sentences down a primrose path that ends in brambles. If you know what to look for, Novaco insists, you can spot AI-crafted fiction a mile away.

Prediction, Not Creation
I’m human. When I write, I’m thinking. Sometimes I think for a long time, and the blinking cursor mocks me for it, as it mocks most writers.
When an AI writes, it’s predicting. It never suffers the curse of the blank page or that damn flashing vertical line.
ChatGPT, Gemini, Claude—these models aren’t brainstorming or reminiscing or deciding what sounds good out loud. They’re guessing the next most probable word based on all the words that came before. They’ve read billions of examples of human writing. We’re quite predictable, it turns out.
So, like a squirrel racing up a tree, an LLM follows branches, word after word after word, until it finds itself with a complete sentence. And then it jumps onto the next one, forgetting all about that spent branch.

You’ve already seen this at work in your phone’s predictive text. Type “I went to the store to buy…” and the model runs through tens of thousands of candidates: milk, groceries, a gun, some peace and quiet. Each has a probability score. The algorithm selects the word that statistically fits best, given the patterns it’s learned.
It’s not “thinking.” It’s two maths in a trench coat pretending to be a muse.
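If you want to see that idea with the trench coat off, here’s a toy sketch in Python. It isn’t how ChatGPT or your phone’s keyboard actually works under the hood; the candidate words, the probability scores, and the little predict_next function are all invented purely for illustration.

```python
# Toy next-word prediction: a handful of invented candidates and made-up scores.
# A real model scores every token in a vocabulary of tens of thousands.
next_word_probs = {
    "milk": 0.31,
    "groceries": 0.24,
    "flowers": 0.06,
    "a gun": 0.02,
    "some peace and quiet": 0.01,
}

def predict_next(probs: dict) -> str:
    """Pick the candidate with the highest probability score (the greedy choice)."""
    return max(probs, key=probs.get)

prompt = "I went to the store to buy"
print(prompt, predict_next(next_word_probs))
# -> I went to the store to buy milk
```

A real model repeats that guess for every single word, each guess conditioned on everything that came before, which is how a sentence gets assembled without anyone ever deciding what it should mean.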
The Sentence-Structure Tell
Novaco makes points I hadn’t heard anyone articulate before. I’ve worked with ChatGPT for over a year, and my experience taught me it’s a terrible fiction writer (for now). Its sentences manage to be strange and banal at the same time. The absence of a body, of sensory experience, is painfully obvious.
She shows how these sentences can take a “weird turn”: grammatically correct, sensorially absurd.
For instance:
“The cell is painted a dull shade of gray that hurts my eyes.”
Dull shades don’t hurt our eyes; neon ones might. And while millennial gray can make our eyes roll, it doesn’t sting them.
Another example:
“Our quiet footsteps echo off the concrete walls, muffled by the linoleum floor.”
Setting aside the choice to reference linoleum in a fantasy novel (what?), the physics are all wrong: the same footsteps can’t echo off the walls and be muffled by the floor at once. An LLM hasn’t walked a cold, echoing hallway. It doesn’t know that quiet footsteps, by their very nature, don’t echo, and that linoleum muffles nothing. Grammatically, the sentence is fine. Functionally, it’s nonsense.
LLMs are trained to avoid risk: their default is the safest, most probable way to arrange information. But when humans write, we build images that reflect emotion and obey the intuitive physics our bodies have lived.
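To see why that default drifts toward the bland, here’s one more toy sketch in the same spirit, with numbers I made up on the spot. Greedy decoding always takes the most probable phrasing; a sampler with a higher “temperature” sometimes takes the stranger branch a human writer might actually risk.

```python
import random

# Invented scores for how a sentence about a gray cell might continue.
# Real models weigh tens of thousands of options; this is just the shape of the idea.
continuations = {
    "a dull shade of gray": 0.55,                   # safest, most probable
    "the color of old dishwater": 0.22,
    "gray as a wet Tuesday": 0.15,
    "the gray of a hospital waiting room": 0.08,    # riskier, more human
}

def greedy(probs: dict) -> str:
    """Always take the most probable continuation: the risk-free default."""
    return max(probs, key=probs.get)

def sample(probs: dict, temperature: float = 1.0) -> str:
    """Occasionally take a less probable branch; higher temperature means more risk."""
    words = list(probs)
    weights = [p ** (1 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

print("greedy: ", greedy(continuations))        # always "a dull shade of gray"
print("sampled:", sample(continuations, 1.5))   # sometimes something stranger
```

Even the sampler is only rolling dice over the same learned scores. Nothing in either function has ever seen gray, or a cell, or a hospital waiting room.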
The Human Sense
Novaco’s video fed my soul. Psycholinguistic analysis is exactly the right way to root out AI writing. She highlighted passages that felt human—then contrasted them with stretches of unmistakable AI slop.
For those of us who’ve been working closely with LLMs, it was exhilarating to see someone break down, line by line, what rushed, AI-assisted fiction looks like.
Because the sentence tells the story—and when the sentence rings false, it’s not the plot that gives the AI away. It’s the empty echoes of its uncanny literary valley.
