The Chicago Sun-Times Printed Books That Don't Exist. Nobody in the Chain Thought to Check.
A freelancer used AI to write a summer reading list. Nine of the fifteen books do not exist. Three organisations were involved in producing that content. Not one of them had a verification step. This is not an AI story. It is an accountability story.

In my presales days I watched a VP make a big decision in a meeting.
He delegated it to his director. The director passed it to a manager. The manager handed it to a team lead. The team lead gave it to a junior developer.
By the time it reached the junior developer, the original weight and context of the decision had completely disappeared. He had no idea why it mattered. No idea what the stakes were. No idea what would happen if he got it wrong.
He was clueless. Not because he was incompetent. Because nobody in the chain above him had passed down anything except the task itself.
I thought about that story when I read what happened to the Chicago Sun-Times in May 2025.
The Sun-Times published a special summer section called Heat Index. Inside was a summer reading list. Fifteen books recommended for the perfect summer escape.
Nine of those fifteen books do not exist.
The authors are real. Isabel Allende is real. Andy Weir is real. Percival Everett won the Pulitzer Prize for fiction in 2025. But the books attached to their names were invented by AI. Tidewater Dreams by Isabel Allende. The Last Algorithm by Andy Weir. The Rainmakers by Percival Everett.
None of them exist.
Min Jin Lee found out her name was on the list and posted on social media: "I have not written and will not be writing a novel called Nightshade Market. Thank you."
Here is how it happened.
The Sun-Times licensed a 64-page special section from King Features, a unit of Hearst. King Features assigned the content to a freelance writer named Marco Buscaglia. Buscaglia used AI to generate the reading list and other stories in the section. He did not fact-check the output. The section went to print.
Three organisations. One freelancer. Zero verification steps.
Buscaglia was honest about what happened. He said: "I do use AI for background at times but always check out the material first. This time I did not and I cannot believe I missed it because it is so obvious. No excuses. On me 100 percent."
King Features terminated his contract and said his use of AI violated their "strict policy."
The Sun-Times said the content was "not created by or approved by" their newsroom and called it unacceptable.
Everyone pointed at someone else. Everyone had a policy that was violated. Nobody had a process that would have caught it.
This is not an AI story.
AI hallucinating fake book titles is not surprising. That is what AI does when it does not know the answer and nobody tells it to say "I do not know." It fills the gap confidently with something plausible.
The story is that three organisations were involved in producing content that went out under the Sun-Times name to paying subscribers on a Sunday morning. And at no point in that chain did anyone ask one simple question.
Is this actually true.
That question is not an AI governance question. It is the most basic editorial question in journalism. It existed long before AI. The difference is that AI makes the failure faster and more embarrassing than human error would have.
A human writer inventing fake books would be unusual. An AI doing it is completely predictable. Which means the verification step was even more important here than in traditional content production. Not less.
Back to my presales story.
The junior developer at the bottom of that chain was not the problem. He was the symptom. The problem was that a decision with real consequences had been passed down so many times it arrived stripped of all the context that made it matter.
Buscaglia is that junior developer. He made a bad call under deadline pressure. He admitted it immediately and took full responsibility. That honesty matters. It does not change where the failure started.
But the governance failure started three levels above him.
King Features had a strict policy against AI use. Where was the process to enforce it. The Sun-Times had editorial standards built over decades. Where was the review step for licensed content going out under their banner. Three organisations shared responsibility for what reached readers. Where was the accountability structure that connected all three.
Nobody asked those questions before the section went to print. Everyone assumed someone else in the chain had checked.
That assumption is the governance failure.
When you delegate a decision you do not delegate the accountability.
That is true in software delivery. It is true in journalism. And it is true in any organisation deploying AI in a content pipeline it does not fully control.
The question is not whether your policy prohibits misuse of AI. Most organisations have that policy now.
The question is what process ensures the policy is actually followed at the point where the work gets done. Not at the VP level. Not at the director level. At the junior developer level. At the freelancer level. At the place where the actual output gets created.
If the answer is individual judgement on a deadline you do not have a governance model.
You have a policy and a hope.