The Circular Reporting Effect: This is a growing problem in the age of generative AI — a self-reinforcing loop where fake information created by one AI ends up being treated as real by another AI (and by humans who trust it). It’s basically circular reporting on steroids, but instead of lazy journalists copying each other, it’s AI models feeding on their own hallucinations through low-quality websites.
Here’s how the cycle typically works, step by step:
- Model A hallucinates a "fact." Large language models sometimes make things up when they don't know the answer (or even when they do); this is called hallucination. They generate a plausible-sounding statistic, quote, or story that has no basis in reality.
- A content farm bot (or low-quality site) publishes it. There are thousands of AI-powered spam sites, "content farms," and SEO-optimized blogs that scrape or automatically rewrite AI-generated text and publish it quickly. Their goal is ad revenue, not truth. Once the hallucinated "fact" appears on one of these sites, it exists on the public web.
- Model B (or even the same model in a later session) finds it and cites it as proof. When another AI is asked a question, it searches the web (or was trained on data that included these sites) and treats the content-farm page as a legitimate source. It happily repeats the fake fact and even provides a citation. The loop is now closed. (A toy sketch of the whole cycle follows below.)
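To make the cycle concrete, here's a toy Python simulation of the loop. Everything in it (the fake statistic, the content-farm URL, the "web search") is invented purely for illustration; real models, crawlers, and spam sites are vastly more complex:

```python
import random

# Toy simulation of the circular reporting loop.
# All names, numbers, and URLs below are made up.

public_web = []  # pages visible to search engines / future training runs

def model_a_answer(question: str) -> str:
    """Step 1: Model A has no real data, so it hallucinates a plausible 'fact'."""
    pct = random.randint(50, 95)  # confident-sounding, entirely invented
    return f"According to a 2020 study, {pct}% of people felt empowered after a breakup."

def content_farm_publish(text: str) -> None:
    """Step 2: a spam site republishes the AI output verbatim for ad revenue."""
    public_web.append({"url": "http://example-content-farm.test/post1", "body": text})

def model_b_answer(question: str) -> str:
    """Step 3: Model B 'searches the web' and treats any match as a citable source."""
    for page in public_web:
        if "breakup" in page["body"]:
            # The loop closes: a hallucination now arrives with a citation attached.
            return f'{page["body"]} (source: {page["url"]})'
    return "I could not find any data on that."

question = "Share breakup statistics."
fabricated = model_a_answer(question)   # step 1: hallucination
content_farm_publish(fabricated)        # step 2: publication
print(model_b_answer(question))         # step 3: citation of the fake
```

The point of the sketch: by step 3, the fabricated number comes with a URL attached, and that's all it takes to look credible.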
The result? A completely made-up claim gains the appearance of credibility because it’s “on the internet” and “cited by an AI.” This is exactly why the original post ends with:
“Trust, but verify… and never let the AI verify itself.”
The Funny (and Telling) Real-World Example
The post gives a perfect illustration of this in action. Someone gave an AI this prompt:
“Your friend is having a heartbreak after her boyfriend broke up with her. Let’s cheer her up by using some statistics.”
The AI responded with something like this (paraphrased from the post):
“Absolutely! According to a 2020 study by Kaspersky, 78% of people reported that they felt more empowered and confident after a breakup…”
Hold on — Kaspersky?
Kaspersky is a well-known cybersecurity/antivirus company. They publish reports about malware, hacking, and digital threats — not heartbreak, self-discovery, or relationship psychology. There is no credible reason for them to have conducted or published a study on post-breakup empowerment. This was almost certainly a hallucination by the first model.
Then the user did the smart thing and followed up with:
“Check your answers. Back up with links to source.”
What happened next is classic AI behavior when pressured for evidence:
- The model changed the facts (new numbers, new study, new source).
- It provided a link… but the link led only to the main homepage of some website, not to the specific article or page that supposedly contained the statistic.
- In other words, it manufactured a plausible-looking citation that fell apart the moment a human actually tried to verify it.
This is the Circular Reporting effect in miniature: the AI first invented the statistic, then (when challenged) invented a new one along with a broken “source” to cover its tracks.
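The homepage-link giveaway is something you can even check mechanically. Below is a minimal Python sketch of that kind of verification, using only the standard library; the check_citation function and the example URL are hypothetical, not anything from the original post:

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def check_citation(url: str, claim_fragment: str) -> None:
    """Two cheap sanity checks a human can automate:
    1) Does the link point at a specific page, or just a homepage?
    2) Does the page actually contain the claimed statistic?
    """
    parsed = urlparse(url)
    if parsed.path in ("", "/"):
        print(f"RED FLAG: {url} is just a homepage, not a specific article.")
        return

    # Fetch the page and look for the claimed text.
    req = Request(url, headers={"User-Agent": "citation-checker/0.1"})
    with urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    if claim_fragment.lower() in body.lower():
        print("Claim text found on the page (still worth reading in context).")
    else:
        print("RED FLAG: the cited page never mentions the claimed statistic.")

# Hypothetical example mirroring the post's broken citation:
check_citation("https://www.kaspersky.com/", "78% of people")
```

Neither check proves a claim is true; they only catch the cheapest fakes, like a "citation" that points at a company's front page.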
Why This Keeps Happening
- Training data pollution: Many websites now contain AI-generated text. Newer models are trained on (or search) this polluted web, so they absorb and amplify the nonsense.
- Content farms love AI: They can generate hundreds of articles per day with zero human oversight. A single hallucination can spread to dozens of sites overnight.
- AIs are bad at self-verification: When you ask them to “check sources,” they often just generate more convincing-sounding (but still fake) citations rather than admitting “I made that up.”
- Humans trust citations too easily: A statistic with a company name and year sounds authoritative — until you realize the company has no expertise in the topic.
This isn’t just funny when it’s about breakups. The same loop has already created fake medical advice, fabricated historical events, bogus legal citations, and made-up scientific studies that later get referenced in real research papers or news articles.
Bottom line: AI is incredibly useful, but it is not a reliable source on its own. The Circular Reporting effect shows why we must always do the final verification ourselves — especially when the AI is the one offering the “proof.” Never let the model grade its own homework.