The internet is dying — researchers uncovered 200 fake AI websites you’ve likely visited

Moonvalley AI image
(Image credit: Moonvalley)

Fraudsters are no longer just writing fake articles — they’re building 'slop factories.' A massive investigation into the 'AutoBait' network has just exposed 200 websites that were caught using hidden AI prompts to intentionally manipulate readers' emotions for profit.

As first reported by Axios, the investigation uncovered a network of more than 200 websites publishing AI-generated articles designed purely to capture advertising revenue, highlighting how easy it has become to mass-produce fake content online.

At first glance, the sites look like normal lifestyle blogs or news pages. But researchers say they’re actually part of a coordinated operation powered almost entirely by AI.

The discovery is raising new concerns about how generative AI could transform the internet. AI already produces legitimate content (Anthropic, for example, has published blog posts written by Claude), but it's doing more than simply helping people write: it's enabling large-scale spam networks to mass-produce content that's cheap, automated and difficult to detect.

The 'AI slop' factory

AI chatbot images on a phone screen

(Image credit: Getty Images)

The investigation was conducted by researchers at cybersecurity firm DoubleVerify, who uncovered what they call an “AI slop factory.”

The operation — dubbed AutoBait — consists of hundreds of domains that appear independent but actually run on the same automated system for producing articles and images.

Each site publishes:

  • AI-generated articles
  • AI-created images
  • slideshow-style clickbait pages
  • sensational headlines designed to maximize engagement

The goal isn’t journalism — it’s advertising impressions

icon depicting a monitor with a website and display advertising

(Image credit: Visualeat from the Noun Project)

Researchers say the network has already generated tens of millions of ad views, often without advertisers realizing where their ads are appearing.

But what makes this case particularly interesting is how researchers discovered the operation. The site operators accidentally left the AI prompts and code used to generate the articles exposed in their website JavaScript, providing a rare glimpse into how the system worked.

The prompts instructed the AI to do things like:

  • lead with sensational or shocking information
  • inject strong emotions such as fear or urgency
  • create slideshow-style articles designed to keep users clicking
  • generate images that look like authentic smartphone photos
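To picture how instructions like these could leak, here is a minimal sketch of prompt configuration sitting as plain strings in a page's client-side JavaScript. Every name and string below is invented for illustration; none of it is taken from the actual AutoBait code, which researchers have not published.

```javascript
// Hypothetical sketch only: illustrative names and text, not the real AutoBait code.
// Prompts embedded in client-side JavaScript are readable by anyone who
// views the page source, which is how such a system could be reconstructed.
const articleConfig = {
  prompt: [
    "Lead with the most sensational or shocking detail.",
    "Inject strong emotions such as fear or urgency.",
    "Split the article into a multi-page slideshow to keep users clicking.",
  ].join(" "),
  imagePrompt: "A candid photo that looks like it was taken on a smartphone.",
};

// Nothing here is hidden from the browser's developer tools.
console.log(articleConfig.prompt.includes("sensational")); // prints true
```

Because browsers must download JavaScript to run it, anything shipped this way is effectively public, whether the operators intended that or not.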

In other words, the entire system was optimized to manipulate attention and maximize ad revenue. That's AI slop in a nutshell. And the economics are shockingly cheap.

Perhaps the most surprising detail is how inexpensive these sites are to run. Researchers estimate the operators spent less than $2.25 to generate each article page, thanks to AI automation.

Before generative AI, creating hundreds of websites with thousands of articles would have required a large team of writers. Now it can be done with a few prompts and an automated publishing system. As a journalist, I find this kind of slop unsettling. I liken it to junk food: readers are consuming something, but there's nothing of value in it.

The term AI slop has become shorthand for low-quality content generated in huge volumes by artificial intelligence. It typically refers to digital content that prioritizes speed and quantity over accuracy or originality, often created for clicks or advertising revenue rather than real readers.

As someone who spends a lot of time researching and testing AI, I can recognize AI-written slop the way a pawn shop owner can recognize fake gold. Essentially, AI slop tends to share the same traits: repetitive phrasing, shallow explanations, overly generic writing and AI-generated images meant to look realistic.

At scale, the result is a flood of filler content that can make it harder to find trustworthy information online.

Bottom line

Content farms aren’t new, but generative AI has made them dramatically easier to build and operate. Instead of hiring writers or editors, bad actors can now rely on automated systems to generate articles, publish them across large networks of sites and earn revenue through advertising.

The concern is that these sites often look legitimate at first glance, making them difficult for readers — and sometimes advertisers — to distinguish from genuine publications. As AI tools continue to improve, the cost of launching similar networks is likely to drop even further.

That’s why some experts warn the internet may be entering what’s increasingly described as the “AI slop” era, where massive volumes of automatically generated content compete for attention online. In this environment, the bigger challenge is maintaining trust in what people read and share on the web.


Google News

Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.



Amanda Caswell
AI Editor

Amanda Caswell is one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.

Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.

Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.
