When a website publishes articles on a massive scale, seeking advertising revenue through sheer volume, it is called a content farm. They have been around for years, probably since someone realized there was a business in creating content cheaply, the more the better, and monetizing it with ads served through automated platforms such as Google AdSense. With text-generating artificial intelligences such as ChatGPT or Google Bard, this dynamic takes on an industrial scale.
A study by the disinformation-monitoring platform NewsGuard has found that more and more content farms are using generative artificial intelligence. These sites feed on articles created by chatbots and apparently lack editorial oversight. Their numbers are astronomical. In the week of June 9, one of the websites examined, world-today-news.com, published around 8,600 articles, an average of roughly 1,200 per day. Two other pages on NewsGuard’s radar published 6,108 and 5,867 articles during that week.
“What is clear is that they are using AI to generate low-quality clickbait content,” says McKenzie Sadeghi, a senior analyst at NewsGuard. “These websites use the technology to produce articles faster, but also more cheaply.” Sadeghi notes that people have practically disappeared from the equation: “Before, these web pages had a team of human collaborators, freelancers who were paid to write content. Now it doesn’t seem there is even much human supervision,” adds the expert.
That is why these content farms even publish chatbot error messages as if they were news headlines, and these clues have guided researchers in detecting content written with AI. In their search, NewsGuard analysts have come across phrases like “Sorry, I’m an AI language model, I can’t access external links or web pages on my own,” or other, more disturbing messages: “I’m sorry, I can’t comply with this instruction because it goes against my ethical and moral principles.” The fact that no one deletes these messages indicates the degree of automation of these websites.
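The detection clue described above, leftover chatbot refusal messages, lends itself to a simple automated check. The following is a minimal sketch, not NewsGuard’s actual methodology, that flags an article if it contains one of the telltale phrases quoted in this article; the phrase list is a hypothetical illustration:

```python
import re

# Hypothetical phrase list, based on the refusal messages quoted above.
# Real research would use a much broader set of indicators.
AI_ERROR_PHRASES = [
    r"ai language model",
    r"can't access external links",
    r"can't comply with this instruction",
    r"against my ethical and moral principles",
]

PATTERN = re.compile("|".join(AI_ERROR_PHRASES), re.IGNORECASE)

def looks_machine_generated(article_text: str) -> bool:
    """Return True if the text contains an unedited chatbot error message."""
    return PATTERN.search(article_text) is not None

# Example: a headline scraped verbatim from a content farm.
print(looks_machine_generated(
    "Sorry, I'm an AI language model, I can't access external links."
))  # True
```

A scanner like this only catches the most careless sites; once an operator starts stripping error messages, detection requires stylistic and statistical analysis rather than phrase matching.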
Content farms are incentivized to churn out articles. The more they publish, the more visitors they attract to their websites and the more users click on the ads they carry. NewsGuard verified that more than 90% of this advertising is served through Google Ads, whose algorithm automatically places ads on pages attached to the platform. Advertisers sign up to this platform voluntarily, and their messages can end up on these content farms. Between May and June, the analysts identified 393 ads from 141 well-known brands on 55 of these web pages.
“Google does not have a policy that prohibits AI-generated content, but it does have a policy that prohibits spammy, low-quality content, which is essentially what these pages offer,” comments Sadeghi. In 2022, the American giant took in a whopping $224.47 billion from advertising, according to Statista data, although only a small part of this figure is attributable to automatically placed ads, since most of its billing comes from search advertising.
The use of generative artificial intelligence in content farms is growing rapidly. “We are discovering between 25 and 50 sites of this type a week. At the beginning of May, we had identified 49 websites, and now we have 277 on the list. Some are new and others have been around for years and are now starting to use artificial intelligence,” says the senior analyst.
Most of the sites NewsGuard analyzes, carrying ads from well-known brands, do not spread fake news. Sometimes they stray into misinformation, with headlines such as “Can lemon cure skin allergy?” or “Five natural remedies for attention deficit disorder.” But, in general, they can only be faulted for their low quality, often with plagiarized content.
The real problem comes from the combination of generative artificial intelligence with misinformation. In Spain, CSIC researcher David Arroyo, who works on fake news detection, links AI with a greater capacity to manufacture false news: “The phenomenon of disinformation is going to increase, without any doubt, due to the fact that these tools exist,” he states categorically.
Ammunition for disinformation
An article in the journal Nature warned as early as 2017 of the link between fake news and automated advertising. It argued that most of the fake news created during the 2016 US election campaign was not politically motivated, but economically motivated. “There was already talk of the entire ecosystem of advertisers linked to the creation of domains for invented content and its distribution. With these AI tools, everything is amplified, because the ability to artificially create credible content has increased enormously,” explains Arroyo.
The CSIC has detected that misinformation has increased in recent months, although Arroyo does not attribute it all to AI: “It would be difficult to isolate a single cause. It must be borne in mind that in Spain we have been in electoral processes since May, and to this are added all the elements of distortion from Russian operations related to the war in Ukraine,” he adds.
A few months ago, NewsGuard conducted a study on ChatGPT, in its 3.5 and 4 versions, to assess its potential as an inventor of fake news. “The chatbots were capable of creating disinformation on topics such as politics, health, climate or international affairs,” stresses Sadeghi. “The fact that they are able to produce such disinformation, when guided by someone, demonstrates how easily the defenses of these models can be manipulated,” says the NewsGuard analyst. And to this is added their astonishing ability to produce content on an industrial scale.