The Danger Of AI Content Farms


Using artificial intelligence (AI) to write content and news is nothing new at this stage. Associated Press started publishing AI-generated financial reports as early as 2014, and since then media outlets including the Washington Post and Reuters have developed their own AI writing technology.

Initially, the technology was used to create boilerplate copy, such as sports reports. AI can simply grab data such as team and player names, times, dates and scores from feeds and then expand it with natural language generation, adding color and flavor to turn it into a readable article.

Just a few years ago, this technology was completely proprietary and only available to media corporations that could afford to buy and run it. Today, anyone can use AI to generate an article in seconds, and with just a little technical know-how, they can set up a “content farm” designed to produce and publish online content 24/7.

Recently, a NewsGuard investigation discovered nearly 50 websites publishing content created entirely by generative artificial intelligence. The report describes the articles as “low quality” and “clickbait.” Some appear to exist simply to generate money by displaying advertisements and affiliate links to readers. Others may have a more sinister purpose, such as spreading misinformation, conspiracy theories or propaganda.

So let’s take a look at some of the threats posed by this new breed of automated content farm and explore some of the steps we can take to protect ourselves against them.

Disinformation and propaganda

Even without robots churning out content day and night, there’s a lot of bad information on the internet. Given the speed at which AI can generate articles, this is only likely to increase. The real danger comes when this information is used to maliciously mislead or promote a false narrative. Conspiracy theories exploded during the global Covid-19 pandemic, causing confusion and alarm among an already fearful public. We’ve also seen a huge rise in “deepfakes” – believable AI-generated images or videos of people saying or doing things they never said or did. Combined, these tools can be used by those who want to push a political or social agenda to deceive us in ways that could be very harmful.

Many of the websites highlighted by NewsGuard conceal their ownership, as well as the details of those with editorial control. This can make it difficult to determine when an agenda might be at play, as well as to establish liability for defamation or the dissemination of dangerous or malicious falsehoods.

Copyright infringement

Some of the content farms identified so far seem to exist only to rewrite and republish articles originally produced by mainstream outlets such as CNN. It should also be noted that the training data these AI systems learn from is often taken from copyrighted works created by writers and journalists.

This can make life difficult for anyone who relies on creative work of all kinds – including writers, artists and musicians – to make a living. It has already led to the creation of the Human Artistry Campaign, which aims to protect the rights of songwriters and musicians and safeguard their work from AI plagiarism. As noted above, many of these content farms are effectively anonymous, making it difficult to track down and take action against people using AI to infringe copyright. As things stand, this is something of a legal “grey area”: nothing currently prevents AI-created works that are merely “inspired” by human works, but society has yet to determine how far this will be tolerated in the long run.

Spreading clickbait

Many of the AI-generated articles discovered so far are clearly there only to put advertisements in front of an audience. The AI is told to include keywords in the hope that the articles will rank highly on search engines and attract readers. It can also be instructed to give articles intriguing, shocking or scary titles that encourage users to click on them.

The danger is that this makes it difficult for us to find real, valuable information. Distributing advertising online is clearly not a crime – it funds the vast amount of media we consume and the services we use online. But the speed and consistency with which AI content can be produced creates the risk that search results will be muddied and our ability to find the right content will be diluted. It’s already far cheaper to create AI content than human content, and the output of these farms can be scaled almost infinitely at very little cost. This leads to the homogenization of content and makes it difficult for us to find unique perspectives and valuable, in-depth investigative reporting.

Consequences of biased data

Bias is an ever-present danger when working with AI. But when it’s present in the training data used to drive algorithms that generate content at scale, it can have particularly insidious consequences. An AI system is only as good as the data it’s trained on, and the old computing adage “garbage in, garbage out” is magnified when applied to machines that produce content at scale. Any bias contained in the training data will infect the generated content, perpetuating the misinformation or prejudice it carries.

For example, if a poorly constructed survey that forms part of the AI training data overrepresents the views of one segment of society while minimizing or underrepresenting the views of another, the AI-generated content will reflect the same bias. This can be particularly damaging if those whose views are marginalized are vulnerable or a minority. We’ve already seen that the operators of these content farms appear to have little oversight of their output, so it’s possible that the spread of this kind of biased or harmful material could go unnoticed.
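To make the mechanism concrete, here is a minimal, entirely hypothetical sketch in Python. It uses a made-up survey dataset in which one group’s view is overrepresented 90 to 10, and a toy “generator” that simply samples from the frequencies it was trained on – a crude stand-in for how statistical text generators mirror their training data. The names and numbers are illustrative assumptions, not drawn from any real study.

```python
import random
from collections import Counter

# Hypothetical, skewed survey corpus: group A's view is heavily
# overrepresented relative to group B's.
survey_responses = (
    ["policy X is great"] * 90 +    # group A (overrepresented)
    ["policy X is harmful"] * 10    # group B (underrepresented)
)

# A toy "model": it just learns the frequency of each view and
# samples from that distribution when asked to generate content.
counts = Counter(survey_responses)
total = sum(counts.values())

def generate(n=1000, seed=42):
    """Generate n items of 'content' by sampling the learned frequencies."""
    rng = random.Random(seed)
    views = list(counts)
    weights = [counts[v] / total for v in views]
    return rng.choices(views, weights=weights, k=n)

output = generate()
share_positive = output.count("policy X is great") / len(output)

# The generated "content" reproduces the 90/10 skew baked into the
# training data, regardless of the true balance of opinion.
print(f"Share of generated items echoing group A: {share_positive:.0%}")
```

However simplistic, the sketch shows the core problem: the generator has no notion of whether the 90/10 split reflects reality, so scaling it up simply amplifies whatever imbalance the training data contained.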

Ultimately, biased AI output is bad for society because it perpetuates inequality and creates division. Amplifying this by publishing thousands of articles thrown out day and night is unlikely to lead to anything good.

What can be done?

No one would argue that there is never an agenda behind human-authored journalism, or that human-run media never make mistakes. But most countries and jurisdictions have safeguards in place, such as guidelines stipulating that news reporting and opinion must be kept separate, and laws relating to libel, slander and editorial liability.

Regulators and legislators must ensure that these frameworks remain fit for purpose in an age when content can be created and distributed autonomously at scale.

In addition, responsibility for mitigating the harm clearly lies with the technology companies that create the AI tools. They must take steps to minimize the impact of bias and build in systems for accuracy, fact-checking and copyright recognition.

And as individuals, we must take steps to protect ourselves. An essential skill in the age of AI is critical thinking: the ability to evaluate the information we come across and make judgments about its accuracy, truthfulness and value, especially when we are not sure whether it was created by a human or a machine. Education certainly plays a role here, and the awareness that not everything we read may be written with our best interests in mind should be instilled from a young age.

Overall, addressing the dangers posed by large, autonomous, and often anonymous content distributors will likely require smart regulators, responsible businesses, and a well-informed public. This will ensure that we can continue to enjoy the benefits of ethical, responsible AI while mitigating the harm that can be done by those looking to make a quick buck or mislead us.

To stay up to date with new and upcoming business and technology trends, be sure to subscribe to my newsletter, follow me on Twitter, LinkedIn and YouTube, and take a look at my books, Future Skills: 20 Skills and Competencies Everyone Needs to Succeed in the Digital World and The Future Internet: How Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.






Forbes – Innovation
