
AI Is Capable of Generating Misinformation and Fooling Cybersecurity Experts

From politics to medicine to cybersecurity, digital innovations have created a fertile environment for misinformation to spread. It’s now possible for artificial intelligence (AI) systems to create false information and present it as fact – and even trick cybersecurity experts into thinking the information is true.

The spread of misinformation is a problem that starts in the human mind and is amplified by Big Data, social media, and news media. When misinformation is presented effectively, it can be nearly impossible to separate fact from fiction. And the sheer volume of data available allows machine learning (ML) algorithms and AI to continually learn and tailor their outputs, making it even harder to tell the difference.

Whether it’s powering personalized recommendations or generating content, machine learning now touches many areas of our lives. So, it’s no surprise that there has been a push to regulate data collection and how it is used. Let’s dive into how AI works in content generation and how it can potentially spread misinformation online.

How does misinformation work?

Much of the damage that misinformation causes isn’t directly related to the false information itself, although false statements often do harm individuals and companies. The “tainted truth effect” describes what happens after people are warned that what they’re reading may be inaccurate. Whether the warning comes in the form of well-intentioned fact checking or ill-intentioned fear mongering, it causes people to distrust information to the point that they may start disregarding true headlines on the chance that they might be false.

When people are not sure which news outlets or public figures to trust, misinformation contributes to the chaos and deterioration of public discourse. Studies have shown that inaccurate statements interfere with recall even when the person knows that what they are reading is false. The spread of false information creates a distrustful environment, and it is a tactic that has historically been used to create public confusion on purpose.

Spreading fake news can make for profitable advertising campaigns, as it often elicits shares and comments from followers. In recent years, misinformation has mostly been discussed in the political sphere, but its negative effects in other spheres are worth exploring. When the spread of misinformation is aided by technology, it becomes a serious cybersecurity problem.

Where does AI come in?

Digital transformations in the 21st century have caused a significant shift in how content is created and shared, using data collected from individuals across countless platforms. It’s how Google knows which results to show you after you enter a search query – and every time you click on a result, you are teaching an ML algorithm.

While there are numerous benefits to machine learning, there are a couple of implications that come with using this technology. Over time, the machine learns what kind of content to show you and what else might interest you. If that data fell into the wrong hands, or the system were trained on false data, false information could be passed off as true and believed. This is how even experts can be swayed by misinformation.
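To make the idea concrete, here is a minimal sketch – a toy example, not any real platform’s ranking system – of how click feedback can steer a recommender. Every click nudges an item’s score upward, so content that attracts clicks gets shown more often, whether or not it is accurate.

```python
# Toy click-feedback recommender: scores are learned purely from engagement.
# Nothing here checks whether an item is true, only whether people click it.
from collections import defaultdict

scores = defaultdict(float)  # item_id -> learned relevance score
LEARNING_RATE = 0.1

def record_click(item_id: str) -> None:
    """Reward an item each time a user clicks on it."""
    scores[item_id] += LEARNING_RATE

def record_skip(item_id: str) -> None:
    """Penalize an item the user was shown but ignored."""
    scores[item_id] -= LEARNING_RATE / 2

def rank(items: list[str]) -> list[str]:
    """Order items by their learned scores, highest first."""
    return sorted(items, key=lambda i: scores[i], reverse=True)

# A coordinated wave of clicks on a false story pushes it to the top,
# even though its accuracy was never evaluated.
for _ in range(50):
    record_click("viral-false-story")
record_click("careful-fact-checked-report")

print(rank(["careful-fact-checked-report", "viral-false-story"]))
# -> ['viral-false-story', 'careful-fact-checked-report']
```

The design choice that matters here is that engagement is the only training signal – which is exactly why engagement-driven systems can be gamed to surface false content.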

Misinformation and cybersecurity

Because fraudsters are so good at making fake news seem real, it’s of the utmost importance that tech and cybersecurity researchers and writers work with a high-quality editor to thoroughly edit and fact-check content, so that even experts aren’t misled. Using AI is a convenient way to create content, but when the underlying data comes from a pool of false information, an article can turn out to be nothing but propaganda or advertising.

Even top tech companies like Microsoft have fallen victim to cybercriminals. The average website is attacked 62 times each day, and the attack surface keeps growing with the expansion of IoT. Remote work, e-learning, telemedicine, e-commerce, and online banking services all rely on ML and AI to operate. This means there are numerous entry points hackers can use to seed misinformation campaigns.

The COVID-19 pandemic is one example of how dangerous misinformation can be. While the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) were trying their best to disseminate health information, there was an onslaught of misinformation circulating from fraudsters trying to profit from people’s fear. Many people received spam emails and fell victim to phishing schemes, thinking they were interacting with authoritative sources of information.

Does AI create misinformation?

The rise in AI-generated content has shown us that the algorithms behind systems like the popular GPT-3 are not all that great at creating reliable content after all. While the language, syntax, and general idea might be there, the accuracy of the information cannot be guaranteed. This particular program is widely used to auto-generate emails, write code, and communicate with customers.
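A small illustration of the gap between fluency and accuracy, using the openly available GPT-2 model from Hugging Face’s transformers library as a stand-in for larger systems like GPT-3: the model will happily complete a prompt with plausible-sounding text, but nothing in the pipeline checks whether the claims it produces are true.

```python
# Generate a continuation of a news-style prompt with a small open model.
# The output reads like a real sentence, but any "facts" in it are invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A new study has confirmed that"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])
# The "study" the model describes does not exist -- fluency is not accuracy.
```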

In fact, many companies today use AI to automate business processes, to their great benefit. It’s not the AI itself that causes problems – it’s the people behind it and the people consuming the content. If an organization with malicious intentions started using the technology, it would be extremely easy for them to spread misinformation and push false narratives to unsuspecting users.

Take Facebook, for example. The company makes its money from advertisers and from users who click, share, and spend time in the app. It relies heavily on the data it mines from users, who willingly teach the algorithm what they like, what they don’t like, and what content drives them to engage.

And inflammatory content drives engagement. We saw upticks in misinformation around the Brexit referendum, the 2016 US presidential election, the Black Lives Matter movement, and more. Companies regularly point fingers, too – Facebook, for example, accused Apple of anticompetitive behavior over its privacy settings.

Fortunately, AI can also be used to fight disinformation. Facebook launched its DeepText tool, which at one point identified and removed around 60,000 hateful posts per week. However, the company has acknowledged that the tool’s success was due in part to humans verifying and fact-checking content for harmful or misleading information. This suggests that combating misinformation takes a combined effort of human oversight and technology.
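As a rough sketch of that human-plus-machine workflow – a toy example, not Facebook’s actual DeepText system – an automated first pass can flag suspicious posts, while the final removal decision is left to a human reviewer, since the model alone will produce both false positives and misses.

```python
# Crude automated triage: flag posts for human review, never auto-delete.
SUSPICIOUS_PHRASES = {"miracle cure", "they don't want you to know", "100% proof"}

def auto_flag(post: str) -> bool:
    """First pass: flag posts containing known misinformation markers."""
    text = post.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

def build_review_queue(posts: list[str]) -> list[str]:
    """Send only the auto-flagged posts on to human reviewers."""
    return [p for p in posts if auto_flag(p)]

queue = build_review_queue([
    "Scientists share new vaccine trial results.",
    "This miracle cure is 100% proof the experts are lying!",
])
print(queue)  # only the second post reaches a human reviewer
```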

Conclusion

It turns out that AI is not all that smart on its own, without the help of humans. AI doesn’t create misinformation by itself; people use it for good or ill. We must all be aware of the threat of misinformation and take a proactive approach to protecting our businesses and ourselves from fake news – whether it’s generated by a human or a bot.