Will Data Protection Laws Kill Artificial Intelligence?

There has always been a disconnect between the current law and the rapid pace of technological innovation. Laws used to regulate the Internet, for example, were based on laws used to regulate the earlier era of analog telecommunication. And now it looks like European lawmakers are repeating these same kinds of mistakes when it comes to how data protection laws will impact artificial intelligence (AI), which is unlike anything that lawmakers have ever seen before.

In fact, it’s probably not hyperbolic to say that most lawmakers have very limited knowledge – if any – of machine learning, neural networks and the finer points of how an AI system works. What they do understand, though, is data – and so it’s no surprise that they have been working very hard to protect consumers using data protection laws.

For example, the upcoming European Union (EU) General Data Protection Regulation (GDPR), which goes into effect in May 2018, places an onerous burden on any company that handles the personal data of people in the EU. Businesses that fail to meet the principles and rules laid out in the GDPR could face very significant fines. In fact, the penalties for a serious breach of consumer data privacy can reach 4 percent of a company’s total global (not just European!) turnover. No wonder 93 percent of U.S. companies have made compliance with the forthcoming GDPR their top legal priority.
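To make the scale of that penalty concrete, here is a toy calculation of the maximum-fine rule. Under the GDPR, the cap for the most serious infringements is 4 percent of total worldwide annual turnover or EUR 20 million, whichever is higher; the company figures below are invented for illustration.

```python
def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Theoretical maximum GDPR fine for a serious infringement:
    4% of worldwide annual turnover, with a EUR 20 million floor."""
    return max(0.04 * global_turnover_eur, 20_000_000)

# A hypothetical company with EUR 2 billion in worldwide turnover:
print(max_gdpr_fine(2_000_000_000))  # 80000000.0

# A smaller firm still faces the EUR 20 million floor:
print(max_gdpr_fine(100_000_000))    # 20000000.0
```

Note that because the percentage applies to *global* turnover, a breach affecting only European users can still expose a multinational's entire revenue base to the calculation.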

Chilling effect of data protection laws

The problem, quite simply, boils down to this: AI systems run on data, and so any attempt to block or limit access to that data via data protection laws could have a chilling effect on the pace of AI innovation. In some cases, it might require businesses to radically re-think how existing products work.

A good example here is the cloud-based voice assistant, such as the Amazon Echo or the Google Home. These AI-powered devices use machine learning to recognize human speech, translating voice commands from users into actions. Moreover, you can actually “teach” the Amazon Echo new skills, and many corporations are now working on ways to create a presence on these devices for news and entertainment.

So here’s a common, everyday scenario that could be impacted by data protection laws: a family in Germany has purchased an Amazon Echo for their kitchen and uses it every morning to catch up on the latest news about their favorite sports teams. Using a simple voice command, they can request sports scores and statistics to be read out to them while they eat breakfast.

Sounds simple enough, right? But hang on – the onerous new data protection laws would attempt to regulate that very basic, everyday activity in a way that would have a chilling effect on AI. That’s because the Amazon Echo is designed to work in the background and listen ambiently – it’s waiting for someone to give a voice command, and thus, is collecting data about what people say even as it sits silently in the background. That data is then stored in the cloud.

The problem is that one of the fundamental principles of the new GDPR is “Explicit Consent.” This means that, prior to any data being collected on a user, that person has to grant his or her explicit consent, and it must be “freely given, specific, informed and unambiguous.” That poses a problem for those cloud-based voice assistants – are they really supposed to ask for the consent of each individual present in a room before they can start collecting data on what’s being said? A possible workaround here is having the data stored on the device itself and not in the cloud, but even that solution could slow down AI-powered voice assistants.
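The consent logic described above can be sketched in code. This is a minimal illustration, not Amazon’s actual design: the `ConsentRecord` fields mirror the GDPR’s four conditions for valid consent, and the local-processing fallback stands in for the on-device workaround mentioned above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    freely_given: bool
    specific: bool
    informed: bool
    unambiguous: bool

    def is_valid(self) -> bool:
        # GDPR-style consent must satisfy all four conditions at once.
        return all([self.freely_given, self.specific,
                    self.informed, self.unambiguous])

def handle_utterance(audio: bytes, consent: Optional[ConsentRecord]) -> str:
    """Decide where captured audio may go, based on consent."""
    if consent is not None and consent.is_valid():
        return "upload to cloud for processing"
    # No valid consent: keep the data on the device (or discard it).
    return "process locally only"
```

The awkward part, as the scenario above shows, is that a kitchen device hears *everyone* in the room – so a single `ConsentRecord` per registered owner does not obviously cover guests and family members whose speech is also captured.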

That’s just the beginning of the challenges posed by these new data protection laws that attempt to regulate a person’s data. Another challenge is posed by the principle of “The Right To Be Forgotten.” This data privacy principle means, essentially, that any European consumer can ask a company to erase and forever delete data that has been stored about them.

That principle first became famous in discussions involving Google and a person’s search history. People wanted the right to have certain data disappear, rather than have companies like Google surface it in search results indefinitely. That makes sense, of course, and it’s easy to see why European lawmakers quickly enshrined “the right to be forgotten” into law.

But that same principle, when applied to artificial intelligence, causes a number of complicated issues for AI companies. First of all, data is what is used to “train” machine learning algorithms to become smarter over time. Algorithms are designed to take in as much data as possible, and then intelligently use that data to make very smart decisions. The “right to be forgotten,” however, would seem to imply that a machine would need to “unlearn” something that it has already learned.
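The brute-force way to make a model “unlearn” is to delete the user’s records and retrain from scratch on what remains. The toy model below – a running average keyed by user – is an invented stand-in for a real learner, chosen so the effect of forgetting is visible; genuinely efficient machine unlearning remains an open research problem.

```python
class AverageModel:
    """Toy 'model' whose prediction is the mean of all training data."""

    def __init__(self):
        self.data = {}  # user_id -> list of observations

    def add(self, user_id, value):
        self.data.setdefault(user_id, []).append(value)

    def forget(self, user_id):
        # "Right to be forgotten": drop the user's raw data...
        self.data.pop(user_id, None)
        # ...and the next prediction is effectively a retrain
        # from scratch on the remaining data.

    def predict(self):
        values = [v for obs in self.data.values() for v in obs]
        return sum(values) / len(values) if values else 0.0

m = AverageModel()
m.add("alice", 10)
m.add("bob", 20)
print(m.predict())   # 15.0 -- alice's data influences the model
m.forget("alice")
print(m.predict())   # 20.0 -- her influence is gone
```

For a model this simple, retraining is instant; for a deep network trained for weeks on millions of records, honoring each erasure request this way becomes enormously expensive – which is exactly the tension the article describes.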

Taking data away from AI companies also has the potential to fundamentally change the effectiveness of important AI systems. One example here is self-driving cars powered by artificial intelligence, which may one day become safer than human-driven cars. They use extensive data to recognize driving patterns, and use data to distinguish between pedestrians and vehicles. So what happens when people request to have their driving data removed from their cars – will their cars still function the same?

Another key issue facing the AI industry is something known as “algorithm transparency.” This means that any consumer has the right to an explanation whenever an automated decision has been made that impacts them directly. From the perspective of lawmakers, this makes sense because it ensures that bias and prejudice cannot enter the decision-making process.

However, as anyone who has ever studied machine learning knows, the algorithms used by intelligent machines are considerably more complex than, say, the credit-scoring models used to approve a bank loan. In many cases, a machine learning algorithm is a “black box” that’s difficult to explain to consumers. So what implications will that have for the future of AI?
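The contrast is easy to see in code. For a linear credit-scoring model, each feature’s contribution to the decision can simply be read off and handed to the consumer as an explanation – something a deep neural network offers no direct equivalent of. The feature names, weights and threshold below are invented for illustration.

```python
def explain_linear_decision(features, weights, threshold):
    """For a linear model, the explanation falls out of the arithmetic:
    each feature contributes (value * weight) to the final score."""
    contributions = {name: features[name] * w
                     for name, w in weights.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "denied"
    return decision, contributions

features = {"income": 55_000, "late_payments": 2}
weights = {"income": 0.001, "late_payments": -10.0}

decision, why = explain_linear_decision(features, weights, threshold=30.0)
print(decision, why)
# approved {'income': 55.0, 'late_payments': -20.0}
```

Here the lender can say exactly why the loan was approved: income contributed +55 points, late payments cost 20. A neural network with millions of learned parameters has no comparably compact, human-readable justification – which is what makes the GDPR’s explanation requirement so hard for AI systems to satisfy.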

Can data protection laws meet demands of artificial intelligence?

As you can see, data protection laws were not designed to kill AI – they were designed to lawfully protect an individual’s data. The upcoming GDPR is not an attempt to stamp out AI innovation – it’s a way to enshrine principles of data protection and preserve the privacy of European citizens.

However, the unintended consequences could be very serious indeed. Just as laws designed for 19th century analog telecommunication were poorly suited for the era of the digital Internet, it could be the case that laws designed for 20th century data privacy simply have not kept up with the demands of 21st century artificial intelligence.