
Will Data Protection Laws Kill Artificial Intelligence?

There has always been a disconnect between the law and the rapid pace of technological innovation. The rules first applied to the Internet, for example, were based on laws written for the earlier era of analog telecommunication. Now it looks as though European lawmakers are repeating the same kind of mistake in how data protection laws will affect artificial intelligence (AI) – a technology unlike anything lawmakers have dealt with before.

In fact, it’s probably not hyperbolic to say that most lawmakers have very limited knowledge – if any – of machine learning, neural networks and the finer points of how an AI system works. What they do understand, though, is data – and so it’s no surprise that they have been working hard to protect consumer data through data protection laws.

For example, the upcoming European Union (EU) General Data Protection Regulation (GDPR), which goes into effect in May 2018, places an onerous burden on any company that handles the personal data of individuals in the EU. Businesses that fail to meet the basic principles and rules laid out in the GDPR face significant fines: the penalty for a serious breach of consumer data privacy can reach 4 percent of a company’s total global (not just European!) annual turnover. No wonder 93 percent of U.S. companies have made compliance with the forthcoming GDPR their top legal priority.


Chilling effect of data protection laws

The problem, quite simply, boils down to this: AI systems run on data, and so any attempt to block or limit access to that data via data protection laws could have a chilling effect on the pace of AI innovation. In some cases, it might require businesses to radically re-think how existing products work.

A good example is the cloud-based voice assistant, such as the Amazon Echo or Google Home. These AI-powered devices use artificial intelligence to recognize spoken commands from users and translate them into actions. Moreover, developers can “teach” the Amazon Echo new skills, and many corporations are now working on ways to create a presence on these devices for news and entertainment.
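
To make that flow concrete, here is a minimal sketch of how a voice-assistant “skill” might map a recognized utterance to an action. The intent names, keyword matching and responses are invented for illustration – real skills are built with vendor SDKs such as the Alexa Skills Kit, whose API is not reproduced here.

# A minimal, hypothetical sketch of a voice-assistant "skill".
# The intent names and routing logic are invented for illustration only.

def recognize_intent(utterance):
    """Very crude keyword matching, standing in for the speech/NLU pipeline."""
    text = utterance.lower()
    if "score" in text or "match" in text:
        return "GetSportsScores"
    if "news" in text:
        return "GetNewsBriefing"
    return "Unknown"


def handle_intent(intent):
    """Translate the recognized intent into a spoken response."""
    responses = {
        "GetSportsScores": "The home team won two to one last night.",
        "GetNewsBriefing": "Here are this morning's headlines...",
    }
    return responses.get(intent, "Sorry, I didn't catch that.")


if __name__ == "__main__":
    command = "What was the score in last night's match?"
    print(handle_intent(recognize_intent(command)))  # The home team won two to one last night.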

So here’s a common, everyday scenario that could be impacted by data protection laws: a family in Germany has purchased an Amazon Echo for their kitchen and uses it every morning to catch up on the latest news about their favorite sports teams. Using a simple voice command, they can request sports scores and statistics to be read out to them while they eat breakfast.

Sounds simple enough, right? But hang on – the onerous new data protection laws would regulate that very basic, everyday activity in a way that could chill AI innovation. That’s because the Amazon Echo is designed to work in the background and listen ambiently: it waits for someone to give a voice command and, in doing so, collects data about what people say even as it sits silently in the background. That data is then stored in the cloud.

The problem is that one of the fundamental principles of the new GDPR is “Explicit Consent.” Before any data is collected on a user, that person has to grant his or her consent, and it must be “freely given, specific, informed and unambiguous.” That poses a problem for cloud-based voice assistants – are they really supposed to ask for the consent of each individual present in a room before they can start collecting data on what’s being said? A possible workaround is to store and process the data on the device itself rather than in the cloud, but even that solution could slow down AI-powered voice assistants.
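
As an illustration of what an explicit-consent gate might look like in practice, here is a minimal sketch in Python. The consent registry, speaker identifiers and routing decision are all hypothetical assumptions for the example – they do not describe how the Echo or Google Home actually handle consent.

# A minimal, hypothetical sketch of a consent gate for a voice assistant.
# Nothing here reflects Amazon's or Google's actual implementation; the
# consent registry and routing decision are invented for illustration only.

class ConsentRegistry:
    """Tracks which known speakers have given explicit, revocable consent."""

    def __init__(self):
        self._consenting_speakers = set()

    def grant(self, speaker_id):
        self._consenting_speakers.add(speaker_id)

    def revoke(self, speaker_id):
        self._consenting_speakers.discard(speaker_id)

    def has_consent(self, speaker_id):
        return speaker_id in self._consenting_speakers


def route_utterance(speaker_id, audio, registry):
    """Decide where an utterance may be processed.

    Cloud processing (and therefore cloud storage) only happens when the
    identified speaker has explicitly opted in; otherwise the audio stays
    on the device, which is the on-device workaround mentioned above.
    """
    if registry.has_consent(speaker_id):
        return "process_in_cloud"       # richer models, but the data leaves the home
    return "process_on_device_only"     # nothing is uploaded or stored remotely


if __name__ == "__main__":
    registry = ConsentRegistry()
    registry.grant("parent_1")

    print(route_utterance("parent_1", b"...", registry))    # process_in_cloud
    print(route_utterance("houseguest", b"...", registry))  # process_on_device_only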

That’s just the beginning of the challenges these new rules pose. Another comes from the principle of “The Right To Be Forgotten.” This data privacy principle means, essentially, that any European consumer can ask a company to erase and permanently delete the data it has stored about them.
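
To show what honoring such a request might involve at the simplest level, here is a hypothetical sketch of an erasure handler that removes everything a service has stored about a user. The data stores and function names are invented for illustration and do not describe any vendor’s real system.

# A minimal, hypothetical sketch of a "right to be forgotten" erasure request.
# The storage layout and function names are invented for illustration and do
# not describe any vendor's real API.

from typing import Dict, List

# Pretend data stores: raw voice recordings and derived usage records, keyed by user.
voice_recordings: Dict[str, List[bytes]] = {
    "user_42": [b"morning sports query", b"weather query"],
}
usage_records: Dict[str, List[dict]] = {
    "user_42": [{"skill": "sports_scores", "timestamp": "2017-11-02T07:30:00Z"}],
}


def handle_erasure_request(user_id):
    """Delete everything stored about a user and report what was removed."""
    removed_recordings = len(voice_recordings.pop(user_id, []))
    removed_records = len(usage_records.pop(user_id, []))
    return {
        "user_id": user_id,
        "recordings_deleted": removed_recordings,
        "usage_records_deleted": removed_records,
    }


if __name__ == "__main__":
    print(handle_erasure_request("user_42"))
    # {'user_id': 'user_42', 'recordings_deleted': 2, 'usage_records_deleted': 1}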

Wei Chieh Lim

Wei Chieh Lim is the CEO of Data Privacy Asia, which sits at the intersection of data protection, privacy and cyber security and serves as the focal point for Asia’s professionals to learn, network and collaborate. He has spent the last 20 years in the IT and consulting industry, with roles spanning software development, network and security management, technology audit, consulting, business development and architecture research. Over the years, he has engaged with companies in more than 25 cities across Asia Pacific, the U.S., Europe and Africa.
