Artificial Intelligence: Privacy and Legal Issues

The era of big data led companies all over the world to embrace data as a key competitive driver: the more they knew about their operations, customers and products, the more successful they would be. Now these same companies are embracing artificial intelligence (AI) to make sense of all this data. But there is a problem: the implementation of AI-based systems is raising a whole host of new legal issues and stimulating a robust public debate about data privacy.

Privacy issues raised by big data

It is important, first and foremost, to recognize that data is the “raw material” of artificial intelligence. AI systems learn by analyzing data; eventually, they are able to make decisions and take actions without the need for human intervention. The more data these AI systems have, the better their decisions become. Thus, for any company, the goal is to gather as much data as possible in order to make its artificial intelligence systems as powerful as possible.
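
To make that point concrete, here is a minimal sketch of the idea that model quality tends to improve with more training data. The dataset (scikit-learn's bundled digits set), the model, and the training-set sizes are illustrative assumptions, not anything referenced in this article:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for "company data": images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train the same model on progressively larger slices of the data.
for n in (100, 400, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n} training examples -> test accuracy {model.score(X_test, y_test):.3f}")
```

In this sketch, accuracy typically climbs as the training slice grows, which is the incentive driving companies to collect ever more data.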

You can immediately see why this essential feature of AI technology raises so many privacy issues: customer data is highly prized, often at the expense of the customer’s privacy. The more data that products or services can collect about their users, the better able they are to serve those customers. There is nothing sinister about this – at least not directly. If a company knows your past Internet browsing activity, purchasing behavior or social media activity, it can develop customized offers and promotions tailored directly to you. Most customers probably would not argue against this form of artificial intelligence.

But where things get dicey is when customer data is used in completely unexpected ways that potentially threaten your private information. Legal researchers sometimes refer to this as the “Big Data Challenge.” For example, what if your car starts collecting data on your driving habits, and this data is transmitted in some way to your auto insurance company? Even with a spotless driving record, your insurer might conclude that you tend to drive well above the posted speed limit and therefore represent an increased insurance risk. This is clearly an invasion of your privacy because your data is being used in a way you did not agree to.
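
As a hedged illustration of how little code that scenario requires, the sketch below turns a handful of invented telematics readings into an insurer-style risk flag; every number and threshold here is hypothetical:

```python
# Entirely hypothetical telematics readings: (observed_speed_mph, posted_limit_mph)
readings = [(72, 65), (58, 55), (41, 35), (63, 65), (80, 65), (37, 35)]

# How often, and by how much, the driver exceeded the posted limit.
over_limit = [obs - limit for obs, limit in readings if obs > limit]
share_speeding = len(over_limit) / len(readings)
avg_excess = sum(over_limit) / len(over_limit)

print(f"Speeding on {share_speeding:.0%} of readings, {avg_excess:.1f} mph over on average")

# An insurer-style judgment using arbitrary, illustrative thresholds --
# exactly the kind of derived conclusion the driver never agreed to share.
if share_speeding > 0.5 and avg_excess > 5:
    print("Flag: elevated risk profile")
```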

Privacy issues become even more interesting when it comes to medical and healthcare companies. One of the most common AI applications is facial recognition. This is how Facebook knows the names of the people appearing in your photos – an AI tool is working in the background. The latest generation of image recognition, though, goes far beyond just recognizing faces – it can now make inferences about your health. For example, one AI startup can use a simple selfie to make inferences about your gender, your body mass index, and whether or not you smoke.

Legal issues raised by artificial intelligence and machine learning

One subset of artificial intelligence is known as machine learning, which refers to the ability of machines to learn and develop independent behaviors over time. Machine learning is at the core of image recognition, text analysis, speech analysis and data mining.

Machine learning raises a number of conceptual difficulties and legal issues because the legal system is built around the fundamental notions of justice and liability: if you do something bad to me, I have the opportunity to seek legal redress. A doctor can be sued for medical malpractice; a corporation can be sued for a defective toy that injures a small child. In both cases there is clearly somebody at fault – the doctor or the corporation – and the legal system has an established way to bring justice in each case.

But what if it is a machine, not a human, that makes a faulty decision? Legal researchers refer to this as the “Causation Challenge,” and it raises so many legal issues because it is very hard to establish “fault” when an AI system is involved. For example, what if an AI-powered medical system makes a recommendation, and that recommendation leads to serious injury or even death? Would anyone be able to sue that AI system for medical malpractice?

Even more fundamentally, how would you even know that the AI-based system is guilty of malpractice in the first place? The problem with machine learning is that how a machine arrives at what it considers the optimal solution is often a “black box.” When AI researchers give machines the ability to make independent decisions, they do not always know what kinds of decisions those machines will make. This leads to many unique legal issues, including issues related to how data is collected and stored.
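
A minimal sketch of that opacity, using a scikit-learn random forest on a bundled toy dataset (both are illustrative assumptions, not the medical systems discussed above): the model will happily return a decision, but there is no single human-readable rule behind it to point to in court.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The system gives an answer for a single case...
print("Decision for one patient record:", model.predict(X[:1]))

# ...but that answer is spread across thousands of tree nodes, with no single
# readable rationale to cite when trying to establish who (or what) is at fault.
print("Nodes involved:", sum(tree.tree_.node_count for tree in model.estimators_))
```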

Here’s another example: let’s say you are applying for a job at a huge corporation, and there are hundreds of candidates for the opening. The company can’t possibly interview every single candidate, so it uses an AI-based system to screen applicants. This AI system analyzes the text of each resume and decides which candidates are the “best fit” for the job. But what if the AI system systematically chooses all men and no women? Or only white candidates and no Black, Asian or Hispanic candidates? Is it possible to sue the corporation for discrimination?
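
To show what checking for that kind of disparity might look like in practice, here is a minimal sketch that computes selection rates by group from hypothetical screening results and applies the “four-fifths rule” heuristic used in US employment-discrimination analysis; the data and group labels are invented for illustration:

```python
from collections import defaultdict

# Hypothetical screening outcomes: (candidate_group, selected_by_ai)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

selected, total = defaultdict(int), defaultdict(int)
for group, picked in results:
    total[group] += 1
    selected[group] += int(picked)

rates = {group: selected[group] / total[group] for group in total}
print("Selection rates:", rates)

# Four-fifths rule heuristic: flag any group whose selection rate falls below
# 80% of the highest group's rate as a sign of potential adverse impact.
highest = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * highest}
print("Potential adverse impact:", flagged)
```

Even with a check like this, the harder legal question remains: who is liable when the screener itself produced the disparity?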

Politicians are now debating the legal and privacy implications of AI

The good news is that politicians are now working to understand the long-term legal issues and data privacy problems raised by artificial intelligence. In some cases, these are the same issues raised by big data and the Internet of Things, and might be resolved simply by having a more comprehensive privacy policy. In other cases, they are completely new and unique issues that are specific to AI.

For example, the British Parliament convened a special session in the House of Lords in October 2017 on the ethical and legal implications of AI. They considered policy questions and legal issues such as who should be ethically responsible for decisions made by AI systems. And they tackled the issue of discrimination in AI decision-making, as well as the potential need to create some form of “electronic personhood” as a new kind of legal entity.

Ultimately, artificial intelligence might become the next big privacy trend. Just as big data made every single company a data company, the new era of AI might transform every company into an AI company. Self-driving cars and intelligent robots might be what most people think of when they hear the term AI, but the legal and privacy implications are far wider, potentially impacting every single industry, from consumer goods to healthcare to financial services.