Artificial Intelligence and the Privacy Challenge

Proponents of artificial intelligence (AI) hail the advances in the ability of machines to make independent decisions based on an analysis of their environment as the next step in machine intelligence – and claim that it will revolutionize complex problem solving across a wide spectrum of human endeavor. The simplest definition of AI is that of an ‘intelligent’ machine that exhibits the attributes of a flexible, rational agent: it perceives its environment, makes decisions and, in many instances, takes actions that maximize its chances of success at a particular task. A popular shorthand is that artificial intelligence machines mimic human cognitive functions – they can learn and solve problems.

One of the oldest and most widely accepted tests of whether a machine exhibits true AI is the Turing Test. A machine passes the 65-year-old Turing Test if it is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations. In 2014, a computer program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was claimed to have passed the test at an event organized by the University of Reading.

But many argue that this is of historical interest only – in effect an AI party trick. The question that defines AI in the 21st century is whether a machine can mimic that all-important cognitive function – learning to solve problems using logic – not whether it can imitate a human being.

Artificial intelligence – Machine learning using massive data

So, how does this work in the real world? Without delving too deeply into the programming behind it, fascinating as that might be, AI falls into several distinct categories. When it comes to possible threats to privacy, however, one subset of AI should give those concerned with data privacy pause: so-called ‘machine learning’. This field is focused on giving machines the ability to ‘learn’ without human intervention. It is achieved through advanced algorithms that spot and decode patterns in the data they observe, and then generate insights that shape the machine’s future decision making and predictive responses. This neatly sidesteps the need to program a machine to respond to every eventuality in the environment it operates in – which in a chaotic environment (think of an AI car on a city street, or the sheer amount of data available to big business at the moment) would be a practical impossibility, at least for human beings operating in real time.
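
To make that concrete, here is a minimal sketch in Python of the basic machine learning loop: show an algorithm some past examples, let it find the patterns on its own, then let it make predictions about new cases. The library (scikit-learn), the ‘customer behavior’ numbers and the meaning of each column are assumptions chosen purely for illustration; no particular product works exactly this way.

# A toy training set: each row is one customer, described by
# (pages viewed, minutes on site, past purchases), and the label says
# whether that customer went on to buy again. The numbers are invented.
from sklearn.tree import DecisionTreeClassifier

X = [
    [12, 34, 3],   # frequent visitor with several past purchases
    [2,  5,  0],   # casual visitor who has never bought
    [8,  20, 1],
    [1,  2,  0],
]
y = [1, 0, 1, 0]   # 1 = bought again, 0 = did not

# The algorithm finds the pattern itself; nobody writes an explicit rule.
model = DecisionTreeClassifier()
model.fit(X, y)

# The learned pattern now drives decisions about unseen customers.
print(model.predict([[10, 25, 2]]))   # prints [1]: predicted to buy again

The point is the absence of hand-written rules. Nobody tells the model what a ‘likely buyer’ looks like; it infers that from the data and then applies the inference to every new customer it sees.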

Privacy implications of machine learning algorithms

The old saying that cash is king is swiftly falling by the wayside. There can be no doubt that data is now king. The most successful businesses in the world must now deal with enormous amounts of data – and increasingly that data is gleaned from the actions of their customers. It is gathered from a vast number of sources: from customer buying habits, from their actions across the company’s own Internet landscape – websites, blogs, social media accounts and more – and from third parties.

It is well known that consumers sacrifice their anonymity when they use services that are, on the face of it, free. Think of Google and social media sites such as Facebook. There is a contract, and it is not merely implied – it is part and parcel of the terms and conditions that consumers agree to when they use services and sites like these. So consumers know that the information they supply and the behavior they engage in will be used (among other things) to target them with advertising. Within those terms and conditions they also agree that these service providers are free to supply information on their behavior to third parties, who use it to improve their sales through targeted advertising on those sites.

Consumers are offered limited opportunities – at least at this point – to block this advertising. However, this is at best an interim measure: large social media companies like Facebook and search engines like Google have no intention of giving up lucrative income streams that target consumers based on their online behavior.

The bad news for employees and even private citizens is that AI needs massive amounts of data to be as effective as a human being in analyzing behavior – and those algorithms are getting better and better. It’s that thorny issue of machine learning that is at the foundation of concerns when it comes to AI and privacy.

Business applications are being developed that harness the power of machine learning. A mobile sales application can collect location data or IP addresses – all of this is possible now, and it happens every day. But drawing that data together and building a coherent ‘persona’ from it, without human intervention, is a frightening possibility. It may seem frivolous to suggest that a machine learning algorithm could build an avatar of any single human being, behavioral patterns included, but the reality may be closer than many people think.
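
To illustrate how short the distance is between raw data and a ‘persona’, here is a hedged sketch that condenses a handful of location fixes into habitual places. The coordinates are invented and the libraries (NumPy and scikit-learn) are simply convenient assumptions; a real system would work from GPS traces or IP-derived locations and vastly more data.

import numpy as np
from sklearn.cluster import KMeans

# Raw location fixes (latitude, longitude) from one device over several
# days. The coordinates are invented for illustration.
points = np.array([
    [51.5074, -0.1278], [51.5076, -0.1280], [51.5071, -0.1275],   # daytime fixes
    [51.4613, -0.3037], [51.4615, -0.3035], [51.4610, -0.3040],   # evening fixes
])

# Group the fixes into two habitual locations with no human in the loop.
kmeans = KMeans(n_clusters=2, n_init=10).fit(points)
print(kmeans.cluster_centers_)   # two centers - plausibly "work" and "home"

From two cluster centers it is a short step to guessing commuting patterns, working hours and a home address – and none of it requires a human ever to look at the raw data.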

A machine learning algorithm may mine a user’s personal apps to supply human resources departments with information that the individual would rather that department did not know. Personal fitness devices, which many companies are handing out to employees, gather data that could be used for insurance purposes – all of it without human intervention. The next logical step, according to many experts, is for machine intelligence to decide, based on advanced algorithms, who should be supplied with this information. If it affects the company’s bottom line, should that information be passed to third parties? Without human intervention?

Here to stay despite unanswered questions

Many futurists are, perhaps rightly, excited by the idea of AI and machine intelligence. It does of course hold huge promise in automating parts of our lives. It can make driving a car safer and hassle-free. It can allow us better access to more modern and efficient healthcare. But is it a good idea to remove the human gatekeeper from the process? A driverless car may seem to be a science fiction idea come true, but is handing control over personal data to an artificial intelligence worth the reduced stress of a rush-hour traffic jam? It’s a question that still needs to be answered. However, make no mistake: AI, with its increasing ability to mine personal data, collate that data and draw conclusions about behavior, is here to stay – and it is getting ever more advanced.

 

