The Privacy Paradox Could Determine the Next Evolution of Privacy Regulation

At the recent International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels, Apple CEO Tim Cook’s remarks about a new “data-industrial complex” in Silicon Valley earned all the headlines and sparked a vigorous online debate about the way tech corporations monetize user data. However, an arguably more important speech at the conference was the opening keynote from European Data Protection Supervisor Giovanni Buttarelli, in which he described the modern context for the “Privacy Paradox.” According to Buttarelli, the Privacy Paradox has already shaped the way we think about privacy in the digital age and could help determine the next evolution of privacy regulation.

As Buttarelli noted in his speech, “The so-called Privacy Paradox is not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization.” Buttarelli went on to suggest that modern digital society is at a “tipping point,” in which it must decide how it is going to define, respect and protect privacy at a time when data collection is a standard business practice and personal information is widely available online.

What is the Privacy Paradox?

Prior to Buttarelli’s speech, the Privacy Paradox was generally defined as the fundamental inconsistency between people’s stated beliefs and intentions about privacy and their actual behaviors. In other words, it is the paradox of wanting privacy but behaving as if it didn’t matter. Thus, while people may have a deep distrust and uneasiness about granting Facebook and Google so many insights into their daily lives via a constant stream of data, they are generally willing to tick any box and agree to any terms of service, as long as they can continue to use the service.

In thinking about the Privacy Paradox, most researchers fall into one of two camps: either they believe that consumers are rational actors who perform a sort of cost-benefit analysis to determine the price at which they are willing to give away their data, or they believe that consumers are riddled with inconsistencies and biases and are largely inaccurate when estimating the true value of their personal data.


In research studies about the Privacy Paradox, for example, people have been willing to give up their entire browsing history for the equivalent of a Big Mac meal. In other studies, people have been willing to give away a personal password for the cost of a chocolate bar. And, in everyday life, people willingly hand over their entire shopping history at a specific store in return for rewards, discounts and bonuses (usually in the form of a rewards card). From this perspective, it would appear that people place very little value on their privacy.

The Privacy Paradox in the era of Big Data

Complicating matters even further is the fact that the traditional Western approach to privacy regulation has always been based on the concept of physical space. Thus, people living in the European Union and North America have long feared the idea of governments “peering into their bedroom,” and similar violations of physical space by corporations.

Think about the way that laws have been passed to protect people from receiving unsolicited phone calls at certain times of the day and night, or about the way that the home is viewed as a sanctuary and refuge that corporations should respect. But that framing also implies that when people leave the privacy of their own home, they are suddenly in the public realm and have a much lower expectation of personal privacy.

So you can immediately see why the modern digital age (and especially the era of Big Data) has made things so confusing for regulators and lawmakers when it comes to data security and the protection of personally identifiable information. Is it possible that notions of privacy are different in physical space than they are online? For example, people may think nothing of letting Google track their every movement via mobile Android devices, but they would never agree to being shadowed by a stranger while driving around a city or going shopping. They would never tolerate a physical breach of their own home (imagine coming home to find the front door broken down), yet often fail to complain after an extensive data breach happens online.

Moreover, while people may not want government regulators peering into their bedrooms, they have absolutely no problem with letting an Amazon Echo device do the same. They would never think of sharing sensitive information (such as health information) with an unknown third party, yet gladly transmit this same information to Google and Apple via fitness trackers. And many refuse to tighten their privacy settings even after being informed of security breaches. So do governments and regulators have an obligation to protect people, especially when those people may not realize the extent of the risk they are facing?

How do we introduce ethics and morality into data privacy regulation?

That’s where things get really interesting, because until now, government regulators have never framed privacy in terms of ethics or moral obligations. Instead, privacy has always been thought of as a type of “human right” that needs to be respected and protected. As Buttarelli also pointed out in his ICDPPC speech, those drafting the European General Data Protection Regulation (GDPR) did not think in terms of ethics, and did not debate the various ways that morality or moral obligations should influence the actions of governments.

What Buttarelli suggests is that regulators and lawmakers need to start thinking in terms of the fundamental values that underpin privacy and data protection. It is time, says Buttarelli, “to develop a clear and sustainable moral code moving forward.” We have now reached “a 50-50 moment for humanity in the digital age,” in which almost everything has been digitized. It is no longer possible to escape discussions of personal privacy because the practice of data collection is ubiquitous and woven into our daily lives.


What’s more, says Buttarelli, the next evolution of data privacy regulation must take into account scenarios involving privacy that today might be regarded as futuristic. For example, should humanoid robots also have a right to privacy? When machines rather than humans sentence criminals (a process that Buttarelli refers to as “algorithmic sentencing”), what data should be allowed to inform their decisions?

Why the Privacy Paradox matters

Going forward, it is easy to see why the Privacy Paradox is such an important theoretical construct for framing future debates about data privacy protection. For one, the Privacy Paradox raises questions about the types of biases and inconsistencies that cloud most people’s views of privacy. And, as Buttarelli argues, it also helps to stimulate debate about the moral and ethical ramifications of privacy regulation. If future regulators do not take morality and ethics into account when crafting new data protection laws, they might be going about it all wrong, and in so doing, fail to protect the very people they hope to serve.
