While the Facebook Cambridge Analytica scandal has created its share of problems for Facebook, it’s clear that the scale and scope of the scandal extend to every corner of Silicon Valley. After all, most tech giants are collecting staggering amounts of user data, and comprehensive new privacy regulations seem imminent.
Data Privacy
Technological development has always outpaced privacy protections, but never more so than in the past decade. Collection and centralization of personally identifiable information (PII), tracking of movements, and digital surveillance are all at unprecedented levels. Regulations and laws are only just beginning to catch up with the ability of both governments and private entities to deploy these capabilities.
What exactly is there to worry about? The mass collection and centralization of data by giant multinationals such as Facebook and Google is as good a place to start as any. Two decades of vacuuming up the personal data of users of various online services has created the most impressive marketing capability in history, but these profiles have astounding potential for damage when they are used the wrong way or fall into the wrong hands.
Information captured in data breaches tends to find its way into massive “combo lists” that are sold and traded on the dark web. Social Security numbers are added from this breach, home addresses and phone numbers from that one, personal health information from yet another. Soon, a frighteningly complete profile of millions of individuals is available to anyone willing to pay the asking price.
These are just the established data privacy issues. The emerging ones are even worse. High-quality facial recognition technology is just beginning to roll out across the public spaces of some countries. Artificial intelligence not only makes mass facial recognition possible, but magnifies the power and reach of any application that involves capturing and sorting information: scanning pictures, analyzing speech, sifting through text and location data. This threatens not only to shatter anonymity and privacy, but to enable highly advanced impersonation and take the concept of “identity theft” to new levels.
Some businesses chafe at the trouble and added expense of new and emerging data privacy regulations, but such rules are vital both to protecting rights and privacy and to instilling confidence in end users. Customers want to be able to submit their payment information without worrying about data breaches and identity theft, use services without wondering what is being done with their personal information, and use devices without fear of surveillance or location tracking. The need for meaningful safeguards only grows greater as technological capabilities increase.
With high-profile scandals and the seemingly daily buzz of breaches, scams, and exploits, it’s increasingly obvious that the data points that make up your online profiles are a hot commodity. It’s time for the citizenry to take back their personal data and restore responsibility to the ecosystem.
In the aftermath of the Cambridge Analytica scandal, many have suggested that Facebook be regulated, fined and perhaps even broken up. After all, if the FTC were to invoke its full power, it could theoretically levy hundreds of millions of dollars of fines, crippling Facebook. But is a big tech company too big to fail?
The congressional testimony was supposed to spark a national debate about data privacy and the right of users to protect their data from being sold, used, or analyzed in ways that were never intended. Instead, it has become very clear that regulating privacy is harder than anyone originally expected.
In the past few months, the amount of talk, advice, debate, and claims about the EU GDPR, which goes into effect May 25, has escalated to a fever pitch. And there is the rub: most organizations do not really know or understand what “personal data,” the GDPR term, is as it applies to their organization.
As the technologies for gathering, analyzing, and acting on information become increasingly powerful, we find ourselves facing a tipping point as we consider the impact of data-driven processes on the ethics of information management and the challenges of managing data privacy.
In an effort to get out in front of the data privacy scandal threatening to engulf the company, Facebook recently announced a new data abuse bounty program, which promises to pay people who report data abuses. But is this new data abuse bounty program going to result in any real changes to data privacy on Facebook?
After nearly two months of non-stop controversy and scandal over its improper use of Facebook data, Cambridge Analytica finally announced that it was ceasing operations, effective immediately. In doing so, Cambridge Analytica has become the new poster child for the perils of mishandling user data.
Companies, and even entire industries, are more afraid of Wall Street than they are of Washington. Far from falling on privacy concerns, Facebook’s stock is actually rising. Facebook has sensed that Wall Street doesn’t really care about privacy, and as long as Wall Street doesn’t care about privacy, why should it?
As much as Facebook would like to sweep the Cambridge Analytica data scandal under the rug, signs continue to mount that the company is still playing fast and loose with user data. All of this raises the question of whether the 2011 FTC settlement that resulted in an eight-count consent decree actually went far enough.