
“Better, but Still Not Good” – Making Sense of Big Tech’s Privacy Push

Big Tech came under significant scrutiny in 2020. At various points last year, the CEOs of Apple, Google, Facebook, Amazon, and Twitter were brought before the Senate and subjected to targeted criticism from legislators. While the details varied, senators’ questions focused on how the companies that make up “Big Tech” abuse both user data and their overall market power.

Although tech executives’ evasive answers to important questions left much to be desired, last year’s hearings at least proved that politicians are intent on “doing something” to assuage voter concerns about online privacy and the growing monopolization of the internet. The intended message was also clear: if the tech industry failed to clean up its act, a federal U.S. privacy law would force a change in their practices instead.

Since then, while Congress has not progressed as far as we might have hoped on privacy, state governments have taken the initiative. Lacking the ability to stage nationally televised struggle sessions with Big Tech, most state capitals decided to skip the jawboning phase entirely. Instead, states like Virginia and Washington have rapidly proposed and passed a variety of new consumer-privacy-focused regulations. This state-level privacy push is, in turn, increasing the pressure on the Federal government to act — if only to harmonize a growing patchwork of state legislation.

Likely responding to growing regulatory pressures, Apple and Google have brought forward major changes to their core technologies (iOS and Chrome, respectively). Both companies now offer what may, at first glance, appear to be significant new privacy controls to users. In contrast, Facebook and Amazon have pushed in the other direction, either appealing directly to the public with promises that their systems are already secure and that the benefits their services offer far outweigh the cons of data collection, or working behind the scenes to ensure regulatory frameworks are designed with their best interests in mind.

However, even though these different approaches might appear to show diverging ideologies around how Big Tech is approaching consumer privacy, the devil is in the details. While the intent seems positive, the likely beneficiary of the recent Apple/Google privacy push is not the consumer.

Responding to consumer demands or shaping the digital ad marketplace for convenience?

While undoubtedly timely, it’s important to note that Apple and Google’s recent changes to consumer privacy have been in development for years. Rather than a rapid response to consumer concerns or congressional pressure, these changes are part of longer-term strategic efforts to exert greater control over core markets. Far from a virtuous initiative to protect user privacy, Google and Apple’s recent updates are more likely an effort to force the digital advertising world to restructure itself for their organizations’ benefit.

Some leading privacy advocacy groups have pointed out that Google’s move to end the use of third-party tracking cookies in Chrome in favor of a cohort-based marketing model called “FLoC” (Federated Learning of Cohorts) makes no real difference as far as user privacy is concerned. In fact, FLoC, which bundles user data into aggregate groups instead of documenting individual behavior, is merely a new method of profiling and targeting consumers that does little to reduce consumer data collection. Unsurprisingly, the Electronic Frontier Foundation called FLoC “a terrible idea.”

Similarly, while Apple’s privacy nutrition labels promise greater privacy by offering more transparency into data collection, some privacy advocates note that ‘some transparency’ is neither sufficient nor particularly useful. Apple doesn’t audit apps to verify that the data in their privacy labels is accurate. The entire feature relies on vendors self-declaring their data-gathering practices. As a result, how apps choose to define their activity may vary wildly, with data-mining practices remaining entirely unregulated by Apple.

Other critics point out that Apple’s App Tracking Transparency (ATT), another privacy feature that requires apps to get users’ permission before tracking them across apps and websites owned by other companies, is likely to concentrate user traffic into fewer applications. As a result, ATT will ultimately maximize user ad value for Apple’s biggest partners ahead of competitors.

There’s almost no change to how data is being commercialized on the back end, either. Data collected by third parties while people use these services can still be gathered and used in ways consumers have zero insight into, including selling their location data to the IRS or allowing federal agencies to scrape images for facial recognition databases.

Federal privacy legislation is still needed

Neither Apple’s nor Google’s new models provide any fundamental change to the myriad ways people can be tracked online (and increasingly offline) or the booming economy built on the trade of sensitive personal data.

Federal legislation will still be necessary for many reasons. Firstly, to streamline the already evolving range of state-level regulations; secondly, to harmonize with European standards, enabling more international data-transfer; and thirdly, to make requirements for disclosure uniform across businesses. Company privacy innovations should be encouraged, but without standards about what different terms mean and what obligations they put on businesses, they will remain inconsistent and confusing.

That said, the fact that the largest consumer technology companies are now promoting privacy as a key consumer benefit is a step in the right direction and should be encouraged. Apple’s move, in particular, will have an edifying effect on many users, helping them better understand the many different ways their behavior is being tracked and monetized, and creating further demand for transparency and control.

Opting-in vs. opting-out? A false dichotomy

The current debate around how consumers should be able to control online data gathering tends to revolve around vague notions of “opt-in” vs. “opt-out.” Users can either opt-in with vendors prior to doing business with them (like with Apple’s privacy labels) or be provided an opportunity to “opt-out” from data-collection, even if they intend to continue doing business with the company in question.

Advocates make strong arguments for each approach as the default model for online privacy.

However, the distinction is probably not as important as many claim; there can be weak or strong versions of either privacy model. What matters more is providing consumers real transparency that enables choice and control. For example, opt-outs that allow a private right of action would provide consumers better privacy protections than opt-in models that are confusing, unclear, and have no regulatory standards or enforcement.

Final thoughts

We should welcome and encourage Big Tech’s privacy efforts. Even if they’re superficial, they’re nevertheless a step in the right direction. At the very least, more transparency into how tech companies collect data will educate consumers about what’s really going on behind the scenes when they use certain technologies.

However, consumers shouldn’t be lulled into thinking that a few new ‘warning labels’ or checkboxes placed on websites will solve the underlying issues with data collection. Only real risks of liability, like Facebook’s recent $650 million settlement for violating Illinois’ BIPA biometric privacy law, will convince companies that they need to fundamentally change their business practices.

Instead, what we should look for is clarity of law (simple definitions with no loopholes or exceptions) and the capacity for robust enforcement, whether by agencies or through private action.