10 Things You Might Have Missed in Facebook’s Epic 229-Page Response to U.S. Lawmakers

In early June, Facebook finally delivered what U.S. legislators had been promised for nearly two months: detailed and comprehensive responses to over 2,000 questions stemming from CEO Mark Zuckerberg’s congressional testimony to both the U.S. House and Senate in April. While Mr. Zuckerberg shared a great deal of information during his two days in Washington, plenty was left unclear and unanswered.

In a 229-page document, Facebook attempted to provide some clarity for each of these questions. While many of the official responses were similar to what you might expect to read on a Facebook FAQ (such as questions about how the Facebook pixel works), there were certainly some responses that might raise a few eyebrows for anyone who is concerned about social media privacy and the use and abuse of data.

#1: Over 200 Facebook apps have already been suspended for improper use of data

The list of Facebook responses starts off with an answer to a question on the minds of many people, not just U.S. lawmakers: How many other apps out there have been using Facebook user data without users’ permission? After the dramatic Washington testimony by CEO Mark Zuckerberg, the company promised to conduct a thorough review and investigation of all apps, and to immediately suspend any apps found to be using data without the permission of Facebook users.

The goal, quite simply, was to find any other apps out there similar to the infamous “quiz app” used by Cambridge Analytica to get its hands on user data from 87 million people. And, according to Facebook, the company has already investigated “thousands of apps” and suspended over 200 of them.

If you read between the lines, though, it looks like Facebook initially targeted for review only those apps related in any way to the prime suspects in the Cambridge Analytica scandal. The company then suspended them (even if they were still in a “test” phase), just to be on the safe side. As Facebook itself notes, almost all of the 200 apps are from just a handful of developers, all of them related in some way to Cambridge Analytica (e.g. the Cambridge Psychometrics Center).

The big takeaway from all this is that Facebook really doesn’t know how many apps might be using your data without your permission, even as it pledges to provide regular updates on this. The company is basically trying to limit its liability right now, and trying to rid itself of any association with Cambridge Analytica or any of its principals. Thus, while it might seem impressive that Facebook’s crack investigation team has already discovered 200 apps that needed to be suspended, this is really just scratching the surface.

#2: Facebook hints that more than 87 million people might have been affected by improper data use by Cambridge Analytica

Anyone else notice that the number of people who might have been impacted by the Cambridge Analytica scandal continues to rise substantially? The number of people who used the quiz app was only 260,000, but the media soon extrapolated that more than 50 million people might have been impacted. And that figure was soon ratcheted up to 87 million. That’s the number that everyone has been using, but now even that number seems to be under review.

In one response, the company cryptically noted that “Facebook does not actually know” how many people were impacted. Moreover, the same response notes that the 87 million figure is “a highly conservative estimate…” So let’s read between the lines here – Facebook is using the same estimate that the media has already reported, and has no way to determine the real number. Facebook is basically playing a game of damage control here, knowing that if the number continues to grow, so does the company’s potential liability under the 2011 FTC consent decree.

#3: Facebook continues to maintain that the FTC consent decree does not apply

And, speaking of the 2011 FTC consent decree, what does the Cambridge Analytica case mean for how it is enforced? From Facebook’s perspective, a worst-case scenario would be that the company is found to be in violation of the FTC consent decree. That would expose the company to potentially millions of dollars in damages, as well as extensive regulatory and legal penalties.

So, no surprise here – when asked point-blank if the company felt that the FTC consent decree applied to this case, Facebook’s answer was about as emphatic as you might expect: “The consent order does not apply here.” What follows in the response is a very clever reading of what the FTC consent decree applies to. As most people understand it, the consent decree would expose Facebook to legal liability if Facebook users did not give their “affirmative express consent” to have their data used inappropriately by a third party. That would seem to be an exact match for what happened in the Cambridge Analytica case, right?

Wrong. Facebook cleverly notes that the FTC consent decree only applies to “affirmative express consent for materially expanding the audience of a user’s existing privacy settings.” In other words, it’s the user’s fault for not taking better control of their privacy settings. As long as a user’s settings allowed certain types of data to be shared, Facebook should not be held responsible if third parties received access to that data.
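
To make that reading concrete, here is a minimal, hypothetical sketch of the logic Facebook is advancing. The audience levels, their ordering, and the function name are assumptions invented for this illustration; they come from neither the consent decree nor Facebook.

```python
# Hypothetical sketch of the "materially expanding the audience" reading.
# The audience levels and their ordering are assumptions for illustration,
# not terms defined in the consent decree.

AUDIENCE_RANK = {"only_me": 0, "friends": 1, "friends_of_friends": 2, "public": 3}

def requires_express_consent(user_setting: str, requested_audience: str) -> bool:
    """Under Facebook's reading, express consent is required only when a
    change would expand the audience beyond the user's existing setting."""
    return AUDIENCE_RANK[requested_audience] > AUDIENCE_RANK[user_setting]

# A third-party app reading data a user already shares with "friends" would
# not trigger the consent requirement under this interpretation:
print(requires_express_consent("friends", "friends"))  # False
print(requires_express_consent("friends", "public"))   # True
```

On that logic, the quiz app never “expanded the audience” of anyone’s data – it simply collected what users’ (and their friends’) existing settings already permitted, which is exactly why Facebook argues the decree does not apply.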

#4: Facebook thinks that a better user interface will solve its privacy problems

To a certain degree, you have to give Facebook some credit. In the aftermath of the Washington congressional testimony in April, Facebook has made concerted efforts to change the way people adjust their privacy settings. The company has worked to re-design the privacy menu, making the process less opaque and more visual. In other words, it has become easier for people to understand what they are sharing with others, and what they are not.

And Facebook proudly touts this new privacy design in many of its responses. In fact, in several different responses, Facebook specifically mentions its worldwide “Design Jams” – basically, meetups of lawyers, technologists and privacy experts – as evidence that the company is taking privacy seriously. Moreover, Facebook also mentions its collaboration with innovation labs to “improve the user experience around personal data.”

On the surface, this would seem to be a good thing. It would show that Facebook has really committed to re-designing the whole Facebook experience around data privacy. In fact, in one response near the end of the document (page 195 of 229), Facebook notes that, “Protecting people’s information is at the heart of everything we do.” But when you read through the responses, it quickly becomes evident that the “Design Jams” have been going on for nearly 18 months (well before Zuckerberg’s testimony), and that Facebook is basically just cobbling together any evidence it can to show that it has not been asleep at the wheel for the past few years. (In legal parlance, Facebook is trying to prove that it has not been negligent.)

#5: Facebook has never turned off a feature because of privacy concerns

At one point in the questions, a lawmaker asks, “Has Facebook ever launched a feature that had to be turned off because of its privacy implications?” This would seem to be a simple “Yes/No” question, but Facebook spends more than two paragraphs explaining its corporate commitment to privacy. It again invokes the tagline “Protecting people’s information is at the heart of everything we do,” and then launches into a summary of all the “cross-functional, cross-disciplinary efforts” underway at the company to protect privacy.

But what the company never actually does is provide the “Yes/No” response that people want. Being able to answer “yes” would show that Facebook really is taking a proactive stance when it comes to data privacy. A long, obfuscating answer, though, provides much less clarity and comfort, and suggests that Facebook has never turned off a feature due to privacy concerns.

#6: Facebook is not fully applying its expertise around the user experience to the design of a better privacy policy

If you use Facebook regularly, you probably know that the company spends a lot of time on getting the user experience right. The company employs just about every resource at its disposal to make sure that “engagement” on the site is high, that users are visiting Facebook frequently, and that advertisers see plenty of return on their advertising dollars. So, it’s only natural to ask: Why isn’t Facebook also taking the same kind of “all hands on deck” approach to its privacy policy and the Terms of Service?

Facebook’s answer in the responses is somewhat evasive. The company says that “we use in-product controls and education” to educate users about privacy, and suggests that more explicit steps (what Facebook refers to as “on-demand controls”) are not really necessary. In other words, Facebook users will figure things out as they go along, without the need for any help or guidance from Facebook.

And then, just to provide legal cover for what many lawmakers have called a confusing and obfuscating Terms of Service policy, Facebook notes that it is taking new steps to make its on-demand controls “clearer, more visual and easy to find…” To back that up, Facebook again cites its worldwide “Design Jams” as evidence that it is really tapping into the world’s very best UX and behavioral experts. But will it be enough to satisfy Congress?

#7: Facebook is working on some really creepy face-tracking and emotion-tracking technology

When you really dig into the Facebook responses, you encounter all kinds of insights into what Facebook is developing for the future. Congressional lawmakers, for example, uncovered two patents issued to Facebook – one for “Dynamic Eye Tracking Calibration” and one for “Techniques for Emotion Detection and Content Delivery” – and demanded to know what these had to do with the Facebook experience.

Some of the technology, even when explained by Facebook in very basic legal language, sounds creepy. For example, Facebook is working on technology to display content based on emotion type. If you are sad and depressed, presumably, Facebook will try to show you content that will cheer you up. (Maybe a cute cat photo from a friend?) The creepy part, though, is how Facebook plans to do this. One plan calls for the camera in your digital device to take a photo or scan of your face. Once this has been done, a Facebook algorithm can determine your mood.
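
As a rough illustration of the pipeline the patent describes, here is a hypothetical sketch. Every function name, emotion label, and content category below is an assumption invented for this example; the patent does not disclose an implementation.

```python
# Hypothetical sketch of an emotion-based content-delivery pipeline.
# All names and labels are assumptions; nothing here is Facebook's code.

def classify_mood(face_image: bytes) -> str:
    # Stand-in for a face-analysis model: a real system would run the
    # camera frame through a trained classifier to get an emotion label.
    return "sad"  # stubbed result for illustration

def rank_content(candidate_posts: list[dict], mood: str) -> list[dict]:
    # Boost posts tagged with a mood-appropriate category, e.g. uplifting
    # content (the proverbial cat photo) for a user classified as sad.
    boost = {"sad": "uplifting", "happy": "social"}.get(mood)
    return sorted(
        candidate_posts,
        key=lambda post: post.get("category") == boost,
        reverse=True,
    )

posts = [{"id": 1, "category": "news"}, {"id": 2, "category": "uplifting"}]
print(rank_content(posts, classify_mood(b"")))  # uplifting post ranked first
```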

#8: Facebook is working on AI technology with data privacy implications

Futuristic technology and its potential implications for data privacy were obviously on the minds of Washington lawmakers, and one of the questions specifically pressed Facebook on how it was using AI technology. As Facebook notes in its response, it is using AI to combat hate speech. The goal, says Facebook, is to create “AI systems that are more transparent” than they are now. In other words, Facebook agrees that users should be able to figure out how an algorithm or AI program arrived at its final answer, instead of just confronting a nameless, soulless black box.
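
Facebook doesn’t say what “more transparent” would look like in practice, but one common approach is to report which inputs drove a classification alongside the verdict. The sketch below is a hypothetical illustration of that idea using per-token weights; the weights, threshold, and function name are all invented for the example and are not Facebook’s actual systems.

```python
# Hypothetical sketch of a "transparent" classifier: it returns its verdict
# together with the tokens that contributed most to the score, so a user
# can see *why* a post was flagged. The weights are invented for this
# example; a real system would learn them from data.

TOKEN_WEIGHTS = {"hate": 0.9, "attack": 0.6, "love": -0.7, "friend": -0.4}
THRESHOLD = 0.5

def classify_with_explanation(text: str) -> tuple[bool, list[tuple[str, float]]]:
    contributions = [(t, TOKEN_WEIGHTS.get(t, 0.0)) for t in text.lower().split()]
    score = sum(weight for _, weight in contributions)
    top = sorted(contributions, key=lambda c: abs(c[1]), reverse=True)[:3]
    return score >= THRESHOLD, top

flagged, reasons = classify_with_explanation("why do they hate and attack us")
print(flagged, reasons)  # True, with "hate" and "attack" as the main drivers
```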

#9: Facebook makes no apologies for collecting and analyzing user data across devices

In response to a question about “cross-device tracking,” Facebook admits that it does “associate information across different devices.” In other words, if you use Facebook on your mobile phone and then decide later to use Facebook on your digital tablet, the app on your tablet will know what your app on your smartphone knows. Makes sense, right?

As Facebook is quick to point out, this is not done for any nefarious purposes. It is done for one purpose only – “to provide a consistent experience” across devices. If you’ve already read something in your news feed once, why should you see it again when you open up Facebook on another device?
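
Mechanically, this kind of consistency just requires keying state to the account rather than to the device. Here is a minimal sketch of the idea; the data structure and function names are assumptions for illustration, not Facebook’s internals.

```python
# Minimal sketch of cross-device state: "seen" stories are recorded per
# account, not per device, so every device logged into the same account
# shares the read state. Names are assumptions for illustration.

from collections import defaultdict

seen_stories: dict[str, set[str]] = defaultdict(set)

def mark_seen(user_id: str, story_id: str) -> None:
    seen_stories[user_id].add(story_id)

def fresh_stories(user_id: str, feed: list[str]) -> list[str]:
    return [story for story in feed if story not in seen_stories[user_id]]

mark_seen("alice", "story-42")                           # read on the phone
print(fresh_stories("alice", ["story-42", "story-43"]))  # tablet gets only story-43
```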

However, it’s easy to see how some might try to position this cross-device tracking as evidence that Facebook has become some kind of all-seeing, Orwellian corporation tracking you everywhere you go. That seemed to be the major thrust of several different questions posed by the U.S. lawmakers, including several questions about the ability of Facebook to create “shadow profiles” of users.

#10: Facebook’s “Data Abuse Bounty” program applies to unauthorized access to data, not data-sharing arrangements

Much has been made of Facebook’s new Data Abuse Bounty program, and how it could lead to a crackdown on data abuse within the Facebook ecosystem. However, in light of the news that Facebook has signed data-sharing partnerships with major hardware device manufacturers, it’s natural to ask: To what extent does the new bounty program apply to these data-sharing partnerships?

Facebook’s polite answer to that question is that the bounty program is not designed to uncover rogue data-sharing partnerships. Instead, says Facebook, “The [bounty program] will reward people with first-hand knowledge and proof of cases where a Facebook platform app collects and transfers people’s data to be sold, stolen, or used as part of a scam…” That is, Facebook is perfectly OK with users finding and reporting rogue apps and rogue developers – but it really doesn’t want to hear about its data-sharing partnerships with other tech companies.

There’s obviously a lot to digest in this 229-page response from Facebook. While some of the questions posed by U.S. lawmakers seemed to go wildly off on a tangent (such as questions about “Russian bots” and U.S. election interference), it’s clear that Washington is starting to appreciate and understand the big picture of user privacy as it relates to Facebook.

The big question is whether these comprehensive responses from Facebook will alleviate lawmakers’ concerns – or spur even more questions. For now, it appears that Facebook has dodged a bullet. If these answers are deemed convincing enough, legislators may let Facebook proceed with its own version of self-regulation until it finally figures out social media privacy on its own, without the help of the U.S. government.