In an effort to get out in front of the data privacy issues threatening to engulf the company, Facebook recently announced a new data abuse bounty program. The program promises to pay people who report data abuses, similar to the company’s bug bounty program, which has existed since 2011. But is this new data abuse bounty program actually going to result in any real changes to data privacy on Facebook?
The pros of the data abuse bounty program
On one hand, of course, any move by the Silicon Valley tech giant to protect user privacy has to be seen as a positive. Facebook has over two billion users, and if the company can successfully mobilize the crowd, it might be able to prevent or eliminate data abuses by third-party developers and websites. In other words, it might be able to prevent another Cambridge Analytica scandal from happening.
And, to properly incentivize the crowd, Facebook has committed to paying at least $500 to anyone who reports a legitimate case of data abuse (and, possibly, as much as $40,000 per report). In terms of payouts, Facebook says it will determine the amount of the bounty based on three key factors – impact, data exposure, and number of affected users.
Presumably, someone who spotted the Cambridge Analytica data abuse situation in advance would have maxed out at the full $40,000 bounty – a figure that might sound impossibly high to the casual Facebook user, but also a figure that is considerably lower than any fines or penalties the company might be facing from government regulators.
One more thing is notable about the data abuse bounty program – it will specifically cover third-party apps and websites. The current Facebook bug bounty program does not. In other words, if a security researcher had brought the Cambridge Analytica situation to the attention of Facebook a few months ago, the company would have been under no obligation to pay out a bounty for that “bug” in the system. But the data abuse program does apply to third-party apps – such as the Facebook quiz app that ultimately led to the demise of Cambridge Analytica.
The cons of the data abuse bounty program
On the other hand, there’s something about the data abuse bounty program that suggests Facebook is trivializing the whole matter of data privacy, treating privacy issues just like any other “bug” in the system. In fact, Facebook has already stated that the data abuse bounty program will be an extension of its existing bug bounty program. So this is not really a new workflow for Facebook – it’s business as usual, with the existing program simply expanded to cover instances of data abuse.
Another question is whether Facebook’s two billion global users can unite around one common issue – data privacy – and systematically discover all the data abuse “bugs” hiding in the system. Keep in mind – the average Facebook user is someone who enjoys posting photos of cats and reading stories about celebrities. Is that really the type of person who is going to suddenly morph into a white hat security researcher, diligently testing the system for privacy issues and data abuse vulnerabilities and then bringing them to the attention of Facebook?
Facebook would like to convince you that the answer is “yes.” They don’t want to hire people full-time to work on privacy issues – it’s much better (and cheaper) if they can just outsource this pesky little problem to the crowd, right? They will, no doubt, tell Washington regulators that they are on top of the problem, and will claim they have instituted a fabulous new data abuse bounty program. They might even be able to highlight a few great examples of bounties that users, researchers, or developers have collected.
But guess what? Most people are too lazy to change their passwords, let alone their privacy settings. And now you want them to be testing Facebook apps for security vulnerabilities? No, the types of people who take Cambridge Analytica-style personality quizzes are not the types of people who test social media platforms for security vulnerabilities.
The perils of self-regulation for data privacy
If you read through all the rules of the Facebook bug bounty program (available at facebook.com/whitehat), one pillar of the program is what the company refers to as the “Responsible Disclosure Policy.” In it, Facebook specifically notes that anyone reporting a bug must “give us reasonable time to investigate and mitigate an issue you report before making public any information about the report or sharing such information with others.” This is literally Item No. 1 in the Responsible Disclosure Policy and one of the first things you see when you view the terms of the bug bounty program. “Sounds reasonable,” you might be thinking to yourself. What could possibly be wrong with that?
Well, this is the reason why self-regulation is inherently flawed. The goal of any company – not just Facebook – is to keep bad news and negative reports from becoming public. The last thing any company wants is for rumors to circulate publicly that there are privacy issues with its system. That might scare off users, cause investors to lose confidence, and attract the unwanted attention of regulators. Instead, companies would rather handle things privately. And that’s especially true when it comes to security breaches. In cases involving ransomware, for example, some companies have preferred to pay the ransom quietly rather than disclose the breach to the public.
Moreover, there is another reason why self-regulation of data privacy is inherently flawed – it pushes responsibility for any data abuses and privacy issues away from Facebook and onto its users and its developers. In comments about the data abuse bounty program, Facebook specifically noted that the focus of the program would be “misuse of data by app developers.” That makes sense, right? The goal of Facebook is to create the perception that Facebook, by and large, is OK. It’s all the bad actors out there – like Cambridge Analytica – that are to blame.
That might be the case, but it also presupposes something else – that the entire business model of Facebook, in which third-party developers and websites are encouraged to tap into the Facebook social graph, is OK. One of the points brought up in Washington hearings, though, was that there is something fundamentally broken with the Facebook business model, in which two billion users around the world are encouraged to share as much personal information about themselves as possible so that Facebook can make money from ads.
The fact that Facebook has been so enthusiastic about embracing the new data abuse bounty program most likely means that it is expedient to do so. It’s a lot easier, for example, than ripping up an entire business model based around using and profiting from user data and coming up with something entirely new.