Meta logo on device screen showing GDPR fine for data scraping

€265 Million GDPR Fine for Meta Over Data Scraping Conducted Prior to 2020

The Irish Data Protection Commission (DPC) has handed down a €265 million (about $270 million) GDPR fine to Meta over data scraping conducted on Facebook, taking the social media giant to task for failing to have adequate protections in place.

The complaint that led to the GDPR fine was filed in April 2021, but refers to the abuse of several Facebook and Instagram tools between May 2018 and September 2019. The tools in question have since been updated to disallow data scraping.

Relatively quick action for Irish DPC as substantial GDPR fine is levied

If the Irish DPC has become known for anything, it is taking a very long time to process cases involving the big tech firms that are typically headquartered in Dublin. Given that the investigation was opened about a year and a half ago, the pace was relatively fast this time. The regulatory body also seems to be quickest to penalize the Meta family of companies, as this GDPR fine brings the recent total it has taken from the company to €912 million.

In this case, Meta was found in breach of Articles 25(1) and 25(2) of the GDPR. These provisions govern data protection by design and default, establishing fundamental principles and processes that companies operating in the bloc are obligated to follow.

The GDPR fine was sparked by a round of media reports in early 2021 documenting how the personal data of over 530 million Facebook users was left open to data scraping for an extended period thanks to faults in certain tools. Contact importers used by both Facebook and Instagram were flawed in this way, allowing for data scraping via the mass entering of phone numbers. Phone numbers that were a match to those listed in user contacts could be exploited to return associated names, Facebook IDs and other information posted to user profiles without the knowledge or consent of the user being scraped.

So why was only about a sixth of Facebook’s global user base exposed? The data scraping trick seemed to only work well in countries with relatively small populations (around 10 million people or fewer). Belgium was hit particularly hard, with the contact information of celebrities located and exposed using the trick.

The fault tied back to Facebook having two non-obvious privacy settings governing how this information could be looked up by others. The primary profile privacy setting that most users are familiar with allows them to make their phone number and other contact information only visible to them. However, to opt out of being looked up in the way the data scraping trick allowed for, users had to toggle a second (and less obvious) “Who can look me up” setting. During at least some of the period in which all of this took place, Facebook did not offer an “only me” option under this setting. And after the setting was implemented, accounts would still default to allowing “everyone” to look up contact information via phone number until the user found it and changed it.
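The interplay between a hidden discoverability setting and a permissive default can be illustrated with a toy sketch (hypothetical names, not Facebook's actual code): a reverse-lookup service that honors a per-user "who can look me up" flag. Data protection by design and default (Article 25) would mean the restrictive value is the default; the scraping was possible because, in effect, the permissive value was.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    phone: str
    # Article 25(2) calls for privacy-protective defaults; the scraped
    # accounts effectively defaulted to "everyone" until users found
    # and changed the setting themselves.
    who_can_look_me_up: str = "only_me"  # privacy-by-default

class ReverseLookupService:
    """Toy reverse phone lookup, showing why the default value matters."""

    def __init__(self, users):
        self._by_phone = {u.phone: u for u in users}

    def lookup(self, phone: str):
        user = self._by_phone.get(phone)
        if user is None or user.who_can_look_me_up != "everyone":
            return None  # not discoverable by phone number
        return user.name

users = [
    User("Alice", "+3230000001"),  # protected by the restrictive default
    User("Bob", "+3230000002", who_can_look_me_up="everyone"),
]
svc = ReverseLookupService(users)

# Mass-entering candidate numbers only "matches" users whose setting
# (or default) allows everyone to look them up.
hits = {p: svc.lookup(p) for p in ("+3230000001", "+3230000002")}
print(hits)  # {'+3230000001': None, '+3230000002': 'Bob'}
```

With the defaults flipped, as they were during the affected period, every user in the dictionary becomes enumerable by phone number until they opt out by hand.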

The issue came to a head when a “trove” of over 530 million Facebook user profiles obtained via data scraping was made available on the dark web. The relatively quick progress of the GDPR fine appears to be owed to a lack of disagreement among other EU regulators over the fine amount and remediation terms initially proposed by the Irish DPC. Meta told the media that it is reviewing the decision and has not yet committed to an appeal.

Facebook found negligent in creating safeguards against data scraping trick

Chris McLellan, director of Operations at the non-profit Data Collaboration Alliance, notes that though Meta is racking up GDPR fines in the EU at a torrid pace, this is still not likely to prompt the sort of permanent change that eliminates these issues at the design level: “Virtually all the power over personal data collection, use, and access resides with application owners and digital service providers. This has been allowed to persist for decades due to a general lack of concern on the part of citizens and consumers. But things are starting to change as people wake up to the fact that the battle for control over their information is actually of central concern to the future of their lives, communities, and children.”

“But regulations are just a starting point. Let’s face it – we’ve all become addicted to the conveniences offered by personal and business applications, and that’s unlikely to change any time soon. And the predicted transition to more virtual experiences rather than traditional apps doesn’t change this one bit. The way apps manage data is the real problem in establishing the level of control necessary for enforcing outcomes like those outlined in GDPR and California’s CCPA. Sensitive and other information is fragmented into databases, which then get copied at scale through a process known as data integration. This is completely at odds with the global movement towards increased data privacy and data protection. Bottom line: If we want to get serious about data protection and data privacy, we need to think seriously about changing the way that we build apps. We need to accelerate the use of new frameworks like Zero-Copy Integration and encourage developers to adopt new technologies like dataware and blockchain – all of which minimize data and reduce copies so that the data can be meaningfully controlled by its rightful owner. Until then, the endless parade of fines and regulatory show trials – or any attempt to mitigate the underlying chaos that defines the current state of personal information – are doomed to fail,” added McLellan.

Mike Parkin, Senior Technical Engineer at Vulcan Cyber, thinks that GDPR fines won’t be a true motivating factor until they are coupled with similar fines from all over the world: “As we have seen from other recent fines against tech companies, regulators in Europe, especially European Union member countries, take privacy seriously. Much more so than regulators in the US do. Given Meta’s history with user data privacy, it seems they got off reasonably light. Companies that are used to operating with minimal concern for user data privacy need to understand that we’ve been moving towards stronger protections and user rights for some time, especially in Europe. If they aren’t making good faith efforts to protect that user data, they may face serious financial impacts if threat actors manage to get it. The fines are even worse when the organization isn’t making an effort to comply with the regulations, and loses data to simple web scraping.”

And Andrew Barratt, Vice President at Coalfire, agrees that this incident is a call for greater attention to be paid to how databases are secured and tested: “Only share data that doesn’t require authentication on the assumption that it can and will be scraped. Where this is needed it should be done with robust APIs that are well protected and subject to frequent application security testing to ensure that it’s not open to abuse. The challenge with this is doing ‘more than just a pentest’: instead of just looking for vulnerabilities, the testing needs to think about ways an API could be leveraged to extract more data without necessarily compromising a vulnerability, alongside the attacks that normally break the API and compromise underlying systems/data.”
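One way to go beyond a vulnerability scan, in the spirit of Barratt's comment, is to test how an API behaves under a high volume of individually well-formed requests. Below is a minimal sketch (the endpoint, class names, and rate-limit threshold are all hypothetical) of an abuse-style test that simulates an enumeration run against a contact-lookup endpoint and checks that throttling kicks in before the sweep completes:

```python
import itertools

class LookupAPI:
    """Stand-in for a contact-import endpoint with a simple per-client rate limit."""

    MAX_LOOKUPS_PER_CLIENT = 100  # hypothetical abuse threshold

    def __init__(self):
        self._counts = {}

    def lookup(self, client_id: str, phone: str):
        count = self._counts.get(client_id, 0)
        if count >= self.MAX_LOOKUPS_PER_CLIENT:
            return {"status": 429}  # throttled: bulk enumeration blocked
        self._counts[client_id] = count + 1
        return {"status": 200, "match": None}  # individually legitimate request

def bulk_enumeration_is_throttled(api: LookupAPI) -> bool:
    """Abuse-style test: every request is well-formed on its own; only the
    volume is abusive. A scan focused on classic vulnerabilities (injection,
    broken auth, etc.) would not flag an endpoint that fails this check."""
    candidates = (f"+32300{n:05d}" for n in itertools.count())
    for phone in itertools.islice(candidates, 500):
        if api.lookup("scraper-bot", phone)["status"] == 429:
            return True  # throttled before the sweep finished
    return False  # the API let the entire enumeration run through

print(bulk_enumeration_is_throttled(LookupAPI()))  # True
```

The design point is that the test asserts on aggregate behavior (does the 500-number sweep get cut off?) rather than on any single request, which is exactly the dimension a conventional pentest tends to miss.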


Meta has several ongoing EU investigations that could potentially lead to regulatory action, and it is presently appealing a €405 million fine it received this year for failing to keep children’s data sufficiently private by allowing them to open Instagram business accounts that did not verify their age. The company is also engaged in a long-running legal battle over transfers of EU resident personal data to its servers in the US, an issue over which it has at times threatened to pull out of the bloc.