It has been a difficult year for Silicon Valley’s top tech giants, and the latest company to come under closer public scrutiny is Google. After a Wall Street Journal report about a data breach that may have affected up to 500,000 users of the popular Google+ social network, the company admitted that it had become aware of the breach back in March 2018 but had failed to disclose it to users or regulators.
Details of the Google data breach
The Google data breach was uncovered in early 2018 during an internal audit (called Project Strobe) of the company’s social platform to see if third-party developers had access to any user data from Google accounts. As auditors discovered, the vulnerability existed between 2015 and March 2018, and as many as 438 third-party apps using the API for Google+ may have had access to private user data, including email addresses, age, gender, images, places visited and occupation information. That type of data, of course, represents a potential treasure trove of information not just for developers, but also for marketers, advertisers and potentially malevolent hackers.
At the time of the suspected data breach affecting Google profile data, Facebook executives were coming under heavy public criticism for their handling of the Cambridge Analytica data scandal, and top Google executives apparently made the decision that it would be better to keep the Google data breach private rather than disclose it publicly. In memos uncovered by the Wall Street Journal, Google executives privately worried that this data breach involving Google accounts might lead to a PR nightmare, and potentially even lead to regulatory scrutiny and Google being pulled even deeper into the Facebook scandal.
Google’s attempts at damage control
Now that the Google data breach has become public knowledge, Google plans on shutting down the consumer Google+ product (but not the enterprise version) by August 2019. In addition, the company has disclosed plans to implement even stronger privacy controls for the company’s other consumer-facing products, including Gmail.
In a corporate blog post, Ben Smith, VP of engineering at Google, noted that the Google data breach did not meet the various criteria that the company uses to determine whether a data breach should be made public. For example, as far as Google can determine, third-party software developers using the API for Google+ may not even have been aware of this vulnerability. Moreover, there is no evidence that any developer misused the data in any way or had access to sensitive private data. Without evidence of misuse, the company decided to keep this Google data breach a private matter rather than risk a PR firestorm.
Paul Bischoff, privacy advocate at Comparitech.com, finds these arguments less than compelling: “In my view, Google is basically pleading ignorance in order to shield itself from legal ramifications. It has conveniently left out some crucial figures in its response that would give us a clearer picture of the scope of this incident.”
Bischoff also points to Google’s failure to follow up on the audit findings: “For example, Google says 438 applications had unauthorized access to Google+ profile data, but it doesn’t say how many of its users used those apps. And while Google says it performed a cursory investigation and found nothing suspicious, it also notes that it didn’t actually contact or audit any of the developers of those apps.”
Implications of the Google data breach
The fact that Google did not disclose or follow up on the incident earlier raises serious concerns, both for regulators and consumer privacy advocates. Have we reached a point where the big Silicon Valley tech giants are “too big to trust”?
The big-picture view, of course, is that the biggest Silicon Valley giants have engaged in the same pattern of behavior: they have created products without first ensuring that they offer a suitable level of privacy protection, and then, when flaws or exploits have been discovered in those products, they have sought to control public disclosure of those flaws. When journalists or other third-party sources find out about those data breaches, these companies then explain that nobody was hurt, nothing bad happened, and that users should continue to trust them. The standard mantra has been that “self-regulation” rather than government regulation is the best possible strategy to fix these problems.