Google Data Breach Raises New Concerns About Silicon Valley Privacy Practices

It has been a difficult year for Silicon Valley’s top tech giants, and the latest company to come under closer public scrutiny is Google. After a Wall Street Journal report about a data breach that might have impacted up to 500,000 users of the popular Google+ social network, the tech company admitted that back in March 2018, it became aware of this data breach, but failed to disclose it to users or regulators.

Details of the Google data breach

The Google data breach was uncovered in early 2018 during an internal audit (called Project Strobe) of the company’s social platform to see if third-party developers had access to any user data from Google accounts. As auditors discovered, this Google data breach existed between 2015 and March 2018, and as many as 438 third-party apps using the API for Google+ may have had access to private user data, including email addresses, age, gender, images, places visited and occupation information. That type of data, of course, represents a potential treasure trove of information not just for developers, but also for marketers, advertisers and potentially malevolent hackers.

At the time the breach affecting Google+ profile data was discovered, Facebook executives were coming under heavy public criticism for their handling of the Cambridge Analytica data scandal, and top Google executives apparently decided it would be better to keep the Google data breach private rather than disclose it publicly. In memos uncovered by the Wall Street Journal, Google executives privately worried that this data breach involving Google accounts might lead to a PR nightmare, invite regulatory scrutiny and drag Google even deeper into the Facebook scandal.

Google’s attempts at damage control

Now that the Google data breach has become public knowledge, Google plans on shutting down the consumer Google+ product (but not the enterprise version) by August 2019. In addition, the company has disclosed plans to implement even stronger privacy controls for the company’s other consumer-facing products, including Gmail.


In a corporate blog post, Ben Smith, VP of engineering at Google, noted that the Google data breach did not meet the various criteria the company uses to determine whether a data breach should be made public. For example, as far as Google can determine, third-party software developers using the Google+ API may not have even been aware of the bug. Moreover, there is no evidence that any developer misused the data in any way or had access to sensitive private data. Without evidence of misuse, the company decided to keep the Google data breach a private matter rather than risk a PR firestorm.

Paul Bischoff, privacy advocate at Comparitech.com, finds these arguments less than compelling, “In my view, Google is basically pleading ignorance in order to shield itself from legal ramifications. It has conveniently left out some crucial figures in its response that would give us a clearer picture of the scope of this incident.”

Bischoff also points to the failure of Google to follow up on the audit findings, “For example, Google says 438 applications had unauthorized access to Google+ profile data, but it doesn’t say how many of its users used those apps. And while Google says it performed a cursory investigation and found nothing suspicious, it also notes that it didn’t actually contact or audit any of the developers of those apps.”

Implications of the Google data breach

The fact that Google did not disclose or follow up on the incident earlier raises serious concerns, both for regulators and consumer privacy advocates. Have we reached a point where the big Silicon Valley tech giants are “too big to trust”?

The big-picture view, of course, is that the biggest Silicon Valley giants have engaged in the same pattern of behavior: they have created products without first ensuring that they offer a suitable level of privacy protection, and then, when flaws or exploits have been discovered in those products, they have sought to control public disclosure of those flaws. When journalists or other third-party sources find out about those data breaches, these companies explain that nobody was hurt, nothing bad happened, and that users should continue to trust them. The standard mantra has been that “self-regulation,” rather than government regulation, is the best possible strategy to fix these problems.

But this mantra is losing favor with top tech experts. As Pravin Kothari, CEO of CipherCloud, notes, the key is to focus on the pattern of behavior at top tech companies, “Google’s failure, if true, to not disclose to users the discovery of a bug that gave outside developers access to private data, is a reoccurring theme. We saw recently that Uber was fined for failing to disclose the fact that they had a breach, and instead of disclosing, tried to sweep it under the rug.”

As Kothari notes, tech companies are understandably making every effort possible to avoid disclosure for business reasons, “It’s not surprising that companies that rely on user data are incented to avoid disclosing to the public that their data may have been compromised, which would impact consumer trust. These are the reasons that the government should and will continue to use in their inexorable march to a unified national data privacy omnibus regulation.”

New efforts to force responsible data privacy

Until March 2018, taking tech companies at their word was the status quo. But this year’s congressional hearings in Washington, in which Facebook CEO Mark Zuckerberg was grilled in public, seem to have changed the way both users and members of Congress view companies like Facebook and Google. Comparisons to Facebook are now all too common when it comes to Google’s cavalier approach to protecting and safeguarding personal data. In the minds of many people, Google is no different from Facebook, and that raises serious questions about what is being done to safeguard data at other companies.

As a result, various proposals have now been floated publicly that go well beyond a passive policy of self-regulation. The easiest solution, of course, is strict new federal regulation that builds on or expands state laws, such as those that exist in California right now. Under federal laws currently on the books, Google had no legal obligation to report the data breach. And even under the California legal code, Google did not break the law by not reporting the breach, since that law applies to data such as Social Security numbers and driver’s license numbers, not to the type of information that Google+ users were sharing with each other.

Another solution that has been proposed is the creation of Federal Trade Commission (FTC) “privacy monitors,” who would be charged with ensuring that the biggest Silicon Valley firms abide by data privacy guidelines and ethical practices. This would expose the big tech giants to unwanted regulatory scrutiny at very close range. Most immediately, it would raise the question of whether the same flaws found in the consumer versions of Google products are also present in the company’s enterprise-class products.

Finally, the most drastic and draconian solution that has been proposed involves splitting up Google and Facebook into smaller businesses. If technology platforms have grown so large that they can no longer regulate themselves, is it now time to break them into smaller companies where it will be possible to monitor and respond to data breaches in a more timely manner?

According to Colin Bastable, CEO of Lucy Security, the time has come for tougher regulation, “Google’s understandable desire to hide their embarrassment from regulators and users is the reason why states and the feds impose disclosure requirements – the knock-on effects of security breaches are immense.”

The future of data privacy at big tech giants

At a time when Google is already dealing with other alleged abuses, such as tracking user locations on Android phones without user knowledge, and giving third-party email apps access to information within Gmail accounts, the onus is clearly on Google to clean things up before the government gets involved more closely. As Bastable notes, “The risk of such a security issue is shared by all of the Google users’ employers, banks, spouses, colleagues, etc. But I guess we can trust them when we are told there was no problem.”


But just don’t expect anything too dramatic to happen between now and the midterm congressional elections in November. At the beginning of 2019, though, we might see Google CEO Sundar Pichai making an appearance before Congress, much as Facebook CEO Mark Zuckerberg did a year earlier, to answer questions about the Google data breach. At some point, the thinking goes, regulatory interest in these companies will grow to a point where the status quo is simply no longer acceptable and stricter regulatory steps must be taken.
