The Silicon Valley motto, “Move fast and break things,” is alive and well in the era of Big Data, artificial intelligence and machine learning. But moving fast can also lead to stubbed toes and worse, as recent privacy troubles at major tech leaders such as Facebook and Uber demonstrate.1 While certain economic sectors may be enjoying some relaxation of regulations in the current political climate, the technology sector is facing new regulations and increased calls for further regulation, in particular with respect to data privacy and security.2 Regulators are issuing new rules that are likely to cost more when broken by data-driven businesses that like to move fast. Emerging technology companies that plan to succeed in what’s been called the “algorithmic society”3 and by others the “surveillance economy”4 should pay attention to this tightening regulatory environment and be prepared for more questions about privacy and security from investors, strategic partners, consumers and, possibly, regulators.
Three recently introduced laws in the U.S. and the EU are bellwethers of what may be a shift in public sentiment about data privacy in the digital economy. In March 2017, New York State put into effect the country’s most detailed and stringent state cybersecurity regulation5, directed at banks, insurers and other entities licensed by the state’s Department of Financial Services (the “New York Cybersecurity Regulation”). Although few technology companies are directly covered by this new regulation, companies that provide services to covered entities must now deal with extensive vetting and stricter contracting regimes.
In May of this year, the European Union scrapped its 23-year-old data protection Directive and upgraded to an even stronger set of data protection rules in the new General Data Protection Regulation, or “GDPR”.6 Of the many new features of the watershed GDPR, at least two stand out as being of particular interest to U.S.-based tech startups: (1) the GDPR’s jurisdictional provisions reach non-EU based companies that target data subjects in the EU (whether the goods or services are paid or free) or that profile or monitor data subjects located in the EU;7 and (2) the GDPR prohibits certain types of automated processing, including data subject profiling, unless certain strict conditions are met (one of which is a data subject’s “explicit consent”).8 In the parlance of the GDPR, “automated processing” encompasses technologies such as AI and machine learning. Emerging companies with aggressive strategies for collecting and processing personal data in European markets will need to understand whether their U.S.-based operations can bring them within the GDPR’s jurisdictional net and, if so, what they must do to comply.
Finally, this past June California enacted the most sweeping U.S. state consumer data privacy law to date, the California Consumer Privacy Act of 2018 (or “CCPA”).9 Although the CCPA will not go into effect until January 1, 2020, it creates several new rights to protect consumer control over the sale and disclosure of “personal information” (defined more broadly than in previous state privacy law), as well as certain new enforcement provisions with teeth, including a statutory damages scheme for covered entities’ failure to maintain reasonable security measures to prevent data breaches. Nor does the CCPA neatly track the GDPR, meaning that businesses hoping to collect consumer data in the EU and in California will require separate compliance strategies.
This wave of new data privacy and security laws will challenge affected startups and emerging companies, which usually have limited compliance budgets or none at all. Ignoring these developments in data privacy regulation is not an option for companies that plan to take advantage of Big Data analytics and machine learning, but panic isn’t an appropriate response either. Startups that are just sketching out or refining their business models and preparing for investor diligence can benefit by asking a handful of basic questions about how personal data will figure in the company’s operations. The answers may pay off in the form of fewer data privacy compliance headaches down the road.
1. Does the business model include a data strategy that takes account of privacy and security?
In the era of Big Data, AI and machine learning, businesses have strong incentives to treat every bit of data collected from or about consumers (no matter how trivial or incidental) as a potential asset for future analytics projects, marketing initiatives, service improvements, profiling, etc. From this perspective, no data is ever completely useless or without value. For startups whose business model is essentially based on data mining, data aggregation, analytics, predictive services, profiling, ad targeting, or data brokerage, this perspective makes sense because data is a core product or service.
The data-hungry approach describes many startup models, but it may not be appropriate for many others. Truly data-driven startups that plan to amass or tap into large amounts of personal data on consumers are committing themselves to dealing with significant and likely growing privacy regulatory frameworks, of which the new GDPR and CCPA are only the latest iterations. But companies that are not particularly data intensive can spare themselves some regulatory headaches and limit their exposure to personal or sensitive consumer data by restricting the collection and storage of such data to only what’s needed to operate the business.
Simply put, startups should determine early on just how central personal data is to the company’s mission and value, and develop risk-mitigation policies accordingly. Personal data that is not used or needed in the business, yet is stored in personally identifying form, is not an asset but a liability waiting to happen.