There has been much discussion of late about how ChatGPT will be impacted by the EU’s General Data Protection Regulation (GDPR), prompted by Italy’s preemptive ban of the service over privacy and child safety concerns. The AI juggernaut is now beginning a period of formal inspection of its GDPR compliance as Germany’s data protection watchdog joins Italy, France and Spain in interrogating the service.
An EU task force has been formed to harmonize these efforts, but at the moment individual nations are conducting their own investigations. Germany is querying ChatGPT’s GDPR compliance in terms of required access to stored personal information, its efforts to inform data subjects of their rights under the law, and how it is handling the data of minors.
Increasing scrutiny of AI GDPR compliance in Europe as Germany examines ChatGPT
As an initial step in the review, ChatGPT developer OpenAI has been handed a questionnaire regarding its GDPR compliance responsibilities and has been asked to return it to data protection authorities by June 11.
The German data regulator says that OpenAI needs to demonstrate that it is providing the public with means to view collected personal data and to have it revised or removed, as stipulated by GDPR terms. It also says that the firm must clarify to users how these rights can be exercised; transparency about this access is another GDPR requirement. The regulatory body also expressed particular concern about how the data of protected minors is being handled. All of these concerns echo those articulated by Italy, which went a step further in banning the service from the country until it can demonstrate that it is in compliance with regulations.
Though Italy was quick to ban ChatGPT, the idea of slowing AI development (and potentially losing competitive advantage) is not nearly as popular across the rest of the bloc. Most countries have not yet targeted OpenAI with special regulatory attention, and those that have are moving much more slowly. France opened a formal investigation only after receiving what it says were “several” GDPR complaints related to ChatGPT, and Spain has initiated an inquiry independently but is petitioning the European Data Protection Board to facilitate harmonized regulatory standards across the whole of the region.
In addition to answering the concerns listed on the questionnaire, OpenAI has been asked to carry out a data protection impact assessment and to determine whether data protection risks are under control.
Copyright concerns rival privacy issues as ChatGPT criticism mounts
Regulators are approaching the issue of GDPR compliance slowly, but ChatGPT may face swifter action from private parties concerned that AI training models are scraping their work.
AI chatbots can only function by training on vast datasets and by having a source to draw answers from, which means roaming freely across the internet. Users of the service have only limited ability to opt out of having their data used, and thus far the opacity of ChatGPT’s inner workings has left many questions as to how and where it obtains the material it trains on.
Content creators are increasingly concerned that the AI is scraping their work, using it without compensation, and potentially even regurgitating it for free. To that end, a large collection of German trade unions and workers’ associations has issued a call to Brussels to take up the issue of AI regulation more quickly. The European Commission created an initial set of draft rules for AI in 2022, but the process of turning them into law is expected to go on for at least some months as member states weigh in.
AI and copyright is a newly emerging area of law. Determining whether ChatGPT is in violation of intellectual property protections will vary by region, but there are some fundamental shared principles that are expected to be tested by courts. One is whether the AI simply makes its own unauthorized copies of works to store in its training set; OpenAI says that it “may” do this, and ChatGPT spitting out certain copyrighted works on command appears to prove that it does with some frequency.
The second big question will be whether ChatGPT uses the legally protected materials it trains on to create derivative works. And do “fair use” exceptions apply to an AI that is, at the end of the day, essentially a business product? GDPR compliance requirements do loosely touch on IP rights in some ways, but this element is more directly governed by a set of EU directives specific to copyright and the reproduction of works.
Other elements of harm that may fall outside of GDPR compliance terms will inevitably have to be addressed as well. The first suicide attributed to a chatbot took place in late March, when a Belgian man who was concerned about global warming took his own life after a “therapist” bot called Eliza seemingly stoked his fears and encouraged him to do so. Advocacy groups in both the US and EU are lobbying for an “AI pause” while these harms are evaluated and regulatory safeguards are developed.