France’s lead data privacy regulator CNIL has announced that it is investigating ChatGPT privacy complaints, and the European Data Protection Board (EDPB) may also get involved at Spain’s request. This follows Italy’s temporary ban of the chatbot over concerns about data transparency and whether it is protecting the data of minors.
CNIL said that it is following up on “several” privacy complaints, and Spain is looking to place ChatGPT on the discussion schedule of the EDPB’s next plenary session.
Privacy complaints push more European regulators to confront the ChatGPT issue
ChatGPT has raised a broad variety of concerns in recent months, from its impact on employment to potential attacks on humanity by a rogue AI. But privacy complaints are becoming one of the more immediate issues, as regulators note that the way large language models operate is not necessarily compatible with existing data handling regulations.
Nowhere is this more true than in the EU, which has the most robust and long-established rules in the form of the General Data Protection Regulation (GDPR) and the ePrivacy Directive. But EU nations already appear to be divided on how to handle ChatGPT.
Though France’s CNIL has only just announced its investigation of privacy complaints, France’s Digital Minister Jean-Noël Barrot has already gone on record to say that there are no plans to ban the chatbot, even though he personally feels that it is currently not in compliance with the GDPR. However, the decision to ban ChatGPT would ultimately lie with CNIL.
The request by Spain’s data protection agency appears to be more precautionary and proactive, in the interest of coordinating EU GDPR actions involving AI models while they are still in their infant stage. It remains unknown whether the EDPB will take the topic up, however, as it does not publicly comment on the content of upcoming meetings. Spanish regulators say they have not yet received any privacy complaints regarding ChatGPT.
ChatGPT privacy risks include stored data, harm to minors
Italy’s ban cited the fact that ChatGPT is not transparent enough about how it collects and stores data, though the service claims that it identifies and eliminates personal information that is fed to it. It also noted the lack of a mechanism to determine the user’s age, thus providing no guarantee that the data of minors was being handled with required special protections.
Those are two of the foremost privacy concerns, but they are far from the only ones. The issue of employees feeding sensitive internal data into ChatGPT has already arisen, as Samsung workers were found to be entering sensitive source code and the contents of internal meetings. The worst-case scenario would be someone handling very sensitive personal data, such as health or financial information, deciding to lighten their workload by feeding it to ChatGPT, where it could be retained and potentially used to train the model. Right now there is little to stop this from happening besides internal company policies and vigilance.
ChatGPT user history also provides an indirect window into the user’s thoughts, and potentially sensitive demographic information about them. A spotlight was put on this threat several weeks ago when a misconfiguration in the chatbot’s cache system allowed users to see chat titles belonging to other, random users.
It is also not entirely clear how ChatGPT is scraping its training data from the internet, and it could be collecting personal information from websites and social platforms that users did not intend to be shared with the general public. Researchers have already found that it can be prompted to reproduce protected intellectual property, something that could create legal issues for the company far beyond the content of privacy complaints.
ChatGPT has forced nearly all of the world to begin grappling with the adoption of AI regulation. In the US, the Biden administration made its first move in that direction recently with a public comment period opened by the Department of Commerce’s National Telecommunications and Information Administration (NTIA). The NTIA is looking at trust and safety testing requirements, how assessments and audits will be conducted, how regulatory approaches may differ by industry, the potential use of AI for disinformation purposes, and the possibility of bias and discriminatory outcomes when these systems are used for screening or for making decisions.
OpenAI has responded to its ban in Italy by promising improvements to its data transparency and to users’ ability to view stored data and request corrections or deletions. It is not clear that this will address all of the privacy complaints and concerns surrounding the chatbot, but enough tweaking to ensure compliance with EU standards could allow ChatGPT back into Italy in just a matter of weeks. Germany is reportedly weighing its own ban, telling media sources that it has been in contact with Italy about the issue. A number of other countries have issued bans, but these are largely places where ChatGPT does not operate anyway (such as North Korea and Russia). Some, primarily China, are already bringing their own copycat language models to market.