ChatGPT has answers for almost everything, but there’s one answer we may not know for a while: will this tool turn out to be a genie its creators regret letting out of the bottle, given the unintended consequences AI carries for cybersecurity?
Austrian GDPR Complaint Claims OpenAI Refuses to Correct Potentially Libelous ChatGPT Hallucinations
Filed by data privacy crusader Max Schrems and his group "noyb," the GDPR complaint asserts that OpenAI refuses to correct ChatGPT output about individuals, offering only to filter or block requests tied to a person's name. The complaint also accuses OpenAI of failing to live up to its subject access request (SAR) responsibilities under EU rules.
As with most technological developments, there are two sides to the coin. ChatGPT may present businesses with a never-ending pool of opportunities, but the same resource can be exploited by criminals to infiltrate systems more effectively.
Copying protected works is generally a no-no, but training AI tools such as ChatGPT requires copying enormous amounts of data. The two positions appear potentially irreconcilable. This is where the “text and data mining” (TDM) exception to copyright and database rights comes in.
ChatGPT is now allowing users to disable chat history. However, any prior conversations remain logged and available to the company's AI models, and chats will still be retained for 30 days before deletion in order to "fight abuse."
Germany is questioning ChatGPT's GDPR compliance, including whether it provides the required access to stored personal information, how it informs data subjects of their rights under the law, and how it handles the data of minors.
France's CNIL said that it is following up on "several" privacy complaints, and Spain is looking to have ChatGPT added to the discussion schedule of the European Data Protection Board's Plenary.
A much-anticipated report on ChatGPT from the European Data Protection Board has found that the chatbot has made improvements in terms of data accuracy, but continues to fall short of the mark in terms of regulatory requirements.
The legal gauntlet for "generative AI" chatbots continues as OpenAI is now under FTC investigation, an action that could clarify the extent to which consumer protection laws apply to AI tools and signal the direction of future federal regulation.
By leveraging public interest in generative AI chatbots like ChatGPT and Google’s Bard, hackers are distributing novel malware on Facebook and hijacking online accounts.