A much-anticipated report on ChatGPT from the European Data Protection Board has found that the chatbot has improved in terms of data accuracy, but continues to fall short of regulatory requirements.
Austrian GDPR Complaint Claims OpenAI Refuses to Correct Potentially Libelous ChatGPT Hallucinations
Filed by data privacy crusader Max Schrems and his group "noyb," the GDPR complaint asserts that OpenAI refuses to correct inaccurate ChatGPT output about individuals and instead simply tries to filter or block requests tied to an individual's name. The complaint also accuses OpenAI of failing to live up to its subject access request (SAR) obligations under EU rules.
A ChatGPT vulnerability documented in a new report causes training data, some of it containing personal information, to randomly appear in output when the chatbot is told to repeat a particular word.
OpenAI has attributed recent ChatGPT outages to a targeted distributed denial of service (DDoS) attack. Anonymous Sudan, a hacktivist group suspected of Russian ties, has claimed responsibility.
A complaint filed in Poland alleges GDPR violations by ChatGPT in the areas of lawful basis for data processing, data access, fairness, transparency, and personal privacy.
Copying of protected works is generally a no-no. But, training of AI tools such as ChatGPT requires copying enormous amounts of data. The two positions appear potentially irreconcilable. This is where the “text and data mining” (TDM) exception to copyright and database rights comes in.
Over 200,000 OpenAI credentials are listed for sale on dark web marketplaces as interest in the generative AI chatbot peaks within the black hat community.
The legal gauntlet for "generative AI" chatbots continues as OpenAI comes under FTC investigation, an action that could settle questions about the extent to which consumer protection laws apply to AI tools and signal the direction of future federal regulation.
The focus is now turning to the cybersecurity implications of ChatGPT and other AI/machine learning (ML) platforms, especially after the recent OpenAI security incident. What are some of the key security questions organizations need to weigh before exploring new AI/ML solutions?
By leveraging public interest in generative AI chatbots like ChatGPT and Google’s Bard, hackers are distributing novel malware on Facebook and hijacking online accounts.