The UK ICO has wrapped up a preliminary investigation into Snap's AI chatbot and indicated that Snap is failing to adequately address the privacy risks it poses to children. Many concerns about AI chatbots remain unresolved, but children's privacy appears to be driving much of the early action from regulators.
Cybercriminals have inserted malicious ads into Microsoft's Bing Chat AI chatbot to trick unsuspecting users into downloading trojanized software from spoofed domains.
After being informed of the planned EU launch of Google's Bard AI chatbot, Ireland's DPC instructed Google to file a data protection impact assessment. The launch is now on hold until Google addresses the regulator's privacy concerns.
Fortune 500 companies continue to demonstrate extreme wariness of AI chatbots and similar tools in the workplace; Apple has now banned employees from using ChatGPT on work devices.
Nearly every tech company with some sort of social platform is rushing to field its own AI chatbot. Snap users are expressing concern about how the company's chatbot interacts with children, the level of access it has to personal information, and its overbearing chat behavior.
ChatGPT feeds information shared by users back into its training data, and Samsung employees have reportedly pasted proprietary source code and other sensitive data into it.
Dark web forum posts indicate that low-skill, or even no-skill, threat actors have figured out how to manipulate ChatGPT's instructions to get it to produce basic but viable malware.
Some users have found that asking ChatGPT to probe smart contracts for exploits actually turns up viable vulnerabilities. Early experiments like these suggest that AI chatbots may shake up the cybersecurity world.