The malware that the researchers were able to coax out of DeepSeek was rudimentary and required some manual code editing to make it functional. But the incident demonstrates that the guardrails preventing malicious behavior in generative AI systems remain thin.
The "day-to-day" operations of organized crime are increasingly moving online and being optimized with AI-powered tools: things like communication, payments to partners, and recruitment of new operatives.
A security breach affecting the AI aggregator platform OmniGPT has exposed the sensitive information of 30,000 individuals, including API keys, chat logs, and uploaded files.
Google Threat Intelligence Group has identified state-sponsored hackers from over a dozen countries abusing Gemini AI for cyberattacks, with Iran and China being the heaviest users.
In a move that was widely expected, U.S. lawmakers have proposed a DeepSeek ban on any and all federal government devices. The move may have been prompted by analysis of DeepSeek code that seems to show a direct connection to the Chinese Communist Party (CCP).
It’s not enough to understand how to leverage AI to improve productivity—it’s also important to understand the dangers that come along with it. Cybercriminals are already finding ways to use the technology to their own advantage, while lax AI policies are allowing data leakage to occur with worrying regularity.
As much as AI has generated excitement about the efficiencies it creates for businesses, it also presents unique challenges in the area of data privacy and security. Although still in its infancy, AI privacy litigation continues to rise as the pool of defendants diversifies and regulation intensifies.
The Biden White House continues to seek a balance between AI innovation and safe, ethical use with a new national security memorandum that stresses the need to outcompete rivals but also sets limits in the areas most prone to abuse.
The 2024 election season is facing an unprecedented challenge: AI-driven disinformation and cyberattacks. As AI’s influence grows, its ability to spread misinformation, create deepfakes, and target election systems becomes more dangerous.
With broad extraterritorial reach, significant penalties of up to seven percent of worldwide annual turnover, and an emphasis on risk-based governance, the EU AI Act will have a profound impact on U.S. businesses that develop, use, and distribute AI systems.