The Google Threat Intelligence Group has identified state-sponsored hackers from more than a dozen countries abusing Gemini AI for cyberattacks, with Iran and China being the heaviest users.
In a widely expected move, U.S. lawmakers have proposed banning DeepSeek from all federal government devices. The proposal may have been prompted by analysis of DeepSeek's code that appears to show a direct connection to the Chinese Communist Party (CCP).
It’s not enough to understand how to leverage AI to improve productivity; it’s also important to understand the dangers that come with it. Cybercriminals are already finding ways to use the technology to their advantage, while lax AI policies are allowing data leakage to occur with worrying regularity.
As much as AI has generated excitement about the efficiencies it is creating for businesses, AI is also presenting unique challenges in the area of data privacy and security. Although still in its infancy, AI privacy litigation continues to rise as the pool of defendants diversifies and regulation intensifies.
The Biden White House continues to seek a balance between AI innovation and safe, ethical use with a new national security memorandum that stresses the need to outcompete rivals but also sets limits in the most potentially abusive areas.
The 2024 election season is facing an unprecedented challenge: AI-driven disinformation and cyberattacks. As AI’s influence grows, its ability to spread misinformation, create deepfakes, and target election systems becomes more dangerous.
With broad extraterritorial reach, significant penalties of up to seven percent of worldwide annual turnover, and an emphasis on risk-based governance, the EU AI Act will have a profound impact on U.S. businesses that develop, use, and distribute AI systems.
As businesses harness the power of artificial intelligence (AI) to derive insights and streamline operations, the need for robust data privacy standards and effective governance frameworks has never been more critical.
As AI tools like Microsoft Copilot and ChatGPT Premium become more integral to business operations, CISOs and CEOs must collaborate and take proactive steps to safeguard their organizations against data leakage and other security threats.
AI and ML models, combined with collaboration between network and security teams, can address the shortcomings of legacy XDR, paving the path to more accurate detection, faster remediation, and assured business continuity.