The Biden White House continues to seek a balance between AI innovation and safe, ethical use with a new national security memorandum that stresses the need to outcompete rivals but also sets limits in the areas most open to abuse.
The 2024 election season is facing an unprecedented challenge: AI-driven disinformation and cyberattacks. As AI’s influence grows, its ability to spread misinformation, create deepfakes, and target election systems becomes more dangerous.
With broad extraterritorial reach, significant penalties of up to seven percent of worldwide annual turnover, and an emphasis on risk-based governance, the EU AI Act will have a profound impact on U.S. businesses that develop, use, and distribute AI systems.
As businesses harness the power of artificial intelligence (AI) to derive insights and streamline operations, the need for robust data privacy standards and effective governance frameworks has never been more critical.
As AI tools like Microsoft Copilot and ChatGPT Premium become more integral to business operations, CISOs and CEOs must collaborate and take proactive steps to safeguard their organizations against data leakage and other security threats.
AI and ML models, combined with closer collaboration between network and security teams, can address the shortcomings of legacy XDR, paving the path to more accurate detection, faster remediation, and stronger business continuity.
A new report from cybersecurity firm HiddenLayer finds that Google Gemini is vulnerable to prompt injection attacks, which the researchers say leave it open to "profound misuse."
The security industry was hit by a growing number of AI-powered cyberattacks in 2023, and that pace is not expected to slow in 2024. As these attacks evolve and AI infiltrates every aspect of business, here's what security leaders should resolve to do this year.
The EU's recently negotiated agreement on the AI Act is one of the world's first comprehensive attempts to govern the use of AI. Enforcement won't kick in until 2025, but IT leaders are already working to stay ahead of its requirements.
The increasing prevalence of AI is creating a more dangerous phishing environment for companies of all sizes. A single hacker can now generate as much as 100 times more malicious content than was previously possible.