The increasing prevalence of AI is creating a more dangerous phishing environment for companies of all sizes. A single hacker can now generate as much as 100 times more malicious content than they could previously.
NIST has released a guideline paper meant to give AI developers a bird's-eye view of the cyber threats that may arise during the development and early deployment of their models.
The future of data is not about how much we collect, but how ethically it is used and how we can realistically safeguard it so that we get the best out of AI without violating data privacy tenets.
The new AI security guidelines offer a general overview of expected risks and threats across the model lifecycle: from the initial design process, through development and deployment, to ongoing operation and maintenance.
While cybersecurity practitioners have uncovered many ways that predictive AI technology can benefit security teams, threat actors have been equally swift to adopt generative AI as the newest tool in their arsenal for launching sophisticated attacks.
CISA has released a roadmap establishing four broad overarching goals, along with five more specific lines of effort that indicate its concrete near-term priorities. Defensive AI cybersecurity measures and plans for critical infrastructure adoption are recurring themes.
Enterprise use of AI may expand the attack surface for cybercriminals, but leveraging AI technologies can also allow security teams to get ahead in defending against and preventing adversarial AI and AI-powered cyber threats.
A two-day international summit held in the UK has concluded with an agreement on AI safety, with 28 countries that represent most of the leading forces in AI development getting on board.
A new executive order from the Biden administration addresses a wide range of the potential harms that AI can cause, putting new safeguards in place for everything from biological materials engineering to deepfakes.
AI can become a transformative force in meeting today's compliance and security needs for GRC teams, provided organizations create a "happy path" that prevents data leakage and empowers developers to use AI safely.