With ChatGPT and other generative AI applications now widely available, research shows a 135% increase in ‘novel social engineering’ attacks in January and February of 2023 alone.
With so many organizations incorporating generative AI tools into their operations, an ethical approach to AI adoption can strengthen competitive advantage, simplify tool testing, improve end-user uptake, and more.
As attacks evolve and grow more sophisticated, the industry's response has been to adopt zero-trust architecture. With that shift, however, has come an unexpected, unwelcome guest: complexity.
While cybersecurity practitioners have uncovered many ways the technology can benefit security teams, threat actors have been just as swift to adopt generative AI as the newest tool in their arsenals for launching sophisticated attacks.
Generative AI has the potential to significantly strengthen cybersecurity defenses and enhance cyber threat intelligence, but each tool's ability to handle the job depends on whether its vendor can overcome the technology's inherent limitations.
We live in an age that values authenticity: being true to who you are and what you value. It is ironic, then, that one of the most notable innovations of recent years, the large language models behind generative AI, is in the process of undermining authenticity itself.
Generative AI models in the style of ChatGPT are being sold with promises of helping create malware, write phishing emails, set up attack sites, scan for vulnerabilities, and more. The latest such projects, DarkBART and DarkBERT, have been trained on dark web sites.
It's clear that the arrival of generative AI in the mainstream is pushing cybersecurity toward a war of algorithms against algorithms, machines fighting machines. The time to put AI into the toolkits of defenders is now.