ChatGPT has answers for almost everything, but there’s one answer we may not know for a while: will this tool turn out to be a genie its creators regret letting out of the bottle, given its unintended consequences for cybersecurity?
BlackBerry surveyed 1,500 IT decision makers across North America, the UK, and Australia, and half (51 percent) predicted we’re less than a year away from a cyberattack credited to ChatGPT. Three-quarters of respondents believe foreign states are already using ChatGPT for malicious purposes against other nations.
The survey also exposed a tension in perceptions: while respondents see ChatGPT being put to ‘good’ uses, 73 percent acknowledge its potential threat to cybersecurity and are either very or fairly concerned, evidence that Artificial Intelligence (AI) is a double-edged sword.
The emergence of chatbots and AI-powered tools presents new challenges in cybersecurity, especially when they end up in the wrong hands. There are plenty of benefits to using this kind of advanced technology, and we’re only scratching the surface, but we also can’t ignore the ramifications. As the platform matures and hackers become more experienced, attacks will become more difficult to defend against without also using AI to level the playing field.
AI-armed cyberattacks
It’s no surprise people with malicious intent are testing the waters, but over the course of this year, we expect to see hackers get a better handle on how to use ChatGPT successfully for nefarious purposes.
AI is fast-tracking practical knowledge mining, and the same is true for malware coders. The ever-evolving cybersecurity industry is often likened to a never-ending game of cat and mouse, or whack-a-mole, where the bad guys just keep popping up. In the past, these bad actors relied on their own experience, forums, and security researchers’ blog posts to understand different malicious techniques and then convert them into code; programs like ChatGPT have given them another arrow in their quiver, and they are now testing its efficacy at wreaking digital havoc.
AI can be used in several ways to carry out cyberattacks, such as automated vulnerability scanning and the development of new attack techniques. Through AI, advanced persistent threats (APTs) can carry out highly targeted attacks to steal sensitive data or disrupt operations. APTs typically involve a sustained attack on a single organization and are often launched by nation-states or highly sophisticated threat actors.
AI can also be used to create convincing phishing emails, text messages, and social media posts that trick people into providing sensitive information or installing malware. AI-generated deepfake videos can impersonate officials or organizations in phishing attacks. AI can likewise be used to launch distributed denial of service (DDoS) attacks, which overwhelm an organization’s systems with traffic to disrupt operations, or to gain control over critical infrastructure, causing real-world damage.
AI for an AI
The growing use of AI in developing threats makes it even more critical to stay one step ahead by also using AI to proactively fight threats. Organizations need to continue to focus on improving prevention and detection, and this is a good opportunity to look at how to include more AI in different threat classification processes and cybersecurity strategies.
One of the key advantages of using AI in cybersecurity is its ability to analyze vast amounts of data in real-time. The sheer volume of data generated by modern networks makes it impossible for humans to keep up. AI can process data much faster, making it more efficient at identifying threats.
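To make the idea of automated, high-volume threat triage concrete, the toy sketch below flags anomalous spikes in per-host request rates using a simple statistical baseline. This is only a stand-in for the far richer learned models real AI-driven products use, and the hosts, traffic numbers, and threshold are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=1.5):
    """Flag hosts whose request rate deviates sharply from the baseline.

    `counts` maps host -> requests per minute. A z-score above `threshold`
    marks the host as anomalous. Real AI-based detection learns from many
    features over time; this z-score check is only a simplified stand-in.
    """
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:           # no variation at all, nothing stands out
        return []
    return [host for host, c in counts.items()
            if (c - mu) / sigma > threshold]

# Hypothetical per-host request rates; one host is clearly misbehaving.
traffic = {"10.0.0.1": 120, "10.0.0.2": 110, "10.0.0.3": 130,
           "10.0.0.4": 125, "10.0.0.5": 115, "10.0.0.6": 9800}
print(flag_anomalies(traffic))  # → ['10.0.0.6']
```

The point of the sketch is scale: a human can eyeball six numbers, but a model applying the same kind of judgment across millions of events per second is what makes AI-assisted detection practical.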
As cyberattacks become more severe and sophisticated, and threat actors evolve their tactics, techniques, and procedures (TTP), traditional security measures become obsolete. AI can learn from previous attacks and adapt its defenses, making it more resilient against future threats.
AI can also help detect and mitigate APTs, which are highly targeted and often evade traditional defenses, allowing organizations to spot threats before they cause significant damage. Using AI to automate repetitive security management tasks also frees cybersecurity professionals to focus on strategic work, such as threat hunting and incident response.
The future of cybersecurity
AI matters more than ever in security now that cyber criminals are using it to up their game. Our research reveals that the majority (82 percent) of IT decision makers plan to invest in AI-driven cybersecurity in the next two years, and almost half (48 percent) plan to invest before the end of 2023. This reflects the growing concern that signature-based protection solutions are no longer effective against an increasingly sophisticated threat landscape.
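The limitation of signature-based protection can be illustrated with a toy example: a scanner that matches exact byte signatures catches the known sample but misses even a trivially mutated variant, which is why behavior- and AI-based approaches look at what code does rather than what it looks like. The signature database and payloads below are invented for illustration.

```python
import hashlib

# A toy "signature database": SHA-256 hashes of known-bad payloads.
KNOWN_BAD = {hashlib.sha256(b"evil_payload_v1").hexdigest()}

def signature_scan(payload: bytes) -> bool:
    """Return True only if the payload exactly matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

original = b"evil_payload_v1"
mutated = b"evil_payload_v1 "   # one appended byte changes the whole hash

print(signature_scan(original))  # True  -- the known sample is caught
print(signature_scan(mutated))   # False -- the trivial variant slips through
```

Because any single-byte change defeats an exact-match signature, attackers who can auto-generate variants (with or without AI assistance) outpace signature updates, which is the gap behavior-based and AI-driven detection aims to close.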
IT decision makers are optimistic that ChatGPT will enhance cybersecurity for business, but our survey also shows 85 percent of respondents believe governments have a moderate-to-high responsibility to regulate advanced technologies.
Both cyber professionals and hackers will continue to investigate how best to use this technology, and only time will tell who is more effective. In the meantime, those wishing to get ahead before it’s too late would do well to put AI at the top of their cyber technology wish lists and learn to fight fire with fire.