Critical attacks and massive breaches escalated dramatically in 2019, and it is predicted that by 2020, costs related to damage from cybersecurity breaches may reach $5 trillion.
As attacks increase, cybersecurity teams are overworked, understaffed, and grasping for solutions to a growing number of problems.
2020 will be the year that AI changes the landscape of cybersecurity defense.
The current state of AI still has a number of problems to solve before it can effectively protect users from malicious actors. Due to an extreme shortage of cybersecurity professionals, many companies are turning to Artificial Intelligence as a sort of panacea to better defend their networks and make up for a lack of personnel.
Another layer of complexity gets added when we consider the false-positive and false-negative problem most security companies have because their detection thresholds are set either too high or too low. AI can be a great solution to this problem, but only if it is applied correctly and only if it is true AI.
False positives and negatives can waste the better part of a security analyst’s day if the security system they use cannot keep them to a minimum. Having an Artificial Intelligence program smart enough to recognize what is a real threat and what is just background noise will be the real test in 2020, especially as hackers develop their own AI-powered tools.
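To make the threshold problem concrete, here is a minimal sketch (with made-up anomaly scores and ground-truth labels, not data from any real product) showing how moving a single detection threshold trades false positives against false negatives:

```python
# Hypothetical illustration of the threshold trade-off: a detector assigns
# each event an anomaly score, and a single cutoff decides what is flagged.
# All scores and labels below are invented for this sketch.

def fp_fn_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.

    scores: anomaly scores from a detector (higher = more suspicious)
    labels: ground truth, True for a real threat
    """
    fp = sum(1 for s, real in zip(scores, labels) if s >= threshold and not real)
    fn = sum(1 for s, real in zip(scores, labels) if s < threshold and real)
    return fp, fn

# Eight scored events: benign traffic mostly scores low and real threats
# mostly score high, but the distributions overlap, so no cutoff is perfect.
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.8, 0.9]
labels = [False, False, False, True, False, True, True, True]

for threshold in (0.3, 0.5, 0.7):
    fp, fn = fp_fn_counts(scores, labels, threshold)
    print(f"threshold={threshold}: {fp} false positives, {fn} false negatives")
```

A low threshold drowns the analyst in false alarms; a high one lets real threats through silently. That tension is exactly what a smarter detection model has to resolve.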
Adversarial AI is becoming a very real threat that we must prepare to face in the new decade. Hackers have started equipping themselves with their own Machine Learning algorithms that help them attack users in ways that escape detection. To counter these advances on the adversarial side, cybersecurity teams need to be armed with the best defensive tools possible.
This is where Third Wave AI comes in. 2020 will bring Generative Unsupervised AI: AI capable of thwarting zero-day attacks because of its unique adaptive algorithm. Presently, most security companies offering AI offer a Supervised model; the few that do Unsupervised Learning use only discriminative, not generative, models. Those are just methods of labeling. What we want in our security systems now is an automated detection system that knows when something looks wrong and can alert the user that there is a threat, without the user having to label what the threat is supposed to look like beforehand.
With Generative Unsupervised Learning, there are no labels. The program studies a network for seven full days to understand the fluctuations that happen on a daily basis, creates a baseline of that network, and uses the baseline to predict what the future should look like. If the network starts to behave in a way the baseline does not predict, the incident is marked as a threat by the AI and a security analyst is alerted. That is what makes it so effective at catching zero-day attacks, in a way that Supervised Learning cannot compete with.
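The baseline-and-deviation idea above can be sketched in a few lines. This is a deliberately simplified toy (a single traffic metric, a mean-and-standard-deviation baseline, and simulated numbers), not the algorithm of any real product, which would model far richer behavior:

```python
import statistics

# Toy sketch of label-free baseline detection: learn what "normal" looks
# like from a training window (e.g. seven days of hourly traffic volumes),
# then flag anything that deviates far from that learned baseline.

class BaselineDetector:
    def __init__(self, sigmas=3.0):
        self.sigmas = sigmas  # how many standard deviations count as anomalous
        self.mean = None
        self.stdev = None

    def fit(self, baseline_window):
        """Learn the baseline from past observations. No labels involved."""
        self.mean = statistics.mean(baseline_window)
        self.stdev = statistics.pstdev(baseline_window)

    def is_threat(self, observation):
        """Flag anything outside mean +/- sigmas * stdev as a possible threat."""
        return abs(observation - self.mean) > self.sigmas * self.stdev

# Simulated "normal" week: 7 days x 24 hours of traffic volumes that
# fluctuate on a daily cycle (entirely made-up numbers).
week = [100 + (h % 24) * 5 for h in range(7 * 24)]

detector = BaselineDetector()
detector.fit(week)

print(detector.is_threat(150))   # within the learned daily fluctuation
print(detector.is_threat(1000))  # far outside the baseline -> alert an analyst
```

Because the detector never sees an example of an attack, a zero-day exploit that changes the network's behavior can still trip it, which is the advantage the paragraph above claims over supervised, signature-style approaches.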
Thanks to new privacy laws like the CCPA, GDPR, and others, any organization that collects private data now faces requirements to be more transparent about how it uses consumer data and to improve its data-management and security processes.
A huge shift to autonomous AI systems will mark the start of a much more technologically forward decade. The tug of war between hackers and security teams will go on with more advanced cyber weapons and defense methods, but the real test will be parsing through misinformation and false marketing claims of ‘Artificial Intelligence’ that isn’t intelligent at all, and finding the programs that actually work.