The one thing on which defenders and their adversaries can agree? That the weak point of any security system is usually a human being. Human error is inevitable—people fall for phishing scams, they forget to install patches, they misconfigure settings, they save sensitive information in all the wrong places. There’s a reason it’s called “human” error: making mistakes is simply in our nature.
Organizations invest heavily in security training to help mitigate this problem—but today’s training programs fall short in a variety of ways. They aren’t always great at telling employees what they actually need to know, and they definitely aren’t good at helping them retain it. But as artificial intelligence (AI) capabilities have improved, a growing number of organizations are beginning to recognize the potential that AI has to dramatically improve the process of security training.
Why today’s security training falls short
One of the first problems with today’s security trainings should be obvious to anyone who has sat through a mandatory training program: they can be incredibly boring and repetitive. That might sound like a petulant complaint, but if you can’t hold your audience’s attention, they’re not going to learn anything. Unfortunately, training courses can take a long time to develop, which leads to providers reusing the same content over and over again to avoid costly investments. But the security landscape is always changing, which means those trainings can become outdated quickly.
Another problem is that trainings tend to be “one-size-fits-all,” rather than being tailored to the individual. Cybersecurity, at its core, is focused on the individual and what they should or should not be doing—and not everyone faces the same security challenges. True, role-based trainings exist, but they are few and far between—most of today’s trainings paint with the broadest brush possible, which means they lack the granularity and individual applicability necessary amid today’s ever-evolving threat landscape.
The core of the issue is that most trainings fail to meet people where they are. If you want to be prepared for a situation, it’s important to train in a way that simulates that situation as closely as possible. Cybersecurity professionals engage in activities like penetration testing and incident response drills that do exactly that—but when it comes to training, employees are pulled into a learning management system and disengaged from the environments in which they should be applying their newfound security knowledge. That contextual disconnect seriously weakens the impact of the training.
Leveraging AI for improved results
While there is plenty of enthusiasm around AI, the truth is we’re still in the early stages of understanding its impact. Today’s most popular AI chatbots are certainly impressive, but they’ve also been known to produce erroneous statements or exhibit unhelpful (and unwanted) biases. As a result, there is still limited trust in their output—they shouldn’t yet be let off the leash, so to speak. Human beings are still required to monitor and review AI-generated content to ensure its quality and accuracy, which means AI isn’t yet conducting “trainings” in the traditional sense.
That said, AI is already having a significant impact on the efficiency of training development. AI is helping to accelerate initial brainstorming sessions while automating mundane (but time-consuming) tasks like content formatting and tagging. AI has incredible potential to substantially reduce the time it takes to generate a course—what once took months can be shrunk to weeks or even days. This not only increases the output capacity of a training organization, but also makes it possible to be more responsive to evolving training needs.
That includes making security training programs more accessible. AI can help create richer, more meaningful training interactions—particularly for those with disabilities. It’s great that AI can help to improve efficiencies, but it’s the improvement in quality that will really maximize the effect of training. People are eager to push beyond mere increased output to explore how direct AI-human interactions can lead to more impactful learning experiences. Can an organization increase learning retention and user engagement through improved interactivity? Can it bring training closer to the learner’s work environment? Can it improve training analytics and impact analysis? There are countless avenues of exploration to pursue, and AI will continue to play a critical role.
But perhaps the area where AI has the greatest potential is in overlaying it onto the user’s real work environment. In many ways, this blurs the line between training and operational controls—if an AI version of Clippy pops up to warn you when you attempt to send an email to the wrong person or attach an unencrypted file with sensitive information inside, should that be considered training or a data loss prevention control? In the short term, the obvious application is in streamlining course development, but as time goes on we will almost certainly see an interesting blend of training and controls emerge. It may not be possible to solve the problem of human error, but an AI that can spot and alert on potential mistakes in real time is definitely a step in the right direction.
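To make the "training as control" idea concrete, here is a minimal sketch of what a pre-send email check might look like. Everything in it—the domain allowlist, the keyword heuristic, and all the names—is a hypothetical illustration of the concept, not a real product API; a production system would use far more sophisticated (likely AI-driven) detection.

```python
# Hypothetical pre-send check: flag likely mistakes (wrong recipient domain,
# unencrypted sensitive attachment) and explain the risk to the user before
# the email goes out. Names, domains, and heuristics are illustrative only.
from dataclasses import dataclass, field

TRUSTED_DOMAINS = {"example.com"}                 # assumed internal domain
SENSITIVE_KEYWORDS = {"ssn", "salary", "confidential"}  # toy heuristic

@dataclass
class Attachment:
    filename: str
    encrypted: bool

@dataclass
class OutgoingEmail:
    recipients: list[str]
    attachments: list[Attachment] = field(default_factory=list)

def pre_send_warnings(email: OutgoingEmail) -> list[str]:
    """Return human-readable warnings; an empty list means 'send as usual'."""
    warnings = []
    for addr in email.recipients:
        domain = addr.rsplit("@", 1)[-1].lower()
        if domain not in TRUSTED_DOMAINS:
            warnings.append(f"Recipient {addr} is outside your organization.")
    for att in email.attachments:
        name = att.filename.lower()
        if any(kw in name for kw in SENSITIVE_KEYWORDS) and not att.encrypted:
            warnings.append(
                f"Attachment '{att.filename}' looks sensitive but is not encrypted."
            )
    return warnings

# Example: an external recipient plus an unencrypted sensitive file
msg = OutgoingEmail(
    recipients=["partner@external.org"],
    attachments=[Attachment("salary_report.xlsx", encrypted=False)],
)
for w in pre_send_warnings(msg):
    print("Warning:", w)
```

The point of the sketch is the framing, not the heuristics: because each warning explains *why* the action is risky at the moment the user is about to take it, the same mechanism functions simultaneously as a data loss prevention control and as in-context training.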
A realistic assessment of AI capabilities
Recent advances in AI have helped to generate significant hype for its capabilities, but it’s important to have a realistic understanding of where and how AI can actually be leveraged. When it comes to cybersecurity training, improving efficiency in areas like content development, analytics, and enhanced accessibility is something organizations should look to take advantage of immediately. Additional capabilities, such as direct AI-human interaction and the ability to overlay AI onto real work environments, will continue to emerge as AI becomes more advanced and societal trust in it grows. We shouldn’t (yet?) hand over the reins to the robots, but we should use them to train ourselves and help strengthen that weakest link in our chain: the human element.