In today’s digital-first world, cybersecurity remains at the forefront of most business agendas. As attacks grow more sophisticated, the industry’s response has been to adopt zero-trust architecture. With its rise, however, we’ve also seen an unexpected, unwelcome guest: complexity.
Unpacking zero-trust and its complexity
Zero-trust, at its core, operates on the principle of “never trust, always verify.” Its mission is straightforward: ensure that all users, whether inside or outside an organization’s network, are authenticated, authorized, and continuously validated for security configuration and posture. The execution, however, is far from simple.
Nowadays, implementing zero trust often means wrestling with challenges like repeated authentication, complex micro-segmentation, and the enforcement of strict policy controls. Each of these controls adds friction, slowing daily operations and increasing the potential for human error.
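To make that friction concrete, here is a minimal sketch, in Python, of what a zero-trust gateway does on every single request rather than once at login. All of the helper names (verify_token, assess_posture, check_policy) and their logic are hypothetical placeholders, not any particular vendor’s API:

# Minimal zero-trust request check: every request is re-verified,
# regardless of where it originates. All helpers are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    user_token: str
    device_id: str
    resource: str
    segment: str  # micro-segment the caller belongs to

def verify_token(token: str) -> bool:
    """Authenticate the caller (e.g., validate a short-lived token)."""
    return token.startswith("valid:")  # placeholder logic

def assess_posture(device_id: str) -> bool:
    """Check device health: patches, disk encryption, EDR agent, etc."""
    return device_id in {"laptop-42", "laptop-77"}  # placeholder logic

def check_policy(segment: str, resource: str) -> bool:
    """Enforce micro-segmentation: may this segment reach this resource?"""
    allowed = {"finance": {"ledger"}, "eng": {"repo", "ci"}}
    return resource in allowed.get(segment, set())

def authorize(req: Request) -> bool:
    # Never trust, always verify: all three gates, on every call.
    return (verify_token(req.user_token)
            and assess_posture(req.device_id)
            and check_policy(req.segment, req.resource))

print(authorize(Request("valid:alice", "laptop-42", "repo", "eng")))    # True
print(authorize(Request("valid:alice", "laptop-42", "ledger", "eng")))  # False

Each gate is defensible on its own; stacked together on every call, they are what users experience as friction.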
In practice, simplicity always trumps security
History tells us that humans innately gravitate towards simplicity. When procedures become too complicated, people search for workarounds, often unintentionally compromising security. Take, for instance, the rise of Shadow IT.
When IT protocols become too stringent, many professionals resort to unsanctioned tools to get their work done. The movement was born when employees started using software like Dropbox to get around firewalls and data loss prevention policies. Today, we see employees using unsanctioned Slack channels or sending messages via WhatsApp to communicate outside a company’s purview. Gartner quantified the problem in a recent survey, showing that 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022.
Generative AI and the allure of simplicity
Enter generative AI. Generative AI models produce content, predictions, and solutions based on vast amounts of available data. They’re making waves not just for their ‘wow’ factor, but for their practical applications. It’s only natural that employees would gravitate toward the latest technology that promises to make them more efficient.
For cybersecurity, this means potential tools that offer predictive threat analysis based on patterns, provide automatic code fixes, dynamically adjust policies in response to an evolving threat landscape, and even respond automatically to active attacks. Used correctly, generative AI can shoulder some of the complexity that has built up over the course of the zero-trust era. But how can you trust generative AI if you are not in control of the data that trains it? You can’t, really. Yet applying a zero-trust mentality to its use creates an environment destined to repeat the past, now in the form of Shadow AI.
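As an illustration of the “dynamically adjust policies” idea, here is a minimal sketch in which a model-produced risk score tightens or relaxes controls per request. The risk_score function is a hypothetical stand-in for whatever predictive model an organization actually deploys, and the thresholds are purely illustrative:

# Sketch of risk-adaptive policy: a model-produced risk score tightens or
# relaxes controls per request. risk_score is a hypothetical stand-in for
# a real predictive model; thresholds are illustrative only.

def risk_score(signals: dict) -> float:
    """Hypothetical model output in [0, 1] from login/behavior signals."""
    score = 0.0
    if signals.get("new_device"):
        score += 0.4
    if signals.get("impossible_travel"):
        score += 0.5
    if signals.get("off_hours"):
        score += 0.2
    return min(score, 1.0)

def policy_for(signals: dict) -> dict:
    score = risk_score(signals)
    if score >= 0.7:
        return {"action": "block", "reason": f"risk={score:.1f}"}
    if score >= 0.3:
        return {"action": "step_up_mfa", "session_ttl_min": 15}
    return {"action": "allow", "session_ttl_min": 480}

print(policy_for({"new_device": True, "off_hours": True}))
# {'action': 'step_up_mfa', 'session_ttl_min': 15}

The appeal is that the policy flexes with observed risk instead of imposing the strictest control on everyone, all the time.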
Don’t block it, harness it
Right now, it’s the wild, wild west of finding new ways to apply generative AI to our daily tasks. Some want it to be a silver bullet that replaces known norms, but today it is only a supporting element. As someone who speaks English as a second language, I use it to assist with writing things like blogs, articles and talk abstracts, and it’s helpful. But it only works because I am in control of the ideas that feed the outputs, and I shape the final outcomes. The work is still a product of my own thinking; generative AI simply makes me more efficient at piecing my concepts together and presenting information in a cohesive, grammatically correct fashion.
I’m not the only one testing generative AI to simplify complex tasks. This is happening across many organizations, for various use cases, with or without the approval or knowledge of the security team. It is forcing organizations to start setting generative AI policies. Those that choose the zero-trust path and ban its use will only repeat the mistakes of the past: employees will find ways around bans if it means getting their jobs done more efficiently. Those that harness it will make a calculated tradeoff between control and productivity that keeps them competitive in their respective markets.
Finding balance
It’s easy to get swept up in the hype of AI-powered tools, but understanding their risks and limitations and ensuring proper implementation is crucial. There is a balance to be struck between the simplicity and productivity generative AI offers and the security measures organizations must take to protect themselves. Smart security teams will embrace generative AI, but with built-in, native guardrails that do not put obstacles in front of users.
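To close with a concrete example of what a native guardrail might look like: rather than banning the tool, an organization could scrub obvious secrets from a prompt before it ever leaves the company. This is a minimal, assumption-laden sketch; the regex rules are illustrative, not a complete data loss prevention ruleset:

import re

# Illustrative redaction rules; a real deployment would use a proper
# DLP engine with far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Redact sensitive tokens before the prompt leaves the org."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(scrub("Summarize the ticket from jane@corp.com, key sk-abcdefgh12345678"))
# Summarize the ticket from [REDACTED-EMAIL], key [REDACTED-API_KEY]

The point of the pattern is that the control sits in the path of the workflow instead of in the way of it: the employee keeps the productivity win, and the security team keeps its visibility.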