Let’s face it: the cybersecurity market is confusing. Ever-changing acronyms and technical language that only specialists understand make the market, including critical cybersecurity tools and resources, inaccessible to the average user. Vendors make these barriers worse by slapping artificial intelligence (AI) and machine learning (ML) labels on their products to seem innovative and keep pace with the industry.
There’s a common misconception that the AI label automatically makes a cybersecurity solution better when that’s far from the truth. Organizations don’t need AI or ML tools to improve cybersecurity.
Benefits and shortcomings of AI and ML
While AI and ML can be beneficial, small teams should not assume they need AI for threat detection and response or overall security.
In many cases, AI is only effective for threat detection; it does not necessarily resolve threats. But even threat detection through AI can create issues: AI and ML are often positioned to perform anomaly detection, which brings unknowns to the surface but also flags additional unknowns that do not impact security. Avoiding this requires security teams to train the existing model continuously, incorporating a strong feedback loop into the training data. That, however, means additional effort and cost on top of investigating the findings themselves.
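The feedback loop described above can be illustrated with a toy sketch (the class, thresholds, and numbers here are hypothetical, not drawn from any specific product): a simple z-score detector flags outliers, and analyst feedback folds benign values back into the baseline so the model stops alerting on similar activity.

```python
import statistics

class AnomalyDetector:
    """Toy z-score anomaly detector with an analyst feedback loop."""

    def __init__(self, baseline, threshold=3.0):
        self.baseline = list(baseline)  # e.g., daily login counts
        self.threshold = threshold

    def is_anomaly(self, value):
        mean = statistics.mean(self.baseline)
        stdev = statistics.pstdev(self.baseline) or 1.0  # avoid divide-by-zero
        return abs(value - mean) / stdev > self.threshold

    def feedback(self, value, benign):
        # An analyst marks a flagged value as benign: fold it into the
        # baseline so similar activity no longer raises an alert.
        if benign:
            self.baseline.append(value)

detector = AnomalyDetector(baseline=[50, 52, 48, 51, 49])
print(detector.is_anomaly(300))   # True: far outside the baseline
detector.feedback(300, benign=True)
print(detector.is_anomaly(300))   # False: the feedback retrained the baseline
```

Without that `feedback` call, every recurrence of a harmless spike would be flagged again, which is exactly the investigation overhead described above.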
That’s not the only way AI can create more work for a security team. AI and ML can augment a staff of security operations center (SOC) analysts, helping them sift through false positives. Still, SOC staff must understand the output and provide feedback to the model, or they waste time on an untrained one. Organizations also need data science expertise; without it, staff end up reviewing results from an incorrectly trained model, ultimately adding more time to their day.
Cybersecurity needs the human element
AI and ML are not secret weapons that eliminate the need for human decision-making. Human decision-making is unparalleled. For example, creating detection rules based on attack paths, emerging threat intelligence, and new vulnerabilities takes context, research, and creativity. AI could write rules, but only within the context its original authors gave it. Being aware of impending attacks via research, replicating them, determining where detectability can occur across the stack, and building detections and playbooks is a uniquely human effort that AI can support, but not complete on its own.
What’s more, AI-powered defenses cannot withstand every offensive tactic used against them, and hackers can learn the weaknesses of an AI-powered system. Every implementation has characteristics that leave it vulnerable to attackers who learn its guardrails through fuzzing or similar techniques. This could be as simple as evading next-generation antivirus (NGAV) detection by having an application wait 30 minutes before executing its malicious payload, timing out the antivirus software’s ML process-behavior evaluation period.
Achieving security without AI-powered tools
AI doesn’t replace the human element in cybersecurity. For smaller organizations that may not be able to capitalize on AI tools or field a security team, leveraging tools backed by a real support team is vital. Working with outsourced security experts can ease the burden on under-resourced teams, and partnering with a solution provider whose SecOps team offers further guidance can help businesses respond to issues and prevent future ones.
Teams can also supplement with automation — for example, automated blocklists. Automation, along with an organization’s own internal documentation and rules for how that automation is applied, is a significant first step for most companies. Teams that have documented how they intend to respond to security or operational issues, and can use data to drive those responses, are moving in the right direction.
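As a minimal sketch of the automated-blocklist idea, assuming a simplified log format (the log lines, function name, and threshold below are illustrative; a real deployment would parse actual auth logs and feed the result to a firewall):

```python
from collections import Counter

# Hypothetical log excerpt; real systems would read, e.g., SSH or VPN
# authentication logs and push the resulting set to a firewall rule.
LOG_LINES = [
    "failed login from 203.0.113.9",
    "failed login from 203.0.113.9",
    "failed login from 203.0.113.9",
    "failed login from 198.51.100.4",
]

def build_blocklist(lines, threshold=3):
    """Return source IPs with at least `threshold` failed logins."""
    counts = Counter(
        line.rsplit(" ", 1)[-1]           # last token is the source IP
        for line in lines
        if "failed login" in line
    )
    return {ip for ip, n in counts.items() if n >= threshold}

print(build_blocklist(LOG_LINES))  # {'203.0.113.9'}
```

The value here is less the code than the documented rule behind it: the team decides in advance what threshold triggers a block and what happens next.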
There are a few additional ways small IT or security teams can achieve security, including:
Utilizing honeypots to their advantage: honeypots are a low-cost way to lure attackers and detect real threats, such as remote desktop protocol (RDP) attacks.
Leveraging existing security features: use security features included with tools the organization already uses, for example, multi-factor authentication (MFA), phishing protection, and alerting within Microsoft 365.
Going back to the basics: attackers only have a certain number of ways to infiltrate an environment, so by maintaining good security hygiene — no open ports exposed to the internet, MFA enabled, and a way to monitor behavior in the environment — an organization can prevent the majority of attacks.
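The honeypot idea above really can be this low-cost. A minimal sketch, assuming the standard library only (the function name and port choice are illustrative): listen on a decoy port such as 3389, the RDP port, where any connection at all is a likely probe, and record the source addresses.

```python
import socket

def honeypot_listener(srv, max_conns=1):
    """Accept connections on an already-listening socket and record the
    source IPs. Traffic to a decoy service is suspicious by definition."""
    hits = []
    for _ in range(max_conns):
        conn, addr = srv.accept()
        hits.append(addr[0])   # in practice: raise an alert with a timestamp
        conn.close()
    srv.close()
    return hits

# Bind to 3389 so the decoy looks like an RDP endpoint (illustrative port).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 3389))
srv.listen()
# honeypot_listener(srv)  # blocks until a probe arrives
```

Because no legitimate user ever touches the decoy, this produces none of the false positives that plague anomaly-detection models.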
AI is usually accessible only to large enterprises with SOC teams; it is too expensive and time-consuming for smaller organizations with fewer resources and less budget to support implementing such tools. Beyond the cost, AI is, in most cases, unnecessary for smaller teams that need to fight other fires within the infrastructure.