Vulnerability management is more than a software tool that scans devices and shoots out an (often massive) list of vulnerabilities. It’s a defined and testable process that ensures people and technology work harmoniously and communicate effectively. Once such a strategy is implemented, you can generate predictable, accurate and timely insights into emerging vulnerabilities and minimize risk.
Unfortunately, many if not most organizations are failing to clear this bar.
Some organizations are over-reliant on software to solve this challenge, rather than balancing the right mix of people, processes and software tools. Other organizations, meanwhile, are simply failing to replace or augment their obsolete or limited software tools with newer options that are far better suited for the task of continuously effective vulnerability management.
The question is then clear: On which side of the dividing line is your organization currently situated? Are you hoping for the best or prepared for the worst?
The stakes have never been greater. The Center for Strategic and International Studies reports that global cybercrime losses would reach nearly $1 trillion in 2020, doubling in just two years. Clearly, the old methods and tools we’ve relied on for vulnerability management are no longer up to the task.
So what should a forward-thinking CISO do to improve a weak vulnerability strategy? Let’s take a closer look.
Asset value and risk calculation
One of the key things to consider is the value attached to each asset within your environment. Is this asset critical to operations or revenue generation? Does it contain data that could create massive problems for your enterprise should it ever be exposed? It is crucial to analyze not only where vulnerabilities exist, but also the relative cost of those gaps being exploited.
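To make this concrete, here is a minimal sketch of what asset-value-aware prioritization might look like. All names, weights and the scoring formula are illustrative assumptions, not a prescribed model:

```python
from dataclasses import dataclass

# Hypothetical illustration: weight each finding's severity by the business
# value of the asset it sits on, so remediation order reflects organizational
# risk rather than raw severity alone. Every name and weight is an assumption.

@dataclass
class Finding:
    asset: str
    cvss: float          # base severity score, 0.0-10.0
    asset_value: float   # business-value weight, 0.0-1.0
    exposed: bool        # is the asset reachable by an attacker?

def business_risk(f: Finding) -> float:
    # A simple composite: severity x asset value, boosted when exposed.
    exposure_factor = 1.5 if f.exposed else 1.0
    return f.cvss * f.asset_value * exposure_factor

findings = [
    Finding("billing-db", cvss=5.3, asset_value=1.0, exposed=True),
    Finding("test-vm", cvss=9.8, asset_value=0.1, exposed=False),
]
# Sort by business risk, highest first.
for f in sorted(findings, key=business_risk, reverse=True):
    print(f.asset, round(business_risk(f), 2))
```

Note how the medium-severity finding on the billing database outranks the critical-severity finding on a throwaway test VM: the asset's value, not the raw score, drives the queue.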
Historically, organizations have used a “waterfall” model in which the software tool represents the source of the flow. In such cases, vulnerability scanners identify problems, a VM analyst takes possession of the report and passes it along to patch management. That team is then given the exhaustive task of defining and segregating these vulnerabilities by type. Based on these determinations, the vulnerabilities are passed to the appropriate team for assessment and mitigation/remediation.
This team must now determine whether the vulnerabilities are relevant to them and, if so, how to patch them. This typically means setting up a staged environment for testing, then determining the best way to deploy a patch to all affected areas. Once patch management personnel are finished, they tick the final box and the process repeats all over again with the next set of vulnerabilities.
On occasion, the first link in this chain (the VM analyst) will re-scan and see that the vulnerabilities still exist even after patch management has finished. Yet in the absence of clear lines of communication, this information may not get back to patch management in a timely fashion. Or if it does, they may simply assume that they’ve already fixed the problem.
Why this model is failing
The problem with the waterfall model is clear: When everything flows down in one direction, nobody is communicating effectively — which leads to unmitigated gaps and security breaches. Another problem: Often, vulnerability reports are hundreds of pages of data with little structure to illuminate relevant or actionable information. It’s difficult to parse the value of the assets involved, the systems affected, whether an exploit has been released, and so on.
Too often, asset value and vulnerability risk assessments are missing from the conventional process. This context can make the difference between a successfully deterred breach and a devastating penetration. Plenty of security teams have chosen to ignore a low vulnerability score to focus on highs or criticals, only to later rue that decision because they failed to accurately evaluate the value and risk attached to that low-scored asset.
Ultimately, the goal is not 100-percent patch compliance, but to minimize organizational risk to the highest degree possible. So how do we do this? It’s fairly straightforward: We need a strategy built on the fundamentals of prioritized patching (evaluating the criticality of an asset’s data, and whether an attacker could pivot from it toward sensitive data), threat modeling, security mapping and gap analysis: everything needed to validate the effectiveness of our security controls.
What does this mean from a software perspective?
Fortunately, software solutions exist that can help meet the criteria listed above for developing a successful vulnerability management strategy. One obvious example: The use of an attack-centric exposure prioritization platform that can help identify and remediate the most business-critical vulnerabilities. Rather than relying on the conventional waterfall method and asking overstretched teams to engage in highly manual, error-prone analysis, it is possible to harness the power of automated, continuous testing to help secure business-sensitive systems from breach.
An attack-centric exposure prioritization platform works by launching non-stop simulated attacks against your security environments using the tactics and paths most likely to be used by adversaries. In this sense, it works much like automated pen testing, rigorously exposing vulnerabilities and providing prioritized remediation guidance. Crucially, it eliminates 99% of risk to business-critical systems and assets by focusing on the 1% of exposures that can be exploited.
Standard alternatives to this approach include VA/VM products, which work by scoring vulnerabilities using CVSS or some other yardstick, and conventional risk-based vulnerability management (RBVM) products, which use threat intelligence, configuration management and data science to calculate vulnerability risk scores. However, both of these approaches fall short when it comes to continuously prioritizing the remediation of critical exposures affecting business-sensitive systems.
Conventional VA/VM and RBVM platforms do not offer attack-centric context and cannot understand the relationship between different network hosts along an attack path. They provide “risk on” rather than the much more relevant “risk to” context.
They calculate risk based on external threat intelligence factors with zero or marginal correlation to business-critical assets.
They often do not provide contextual remediation guidance.
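The “risk to” distinction above can be sketched in a few lines: a finding matters most when its host sits on a lateral-movement path from an internet-facing entry point to a business-critical asset. The hosts, edges and function names below are illustrative assumptions, not any vendor’s actual model:

```python
from collections import deque

# Hypothetical lateral-movement graph: which hosts can reach which others.
edges = {
    "web-server": ["app-server"],
    "app-server": ["billing-db"],
    "dev-laptop": ["test-vm"],
}
entry_points = {"web-server", "dev-laptop"}   # internet-facing hosts
critical_assets = {"billing-db"}              # business-critical systems

def reachable(start: str) -> set:
    # Breadth-first search over the lateral-movement graph.
    seen, queue = {start}, deque([start])
    while queue:
        host = queue.popleft()
        for nxt in edges.get(host, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def on_attack_path(host: str) -> bool:
    # True if some entry point can reach this host AND this host can reach
    # a critical asset -- i.e. patching it actually severs an attack path.
    upstream = any(host in reachable(e) for e in entry_points)
    downstream = bool(reachable(host) & critical_assets)
    return upstream and downstream

print(on_attack_path("app-server"))  # links an entry point to billing-db
print(on_attack_path("test-vm"))     # leads nowhere business-critical
```

Here the app server carries real “risk to” the billing database even if its vulnerability score is modest, while a critical finding on the isolated test VM cuts no path worth prioritizing.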
A true attack-centric exposure prioritization platform offers a superior solution for vulnerability management, but it’s imperative to choose the right tool. Characteristics to look for include the ability to continuously identify new exposures and attack vectors affecting business-sensitive systems, along with context-sensitive, least-effort remediation guidance. The right tool — and the right process balancing people, technology and communication — can help your organization continuously improve its overall security posture by focusing on the most critical exposures.
At a time when the risk of a breach has never been more acute, the need to create new and more resilient cybersecurity strategies has never been more urgent.