
2025 Verizon DBIR: Cyber Attacks Increasingly Driven by Vulnerability Exploitation, VPNs and Edge Devices Heavily Targeted

Verizon’s annual Data Breach Investigations Report (DBIR) is out, and the headline finding is another surge in vulnerability exploitation as a leading initial access method in cyber attacks. Though not as large as the 180% spike seen in 2024, the growth is substantial and brings exploitation nearly level with credential abuse as the leading initial entry method.

The researchers believe this newer, smaller spike may be fed to some degree by a more general increase in incident reporting. However, the research also finds a specific and very large jump in the targeting of VPNs and other edge devices, categories in which remediation times remain slow.

2025 DBIR cautions more attention must be paid to edge devices

The 18th installment of the DBIR surveyed 22,052 total cyber attacks logged by Verizon’s internal threat research team, over half of which (12,195) involved confirmed data breaches. The incidents surveyed include attacks such as distributed denial of service (DDoS), which are not intended to cause a data breach or credential compromise, though the report’s focus is on trends in initial compromise points.

The usual headline item is the most common “tip of the spear” approaches in cyber attacks for the year. Credential abuse continues to lead at 22%, but not by much. Exploitation of vulnerabilities is now up to 20%, followed by phishing at 16%. The biggest single change is the number of attacks on VPNs and edge devices, roughly eight times the prior year’s activity. While these devices were involved in only 3% of exploit-based cyber attacks the prior year, they played a role in 22% of these incidents this year.

Timely patching in general has been a struggle for organizations in recent years, and the load continues to grow while workforce and budget problems persist. The report notes that only a little over 50% of known edge device vulnerabilities are ever fully patched, and that remediation takes about 32 days on average.

Ransomware activity also quietly continues to climb after seeming to hit a lull between 2022 and 2023. There was a 37% spike in incidents from the prior year, and 44% of all the incidents surveyed involved some form of ransomware attack. However, there are also signs that organizations are better prepared defensively. The median payment in these incidents has dropped sharply, from $150,000 last year to $115,000 this year. Additionally, 64% of victims refused to make a payment in this year’s study (up from 50% just two years ago, when activity was significantly slower). An unfortunate offshoot of this is that criminals now seem more willing to target less lucrative small businesses that are less likely to be prepared; ransomware was involved in 88% of SMB incidents.

Third-party vendor breaches up, more cyber attacks involve espionage

Another substantial spike is in cyber attacks involving a third-party compromise leading to a data breach, doubling from 15% to 30% of all breaches in a year. Credential reuse was a significant factor among these incidents. Attackers appear to be finding success trawling GitHub repositories for exposed secrets, which are taking a median time of 94 days to address.
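
To make that exposure concrete, the sketch below runs a minimal pattern-matching scan over a checked-out repository for a few well-known token formats. It is illustrative only; purpose-built scanners such as gitleaks or TruffleHog ship far larger, vendor-specific rule sets and also scan commit history, which this does not.

```python
import re
from pathlib import Path

# A few illustrative patterns only; real secret scanners maintain hundreds of
# vendor-specific rules and entropy checks.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Walk a checked-out repository and report lines matching secret patterns."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

if __name__ == "__main__":
    for file, lineno, label in scan_repo("."):
        print(f"{file}:{lineno}: possible {label}")
```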

Espionage cyber attacks, presumably involving a nation-state threat group in all or nearly all cases, are also up to 17% of all incidents. These attackers heavily favor using vulnerabilities to get in the door, targeting them in 70% of their attacks. But these actors are also showing increasing interest in making some money off of their escapades as a sideline, with 28% of these attacks involving a financial component.

The DBIR also provides a small bit of insight into the level of threat presently posed by AI tools. While it is generally in keeping with other recent reports in observing that criminals are not yet seeing a major boost from AI, there has been a huge uptick in synthetically generated text used in malicious emails. The biggest risk at present, however, remains irresponsible or unwitting use of AI by organization insiders, 72% of whom use non-corporate email addresses as their account identifiers and may not be following corporate policy on what information can be shared with LLMs.

This edition of the DBIR also devotes a section to a deeper dive into the statistics of breaches involving a third party compromise, given that these incidents doubled in the space of one year. The “human element” factor has actually decreased somewhat from a high of near 80% of incidents in 2021 to about 60% now, but the number remains alarmingly high. The most common human element is illicit acquisition of someone’s credentials (32% of incidents), followed by social actions or social engineering of an insider (23%) and simple employee error (14%). Interaction with malware as a breach cause is surprisingly low at just 7%; the preferred pattern for attackers now appears to be seeking either existing credentials or vulnerabilities first, then only deploying malware as needed after initial penetration.

Brian Soby, CTO and co-founder at AppOmni, adds some insights into the ongoing issue of credential theft feeding cyber attacks: “It’s no surprise to see credential abuse remaining the most prevalent type of cybercrime. It works and it’s an easy vector for attackers, plain and simple. We’ve seen organizations pour money into centralized identity management and zero trust solutions that ignore the reality of the risk landscape. SaaS and other applications hold an organization’s data. Far more often than not, we see these applications not being configured securely and not being covered by an organization’s security architecture. There’s a common misunderstanding that accounts are “managed” and protected if they’re provisioned by an organization’s identity provider and the user can access the account via SSO. However, if the applications are configured to also allow user/password or allow access outside of expected zones or zero trust paths, the reality is that these accounts have the same risk profile of unmanaged accounts and are not protected. If attackers can simply side-step these security investments by taking phished credentials directly to the apps and stealing data, it should be obvious that this is the weakest link and is going to be exploited. Whether it’s phishing, infostealers, or session hijacking, organizations should expect little ROI on their security programs with such glaring holes and lack of protection against the most common attack vectors.”
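
As a concrete illustration of the gap Soby describes, the sketch below audits a hypothetical inventory of SaaS tenant authentication settings and flags applications that still accept direct user/password logins alongside SSO. The data structure and field names are invented for illustration; a real audit would pull these settings from each application's admin API or an SSPM tool.

```python
from dataclasses import dataclass

@dataclass
class SaaSAuthConfig:
    """Hypothetical per-tenant auth settings pulled from each SaaS admin console."""
    app: str
    sso_enforced: bool            # all logins must go through the identity provider
    password_login_allowed: bool  # local user/password login still accepted
    ip_restrictions_enabled: bool # access limited to expected network zones

def audit(configs: list[SaaSAuthConfig]) -> list[str]:
    """Flag apps where phished credentials could bypass the SSO/zero-trust path."""
    findings = []
    for c in configs:
        if c.password_login_allowed or not c.sso_enforced:
            findings.append(f"{c.app}: direct user/password login possible; "
                            "risk profile matches an unmanaged account")
        elif not c.ip_restrictions_enabled:
            findings.append(f"{c.app}: SSO enforced but no network-zone restrictions")
    return findings

if __name__ == "__main__":
    inventory = [
        SaaSAuthConfig("crm", sso_enforced=True, password_login_allowed=True,
                       ip_restrictions_enabled=False),
        SaaSAuthConfig("wiki", sso_enforced=True, password_login_allowed=False,
                       ip_restrictions_enabled=True),
    ]
    for finding in audit(inventory):
        print(finding)
```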

Yogita Parulekar, CEO at Invi Grid, cautions that the DBIR’s trend in vulnerability targeting is also likely to continue: “We’re watching a dangerous trend accelerate—the surge in exploited vulnerabilities, especially zero-days, isn’t just a blip. It’s the new normal, and it’s going to get worse before it gets better. What’s deeply concerning is that this spike comes at a time when the very agencies we rely on to identify and disclose these vulnerabilities are facing budget cuts and resource constraints. That means more gaps in visibility, more unknowns, and more chances for threat actors to strike. As a CEO in cybersecurity, I see the impact this has not just on infrastructure, but on people—on the exhausted IT teams trying to do more with less, and on the users unknowingly exposed as a result. When you don’t know what’s lurking out there, defending your environment can feel like fighting in the dark. Now more than ever, we have to double down on fundamentals. Patch aggressively when updates are available. Don’t assume you’re covered—verify every endpoint. And don’t overlook the basics: make sure MFA is enabled everywhere it can be. It’s not just good practice—it’s protection when everything else might be out of date.”
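
On the "make sure MFA is enabled everywhere" point, even a simple coverage check over an identity provider's user export can surface gaps. The snippet below is a minimal sketch assuming a hypothetical CSV export with username and mfa_enrolled columns; the column names and file layout will differ per provider.

```python
import csv

def mfa_coverage(report_path: str) -> None:
    """Summarize MFA enrollment from a hypothetical identity-provider user export.

    Assumes a CSV with 'username' and 'mfa_enrolled' columns; adjust the field
    names to whatever your directory or IdP actually exports.
    """
    total, enrolled, missing = 0, 0, []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row.get("mfa_enrolled", "").strip().lower() in ("true", "yes", "1"):
                enrolled += 1
            else:
                missing.append(row.get("username", "<unknown>"))
    print(f"MFA coverage: {enrolled}/{total} accounts")
    for user in missing:
        print(f"  no MFA: {user}")

if __name__ == "__main__":
    mfa_coverage("idp_user_export.csv")
```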

Mike McGuire, Senior Security Solutions Manager at Black Duck, observes that the trends the DBIR identifies are likely to require automation and AI to stay on top of: “Third-party services, products or software components in the software supply chain should be thoroughly assessed for security. The biggest challenge here is visibility. The average commercial code base depends on 911 open source components. More than half of these dependencies are transitive, meaning there are numerous dependencies being introduced into applications inadvertently. Depending on how these dependencies are introduced, it can be difficult to identify and track them, and complete tasks like providing accurate SBOMs to consumers. Another challenge is the rapid nature of software development. Dependencies can be very frequently updated, introducing new security vulnerabilities, which requires continuous monitoring and an efficient prioritization approach to stay on top of. Security teams also must grapple with limited access to source code. Most third-party components are closed source, meaning manual code audits are out of the question. Security teams then have to rely on some sort of binary analysis, or trust their vendor’s SBOMs and security attestations, which can be inaccurate themselves because they too face these same challenges. Automation is recommended to reduce the occurrence of human error and improve consistency. These tasks can replace some of the more manual, repetitive tasks that security teams usually perform, however, security professionals are still needed to tune this automation and define policy based on risk tolerance. For example, automation can be used for dependency management, by analyzing source code and files to detect open source or third-party components. These tools can also be used to automatically generate resulting SBOMs. Security team members need to help define which dependencies should be excluded based on risk factors, and define vulnerability prioritization guidelines. As another example, automation can be used to sign and verify artifacts, and continuously monitor artifact repositories for tampered or outdated artifacts. Security teams would be responsible for identifying these weak links to be secured, and setting security thresholds.”
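
One of the automation tasks McGuire mentions, generating an SBOM from detected components, can be sketched in a few lines. The example below builds a minimal CycloneDX-style component list from a pinned Python requirements.txt; it is illustrative only, since it captures no transitive dependencies, which dedicated software composition analysis tools are needed to enumerate.

```python
import json
from pathlib import Path

def requirements_to_sbom(req_file: str) -> dict:
    """Build a minimal CycloneDX-style SBOM from a pinned requirements.txt.

    Only direct, '=='-pinned dependencies are captured; transitive dependencies
    (the majority, per the figures quoted above) require a resolver or a
    dedicated SCA tool rather than a flat manifest parse like this.
    """
    components = []
    for line in Path(req_file).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line or "==" not in line:
            continue
        name, version = (part.strip() for part in line.split("==", 1))
        components.append({
            "type": "library",
            "name": name,
            "version": version,
            "purl": f"pkg:pypi/{name}@{version}",
        })
    return {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": components}

if __name__ == "__main__":
    print(json.dumps(requirements_to_sbom("requirements.txt"), indent=2))
```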

Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace, adds more on the growing role of AI in defending from cyber attacks: “While GenAI was the talk of 2024, Agentic AI will be a significant focus for organizations in the year ahead. Agentic AI refers to autonomous artificial intelligence systems capable of complex tasks, decision-making and interacting with external systems with minimal human intervention. Unlike traditional AI models, AI agents mimic human decision-making processes and can adapt to new challenges, making them ideal for cybersecurity applications. Agentic systems use a combination of various AI or machine learning techniques to ingest data from a variety of sources, analyze the data, prepare a plan of action (autonomous or recommended), and take action. Most think of agentic systems as comprising LLM-based agents, but many different machine learning techniques can be used to optimize accuracy or function for specific use cases. In cybersecurity, these systems can be used to autonomously monitor network traffic, identify unusual patterns that might indicate potential threats, and take autonomous actions to respond to possible attacks. Agentic systems can also handle incident response tasks, such as isolating affected systems, patching vulnerabilities, as well as triaging alerts in a SOC. They can also help with incident summarization and visualization as well as report generation to keep stakeholders informed during an ongoing incident. Another innovative use case will be the use of multi-agent systems for application testing and vulnerability discovery. With agentic systems able to take on many manual, time-consuming tasks in the SOC, skilled analysts can focus on more strategic tasks. This is critical for enabling security teams to move from a reactive to proactive state. However, these advantages also come with challenges. Agentic AI systems can inherit biases from their training data, potentially making flawed or unfair decisions. Without proper oversight, they may misinterpret their tasks, leading to unintended behaviors that could introduce new security risks. Building and maintaining such systems also demands deep technical expertise, something many organizations currently lack. Generative and LLM-based agentic systems have additional concerns, including hallucinations, poor reasoning, and susceptibility to attacks like prompt injection. These vulnerabilities introduce new attack surfaces that traditional defenses may not cover. Additionally, some agent-based systems are self-discoverable and have excessive permissions. Proper safeguards to control communication boundaries, accesses, permissions, and robust data security are necessary to protect organizations. Agentic AI holds great promise in cybersecurity, but it must be implemented safely, securely and responsibly, with robust safeguards to truly strengthen defense.”
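
The ingest-analyze-plan-act loop Carignan describes can be reduced to a toy example. The sketch below triages a list of alerts whose anomaly scores are assumed to come from an upstream detection model; the thresholds and response actions are invented for illustration, and a production agentic system would encode organizational policy and keep humans in the loop for consequential actions.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    description: str
    anomaly_score: float  # 0.0-1.0, assumed to come from an upstream detection model

def plan_action(alert: Alert, auto_threshold: float = 0.9) -> str:
    """Toy 'plan' step: choose between autonomous action and a recommendation.

    The thresholds and actions here are invented for illustration only.
    """
    if alert.anomaly_score >= auto_threshold:
        return f"isolate {alert.host} from the network (autonomous)"
    if alert.anomaly_score >= 0.6:
        return f"recommend analyst review of {alert.host} ({alert.description})"
    return "log and continue monitoring"

def triage(alerts: list[Alert]) -> None:
    """Ingest -> analyze -> plan -> act, in miniature: rank alerts and act on each."""
    for alert in sorted(alerts, key=lambda a: a.anomaly_score, reverse=True):
        print(f"[{alert.anomaly_score:.2f}] {alert.host}: {plan_action(alert)}")

if __name__ == "__main__":
    triage([
        Alert("laptop-042", "beaconing to rare external domain", 0.95),
        Alert("db-primary", "unusual off-hours query volume", 0.72),
        Alert("printer-07", "routine firmware check-in", 0.20),
    ])
```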