
Lessons From 100+ Ransomware Recoveries

Ransomware attacks are on the rise. They’ve become more targeted over the last five years – and more tailored to their victims.

In our experience, that’s down to a few core factors. Generally speaking, cyber crime is a low-risk, high-return pursuit. It doesn’t cost much time or money to become a cyber criminal, and a lack of cross-jurisdiction coordination makes it a difficult thing to prosecute.

Attackers now have more time to get to know their targets and more resources at their disposal, and they demand higher ransoms. According to NCC Group, the number of ransomware attacks in March 2023 broke all records, with an increase of 62% compared to March 2022.

So, what does this mean for businesses? It might paint a grim picture, but it doesn’t mean a successful attack is inevitable. In our combined experience as a team, we’ve navigated 100+ ransomware recoveries. Here’s what we’ve learned.

1.    Build awareness

The first thing you need to ensure in your Business Continuity Plan is company-wide awareness of the types of cyber threats your organisation may be facing – ransomware being just one. Despite the popular saying, what you don’t know can indeed hurt you.

Everyone in an organisation, from the most junior to the most senior personnel, needs to understand the potential threats to business continuity. That includes the different channels they might come through, how to spot anomalies, and what to do when issues arise. It also means considering the bigger picture, and having a protocol in place for ransomware payment and non-payment scenarios.

Make sure you and your team are familiar with the most common threats to data security and how to react to them. Depending on the size of your organisation, it can be as simple as learning to recognise potential phishing emails and not clicking on potentially harmful links.

2.    Faster detection means better recoveries

When it hit in 2017, the WannaCry ransomware attack was the big story in cybersecurity. The news broke on a Friday afternoon, and we told our teams to get ready for a flood of recoveries over the weekend. But that didn’t happen. We had no major escalations, and when we regrouped on Monday, we had zero cases.

The next day, our engineers spotted an anomaly in one customer’s backups. If a lot of data has changed or been encrypted, it’s flagged by our monitoring. Most of the time the reasons for changes are harmless. In this case, it was WannaCry, and it had affected an NHS customer.

Thankfully, the infection was limited. The recovery was simple and we were easily able to find the most recent clean backup and restore to the hardware on the customer’s site.

The organisation in this case didn’t detect the issue quickly. We only found it through anomalies in regular backups. This is not how you want to find out: it means you’re days behind.

The bottom line: Have a method of monitoring to alert you to anomalies in your systems. Many types of backup software now use in-built detection. Machine learning can help get a sense of what a ‘normal’ backup day looks like to help spot anomalies. Automated alerts can be used as API-driven steps to trigger actions like disconnecting a server from the network if an issue is detected.
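To make that concrete, here is a minimal sketch of the idea: flag a backup whose change volume sits far outside the recent baseline, then call an isolation hook. The baseline here is a simple z-score stand-in for the machine-learning models real backup products use, and `isolate` is a hypothetical callback (e.g. an API call that disconnects the server) – none of this is a specific vendor’s implementation.

```python
"""Sketch of backup-anomaly alerting. The monitoring setup and the
isolate() hook are illustrative assumptions, not a real product API."""
import statistics


def is_anomalous(changed_bytes_history, todays_changed_bytes, threshold=3.0):
    """Flag today's backup if its change volume is far outside the
    baseline of recent backup days (simple z-score heuristic)."""
    mean = statistics.mean(changed_bytes_history)
    stdev = statistics.stdev(changed_bytes_history)
    if stdev == 0:
        return todays_changed_bytes != mean
    return abs(todays_changed_bytes - mean) / stdev > threshold


def on_backup_complete(server, history, changed_bytes, isolate):
    """If the change volume looks anomalous, trigger the (hypothetical)
    isolate hook, e.g. an API-driven step that removes the server from
    the network, and report that an alert fired."""
    if is_anomalous(history, changed_bytes):
        isolate(server)
        return True
    return False
```

A mass-encryption event shows up as a sudden spike in changed bytes, which is exactly the pattern that caught WannaCry in the customer’s backups above.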

3.    Plan for how to recover if your first-choice DR site is unavailable

In 2019, we began to see more manufacturing organisations being targeted. One of our BaaS customers, an international company, was infected in both its Office 365 data and the systems at its UK headquarters.

The forensic team sent by the customer’s insurer couldn’t identify the source of the infection and needed to carry out several recoveries to run scans on the data.

The company had always planned to recover its data back onto its own hardware at its data centre. The problem was that the forensic team was using the company’s existing hardware to analyse the breach, and there was no space in the data centre for additional hardware to restore onto – meaning the first choice of recovery environment was unavailable.

The answer was to restore their data into the public cloud – in this case, Azure. For our DRaaS customers, that’s standard practice; until recently, it hasn’t been typical for BaaS customers.

We were able to carry out multiple sandbox recoveries to test the data and find the most recent clean copy. This method is always an option, but it is not ideal because conducting multiple recoveries takes much longer. We would always recommend doing a small sandbox recovery to test before rolling out the full recovery, but ideally you want to know in advance that the version you are restoring is clean.
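Because each sandbox recovery is slow, the search for the most recent clean copy is worth doing efficiently. A sketch of one approach, assuming a hypothetical `scan_is_clean` callback (a sandbox recovery plus malware scan) and that once a recovery point is infected, all later points are too – usually true for ransomware:

```python
"""Sketch of searching recovery points for the newest clean copy.
scan_is_clean() is a hypothetical, expensive callback standing in for
a sandbox recovery and malware scan of one recovery point."""


def latest_clean_point(points, scan_is_clean):
    """points: recovery points ordered oldest -> newest.
    Binary search keeps the number of slow sandbox recoveries to
    O(log n) instead of testing every point in turn."""
    lo, hi, best = 0, len(points) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if scan_is_clean(points[mid]):
            best = points[mid]  # clean: try something newer
            lo = mid + 1
        else:
            hi = mid - 1        # infected: look earlier
    return best
```

With, say, 60 retained points, this needs around six sandbox recoveries rather than dozens – a significant saving when each one takes hours.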

Ultimately, we got them recovered into the cloud temporarily, and then back onto their original hardware once the forensic team was finished.

The bottom line: Where possible, be able to restore into multiple platforms. Cross-hypervisor and cross-cloud compatibility is vital. We’ve had several situations where it isn’t possible to recover to the first-choice recovery environment. That flexibility in your backup software saves a lot of time and heartache.

4.    Review and update your retention policies

Your data retention policy is how long you keep data for regulatory or compliance reasons, and how you remove it when it’s no longer needed.

Ransomware attackers have evolved their methods. They know you are less likely to pay out if you can quickly switch over to Disaster Recovery systems, so they now delay detonation of the ransomware to outlast typical retention policies. This exposes the limitation of DR solutions: while they are the fastest way to recover, they hold only a limited number of versions or days you can recover to.

For one of our manufacturing customers – using both our BaaS and DRaaS products – the ransomware was present on their systems for around three months. That meant that every DR recovery point was compromised, and we had to recover from backups.

The Recovery Time Objective (RTO) was a day. We recovered from backups, so it took longer than DR but relatively speaking, it was a fast recovery. The Recovery Point Objective (RPO), however, was from three months prior.

The challenge the organisation then faced was how to re-create that lost data. There’s a lot of wasted manpower in re-entering data (if you even still have it) to get back to where you were before the attack.

The bottom line: Review your backup retention policies and systems. Regularly.

5.    Plan for what you’ll do if critical people are unavailable

There’s no perfect time for ransomware to hit your organisation. It’s best to plan for some worst-case scenarios.

For one of our DRaaS customers, this lesson became a reality when they faced an attack just after the IT Director had left the company. The IT Manager was also away on leave at the time.

A classic Business Continuity exercise is to model how you would perform if key staff were removed from the plan. Every organisation has people who hold institutional knowledge of how things work; take them out of the picture and you find the gaps across departments and processes.

Thankfully for this customer, the ransomware had only been present for a few days before detection, so they had a clean copy to recover from and a managed service provider to recover them.

We recovered them into Microsoft Azure. They had a small amount of work to re-input, but they caught up quickly and there was very little impact on their customers.

The bottom line: Plan for absences of key team members, review how long you might expect to be operating in Disaster Recovery mode and think about how you will fail back from an invocation.

6.    Anticipate extended downtime and long recovery

Fixing a problem always takes longer than you expect it to.

In 2022, we helped a major transport company to recover from a cyber-attack that had significantly affected its ability to function.

The customer was breached via a successful phishing attack and had their production systems encrypted. The software-level backups were “immutable”, but the attackers had gained access and were able to delete the underlying storage.

The customer informed us of the breach and we activated our recovery teams.

On day one, they conducted the forensic investigation and incident response, and were able to fully kick off the recovery from day two. In the interim, we prepared the environment so the recovery could begin as soon as they were ready.

The bottom line: Internal, customer-controlled backups are vulnerable to any major attack. Isolated, off-network, encrypted and monitored backups are essential. Better still, outsource this to an expert Managed Service Provider.