Incident response lessons learnt on the ground
Our approach – reuse as much as possible of the operational knowledge gained by peers who have already handled attacks. Most of them are kind enough to help, provided you ask. Below is a series of lessons shared by companies that have handled major incidents, which others can take as actionable items.
Sincere thanks to Diana Peh.
The logistics giant Toll Group was first hit by the Mailto ransomware at the end of January, and took six weeks to recover.
It then suffered a second attack in early May that used the Nefilim malware and was similarly devastating.
Below are the clear lessons kindly shared:
“In a time of crisis, it can get really confusing. Everybody wants to help, but you need to know who’s in charge, you need a leader.”
“My experience with both cyber incidents [has] been very different. I found it really hard for the first incident, [but] the second one [was] much better than the first.”
“In the first one in particular, there were lots of questions around who’s in charge, and what are the roles and responsibilities.”
“It’s really important upfront that you actually are clear on roles and responsibilities going in and that you’re ready, because in a time of crisis, you really want to make sure that you try to eliminate as much chaos as you possibly can.”
An incident response plan should lay out “the next 20 steps” clearly, with plenty of practice runs.
Blue team exercises – “We’re doing this quarterly at the moment, not just with the executive crisis management teams, but with the teams on the ground, and my reflection is that this is actually a lot harder than it sounds, especially if you have teams that are spread across the globe and working across different time zones.
“My personal experience is that having run a couple of them by now, we’re still finding lots of opportunities to improve and making sure that our teams really deeply understand the drill.”
Maersk had to reinstall its entire IT environment in 10 days to recover.
“We spent an awful lot of time trying to test all our computers and validate them and make sure they were clean and safe to put back on the network,” James said. “After about two weeks of doing it and redoing it and redoing it, we made the decision in the end just to wipe everything and start afresh.

“In hindsight I would have done that at the beginning and not wasted all that time and effort.
“We’d been hit by something very serious and I think it was not the best use of our time to spend trying to check all of that equipment and see if there was anything that was salvageable.”
General counsel Amber Matthews said one of the “saving graces” for DLA Piper was that the company did not lose any data to the attackers, and that its backups were unaffected.
Still, the company is making changes to its architecture to prevent a similarly catastrophic global failure should it be hit again in future.
In addition to segmenting its network so it can better contain threats, the company is also looking to stand up cloud-based versions of its core systems for business continuity purposes.
“We manage all of our infrastructure on-prem,” James said.
“But for core services we are now looking to host some of those services – at least as a lifeboat solution – in the cloud where we can failover to those very quickly if we need to.
“The assumption being this will probably happen again at some point, somehow, hopefully not on the same scale, but we can’t wait four days to recover email – we need to be able to fail that over almost instantly.”
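The "lifeboat" pattern described above can be sketched in a few lines: probe the on-prem primary, and if it is unreachable, direct traffic to the cloud standby. This is a minimal illustration only, not DLA Piper's actual setup; the hostnames are hypothetical, and a plain TCP connect stands in for whatever health check and traffic-steering mechanism (DNS, load balancer, MX records) a real deployment would use.

```python
# Hypothetical "lifeboat" failover sketch. Hostnames are illustrative
# placeholders, not any company's real infrastructure.
import socket

PRIMARY = ("mail.example.internal", 443)        # on-prem primary (assumed name)
LIFEBOAT = ("mail-standby.example.cloud", 443)  # cloud standby (assumed name)

def is_reachable(host, port, timeout=2.0):
    """Crude health check: True if a TCP connection succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

def choose_endpoint(primary_ok):
    """Serve from the primary when healthy; otherwise fail over to the lifeboat."""
    return PRIMARY if primary_ok else LIFEBOAT

# Typical loop: endpoint = choose_endpoint(is_reachable(*PRIMARY))
```

In practice the decision would be made by monitoring infrastructure rather than an inline check, but the shape is the same: a pre-provisioned standby plus an automatic switch, so recovery is near-instant instead of taking days.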