Revisiting "Hardening"


When your organisation is attacked, does your environment adapt – or does it break? As our organisations become more dependent on IT systems, can we continue to fall back to manual processes, or do we need to ensure systems keep working even while under attack?

The idea for this post started over dinner with a friend, who turned to me 3/4 of the way through the meal and asked me to write up what I'd been talking about so she could point others to it. We'd been talking about system hardening - the traditional security concept that says systems should have unnecessary processes disabled prior to production deployment.

The base goal is good: we look to reduce the attack surface of a system, i.e. the number of possible points of attack or foothold for someone trying to break, or break into, the system. But is the language wrong, and limiting our thinking? In non-IT terms, when something is hardened it is also brittle; it will resist to a point, but will shatter if enough force is applied. Should we - instead of "hardening" - talk about how we are ensuring systems are resilient in the face of attack? Particularly as we recognise that the Jericho Forum's deperimeterisation and Google's BeyondCorp are either our present or our future.
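To make attack-surface reduction concrete, here is a minimal sketch of the underlying idea: compare what a system is actually running against an explicit baseline of what it needs. The service names and the allowlist below are hypothetical, purely for illustration.

```python
# Hypothetical sketch: flag running services that widen the attack surface.
# The names in REQUIRED_SERVICES are illustrative, not a recommendation
# for any particular platform.

REQUIRED_SERVICES = {"sshd", "app-server", "log-forwarder"}

def surface_report(running_services):
    """Return services that are running but not on the required list."""
    return sorted(set(running_services) - REQUIRED_SERVICES)

running = ["sshd", "app-server", "telnetd", "ftpd", "log-forwarder"]
print(surface_report(running))  # ['ftpd', 'telnetd']
```

The interesting part isn't the set difference - it's that the baseline is written down at all, so the gap between "needed" and "running" becomes something you can measure and act on.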


"The future is already here — it's just not very evenly distributed." 
- William Gibson


Every system either is a target or will be, and for most organisations one breached system is all it could take to see the entire organisation compromised. If we accept this, then we need to do all we can to ensure every computer with access to our systems and data is as resilient as it can be. In an enterprise of any significant size there will be some machines that can’t be updated, but - much like people who can’t be vaccinated - herd immunity will help reduce the likelihood of issues.

I'm happy to acknowledge that the Industrial Control System (ICS) community has been talking about resilient control systems for some time, but those concepts don't appear to have spread beyond ICS - or at least I'm not hearing them.

What would resilience look like? For me it goes beyond disabling unneeded services, towards ensuring a system can keep operating even while under sustained attack, and can easily and simply let an operator (or orchestration platform) know when it is under attack. Some of this exists today and some doesn't: modern operating systems are getting better at isolating user space from kernel space, and system logging is available for almost anything that happens.
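The "keep operating, but signal" idea can be sketched very simply: count suspicious events (say, failed authentications) in a sliding time window and raise a flag when a threshold is crossed, without interrupting normal service. The class name, threshold, and window below are my own illustrative choices, not tuned recommendations.

```python
from collections import deque
import time

class AttackSignal:
    """Hypothetical sketch: track failed-auth events in a sliding window
    and signal an operator or orchestration platform once a threshold is
    crossed, while the system keeps serving requests as normal."""

    def __init__(self, threshold=5, window_seconds=60):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold  # True => raise an alert

sig = AttackSignal(threshold=3, window_seconds=10)
alerts = [sig.record_failure(now=t) for t in (0, 1, 2)]
print(alerts)  # [False, False, True]
```

The point of the sketch is the shape of the behaviour: the system degrades nothing and blocks nothing by itself - it just makes "we are under attack" a first-class, machine-readable signal.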

But let's be honest. How many organisations - particularly outside the industries that have traditionally focused on security - have removed local administrative access on PCs for end users, or have comprehensive logging of all events from workstations? Couple that with the fact that even in the latest versions of Windows, the event logging is cryptic, to say the least. What small or medium-sized organisation truly has the skills to reliably interpret those logs? Which Managed Security Service Providers a) ingest those logs, and b) actually know how to reliably interpret them?
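To show how much translation the raw logs need, here is a toy lookup for a few well-known Windows Security event IDs. The IDs themselves are real; everything else (the function name, the tiny coverage) is illustrative - a production tool needs vastly more context than an ID-to-sentence mapping.

```python
# Toy sketch: translate a few well-known Windows Security event IDs into
# plain language. Real interpretation also needs the event's fields
# (account, logon type, parent process, ...), which this ignores.

EVENT_MEANINGS = {
    4624: "An account successfully logged on",
    4625: "An account failed to log on",
    4688: "A new process was created",
    1102: "The audit log was cleared",
}

def explain_event(event_id):
    return EVENT_MEANINGS.get(event_id, f"Unrecognised event ID {event_id}")

print(explain_event(4625))  # An account failed to log on
```

Even this trivial mapping makes the gap obvious: the hard part isn't fetching the log entry, it's knowing which of the thousands of event IDs matter and what a given combination of them means.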

At this point, I am sure there's someone who has been working either with one of the large FinServ companies, or with small security-aware startups, who will object and say this is all solvable - the solutions are known, and companies just need to deploy them. To a point, that's true. But companies aren't deploying them - because if they are thinking about this at all, they are still thinking about hardening rather than resilience. And they are unlikely to until the security community starts talking about solutions to the next problem, rather than about how to fight the last war.

In my next post, I’ll go into more detail on what I think could be done to foster more resilience-focused conversation.



Disclaimers:
  1. I am not claiming my ramblings here are necessarily original - lots of people are thinking about these topics. These are my own thoughts, but they have likely been influenced by others, and I'm not going to try to footnote and attribute everything. Where I know I have consciously picked up someone else's idea, I will note that.
  2. I don't claim that what I've written is "right" - this is my attempt at a conversation with the wider community. I'm doing this as I seem to have fewer and fewer opportunities to have these sorts of conversations IRL: I spend an increasing percentage of my time with non-technical leaders discussing cybersecurity risks, and have less opportunity to discuss security practice (and philosophy) with practitioners.
