

Why you need to get used to SolarWinds

N4nk3r ph3193, Security researcher

The hack that subverted the SolarWinds Orion network performance monitoring software last year is still one of the hot topics of conversation. Many security vendors are now SolarWinds-washing their marketing pitches: “There’s no way that you can avoid the next attack on the software supply chain unless you buy our products.” That sort of thing. Some of these pitches are so ridiculous that they’re funny. Some have more substance to them. I’ll let you decide which are which.

But despite the many vendor claims, it’s probably impossible to stop all such attacks. I don’t know if real therapists say this, but the ones that I have seen on TV often tell their patients that knowing you have a problem is the first step toward solving it. In a similar vein, maybe you should accept that it’s possible for hackers to cleverly subvert any software that exists.

The SolarWinds hack gave us a good example of what’s possible. But you might want to accept that even the most determined and skilled defenders can never stop a determined and skilled adversary, at least not in practical terms.

Why can’t you defend?

A clever adversary can subvert any software or hardware, and in a way that you can’t catch. Ken Thompson gave one of the first examples of this in his acceptance speech for the 1983 Turing Award, “Reflections on Trusting Trust”: if you can subvert a tool, you can subvert its output. That’s much like what rootkits do. A rootkit can hide its components by replacing a system utility, such as the ls command that lists a directory’s contents, with logic that omits anything the rootkit wants to hide. It can roughly replace code that does this:

for each file in a directory {
    return that file’s name
}

With code that does something like this:

for each file in a directory {
    if file’s name is not on the list of hidden files
        return that file’s name
}

So if you use the subverted ls tool to look for the components of the rootkit, you won’t find them. Similar modifications to other parts of an operating system can make it difficult or impossible to find subverted components.
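To make the pseudocode concrete, here’s a minimal sketch in Python of the difference between an honest listing and a trojaned one. The file names in HIDDEN are made up for illustration, and a real rootkit would patch the ls binary or the kernel itself rather than ship a Python script:

```python
import os

# Hypothetical file names the rootkit wants to keep out of sight.
HIDDEN = {"rootkit.ko", "backdoor.conf"}

def honest_ls(path):
    """What an unmodified listing tool does: report everything."""
    return sorted(os.listdir(path))

def subverted_ls(path):
    """A trojaned listing tool: silently drop the rootkit's files."""
    return [name for name in sorted(os.listdir(path))
            if name not in HIDDEN]
```

Run against a directory that contains both kinds of files, honest_ls shows rootkit.ko while subverted_ls quietly omits it; on any clean directory the two produce identical output, so the victim sees nothing unusual.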

Thompson noted that if you try to find such a modification, a clever adversary could even subvert the tools that you could use to do that, making it virtually impossible to detect a very clever subversion of software. And if you try to verify the functioning of the tools that you use to detect such hacks, the tools that you use to do that could also be subverted, and so on. As long as automated tools are being used, it’s possible to subvert those tools to alter their output and avoid being caught.
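The same trick extends to the checking tools themselves. As a toy illustration (the scenario and names are invented, not from Thompson’s paper), a trojaned integrity checker can simply report the digest a file had before the attacker replaced it, so every verification comes back clean:

```python
import hashlib

# Hypothetical: digests of the original files, recorded by the
# attacker before replacing them, keyed by path.
PRE_SUBVERSION_DIGESTS = {}

def honest_sha256(path):
    """An unmodified hashing tool: hash what is actually on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def subverted_sha256(path):
    """A trojaned hashing tool: for files the attacker replaced,
    report the digest the file had before it was replaced."""
    return PRE_SUBVERSION_DIGESTS.get(path, honest_sha256(path))
```

Any digest-based check that runs through the subverted tool reports a clean result, which is why Thompson’s argument bottoms out where it does: you can’t trust code you did not totally create yourself.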

So you can’t really trust software that has been generated by other software. And since essentially all software is built by other software, such as compilers, linkers, and build systems, absolutely all software is susceptible to this. But it’s not just software that has this problem. Hardware can also be subverted in a similar way, because the software and hardware design processes aren’t that different. Integrated circuits are much like object code: you get software object code by running your source code through a compiler, and you make hardware these days by compiling source code written in a hardware description language (HDL) and using the output of that process to fabricate the chips.

Because the two processes are so similar, any clever way to subvert software can probably also be applied to hardware. And the same techniques that a clever adversary can use to hide his malicious modifications to software can also be used to hide malicious modifications to hardware.

Subversion is to be expected

The bottom line is that you can try very hard to make your software and hardware more secure, but it can never be as secure as you’d like it to be. And this is true even if you can somehow find a way to eliminate the many errors that creep into both software and hardware. That’s probably not possible with our current technology, but even if it were, that wouldn’t be enough. We simply can’t trust any software or hardware. It can always be subverted, and in ways that we may not be able to detect.

Face the music

Let’s take the advice of a TV therapist and accept that we have a problem. What can security teams do about it?

It turns out that some smart people have already thought about this and come up with some reasonable ideas of what to do. One of these smart people is Andrew Odlyzko, a professor of mathematics who also thinks a lot about computer security issues and what to do about them. One of his recent papers, “Cybersecurity Is Not Very Important,” has lots of interesting ideas in it. If you have an hour or two to kill, reading and understanding this paper is probably a good way to use that time.

In this paper, Odlyzko makes lots of claims that the information security industry won’t like. Such as: You probably already have enough security and might even be spending too much on it. Not many people in the security industry seem to have read this paper, perhaps because, as Upton Sinclair noted, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

But Odlyzko also had an idea or two about what you should be doing, some of which he summarized by quoting part of the title of the 1964 movie Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. He suggests that we acknowledge we can never get the level of security we might want, and make our peace with that, or “stop worrying and learn to love the bomb.”

What does that mean in practical terms?

If you can never get the level of security that you want, you should instead design your systems to be robust enough to keep functioning even when they’re compromised, or at least to make recovering from a compromise cheap and easy. That’s often described as “cyber resilience” these days, and it looks as if it may actually turn out to be a good idea.

Let’s focus on good enough and get resilient

Cyber resilience is probably never going to get the media coverage that a hypothetical blockchain-enabled, quantum-resistant alternative would, because it lacks the buzzwords, but that’s not necessarily a bad thing.

I’ll take a robust system that can easily handle the worst that hackers can throw at it over the buzzwords du jour any day. And that might be what well-planned and well-executed cyber resilience can do for us. You may not be able to make your systems secure, but you just might be able to make them good enough to handle what the real world throws at them.

No Security is a monthly column.
