

Microchip 'god mode' flaw: Is it time to rethink security?

Luther Martin, Distinguished Technologist, Micro Focus

At the Black Hat USA 2018 conference, security researcher Christopher Domas described how he had discovered surprising, undocumented behavior, the "rosenbridge backdoor," in certain microprocessors.

From one point of view, the mere existence of this vulnerability indicates that the fears of the most paranoid users are warranted. From another point of view, it suggests that we just need to adjust our understanding of exactly what security is and what it can provide.

It may be time to rethink security. Here's why.

What rosenbridge is, how it works

This particular vulnerability might be described as a type of hardware backdoor, in which undocumented CPU instructions can take a process from an operating system's Ring 3, the least privileged level of access to resources, directly to Ring 0, the most privileged level of access to resources.

Ring 3 is where applications run, and keeping them there keeps them from tinkering with the data or code that other applications use. Ring 0 is reserved for the operating system itself, which manages the resources that all running processes can access. An application needs to be in Ring 0 to enable this backdoor, but Domas found that some systems seem to have been shipped with the backdoor already enabled.
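To make the ring distinction concrete, here is a minimal sketch (my illustration, not Domas's code; it assumes Linux on x86 and gcc) of what normally happens when Ring 3 code tries to do something reserved for Ring 0: the privileged hlt instruction simply faults, and the kernel delivers the fault to the process as a signal.

    /* Sketch: ring protection in action. Executing a privileged (Ring 0)
       instruction such as hlt from an ordinary (Ring 3) process raises a
       general-protection fault, which Linux delivers as SIGSEGV. */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf env;

    static void on_fault(int sig) {
        siglongjmp(env, 1);  /* escape the faulting instruction */
    }

    int main(void) {
        signal(SIGSEGV, on_fault);
        if (sigsetjmp(env, 1) == 0) {
            __asm__ volatile("hlt");  /* privileged: allowed only in Ring 0 */
            puts("hlt executed; ring protection failed?");
        } else {
            puts("hlt faulted: Ring 3 cannot run Ring 0 instructions");
        }
        return 0;
    }

The rosenbridge backdoor is alarming precisely because it lets Ring 3 code sidestep this protection and reach Ring 0 anyway.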

Software running in Ring 0 can potentially bypass any security mechanism of other processes. If a process uses a password or cryptographic key, another process running in Ring 0 may also be able to get that password or key, thus virtually eliminating the security it provides.

So, if a process can authenticate or decrypt data, then it's entirely possible that another process that is running in Ring 0 can also perform that same authentication or decryption operation.
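A Ring 0 backdoor grants strictly more power than anything the operating system hands out, but even OS-granted privilege is enough to illustrate the point. This hedged sketch (again mine, assuming Linux; the PID and address are hypothetical placeholders) uses the process_vm_readv system call, which a sufficiently privileged process can use to read another process's memory, keys and passwords included.

    /* Sketch: with enough privilege (root or CAP_SYS_PTRACE on Linux),
       one process can read another's memory outright, exposing any
       password or key held there. A Ring 0 backdoor is strictly more
       powerful than this. The PID and address below are placeholders. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    int main(void) {
        pid_t target = 1234;                   /* hypothetical victim PID */
        char secret[32];
        struct iovec local  = { secret, sizeof secret };
        struct iovec remote = { (void *)0x7f0000001000UL, sizeof secret };

        ssize_t n = process_vm_readv(target, &local, 1, &remote, 1, 0);
        if (n < 0)
            perror("process_vm_readv");
        else
            printf("read %zd bytes of the target's memory\n", n);
        return 0;
    }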

Who's affected

This dramatic escalation of privilege was due to additional hardware—included with some versions of a processor—that implemented the undocumented feature. Domas described the systems affected by this vulnerability in this way:

"It is thought that only VIA C3 CPUs are affected by this issue. The C-series processors are marketed towards industrial automation, point-of-sale, ATM, and healthcare hardware, as well as a variety of consumer desktop and laptop computers."

The CPUs affected by this vulnerability are fairly old (they date to 2004), but systems vulnerable to this backdoor are still in use, and some of them may run applications that handle very sensitive data.
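If you want to know whether a machine could even be in scope, the first step is simply identifying the CPU. A minimal sketch (mine, not Domas's tooling, which also checks family and model) reads the CPUID vendor string and looks for "CentaurHauls," the string that VIA/Centaur parts report:

    /* Sketch: read the CPUID vendor string and flag VIA/Centaur CPUs.
       This identifies the vendor only; Domas's rosenbridge tools do a
       finer-grained family/model check for affected C3 parts. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;
        memcpy(vendor,     &ebx, 4);  /* vendor string lives in EBX:EDX:ECX */
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        printf("CPU vendor: %s\n", vendor);
        if (strcmp(vendor, "CentaurHauls") == 0)
            puts("VIA/Centaur CPU: a C3 here may carry the rosenbridge feature");
        return 0;
    }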

That combination of aging hardware and sensitive workloads is potentially very bad. The feature is even mentioned in the chip's data sheet (PDF; see "alternate instruction execution" on page 82) for the affected processor, but only in passing: "This alternate instruction set is intended for testing, debug, and special application usage. Accordingly, it is not documented for general usage."
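Domas's published proof of concept reports that this alternate instruction set is gated by a bit in a model-specific register. As a heavily hedged sketch, with the MSR index and bit taken from his public code rather than from anything in the data sheet, the Ring 0 enable sequence looks roughly like this (it is only meaningful inside a kernel module on an affected C3):

    /* Sketch of the enable sequence as reported in Domas's rosenbridge
       proof of concept; the MSR index and bit come from his code, not
       the data sheet. Must run in Ring 0 on an affected VIA C3. */
    static void enable_alternate_instructions(void) {
        __asm__ volatile(
            "mov $0x1107, %%ecx\n\t"   /* feature-control MSR per the PoC */
            "rdmsr\n\t"
            "or  $1, %%eax\n\t"        /* set the reported enable bit     */
            "wrmsr\n\t"
            ::: "eax", "ecx", "edx");
    }

Once the bit is set, the proof of concept's launch instruction (reported as the byte pair 0F 3F) drops execution into the hidden core, after which ordinary Ring 3 code can tamper with Ring 0 state.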

Undocumented instructions

It turns out that undocumented instructions are more common than you might suspect. They are useful, for example, in testing implementations of random number generators (RNGs), where the ability to bypass the RNG and substitute fixed, known data makes it possible to check outputs against precomputed answers.
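To see why such a bypass matters for testing, consider a known-answer test. This toy sketch (entirely illustrative; the generator is a throwaway xorshift, not a real DRBG) shows that deterministic output checks are only possible when the seed material can be pinned to known bytes, which is exactly the role an undocumented RNG-bypass instruction plays in hardware:

    /* Toy sketch: a known-answer test for a generator. Only a fixed,
       known seed makes the output predictable enough to compare against
       a precomputed answer; hardware RNG-bypass hooks serve this role. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t state;

    static uint8_t next_byte(void) {      /* throwaway xorshift generator */
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        return (uint8_t)state;
    }

    int main(void) {
        state = 0x0102030405060708ULL;    /* fixed seed: the "bypass" */
        printf("first byte: 0x%02x\n", next_byte());
        /* ...compare against the precomputed expected value here. */
        return 0;
    }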

It might even be reasonable to assume that backdoors exist on all processors that are commonly used, because they provide a capability that's necessary to test hardware and debug software.

The existence of undocumented CPU instructions has been known for many years. From May 1996 to August 1998, Dr. Dobb's Journal even had a regular column by Robert Collins, "Undocumented Corner," which described some of these.

Most of these were not too surprising. For the very common x86 architecture, they included things such as addressing modes that a programmer might guess existed but that weren't listed in any documentation.

Many of the formerly undocumented instructions lost their undocumented status when newer versions of documentation gradually admitted to their existence. But there are still a few left that it might be appropriate to describe as "open secrets," because they are now listed on the Wikipedia page for x86 instructions.
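How do researchers find these in the first place? One classic technique, used in far more sophisticated form by Domas's own sandsifter tool, is to execute candidate byte sequences and watch for illegal-instruction faults. A minimal sketch (mine; Linux and x86 assumed) probes the old undocumented SALC opcode, 0xD6, which executes on 32-bit x86 but faults in 64-bit mode:

    /* Sketch: probe a candidate opcode by executing it and catching
       SIGILL. Probes 0xD6 (SALC), a classic undocumented instruction
       that works in 32-bit mode but is invalid in 64-bit mode. */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    static sigjmp_buf env;

    static void on_sigill(int sig) { siglongjmp(env, 1); }

    int main(void) {
        unsigned char probe[] = { 0xD6, 0xC3 };   /* SALC; RET */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memcpy(buf, probe, sizeof probe);

        signal(SIGILL, on_sigill);
        if (sigsetjmp(env, 1) == 0) {
            ((void (*)(void))buf)();
            puts("0xD6 executed: this CPU/mode implements SALC");
        } else {
            puts("0xD6 raised SIGILL: not implemented in this mode");
        }
        return 0;
    }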

Even given all that, the rosenbridge vulnerability is a particularly insidious one from a security perspective, because it gives an adversary the ability to defeat absolutely any security mechanism. But if undocumented backdoors exist in all hardware, what is a reasonable way to deal with this fact?

What to do about it

It's well understood these days that all software has bugs and that some of those bugs affect the security of software. And without a thorough review of your software, it's possible for a clever adversary to insert code that implements all sorts of insidious functions, including ones that let them bypass any security mechanisms that even the cleverest engineers can think of.

It's probably less well understood that hardware is the very same way: All hardware has bugs, some of which affect security. And as the discovery of the rosenbridge backdoor shows, it's also possible to have unexpected hardware features that implement functions that dramatically reduce the security the hardware is meant to enforce.

So both software and hardware are buggy and easily subverted. If you want to operate securely, what's left? Is there any way to operate securely with insecure software and hardware?

It turns out that this might be the wrong question to ask. A better question might be: does it even matter? Andrew Odlyzko summarized his thoughts on this in his paper "Providing Security with Insecure Systems" (PDF), which he wrote for his keynote address at the Third ACM Conference on Wireless Network Security (WiSec '10).

Learn to live with it

Odlyzko noted that if you take a close look at today's IT infrastructure, you'll probably be appalled at how buggy and insecure it is. But the same thing was true 10 years ago. And 20 years ago. And 30 years ago. And the world has continued to function reasonably well, even with this buggy and insecure infrastructure.

Based on this observation, Odlyzko suggested that a reasonable strategy is to accept the fact that lots of exploitable vulnerabilities exist and to, as he said, adopt the Dr. Strangelove approach and "learn to love the bomb."

So the optimal strategy may be to simply continue on our current course, or, as Odlyzko said, "Instead of trying for the unattainable, let us accept that perfect security cannot be provided, and learn from what has worked in the past." 

This may not sit well with security purists, but it might be the best approach in the long run. Perfect security is clearly impossible. And given the never-ending stream of newly discovered vulnerabilities such as the rosenbridge backdoor, even good security may be out of reach.

But it just might be possible to make things secure enough to keep the world functioning.
