
Trusted vs trustworthy computer systems

2008/02/13

What does the distinction between “trusted” and “trustworthy” mean for computer systems?

People tend to talk of a trusted computer system when they mean a system that is trusted to perform security- or safety-critical operations. Unsurprisingly, the military and defence communities have worried about this for a while, and the term is explicitly used in the famous Orange Book, officially referred to as the “Trusted Computer System Evaluation Criteria”. It has since been replaced by the Common Criteria for IT Security Evaluation, or Common Criteria (CC) for short.

The Orange Book and the CC define an evaluation process that aims to ensure that the systems we trust with safety- or security-critical operations are actually trustworthy. The idea is that a system is subjected to a more or less thorough security evaluation, and if it meets certain criteria, it can be certified as trustworthy to a certain assurance level.

This is all fine for expensive military systems, where the odd dozen million dollars for security evaluation doesn’t matter that much. (And it is expensive: the industry estimate is that CC evaluation at the highest level, EAL7, costs around $10k per line of code!) But for embedded systems, particularly consumer goods selling for no more than a few hundred dollars, such as the ubiquitous mobile phone, this approach isn’t feasible.
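To put those numbers in perspective: at that rate, evaluating even a kernel as small as the 20,000-line OKL4 microkernel discussed below would come to 20,000 × $10k = $200 million, for a device that retails for a few hundred dollars. A code base of hundreds of thousands of lines is simply never going to be evaluated at that level.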

Well, actually it isn’t even good enough for what it was designed for. The expensive evaluation will certainly make you sleep better if you subscribe to the theory that anything expensive must be good, so something very expensive must be very good, right? If you’re a bit more of a sceptic, you might be interested in what the CC actually gives you. It turns out that, besides a nice stamp of approval, it gives you no security guarantee whatsoever. It’s a glorified ISO-9000 process, even at the highest level. If you don’t believe me, have a look at the relevant Wikipedia article, or at my recent white paper “Your System is Secure? Prove It!”.

At OK Labs we are going in a direction that makes much more sense. On the one hand, we are making systems more trustworthy by minimising their trusted computing base (TCB). If the security-critical code base is small (and with OKL4 it can be as small as 20,000 lines), it is inherently less faulty than something that is hundreds of thousands of lines of code, even if those lines have gone through an expensive process producing reams of printed paper. This is achieved by our OKL4 microkernel technology, the hottest thing on the planet (but I may be a bit biased ;-)). The OKL4 microkernel provides a minimal basis for secure systems. And it supports virtualization, so you can run a complete operating system (such as Linux) in a virtual machine without having to trust it. And all that at negligible performance overhead. A sketch of the resulting architecture follows below.
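To make that architecture concrete, here is a minimal sketch in C. The ipc_wait and ipc_reply functions are hypothetical stand-ins for the microkernel’s real IPC primitives (stubbed here so the sketch compiles); the point is that this little server plus the kernel is the entire TCB, while the Linux guest in its virtual machine can only reach the service through kernel-mediated IPC and is trusted with nothing.

    /* Sketch: a security-critical service isolated on the microkernel.
     * ipc_wait()/ipc_reply() are hypothetical stand-ins for the kernel's
     * IPC primitives, stubbed so the sketch compiles and runs. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct { uint32_t op; uint32_t arg; } msg_t;

    /* stand-ins for kernel-mediated IPC with the untrusted Linux guest */
    static msg_t ipc_wait(void)        { return (msg_t){ .op = 1, .arg = 21 }; }
    static void  ipc_reply(uint32_t r) { printf("reply: %u\n", (unsigned)r); }

    /* the only code, besides the kernel, that performs the critical operation */
    static uint32_t handle(msg_t m)
    {
        if (m.op != 1)
            return 0;          /* reject anything unexpected */
        return m.arg * 2;      /* placeholder for the critical operation */
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++) {  /* a real server would loop forever */
            msg_t m = ipc_wait();      /* request arrives from the guest */
            ipc_reply(handle(m));      /* result goes back via the kernel */
        }
        return 0;
    }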

So, small is beautiful as far as security is concerned. But it goes further. As explained in another white paper, OKL4 is small enough that it is possible to prove that it is secure, using sound and solid (but decidedly non-trivial) maths, the next hottest thing on the planet. And we don’t just prove things about some abstract model of the system; we prove the actual C/assembler code correct. Nothing short of this gives you a guarantee that your system is trustworthy. And you shouldn’t have to rely on less.
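To give a flavour of what proving “the actual code” means, here is a toy C function with a machine-checkable contract written in ACSL, the annotation language of the Frama-C verifier. (The kernel proofs themselves are done in an interactive theorem prover; this is just an illustration of code-level, as opposed to model-level, specification.)

    /*@ requires \valid(a) && \valid(b);
        assigns *a, *b;
        ensures *a == \old(*b) && *b == \old(*a);
    */
    void swap(int *a, int *b)
    {
        int tmp = *a;  /* the proof obligation covers these lines, not a model of them */
        *a = *b;
        *b = tmp;
    }

A verifier either establishes that the ensures clause holds for every possible input or it fails; there is no room for an evaluator’s judgement, which is exactly the difference between a certificate and a proof.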
