From trusted computing to trusted communications

We try to make our writing here at Telco 2.0 practical as well as analytical and philosophical -- as commercial writing in more formal media would be. Normally we try to end articles with something that addresses the "so what?" question. We like to bring you answers. This time, we've just got an important question. Fortunately, a blog is a great place to float half-formed ideas, as well as to force you to structure your own scattered thoughts. Blogging is a different medium from a newsletter or a journal. As Doc Searls, editor of Linux Journal, puts it:

So I write a lot about the Net, the Web, blogging, podcasting and the rest of it. And maybe I'm wrong about a lot of it too. Hell, what does anybody know? The whole thing is still new. Everything we say about it is unavoidably provisional.

So, this is the first of a few essays with some provisional (and possibly wrong) thoughts about delivering security and privacy to users in a mass and massively networked age. These are problems -- and opportunities -- for everyone in the communications value chain. Indeed, they were sparked by a consulting assignment for someone in the silicon part of the ecosystem. Our punchline (you'll have to wait) will be this: there's a very deep architectural problem in how computers and networks interact, and we've got some ideas on how to do things differently.

The economics of security are different from the rest of the communications value chain

There's big money in the security game. Why? Well, consider the four basic means of making money from any communications service:

  • Create or capture valuable bits. This could be shooting a movie, or giving the users a phone to talk into.
  • Add value to existing valuable bits. For example, there's a great platform opportunity for telcos to open up their infrastructure to deliver personalisation services to third party application, portal or search providers who don't know anything about you.
  • Move valuable bits from A to B so they can interact with the user. Enough said.
  • Stop unwanted negative-value bits from getting through to the user, or even being sent in the first place.

This last category is special. For the first three, the amount of value you create roughly corresponds to the number of bits you process. But one rotten bit can undo the value of billions of tasty ones. If only there were an easy way of identifying them... hmm, well, there's an IETF standard:

Firewalls [CBR03], packet filters, intrusion detection systems, and the like often have difficulty distinguishing between packets that have malicious intent and those that are merely unusual. The problem is that making such determinations is hard. To solve this problem, we define a security flag, known as the "evil" bit, in the IPv4 [RFC791] header. Benign packets have this bit set to 0; those that are used for an attack will have the bit set to 1.

If only! Sadly, the "evil bit" specification (RFC 3514) is an April Fool. The take-away, however, is this: a different set of actors and incentives is at work; the user value proposition, and thus the supporting brand and marketing, is unlike selling a service or a pipe; and the problem is acknowledged to be hard enough to warrant spoof standards for some humorous relief.
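For the joke's sake, here's what an RFC 3514-compliant firewall would look like. A sketch of our own (the function names are ours): RFC 3514 puts the evil bit in the previously reserved high-order bit of the IPv4 flags/fragment-offset field, which sits in bytes 6-7 of the header.

```python
import struct

EVIL_BIT = 0x8000  # RFC 3514: the reserved high-order bit of the flags field

def is_evil(ipv4_header: bytes) -> bool:
    """Return True if the (April Fool's) evil bit is set in an IPv4 header."""
    # Bytes 6-7 hold the 3 flag bits plus the 13-bit fragment offset.
    flags_frag = struct.unpack("!H", ipv4_header[6:8])[0]
    return bool(flags_frag & EVIL_BIT)

def firewall(packets):
    """Drop every packet that helpfully declares itself malicious."""
    return [p for p in packets if not is_evil(p)]
```

Security solved in ten lines -- if only attackers would set the bit. The entire hard problem is that they won't.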

Telcos already have a disjoined set of security businesses

There's been some interesting activity in this last space recently. BT bought up security specialist Counterpane. Verizon have done likewise.

Particularly in the US, managed home security and alarm services have long been a significant revenue source for local operators. The former CTO of pre-merger AT&T, Hossein Eslambolchi, was always convinced that the money was in stopping the bad bits, not sending the good ones. Deep packet inspection is useless for service price discrimination, but vital to some security processes.

If you're running an ISP business, you'll try to bundle anti-virus software with PCs and update subscriptions. You'll also have a network operations centre working to keep out email spam and mitigate inbound denial of service attacks.

Then there's the hidden iceberg in telecom: digital identity. Telcos have secretly been the big animals of the identity industry for decades, despite the high visibility of upstarts like Verisign and Neustar. Why do you so willingly pay a hefty fee each month for telephony service, and have bells and ringers in your home that could wake or interrupt you at any time? Because the telecoms industry has created the social, institutional, legal and technical framework that keeps the level of abuse at an acceptable level. Make a serious nuisance phone call from home, expect to get arrested. There's someone to complain to.

So far, operators have resisted extending their vast store of user identity collateral to third parties through technologies such as ENUM. They see the threat (you can point your phone number at a VoIP service and bypass the operator) but not the opportunity.

Of course, it helps that traditional telco networks are very closed when it comes to keeping out the baddies. Open directories like a public ENUM service aren't the telco way of doing business. Which is where the other side comes in: the IT industry and the Internet.

The Internet and PC culture: open to good and bad

You can install any application you want on your PC, and connect to any address or service you like on the Internet. This creates huge option value, as you're not locked into someone else's idea of what the use patterns might be. There's no shortage of innovation that exercises those options. Unfortunately, the malicious and fraudulent actors on the network show a great deal of creativity too. This openness comes with a price.

From ENIAC to Apple

Traditionally, design trade-offs in CPUs have been made in the processing of the bits; the passing of the bits between processors is an afterthought. The standard literature will teach you about the Von Neumann architecture. The chief issues were around how to wire up the processor and memory. (Storage and networking have never been treated as "first-class" objects that need to be modelled in the CPU; they're just undifferentiated bus interfaces driven by the OS software. The CPU can't tell a USB mouse from an ethernet port.)

So the design issues were things like: should one instruction operate on one or many pieces of data? How many instructions should run at once? Should we architecturally separate "instruction memory" from "data memory"?

So all bits are beautiful and good. The value comes from combining them on a processor and spitting out value-added bits into memory. Faster equals better. All applications are trusted to access all data and resources -- read any file, message any process, write to any network. The default is "yes" until someone comes along and places user-based restrictions on the application or data. Those users are local to that machine or (at best) local network. Applications have thin partitions between them, so that a crash in one isn't contagious and doesn't bring down the CPU. Another key innovation was creating privileged supervisor modes that let the operating system access things that applications can't, stopping them from treading on each others' toes. But that's all.
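That "default is yes" model is easy to make concrete. A toy sketch of our own (the names and API are illustrative, not a real OS) of user-based permission checking, in which the identity of the application never enters the decision:

```python
# A toy model of the classic access-control default: rights attach to *users*,
# not to *applications*. Any program a user runs inherits every right that
# user has. (Names and API are ours, purely illustrative -- not a real OS.)

class File:
    def __init__(self, owner: str, mode: int):
        self.owner, self.mode = owner, mode   # mode like 0o644, as in Unix

def can_read(user: str, f: File, app: str = "") -> bool:
    # `app` is deliberately ignored: the kernel asks "who is the user?",
    # never "which application is asking?"
    if user == "root":
        return True
    bits = (f.mode >> 6) if user == f.owner else (f.mode & 0o7)
    return bool(bits & 0o4)

secrets = File(owner="alice", mode=0o600)
# The trusted mail client and the fresh malware download run as the same
# user, so they get exactly the same answer: yes.
assert can_read("alice", secrets, app="mail client")
assert can_read("alice", secrets, app="malware")
assert not can_read("bob", secrets, app="mail client")
```

The check protects alice's files from bob, but it does nothing to protect alice from the code alice runs.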

Bad bits make you bitter

There's just an incy, wincy problem.

The security of the system is then left to the operating system. Tens of millions of lines of code. The challenge isn't Herculean, it's Sisyphean: you could spend forever and a day on it, and still not get it right. In fact, for all practical purposes, Microsoft has spent forever and a day splicing DRM and security into Vista, and still some bugger can simply read out "Delete See Colon Backslash" in an audio file on a web page and BOOF! Pray you've got an install CD and a system backup.

So the problem is that not all bits are good. As noted before, the evil bits can erase the value of the good bits. And networking a computer -- or simply passing media around -- gives the bad bits a medium through which to pass.

Yes. Some are red. And some are blue.
Some are old. And some are new.
Some are sad.
And some are glad.
And some are very, very bad.

Why are they
sad and glad and bad?
I do not know.
Go ask your dad

Dr Seuss: One Fish Two Fish Red Fish Blue Fish

Security as an afterthought

And this Dad says: we started with Trusting computing, which was a naive and good world. The worst threat was a prankster in the university computing lab. Then we found out there were real baddies in town, so we built Trusted computing, retrofitting some security onto the general-purpose CPU to enable security between machines.

Avoiding the technojargon: Trusted Computing has one critical feature that lets my computer talk to your computer securely, and know that certain data and software on your computer haven't been tampered with, and that your computer isn't faking or spoofing the answer. You don't need to be too smart to see that this involves a security function between two devices over a network. If you're in the communications business, you ought to be working out how to make money from this.
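A toy sketch of that handshake (all names are ours): a real TPM signs a "quote" over its Platform Configuration Registers with an attestation key; here, an HMAC with a shared secret stands in for that signature. The verifier sends a fresh nonce so a stale answer can't be replayed.

```python
import hashlib, hmac, os

# Toy remote attestation. HMAC with a shared secret stands in for the TPM's
# attestation-key signature; all names here are illustrative.

DEVICE_KEY = os.urandom(32)          # stands in for the TPM attestation key

def measure(software: bytes) -> bytes:
    """Hash the software state, as a measured boot would."""
    return hashlib.sha256(software).digest()

def quote(measurement: bytes, nonce: bytes, key: bytes) -> bytes:
    """Device side: sign (measurement, nonce) so replies can't be replayed."""
    return hmac.new(key, measurement + nonce, hashlib.sha256).digest()

def verify(expected_sw: bytes, nonce: bytes, sig: bytes, key: bytes) -> bool:
    """Verifier side: recompute the quote from the *expected* software state."""
    good = hmac.new(key, measure(expected_sw) + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(good, sig)

# My computer challenges yours:
nonce = os.urandom(16)
clean, tampered = b"firmware v1.0", b"firmware v1.0 + rootkit"
assert verify(clean, nonce, quote(measure(clean), nonce, DEVICE_KEY), DEVICE_KEY)
assert not verify(clean, nonce, quote(measure(tampered), nonce, DEVICE_KEY), DEVICE_KEY)
```

The interesting part for telcos: every step of this is a message between two devices over a network.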

What we need is Confident computing, where the humans know the system is working for them and it's not a zombie on someone's spambot network (now 100% secure from penetration, thanks to Trusted Computing). Once we have that, we can move to Confident communications, where we can accept untrusted code and data from third parties, knowing that either we're immune to contagion, or the effects are reversible.

On mobile, the constraint is power, not processing

There's another problem. I suspect a large proportion of the runtime of an operating system is given over to performing authorisation and security functions. This bloats the size of the OS (more storage and processing cost) and shortens the battery life of mobile devices. We're about to move to a world of mobile Internet appliances, and we need a transformative approach to power usage: weeks or months of battery life, not hours or days. That means putting security where it belongs: in silicon or the BIOS (the bit that mediates between the OS and the hardware) -- not in the OS or (heaven forbid!) user applications.

There have been previous efforts to radically re-think computing. For example: rather than running the whole processor against a central clock, do everything asynchronously -- accept an input only once the previous stage is done, and pass it on as soon as you're finished. Don't wait for the nanosecond hand to tick on to the next stop. Useful, barely practical, and it doesn't address the real user crisis, which is being asked to manage the security of a complex device in a hostile networked environment.

So, what's the bottom line?

We think the whole foundation of computing and machine communications needs re-examination for the networked age.

Yup, you read it here first. Just a minor detail, that the whole system is flawed. Stop what you're doing. Rescind that router purchase order. Delete your Dell.

This isn't really a Telco 2.0 thing, per se. It's a far bigger problem across a vast ecosystem. We probably can't just turn up, run a workshop for you, and unilaterally fix these issues.

Our thesis is that the value in converged IT/communications is shifting. The first move is from computing power to connectivity. We're well into that process. You'd rather have a slow laptop with WiFi than a fast one that doesn't connect. The next shift is from connectivity to security, and it requires a fundamentally different device and network architecture.

Which is where we'll go next.