We need a security standard for USB and other plug-in devices

Studies have shown that if you leave USB sticks on the ground outside an office building, 60% of them will get picked up and plugged into a computer in the building. If you put the company logo on the sticks, closer to 90% of them will get picked up and plugged in.

USB sticks, as you probably know, can pretend to be CD-ROM drives, and on many Windows systems that means the computer will execute an "autorun" binary from the stick, giving it control of your machine. (And many people run as administrator.) While other systems may not do this, almost every system allows a USB stick to pretend to be a keyboard, and as a keyboard it can also easily take full control of your machine, waiting until the machine is idle, if need be, so you won't see what it types. Plugging malicious sticks into computers is how Stuxnet took over Iranian centrifuges, and yet we all do this.

I wish we could trust unknown USB and Bluetooth devices, but we can't, not when they can be keyboards, pointing devices, and drives we might run code from.

New OS generations have to create a trust framework for plug-in hardware, which includes USB and FireWire, and to a lesser degree even eSATA.

When we plug in any device that might have power over the machine, the system should ask us if we wish to trust it, and how much. By default, we would give minimum trust to drives, and no trust to pointing devices, keyboards and the like. CD-ROM drives would not get the ability to autorun, though that could be granted by those willing to take the risk, poor a choice as it is.
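As a rough illustration, here is a minimal Python sketch of how an OS might encode these per-class defaults. The class names and trust levels are hypothetical, invented for this example rather than taken from any existing standard:

    from enum import Enum

    class Trust(Enum):
        NONE = 0      # device may enumerate but gets no input/output rights
        MINIMUM = 1   # e.g. read files, but nothing executable or privileged
        FULL = 2      # behaves as a fully trusted peripheral

    # Hypothetical defaults for newly seen devices, per the scheme above:
    # drives get minimum trust, input devices get none until the user confirms.
    DEFAULT_TRUST = {
        "mass_storage": Trust.MINIMUM,
        "keyboard":     Trust.NONE,
        "pointer":      Trust.NONE,
        "cdrom":        Trust.MINIMUM,   # autorun disabled at this level
    }

    def initial_trust(device_class: str) -> Trust:
        """Trust granted before the user has answered any prompt."""
        return DEFAULT_TRUST.get(device_class, Trust.NONE)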

Once we grant the trust, the device should be able to store a key provided by the host. After that, the device can use this key to authenticate itself and regain that trust when plugged in again. Going forward, all devices should do this.
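One simple way to do this re-authentication is a challenge-response built on the stored key. The sketch below uses HMAC-SHA256 in Python purely as an illustration of the idea, not as a proposal for the actual wire protocol:

    import hmac, hashlib, secrets

    def provision(device_store: dict) -> bytes:
        """Host generates a key when the user grants trust; the device stores it."""
        key = secrets.token_bytes(32)
        device_store["key"] = key   # in reality, written into the device
        return key                  # the host remembers it too

    def device_respond(device_store: dict, challenge: bytes) -> bytes:
        """Device proves possession of the key without revealing it."""
        return hmac.new(device_store["key"], challenge, hashlib.sha256).digest()

    def host_verify(host_key: bytes, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(host_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    # On each re-plug, the host sends a fresh random challenge.
    store = {}
    key = provision(store)
    challenge = secrets.token_bytes(16)
    assert host_verify(key, challenge, device_respond(store, challenge))

Because the challenge is fresh each time, a device that merely recorded an old exchange cannot replay it.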

The problem is that current devices don't do this, and people won't accept obsoleting all their hardware. Fortunately, devices that look like writable drives can simply have a token file placed on the drive. This token would be changed on every use, making the drive hard to clone.
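A sketch of that rotation, assuming a hypothetical reserved filename on the drive; a cloned drive would present a stale token and be refused:

    import secrets
    from pathlib import Path

    TOKEN_FILE = ".trust_token"   # hypothetical filename; any reserved name works

    def check_and_rotate(mountpoint: Path, expected: bytes) -> bytes | None:
        """Verify the token left on the drive last time, then replace it.

        Returns the new token for the host to remember, or None if the
        drive fails the check.
        """
        token_path = mountpoint / TOKEN_FILE
        if not token_path.exists() or token_path.read_bytes() != expected:
            return None
        new_token = secrets.token_bytes(32)
        token_path.write_bytes(new_token)   # rotate so copies go stale
        return new_token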

Some devices can be given a unique identifier, or a semi-unique one. For devices that have any form of serial number, this can be remembered and the trust level associated with it. Most devices at least report identifiers for their make and model. Trusting these would mean that once you trusted a keyboard, any keyboard of the same make and model would also be trusted. This is not super-secure, but it prevents generic attacks -- attacks would have to be aimed directly at you. To stop a device from pretending to be every type of keyboard until one is accepted, the connection of too many devices without a trust confirmation should lock out the port until a confirmation is given.
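The two pieces fit together naturally: a store of trusted identifiers plus a counter that locks the port on enumeration floods. A minimal sketch, with the threshold chosen arbitrarily for illustration:

    MAX_UNCONFIRMED = 3   # hypothetical threshold before the port locks

    class PortGuard:
        """Track trust by identifier and lock the port on enumeration floods."""
        def __init__(self):
            self.trusted_ids = set()   # serials or make/model strings
            self.unconfirmed = 0
            self.locked = False

        def on_plug(self, device_id: str) -> str:
            if self.locked:
                return "locked: user confirmation required"
            if device_id in self.trusted_ids:
                return "trusted"
            self.unconfirmed += 1
            if self.unconfirmed > MAX_UNCONFIRMED:
                self.locked = True   # stop a device cycling through IDs
                return "locked: too many unknown devices"
            return "prompt user"

        def user_confirms(self, device_id: str):
            self.trusted_ids.add(device_id)
            self.unconfirmed = 0
            self.locked = False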

The protocol for verification should be simple enough to fit on an inexpensive chip that can be mass-produced. In particular, the industry could mass-produce small USB pass-through authentication devices costing no more than $1. These could be stuck onto the plugs of old devices to make it possible for them to authenticate. They could look like hubs, or be truly pass-through.

All of this would make USB attacks harder. In the other direction, I believe, as I have written before, that there is value in creating classes of untrusted or less-trusted hardware. For example, an untrusted USB drive might be marked so that executable code can't be loaded from it, only classes of files and archives the OS understands well. And an untrusted keyboard would only be allowed to type in boxes that declare they accept input from an untrusted keyboard. You could write the text of emails with the untrusted keyboard, but not enter URLs into the URL bar or passwords into password boxes. (Browser forms would have to indicate that an untrusted keyboard could be used.) In all cases, a mini text editor would be available for use with the untrusted keyboard, from which one could cut and paste, using a trusted device, into other boxes.
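The policy check itself is simple. A sketch, where the field names and opt-in sets are hypothetical stand-ins for whatever markings forms and the OS would actually use:

    # Hypothetical field policy: each input box declares whether it will
    # accept keystrokes from an untrusted keyboard.
    UNTRUSTED_OK = {"email_body", "scratch_editor"}   # free text is fine
    UNTRUSTED_BLOCKED = {"url_bar", "password", "terminal"}

    def accept_keystroke(field: str, keyboard_trusted: bool) -> bool:
        """Drop input from untrusted keyboards except in fields that opt in."""
        if keyboard_trusted:
            return True
        return field in UNTRUSTED_OK

    assert accept_keystroke("email_body", keyboard_trusted=False)
    assert not accept_keystroke("password", keyboard_trusted=False)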

A computer that as yet has no trusted devices of a given class would have to trust the first one plugged in. That is, if you have a new computer that has never had a keyboard, it has to trust its first keyboard unless there is another way to confirm trust when that first keyboard is plugged in. Fortunately, mobile devices all have built-in input hardware that can be trusted at manufacture, avoiding this issue. If a computer has lost all its input devices and needs a new one, you could either trust implicitly, or provide a pairing code to type on the new keyboard (this would not work for a mouse) to show you are really there. But this is only a risk on systems that normally have no input device at all.
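The pairing code works because only someone physically present can read the screen and retype the code. A minimal sketch:

    import secrets

    def make_pairing_code(length: int = 6) -> str:
        """Code shown on screen; the user retypes it on the new keyboard."""
        return "".join(secrets.choice("0123456789") for _ in range(length))

    def confirm_pairing(shown: str, typed: str) -> bool:
        """A match proves the new keyboard is under the user's control."""
        return secrets.compare_digest(shown, typed)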

For an even stronger level of trust, we might want to be able to encrypt the data going through. This stops the insertion of malicious hubs or other man-in-the-middle intercepts that might try to log keystrokes or other data. Encryption may not be practical in low-power devices that need to act as drives and send data very fast, but it would be fine for all low-speed devices.
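For a keyboard, an authenticated cipher with a sequence number as the nonce would also stop a hub from replaying or reordering keystrokes. A sketch using AES-GCM, assuming the key was shared during pairing (this requires the third-party Python `cryptography` package and is an illustration, not a wire-format proposal):

    # Requires: pip install cryptography
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # shared during pairing
    channel = AESGCM(key)

    def device_send(seq: int, keystroke: bytes) -> tuple[bytes, bytes]:
        """Encrypt one keystroke; the sequence number doubles as the nonce,
        so a malicious hub can't replay or reorder keystrokes."""
        nonce = seq.to_bytes(12, "big")
        return nonce, channel.encrypt(nonce, keystroke, None)

    def host_receive(nonce: bytes, ciphertext: bytes) -> bytes:
        return channel.decrypt(nonce, ciphertext, None)

    nonce, ct = device_send(1, b"a")
    assert host_receive(nonce, ct) == b"a"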

Of course, we should not trust our networks, even our home networks. Laptops and mobile devices constantly roam outside the home network, where they are not protected, and then come back inside, able to attack if trusted. Good security designers know this and design for it.

Yes, this adds some extra UI the first time you plug something in. But that's hopefully rare, and this is a big gaping hole in the security of most of our devices, because people are always plugging in USB drives, dongles and more.
