Volvo collision avoidance fails and other things that will happen again

Last week, Volvo was demoing some new collision avoidance features in their S60. I've talked about the S60 before; it surprised me by putting pedestrian detection into a car sooner than I expected. Unfortunately, in an extreme case of the demo disease known to all computer people, somebody made an error with the battery, and in front of a crowd of press the car smashed into the truck it was supposed to avoid. The Wired article links to a video.

Poor Volvo, having this happen in front of all the press. Of course, their system is meant to be used in human-driven cars, warning the driver and braking if the driver fails to act -- not in a self-driving vehicle. And they say that had there been a driver, there would have been an indication that the system was not operating.

While this mistake is the result of a lack of maturity in the technology, it is important to realize that as robocars are developed there will be crashes; some of the crashes will hurt people, and a few will quite probably kill people. It's a mistake to assume this won't happen, or not to plan for it. The public can be very harsh. Toyota's problems with their car controllers (if that's where the problems are -- Toyota claims they are not) have been a subject of ridicule for what was (and probably still is) one of the world's most respected brands. The public asks: if programmers can't program simple parts of today's cars, can they program a car that does all the driving?

There are two answers to that. First of all, they can and do program computerized parts of today's cars all the time, and by and large those systems have perfect safety records.

But secondly, no, they can't make a complete driving system perfectly safe, certainly not at first. It is a complex problem, and it will be a long time before the accident rate is zero. And while we wait, human drivers will kill millions.

Our modern society has always had a tough time with that trade-off. Of late we've come to demand perfect safety, though it is impossible. Few new products are allowed out if it is known that they will have any death rate due to their own flaws, even if those flaws are not known specifically but are known to be highly likely to exist in some fashion. American juries, faced with the minutes of a meeting where the company decided to "release the product, even though predictions show that bugs will kill X people," will punish the company harshly, even though the alternative was "don't release, and have human drivers kill 10X people." The 9X people who were saved will not be in the courtroom. This is one reason robocars may arise outside the USA first.

Of course, there might be cases the other way. A drunk who kills somebody when he could have taken a robocar might get a stiffer punishment. A corporation that had its employees drive when robotic systems were clearly superior might face a nasty judgement -- but that would require that it had been OK to have the cars on the road in the first place.

But however this plays out, developers must expect there will be bugs, and bugs with dire consequences. Nobody will want those bugs, and every injury will be tragic, but so is being too cautious about deployment. Can the USA figure out a way to make that happen?
