California Is Collecting the 2019 Robocar Disengagement Reports. It Should Stop


California is now collecting the 2019 "disengagement reports" for robocars, which always get lots of attention. But in fact, they measure the wrong thing -- it is the safety of testing that should be measured in the public interest, not the quality of the prototypes -- and they measure it badly, pushing companies to do things that may be unsafe in order to score well on a wrong and useless metric.

See my new article: California Is Collecting the 2019 Robocar Disengagement Reports. It Should Stop


Is it even possible to avoid perverse incentives in reporting safety quality? Aren't safety reporting requirements what got Musk in trouble, in the story where he started attacking a reporter on a factory floor? What are the alternative metrics we need?

I agree, it's very hard, and companies will always develop to the test if we are not careful.

Of course, there is one way to make sure the metric is precisely what we want: measure their safety record after the fact, clearly report it, and punish companies whose record is bad.

Now, the legal system already makes it illegal to crash cars into things or hurt people on the roads, and it has a very well developed system for handling that and demanding recompense. So well developed, in fact, that we all just buy insurance and let the insurance industry do the work.

We seem not ready to do it that way. We want to hold the robots to a much higher standard than the people. All we do with people is make them pass a very rudimentary test.

I have proposed a system where we simply make robot crashes cost 2x or 3x or more in the early years, compared to what the cost would have been if a negligent human driver had caused the incident. I think companies would embrace the simplicity and predictability of it, frankly, even though it has a high cost. But would the public embrace it?
