NHTSA ODI report exonerates Tesla in fatal crash
NHTSA has released the report from its Office of Defects Investigation on the fatal Tesla crash in Florida last spring. The report is remarkably favorable to Tesla -- so much so that even I am surprised. While I did not think Tesla would be found defective, this report seems to come from a different agency than the one that recently warned comma.ai that:
“It is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.”
The ODI report rules that Tesla properly considered driver distraction risks in its design of the product. It goes even further, noting that Teslas after the introduction of autopilot (including miles driven by those monitoring it properly, those who were distracted, and those who drove with it off) still had a decently lower accident rate per mile than Teslas before autopilot. In other words, while the autopilot without supervision is not good enough to drive on its own, the autopilot even with the occasionally lapsed supervision that is known to happen, combined with improved AEB and other ADAS functions, is still overall a safer system than not having the autopilot at all.
This will provide powerful support for companies developing autopilot-style systems, and for companies designing robocars who wish to use customer-supervised driving as a means to build up test miles and verification data. They are not putting their customers at risk as long as they do it as well as Tesla. This is interesting (and the report notes that evaluation of autopilot distraction is not a settled question) because it seems probable that people using the autopilot and ignoring the road to do email or watch movies are not safer than regular drivers. But the overall collection of distracted and watchful drivers is still a win.
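To put that in rough weighted-average terms (the symbols are illustrative, not figures from the report): if $p$ is the fraction of autopilot miles driven inattentively, the overall accident rate is

$$ r_{\text{overall}} = p\,r_{\text{inattentive}} + (1-p)\,r_{\text{attentive}}, $$

which can still come in below the unaided-driver rate even when $r_{\text{inattentive}}$ is worse than it, provided $p$ is small and $r_{\text{attentive}}$ is sufficiently good.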
That balance may change as companies introduce technologies that watch drivers and keep them out of the more dangerous, inattentive style of use. As the autopilots get better, inattention will become more and more tempting, after all.
Tesla stock did not seem to be moved by this report. But it was also not moved by the accident or other investigations -- it actually went on a broadly upward course for the two months following the announcement of the fatality.
The ODI's job is to judge whether a vehicle is defective, which is different from judging whether it is perfect. Perfection is not expected, especially from ADAS and similar systems. The finer points of whether drivers might over-trust the system are not firmly settled here. That can still be true without the car being defective and failing to perform as designed, or being designed negligently.
Comments
jamesdouma
Fri, 2017-01-20 12:21
Evaluations of safety should be data driven
When you bring up "people using the autopilot and ignoring the road to do email or watch movies" you are presuming that those people exist in sufficient numbers to be worth remarking on, which implies that they should appear in the data. Not only do they not appear in the data, the data seems to suggest the opposite: that drivers become safer in cars with autopilot. NHTSA numbers show a 40% reduction in overall airbag deployments for cars with autopilot installed, which is especially notable because autopilot miles appear to represent only about 1/6 of the miles logged in the relevant vehicles. So either autopilot is being used disproportionately in situations where airbag deployments are likely to occur, or drivers of vehicles with autopilot are behaving better even when they aren't using autopilot.
As you've pointed out in the past, autopilot is intended for use in situations that are generally at the safer end of the spectrum. Maybe that's true, but rural highways seem to have the highest death toll per mile (that's just the kind of road that Brown died on) and autopilot seems to be usable in that environment, so maybe that can account for the reductions. But the second case may also be a significant effect. I and other Tesla owners I've talked to describe ourselves as more cautious after having used autopilot. Personally I have little doubt that my own non-autopilot miles are driven a lot more conservatively than they were before I owned a Tesla. Using autopilot involves getting regular reminders that driving is dangerous and that you have to be responsible. The process of using it is in some ways like overseeing someone who is learning to drive, and that process makes you think about safety regularly and more consciously than you otherwise would.
In the end, it is very hard to predict the overall effect of introducing something that dramatically changes the driving experience into a diverse population of drivers who will change their behavior in response to a new and complex tool. When Google decided that imperfect human drivers could not be trusted with an imperfect driving aid, I believe they severely overestimated their own ability to judge, and it now seems they were in error. That error may have caused both them and the world a significant amount of harm.
brad
Sat, 2017-01-21 04:48
Anecdotes but many
I have not done a formal study, but almost every Tesla owner I have asked has said they do overtrust the autopilot, and do things like email when it is on.
This is consistent with the finding. Say you have an autopilot that will have one accident every 20,000 miles if it is not supervised. Not good -- humans on their own have an accident every 100,000 miles, Google concludes. (Not just airbag-triggering accidents, but accidents of any kind.)
Now say that people cheat with this autopilot not all the time, but 5% of the time. That's averaged over all people: some cheat 10% of the time, some zero, some close to 100%, though I doubt anybody does that. The other 95% of the miles are autopilot with a supervisor. That's good -- perhaps autopilot and supervisor have an accident every 200,000 miles -- twice as good as the human alone.
So in 4 million miles of autopilot operation, you will have 3,800,000 miles at the good rate, producing 19 accidents, plus 200,000 miles of cheating, producing 10 accidents, for a total of 29. Humans alone would have 40, so the overall record has improved, in a way similar to the claimed Tesla improvement!
So both things end up true: the autopilot, cheating included, produces a safer overall result, yet it is also 5 times more dangerous than human driving to cheat.
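Here is a quick sketch of that arithmetic in code (all the rates are the illustrative assumptions above, not measured data):

    # Blend of supervised and unsupervised ("cheating") autopilot miles.
    # All rates are illustrative assumptions, not measured data.
    total_miles = 4_000_000
    cheat_fraction = 0.05             # share of autopilot miles driven unsupervised
    unsupervised_rate = 1 / 20_000    # accidents per mile, unsupervised autopilot (assumed)
    supervised_rate = 1 / 200_000     # accidents per mile, supervised autopilot (assumed)
    human_rate = 1 / 100_000          # accidents per mile, unaided human (assumed)

    cheat_miles = total_miles * cheat_fraction    # 200,000
    watched_miles = total_miles - cheat_miles     # 3,800,000

    autopilot_accidents = watched_miles * supervised_rate + cheat_miles * unsupervised_rate
    human_accidents = total_miles * human_rate

    print(autopilot_accidents, human_accidents)   # prints 29.0 40.0 -- the blended result is safer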
jamesdouma
Sat, 2017-01-21 12:09
Difference of tone perhaps
In a nutshell, you are surmising that unsupervised autopilot is more dangerous than a human paying attention, that supervised autopilot is safer than a human paying attention, and that some small fraction of autopilot usage is unsupervised. You don't mention any halo effect on driver safety when they are not using autopilot, so let's say you believe that effect is negligible. You put numbers on these various percentages and observe that the stated results allow for the possibility that unsupervised autopilot is more dangerous than a human. And generally it seems you are defending a critical posture with respect to autopilot-like systems in public use.
This model is certainly plausible and is generally consistent with statements made by Tesla and the NHTSA about autopilot's performance, with one exception: you are not accounting for the fact that autopilot miles are only a fraction of total miles driven in these cars. If that fraction is sufficiently small then your model will not work. For instance, if autopilot miles are 40% of accident-normalized miles (a mile driven under conditions with the average likelihood of an accident) and autopilot's presence does not affect the accident rate when it is not turned on, then it must eliminate 100% of accidents under autopilot in order to get an overall reduction of 40% in accidents. In that scenario either misuse must be so small it can be ignored, or autopilot is negligibly inferior to humans even when unsupervised. Or possibly there is some other factor that we are not accounting for, but in any case your model breaks if autopilot miles are less than 40% of compatible fleet-wide accident-normalized miles.
I don't have any hard numbers for autopilot miles as a fraction of accident-normalized miles, but if we assume autopilot is used on miles that are no more dangerous than the average mile, and that autopilot is installed in half of the cars that can take it, then your model breaks. Under those conditions accident-normalized miles under autopilot would amount to only 30%, based on the numbers we can find in public. Autopilot must be having a halo effect on non-autopilot miles in that scenario. Note that these assumptions are probably conservative, as autopilot is probably used on safer-than-average miles and its penetration is probably over 50% in the compatible fleet.
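A small sketch of that constraint (the 40% headline figure is the one discussed above; the mile shares are the assumptions in this thread, not measured data):

    # If non-autopilot miles are unaffected, the overall reduction is
    # ap_share * reduction_during_autopilot_miles.
    overall_reduction = 0.40                 # headline figure discussed above

    for ap_share in (0.40, 0.30, 1 / 6):     # assumed shares of accident-normalized miles on autopilot
        needed = overall_reduction / ap_share
        verdict = "possible" if needed <= 1 else "impossible without a halo effect on other miles"
        print(f"autopilot share {ap_share:.0%}: needs {needed:.0%} fewer accidents on autopilot -> {verdict}")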
An autopilot halo effect could happen any number of ways. For one example: if a driver needs to send a text before they get home, maybe they turn autopilot on and then send it, or wait until they can turn autopilot on before sending it. Having autopilot on while sending a text is probably safer than having it off, so this would account for the disproportionate impact by migrating distracting activity away from non-autopilot miles and onto autopilot miles.
In any case, the 40% accident reduction from this NHTSA report, if we can take it as it appears to be presented, eliminates any reasonable objection to the use of autopilot-like systems by the public. No matter what kind of spreadsheet you build, it ends up being a large net reduction in accidents. If you can support an argument that disagrees with this statement, I'd very much appreciate it if you could share it with me.
brad
Sun, 2017-01-22 07:39
In and out of autopilot
Yes, my calculation above covers only the accident rate while autopilot is in use, and it does appear the ODI report is examining all miles. Without knowing what fraction of miles are driven on autopilot, it's not easy to work out which potential explanations apply; several factors may play a role.
More data will be of value here.
jamesdouma
Sun, 2017-01-22 16:14
Why is there no sunshine here? These are great results.
If criticism of Tesla’s autopilot is reduced to a concern about moral hazard, then the argument becomes indistinguishable from arguing that the HPV vaccine is bad because, even though it saves lives, it encourages objectionable behavior.
The ODI report shows a clear and frankly dramatic reduction in serious accidents in vehicles equipped with Autosteer. Even though we do not have access to the raw data to inform a more detailed analysis, we can have confidence in this view of the data because the ODI report itself draws this conclusion. The report was produced by an independent agency whose mandate is to search for potential defects, and which came to this conclusion after a lengthy and detailed investigation. To continue to dispute this position is to claim that not only Tesla and its representatives, but now also the NHTSA and its people, are either incompetent or lying.
No matter what odious side effects might be imagined, the one critically relevant metric is quite unambiguously in favor of the creation, distribution, and use of these systems. It’s time for critics in the self-driving vehicle community to get over their unsubstantiated fears and accept that these systems are a social benefit and should be encouraged. Google was wrong. Tesla was right. It’s that simple. The data is now out there.
Musk complained in Tesla’s 2015 third-quarter analyst call that media coverage of autopilot was inappropriately negative, that the effect was discouraging people from using these systems, and that by doing so the media were “killing people”. I think this criticism applies doubly to pundits in the self-driving vehicle community, because they have less claim to plausible ignorance. Given this ODI report, there are no longer grounds for reasonable doubt. Any further delay in the introduction of these systems is, statistically, killing people. And any motivation for such delay can no longer be excused as caution; it is now plainly bias.
P.S. If these rather pointed closing comments appear to be aimed at the author of this blog, please be assured that they are not. There are rather more vocal and more influential critics out there who have been, in a word, irresponsible.
brad
Mon, 2017-01-23 15:38
Getting more info
I do think the results are very positive and show a good approach to improving total safety. I still want to know how the safety results are attained.
The report indicates that the updates to autopilot also included improvements in AEB and other ADAS functions that apply when autopilot is not engaged. It is worth knowing just how much of the improved safety comes from that. Since other car makers have also reported double-digit improvements with these ADAS functions, we would expect to see them from Tesla too.
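As an illustration of why that decomposition matters (the percentages below are hypothetical, not from the report), a reduction from always-on AEB/FCW applies to every mile, while an autopilot-specific reduction applies only to the fraction of miles driven on autopilot:

    # Hypothetical decomposition of an overall accident reduction. All inputs
    # are assumptions for illustration, not figures from the report.
    aeb_cut = 0.20        # assumed reduction from always-on AEB/FCW, applied to every mile
    ap_share = 1 / 6      # assumed share of miles driven on autopilot
    ap_extra_cut = 0.50   # assumed further reduction during autopilot miles

    remaining = (1 - aeb_cut) * (1 - ap_share * ap_extra_cut)
    print(f"overall reduction: {1 - remaining:.0%}")   # about 27%, mostly from the always-on part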
I also suspect that the "autopilot distracted time" is quite a bit lower than 5%. While I have encountered people who say they sit doing email with autopilot on, I suspect that for most people it's the occasional text or brief fiddling with phones or controls that they do on autopilot, not watching movies. Still, you want to know when it finally gets safe enough to watch a movie, and how unsafe doing so is today, because we're pretty bad at judging that ourselves.