Should Tesla disable your Autopilot if you're not diligent? - and a survey of robocar validation
Executive Summary: A rundown of different approaches for validation of self-driving and driver assist systems, and a recommendation to Tesla and others to have countermeasures to detect drivers not watching the road, and permanently disable their Autopilot if they show a pattern of inattention.
The recent death of a man whose car was being driven by Tesla's "Autopilot" system has ignited debate over whether it was appropriate for Tesla to allow the system to be used as it was.
Tesla's autopilot is a driver assist system, and Tesla tells customers it must always be supervised by an alert driver ready to take the controls at any time. It is not a working self-driving car system: it is not rated for many driving conditions, and there are huge numbers of situations it is not designed to handle and can't handle. Tesla knows that, but the public, the press and Tesla customers forget it, and many Tesla users treat the autopilot like a real self-driving car system and don't pay attention to the road -- and Tesla is aware of that as well. The press has made the same mistake, regularly writing fanciful stories about how Tesla was ahead of Google and other teams.
Brown, the driver killed in the crash, was very likely one of those people, and if so, he paid for it with his life. In spite of all the warnings Tesla may give about the system, some users do get a sense of false security. There is debate over whether that means driver assist systems are a bad idea.
There have been partial self-driving systems that require supervision since the arrival of cruise control. Adaptive cruise control is even better, and other car companies have released autopilot-like systems which combine adaptive cruise control with lane-keeping and forward collision avoidance, which hits the brakes if you're about to rear-end another car. Mercedes has sold a "traffic jam assist" similar to the Tesla autopilot since 2014, though in the USA it only runs at low speeds. You can even go back to a Honda demo in 2005 of an autopilot-like system.
With cruise control, you might relax a bit, but you know you have to pay attention. You're steering, and for a long time even adaptive cruise controls did not slow down for stopped cars. The problem with Tesla's autopilot is that it was more comprehensive and better performing than earlier systems, and even though there were tons of things it could not handle, people started to trust it with their lives.
Tesla's plan can be viewed in several ways. One view is that Tesla was using customers as "beta testers," as guinea pigs for a primitive self-drive system which is not production ready, and that this is too much of a risk. Another is that Tesla built (and tested) a superior driver assist system with known and warned limitations, and customers should have listened to those warnings.
Neither is quite right. While Tesla has been clear in taking the latter stance, it has done so knowing that people will over-trust the system. And we must face the fact that it is not only the daring drivers who put themselves at risk; others on the road are also put at risk by the over-trusting drivers -- or perhaps by Tesla. What if the errant car had not gone under a truck, but had instead hit another car, or even plowed into a pedestrian when it careened off the road after the crash?
At the same time, Tesla's early deployment approach is a powerful tool for the development and quality assurance of self-drive systems. I have written before about how testing is the big unsolved problem in self-driving cars. Companies like Google have spent many millions to use a staff of paid drivers to test their cars for 1.6 million miles. This is massively expensive and time consuming, and even Google's money can't easily generate the billions of miles of testing that some feel might be needed. Human drivers will have about 12 fatalities in a billion miles, and we want our self-driving cars to do much better. Just how we'll get enough verification and testing done to bring this technology to the world is not a solved problem.
While we don't want a repeat of the events that led to this driver's death, there may be great value to the field in having fleets of volunteers help verify these vehicles, and if it can be done safely, the world probably needs it. But even setting aside volunteer testing of systems that are still being refined -- and they will remain in that state for many years to come -- there is still the question of whether systems that need human supervision and intervention can get so good that humans ignore the warnings and stop bothering to supervise them.
Here's a look at a variety of approaches to testing, verifying, refining and learning:
In Simulator
The only way many billions of miles will be done is in simulator, especially since you want to re-test every new software build as much as you can before deploying. In simulator you can test all sorts of risky situations you could never readily test in the real world. That's why, many years ago, I laid out a plan for an open source simulator, because I think this would be a great thing for everybody to contribute to. An open source simulator would soon feature every crazy situation anybody has ever seen or thought of, and it would make all cars safer.
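To make that concrete, here is a minimal, purely illustrative sketch of how contributed scenarios might be replayed as a regression suite against each new software build. None of these names or interfaces come from a real simulator; they only show the shape of the idea.

```python
# Purely illustrative sketch: a hypothetical format for contributed scenarios
# in a shared open-source simulator, replayed as a regression suite against
# each new software build. None of these names come from a real tool.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    road: str                      # map / track identifier
    actors: List[str]              # other vehicles, pedestrians, etc.
    passed: Callable               # predicate run against the simulation log

def run_regression(build, scenarios, simulate):
    """Replay every contributed scenario against a candidate software build."""
    failures = []
    for sc in scenarios:
        log = simulate(build, sc)  # hypothetical simulator entry point
        if not sc.passed(log):
            failures.append(sc.name)
    return failures

# Example contributed scenario: a truck turning left across the car's path.
crossing_truck = Scenario(
    name="left-turning truck",
    road="divided_highway_with_median_crossovers",
    actors=["tractor_trailer_turning_left"],
    passed=lambda log: not log.collision and log.min_gap_m > 2.0,
)
```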
But simulation is just simulation, and it only features what people have thought of. You need more.
On test tracks
The most conservative, but most expensive and limited, approach is a private test track. Almost everybody starts on a closed testing ground, because that's what you need when things are just getting going. Test tracks give you the confidence to eventually go out on the road -- but go out on the road you must. You can really only do a fraction of a percent of your testing on test tracks, even with robot slave cars to drive along with; you just won't encounter enough variation in terrain and driving situations, and you can only test a few cars at a time.
On the plus side, on test tracks you can use unmanned cars, reducing some costs, and you can run 24 hours/day.
There is a hybrid of simulator and test track called "Vehicle in the Loop" which puts the car on a dyno in a building while robots whiz around to provide relative motion. That requires less real estate and is safer, but automated test track operation is probably better at this point.
On the road with trained professional safety drivers
All companies eventually need to go out on real roads and deal with real situations. They do it by having paid professionals, usually with special training, who sit behind the wheel watching the vehicle and ready to take the controls at any sign of doubt. It is also common to have a second person who is monitoring the computer systems and sensor scans to make sure all is operating properly, and to tell the safety driver to take over if something is wrong. In fact, safety drivers are required for testing self-driving cars (but not autopilots/cruise software) by most of the state laws about self-driving cars.
Companies like Google tell their safety drivers to take control at any sign of doubt. They don't want them to let the car keep driving to see if it would have braked properly for the children crossing the road. Later, they can use the simulator to replay what the car would have done if the safety driver had not intervened. If the car would have done something wrong, it's a bug to be fixed at high priority.
Still, even trained safety drivers can make mistakes. Google has reported one accident where its system made a mistake: the system decided that a bus it was sharing a lane with would let it back into the traffic flow, but the bus did not. The safety driver didn't intervene because he also judged that the bus would yield.
In addition, when the systems are very good, safety drivers can also become complacent and not intervene as quickly as they should.
On the road with regular drivers but limited function
You could test a self-driving system by having it do only some of the driving. For example, you could make a system which steers but does not control the brakes. A driver who knows that she has to hit the brakes when approaching something is not going to stop paying attention. With this system you can note when the human hit the brakes, and see if the system also would have, and who was right. Chances are any surprise braking would be combined with taking the wheel. Of course you would also learn when the driver grabbed the wheel.
You could also offer only steering and leave both throttle and brakes to the driver. Or you could do mild braking -- the kind needed to maintain speed behind a car you're following -- but depend on the driver for harder braking, with hard braking also requiring taking of the wheel.
You can also do it in reverse -- the driver steers but the computer handles the speed. That's essentially adaptive cruise control, of course, and people like that, but again, with ACC you have the wheel and are unlikely to stop paying attention to the road. You will get a bit more relaxed, though.
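A minimal sketch of the brake-comparison variant described above, assuming a hypothetical interface to the candidate self-drive stack (this is not any vendor's actual code):

```python
# Sketch of limited-function testing: the human keeps the brakes, the system
# only proposes braking, and every disagreement is logged for later replay.
# The interfaces and thresholds are invented for illustration.
def compare_braking(frame, shadow_system, log):
    """frame: one timestep of driving data; shadow_system: candidate stack."""
    proposed = shadow_system.desired_brake(frame)   # 0.0 to 1.0, hypothetical
    actual = frame.driver_brake                     # 0.0 to 1.0
    if actual > 0.3 and proposed < 0.05:
        log.append(("driver_braked_system_would_not", frame.timestamp))
    elif proposed > 0.3 and actual < 0.05:
        log.append(("system_wanted_to_brake_driver_did_not", frame.timestamp))
```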
Supervised autopilot with "countermeasures"
Some cars with autopilots or even lanekeeping take steps to make sure the driver is paying attention. One common approach is to make sure they touch the steering wheel every so often. Tesla does not require regular touches as some cars do, but does in certain situations ask for a touch or contact. In some cases, though, drivers who want to over-trust the system have found tricks to disable the regular touch countermeasure.
A more advanced countermeasure is a camera looking at the driver's eyes. This camera can notice if the driver takes his eyes from the road for too long, and start beeping, or even slowing the car down to eventually pull off the road.
Of course, those who want to over-trust the system hate the countermeasures and find they diminish the value of the product. And most people who over-trust do get away with it, if the system is good, which reinforces the over-trust.
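As a rough sketch, a camera-based countermeasure might escalate something like this. The timing thresholds are invented; a real system would tune them.

```python
# Escalating response to a driver looking away from the road.
# All thresholds are made up for illustration.
def gaze_countermeasure(seconds_eyes_off_road, car):
    if seconds_eyes_off_road > 2.0:
        car.chime()                          # gentle reminder
    if seconds_eyes_off_road > 5.0:
        car.flash_warning("Eyes on the road")
    if seconds_eyes_off_road > 10.0:
        car.slow_down_and_seek_safe_stop()   # last resort
```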
Disable the autopilot if it is not used diligently
One approach not yet tried in privately owned cars would be countermeasures which punish the driver who does not properly supervise. If a countermeasure alert goes off too often, for example, it could mean that the autopilot function is disabled on that car for some period of time, or forever. People who like the function would stay more diligent, afraid of losing it. They would have to be warned that this was possible.
Drivers who lose the function may or may not get a partial refund if they paid for it. Because of this risk, you would not want people who borrowed a car to be able to use the function without you present, which might mean having a special key used only by the owner which enables the autopilot, and another key to give to family, kids or friends.
Of course, for many, this would diminish the value of the autopilot because they were hoping to not have to be so diligent with it. For now, it may be wisest to diminish the value for them.
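A sketch of such a "lose it if you misuse it" policy, with made-up strike counts and lockout periods:

```python
# Each countermeasure alert is a strike against the (keyed) owner; strikes
# escalate from warnings to temporary lockouts to permanent loss of the
# autopilot. All numbers here are invented for illustration.
from datetime import timedelta

def lockout_for(strikes):
    """How long the autopilot is disabled after the given number of strikes."""
    if strikes <= 2:
        return timedelta(0)          # warnings only
    if strikes <= 4:
        return timedelta(days=7)     # short lockout
    if strikes <= 6:
        return timedelta(days=90)    # long lockout
    return None                      # None = disabled permanently
```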
Deliberate "fake failures" countermeasure
Cars could even do a special countermeasure, namely a brief deliberate (but safe) mistake made by the system. If the driver corrects it, all is good, but if they don't, the system corrects it and sounds the alert and disables autopilot for a while -- and eventually permanently.
The mistakes must of course be safe, which is challenging. One easy example would be drifting out of the lane when there is nobody nearby in the adjacent lane. The car would correct when it got to the line (extra safe mode) or after getting a few inches into the unoccupied adjacent lane.
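Putting the lane-drift version of this idea into a sketch (the sensor interfaces and thresholds are invented, and the test only runs when the adjacent lane is verifiably empty):

```python
# Sketch of a deliberate, safe "fake failure": drift toward the lane line and
# see whether the driver corrects it. If not, the car corrects itself, sounds
# the countermeasure alert, and the event counts toward an eventual lockout.
def maybe_run_drift_test(car, sensors, strikes):
    if not sensors.adjacent_lane_clear(min_gap_s=4.0):   # hypothetical check
        return strikes                                   # never test unless safe
    car.drift_toward_lane_line(rate_cm_per_s=5)
    if car.driver_corrected(within_s=3.0):
        car.recenter()
        return strikes                                   # driver passed
    car.recenter()                                       # system fixes its own "error"
    car.sound_countermeasure_alert()
    return strikes + 1                                   # failed check
```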
This would keep drivers trained to watch for steering errors, but you also want to have them watch for braking and speed errors as well. This presents some risk as you must get "too close" to a car ahead of you. To do that safely, the normal following distance would need to be expanded, and the car would every so often crowd to a closer, but still safe distance. It would also be sure it could brake (perhaps harder than is comfortable) if the car in front were to change speed.
The problem is that the legal definition of too close -- namely a 2 second headway -- is a much wider gap than most drivers leave, and if you leave a 2 second gap, other drivers will regularly move into it when traffic gets thick. Most ACCs let you set the gap from 1 to 2.5 seconds, and sometimes even less. You might let the user set a headway, and train them to know what is "too close" and demands an intervention.
To do this well, the car might need better sensors than a mere autopilot demands. For example, Tesla's rear sensors are perhaps not good enough to reliably spot a car overtaking you at very high speed, so a fully automated lane change is not perfectly safe. However, in time the sensors will be good enough for that. The combination of radar and blindspot-tracking ultrasonics is probably good enough to know it's safe to nudge a few inches into the other lane to test the driver.
This method can also be used to test the quality of professional safety drivers, who should be taken off the job and retrained if they fail it.
Rather than fake failures, you could just have a heads-up display which displays "intervene now" in the windshield for a second or two. If you don't take over -- beep -- you failed. That's a safe way to do this, but might well be gamed.
Not letting everybody try out the autopilot
A further alternative would be to consider that systems in Tesla's class -- which is to say driver assist systems which can't do everything but do so much that they will lull some people into misusing them -- should perhaps only be offered to people who can demonstrate they will use them responsibly, perhaps by undergoing some training or a test, and of course losing the function if they set off the countermeasure alarm too often.
Make customers responsible
Tesla used some countermeasures, but mostly took the path of stressing to the customer that the system was not a self-drive system and drivers must supervise it. Generally, letting people be free to take their own risks is the norm in free societies. After all, you can crash a car in 1,000 ways if you don't pay attention, and people are, and should be, responsible for their own behaviour.
Except, of course, for the issue of putting others on the road at risk. But is that an absolute? Clearly not, since lots of car features enable risky behaviour on the part of drivers which puts others at risk. Any high performance sports car (including Tesla's "Ludicrous" mode) with the ability to drive 200 mph presents potential risk to other road users as well as the driver. The same is true of distractions in the car, such as radios, phone interfaces and more. Still, we want to hold risk to others to a higher, though not absolute, standard.
Just watch
You can get some value from having a system which requires the driver to drive completely manually, but compares what the self-drive system would have done to that, and identifies any important differences. This is easy to do and as safe as other manual driving -- actually safer because you can implement very good ADAS in such a system -- but the hard part is pulling out useful information.
The self-drive system will differ from a human driver quite a lot. You need automated tools to pull out the truly important differences and have them considered and re-run in simulator by a test engineer. To make this workable, you need good tools to only bother the test engineer with worthwhile situations. If the driver brakes suddenly or swerves suddenly and the self-drive system did not want to, that's probably useful information.
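A minimal sketch of the kind of automated filter this implies, assuming a shadow-mode planner and invented thresholds; only large disagreements get queued for a test engineer:

```python
# "Just watch": the self-drive stack runs in shadow mode while the human
# drives; only significant disagreements are saved for simulator replay.
def flag_divergences(frames, shadow_system, review_queue):
    for f in frames:
        plan = shadow_system.plan(f)                   # what the system would do
        human_hard_brake = f.driver_decel_g > 0.4 and plan.decel_g < 0.1
        human_hard_swerve = abs(f.driver_steer_rate) > 0.5 and abs(plan.steer_rate) < 0.1
        system_emergency = plan.decel_g > 0.5 and f.driver_decel_g < 0.1
        if human_hard_brake or human_hard_swerve or system_emergency:
            review_queue.append((f.timestamp, f.snapshot()))   # for an engineer
```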
This requires a car with a full suite of self-drive sensors, as well as possibly a set of all-around cameras for the test engineer to study. The problem is that this is expensive, and since it's not giving the car owner much benefit, the car owner is unlikely to pay for it. So you could deploy that to a subset of your cars if you have a lot of cars on the road.
Watching is still always valuable, no matter what, and also offers training data for neural networks and updates to map data, so you want to do it in as many cars as you can.
Real beta tests
Tesla would probably assert it was not "testing" its autopilot on customers. Tesla, and all the other makers of similar products, would say they believed their products to be as safe as other such functions, as long as they were properly supervised by an alert driver. And their current safety record actually backs up that this is broadly true.
Some smaller startups may actually desire to release tools before they are ready, actively recruiting customers as true beta testers. This is particularly true for small companies who have thought about doing retrofit of existing cars. Larger companies want to test the specific configuration that will go on the road, and thus don't want to do retrofit -- with retrofit, a customer car will go out on the road in a configuration that's never been tried before.
This is the most risky strategy, and as such the beta testing customers must be held to a high standard with good countermeasures. In fact, perhaps they should be vetted before being recruited.
Testing, validation and training
While I've used the word testing above, there are many things you gain from on-road operation. Tesla would say they were not testing their system -- they tested it before deployment. It is intended only for supervised operation and was judged safe in that situation, like cruise controls or lanekeeping. On the other hand, lots of supervised operation does validate a system and also helps you learn things about it which could be improved or changed, as well as things about the road. Most importantly, it informs you of new situations you did not anticipate.
That is the value of this approach, with diligent supervisors. You can keep improving an autopilot system and watch how well it does. Some day you will find that it doesn't need many interventions any more. Once it gets to 200,000 miles or more per intervention, you have well surpassed the average human driver. Now you can claim to have a self-driving system, and perhaps tell users they can stop supervising. This is the evolutionary approach the car makers intend, while companies like Google seek to go directly to the full self-driving car.
The approaches are so different that the car companies made sure there was a carve-out in many of the laws that were passed to regulate self-driving cars. Cars that need supervision, that can't operate without it, are exempted from the self-driving car laws. They are viewed by the law as just plain old cars with driver assist.
Conclusion
I think that for Tesla and the other companies marketing high-end driver assist autopilots that require supervision, the following course might be best:
- Have a sensor package able to reliably detect that it is safe to wander a short distance into adjacent lanes
- Have sufficient countermeasures to assure the driver is paying attention, such as an eye tracking camera
- Consider having a "strange event" countermeasure where the vehicle drifts a bit out of its lane (when safe), and if the driver fails to correct it, the car does so and signals a countermeasure alert.
- Too many countermeasure alerts and the driver assist function is disabled long term, with or without a refund, as indicated in the contract.
Comments
Stuart Lynne
Tue, 2016-07-05 10:56
What if it had not been a Tesla?
Lost in all of the buzz about an autonomous car killing its passenger is the obvious question: would a reasonably attentive human driver in a similar car (or a Tesla without autopilot engaged), travelling at the same speed, have been able to brake enough to survive this crash?
It would appear from all reports that the truck driver was 100% to blame for turning left when unsafe to do so.
It would also appear that lack of traffic lights at an intersection like this on a four lane highway shows a certain lack of concern for safety by the local authority.
brad
Tue, 2016-07-05 13:57
Unknown
There is one accusation that the Tesla was speeding at a very serious rate, in which case the truck driver is not at fault -- we will see when the results of the logs come out.
The driver and car applied zero braking. So yes, even a speeding but attentive human would have been hitting the brakes to reduce the severity here. Or steering around the truck. Or (and this is much harder for somebody to do) steering for the back wheels of the truck. Or even potentially ducking, though I suspect that even if you could do that, serious injury or death would still be likely. I don't see many people having the smarts to duck or to steer for the wheels of the truck.
James
Thu, 2016-07-07 14:02
truck driver fault
It seems like you're working hard to cast doubt on autopilot here. The truck made what appears to be an illegal left turn across the path of an oncoming car (not yielding in this situation is illegal). This information appears in the police report and even barring that it's pretty clear from the circumstances of the accident that the truck driver was at a minimum negligent in making his turn. It's been reported that charges are pending. I don't think this is a situation where you can reasonably state that the truck driver is not at fault. Tesla has the logs and has already analyzed them. From their commentary it appears that they have video and they've stated that the cause of the accident was the truck making an ill-advised maneuver. Inattentiveness on the part of the Tesla driver may well have contributed to the severity of the accident, but given the circumstances it's entirely possible that no action on the part of the Tesla driver could have avoided an accident, or saved his life.
brad
Thu, 2016-07-07 15:48
Truck driver's role
Normally, you would think such an accident would be legally all on the truck driver. However, there are reports which cloud that interpretation, and we're waiting for data. In particular, it is quite odd that two months later the truck driver has not been charged. Normally, if somebody did a left turn when they did not have ROW and it caused a fatality, it would be a slam dunk. Something is different here.
Though even if it's a standard case of failure to yield ROW, the reality of driving is that people do this all the time, and they do it because the driver whose ROW they impede is going to slow down and yield to them -- they would have to be insane not to. But in this case, there is something else.
James
Tue, 2016-07-05 11:44
Attention to Autopilot
How much attention does the Autopilot system require? Clearly sleeping or reading a book are right out, but what about sending a "quick" text message, or turning to talk to a passenger? Adding countermeasures to make sure the driver is paying attention seems silly and adds a lot of complexity to an already complex system. I say silly because you need to add an automated system to monitor the human as he monitors the automated system. If the system is so fallible that the human needs to maintain 100% attention, then why bother having it? These kinds of systems intrigue me, and they should continue to be developed and tested, but including them in regular consumer cars seems premature to me.
brad
Tue, 2016-07-05 13:54
Cruise control
Adaptive cruise control needs 100% attention, as you must steer. In theory if your ACC is perfect and the road ahead is straight and your car does not drift, you could look away for a little while, but generally it needs 100% attention. Yet people like it and would not ask why bother having it.
The problem is that we need so much testing that it's hard to figure out how to do it without some testing in consumer cars.
Tim
Wed, 2016-07-06 18:47
Attention to Autopilot
"How much attention" I think is the key question James.
I suspect that driver monitoring systems will be used to ensure the driver pays an adequate level of attention given the road conditions. While the machine is fallible and the human is fallible, we need a redundancy for safety. Even though aircraft can fly autonomously, there are still pilots.
For cars I believe there is an established time period for safe glance behaviour which says that glance time away from the road should be under 2 seconds.
That means, if you are driving (without auto-pilot) and fiddling with the navigation system in your car, you should always be looking back at the road every 2 seconds for safety. I believe this standard is used to ensure that vehicle interfaces do not have time-sensitive modalities that will frustrate the driver and make it harder to sustain their attention to the road scene.
Two seconds is of course a blunt hammer, but a starting point. As cars obtain more sophisticated sensing and become aware of road conditions - traffic, weather etc, I believe this period should become dynamic based on those conditions.
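A sketch of that kind of dynamic glance budget, with invented multipliers just to show the shape of the idea:

```python
# Illustrative only: stretch or shrink the ~2 second glance budget based on
# sensed conditions. The multipliers are invented for illustration.
def allowed_glance_away_seconds(speed_mph, traffic, weather_factor):
    budget = 2.0                                  # the blunt-hammer baseline
    if traffic == "light":
        budget *= 1.5
    elif traffic == "heavy":
        budget *= 0.7
    budget *= weather_factor                      # e.g. 0.6 in rain, 1.0 clear
    if speed_mph > 70:
        budget *= 0.8
    return max(budget, 1.0)
```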
With regards to auto-pilot systems in cars, while the systems are in their infancy I would expect the 2 second rule to be enforced and then gradually loosened based on validation data accumulated from fleets of cars over time.
I think even with a 2 second rule, an auto-pilot would be relaxing to drive. You can look at the road, do some thinking, listen to music, engage more in conversations with passengers, and maybe read some emails. I expect your blood pressure would go way down in traffic. However writing an email would become too difficult with a 2 second glance rule, and I believe this is a rough line in the sand for "how much attention" is enough to the road scene in autonomous cars.
My view anyway.
Tim
brad
Wed, 2016-07-06 19:10
2 seconds
Actually, people have done a fair bit of research on these questions for different projects. You can probably tolerate a bit more than 2 seconds of inattention -- if the driver chooses a suitable time. I think most of us look away for longer periods, but only after scoping out the situation and knowing it's a good time to do it. I.e., not in busy or chaotic traffic, etc.
One reason I talk about the idea of the car deliberately drifting to the edge of the lane is that you're asking the driver to detect one of the things that could actually happen, but it would take more than 2 seconds, I think.
Humans are masters of building a model of the world and driving on it. When I took my advanced driving course in order to be a Google safety driver, the instructor said, "A good driver does not have to check his blind spot in order to change lanes, he already knows if it's possible that a car could be there." (He still checks anyway.)
Also, and this is important for Tesla, they don't have a gaze tracking camera, so they would need to retrofit that as extra hardware which is costly and time consuming. Methods which work with existing hardware are much more practical.
Anonymous
Tue, 2016-07-05 19:51
Typo
"guinea pigs for a primitive self-drive system which is not production read," should be "not production ready,"
Anonymous
Wed, 2016-07-06 07:16
All situations
A key point here is that the car simply carried on: it carried on when the truck was in its path, and then it carried on after it had hit the truck. The point is that if a 'robo-car' does not recognise a problem it simply keeps going -- there is no reaction. As there are an almost infinite number of potential situations on the roads, and it is almost certainly not possible to program or even imagine every scenario, the question is then: how does a computer handle what it hasn't been programmed to handle?
brad
Wed, 2016-07-06 13:40
Carry on
It is not clear if the car just "carried on" after hitting the truck. I don't think it was still trying to drive; I don't know if it was braking or by how much. I would be shocked if the Tesla autopilot would keep trying to drive forward with the MobilEye camera destroyed, with impact signals present, with tons of ultrasonic proximity alerts (if still working), etc. A car at high speed will travel quite far on grass, even if braking hard. At the rumoured speed of 85 mph, it takes about 115 yards to stop with full hard braking on dry pavement after you slam the brake. 300 yards on grass seems quite expected for moderate braking or even freewheeling.
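(A rough back-of-envelope check of those distances, assuming roughly 0.7 g of hard braking on dry pavement and about 0.25 g of light braking on grass:)

```python
# d = v^2 / (2 * a), converted to yards. Deceleration values are assumptions.
G = 9.81
v = 85 * 0.44704                                  # 85 mph in m/s

def stop_distance_yards(decel_g):
    return (v ** 2) / (2 * decel_g * G) / 0.9144

print(round(stop_distance_yards(0.7)))            # ~115 yards on dry pavement
print(round(stop_distance_yards(0.25)))           # ~320 yards on grass
```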
This is not a robo-car, remember. It is an advanced cruise control with lanekeeping. It is not designed to handle all situations and requires supervision.
Anonymous
Thu, 2016-07-07 05:29
Unknown unknowns
I appreciate your response, but you haven't really addressed the question. The point is (and it is illustrated by this Tesla crash) that if an autonomous car doesn't recognise a dangerous situation it just carries on... so how do you respond to scenarios that you haven't programmed for?
brad
Thu, 2016-07-07 08:53
Not an autonomous car
The Tesla is not an autonomous car. It is a supervised lanekeeping cruise control. It responds to unknown scenarios by requiring there be a human being watching the road at all times, ready to take over immediately if there is trouble.
That's very different from a self-driving car, which does indeed need to handle (almost) everything that it will encounter in a safe manner.
Steve H
Thu, 2016-07-07 02:02
A game of risks
Tesla is operating at the NHTSA level 3 area where the car senses when conditions require the driver to retake control. Google considered this level too dangerous and are trying to bypass it by going straight to fully autonomous. Maybe Google have made the right call.
Much as I admire Elon Musk, he is a risk taker. I hope that no further fatal accidents occur in the immediate future. A couple more tragedies like this and the long term development of driverless cars could be seriously affected by the response.
brad
Thu, 2016-07-07 08:57
Not "level 3"
The NHTSA levels are badly thought out, but if you wish to use them, the Tesla was working on highways as level 2, a supervised ADAS car. Several other companies have similar products, but Tesla's is one of the better ones. The "level 3" is a standby driver car, which nobody currently sells or operates in that mode, and yes, Google decided not to go that way.
The one thing you could be right about is that if a lot of people confuse the supervised car (which has been available for over a decade, long before Tesla even existed) and a self-driving car, and there are more tragedies with such cars, it could make people mistakenly form bad opinions on self-driving cars. Self-driving cars will also have some accidents and fatalities of course, but not because a human failed to supervise. Those will actually be treated worse than this -- this one didn't even blip Tesla's stock.
James
Thu, 2016-07-07 14:59
Low quality debate
The quality of the debate on this topic is annoyingly indistinguishable from clickbait headlines and industrial competition sound-bites.
I haven't heard anyone making the argument that anything other than minimizing the human toll from traffic accidents is the top priority for these systems. The question then reduces to: what approach achieves that goal? It's all very well and good to say you're not going to introduce anything until you are confident that it will not cause harm, but in the case of a safety system the decision NOT to deploy it also brings harm. Of course, human psychology tends to ascribe blame to action much more easily than inaction, and the developers of these systems know that they aren't going to get grief from the public if they say that they were delaying their market entry to ensure safety. This environment creates a powerful bias on the part of both corporations and individuals to avoid deploying these systems until they can feel, and be seen as, blameless for whatever consequences follow. But for every day of delay, the world pays a heavy toll.
Google and the likeminded - including most of the auto majors - seem inclined to not release a system until they have solid data to show an unassailable advantage over human drivers under essentially all circumstances. By the time this proof comes it will be years, perhaps many years, after a less capable and less proven system could have been saving many, many lives and improving outcomes on a large scale.
So what is the better decision here? To wait until only fanatics would disagree on the benefits while watching the slaughter continue, or to go to market more aggressively and save as many people as you can while taking some lumps along the way? I have great admiration for people and institutions that buck the bureaucratic desire to avoid the appearance of responsibility for negative outcomes. I've heard the "but the public will reject it if it's not perfectly safe" argument and I think it's a specious justification for timidity that has taken on the aura of truth in the echo chamber of robot car debate.
So I'm going to be supporting Tesla's efforts here, because I think this is too important to be left solely to entities that have powerful incentives to put their own ass-covering ahead of the public good.
Anonymous
Fri, 2016-07-08 04:08
Safety
If you are looking purely from the safety aspect then you must also consider clamping down on or banning all other human activities that could be considered dangerous and may cause injury or death. If you wish to continue down that route then you will need to define exactly what activities could be deemed dangerous and may cause injury or death - good luck with that.
The argument that we must ban human-driven vehicles because they might be more dangerous than driverless cars has serious implications for personal freedom, privacy and choice; this is notwithstanding the point that testing / proving which was actually the safer would be extremely difficult and consume a huge amount of resources.
zooko
Thu, 2016-07-07 18:16
should anything be done to reduce this risk?
I don't see why anything at all should be done to reduce the risk of this sort of accident. Have Tesla sedans been involved in more fatalities than comparable models? Have they been involved in significantly more fatalities while in Autopilot mode than while out of Autopilot mode? Or are those numbers too small to be meaningful at this point?
These are the sorts of questions I expect you to have the exact answers to, Brad. ;-) Let's hear it.
There are about 1.25 million road traffic deaths per year, worldwide, which means in the two months since that accident occurred that killed Joshua Brown, about 208,300 people have died in human-driven car accidents.
Martin Nuss
Sun, 2016-07-17 11:58
Testing - Just watch
Brad Templeton, I think there is something important in your "just watch" section that is worth exploring further. You mention that there is no incentive for anyone to buy all the sensors (just for the benefit of gathering tons of incident and general driving data). But what if someone else who has a vested interest in that data pays for the sensors? I am thinking of HERE, which is now owned by major car manufacturers that could build more of these sensors into each new car -- paid for by the real-time road mapping data generated by all these cars. Another approach might be to target fleets that need more sophisticated driver and driver-behavior management systems, paid for by the fleet, which could also generate a lot of data about the differences between ADAS and driver behavior.
brad
Mon, 2016-07-18 12:50
Who pays
Yes, companies like Here (which is owned by car companies) and others are already gathering data, but I don't yet know of any that have deliberately added superior sensors to consumer cars. But I think it will happen.
Auke Hoekstra
Mon, 2016-08-01 02:01
Tesla illustrates that Robocars face a trolley problem
Hi Brad,
Professor Maarten Steinbuch (who just got some lectures from you) pointed me here while I was writing a blogpost on Tesla as an example innovator here: http://bit.ly/2aVD4Nk.
My thanks for all your wisdom but as I explain in the post I think this is basically a trolley problem. In the words of the NHTSA chief: how many people will die while we wait? And waiting will be what we do if we decide not to put out level 3 autonomous cars to users.
What is also important ethically is that people choose to do this, just as they choose to do dangerous things like driving a motorcycle or donating a kidney. So who are we to say that they should not take risks, knowing that by doing this they actually save the lives of others (by perfecting Autopilot) in the future?
Would love to hear your thoughts.
brad
Mon, 2016-08-01 02:36
Not a Trolley
A Trolley problem is a no-win situation. Usually something evil has happened to make it that way.
Indeed, there is the argument that letting Tesla take risks to test the autopilot saves lives in the long run. The counter argument is that the ends may not justify the means.
jamesdouma
Thu, 2017-01-19 18:06
Finally, some data on the safety of autopilot
So the NHTSA finally released their report. And while it doesn't have enough data to answer a lot of questions, it does provide enough data to answer the question of whether the Tesla system is, statistically, outperforming human drivers. And the answer seems to be yes, that it is. What's more, it seems to be performing well enough that it is significantly enhancing the safety of the vehicles which have it.
I pulled Tesla's shipped vehicle numbers and calculated total fleet miles for cars to get an idea of the size of the impact of having autopilot in terms of total accidents and, potentially, fatalities. NHTSA provided airbag deployment event rates for autopilot-capable cars both with and without autopilot installed. Those deployment rates (0.8/million and 1.3/million miles) don't tell us the impact per mile driven with autopilot, just the rates for cars which have it and which do not. However, the number the NHTSA does provide is potentially more valuable, because it addresses the total utility of having autopilot in a car versus not having it, including any impact on the general behavior of drivers. They particularly focus on cars which logged miles before having autopilot installed and after having it installed, which is also nice because the set of drivers is the same for both groups, eliminating the possibility that driver demographics are affecting the data. It also factors out the effect of AEB, since the vehicles in question had AEB both before and after autopilot was installed. Because Tesla shipped autopilot hardware for a year before providing the software via an OTA update, there are a lot of miles to work with: at least 300M miles before and 600M after for cars that shipped with the hardware before the software was available.
From these numbers we can conclude that, statistically, Tesla has avoided around 700 airbag deployments in 2016. Since there will be about one fatality per 100 deployments, that works out to 7 lives saved in the first year that the autopilot software has been available, not including pedestrians and cyclists of which Musk claims there have been at least a few. That might not sound like a big number, but it’s certainly a larger number than that of lives lost due to the presence of autopilot in vehicles. And for every one of those fatalities averted you are averting dozens of severe injuries and many millions of dollars in other losses.
As the cloud over autopilot dispels and drivers adopt it more heavily, as the fleet size continues to grow at a dramatic pace, as the system software continues to improve, and with the advent of even better safety from the greatly improved sensor suite of the new HW2 vehicles, the number of accidents avoided and the net savings in accidents and lives should greatly expand.
I think there’s enough information here to conclude that Tesla is doing the right thing by pursuing self driving systems via incremental ADAS extensions. By the time fully self driving systems become broadly available and can claim to be reducing serious accidents Tesla will have already avoided many thousands, perhaps many tens of thousands of serious accidents.
Musk has previously commented that Tesla believes autopilot is roughly twice as safe as a human driver. This data more or less supports that statement. He recently tweeted that the goal with the HW2 vehicles is a 10x improvement, and I hope that they achieve it. That would very clearly validate their approach.
brad
Thu, 2017-01-19 22:49
New ruling
Commentary on the new ruling is at this post