Enough with the Trolley problem, already
More and more often in mainstream articles about robocars, I am seeing variations of the classic 1960s "Trolley Problem." For example, this article on the Atlantic website is one of many. In the classic trolley problem, you see a trolley hurtling down the track, about to run over five people, and you can switch it to another track where it will kill one person. There are a number of variations, meant to examine our views on the morality and ethics of letting people die vs. actively participating in their deaths, or even deliberately killing them to save others.
Often this is mapped into the robocar world by considering a car which is forced to run over somebody, and has to choose who to run over. Choices suggested include deciding between:
- One person and two
- A child and an adult
- A person and a dog
- A person without right-of-way vs others who have it
- A deer vs. adding risk by swerving around it into the oncoming lane
- The occupant or owner of the car vs. a bystander on the street -- i.e., the car drives itself off a cliff with you in it to save others.
- The destruction of an empty car vs. injury to a person who should not be on the road, but is.
I don't want to pretend that this isn't a morbidly fascinating moral area, and it will indeed affect the law, liability and public perception. At some distant future point, programmers will have to weigh these scenarios in their designs. What I reject is the suggestion that this is anywhere high on the list of important issues and questions. It's high on the list of questions that make for interesting philosophy-class debate, but that's not the same as reality.
In reality, such choices are extremely rare. How often have you had to make such a decision, or heard of somebody making one? Ideal handling of such situations is difficult to decide, and there are many other, more pressing issues to decide first.
Secondly, in the rare situations where a human encounters such a moral dilemma, that person does not sit there and hold an inner philosophical dialogue about which is the most moral choice. Rather, they go with a quick gut reaction, based on their character and their past thinking about such situations -- or perhaps not well based on either, since the decision must be made quickly. A robot may be incapable of having a deep internal philosophical debate in the moment, and as such the robots will also make decisions based on their "gut," which is to say the way they were programmed, well in advance of the event. A survey on robohub showed that even humans, given time to think about it, are deeply divided both on what a car should do and on how easy the question is to answer.
The morbid focus on the trolley problem creates, somewhat ironically, a meta-trolley problem. If people (especially lawyers advising companies, or lawmakers) start expressing the view that "we can't deploy this technology until we have a satisfactory answer to this quandary," then they face a hard reality: if the technology is indeed life-saving, people who could have been saved will die through that advised inaction, all in order to be sure of saving the right people in very rare, complex situations. Of course, the trolley problem itself speaks mostly to how the difference between "failure to save" and "overt action" shapes our views of the ethics of harm.
It turns out the problem has a simple answer, which is highly likely to be the one taken. In almost every situation of this sort, the law already specifies who has the right-of-way and who doesn't. The vehicles will be programmed to follow the law, which means that when presented with a choice between hitting something in their right-of-way and hitting something outside it, the car will obey the law and stay in its right-of-way. The law says this even if it's three people jaywalking vs. one driver in the oncoming lane. If people don't like that, they should follow the process to change the law. This sort of question is actually one of the rare ones where it makes sense for policymakers, not vendors, to decide the answer.
I suspect companies will make very conservative decisions here, as advised by their lawyers, and they will mostly base them on the rules of the road. If there's a risk of having to hit somebody who actually has the right-of-way, the teams will look for a way to avoid it. They won't go around a blind corner so fast that they could hit a slow car or cyclist. (Humans round blind corners too fast all the time, and usually get away with it.) They won't swerve into oncoming lanes, even ones that appear to be empty, because society will heavily punish a car that deliberately leaves its right-of-way and ends up hurting somebody. If society wants a different result here, it will need to clarify the rules. The hard fact of the liability system is that a car facing five jaywalking pedestrians that swerves into the oncoming lane and hits a solo driver who was properly in her lane will face huge liability for having left its lane, while if it hits the surprise jaywalkers, the liability is likely to be much less, or even zero, due to their personal responsibility. The programmers normally won't be making that decision; the law already makes it. Where they find cases in which the law and precedent offer no guidance, they will probably take the conservative path, and push for the law to provide that guidance. The situations will be so rare, however, that the reasonable judgement is not to wait for an answer.
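To make this concrete, here is a minimal sketch of the kind of "law first, harm second" priority rule I'm describing. Everything in it -- the CandidatePath fields, the confidence discount -- is a hypothetical illustration, not any vendor's actual logic.

```python
from dataclasses import dataclass

@dataclass
class CandidatePath:
    in_right_of_way: bool    # stays within the car's legal path
    clear_confidence: float  # perception confidence the path is clear (0..1)
    expected_harm: float     # estimated harm if this path is taken

def choose_emergency_path(paths: list[CandidatePath]) -> CandidatePath:
    # 1. Prefer lawful options: never leave the right-of-way if any
    #    in-right-of-way path exists (the liability logic above).
    lawful = [p for p in paths if p.in_right_of_way] or paths
    # 2. Among those, minimize expected harm, discounting paths the
    #    sensors are less confident about (e.g. the sidewalk).
    return min(lawful,
               key=lambda p: p.expected_harm / max(p.clear_confidence, 0.01))
```

Note that the legal test in this sketch is lexical, not a weighted tradeoff: no amount of expected harm in the lawful set sends the car out of its right-of-way, which is exactly the conservative behaviour the liability system rewards.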
Real human driving includes a lot of breaking the law. There is speeding, of course. There's aggressively taking your share in merges, 4-way stops and 3-point turns, and a whole lot more. Over time, the law should evolve to deal with these questions, and to make it possible for the cars to compete on an equal footing with human drivers.
Swerving is particularly troublesome as an answer, because the cars are not designed to drive on the sidewalk, the shoulder or in the oncoming lane. Oh, they will have some effort put into that, but these "you should not be doing this" situations will never get the care and testing that ordinary driving in your proper right-of-way will get. As such, while the vehicles will have very good confidence detecting obstacles in the places they should go, they will be far less sure about their perception of obstacles where they shouldn't legally go. A car won't be as good at identifying pedestrians on the sidewalk, because it should normally never drive on the sidewalk; it will instead be very good at identifying pedestrians in crosswalks or on the road. Faced with the option of avoiding something by swerving onto the sidewalk, programmers will have to consider that the car can't be as confident that the illegal move is safe, even if the sidewalk is in fact perfectly clear to the human eye. (Humans are general-purpose perception systems and can identify things on the sidewalk as readily as they can spot them on the road.)
It's also asking a lot more to have the cars be able to identify subtleties about pedestrians near the road. If you decide a child should be spared over an adult, you're asking the car to be able to tell children from adults, children from dwarves, tall children from short adults -- all to solve this almost-never-happens problem. This is no small ask, since without this requirement the vehicles don't even have to tell a dog from a crawling baby -- they just know they should not run over anything roughly shaped like that.
We also have to understand that humans have so many accidents that, as a society, we've come to accept them as a fact of driving, and have built a giant insurance system to arrange financial compensation for the huge volume of torts created. If we tried to resolve every car accident in the courts instead of through insurance, we would vastly increase the cost of accidents. In some places, governments have moved to no-fault claim laws because they realize that battling over something that happens so often is counterproductive, especially when, from the standpoint of the insurers, it changes nothing to tweak which insurance company pays on a case-by-case basis. New Zealand went so far as to eliminate liability in accidents entirely, since in all cases the government health or auto insurance pays every bill, funded by taxes. (This does not stop people from having to fight the Accident Compensation Crown Corporation to get their claims approved, however.)
While the insurance industry's total size will dwindle if robocars reduce accident rates, there are still lots of insurance programs out there that handle much smaller risks just fine, so I don't believe insurance is going away as a solution to this problem, even if it gets smaller.
So are there no ethical issues?
The trolley problem fascinates us, but it's more interesting than it is real. There are real ethical questions (covered in other articles here) which need to be dealt with today. Many of them derive from the fact that human drivers violate the strict rules all the time, to the point that in many places it's impractical or even impossible to drive strictly by the book, or even by highly conservative defensive driving. Cars must assert their rights at 4-way stops, speed, force their turn at merges and sometimes cross the double-yellow line to get around obstacles. Figuring out how to get law-bound programmers to make this work is an interesting challenge.
Comments
Randy
Fri, 2013-10-11 10:44
Re: Trolley Problem
I've been dreaming and thinking about robocars since I was 12 (way too many years ago) and this occurred to me very quickly. What I didn't realize until now is that it's not a problem that we really need to worry about, precisely because it is so very rare.
Robocars will save so many more lives *every day* that it would be foolish to delay their adoption until after programmers can solve all of these moral scenarios.
In other words: I agree!
Peace,
Randy
Dean
Sun, 2013-11-17 20:52
robocars vs bicycles
Just have the robocars drive in the bike lanes, if there are any. As any cyclist knows, that way it's not a problem if they run over someone who's in the way. I have the missing teeth and broken ribs to testify to this myself. Unusually for headlines, the answer to "Is it OK to Kill Cyclists?" seems to be "yes" -- according to an op-ed in this week's NY Times, anyway. See http://www.nytimes.com/2013/11/10/opinion/sunday/is-it-ok-to-kill-cyclists.html?pagewanted=all&_r=0
Anonymous
Tue, 2015-10-13 18:38
The answer I'd prefer to
The answer I'd prefer to see: robocars, like human-driven cars, will drive slowly and carefully when there are pedestrians on the sidewalk, and will be prepared to stop quickly if a pedestrian suddenly enters the road. Robocars will very rarely have to make a choice about what to hit because they will just brake.
brad
Tue, 2015-10-13 18:45
They will be cautious
Especially when doing unmanned repositioning or delivery, when nobody is in a hurry. But passengers inside are often in a hurry, and the cars following behind are in a hurry, which presents an issue. You can't slow down just because pedestrians are on the sidewalk -- not and be a productive citizen of the roads. You can slow a little if people are walking toward a crosswalk or are otherwise on a vector to cross the road, but a lot of jerky slowing would make people unwilling to ride in the vehicle.
The vehicles will always drive "carefully." The only question is at what speed, because that does indeed dictate the stopping distance and the ability to swerve. (Your tires can apply only a little under 1g of deceleration, and braking and steering forces share that budget.) There have been interesting experiments with other ways to stop even faster, such as a vacuum plate that shoots onto the road, but stopping at greater than 1g is an issue if you have passengers facing forward. Or what if they are not belted in?
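For a sense of the numbers behind that 1g budget, here is a small sketch of the physics: the "friction circle" shared between braking and steering, and the stopping distance it implies. The friction coefficient is an assumed dry-asphalt value, not a measured figure.

```python
import math

MU = 0.9   # assumed tire-road friction coefficient (dry asphalt)
G = 9.81   # gravitational acceleration, m/s^2

def stopping_distance(speed_ms: float) -> float:
    """Distance to stop at maximum braking: v^2 / (2 * mu * g)."""
    return speed_ms ** 2 / (2 * MU * G)

def max_braking_while_turning(lateral_accel: float) -> float:
    """Friction circle: sqrt(a_brake^2 + a_lat^2) <= mu * g.

    Any lateral (swerving) acceleration reduces the braking available.
    """
    budget = MU * G
    return math.sqrt(max(budget ** 2 - lateral_accel ** 2, 0.0))

# At 50 km/h (~13.9 m/s) a straight-line stop needs about 11 m of road,
# while swerving at 0.5g leaves only about 0.75g of braking capacity.
print(round(stopping_distance(50 / 3.6), 1))             # ~10.9
print(round(max_braking_while_turning(0.5 * G) / G, 2))  # ~0.75
```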
Jasper
Sat, 2018-02-17 08:19
Wait.. there are unbelted
Wait.. there are unbelted passengers and the vehicle is moving? How would that occur?
Saverio
Wed, 2015-10-28 03:51
Decisions are based on probabilities
I don't agree that (simply stated) following the law will automatically solve these complex decision problems.
For each decision (turn, brake, maintain speed, ...) there are many possible outcomes, with different probabilities.
Therefore if a person jumps into my lane, I have to decide whether to brake, and how violently. The person was not supposed to be there, and has no right of way.
Braking, even violently, is usually safe. However, there is some probability of injuring the driver, losing control of the car, etc.
If you assume a worst-case approach, you would probably brake just a bit and prepare for the impact. However, this is unacceptable, even if the person was not supposed to be there.
Any other decision requires that you weigh a tradeoff, and so falls into the trolley problem.
brad
Thu, 2015-10-29 06:39
Not really
You would never brake too hard -- in fact all modern cars include anti-lock brakes that stop you from braking so hard that you skid, even with human drivers.
The correct thing to do -- which the computer knows -- is to brake as hard as can be done without skidding, and do any swerving at the very last second, if you are able to swerve. You may not be able to swerve safely, in which case you won't. Of course, if an impact is pending, you will do what you can to minimize it.
I don't say the law solves every possible problem, but it does dictate what to do in most of them.
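A hedged sketch of that sequencing, building on the friction-circle budget above; the predicates here are hypothetical stand-ins for real perception and planning queries, not any actual system's API.

```python
def emergency_maneuver(can_stop_in_time: bool,
                       swerve_path_verified_clear: bool) -> list[str]:
    """Brake first; swerve only if braking won't suffice and it's safe."""
    # Maximum non-skidding braking is always applied; ABS-style control
    # makes this safe regardless of what else is decided.
    actions = ["brake_at_traction_limit"]
    if not can_stop_in_time and swerve_path_verified_clear:
        # Swerve only at the last moment, and only onto a path the
        # sensors have positively verified as clear and legal to use.
        actions.append("swerve")
    return actions
```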
Magnus
Tue, 2015-11-03 00:47
Stress and time management
One other aspect of this is that self-driving cars don't get stressed. And keeping the passengers calm will depend on the experience on the road, and on the car being designed in a way that allows the passengers to e.g. work or watch a movie. The thought came to me when reading "going slow around corners" above. But if we can build that experience well enough, you don't need to move or travel as often, and so we also lower the need for transport.
So, the future is safer, slower and not so crowded. I kind of like it!
Brooks
Wed, 2015-12-02 19:46
list of questions that are interesting
Saying there are far bigger problems and then dismissing the issue with a wave of the hand does not in fact make the issue go away. For the cars to drive themselves, they must be programmed to respond in some manner. I do believe the trolley problem is misunderstood by many, but it is not unsolvable.
The answer is simple. You never kill a human who is not in the path before the humans who are in the path. In other words, you never save the workers who did not check the tracks by murdering an innocent person who made sure the tracks were properly set. It is their fate to die by the train; it doesn't matter if it's 2 or 1,000,000 of them. Stay off the tracks. The reason is simple: you and I could not live in a society in which our lives were not respected. How does this matter? Well, imagine you walk into a hospital to see your ailing mother, and the doctors notice you're the perfect candidate to save 2 to 1,000,000 people. You are promptly drugged and cut into little life-saving pieces, ending your wonderful life. Those of us who have really thought about these things also realize there is no mythical heaven awaiting us, which means this life is our one shot. Our life is the most precious thing we have, and no society would survive with a mentality under which we could never be safe going to the doctor. We'd have to arm and protect ourselves against it happening. We'd have to murder any ne'er-do-wells who might push a fat man to kill us rather than let nature take its course.
That is how I want these cars programmed. If my loved ones are killed because they're in the path, I can live with that. If your programming murders my loved ones who were safe, well, see the above for how I feel about a world in which we cannot safely walk into a doctor's office.
brad
Sat, 2015-12-12 13:48
Must be programmed
The problem is that while your answer to the trolley problem is a common one, it is very far from a universal one. That's why, in fact, the right response is mostly to un-ask the question. In effect, the answer you propose -- stay in your lane and follow the law -- is what I say above. It is sort of an answer to the question, but mostly it un-asks it.
Again, that is because once you demand a sophisticated answer to moral questions, you enter a rathole that in many cases computers are not yet even capable of answering. Or worse, one where the sensors don't deliver enough information to allow the computer to answer the question, even if you could do the moral calculus.
Unclever title
Tue, 2018-07-10 12:17
Thanks for summarizing this
Thanks for summarizing this so well.
Anonymous
Thu, 2018-12-20 17:00
children vs. adults
"you're asking the car to be able to tell children from adults, children from dwarves, tall children from short adults"
Maybe not in the first generation, but fairly quickly the cars are going to have to be able to do this anyway. You need to slow down more when there's a child in certain situations than when there's an adult in the same exact situations.
brad
Thu, 2018-12-20 17:15
That's quite different
That's simply a factor in algorithms about caution. Very little harm occurs if you exercise a bit more caution around a dwarf. However, if you were to kill the occupant of a car to save a dwarf because it was decided that children should be prioritized, it's a different story. Fortunately cars will not be making such ridiculous calculations.
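For what it's worth, that "factor in algorithms about caution" might look something like the sketch below. The classes and margins are invented for illustration; a real system would tune such values from data.

```python
# Hypothetical caution margins per detected pedestrian class.
CAUTION_MARGIN = {
    "adult": 1.0,          # baseline speed/clearance margin
    "child": 1.5,          # children are less predictable: slow more
    "unknown_small": 1.5,  # can't tell a child from a short adult:
                           # default to the more cautious margin
}

def speed_scale(detected_class: str) -> float:
    """Scale down passing speed by the caution margin for the class."""
    # Unrecognized classes get the most cautious margin available.
    margin = CAUTION_MARGIN.get(detected_class, max(CAUTION_MARGIN.values()))
    return 1.0 / margin
```

The key point is that uncertainty defaults to caution, which costs little; nothing here requires the perfect child-vs-adult discrimination that the trolley framing demands.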