Researchers fool an old Tesla into misreading a speed limit sign; that fools the public into panic

Much of the media was keen to pick up on a report from McAfee researchers about how they were able, with a simple modification to a speed limit sign, to cause the Mobileye camera system in older Teslas to misread it and speed up. We get spooked when AI software acts like an idiot. But in reality, this isn't the sort of attack that is likely to be carried out in the wild, and it's also unlikely to cause any danger.

I outline the reasons in this new Forbes site article: Exaggerated Stories About Simple Sign Modifications Fooling Old Teslas Fool Humans Into Panic

Comments

I don't know how the companies are actually doing this, but the whole idea of reading signs separately from everything else is flawed.

The question for the car should not be "what does the sign say the speed limit is?" The proper question is "how fast should I go?" Yes, the sign is part of that, but it's only a small part. The neural network needs to be fed all of the relevant information. It needs to be able to deduce the speed limit from signs, maps, the behavior of other drivers, the type of road, etc.

(With very little emphasis on maps. Maps tend to be frequently flawed, and that's a problem, but the bigger problem is that they can be used as an attack vector. A map needs to be treated as an untrustworthy source. On the other side of the coin, data coming in to HQ from cars needs to be treated as untrustworthy too. As we see, signs are untrustworthy as well, but disinformation added to a centralized map can cause a lot of damage to a lot of people in a lot of places all at once. To the extent maps are particularly useful, the car should make its own maps, store them locally, and mark updated information as untrustworthy until the car has confirmed it with its own sensors. I've seen evidence that the Tesla stores some map information about signs locally, gathered through first-hand experience: someone showed it a sign one time and it remembered the sign later. I'm not sure about the other companies, but the good ones are probably doing this as well.)
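To make that concrete, here is a minimal sketch in Python of a speed decision that treats the sign reading as just one weighted input among several. All names, weights, and numbers here are invented for illustration; this is not how any vendor actually implements it.

```python
# Hypothetical sketch: choosing a target speed from several evidence
# sources rather than trusting the sign reading alone. All names,
# weights, and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Evidence:
    source: str        # e.g. "sign", "map", "traffic", "road_type"
    speed_mph: float   # speed limit this source suggests
    trust: float       # 0.0 (untrusted) to 1.0 (confirmed by own sensors)

def choose_target_speed(evidence: list[Evidence], hard_cap_mph: float = 80.0) -> float:
    """Trust-weighted estimate of the limit, clamped to a sanity cap."""
    total_trust = sum(e.trust for e in evidence)
    if total_trust == 0:
        return 25.0  # no trusted evidence: fall back to a cautious default
    estimate = sum(e.speed_mph * e.trust for e in evidence) / total_trust
    return min(estimate, hard_cap_mph)

# A tampered sign claims 85; every other source says this is a 35 mph road.
inputs = [
    Evidence("sign", 85.0, trust=0.3),       # camera reading, easily spoofed
    Evidence("map", 35.0, trust=0.5),        # untrusted until self-confirmed
    Evidence("traffic", 38.0, trust=0.8),    # observed speed of other cars
    Evidence("road_type", 35.0, trust=0.7),  # two-lane residential street
]
print(choose_target_speed(inputs))  # ~42.6, nowhere near 85
```

The point is architectural: a spoofed sign moves the answer only a little, because it is outvoted by everything else the car knows.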

Faking it with hand-coded strategies might be "good enough" for a while, but it's the wrong approach in the long run.

And it probably won't be good enough when it comes to stop signs. The question should not be "is there a stop sign." The question needs to be "do I stop at this intersection." The stop sign itself is but one input into that calculation, and it's not even the most important one. You could remove all the stop signs in the world and humans would still know where they belong more than 95% of the time.

Yes, famous experiments have found that removing all the signs in a town not only works, but improves throughput in the town! Humans figure it out. I agree that a robot should be able to function in that town, but it is still going to use a map.

Why? Because humans do. Tourists coming to such a town for the first time are going to be much more timid, to the point of impeding traffic, compared to the locals, who have a map in their heads of not just the road geometry but the local driving conventions.

Should you trust your map? Well, not 100%, in that mistakes are possible and the world changes underneath your map. Is it an attack vector? That's a little harder if you do your QA properly, but it could be one for very sophisticated attackers.

First of all, most companies are wary of using maps from outsiders. They want to be on top of the map-making process.

Secondly, one of the advantages of maps is that you do QA on them. Once you have made your map, you send human-driven cars down that street again and confirm the map. And then every robot after that also confirms the map.

If your map is detailed, differences between the map and the world are glaringly obvious. You know if it's wrong. In most cases you also know how it's wrong. It's only when it's wrong and you don't know how it's wrong that you need to summon remote human help.
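That "confirm or escalate" loop is simple enough to sketch. The Python below is purely illustrative; the types and the three-way classification are my assumptions, not any company's pipeline.

```python
# Hypothetical sketch of the map-confirmation loop described above:
# every car compares what it sees against the map, and only an
# unexplained mismatch escalates to a remote human.

from enum import Enum, auto

class MapCheck(Enum):
    CONFIRMED = auto()           # world matches the map
    KNOWN_DISCREPANCY = auto()   # mismatch, but we can tell what changed
    UNKNOWN_DISCREPANCY = auto() # mismatch we can't explain: get human help

def check_against_map(mapped_sign: str | None, observed_sign: str | None) -> MapCheck:
    if mapped_sign == observed_sign:
        return MapCheck.CONFIRMED
    if mapped_sign is not None and observed_sign is not None:
        # e.g. a speed limit sign replaced with a different value:
        # we know it's wrong and how it's wrong, so log it and carry on.
        return MapCheck.KNOWN_DISCREPANCY
    # A sign vanished, or one appeared where the map has nothing:
    # wrong in a way we can't explain, so summon remote assistance.
    return MapCheck.UNKNOWN_DISCREPANCY

print(check_against_map("SPEED 35", "SPEED 35"))  # CONFIRMED
print(check_against_map("SPEED 35", "SPEED 45"))  # KNOWN_DISCREPANCY
print(check_against_map("STOP", None))            # UNKNOWN_DISCREPANCY
```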

No doubt robocars will use maps. Whether maps coming from a central server for things like stop sign locations and speed limits will be very helpful, I'm less sure. I guess one large factor will be how jurisdictions treat speeding and stop-sign running by robocars. If jurisdictions want to force robocars to follow map-based speed limits and stop requirements, it'll be more important. Maybe the jurisdictions that want to do that can provide the maps themselves.

How much will a company save in reduced speeding and stop sign tickets, compared to how much it would cost to make, and maintain, a map of all speed limit and stop signs? I doubt the cost outweighs the benefit, assuming of course that your car is able to always travel at a speed that is safe for the conditions, which it needs to be able to do regardless of whether or not you have a map of speed limits and stop signs. (Even if a speed limit sign does read 85, a self-driving car shouldn't be doing 85 on most roads.)
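As a back-of-the-envelope illustration of that comparison, with every number invented purely for the example:

```python
# Purely illustrative back-of-envelope comparison; every number here
# is made up for the example, not real fleet or mapping data.

fleet_size = 10_000            # cars in the fleet
tickets_per_car_per_year = 0.5 # tickets avoided by knowing every limit
fine_per_ticket = 200.0        # dollars

miles_of_road = 50_000         # road network to map
mapping_cost_per_mile = 10.0   # initial survey, dollars
annual_upkeep_fraction = 0.2   # share of the network re-surveyed yearly

annual_benefit = fleet_size * tickets_per_car_per_year * fine_per_ticket
initial_cost = miles_of_road * mapping_cost_per_mile
annual_upkeep = initial_cost * annual_upkeep_fraction

print(f"annual benefit: ${annual_benefit:,.0f}")  # $1,000,000
print(f"initial map:    ${initial_cost:,.0f}")    # $500,000
print(f"annual upkeep:  ${annual_upkeep:,.0f}")   # $100,000
```

With these invented numbers the map pays for itself; with different ones it wouldn't. The conclusion depends entirely on inputs nobody outside the companies has.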

If you're mapping the road, mapping the signs is not that hard. But I have proposed a simple app -- which the companies would easily provide for free -- so any changes to signs and the road can be quickly logged by road workers with phones. The key point is that the new sign isn't legally in force until logged.
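A sketch of what one record from such an app might contain (the app and all of its fields are hypothetical, just to make the proposal concrete):

```python
# Hypothetical record a road worker's phone app might submit when a
# sign is installed, changed, or removed. All fields are assumptions.

from dataclasses import dataclass

@dataclass
class SignChange:
    location: tuple[float, float]  # (latitude, longitude) from the phone's GPS
    sign_type: str                 # e.g. "SPEED_LIMIT", "STOP"
    new_value: str | None          # e.g. "45"; None if the sign was removed
    effective_from: str            # ISO 8601 timestamp
    worker_id: str                 # authenticated submitter
    photo_hash: str                # hash of a photo of the installed sign

# Per the proposal, the change is not legally in force until a record
# like this has been logged and distributed to the map providers.
change = SignChange(
    location=(37.7749, -122.4194),
    sign_type="SPEED_LIMIT",
    new_value="45",
    effective_from="2020-03-01T00:00:00Z",
    worker_id="dot-crew-117",
    photo_hash="sha256:<photo-digest>",
)
print(change)
```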

Maybe the mapping, both initially and upon any changes, will be cheap. Then again, maybe the traffic tickets from not doing it will be cheap. Either way it's a pretty tangential issue. What's important is that the car can drive safely, and mapping every single sign isn't going to help much with that. (Mapping specific problem areas will.)

Yes, that is true, as I said and as the article notes: this trick of fooling the Tesla with modified speed signs does NOT work on modern Tesla cars!
Why?
Because modern Teslas CANNOT read speed signs at all!
Some day that will change, but as of now, and for several years back, they CANNOT read speed signs. They rely ONLY on map data.
