I get and review Tesla FSD -- and give it an F


Well, I finally got to try Tesla FSD, and it was a big disappointment. From a robocar developer's viewpoint, it sucks and I give it an F.

I made a video review and a text one. The text one contains the review part of the video and lots more information. The video has the 3.5 mile sample ride around Apple HQ, full of mistakes.

Read the text review on Forbes.com at I get and review Tesla FSD -- and give it an F

Quick answers to some viewer/reader questions

Isn't this just a beta? Do you know what a beta is?

There is a section on this in the Forbes article. Suffice to say with 45 years in the software industry and running multiple software companies and projects, I know what a beta is. It's less clear that Tesla wants to use the conventional definition of a beta. Tesla FSD is better described as a prototype than a beta -- the only way it's like a beta is that it's been given out to some early adopter customers.

As a prototype it has many bugs, of course. But Tesla on the one hand keeps declaring that it will be in production very soon (they first promised it would ship years ago) and on the other that portions have undergone "complete rewrites" -- something that never happens in a real beta.

The term "beta" has seen a lot of flux in how it is used over the course of my decades in software development, including some very loose usages. But just because it's a prototype or beta, doesn't mean it's not fair to judge it and compare it with other prototypes, or to measure how far along it is on the path to production "very soon." And it's wanting -- not against anything from other car OEMs, which don't even have efforts of this sort, but against the self-driving teams.

I've ridden in many of the prototype cars of the self-driving teams. I've actually mostly stopped, because they all get the same review if you track this... "boring." It's supposed to be boring, but in a good way. Tesla FSD is not boring, and that puts it way behind the others. Necessary interventions on anything other than straight roads are very frequent. Not boring means an "F."

In 2018, an Uber prototype killed a pedestrian in Tempe, Arizona. At the time, Uber was doing about one intervention every 13 miles, and most people felt that was far too poor a record to have them switch to one safety driver instead of two. That vehicle, which killed a woman, rated an F, and Tesla FSD's performance at what it is trying to do is not as good!

Isn't an "F" unfair? I mean it's amazing!

People who have not seen the other vehicles will think it's amazing. It's amazing that it does it at all, and amazing that it even does it poorly without maps. But I'm not grading it on that, I'm grading it as an effort to make a full self-driving car.

Imagine you were taking your driving test and you made 3 wrong turns, ran 2 red lights, swerved into two obstacles so the tester had to grab the wheel, stalled in crosswalks for long periods and got honked at and gave a jerky and uncomfortable ride. You would not just get an "F," the tester would stop the test early on and ask to drive the car back to the DMV. You would be told not to come back for a while. You have to perform as well as a teenager to get a passing grade in this game.

I know this because I failed my first driving test, when I was 16, because I stopped at an intersection, couldn't see and advanced into the crosswalk where I sat because I couldn't go due to traffic, while pedestrians arrived and swarmed around me. I didn't run any red lights.

Real self-driving is very hard. Doing it 99.9% of the time may seem amazing if you're a newcomer to the field, but you've got to do it 99.9999% of the time, and reach the level of humans who, bad as they are, have a ding every 100,000 miles, an insurance claim every 250,000 miles and involve police every 500,000 miles. Tesla FSD on roads with any challenge doesn't seem able to go more than a few dozen miles on average without something that would cause a ding. (That it sometimes goes further may impress, but it's the average rate that matters.)

Members of the public see a car drive 100 miles with one mistake and think it's impressive. Robocar engineers would see that and call it a terrible failure. Going 10,000 miles with one serious mistake would be a failure. Going 100,000 miles with one very minor mistake is where they would start to consider it doing OK. Having multiple serious mistakes in 30 miles would result in, "Why are you even here?"
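To make the scale of the gap concrete, here is a back-of-the-envelope comparison using the human benchmarks quoted above. The Tesla figure is an assumption for illustration only (taking "a few dozen miles" as roughly 30); it is not a measured statistic.

```python
# Human benchmarks from the text, plus an ASSUMED FSD figure
# (illustrative only -- "a few dozen miles" taken as ~30).
human_miles_per_ding = 100_000        # minor damage ("ding")
human_miles_per_claim = 250_000       # insurance claim
human_miles_per_police = 500_000      # police-reported incident
assumed_fsd_miles_per_ding = 30       # hypothetical assumption

gap = human_miles_per_ding / assumed_fsd_miles_per_ding
print(f"Roughly a {gap:,.0f}x improvement needed to match the human ding rate")
```

Under that (hypothetical) figure, the system would need over three orders of magnitude of improvement just to reach the human ding rate, before even approaching the claim or police-report rates.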

Won't it get better?

In fact, 10.9 just dropped and has improved -- I suspect Tesla saw this video and filed tickets for some of the problems. It understands the bollards and dedicated lane of the right turn now, and so far has not failed on the forced left lane in 3 tries (though it only failed occasionally before). At the place it took the sudden left and ran the light, I did 3 loops; twice it swerved into that left turn lane but then corrected and swerved out of it. The third time, when cars were in the lane, it did not swerve into it. It's unclear if this has improved.

We should hope each release will improve, and I'll drive it more, but it's still too rough -- and inconsistent -- to get a passing grade on this route. Of course, if this came about because Tesla saw the video (that's not confirmed, of course) then that means little, other than that they fix bugs that are reported. The reason I suspect they saw it is that no minor update fixes things this specific in this way.

But Tesla drives on any road, how can you compare it to cars with limited service areas?

Driving without a map is an incredible stretch goal, which Tesla has failed to come remotely close to so far. You don't get points for what you are trying to do, you get points for what you succeed at. Almost every other team believes Tesla has taken the wrong path here (and on LIDAR, but that's a different story.)

Maps are super useful. Any car that could drive without a map is a car that can make a map. In that case, maps are just having a memory -- knowing what things look like up close and from other angles because a cousin car drove this road before. The world changes, and every map-making company knows that. When the map is wrong, you take a step down -- and drive like the Tesla tries to drive all the time. Or rather, you do a bit better because your map is not entirely wrong, and if it's detailed, you know where it's wrong and where it's not. There are many more advantages to maps than this, but even if this were all you did, it would be worth it.

Maps don't scale, you imagine? The first team to build a working map-based car was at Google. I worked on that team. The people who did it had just built Google Streetview. When people asked, "Wouldn't we have to drive every road in the country to map it?" they could answer, "We did it last month; we'll do it again." It's a big project, but very scalable and doable for the likes of the big players in this game.

It need not even be a big project. Several companies such as Intel/MobilEye are building their maps just by having the cars with their gear drive the roads and report what they see. MobilEye is in over 100 million cars, dwarfing anybody's fleet. They can't get as much data from these cars as Tesla gets from its much smaller fleet, and Tesla doesn't get as much as Google/Waymo do from their even smaller fleets, but it's enough. If Tesla wanted to, their fleet and sensors are enough.

Even if maps cost a lot of money to make, the amount per mile is still quite low, and the amount per trip over a segment of road is as well. Even if it cost $1,000 to map a mile of road (it won't), that map will serve thousands of people driving it every day. Nobody will be priced out by the cost of mapping. But again, quite useful maps can be made for close to free.
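The amortization math here is simple. A rough sketch, where the $1,000-per-mile figure is the deliberately pessimistic bound from the text and the traffic and refresh numbers are illustrative assumptions:

```python
# Amortizing a pessimistic mapping cost over traffic.
# All figures below the first are illustrative assumptions.
cost_per_mile_mapped = 1_000.0   # dollars; pessimistic bound from the text
vehicles_per_day = 5_000         # assumed daily traffic on the segment
days_per_refresh = 365           # assume the map is redone yearly

vehicle_miles = vehicles_per_day * days_per_refresh
cost_per_vehicle_mile = cost_per_mile_mapped / vehicle_miles
print(f"${cost_per_vehicle_mile:.5f} per vehicle-mile")  # well under a cent
```

Even with these conservative assumptions, the mapping cost per vehicle-mile comes out to a small fraction of a cent, which is the point: mapping cost is not what makes or breaks the economics.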

Most people believe driving without maps is a fool's errand. It may be possible in the future, but it's not the path to getting to success sooner. And in the end, when grading how well the car does, it mostly matters how it does, not what it wants to do. A car that can drive badly everywhere is not better than a car that drives very well in a few cities. Not when it comes to full self-driving.

Those who think maps need to be perfect to work and are useless because they get out of date don't really understand how maps work. While being the very first to encounter a street that has changed since it was mapped is actually extremely rare, everybody knows you still have to handle it.

To those who say, "Driving without a map works everywhere" the reality is that driving without a map works poorly "everywhere." Driving with a map does work well everywhere, and it's not that expensive -- certainly not so expensive that you would give up safety to save a small amount of money.

There are other limitations to those cars, like turning left

Waymo is not confident of 100% safety on unprotected lefts, so one fault of their service is that it sometimes takes a longer route to avoid them. That's definitely something on their todo list. But understand that the Tesla can't do these turns either. Yes, it often pulls them off, but in full self driving, "often" or anything less than 100% means "can't." The Waymo could almost certainly do those left turns as well or better than the Tesla, but it's still not good enough for self-driving.

All the companies building their robotaxis -- companies like Waymo, Cruise, Argo, Zoox, AutoX, Baidu, WeRide, Motional, Aurora and more -- are all operating in limited areas. That is not their long term plan, though. They have the money and plan to expand to lots of cities. They don't plan to drive everywhere because they are taxi companies, not trying to be car companies like Tesla. Making a car you can sell and that drives everywhere is a much, much, much harder problem, and most people think Tesla is biting off more than anybody can chew. The real world-changing stuff is in robotaxis that change the nature of car ownership anyway. Even Tesla sees that and talks about how they want to deploy their cars in this way. You can make a great robotaxi service without driving every road, so why delay getting there? We'll see who gets there first.

Is driving timidly so bad?

It's not. In fact it's what all early prototype products have to do. But you must graduate from that to be a full self-driving product. It's part of what is holding even Waymo back. If you're too timid to go into production, you've failed. You can't be out there blocking traffic, getting honked at. Your project may get pulled from the roads if you do.

How is it OK or legal to do this?

Every robocar team has put a car on the road that was at this poor level when it first started, but they had safety drivers in the car to intervene if it made a mistake. I myself have done this with the Google car. With one glaring exception from Uber, the safety record of this technique has been exemplary -- superior to the safety record of regular human drivers.

As long as we aren't seeing accidents that hurt other people in other cars at a rate above that for normal driving, there is no reason to stop this, even though Tesla is definitely skating near the edge, and has some key differences:

  • Other teams have professional employee safety drivers with some training. Tesla just has ordinary owners. Uber had an employee who completely neglected her job.
  • Other teams start with 2 safety drivers, though the goal is someday to get to zero, which Waymo and a few others have done. (Uber switched to one, which was tragic because with two, the one at the wheel would not watch TV.)
  • Tesla's drive quality is quite a bit poorer than other cars were after several years of development -- but again, as long as people are not hurt that's not a key issue.

There are reports of people getting very minor issues like curb dings and scrapes. Elon Musk claims no accidents, but he means serious ones, where an airbag deploys. As long as the record stays like that, and the public is not at greater risk than from ordinary driving, calls to stop Tesla are not appropriate. Though in California, there is an argument that what they are doing needs a permit under California law.

Is this FUD? Do you hate Tesla?

As I said, I love Tesla. I like Elon. I don't support everything they do, because they make mistakes. I am not afraid to call those out. Yes, I think Waymo is #1 and that other companies are far ahead of Tesla. Yes, I am friends with people at Waymo and many other companies, but I stopped working there in 2013, so I have no financial incentive. I own GOOG and TSLA stock, but don't imagine for a second that what I write would ever move the needle on either of those stock prices. FUD requires a motive. I own a Tesla and would love for them to succeed, but I'll be critical when they take the wrong path to success. They are taking a very long bet -- it might even pay off, which would be great. But it's the wrong bet from what we know today.

Humans drive with just their eyes

Yes, everybody knows this very old (in the field) phrase. And birds can fly just by flapping their wings. We are not even remotely close to matching human general intelligence, though. We're not even up to matching a horse or a bird. So yes, clearly, if we got AI that approached humans, or even the part of humans that drives, you could do it with just a camera.

In fact, humans do it with just a single camera (we can do it with one eye) that we swivel around but mostly point forward. By that argument, Tesla is stupid to have 8 cameras on their car, plus ultrasonics. Humans don't need those. We don't build airplanes to fly like birds, and it's not at all inherently right to say that driving systems are best built to work like humans.


“Tesla’s performance is beyond what Waymo could do a decade ago”

Is “beyond” meant to be “behind” here?

(Yes, this was fixed, thanks... Brad)

The blogosphere has not bombarded and bashed the column or the columnist as of this moment.


Certainly I was expecting a much lower intervention rate. Highways and arterials (which Autopilot does reasonably well) are low intervention. This is what surprised me. I mean I just left my home and did a simple loop about the area on roads I don't think are particularly unusual, and it was intervention after intervention. I expected much better performance. Maybe as I drive more miles I'll see a better record. (But my sweetie prefers not to be in the car while testing, I guess she wants me to die alone in the fiery crash. :-) However, a good system almost never drives any road like this drive, let alone the first road you drive leaving your house.

Tesla's perception is limited, and of course they refuse to use LIDAR and now radar. The side-rear cameras should show this traffic, but there are limits to what you can do with these cameras, particularly in sun or rain.

I don't understand some of the planning errors, particularly the sudden veer into a left turn lane. We often see Tesla's planner wobbling about considering very different plans, and that doesn't inspire confidence. Why it selects plans that are not at all along the planned nav route is odd, though in some cases it's because it now feels that route is not driveable, so it does what it can.

I would hope with a better map it could make clear plans and stick to them. As long as it builds the map on the fly, the plan it picks is going to be a very dynamic thing and lead to surprises. On Tantau, the street of Apple HQ, the lane is marked with giant left arrows. It usually sees those in its perception system but doesn't here, and I don't know why.

Your feedback to others on the YouTube post is helpful as well.

"...Hey, I'm a Telsa fan. They are the best car company, that's why I gave them $55,000 of my hard earned money. I'm an Elon Fan too, but that doesn't mean everything he does is perfect. ...

The other automakers are not really even attempting this (other than through subsidiaries) any more. As for the companies who are attempting it, there may be some who are performing as badly as Tesla, some who are even a little worse. The biggest players now -- Waymo, Cruise, Zoox, Motional, AutoX, WeRide, Baidu, Argo, TuSimple, Plus.AI, MobilEye and a few others -- well, Tesla isn't even in their league at present. I was expecting FSD to be better, to at least compete with some of these, but sadly, it can't. "

All your comments and answers, Reddit / Youtube / Twitter / Forbes etc consolidated together help greatly when read as a whole.

99% of people just want to comment where they read the post. You can try to designate a central location and they ignore it.

Picking the wrong technology sets a program back years. Will the FSD chip mated with the D1 Dojo supercomputer and AI/NNs succumb to a sensor suite choice (lack of radar, lidar, maps)? Is it easier to add than to take away?

"If I was an AI engineer, that's (Tesla) exactly where I would go. Probably NOT, honestly. There's a reason they're holding such a "recruiting" event, and it's not because the top talent is banging down their door for a job. You'd much more likely find yourself in a research program, or at one of the major behemoths in the industry like Google, Amazon, IBM, every military contractor, hell even Tencent. You'll notice that Tesla is missing from things like the NIPS conferences, too, so it's basically a zero percent chance someone would go work there."

We have had a Model 3 for 2.4 years and love Autopilot. If you live in Palm Desert, the best test is Palms to Pines from here to San Diego! It does the switchbacks with ease with speed set to the posted 55, and it slows down for sharp turns... the only issue is it slows before the curve, much like a sports car on a track, and folks following seem to be in a huge hurry, wanting to run up on the car. They should learn a little more about driving safely in switchbacks. I noticed the demo was a 2018. Why use a car that is at least 3-4 years old?

The 2018 Tesla is pretty much identical to current ones when it comes to the self-driving system. It has the latest computer, of course, and the same sensor suite, though it has a radar that the newer cars do not. The FSD system does not use the radar, according to reports. The main differences with newer models are minor -- heat pumps, charging controller and a few other things, not affecting FSD. Tesla regularly claims that all cars back to about 2016 have the hardware they need for FSD, though they do require the processor board upgrade, and probably will require another one.

A sub-$35,000 BEV with SAE Level 4 included would be the most disruptive event in the history of the automotive industry. Elon, build a Model 2 with Level 4 for $30,000. Mobileye, join forces with Nissan and build a BEV with Level 4 for $30,000. The camera guys will forever be worshipped by the working class.

I have been involved with the design and manufacture of simple electronic systems, both hardware and software. Certainly you may be able to make a driverless car drive as well as a highly skilled human for, say, a few years. But there is a major problem: all the extra complex hardware, software, sensors, wiring and who knows what else adds extra chances of faults occurring, usually intermittent faults. I have worked on late-model Range Rovers, and after about 10 years so many frustrating faults have occurred that the computer diagnostic system could not uncover them. Sometimes the vehicles are just binned. I have ended up with the binned ones. Yes, driverless cars will have a place, but there is one thing I have that a computer system does not: FEAR. There is no way you can make highly complex systems 100% reliable. Look even at the aviation industry, which is a best-practice industry.

I think you are way too harsh, way out of line. Tesla FSD is not available to anyone who has not jumped through a bunch of hoops to become, not a user, but a tester. This is not a finished product and is not being presented as such. Frankly, I think Elon had a great deal of nerve legally, knowing that there would be "reviewers" like you, or people who are openly or otherwise representing the entrenched ICE world, who would pop up and try to review this system as being promoted for prime time. It's nothing like that. It is in beta testing. How could Tesla make that clearer? I too have this system and it is teaching me just how difficult learning to drive and then teaching a machine can be. Yes, this system has miles to go, but needs to be in the field in some form to learn what it is you do when you drive a car. This job is incredibly hard. Progress will be incremental and very slow, but without beta testers, we may not see it in our lifetime. Relax, learn more while the car learns. Progress will necessarily be slow, please be patient. Mike Heaton

It’s hard to take you seriously (even if you might be right in some respects) because you think Brad supports ICE.


I have a history of 45 years in software development and 15 years in robocars. I am fully aware that the product is a prototype. It only gets a review because it is being disseminated to a wider audience and because Tesla have said (for some time) that it will be a commercial production product "very soon." Many people seem to take that claim of being ready soon as credible. It is reasonable to report evidence that it is not. If Tesla didn't annoyingly claim that this is a product on the verge of release, I would treat it as the prototype it is.

That said, I've ridden in several prototype self-driving systems, though this is the first claiming to drive mapless on any street. In contrast to the others it scores quite poorly, which is why the "F" -- at what it is, as well as what it is pretending to be. It is very worthy of comment whether trying to drive mapless on arbitrary roads is a bold stroke or an error.

Interesting 72 hrs - Grade of F by BT, and The Dawn Project (with full page NYT ad and website).

Crash Test Dummy.

You are correct in your criticism that this should not be called "Full Self Driving". And you are correct that Elon has been dead wrong about how soon real FSD can be delivered.

You say, "Most people believe driving without maps is a fool's errand. It may be possible in the future, but it's not the path to getting to success sooner."

You have no way of knowing this. Success in AI tends to sneak up on us. Tesla is doing something far more ambitious than the others. Nobody on the planet really knows if they will succeed or not. And nobody really knows how soon they could succeed.

A few years ago, Lex Fridman told his MIT students, "There exists a neural net that can drive better than a human." This is obviously true and there is no reason to believe that Tesla will not be the first to develop it. Conversely, there is no reason to believe they will be either.

We just don't know. It wasn't long ago that nobody thought a computer could ever beat a world class Go player. The combinatorics of the problem were just too massive for computers to handle. We saw how that one turned out.

I have said many times (including above) that what Tesla is doing is placing a longshot bet on a breakthrough. It might pay off. It probably won't. That is not a statement that one approach is completely wrong, just that it's more likely to be.

Tesla is being more "ambitious" in some ways but actually even that might be wrong. Waymo could do what Tesla is trying. Probably do it better than Tesla -- there is certainly better neural network skill at Google than anywhere in the world, and by a good margin. But they don't think that's the most likely path to success. Their ambition is quite high, actually.

I am not sure I agree with Lex on this. Or rather I don't think it's a true statement that "there exists a neural net that can drive better than a human ... on the compute hardware available today and in the next few years." That's not at all demonstrably true. Nor that we know or will know how to train that network.

It is definitely true that the neural net Tesla is after can be built. That's all Lex was saying. It's not (yet) demonstrably true. But it is unequivocally true nonetheless. We don't know if it really requires a true breakthrough or just more and more iterations and innovations.

Google is no better than Tesla at neural networks, at least in the autonomous driving space. They are both very, very good. Both have top AI talent and both are developing cutting edge hardware.

The difference I see is that Waymo has to figure out how to build a robotaxi network that is profitable. It looks to me like Waymo has a long way to go and I'm just as skeptical about Waymo's profitability as you are about Tesla's FSD. (Actually, I'm quite skeptical about both. It's a really hard problem.)

Tesla doesn't need robotaxi. If they can just make FSD work as well on city streets as it does on the highway then FSD will be extremely profitable for Tesla. Robotaxi would be a bonus.

But despite Elon's embarrassing missed predictions, Tesla actually has a luxury of time that Waymo does not. Waymo can't burn money forever. But Tesla isn't really spending all that much on FSD. In fact, FSD is already generating significant revenue. Because Tesla's cost structure for the project is smaller, they can keep working on the problem much longer.

You must use a different definition of that phrase than I do. Perhaps you think the human brain is nothing but neural networks. That presumes facts not in evidence.

No, Google's AI talent is leaps and bounds over Tesla's. Tesla is not inventing this stuff. They use the best that other people invent. Not saying they are bad at doing that, and they are making their own innovations, but there is no evidence they even approach the kind done by people at Google, including DeepMind and Geoff Hinton.

Tesla doesn't need a robotaxi, but the personal car is a much harder problem than a robotaxi. So by aiming for it, they put themselves at a disadvantage. However, I will say of all the car makers, Tesla is certainly better equipped to handle that task. But considering what the other car makers are like, that's not a high bar. (Though the car makers who have been willing to just have a startup subsidiary like Argo/Cruise/Motional do it have a better chance than the others.)

Don't count out MobilEye. They believe in neural networks and evolved ADAS like Tesla does. They are in an order of magnitude more cars than Tesla is, though they don't have the fine control over them that Tesla has. They are building LIDARs and imaging radars and doing it well. Being part of Intel, they are a level above Tesla in being able to make custom silicon. They have embraced maps and are doing them well, exploiting their much larger fleet of cars.

Look at the sheer number of employment openings at Cruise Automation. Then consider that Waymo recently spoke of rebuilding the complete stack and sensor suite.
Lex Fridman podcast (start at the 2:04 mark)
#241 – Boris Sofman: Waymo, Cozmo, Self-Driving Cars, and the Future of Robotics

BTW, it's also obvious that vision and neural nets is all that is needed to perform the driving task. That's how humans do it.

So to say that driving without maps is a fool's errand is quite wrong. All of us human fools still manage to drive.

Again, we don't know which approach gets us there sooner. But we do know that if Tesla is successful, then Tesla gets us there cheaper.

Yes, people have been saying this line for a very long time, long before Elon even dreamed of it. They imagine that somehow people are not aware that humans can drive in the human way (with one eye missing, in fact.)

It is possible, if you have some fraction of natural human intelligence. Which we don't have, not even remotely close. So it's not really relevant. Birds can fly fine but we don't make aircraft flap their wings.

(Yes. My grandfather was blind in one eye and he drove very well.)

People, many who are AI experts, have been saying that line because it is an excellent argument.

Vision plus a neural net can definitely drive a car. What we don't know is how hard it will be to create an artificial neural net that can drive better than a human. We only know it is possible.

Our whole argument comes down to the idea that you say Tesla's approach is likely to fail and I say we don't know. We can't possibly know the probability of success or failure at all. I don't even think we can possibly have an informed opinion.

Frankly, I think both Waymo and Tesla look like they are making progress. Then I take another look and I think both Waymo and Tesla are in "fake it 'til you make it" mode.

There are a lot of things that are possible that are still intractable for foreseeable engineering. And it is far, far, far, far, far from proven that anything a human brain can do can be done by a camera and a neural network. People debate it a lot, but there is certainly no consensus that this is true. (In fact, there is a larger consensus that it is not.) Which does not mean it's false, but that you can't assert it as an accepted fact. It just isn't.

It's not a fact that a neural net can drive a car. And it's not a fact that if a neural net can drive a car, that we know how to build one. These are both speculations without a lot to back them up, caused by the fact that neural nets have conquered a number of interesting AI pattern matching and statistical problems which previously could not be done as well. That trend makes some people extrapolate that there is no limit.

Maybe there is no limit. Maybe it can be done. Some are betting it can, others that it can't or is too hard. Nobody can declare "a neural net can definitely drive a car." Well, not with any force behind it.

I read you saying their approach is likely to succeed, and I say we don't know. I find it odd that you represented my view as the reverse of what it is.

Can we know the probability? Well, no, not with breakthroughs. They happen or they don't. You can have intuitions on the speed of progress to them but they are by definition, breakthroughs.

Waymo and Tesla actually both deserve credit for being willing to let members of the public try their systems. That's an important bar few others clear. That's not faking it.

When I say a neural net can definitely drive a car, I mean the one in our heads. Whether a human brain is "more than a neural net" is, to me, a matter of semantics.

I generally agree with you on what you said in this latest post.

While humans are not a neural net (not in the AI and computer science meaning), even if we were, one could point out that we're not very good, on average, at driving cars. Of course, some humans drive a whole life without an accident, though on the whole, our species has far too high a death toll for its driving. Such a system could never get adopted today.

Though to disagree with myself, most of our accidents are due to failed attention, which machines are less likely to suffer from.

I do think we are machines, but neural nets are a very useful abstraction inspired by biological brains, but they are not the same.

I agree with this thought as well. And given that humans are not very good at driving, this gives much credence to Lex's statement that there exists an artificial neural net that can drive better than a human.

I personally think Lex's statement is obviously true from a theoretical standpoint, but we don't know exactly how hard it is to create that NN in the real world.

Perhaps I will get around to forming a reply to the other thread where we have more disagreement.

I would certainly say that there can be a computer system that drives better than a human. Whether it is a 2022 neural net or not, or something derived from a 2022 neural net, is a different question which can't be stated with certainty (either way.)

I suspect the answer will include lots of neural net tech (though not necessarily built or trained as we do today) and a bunch of other techniques. Some suspect it can be done entirely with machine learning. That's not yet shown. Machine learning will play a big role, and most teams out there are working on that assumption.

My uneducated view is that true autonomy is so hard because it must "do everything." That is, with ADAS I can keep adding features... BSD, lane-keeping, automatic emergency braking, etc. And as Aptiv has said, get to 80% of full-autonomy safety benefits at 20% of the cost. But in this approach the driver is supposed to keep doing everything as before, with the system intervening as needed. In full autonomy the system is inverted: it does everything and the driver is out of the loop. (I do understand the SAE level system, I am making a broader point.) Thus its processing burden is immense. BSD = focusing a camera at the blind spot and beeping if there is a vehicle present. Full autonomy: "Human, you are attempting to enter the airport parking garage via the exit side... I must stop that." In full autonomy, the "edge cases" become all that is truly important, because instead of building up from limited functions (additive ADAS), full autonomy must "solve for the world." I was struck to read Waymo's paper about how its system would avoid the vast majority of fatalities in Chandler etc., not because the tech is amazing (it is!) but because the keys to this high safety level were mostly such incredibly mundane things: don't run stop signs, don't break the speed limit, don't turn left into traffic.

Full autonomy is not telling the human they are going in the wrong entrance of the parking garage. It's taking the car, with nobody in it, and driving it in the right entrance while making no mistakes.

There are two camps. Most are in the "ladder to the moon" camp and think you should focus only on full autonomy (in a defined, useful subset of situations and streets). The other, smaller camp thinks you can start with ADAS and just keep making it better; it includes MobilEye, Tesla, Tim Kentley-Klay's new startup and a few others.

Why would a CEO of a reputable company author an exaggeration of this nature?

Dan O'Dowd on Twitter in the last hour:

"I am worried about the Billions that will die if FSD goes live. It will be almost a Billion in the first few days."

If you look, it's a fake account.

It's over the top, that's for sure.

A complete stack and sensor suite rebuild by Waymo:

Lex Fridman podcast (start at the 2:04 mark):
#241 – Boris Sofman: Waymo, Cozmo, Self-Driving Cars, and the Future of Robotics

Regarding traceability, validation and verification, how can Tesla make constant changes to a subsystem like beta FSD without significant internal testing first? Even Waymo states simulation is only good to an extent. Are the Tesla employees who are active in the FSD program on the payroll when they are in the vehicle with FSD active?

I would presume, if it were my team, that it would have a very large regression suite with lots of simulation scenarios and recorded data streams for any new build. That would be followed by some test track and on-road drives by staff -- Tesla has thousands of employees with Tesla cars -- and then you might consider release to testers.
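The pipeline described above -- a large regression suite gating each new build before any humans drive it -- can be sketched roughly as follows. This is a hypothetical illustration only, not Tesla's actual process; all names (`run_scenario`, `gate_release`, the scenario list) are invented for the example.

```python
# Hypothetical sketch of the release gate described above: a new build must
# pass every recorded/simulated regression scenario before being promoted to
# test-track and employee on-road drives. Names here are illustrative only.

def run_scenario(build_version: str, scenario: dict) -> bool:
    # Stand-in for replaying a recorded data stream or simulation scenario
    # against the candidate build; True means no disengagement or violation.
    return scenario["expected_safe"]  # placeholder result for the sketch

def gate_release(build_version: str, scenarios: list) -> dict:
    """Promote a build only if 100% of regression scenarios pass."""
    failures = [s["name"] for s in scenarios
                if not run_scenario(build_version, s)]
    if failures:
        # Any regression blocks promotion; the build goes back to developers.
        return {"promote": False, "failures": failures}
    # All scenarios passed: advance to the next (human-supervised) stage.
    return {"promote": True, "next_stage": "test-track + employee drives"}

scenarios = [
    {"name": "unprotected-left", "expected_safe": True},
    {"name": "red-light-cross-traffic", "expected_safe": True},
]
result = gate_release("2022.4.5", scenarios)
```

The key design point is that simulation and recorded-data replay are cheap enough to run on every build, while on-road staff testing is the expensive final filter before any wider release to testers.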

Not sure what you mean by on the payroll. I presume they are salaried, most of them.

It is my understanding that this is exactly what Tesla is doing: regression suite + live testing.

And they are live-testing multiple versions at once. Some employees are live-testing the latest minor version while others are live-testing the next major version.

I believe that right now the next major is version 11 with the "one stack to rule them all" architecture. I think this is also the version where the system has some understanding of object permanence. I expect it to be delayed quite a bit because it is so different and needs lots of extra testing.

Part of Yann LeCun's comments on Twitter. Read the entire thread and then tweet your review.

"In 2016, MobilEye "divorced" from Tesla, and Tesla had to start building its own driver's assistance system.

It took them 2 years to build a system that matched the MobilEye performance."

"It's not an autopilot. It's a driver's assistance system.
And for that, it's great.
It warns you when you don't hold the wheel for more than a few seconds.
Your cognitive load is much lower (much less tired after long trips), and the driving is safer overall. "

Hard to believe these are comments from an AI expert.
Disjointed, vague, not explicit, possible innuendo.
At least BT's article is coherent.

Tesla is almost a trillion-dollar company in value, worth more than all the below.

Will Toyota, Volkswagen, Mercedes, GM, Ford, BMW, Hyundai, Nissan, SAIC, Volvo resort to public beta testing of similar capabilities to the FSD program?

"Why not?" is then the real question.

It is mostly about their position in EVs. Some of it will come from AV dreams. Ford, Hyundai and GM are testing their vehicles with superior capabilities to FSD -- far superior as far as I know -- on public roads.

Given the list of Ford, Hyundai and GM, why not mention Baidu, Volvo, Mobileye or Mercedes?

I don't think self-driving efforts constitute much of the valuation of Volvo and Mercedes. They do constitute some, but not a grand part, of the value of Tesla, GM and possibly Ford and Hyundai, though I'm less sure about those.

When you claim they are ahead by leaps and bounds in terms of talent, but you only worked at Google almost 10 years ago and (perhaps) never at Tesla, I wonder where your info is from?
Also, Waymo is WAY older and has spent WAY more money than Tesla FSD. And, as confirmed on Lex Fridman's podcast, they are rewriting their code base just like Tesla.
Given that you don't have information bias (like, not from your Google friends), with your network, it would be best for self-driving supporters if you did public reviews of all the other self-driving systems that you claim are "an order of magnitude" better than Tesla FSD.
Nonetheless, what is your scale for SD systems? Waymo not being able to make a left is okay, Cruise not being able to go on city streets is okay, all the map-based systems not being able to cross cities/states is okay. To a robocar specialist, don't they all deserve an "F"?
BTW, you can pick at Tesla's words saying they only see serious accidents as "mistakes." When you say "MobilEye... are in an order of magnitude more cars than Tesla," what is the actual number of miles with the system activated?
Dare I say "Tesla has an order of magnitude more miles with FSD activated than others"? I am not sure, just saying, but I am not a specialist, right?
I do think you are one of the very few critics who are actually trying to be objective; I just think that with your background and years of experience, coming out with reviews of the whole SD car industry with deep research would give a more awesome insight into the industry.
A good review, but I will give it a D, for "Do it better next time."

As I have said, only Waymo and Tesla are willing to let people ride their own routes in them, which is what is needed for a review. Props to both of them. I have taken rides in several others and watched many press rides, and the moment I turned on FSD it was clear it was a step behind, just from the amount of jerk. Videos released by companies are cherry-picked and not useful (Tesla had a video in 2016 that looked fine, of course). Ideal is public directed rides, and failing that, personal rides and press rides where they selected the route but not the circumstances.

Of course Waymo is still busy coding, as is everybody else. As they should be.

The "F" is for what Tesla claims they have -- a beta that is very close to release. Tesla does not have that. That they call it that is odd. If they say, "Try our beta that will be released this year" (after predicting that for several years), it has to be something matching that claim. Cruise, which I have not ridden in (except a long time ago, in a different product which wasn't that good), is clear on its limits now, though I am skeptical of their 2023 release. Not being good at left turns is a flaw, though only in ride time, which is pretty far down the problem list.

Not using maps is a bug, not a feature. It's better to get a car that can actually pull it off in a few cities than a car that drives badly in all cities. The latter has no value.

But I do have plans to review the others that will let me. Those who will not let me will be classed as not ready for a review. Tesla is also not ready, but at least they let people do it.

OP here.
Will look forward to your reviews for other systems.
I think the point here is that, after reading through your thought process, the problematic statement seems to be "Tesla is way behind others," while the fact is, objectively speaking, hard to establish due to their unique approach to the situation.
When you make statements with your background, they will get turned into hit-pieces by mass media with agendas against Tesla; I guess great ability comes with greater responsibility?
On the topic of Tesla's approach, I do side with you in the sense that they aren't following the most straightforward path: they want more than robotaxis in cities, they want an AI on wheels that can even go without maps (BTW, I think they do use maps and plan to implement locale memory; Elon mentioned this, maybe on Lex's podcast). And after years of struggle, they finally figured out they actually need to reach AGI to get there, which is why they announced the plan to make AI humanoid robots.
Elon does what he does; he is always ambitious but always late (but gets things done in the end nonetheless).
Cut down the ambiguous statements, present as an objective specialist, and if you do have fair views on the whole industry, I am sure your research will serve the greater good.

Phil Koopman's blog Safe Autonomy is worth reading to understand the background.

If the "F" is based on Tesla's claims, then I agree with Brad. Elon has always massively over-promised on his timeline. So for that, Elon gets an "F".

But I really don't care that Tesla's definition of "beta" is different from the usual software definition. Anyone like me, who participates in the FSD Beta program knows what they are getting. Labels don't really matter.

The fact that FSD is jerky is irrelevant (for now) and it does not indicate that Tesla is behind. Given that Tesla is solving a more ambitious problem, we can say, "Yes, it's a little jerky, but it operates everywhere." We really only care if the jerkiness gets better over time, which it has rather quickly. This is an indication that they are making progress solving the driving task for the general case, but it doesn't really tell us how close they are.

Brad says, "It's better to get a car that can actually pull it off in a few cities then a car that drives badly in all cities. The latter has no value."

But Waymo hasn't shown it can pull it off in any city. Waymo has to figure out how to make a profit, and they are failing at that so far. It looks to me like they are far, far away from profitability. If they can't make a profit, then Waymo has no value.

Tesla will improve over time and even Waymo's director of engineering thinks Tesla will get to level 4 eventually. At that point, Tesla's system will be extremely valuable even if they are unable to launch a single robotaxi.

Anything that makes any claim to being any form of a self-driving system (prototype, beta or otherwise) that makes wrong turns, runs red lights and veers towards obstacles is going to get an F, if it does these things frequently in just a few miles of driving. Hesitant driving, jerky driving, getting honked at -- those wouldn't be a failure as long as they are reasonably infrequent. Stopping for very long periods, unsure of what to do or where to go -- that's an F if it's not quite rare.

So yes, jerky driving is not a failure, but it does indicate you are behind systems which drive comfortably, because that's a must-have for the final checklist.

While Waymo will not bother to make it, I strongly suspect they could make a car that "drives everywhere" but imperfectly, and better than Tesla drives everywhere. They have no wish to make that because it's not a useful thing to release.

Tesla's approach may indeed work some day. Most people think that day is uncertain, and probably much later. Except they would use maps, because it makes zero sense not to use them when they cost nothing but storage and bandwidth. Going without LIDAR is doable some day.

You can indeed debate how profitable a robotaxi business is. Tesla wants to get into that business too. But Waymo and many others feel that the robotaxi business comes first, and during that period it is much more likely to be profitable than the business that comes much later. Waymo is making a driving system. They can partner with any car OEM if they have made the best one.
