How much safer are Waymo cars?

Would a Waymo vehicle do better in snowy or other obstructed conditions than a Tesla, given that Waymo uses lidar and radar (versus cameras alone for Tesla)?

The one Tesla argument I can mostly accept is this: what do you do when the different sensors disagree? If one is always going to be right, then why bother with the other? Obviously what's needed is a decision system that takes all the data into account; that's probably what Waymo does.

Also, high resolution camera sensors are incredibly cheap and small. If you can get a vision only system to work, then putting a dozen cameras on a car isn’t a problem.

Another issue was that those early radars did not have adequate resolution, which resulted in many of the phantom braking incidents. The early cameras didn't have good enough resolution either, which is presumably why the new Hardware 4 cars use much higher resolution cameras. There was talk a few years ago about Tesla adding newer, higher resolution radars, but I haven't heard anything about that lately. We'll see what Hardware 5 has.

Possibly. Back when my Tesla did use the radar, I feel like it did better in the rain. The trouble with snow is that it tends to collect on the bumper and block the radar, which caused the car to throw fits. Supposedly some cars use heaters to keep the radar clear.

One imagines a team of people sitting at consoles steering the Waymos around the city streets. Like Grand Theft Auto?

One imagines wrongly. But that's sure what the antis like to tell themselves: that Waymo is just third-world mechanical Turks.

Which is why you want to have different kinds of sensors, because they don’t disagree. They give different kinds of data. Like, consider the sensors I use when I’m driving: If I can hear a car but can’t see it, then I conclude that there’s a car somewhere that I can’t see (maybe around the corner of a building, say). I’m not concluding that my hearing is right and my sight is wrong, and I’m certainly not concluding that I should base all of my driving decisions on hearing instead of sight.

An argument like that isn’t one that would be made by an engineer who had given careful consideration to the problem. It’s the kind of argument you make after the fact to support the conclusion you’ve already arrived at. The real reason for the only-cameras approach is almost certainly just that it’s cheaper.

All of the other kinds of sensors would also become similarly cheap and small, if they became standard equipment on all cars. That kind of demand is what leads to that kind of economy of scale.

As well as much better processors.

Except they can disagree, which is the issue. More advanced models that use all of the information are better, but that was not what Tesla had at the time. If you hear a car, but don’t see it, maybe it’s in your blind spot, maybe it’s on the radio. Do you slam on the brakes every time that one Billy Joel song comes on? (Not to mention songs with car horns or sirens in them.)
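To make that concrete, here's a minimal, entirely hypothetical sketch of how a fusion system can weight two disagreeing range estimates by their uncertainty instead of declaring one sensor "right": inverse-variance weighting, the simplest textbook form. All the numbers are made up.

```python
import math

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighting: the less noisy sensor gets more say,
    but neither measurement is simply discarded."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)  # fused estimate and its variance

# Made-up numbers: camera estimates 50 m (sigma = 5 m) to an object,
# radar estimates 46 m (sigma = 1 m) to the same object.
dist, var = fuse(50.0, 5.0**2, 46.0, 1.0**2)
print(round(dist, 2), round(math.sqrt(var), 2))
```

The fused estimate lands near the radar's value because it's the tighter sensor, but the camera still nudges it, and the combined variance is smaller than either sensor's alone. A real system (e.g. a Kalman filter) does this recursively over time.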

I was disappointed in the removal of the radar, as it seemed to me obviously better for low visibility conditions. Problem was, it also meant the car would brake for overpasses. This can be a hardware problem in that there may be no way to distinguish between a radar return from a bridge or a truck. Need a better radar.

Mobile phones are responsible for the advancements in batteries, cameras, and processing that have enabled things like drones and EVs. Apple is putting LiDAR in their phones, but I doubt those sensors are comparable to what Waymo uses. Perhaps some day.

My point is, the auto industry is going to have to build advanced radar themselves, and can’t depend on just integrating an off the shelf part. This will make radar (and LiDAR) slower to advance and more expensive than cameras. So yeah, radars that sell a few 10s of millions per year for cars will benefit from economies of scale, but nothing like the economies of scale on tech that sells in the 100s of millions for phones.

I agree with your points, but the last one is understated: Over 4 billion smart phone imagers are sold per year (for 1-1.5 billion smartphones). The economies of scale are insane.

Adding cameras is cheaper than adding a new sub-system, but the priority is time-to-market / getting something to work, not cost. The tech isn't in the cost-reduction stage yet. It's a lot cheaper to leverage existing computer vision data and models than to create a second system in parallel.

Lidar has some advantages (and a few disadvantages) compared to CV. All else being equal, a system that also has lidar would be better, but that’s not the option. You have to make a trade-off.

I agree. And in fact, when people navigate space, they also use sound and touch and sometimes even scent. Cars have horns because even when we are trapped inside enormous boxes of metal, we still use our ears to help us navigate space.

It seems dumb to limit an autonomous car to only relying on vision.

Yup. And cheaper is valuable. But at this point in our AV learning curve, I’d be more comfortable with more info, not less.

Sure there is, just like a human distinguishes between a car honking and a honking noise on the radio. You also look around, and you use all the available information.

I can tell you as an actual engineer that it’s definitely not necessarily dumb. There’s a huge amount to be gained with simplification. I obviously can’t speak to the specifics of this system but there are a whole lot of brilliant people who have worked on this and it’s silly to dismiss them.

Radar has an inherent limitation due to the frequencies involved. Compared to visible light, the frequencies used for radar are pitifully low, which means long wavelengths.

Which means that to get a high resolution output you need a ginormous antenna. Like several feet across. Not gonna happen, and no amount of intelligent transmit signal shaping nor return signal analysis will overcome the hard physics there.
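The "ginormous antenna" point is just diffraction: angular resolution scales roughly as wavelength divided by aperture. A back-of-the-envelope comparison (antenna and aperture sizes here are illustrative assumptions, not specs of any real unit):

```python
import math

C = 3.0e8  # speed of light, m/s

def angular_res_deg(wavelength_m, aperture_m):
    """Rough diffraction limit: beam width ~ wavelength / aperture."""
    return math.degrees(wavelength_m / aperture_m)

# 77 GHz automotive radar (wavelength ~3.9 mm) with an assumed 15 cm antenna:
radar = angular_res_deg(C / 77e9, 0.15)
# 905 nm lidar with an assumed 2.5 cm aperture:
lidar = angular_res_deg(905e-9, 0.025)
print(round(radar, 2), round(lidar, 4))  # degrees
```

At 50 m, the radar's roughly 1.5-degree beam spans over a meter, about the width of a car, which is why telling a bridge from a truck is hard; the lidar beam at the same range is centimeters wide.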

LiDAR is all about overcoming that physics tyranny by using light-frequency transmission and reception. Now you have a hope of getting enough resolution to tell a building from a pedestrian. But you may well find that a headlight and a camera and a computer vision system do exactly the same thing, and with better fidelity.

The Google people went the other way. And having known a bunch of software engineers at Google (not in self-driving, admittedly) and a couple of people working at Tesla (including one on self-driving), I'm inclined towards Google just on personal and company-policy grounds.

But the reason why there’s so much more work on computer vision than on lidar or radar is that it needs more work. Converting a small set of 2D images into a 3D world model is really hard. But that’s what radar and lidar give you right out of the gate.

Now, granted, a radar system that brakes for overpasses isn’t really useful. That needs improvement. And the obvious avenue for improvement is making the hardware higher resolution, and that might not really be doable. But you can also make improvements on the software side: For instance, use vision for object shape and size, and only use the radar data to determine distances and radial velocities of those objects.
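That division of labor (vision for what and where in the image, radar for how far and how fast) can be sketched as a toy association step. Every field name and number here is invented for illustration; real trackers associate detections by bearing and gate them statistically.

```python
from dataclasses import dataclass

@dataclass
class Track:
    label: str            # from vision: what the object is
    bbox: tuple           # from vision: where it sits in the image
    range_m: float        # from radar: how far away
    closing_mps: float    # from radar: radial (closing) velocity

def associate(vision_obj, radar_return):
    """Toy fusion: vision names and sizes the object; the radar return
    on the same bearing supplies range and closing speed."""
    return Track(vision_obj["label"], vision_obj["bbox"],
                 radar_return["range_m"], radar_return["radial_v"])

t = associate({"label": "truck", "bbox": (120, 80, 300, 240)},
              {"range_m": 42.5, "radial_v": -3.1})
print(t.label, t.range_m, t.closing_mps)
```

An overpass would get a vision label like "bridge" and could then be excluded from braking decisions even though its radar return looks like a stopped truck.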

Yet another reason to use multiple sensors is that they have different failure modes, and in most cases, you can easily tell that they’re failing. If there’s a leaf stuck to the lens of one of your cameras, it’ll look all dark, and you know to ignore whatever comes from that camera until that changes, but the radar with a leaf stuck on it will be almost completely unaffected. It doesn’t take much processing to recognize that “it looks like it’s foggy”, and then you shift weighting from the visual cameras to the infrared, and so on.
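A crude sketch of that re-weighting idea, with thresholds and structure entirely made up for illustration:

```python
def sensor_weights(cam_brightness, radar_ok):
    """Toy re-weighting: if a camera frame is nearly black (lens blocked)
    or the radar self-test fails, shift trust to what's still working."""
    cam_w = 1.0 if cam_brightness > 0.1 else 0.0   # dark frame -> ignore camera
    radar_w = 1.0 if radar_ok else 0.0
    total = cam_w + radar_w
    if total == 0:
        return None  # no trusted sensors: fall back or stop safely
    return cam_w / total, radar_w / total

print(sensor_weights(0.02, True))   # leaf on the lens: all trust to radar
print(sensor_weights(0.60, True))   # both healthy: split evenly
```

A real system would use continuous confidence scores rather than hard cutoffs, but the principle is the same: each sensor's failure is detectable and the others cover for it.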

There’s been discussion here on reduced injuries, but I didn’t see discussion on reduced property damage claims (I searched this thread on ‘property’). AVIA’s (Autonomous Vehicle Industry Association) first-ever Robotaxi Report is here, and it reports an 88 percent reduction in property damage claims for Waymo’s autonomous vehicles. I did not see a date on this report.

https://cdn.prod.website-files.com/67ee365c25e6530594bd40c2/6930ab29af2f0cc7461e061b_Robotaxi%20Report.pdf

Gift link to a New York Times article from today’s paper in which the CEO of Waymo, Tekedra Mawakana, is interviewed on the subject.

My first Waymo trial was over a month ago. I’ve since used them 4x more. Once was on a dinner date with my wife. 3 observations:

• Waymo’s lane keeping wasn’t great. In California we have Botts’ Dots on the lane dividers and at one point the Waymo, on a 2-lane city street, was on the Botts’ Dots for some 40-50 yards at approx 30 mph. The Waymo was in the left lane so these Botts’ Dots were on the double yellow line separating directions of travel. A little disconcerting for me, but I never felt unsafe. Most of us human drivers, when we hit the Botts’ Dots, we correct much more quickly than Mr. Waymo did.

• At one intersection, at a red light, the Waymo needed to continue straight but had stopped in the dedicated right-turn lane, so it had chosen the wrong lane. Easy to do, especially when the painted arrow on the road is faded (I did not observe whether that was the case here). When the light turned green it changed lanes and continued straight. I did not turn around to see if a car behind wanted to turn right, only to be blocked by the Waymo; if it had been me driving and I’d blocked someone by choosing the wrong lane, I would’ve turned right to get out of their way and then rerouted to get back where I needed to go. I did not hear any horn honk behind me. Which raises a question: if there had been a car behind the Waymo and it had honked its horn, what would the Waymo do?

• At my destination, Waymo informed me it was looking for a safe place to pull over. There was a space, behind a dumpster that was on the side of the road. The problem in this instance was that a forklift driver (Adam; I know him; this was my motorcycle shop) was using that space to place items into the dumpster. Waymo did not recognize that, and as Adam was maneuvering the Waymo crept into his ‘temporary working area’. Waymo eventually took away Adam’s working space and Adam could not continue his work until I’d exited from the Waymo and it had left. In this case, Waymo was impolite. Waymo was rude. The following conversation I had with Adam was interesting, and somewhat comical.

Sure, but this is also because cameras provide richer information: resolution, color, and (because of cost) more of them, etc.

A company already has to collect massive datasets of video, go through and purge and label, then train and retrain and retrain and retrain. Doing this again for lidar doubles the effort. Then as @echoreply said, you have to stitch the data together.

You can’t make a system with just lidar, but you can with just cameras. If the goal is to build something that works well as fast as possible, it makes sense why you would use one type of sensor.

I like the idea of using lidar (or radar or ultrasound) and am glad Waymo is continuing with mixed-sensing. I just disagree that mixed-sensing was dropped for cost-reduction and not quality.

Lidar of course provides the critical component of precise range, and hence precise range rate. Range and rate that you and I, and camera-driven computer vision, are forced to estimate from a variety of cues, parallax being one of the lesser ones.
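To show why estimated range is so much worse than measured range: with stereo parallax, depth is focal length times baseline divided by disparity, so a one-pixel disparity error balloons at long range. The camera parameters below are assumed round numbers, not any real rig.

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f_px, baseline = 1000.0, 0.3   # assumed: 1000 px focal length, 30 cm baseline
print(stereo_depth(f_px, baseline, 6.0))  # 6 px of disparity
print(stereo_depth(f_px, baseline, 5.0))  # one pixel less
```

One pixel of disparity error at this range shifts the depth estimate by ten meters, and the error grows quadratically with distance, while lidar's range error stays roughly constant because it times the return directly.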

A very tight Lidar beam that the camera / visual system could aim at targets of interest to get precise range and range rate would be ideal. Similar systems exist in military systems and are both finicky and expensive. So far.

An inclination toward one company over another is reasonable, but it’s not a reason to declare one technical approach better than the other, or in particular to call an approach dumb. It’s not relevant that you know a small few engineers.

I’m an actual engineer. I’ve worked on complicated electronics systems and delivered millions of sensors to the automotive industry including some used in self driving development. One client was developing LiDAR for cars very early on and I got a brief tour. He hated the tech but obviously it works.

In any complicated system it’s a trade off of dozens of things and more than one approach can be equally correct. Engineering isn’t really black or white despite what a lot of laymen believe.

You’re one of the smartest people here and you lose all good sense when it comes to Tesla because you despise the company so much.

Btw, I worked as a supplier of components to the automotive industry from 2005 to 2020. None of them went to Tesla or Waymo as far as I know.