Communication is trivial. Once we solve this small problem:
But secure, reliable, spoof-proof communication is a dozen orders of magnitude harder (read as "more expensive") than a baseline universal inter-car communications standard. That is the rock upon which the idea of inter-communicating cars, or roadways, comprehensively founders.
With the car in drive but at a complete stop, a four-foot-tall child was detected as a human, as indicated by the FSD beta's on-screen visualization. The visualization matched the child's movement around the car.
So, in at least this one case, the computer did recognize a child object as a person. I did not test if the car was willing to drive towards the child.
It could very well be different for a three-foot-tall child, a child partially obscured by other objects, etc.
Cool. Thanks for testing! I have used FSD only minimally, but do use the visualization all the time. I’ve found that it detects pedestrians very well, often detecting them before I do. Though I have not tested with children.
O’Dowd is untrustworthy. I don’t mean that his claims are completely without merit, just that he is not a source that one should consider factual. And the same goes for Tesla itself, aside from claims where they’d suffer significant legal consequences if they were false (financial, etc.). It would be nice to have independent, third-party evaluations. Reviewers like Consumer Reports are a partial step, but they are focused on the refined consumer experience as opposed to tracking the technology and determining if it is safe in a beta environment.
O’Dowd is not trying to act as that kind of neutral party. His own Twitter description is pretty clear:
He’s not trying to hold Tesla responsible for actual failures, or change the regulatory environment with regards to ADAS or self-driving, or anything like that. He’s just trying to get Tesla FSD banned specifically. And maybe not even that; it’s likely that the whole thing is simply a stunt that rides on the media attention that Tesla already gets.
As an aside, apparently Tesla faced a lawsuit in Germany similar to the one in California about the advertising language surrounding Autopilot and FSD. There was some kind of judgment against them, but it seems that Tesla successfully appealed it:
We’ll see what happens, but I find it likely that the California case will end the same way. The current limitations are obvious from the surrounding context.
In their usual style of converting a tweet into a 600-word article, Electrek points to a Twitter thread by Tesla's head of Autopilot software.
The point of the Electrek article, and one of the points of the thread, is that Autopilot is being used to prevent about 40 pedal misapplication accidents per day. This doesn’t make the news, because it is very difficult to make a story out of something not happening.
When the car detects an obstacle in front of or behind it and the throttle pedal goes to 100%, Autopilot categorizes this as a pedal misapplication and will stop the car. The video linked below shows the view from a Tesla's front and rear cameras, along with some graphs of vehicle dynamics. The throttle goes to 100%, the car begins moving toward a person standing next to a parked car, and then stops so quickly that water droplets are thrown off the trunk. An accident with near-certain injury is avoided.
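The trigger described above reduces to a fairly simple check. Here is a minimal sketch in Python of that kind of logic; the class, field names, and thresholds are my own assumptions for illustration, not Tesla's actual implementation.

```python
# Minimal sketch of the pedal-misapplication check described above.
# All names and thresholds here are illustrative assumptions, not Tesla's code.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float         # current speed in metres per second
    throttle_pct: float      # accelerator pedal position, 0-100
    obstacle_dist_m: float   # nearest obstacle in the direction of travel
                             # (float("inf") if nothing detected)

def is_pedal_misapplication(s: VehicleState,
                            throttle_floor: float = 98.0,
                            obstacle_range_m: float = 5.0,
                            max_speed_mps: float = 3.0) -> bool:
    """Full throttle, at low speed, aimed at a nearby obstacle."""
    return (s.throttle_pct >= throttle_floor
            and s.obstacle_dist_m <= obstacle_range_m
            and s.speed_mps <= max_speed_mps)

def commanded_throttle(s: VehicleState) -> float:
    """Throttle actually honoured: zeroed out when a misapplication is suspected
    (the real system also applies the brakes, per the description above)."""
    return 0.0 if is_pedal_misapplication(s) else s.throttle_pct
```

The hard part, presumably, is deciding when not to intervene, e.g. when the driver really does need full throttle to get out of the way.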
Anybody reading this IMHO thread will know that I'm still very skeptical of true full self-driving coming anytime soon, but this kind of story is one of the reasons I remain very hopeful for the technology: humans are really, really bad drivers.
I wonder how many of those pedal misapplications are caused by drivers not actually driving at the time. IOW, they're mentally disengaged, their feet just sitting there on the floor, when a surprise appears and they suddenly need to shift their legs into driving position and then apply what they hope is the brake?
IOW, is “autopilot” causing these pedal misapplications by proxy? There aren’t that many Teslas in the USA as a percentage of the total passenger car fleet. IF they were truly preventing 40 unintended acceleration accidents a day, AND autopilot was not somehow increasing the likelihood of these accidents occurring in Teslas, THEN that would suggest the rest of the non-Tesla fleet should be having thousands of such accidents per day.
The fact that we rarely read of these sorts of accidents suggests they're not as prevalent in the non-Tesla fleet.
Admittedly we don’t have full telemetry on the non-Tesla fleet. Which itself is an interesting regulatory / enforcement issue raised more than once in this thread.
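To put rough numbers on that scaling argument (both fleet sizes below are loose assumptions for illustration, not sourced figures):

```python
# Back-of-envelope scaling of the "40 preventions per day" claim.
# Both fleet sizes are rough assumptions, used only to illustrate the argument.
teslas_in_us = 2_000_000          # assumed US Tesla fleet
us_light_vehicles = 280_000_000   # assumed total US light-vehicle fleet
tesla_preventions_per_day = 40    # figure quoted from the Twitter thread

rate_per_vehicle = tesla_preventions_per_day / teslas_in_us
implied_fleetwide_per_day = rate_per_vehicle * us_light_vehicles
print(f"{implied_fleetwide_per_day:,.0f} incidents/day implied fleet-wide")
# -> roughly 5,600/day under these assumptions, i.e. "thousands per day"
#    if Teslas were representative of the whole fleet
```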
These are instances where the car is stopped or moving very slowly, and the driver intends to press the brake, but instead presses the throttle.
It is possible that some of them occur when autopilot is engaged, for example if autopilot has slowed or stopped in traffic, and then the driver for some reason intends to press the brake but misses. If that's a risk, then it's a risk associated with any speed-aware cruise control, and not special to Tesla's autopilot.
Other cases, such as the one in the video, occur in parking lots, so autopilot would not be engaged.
40 preventions per day seems high, but no other carmaker has centrally collected telemetry to be able to know this, so saying there is something special about Teslas that makes them prone to pedal confusion could just be blaming the messenger.
They blow up big time every few years. The most recent one was with Toyota, where legitimate cases of the floor mat trapping the gas pedal were conflated with stopped cars suddenly speeding out of control on their own. Toyota, of course, totally mishandled this, making them look even worse.
Then there was the whole Audi unintended acceleration scare in the 80s, which if I recall correctly, was based on pretty much nothing but terrible reporting from 60 Minutes.
I was going to mention the same thing. We get these media scarefests all the time. And while they’re not all completely without merit (trapped floormats, etc.), ultimately it happens all the time that drivers just press the wrong pedal. Wikipedia has a whole article on the subject, of course:
The US NHTSA estimates 16,000 accidents per year in the United States occur when drivers intend to apply the brake but mistakenly apply the accelerator.[3]
Tesla is on that page as an example, but there is only an NHTSA petition. Too early to say if there is anything real. Plus, even if Teslas are more prone than the average, it might have more to do with high performance than anything else. Or the aforementioned telemetry.
Perhaps EVs with one-pedal driving capability (removing your foot from the accelerator slows down and stops the car) are more prone to people pressing the wrong pedal. If you’re used to rarely moving your foot to the brake, your panic reaction could be to step down on whatever’s there.
I won't say it's impossible, but as long as we're speculating, I'd suggest the exact opposite might be true: one-pedal driving trains a person that just letting off the pedal you're pushing is a good way to slow down. Just don't touch anything and you'll come to a halt. That's especially true in the kind of low-speed parking-lot situations where these mix-ups seem to happen most frequently, in contrast to the usual behavior of creep mode in an ICE, where you have to deliberately apply the brake to come to a halt.
It would not shock me if both cases were true, depending on the person or circumstances, but we’d need more data to figure out which was dominant.
I’m not sure if I feel better with algorithms and logic controlling the two tons of steel I’m traveling in at 70MPH, or potentially bored and tired humans playing a video game.
Don’t forget, it’s also potentially bored and tired humans in the car next to you.
Conceptually, I think the idea that the autonomous car will, at a minimum, avoid a collision or a potentially hazardous path until an attentive human driver actively disengages it is a safe option. It may not be the most convenient option at the time, but it would be safe.
The alternative, expecting the computer to drive itself while waiting for a human to first recognize a dangerous situation and THEN recognize that the computer isn't reacting properly, is just unsafe on its face.
Note that this article isn't talking about someone watching in real time, ready to take control if they see the car about to hit something. The "human touch" refers to someone who could remotely take control if the car gets stuck in some situation where it doesn't know what to do, like a confusing construction zone or a spot where people are directing traffic.
I wouldn’t say that kind of assistance will always be necessary, but it probably will be long after cars meet the other requirements of “self driving.”
I let the Tesla drive on some mountain roads today, and I was very impressed with how it handled bicycles. It did a better job than most human drivers. Roughly, the pattern was as follows (sketched in code after the list):
If the bicycle was far enough over on the shoulder, the car just kept going, no problem.
If the bicycle was close to the traffic lane, the car moved toward the yellow line to give the bicycle extra space.
If the bicycle was on the right side of the traffic lane and there was no oncoming traffic, the car crossed halfway over the yellow line to give the bicycle space. If there was oncoming traffic, it slowed down and waited for a gap before crossing the yellow line to get around the bicycle.
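That pattern amounts to a small decision procedure. Here is a rough sketch of my reading of it, with invented names and states, not anything from Tesla's actual planner:

```python
# Rough sketch of the observed bicycle-passing behavior.
# States, names, and return values are invented for illustration.
def plan_bicycle_pass(bike_position: str, oncoming_traffic: bool) -> str:
    """bike_position: 'shoulder', 'edge_of_lane', or 'in_lane'."""
    if bike_position == "shoulder":
        return "continue"                      # far enough over: no action needed
    if bike_position == "edge_of_lane":
        return "shift_left_within_lane"        # hug the yellow line for extra clearance
    # Bicycle occupying the right side of the traffic lane:
    if not oncoming_traffic:
        return "straddle_center_line_to_pass"  # take half the oncoming lane
    return "slow_and_wait_for_gap"             # then cross the line and pass
```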
The car also stopped at a crosswalk to let people cross. The people were not yet in the road but clearly wanted to cross, and FSD let them, unlike the two human drivers in front of me.
Traditionally, Autopilot (like most lane-keeping systems) has depended heavily on radar. But that dependence means poor handling of stationary objects, since the only means radar gives it of distinguishing fixed roadside objects (signs, litter, etc.) from cars is their speed: if it's stationary, it gets ignored. The older vision systems weren't good enough to overcome this problem in all cases, such as when driving directly toward a setting sun.
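In other words, a radar-centric pipeline ends up filtering on measured over-ground speed. A minimal sketch of that failure mode, with assumed field names and sign conventions, not any real Autopilot code:

```python
# Why a speed-based radar filter drops stopped cars along with road furniture.
# Field names, sign conventions, and the threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float          # distance to the detection
    range_rate_mps: float   # Doppler closing rate (negative = approaching)

def keep_moving_targets(targets, ego_speed_mps, min_abs_speed_mps=0.5):
    kept = []
    for t in targets:
        # Over-ground speed of the target: ego speed plus measured range rate.
        absolute_speed = ego_speed_mps + t.range_rate_mps
        if abs(absolute_speed) > min_abs_speed_mps:
            kept.append(t)  # moving vehicles survive the filter
        # A stopped car, a sign, and a bridge abutment all have over-ground
        # speed ~0, so they are discarded together unless vision rescues them.
    return kept
```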
FSD is purely vision-based and uses much more sophisticated processing to determine its surroundings. The latest versions of Autopilot are also pure vision (the cars don't even have the radar units), but I don't know if anyone has done a comparison there.
If he’s going to do all that with cameras, he’d better have little wipers on all of them. My backup camera is basically completely covered within minutes if I drive in snow or slush.
It will be interesting to see how they deal with ground fog and snowstorms as well. Around here, it's not uncommon to have 20-30 ft of visibility in a snowstorm. I assume the car will just drive really slowly, or go offline and make the human drive.
That said, I’m not sold on ultrasonic sensors. I have them all over my car, and keep all the features that use them turned off. They trigger false positives, and don’t always trigger when there’s an obstacle behind the car. So to me, they are useless.
I wouldn’t say the ultrasonic sensors on the Model 3 are useless, but they definitely have false positives and aren’t always accurate. My car dings constantly when backing out of my garage.
I wonder how they'll deal with the front of the car. There are currently no cameras there, and it's pretty much a blind spot. Some cars have front-mounted cameras; they'd have to add them to the 3/Y.
Removing the ultrasonics doesn't have much to do with FSD. They don't do anything above a few miles per hour and have a range of just a few feet.
I don’t see how the problem is any different from human driving. If the visibility is good enough for a human, it’s good enough for a car, and vice versa.