I was in downtown San Francisco on Friday and saw at least six Cruise driverless “taxis”, I guess you could call them. In the little time I watched I didn’t see them making any mistakes. They seemed a bit cautious, slowing down if anything was remotely close.
SFFD Chief is not a fan.
The Fire Department's incident reports describe robotaxis:
- Running through yellow emergency tape and ignoring warning signs to enter a street strewn with storm-damaged electrical wires, then driving past emergency vehicles with some of those wires snarled around rooftop lidar sensors.
- Twice blocking firehouse driveways, requiring another firehouse to dispatch an ambulance to a medical emergency.
- Sitting motionless on a one-way street and forcing a firetruck to back up and take another route to a blazing building.
- Pulling up behind a firetruck that was flashing its emergency lights and parking there, interfering with firefighters unloading ladders.
- Entering an active fire scene, then parking with one of its tires on top of a fire hose.
There was a guerrilla protest of autonomous taxis last week in San Francisco, where activists placed traffic cones on the hoods of self-driving cars to immobilize them. There are some amusing pictures and videos in the linked article.
NO LONGER DECADES AWAY!!!
For unusually large values of “this”.
But it might put a useful pop in the TSLA stock price.
I have a Tesla, and the cruise control doesn't work right; it once almost caused a major accident. Here is a thread I started on software issues on my 2023 Tesla Y. There is no way I would trust a self-driving Tesla anytime in the near future.
Musk says this every year. One year he'll be right.
They are legitimately making significant progress. It’s just that the problem keeps turning out to be more difficult than anticipated.
Some guys did a Tesla vs. Waymo vs. Cruise “race” in San Francisco:
Short version: the Tesla won, by a lot. In large part because it could use the freeway and go faster than 35 mph. No significant safety issues. The Tesla is not using LIDAR or hi-def maps.
Of course, there was a person in the front seat of the Tesla, and not in the others (though they still have remote operators). The Waymo and Cruise are clearly not ready to operate outside their little geofenced area or without a remote backup, and the Tesla needs a monitor. But it's operating at a vastly higher level than it was a year ago, and the year before that, and so on. So… maybe they are actually getting pretty close.
Level 4 has always been a weird one for me. My friends in Austin live in a gated community that basically consists of one big loop road. Why can't we take your Tesla, assuming it has FSD, geofence it to only that neighborhood, and say Level 4 has been achieved?
Yeah, the levels don’t make much sense to me. Tesla already has “summon” for parking lots, where it’ll come to you without a driver. Is that Level 4? Obviously not. At what point on the spectrum between a car that can pull out of a parking space vs. one that can drive anywhere but a few dirt roads do we call something level 4?
Actually dirt roads may not be the problem; instead it may be winter snowstorms on icy roads…
Well, it’s just an example. The point is that your geofence can cover 0.001% of the roads or 99.999%. Calling all of that “level 4” makes the levels fairly useless.
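For what it's worth, the "geofence" part really is that arbitrary from a software standpoint. Here's a minimal, purely hypothetical sketch in Python (the coordinates, names, and bounding-box shape are all made up, and real operating-domain checks cover far more than position) showing that the fence itself is just a containment test, whether it encloses one loop road or a whole city:

```python
# Hypothetical sketch (not any vendor's actual code): a geofenced operating
# domain is, at bottom, just a containment test on the vehicle's position.
# The coordinates below are made up; the box could enclose one loop road or
# an entire metro area and the check would look the same, which is why the
# "Level 4" label by itself says little about capability.
from dataclasses import dataclass

@dataclass
class GeofenceBox:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        """True if the position is inside the permitted operating area."""
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

# Made-up box around a small gated-community loop in Austin.
neighborhood = GeofenceBox(min_lat=30.267, max_lat=30.269,
                           min_lon=-97.770, max_lon=-97.768)

print(neighborhood.contains(30.268, -97.769))  # True: driverless mode allowed
print(neighborhood.contains(30.300, -97.700))  # False: outside the fence, hand back control
```

The hard part is everything the fence is standing in for, not the fence itself.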
Y'see, had they attempted their experiment with robocars in, say, Philadelphia instead of San Francisco, the solution would have been less elegant but rather more final.
They are legitimately making significant progress. It’s just that the problem keeps turning out to be more difficult than they’re willing to publicly admit.
I'm convinced that the actual engineers working on this are aware of every single issue and blocker that's been raised in this thread, and quite possibly more we haven't even thought of.
For normal, non-tricky, non-weird situations there is still one major bug that causes probably 75% of my FSD disengagements. The car frequently makes incorrect lane decisions.
- it will move into turning lanes when the route does not have a turn
- it will try to move into the left lane for just normal cruising, on both freeways and surface streets
- it will not move into the appropriate lane in anticipation of an upcoming turn
If they can get those things fixed, it would probably double the number of complete FSD trips I could make.
Sure, there’s certainly plenty of that. But they aren’t the only ones to blow past their deadlines. And even among the engineers, I’m sure there is a huge amount of uncertainty about how to fix the problems. At a certain level, it just comes down to “AI will fix it eventually.” But it’s unknown how much more training or hardware that will require.
Hehe, in Dallas I’m not sure anyone would have waited for a protest. They’d probably all just disappear one by one.
This is a characteristic of complex systems. Confusing 'complex' with 'complicated' is a common failing, even among engineers.
A brief digression:
A watch movement is complicated. It's even called a 'complication'. Complicated items look intricate and difficult, but can be understood through reduction - breaking them down into their constituent parts until you understand what each piece does. Reductionism is a very powerful way of understanding complicated things.
Complexity is different. Complex systems are those where the interactions between the parts are as important as, or more important than, the parts themselves. Complex things can look simple at first glance, and get more difficult as you drill down. You learn very little from reductionism, because the parts of a complex system behave differently on their own than when they are part of the system.
Some things are both complicated and complex. Trying to understand them as just complicated things will lead you down a bad path. You have to know when you're entering zones of complexity rather than just complication.
Complexity has been the bane of software engineering for a long time. We’ve tried to manage it by function point analysis, agile methods, etc. But you almost always find issues that the design didn’t take into account once you start building, blowing out budgets and schedules.
It’s easy to think that you’re ‘99% of the way there’, only to find out that the 1% left expands into 50 other things once you really tackle it, and any one of those 50 things could lead you down another rabbit hole.
Self-driving is probably the most complex computing system we've ever tried to build. I've been saying for decades on this board that full self-driving is a long, long way away, because I think you need an AGI or something close to it to really do it, and until last year I didn't see any AI research that was leading there. Now I'm not so sure.
My guess is that FSD will ultimately require something like a Large Language Model fine-tuned by lots and lots of driving. But we don't have any that could possibly respond in real time to sub-second issues, so we're probably a few hardware generations away from that still. But maybe there are techniques to add real-time responses to an LLM that I don't know about.
Humans don't make intelligent decisions on a sub-second timescale. We barely have reflex responses at that timescale. If some incident requires action beyond slamming on the brakes at a sub-second level, the human driver has already failed.
We might need LLMs in self-driving systems to decide, say, how to interpret a road sign, or figure out the best course of action if there’s something unusual happening like a firetruck in the road. But those can happen over multiple seconds.
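The split described in the last two posts - a reflex-level path that reacts sub-second, plus an LLM-style reasoner that gets multiple seconds for unusual scenes like a firetruck in the road - can be sketched as a two-rate loop. Everything below is a hypothetical illustration in Python, not anyone's actual architecture; the module names, timings, and the `scene_queue`/`advice` structures are all assumptions.

```python
# Hedged sketch of the split described above: a fast loop that handles
# sub-second reactions (braking for an obstacle) and a slower "reasoner"
# consulted asynchronously for unusual scenes (e.g. a firetruck blocking
# the lane). All names and timings here are hypothetical.
import queue
import threading
import time

scene_queue: "queue.Queue[str]" = queue.Queue()
advice = {"plan": "proceed"}  # shared, updated by the slow reasoner

def slow_reasoner() -> None:
    """Stand-in for an LLM-style module: seconds-scale, handles rare situations."""
    while True:
        scene = scene_queue.get()
        time.sleep(2.0)  # pretend inference latency of a couple of seconds
        advice["plan"] = f"yield and reroute ({scene})"

def fast_control_loop(sensor_frames) -> None:
    """Runs every 50 ms: reflex-level decisions never wait on the reasoner."""
    for frame in sensor_frames:
        if frame.get("obstacle_close"):
            print("BRAKE")  # sub-second reflex path
        elif frame.get("unusual_scene"):
            scene_queue.put(frame["unusual_scene"])  # ask the slow path, don't block
            print("slow down, current plan:", advice["plan"])
        else:
            print("cruise, current plan:", advice["plan"])
        time.sleep(0.05)

threading.Thread(target=slow_reasoner, daemon=True).start()
fast_control_loop([
    {},                                    # normal driving
    {"unusual_scene": "firetruck ahead"},  # hand off to the reasoner
    {"obstacle_close": True},              # reflex: brake immediately
] + [{}] * 50)                             # keep looping while the reasoner thinks
```

The point of the sketch is only that the two timescales don't have to live in the same component: the reflexes can be dumb and fast while the "what is a firetruck doing here" reasoning takes its time.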
Ehh, not really. Mark Higgins does a lot more than slam on the brakes in this clip. I've done similar things in competition, but not at those speeds, and not with that kind of price if I didn't pull it off.
I hope it's obvious that I'm talking about typical humans, not people at the top of their game who have practiced thousands of hours at their craft. And your clip still shows someone who is reacting instinctively, not engaging in an intellectual process (it's just that "instinct" covers a lot more ground than average due to training).
For typical drivers, many locales use >2 seconds as their standard reaction time:
A few states, including California, have adopted a standard driver reaction time of 2.5 seconds. The United Kingdom's Highway Code and the Association of Chief Police Officers (ACPO) Code of Practice for Operational Use of Road Policing Enforcement Technology use 3.0 seconds for driver reaction time. The National Safety Council (NSC) recommends a minimum of 3 seconds of spacing (a 3-second reaction time) between vehicles traveling in the same lane.
They are using those reaction times to account for the reality that people look at their phones, adjust the radio, talk to passengers, and do other things that take their minds and eyes off the road. It's an 'assume the worst' kind of number.
When paying attention, human reaction time is regularly found to be around 300 milliseconds. With training, it can be as low as 150-200 ms.
If I am driving and paying attention to the road and something runs out in front of me, I guarantee I’m not taking 2-3 seconds to brake. It’ll be well under a second, including the time my foot takes to get to the brake pedal.
J.D. Power says that the average reaction time from perception of an incident to application of braking is more like 0.75 seconds. But because so many people are distracted, it's common to use 1.5-3 seconds.
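To put those numbers in distance terms, here's a back-of-the-envelope sketch (plain Python; the speeds are my own picks for illustration, and the formula is just distance = speed × reaction time) converting the 0.3 s attentive, 0.75 s average, and 2.5 s assumed-distracted figures quoted above into meters traveled before braking even begins:

```python
# Quick arithmetic on the figures quoted above (0.3 s attentive, 0.75 s
# average, 2.5 s assumed-distracted): how far does a car travel before the
# driver even touches the brake? Speeds chosen for illustration only.
MPH_TO_MPS = 0.44704  # 1 mph in meters per second

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance covered during the driver's (or system's) reaction time."""
    return speed_mph * MPH_TO_MPS * reaction_s

for speed in (35, 65):
    for rt in (0.3, 0.75, 2.5):
        d = reaction_distance_m(speed, rt)
        print(f"{speed} mph, {rt:>4} s reaction: {d:5.1f} m before braking starts")

# At 65 mph, the gap between a 0.3 s and a 2.5 s reaction is roughly 64 m of
# travel before the brakes are even touched.
```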
I don't want my AI judged by the standards of distracted drivers, so it had better be able to respond to situations at least as fast as I can. As I said, sub-second. And the LLM will have to be involved in those decisions, because that's where it gets complex.