Self driving cars are still decades away

I get the impression that the prospect of full self driving, and the exciting business models it makes possible, is just technology pixie dust sprinkled liberally on a brand to make it attractive to investors.

I guess lowering aspirations to some simpler practical facility, such as an auto-valet so commercial vehicles are not damaged while being parked by drivers in depots… this is not something that generates the human-interest stories that journalists consider newsworthy, though it might impress corporate bean counters monitoring the cost of their fleets of workaday service and delivery vehicles.

I tend to agree that self driving cars are still decades away. Much like self driving trains, ships and aircraft. While some parts of a journey are regular and automation can assist, there are always circumstances where the variables multiply and it requires higher mental faculties to make good decisions. That part requires a human.

Highly confined, closed systems, like those little trucks in automated warehouses, are showing the way. They are not fully automated, but the prospects are much better.

Trying to retrofit automation on systems that were never designed for that purpose is not a good place to start. Too many variables.

This is exactly what I think: you basically need AI to have a self-driving vehicle that won’t occasionally do amazingly stupid/dangerous things that a human paying attention would easily deal with. Once we have AI, then we have the moral quandaries of creating an intelligence and then enslaving it to drive us around.

A different moral quandary is that even fairly early AIs like we (almost?) have now make different errors than humans do.

So there’s a whole class of dumb mistakes the AIs don’t make. Like tailgating or getting impatient or pissed off. At the same time there’s a different class of errors most humans don’t make, like driving headlong into the back of a parked fire truck.

We ought to judge the machines’ performance on net. But we won’t. How we get to a moral calculus that includes both sides of the scale when judging an action or judging the overall annual statistics will be quite a journey for society in general and for the law in particular.

Opinions tend to be highly influenced by the priorities of gentlemen of a certain age who see the motor vehicle as a key enabler of their personal agency.

They want to get to wherever they are going with speed, elan and style. No computer is going to tell them how to handle two tons of metal flying down the highway. Though they might be persuaded if it could drive them home when they are drunk.

The unglamorous, workaday commercial delivery vans and trucks are probably a better indication of the current level of tech because it is all about the bottom line.

Reuters is reporting that the DoJ has been conducting a criminal probe into Tesla related to autopilot and full self driving for over a year.

In summary, the ongoing investigation is not near a decision to bring any proceedings against Tesla. As the article discusses, official communications from Tesla have always made clear the limitations of autopilot and FSD. Other communications from Elon Musk, not so much. (Debates about Musk belong in a different thread, but it’s impossible to discuss these investigations without bringing Musk into it.)

I’m of a mixed mind over this. I do think that Tesla is under some responsibility for statements by its most prominent executive. On the other hand, anybody who reads the stuff associated with enabling and activating autopilot and FSD is made aware of the system’s limitations.

Tesla has always said the driver is responsible, and must remain alert and in control at all times.

All vehicles are capable of being operated in a dangerous manner, but exactly where does the manufacturer’s liability stop and the driver’s liability start?

My motorcycle owner’s manual has four pages of warnings about things I should and shouldn’t do.

I can set the cruise control in my 2000 Suburban and run into the back of a stopped emergency vehicle. In the owner’s manual there are several warnings about cruise control, but none say the driver needs to pay attention and disengage it for obstacles. I’m sure we would all think it silly to hold GM liable for me hitting somebody because I didn’t brake from cruise control. Of course, the cruise control has no capability of disengaging or adjusting based on conditions external to the vehicle.

My Model 3 manual, in the autopilot section, says “Never depend on these components to keep you safe. It is the driver’s responsibility to stay alert, drive safely, and be in control of the
vehicle at all times.” That seems pretty straightforward to me.

Wrapping it all up to the topic of this thread: a non-technological piece that will have to be sorted out is the shared liability among the many groups involved in self driving. The vehicle manufacturer and the manufacturers of sensors, cameras, and other components are just the start. The number of groups involved in software is staggering. There is software in each sensor, camera, and component. There is software to integrate the data from all of the sensors, AI software to label things the camera sees, and other software to make decisions based on the labels. Then there are the OS, libraries, and other software that support the self-driving software, and so on. How deep do you want to go? (This is not a new conundrum, and has been explored in scifi stories for a long time.)

Ford is shutting down Argo (if you recall, that was the combined autonomous vehicle group they formed with VW) to concentrate on L2 and “L3+” ADAS technology.

Also, this month was the fifth anniversary of this thread! How time crawls while you’re waiting for autonomous vehicles to show up.

It is wise to regard any innovation that prominently features Artificial Intelligence as its secret sauce as techno bait used to pump the stock of companies looking for large amounts of cash from investors.

I don’t doubt that AI, Blockchain, Quantum computing and other esoteric technologies will find their place eventually. But where and when and how they will enable profitable new business models is so much crystal ball gazing.

I can see how car companies can make money out of analysing huge volumes of car data and then using that to sell insurance policies to careful drivers. But I cannot see driver-assist modes evolving into some comprehensive personal chauffeur/taxi mode anytime soon. That is a much harder problem to solve. Too many variables.

I suspect some of Tesla’s brightest software developer talent is going to be directed towards the new acquisition, Twitter.

???

AI is already in use all over the place. You just don’t see it. AI is used in search engines, appliances, cars, games, manufacturing, etc. Your camera may have some AI in it. Certainly modern photo editing tools like Photoshop use AI extensively. And AI is about to revolutionize stock photography, commercial art, storyboarding, etc.

Blockchain is already in widespread use with digital currency and other applications. Quantum computing is newer, but we have already identified many applications for it.

“Artificial intelligence” is such a broad term that it’s not very useful here. When they replaced a person in a chair with elevator buttons, for example, that was AI.

When people talk AI and autonomous vehicles, it seems like they’re really talking AGI, artificial general intelligence, which implies a lot more autonomy than your examples (Is that term still broad? Yes. Is it less broad? Also yes.), and which we’re not even close to. There are better experts than me on the Dope on that topic, however, and I’d be happy if they weighed in.

Sure, for specific use cases and datasets that are regular, controlled, and consistently gathered, AI can help spot some patterns.

But the changing roadway that a car sees through its multiple cameras when it is moving is a huge stream of data that has to be processed in real time and reliably detect and evaluate dangerous situations. That is a world away from looking for patterns in a large static set of data like photographs. Both can use AI, but one problem is orders of magnitude more challenging than the other.

AI then becomes just a marketing term.

Same with Blockchain: it is a technology looking for an application, and its most famous application, digital currency, is fraught with problems. This does not stop it being sold to gullible investors.

It takes quite a while for technologies to find appropriate applications and then they become just a part of a toolset used in the background.

But they go through a phase of being a label that gets plastered on any product or service that needs a marketing boost, imparting a kind of credibility the product does not merit.

I guess if you measure the success of a technology by how much investment it raises, then AI is a sure winner. It may sell some expensive options on premium cars. But don’t expect it to drive you home anytime soon.

Do we have any true believers here who think we are on the edge of a self-driving breakthrough?

I’ve probably been the most skeptical person on this board when it comes to self-driving cars. I’ve always said the problems are in the edge cases which require human judgment, and in the area of defensive driving, which AI sucks at. In my opinion, we won’t have true self-driving until and unless we develop general artificial intelligence, and the path AI is on now does not lead to that. I’ve also said that our culture and legal system are not prepared to deal with AI making moral choices, such as swerving to miss one person and hitting another.

But I was responding to your broader claim that AI itself is still mostly hype. I was merely pointing out that AI is already in extensive use in multiple industries. And not just some tiny bit of ‘AI’ used for marketing purposes, but for fundamental functionality in large industries. AI has already proven its worth - it’s just being over-hyped by some people looking to cash in on ‘the next thing’.

AI is a poorly picked name for some math with lots of hidden layers. It’s not artificially intelligent by any of the old scifi definitions of artificial intelligence. It has been found to be incredibly useful in lots of applications, though. Remember 20 years ago when universal translators were a Star Trek fiction? Now you can point your phone at some text and in realtime see that text in a different language. Similar for audio.

Blockchain may be useful someday, but for now it’s mostly used to push technology that is (apparently) custom designed to commit fraud.

This part isn’t too difficult. The several year old GPU in the car detects objects and labels them in real time from multiple cameras. The biggest issue around this right now is adding even more labels to the detection system. I’m waiting for potholes, road debris, and large birds to be added.

This is the much, much more difficult problem. The car is very poor at taking all of the things it sees and knows, building an integrated environment from it, and then making appropriate decisions based on all of that information.

Some things are easy: there’s a slower-moving car ahead, so slow down. Other, even non-emergency decisions are handled very poorly. There’s a slower-moving car ahead, so change lanes; now you’re behind an even slower-moving car. Before changing lanes to pass a slow vehicle, make sure the other lane is clear ahead.

Do a better job integrating vision, route, map, and GPS. The car should know not to get in the right lane 500 feet before an upcoming left turn, even if the right lane is clear. On a freeway the car should know not to move to the right into an exit-only lane, unless it plans to exit.
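To make that kind of integration concrete, here is a toy Python sketch of route-aware lane selection in the spirit of the complaints above. Every name, lane convention, and threshold here is invented purely for illustration; a real planner weighs far more signals than this.

```python
# Toy sketch of route-aware lane selection. All names and thresholds are
# invented for illustration; this is not any real vehicle's logic.

def pick_lane(current_lane, lane_speeds, next_turn_direction, dist_to_turn_m):
    """Choose a target lane given observed traffic and the planned route.

    lane_speeds: dict of lane_index -> average speed of traffic ahead (m/s),
    with lanes numbered left to right starting at 0.
    """
    # Route constraints win: get to the turn side when the turn is close,
    # instead of greedily chasing the fastest lane.
    if dist_to_turn_m < 300:
        return 0 if next_turn_direction == "left" else max(lane_speeds)

    # Otherwise, only change lanes if the target lane is meaningfully
    # faster *ahead* -- avoiding the "pass into a slower lane" mistake.
    best = max(lane_speeds, key=lane_speeds.get)
    if lane_speeds[best] > lane_speeds[current_lane] + 2.0:
        return best
    return current_lane
```

For example, `pick_lane(1, {0: 30.0, 1: 20.0, 2: 22.0}, "left", 150)` moves to lane 0 because the left turn is imminent, even though lane 0 may not be the fastest choice locally.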

Particularly for level 2 full self driving, I’m perfectly happy if the car decides a situation is too unusual and punts back to the human driver. I’m very disappointed when the car can’t handle non-emergency situations, like making a left turn at a flashing arrow with no oncoming traffic.

I wonder whether Musk’s software developers will be skilled enough to deal with the bots, trolls, chimeras and demons that pervade and pollute the Twitter-verse. Surely a doddle of a job for AI. Far easier than getting a car to avoid sudden road hazards?

What’s the saying, “oh, dear sweet summer child”? Unless you’re being sarcastic. No, content moderation is an even harder problem than self driving, because for many driving situations there are going to be right answers and acceptable answers, but content moderation just is not like that.

I think a panel of thoughtful laypeople would be able to agree on the appropriate decisions to make in a given traffic situation—a ball rolls out from behind a parked car, slow down because there’s likely to be a child following the ball, etc.

Not even people can agree on content moderation, so training AIs to do moderation to reasonable people’s satisfaction is impossible. If the AIs are perfect, which they’re not, they will still get it “wrong” for many people in many cases.

If that was just a post to bash Musk for making impossible promises about AI in yet another context, then it is probably best saved for a different thread (where I’m happy to pile on).

I think that in several decades we may have true self-driving cars, but that will certainly come with many structural changes to roads and transportation to allow the AI to work properly. We will never have good content moderation at scale.

That also describes the human brain, though. It’s also unclear whether AGI is necessary for any task we might care to solve. Or if GI is even a real thing beyond a collection of individual capabilities.

Neural nets are not the sum total of AI. A lot of AI today combines neural nets with deep learning and other techniques.

Deep learning is just neural nets (but “deeper”). There has been a great deal of advancement when it comes to how these nets are configured, from convolutional nets to transformers and so on, and improvements in how the nets are trained, but they’re all still “sum up a bunch of weighted inputs and run the result through a non-linear function”.
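That one operation is simple enough to show in a few lines. Here is a minimal plain-Python sketch: each unit computes a weighted sum of its inputs plus a bias and passes it through a non-linear function (ReLU here, one common choice), and a “deep” net is just layers of these feeding the next layer. The weights below are arbitrary toy values, not a trained network.

```python
def neuron(inputs, weights, bias):
    """One unit: weighted sum of inputs plus a bias, through a non-linearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, z)  # ReLU, a common non-linear function

def layer(inputs, weight_matrix, biases):
    """A layer is just many such units sharing the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Toy two-layer network: 2 inputs -> 3 hidden units -> 1 output.
hidden = layer([1.0, -2.0],
               [[0.5, 0.1], [-0.3, 0.8], [1.0, 1.0]],
               [0.0, 0.1, -0.2])
output = layer(hidden, [[1.0, -1.0, 0.5]], [0.0])
```

Convolutional nets and transformers add structure on top (weight sharing, attention), but every unit still reduces to this sum-and-squash step.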

Which is perfectly fine, since it’s been shown that essentially any function can be approximated using just this one operation. There’s no need for anything more.
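The result being invoked here is the universal approximation theorem (Cybenko 1989, Hornik 1991): a single hidden layer of these weighted-sum-plus-nonlinearity units can approximate any continuous function on a compact set to arbitrary accuracy, given enough units. Roughly:

```latex
f(x) \;\approx\; \sum_{i=1}^{N} a_i \,\sigma\!\left(w_i^{\top} x + b_i\right)
```

Note that the theorem says nothing about how large N must be or how to find the weights, which is why advances in architecture and training still matter.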

We’ll continue to see growth in the size of the networks, and I’m sure we’ll have continued advancement in the topology, but the basic approach still has plenty of runway left. Maybe infinite runway.

Neural nets are not the sum total of ML, but they are a lot more of ML these days than they were in previous decades. Now that we have the compute power to train deep neural nets efficiently, a lot of other techniques that were popular in the 90s and early 2000s are non-competitive.

I’ve kind of lost track, what’s being argued about AI vis-a-vis autonomous vehicles?