Self-driving cars are still decades away

To me, the biggest problem is calling these automated driver-safety features “autopilot” or “self-driving,” which encourages people to think the vehicle is capable of primary autonomous operation, instead of serving as a secondary backup that can perceive hazards and apply necessary emergency actions (veering away from obstacles or people, braking to avoid striking one) faster than a person could.

The second biggest problem is the idea that fully automated operation - “I am a passenger in this pod and can sleep, eat, futz around on the internet” - is in any way an achievable goal. At least in the context of integration with human-operated vehicles, in open (real-world) terrain, with natural phenomena (weather, animals, etc.), as opposed to the fully automated, driverless trains and shuttles of an enclosed environment like an airport.

A far more realistic and useful goal, one we can pursue right now with the right framing, would be to expand the use of the secondary safety systems: lane-drift assist, blind-spot monitoring, obstacle detection, and so on. Things that help the driver be safe, but with the understanding that the driver is still “the driver.”

To really be feasible, a world with fully automatic, self-driving private vehicles (with apologies to the Transformers franchise, I’ll call them “Autobots”) would require three things to work smoothly and safely:

  1. Well defined areas of an Autobot domain. All major highways, cities and towns, and so on - but drive into Yellowstone National Park or something, and you’re back on your own. (Actually, now that I’ve been to Yellowstone I know it’s got a fair amount of car traffic, so maybe not the best example. And driving across mostly empty, flat and straight highway pavement for hours on end while crossing most of Nevada is exactly when I’d most have liked to let an Autobot do the driving.)

Anyway, call that the Autobot Zone, or A-Zone. It will grow over time.

  2. All other vehicles must be Autobot-enabled, or updated with overrides that defer to The Master Programming of the Autobots when in an A-Zone. No driving that classic 1969 Thunderbird in the A-Zone unless it’s been fitted with a unit that will brake or steer to yield to an Autobot’s functioning (call it an Autoyoke).

Autobot cars will still have secondary programming to try to account for non-Autobot actions, be that human-operated vehicles, pedestrians, bicyclists, etc., but that will be the fallback - the primary assumption will be that Autobots rule in the A-Zone, and enforcement (human or automated) will exist to keep the roads free of un-yoked non-Autobots.

  3. All pedestrian, bicyclist, and other human but non-Autobot traffic would have to be 100% controlled in Autobot-managed zones. No more jaywalking, biking on and off the road for fun, or running red lights or stop signs after looking ahead, within the A-Zone.

Is all this worth it?

Chances are, what they’ll do is some scheme where all the “autobots” within some distance of one another (probably varied by speed) will communicate a specified set of information, which would probably include nearby vehicles’ position, direction, speed, and acceleration, as well as their own. And I’d imagine that certain conditions would be relayed even further down the line - traffic stoppages, accidents, etc. - so that a car 10 miles back would have the information to take a different route before it even has to slow down.

So your car might know the information about the cars within say… 2 miles of itself, and all cars within that same distance would know about it, and what it knows.

So if a 1957 Buick is on the road within 2 miles, the cars would all be aware of it and able to react appropriately.
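The scheme sketched above can be made concrete. This is a minimal, purely illustrative sketch (all message fields, names, and ranges here are my assumptions, not any real V2V standard): each car broadcasts its own kinematic state to everything within a couple of miles, and hazard reports get relayed much further.

```python
# Hypothetical V2V sketch: state messages shared with nearby cars,
# hazard messages relayed much further down the line.
# All field names and distances are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StateMsg:
    car_id: str
    position_m: tuple    # (x, y) road coordinates in meters
    heading_deg: float
    speed_mps: float
    accel_mps2: float

@dataclass
class HazardMsg:
    kind: str            # e.g. "stoppage", "accident"
    position_m: tuple
    relay_range_m: float = 16000.0   # ~10 miles, per the post above

def neighbors(me, fleet, range_m=3200.0):   # ~2 miles
    """Cars close enough to exchange full state messages with `me`."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return [c for c in fleet
            if c.car_id != me.car_id
            and dist(c.position_m, me.position_m) <= range_m]
```

Under this sketch, the 1957 Buick below is simply a tracked object that other cars' sensors detect and insert into everyone's shared picture, even though it broadcasts nothing itself.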

Where it would get squirrely, perhaps, is in that transition period where regular cars and “autobots” are in roughly equal proportions - the “autobots” would have to rely much more heavily on their internal collision-avoidance algorithms, without a large network of other cars feeding them data.

Welp, for me that’s the rub. After the fact, yeah, I guess their reaction time is quicker. But before the fact, I will most certainly be quicker - in the sense of intuiting trouble spots before they happen and taking proactive measures. Can one of these autobots detect a weaving idiot five cars ahead, who could cause a multi-car pileup if he misjudges a lane change? Or will it only react once the first collision happens?

I’ve seen demonstrations of General Motors’ self-driving technology on TV shows. They have a camera system that watches the driver. If the driver is excessively distracted (say, with their head turned to talk to the front-seat passenger), it will first warn the driver and then, if the driver still does not pay attention to the road, bring the car to a complete stop.

Aaand Mercedes is out:

“Mercedes-Benz has decided to stop its efforts to develop and roll out a full self-driving solution for its passenger cars. Instead, the veteran automaker will be focusing on the creation of advanced driver-assist technology for its fleet of long-haul trucks.”

The co-founder of self-driving startup Starsky Robotics is not optimistic about the future of self-driving cars.

It seems like the industry jumped straight to a hard task with significant negatives when things don’t work, and tried to start at that point without having a foundation of 3D environment modeling and navigation.

I think a generalized system of 3D environment modeling and navigation is required for the hard task, but can be developed for tasks that have an order of magnitude fewer safety requirements.

The trick is to find those other areas where there is value in an automated solution, build and learn, then work your way up the hierarchy of complexity or safety requirements.

“3D environment modeling and navigation” isn’t really the problem; modeling the kinematics of moving through and manipulating a physical environment is a challenge, but a relatively straightforward one that requires little beyond basic dynamics. The problem is perception of the environment. Most people take it for granted that what you ‘see’ is just an image, but in fact our brains integrate inputs from the eyes (as well as ears, tactile senses, and the proprioceptive ‘body sense’) to create an internal representation of the physical environment that allows you to move in it even with brief impressions and incomplete data. Furthermore, as you become skilled in experiencing the environment, the brain learns to predict how things will behave, which is why when you first learn to walk, ride a bicycle, or drive a car you are awkward and prone to error, but with a modicum of experience you become proficient enough in your perception that you can walk backwards or parallel park a vehicle.

For all of the advances in machine heuristics, so-called computer vision is nothing like how we use vision to perceive the world, and even with the addition of LIDAR, the most sophisticated automatic piloting system is less able to perceive the world, much less predict the behavior of objects in it. When you are driving along and see a pedestrian on the side of the road who looks like they might be considering jaywalking, you can notice cues in their behavior and prepare to brake or evade, whereas no computer system has anything like that intuition of behavior; all a piloting system knows is that there are objects in the world which sometimes move in unexpected ways.

Automated piloting systems may work in certain limited ways, and can certainly work as driver aids in helping to prevent accidents or recognize when a driver is being inattentive, but there are some significant technical thresholds to cross before they will be capable of the same predictive ability as a human driver, and even with the advantage of not being distracted or fatigued, there are serious limitations in how reliable such systems can be.

Stranger

That is precisely what I’m talking about.

Modeling the 3D environment (i.e., our specific 3D environment) means correctly identifying the objects, the nature of the objects, applying the rules of physics, applying experience, etc., to anticipate future state.

Using stats on pixels can get the right answer some of the time, but if you can identify that the thing has a high probability to be a deer as opposed to a tree, and you are familiar with either specific types of behavior (e.g. deer) or just general types of behavior (e.g. animal), you have a better chance of anticipating future state.
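The deer-versus-tree point can be sketched in a few lines. This is a toy illustration under my own assumptions (the class labels, speeds, and function names are invented for the example): once a detection is assigned a probable class, a class-specific behavior prior changes the worst-case predicted future state.

```python
# Toy sketch: a class label carries a behavior prior that changes
# the worst-case prediction. All classes and numbers are illustrative.
BEHAVIOR_PRIOR = {
    "tree": {"max_speed_mps": 0.0,  "erratic": False},
    "deer": {"max_speed_mps": 13.0, "erratic": True},   # deer can bolt
    "car":  {"max_speed_mps": 40.0, "erratic": False},
}

def predicted_clearance(label, distance_m, horizon_s):
    """Worst-case remaining distance after `horizon_s` seconds,
    given only the object's class prior (no observed velocity)."""
    prior = BEHAVIOR_PRIOR[label]
    return distance_m - prior["max_speed_mps"] * horizon_s
```

A ‘tree’ 20 m away is still 20 m away two seconds later; a ‘deer’ at the same distance could, in the worst case, have closed the gap entirely - which is exactly why the classification, not just the pixel statistics, matters.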

I don’t want to be too nitpicky, but in terms of nomenclature, modeling of the physical environment—that is, creating an internal representation of the location and kinematics of physical objects in the environment—is distinct from the problem of perception of the environment (e.g. using computer vision, LIDAR, ultrasound, et cetera to distinguish between pedestrians, other vehicles, animals, and other solid hazards versus light debris, reflections and shadows, fog and rain, et cetera), and from modeling the behavior of objects with independent action. These are three very different types of problems in different stages of maturity, and it is important to understand those distinctions, because when someone points out how fast and accurate robots have become at manipulating objects on a table, it should be recognized that this really only addresses one of the three.

The problem of producing physical models that predict ballistic motion is relatively straightforward, and given an adequate understanding of the physical dimensions and dynamics of the vehicle, navigating around hazards is tractable, although not trivial to do in real time. This is, in essence, just an extension of the kind of fly-by-wire technology that has long been used in the aerospace industry to keep inherently unstable aircraft from flying out of control.
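The “relatively straightforward” part can be shown directly. This is a minimal constant-acceleration propagation sketch (real systems track uncertainty too, e.g. with a Kalman filter; the function name here is my own):

```python
# Minimal sketch: propagate a tracked object's state forward under
# simple kinematics (constant acceleration per axis).
def propagate(pos, vel, acc, dt):
    """Predict position and velocity dt seconds ahead:
    x' = x + v*dt + 0.5*a*dt^2,  v' = v + a*dt (per axis)."""
    new_pos = tuple(x + v * dt + 0.5 * a * dt * dt
                    for x, v, a in zip(pos, vel, acc))
    new_vel = tuple(v + a * dt for v, a in zip(vel, acc))
    return new_pos, new_vel
```

This is the tractable piece: basic dynamics, cheap to compute in real time. Everything hard lives in the layers that decide what `pos`, `vel`, and `acc` actually are for each object - which is the perception and behavior-modeling problem discussed here.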

The perception problem of compiling enough data with sufficient accuracy and precision is a much more expansive one, and while great strides have been made in computer vision, it is clear that even with heuristic learning methods (so-called neural networks), computer vision systems simply do not have the inherent ability to formulate a perception of the world from visual data that even simple animal brains have. Fundamental breakthroughs in machine cognition are needed before computer vision will even approximate the accuracy and precision of a human driver.

The problem of modeling the dynamic behavior of people and animals, on the other hand, is likely to remain an intractable problem for the foreseeable future, essentially until we can develop an artificial general intelligence, and autonomous piloting systems really have to compensate through a combination of faster reaction times, and applying failsafe measures (e.g. limiting speeds, providing warning signals to pedestrians, et cetera). This is really a key area where objective standards for evaluation and testing need to be developed because people aren’t going to become better drivers or more attentive pedestrians, and even if autonomously piloted vehicles have only a fraction of accidents of this nature that human drivers do under similar conditions, it will still be questioned whether the technology is “good enough”. Having a rigorous standard for being able to respond to inherently difficult-to-predict scenarios is crucial to operators and manufacturers being able to argue for a degree of indemnity when it comes to such accidents.

Stranger

The act of modelling the environment is the parent to those sub-problems.

Perception and prediction are critical and integral parts of the process of building the model of the external environment, adjusting the state of the model, adjusting the rules that govern the model.

It’s possible by including “3D” I made it seem like I was focused on just one aspect of the model, but the reason I added that was to pre-counter this argument: “pixels=>stats=>output has an implied model buried in the learning/stats”. It’s true there is some form of a model, but I don’t think it goes far enough.

I hope self-driving vehicles won’t rely on Garmin GPS. Mine regularly tries to direct me to nonexistent or impassible roads. Many nearby routes are dirt or worse; are lined with conifers hiding deer, dogs, and degenerates; are mis-signed, or private and gated. An autonomous delivery truck would cause and receive great damage. Will SDVs ever be ready for prime time in most of the world?

@Ronrico
I rented a Ford last year, and while driving across a long bridge back to the airport, it inexplicably and continuously told me to turn right, about every 50 m. That would have taken me directly into the ocean had I jumped the guardrail. This was in spite of the fact that the GPS image showed no road, only water on both sides.

I reported it when I returned the vehicle and the good news was the rental agent told me they were required to immediately pull the car out of service and get the GPS software checked. Obviously concerned about liability.

Fully self-driving vehicles are still a long way away.

The technology used in that car has literally nothing to do with the technology used for self-driving technologies.

Sorry - I didn’t realize that self driving cars didn’t use GPS to navigate.:smack:

Well, to know which road to take to get to the destination, yes. But to sense other traffic, stay in lanes, avoid hazards, etc., they use cameras, LIDAR, radar, and similar things.

So in your example, let’s say the car’s GPS really did think you needed to take a right turn off a bridge because Google Maps was messed up that day. The cameras and LIDAR or whatever would clearly see the guardrail on the bridge, know the car was getting too close, and maneuver to avoid hitting it.

That’s a scenario that is actually somewhat easy to deal with. Quite a few cars today with collision avoidance would probably prevent you from driving into the guardrail by applying the brakes (but not turning). Probably all of the experimental self-driving cars could handle something like that.
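The layering described here can be sketched as a simple arbitration rule. This is purely illustrative (the function, thresholds, and command strings are my assumptions, not any vendor’s API): the route planner proposes a maneuver, but the perception/collision-avoidance layer can veto it.

```python
# Illustrative arbitration: a routing instruction is overridden whenever
# perception reports an obstacle too close to clear. Names and thresholds
# are assumptions for the sketch, not a real system's API.
def arbitrate(route_command, obstacle_distance_m, speed_mps,
              min_stopping_margin_s=2.0):
    """Return the command actually executed."""
    time_to_obstacle = (obstacle_distance_m / speed_mps
                        if speed_mps > 0 else float("inf"))
    if time_to_obstacle < min_stopping_margin_s:
        return "brake"        # collision avoidance wins
    return route_command      # otherwise follow the route
```

In the bridge scenario: GPS says “turn right,” LIDAR sees the guardrail 10 m away at 20 m/s, that’s half a second to impact, so the car brakes instead of turning. The routing layer proposes; the perception layer disposes.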

I agree we are talking a decade-ish until reliable implementation of self-driving cars. Maybe a little less, but something like that, IMHO.

Everything you’re talking about works in conjunction with the GPS routing system and its software, which is an absolutely integral part of the autopilot.

In my case, a multi-billion dollar company, with all their resources, couldn’t get their GPS to route me to the airport correctly.

As you note, there was a guardrail, but what if there had just been a road and the car randomly turned down it?

Getting this stuff to work effectively is a long way off. We’re in agreement on that.

The rental car company doesn’t develop the GPS system or the maps it relies upon. For a truly autonomous system, it would have to be capable of navigating around local hazards, road closures, et cetera regardless of what a GPS or other wide-scale navigation system indicates, and in fact in a robust system the piloting system would report out to indicate a hazard such as a flooded road or accident so that other vehicles could avoid it entirely.

Of course, this assumes a very sophisticated, highly networked system that requires considerable infrastructure versus the asynchronous transmission from the GPS constellation that we use today. So, it isn’t just the capability of individual vehicles that has to advance, but also the establishment of standards and capabilities on a much larger scale. So, yes, it certainly isn’t around the corner. Technically, making such a system is within our capabilities; managing it and making it robust from error or malfeasance is another thing entirely.

Stranger

And I will pile onto this by noting that you are making an example of one car having one (large) error.

What you neglect to mention is the literally millions of correct suggestions that GPS units made during the time that yours was malfunctioning. Those suggestions saved time, prevented accidents and probably saved lives.

Autonomous cars may currently suck, but do you know what factually sucks? Real drivers. They have millions of accidents and kill thousands and thousands of people. Moving towards fully autonomous vehicles will result in accidents and deaths, but don’t forget that the current state is many accidents and deaths.

I live on a rough mountain track, yet another of the narrow, twisty trails around here. My regularly updated Garmin GPS unit does not know the conditions or ownership of many of these. A driverless Garmin-guided delivery van navigating less than a mile from the two-lane highway to my house would be hopelessly lost, and likely high-centered and stuck. Then bears would tear it open and steal the pizzas. It’s inevitable. :eek:

I’ll believe in self-driving vehicles when Subaru sells an autonomous Outback.