Driverless cars

From the new column: “How soon can we expect driverless cars?”

… This seems nuts to me. My Galaxy smartphone can’t even do voice recognition properly, and we’re banking that a car will be able to drive itself safely in the complexity of city traffic?


Seriously? I challenge anyone concerned about software making decisions based on sensors to visit Rochester, New York or its suburbs. I’ll take my chances with a software glitch anytime.

Not a chance this happens any time soon.

There are far too many roadblocks (pun definitely and obviously intended) put in its way. Besides that, it’s my opinion that everyone is gaga for driverless cars right now, but the “we have to do this!” mindset is just a fad. The REAL issue is making electric cars the dominant vehicle. That’s much more important and feasible.

Right. I had read Cecil’s response. But my post wasn’t about how soon we see these everywhere. It was a philosophical post about the alternative.

I grew up with this riddle: “What is the most dangerous part of the car?”

“The nut that holds the wheel!”

As a frequent pedestrian it isn’t as funny as in my youth.

A note about terminology: These are not driverless cars. These are computer-driven cars. There is a driver, and that driver is a machine.

I know, it seems pedantic, but it leads to real misunderstandings. People will say things like “Can the driver still override the computer?”. But the very question misses the concept: The real question is “Can the passenger override the driver?”, because the human occupant of such a vehicle is a passenger, not a driver. And of course, nobody wants a car where the passenger can do that, because that’s an obvious safety hazard.

We’ve done several threads on this. There are still considerable challenges ahead. The cars we have, the roads, the regulations, and the traffic laws have all been designed for human drivers, and we haven’t perfected that yet. The really rough part will be the transitional period, when most of the cars will still be driven by meatbrains and the computers have to deal with them. Unfortunately it may be the only solution for our increasing metropolitan traffic problems, since monorails haven’t caught on as expected. Meals in a pill didn’t catch on either; the future is just a bit trickier to predict than we predicted it would be.

I had just experienced the UK’s (and possibly the world’s) first grocery delivery via autonomous van. The van, called the CargoPod, was developed by Oxbotica, an autonomous systems startup spun off from Oxford University with some useful patents. The trial was part of the GATEway (Greenwich Automated Transport Environment) project. Earlier in 2017, Starship Technologies also chose the borough of Greenwich for a trial of its small autonomous delivery robots.

The delivery van itself, the CargoPod, was very rapidly constructed from an off-the-shelf electric vehicle drivetrain and wheelbase. Oxbotica equipped the CargoPod with two lidar sensors above the wing mirrors, three cameras above the front bumper, three cameras at the back, and inside the dashboard there’s a fairly standard Intel Core i7 computer that integrates the sensor data and performs all of the self-driving stuff. The computer runs Ubuntu, and on top of that is Oxbotica’s Selenium autonomous driving software.

Because the autonomous driving software isn’t perfect, the CargoPod has a steering wheel, pedals, and a big red emergency stop button. The guy in a high-vis jacket is actually a safety driver: it’s his job to grab the wheel or slam on the brakes if something goes wrong.

The first self-driving grocery delivery van

Ars Technica 2017 July 04

Greenwich is in London.

One of the challenges of getting autonomous vehicles to market isn’t just the technical issues of complex navigation and dealing with road hazards and unpredictable pedestrians, but establishing a threshold of acceptable performance and reliability for safety. Currently, even the states that license the use of autonomous vehicles on public roads do little or nothing to assess the safety of the vehicles beyond some basic restrictions (cannot exceed posted speed limits, must respond to signage and stoplights, et cetera), and some don’t require even that much. With a human driver, the driver has plenary responsibility for the safe operation of the vehicle, even if there is a mechanical defect or a lack of good maintenance. With an autonomous vehicle in which the passengers have no control, the question of culpability comes into play, with the manufacturer or operator presumably having fiscal responsibility; but as we’ve already seen with Tesla and their “Autopilot” system, assuming that the manufacturer will take all necessary and prudent steps to avert avoidable accidents or nefarious manipulation is not sufficient. Yes, Autopilot is not a fully autonomous system despite the name, and the driver who died in the Florida accident was using the system in contravention of its operating instructions, but in creating a system that is seemingly autonomous, Tesla took on a certain amount of responsibility for engaging the driver’s attention or otherwise averting accidents.

What is needed to indemnify manufacturers and operators against unreasonable expectations of safety for unavoidable accidents is a comprehensive design and test standard, developed and accepted by the industry, for how autonomous vehicles should respond to hazards and accidents and resist or warn of attempts at modification or intrusion. There seems to be very little effort in this regard (in the US at least), and given the nascent state of the industry it may seem premature, but at least creating the framework for such a standard would both create a more equitable marketplace for new entrants in the field and address legitimate concerns about public safety.

As a practical matter, I don’t see autonomous vehicles being fully accepted or integrated for somewhere around twenty years, but once they achieve a certain level of operational reliability and safety I’d expect widespread adoption, particularly given the convenience, cost efficiency, reliability, and safety of autonomous vehicles versus human-operated ones. I expect the first adoptions to be in fleet and transportation applications, where the cost and efficient utilization of an autonomous vehicle is clearly advantageous over paying a human driver. The technical challenges are real and significant, but at this point they don’t appear to require any revolutionary technologies beyond refining machine vision and heuristic methods.

Stranger

This is in line with recent predictions. Several manufacturers say they will sell “driverless” cars by around 2020 or 2021. By “driverless” they mean Level 4 vehicles, which can perform an entire road trip within their design domain; not Level 5 vehicles that can go down a rural dirt road. These will likely be used first in fleets, then gradually rolled out to end users.

Some predictions envision 10 million driverless cars on the road by 2020, while more conservative ones say 1-2 million by 2025.

Various driverless forecasts: http://www.driverless-future.com/?page_id=384
Business Insider predictions: 10 Million Self-Driving Cars Will Be on the Road by 2020 - Business Insider
World Economic Forum predictions: https://www.weforum.org/agenda/2017/01/having-a-baby-this-year-a-robotics-expert-thinks-theyll-never-drive-a-car

Re safety and perceived risk of driverless cars: regular human-driven cars kill about 32,000 people per year, just in the U.S. That’s equal to all the U.S. Vietnam war deaths every two years. Worldwide, cars kill about 1.3 million people per year – equal to the deaths from the bloodiest battles in human history, such as the Battle of Stalingrad or the Battle of the Somme. This continues year after year.

Meh … I got GPS’ed a few weeks ago so I’m apprehensive … this is where the GPS finds the shortest route without taking into consideration the quality of the road … the driver said he had explicit directions when in fact he relied on the GPS … thus we found ourselves in a Prius on a goddam quaternary logging road … if we had bottomed out and got stuck, it might have been days before anyone else came along to save us …

If we’re sticking that software into these self-driving cars … there’s going to be trouble …

No, there’s going to be Mayhem.

:smiley:

What navigation program nowadays doesn’t even take into account the quality of the roads, and why would anyone ever use such a terrible program?

Toyota … what navigation program company inspects all the logging roads between Denver and San Francisco? …

Google. Waze.

Most GPSs are configurable between shortest route vs fastest route. If you pick “shortest”, it may take you in a zig-zag pattern or onto unpaved roads.

Some Garmin GPSs have avoidance settings such as U-Turns, Ferries, Carpool Lanes and Unpaved Roads. If you don’t want those included in route calculations you turn them off. I can’t remember the defaults, but it may vary by model or software version.
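The difference between these route preferences comes down to the cost function the router minimizes. As a toy illustration (the road graph, distances, speeds, and paved/unpaved flags below are all made up), “shortest” weights each road segment by its mileage, “fastest” by its travel time, and an “avoid unpaved roads” setting simply drops those segments from consideration:

```python
import heapq

# Hypothetical road graph: node -> list of (neighbor, miles, speed_mph, paved).
ROADS = {
    "A": [("B", 10, 60, True), ("C", 4, 15, False)],
    "B": [("D", 10, 60, True)],
    "C": [("D", 4, 15, False)],
    "D": [],
}

def route(start, goal, mode="fastest", avoid_unpaved=False):
    """Dijkstra over the road graph.

    Edge cost is miles ("shortest") or hours ("fastest"); unpaved
    segments can be excluded entirely, like a Garmin avoidance setting.
    Returns (total_cost, path) or None if no route exists.
    """
    heap = [(0.0, start, [start])]
    settled = {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, miles, mph, paved in ROADS[node]:
            if avoid_unpaved and not paved:
                continue  # skip unpaved segments entirely
            step = miles if mode == "shortest" else miles / mph
            heapq.heappush(heap, (cost + step, nxt, path + [nxt]))
    return None
```

With these numbers, “shortest” happily picks the 8-mile unpaved logging road through C, while “fastest” (or avoiding unpaved roads) takes the 20-mile highway through B — which is exactly the Prius-on-a-logging-road failure mode described above.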

This illustrates the importance of the user understanding how their particular GPS works and the quality and currency of the GPS map database. Add-on dashboard automotive GPSs are usually OK but are not considered a “safety of life” application.

By contrast the same GPS hardware may be used in an aviation environment which has a much more expensive and meticulously checked database. Presumably the navigation databases used by future self-driving cars will be in this category.

Also the map database and firmware in dashboard automotive GPS units are frequently out of date. Users either don’t understand they must be periodically updated or don’t know how. In a future self-driving vehicle the map database would likely be updated wirelessly on a more frequent basis, and without requiring user intervention.

But aircraft are significantly hampered when you limit them to paved roads.

The next question is why auto manufacturers don’t use these systems … but wait:

I’ve emphasized just one of your points above, and my comments apply to several you’ve listed … anyone of average intelligence can easily update their software on a timely basis … however, fully half the drivers on the road aren’t that smart … we’re left with car dealerships having to administer IQ tests to potential customers and refusing to sell to those who don’t pass … sounds like mischief to me, car salesmen performing due diligence? … we’re left with only being able to sell self-driving cars to people who are already smart enough to drive safely …

These self-driving cars will have to be idiot-proof straight out of the factory … or some idiot will drive themselves right into a snowbank and die of exposure … and the original software will have to be functional for the life of the rig … ten years or better, much much better …

Yes, I know … I just accused the United Kingdom’s National Health Service of below-average intelligence …

There’s an airport just five miles south of me that’s no more than a hay field with some hangars … I don’t know if it’s a “mow-as-you-go” operation or if they have some kid out there on a regular basis …

=====

I’m a carpenter by trade … the hammerheads that are dug out of Roman Republic ruins are almost identical to the hammers we can buy at Home Depot today (we started tapering the hafting hole about a thousand years ago) … sometimes technology gets to be good enough, and any and all advancements are solely for the sake of advancement, with no real improvement in performance … I understand that twenty people in twenty cars all scrunched up get better gas mileage … but we’re still better off putting those twenty people in a bus …

My rig right now has a Bluetooth hands-free meso-quasi-cloud-based phone connection … but for the life of me I can’t figure out how to connect my General Electric phone to it … but that’s okay, I need to be at 5- or 6-thousand-foot elevation to get phone service anyway …

ETA: SDMB is about as sophisticated as Telnet was twenty years ago … just saying …

Of course we want a car in which the passenger can override the driver. It’s not something that would normally happen, but imagine if, for example, the driver has a heart attack and veers into oncoming traffic. You want the passenger to be able to reach over and grab the wheel to get the car back over into their lane.

Sticking with your metaphor, if the “driver” of a driverless car malfunctions (and it will, many times, as this technology is developed), I want to be able to grab the wheel.

My point was manually updating GPS map databases will not be required on future self-driving cars. It will happen automatically. There will be no issue with dealerships having to test the IQ of potential customers.

I think this already happens on Tesla cars today – either automatic or semi-automatic updates. And those cars are not remotely the Level 4 autonomous vehicles described in this thread title, so there is less of a concern over fully automatic updates.

Automatic, validated GPS map updates are one of the easiest problems to solve. By the time self-driving cars spread beyond fleet use to any significant percentage of end users, this will be fully automatic, and if there is any failure (download, checksum, database out of date) the car will probably revert to a non-self-driving mode.
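The validate-then-swap pattern described here is straightforward. A minimal sketch (function name, version fields, and return values are all invented for illustration): verify the download’s checksum, reject anything that isn’t strictly newer than the installed database, and only then accept the update — any rejection leaves the old maps in place.

```python
import hashlib

def apply_map_update(payload: bytes, expected_sha256: str,
                     new_version: int, current_version: int):
    """Decide whether a downloaded map database may replace the current one.

    Returns (accepted, reason). On any failure the caller keeps the old
    database; if that database is too stale, the vehicle would then drop
    to a non-self-driving mode rather than navigate on bad maps.
    """
    # Reject corrupt or tampered downloads.
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        return False, "checksum mismatch"
    # Reject replayed or stale updates.
    if new_version <= current_version:
        return False, "not newer"
    return True, "ok"
```

A real over-the-air pipeline would add cryptographic signatures and atomic installation on top of this, but the failure handling is the same: never let a bad download displace a known-good database.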

I think automated cars will need to follow a prescribed route that has some sort of cable under the road which would help with guidance. A kind of slot car so to speak.

A driver would get to this “track” and then set the car on autopilot then they can just sit back and let the car drive itself. When they need to leave they just turn it off.

I can see this working on, say, an interstate, BUT only if a driver wants to go a particular speed and won’t need to pass anyone. As we all know, if a highway is posted at say 65 mph, that doesn’t mean everyone will drive 65. You get people going anywhere from 50 mph to 80 mph.

Manual overrides to automatic cars are not always desirable. Imagine a passenger riding in a robot taxi. Obviously one cannot have them grabbing the controls (not that there would be any manual controls) willy-nilly. The best you could do, if the AI got stuck in a situation it could not figure out, is call an authorized operator to take over via remote control.

Urbanredneck: if a robot car cannot change lanes or overtake, then it doesn’t belong on the street. I am thinking more of dealing with atypical situations where normal traffic rules do not apply.