Ramifications of Self-Driving Cars?

If you have a car that is capable of driving itself, what’s the benefit of having a steering wheel and so on in it? It adds complication, and it decreases safety.

It makes the car capable of operation in either mode. For a time at least, this is likely to be a legal requirement (along with the presence of a licensed driver). And, even when it isn’t mandatory, some consumers may be willing to pay extra for a car that is capable of being operated in either mode.

But, yeah, once we have the technology to provide reliable cars that are both safer and more efficient when in self-driving mode, there will certainly be a demand for cars that can be operated only in that mode. Apart from the saving in cost, the design of the car can be greatly changed - there’s no longer any need for a “driver’s seat”, and the interior can be reconfigured to suit the needs and tastes of riders who have no need to pay any attention to where the car is going or what it is doing.

We don’t have cars that are capable of driving themselves, and we won’t for the foreseeable future. The cars in testing now don’t “drive themselves”; they handle the majority of driving tasks, but they have limitations. The steering wheel is there because people have strengths and weaknesses different from a computer’s, and their strengths can be used to make up for the computer’s weaknesses.

I deal with automation on a daily basis, and there is no way at present I’d rely on it without some kind of human backup. The number of times I have saved the aeroplane from the automation is countless, while the number of times it has saved me is nil.

Even given more modern technology and future advances, we are still talking about stuff that is ultimately designed and built by the same people whose limitations we are trying to sidestep. The human element is still there; it is just being moved to a different stage of the process. A computer is a tool that can make our lives easier and do certain things better than we can, but it is not some perfect, infallible machine. Anyone who owns a smartphone, tablet, or PC knows that.

I am responding to people such as Chronos and others who think this is the heralding of a future where cars will have no human controls and will drop us off at work, find themselves a carpark, then pick us up at the end of the day. People in this very thread think this is reasonable.

Yes, particularly in a country where it is still not unusual to get your pay as a physical paper check!

This was exactly how one of the runaway Lexuses worked. Lexus Crash: An Avoidable Tragedy - Autoblog

I can see how automakers don’t want the car to turn off instantly on the highway if you accidentally brush up against the button.

This article doesn’t speculate on whether the driver tried to turn the car off. I recall other articles saying that accident specialists suspected the driver had probably poked the “start” button repeatedly but had not held it down for the requisite three seconds. Maybe they even had black-box data confirming that’s what happened, but I don’t recall.
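The press-versus-hold distinction described above can be sketched as a tiny timing check. This is a minimal illustration, not any manufacturer’s actual firmware; the function name and the three-second threshold are assumptions for the example.

```python
HOLD_SECONDS = 3.0  # illustrative threshold, not the actual Lexus value


def engine_shutdown_requested(press_time: float, release_time: float) -> bool:
    """While the car is moving, only a sustained hold of the start/stop
    button requests engine shutdown. Short taps, even repeated ones,
    are ignored at speed - the behaviour the investigators described."""
    return (release_time - press_time) >= HOLD_SECONDS


# Repeated short taps do nothing:
taps = [(0.0, 0.3), (1.0, 1.2), (2.0, 2.4)]
assert not any(engine_shutdown_requested(p, r) for p, r in taps)

# A sustained hold shuts the engine down:
assert engine_shutdown_requested(10.0, 13.5)
```

The design trade-off is exactly the one raised above: the hold requirement prevents an accidental brush from killing the engine at highway speed, at the cost of being non-obvious in a panic.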

There are already cars being tested which lack a steering wheel.

And the comparison to operation of an airplane is only relevant if we’re talking about a world where all drivers are as skilled, alert, and conscientious as airplane pilots. While there might be some humans who are better drivers than computers, the vast majority are not, and you do not want the less-skilled driver to be able to override the more-skilled driver.

The comparison to computerized aircraft flight controls was NOT about the pilot/driver taking over when autopilot disconnected.

Rather, it was about the concern that “the system itself” could fail in a driverless car – in other words, the software/hardware control system.

The point about airplanes is (depending on the plane) your life may be already dependent on the computerized fly-by-wire system working. Even when the autopilot is disengaged, the pilot’s control input is just a vote to the control system. If the software or hardware totally crashes, the plane cannot be controlled. On the Airbus A380 there are six flight control computers, three primary and three secondary. However if they all fail due to either hardware or software, there is no mechanical backup. No matter how good or alert the pilot is, pulling on the joystick at that point will have no more effect than pulling on his seatbelt.

Engineers know how to design systems which simply must not fail, and the lives of many people already depend on them. That is easier with high-cost systems, where many levels of redundancy can be afforded. The issue with cars is how to develop and deploy such systems in a lower-cost environment, and what levels of automation and safety are achievable in a given timeframe.

Ok yes, Google are working to remove the driver completely, something that I philosophically disagree with. They have had to install controls for testing on public roads though. There are also flying cars being tested by the way. Testing something is a long way from it being a practical reality.

It’s not about being better at driving than a computer, it is about having an entirely different skill set. Can the computer cope with a situation it hasn’t been programmed for?

Contrary to what you seem to think, an airplane pilot is not as good at flying as the autopilot is. By your logic, that means the human pilot should have no ability to take over; however, we recognise that there are many situations the autopilot is not able to deal with, so until there is a huge advance in technology, we are stuck with having humans monitor the autopilot and take control during certain phases of flight or when the autopilot can’t cope.

Driving is no different. The autodriver will be more skilled at all of the mundane driving tasks such as staying in a lane, maintaining a safe and steady speed, parking in tight spaces, etc. There will inevitably be situations that it simply can’t cope with though, and for that you still need a human at the wheel.

People and computers have unique strengths and weaknesses. It would be idiotic to simply replace one set of strengths and weaknesses with another different set. Far better to combine the strengths of both.

This is not accurate. If all flight computers have failed, there is still a direct-control backup system. You need electrical power, but you do not need flight computers. The only situation in which an A380 would be truly stuffed is a complete electrical failure, which would require the failure of multiple independent redundant systems, including the RAT, a small turbine that sticks out into the airflow and acts as a generator.

Sorry, I don’t think that is correct. You may be thinking of Direct Law flight control, which is a software law and has nothing to do with the hardware it runs on.

I was slightly incorrect; the A380 actually has seven flight control computers, not six. The seventh is the “backup control module” (BCM) which is just another computer: Airbus Flight Control Architecture - joema

This is similar to the space shuttle’s quadruple redundant flight control computers, with a fifth backup computer running different software. But if they all failed there is no direct analog or mechanical reversion – control is lost to the flight control surfaces. I believe the A380 is the same way, which is different from prior Airbus aircraft.

The BCM is analogue and electric. Not a computer in the traditional sense.

I work in aging and disability, so the most exciting potential of a truly autonomous car is making it possible for people who cannot safely or physically operate a vehicle to have the ability to get from point A to point B without the current usual enormous hassle and/or expense. This is a big ass deal in my circles.

In theory, driverless cars should eliminate speeding and reckless driving. The computer can be programmed to follow all posted speed limits, keep a safe distance from other cars, obey traffic signs, and so on. The need for passing might be eliminated if all the cars travel at the speed limit.
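The “programmed to follow the rules” idea above amounts to a simple constraint on the car’s chosen speed: never exceed the posted limit, and slow down when the gap to the car ahead closes. A toy sketch (the function, its parameters, and the proportional fallback are illustrative assumptions, not any real controller):

```python
def target_speed(posted_limit_kmh: float, desired_kmh: float,
                 gap_to_lead_m: float, min_gap_m: float = 30.0) -> float:
    """Pick a cruise speed that never exceeds the posted limit and
    backs off when the gap to the lead vehicle is too small."""
    speed = min(desired_kmh, posted_limit_kmh)  # obey the limit
    if gap_to_lead_m < min_gap_m:
        # fall back proportionally until a safe gap is restored
        speed *= gap_to_lead_m / min_gap_m
    return speed


assert target_speed(100, 120, 200) == 100   # capped at the limit
assert target_speed(100, 120, 15) == 50.0   # half the safe gap -> half speed
```

Real adaptive cruise controllers use time headway and smooth control loops rather than a hard proportional cut, but the point stands: the rule is enforced in software, so whether the car ever speeds is purely a programming (and policy) decision.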

Depends on how it’s programmed. If they allow speeding then impatient drivers will set the car to speed.

The obvious problem will be cars driven by people. One slow poke driving ten miles under the limit screws up the entire line of computer driven cars. Or even worse, one speeder trying to pass a long line of computer driven cars.

Of course, if you’re a passenger in one of those cars, you are unlikely to care (or perhaps even notice) that you’re travelling a shade slower for a while.
OTOH if the car is driving significantly below the speed limit, the self-driving cars will individually overtake.

People trying to pass a queue of cars happens right now, the difference in this scenario is that the self-drive cars will presumably drive defensively, so will give the aggressive driver room to merge in the likely event he finds he’s overcooked it.
Such situations are very dangerous right now, as many drivers are initially disinclined to let a foolish driver push in.

I am imagining this entire thread, edited to replace all mention of “cars” with “cats”.

I don’t think that the comparison with autopilots is useful. If your car develops a major fault, you stop as soon as you can and phone for help. An aeroplane cannot do that.

Modern cars monitor all kinds of variables even now. If there is a fault, it will signal the driver (the engine management light comes on). If there is a serious fault, it may restrict the speed (limp home mode). My car will phone for help if it is involved in an accident where the airbags are deployed. It’s no big leap of the imagination for the driverless car to take some more positive action when it detects a fault.
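The escalating behaviour described above (warning lamp, limp-home mode, calling for help) can be viewed as a severity-to-response mapping. A minimal sketch; the severity scale, names, and thresholds are illustrative assumptions, not any manufacturer’s diagnostic scheme:

```python
from enum import Enum


class Response(Enum):
    WARN = "light the engine management lamp"
    LIMP = "restrict speed (limp-home mode)"
    STOP = "pull over safely and phone for help"


def fault_response(severity: int) -> Response:
    """Map a fault severity (1-3, illustrative) to an escalating
    response: minor faults warn the rider, serious faults restrict
    speed, and a critical fault makes a driverless car stop and
    summon assistance - the 'more positive action' suggested above."""
    if severity >= 3:
        return Response.STOP
    if severity == 2:
        return Response.LIMP
    return Response.WARN


assert fault_response(1) is Response.WARN
assert fault_response(3) is Response.STOP
```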

I also think that you guys are looking at the problems too much from a North American viewpoint. Driverless cars will have little appeal in the Midwest or Texas, both for practical reasons (wide open spaces) and because of local attitudes (I love my truck).

In more crowded countries and in cities, they will catch on much faster. I can see that London and New York would soon have a lot of ex-cab drivers looking for work, superseded by a mode of transport that is more reliable, doesn’t go the long way round, doesn’t insist on giving you its political views, and is quiet and non-polluting.

For the commuter from the suburbs - again, in Europe they are more likely to be on public transport. The distances are shorter, the cost of motoring is higher, and parking is a nightmare when you get there.

Even farmers are going automatic. That huge machine harvesting a crop or spreading fertiliser is almost self-sufficient now. The driver is there to take over, but the computer does a better job than he could.

We are over a century away from flying cars being a practical reality. A century in the other direction, that is. We’ve had them since 1910, when the Wright Brothers started mass-producing the Model B.

As for “situations coming up that the computer can’t deal with”, I can think of plenty of those. But I can’t think of any of them that a human wouldn’t be even worse at dealing with. If a meteor impact turns the freeway two feet in front of me into a crater, I’m screwed either way. But I’m slightly less screwed with the computer in control.

I don’t think we need to worry too much about meteor impacts… :slight_smile:
But there are lots of situations where the computer won’t know what to do.
Because the computer only knows how to follow rules. But when driving, there are times when the rules don’t apply.
Sometimes the rules are illogical, and sometimes there simply are no rules.

Example of when there are no rules:
You drive to a rural area to attend a music festival, or the county fair. The parking area may be a farmer’s cornfield. Will your robo-car know that it’s okay to turn off the asphalt onto an open field of dirt?
And here’s an example of when the rules don’t apply:
(This is how I violate the rules of traffic law every day, and I want my robo-car to do the same.)

Where I live, the law says that a solid painted stripe in the middle of the road means not just “no passing”, but NO crossing the line at any time. I live in a quiet neighborhood, and the road leading out of the neighborhood to a busy street has this type of stripe painted at the intersection. But the person who lives in the house near the corner parks his truck in front of his house on the street, near that stripe. The truck is just wide enough that I have to swerve a bit to pass around it—and cross the painted stripe, several inches into the oncoming lane.
This is a clear violation of the law—which my common sense knows to ignore. But how will a robo-car deal with it?

How can a corporation’s legal department give the okay to their engineering department? “Sure, it’s okay to program the car to intentionally violate the law. What could go wrong? Nobody will sue us”.

Even if a car has no manual controls, you can still give instructions. There are many ways to resolve this (e.g. “Stop here and drive into the field on the right.” “Are you sure?” “Yes.”)

Generally, crossing a solid line is allowed when necessary to avoid an obstacle. If your local law does not allow it, then the law should be changed to explicitly allow it.
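The exception argued for above can be encoded explicitly rather than left to “common sense”: crossing a solid line is permitted only to pass an obstruction, only when the oncoming lane is clear, and only by a small offset. A minimal sketch with assumed names and thresholds, not any vendor’s planner:

```python
def may_cross_solid_line(lane_blocked: bool, oncoming_clear: bool,
                         offset_m: float, max_offset_m: float = 1.0) -> bool:
    """Encode the parked-truck exception: crossing a solid centre line
    is allowed only to pass a stationary obstruction, only when the
    oncoming lane is clear, and only by a small lateral offset."""
    return lane_blocked and oncoming_clear and offset_m <= max_offset_m


# The parked-truck scenario: a few inches into the oncoming lane.
assert may_cross_solid_line(True, True, 0.2)

# Not a licence to overtake across a solid line at will:
assert not may_cross_solid_line(False, True, 0.2)
```

The legal-department worry upthread then becomes a question of what the rulebook says, not of the car improvising: once the exception is written into law (as it already is in many jurisdictions), programming it is no longer “intentionally violating the law”.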

I see no reason to assume that the computer would behave differently than the human in either of those situations.