People aren’t the cause of traffic jams independent of cars; it’s too many cars in one place that causes jams. I doubt there will be central traffic control anytime soon, and I can easily imagine a world where you have various AIs, developed by independent companies, that don’t play well enough together to avoid a traffic jam.
As I said in the quoted text, if it’s going to ignore this information, why have the car-to-car network at all? Just have the cars drive via their on-board information.
I won’t address the issue of our cars becoming an arm of law enforcement, because that’s far beyond self-driving cars, and delves far afield into legal issues.
And I missed the edit window, but if anything would make me get out of the car entirely, cars driven directly by a centralized control would be it. I couldn’t think of a worse idea; it is just waiting to be hacked.
Like some others have said, in the end, this is already happening piecemeal and will continue that way. As I’ve said in at least one other thread on this subject, we probably won’t completely do away with cars people can drive themselves for at least 100 years, because many will want to hold onto the ability to drive.
Given the same number of cars in a given area, guided by a centralized computer network and with the lightning-fast reaction time of the onboard computer analyzing all available data, traffic jams will very likely not occur. As I said, traffic may slow down, but it is very unlikely that it would stop.
The car runs on its own internal computer and gets information from a central control system that is independent of any specific manufacturer, like a weather report or a traffic report. The car’s computer analyzes the data and acts accordingly. That central computer may well say drive this speed in this lane, etc., but the car would also be taking in information from its active sensors regarding where the car currently is in space and adjusting accordingly. It should also report such information back to the central control, e.g. a deer on the road, another car driving erratically, etc.
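Roughly, the priority works like this (a minimal Python sketch; every class and function name here is invented for illustration, not any real vendor’s API):

```python
from dataclasses import dataclass

@dataclass
class CentralAdvisory:
    suggested_speed: float  # m/s, broadcast like a traffic report
    suggested_lane: int

@dataclass
class LocalSensors:
    obstacle_ahead: bool    # e.g. a deer on the road
    erratic_vehicle: bool   # another car driving erratically
    safe_speed: float       # max speed the onboard sensors consider safe

def report_hazard_to_central(sensors: LocalSensors) -> None:
    ...  # hypothetical uplink back to the central control

def choose_speed(advisory: CentralAdvisory, sensors: LocalSensors) -> float:
    """Onboard sensor data always overrides the central advisory."""
    if sensors.obstacle_ahead or sensors.erratic_vehicle:
        report_hazard_to_central(sensors)  # tell everyone else about the deer
        return 0.0                         # local hazard trumps everything
    # Follow the advisory, but never exceed what local sensors say is safe.
    return min(advisory.suggested_speed, sensors.safe_speed)
```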
It is a logical extension of what would happen. There will always be those ‘rebels’ who feel the need to drive their own cars. Having verification from many sources of them breaking the law and endangering others will eventually force them onto private roads away from the rest of us.
Smart cars are autonomous. A system for the centralized control of individual vehicles is not feasible. The actual system is more mundane. Its goal is efficient use of the road surface.
Buyers require motivation to purchase self-driving features like vehicle-to-vehicle communication. The leading edge of this motivation is tolling: selective tolling. Cars with vehicle-to-vehicle communication can automatically control their speed and braking. This increases allowable safe traffic density.
As long as you have too many cars going through a choke point, cars will either have to stop or slow to a crawl. Computer controlled or not, two objects cannot occupy the same space. If you could successfully implement a central control, and get everyone to participate, you might minimize some traffic jams, but to believe that it would eliminate all of them is just as much of a fantasy as eliminating all car accidents. I would say they’re certain to occur.
And all of that is unnecessary for a self-driving car, would be expensive, and probably just open up more avenues of abuse in the end. It may happen someday, but it’s not happening any time soon. We’ll have commercially available self-driving cars long before anything like that is seriously proposed.
And now you’ve jumped from a nonexistent self-driving car to making it illegal to drive a car yourself. If this is so certain, why do we still employ pilots for planes? It’s because even though piloting an airplane is an easier task for an AI than driving a car, the automation can still fail at the task, and the pilot has to correct it.
There are four things self-driving cars can do to reduce congestion that human drivers cannot.
First, they can safely drive closer to other cars at equivalent speeds, increasing throughput.
Second, they can maintain that distance more exactly, preventing traffic waves and rubbernecking delays/accidents.
Third, they can communicate over short distances to further reduce the need for space between cars; each car will know what is happening based on signals instead of visual cues.
Fourth, they can connect to a central repository of traffic data, potentially anonymized snapshots of traffic flow provided by member automobiles. If just 5% of cars provide speed and location data once a minute, you will have an incredibly detailed real-time picture of traffic flow, and the car can automatically take alternate routes, knowing exactly how good the routes are. This both makes your trip shorter and pulls traffic away from trouble spots. A rough sketch of that fourth point follows below.
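Here’s what that aggregation could look like (a back-of-the-envelope Python sketch; the grid size and report format are my own assumptions):

```python
from collections import defaultdict
from statistics import mean

def traffic_snapshot(reports):
    """reports: iterable of (lat, lon, speed_mps) tuples from member cars.

    Buckets reports into roughly 1 km grid cells and returns the average
    speed per cell: a coarse real-time picture of flow that a route
    planner could use to pick alternates.
    """
    cells = defaultdict(list)
    for lat, lon, speed in reports:
        cell = (round(lat, 2), round(lon, 2))  # ~1 km cells at mid-latitudes
        cells[cell].append(speed)
    return {cell: mean(speeds) for cell, speeds in cells.items()}

# Even a 5% sample is a lot of data: with 1,000,000 cars on the road in a
# metro area, that's ~50,000 reports per minute, plenty to flag a slowdown.
```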
Co-piloting will make it possible for automobiles to optimally flow through congested areas. There is an optimum spacing and speed. That’s the purpose of smart cars.
Cheesesteak and Crane, I agree that they can minimize congestion. Even without a central control, they could eliminate a lot of jams simply by not waiting until the last second to merge. But those factors don’t cause all traffic jams; sometimes there are simply too many cars wanting to use a given road at the same time (e.g., at a sporting event). Self-driving cars can’t do much in those cases.
Also, the more I consider the problem, the less sure I am that these systems will be bug-free enough never to cause a problem equivalent to or worse than, say, drunk drivers, at least in the short term. All non-trivial software written to date has bugs, and the larger the code base, the more likely they become. Even if you only allowed self-driving cars’ code to be written by an organization such as NASA, there are always bugs. Even NASA produces code with critical bugs; sometimes it costs them a satellite, sometimes the astronauts have to get involved. The code produced by business isn’t anywhere near as bug-free as the code produced by NASA, and business is the crowd that’s currently writing the software to drive cars.
Will it be the constant reign of bad behavior on the road that you get now? I think almost certainly not, but there will probably (almost certainly) be days when the systems fail in bad ways. The hope, obviously, is that we can track down the bug that caused the last wreck or traffic jam, fix it, and never see that problem again (unlike drunk drivers). The catch is that when you fix a bug, you often expose a different bug, or create a new bug to be exposed later.
I think that steering wheel (or some sort of manual control) is probably going to be there for a while, even after we get a car that can potentially drive me home from work.
Another advantage of smart cars will be that they will make car ownership less important. They can drive themselves from point to point to pick people up and drop them off. Autonomous cabs that don’t have drivers who have to earn money above the cost of fuel. I predict most people’s “cars” will be a customized module that the individual travels in, the “car” will just be a frame with engine and transmission on it. Might have wheels on it so it can be towed, even. Think of a modern trailer truck scaled way, way, way down to sedan size. The cab does all the work, and the trailer is the passenger section.
Bugs like texting on a mobile phone? Changing radio channels? Monkeying with the GPS? Reaching back to smack the kids to get them to stop killing each other? I’m pretty sure those bugs aren’t going to exist in any computer program.
I would hazard a guess that the reason airplanes don’t fly themselves is that people don’t want to be flown by a computer, not that the human is a better pilot. They don’t trust the technology, exactly like you don’t. Even if it could be proven that accidents and deaths would drop tenfold, nervous Nellies won’t like it.
How about bugs like thinking you’re on the ground when you’re 40 meters in the air? I’ll remind you that the current known front runner in self-driving car tech is Google, not NASA. NASA has a reputation for producing the most bug-free code for the number of lines written, but they still crash. Sometimes lightning strikes twice: its sister ship was also lost due to a software bug. Those are the kinds of bugs software has, and that is software written more carefully than almost any other software running. Even NASA’s manned space program has bugs.
No, the automated systems on the aircraft do actually fail pretty spectacularly sometimes. Like when the F-22 went without navigation or communication upon crossing the international date line*. The pilots had to follow their tankers back to base. Without the pilots on board, the aircraft would have been lost.
*One would think they’d have tested for that. It seems one would be wrong.
So, did they fix the bug? Does it happen anymore? Potentially one death in one occurrence. Unlikely to happen again.
Have they fixed the bug with texting while driving? Being drunk while driving? Etc. Proven 33K+ deaths in the US alone. Ongoing with no fix in sight.
Drive off a cliff or into on-coming traffic? What would the car do?
Well, if our cars start getting too uppity and sentient, we can expect the car will do whatever best protects itself and damn any people who happen to be inconveniently nearby.
Really? That’s your answer? I addressed this already.
Software does indeed sometimes get better by revising it, but sometimes by fixing things, you make them worse. I can remember plenty of instances where patches had patches released immediately afterward to fix bugs introduced by the first patch. Again, all non-trivial software has bugs. So far, no one’s come up with a method of successfully eliminating that problem.
On top of that, all hardware will eventually fail. Some non-zero portion of that hardware will fail in an unpredictable manner.
Both of these are accepted ideas in computing today. If you’re going to argue against either of them, be prepared to bring extraordinary evidence to back up your claim.
Even in fabulously expensive aircraft, which are an investment for those that buy them, they accept that even making fully redundant systems isn’t a solution. It increases the cost greatly, and is still susceptible to the same problems. Their end solution is to have a pilot in the aircraft, who can directly control it when those systems inevitably fail. Pilots themselves sometimes fail, but it appears to be better than letting the machines do it on their lonesome.
You seem to believe that there’s something that will make these two problems go away once we translate the problem from an aircraft to a car, and as a result we’ll be able to do away with any possibility of manual control. You’re welcome to believe that, but you haven’t presented any argument or evidence that makes me think you’re right.
When talking with Luddites, it is best to keep answers simple, IMHO.
It’s called regression testing. Sometimes it is done well, sometimes not so much. Non-critical systems don’t get as much testing. Critical systems get more. Your iPad isn’t a critical system.
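For anyone unfamiliar, the pattern looks something like this (a toy Python example; the function and the bug are invented for illustration):

```python
def days_between(start_ordinal: int, end_ordinal: int) -> int:
    """Toy date function; imagine it once crashed at a boundary crossing,
    the way the F-22's navigation did at the date line."""
    return end_ordinal - start_ordinal

# Regression tests pin the exact inputs that triggered old bugs, and run
# on every build, so a later "fix" can't silently reintroduce them.
def test_boundary_crossing_regression():
    assert days_between(100, 101) == 1

def test_reverse_crossing_regression():
    assert days_between(101, 100) == -1
```

Run those with something like pytest on every change and the old bug at least stays dead; it says nothing about whatever new bugs the change introduces.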
Cars are hardware. They fail. There is a big industry built on that. Having multiple fail-over systems in a vehicle wouldn’t be prohibitively expensive, as they don’t need to be aircraft grade.
Pilots/drivers aren’t free. In a car, if you have a system that is better at driving than 99.9% of drivers, and a backup system that can safely pilot the vehicle to a stop in the event of catastrophic failure, a driver-based backup is incredibly expensive for the small number of times catastrophic failure occurs.
A driver-based backup requires a full instrument cluster, steering, accelerator/brake, and gearshift. A computer-based backup requires another computer.
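In outline, the computer-based backup is a hot standby (a minimal Python sketch; the interfaces are assumed, not any real vehicle’s):

```python
class DrivingComputer:
    def healthy(self) -> bool: ...
    def command(self) -> dict: ...  # steering/brake/throttle outputs

def control_cycle(primary: DrivingComputer, standby: DrivingComputer) -> dict:
    """Run every cycle: use the primary while it answers, else the standby.

    The standby only has to be good enough to bring the car to a safe
    stop, not to finish the trip, so it can be far cheaper than the
    primary system.
    """
    active = primary if primary.healthy() else standby
    return active.command()
```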
I’ll also point out that you can’t pull over and wait for further instructions in a plane.
Name calling does not improve your argument. At any rate, you couldn’t be more wrong. Perhaps we haven’t met. Hi, my user name is scabpicker. In real life, I’m a Unix admin at a small hosting company. In no particular order, my job at different times includes: recovering systems from hardware failure; writing software; debugging others’ software; reporting bugs to software I don’t maintain; tracking down, mitigating, and preventing hacks; and building highly available systems. I may be many things, but I probably couldn’t be further from a Luddite.
Surely NASA and the aerospace industry have heard of this fabled regression testing? If so, how did their bugs slip through?
Since you’re proposing a level of machine control that is far beyond what current aircraft systems attempt, why wouldn’t it be necessary for it to exceed aircraft grade? After all, aircraft-grade controls do fail completely sometimes.
Since the automated controls are being built on top of already existing cars, the cost of the manual controls is currently virtually nil. Once you have a reasonably bug-free (and cost-free) AI that can reliably bring a car to a safe stop, without regularly causing a traffic jam because it crashes more often than current cars disable themselves, you might start seeing cars without steering wheels and brakes. It would depend on how the cost of the minimal controls compares with the cost of the AI and its attendant sensors. The instrument cluster can take over the inevitable media center when basic manual control is necessary. If that fails, it’s seat-of-your-pants flying, like the F-22 pilots. Hey, it may have cost a tenth of what your AI system cost, and you can’t play banjo sitting in that seat, but it’s better than nothing when you can’t divide by zero.
You can theoretically pull over in a car. But if the AI doesn’t know that it’s failed, how will it know to pull over? Even if, somewhere in its failure process, it reaches some sort of fail-safe point where it brings the car to a full stop, that can still cause severe issues if it happens to many cars at once, for whatever reason.
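The usual answer to "how does it know it failed" is that it doesn’t; something simpler and separate watches it. A minimal sketch of that watchdog pattern in Python (all names invented; real implementations put this on independent hardware):

```python
import time

HEARTBEAT_TIMEOUT = 0.5  # seconds; assumed value

class Watchdog:
    def __init__(self) -> None:
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        """The driving AI calls this every control cycle while healthy."""
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_kick > HEARTBEAT_TIMEOUT

def execute_minimal_risk_stop() -> None:
    ...  # hazards on, decelerate, steer to the shoulder if possible

def monitor(watchdog: Watchdog) -> None:
    # Runs on its own processor, independent of the main AI, so a hung
    # or crashed AI still gets the car stopped.
    while True:
        if watchdog.expired():
            execute_minimal_risk_stop()
            break
        time.sleep(0.05)
```

Note that this only converts "AI silently failed" into "car stops"; it does nothing about the scenario where many cars hit that fail-safe at once.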