What I am saying is that the programming, so far, leaves something to be desired. The story I linked to illuminates a flaw in programming design. Really, autonomous surface (or airborne) vehicle control code should not have the privilege of trade secret protection, because it is very critical to all of us.
And quite frankly, I cannot see why vehicle-to-vehicle collisions should be a huge issue. Autonomous vehicles can be made out of non-traditional materials that are more crash-tolerant – an airbag system on the outside would probably be a really good idea, even for manually controlled cars. One general idea is that self-driving cars would largely replace personal cars, effectively forming a large mass-transit system, so there is no reason they would need to be especially stylish on the outside.
My understanding is that autonomous cars are very good at following rules but not so good at ambiguous situations. For example, a four-way stop. Human drivers tend not to come to a complete stop; they slow to an almost-stop and then proceed when it’s their turn. I read about an issue where the computer car would make a full stop, as per the letter of the law, which other drivers would interpret as a delay and take their turn out of order.
I would also imagine that driving on poorly marked roads, obscured by dirt, leaves, snow or weather might be a challenge.
I also read there was an issue with the self-driving cars making wild maneuvers that are well within the safe operating parameters, but still uncomfortable for the passengers. IOW, they have trouble telling the car that it doesn’t have to avoid an obstacle as rapidly as possible.
But generally speaking, if a computer can detect other vehicles, I think it’s safe to assume it can calculate their range, bearing, and speed and determine a safe route far quicker and more accurately than any human.
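To give a feel for how cheap that calculation is for a computer, here’s a minimal sketch of range-and-bearing from a detected object’s position. The coordinate convention (metres, heading in degrees with 0 = north, clockwise positive) is my own assumption for the example, not any real car’s internals:

```python
import math

def range_and_bearing(ego_x, ego_y, ego_heading, obj_x, obj_y):
    """Return (range_m, relative_bearing_deg) from the ego vehicle
    to a detected object. Positions in metres; heading in degrees,
    0 = north, clockwise positive. Illustrative convention only."""
    dx = obj_x - ego_x
    dy = obj_y - ego_y
    rng = math.hypot(dx, dy)                       # straight-line distance
    absolute = math.degrees(math.atan2(dx, dy)) % 360
    relative = (absolute - ego_heading) % 360      # bearing off the nose
    return rng, relative

# A car 30 m ahead and 40 m to the right of an ego vehicle heading north:
print(range_and_bearing(0, 0, 0, 40, 30))  # (50.0, ~53.13 degrees)
```

A computer can run this for every tracked object many times per second, which is the basis for the “far quicker than any human” claim.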
That’s kind of where I was thinking the trail will lead: greater and greater adoption of assistive or semi-autonomous systems to aid drivers, and eventually things will just go fully automated. I mean, it would be cool to have a heads-up display that would somehow show the relative speed of cars on the highway.
It also seems to me that if you had enough cars logging enough data, then deep learning systems might be able to determine some solid driving rules for future autonomous vehicles. It would take something like logging speed, map position, control inputs, surrounding cars and static objects, and the results of each situation.
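The kind of log record described above could look something like the following sketch. The field names and the `outcome` labels are made up for illustration; no real fleet uses this exact schema:

```python
from dataclasses import dataclass, field

@dataclass
class DrivingLogFrame:
    """One timestamped frame of driving data for later training.
    All field names here are illustrative, not a real system's schema."""
    timestamp_s: float
    speed_mps: float
    lat: float                   # map position
    lon: float
    steering_deg: float          # control inputs
    throttle: float              # 0.0 - 1.0
    brake: float                 # 0.0 - 1.0
    nearby_vehicles: list = field(default_factory=list)  # (range_m, bearing_deg) pairs
    static_objects: list = field(default_factory=list)
    outcome: str = "ok"          # e.g. "ok", "hard_brake", "near_miss"

# One frame: cruising at highway speed with a car 45 m ahead.
frame = DrivingLogFrame(timestamp_s=0.0, speed_mps=29.0, lat=41.88, lon=-87.63,
                        steering_deg=-1.5, throttle=0.3, brake=0.0,
                        nearby_vehicles=[(45.0, 0.0)])
```

Millions of frames like this, each tagged with how the situation turned out, are the raw material a learning system would mine for driving rules.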
And finally, I think we’ll eventually end up with some sort of swarm technology that allows the automation to happen- it won’t likely be “autonomous” vehicles in the sense of only having sensor input. It’s a lot more likely to be some sort of ad-hoc network kind of thing, where the cars share pertinent info and act on it. Like if a car needs to get over, it might signal that intent, and other cars would speed up and slow down to allow it to get over safely. Or if there’s an obstacle in the road 2 miles up, that information might be communicated backward to cars so they can get over early. And so on…
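The “signal that intent” idea above could be as simple as broadcasting a small structured message. This is a made-up message format purely to illustrate the concept; real vehicle-to-vehicle standards (e.g. the DSRC Basic Safety Message) define very different fields:

```python
import json
import time

def make_v2v_message(sender_id, intent, lane, distance_m=None):
    """Build one ad-hoc network message announcing a driving intent.
    Field names and intent strings are invented for illustration."""
    return json.dumps({
        "sender": sender_id,
        "sent_at": time.time(),
        "intent": intent,          # e.g. "lane_change_left", "obstacle_ahead"
        "lane": lane,
        "distance_m": distance_m,  # how far ahead the hazard is, if any
    })

# A car warning traffic behind it about an obstacle roughly 2 miles (~3200 m) ahead:
msg = make_v2v_message("car-1138", "obstacle_ahead", lane=2, distance_m=3200)
```

Receiving cars would parse messages like this and adjust speed or lane early, which is exactly the “get over before you reach the obstacle” behavior described above.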
The obvious solution for intersections, even with human-operated vehicles, is roundabouts to replace traffic lights. It does take a while for drivers to become accustomed to them, but they keep traffic flowing fairly steadily. One reason we do not use them everywhere right now is that they force drivers to be considerate of each other, but they would provide an effective structure for autonomous and manually driven vehicles to share. Their biggest weaknesses are the challenges involving large trucks, buses, and semi-rigs, and the fact that there is no practical way to incorporate them into a traditional one-way grid.
Roundabouts are great, but they are best used at intersections that have traffic from all directions. A roundabout at the intersection of a main road with lots of traffic and a cross road with very little traffic can prevent the side-road traffic from getting through for long periods.
As a programmer and general computer guy, I’ve seen so many examples of computers screwing up 1000 times the stuff in 1/1000th the time. One little sensor glitch and mistakes very quickly compound and computers do things that seem crazy to us, because computers aren’t thinking, they are just running a bunch of algorithms very quickly.
I can’t see self-driving cars any time soon. I believe the safest thing we can implement soon is a “road train” type of cruise control for limited-access Interstate highways that would allow a train of similarly programmed vehicles to follow each other at high speeds. On the Interstate highways there are no traffic lights, stop signs, or intersecting roads. But weather would still be a wildcard.
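The core of a “road train” follower is often sketched as a constant-time-gap controller: match the lead car’s speed while nudging the spacing toward a gap proportional to your own speed. The gain and time gap below are arbitrary illustrative numbers, not values from any deployed system:

```python
def platoon_speed(lead_speed_mps, gap_m, own_speed_mps,
                  time_gap_s=1.0, k_gap=0.5):
    """Very simplified constant-time-gap follower for a highway platoon.
    Returns a commanded speed: track the leader, plus a correction
    toward the desired spacing. Gains are invented for illustration."""
    desired_gap = time_gap_s * own_speed_mps   # e.g. 1 s of travel
    gap_error = gap_m - desired_gap            # positive = too far back
    return lead_speed_mps + k_gap * gap_error

# Following a 30 m/s leader at a 25 m gap while doing 30 m/s
# (desired gap is 30 m, so ease off slightly):
print(platoon_speed(30.0, 25.0, 30.0))  # 27.5
```

A real platooning controller would also handle acceleration limits, sensor noise, and string stability, but this captures why the idea is tractable on limited-access highways: the control problem reduces to speed and spacing.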
From early reports (basically today), humans had to take control a time or two, but if they can make it work on this crap-tastic road system, it can work anywhere.
So it is essentially allowing a select few invited passengers to participate in testing. The trips are free, there is both a driver and an engineer in the front of the car with the driver taking over regularly and the engineer sometimes adjusting the car’s speed. This doesn’t seem any closer to widespread autonomous cars than the other trials that are underway.
Not in any meaningful way. The article cited by kopek says industry executives are divided with some saying 5 years and others several decades before we have truly autonomous cars.
Humans have good judgement (when we aren’t drunk, tired, distracted or otherwise impaired, that is), sure.
But if you are designing a car, would you choose to put only two sensors, both facing the same way with limited ability to swivel, behind a thick sheet of glass several feet into the vehicle? I can’t think of a worse setup!
We are collectively crappy drivers. Major accidents happen every day. Most of us know someone who has died in an accident. Most of us will be in an accident. Driving is the most astoundingly dangerous thing we do on a daily basis.
Self-driving cars won’t be perfect, but they’ve got a low bar to clear. They will save millions of lives and can’t come soon enough.
It depends what that “low bar to clear” is.
The problem right now is not how to clear the hurdle of safety.
The problem is how to make a self-driving car that can clear a more difficult hurdle, i.e. a car that can actually, well, drive.
Everybody is all excited over the safety issue, and how self-driving cars are better than people.
But nobody talks about the most important issue of all: The purpose of a car is to take you places.
And that’s what self-driving cars fail to do.
Yes, IF the car manages to take you where you want to go, then you arrive in safety.
But right now, self-driving cars are not capable of taking you where you want to go.
They can only take you on the highway, or on clear roads on sunny days.
They don’t know how to avoid potholes, or drive on snow. They don’t know how to drive in fog. They don’t know how to maneuver around a construction site where one lane is closed, forcing you to drive the wrong way down the opposite lane.
The safety issue is important. But it’s less important than actually driving you where you want to go.
Robo-cars may soon appear in limited uses, such as shuttles within airports.
But it’s going to take several decades before the general public buys self-driving cars to keep in their garage at home.
Self-driving cars will be, on average, better drivers than humans. There will be fewer accidents. I agree with these statements. What concerns me is that they will spread the risk from bad drivers to everyone. Currently, a good driver is less likely to cause an accident than a bad driver. When you put those drivers in self-driving cars they’re now equally likely to be in an accident. If I’m a better driver than a self-driving car I am reducing my safety by riding in one.
Only if self-driving cars are worse than some drivers. And even then, even if you are among those top 1% of drivers who can drive better than a self-driving car, if the penetration of self-driving cars is high enough, I find it hard to believe that the marginally increased danger from being in a (relatively) poorly-driven car would outweigh the huge benefit from being around a lot fewer truly horrible drivers.
Well, it wouldn’t have to. If the penetration of safe self-driving cars is that high, everybody gets that exposure benefit and it drops out of the comparison of individual cases.
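The exchange above is easy to check with back-of-envelope arithmetic. In this sketch, a driver’s total exposure is their own car’s risk plus the average risk posed by surrounding traffic; every number below is invented purely for illustration:

```python
def fleet_risk(p_good_human, p_bad_human, p_robot, frac_bad, robot_share):
    """Toy model of accident exposure (probabilities per year, all
    invented). Returns (risk if a good driver keeps driving,
    risk if that same person switches to a self-driving car)."""
    # Average risk posed by the surrounding human traffic.
    human_mix = frac_bad * p_bad_human + (1 - frac_bad) * p_good_human
    # Average risk posed by surrounding traffic overall.
    others = robot_share * p_robot + (1 - robot_share) * human_mix
    stay_human = p_good_human + others
    go_robot = p_robot + others
    return stay_human, go_robot

# Good driver 0.5%/yr, bad driver 4%/yr (20% of drivers), robot 1%/yr:
no_robots = fleet_risk(0.005, 0.04, 0.01, 0.20, 0.0)   # (0.017, 0.022)
many_robots = fleet_risk(0.005, 0.04, 0.01, 0.20, 0.9) # (0.0152, 0.0202)
```

With these made-up numbers the model shows both points at once: high robot penetration lowers everyone’s exposure (the `others` term shrinks), but because that term is identical in both outcomes, it drops out of the individual stay-vs-switch comparison, which still comes down to the driver’s own skill versus the robot’s.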
Bingo. The problem with self-drivers now isn’t so much their eyes, it’s their brains.