I’d estimate that once true self-driving cars become viable, the changeover is going to happen surprisingly fast. Development will drag along for a while, but once something that can handle a city reliably becomes available, there will be a tremendous push and huge demand.
And once “a lot” of cars are self-driving, a huge share of the issues are simply going to “go away”. With fully crowd-sourced information compiled and available, any holes in the data will be filled amazingly quickly. And what percentage of roads in the US is truly that infrequently travelled? As an estimate, I’d think at least 99% of roads see an average of one car per hour. With that sort of information coming in, and with some way for actual sentient people to help guide the process, the data available will be amazingly accurate.
Many of the problems raised above only apply in a country like the US, where individual states can have different laws. In the UK, the government can order all local councils to fit transponders to all signals. They would have to fund it of course, but it would not really be a problem. Contractors who need to put up temporary signals would also have to comply.
Of course it would not be all plain sailing - the car would have to deal with failed signals, but no doubt there will be a failsafe mode.
My prediction is that self-driving vehicles will appear in town centres first. They may well be allowed to use pedestrian walkways and bicycle lanes. They will be used like an ‘Uber’ taxi. Just use your smartphone to call one up, stick your card in a slot and off you go.
The other place is in transport yards where there are hundreds of trailers being shunted around 24/7. Replacing the drivers with computers will bring huge savings in wages, human error and collision damage. Once that is proved to work, it can be extended to inter-depot work.
It will be a long time before your Amazon or Post Office parcel is delivered by robot though.
Presumably if a pedestrian is detected, no matter what the time of day, one wants the car to assume a speed of zero. :eek:
Or more to the point, a speed that will result in the car not hitting the pedestrian. No driver comes to a halt every time they see a pedestrian anywhere.
I fully agree with you about the end position but I think it will take a while longer to get there. Smartphones were introduced more than ten years ago. They are more expensive to operate than regular phones but they are immeasurably superior because they can do so much more. Smartphones also raise privacy and control issues that bother a lot of potential users. In those ways, they sound just like self-driving cars. U.S. smartphone penetration is only 80% after ten years.

Cars are more durable than cell phones and cost a lot more to replace. I would say that I’m 90% sure that it will take more than 10 years from the date the first marketable “Level-4” self-driving car is available for sale to the date when 80% of registered cars in the U.S. are self-driving. If you think it will happen more quickly, I’d be interested in understanding why.

I wouldn’t guess when the last manually-driven car will come off the road, but the oldest registered cars today are comfortably over 100 years old. Unless laws completely force manually-driven cars off the road, it’s more likely that some of them will be on the roads for at least another 100 years. Manually-driven cars and self-driving cars will have to coexist for a long time.
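Simple fleet turnover alone backs that up. Here is a minimal back-of-the-envelope sketch; the fleet and sales figures are rough recollections, and the every-new-car-is-self-driving assumption is absurdly generous:

```python
# Back-of-the-envelope fleet-turnover model. Every figure is an assumption:
# roughly 260 million registered U.S. vehicles, roughly 17 million new
# sales per year, and (absurdly generously) every new sale is self-driving.
FLEET_SIZE = 260_000_000
ANNUAL_SALES = 17_000_000
TARGET_SHARE = 0.80

self_driving = 0
years = 0
while self_driving / FLEET_SIZE < TARGET_SHARE:
    self_driving += ANNUAL_SALES  # each new car replaces a manual one
    years += 1

print(f"{years} years to reach {TARGET_SHARE:.0%} of the fleet")
# -> 13 years, even with 100% self-driving sales from day one
```

Even under that impossible best case, turnover takes more than a decade, before anyone hesitates over price or privacy.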
[QUOTE=bob++]
Many of the problems raised above only apply in a country like the US, where individual states can have different laws. In the UK, the government can order all local councils to fit transponders to all signals. They would have to fund it of course, but it would not really be a problem. Contractors who need to put up temporary signals would also have to comply.
[/QUOTE]
The interstate commerce clause of our constitution gives the U.S. government the authority to order the same thing. However, unfunded mandates, where the federal government tells the states how to spend their money, are politically unpopular, so funding the transponders is the real issue.
I will ignore the cost of engineering the transponder system to work the way you want it to work. There are roughly 4 million miles of road in the U.S., with, let’s assume, 5 road signs or signals per mile on average that require transponders (this is a complete guess but I think it’s low). That means there are 20 million controls that we would need to supplement with transponders as you propose. Roadside electronics are never cheap once you include engineering, traffic control, and labor; even if each transponder cost just $4,000 to buy and install, and nothing to maintain, they would cost $80 billion in total.
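To put the arithmetic in one place (every input here is a guess, made only to show the scale):

```python
# Rough scale of a nationwide transponder retrofit. All inputs are guesses.
road_miles = 4_000_000        # approx. miles of road in the U.S.
controls_per_mile = 5         # assumed signs/signals needing transponders
unit_cost_usd = 4_000         # assumed installed cost per transponder

total_controls = road_miles * controls_per_mile
total_cost_usd = total_controls * unit_cost_usd
print(f"{total_controls:,} transponders, ${total_cost_usd / 1e9:.0f} billion")
# -> 20,000,000 transponders, $80 billion
```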
The U.S.'s most-recent transportation bill was adopted late last year after 10 years without a similar long-term bill (http://thehill.com/policy/finance/262171-obama-signs-305b-highway-bill). That law appropriates $205 billion in highway spending for the next five years. Thus, installing the transponders would consume roughly two full years of the entire federal highway budget, during which the U.S. government would not contribute a nickel for things like restriping highways or repairing failing bridges. Funding infrastructure improvements like you propose for self-driving cars won’t happen in the U.S. I’d be surprised if the U.K. has a similar pot of money sitting around for this sort of thing, but it is a smaller country with fewer paved roads, so maybe the scale of the problem is smaller and funding would be easier to come by. I think self-driving cars in the U.S. will have to operate on the roads we have, and every researcher working on the problem seems to assume the same thing.
I’m all for crowdsourcing lots of this data. But we’ll have the problem of blackhat crowdsourcing too. Just for the lulz some Anonymous jerks decide to crowdsource that a particular school zone’s speed limit is 70mph. Won’t that be funny?
I have a lot of belief in AI cars. But I don’t think the answer is relatively dumber cars driving with reference to a crowdsourced map. The answer is relatively smarter cars driving from what they see around them.
That’s fair. I’m imagining the foundation of the maps being carefully pre-developed by Google et al., with tremendous supplements to the maps generated and distributed automatically by self-driving cars, with the remainder of the data being timely crowdsourced updates to the static map data. The crowdsourced bit would be used confidently by the self-driving car only if it were confirmed by other users, which I explained in my post above.
Mapping companies would also have an incentive to monitor people who put in misinformation “for the lulz.” The systems wouldn’t accept anonymous contributions and would algorithmically discount contributions from unreliable submitters if other users don’t confirm the data. Data security must be integral to the design of these mapping systems to minimize the risk of blackhats attacking them at any level, but computers already operate other critical systems like nuclear power plants and ship navigation. Imagining the worst-case scenario shouldn’t stop us from embracing the improvements that technology may offer.
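For illustration, a minimal sketch of what reputation-weighted acceptance might look like; the threshold, scores, and scheme are invented for this example, not anyone’s actual design:

```python
# Hypothetical reputation-weighted acceptance of a crowdsourced map update.
# Anonymous or unproven submitters carry almost no weight; weight is earned
# as a submitter's past reports are confirmed by independent users.
ACCEPT_THRESHOLD = 3.0

def accept_update(reports, reputation):
    """reports: list of submitter IDs vouching for the same update.
    reputation: dict mapping submitter ID -> trust score in [0, 1]."""
    weight = sum(reputation.get(submitter, 0.0) for submitter in reports)
    return weight >= ACCEPT_THRESHOLD

# Five established users easily clear the bar...
print(accept_update(["a", "b", "c", "d", "e"],
                    {k: 0.9 for k in "abcde"}))                 # -> True
# ...while a thousand throwaway accounts do not.
print(accept_update([f"bot{i}" for i in range(1000)],
                    {f"bot{i}": 0.001 for i in range(1000)}))   # -> False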
Self-driving cars will squash a few toddlers and run a bus or two over a cliff. I just think this will happen less frequently than it would with people-driven cars. The U.S. loses something like 32,000 people per year to car accidents. It would take a lot of hacking for self-driving cars to generate a record that bad.
I won’t argue the maths, but why would **all** the signs need transponders? The only ones affected are the variable ones mentioned above - schools and roadworks being the main ones.
Self-driving cars will have all the information they need about fixed speed limits and light-controlled junctions. I am sure that they already recognise traffic light colours.
My maths are perfectly arguable. They are mostly numbers I made up but I’m just trying to guess conservatively to highlight the scale of the problem of improving the roads rather than improving the cars.
I understood your post to suggest that all signs, including fixed ones like speed limit signs, should have transponders. If you are just talking about variable signs, the number of transponders is much lower and perhaps the cost would be too. But in any event, in the U.S. at least, there is essentially no budget for that for the next five years.
If you assume that autonomous cars can cope with traffic lights, why can’t they also cope with flashing “school zone” signs? Or roads with speed limits that vary by day? Autonomous cars will have clocks. Even “when children are present” signs can be dealt with because autonomous cars can already tell when pedestrians are around. If you need to distinguish between adults and children, gauging their heights is about as much as a person could do and I’ll bet self-driving cars could do it even better. More likely, they would just assume, as someone above suggested, that all pedestrians are children. That’s a fail-safe; it would work well enough for me.
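A minimal sketch of that school-zone logic; the limits, hours, and rules here are illustrative assumptions, not any vendor’s actual design:

```python
from datetime import time

# Illustrative school-zone logic: apply the reduced limit whenever the
# zone could plausibly be active, and treat any pedestrian as a child.
SCHOOL_LIMIT_MPH = 20
NORMAL_LIMIT_MPH = 30
SCHOOL_HOURS = (time(7, 0), time(16, 0))   # assumed active window

def speed_limit(now, in_school_zone, pedestrians_detected):
    if not in_school_zone:
        return NORMAL_LIMIT_MPH
    if pedestrians_detected:
        return SCHOOL_LIMIT_MPH            # fail-safe: assume children
    start, end = SCHOOL_HOURS
    if start <= now <= end:
        return SCHOOL_LIMIT_MPH            # zone may be active
    return NORMAL_LIMIT_MPH

print(speed_limit(time(8, 30), True, False))   # -> 20
print(speed_limit(time(22, 0), True, False))   # -> 30
```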
Temporary traffic lights are apparently an issue for self-driving cars. For all the talk about them in this thread, I don’t know how often they are used. I recall seeing one in my life, but maybe I’ve seen hundreds of others that I promptly forgot about. Perhaps a self-driving car could cause an accident because it failed to recognize a new traffic light, but it wouldn’t be any different from a human driver in that respect. Fortunately, the self-driving car could learn from that mistake and almost instantly teach every other self-driving car about the new light, instead of dozens of people making the same mistake.
Generally, you seem to suggest that we will need to improve the roads for self-driving cars; I think engineers will have to improve self-driving cars until they are good enough for our roads. I also think they will be able to do it.
And then those anonymous jerks get fines of hundreds of dollars or more for speeding through that school zone, and/or wrap their cars around trees. And they still probably don’t get the speed changed in the database, because there are probably many times more legitimate drivers who go through that school zone. There’s a problem here, but the problem is just jerks going 70 in school zones, which is possible with or without the database.
This is fascinating. It says that Google has successfully used photos taken from a single camera on a single car driven down a particular route to automatically identify between 95 and 99% of the traffic lights on the route. They didn’t analyze the data in the car in real time, but they at least showed that the automatic light-detection algorithm can work. Google says that success depends on the area and traffic conditions, and they did the study in sunny California, probably on days with nice weather. Their reported success rate may therefore be higher than what real-world conditions would produce.
Even still, given that only one self-driving car has to identify a new traffic light before they will all be able to do so, I am convinced that self-driving cars will eventually be able to navigate new traffic lights better than the average person. Bring on the self-driving cars.
It depends on why that 1% to 5% was missed. If the first self-driving car through an intersection misses identifying the light because a bird happened to fly in front of it right when the picture was taken, then the next car will probably catch that light. If the first car misses the light because of glare from the Sun right behind it, then another car going through at the same time of day is likely to also miss it, but a car at a different time of day will probably catch it. If the first car misses the light because it’s mostly covered by heavy foliage, then all of the cars are likely to miss it (but then, so are human drivers, and the city needs to do something about that). If the first car misses it because the machine-learning software has gotten itself convinced that all traffic lights have yellow housings, but this one has a black housing, then all of the cars are likely to miss it, but most human drivers wouldn’t, and in this case the robot drivers would be significantly worse than the humans.
That’s good enough for me. If those cars only run between 1 and 5% of real stop signs, we should be perfectly safe.
Let’s not kid ourselves. I can give examples of human reasoning that is equally faulty. It’s not just bad computer programming that fails the real-world test.
My neighbors are used to local telephone prefixes as starting with 843 or 846 for landlines and 495 or 493 for cellphones. So when I tell them my cellphone number is 843-NNNN, they are sure it’s a mistake (it’s not). Their assumption is in the fallacy class of “all the swans I’ve seen are white, therefore no swan can be black.”
Re: school zones. The easiest programming is to just assume school is in session 24x7, and go 25 (or 20 or whatever your jurisdiction sets for them).
Slowing down 10 mph for 2-4 blocks isn’t going to be a big problem. And if I’m in a truly self-driving car, I’m probably playing Candy Crush and don’t even notice it slowed down.
They didn’t disclose why they failed to identify the lights they missed. Glare was one issue. I’m not a computer scientist, but I understand that the hill-climbing approaches they use don’t find the optimal answers; they just find answers that are better than the ones tried before. I’m not sure whether that means their methods simply can’t explain why they missed the lights they missed. If someone else wants to correct my understanding, I’d appreciate it.
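For anyone who hasn’t seen the term, here’s a toy example of hill climbing stopping at a local optimum; the function is contrived purely for the demonstration:

```python
# Toy hill climbing: repeatedly step to a strictly better neighbor until
# none exists. With two peaks, the starting point decides which you find.
def f(x):
    return -(x - 1) ** 2 * (x - 4) ** 2 + x   # two local maxima

def hill_climb(x, step=0.01):
    while True:
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x                           # no better neighbor: stop
        x = best

print(round(hill_climb(0.0), 2))   # stops at the lower peak near x = 1
print(round(hill_climb(5.0), 2))   # stops at the higher peak near x = 4
```

Neither run ever "knows" the other peak exists; it just stops improving.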
Google looked at a series of images taken at 4 Hz as the car drove through the route. They also “controlled for obstructions, curves, etc. by only counting intersections that a human could classify from the camera images.” So foliage blocking a person’s view would also have been excluded from the human’s count of total findable lights. The bird would have had to hover in front of the light for it to be missed in every image. Of course, excluding these obstructed lights overstates Google’s success to some degree.
Your black light housing problem is closer to another issue they will have – they looked at only 13 light configurations. There are a lot more in the real world, like horizontal lights in Texas, HAWK lights, or arrow lights that don’t point directly up, left or right. Their 13 configurations might have described 100% of the lights in their study area but it is certainly somewhat smaller than the universe of traffic lights in the real world.
They also had trouble with dim lights. Their proposed solution is to flag certain lights as dim and presume they are green if the car doesn’t see a yellow or red light. This is “fail dangerous” in this edge case, but it seems like part of the compromise necessary to make self-driving cars work well enough. It may contribute to the occasional accident in the real world, but as long as the accidents arising from these types of compromises are less frequent than people making equally catastrophic mistakes, the self-driving car still wins. Google could also contribute to solving the dim-light problem by reporting the known dim lights to cities and getting them repaired.
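Sketched as logic, the compromise looks something like this; the flagged-light registry is my assumption, since Google hasn’t published an implementation:

```python
# Sketch of the "dim light" compromise: lights known to image poorly are
# flagged, and an undetected state on a flagged light is presumed green.
KNOWN_DIM_LIGHTS = {"intersection_42"}    # hypothetical flagged registry

def light_state(light_id, detected_color):
    if detected_color is not None:
        return detected_color              # trust what the camera saw
    if light_id in KNOWN_DIM_LIGHTS:
        return "green"                     # fail-dangerous presumption
    return "unknown"                       # otherwise treat as uncertain

print(light_state("intersection_42", None))   # -> green (the risky case)
print(light_state("intersection_7", None))    # -> unknown (slow and reassess)
```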
I guess I wasn’t explicit enough.
Crowdsourcing, whether done manually by human end users with an app or automatically by the AI cars’ own sensors reporting their observations, implies there are centralized databases taking in billions of updates from millions of users.
My thought is it’ll be darn hard to make that process both highly open and also highly secure such that a blackhat sitting at a PC someplace can’t submit millions of updates of his own choosing. Add in a few botnets and 495 of the last 500 “cars” through that school zone have reported the speed limit is 70. So the database decides that’s true and publishes an update to the rest of the local traffic coming through.
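To make the failure concrete, here’s a toy version of a naive last-500-reports majority vote, matching the numbers above:

```python
# Toy naive-majority aggregation over the last 500 "cars" through a zone.
# A botnet submitting 495 forged reports wins outright.
reports = [70] * 495 + [25] * 5   # mph; 495 forged, 5 genuine

def naive_majority(reports):
    return max(set(reports), key=reports.count)

print(naive_majority(reports))    # -> 70: the forged limit gets published
```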
Is this an insurmountable obstacle? No. Is it one more challenge? Yes. Is my particular example of blackhat crowdsourcing a school zone speed limit a compelling real-world target for bad guys? No, but it’s a simple enough example to readily understand.
Again, I’m a fan of AI and self-driving cars. I dislike these threads where somebody picks some trivial driving task, such as school zones, and presents it as an “Aha, gotcha!” that nobody in engineering has ever thought of or can ever solve. That thinking is nonsense.
But it remains the case that crowdsourcing is inherently a low-trustworthiness information source. The general idea of the world being instrumented by all these devices cooperating and all of us enjoying the combined fruits of that cooperation is compelling. And very pro-social and share-y and good. Bring on the 21st Century of Enlightened Cooperation by legions of smiling citizens. Where do I sign up? I’m all for it.
But until it is made almost bulletproof against deliberate misinformation, we can’t let our machines rely on this low-trust info. It’s scary enough what happens when publics react to rumors spread by old-fashioned word of mouth. Cf. Hutus & Tutsis. But letting our machines take real-world actions in response to crowdsourced data, in the face of blackhat crowdsourcing, is IMO inviting disaster.
So far we haven’t (AFAIK) seen much blackhat crowdsourcing. IMO that’s because there’s not yet much way to profit by it or to cause harm by it. That will change when crowdsourced data starts being used for real decisions with real consequences.
If I understand it right, their light-finding algorithm has only a 1-5% chance of missing the light. So the first self-driving car through the new intersection has, at worst, a 95% chance of identifying the light. If each self-driving car has a fully independent chance to identify the light, two self-driving cars have a 99.75% chance of identifying it, and three have a 99.9875% chance. Once one car finds it, that car can report it to all the other cars so they can look for it. You are grossly overestimating the risk of these cars missing these lights. Go look at my anecdata above for just how poorly people do at finding these new traffic controls. My money is on the self-driving cars.
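The arithmetic, under the admittedly strong assumption that each car’s pass is independent:

```python
# Probability that at least one of n independent passes detects the light,
# assuming each pass misses with probability `miss` (worst case 5%).
def detection_probability(miss, n):
    return 1 - miss ** n

for n in (1, 2, 3):
    print(n, f"{detection_probability(0.05, n):.4%}")
# 1 95.0000%
# 2 99.7500%
# 3 99.9875%
```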
LSLGuy, you are describing security problems. They are very real and I’m not discounting them. Car makers seem bad at mitigating security issues in their connected cars now, so the problem of hacking cars to control them already exists and threatens people today. Self-driving cars are only incrementally more dangerous than certain current Chrysler products, whose poorly-secured fancy infotainment systems already expose their owners (see: Hackers Remotely Kill a Jeep on the Highway—With Me in It | WIRED).
The engineers working on self-driving cars are probably better able to design for security than today’s car infotainment system manufacturers. Google has pretty good experience designing for network security. There is a very good chance that self-driving cars will be less subject to hacking than manually-driven ones. One analogy: consider whether people should fly given that people can sneak bombs onto planes. And yet, if I understand it right, you fly all the time. Flying is still safer than driving despite the risk of bombs. At the very least, self-driving cars have the potential to be much safer than people.
Oh, indeed, and I didn’t mean to imply otherwise. One might even say that the reason machine-learning systems can make mistakes like that is because they’re emulating another system that can make mistakes like that, namely us.
Ah, so you’re not worried about bad crowdsourcing; you’re worried about hacking. That is a valid concern, but not one particularly connected to crowdsourcing, and Google is eminently well-qualified to deal with it.
Tired and Cranky, as you mention, that’s only valid if each car’s chance of noticing the light is independent. In reality, it’s almost certainly not to at least some degree, and the degree to which it’s not makes a lot of difference to the safety of the system.
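To put numbers on that: suppose part of each car’s miss rate comes from a cause every car shares, like an unusual housing or a blind spot in the training data. A toy model, where the split between shared and independent causes is purely illustrative:

```python
# Toy model: a light is missed either for a shared reason (all cars miss
# it) or for car-specific reasons (independent per pass).
def detection_probability(n, shared_miss, independent_miss):
    # Detected only if the shared failure is absent AND not every
    # independent pass fails.
    return (1 - shared_miss) * (1 - independent_miss ** n)

# Three passes at a fully independent 5% miss rate, versus roughly the
# same ~5% per-car miss rate with a 2% shared component:
print(f"{detection_probability(3, 0.00, 0.05):.4%}")  # -> 99.9875%
print(f"{detection_probability(3, 0.02, 0.03):.4%}")  # -> 97.9974%
```

Even a small shared component puts a hard ceiling on what extra passes can buy you; no number of cars can see past a cause they all share.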