No I don’t. Not for a while. Lane assist in current cars is only allowed when there are painted lines. If they had tech that was able to handle non-lined roads, wouldn’t it be out there already? Staying on the right part of the road is basic stuff. Either the sensors aren’t there, or the AI isn’t there. I mean, maybe it exists, but I don’t think it’s going to be available to the public anytime soon. And I don’t see them handling snow anytime soon either.
And in 2015 people didn’t see autos as a thing at all, yet here we are.
Thanks Maserschmidt. You’re right about Volvo. They’ve walked their prediction so far back that they are further from the goal than when they started. So, they are no longer a leader in this space.
I was ready to concede that Google let their deadline slip but, in fact, they met that deadline. They no longer plan to market a self-driving Google car; it seems instead they will just install their technology in a ride-sharing fleet. However, we can’t call their predictions woefully optimistic. In 2012, Sergey Brin announced that, “You can count on one hand the number of years it will take before ordinary people can experience [self-driving cars].” In March 2018, Waymo began to operate a fleet of self-driving cars for public users in Phoenix and some other Arizona communities. Waymo’s “Early Rider” program allows real people to hail a self-driving car, with no drivers, engineers, or other minders on board, to run their errands. (Waymo Is Now Taking Passengers In Its Driverless Cars | Fortune).
You can even be one of those ordinary people. Waymo - Self-Driving Cars - Autonomous Vehicles - Ride-Hail
Sergey Brin was only about six months off in his pretty ambitious timing. Seeing as this is available only in Phoenix and other Arizona communities, it might not be as many “real people” as we imagined six years ago but Waymo did what Mr. Brin said they would.
Ford did predict a self-driving taxi fleet in 2021, but Ford also doesn’t support your thesis that these deadlines are apparently slipping. The article you cited is over a year old. Even though Ford is a laggard in this space, as of last month they still planned to have a level 4 self-driving taxi fleet on the highway in 2021. (Who’s Winning the Self-Driving Car Race? | Fortune). Since the article you cited, Ford has invested $1 billion in Argo AI to develop self-driving cars. (An inside look at Ford’s $1 billion bet on Argo AI | The Verge)
It’s not apparent to me that Ford has let their deadline slip. They probably won’t make it but they are still trying. I don’t think it will take them decades to do it.
The preliminary NTSB report on the Santa Clara Tesla crash just came out. Somehow the car missed the highway attenuator, even though it has very clear markings on it. The last 8 seconds are pretty ugly.
Thanks again, Maserschmidt. People who want real self-driving cars seem to be settling for Teslas and treating them like self-driving cars even though they actually require all the attentiveness of any other car. Cadillac has a similar “Super Cruise” system in some of its cars but the Cadillac system enforces driver vigilance with an onboard camera that checks whether the driver is looking at the road. Tesla says that Cadillac’s monitoring system is unreliable so they didn’t install anything like it in the Teslas.
It’s unclear from the report whether the attenuator had clear markings. The report says the attenuator in the Tesla collision was previously damaged in another collision. The report also shows a side-by-side picture of damaged and undamaged attenuators and the damaged one has no special markings. Perhaps the attenuator markings were all destroyed in the earlier collision. That said, the Tesla should still have avoided it. Most of the obstacles that self-driving cars face won’t have painted tiger stripes on them. Self-driving cars need to deal with those obstacles anyway.
Being “autonomous” while requiring the driver to be fully supervising and able to respond in a microsecond is a recipe for disasters. The car being able to take over from a driver’s inattention or able to actually be L4 level should be the only options. Neither fish nor fowl, no.
Ah, I may have misread their diagram. If it was damaged in the previous crash, they could have said that, but I agree it’s unclear. If that attenuator on the right shows what it looked like before this crash, I can imagine it would be hard to pick out optically against the cement highway.
Musk possibly gets some share of the blame here because he has allowed the feature to be referred to as “Autopilot,” and maybe a little more of the blame because of his insistence that a car can be fully autonomous without using lidar. In either case, and to your point, the car is supposed to alert someone if something is in its path or if it is confused.
There is also, I think, a factor involving familiarity and comfort level. I’m sure this guy had been driving for a while with Autopilot turned on, and had gotten used to not having to look at the road frequently, despite the speed he was traveling at.
We toured an MIT lab where they were doing work to determine whether cameras in the car could identify the driver’s attention level…essentially it was a car driving all around the city with a giant processor in the trunk capturing data and making evaluations.
It was a neat project, and when I asked one of their statisticians how good their true positive + true negative rate was, he said 97%. Then he gave a shrug, and said “97% is great in the world of statistics, but not quite where we want to be when you’re talking about someone driving a car.”
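Just to put that 97% in context with some rough numbers (the sampling rate and trip length here are assumptions I made up for illustration, not anything from the MIT project):

```python
# Rough sketch: how often a 97%-accurate attention monitor gets it wrong.
# The sampling rate and trip length below are assumptions for illustration only.

accuracy = 0.97          # claimed true-positive + true-negative rate
checks_per_minute = 12   # assume the system evaluates the driver every 5 seconds
trip_minutes = 60        # assume a one-hour drive

checks = checks_per_minute * trip_minutes
wrong_calls = checks * (1 - accuracy)

print(f"{checks} evaluations per trip, about {wrong_calls:.0f} wrong calls")
# -> 720 evaluations per trip, about 22 wrong calls
```

Roughly two dozen bad calls an hour, under those made-up assumptions, is either a lot of nagging or a lot of missed inattention.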
It sounds like the one on the right is after the Tesla crash. But the left one is intact, while the report clearly says that at the time of the accident, it was already damaged from a crash 5 days earlier. They probably didn’t have a photo of what it looked like at the time of the Tesla crash.
I thought of your portrayal of AI as I read this article in a recent Science magazine.
Again, note: the tactic of these machines is “machine learning” - millions and millions of trials and errors, made possible by powerful processing well outside the reach of what will be in vehicles. Learning to drive by trial and error? Well, it is sort of like another bit from the same article:
For now autonomous vehicles will not be doing Alpha-style AI. They will have programmed rules that they follow.
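To illustrate the distinction (this is a toy of my own, not anything from the article): a programmed-rules controller is just explicit if/then logic that an engineer wrote down, rather than a policy learned from millions of trials.

```python
# Toy example of a "programmed rules" controller, as opposed to a learned policy.
# Thresholds are invented for illustration; real systems are vastly more complex.

def rule_based_speed_command(obstacle_distance_m: float, speed_mps: float) -> str:
    stopping_distance = speed_mps ** 2 / (2 * 4.0)  # assume ~4 m/s^2 braking
    if obstacle_distance_m < stopping_distance:
        return "emergency_brake"
    if obstacle_distance_m < 2 * stopping_distance:
        return "slow_down"
    return "maintain_speed"

print(rule_based_speed_command(obstacle_distance_m=30.0, speed_mps=27.0))  # ~60 mph
# -> "emergency_brake" (27^2 / 8 is about 91 m needed, only 30 m available)
```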
Now there’s a company that’s developed a system where a driver can take over remotely in those situations where the AI fails.
I’m not sure why that’s better than having someone who’s actually in the car take over. I suppose it would be great when everyone in the car is coming home after a party, drunk.
From the link:
I’ve got the same question, kunilou. Seems to make more sense to leave the steering wheel in the car, and ping the person sitting in the driver’s seat.
Sure, the remote operator would be able to see the view through all the cameras and stuff, but each camera would be providing a different view on a different screen, rather than the much more unified view that a driver has when in the driver’s seat.
I’d like to see some tests comparing how remote operators remotely drive a car equipped with AV tech against a random sample of normal, everyday drivers driving normal, everyday cars, in the sorts of situations where the AI would freeze up and ‘ping’ the remote driver.
Also, how about response time? What’s the lag time between the time the AI pings the remote driver, and the time the remote driver figures out which car he needs to drive, and is able to grasp what he’s looking at?
“You may find yourself behind the wheel of a large automobile,” but if you find yourself there abruptly, how quickly can you get a fix on what’s going on and what you need to do?
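To put rough numbers on that lag question (the latency figures below are pure guesses on my part, just to show the scale of the problem):

```python
# How far a car travels while a remote operator gets oriented.
# All lag figures below are assumptions for illustration, not measured values.

speed_mph = 65
speed_fps = speed_mph * 5280 / 3600          # ~95 feet per second

network_lag_s = 0.5       # assume round-trip video/control latency
orientation_lag_s = 3.0   # assume time to find the right feed and grasp the scene
reaction_lag_s = 1.0      # assume ordinary perception-reaction time

total_lag_s = network_lag_s + orientation_lag_s + reaction_lag_s
print(f"{total_lag_s:.1f} s of lag = {speed_fps * total_lag_s:.0f} ft traveled")
# -> 4.5 s of lag = 429 ft traveled
```

Under those guesses, the car covers well over a football field before the remote driver is really driving it.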
Exactly what I was thinking. Heck, the problem with having a human in the driver’s seat who isn’t paying attention is the time lag to observe, comprehend and take action. It strains credulity to think some remote person could do so quickly.
I think there are far too many variables for self driving cars anytime soon, except for closed course type settings.
I get the impression that the remote operator would only take over for non-time-critical situations, e.g. the construction site example that was brought up. Other examples might be a pedestrian deciding to camp out in front of the car, the road becoming impassable, etc. I don’t think it’s realistic for a remote operator to help in situations where the AI fails to recognize or respond to some fast-moving hazard.
The remote operator option can help when there are only a few self-driving cars around, but I don’t think it would scale well to when there are a lot of them. Let’s take that unfamiliar construction site scenario - if there are only a couple of self driving cars in line, a small number of remote operators could get them through reasonably quickly, but if there are hundreds of self driving cars per hour arriving at this construction site, it would likely take a while for the operators to get through them all (assuming that there is a relatively small # of remote operators relative to the total self driving car fleet - if there are a large # of remote operators, it kind of defeats the purpose!).
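A quick back-of-the-envelope sketch of that scaling problem (the arrival rate and handling time are made-up numbers):

```python
# Rough queueing arithmetic for remote operators at one construction site.
# Arrival rate and handling time are invented assumptions for illustration.

cars_per_hour = 300              # assume self-driving cars hitting the confusing spot
minutes_per_intervention = 2.0   # assume a human takes ~2 min to walk a car through
operators = 5

demand_minutes = cars_per_hour * minutes_per_intervention   # 600 operator-minutes/hour
capacity_minutes = operators * 60                            # 300 operator-minutes/hour

print(f"demand {demand_minutes:.0f} min/h vs capacity {capacity_minutes} min/h")
# -> demand exceeds capacity 2:1, so the queue just keeps growing
```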
In any case, living in Canada, I won’t get too excited about self-driving cars until I hear that significant progress is being made on driving in inclement weather. I wonder whether, with the much larger range of sensory information that AI cars will have access to, they may eventually be much more adept winter drivers than humans - e.g. will it be feasible for them to detect black ice ahead of time and slow down preemptively?
Yeah, if a motorist driving in Baltimore were to forget to program the car not to use its turn signals, it would cause a lot of confusion.
Yup. The amount of processing and storage necessary to do pure neural net-style AI isn’t in place, though with distributed analysis it’s possible.
Another big challenge is integrating observations from multiple sensors/kinds of sensors.
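At its simplest, that integration problem looks something like textbook inverse-variance weighting (this is a generic illustration, not how any particular carmaker actually does it):

```python
# Minimal sensor-fusion sketch: combine two noisy range estimates of the same
# obstacle by weighting each sensor by the inverse of its variance.
# The variances below are made-up numbers for illustration.

def fuse(estimate_a: float, var_a: float, estimate_b: float, var_b: float):
    w_a, w_b = 1 / var_a, 1 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1 / (w_a + w_b)
    return fused, fused_var

# Radar says the obstacle is 42 m away (fairly precise), camera says 48 m (noisy).
distance, variance = fuse(42.0, var_a=1.0, estimate_b=48.0, var_b=4.0)
print(f"fused estimate: {distance:.1f} m (variance {variance:.2f})")
# -> fused estimate: 43.2 m (variance 0.80)
```

The hard part in practice is that the sensors disagree in more interesting ways than this, and the system has to decide which one to believe.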
And a third big challenge is edge cases. Today while I was driving south on I-95 through Rhode Island, about a quarter-mile ahead of me a few deer bolted across the highway in between cars moving at 65+. As a result, I slowed down because I could see more deer lurking in a copse of trees in the median area. Rare? Sure. But there are thousands of other edge cases.
The recent accidents have reminded the public that this technology is still experimental.
The biggest problem is still accurate sensing: knowing the difference between a woman pushing a bicycle and a shadow on the road.
Expecting a passenger to react quickly enough and take control is problematic. The Uber test driver was alert. She wasn’t reading a book or distracted watching a video. She still couldn’t react to the emergency quickly enough to prevent Elaine Herzberg’s death. She was about 20 seconds too slow. It makes a difference when you’re driving and already have your feet on the pedals. You can slam on the brake much faster.
Tesla plans to roll out a full self-driving package in August. I hope the technology is ready. I have doubts that it is.
https://www.google.com/amp/s/www.theverge.com/platform/amp/2018/6/11/17449076/tesla-autopilot-full-self-driving-elon-musk
The software is the big problem, not the sensors. It’s not capable of properly analyzing all the factors in real time. Another issue is keeping the sensors clean: I passed a construction site the other day, actually still in the destruction stage, and got a coating of dust all over my car.
No she wasn’t.
She was staring down at her phone! Of course she couldn’t react quickly enough.
Effective monitoring of an automated system like this requires your brain to be engaged in the activity exactly as much as it would be if you were controlling the car yourself. The problem is that, because you’re not controlling it yourself, there is normally no immediate consequence to letting yourself become disengaged, so it’s easy to drift off into your own world and stop being an effective backup.