Does a new Tesla have the hardware to be fully autonomous?

If you somehow uploaded KITT (or another AI of roughly human intelligence) into a Tesla, would all the sensors, servos, etc. be in place for the car to drive itself as well as a human in all situations?

Yes, the Tesla is completely computer controlled. It can turn itself on, turn the wheel, accelerate, brake, operate the turn signals, turn on the lights, open and close the doors and liftgate (Model S and X only), etc.

There is no factual answer on whether a Tesla will in fact drive “as well as a human” once Tesla deploys its self-driving software, or whether the onboard computer is powerful enough to run a real-life “KITT”, but given the premise that you could run KITT on a Tesla, it could do everything.

Oh and it could talk, too.

It could change gears but not physically move the gear lever.

They claim as much, but have yet to demonstrate that it’s true.

The other companies working on self-driving cars all use LIDAR sensors, which are expensive. Tesla vehicles don’t have them. Tesla keeps claiming they can someday get their software good enough that it doesn’t need the sensors all the other manufacturers think they need. Maybe they’re right, or maybe they’ll eventually have to introduce a new improved LIDAR-using model, if they survive that long.

“Fully autonomous” is a continuous scale, not a discrete one. I.e., you can’t say that a car is “fully” autonomous, since there will always be new hardware, new algorithms, etc. that make things even “more autonomous”. Simply adding more sensors and computing power will make a car better.

So a new Tesla can run in a fairly autonomous way, but not as well as it someday might.

I think it’s probably best we stick to the SAE “levels” of 0-5, where 4 and 5 are the “fully” autonomous capabilities.
Tesla is currently somewhere slightly above Level 2, which is where any car with a full ADAS package sits.
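
For reference, here’s the gist of those SAE J3016 levels as a lookup table (the one-line descriptions are my paraphrases, not the standard’s exact wording):

```python
# Rough paraphrase of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: steering OR speed (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed, human must supervise",
    3: "Conditional automation: car drives, human takes over on request",
    4: "High automation: no human needed within a limited domain",
    5: "Full automation: no human needed anywhere a human could drive",
}
```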

In May this year Tesla switched the computer modules: all new cars have a dedicated AI (artificial intelligence) chip to better classify and recognize surrounding traffic and other road objects. People who purchased the Full Self Driving option will also have their vehicles retrofitted with the new computer module when it arrives. (That’s me, Herc!)

As mentioned, the Tesla relies on cameras and AI visual recognition software rather than LIDAR to determine what is ahead. Camera tech is necessary in any case, because LIDAR can tell you about obstacles, but you need a camera to tell you where the road turns and other interesting stuff.

I can’t speak for research vehicles, but my Model 3’s autopilot is pretty good. However, it has annoying problems. Nominally level higher-speed urban roads here wave up and down a few feet between the storm drains. Tesla judges distance at a basic level by where the object appears relative to the horizon, meaning as you drive up the slight incline it slows because it thinks cars 200 feet away are closer than that. Then you crest and head down, and it overestimates the distance and speeds up. Once you are close, it judges distance much better. (Apparently this horizon issue caused problems when testing in Australia: a kangaroo in mid-hop seemed like it was far away, then appeared to approach rapidly when it came down.)

If a car makes a left turn across my path even 200 feet away, it begins braking hard, even though the car will be through long before I arrive at that spot. Similarly, the AI has trouble so far with road conditions; it can’t tell when the road is very icy and it should do less than the speed limit (but then, neither can many humans). The current AI still isn’t programmed to recognize and react to traffic lights and stop signs, but then there are so many varieties of lights that training the car’s AI to distinguish them all seems a daunting task.
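
To make the horizon trick concrete, here’s a minimal sketch of flat-ground monocular ranging. The camera height, focal length, and pixel offsets are invented for illustration; this is not Tesla’s actual pipeline:

```python
import math

CAMERA_HEIGHT_M = 1.4      # assumed camera height above the road
FOCAL_LENGTH_PX = 1000.0   # assumed focal length in pixels

def flat_ground_distance(pixels_below_horizon: float) -> float:
    """Range an object by how far below the horizon its road-contact point
    appears in the image. Only valid if the road between us is flat."""
    angle = math.atan(pixels_below_horizon / FOCAL_LENGTH_PX)
    return CAMERA_HEIGHT_M / math.tan(angle)

print(round(flat_ground_distance(14)))  # 100 (meters) on truly flat ground
# If the road dips or crests, the contact point shifts in the image while the
# true range is unchanged, so the same car can read as only ~70 m away:
print(round(flat_ground_distance(20)))  # 70 -- hence the premature braking
```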

Like many cars, it does have ultrasonic sensors to detect immediately nearby objects, such as when parking.

I’m still not sure if the AI “learns”. For example, I’ve driven this road dozens of times; even without lane markings, GPS should tell it where the lane is. If I come to an intersection with confusing traffic lights, will it figure out from my behaviour which lights it should watch for? Like many navigation systems, it does start to recognize common destinations. Plus, when there are updates, they get pushed out to the vehicle, so I always have the latest software.

I wonder if in the future the AI will simply refuse to drive itself if the weather is too bad; heavy rain could mess up visual systems, as could fog and snow. Snow also means not being able to see the road. Often we are told humans should not drive in these conditions. The AI simply may not be able to.

As for hardware, yes… all vehicles have the cameras and servo controls - brakes, accelerator, steering - needed to operate the vehicle.

It might be computer controlled but does it have enough cameras and other sensors to get enough input for driverless operation?

Teslas don’t have a transmission; there’s just a single fixed-ratio gear reduction, so there are no gears to change.

Nobody knows what sensors are necessary for a road-legal fully autonomous car, because we aren’t there yet.

As already noted, it lacks LIDAR, which is something all other developers of self-driving cars think is essential.

Hmm, so its main view of the outside world is through cameras?

I suppose it has multiple cameras and a wider FOV than human vision, but how is it going to deal with not seeing road markers during bad conditions (fog, snow, hard rain)? Humans have a hard time with that themselves, but they can make up for it somewhat by, for example, driving in the snow grooves left by previous cars.

Things like this are some of the main reasons why there’s disagreement on how close Tesla is to an actual full self-driving feature. On one hand, with just a minor computer upgrade to a Tesla Model 3 that you can buy today (and, as mentioned, the upgrade will be available very shortly), you can see that the capability has a lot of potential.

However, as you would surely point out, that video shows what the car can do in conditions that are not challenging at all. We don’t know how a Tesla that is available now, and will receive a series of software upgrades over the next several years, will deal with all manner of hazards, from potholes to double-parked cars to road construction to blizzards.

The cars that Waymo drove for over 5 million miles on public roads were not road-legal?

They have permits to operate them with the stipulation that an attentive “safety driver” be at the ready to take control at any time.

Teslas already do this. The current Autopilot, while not yet close to true self-driving, is generally very good at lane following, even with highly obscured or indirect boundaries. The neural nets are trained on all kinds of situations like this and they pick up on the same clues that humans do.

Humans drive without LIDAR; ergo, self-driving is possible without LIDAR.

The only question about LIDAR is whether it will hasten the arrival of self-driving. Musk’s opinion is that it’s a crutch: you have to solve the vision problem anyway, so LIDAR is just a distraction from that goal. There’s no problem that LIDAR solves that you don’t have to solve anyway using just vision. It remains to be seen whether this is correct, but so far no one has identified such a problem.

Human drivers aren’t very safe. We don’t yet know if we as a society would accept self-driving cars that are only as safe as human drivers. That would mean we’d be OK with self-driving cars killing 40,000 people every year.
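
For scale, that 40,000 figure is roughly what falls out of US mileage and the human fatality rate (both numbers below are approximations):

```python
US_MILES_DRIVEN_PER_YEAR = 3.2e12   # ~3.2 trillion vehicle miles annually
HUMAN_DEATHS_PER_100M_MILES = 1.2   # approximate US fatality rate

deaths = US_MILES_DRIVEN_PER_YEAR / 1e8 * HUMAN_DEATHS_PER_100M_MILES
print(f"{deaths:,.0f}")  # -> 38,400: a fleet merely matching humans keeps this toll
```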

The AI “learns”, but not on a micro scale like that. The neural net is trained by Tesla and sent to your vehicle with its capabilities already “learned”. Tesla collects insane amounts of telemetry from anyone who doesn’t opt out. They collect anonymized stats whether you’re on autopilot or driving manually. This data, billions of real-world miles of it now, is sent back to the mother ship, where the neural net is trained. New versions of the neural net software are then pushed to cars during an update. They don’t typically announce that in release notes, but Elon will sometimes mention it on Twitter.

So yes, it will figure out these things in time from your data, along with that of any other Tesla owner who passes that intersection. The “autonomy day” videos on YouTube go into a lot of fascinating detail on how this works.
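
A minimal sketch of that loop, assuming a hypothetical fleet API (Car, collect_telemetry, and install_update are invented names, not Tesla’s):

```python
from dataclasses import dataclass, field

@dataclass
class Car:
    opted_out: bool = False
    model_version: int = 0
    logged_miles: list = field(default_factory=list)

    def collect_telemetry(self):
        return self.logged_miles        # anonymized stats, per the post

    def install_update(self, version: int):
        self.model_version = version    # OTA push; the car only runs inference

def fleet_learning_cycle(fleet, current_version: int) -> int:
    # 1. Pool telemetry from everyone who hasn't opted out.
    data = [m for car in fleet if not car.opted_out
            for m in car.collect_telemetry()]
    # 2. Retrain the neural net centrally on the pooled data (stubbed here).
    new_version = current_version + 1 if data else current_version
    # 3. Ship the new weights to the whole fleet in the next update.
    for car in fleet:
        car.install_update(new_version)
    return new_version

fleet = [Car(logged_miles=[1.0, 2.5]), Car(opted_out=True)]
print(fleet_learning_cycle(fleet, current_version=1))  # -> 2
```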

Currently the training has mostly been focused on highway driving, which is the only place you can use full “navigate on autopilot”, but it’s quite good there. Earlier this year, I was caught in a torrential storm on the highway and didn’t feel comfortable either pulling over or finding an exit. Visibility was horrible, and for large portions of the time I could not make out the lane markings myself. Autopilot had no problem with it, and any time I could make out the lane markings, I could see it was keeping the lane just fine.

It was notably far more capable than I was in that situation. That’s not going to be true of every situation yet, but I think it will be in time. When the weather reduces visibility enough, it will disable “Navigate on Autopilot”. This just means it won’t switch lanes, take exits, etc. autonomously. It will still hold your lane and speed and avoid obstacles in this mode. I’ve noticed this happening less and less, though that may be selection bias.

Humans are very safe under the right conditions. Almost none of those accidents can be traced to deficiencies in our sensor inputs. They’re almost all because the human was drunk, or tired, or distracted, or driving aggressively, or reacted too slowly, or the like. Even when our sensory inputs are involved–for example, rear-ending someone because we were trying to change lanes and not looking forward–car vision systems will have the advantage because they have 360 degrees of cameras going all the time.

Maybe LIDAR will fix some subset of the remaining failure modes, but this will be a tiny number compared to the current death count.

That said, there is an effect which I suspect will eat a lot of the gains at first. I thought about this today when giving room to some idiot who decided to cross five lanes of traffic quickly, and without looking, to make an exit. And that is that there is a subset of asshole drivers who depend on others driving defensively to avoid accidents.

As bad as these drivers are, the only reason they aren’t worse is because of other people paying attention and getting out of their way. Driving defensively is a subtle thing that’s going to be hard for the first couple of generations of self-driving systems. So I think there will be an increase in some categories of accidents, not because the self-driving cars did anything wrong, but because they aren’t making as many allowances for the worst human drivers.

Yeah, a big problem is that right now, it’s mostly reactive. It’s pretty good at some aspects of defensive driving. It’ll brake to avoid someone entering your lane, or swerve if there’s room to do so and someone is about to hit you. There are still some issues here, like what people call “phantom braking”, but it’s pretty good.

It won’t plan ahead, though. It won’t yet identify someone driving like an idiot up ahead and try to avoid them. If it thinks everything’s good, it’ll just hold the lane and drive, even if there are small inputs it could make to increase safety by reducing the potential for an accident.

One example that annoys me daily: some people will tend to pace you on the highway, starting to overtake and then stopping and camping out either right next to you or right in your blind spot. That removes one of my options if I need to make a defensive move. I’ll almost always speed up or slow down to get away from them if it’s safe to do so, but the Tesla doesn’t care.

This also annoys me when people are excruciatingly slow to pass trucks or other large vehicles. I don’t want to hang out in the lane next to one of those for any longer than necessary. I’ll hang back and let the car in front pass, and then pass quickly myself when there’s enough room. Autopilot will just pass the truck at a 0.1 mph speed differential, just like the idiot in front of it did. These things, I think, will be tougher for autopilot to learn.

Agreed. The fast reaction time helps to a point, but it needs to have a greater ability to plan ahead. Even in non-safety scenarios, like someone changing lanes in front of you, it should sense the lane change and proactively ease away instead of only doing so aggressively once it detects the lane intrusion.

Some of these things can be coded up relatively easily, I think–for instance, it could have a kind of soft “magnetism” that tries to stay out of blind spots, either speeding up or slowing down gently so as to not spend too much time there. But other situations are more difficult, like someone a few lanes over moving towards you aggressively.
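
A toy version of that “soft magnetism”, with a made-up pacing zone and nudge size:

```python
PACING_ZONE_M = 6.0   # assumed: within this distance alongside, we're pacing
NUDGE_MPS = 1.0       # assumed: a gentle, comfortable speed adjustment

def blind_spot_nudge(rel_position_m: float) -> float:
    """Speed delta (m/s) to stop pacing a car in the adjacent lane.
    rel_position_m: neighbor minus ego along the road (positive = ahead)."""
    if abs(rel_position_m) < PACING_ZONE_M:
        # Drop back if they're slightly ahead; pull away if slightly behind.
        return -NUDGE_MPS if rel_position_m >= 0 else NUDGE_MPS
    return 0.0

print(blind_spot_nudge(3.0))    # -1.0: ease off and let them go by
print(blind_spot_nudge(-2.0))   #  1.0: speed up out of their blind spot
print(blind_spot_nudge(40.0))   #  0.0: far enough away, leave speed alone
```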

It really needs to have a dynamic “anxiety map” that varies based on some sense of danger, trying to stay out of bad locations or at least trying to minimize the amount of time spent in them. But building such a map will be a challenge.
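
One way to sketch such an “anxiety map”: score each nearby (lane, slot) cell by the hazards observed there, and have the planner prefer low-risk cells. The hazard names and weights here are invented for illustration:

```python
# Invented hazard weights -- tuning these well is the hard part.
RISK_WEIGHTS = {
    "blind_spot": 3.0,      # lingering where a neighbor can't see us
    "beside_truck": 2.0,    # prolonged exposure next to a large vehicle
    "erratic_driver": 5.0,  # near someone driving unpredictably
}

def cell_risk(hazards: set) -> float:
    """Total risk score for one (lane, slot) cell."""
    return sum(RISK_WEIGHTS.get(h, 0.0) for h in hazards)

def safest_cell(anxiety_map: dict) -> tuple:
    """Pick the reachable cell with the lowest accumulated risk."""
    return min(anxiety_map, key=lambda cell: cell_risk(anxiety_map[cell]))

# (lane index, slots ahead of ego) -> hazard flags observed there
snapshot = {
    (0, 1): set(),
    (1, 0): {"beside_truck"},
    (1, 1): {"blind_spot", "erratic_driver"},
}
print(safest_cell(snapshot))  # (0, 1): move ahead in the left lane
```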

What about unusual situations - accidents, construction, or tree trimming - where I intentionally cross the double yellow line to drive in the opposite lane because a normally two-lane road is temporarily down to one lane?

How do they work in snow? I can slow down and tell approximately where I should be driving by looking at the differences in height (curbs), even though everything is white.