Tesla steering and also Beta self-driving

Is Tesla steering mechanical, or drive-by-wire? Specifically, if self-driving turns the car, does the steering wheel turn?

Also IMHO anything called Beta should not be allowed on public streets. You wouldn’t beta test the next release of Windows in a hospital or nuclear plant.

The Tesla vehicles have a mechanical linkage to a standard rack & pinion steering with electromechanical power assistance. The power assist system also doubles as the actuator for “driver assistance” and can fully articulate the steering as directed by a driver aid or autonomous system. This is not unique; many other vehicles with driver assistance/lane keeping features have similar systems for controlling steering.
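To make the “doubles as” point concrete, here is a rough, purely illustrative sketch of how an electric power steering (EPS) controller can serve both roles: it normally just amplifies the driver’s input torque, but it can also accept a steering-angle command from a driver-assistance computer and drive the same rack, which is why the hand wheel turns when the car steers itself. The class name, gains, and interface below are hypothetical, not Tesla’s actual implementation.

```python
# Illustrative sketch only: a toy model of an electric power steering (EPS)
# controller that both amplifies driver torque and accepts steering commands
# from a driver-assistance system. Names, gains, and units are hypothetical.

class EpsController:
    def __init__(self, assist_gain=2.5, kp=4.0, kd=0.8):
        self.assist_gain = assist_gain  # boost applied to driver input torque
        self.kp = kp                    # proportional gain on steering-angle error
        self.kd = kd                    # damping gain on steering rate

    def motor_torque(self, driver_torque, steer_angle, steer_rate,
                     assist_cmd_angle=None):
        """Return the torque (N*m) the EPS motor should apply to the rack."""
        torque = self.assist_gain * driver_torque  # ordinary power assist
        if assist_cmd_angle is not None:
            # Driver-assistance mode: steer toward the commanded wheel angle.
            # Because the motor acts on the same mechanical rack as the hand
            # wheel, the steering wheel turns along with the road wheels.
            error = assist_cmd_angle - steer_angle
            torque += self.kp * error - self.kd * steer_rate
        return torque


eps = EpsController()
print(eps.motor_torque(driver_torque=1.2, steer_angle=0.0, steer_rate=0.0))
print(eps.motor_torque(driver_torque=0.0, steer_angle=0.05, steer_rate=0.01,
                       assist_cmd_angle=0.12))
```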

Agreed that Tesla should not be using public streets, and the drivers and pedestrians upon them, to beta test its autonomous driving systems. Unfortunately, at least in the United States, there is very little the federal government can do to regulate features of a vehicle that do not fall under the purview of “safety systems” (e.g. seat belts, air bags, antilock braking systems), and for some bizarre reason (read: political influence) the National Highway Traffic Safety Administration (NHTSA) has classified autonomous systems as driver aids rather than safety systems. Individual states could certainly regulate or prohibit such systems that have not been demonstrated to meet obvious standards for reliability and functionality, but again for bizarre reasons (political influence) very few have anything but cursory regulation of autonomous driving systems.

Stranger

Seems like a strange statement given that the NHTSA just required Tesla to issue a recall for their FSD beta:

FSD Beta is an SAE Level 2 driver support feature that can provide steering and braking/acceleration support to the driver under certain operating limitations. With FSD Beta, as with all SAE Level 2 driver support features, the driver is responsible for operation of the vehicle whenever the feature is engaged and must constantly supervise the feature and intervene (e.g., steer, brake or accelerate) as needed to maintain safe operation of the vehicle.

In certain rare circumstances and within the operating limitations of FSD Beta, when the feature is engaged, the feature could potentially infringe upon local traffic laws or customs while executing certain driving maneuvers in the following conditions before some drivers may intervene: 1) traveling or turning through certain intersections during a stale yellow traffic light; 2) the perceived duration of the vehicle’s static position at certain intersections with a stop sign, particularly when the intersection is clear of any other road users; 3) adjusting vehicle speed while traveling through certain variable speed zones, based on detected speed limit signage and/or the vehicle’s speed offset setting that is adjusted by the driver; and 4) negotiating a lane change out of certain turn-only lanes to continue traveling straight.

For better or worse, they are not requiring Tesla to disable the feature, though they seemingly could have. Instead, they identified four “rare” scenarios that will require a software update.

ETA: Correction–it was not a forced recall, but a voluntary one (though I expect it could have been forced had Tesla stonewalled):

On February 7, 2023, while not concurring with the agency’s analysis, Tesla decided to administer a voluntary recall out of an abundance of caution.

As you noted in your edit, this was a voluntary ‘recall’, but more to the point, the NHTSA is not in any way preemptively regulating autonomous driving systems; it is relying on manufacturers to police themselves in assuring that their systems are sufficiently mature to operate with at least as much functional safety as a human driver. Given that I’ve seen Teslas in (presumably) autonomous driving mode operating at highway speeds in a way that, had a human been driving, would indicate severe impairment, and having actually had my truck hit by a Tesla in Autopark mode (fortunately entirely to its detriment; it severely creased the ‘frunk’ while leaving just a residue of easily scraped-off paint on my reinforced steel bumper), I would opine that this approach is nowhere near adequate for public safety. Meanwhile, the ‘co-founder’ [sic] and CEO of Tesla Motors continues to whinge about how this non-existent regulation is preventing the company from deploying its supposedly fully functional autonomous piloting system, as he has done since 2019.

Stranger

There was obvious NHTSA influence here because otherwise, Tesla would simply have released an ordinary OTA update, the way they normally would have, and have done many times before for both the FSD Beta and other features. That would have avoided the obvious negative perception of calling it a “recall” and the cost of physically mailing out notices.

It was clearly only voluntary in the sense of “if you don’t, we’ll make it mandatory.”

Yes, when self-driving turns the car, the steering wheel turns. It is similar in driver perception to some cruise control systems where the gas pedal will move.

So you’re saying they should call it something else?

No, but come to think of it, I’m surprised they didn’t, to avoid the connotation of a beta test. You know, something to make it sound like people are getting a “Special Preview” instead of openly admitting that it’s software that is virtually guaranteed to be too buggy to be commercially viable.

That’s a very strange take given that every FSD Beta user paid thousands of dollars for access to it–often years before it was available in any form. It sounds highly commercially viable to me.

If only human drivers were held to these standards.

Notwithstanding, neither NHTSA, the Consumer Product Safety Commission, nor any other federal agency is doing anything to regulate or impose any kind of minimum standards for the performance of autonomous driving systems. It is a matter of fact that people have been harmed, and in at least a few cases killed, by the use of Tesla autonomous driving systems because of the inherent flaws and immaturity of these systems. Neither Tesla nor any other manufacturer should be deploying a ‘beta’ version of these systems on public roads instead of doing the necessary diligence of demonstrating that the systems are at least as reliable at avoiding accidents as human drivers.

In fact, human drivers are held to standards, both in licensing and for operating vehicles in an unsafe manner or impaired condition. While the enforcement of these standards may be well short of perfection, it is still more than is imposed upon manufacturers. Having personally experienced Teslas in autonomous driving mode sweeping back and forth across multiple lanes so frequently during my daily commute that I make sure to keep a careful distance behind any Tesla, as well as having been struck by a Tesla while sitting parked in my vehicle and then watching it drive off, I’m not inclined to agree with any assessment that this system is somehow still better than the typical human driver, nor that we should give great latitude to a ‘visionary’ who assures us that “Full Self Driving” will, any day now, take us from coast to coast without our ever touching the controls and with more safety than a human operator.

Stranger

Simply not possible without a revolution in machine learning. ML in its current state–and any foreseeable state–requires an immense amount of accurate training data; data which is only available by actually using the system in the real world.

I’m not making a judgment call here. But there are essentially two options:

  1. We never have anything resembling full autonomous driving
  2. We allow companies to essentially beta-test their software with the public

There’s no real in-between, because plunking away on their codebase in private or under highly restricted conditions simply will not result in a product. So far, society has pretty much chosen option 2, believing that the value of eventually having self-driving exceeds the damage it does in the short term.

It’s not like this is a brand-new calculation. We allow student drivers on the road with minimal limits, despite them being far worse than an average driver. But if we’re to have any drivers at all, eventually we have to let them on the road, where they will make mistakes and sometimes kill people.

That’s not to say we can’t have a discussion about allowable conditions, liability, etc. But there’s just no way around testing unfinished software in the real world if we want it at all.

FWIW, my Kia’s Lane Keeping Assist feature also turns the steering wheel when it engages.

When I see an autonomous vehicle sweeping to and fro across lanes at highway speeds because it can’t distinguish between legitimate and prior lane markings, running into parked emergency vehicles, or making its herky-jerky way through urban streets, repeatedly driving into the wrong lane or incapable of making judgment calls that a small human child could easily make, I’ll take Option 1, at least for the foreseeable future, until we can build some virtual guard rails into such systems. There is zero reason to allow companies to “beta-test their software with the public” when this means putting other drivers and pedestrians in the way of being struck by a multi-ton vehicle, particularly at the whim of a tech ‘visionary’ who has for four years assured investors and the general public that the system is mature enough to operate safely and reliably on public roads, despite ample evidence to the contrary.

The claim that “society has pretty much chosen option 2” can only be taken as a true statement if you restrict the definition of “society” to babbling tech moguls and their rabid fanbase of enthusiasts, whose calculus is that if achieving some level of autonomous driving means putting unknowing members of the public at substantial risk, it’s worthwhile in the name of more rapid “progress” versus more methodically testing and maturing such systems in the same way that all other critical functional systems are developed and proven. This mentality of “beta testing” a safety-critical system on the general public is analogous to the Mars colony enthusiasts who think we should just send missions of desperate volunteers (or, in at least one proposal, “involuntarily transport” convicted felons) in ‘cheap’ spacecraft over and over until one of them finally makes it, and the claim is equally risible on both counts.

Stranger

I’m not being fatuous. At some point, there will come a time when self-driving vehicles are on the public streets for the first time. The word for that is “beta test”.

Now, one can, perhaps, object, when that time comes, that the vehicles aren’t yet ready. But one needs a better basis for that objection than “it hasn’t been done before”, and one certainly needs a better basis than “it’s called a beta test and therefore it must be buggy”.

The OP contains both a factual question and an opinion that the OP even labeled as IMHO. Since the factual aspect of the OP has been answered and the thread is already drifting well into IMHO territory, let’s move the thread to IMHO (from FQ).

“Beta test” doesn’t just mean the first release of software; it is a specific approach to testing certain types of non-safety-critical software that intentionally deploys applications at a state of immaturity in order to both stress test them and gather user-experience feedback about features such as user interfaces or interaction within a workflow. You might beta test a desktop operating system or a game because such things pose virtually no risk to users or other members of the public. However, you would (or at least, should) never “beta test” something like medical equipment firmware or launch vehicle flight code because of how much risk that poses, both fiscally and to human life. For such systems, you develop and validate a suite of tests in both software-in-the-loop and hardware-in-the-loop simulations which extensively exercise the code, and then go through many cycles of debugging and robustness improvement through regression testing before you even consider loading the software onto any operational hardware. Yes, this is expensive, and takes both time and expertise to do thoroughly, which runs against the Silicon Valley “move fast and break things” mantra, but when “breaking things” includes human bystanders, that mentality is no longer ethically sound. Failures can always occur in any system no matter how rigorously it is tested, but ethical practice requires that you do all due diligence to uncover flaws and weaknesses in safety- and reliability-critical systems before deploying them to the general public, especially when they affect people who have no option to consent to the risk.
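For a sense of what that software-in-the-loop/regression testing looks like in miniature, here is a toy example (not Tesla’s process; the scenario, the controller, and the pass criteria are all hypothetical stand-ins): a simulated stop-sign approach run against a stand-in braking controller, with a pass/fail assertion that the vehicle stops before the line. A real safety-critical program would run thousands of such scenarios, in far higher fidelity, on every regression cycle.

```python
# Illustrative sketch only: a minimal software-in-the-loop regression test.
# The scenario, the controller under test, and the pass criteria are
# hypothetical stand-ins for a real safety-critical test suite.

import unittest


def simulate_stop_sign_scenario(controller, steps=100, dt=0.1,
                                initial_speed=10.0, stop_distance=40.0):
    """Run a toy kinematic vehicle toward a stop line; return (margin_m, final_speed)."""
    position, speed = 0.0, initial_speed
    for _ in range(steps):
        accel = controller(distance_to_stop=stop_distance - position, speed=speed)
        speed = max(0.0, speed + accel * dt)
        position += speed * dt
    return stop_distance - position, speed


def toy_controller(distance_to_stop, speed):
    """Very simple braking logic standing in for the system under test."""
    # Brake at 3 m/s^2 once inside the stopping distance plus a 5 m margin.
    if distance_to_stop < speed ** 2 / (2 * 3.0) + 5.0:
        return -3.0
    return 0.0


class StopSignRegression(unittest.TestCase):
    def test_vehicle_stops_before_line(self):
        margin, final_speed = simulate_stop_sign_scenario(toy_controller)
        self.assertGreater(margin, 0.0, "vehicle crossed the stop line")
        self.assertAlmostEqual(final_speed, 0.0, delta=0.1)


if __name__ == "__main__":
    unittest.main()
```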

This gets to a larger problem with machine learning in this kind of real-world safety-critical application: machine learning systems do not “learn” in the way that humans do; that is to say, they use heuristic algorithms to ‘learn’ to perform a specific set of tasks by trial and error, weighting the decision pathways that produce the preferred result as evaluated by some set of rules or a human referee. However, this ‘learning’ is absent of any larger context. When a human driver learns to operate a motor vehicle, it is with a specific intent and (hopefully) an understanding of how dangerous an incautiously operated vehicle can be, along with the consequences of an accident. Furthermore, they have a larger understanding of the world and of the value and behavior of objects within it, which is why even the worst human driver can distinguish between a harmless plastic grocery bag blowing in the wind and a small dog, while autonomous systems struggle to interpret ‘ambiguous’ shapes and often make emergency stops or swerves in a way that is actually more unsafe than proceeding forward.
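As an illustration of “weighting the decision pathways that produce the preferred result”, here is a toy trial-and-error learner (a simple epsilon-greedy bandit, chosen only for brevity and not anything an actual vehicle uses): it nudges up the estimated value of whichever action scores well against a reward function, with no notion of what “brake”, “swerve”, or “proceed” mean outside that score.

```python
# Illustrative sketch only: a toy trial-and-error learner that weights
# whichever action produced the preferred result, as judged by a reward
# function standing in for "some set of rules or a human referee".

import random

random.seed(0)

ACTIONS = ["brake", "swerve", "proceed"]
values = {a: 0.0 for a in ACTIONS}   # learned value of each action
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                        # how often to try a random action


def reward(action):
    """Hypothetical scoring of each action for one contrived situation."""
    return {"brake": 0.2, "swerve": -0.5, "proceed": 1.0}[action] + random.gauss(0, 0.1)


for _ in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)        # explore
    else:
        action = max(values, key=values.get)   # exploit the best-so-far
    counts[action] += 1
    # Incremental average: weight the pathway by how well it has scored.
    values[action] += (reward(action) - values[action]) / counts[action]

print(values)  # "proceed" ends up with the highest learned value
```

Note that the learner has no idea why “proceed” scores well; change the reward table and it will just as happily learn the opposite.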

This is not to say that human drivers are perfect, or even often very good, and of course humans become distracted or can be impaired in ways that are just not possible with an autonomous system short of system failure, but even the worst human driver who is sober and attentive has a more comprehensive understanding of the world around them than the best conventional machine learning system will ever grasp. Although these machine learning systems are often referred to as “artificial intelligence”, they are not actually intelligent at all by any reasonable standard, and furthermore it is not possible to really evaluate or assure their functional reliability to operate safely in any preemptive way other than to have a human driver on constant standby to take over immediately when the system makes a bad judgment. The only way to evaluate the maturity and safety of these systems is, as noted, to put them on public roads and allow them to cause accidents and then assess their performance post hoc, and given the experience with Tesla “Full Self Driving” [sic] thus far, that is wholly inadequate to protect public safety.

Despite the ostensible advantages of autonomously piloted vehicles, it is clear that the technology is nowhere near ready for broad deployment on public roads and requires far more development, and possibly revolutionary advances in machine vision and learning as well as better systems to validate and test the controlling software. That Tesla in particular is trying to rush this to market as fast as possible, even if it means ‘beta testing’ on a non-consenting public, while other companies with vast experience in automotive safety and driver aid systems are pulling back and regrouping on what is actually needed to achieve true Level 4/5 autonomy, is indicative of just how reckless and heedless of public safety the company and its executive leadership are.

Stranger

The roads are already jam-packed full of cars whose software does all of these things. If that’s your objection, then you’ll need a time machine to go back a century.

The relevant question is not whether computers ever fail at driving; it’s how often they fail, compared with the alternative. So let’s see the numbers.

There are usually more than 2 options for anything.

  3. Beta test software using drivers who are specially trained to use the feature as intended and also to observe and report on performance. The only requirement currently is that you have $15K to burn.

The Waymo approach to beta testing seems much safer, since they are not simultaneously trying to sell cars. With a Tesla, it seems like they are relying on a random car owner to be responsible in monitoring the performance of imperfect software. But…

…even if it could be done much more safely than the way Tesla are doing it, you really do need to see the numbers to judge whether regulators should step in. If another approach would delay perfecting automated driving by a few years, those few years represent a lot of casualties.