Tesla Model 3 anticipation thread

Agree completely.

It’s one thing to add “smarts” to reflex actions like “stomp on brakes quickly but not at max effort -> apply extra braking effort.” Like ABS, that probably improves drivability and safety. It’s a different thing to add “smarts” to non-reflex actions. Which also raises the issue of where to draw the reflex/non-reflex line. A car driven deliberately / aggressively by a performance-oriented driver may well make a lot of control inputs that would look to the control system like panic inputs if done by a little old lady or a casual distracted driver.
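
To make that concrete, here’s a toy sketch of a brake-assist rule (the numbers and the heuristic are invented for illustration, not any automaker’s actual logic):

// Toy brake-assist heuristic (hypothetical): a fast-but-partial pedal stab is
// read as panic braking and promoted to full effort. An aggressive driver's
// deliberate quick-but-partial inputs trip exactly the same rule.
float pedal_rate = (pedal_pos - prev_pedal_pos) / dt;  // pedal travel fraction per second
if (pedal_rate > 4.0f && pedal_pos < 0.9f)
    brake_cmd = 1.0f;        // treat as panic: apply max braking effort
else
    brake_cmd = pedal_pos;   // normal proportional braking
prev_pedal_pos = pedal_pos;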

The other day I had occasion to drive my co-worker home from work. His comment after a few miles of my driving: “You drive this thing like an F-16.” :smiley:

And I bet the physical flasher button is there entirely as a result of NHTSA regulations requiring it to be a specific size, shape, and marking. As Magiver points out, some stuff can and should be regulated. It’s arguable whether that particular button ought to be that thing. But it demonstrates the point.

FMVSS 101 - controls and displays
FMVSS 103 - windshield defogging and climate controls
FMVSS 108 - exterior lamps, indicators, and associated controls.

Not to divert too far from the thread, but the automated system wasn’t the problem. They had too many system failures for the software to account for. What really took the plane down was the joystick. Unlike a mechanically linked yoke between the two pilots, they had completely separate joysticks, set up so that the first one engaged by a pilot overrode the other. The Captain was trying to correct the problem but the FO’s input cancelled it out.

There’s always going to be a potential design error when programming affects control input. It can be infinitely better than physical controls as long as it fits within the parameters of the programming. Anti-lock brakes and yaw control are 2 excellent examples of how computers can correct human failures. But I’ve had anti-lock brakes get confused at low speeds and almost trash a car.

This is why I wonder how we are ever going to safety qualify systems based on machine learning. With them, effectively there is a massive array of network weights and millions of separate coefficients that go into producing the output. It can mean that all you really know is:

a. You know what the training parameters were, what the network was trying to optimize
b. You know it passed all the test cases you gave it, both simulations and a small number of real life cases
c. You know from the math you used and the exact architecture that the machine is going to interpolate what to do in between known test cases. This is going to be the best thing to do some of the time, depending on the nature of the problem.
d. You at least can qualify the underlying plumbing. Neural networks require hugely simplified plumbing underneath to actually drive them, which means a much lower chance of the actual plumbing failing. The plumbing is all that low level code written in C/C++/ASM that actually does the grunt work of calculating the outputs, getting the inputs from peripherals, getting the outputs to the peripherals, and handling communication and memory errors. What I mean is that there is a lower chance of some memory pointer going nuts and producing outputs that don’t even come from the control logic, or of the system running out of execution time and doing something undefined. At least you can be pretty sure the neural network remained in control, with faithfully calculated outputs per the model.
e. You know the machine has passed all test cases. So this means that if early versions of the network do have edge cases that kill people, you can download the crash data and make new test cases that will handle that problem. The system as a whole will now not make that particular mistake again, which guarantees forward progress.

But effectively that neural network has a near infinite number of operating modes. Every single time it makes a decision it’s weighting all the inputs, including data from the last decision it made, and deciding on an output uniquely optimized for this set of inputs.
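
For what it’s worth, point (e) is at least mechanizable. A hypothetical regression harness might look like this (all the types and nn_forward() are invented stand-ins, not anyone’s real system):

#include <stdio.h>

// Every logged edge case that ever caused a crash becomes a permanent test
// case for the next network build.
typedef struct { float inputs[8]; float min_brake; } Scenario;  // replayed sensor log
typedef struct { float brake, steer; } Action;

Action nn_forward(const float *inputs);   // the trained network (stub)

int run_regressions(const Scenario *s, int n) {
    int failures = 0;
    for (int i = 0; i < n; i++) {
        Action a = nn_forward(s[i].inputs);        // candidate network's decision
        if (a.brake < s[i].min_brake) {            // outside the known-safe bounds
            printf("FAIL: scenario %d\n", i);      // this build can't ship
            failures++;
        }
    }
    return failures;
}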

The joysticks contributed too, certainly.

All of this stuff fits into what LSLGuy brought up in a different thread: planes get taken down when the pilot’s mental model of the plane’s behavior does not match the actual behavior. The sequence is something like:

  1. Something goes wrong
  2. The pilot notices that control inputs don’t do what she expects, and realizes something has gone wrong
  3. She forms a hypothesis about the real situation (which instrument is lying? which control surface isn’t working?) and performs control inputs consistent with that model
  4. She observes the outcome: either the new inputs work, or the plane still misbehaves, invalidating the model
  5. Repeat 3 and 4 until the model is validated or the plane crashes

Fortunately, in a plane there’s usually minutes; enough to go through several loops of the sequence. But sometimes it’s still not enough.

And of course multiple pilots might have their own individual models of the situation, and if they don’t communicate properly then things are even worse; hence the importance of “crew resource management”. Stuff like the weird joysticks can contribute further to mismatches, and can even push someone off the correct model (I think the plane is stalling, so I correct for a stall; nothing’s happening [due to the other joystick], so I guess that model is wrong).

Good analysis there, Strangelove.

@Magiver: Not wanting to hijack further, but it’s almost always a mistake to say “<this> was the problem” in an aircraft accident. In the law there’s a concept called “but-for causation”. As in “But for <this one factor>, that <bad thing> would not have happened.” It’s used in certain circumstances to determine fault. But nowadays its usage is pretty circumscribed. Why is that?

Because it’s 17th century thinking appropriate to 17th century complexity. But for any one of 20 different things, AF 447 would have arrived in Paris. It’s a mistake to settle on one as more but-for than all the others.

I will agree that non-moving sidesticks don’t help. Which is why Airbuses have a takeover button that cuts the other guy out. But for the left seat pilot pressing that button on *his* stick, they all were killed.

The latest and greatest sidesticks do move, feel interconnected, etc. Some bizjets are flying them now. I expect Airbus’ next project, the A320 replacement, will probably incorporate them. As might Boeing’s NMA or subsequent 737 replacement, in a major break with their yoke-centric past.
Cars aren’t airplanes and I may have derailed this thread excessively by comparing lessons learned and lessons yet-to-be-learned between the two.
Tesla, anyone? :slight_smile:

I like the example of this flight because I think it’s relevant and I connected it to anti-lock brakes. In the case of the airplane that specific scenario had an extremely narrow time frame in which to correct the situation. It was measured in seconds not minutes.

My anti-lock brakes were designed not to lock up. But at very low speeds the act of slamming on the brakes was met with a computer that decided locking them up was a bad idea. It was like I had no brakes at all.

Agree overall. Not trying to bust your chops. Done right, helpful machines are a help. But engineers need to mind the corner cases; they’re sharp and can cut people.

Sounds like you managed to hit one of your car’s (hidden) corner cases. ABS has always been a two-edged sword and designing a system to be fail-safe rather than fail-fail no matter what is hard.*

A related problem when ABS first came out was they decided to do the pulsing pedal feedback when ABS kicked in. Or rather, they didn’t design in some extra isolation to prevent the ABS’s actions from hydraulically feeding back into the pedal.

The result was when people who’d never gotten into a brake-caused skid finally did so and the pedal started jumping, they released the brakes from startle or fear/confusion that the brakes weren’t working. Or they released and reapplied brakes to “try again” hoping the problem didn’t recur. Of course it immediately did. Cue loud crunching noises.

Oops. Machines have got to do what the operators expect. When they don’t, what happens next is pretty much uncontrolled chaos.

  • Another airplane hijack; read or skip as you wish: Back in the 727, procedure was to turn ABS off after slowing to taxi speed but before exiting the runway. The concern was that a wheel speed sensor failure at low speed, or a large speed difference sensed during sharp turning, would be misinterpreted as a skid and trigger all brakes to release, followed by loud crunching noises. We also didn’t turn it on until taking the runway for takeoff, for the same reason.

That is really interesting. I wonder if the automakers figured this out. It’s such an easy fix to add a line to the code to ignore ABS under a certain speed.

Of course, with the new sensors in the front of cars it’s going to see an object and add that information to the mix.

My 2002 GTI ignores ABS under a certain speed. In an icy parking lot, going under 10 MPH or so, I can lock up the brakes coming to a stop. This is probably the correct behavior for the ABS. The ABS definitely kicks in at higher speeds, as it’s supposed to.

A 2004 BMW motorcycle I had with ABS did have weird problems. There was a particularly large expansion joint in a right turn lane I would often hit under moderate braking. I think the front tire would lose just enough traction to slow (or stop) enough to cause the ABS to kick in and disengage the brakes. That left me going towards a right turn with no brakes somewhat faster than I wanted to be going. It would take the ABS about 5-10 feet to sort itself out. I learned to let off the brakes over the expansion joint, and then I could resume braking sooner than if I let the ABS do its thing. It certainly could have caused an accident if I was going too fast and somebody in front of me decided to come to a stop in a continuous no-yield right turn lane (but that’s a conversation for the pit).

To keep this on topic. I hope Tesla does updates to their systems as these sort of edge cases come out. I mean, I think I hope they do. I know other manufacturers do over the air updates of infotainment systems, but do any others do OTA updates of vehicle control software? I know dealer/recall/service bulletin updates happen occasionally. I mean, I don’t care about beta testing a new nav screen, and I want the updated ABS software, but I don’t really want to beta test the ABS software…

Interesting topics all. Dealing with edge cases (and updates for edge cases) is real hard. For self-driving cars/AVs we like the idea of “they all learn from each other and get better”. But the first time a bad lesson is learned or a bad update is pushed and 250K cars suddenly drive more stupidly, that’ll be bad.

Aircraft-related hijacks below here. Read or skip as you prefer:[spoiler]As to 727s …

The ABS system was a 1960s design with a handful of discrete transistor op amps and some relays. With one speed sensor for each of the four wheels, so very little redundancy. For many years USAF refused to put either thrust reversers or ABS on their airplanes due to fears about reliability. So instead they just added an extra mile or so to all their runways. A couple decades behind airline practice, they bought into both reversers and ABS.

As to edge cases and mode shifts …

Programmers like bright line distinctions expressed in if statements. e.g.


// inside a control loop executing at 20 or 60 Hz
if (Vehicle.SpeedMPH < 5) ABS.Active = false; else ABS.Active = true;
if (ABS.Active && ABS.TractionInsufficient) ModulateBrakes();

So what happens when you’re slowing through that transition point with the ABS cycling? Answer: lockup. Or, as echoreply says about his bike, going the other way: maybe an unexpected brake release.
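
The usual fix is a hysteresis band instead of a single bright line, something like this sketch (cutoff numbers invented):

// Hysteresis sketch: engage above 7 MPH, release below 3 MPH, and in between
// keep the previous state. Now slowing through ~5 MPH with the ABS cycling
// can't flip the mode on every loop iteration.
if (Vehicle.SpeedMPH > 7) ABS.Active = true;
else if (Vehicle.SpeedMPH < 3) ABS.Active = false;
/* else: leave ABS.Active unchanged */
if (ABS.Active && ABS.TractionInsufficient) ModulateBrakes();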

Umpteen years ago they almost crashed the prototype of the F-22 (IIRC, might’ve been a different type) on an early flight. The flight control software had two modes, on-the-ground and in-flight. Which transition was detected by microswitches in the landing gear legs as they compressed or extended as airplane weight came off or was applied.

Anyhow, the mode change was abrupt and changed a bunch of gains in the pitch control loop. With the effect that on the first takeoff (or was it landing; memory fades), the pilot got into trouble where the software would transition modes, the airplane would jump nose up/down in response, the pilot would push/pull to correct, the software would transition modes, the airplane would jump nose down/up in response, the pilot would pull/push to correct, etc.

They damn near wrecked the jet. The solution of course was that mode transitions should trigger a slow fade over 3-5 seconds from one set of control gains to another. Not a bang-bang switchover.
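
In code, the fade is just a linear blend between the two gain sets. A sketch, with all the names and the fade time assumed:

// Gain-fade sketch: blend from the ground gain set to the flight gain set
// instead of switching them in one clock tick.
float alpha = t_since_mode_switch / 3.0f;        // 0 at the switch, 1 when done
if (alpha > 1.0f) alpha = 1.0f;
float Kp = (1.0f - alpha) * Kp_ground + alpha * Kp_flight;
float elevator_cmd = Kp * pitch_error;           // no step change at the transition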

ISTR they thoroughly crashed a drone prototype (DarkStar?) the same way.[/spoiler]

I believe they all do. The corner case is that if you’re on an extremely slippery surface (wet snow or ice) and all four wheels lock up, what does the ABS controller do? As far as it can see the vehicle is stopped; no wheels are turning, so there’s no vehicle speed info going up the bus.
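
A toy sketch of why, assuming the common approach of estimating vehicle speed from the wheel sensors:

// Toy sketch of the corner case; estimating vehicle speed as the max of the
// wheel speeds is my assumption about a typical implementation.
float v_est = 0.0f;
for (int i = 0; i < 4; i++)
    if (wheel_speed[i] > v_est) v_est = wheel_speed[i];
// With all four wheels locked on ice, every sensor reads ~0, so v_est == 0
// and the controller concludes "vehicle stopped" even though it's still sliding.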

I think I remember that. The F-22 started to porpoise on landing.

More airplane crap:[spoiler]This Wikipedia page (Lockheed Martin F-22 Raptor - Wikipedia) says it was the second YF-22 prototype, on landing, and they did crash / destroy the jet.

Here’s a bunch more details about the mishap: AW&ST Articles on YF-22 Crash. And a vid: YF-22 PROTOTYPE JET CRASH! - YouTube

I should have done this research first; I munged a bunch of details 25 years after first reading about it. The punchline remains as I first said: the ultimate cause was an abrupt mode transition in the control software where a fade-in would have been better. At least nobody died discovering this bug.[/spoiler]Tesla, anyone?

This is absolutely fascinating, and just last Monday I found a bug in my own codebase that causes a similar issue. It was a PID control loop, using a reference implementation from Texas Instruments, and I discovered that if you adjust the setpoint of the loop, you get a sudden “impulse” response that is incorrect. Similar to this 727 example, where locking the brakes or freeing them suddenly is incorrect.

In my case, this bug is possibly the cause for the loss of about $10,000 worth of electronics, though it’s not solely the software’s fault…the system lacks a functional overvoltage protection circuit and thus depends on the software…
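
For what it’s worth, the classic form of that setpoint bug is “derivative kick”: differentiating the error makes the D term spike whenever the setpoint jumps. A sketch of one common fix (a guess at the specific bug here, differentiating the measurement instead):

// "Derivative kick" sketch: d(error)/dt contains d(setpoint)/dt, so a setpoint
// step produces a one-cycle impulse in the output. Differentiating only the
// measurement removes it without changing steady-state behavior.
float error = setpoint - measurement;
integral += Ki * error * dt;
// float d = Kd * (error - prev_error) / dt;             // spikes when setpoint jumps
float d = -Kd * (measurement - prev_measurement) / dt;   // immune to setpoint steps
float output = Kp * error + integral + d;
prev_measurement = measurement;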

In general you want a smooth gradient in the output of this software function; that’s most correct and most compatible with the rest of the airframe’s systems. From a little ABS activity at some minimum speed to max ABS activity at some higher speed.

Alas, the technical hardware you need to implement that well* is called a “floating point unit”, and low end microcontrollers today still don’t have them. You might remember it being a big deal when desktop computers first got them in the late 1980s.
*not strictly required but emulating floating point in fixed point math is just begging for additional bugs from overflow errors…

And on further review, I realize you could just use a programmable PWM unit, a standard feature on any old microcontroller today, to solve this particular problem. You don’t need floating point math at all except it helps make the code cleaner.
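
Something like this sketch, in pure integer math (the band and the register names are hypothetical):

// Integer-only sketch: map speed across an assumed 3-15 MPH band onto a PWM
// duty cycle so the brake-modulation effort ramps smoothly instead of snapping.
// PWM_PERIOD and PWM_COMPARE are invented names for the timer registers.
uint16_t duty;
if (speed_mph <= 3) duty = 0;
else if (speed_mph >= 15) duty = PWM_PERIOD;
else duty = (uint16_t)(((uint32_t)(speed_mph - 3) * PWM_PERIOD) / 12);
PWM_COMPARE = duty;   // the PWM hardware generates the smooth output from here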

Desktop PCs could (and did) have them all the way back to the original IBM PC model 5150. They were called “math co-processors” back then and were separate chips. If you needed a hardware FPU you just popped an 8087 co-processor into the socket next to the 8088/80286. It was 1989 when the 486 offered the first integrated FPU in the x86 line.

More non-Tesla control theory:[spoiler]In the pre-machine learning world, most high level control systems are implemented as Finite State Automata (FSA). Which by definition exist in discrete states and make abrupt state transitions in response to external or internal stimuli. Then a bunch of lower-level activities are called into effect based on the current state. The low level activities themselves are often, as you say, designed to ensure their output is always differentiable with a reasonable gradient.

The challenge when this stuff is used for dynamic control is that although the FSA itself may make a hard state transition in negligible clock time, it needs to orchestrate all the low level activities so their outputs remain smooth. Which includes activities shutting down, starting up, those continuing with unchanged parameters, and those continuing with different (sometimes radically different) parameters.
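
A toy sketch of that orchestration need, with everything invented for illustration:

// Each allowed FSA transition arc carries its own mini transition manager,
// here reduced to just a fade time for the low-level gain sets.
typedef enum { GROUND, FLIGHT } Mode;
typedef struct { Mode from, to; float fade_s; } Arc;
static const Arc arcs[] = {
    { GROUND, FLIGHT, 3.0f },   // liftoff: fade flight gains in over 3 s
    { FLIGHT, GROUND, 5.0f },   // touchdown: fade them back out more slowly
};
// The FSA flips Mode in one clock tick, but every low-level loop consults the
// active Arc and ramps its output over fade_s seconds. Multiply by dozens of
// states and activities and the testing burden explodes.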

In essence, the FSA develops a need for a mini transition management machine along each allowed FSA transition arc. Once you get enough layers of these things it gets goopy and hard to test, much less to certify. And invites unplanned interdependencies.[/spoiler]Tesla, anyone?

Sure.

Tesla working on their own custom AI chips. Note of course that the programming discussions above are all well and good, but purpose-built custom AI chips are a bit of a different beast.

And some believing that Tesla 3 production might actually be ramping up.

Example. That said, fixed-point math is effectively universal for microcontrollers. Ideally, you write the software in a way where you can bound the results–i.e., a 0-1 number times another 0-1 number is also 0-1. The bounds flow through the program and so you can prove that overflow will never occur.
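
A minimal sketch of that bounds argument in unsigned 0.16 fixed point:

#include <stdint.h>

// a and b each represent values in [0, 1), stored as 0..65535. The widened
// product fits in 32 bits, and shifting back by 16 yields a value still in
// [0, 1). No input can overflow; the proof is just the bounds flowing through.
typedef uint16_t uq16_t;

uq16_t uq16_mul(uq16_t a, uq16_t b) {
    return (uq16_t)(((uint32_t)a * (uint32_t)b) >> 16);
}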