But its fun! And I’m learning so much about Tesla!
OK, I understand what you’re trying to say.
I guess you completely ignored my other post, since you would have answers to these questions otherwise.
Tesla’s margins are positive even including R&D. And that’s despite the fact that Tesla spends more (as a percentage) on R&D than anyone (and by a good margin). This is partly because they’re just research-heavy in general, partly because their future product line is larger than the current one, and partly because Tesla is relatively small and some items don’t amortize as well (self-driving research, for example).
SG&A (that is, “Selling, General and Administrative”) eats up most of the rest. This includes things like charging stations, but also their galleries, service centers, and so on. A large maker like GM would have a huge efficiency gain here due to amortization and (presumably) fewer defects in the first place.
Even reducing these expenses by small amounts would give Tesla great net margins, comparable with other automakers. So unprofitability is an invalid excuse unless you disagree with the point that Tesla is badly run, makes shoddy products, etc. If it were true that Tesla was already being run perfectly, there would be much less margin on the table (though still some due to the growth aspect).
…
wow.
You seem to be reaching a kind of snark singularity, where the content of your posts gets smaller and smaller, until a hypothetical future where the posts contain zero content but infinite smugness and disdain. Keep it up!
Well, let’s turn the page on this thread. What I can say is that the quantity of Model 3’s at work has ballooned to a few hundred and counting. The reviews from coworkers are very positive, including those from family members who actually got theirs. Overall, up-close examination of the car looks awesome, imo. I think everyone who has ordered one will be quite a satisfied customer.
Nice! I have not spotted one at my workplace yet–we work with Tesla and I’m certain some coworkers have spouses/family members that work there, but I haven’t yet seen one in the lot. Lots and lots of Model S/Xes, of course.
The reviews certainly do seem positive and it’s hard not to get excited about it. Regardless of manufacturability or other “remains to be seen” aspects, Tesla should be very proud of the design.
It looks like the first people without direct Tesla connections have started the configuration process. Deliveries should be in a few weeks as best I can tell.
I’m somewhat farther back on the list, having not even owned a Tesla yet, but I did wait in line on day 1 and got a pretty early spot (in the Fremont location). So hopefully I’m relatively early, but even if dates slip a tad more I won’t be too distressed. I’d rather wait for them to get it right.
Out of curiosity–have the people you’ve talked to said anything about UI improvements?
I’ve seen a few complaints about things like wiper control, but really all this stuff seems like it will get ironed out over time. It should be possible to make the scroll wheels context-sensitive about almost everything. So for example, a click on the wiper stalk button could put it into wiper control mode where the wheels then control the frequency. Or, when normally cruising, left could be radio volume and right the cruise control speed target. I suspect this kind of thing just needs user feedback and that we’ll see some pretty rapid iteration, but I wonder what improvements people have seen so far.
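Just to make the context-sensitive idea concrete, here’s a toy sketch of what I have in mind (entirely my own invention; the class and method names are made up, not anything Tesla actually does):

```python
# Toy sketch of context-sensitive scroll wheels. Purely illustrative;
# the car interface (adjust_volume, adjust_wiper_interval, etc.) is made up.
import time

class WheelController:
    WIPER_MODE_TIMEOUT = 5.0  # seconds of inactivity before reverting to defaults

    def __init__(self, car):
        self.car = car
        self.mode = "cruise"                  # normal cruising defaults
        self.last_input = time.monotonic()

    def on_wiper_stalk_click(self):
        # A click on the wiper stalk button puts the wheels into wiper mode.
        self.mode = "wiper"
        self.last_input = time.monotonic()

    def on_left_wheel(self, clicks):
        self._maybe_revert()
        if self.mode == "wiper":
            self.car.adjust_wiper_interval(clicks)
        else:
            self.car.adjust_volume(clicks)    # cruising default: radio volume

    def on_right_wheel(self, clicks):
        self._maybe_revert()
        if self.mode == "wiper":
            self.car.adjust_wiper_speed(clicks)
        else:
            self.car.adjust_cruise_target(clicks)  # cruising default: cruise speed

    def _maybe_revert(self):
        # After a few seconds of no input, fall back to the cruising defaults.
        now = time.monotonic()
        if self.mode != "cruise" and now - self.last_input > self.WIPER_MODE_TIMEOUT:
            self.mode = "cruise"
        self.last_input = now
```

The timeout is the part that would need the most tuning from real user feedback.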
You raise a *very* interesting issue there.
A combo of regulations and convergent evolution means that any driver already familiar with any car made from 1960 to about 1980 can hop into any other car of the same era and operate it. Including all the accessories.
There’s a bit of a break somewhere in the 80s when lots of controls migrated from knobs on the dash to buttons or knobs on stalks. But equally, any driver familiar with any car from the stalk era can hop into any stalk-era car and operate everything with no retraining.
That’s rapidly becoming less true. First to disappear into the maw of computerization was the gee-whiz driver trivia stuff like trip computers. Next up were the pure audio entertainment functions and later nav systems. Plus “climate control”.
But it is still the case today that most any driver with experience in a 1980s or later car could hop into most any ordinary 2017 model car and reliably get the headlights, parking brake, turn signals & windshield wipers to work on cue. Plus of course get it started and into gear forward or reverse. They might struggle a bunch to get the FM radio tuned. But that’s not real important, and certainly not in real time.
That comfortable uniformity is rapidly coming to an end.
Just like the early days of GUIs on various personal computing machines (including but not limited to IBM-compatible "PC"s) there’s going to be a lot of experimentation and weird cul-de-sacs of dead end design. Obviously the total software industry and the total driving public are immersed in GUIs now, so the car developers will avoid some of the worst dead ends.
Bottom line: migrating need-to-use controls into a computerized UI will reduce the interoperability of drivers between cars. This poses issues for drivers, rental car agencies, and for regulators. We can hope the industry standards groups settle on something smart early and don’t pursue differences for differences’ (and licensing revenue’s) sake. If the governments have to step in to keep cars interoperable enough, that will be bad for everyone.
=======
Different issue:
They’re also entering a new design space of stuff that should be easy and reliable to operate without having to look at the display or dig into 3 layers of menus.
Consider wipers. It might seem obvious that they can live in a low level menu; after all, rain isn’t a surprise. It comes on slowly and you can usually see it coming. Heck, often it’s happening while you’re getting in the car. IOW, wipers tend to be set-and-mostly-forget for any given driving session. But what of the case that it’s a nice sunny day and up ahead, unseen, is a large puddle of sprinkler run-off in the other lane. Just as you get there a truck passes and thoroughly sprays you with muddy water. In 1/10th of a second you went from sunny day to “I can’t see to drive”. That’s probably a poor time to have to dig into a 3rd level menu using some sort of scroll-wheel plus click-to-select interface. Especially if you add some user startle into the mix.
The punch line being there’s a huge difference between frequency of use and importance of quick access. You can’t prioritize controls using just one of those parameters. Nor can you make everything priority one when you only have 6x8 or 8x10 inches to hold everything.
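To put toy numbers on that (all invented): rank a handful of controls by frequency alone, then by frequency plus urgency, and the rare-but-urgent stuff moves from the bottom of the list to the top.

```python
# Toy illustration only -- every number here is invented.
controls = {
    # name:             (uses per trip, urgency when needed, 0-10)
    "radio tuning":     (3.0, 1),
    "climate setpoint": (2.0, 2),
    "trip computer":    (0.5, 0),
    "wipers":           (0.3, 9),   # rare, but "I can't see to drive" when needed
    "hazard flashers":  (0.01, 10),
}

# Ranking on frequency alone buries the urgent items.
by_frequency = sorted(controls, key=lambda c: -controls[c][0])

# Any ranking that also weights urgency (the weight of 3 is arbitrary)
# pulls them right back to the top.
by_both = sorted(controls, key=lambda c: -(controls[c][0] + 3 * controls[c][1]))

print(by_frequency)  # radio, climate, trip computer, wipers, flashers
print(by_both)       # flashers, wipers, climate, radio, trip computer
```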
There is massive research, expertise, and published info in the airplane biz about managing this. IOW how to design a control station that works easily for routine stuff but also easily for non-routine but urgent stuff and also non-routine but non-urgent stuff. And how to design out lots of potential errors with proper awareness of the ways humans tend to goof.
We in the jet biz are starting to see the consequences of squeezing what once was 4 square feet of panel space into 8x10 inches. You can see and interact with all of it, but not all at once. Which makes your mental map of how to gather info and make changes have to grow a third dimension. It’s not just <here> in left/right + up/down. It’s also <here> in following-this-sequence-of-access-steps. It also means there’s a greater need for short term memory. Not only to keep your place in the access steps, but to recall what you saw on the screen you just switched away from as you move to the one you’re going to.
There’s more than one way to get this right. But there’s a comparative infinitude of ways to get this wrong. It’ll be interesting to see how the various manufacturers deal with this.
Especially pernicious might be the temptation for them to think that computerized UI for human-driven cars is a mere stop-gap on the way to computerized UI for self-driving cars. If they treat getting the human driver UI right as an after-thought or just temporary scaffolding on the way to full self-driving, they’re going to hurt a lot of people and cost themselves a lot of progress.
Lastly, you can evolve this stuff quickly while you’ve got a small user base of “beta testers” with beta test attitudes and aptitudes. But once you’ve got an installed base out there, it very quickly becomes very, very problematic to make alterations.
e.g. It might be technologically feasible to do a software push that totally changes how you control windshield wipers on Brand X cars. But it isn’t feasible from the POV of user acceptance and safety during the learning curve.
A motto from the software industry: Be careful what you release in v1.0. You’ll still be supporting your mistakes in v15.5. Or you’ll be out of business. Neither is good.
Excellent post, Sir! I’ve seen a variation of this in my wife’s Edge. We traded her 2011 for a 2017 and the climate controls are a perfect example of this. In the previous Edge it was all on the screen; on the new one, a few discrete buttons do most of the functions. I’d also really want to know why the Model 3 doesn’t have a HUD. It could be built discreetly and not detract from the lines of the interior while providing all the necessary info to the driver without needing to take their eyes off the road.
Great post, LSLGuy.
Some of this stuff has been a long time coming. I’m reminded of those Toyota “unintended acceleration” incidents several years ago. In one specific case, the driver could not shut off the car due to it having a pushbutton start. The engine could be shut off if the button was held down for several seconds, but this isn’t that intuitive to someone used to a key. As you’ve mentioned elsewhere, this is just not the kind of thing that can be figured out in a few seconds and in a highly stressful situation.
Toyota eventually modified the button to also shut off if pressed several times in a row, as it might be in a panic. But it took at least four deaths to get there.
As for the wipers specifically, the stalk button (from my understanding) still works like a normal car, where a single press gives you a pulse, and a hold gives you a pulse+spray. So that solves the unexpected splash situation, and only the level of intermittency needs to be tweaked via other means. Still, I’m sure this kind of thing deserves more iteration.
You bring up a very interesting point, which is that vehicle UI design operates on different metrics than other software. When Google makes a UI, one of the things they optimize for is the average number of operations to do normal stuff. Burying an infrequent operation under a bunch of nested menus might make good sense if it streamlines everything else and makes common ops a little faster. That logic doesn’t work on cars or airplanes–they have to work well even in the worst case, or rather especially in the worst case. All of that dictates how the hierarchy will work.
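For a back-of-the-envelope illustration (both layouts and all the usage counts are invented), here’s the same set of controls scored the average-case way and the worst-case way:

```python
# Back-of-the-envelope illustration; layouts and usage counts are invented.
# "steps" = taps to reach a control, "freq" = relative uses per trip.
layout_a = {"volume": (1, 5.0), "nav": (2, 1.0), "wipers": (3, 0.3)}  # average-optimized
layout_b = {"volume": (1, 5.0), "nav": (3, 1.0), "wipers": (1, 0.3)}  # worst-case aware

def average_steps(layout):
    total_freq = sum(freq for _, freq in layout.values())
    return sum(steps * freq for steps, freq in layout.values()) / total_freq

def worst_case_steps(layout, urgent=("wipers",)):
    return max(layout[name][0] for name in urgent)

for name, layout in (("A", layout_a), ("B", layout_b)):
    print(name, round(average_steps(layout), 2), worst_case_steps(layout))
# Layout A wins on average taps; layout B wins on worst-case access to the
# urgent control, which is the metric that matters when you can't see the road.
```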
I brought up the problem of distracted driving with the new display systems coming out. Regulations for this are in their infancy but they will eventually address the issue in much greater detail. I think it’s related on some level to the need to standardize control systems on a touch screen.
I truly hate getting into a car and not being able to operate a basic control. I just went car shopping with someone and as a passenger in the car I couldn’t figure out how to turn the !#%@!%# fan on to clear up fog on the windows. All I could think of was how dangerous that was for a driver on a busy road. Even if you knew how it worked it required you to look at the display. Any car I’ve ever owned had climate controls I could operate without looking at them.
That’s kinda surprising; I don’t know that I’ve ever seen a car where I could operate the climate controls without looking at them. There are just too many options–at least on mine, there are 3 buttons for the vent controls, two for temperature, one for auto mode, two for front/rear defrost, one for recirculate, one for A/C, and a couple of others that I don’t remember. The only part that I can do purely by muscle memory is the heat/cool mix knob.
A touchscreen may require an extra tap to get into climate control mode, but once in that mode the controls can be made giant, with pictures and text that are much more visible than the standard wall of buttons. It seems like a net improvement to me. Also, I’ve been in plenty of cars where the climate control was confusing, with weird button labels. Even if the touchscreen is different between cars, it’s more likely that it could “explain itself” better due to the extra space and potential for context-sensitive help.
That said, there are plenty of cars out there with terrible software. I have a hard time believing there will be regulations, just because crafting the regulation seems like an exercise in futility–there are no real standards for how UIs “should” work. I can definitely imagine some lawsuits, though.
Something which is reminiscent of military guns whose proper operation is going to be extremely important in quite dire circumstances, operated by people who are likely to be physically fatigued, sleep deprived and scared. The way that’s often dealt with is drilling: You do it over and over again so that the information goes from your prefrontal cortex to your basal ganglia aka muscle memory. Our panicked animal instincts aren’t well-suited to operating machinery and computers.
It’s too bad that with today’s technology, driving lessons don’t include simulations so that people can be drilled to have the right reflex (or at least not too much of the wrong reflex) when a dicey situation pops up.
I’m sure LSLGuy can offer examples on the trade-off between easily accessing common features and accessing emergency features. It’s probably preferable for emergency features to stay accessible through physical controls. It won’t look as aesthetic, futuristic and Apple-y but it will make drivers feel safer and may even be safer.
Right. Never going to happen, though.
However, you can try to design your system to respond the right way to a typical human reflex. That’s one reason I thought the Toyota case was interesting. They figured, probably correctly, that people will mash the start button trying to turn it off. It’s something we’ve probably all done when we get angry at a system–push the button repeatedly until something happens. In contrast, push-and-hold isn’t really a normal reflex.
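As a sketch of what that mash detection might look like (the thresholds are my guesses, not Toyota’s actual values, and the engine interface is made up):

```python
# Sketch of "repeated presses mean shut it off". Thresholds are guesses,
# not Toyota's actual values; engine.shut_off() is a made-up interface.
import time

class StartButton:
    HOLD_SECONDS = 3.0   # classic press-and-hold shutdown
    MASH_PRESSES = 3     # or: this many presses...
    MASH_WINDOW = 2.0    # ...within this many seconds

    def __init__(self, engine):
        self.engine = engine
        self.recent_presses = []

    def on_press(self):
        now = time.monotonic()
        # Keep only the presses that happened within the mash window.
        self.recent_presses = [t for t in self.recent_presses
                               if now - t <= self.MASH_WINDOW]
        self.recent_presses.append(now)
        if len(self.recent_presses) >= self.MASH_PRESSES:
            self.engine.shut_off()   # panic mash: treat it like press-and-hold

    def on_hold(self, held_seconds):
        if held_seconds >= self.HOLD_SECONDS:
            self.engine.shut_off()
```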
So maybe some of these ideas can be integrated. Going back to wipers, you could make any action on the wiper stalk trigger one swipe. As a fast reflex where you quickly take a swing at the stalk in case something suddenly obstructs the windshield, that would do the right thing (and pose minimal problems in case of false positive).
Another one I recall is the kind of system that boosts the brake pressure when it detects sudden braking. People do slam on the brakes when there’s a sudden stop ahead, but they tend not to apply as much pressure as they should. So some systems detect the rate at which the brakes are applied (which differentiates normal from emergency braking) and boost things for emergencies.
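Roughly like this, I’d guess (thresholds invented; real systems are calibrated against measured pedal behavior):

```python
# Rough sketch of rate-based brake assist. Thresholds are invented; real
# systems are calibrated against measured pedal behavior.
def brake_command(pedal_now, pedal_prev, dt, rate_threshold=2.5):
    """pedal_* are pedal positions in [0, 1]; dt is seconds between samples."""
    rate = (pedal_now - pedal_prev) / dt
    if rate > rate_threshold and pedal_now > 0.3:
        return 1.0          # pedal stabbed: treat as an emergency, full boost
    return pedal_now        # otherwise pass the driver's input through

# A gentle stop (0.0 -> 0.2 over half a second) stays proportional:
print(brake_command(0.2, 0.0, 0.5))   # 0.2
# A stab at the pedal (0.0 -> 0.5 in a tenth of a second) gets boosted:
print(brake_command(0.5, 0.0, 0.1))   # 1.0
```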
I expect one could come up with hundreds of these kinds of ideas if one really put some thought into it.
Not specifically about Tesla Model 3s or even EVs, but xkcd today is kinda-sorta on point and prescient as usual: xkcd: Self-Driving Car Milestones
The current attitude in big airplanes is that emergency stuff remain physical controls. The bad news is that means lots of knobs and buttons and levers that seldom get touched. The good news is we have a lot of space to put them.
In fact one of the (jokey) maxims we use is “only touch the shiny switches.” It dates from the days when everything was a toggle switch with a steel or aluminum bat. The ones used several times in every flight would be polished to a high sheen by fingers after fingers after fingers. There’s also a worn spot from lots of hand contact where they emerge from the panel. The other ones would be a dull oxidized color with maybe a rust pit or two surrounded by dusty panel. Never touch that second kind without *really* stopping to think and probably reading the book.
Nowadays with far fewer toggles and more pushbuttons or rotary knobs the aphorism doesn’t sing as well. But the point remains equally valid.
The risk here is that “smart” can quickly shade into “inscrutable which feels unreliable”.
I had an example of that issue with my last car. The logic of the interaction of the key fob door lock/unlock buttons, the lock/unlock button on the dash, and whether the car was running or not, in gear or not, and which door(s) were open was so complex as to be inscrutable. The outcome from my POV was “Sometimes you push and get what you want; sometimes you don’t.” I’m sure it made sense for *some* collection of use cases. Just not for a couple I often used.
One of the major causes of what we call “flight path excursions” is what we call “automation mode confusion.” e.g.
The pilots believe the system is in Mode 3.a.3.f and when event X occurs it’ll transition to Mode 3.a.4.b with observable outcome Y. But the system was really in Mode 3.a.3.g which means that when event X occurs it’ll do nothing. That is exactly the scenario where the flight path excursion developed all the way into a “hull loss” a few seconds later: Asiana Airlines Flight 214 - Wikipedia .
Just as pernicious is when the system was really in Mode 3.a.1.a which means that when event X occurs it’ll switch to Mode 3.a.5.c with observable outcome Z. Which leads to hurried conversations like “What is it doing? Why is it doing that? Quick, take over manually. Now how do we resync it to what we really want?”
All of which demands that you be on the lookout for upcoming event X and for observable outcome Y. If nobody happens to see event X because it’s subtle or was unexpected, now you’re really off to the confusion races once the disconnect becomes obvious.
The “fix” for mode confusion is to have 4 modes, not 23 with 4 levels of nested if conditionals. It gets even more fun if this “lots of ‘optimized’ modes” design is applied to non-routine stuff that gets used a few times a year, not every day.
The punch line here is that it’s relatively trivial for software to add complexity. So there’s a temptation to do so. Which needs to be thoughtfully resisted.
Of course cars and drivers have simpler problems.
But there’s only one driver who’s (probably) not paying as much attention, and is (probably) less well trained on these nuances. Plus, cars’ and drivers’ problems develop quickly and proceed to finality even quicker compared to the majority of airplane problems.
There is a fundamental difference between my example of mode confusion, which is about watching HAL drive, and input/output confusion, which is about “If I make <this> control input, <that> will happen.” But they have similarities. Both lead to “If I do <this> 4 times in subtly different circumstances and get 4 different outcomes, my mental model is simply confused and untrusting.”
Ultimately safety comes from the overbalance of supply vs. demand for skill, knowledge, and attention. Clearly the demand is less for cars vs jets. But how’s the supply relative to that reduced demand? Accident statistics give a clue. The automotive software folks are really casting pearls before swine. Be careful not to confuse the pig; they get ornery when that happens.
Regulations would be along the lines of number of steps for basic functions and standardized layouts. For instance, the home screen could be a triple tap anywhere on the screen with basic climate controls always on top. Defrost always on the upper right, fan in the middle, and temp on the left.
I suspect auto makers are already working on standardizing screens for basic functions. It’s just common sense.
It seems that there were some issues with cars in the 600 VIN range, mainly dealing with uneven door gapping, but my other buddy with a VIN in the 1100s has had no issues whatsoever. No one mentioned GUI issues. Overall the way to describe riding in it is like riding in a spaceship.
Thanks for the report! Glad I didn’t get one of the very first units, and also that they’re getting a grasp on the QC issues.
“Like a spaceship” is how Musk described it as well. It’s just that he means this instead of this!
Non-employee shipments seem to keep coming along. No reports yet of non-owners getting one, but it’s just a matter of time.
Certainly a danger. I’d hope that any work along these lines would research what people actually do in these situations. Not just adding complexity for complexity’s sake, but tracking behavior and responding to human reflex. I’d hope that automakers already do what computer UI designers do, where they track eye movements, mouse movements, how long it takes to complete actions, etc.
Air France 447 was another example along those lines. The automatic stall protection didn’t happen under “alternate law”, and the pilots either didn’t know they were in this mode or didn’t understand the consequences. So they stalled right into the ocean.
There is I think a general tension between “do the right thing” and “have predictable behavior”. This happens in programming, too–languages which try to interpret the programmer’s request in a way that makes sense necessarily take on more complexity and less predictability. I say “tension” and not opposition because the tradeoff is not zero-sum, and trading a little of one for a lot of the other might be a net win. But the balance can be hard to reach.
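A toy example of that tradeoff (my own, not drawn from any particular language): a “helpful” number parser accepts more inputs than a strict one, but the caller now has more behavior to memorize.

```python
# Toy illustration of "do the right thing" vs. "be predictable".
def strict_to_number(x):
    # Predictable: only ints and floats count (and not booleans).
    # Everything else fails loudly.
    if isinstance(x, (int, float)) and not isinstance(x, bool):
        return x
    raise TypeError(f"not a number: {x!r}")

def helpful_to_number(x):
    # "Does the right thing" for more inputs -- but booleans, padded strings,
    # and the empty string all now have rules the caller has to remember.
    if isinstance(x, bool):
        return int(x)
    if isinstance(x, (int, float)):
        return x
    if isinstance(x, str):
        s = x.strip()
        if s == "":
            return 0
        return float(s) if "." in s else int(s)
    raise TypeError(f"not a number: {x!r}")

print(helpful_to_number(" 3 "))   # 3 -- convenient
print(helpful_to_number(True))    # 1 -- convenient, or surprising?
print(helpful_to_number(""))      # 0 -- almost certainly surprising
```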
Incidentally, the Model 3 does have one ordinary cabin button. The emergency flashers :).
BTW, I really like the “only touch the shiny switches” maxim, and hope to integrate that into my set of idioms… (metaphorically, it can clearly be applied to more than just physical switches)