Now that Elon Musk has bought Twitter - now the Pit edition (Part 1)

GM is adopting Tesla's North American Charging Standard (NACS), as Ford did. One standard is a good thing!

Yeah, that doesn’t seem like pit material.

Technology Connections (on his alt channel "Technology Connextras") has a nice rundown of the differences between the connectors, with snarky commentary (40 mins).

Twitter has refused to pay its Google Cloud bills as its contract comes up for renewal this month

Seems in keeping with the standard Musk playbook, and a whole bunch of Trust and Safety moderation tools are going to go offline next month as a result.

Elon doesn’t need any of that shit.

“best businessman in history”

Speaking of exploding Teslas (gift article):

To be fair, we’d need to see a comparison to human drivers on a per-mile basis.

Humans aren’t very good drivers.

What do you mean, 'to be fair'? It's not a true autopilot. People are supposed to remain fully in control of the vehicle at all times.

People aren't supposed to over-rely on this tech, but due to several factors they treat it as true autonomous driving. To be sure, general human stupidity is part of it, but it's not the only factor.

One factor is Musk's decision to strip down or change some of the sensors, mostly to make things cheaper. Another factor is Tesla (or perhaps mostly Musk) overselling how capable the tech really is, leading many people to be overconfident in it.

The types of accidents are also revealing. Autopilot apparently doesn't do well near emergency vehicles and motorcycles, situations where humans would actually do better.

Sure, on a flat stretch of 100 miles of straight highway without any accidents or emergency vehicles, this already does better than human beings. Near children in school zones or near ambulances? Not so much.

Sounds like people misusing cruise control.

And people aren’t supposed to drive drunk. But if you put a drunk person in the driver’s seat of a car, there’s going to be a better outcome if there’s an airbag instead of a metal spearhead or nothing at all. You will probably want to ban spearheads and promote the use of airbags, once you look over the actual numbers on the outcomes. (Though, plausibly, the spearhead will have the best outcome in driving safety… I’d happily defer to the actual numbers.)

Likewise, if you put an idiot in a Tesla and let him misuse Autopilot, in a Tesla without Autopilot (so he can't misuse it), or in a regular car, there will be outcomes you can point to that endorse one option and outcomes that count against the others.

But you have to compare the numbers honestly. Just saying that people shouldn't do something has no bearing on it. We don't live in a world where people do what they should; we live in the real world where, often, they don't. If, in that world, Autopilot is safer than fully attentive human driving, then it's still the better option even if people are misusing it.
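To make the "compare the numbers honestly" point concrete, here's a minimal sketch in Python of the per-mile comparison mentioned upthread. Every figure in it is made up purely for illustration (nothing here comes from NHTSA, Tesla, or any real dataset); the point is just that raw crash counts have to be normalized by exposure before they mean anything.

```python
# Illustrative sketch of a per-mile crash-rate comparison.
# ALL numbers below are hypothetical placeholders, not real statistics.

def crashes_per_million_miles(crashes: int, miles_driven: float) -> float:
    """Normalize a raw crash count by exposure (miles driven)."""
    return crashes / (miles_driven / 1_000_000)

# Hypothetical figures for the sake of the arithmetic:
human_rate = crashes_per_million_miles(crashes=500_000,
                                        miles_driven=3_000_000_000_000)
autopilot_rate = crashes_per_million_miles(crashes=300,
                                           miles_driven=5_000_000_000)

print(f"Human drivers: {human_rate:.3f} crashes per million miles")
print(f"Autopilot:     {autopilot_rate:.3f} crashes per million miles")

# Even this normalized comparison can mislead if the two sets of miles
# aren't comparable (e.g. Autopilot miles skew toward easy highway driving).
```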

That's impossible. Teslas have had flawless full self-driving capabilities since 2015. At least, that's according to Elmo's prediction in 2014, and, incidentally, according to his predictions almost every year since, each promising the miraculous arrival of FSD the following year. :roll_eyes:

Yes, full self-driving is a very hard problem. But apparently so is making accurate technology predictions, or at least it's hard for Elmo, who, according to one of his other predictions, is currently living on Mars.

No, it isn't. We have the capability now, and the units can be mass-produced by unskilled labor. Now, it does take 16 years or so to optimize a unit, and they are notoriously buggy for the first few years of operation. But after that, they work just fine. Better than a Tesla, at least.

Yes, but the upfront and ongoing expense of that option seems excessive. I've heard you can instead get various independently purchasable self-driving add-ons, admittedly with huge differences in skill, availability, and cost. You can get per-instance options or, at the high end, a full-time dedicated unit, or so I've been told!

But the units do it using a very complex set of subsystems, and we have very little understanding of how they work, so it's very difficult for us to reverse engineer them! :wink:

We actually do have fully automated transport systems in practical, everyday use:

Driverless trains are a good example of how difficult this is. You’d think a train would be orders of magnitude easier to automate than a car, given that the train is in a very controlled environment. And yet…

That was just on the first Google page.

The problem is that the real world is full of unknown unknowns, and a driverless vehicle must handle them safely. That likely requires reasoning and judgment, not just a set of rules. We're going to need something close to an Artificial General Intelligence to make autonomous driving truly safer. That's why I always said it would take somewhere between 'decades' and never before we had driverless cars. Now, with the advent of Large Language Models, I think it may come sooner. But it's definitely not here yet.

ChatGPT isn't going to drive your car any more than Photoshop is going to.

What about AutoCAD? It has auto right in the name.