Stories that mention the report and provide a link all link to the same thing: Cars 2025. I don’t see the 300,000 figure anywhere at the website though, and it looks less like an in-depth analysis than a slick PowerPoint presentation.
ETA: Oh yeah, it’s a dynamic webpage, so links must get there eventually.
EATA: Nope; I couldn’t find it anywhere. And that seems to be the only auto-related article at the Goldman Sachs site.
After some more research, it appears that the way Goldman Sachs works is this: they release a press statement or actually plant (pay for the space) a story in the media about their reports. But they don’t release the report, just a story about the report.
To actually see the report, Goldman Sachs wants to charge everyone a fee; naturally their expertise isn’t free. So they use the media as their advertising arm (like a lot of “business news”, IMO) and then hope the publicity makes everyone spend the $4,365,987/year for a subscription to their services [citation needed].
Every news story I looked at had nearly identical language when describing the 300,000 number, yet there is no source available to check that number. Somewhere I found a reference to Goldman360.com, which is a website that wants you to log in to do anything at all, even view the website. So that’s their paywall where the actual report resides, is my guess.
How many accountants, analysts, and other people are there whose jobs are 90% generating periodic, rules-based reports?
How many middle managers does a large corporation have minding those people, and how many more executives minding those middle managers?
For example, a few years back I worked on a project related to operational risk management at a bank. The function itself is important: ensuring that a bank holds sufficient capital reserves to cover business losses from fraud, property damage, or other “events,” per regulatory guidelines like Basel II/III. But a) it really doesn’t have much to do with how a bank actually generates revenue, and b) it involves a lot of people doing a lot of mindless data work that is ripe for being replaced by automation.
So instead of 100 people doing all manner of tasks, you have 10 doing just the high level stuff. Ergo…90% bullshit jobs removed.
And yet there are still jobs for horses.
The main difference is that horses won’t do any job other than the ones humans tell them to do.
Most jobs for horses are ceremonial or decorative. The horse-drawn carriage at the Oktoberfest or county hay ride, racehorses, et cetera. They’re jobs that could be done better by machines (no horse will ever beat a Formula 1 in a race), but we just like to use horses in them. So like I said:
“Like, best case scenario, there are a handful of service jobs which humans prefer to have done by humans rather than robots (teaching, caring for the elderly, certain kinds of creative work).”
This is not a promising comparison. Most jobs are done by humans not because we need humans doing them, but because we need something done and there’s no better option to get it done than humans. Baristas, truckers, taxi drivers, lawyers, doctors… Massive amounts of work done by these people can be automated away, and there’s no particular need for a human face on the thing making your latte in the morning.
I would suggest this is not the right way to approach this issue.
Fact Number One: The economy is adding jobs. We have better technology, more machines, and more workers than we’ve ever had before. The trend is upward, for both workers and machines, and this long-term trend is centuries old despite business cycle fluctuations. Real wages are also up, way up, when looking globally rather than at any particular country.
Fact Number Two: There is a theoretical machine that we can easily imagine that would be superior to a healthy human worker in any job and across every conceivable dimension.
So here’s another way to describe the same point. You seem to think machines will start to eat away at what people can do, where the “red” of automation slowly replaces the workers, starting with the least skilled workers first, looking something like the following. These workers have literally no work options left because they can do literally nothing better than a robot. The green dots represent skills that require levels of human ability that cannot yet be fully automated.
I think this is a fair visualization of people who fear a massive number of “unemployables”.
Unfortunately, such a belief shows a regrettable lack of appreciation for how valuable human beings actually are, and how similar our skill sets are. If robots get around to replacing workers, on net, it will look much more like this. (You’re going to have to scroll to the right. Very very far to the right.)
That X is the entire spectrum of human skills. All of us collectively.
You can’t cite single industries, even large industries like transportation. Our ancestors were literally all involved in producing food, and now almost none of us are – nearly 99% relative job loss in the single most needful industry in the economy – but there are more jobs than ever. Why? Human beings are clever. Human beings are flexible. Human beings can quickly adapt. These are valuable skills. That’s the whole reason why other people build machines. If human work weren’t so damn valuable, there’d be no incentive to build machines to replace that work. Human beings can quickly adapt, but robots can’t be quickly programmed or quickly engineered or quickly implemented. Notice the years and years of effort it’s taking to automate the transportation industry – and that’s the current low-hanging fruit. The next low-hanging fruit is going to be even harder to pick, even more expensive to debug and implement. It’s easy to imagine a machine doing work, but it requires more than simple imagination for those machines to do the job more efficiently than a human can. And it is beyond our present imagination to conceive of what jobs we might “need” human beings for in the future.
Basic human intelligence, of the kind all healthy people have, is valuable.
It’s no use imagining a robot that can do the same job. It needs to do the same job more efficiently than humans can do it for someone to actually want to build that robot. But humans can just be told, “Hey, now do X instead of Y”. A robot cannot (yet) manage that. If any single fully functional human can be totally dominated by a robot along every conceivable dimension of economic production – where the worker can’t just hop to a new industry like workers have been doing for literally centuries – then the entire species is irrelevant for production the very next day. Not some of us. All of us. For healthy humans, we’re simply not that different. One robot better than the worst of us across every dimension means very quickly every robot better than the best of us.
We can tell a human worker, “Do this!” and then be confident they can start doing it without the program crashing and needing to be debugged five times a day over the next few years. That is an astoundingly valuable skill, and robots are not, I believe, anywhere close to replacing it. That is exactly what is so strange about these discussions. A reasonable person should expect that the economy should continue functioning as the economy has always functioned, until the ground rules change. Now, that might happen. Maybe it’ll even happen soon. Someone might develop a machine that can do literally everything better, along every dimension. But that scenario? Worrying about something as shallow as job losses in that scenario is like putting on sunscreen to protect from a nuclear explosion.
The reaction is not just misguided, but wrong by so many orders of magnitude that it’s like using an electron microscope to study a galaxy at the edge of the observable universe. The scale of the worry is so wrong, it’s breathtaking.
There’s no halfway scenario here. The economy will continue to function in the way it does as long as the current rules hold. That’s not to say that the current rules will always hold. They might not. But in that case, “jobs” is no longer a human concern, let alone “bullshit jobs”. That is not the right level of magnification to approach that sort of difference. Not even close.
I feel like we go in circles a little with this topic.
Human wants are essentially limitless. So whenever we make tools or machinery to do something humans used to do, there are always jobs the displaced workers can do.
There’s no plausible scenario where humans are both materially worse off than now and unemployed.
However, there could be a difficult transition period where a lot of workers are displaced at the same time and the economy has yet to adjust to the new possibilities of what is trivial to do and what we need to be shooting for. That’s what we’re talking about here.
Strange to hear people say this in 2017.
What exactly do you mean by “bullshit”? Not economically productive? So why do companies hire such people and why is productivity higher than ever (while unemployment still remains low)?
Or is it not a true [del]scotsman[/del], “man’s man” job? Should I be slashing coal out the ground with a battleaxe like old grandpappy or something?
The reason I mentioned a government infrastructure project is that no corporation will pay a worker a salary and benefits once robots are cheap enough that they can pay a one-time fee to own one and then a very small maintenance fee after that. To perform the same work. 24 hours a day without breaks. And likely more capable than any human would be at the same job.
Doing it through the government, which has no fiduciary responsibility to maximize profits for shareholders, would be trading inefficiency for humans still being able to eat and have homes. I proposed this because in the U.S. we have a LONG way to go before our establishment will accept a universal basic income as a positive development; we’re still trying to convince one of our two major political parties that giving people food isn’t wasteful socialism run amuck.
This only holds true to a certain point; that point will be when “worker” and “robot” become interchangeable terms.
What is the point in paying a salary and benefits to a human to do a job that a machine can do 24-hours a day with no breaks, maintained by another robot, that you paid a one-time fee to own?
Once not only manual labor, but also the bulk of intellectual labor, can be done more cheaply by robots… it will. Any new jobs created by the increase in efficiency will be immediately filled by newer model robots, or small tweaks in programming that, once done to one robot, can easily be uploaded to any others you need to do the same task.
There are only so many computer programmers that can be employed.
It remains an enduring mystery to me why people incapable of understanding a thirteen paragraph post think they’ll manage a clever point when they stop reading after the second paragraph.
1920 is about when the horse population of the world peaked. At the time, horses were used for transportation, for amusement, for war, for agriculture and for construction. Today, horses are not used for any of those things unless someone simply wants to, with the exception of amusement: you can’t go horse riding unless you have a horse, etc.
So in or about 1920, a horse would reasonably think that there will always be jobs for horses. I mean, trains happened and yet the number of horses in use increased. Airplanes were invented, the telegraph, the steam engine, all kinds of labor-saving devices… yet horses were used more and more each year. Until they weren’t.
No, seriously, thanks for the thoughtful post. I always look forward to reading your posts in these subjects. One thing though, is that you seem to be missing the impact of AI. AI has the potential to make the robot be able to do what currently only humans can do. Any thoughts on that?
Human labor is quite a different case than a specific animal’s, however, for all the reasons that Hellestal gave.
Humans have been displaced many times and found work elsewhere over and over again. It’s very likely a lot of human labor will be displaced again, but the majority of people being unemployable is a very different scenario, and not one that’s going to happen in any realistic timeframe.
Saying that once horses were commonly used and now they aren’t, is about as relevant as saying bellbottom jeans were all the rage once (therefore, checkmate)
Literally from the third paragraph, I’m talking about AI. My point, such as it was, relates to the difference between a machine that can do things like driving a tractor semi-trailer down the highway (a relatively “weak” AI, for anyone who wants to use the word) versus a machine that can do literally everything better than humans along every dimension (a “strong” or “general” AI). I don’t generally like to use the term AI directly, though, because it seems to me like people conflate the two different things.
I’m not a fan of most AI discussions because different kinds of “intelligence” get conflated under one term.
And this is understandable, to the extent that intelligence is very difficult to think carefully about. Look at the history of the development of AI. People in the past would ask, like, okay, what could an AI do if you invented it? What could it accomplish? And the answer early in the field was things like: “It could play chess! It could understand words!” And hey, we’ve got a computer that can play chess. We’ve got computers that can do translation. But we still don’t have a real AI. (By which I mean a “strong” or “general” AI. This is to say: We don’t have a single machine that can conceivably outperform a human being along every conceivable dimension of activity.) What we have is a large variety of very different machines, engineered for very specific purposes, which outperform humans at very specific activities. “Think of Activity A. Now build a machine that can do that better than people can. Now think of Activity B. Now build a machine that can do that better than people can. Repeat.”
AI folks in the past got accused of shifting the goal-posts when they said, well no, that chess machine isn’t actually an AI. But shifting the posts wasn’t quite a fair accusation. What actually happened was: we got better at understanding what intelligence means in more and more general contexts. When you’ve got an idea in your head for the first time, ever, and you’re trying to flesh it out with words, you’re not going to describe it correctly. In part this is because the idea is still fuzzy in your head, but also because even when the idea is crystal clear, you don’t yet have the language or the proper analogies to make a vision that is clear to you also clear to other people.
A “real” (or “general” or “strong”) AI will completely reshape the world in ways that we cannot presently imagine. But I personally believe this is a much harder problem than many of its advocates seem to believe. (Maybe I’m wrong about that.)
In contrast, a continual series of “weak AI” machines are simply not a big deal, as far as the economy goes. Nothing fundamental changes. They are designed to perform very specific tasks – like driving a shipping container down the interstate – and so they are not necessarily indicative of anything but our ability to automate specific tasks, when it is economical to do so. Now, we’ve gotten fairly clever with the automation of specific tasks. People look at their own jobs and think, well hell, a machine could do this. In many cases, they’re absolutely correct. But the machine still has to be engineered, and that is not easy. While there’s certainly overlap between some tasks, that overlap is continually overestimated. The ability to automate transportation is not going to cross over to the ability to perform open heart surgery. That surgery? It might be automated. But the programmers aren’t going to copy&paste their driverless car code to do it. They’re going to have to write a bajillion new lines, specific to the new task. The same would be true of picking strawberries. Or making sure a fast food restaurant is properly cleaned overnight. Or just about any other task that a human being can be told to do in about three sentences.
A human can just be told: “Do this.” And they can do it. Because, of course, we human beings are ourselves a kind of general intelligence system. In contrast, a newly engineered machine cannot be so commanded and repurposed. A big chunk of change has to be plopped down to get the machine functional in the first place, and then, it’s only going to be working on that one task. Human work remains valuable because it is flexible. And human wants, as previously noted, are effectively endless.
The fundamental rules of the economy could change. Absolutely that could happen. But we need to understand those fundamental rules in order to understand what could actually change them.
Instead we get antifactual beliefs like “labor always loses”. This isn’t just wrong. It’s pathologically incorrect. Imagine travelling back in time, when people worked 14 hour days just to survive, and asking them if they wanted the ability to earn five times the wages for less than half the work. Or hell, Pratchett said it better than I can.
Labor has been winning.
It hasn’t “won” because there is no such thing as winning. We always want more. But for literally all of human history, right up to the present day, machines have been a complement and not a substitute to human labor. We are rich beyond the imaginations of our ancestors because we have machines.
That could change. Of course it could. But we need to understand how reality works right now in order to understand what would cause such a fundamental shift to happen. And that means, in this context, a machine that can be repurposed to new tasks faster than human beings can be so repurposed. That means a GENERAL intelligence, not a technology that takes a decade and a half to develop. There is a huge gulf between those two things.
A thread like this gets to 15 pages because people can’t distinguish the two things.
It peaked in 1915 (PDF) and had been flat per capita since 1900, if not earlier. Meanwhile, over 200 years after Watt’s steam engine, we’re creating a million+ jobs each year, with increasing household income and high quality of life. Losing sure feels good.
I’m curious if the doomsayers have any tangible predictions with dates.
I’ve actually been studying a lot about RPA (robotic process automation) for work. I would hardly call the work it replaces “intellectual”. Robots are replacing work that is repetitive, rules-based, or tedious. They don’t design the processes and still need human oversight.
IBM Watson doesn’t replace your doctor. It gives him the ability to reference every medical publication ever written in seconds.
Repetitive, rule-based, tedious work is the bulk of human labor. Whether you’re talking about taking an order, taking money, and then pouring coffee ingredients into the right size cup in the right combination, then repeating… or lifting an item with the right barcode from one pallet and placing it on another pallet or in a truck… or driving hundreds of miles where your only job is to brake or accelerate and keep wheels pointed forward between white lines; every second, repeated over and over.
Even the bulk of office work is very repetitive: look at this scanned document, type the relevant information, check this box, make sure that box is filled out right, mark it complete, move to the next record. I’ve never worked at an office job that isn’t metric-based with quantitative numbers that can be applied to every single worker in the office because the tasks involved are hugely repetitive and it’s not unreasonable to set X number of tasks completed per day and base performance on meeting or exceeding that number. Creativity and innovation are helpful, but they aren’t necessary for most job functions.
My job now consists, as my user name implies, largely in proofreading the work an algorithm does. I process health insurance applications - the data entry is mostly done by the customers entering their info on the website (or moved by OCR from a scanned paper application to a digital format automatically), then the algorithm analyzes all the standardized fields, determines if they’re all filled out and logically consistent, and pushes 90+% of them through to completion. This is sufficient to meet regulatory requirements of a governmental agency (Medicare), and customer satisfaction in enough cases.
If the algorithm runs into something it can’t handle (needs missing info, doesn’t know how to address a problem it hasn’t seen before, is programmed not to make a certain judgement call), it drops to me or one of a few dozen people with a message already attached, e.g. “Error: Multiple enrollment reasons given” or whatever. Every day by the actions I’m taking, I’m actually training the deep learning algorithm how to automate what I’m doing, and fewer and fewer applications actually require human review. I’m working myself into obsolescence.
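The triage step described above can be sketched in a few lines. This is a minimal illustration, not the company’s actual system: every field name, rule, and error message here is invented for the example, matching only the general pattern of “validate standardized fields, auto-complete what passes, drop the rest to a human with a message attached.”

```python
# Hypothetical sketch of rules-based application triage: validate standardized
# fields, auto-complete applications that pass, and route the rest to human
# review with an error message. All field names and rules are invented.

REQUIRED_FIELDS = ("name", "dob", "plan_id", "enrollment_reason")

def triage(application):
    """Return ("complete", None) or ("human_review", reason)."""
    # Completeness check: every standardized field must be filled out.
    missing = [f for f in REQUIRED_FIELDS if not application.get(f)]
    if missing:
        return ("human_review", "Error: Missing fields: " + ", ".join(missing))
    # Logical-consistency check: exactly one enrollment reason allowed.
    reasons = application["enrollment_reason"]
    if isinstance(reasons, (list, tuple)) and len(reasons) > 1:
        return ("human_review", "Error: Multiple enrollment reasons given")
    return ("complete", None)

apps = [
    {"name": "A", "dob": "1950-01-01", "plan_id": "X1", "enrollment_reason": "new"},
    {"name": "B", "dob": "1948-05-02", "plan_id": "X2", "enrollment_reason": ["new", "moved"]},
    {"name": "C", "dob": "", "plan_id": "X1", "enrollment_reason": "new"},
]
results = [triage(a) for a in apps]
```

Each human decision on a routed case can then be logged as a labeled example, which is how the reviewer’s own actions end up training the model that replaces the review.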
People talk about AI replacing jobs like it’s something that’s *going* to happen - it’s not. It’s already happening. The tens of thousands of applications that humans don’t review at my company once required thousands of humans to review them. Now it’s done by me and a few dozen others plus 50 temps brought in from Oct - Dec for open enrollment when the volume is much higher.
Now, deep learning and other limited-purpose AI *hasn’t* led to mass unemployment, and new jobs are being invented that still require extra intellectual capital to achieve them. But that may not always be the case. It is not difficult to imagine the day when automation, whether in manual-labor jobs or in routine office work, becomes so cheap, efficient, and widespread that as workers are displaced from one field, the same software and hardware will be present from day one in any new jobs we come up with, and will supplant workers in existing fields too - so the displaced workers have nowhere to go.