This “style” is the position you should expect all good scientists to take.
Right. And I would simply say that Appeal to Authority especially doesn’t work here because opinions are divided. There is no consensus opinion among academia for this speculative point about the future economy.
So we have to focus on the actual arguments.
That general AI changes the rules completely.
Until that happens, the rules haven’t changed.
These are the two things that get continually conflated. If you want to make an argument that general AI is coming soon, I’m not going to argue with you. (I’m not necessarily going to believe you, but I’m not going to argue with you.) But that particular scenario is a much different conversation from any tech that’s short of that general AI. The world will fundamentally change at that point, in ways that we cannot presently predict.
I don’t advise conflating those scenarios, as you have (seemingly?) done. (Maybe I misread you?) Worrying about “jobs” is not remotely sensible in a world of general AI. There will be other potential problems many, many, many orders of magnitude more serious.
Even in brief, it would be possible for you to acknowledge previous error.
You didn’t do that. But the option was available.
Your “reading between the lines” failed to take into account the actual lines themselves. I said the exact opposite.
The 2008 recession was clearly demand-based, for an obvious recent example. But for your information, some significant Nobel (Memorial) Prizes in economics have been given out to people with the very “extreme libertarian” beliefs you now complain about. The Kydland-Prescott (1982) “Real Business Cycle” model happens to be the foundation of all modern macro. (Unfortunately.) It is violently anti-Keynesian. There is no room in that particular model for any government interference that can possibly improve the economy. My entire problem is its lack of realism in this regard.
The only reason that it caught on in the first place was a startling indifference to genuine evidence, in favor of a faux-statistical comparison of second moments.
Yet Prescott and Kydland were professors at some prestigious universities, and got themselves a trip to Sweden for this model that made no sense. Literally anything that the government does in this model must necessarily be counterproductive. There is no room whatever for good intervention. Yet here is what you say:
Oh really?
Prescott is (still! after the financial crisis!) defending a variant of his original model. Does the great prestige of his education and awards sway you into believing him? I can’t join your deference to empty prestige. Despite his award, I think Prescott is completely full of shit about this. Regardless of the prize, this method of modeling set back macroeconomics thirty years. I don’t give a flying shit when professors at prestigious institutions peddle garbage. I care about real evidence. But you? You seem to think we should listen to people with Nobels, based only on the fact that they have Nobels and not on their arguments, which necessarily means you think we should listen to people with “extreme libertarian” views like Prescott – regardless of the evidence that supports that position.
I can understand the reliance on prestige from someone who has not investigated the issue themselves, whether from lack of ability or mere lack of time. But as soon as real evidence starts flying around? The retreat to prestige is a sign that there are no legitimate logical justifications available. Prestige is nothing more than a paper shield at that point.
You might consider the possibility that you’re actually irked by facts that don’t fit well with what you would rather believe.
Sigh. That was my point, though. The rest of that sentence was “…anyone who disagrees with me is stubbornly unwilling or sadly unable to comprehend the basic facts I’m laying out.” (“Me” there referring to Hellestal.) My only appeal to authority was that said authority should be enough to get someone’s perspective a basic hearing that’s not immediately scoffed at as though it is base ignorance or the willfully obtuse ignoring of obvious facts. I don’t think any of us should disagree with that most basic level of “appeal to authority”, as modern society could not function without it.
And my whole point was to “focus on the actual arguments” rather than bemoan how those who disagree with an argument are just failing to comprehend basic facts. I went on in that post to say:
Just state your argument, and don’t whine about how you are presenting Facts but no one will listen. Or if you think your “facts” are not getting a fair hearing, maybe say so once and then go away.
Yes, we should *listen* to them. They have earned that right, as I keep saying. That doesn’t mean we must never *disagree* with them. If you read me as saying so, you misread me. I myself publicly disagreed with the most recent economics Nobel laureate on Twitter, in this tweet and then this one following up on the first.
*Yes, I’m aware that the economics prize was not established by Alfred Nobel, but it is awarded by the Nobel committee so that’s close enough for me.
I am not conflating the two scenarios. In one, general-purpose A.I., we are all obsolete more or less immediately / once the devices to utilize that A.I. are cheaply mass produced.
In the other, as is happening now, we are all replaced gradually. We don’t have to reach a human-level or better A.I. in order for humans to lose most of their utility. All that’s required is a “good-enough” imitation that can do most of the most common tasks that we do (which are largely repetitive, simple, and based on manual labor and a minimal amount of intelligence). That imitation arrives piecemeal: deep-learning algorithms; basic bots programmed for everyday real-world tasks; specialized machines such as burger flippers and auto-factory mechanical arms; customer-operated devices such as kiosks, self-checkout lanes, and vending machines; and online shopping where the only humans involved are the few people who oversee the drones moving products around the warehouses and loading them onto trucks, which will soon be driving themselves to the customer.
You seem to think that the vast millions of people who are going to be displaced from these industries will always have somewhere else to go, because humans are somehow special and our work is unique and requires creativity and flexibility. I respectfully disagree. We only need a limited number of doctors, lawyers, managers, engineers, and computer programmers. The bulk of people working today have no specialized skills; they’re performing manual labor or entry-level intellectual labor that can be, and is being, replaced by automation. When their jobs are gone, most will not be able to shift up the ladder to better, higher-skilled jobs. The number of jobs that require more advanced skills, education, and creativity is limited, and will remain limited. The bottom might fall out on wages for skilled jobs because of the increased competition for them, but we aren’t going to see millions of higher-skilled jobs appear overnight.
It’s completely sensible, in either scenario. What the dangers of general A.I. might be, we can only speculate on at this point. The dangers of millions of people being without the means to support themselves are something we have many, many historical examples of, and it doesn’t tend to end well.
Another great post, AIP.
Have you ever actually managed real people? I have, and your confidence in the human worker is far overstated, IMO. Yes, human beings can be flexible, creative and intelligent. They can also be incompetent, impatient, inconsistent and insolent. They can steal from you, they can lie to you, and they can sue you. They need rest and sleep, and perform inadequately without those things. They even want to get paid for every hour they spend working for you, and they might try to cheat you for some extra hours, the cheeky buggers. Robots do not have those problems. It seems one-sided to rave about how awesome human workers are without looking at the other side of things.
Also, if your goal here is to convince others of your position, you may wish to use a slightly less antagonistic tone. While reading this thread, I had to make a deliberate effort to not simply blow your posts off, not due to your facts, (which I thought were well put together and interesting) but 100% due to the tone you are posting in. IMHO and YMMV, obviously.
I do have one question. You claim that unless the world is perfect, there will always be work for people to do, which is correct. My question is: who is paying for that work to make this perfect world? I could be wrong, but I don’t think anyone in this thread is saying that the robots taking over will result in there being nothing worthwhile for humanity to do. The worry is that there will not be enough jobs worth paying a human, rather than a robot, to do.
Finally, I’ll third how great AIP’s last post is. It summed up my view and concerns on the subject fairly well.
A big reason for the recent advance in machine learning is the use of multi-level neural nets. While this is reasonably old, sometimes it takes a while for a new method to spread. Hardware helps, but not as much as better algorithms and heuristics.
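The point about multi-level (multi-layer) nets can be made concrete with a toy example. A single-layer network famously cannot compute XOR, but adding one hidden layer makes it trivial. This is only an illustrative sketch with hand-picked weights, not a trained model or any system the post refers to:

```python
import numpy as np

def step(z):
    """Heaviside step activation: 1 if z > 0, else 0."""
    return (z > 0).astype(float)

# Hidden layer: unit 1 fires if at least one input is on,
# unit 2 fires only if both inputs are on.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output unit: "at least one, but not both" -- i.e. XOR,
# which no single-layer (no-hidden-layer) net can compute.
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(x):
    h = step(x @ W1 + b1)     # hidden-layer activations
    return step(h @ W2 + b2)  # final output

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(xor_net(inputs))  # -> [0. 1. 1. 0.]
```

Depth buys representational power; the recent advances come from learning such intermediate features automatically across many layers rather than hand-wiring them as above.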
My first business trip in 1980 coincided with the release of the 386. The USA Today, in its front page story, said that with all this power AI was just around the corner.
Not quite.
He should wander over to Course VI some time. (MIT EE and CS Dept.) Moore’s Law has actually slowed down, as fabs which can handle new nodes get more and more expensive, requiring more time between nodes. Heat and leakage issues have meant that most of the new transistors we get go into multiple cores and larger caches. You’ll notice that clock rates have not gone up nearly as fast as they used to.
And the chessboard analogy was old when I was a kid. Sheesh. Exponential curves in the real world always flatten out.
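The flattening-out can be shown in a few lines: an exponential curve and a logistic curve share the same early growth rate, but the logistic curve has a carrying capacity K that caps it. The parameter values here are purely illustrative:

```python
import math

K, r = 1000.0, 0.7   # carrying capacity and growth rate (illustrative values)

def exponential(t, x0=1.0):
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0):
    # Solution of dx/dt = r*x*(1 - x/K): tracks the exponential early on,
    # then flattens out as x approaches the resource limit K.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 15, 20):
    print(t, round(exponential(t)), round(logistic(t)))
```

Early on the two curves are nearly indistinguishable, which is exactly why real-world "exponentials" look so convincing right up until the constraint starts to bind.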
Cray supercomputers, which predate 1997 by a lot, were a lot, lot smaller than a tennis court. The compute server ranch we used to run microprocessor simulations was that size, but was not really a supercomputer but rather thousands of processors which you can grab and run your jobs on. This kind of thing is easier to build when you make your own computers.
I think he might have changed some dates by 20 years. Tennis court sized computers are not a good idea because it takes signals a long time to get from one end to the other due to speed of light limitations.
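The speed-of-light point is easy to quantify with a back-of-the-envelope calculation. The 3 GHz clock is an assumed, typical figure, and real signals in wires propagate slower than c, so the true penalty is even worse than this sketch suggests:

```python
c = 299_792_458        # speed of light in vacuum, m/s
court_length = 23.77   # length of a tennis court, metres

one_way_s = court_length / c
clock_hz = 3e9         # assume a typical ~3 GHz clock for illustration
cycles_in_flight = one_way_s * clock_hz

print(f"one-way delay: {one_way_s * 1e9:.1f} ns")               # ~79.3 ns
print(f"clock cycles spent in flight: {cycles_in_flight:.0f}")  # ~238
```

Hundreds of cycles just to cross the machine once is why physically compact designs win, and why signal-propagation distance pushed against room-sized computers long before 1997.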
We are already seeing stuff like science fiction. But, sorry Mycroft and Harlie, AI does not appear to spring out of nowhere when you increase computing speeds. Computers have been teaching themselves for a long time. That is not true AI.
So, don’t believe everything you read. Even from MIT. I know, I went there.
It is happening already. In IT. In India.
A fascinating article from Technology Review
It does make sense that a lot of those jobs that have migrated to India would go to AI next. Not just call centers and such, but stuff like radiology and other “expert” work. The lower wages in India probably slow the process down because they can make it cost-effective to continue to employ humans longer than would be the case in the more industrialized world.
We’re not “raving” about human workers. The point is just that sentient beings can very easily switch tasks. There is no kind of AI on the horizon that is sentient.
So that is an advantage that human workers have over AI, and it’s a big advantage in the messy real world (even if, of course, there are also some relative *disadvantages* of using human labor).
The economy. The size of the economy is not fixed, and indeed a higher-productivity AI replacing a human worker is an example of growth.
Factory workers are very cheap. I heard a talk where someone said that their company did not automate their line in SE Asia as much as they could because the machines were more expensive than people.
IT workers, even lower level ones, are more expensive. My old company had a design center in Bangalore, and we had tons of problems with recruiting and turnover. The very jobs that cheap IT workers can do are the ones easiest to automate, and so most at risk.
And easy jobs are getting harder. When my phone went out six years ago or so I got on the phone with a tech and we diagnosed it. When it went out a few months or so ago the AT&T web site led me through the diagnostic steps, ran tests itself, and made a correct diagnosis without me ever talking to a person.
That is where we are going.
How many jobs use this flexibility, or even see it as a plus? For example, low-level call center people who work from scripts.
The problem is not the size of the economy, it is that the benefits of the growth of the economy are going to fewer and fewer people. A 3% increase in GNP does not put food on the table of the guy automated out of a job.
That’s impressive that an automated system was able to diagnose and help you fix your phone. Wow.
Well a lot of jobs do use that flexibility. I gave the example upthread that even burger-flipper implicitly includes “use your human general intelligence to do other miscellaneous tasks”.
Or more typical might be my job. I’m a software engineer, but actually writing or fixing code is maybe 10% of my time, 20% tops. Everything else is miscellaneous research, admin, planning, discussion etc. AI that can write code would make my job easier, but not replace it by any means.
Crazy Canuck was arguing about where the money would come from for new jobs but of course growth in productivity is money, indirectly.
Your point is not quite the same: you’re going back to the heart of this, and implying the laid off worker cannot find work because there are no jobs. But as pointed out, there are always jobs while there are human needs. It makes zero economic sense to imagine a reality where we’re all materially worse off than we are now and only the rich benefit from AI: If you and I are worse off than now, why can’t we just do work for each other, like we do now?
The issue is really with just the transition period, and cultures like the US needing to evolve past the idea of work being an end in itself.
I think “what are the jobs going to be” is the wrong question to ask. It’s really about how society is going to be structured if or when we are able to automate a significant amount of our physical and intellectual labor. Not just in terms of jobs and work people do. But who makes the decisions? Who provides oversight? Where and how does wealth accumulate? What does a “job” in such a world even look like?
People talk about basic income as a solution, but do we really want a society where we more or less just pay people to sit around all day and surf the web or watch Netflix?
I said up-thread that I think a lot of corporate jobs are bullshit, and I meant it. Do we create a society where we assign people to some sort of corporate-like structure where they “play work” all day sitting in bullshit meetings and have no real impact on anything because the actual work and decision making are handled by computer systems?
Does the ability to entertain through blogs, social media postings or traditional media become the new intellectual capital? Like do I try to get my young children to focus on being charming and good looking, rather than smart and hard working because they will never be as smart or as hard working as the AI systems that will be in place when they enter the workforce?
I hope you didn’t use that speech to inspire the people you managed!:eek:
I think that’s a pretty glib way of putting it. A basic income means that people have the freedom to choose how they want to spend their life. They can try to find the thing they are most talented at, train into other fields, try to capture the big money rather than just a sustaining income.
And yeah, some may use that money to blob out and do nothing. In this hypothetical, the cost of sustaining such people is negligible so why would I care?
And people responded to that point. I pointed out for example how productivity has increased steadily. How did that happen if we’re all just pushing paperclips around?
Well being serious, yes I think there are some differences in how we should train ourselves for work.
In the past being the very best at some very specific skill was very lucrative. That will be less and less true; being a “jack of all trades” and being able to utilize lots of different systems will be more important.
Being “charming” might not be useful per se, but actually networking with other people will become even more important than it is now. I don’t mean in an old boys network kind of way, I mean being in contact with people who know this or that area or industry.
I think the reason you should care depends on how we choose to structure society. But I don’t know that a society consisting of a small number of Elon Musks and Jeff Bezoses worth big money and the rest a bunch of interchangeable carbon blobs doing nothing but consuming is necessarily a healthy one.
Although I imagine that even with automation, there will still be a large number of jobs. Lets say I want to go outside and get something to eat. Assuming that everything isn’t just owned by some monolithic corporation or government entity, there will still be people who own all the local restaurants, bars and coffee shops in my neighborhood. Presumably many will still have human staff to interact with (although the kiosk system at my local Panera seemed to greatly reduce the number required to process the volume of customers they service). Humans will presumably still be involved at some level of figuring out what stores go where and building and maintaining the buildings and infrastructure that connects them.
It’s like all productivity-improving technology since the beginning of time. It frees up workers from having to directly produce stuff and lets them work on other tasks. A hundred years ago, a factory didn’t have a “Director of Culture” or “VP of Social Media”. They probably barely had anything resembling an HR department. I have a college buddy who works for a huge software company where, from what I can tell, all he does is serve as some sort of thought leader and “evangelist” on “disruptive technology”. Now maybe there is a legitimate purpose, but I suspect the only reason he exists at his company is because they make so much money they can afford to make up jobs like that.
When I’m consulting at a company, I often find that there is like 1 guy who actually knows all the systems and gets everything done. Most of the rest of the people I deal with are just middle managers who sit around in meetings pontificating and trying to look like they’re doing something.
A “jack of all trades” is typically a master of none. What I fear is that a society where most people don’t need to develop specific skills will become a vapid one, similar to what one might find in a typical high school or reality show. While likability and political aptitude are always somewhat important in any environment, I think what is often more important is needing competent people who can get stuff done. Take that away, then everything just becomes a popularity contest.
“As is happening now” is not a phrase you should be using to describe this.
There are more jobs than have ever existed in history. Wages are higher globally than they have ever been in history.
Humans being gradually replaced, if it were “happening now”, would mean we were already past the peak of human employment. While it’s true that particular jobs are being replaced by automation (and that has been true for literally the entire history of automation), it’s also true that on net more jobs are continually being created than are being replaced. Human jobs are still going up, for now. Humans are not being replaced at the present moment. Not “gradually”, and not otherwise. The continual tendency to ignore the facts of the present situation is the number one reason, without question, why I have trouble taking arguments like this seriously. To believe that jobs will be lost on net is to describe a possible future, not the present.
You are speculating.
You are prognosticating the future in a way that is contrary not only to the present, but to the entire history of human automation. You are giving your beliefs about a future that is contrary to what we have seen for literally all of history. There is, of course, nothing inherently wrong with that.
Obviously, the future can be different. It can follow different rules. But in order to successfully describe a future that contradicts current trends, it is necessary to identify, first, why current trends are the way they are. It is necessary to realize why jobs and wages globally are at their highest levels ever – literally the highest in all of human history – while simultaneously machines are also at their highest level ever. The two facts happen to go together. Machines create better productivity. Productivity creates wealth. This is exactly why the richest countries, and therefore the best countries to be in for low-wage workers, are the countries with the most capital equipment per capita: the most machines. (Workers don’t try to immigrate to Chad. There are very few machines there.)

It is also necessary to explain how the prognosticated technology will work its enormous change on the labor market itself, putting large numbers of people out of work (and thus depressing wages) while automation somehow continues to remain economical. The entire reason firms build machines is to replace expensive workers; that is the entire reason it’s economical to pour very expensive research into building very expensive machines. There needs to be some explanation for how more and more workers get displaced – putting downward pressure on wages the entire time – while firms somehow continue to implement expensive automation to replace workers who are getting paid less and less. On top of that, there has to be some explanation of who is possibly going to buy the output of those machines, when bigger and bigger fractions of the workforce are out of work.
Money is an intermediary. When we’re buying our groceries with “money”, we’re actually buying them with the work we previously did. We exchange the fruits of our own output for the fruits of other people’s output. In the world of massive job loss, suddenly there’s a whole bunch of new automated output being produced. In order for it to be economical to produce it, it has to be exchanged for something. An endless supply of Big Macs can’t just pile up in fast food restaurants. Money is a veil. It is necessary to look past that veil and ask what is being exchanged for the output of automation when some large percentage of the workforce is, somehow, producing literally nothing of value to anyone else.
To believe that automation will suddenly start replacing human work, faster than that human work can be created on net, is to make a speculation that things will be very different from the way they have worked for all of history. Again: there is nothing necessarily wrong with that. But the reasoning that says that the future is going to be different, despite all of human history working otherwise, is an argument that has to account for how the world works now and therefore why the change will happen – addressing points like the above.
This is what tends to be lacking.
I’m not going to say that it would be “logically impossible” or anything like that to build such an argument.
But I haven’t seen it.
I’d say there are two major problems with this perspective.
The first problem is that “repetitive” tasks are not a synonym for “low-skill” tasks. There is a variety of low-skill tasks that presently exist that are not particularly “repetitive” in nature, most especially service jobs where a human flair is valuable in itself. Again: human desires are effectively infinite. Even if the number of low-skill but low-repetition (and therefore hard to automate) tasks is low today, that would not necessarily be the case in a world where the more repetitive tasks are being slowly automated. We can’t predict what desires we’ll have in the future, after our current discontents are finally addressed, the most pressing of today’s imperfections already remedied. But we can predict that we’ll desire something, because that has always been the case. In a world where the “repetitive” somethings are satisfied, non-repetitive somethings will be next on the list. We’ll use the available resources to provide those somethings, whatever they are. Among those available resources is human labor.
The second potential problem in the above is, I think, the entire crux of the disagreement.
There is a belief among many that low-skill workers are, to put it crudely but evocatively, worthless fuck-ups who won’t be able to do anything else. Dead weight. “Unemployables”. Zero-marginal-product (ZMP) nobodies. The robot armies of the not-too-distant future can liquidate this portion of the population to no particular loss of human productivity. (I’m joking, but similar fears have been voiced before.) I pointed out this belief in my very first post in this current round of conversation in this thread. I’m going to repeat it now.
I’d guess this is where the argument really hinges. This is the real crux of disagreement.
Many seem to believe that a large portion of the population is good for nothing but worthless, mind-numbing work – the stuff that can be most easily automated – and that this portion of the population is so utterly worthless and without value that speculation runs rampant that when the machines get just a liiiiiiitle bit better than they are now, that portion of the population can no longer contribute in any way whatever to production in human society.
There is no way to “prove” this argument either right or wrong.
I think it’s wrong. But unlike so many other assertions in this thread, there are no facts of the matter on this particular issue. There is no historical record. There is no precedent. This is the belief that some humans exceed a certain threshold of value for economic purposes, and other humans don’t, and that even without a strong AI, this is just the way it will be in the near future. I think that’s bollocks. But I can’t prove that it’s bollocks. So I would just point out a few things.
First: the belief that “repetitive” tasks can be easily automated, without the rest of human work being so easily replaced by machines, tends to be accompanied by an implicit, but very firm and assured, patting of oneself on the back for not being one of the fuck-ups. As a deep misanthrope myself, I can appreciate this. I understand the temptation only too well of drawing a line in the sand and putting oneself on one side of the line, with large portions of humanity on the other side.
More Pratchett to make this point:
I don’t necessarily disagree with the misanthropy in this position. Rather, I would say it doesn’t go far enough. A more rational level of misanthropy, which is to say an even higher level than has been displayed so far, should inform the misanthrope that they, too, are human and therefore also deserving of suspicion when they tell themselves they are potentially above the class of “repetitive” action workers. This is to say: I think the second picture is more accurate than the first. Obviously some people are more economically productive than others. Can’t be denied. But even given that fact, a good misanthrope shouldn’t necessarily overestimate the distance. The scorn should be spread liberally.
I am hardly the first person to make this (style of) observation.
Differences between human beings are continually overestimated. We’re human, and so we’re specifically wired to notice differences between people. Obviously, genuine differences exist. But I believe these differences tend to be over-emphasized by people who can then flatter themselves by the perceived difference, by putting themselves on the “appropriate side” of the line. I’m not alone in that belief. The observation goes back quite a long way.
The final point I’d make related to this argument is one about the relative “burden of proof” and relative levels of “speculation”.
The position that a certain proportion of human workers can be displaced – without literally all human workers being almost immediately displaced – requires a belief that the economic world will work much differently than it has up till this point. Up till literally today. It requires that the ground rules will change.
To which the argument can be made: “Well, the ground rules could change.” Yes. Yes, they could. But the question is not just what could happen, but what is likely to happen. And this is exactly why I give so little credence to the “massive number of fuck-ups” theory of human society. Why does one group of humans have an extended buffer protecting them that the other humans don’t have? Why is the line drawn in that particular place? As soon as that line is drawn, the arbitrariness of it is striking. Sure, that line could exist, separating the workforce into the employable and the unemployable, the valuable and the worthless fuck-ups. But why would it? How likely is it?
This is exactly why the strong AI position – that “human jobs” no longer have any meaning with a strong AI – is so much more coherent. The line is not arbitrary. Human intelligence has some real variance, but there is still a relatively hard upper limit, which means that upper limit can potentially be surpassed. I don’t trust the given timelines for the enthusiastic strong AI supporters. But the change in the “ground rules” of society is clear, and so clearly revolutionary that nothing much can be said about it beyond its obvious and undeniable dangers. It is speculation, too, but logically coherent and with no arbitrary lines drawn anywhere.
But from a burden of proof perspective? Putting the line arbitrarily below some subset of humanity is immediately suspicious, especially given the lack of precedent. I’m not going to say that it is absolutely, totally impossible.
But it smells. It smells very bad. And that relates to this:
Everything in your post related to job loss is pure speculation at this point.
You are not describing the present. (The present has more jobs and higher wages, globally, than ever before.) You are speculating. You are describing your worries about the future that hasn’t yet happened. That’s fine. Speculation is the name of the game here. But you need to own up to it.
In stark contrast, a new intelligence bringing potentially extinction-level events is not even remotely speculation. It is plain historical fact. Intelligence is kind of a big deal. This isn’t hyperbole, and this isn’t speculation.
Shit gets real. A new kind of intelligence that can potentially run the entire world’s factories simultaneously without assistance, potentially including all military equipment, potentially even including thermonuclear weapons systems, or can replicate itself across the entire world with a copy&paste command, is something new and different from anything that has ever been seen before.
Worrying about jobs in that scenario is completely fucking insane.
The workforce doesn’t even crack the top thousand most pressing concerns.

The dangers of millions of people being without the means to support themselves are something we have many, many historical examples of, and it doesn’t tend to end well.
Literally none of the historical examples of massive net job loss, on the order of “millions” or more, have related to technology.
Financial and monetary shenanigans? Yes. Robots? No.
It’s a legitimate concern how to feed the poor when resources are limited. But a society suffering technology-based job loss is not a poor society. It’s a society that is quite literally so ludicrously wealthy that it can’t find enough work for its least productive people to do. The lack of jobs might create a self-esteem problem, but it’s not going to create a resource problem. We can just cut people a check at that point.
I’m going to limit myself to one logical point a day from here on out.
I’ll try to acknowledge the things I haven’t yet addressed, but I’ll post no more than one piece at a time, one limited argument a day.
Money is an intermediary. When we’re buying our groceries with “money”, we’re actually buying them with the work we previously did.
And/or work you are expected to do in the future. Were it just work you have already done, the incentive breaks down.