I'm afraid of the "Singularity"

Ever hear of the S curve? Predicting out too far when you are on the steep part of it is one of the most common fallacies. Consider the transportation speed curve from 1800 to 1970. Extend that and we’d be cruising the solar system by now. In fact, we have never gone faster than we did on the Apollo flights.
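
To make the S-curve point concrete, here’s a toy sketch (every constant in it is invented for illustration): fit an exponential to two points on the steep part of a logistic curve, then watch the extrapolation blow past the ceiling.

```python
# Toy illustration: extrapolating exponentially from the steep part of
# an S curve. All constants are arbitrary.
import math

L, k = 100.0, 1.0                        # logistic ceiling and growth rate
logistic = lambda t: L / (1 + math.exp(-k * t))

# "Fit" an exponential through two points on the steep part (t = -1 and 0).
a, b = logistic(-1), logistic(0)
growth = b / a                           # implied per-step growth factor

for t in range(0, 11, 2):
    naive = b * growth ** t              # exponential extrapolation
    print(f"t={t:2d}  actual={logistic(t):7.1f}  extrapolated={naive:10.1f}")
```

By t = 10 the extrapolation is off by a factor of a few hundred, which is roughly the “we’d be cruising the solar system by now” error.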

What is the hot topic in computing now? It isn’t AI, it is Big Data. That’s where the money is these days. A lot of my professor friends tell me that their EE students want to work in bioelectronics. Applied biology and genetics are where computing was 50 years ago. My prediction is that the computer S curve is about to flatten out, and we’ll see a big increase in progress in biology. Would you rather have a better app or 50 more years of healthy life? I rest my case.

You do realise, though, that:

‘want’ is irrelevant. You will be assimilated.

?

Now, I am sure that some people will find that terrifying. I thought it was beautiful & a ready response to those who decry immortality, whether physical or spiritual.

Sex! Sex sex sex! That is what will save us. Because not everything that’s important to human beings exists in the ethereal plane. The drive toward procreation is just as strong and extremely personal. “Knowledge” is a construct of intellectuals who put their faith in empirical observations that can be stated in language. And it’s important - but it’s only one dimension of human existence.

In other words, there are plenty of dumb, horny people who aren’t buying into the prerequisites that make the Singularity a possibility. There’s a good reason why Cecil hasn’t succeeded in wiping out ignorance (as smart as he is, shouldn’t he have accomplished it by now?). Intellectual knowledge is just one page in the book :slight_smile:

Sex is one thing, but there’s also greed.

Many things that drive the world economy, and thus to a large extent human quality of life, are already out of human control.

Remember that mini stock market crash in 2010? We’re still not sure what caused it. But it was more sudden and less controllable than the 1987 crash. And the trading algorithms have gotten faster and smarter (from an individualistic point of view) since 2010.

The most jarring part of that story was the continued presence of phone books 80+ years from now.

It’s highly implausible that they can’t, however. It reeks of human egotism to claim that we just happen to be the smartest things that can ever exist.

Considering the advantages we have over animals, I find that highly implausible as well. Clearly, a large enough difference in intelligence is a massive advantage.

Well, that’s one version of the Singularity; there are several. There’s the vague “Nerdvana” version where Earth becomes a paradise and everything will be solved by handwaved post-Singularity technology, without bothering to explain how to get there from here or why everything turned out so well. There’s also the idea that society will reach the point where normal humans cannot understand it anymore, thanks to superhuman AI/augmented humans. Or the idea of a runaway cycle of AIs building smarter AIs, possibly leading to the former scenario as well. That’s not like living from 1982 to the present; it’s more like going to sleep in the Stone Age and waking up in 2060. Or possibly going to sleep in the Triassic and waking up wondering why those little bipeds are pointing metal tubes at you.

If we figure out the right architecture for a human-level AI and know how to program it. We’re a long way from that, to put it mildly. A time traveler could hand us computer hardware capable of supporting human-level reasoning, and we still couldn’t build a human-level AI, simply because we wouldn’t know how to program one.

Religion for nerds, and about as meaningful as any other invisible friends.

ROFL. You’re really straining the bounds of credulity here.

Fuck process nodes, fuck heat distribution. I agree that we’d be near the end of Moore’s Law if we were stuck with our current hardware, but quantum computing is on its way. That’s going to be a massive game changer. IMHO, anyway.

and as far as robots needing to be cheaper than people… Dude, you’re killing me. Never mind Moore’s Law; there’s the simple fact that any given level of technology, whether it’s aluminum production or computer speed, gets cheaper over time. Aluminum used to be the most expensive metal on Earth. That was barely 150 years ago.

Now, don’t get me wrong, I am, in general, a cranky naysayer on a lot of things, especially as I get older. However, people like you (smart guys with tech degrees and experience) have been predicting the end of Moore’s Law ever since Gordon Moore published it in 1965.

The only way I see it ending before 2030–2035 is if quantum computing fails. Now yes, if that happens, absolutely: Moore’s Law will probably end around 2025, from what I’ve read. You haven’t given even one reason why it would fail, though.

We’re not all that far off now. As you say, if Moore’s Law holds for a little while longer, we’ll actually achieve that estimated processing power of the human brain, but that’s not the same thing as saying a computer can do the same things a human brain can. The brain works fundamentally differently from a computer. Humans are terrible at math in comparison to even a computer of decades ago, but computers are terrible at many tasks that are simple for humans, like face recognition. There’s more to the human brain than raw processing power, and there are a lot of things whose workings we can only guess at, which may even involve processes that aren’t easily mimicked by computers.
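
For what it’s worth, the usual back-of-the-envelope version of that “estimated processing power” figure (every factor here is a rough, contested assumption, not a measurement) goes something like:

$$
10^{11}\ \text{neurons} \times 10^{4}\ \tfrac{\text{synapses}}{\text{neuron}} \times 10^{2}\ \text{Hz} \approx 10^{17}\ \text{synaptic events per second}
$$

which says nothing about what the brain actually does with those events, which is exactly the point.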

Consider: decades ago, it was believed that intelligence meant beating a human at chess or a similar game. Computers have been beating top humans at chess for a considerable time now, and even with considerable resources also being spent researching conversational language, nothing has come close to passing the Turing test for intelligence. The thing is, intelligence is a lot like porn: we don’t have a definition for it other than “you know it when you see it”. Every time we’ve set some kind of hard line and then achieved it, we’ve realized we’re still not there.

I don’t think this is unreasonable. The technology for a lot of this exists today, like the mentioned self-driving car. I was closely following and know people involved in the DARPA Grand Challenge. Seeing where it is now, I have no doubts it could be feasibly available to the general public in 20 years if the market incentive is there to make it so.

But I don’t see this as a death scenario for labor classes. Automation won’t happen overnight, and some things will be automated sooner than others. This will be enough time for shifts in the market and in government thinking. Some of what it will do is open up more high-end jobs, or free people up to do other “unessential” jobs in the arts and sciences. Imagine a world where we don’t need as much work, so rather than having half the people working full time and the other half unemployed, everyone can just work fewer hours and spend more time with their families and pursuing their passions. That’s the direction we need to go, but part of the problem is that that’s just not the thought process that’s in place now.
Regardless, the singularity isn’t something we should fear. It looks scary if we compare it to where we are now, but like others mentioned, where we are now would look scary to people of just a few decades ago without seeing how we got here. I imagine those people of the past might see the internet and smartphones as “big brother” technology, but there are huge advantages that go along with it, as well as the intervening cultural changes. Hell, if you’re scared of the whole Borg hive-mind thing, we’ve already taken some pretty significant steps toward that with the internet, smartphones, Facebook, Twitter, texting, and 24-hour news cycles. Even now, I can know more or less what all of my friends and family are thinking and feeling, if we’re all sufficiently plugged into social technologies and media. In fact, it seems to me that having it directly plugged into our brains is a less significant step, culturally and socially speaking, than some of the ones we’re making now in accepting and adapting to that level of connectedness.

The hardware may, or may not, follow the Moore curve. Programming still has to proceed at the pace of programmer ingenuity (and typing speed), which is not and has not been increasing exponentially at all (though, to be sure, it has been increasing somewhat). I’ve almost completely stopped playing video games because the gameplay curve (while we’re on the subject of curves) has been as flat as I’ve ever seen it lately (for whatever genre you care to name). Most of that processing power goes into nice, shiny chrome, not gameplay. Imagine something like Skyrim, but instead with extensive and dynamic economic, political, magical, and interpersonal engines, NPCs all pursuing their own goals and ends, that you as a player can delve into at whatever level you desire. Foment a revolution by becoming some disaffected prince’s most trusted advisor? Organize some lazy and disillusioned dwarves into an expedition/army to retake a dungeon from a bunch of orcs? Take over the Council of Air Mages via intrigue and subterfuge?

Instead it’s pretty much the same kind of gameplay I’ve seen in CRPGs since time immemorial: whack monsters, go into town and try to scrounge up some precanned quests, listen to some precanned conversations for additional clues, sell loot, buy new equipment, head on back out (to be fair, I got most of that from this thread and posts like this one, since I didn’t buy the game). A bit more sophisticated than it was for, say, Ultima or Baldur’s Gate, but not any sort of quantum leap: still static game worlds, with limited NPC AIs, which you have only a very limited ability to change significantly. Now imagine all the programming that would be required to make it more like something in my vision. Not so easy, eh? All that pure hardware power you have in your machine all of a sudden isn’t so helpful to this end, is it? Of course, a lot of the complaints in said thread were more along the lines of “Why isn’t there more combat!?” rather than “Why isn’t there more compelling gameplay?”, but consumer preferences in this vein are a topic for another thread.
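
To put a shape on “all the programming that would be required”: even the most stripped-down goal-pursuing NPC is a utility calculation like the sketch below, and that’s before you write the hundreds of needs, actions, and world-state effects a Skyrim-scale game would demand. Everything here (the needs, the actions, their effect sizes) is invented for illustration, not taken from any real game.

```python
# A minimal, hypothetical sketch of a goal-driven NPC: each tick it
# picks whichever action best serves its current needs. A real game
# would need hundreds of needs and actions, plus the simulation that
# updates them, which is where the programmer-years go.

def choose_action(needs, actions):
    """Return the action whose effects best satisfy the NPC's needs."""
    def utility(effects):
        return sum(needs.get(n, 0.0) * relief for n, relief in effects.items())
    return max(actions, key=lambda name: utility(actions[name]))

npc_needs = {"gold": 0.9, "safety": 0.4, "prestige": 0.7}  # current urgencies
npc_actions = {                                            # effect per need
    "retake_dungeon": {"gold": 0.6, "prestige": 0.8, "safety": -0.5},
    "advise_prince":  {"prestige": 0.5, "gold": 0.2},
    "stay_home":      {"safety": 0.9},
}
print(choose_action(npc_needs, npc_actions))  # -> retake_dungeon
```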

Sure, it’s a death knell. Just because you have a product, it does not mean that anyone will want it. Think of all the young rock musicians, poets and artists who wind up working as waiters, marketers and administrative assistants. There was no market for their skills so they moved to an area where there WAS a market for their skills. Classic free market theory.

Now imagine a world where there is no market for manual labor in factories, where most office jobs are automated. Where will these billions go? There are not THAT many restaurants out there. What’s more problematic, there won’t be a huge mass of consumers wanting services; the consumer base will have drastically shrunk, which means fewer restaurants, i.e., a much smaller service industry. There will be no jobs to go to. The rich will be very rich, there will be a small middle class to serve the rich, and the rest of us… will be cordially invited to go to hell.

It’s already happening. The “jobless recovery” has occurred because productivity has gone up. Companies don’t have to hire people, so they don’t, and the economy stays stagnant because there is no demand.

Now magnify that times 1000. That’s our economic future if robotics and computer-enhanced productivity continue on their present tracks. Sure, we’ll be freed up for other things. But who will buy those things? Ask any art major.

This is not, however, an effect of the Singularity; it doesn’t require AI or anything like it.

The latter seems unlikely. If it’s possible to model how an individual neuron responds to synaptic inputs, and it’s possible to understand how that neuron is connected to other neurons, then I doubt there is anything going on inside a person’s skull that couldn’t be easily mimicked by a computer program. It’ll just be a matter of understanding those details. I’ll agree we don’t fully understand those details yet, but I don’t believe there’s anything there that’s intrinsically incomprehensible.
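
As a sketch of how standard that first step already is, here’s a leaky integrate-and-fire neuron, one of the simplest textbook models of how a neuron responds to synaptic input. The constants are illustrative only, not fitted to any real cell, and real neurons are far messier than this.

```python
# A minimal leaky integrate-and-fire neuron: membrane voltage leaks
# toward rest, integrates synaptic drive, and fires when it crosses
# threshold. All constants are illustrative only.
import numpy as np

dt, tau = 0.1, 10.0                              # time step, membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0  # mV
v = v_rest
spike_times = []
rng = np.random.default_rng(1)

for step in range(2000):                 # simulate 200 ms
    drive = rng.normal(18.0, 5.0)        # noisy synaptic drive (mV-equivalent)
    v += dt * (-(v - v_rest) + drive) / tau  # leaky integration toward rest
    if v >= v_thresh:                    # threshold crossing -> spike
        spike_times.append(step * dt)
        v = v_reset                      # reset after the spike

print(f"{len(spike_times)} spikes in 200 ms")
```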

I think you’ve worded that too strongly when you say no closer to AI.

Probably no progress towards understanding how to achieve consciousness, but there has been significant progress on some of the underlying functions of the brain (categorization/classification/function approximation) that intelligence appears to be built upon.
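
As a sketch of what that “function approximation” progress looks like in practice (network size, learning rate, and target function are all arbitrary choices here): a one-hidden-layer network trained by plain gradient descent to approximate sin(x).

```python
# A tiny one-hidden-layer network learning to approximate sin(x) by
# gradient descent on mean squared error. Sizes and learning rate are
# arbitrary illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 16
W1 = rng.normal(0, 0.5, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1))
b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # network output
    err = pred - y                       # prediction error
    # Backpropagate the error to each weight matrix.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h**2)       # gradient through the tanh
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float((err**2).mean()))
```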

Quantum computing has a limited set of problems it can be used to solve. Very limited. It’s great for those problems and worse for everything else.

Scientists’ understanding of the complexity of the brain is increasing every single day, making those estimates you’ve read more and more wrong.

Two examples to illustrate:
1 - Glial cells are involved in computation, and glial cells vastly outnumber neurons
2 - Nonsynaptic plasticity - neurons and their connections are far more complex than the estimates/models allow for

The methods used to win at chess are not intelligent methods. They are brute force, which makes them completely uninteresting.

Some argue that dismissing brute-force chess solutions is moving the goalposts, that we only consider them non-intelligent because the problem is solved, but that point of view really exposes a lack of understanding of the problem, in my opinion.

Human (animal) intelligence is interesting precisely because it isn’t a brute-force method. It’s adaptable to any new problem, can learn, and has novel/creative insight to solve problems. It’s an efficient system that categorizes/classifies/compresses attributes of the world around us, builds useful models, and makes predictions to solve problems based on that modeling.
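
For the record, here’s what “brute force” means concretely: the core of a classical chess engine is an exhaustive minimax search like the sketch below (real engines add alpha-beta pruning and hand-tuned evaluation, but the principle is the same). The `moves` and `evaluate` hooks are hypothetical stand-ins supplied by the caller, not a real chess library. Note there’s no learning or insight anywhere in it.

```python
# A minimal minimax sketch: exhaustively search `depth` plies ahead and
# back up the best score. `moves` and `evaluate` are hypothetical
# game-specific callbacks.

def minimax(state, depth, maximizing, moves, evaluate):
    """Return the best achievable score from `state`, searching `depth` plies."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)           # static evaluation at the leaves
    scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(scores) if maximizing else min(scores)
```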

Except, after the singularity, why would we even need factories? There would be no shelves of retail products for us to browse; why should there be? We would have the equipment to fab anything on demand. Which means one would have to envision some sort of economic engine that could support such a non-fluid economy. What is on the other side is not our culture enhanced but a completely different one.

The fundamental problem with computer science right now is that we are still relying heavily on fixed-architecture serial instruction processing. Once pure dynamic logic systems become the standard, computing will be vastly more efficient, and I expect the rate of advance toward “self-aware AI” will accelerate tremendously, as systems make the human toil of programming more and more abstract. But computers trying to mimic the brain seems absurd. They work the way they work, which is quite different from the way the brain works; I think we will gain more by embracing the difference than by trying to merge the two into a common pattern.

–––––––––

But the most important thing standing in the way is energy. We have lots of it, but what we have is being wasted at an alarming rate. Petroleum is still very crucial to our technological progress, and it is going away rather quickly. Up in smoke, as it were. We lack a viable replacement for this energy source, as all the alternatives still rely on petroleum to get there in the first place. As it stands, we are in a race against time, and even crossing the super-tech boundary may not be enough.

Until the day they destroy us all.

OK, in your imagined future there will be a small class of trillionaires who own the automated factories and robots, a small class of human servants to those trillionaires, and then 10 billion unemployed and unemployable masses who can’t create anything of value compared to a robot.

Except this doesn’t make any sense. What makes a factory owner of today “rich”? His factories make products that people want, and those people produce goods and services of their own that they are willing to exchange for the goods produced in the factory. Except in your future, owning a factory won’t make you rich, because the factory doesn’t produce anything that people are willing and able to exchange goods and services for, because those people can’t produce anything anyone wants. It’s all done by automation and computers, right? So the automated factory that can produce an endless panoply of goods actually produces only what the trillionaire wants for his own use.

How can he sell his products to the masses? They can’t buy them, they don’t have jobs, they are useless. The only customers for his products are his fellow trillionaires. And so what exactly makes him a hyper-rich trillionaire?

Bill Gates is rich because his money allows him to pay people to do things for him that Bill Gates wants done. If his money doesn’t allow him to pay for things, then it isn’t really money, is it? And so a guy with an automated factory that can produce any good or service you can imagine isn’t rich unless he’s the only one who controls such a factory, and there are things he wants to trade those goods and services for. Otherwise, he’s just a guy living in a cave with a magic box. Having a lifestyle with luxuries that would make the Sultan of Brunei envious doesn’t make you rich. It just means you live a pleasant life.

And of course, even if we imagine that in America the hyper-rich will simply allow the 99.9% to literally starve to death, America isn’t exactly the only country in the world, is it? And if the 99.9% are literally starving to death, alongside factories that can provide for their every need, perhaps some political changes would take place? I know you believe that the hyper-rich would rather build killer robots to exterminate the masses rather than allow a government to own a factory, and maybe they would. But there are ruthless people all over the world, and not all of them are wealthy industrialists.

Imagining that industrialists will dominate the future of the planet is as naive as imagining that, because plantation owners dominated the early United States, the future belonged to plantation owners. Even today, owning a factory isn’t the route to wealth. Henry Ford made a fortune operating factories to churn out automobiles. But the guys who own the factories that churn out the mass-produced crap in China aren’t making vast fortunes. They aren’t the masters of the economy, any more than farmers are. They build whatever they are told to build. The margins in industrial production are extremely thin. Apple makes a fortune selling iPods, but the factories where those iPods are assembled intercept only a tiny fraction of that profit.

No. You, together with singularity loons like Kurzweil, are making the mistake, alluded to in my previous post, of thinking that intelligence is something on a linear, quantitative, continuous scale, incrementally increasing as you go from amoebas, through chimps, George W. Bush, Einstein, Cecil, and then potentially onwards and upwards to something like what God is imagined to be, endowed with incredible superpowers thanks to His massive IQ. It really is not like that.

The difference between human intelligence and animal intelligence is not quantitative, it is qualitative, associated with the evolution of language, a general-purpose representational system that no animal has access to, but that enormously enhances the cognitive resources of humans. Compared to that difference between humans and animals, the quantitative differences in intelligence between humans, between morons and human geniuses, and even the difference between, say, a chimp and a mouse, are trivial, and there is no particular reason to think that artificially intelligent systems would be able to improve very much on the most intelligent humans. No doubt we are limited by our biological structure somewhat, so you might be able to build machines to do human-style thinking a little more efficiently than actual, biological humans can, but that is not a very big deal. To make any very big difference to the world, it would take another evolutionary step to a new kind of intelligence, comparable to the one humans took when they evolved language. Unfortunately, we currently have virtually no understanding of how that step occurred.

Even if we did understand that, though, as we probably will one day, it is unlikely that it would provide much insight into how another such step might be taken. I am not saying such a further step is an impossibility, or that humans are the smartest things that could ever exist. Maybe it has happened in some other part of the universe and there are aliens with a sort of intelligence incomprehensibly beyond ours, which we can’t hope to comprehend any more than a monkey can hope to comprehend us. However, we have absolutely no idea what such a step would entail, and very likely we are simply incapable of conceiving of what it might be like (as a monkey is unable to conceive what it would be like to reason using language). What we can be sure of is that incremental improvements in computing power of the sort that our hardware and software engineers may be able to provide for a while longer are not going to bring it about. Nor is research going to help, because we have no idea what to look for, and would not be able to recognize it or understand it if we found it. The only way to get there (if there is any there to be got to) is the way we got to here, via the random trials and errors of evolution.

Yep. Capitalism is bound to eventually collapse under the weight of its own success.

You do know this is really Marxism 101 you’re talking here?

The trouble is that the failures of the premature attempts, in Russia and China, to establish communist societies have left us with no model of what a viable post-capitalist economic system might look like. :frowning:

But all this has nothing to do with the mythical “singularity”. Preternaturally intelligent machines are not going to save us, any more than Stalin and Mao could. (I guess the good news is that neither are they going to enslave people, even to the extent that Stalin and Mao did.)

Pointing out that people need to be able to afford to buy products and services in order for the rich to become and remain rich isn’t Marxism.