Going to Mars: Same as a Personal Flying Car?

Absolutely nothing in that popular-science article disputes any of my points. Notice that in my definition of the Singularity I did not give a specific timeframe. I’m well aware that self-replicating factories and/or artificial intelligence are very difficult problems. I’m not taking Vinge’s or Kurzweil’s view that this event is imminent. It might be 300 years away. It won’t be any less dramatic when it happens, even if it is 300 years hence.

Also, unlike the popular-science article, I mention two causes of the Singularity that are quite different; the article only mentions AGI. We might develop:

  1. Human uploads. See this essay for an accurate technical description of how you might do one. Note that every tool the author mentions exists in working prototype form as of 11/6/2014.

  2. Self-replicating factories. Leaving the nanotech out of it, all a self-replicating factory needs is to be completely unmanned, and the total factory (which may be a complex many square miles in size) must contain all of the equipment needed to produce every machine in that factory. All it takes is robots at about the tech level of this one, plus the capital to build and develop all the firmware and production-line engineering for every stage of the factory (it would be pretty expensive, obviously).

This model of factory would be assisted self-replication (since you don’t have AGI/uploads yet): humans would be needed whenever the factory encounters a fault the software does not know how to handle. I imagine millions of workers in India acting as the factory’s "help desk" or "Maytag repairman", clicking a mouse to order the robots to resolve faults the software cannot fix on its own. This does mean there’s a limit to how many factories you can build with this method alone.
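
To make the "help desk" model concrete, here’s a minimal sketch of that assisted-self-replication loop in Python. The `attempt_auto_fix` and `human_helpdesk` functions are hypothetical placeholders, not any real factory API:

```python
import queue

class Fault:
    """One unresolved problem reported by a machine on the line."""
    def __init__(self, machine_id, description):
        self.machine_id = machine_id
        self.description = description

def attempt_auto_fix(fault):
    # Hypothetical: firmware tries its library of known recovery scripts.
    return False  # pretend the software cannot resolve this one

def human_helpdesk(fault):
    # Hypothetical: a remote operator reviews camera feeds and clicks a fix.
    print(f"Escalated to remote operator: {fault.machine_id}: {fault.description}")

def run_supervisor(fault_queue):
    """Core loop: robots handle what they can, humans handle the rest."""
    while not fault_queue.empty():
        fault = fault_queue.get()
        if not attempt_auto_fix(fault):
            human_helpdesk(fault)

faults = queue.Queue()
faults.put(Fault("press-17", "jammed feedstock"))
faults.put(Fault("mill-03", "tool breakage"))
run_supervisor(faults)
```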

So I have established the how. Now, regarding the *why*: this earth has competing nations and corporations. Any corporation or nation that develops AGI would have a gigantic competitive advantage, because it would be able to produce things without paying as many workers. Any corporation or nation that developed human uploads would be able to charge immense fees and/or command obedience from its populace, because it would have a legitimate treatment for biological death. Any corporation or nation that developed self-replicating factories would become the richest entity on earth (barring theft of the tech, of course).

Essentially, to disprove the Singularity, you need to show

  1. **All** of the three technologies that could trigger the Singularity are highly unlikely, based upon peer-reviewed scientific knowledge.

OR

  2. Humans will never put in the investment needed to reach those technologies, because they do not want a vast economic and/or military advantage over other humans.

There once was a human chipping rocks together trying to start a fire. I’m sure other humans like yourself told him he would never succeed, and that fire was the realm of the gods. Everything I have mentioned (self-replicating factories, sentience) is already performed by biological systems, in the same way that the caveman chipping rocks together had seen fire and knew it was possible.

I’m of the view that once humans develop one of the three trigger techs, they will be able to rapidly develop the other two. Also, once all three techs are present, they feed back on each other and allow for even faster growth.

If humans have self-replicating factories, they will have vast numbers of people who can be trained as AGI researchers, and the ability to manufacture supercomputers the size of mountains. If AGI is possible at all, they will have the resources to do it.

If humans have self-replicating factories, they would be able to produce the scanning equipment to scan human brains (a multi-beam electron microscope like this one) and the gigantic computers with enough power to emulate them. They would also be able to produce robotic lab equipment to perform the millions of distinct experiments on in vitro synapses needed to determine the exact effect each protein in a human synapse has on thresholds and other measurable characteristics, and to work out the rules for learning.
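
As a toy illustration of what "working out the rules" could look like computationally, here is a least-squares fit relating hypothetical per-synapse protein measurements to a measured threshold shift. The protein choices and all numbers are invented for the example:

```python
import numpy as np

# Each row: concentrations of a few synaptic proteins in one in vitro synapse
# (arbitrary units). The values and the protein choices are purely illustrative.
protein_levels = np.array([
    [0.8, 1.2, 0.3],
    [1.1, 0.9, 0.5],
    [0.4, 1.5, 0.2],
    [1.3, 0.7, 0.6],
])
# Measured shift in firing threshold for each synapse (also invented numbers).
threshold_shift = np.array([0.12, 0.18, 0.05, 0.22])

# Least-squares fit: estimate how much each protein contributes to the shift.
coeffs, _, _, _ = np.linalg.lstsq(protein_levels, threshold_shift, rcond=None)
print("fitted per-protein coefficients:", coeffs)
```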

If humans have AGI, they could order the AGI to develop the tech for self-replicating factories and human uploads.

If humans have human uploads, they could run the uploads at high speed and use them as super-scientists and engineers to develop the self-replicating factories and AGIs.

Once humans have all three, the human uploads provide the direction for their AGI assistants to solve problems that human neural architectures have difficulty with, and they use their self-replicating factories to make more computers to run themselves and their AGI cousins on. The self-replicating factories are improved by the AGI and uploads, the extra computers made by the factories make the AGI/uploads better, who make the factories better, and so on.

Ultimately this kind of feedback loop can only rationally end in all of the material resources available locally in Sol being converted into machinery and computers.
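
As a rough illustration of the shape of that feedback loop (and nothing more), here is a toy growth model; every coupling constant is an arbitrary assumption, not a prediction:

```python
# Toy model only: factories build compute, compute speeds up the uploads/AGI,
# and the uploads/AGI improve the factories. All rates are made-up numbers.
factories, compute = 1.0, 1.0
for year in range(10):
    compute += 0.5 * factories                 # factories churn out more computers
    research_speed = 1.0 + 0.3 * compute       # more compute -> faster uploads/AGI
    factories *= 1.0 + 0.1 * research_speed    # better designs -> factories grow faster
    print(f"year {year}: factories={factories:.1f}, compute={compute:.1f}")
```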

What about a bird, a plane, a missile, a lightning bolt, a satellite deliberately directed to crash into the cable…

Fundamentally, that cable is one long weak point, in that it only has to be broken in one spot for all of it to be destroyed. Real-life materials tend to have tradeoffs - a wonder fiber with tensile strength better than diamond probably has disadvantages. It might slowly unravel under load, it might be flammable, it might oxidize, and so on.

As I understand the Mars issue, we’re at a point where:

  1. We do have the technology to get there and back, if we’re willing to pay for it.
  2. But we do not have a way to do it with any great margin of safety against things like radiation storms and long-term zero-gravity survival.
  3. Therefore we’ll have a choice of "soon, by the seat of our pants" or "in an unknown time frame, in relative safety and comfort."

I am reasonably confident that we’ll put a person on Mars by 2100. I would be shocked if we did it before 2050, though. We’d need a very compelling reason to hurry up and get there. If a rover found life currently surviving on Mars, I think that would provide the necessary impetus to send some people out there.

It’s possible, as shown by the moon landings, but as a rough guess it is probably 100 times more difficult. It will require a space station to assemble the various parts of the spacecraft, including a radiation shield, and the spacecraft will have to be quite large and heavy to carry such a shield and a place for the astronauts to exercise. It will probably require a robotic craft to land on Mars in advance and be ready to take the astronauts back to Mars orbit. These various craft will need to be pre-positioned (and not at the end of a sentence!). They will be in transit for months, if not years, absent some new kind of drive that accelerates for half the trip and brakes for the other half. All that fuel and spacecraft and food and living quarters will need to be put into space. Expensive, but it can be done.
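
To put a number on "in transit for months": a back-of-the-envelope Hohmann-transfer estimate, assuming idealized circular coplanar orbits, gives roughly eight and a half months one way:

```python
import math

AU = 1.496e11        # meters
MU_SUN = 1.327e20    # m^3/s^2, Sun's gravitational parameter
r_earth, r_mars = 1.0 * AU, 1.524 * AU

# A Hohmann transfer is half an ellipse whose semi-major axis spans both orbits.
a_transfer = (r_earth + r_mars) / 2.0
transit_seconds = math.pi * math.sqrt(a_transfer**3 / MU_SUN)
print(f"one-way transit: {transit_seconds / 86400:.0f} days")  # roughly 260 days
```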

Uh, NASA is building - not just designing, but actually building and testing - a spacecraft designed for long-duration missions in general, and Mars specifically. They’ve also pushed all low-Earth-orbit flights to Soyuz and private companies in order to focus solely on long-range manned flights. Going to Mars is a reality, with an actual mission timeline of 20 years.

There is no support for the notion that just because something is exponentially growing now, or at some future time, it will continue to do so ever after. In fact, all the relevant evidence is against it.

On the contrary. Your logic is backwards, and you are talking pure bull. Just about every one of the speculative premises you assert would have to be true for the singularity (as you seem to conceive it) to occur. Most of your premises arguably have some scientific plausibility, but only one of them would have to be false for the whole scenario to fall apart. Almost certainly, one or more of them is false. (Quite apart from all the contingencies you have ignored, such as that the entire human race might soon be wiped out, or sent back to the stone age, by climate change, or nuclear war, or Ebola, or whatever.)

(I am not sure that the scenario you are envisaging is the same as what most people, including most believers, mean by "the singularity", anyway. But the same point applies to the arguments generally made for the more standard singularity scenario, too, although they perhaps do not rely on quite so many questionable speculations.)

In stark contrast to "the singularity", which is probably not genuinely even possible, let alone likely, getting humans to Mars clearly is possible, and could almost certainly be done in only a few years’ time, if the political will were there to do it.

You do not seem to know the difference between science and science fiction.

What’s science fiction about any of the following?

  1. Purchasing enough ATLUMs to cut apart a preserved brain
  2. Scanning the brain with multi-beam electron microscopes
  3. Image recognition of the relevant proteins
  4. Research to determine the coefficient changes caused by these proteins
  5. Construction of an emulator.
  6. Running an emulation

All of these steps have been done by humans in the past. The only difference here is scale.
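
To make the "only difference is scale" point concrete, here is a skeletal sketch of the pipeline those six steps describe. Every function below is a hypothetical placeholder, not an existing tool or API:

```python
# Skeleton of the scan-and-emulate pipeline from the list above.
# Every function is a named placeholder, not a real tool or API.

def slice_brain(preserved_brain):       # step 1: ATLUM-style ultrathin sectioning
    return [f"slice_{i}" for i in range(3)]

def scan_slice(brain_slice):            # step 2: multi-beam electron microscope imaging
    return {"slice": brain_slice, "images": []}

def segment_proteins(scan):             # step 3: image recognition of relevant proteins
    return {"synapses": [], "proteins": []}

def fit_coefficients(annotations):      # step 4: map protein counts to model coefficients
    return {"weights": [], "thresholds": []}

def run_emulation(parameters):          # steps 5-6: assemble the emulator and run it
    print(f"emulating {len(parameters)} scanned slices")

parameters = [fit_coefficients(segment_proteins(scan_slice(s)))
              for s in slice_brain("preserved brain")]
run_emulation(parameters)
```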

You didn’t read or understand anything I wrote, at all. I specifically mentioned, if you had actually read my posts, very strict limits on the exponential growth. Right now, today, humans (and the whole biosphere) use a tiny fraction of the useful elements in the Earth’s crust. Post-Singularity, all of the useful elements will be in use.

Only a war that kills all humans and/or destroys all knowledge up to this point (meaning that this war must destroy every flash drive holding a copy of most scientific texts, for example) can do more than delay the Singularity.

I named three different mechanisms by which it might occur. All of the mechanisms are possible according to peer-reviewed, rock-solid science. You are arguing from pure ignorance if you argue otherwise.

You must show that 1. all of the mechanisms are too difficult for humans to build, OR 2. humans won’t build them. You cannot show that they are impossible, because you’d be little more than a nutjob if you made that argument, given that life on earth already does everything I mentioned.

We have self-replicating factories right now. We have intelligent computers right now. They just don’t take the form you are used to, and they involve a human element.

Let me restate another thing you missed because you just dismissed everything as "sci-fi":

  1. We could achieve self-replicating factories with nothing more than a bunch of expensive engineering and firmware/software design. We already have fully automated factory processes; self-replication only means you have gone ahead and made every process automatic. I’d like to see how you can argue it’s sci-fi to build more of the same thing humans already have.

  2. We could achieve human "uploads" the hard way: neurons cultured out and grown onto embedded electrode arrays. This would be fantastically expensive and slow, but the only way to say it’s impossible is to establish that mimicking the needed biochemical processes cannot be done, even with organs grown from stem cells on the bench top, as has already been done.

  3. Brains are probably Turing-complete. We could achieve useful AGI with a lot of software similar to existing deep-learning systems. It doesn’t even need to be "conscious" or self-directed; it merely needs to be software that can help us do extremely hard tasks autonomously, like automatic debugging of software and automatic design of products. I’d like to see how you are going to argue that there’s no scientific plausibility to this.

Person on Mars in the next 50 years? If I had to bet, I would say "yes" - a lot can happen in 50 years. It won’t be much earlier than that, however. Mars, unlike the moon, has the problem that it takes months to get there (and months to get back).

Personal flying car? Not happening - we can overcome a lot of things, but gravity is not one of them. I assume a "flying car" is something that flies more than a few feet/meters off the ground (i.e. not a glorified hovercraft), in which case you need to overcome the possibility of running out of fuel, or having a mechanical problem, while airborne ("what do you mean, the alternator cable broke?"). Yes, I am aware that airplanes also have these potential problems, but airplanes also have rather large wings that give them some leeway through gliding - something a personal flying car almost certainly will not have.

Neurons with embedded electrodes? Can you please expand on this? It’s not clear how you will duplicate the state of the brain with this.
Regarding your previous “scientific” article:
That is a highly speculative description of a process that may or may not work. For example, we know that epigenetic changes to a neuron’s DNA are involved in maintaining the strength of synaptic connections, but that article completely skipped over the detail at that level. In 100 years we may realize that we need to duplicate every single atom to arrive at the same conscious state; we just don’t know.

Lots of people look at the moon landing and just assume that because we did that, it’s only a matter of time until we reach Mars. But the situation nowadays is entirely different from the 60s. We didn’t land on the moon because of science, or a feeling of humanity achieving something. We did it because of politics. Without that massive, obsessive, one-track focus on beating the Russians, we’re really not on track.

What’s more - and the space geek inside of me is crying right now - what’s the point? You could say we should go to Mars because of population expansion, and to be able to save humanity even if something happens to Earth, yes. But if that were the priority behind space travel, we’d be putting colonies on the moon, not looking for other places to land. Mars is a dead planet. This isn’t the New World we’re talking about here. In all likelihood, if NASA magically got the funds to actually do this thing, we’d go there, put a footprint in the red dust, collect some rocks, then pack up and go home.

Haven’t we got enough problems on Earth? And I know, I know - everyone’s argument against that is that if we didn’t put the funds into space travel, they’d go to wars or similar pointless things, and that the Starving African Children or whoever wouldn’t be any better off. But maybe we should try to address the fact that 842 million people on this planet, the planet we’re on right now, don’t have enough to eat - and we’re not doing nearly enough about it - rather than pour money into going somewhere just so we can say we did.

On the other hand: Yeah, going to Mars would be awesome. But unless China starts a massive program to go there, and a dashing young President declares we’ll get there first and then gets murdered, I doubt it’s going to happen in the immediate future.

Sure, in that specific case, where nothing else works (and you would otherwise have to duplicate neurons to the atomic level blah blah blah), you would grow sections of neural tissue in a controlled environment in the laboratory, mimicking patterns you found after making connectome maps of human brains. You would “teach” the first “AI” built this way the hard way, with shocks directly to the tissues, but you would also save to disk the exact timing pattern of all inputs to the neural tissue (including self inputs).

This would let you, later, mass produce these meatware “AIs” by recreating this apparatus and playing back the inputs to program the cells to the state of the “educated” AI that has useful skills.

This is just about the least efficient possible way to do it, and I have serious doubts that it will ever be necessary. However, even if you believe that human neurons use straight-up "quantum magic" to operate, this would enable you to create human-like intelligence in a factory-reproducible manner and lead to exponential growth and the Singularity.
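
Here’s a rough sketch of the record-then-replay idea, with a hypothetical electrode-array interface standing in for real hardware:

```python
import time

class TissueInterface:
    """Hypothetical electrode-array driver: log every stimulus during training,
    then replay the same timed pattern into a fresh culture."""
    def __init__(self):
        self.log = []  # (timestamp, electrode, amplitude)

    def stimulate(self, electrode, amplitude):
        self.log.append((time.monotonic(), electrode, amplitude))
        # ...real hardware call would go here...

    def replay(self, target):
        prev = None
        for t, electrode, amplitude in self.log:
            if prev is not None:
                time.sleep(t - prev)  # reproduce the original inter-stimulus interval
            target.stimulate(electrode, amplitude)
            prev = t

trainer = TissueInterface()
trainer.stimulate(electrode=3, amplitude=0.5)   # "teaching" shocks
trainer.stimulate(electrode=7, amplitude=0.2)
trainer.replay(TissueInterface())               # program a second culture the same way
```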

As for epigenetic patterns: stop trolling me, please. There are far more efficient ways to model state changes to a given neuron that do not require simulating the entire protein-synthesis pathway. The easiest way is just statistical: you know that, averaged over a population of synapses, a given coincidence pattern through back-propagation results in a certain amount of gain or loss to a specific synapse. More than likely this very simple computational model is close enough. Modern neuroscience currently suspects that individual neurons are quite noisy and glitchy, so the model simulating them can be a little crude and you’ll still get the same outputs from the resulting brain. (To say it in plain English: if a given synapse receives an input and fires together with the host neuron, you increase the strength of that synapse by an empirically measured coefficient. In math, if the synaptic strength is measured by the state variable S1, you just multiply it by the coefficient C1, or S1 *= C1.)
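
In code, that plain-English rule is only a few lines. The potentiation coefficient below is a placeholder standing in for whatever the in vitro measurements actually give you:

```python
# Simple coincidence-based update for one neuron's synapses.
# The strengths and the coefficient below are placeholder numbers.
POTENTIATION_COEFF = 1.05   # the empirically measured C1 from the text

def update_synapses(strengths, synapse_fired, neuron_fired):
    """If a synapse's input coincided with the host neuron firing, scale it up (S1 *= C1)."""
    if not neuron_fired:
        return strengths
    return [s * POTENTIATION_COEFF if fired else s
            for s, fired in zip(strengths, synapse_fired)]

strengths = [0.20, 0.50, 0.90]
print(update_synapses(strengths, synapse_fired=[True, False, True], neuron_fired=True))
# roughly [0.21, 0.5, 0.945]
```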

Not trolling in the slightest, trying to explain to you how a neuron works.

Epigenetic changes are a key part of how our brains learn and maintain that learning. This mechanism has characteristics/attributes related to how quickly it happens, under what conditions, how quickly it decays over time, etc.

If you try to upload a brain and you haven’t accounted for all of the relevant state and all of the characteristics of change over time in your simulation, then you will not have an uploaded brain; you will have garbage.

Modern neuroscience is learning the exact opposite: a single neuron is a network all by itself, with many different types of computation and signal processing happening at various locations on the neuron.

There are spike trains moving in various directions on that one neuron for various purposes, computing different things, all critical for the correct overall functioning of the neuron.

If you want to understand more about this, here’s a paper that describes some of it (there is much more that has been learned more recently): https://courses.cs.washington.edu/courses/cse528/11sp/london05.pdf

The problem with the "singularity" is that it’s misnamed. It’s not a singularity; it’s a horizon. The Horizon is about 30 years away, and it’s always about 30 years away: technology will advance so rapidly in the next 30 years that a person of today can’t even conceive of what it’ll be like then. Similarly, a person of 1984 couldn’t have conceived of what life is like now, and a person of 1954 couldn’t have conceived of 1984, and so on. We’ll get to what is now the Horizon and find that it all seems perfectly normal by then, but the new Horizon will still be 30 years ahead of us.

EDIT:

Oh, and I don’t know why everyone always talks so much about flying cars. We’ve had those for a century.

One problem with Mars and space travel in general is that the human body does not do well without gravity. The list of things impacted is pretty long and includes brain changes.

That’s not the point. The point is that all that complexity - all those RNA states, and so on - reflects simpler fundamental rules that represent learning. Sure, there may be slightly more going on than just multiplying by a coefficient, but modern knowledge suggests that many of those details are irrelevant. Your paper link changes none of that. Skimming it, I know of simple algorithms that would fully reflect all of these newly discovered rules. No reasonable person would consider modeling at the atomic level as a result of this.

The paper even mentions that you can model a single neuron with a conventional 2-layer neural network and get good results.
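
For concreteness, the kind of model being alluded to is tiny. This is a generic two-layer network of the sort that could be fit to single-neuron input/output data; the weights here are random placeholders, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hundreds of synaptic inputs -> a handful of "dendritic subunit" hidden units -> one output.
n_inputs, n_hidden = 200, 8
W1 = rng.normal(size=(n_hidden, n_inputs))  # placeholder weights; a real model would be
W2 = rng.normal(size=(1, n_hidden))         # fit to recorded input/output data

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron_model(synaptic_input):
    """Two-layer approximation of one neuron: nonlinear subunits, then a somatic nonlinearity."""
    hidden = sigmoid(W1 @ synaptic_input)   # each hidden unit ~ one dendritic branch
    return sigmoid(W2 @ hidden)             # output ~ firing probability in a time bin

print(neuron_model(rng.random(n_inputs)))
```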

If theories about the Singularity are correct, on one side of this horizon humans still live on Earth for the most part, and most of Earth’s real natural resources remain untapped. (Humans have never even tried to mine two-thirds of the Earth’s surface for resources because it’s all covered in seawater, for instance. Humans also don’t routinely mine down to the lava layer because it costs inconvenient amounts of energy and labor to go that deep.)

On the other side, all the easily reached resources have been tapped throughout the solar system. Starships are possible. Technology is now close to an asymptote representing the absolute limits possible within the laws of physics.

That sounds a lot more like a black hole than a horizon to me. Also, the other consequence is that after the Singularity has happened, it should be easier to predict far into the future roughly what’s going to happen.

Yep. The real problem is that there’s no direct evidence that roughly 1/3 g (Mars surface gravity) is enough to prevent those changes. The only way to find out, short of going to Mars, would be to build an orbital centrifuge and have astronauts live in it for months.

That would be the next logical step if NASA were serious about Mars. You would need to build an orbital centrifuge and a long-duration recycling life-support system intended for a Mars journey, and have astronauts live in it and test it over at least the duration of an intended Mars mission.
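
The basic sizing math for such a centrifuge is straightforward. Assuming, purely for illustration, a 50 m rotation radius and a target of Mars surface gravity:

```python
import math

MARS_G = 3.71    # m/s^2, Mars surface gravity
RADIUS = 50.0    # m, assumed rotation radius for this example

# Centripetal acceleration a = omega^2 * r, so omega = sqrt(a / r).
omega = math.sqrt(MARS_G / RADIUS)
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"spin rate: {omega:.3f} rad/s ({rpm:.1f} rpm)")  # about 0.27 rad/s, 2.6 rpm
```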

You’d also want to validate that long-duration life-support system by making a production run of the intended hardware/firmware/computers (and freezing the design!) and setting up a bunch of copies of it on Earth. Some of the copies would be tested with lab animals or artificial mechanisms standing in for the humans normally inside, and some would need to be tested by groups of humans willing to be guinea pigs.

You’d also need a large aeroshell lander design, something big enough to carry human visitors. You’d need to send multiple robotic spacecraft to Mars to test the lander, and also send at least one robotic spacecraft with an ascent module to test the return to Earth.