Can there be machine superintelligence without it being life-altering?

Since the space elevator is developed enough that you can envision the engineering problems that might occur when building one, it seems pretty obvious that we’re closer to a space elevator than to a strong AI. I can’t come up with an engineering limitation for strong AI, because the field doesn’t have anything but the most abstract of theories to consider.

NASA considers a space elevator to be TRL 2 (technology concept and/or application formulated). Pretty speculative, but they think they might have begun to figure it out. Harold G. White seems to think that his warp drive is at TRL 2. I think that is a little premature, but they’ve at least laid out the theoretical framework to the point where they are calculating the energies that might be required, and are experimenting with methods of warping space. I don’t see how a strong AI can be considered anything but TRL 1 (basic principles observed and reported), and I think even that might be generous. Outside of neural nets, which are still a toy, we don’t seem to have a grasp on the basic principles. YMMV, but my position is not one from ignorance of the state of the art.

As to building a machine that would build the AI for us, as eschereal seems to suggest: I can’t imagine a more sci-fi concept in this thread. We have built and used many machines that we understand poorly; I don’t think we understood tires really well until the 1960s. But building a machine that builds a machine we don’t understand seems very far removed from that activity. It appears to require that we somehow build, by accident, a strong AI superior to our own intelligence.

How does locally vs. globally figure into it? I thought that special relativity insists that if anything, by any method, makes a round trip faster than a beam of light can, then there exists a reference frame in which that trip is travel backwards in time.

Well, I’m better equipped to debate AI than Relativity, but here’s a wiki cite, a youtube link and a bunch of half-assed speculation.

In an Alcubierre drive, the space around the object is warped, and particles passing through it are rotated in space. I imagine that they still come out of it at the expected time. I can see it being possible that the object at the center isn’t viewable from most directions, as space-time is being warped around it.
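For reference (pulled from the Wikipedia article, since I can’t do the math myself), the metric Alcubierre wrote down looks like this:

```latex
% Alcubierre metric, in units where c = 1:
ds^2 = -dt^2 + \bigl(dx - v_s(t)\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
% v_s(t): velocity of the bubble's centre
% r_s:    distance from the bubble's centre
% f(r_s): shaping function, ~1 inside the bubble, ~0 far outside
```

Inside the bubble f is 1 and the ship sits in locally flat space; far away f is 0 and space-time is undisturbed, which is where my hand-waving about particles passing through the warped shell comes from.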

If I’m right (and I can’t do the math to prove that I might be), if all you see are elemental particles behaving oddly, would that violate causality at the levels they operate at?

The only hard physics I can find related to the idea is concerned with the energies necessary, whether it is survivable for the occupants, and whether it will destroy the destination upon arrival. It’s possible that the people qualified to answer your question have either overlooked it or regarded it as someone else’s problem. The only reference to the considerations of time travel I can find is the chronology protection conjecture, which states that “if a method to travel faster than light exists, and one tries to use it to build a time machine, something will go wrong: the energy accumulated will explode, or it will create a black hole.”

Here is a presentation by Harold White on his research. It provides a bit more perspective than the wikipedia article.

Yeah, that’s a really long version of “I don’t know”.

All the engineering problems with a space elevator point towards its impossibility. No space elevator exists, and none could be built using the designs currently imagined.

7 billion human brains exist, so we know they are physically possible.

Indeed, but there is no possible physical barrier to creating strong AI, because strong natural intelligence exists.

The energies are calculable, but not attainable. A big difference here.

There is no evidence of any process or phenomenon that could transfer matter or usable information faster than light, anywhere in the universe. This lack of evidence can be contrasted against the billions of existing minds on our world alone.

You are correct: even methods of FTL that do not break the speed of light locally can be used to reverse causality, so they are almost certainly excluded by the laws of the universe.
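For anyone who wants the textbook version of that argument spelled out (this is the standard special-relativity calculation, nothing specific to warp drives):

```latex
% A signal covers distance \Delta x in time \Delta t, at speed u = \Delta x / \Delta t.
% In a frame moving at velocity v, the Lorentz transformation gives:
\Delta t' = \gamma \left( \Delta t - \frac{v \, \Delta x}{c^2} \right)
          = \gamma \, \Delta t \left( 1 - \frac{u v}{c^2} \right)
% If u > c, then any observer with c^2/u < v < c measures \Delta t' < 0:
% in that frame the signal arrives before it was sent. Chain two such
% legs together and you get a round trip that ends before it began.
```

It doesn’t matter how the signal covers the distance, whether it tunnels, warps, or takes a wormhole; only the endpoints matter.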

NP-hard problems can take exponential time to reach provably optimal solutions in the worst case. Practically speaking, there are lots of heuristics that can usually achieve optimal solutions in polynomial time. When I was in grad school we showed this for a scheduling problem which had gotten lots of research done on it, and we killed the whole area. Very satisfying.
So whatever AI needs to do, solving NP-hard problems is not it.
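The poster doesn’t name the scheduling problem, so here’s a sketch of the same flavor using a classic example: LPT (longest processing time first) for the NP-hard problem of scheduling jobs on identical machines to minimize makespan. It runs in polynomial time with a provable worst-case guarantee:

```python
import heapq

def lpt_schedule(jobs, m):
    """Assign jobs (processing times) to m identical machines using the
    LPT heuristic: sort jobs longest-first, always give the next job to
    the least-loaded machine. Polynomial time for an NP-hard problem,
    with a 4/3 - 1/(3m) worst-case approximation guarantee."""
    machines = [(0, i) for i in range(m)]      # (current load, machine id)
    heapq.heapify(machines)
    assignment = [[] for _ in range(m)]
    for job in sorted(jobs, reverse=True):     # longest jobs first
        load, i = heapq.heappop(machines)
        assignment[i].append(job)
        heapq.heappush(machines, (load + job, i))
    return assignment, max(load for load, _ in machines)

# A worst-case instance for m = 3: LPT gives makespan 11, while the
# optimum is 9 (5+4, 5+4, 3+3+3). The ratio 11/9 is exactly the
# 4/3 - 1/(3m) bound.
print(lpt_schedule([5, 5, 4, 4, 3, 3, 3], 3))
```

On typical instances, as opposed to adversarial ones like the above, heuristics of this sort land on the actual optimum far more often than the worst-case bound suggests, which is the point.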

I used to write machine language code (not assembly, literally hex coding) for entertainment, and I was pretty OK at it. One time, I wrote something in Pascal, and when I looked at what it built, the coding was horrendously crufty. Nowadays, compilers have gotten very sophisticated, optimizing the object code in ways that I would not have thought of. This is a non-trivial improvement compared to how things used to be, and it is a step along the road to greater abstraction on the part of the programmer. If we can proceed apace, without IP fights bogging down progress, machines are really not that far from building the program you want from a natural language request. This is how programming works, constructing complex designs from fairly simple building blocks, and there is no reason to expect we cannot construct a design that can put the pieces together in ways that we had not thought of, using fairly succinct directions. Hardly “sci-fi”, the path should be obvious to anyone who understands software design.
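To make the “compilers optimize in ways I would not have thought of” point concrete at toy scale, here is a sketch (my own illustration, nothing from eschereal’s post) of constant folding, one of the simplest optimizations a modern compiler applies automatically, done on a Python AST:

```python
import ast
import operator

# The handful of arithmetic ops this toy folder understands.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class ConstantFolder(ast.NodeTransformer):
    """Bottom-up constant folding: replace any BinOp whose operands are
    both constants with the computed constant."""
    def visit_BinOp(self, node):
        self.generic_visit(node)   # fold the children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("x = 2 * 3 + 4")
tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(tree))   # -> x = 10  (Python 3.9+ for ast.unparse)
```

A hand hex-coder would just write the 10; the interesting part is that the transformation is itself a simple building block, and real compilers stack hundreds of these on top of each other.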

But there are absolutely no artificial ones, and no clear theories about how to make one. We’re still at the stage of figuring out what the problem is with strong AI.

Really, this keeps being repeated as if it has some bearing on our ability to generate thought through math, which is currently our only tool for addressing the problem. It doesn’t.

They’ve calculated the necessary exotic matter down to the mass of a Voyager-class spacecraft. They’re currently trying to figure out how to warp space with normal matter/energy, and have some results that look like they may be on to a method of doing it.

The researchers working on the drive do say that it does not violate our understanding of causality.

You seem to be confusing a program doing something unexpected within the rules we provided it with a program doing something we don’t understand and can’t really comprehend, yet somehow are still able to use. I understand software well enough (writing/debugging it is part of my job too) to know that we haven’t gotten close to the latter.

That’s not true; a perfect copy of a human mind in software would be very useful indeed.
Technologies like fMRI have been overhyped somewhat; we actually get a very crude picture of brain activity from them. Being able to dump out everything the brain is doing would give us more data to chew on in seconds than everything mankind has gathered up to this point.

Furthermore, it’s unlikely there will be a hard line between being able to make a copy of the human brain and being able to improve it. If nothing else, we could use faster connections or scale the whole thing up.

You could say what you meant was that copying a brain exactly as it is, in organic form, with little knowledge even of how we formed it, would not be useful. That’s true. But that doesn’t mean we therefore want to create a different intelligence.

Quite correct; and that understanding is that any method of travelling faster than light, even warp drive or wormholes that do not break the speed of light locally, could be used to violate causality.

I suppose it is just possible that White’s experiments, or somebody else’s, might give valuable insights into the warped nature of space, but wherever and whenever they result in causality violations they are likely to fail. As far as I can see that almost certainly rules out FTL.

On the other hand, to get back on topic, we know for an absolute certainty that minds do not break the laws of physics, therefore artificial ones are possible. It may be the case that none of our current methods will achieve artificial minds, but (given enough time and sufficient opportunities for research) one day we will get there.

Don’t think for a minute that I expect them in the next twenty years, or that Ray Kurzweil will ever see one; he is just as over-optimistic a dreamer as Sonny White.

OK, I have a better answer than “I don’t know” or “smart people disagree” for the question of causality/time travel in relation to an Alcubierre drive. Since you carry a bubble of space-time with you that remains in sync with your current location, it appears to be a special frame. It appears that objects from other frames of reference cannot interact with yours until they enter your reference frame. Perhaps the warp bubbles in the linked scenario rotate around each other in whatever geometry they travel around in until one drops out and assumes the local space-time perspective. At this point, I really wonder what might happen to anything that dropped from one space-time reality to another at the end of its trip in an Alcubierre drive (and Alcubierre thinks you might not be able to turn one off at all). Now, special frames are supposed to be prohibited by relativity, but the drive arose as a novel solution to relativity’s equations. Very odd; I should probably be less surprised that it is.

Yeah, no argument there. Both are still fantasy by any measure using technology we can currently envision, which is why I used the analogy. Thank you for questioning it. I was kind of irritated that it’d be picked out, but by having to back it up, I’m much smarter than I was when I started blabbering in this thread. :slight_smile:

And for a finer-grained version of my earlier answer to the OP: we have no way of making an accurate prediction. For all we know, we might have to send our strong AI to school for 18 years before we can get anything useful out of it, and it still might eat its lunch on the way there. OTOH, it could be created after lunch one day in a very clever man’s garage, and take over NORAD by dinner. One is more likely than the other to someone who has a formal definition of intelligence (and I wouldn’t like to guess which one fits; both are kind of unappealing), but either could happen under our current model of the idea.