I don’t think your metric is scientifically valid. Counting “ifs” is a lazy way to evaluate the feasibility of something. How about you come back with an actual argument:
“I don’t think brain emulators are possible because…‘insert reason based on actual science and engineering principles’”.
Let me give you an example. “I don’t think time travel is possible because the math says you need negative energy to achieve it, and no one has ever in any experiment demonstrated that negative energy exists. Also, if it were possible, paradoxes would result, and you would expect that intelligent beings would continually time travel in an escalating time-war until a single being goes back to immediately after the big bang and takes control of the universe forever”
I know what **Exapno Mapcase** means, though: we can never predict the future.
At the moment a lot of transhumanists want to believe in neurotechnological uploading, and they think it is inevitable. I think it is possible, but one of many other outcomes will probably happen instead.
For instance, I’m fairly sure that some sort of sentient artificial general intelligence will be developed long before uploading is possible, and these sentient AGIs will probably be very different from humans. It is possible that they will help humans achieve uploading, but I don’t know of any reason why they should, and lots of reasons why they shouldn’t. Long story short: the future becomes completely unpredictable when anything like an AGI looms on the horizon, so don’t make any long-term plans to become immortal.
Saying “it’s possible to become pseudo immortal” is an immensely different statement from saying “it’s going to happen in a way that benefits me personally”.
You would have to be pretty uneducated to conclude it’s not possible to either hack biology so human bodies live indefinitely or copy the important information into an artificial system. And if you were moderately educated you might say, “Well, yeah, but it’s really hard to do, and at the current rate of progress we will never figure it out, and we aren’t going to find a better way* to conduct scientific research before a billion years passes.”
Every one of the AGI scenarios ends with entities that could make some or all humans immortal, or copy them. That doesn’t mean they will, or that any humans will be alive to benefit.
*By “better” I mean thousands to millions of times better: essentially either vastly advanced computational tools or full AGIs that can correctly take into account all experimental data and design optimal experiments to expand our knowledge where it will do the most good. And the math checks out for such tools today; they are much easier to build than a full AGI…
Dude, it’s an utter asspull and you should have been embarrassed to make the statement.
Plan for getting groceries: “if I’m alive Monday, and if the grocery store still exists, and if I still have a job, and if the bank still has the same balance it does today, and if my car still runs, I will go to the store and get groceries”
Exapno Mapcase: “Any plan that has 5+ ifs in it won’t happen for a billion years”
This, like much of the thread, was posted 3/3/03. 2017 must’ve seemed like the far distant future. Yet here we are, using keyboards and the internet. Sigh… sorry, ianzin, not as much progress as you’d hoped for.
I feel like everyone in this thread is part of a time travel experiment, communicating with people where there’s a fourteen-year gap.
Right back atcha, dude. I was obviously counting only technologies that don’t exist at all as “ifs”. Language comprehension is a technology, and possibly the most important one.
You’re dividing by zero and expecting to come out with a result that isn’t by definition meaningless. As we’ve told you in a million similar threads, you can’t simply announce that stuff will become real in the future because… FUTURE!
No, but a person who accepts basic principles, such as:
a. The laws of physics are the same and apply to everything equally on earth
b. The human brain is a physical system
c. Intelligence, whether human or otherwise, is a series of algorithms that can be discovered
d. As humans develop more intelligent computer systems, evidence of which is in the news literally daily, those systems will most likely reduce the workload of developing the next set of systems
e. I could extend the “most likely” to explaining Bayesian-updated neural networks, to point out the rapid development of new algorithms in just the last 12 months that have made previously difficult AI problems trivial (like bipedal walking), but I doubt you have the background to understand any of it. (A toy sketch of the Bayesian-update step follows this list.)
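Since item e leans on Bayesian updating, here is a minimal sketch of the core idea, with the prior and the data made up purely for illustration. This is the bare update rule, not the neural-network version:

```python
# A toy Bayesian update, stripped of any neural network: a Beta prior
# over an unknown success probability, updated with observed outcomes.
# The prior and the counts are invented for illustration.

def update_beta(alpha, beta, successes, failures):
    """Conjugate update of a Beta(alpha, beta) prior with binomial data."""
    return alpha + successes, beta + failures

# Start with a uniform prior, Beta(1, 1): no opinion either way.
alpha, beta = 1.0, 1.0

# Observe 8 successes and 2 failures.
alpha, beta = update_beta(alpha, beta, successes=8, failures=2)

# Posterior mean estimate of the success probability.
print(alpha / (alpha + beta))  # 0.75 -- pulled by the data away from the 0.5 prior mean
```

The neural-network versions replace the single probability with millions of parameters, but the principle of reweighting beliefs as evidence comes in is the same.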
The predicted Singularity is based upon a set of principles as sound as those used for nuclear fission. If you skim the actual history of the Manhattan Project, it was not certain to work, and getting the right geometry for the first nuclear chain reaction wasn’t easy or cheap. But it was probably going to work, because the math and prior observations all checked out.
What you don’t understand is that there’s a world of difference between predicting a future technology that you **know** will work unless human understanding of reality is wrong, and a future technology for which you know of no mechanism by which it could work.
AI is in the former category. Intelligence through mechanisms made of matter is real. Time travel isn’t.
Yes. Also, it implies that Exapno believes that because the majority of Straight Dope posters in a particular thread share a common opinion, his argument must therefore be correct. Who is he to “tell me” anything? Make an actual argument based on known facts and real principles; don’t make a “well, a mob of internet randoms agrees, thus it must be true” argument.
Weird. I thought that the purpose of GQ includes inducing people to learn from their mistakes.
You do have the advantage that nothing you post can be proved wrong, because it will all happen in the far future. However, saying that we don’t know of any mechanism which would make it work is not a point for your argument, but a point against it. A point which is well worth making whenever the subject warrants.
But we do know of a mechanism that would make it work.
Why would it mean that? I mean, I can see how time travel to the past would imply that, but what’s the big deal about travel to the future? All that travel to the future proves is that the future will eventually be written.
Fiction about traveling to or revealing the future all dies on the same point: every action by everyone on earth (not to mention inanimate forces like the weather) must proceed in a set way to allow that future to happen. If, as in the TV version of FlashForward, someone successfully defies that future, we are in the same logical dilemma as with past time travel: either time is set and all supposed “change” is baked in, or else an alternate world has been produced.
That assumes, as most time travel stories do, that a return to the present with knowledge of the trip occurs, or that the present catches up with the “future”. Some have imagined that famous disappearances (Judge Crater was a favorite in the old days) happened because the missing were taken to the future. You can’t disprove that unless we find them sometime in our future, but all such speculations should remain in fiction and not in GQ.
Item c is the faulty link in your argument. Things reducible to a statement about algorithms are just a tiny part of mathematics. So why expect all of reality to be reducible to an algorithm?
That’s not what statement c says. In more formal terms, it says that some function f(x) exists that takes as input <local environment state> and outputs <course of action to maximize expected return>. That’s what intelligence fundamentally is: every single action taken by any creature on earth, from bacteria on up, that uses a control system is doing some variant of this. At our level we are finding more optimal solutions, given more sophisticated brains and much more data, both present and past, to work with.
And statement c says we can discover such functions. If you have converted your local environment state to actual numbers in a computer, and have a reliable mechanism for scoring the consequences*, you can objectively measure the quality of a candidate algorithm.
This is how we can make very rapid advances in AI research (and we are): it’s not a matter of opinion whether a candidate algorithm is better than a previous one. It’s measurable, and we are also creating meta-algorithms that can programmatically generate other algorithms, so we can search the space of possibilities for optimal functions (see the sketch below the footnote). I can link a couple of recent DeepMind papers on this if you are interested.
*I can talk more about scoring if you want. It’s not an unsolvable problem either; current limitations with scoring are why we can’t yet build algorithms that operate in the human domain and, say, speak in a way intended to make friends. But there is a series of steps that can get there, and I can explain why.
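To make the measurement point concrete, here’s a toy sketch. Everything in it (the one-dimensional environment, the reward rule, the hand-written candidate policies) is invented for illustration; real work, including the DeepMind papers I mentioned, uses far richer environments and learned candidate generators. The point is only that once consequences are scored, comparing algorithms is measurement, not opinion:

```python
# A minimal, invented sketch of "scoring candidate algorithms objectively."
# The 1-D environment, the reward rule, and the hand-written policies are
# all assumptions made up for illustration, not anyone's real benchmark.

def run_episode(policy, steps=20):
    """Walk a 1-D line toward a goal at position 10; return total reward."""
    position, total_reward = 0, 0.0
    for _ in range(steps):
        action = policy(position)             # policy: state -> action in {-1, 0, +1}
        position += action
        total_reward -= abs(10 - position)    # penalty grows with distance from goal
    return total_reward

# Candidate "algorithms": hand-written here, but nothing stops a
# meta-algorithm from generating candidates like these programmatically.
candidates = {
    "do_nothing":   lambda pos: 0,
    "always_right": lambda pos: 1,
    "seek_goal":    lambda pos: 1 if pos < 10 else (-1 if pos > 10 else 0),
}

# Objective comparison: same environment, same scoring rule, no opinions.
scores = {name: run_episode(p) for name, p in candidates.items()}
print(scores)                      # seek_goal scores best on measured return
print(max(scores, key=scores.get))
```

Swap in a better scoring rule or a bigger candidate pool and the comparison stays just as objective; that’s the property that lets the search itself be automated.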