I'm going to back up a few days here. I've been out-of-pocket, and I want to tie two of your posts together because I think they're related. There are a few unresolved questions I'd like to re-ask for clarification. . . Again, I'm keeping a civil tone for ya.

Quote:
Originally Posted by SamuelA View Post
What'll really bake your noodle is that if our theories are correct, and we have reasons to think they are, then all the world will eventually converge onto these ideas.
I'm still skeptical of the reasons you think these theories are correct, and I'd like more context on the discussion. Can you please offer a link to it? And which participant are you in that discussion--do you post there as "SamuelA"?

Quote:
Originally Posted by SamuelA View Post
Aumann's agreement theorem says that two people acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.
First, rationality is a subjective determination: it depends entirely on the lifetime experience of the observer, on their impression of the other speaker (including the speaker's credibility and the topic), and on a third party for judgement. Second, humans are not Bayesian rationalists, nor are they even good models of them; there is too wide a spectrum of variables, and too wide-floating a range of values within those variables, for humans to be even remotely predictable. For something like human emotion and choice, with all the degrees of freedom involved, one is better off using a Monte Carlo method of analysis to capture those stochastic variables. We do it all the time here for physics modelling.
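Here's a minimal sketch of what I mean, in Python. The options, utility weights, and noise scale are made-up stand-ins, not a real psychological model; the point is the method: sample the stochastic variables many times and look at the distribution of outcomes, rather than assume a single deterministic "rational" answer.

Code:
# Minimal Monte Carlo sketch: when the variables driving a "choice" are
# stochastic and wide-floating, you estimate the distribution of outcomes
# by repeated sampling. The utilities and noise scale below are invented
# stand-ins for mood, history, credibility, etc.
import random
from collections import Counter

OPTIONS = ["agree", "disagree", "abstain"]
BASE_UTILITY = {"agree": 1.0, "disagree": 0.8, "abstain": 0.5}  # assumed values

def one_trial(noise_scale=0.7):
    """One simulated decision: base utility plus a per-trial stochastic term."""
    scores = {opt: u + random.gauss(0.0, noise_scale)
              for opt, u in BASE_UTILITY.items()}
    return max(scores, key=scores.get)

def monte_carlo(n_trials=100_000):
    """Estimate the outcome distribution by repeated sampling."""
    counts = Counter(one_trial() for _ in range(n_trials))
    return {opt: counts[opt] / n_trials for opt in OPTIONS}

if __name__ == "__main__":
    for opt, freq in monte_carlo().items():
        print(f"{opt}: {freq:.3f}")

Run it twice and the frequencies wobble. That wobble, not a single converged posterior, is what you get when the underlying variables float.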

Quote:
Originally Posted by SamuelA View Post
Once every sentient being is an AI or a human converted to a computer and has sufficient processing power, we will all have the same common set of data about the world and the adequate cognitive ability to converge on the same conclusions. In the more immediate future, we're mere years away from limited function data analysis tools that can augment human intelligence and thus produce the correct conclusions given the data.
This is a pretty bold statement, and I'll counter that "Once" or "when" statements are entirely dependent on "if" arguments. I'll get into that below.

Quote:
Originally Posted by SamuelA View Post
Then why claim to ignore me and say you don't care as loudly and repeatedly as possible?
I have made no such claims since post 556. Our exchanges since post 645 have abrogated that entente.

Quote:
Originally Posted by SamuelA View Post
You know I don't claim to know the answer to your question because I don't know the way the future will go. Ultimately all that theorem really means in this context, as wolfpup points out :
Before I get to your invocation of wolfpup, I will reiterate what I mentioned earlier: your "Once" statement above is predicated on "if" it happens. I cannot agree that something "will" happen when we cannot agree on "if" it will. That's why I asked those particular questions about the technology. . . My "bottom line" will address this and the earlier statement/question.

Quote:
Originally Posted by SamuelA View Post
a. Physical reality is a game with fixed rules. Like all games, one and only one optimal strategy exists, given the same end goal.

b. As smarter beings begin to replace humans - whether that be AIs, cyborgs, genetically engineered humans, it doesn't matter - those beings will have the neural ability to follow more optimal strategies. I know what I am doing now is not optimal, but my cave man emotions won't let me do what I know is better. (hence I don't have a 6-pack, 5 girlfriends, and a job as a quant making 500k a year, even though there exists a sequence of actions I could have logically worked out and taken to get there if I were an inhuman, rational agent)

c. Smarter beings will also have vastly more memory capacity and ability to share data with each other digitally.
You had my agreement up until "digitally." But what is your vision for a digital humankind without "cave man emotions"? Isn't a purely digital being a different species, e.g. a Vulcan or the Borg?

Quote:
Originally Posted by SamuelA View Post
Hence, if beings can share data with each other digitally, and analyze it using the most optimal strategy they know about in common, they will reach the same conclusion. In the same way that 2 calculators agree with each other as wolfpup points out.
I think you're assuming the exchanged data will be both credible and applicable in that interchange; I posit that will never be the case. Two individuals--even digital ones--will never share the same perspective, for the elementary reason that they are two distinct beings and cannot occupy the same space at the same time.
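To make that objection concrete, here's a toy sketch in Python. Assume, generously, two idealized agents running the exact same Bayesian update rule from the same prior; the coin-flip setup and the two observation streams are invented for illustration. Because they stand at different vantage points, they see different data, and their conclusions differ:

Code:
# Two agents, identical prior and identical update rule (Beta-Bernoulli
# estimation of a coin's bias), but different observation histories.
# The observation streams are invented for illustration.

def posterior_mean(prior_heads, prior_tails, observations):
    """Beta(a, b) prior updated on 0/1 observations; posterior mean of P(heads)."""
    a = prior_heads + sum(observations)
    b = prior_tails + len(observations) - sum(observations)
    return a / (a + b)

agent_1_obs = [1, 1, 1, 0, 1, 1]  # what agent 1 happened to see
agent_2_obs = [0, 0, 1, 0, 0, 1]  # what agent 2 happened to see

print("Agent 1 believes P(heads) ~", round(posterior_mean(1, 1, agent_1_obs), 3))
print("Agent 2 believes P(heads) ~", round(posterior_mean(1, 1, agent_2_obs), 3))
# Same rule, same prior, different data -> different conclusions. Aumann's
# theorem only forces agreement once the posteriors themselves are common
# knowledge, which is exactly the total exchange being assumed.

Two calculators agree because they are handed the same inputs; two distinct beings never are.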

Quote:
Originally Posted by SamuelA View Post
Part of the reason this idea has impressed me is that religion, politics, personal lifestyle choices - they are all strategies to accomplish goals. Given the same goals and knowledge of the optimal strategy, rational beings wouldn't have 5000 opinions for religion/politics/personal choices. A correct answer (where correct means "most probable strategy to accomplish your goals) exists for each of these "taboo" topics.
I disagree. The accomplishment of goals is based on a discrete individual's ways, means, and ends. No pair of individuals will have the same abilities: perhaps the same goals, but never the exact same ways and means.
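A small sketch of that objection, with invented payoffs and ability sets: both agents below pursue the same end (maximize payoff), but each commands different means, so each arrives at a different "optimal" strategy.

Code:
# "Same goal, different ways and means": both agents want to maximize the
# same payoff, but each can only execute a subset of the actions. The
# payoffs and ability sets are invented for illustration.

PAYOFF = {"negotiate": 5, "build": 8, "compute": 9, "persuade": 6}  # assumed

ABILITIES = {
    "agent_A": {"negotiate", "persuade"},  # a social specialist
    "agent_B": {"build", "compute"},       # a technical specialist
}

def optimal_strategy(agent):
    """Best action available to *this* agent's means, not a universal optimum."""
    return max(ABILITIES[agent], key=PAYOFF.get)

for agent in ABILITIES:
    act = optimal_strategy(agent)
    print(f"{agent}: optimal action = {act} (payoff {PAYOFF[act]})")

One shared end, two different means, two different "correct answers."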

Quote:
Originally Posted by SamuelA View Post
If you encountered another being with a different opinion, you could just plug your serial ports together or whatever and swap memory files. You would literally be able to work out mathematically why that being's opinion is different. Maybe one of you is unaware of the most optimal strategy - you could share it with the other, they could run that strategy on their experiences, determine it has a higher expected value, and switch over.
I strongly disagree. Referring to my earlier comment about digital beings: you cannot have mathematical "humans" without basic emotion--you are talking apples and bowling balls. But that gets to my bottom line:

Bottom Line: You're implying a mechanical, digital-based utopian society that is currently indefensible as a future prospect. You even admit as much with your comment: "I don't claim to know the answer to your question because I don't know the way the future will go."
So what are you positing for discussion?

I offer that "when" humans are "converted to a computer" is completely dependent on the more pertinent question of "if". If you differ, please make your argument.

Tripler
An open discussion, SamuelA.
