It looks to be a site where everyone is SamuelA.
I wish I could go back to believing I knew things about things. Nowadays I just wallow in a bath of my own ignorance, and steak. I like steak, but sometimes I envy the likes of SamuelA.
It would be more of the same. SamuelA making fantastical claims with no (or late) citations, ad nauseam. I don’t know what he expects to gain from it, except that he may have some moderators in his pocket, or be a moderator himself.
There’s a lot of that going on there. Some intelligent discussion, but most of what I’ve seen is speculation on AI.
You bathe in steak?!?? Man, I want to live at your house. Can I come over? I’ll bring A-1 and Worcestershire.
Tripler
I’m totally paleo. Totally.
Steak is more of a metaphor for ethereal meatiness that surrounds me while I wallow in my own ignorance.
A-1 and Worcestershire? You steak at a whole ’nother level, man. Respect.
I read that an hour ago, just paused, opened my mouth and shook my head. My brain contemplated the idea for like 0.25 seconds and then said “No. Just, no. You can’t make me try to imagine that.”
What’ll really bake your noodle is that if our theories are correct, and we have reasons to think they are, then all the world will eventually converge onto these ideas.
Aumann’s agreement theorem says that two people acting rationally (in a certain precise sense) and with common knowledge of each other’s beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.
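A toy sketch can make the theorem’s conditions concrete. The following Python is my own illustration (the coin example and all the numbers are invented, not from Aumann’s paper), and it only shows the weaker “pool all the evidence” case: two agents share a common prior over a coin’s bias, disagree on their private observations, and are forced into exact agreement once all the evidence is shared.

```python
from fractions import Fraction

# Common prior shared by both agents: the coin is either fair
# (heads with probability 1/2) or biased (heads with probability 3/4),
# and each hypothesis starts at probability 1/2.
PRIOR = {Fraction(1, 2): Fraction(1, 2), Fraction(3, 4): Fraction(1, 2)}

def posterior(prior, flips):
    """Bayes-update the prior on a sequence of 'H'/'T' observations."""
    unnorm = {}
    for p_heads, p_hyp in prior.items():
        likelihood = Fraction(1)
        for flip in flips:
            likelihood *= p_heads if flip == "H" else 1 - p_heads
        unnorm[p_heads] = p_hyp * likelihood
    total = sum(unnorm.values())
    return {h: w / total for h, w in unnorm.items()}

# Each agent privately observes different flips, so their posteriors differ...
alice = posterior(PRIOR, ["H", "H", "T"])
bob = posterior(PRIOR, ["H", "T", "T", "H"])
assert alice != bob

# ...but once all the evidence is common knowledge, the common prior forces
# the posteriors to coincide exactly, whatever order the flips arrive in.
pooled = posterior(PRIOR, ["H", "H", "T", "H", "T", "T", "H"])
assert pooled == posterior(PRIOR, ["T", "T", "T", "H", "H", "H", "H"])
```

Note that the actual theorem is stronger than this sketch: the agents need only common knowledge of each other’s posteriors, not the raw flips. The common-prior assumption is doing the heavy lifting either way.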
Once every sentient being is an AI or a human converted to a computer and has sufficient processing power, we will all have the same common set of data about the world and the cognitive ability to converge on the same conclusions. In the more immediate future, we’re mere years away from limited-function data-analysis tools that can augment human intelligence and thus produce the correct conclusions given the data.
I consider it a higher probability that “our” (me and the other members of the site) current analysis is closer to the one true correct analysis than “your” (you mentally challenged individuals in this thread) analysis. It’s just a guess…but a rational one.
When I make steak, sometimes I put Worcestershire on it while it’s cooking to add to the flavor, then dab A-1 sauce on it while eating, because while they both taste just as good, the Worcestershire doesn’t stay on.
Mensa called, they said you’re way too fucking stupid for admission. Sorry to burst your bubble.
“Well this guy says this, so it has to be true and apply to everything”
Ah, the zeal of the new convert. Believing everything said is the truth and wielding it as a cudgel against everything in their world.
He needs to be on some kind of watch list, against the event that he decides to study politics or religion. He’s like one ideological molecule away from being a militant fanatic.
I know there are many fans of SamuelA in this thread (for a precise and ironic sense of the word “fans” :D) so herewith an appreciation and tribute to the brilliance of SamuelA.
My first question would be what the fuck on God’s green earth this has to do with anything that was being discussed, but never mind. SamuelA is on a roll, so let’s take a look, because it’s always fun.
We know from previous experience that SamuelA likes to plagiarize things out of Wikipedia and pass them off as his own, as he did with the definition of “computation”, and, significantly, to do it without understanding them. He no doubt got the idea for this off-the-wall irrelevant grandstanding out of “LessWrong” because this is the kind of stuff they bloviate about, and thought he would pass off a cribbed Wikipedia entry here as a shining example of … something. But what? His ability to cut and paste? His shameless plagiarizing?
Moreover, the substance of the “theorem” is pretty unenlightening because if one accepts the artificially constrained technical premises as precisely defined, then the theorem is self-evidently and trivially true. As the author himself stated, “We publish this paper with some diffidence, since once one has the appropriate framework, it is mathematically trivial.” Or as Rational Wiki astutely opines, “Aumann’s agreement theorem is the result of Nobel laureate Robert Aumann’s groundbreaking 1976 discovery that a sufficiently respected game theorist can get anything into a peer-reviewed journal.”
The problem, of course, is the presumption that everyone has access to the same perfect information and possesses the same perfect rationality, and implicitly has the same history and the same goals and values. It necessarily presumes that we have ceased to have any identity either as individuals or collectively as groups and have become identical machines, a perfectly plausible scenario in SamuelA’s demented imaginary world, ignoring the fact that this is tantamount to saying we will have ceased to exist.
As Rational Wiki points out here, at its core this is nothing more than the truism that two calculators will give you the same answer to the same input. This is something that SamuelA can appreciate because I understand that he took (and passed!) a signals course, plus he knows how the brain works (it executes branch instructions!). In the real world – the one that SamuelA has great difficulties with – people have, and will always have, identities, goals, values, and self-interests. This is why we have politics and why different rational people come to entirely different conclusions based on exactly the same facts.
All this brilliance was apparently dredged out of the anal sphincter of LessWrong and seems to have impressed the beejesus out of SamuelA. Not surprisingly, Rational Wiki has a few choice words about them, too: “… very focused on an evil future artificial intelligence taking over the world. Some compare it to a circle-jerk of wordiness” and “… the community’s focused demographic and narrow interests have also produced an insular culture that is heavy with its own peculiar jargon and established ideas that often conflict with science and reality.” If they also celebrate cut-n-paste plagiarists and pompous bloviating blowhards, it will be the perfect place for our SamuelA and we wish him well, if only he would kindly bugger off and go there and become an imaginary hero in his own mind. We will miss the humor, but c’est la vie.
SamuelA, you embarrass yourself every time you post. Even your fantasies are epic fails.
This reply is going to quote SamuelA out of order, because I actually do want to give him a chance to defend his postulates in an academic conversation. I do the following with a calm voice and from a rational mindset; but please consider me a skeptic that could be “sold” on this idea. That being said. . .
Some initial questions:
- Can you define how you plan to convert every sentient being into ‘an AI’ or a computer? What type of technology will this entail? Hardware? What software or ‘mental reconditioning’ will be required to connect everyone to this system?
- Will people be allowed to choose–say ‘opt in’ to this construct, or will this be a requirement? At what point in their lives (say . . . a particular age) will they be converted? Will humans have the choice to ‘opt out’ and reverse this conversion?
- Where will this common set of data be hosted? Who will collect and maintain this data? What safeguards will be in place to prevent corruption of this data and this collective cognitive ability from, say corporate interests or foreign agents?
- Will those connected to this system still retain their individuality? What if they choose to pursue other interests or other problems?
- What resources will this system require? Will this system prolong life? What resources will be required to maintain the ‘meatspace’ element of the system (ties into #1 with the hardware element. . .)? Who/what organization will manage this system?
Can you link to this discussion on your other forum?
---- Now a break to address the earlier comments.
This is not helpful on “selling” your idea.
Is this stated in that discussion, or is this a personal opinion?
Can you link to Aumann’s theorem so I can read more about it?
Tripler
SamuelA, the floor is yours.
And every knee shall bow, every tongue confess, that the singularity is lord!
Here’s the original paper: http://www.ma.huji.ac.il/~raumann/pdf/Agreeing%20to%20Disagree.pdf
It’s pretty interesting, actually. I’m not sure that in practice any two people have the same priors, and note that the paper does not estimate how long it will take for posteriors to converge (there may be later papers that deal with both these issues).
I believe that topic is covered in the documentary Requiem For a Dream.
I’m throwing SamuelA an olive branch here. I see he’s been online, but I’d hope to get some more details from him.
Tripler
No tricks.
ASS TA ASS!
and some lowercase text
Then why claim to ignore me and say you don’t care as loudly and repeatedly as possible?
You know I don’t claim to know the answer to your question because I don’t know the way the future will go. Ultimately, all that theorem really means in this context, as wolfpup points out:
a. Physical reality is a game with fixed rules. Like all games, one and only one optimal strategy exists, given the same end goal.
b. As smarter beings begin to replace humans - whether that be AIs, cyborgs, genetically engineered humans, it doesn’t matter - those beings will have the neural ability to *follow* more optimal strategies. I know what I am doing now is not optimal, but my cave man emotions won’t let me do what I know is better. (hence I don’t have a 6-pack, 5 girlfriends, and a job as a quant making 500k a year, even though there exists a sequence of actions I could have logically worked out and taken to get there if I were an inhuman, rational agent)
c. Smarter beings will also have vastly more memory capacity and ability to share data with each other digitally.
Hence, if beings can share data with each other digitally, and analyze it using the most optimal strategy they know about in common, they will reach the same conclusion. In the same way that 2 calculators agree with each other as wolfpup points out.
Part of the reason this idea has impressed me is that religion, politics, personal lifestyle choices - they are all strategies to accomplish goals. Given the same goals and knowledge of the optimal strategy, rational beings wouldn’t have 5000 opinions for religion/politics/personal choices. A correct answer (where “correct” means “most probable strategy to accomplish your goals”) exists for each of these “taboo” topics.
If you encountered another being with a different opinion, you could just plug your serial ports together or whatever and swap memory files. You would literally be able to work out mathematically why that being’s opinion is different. Maybe one of you is unaware of the most optimal strategy - you could share it with the other, they could run that strategy on their experiences, determine it has a higher expected value, and switch over.
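That “swap memory files, re-run the strategy, keep the winner” step can be sketched as a toy in Python. Everything here is invented for illustration (the strategy names, the payoffs, and the assumption that experiences can be replayed as a simple log):

```python
import random

random.seed(42)  # reproducible toy data

# Two made-up "strategies": functions mapping a situation to a payoff.
def double_down(x):
    return 2 * x

def flat_bonus(x):
    return x + 5

def expected_value(strategy, experiences):
    """Average payoff of replaying a strategy over a log of experiences."""
    return sum(strategy(x) for x in experiences) / len(experiences)

# Each agent carries its own experience log; after the swap, both hold the pool.
agent_a_log = [random.uniform(0, 20) for _ in range(1000)]
agent_b_log = [random.uniform(0, 20) for _ in range(1000)]
pooled_log = agent_a_log + agent_b_log

# Both agents rank the same candidate strategies on the same pooled data with
# the same rule, so they necessarily pick the same winner.
winner = max([double_down, flat_bonus],
             key=lambda s: expected_value(s, pooled_log))
```

Note that the convergence here is baked in by construction: same candidate strategies, same pooled data, same ranking rule. That is exactly the “two calculators agree” truism, not a prediction about real disagreements.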
The exception that proves the rule: Crazyhorse’s troll theorem says that your posterior will always have the higher probability of your head being stuck in it.
Why is the same end goal a given?
It appears that it’s not only on the subject of technology that SamuelA can provide novel and illuminating insights. Here he holds forth in an excellent new treatise on the subjects of immigration, race, and Making America Great Again™. Here we have one of the great minds of the day addressing one of the truly challenging issues of our times. You can read it starting with post #73 on page 2 of this thread. I cannot hope that my humble summary will do it justice, but I’ve tried to convey the essence of it. For the sake of brevity, I’ve cut to the chase of what he’s really saying and not bothered with the dog whistles.
America became successful because it was settled by the superior Aryan race, white northern Europeans. There wasn’t much in the way of formal immigration criteria prior to the 1920s, but that was OK because only decent white Europeans could afford to come here, so it all worked out. And after that it was biased by race and ethnicity with favoritism toward exactly the right kind of white northern Europeans, which was great! Oh, sure, in the 19th century a bunch of Chinese started coming in, but the 19th century was great because things worked pretty much according to the rules of SamuelA: laws were soon passed putting a stop to that sort of thing, see? So that worked out OK, too, when we kicked their little yellow oriental butts and made it clear they weren’t welcome.
Now inferior races from shithole countries want to come in – most of them even worse than Chinese – and this must be stopped. Don’t bother about their qualifications, just look at the shitholes they live in! It’s not that SamuelA is racist or anything – heaven forfend! – this is just Aumann’s theorem of rationality as expressed by SamuelA’s posterior. We know he’s not racist because he tells us many times, just like he tells us that he’s absolutely right about computation and how the brain works. It’s so obvious that he’s right about all these things that you’d have to be an idiot not to see it. We assume that he totally loves inferior races from shithole countries, as long as they stay in their shithole countries and don’t mess up the ones that decent people live in. Did I mention that SamuelA is not a racist? Yes, I did, but you can never say it too often. He’s so incredibly non-racist that he probably even starts a lot of dinner-table conversations with comments like, “I’m not a racist, but …”.