Given his favorite topic of blather, may I suggest he be called a rAcIst?
Btw, that’s a double ignore of Wolfpup.
Welcome to the club. May our man bring you your smoking jacket? How do you prefer your scotch?
Damnit. What do I have to do to get on the list? I’ve been refuted with Wikipedia and everything!
I’m going to back up here a few days. I wanted to tie both posts together because I think they are related, and I’ve been out-of-pocket. But, there are a few unresolved questions I’d like to re-ask for clarification. . . Again, I’m in a civil tone for ya.
I’m still skeptical about the reasons you think these theories are correct, and I’d like more context on the discussion. Can you please offer a link to that discussion? Which participant are you–are you, “SamuelA,” in that discussion?
First, rationality is a subjective determination: it depends entirely on the lifetime experience of the observer, on their impression of the other speaker (including the speaker’s credibility and the topic), and on a third party for judgement. Second, humans are not models for Bayesian rationalists, nor are they Bayesian rationalists; there is too wide a spectrum of variables, and too wide-floating a range of values within those variables, for them to be even remotely predictable. For something like human emotion/choice, and the degrees of freedom involved, one is best off using a Monte Carlo method of analysis to capture such stochastic variables and degrees of freedom. We do it all the time here for physics modelling.
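For illustration only, here is a minimal Monte Carlo sketch (every variable, distribution, and weight in it is made up purely for the example, not a real model of anyone): you sample the stochastic inputs many times and look at the spread of outcomes, rather than pretending a single prediction exists.

```
import random

def one_trial():
    # Hypothetical stochastic inputs to some "human decision" model.
    mood        = random.gauss(0.0, 1.0)
    credibility = random.uniform(0.0, 1.0)
    topic_bias  = random.choice([-1.0, 0.0, 1.0])
    return mood + 2.0 * credibility + topic_bias   # toy outcome score

trials = [one_trial() for _ in range(100_000)]
mean = sum(trials) / len(trials)
spread = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
print(f"mean outcome ~ {mean:.2f}, std dev ~ {spread:.2f}")
```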
This is a pretty bold statement, and I’ll counter that “Once” or “when” statements are entirely dependent on “if” arguments. I’ll get into that below.
I have made no such claims since post 556. Our exchanges since post 645 have abrogated that entente.
Before I get to your assertion about wolfpup, I will reiterate what I mentioned earlier. Your “Once” statement above is predicated on “If” it happens. I cannot agree with you that something “will” happen, when we cannot agree “if” it will. That’s why I asked those particular questions about the technology. . . My “bottom line” will address this and the earlier statement/question.
You had me agree, up until “digitally.” But what is your vision for digital humankind without “cave man emotions”? Isn’t a purely digital being a different species? E.g. Vulcan, Borg?
I think you’re assuming that the digital will be both credible and applicable in that interchange. I posit that will never be the case. Two individuals–even digital ones–will never share the same perspective, based, at the most elementary level, on the fact that they are two distinct beings and cannot occupy the same space at the same time.
I disagree. The accomplishment of goals is based on a discrete individual’s ways, means, and ends. No pair of individuals will have the same abilities. Perhaps the same goals, but never the exact same ways and means.
I strongly disagree. Referring to my earlier comment about digital beings, you cannot have mathematical ‘humans’ without a basic emotion–you are speaking in terms of apples and bowling balls. But that gets to my bottom line:
Bottom Line: You’re implying a mechanical, digital-based utopian society that is currently indefensible as a future prospect. You even admit this is indefensible with your comment that: “I don’t claim to know the answer to your question because I don’t know the way the future will go.”
So what are you positing for discussion?
I offer that “when” humans are “converted to a computer” is completely dependent on the more pertinent question of “if”. If you differ, please make your argument.
Tripler
An open discussion, SamuelA.
You are too kind, sir. And far too modest. May I remind everyone that you are yourself the recipient of the coveted triple ignore, a stature heretofore unachieved by anyone else, and one which I can only look upon with all appropriate awe.
While I’m under no illusions that I can achieve a triple ignore myself, hope springs eternal, and there are so many opportunities that I can’t help but make another effort, to wit:
Here we are informed that the Asian has been genetically evolved to conscientiously do his homework, get top grades, and be a top-notch contributor to the white Aryan culture. You can tell this by their slanty eyes, which genetically came about from cramming all night by candlelight in order to get an “A” on the next day’s test. The logic is of course impeccable, as befits SamuelA’s giant throbbing brain analyzing all data at superhuman speeds – and certainly not racist in any way whatsoever – but I still want to know SamuelA’s genetic theories of other races, and here I note that SamuelA has not yet offered his opinion, as I asked before, on what I imagine he would familiarly refer to – being non-racist in any way whatsoever – as “the Negro”.
I would assume that the corresponding theory is that the Negro is genetically predisposed to be stupid and eke out a career dealing drugs and robbing gas stations. Those Negroes who might graduate magna cum laude from Harvard Law and become president of the United States are, of course, freaks of nature and can be ignored. So I am anxious to hear SamuelA’s view of the Negro, cast in the same light of “genetic adaptation” to the white Aryan culture in which he has – in so incredibly non-racist a manner – cast the brilliant Asian. The genetic contribution to societal productivity is certainly an important concept to all non-racists and non-Nazi non-eugenicists like SamuelA, so we would like to hear more from this eminent authority.
Lucky you, K9friendfinder.
Oh. That’s simple. This is right up your alley, even. The base subunit in your brain does the following about 1,000 times a second: an electrical signal arrives at a synapse. Vesicles mechanically dock in response and dump a neurotransmitter into a very narrow gap. Diffusion carries the neurotransmitter across, and an electric charge is added to or subtracted from the receiver.
This is the same thing as Receiver = MAC(Sender); Branch(Receiver): a multiply-accumulate over the weighted inputs, then a branch on whether the accumulated value crosses a threshold.
We can, right now, today, trivially make computer chips that do this fundamental operation in 1 clock cycle, and run at ~2 GHz while doing it. Most modern GPUs run at between 1.2 and 2 GHz, and contain thousands of hardware subunits doing this very operation.
You need not thousands but trillions - a vast data center crammed full of custom chips that would resemble a GPU in some ways - but if this were a Manhattan Project style effort, you could actually build a machine that has the same scale and scope as a brain.
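As a toy sketch of that base operation, and nothing more (the input count, weights, and threshold below are made-up illustrative numbers), the multiply-accumulate-and-branch loop looks roughly like this:

```
import numpy as np

# Each incoming spike adds weight * signal to the receiving unit (the MAC part),
# and the unit "branches" (fires) when its accumulated charge crosses a threshold.
rng = np.random.default_rng(0)

n_inputs = 1000
weights = rng.normal(0.0, 0.1, n_inputs)   # synaptic strengths, positive or negative
threshold = 1.0

def neuron_step(incoming_spikes, potential):
    """One update: accumulate weighted inputs, fire if over threshold."""
    potential += weights @ incoming_spikes    # multiply-accumulate
    fired = potential >= threshold            # branch
    if fired:
        potential = 0.0                       # reset after firing
    return fired, potential

potential = 0.0
for t in range(1000):                          # ~1,000 updates per "second"
    spikes = (rng.random(n_inputs) < 0.01).astype(float)
    fired, potential = neuron_step(spikes, potential)
```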
The reason this is up your alley is the biggest weapon on the planet isn’t nukes, it’s the human mind that allowed us to bang rocks together until we had nukes.
While you have to actually program a computer that has the same physical capability as the brain with the algorithms that make it sentient like the brain - a far harder task than building the raw hardware, which is why we have not yet done it - *when* that problem is solved, this would be roughly the same relative advance as going from conventional to nuclear weapons.
A machine mind that runs at 2 GHz would be 2 million times quicker, give or take (2 GHz against the brain’s roughly 1 kHz update rate). It would make a nation that had just one, with the same capability as one human but 2 million times quicker, unbeatable given time to take advantage of it.
You know the idea of a Gantt chart, right? The key idea here is that all complex projects, whether it’s making a new jet fighter, an anti-ballistic missile, or some other strategic-level weapon, are limited by a single “critical path” of steps that must be done in sequence. You can put the best people in the world on that path, and work them 16 hours a day, but it is still going to take you years to decades to develop a major new weapon to a deployable state.
So if you had a super-AI that could do the key process steps and get you new prototypes in hours, where you just have to wait for them to be automatically fabricated, you could compress that timeline down to probably months per generation of weapon. You’d do similar compression steps for developing factories to build you more computers so you can have more AI nodes, factories to make you more factories, and so on.
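To make the critical-path idea concrete, here’s a toy sketch (the task names and durations are invented, not any real program plan): however you staff the other tasks, the finish date is set by the longest chain of dependent steps.

```
from functools import lru_cache

# Hypothetical weapon-program tasks: name -> (duration in months, prerequisites)
tasks = {
    "requirements":  (3,  []),
    "airframe":      (24, ["requirements"]),
    "engine":        (30, ["requirements"]),
    "avionics":      (18, ["requirements"]),
    "integration":   (12, ["airframe", "engine", "avionics"]),
    "flight_test":   (18, ["integration"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

# The critical path length is the latest finish among all tasks.
print(max(earliest_finish(t) for t in tasks))   # 63 months for this made-up plan
```

Speeding up anything off that longest chain changes nothing; compressing the chain itself is what shortens the program.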
The logical thing to do would be to develop enough defense capability against nukes that you then start a world war and beat everyone else. A few nukes getting through your defenses won’t knock you out because the only thing that matters are these self replicating factory nodes and AI nodes, and just 1 of each has to survive and they can copy themselves.
All the logistic problems with invading every nation on earth at the same time and controlling every surviving human after you win go away when you can do it all with machine intelligence.
This is one scenario. There are many others. But the lure of it is very, very tempting to a lot of nations for national defense reasons.
What are the possible reasons that this won’t happen? Because it will unless something incredible happens.
a. A nuclear war ends civilization first
b. It turns out that human beings have mystical 'souls' that provide us our sentience
c. All the major powers agree that AI research is too dangerous and refuse to do it and nobody cheats and everyone honors the agreement and a world police force is formed to inspect all nations.
d. It turns out that the problem is too hard and you can't just write an algorithm you can describe in a few pages and kick the ass of any human alive at a well defined task. Oh, whoops, you can.
e. It's going to take so long that you and I will both be dead of old age first.
Most board members who think about this probably just assume (e) is the answer, to be quite frank. And I can’t deny the logic: progress on this seems to be accelerating dramatically, but I can’t say whether it will keep accelerating and we hit machine sentience before 2030 or not.
Alright, SamuelA, this is a hipshot; you’ve described ‘Point “B”’ knowing where we’re at now. You’re talking about the ‘when’ of getting there.
I’m point-blank asking you: if we get there, how is it going to happen? We’re at Point “A”. Your ‘Point “B”’ is too esoteric and nebulous to argue without the ‘how’ to get there.
Tripler
Bridge that gap, brother.
[Moderating]
SamuelA, saying “fuck you” to other posters is a violation of the Pit’s language rules. Please avoid this in the future.
No warning issued.
[/Moderating]
Which *how* do you care about? You realize that I don’t realistically know. There are multiple converging paths. They all lead there. Once we get there the paths we didn’t take will probably become feasible.
You know, if during the Manhattan Project we had decided to go all in on just one of the 3 main methods (calutrons, centrifuge enrichment, plutonium breeding), we’d still have gotten nukes. Slightly sooner, even. And once we had nukes, going back and exploring the other methods would have been a lot easier to justify. In fact, more recently, we found a fourth method.
Right now the method that to me feels the most valid is that we work on lower-level systems than machine sentience. We use the shit we’ve already demoed and adapt it to run robots that do just limited-scope tasks. Pick this weed, pick up that can, restock those shelves, pick up that rock, drill that ore vein, install that gear, drive that car.
Each task is something in the physical world that humans are currently doing. It’s something where there is a correct answer, every time. It’s a task you can break into smaller substeps, where you can clearly define rules for doing the task “better” (finishing the task without dropping anything, finishing faster, and not hitting the robot arm against something all make your solution better).
And it’s a significant fraction of all jobs on Earth.
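As a rough sketch of what those “rules for doing the task better” could look like in practice (all the names and numbers here are hypothetical), a shaped score for a pick-and-place robot might be:

```
def episode_reward(completed, seconds_taken, items_dropped, collisions):
    reward = 0.0
    if completed:
        reward += 100.0            # finishing the task at all
        reward -= seconds_taken    # finishing faster is better
    reward -= 50.0 * items_dropped # dropping something is heavily penalized
    reward -= 25.0 * collisions    # so is hitting the arm against anything
    return reward

# A clean 42-second run scores higher than a faster but sloppy one.
print(episode_reward(True, 42.0, 0, 0))   # 58.0
print(episode_reward(True, 35.0, 1, 2))   # -35.0
```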
Once we get all that working real smooth, we get robots that blow past human ability at doing these defined tasks (they aren’t just more physically capable and tireless; I expect them to be smarter. They’ll find ways to do these tasks that use fewer motions, take less time, and make fewer errors than a human would, even without their actuators being better), and we can push it further.
Make intelligence systems that use predictive models of physical reality generated from the collective experiences of millions of robots. What I mean is that if you stick any collection of random physical objects that any of the robots in the pool have experience with in front of this new system, it’ll be able to predict what will happen if you manipulate them.
It’ll know from experience that the red rubber ball will bounce and by how much. That the chip bag will crumple and how. That the gear edges are sharp and can do damage to the robot’s own wiring and hydraulic lines.
And then if you ask it to accomplish a task that requires building a Rube Goldberg machine, and write some additional task solver modules, it’ll be able to do it. Not all on its own; humans wrote the extra software to do it, but those humans are taking advantage of the existing knowledge and ability the machine pool has.
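A bare-bones sketch of that kind of predictive model, with made-up sizes and no claim to be any particular system: a network trained on pooled robot experience to map the current state plus a proposed action to the predicted next state.

```
import torch
import torch.nn as nn

STATE_DIM = 64    # hypothetical encoding of object poses/geometry
ACTION_DIM = 8    # hypothetical encoding of an arm command

dynamics = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, STATE_DIM),   # predicted next state
)

def predict_next_state(state, action):
    return dynamics(torch.cat([state, action], dim=-1))

# Training over logged robot experience (state, action, next_state):
# minimize the error between predicted and observed next states.
optimizer = torch.optim.Adam(dynamics.parameters(), lr=1e-3)

def train_step(state, action, next_state):
    pred = predict_next_state(state, action)
    loss = nn.functional.mse_loss(pred, next_state)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```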
I think you could iterate that way until you crack things like full machine self replication and you could probably crack nanotechnology the same way.
Even non-sentient agents could predict how some carbon atoms are likely to move along a surface in a vacuum chamber when dragged around by atomic force microscope probes. Advanced agents could plan a sequence of steps to move the atoms to form some assembly. Really advanced agents could design an assembly that accomplishes a goal.
You could eventually bootstrap your way up to agents that design for you whole nanoscale assembly lines and armies of nanoscale robotic waldos, and eventually achieve self replication. (Note that this is NOT what we think of as sci-fi nanobots. It’s these big flat plates that are very fragile and covered with I/O ports. The machinery lives in a vacuum chamber and can never see pressure or even visible light without being destroyed. There’s a maze of plumbing supplying various gases to the ports. It sucks a lot of power and there’s a huge flow of coolant going in and out. The products are either a fine powder or more flat plates.)
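As a toy illustration of what one of those non-sentient planning agents might do (the positions and the simple greedy strategy are invented purely for the example), here’s a sketch that assigns each atom to a target site and orders the probe drags:

```
import math

atoms   = [(0.0, 0.0), (5.0, 1.0), (2.0, 7.0)]   # current (x, y) positions in nm
targets = [(1.0, 1.0), (4.0, 4.0), (2.0, 6.0)]   # desired lattice sites

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_moves(atoms, targets):
    """Greedy assignment: each atom goes to the nearest still-unfilled target site."""
    moves, remaining = [], list(targets)
    for atom in atoms:
        best = min(remaining, key=lambda t: dist(atom, t))
        remaining.remove(best)
        moves.append((atom, best, dist(atom, best)))
    return moves

for start, end, d in plan_moves(atoms, targets):
    print(f"drag atom at {start} to {end} ({d:.2f} nm of probe travel)")
```

Real atomic-force-microscope manipulation planning is far messier than this; the point is only that the planning step itself doesn’t require sentience.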
I don’t know how to go from this to what we think of as full sentience. I’m not really worried about it, I think what I have described is already way beyond human ability in many areas, and I think you would be able to build various “meta” modules that self-optimize other AIs, analyze human speech, and one day you’d reach a critical mass of complexity and self-improvement loops that gives you the AI we’ve wanted this entire time.
Well the problem is that if you can’t tell us how we’re getting from A to B, if you cannot offer proof of your argument, or do some research to know what is being done and how, then you’re just postulating. Expressing a guess. An opinion.
Don’t be the guy who goes to the machine shop and says “I have an idea that’s going to make us a billion dollars! Build me a machine that can move individual molecules to build larger structures.”
Machinist says “Great, tell me how to build it.”
Genius says “Oh no, I just gave you the idea. Now you build it.”
“I have a great idea for a screenplay. I tell you what it is, you write it and we’ll split 50/50.”
It’s really easy to enthusiastically speculate on creating particular effects on the world. One can paper over any expected difficulty and wave away possible impossibilities, since it’s all happening within one’s mind according to what one wishes.
I really wish SamuelA spent as much time actually working on his ideas concretely as he does going on about it. Just taking a one week break from this forum might do him well.
I hate myself right now, but he’s not saying this is a progression. He’s saying that it’s multiple choice.
He is, of course, the King of “Then-A-Miracle-Occurs”. This thread amply demonstrates that.
You’re absolutely right and I thank you for actually helpful advice.
Best case of that was an IT project I worked on for a really big company. I took the job saying that I would stay 6 months, and if the project wasn’t off the ground, I was leaving. At the 6 month mark, the manager who was acting as project manager released a four inch thick project plan covering a 14 month development cycle. Listed seven pages of people working on the project. Did not include the programmers.
The programming part of the project was allocated 30 days.
Gave my notice, moved on, people got pissed at me for doing it. Another 5-6 months later the project was shitcanned and the entire 110 person division laid off.
I have no idea if the following applies to you but I figured it might be useful if it does resonate with you:
Sometimes, it’s easy to get so focused on something that you tense up, hyperfocus, lose perspective and small things appear much bigger than they really are because you associate them with something in your past or your sense of self.
If you step away for a while, like a two-day vacation you give yourself to enjoy something light and pleasant, you might benefit from a second, fresh look. The worst that will happen is that you’ll wind up right back where you are now.
What, you never thanked me for helpful advice; all you did was ignore me.
I really feel slighted here.
I care about any “how.” There are not multiple converging paths, and the future is infinitely disparate from what we think it is. “They” [the paths] do not necessarily lead there. I’m looking to find your evidence on why you think they do.
If we hadn’t gone with one or two of the main methods, we would have had two gun-type devices during the war. The implosion method was already proven by mathematics, but not yet supplied with material. Little Boy and Thin Man would have been our devices for decades until we had plutonium production online.
A “feel” statement is an opinion, and is indefensible/inarguable.
How do you make intelligence systems use this method? I understand your ends, but with what ways and means do you intend to effect this change?
The information given to the machine is only as good as the person giving that information. GIGO. Your ideal machines are prone to hacking.
Cite?
I’m sorry but if you don’t know how we get from “A” to “B”, then your argument is moot; you’re just postulating a utopian society without any evidence to back it up.
Tripler
Open ears.
Ok, I’m a little confused now. What cites do you need? Do I need to link the lectures in Udacity or one of the other AI training sites, or the papers by Google, or what? This stuff is all very new and cutting edge. Everything I said works or will work Real Soon Now. Including planning agents that can model nanotechnology.
What are you talking about by “hacking”? Or “giving information to the machine?”
That’s not what reinforcement learning is. Humans build the plumbing, but the reason the machine would “know” a bag of chips crumples is that it has subsystems that do that, and those subsystems figured it out from observation.
A simple one would just have a neural network that takes the output from the classifiers. That’s the module that looks at the camera feed and labels the different parts of the image. Like “chip bag”.
Other subsystems would reconstruct the geometry from a mixture of stereo cameras and lidar.
And those subsystems feed into a simulator. That’s a neural network that predicts the new state of the system. It would have weights and would predict that the future state of the chip bag, post pressure, is pressed inward more, with geometry distortions predicted by numbers that were learned from the data.
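Schematically, and with every name and size invented purely for illustration, the pipeline being described is roughly: classifier labels plus reconstructed geometry plus the proposed action, fed into a learned simulator that predicts the new geometry.

```
import torch
import torch.nn as nn

class PerceiveAndPredict(nn.Module):
    def __init__(self, image_feat=512, geom_feat=128, action_dim=8, n_labels=100):
        super().__init__()
        self.classifier = nn.Linear(image_feat, n_labels)   # labels parts of the scene ("chip bag", etc.)
        self.simulator = nn.Sequential(                      # learned physics predictor
            nn.Linear(n_labels + geom_feat + action_dim, 256),
            nn.ReLU(),
            nn.Linear(256, geom_feat),                        # predicted new geometry
        )

    def forward(self, image_features, geometry, action):
        labels = self.classifier(image_features).softmax(dim=-1)
        return self.simulator(torch.cat([labels, geometry, action], dim=-1))

model = PerceiveAndPredict()
pred_geometry = model(torch.randn(1, 512), torch.randn(1, 128), torch.randn(1, 8))
```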
It’s a very complex topic to be honest. I can’t really do it justice. I just “know” we can get these pieces to work extremely well, and to build agents that do more complex tasks. And there’s hundreds of billions of dollars being poured into it.
I also “know” that the problem I have described (various common objects inside a robotic test cell, several robotic arms, and a defined goal that requires the machine to “invent” a Rube Goldberg machine to accomplish the task) is the type of problem that is very solvable with the current state of the art.
How could you possibly “know” this?