  #801  
Old 01-26-2018, 06:17 PM
JohnT's Avatar
JohnT is offline
Charter Member
 
Join Date: Jul 2001
Location: San Antonio, TX
Posts: 23,911
Given his favorite topic of blather, may I suggest he be called a rAcIst?
  #802  
Old 01-26-2018, 07:08 PM
JohnT's Avatar
JohnT is offline
Charter Member
 
Join Date: Jul 2001
Location: San Antonio, TX
Posts: 23,911
Btw, that's a double ignore of Wolfpup.

Welcome to the club. May our man bring you your smoking jacket? How do you prefer your scotch?
  #803  
Old 01-26-2018, 07:15 PM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,073
Damnit. What do I have to do to get on the list? I've been refuted with Wikipedia and everything!
  #804  
Old 01-26-2018, 07:45 PM
Tripler is offline
Charter Member
 
Join Date: May 2000
Location: JSOTF SDMB, OL-LANL
Posts: 7,316
I'm going to back up here a few days. I wanted to tie both posts together because I think they are related, and I've been out of pocket. But there are a few unresolved questions I'd like to re-ask for clarification. . . Again, I'm in a civil tone for ya.

Quote:
Originally Posted by SamuelA View Post
What'll really bake your noodle is that if our theories are correct, and we have reasons to think they are, then all the world will eventually converge onto these ideas.
I'm still skeptical of the reasons you think these theories are correct, and I'd like more context on the discussion. Can you please offer a link to it? And which participant are you--do you post as "SamuelA" in that discussion?

Quote:
Originally Posted by SamuelA View Post
Aumann's agreement theorem says that two people acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.
First, rationality is a subjective determination: it is entirely dependent on the lifetime experience of the observer, on their impression of the other speaker (including the speaker's credibility and the topic), and on a third party for judgement. Second, humans are not models for Bayesian rationalists, nor are they Bayesian rationalists; there is too wide a spectrum of variables, and too wide a floating range of values in those variables, to be even remotely predictable. For something like human emotion/choice, with the degrees of freedom involved, one is best off using a Monte Carlo method of analysis to capture such stochastic variables and degrees. We do it all the time here for physics modelling.
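To make that concrete, here is a minimal sketch of the kind of Monte Carlo run I mean - every variable, weight, and distribution below is an illustrative assumption, not a real behavioral model:

Code:
import random

# Toy model: estimate the spread of an "agreement score" between two
# speakers whose inputs (credibility, experience, mood) are stochastic
# rather than fixed Bayesian quantities. All weightings are assumptions.
def agreement_score(credibility, experience, mood):
    return 0.5 * credibility + 0.3 * experience + 0.2 * mood

def monte_carlo(trials=100_000):
    scores = []
    for _ in range(trials):
        c = random.uniform(0, 1)    # perceived credibility of the speaker
        e = random.gauss(0.5, 0.2)  # lifetime-experience factor
        m = random.gauss(0.5, 0.3)  # emotional state at the moment
        scores.append(agreement_score(c, e, m))
    mean = sum(scores) / trials
    sd = (sum((s - mean) ** 2 for s in scores) / trials) ** 0.5
    return mean, sd

mean, sd = monte_carlo()
print(f"mean agreement: {mean:.3f}, spread: {sd:.3f}")

The point of the exercise is the spread: even with fixed weights, stochastic inputs give you a distribution of outcomes, not the single converged answer a Bayesian-rationalist model promises.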

Quote:
Originally Posted by SamuelA View Post
Once every sentient being is an AI or a human converted to a computer and has sufficient processing power, we will all have the same common set of data about the world and the adequate cognitive ability to converge on the same conclusions. In the more immediate future, we're mere years away from limited function data analysis tools that can augment human intelligence and thus produce the correct conclusions given the data.
This is a pretty bold statement, and I'll counter that "Once" or "when" statements are entirely dependent on "if" arguments. I'll get into that below.

Quote:
Originally Posted by SamuelA View Post
Then why claim to ignore me and say you don't care as loudly and repeatedly as possible?
I have made no such claims since post 556. Our exchanges since post 645 have abrogated that entente.

Quote:
Originally Posted by SamuelA View Post
You know I don't claim to know the answer to your question because I don't know the way the future will go. Ultimately all that theorem really means in this context, as wolfpup points out :
Before I get to your assertion about wolfpup, I will reiterate what I mentioned earlier. Your "Once" statement above is predicated on "if" it happens. I cannot agree with you that something "will" happen when we cannot agree "if" it will. That's why I asked those particular questions about the technology. . . My "bottom line" will address this and the earlier statement/question.

Quote:
Originally Posted by SamuelA View Post
a. Physical reality is a game with fixed rules. Like all games, one and only one optimal strategy exists, given the same end goal.

b. As smarter beings begin to replace humans - whether that be AIs, cyborgs, genetically engineered humans, it doesn't matter - those beings will have the neural ability to follow more optimal strategies. I know what I am doing now is not optimal, but my cave man emotions won't let me do what I know is better. (hence I don't have a 6-pack, 5 girlfriends, and a job as a quant making 500k a year, even though there exists a sequence of actions I could have logically worked out and taken to get there if I were an inhuman, rational agent)

c. Smarter beings will also have vastly more memory capacity and ability to share data with each other digitally.
You had my agreement up until "digitally." But what is your vision for digital humankind without "cave man emotions"? Isn't a purely digital being a different species? E.g., Vulcan, Borg?

Quote:
Originally Posted by SamuelA View Post
Hence, if beings can share data with each other digitally, and analyze it using the most optimal strategy they know about in common, they will reach the same conclusion. In the same way that 2 calculators agree with each other as wolfpup points out.
I think you're assuming that the digital will be both credible and applicable in that interchange. I posit that will never be the case. Two individuals--even digital ones--will never share the same perspective, based on the elementary fact that they are two distinct beings and cannot occupy the same space at the same time.

Quote:
Originally Posted by SamuelA View Post
Part of the reason this idea has impressed me is that religion, politics, personal lifestyle choices - they are all strategies to accomplish goals. Given the same goals and knowledge of the optimal strategy, rational beings wouldn't have 5000 opinions for religion/politics/personal choices. A correct answer (where correct means "most probable strategy to accomplish your goals) exists for each of these "taboo" topics.
I disagree. The accomplishment of goals is based on a discrete individual's ways, means, and ends. No pair of individuals will have the same abilities. Perhaps the same goals, but never the exact same ways and means.

Quote:
Originally Posted by SamuelA View Post
If you encountered another being with a different opinion, you could just plug your serial ports together or whatever and swap memory files. You would literally be able to work out mathematically why that being's opinion is different. Maybe one of you is unaware of the most optimal strategy - you could share it with the other, they could run that strategy on their experiences, determine it has a higher expected value, and switch over.
I strongly disagree. Referring to my earlier comment about digital beings, you cannot have mathematical 'humans' without a basic emotion--you are speaking in terms of apples and bowling balls. But that gets to my bottom line:

Bottom Line: You're implying a mechanical, digital-based utopian society that is currently indefensible as a future prospect. You even admit this is indefensible with your comment that: "I don't claim to know the answer to your question because I don't know the way the future will go."
So what are you positing for discussion?

I offer that "when" humans are "converted to a computer" is completely dependent on the more pertinent question of "if". If you differ, please make your argument.

Tripler
An open discussion, SamuelA.

Last edited by Tripler; 01-26-2018 at 07:46 PM.
  #805  
Old 01-26-2018, 08:46 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,228
Quote:
Originally Posted by JohnT View Post
Btw, that's a double ignore of Wolfpup.

Welcome to the club. May our man bring you your smoking jacket? How do you prefer your scotch?
You are too kind, sir. And far too modest. May I remind everyone that you are yourself the recipient of the coveted triple ignore, a stature heretofore unachieved by anyone else, and one which I can only look upon with all appropriate awe.

While I'm under no illusions that I can achieve a triple ignore myself, hope springs eternal, and there are so many opportunities that I can't help but make another effort, to wit:
Quote:
Originally Posted by SamuelA View Post
Asians are better at 'our' culture than we are. Because the selection pressure that created them happened to lead to adaptations that are slightly more optimal at it. That's what I think at the present.
Here we are informed that the Asian has been genetically evolved to conscientiously do his homework, get top grades, and be a top-notch contributor to the white Aryan culture. You can tell this by their slanty eyes, which genetically came about from cramming all night by candlelight in order to get an "A" on the next day's test. The logic is of course impeccable, as befits SamuelA's giant throbbing brain analyzing all data at superhuman speeds -- and certainly not racist in any way whatsoever -- but I still want to know SamuelA's genetic theories of other races, and here I note that SamuelA has not yet offered his opinion, as I asked before, on what I imagine he would familiarly refer to -- being non-racist in any way whatsoever -- as "the Negro".

I would assume that the corresponding theory is that the Negro is genetically predisposed to be stupid and eke out a career dealing drugs and robbing gas stations. Those Negroes who might graduate magna cum laude from Harvard Law and become president of the United States are, of course, freaks of nature and can be ignored. So I am anxious to hear SamuelA's view of the Negro, cast in the same light of "genetic adaptation" to the white Aryan culture in which he has -- in so incredibly non-racist a manner -- cast the brilliant Asian. The genetic contribution to societal productivity is certainly an important concept to all non-racists and non-Nazi non-eugenicists like SamuelA, so we would like to hear more from this eminent authority.
  #806  
Old 01-26-2018, 08:48 PM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,022
Quote:
Originally Posted by SamuelA View Post
I am going back to ignoring you. You and everyone who has called me a racist is too fucking stupid to engage further. K9friendfinder, thanks again for reading my posts and not jumping to the wrong conclusions.
Lucky you, K9friendfinder.
  #807  
Old 01-26-2018, 08:57 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Tripler View Post
I offer that "when" humans are "converted to a computer" is completely dependent on the more pertinent question of "if". If you differ, please make your argument.

Tripler
An open discussion, SamuelA.
Oh. That's simple. This is right up your alley, even. The base subunit in your brain does the following about 1k times a second: an electrical signal arrives at a synapse. Vesicles mechanically dock in response and dump a neurotransmitter into a very narrow gap. Diffusion carries the neurotransmitter across, and an electric charge is added or subtracted from the receiver.

This is the same thing as Receiver = MAC(Sender). Branch(Receiver)

We can, right now, today, trivially make computer chips that do this fundamental operation in 1 clock cycle and run at ~2 GHz while doing it. Most modern GPUs run at between 1.2 and 2 GHz and contain thousands of hardware subunits doing this very operation.

You need not thousands but trillions of them - a vast data center crammed full of custom chips that would resemble a GPU in some ways - but if this were a Manhattan Project-style effort, you could actually build a machine that has the same scale and scope as a brain.
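In code, the claim boils down to this - a generic integrate-and-fire step written as a multiply-accumulate plus a branch, with the 1 kHz and 2 GHz figures from above; this is an illustration of the analogy, not a model of real neural hardware:

Code:
import numpy as np

# One "synaptic update" step as described: Receiver = MAC(Sender),
# then a branch on a firing threshold.
def step(weights, spikes, potentials, threshold=1.0):
    potentials = potentials + weights @ spikes  # multiply-accumulate
    fired = potentials >= threshold             # the "branch"
    potentials[fired] = 0.0                     # reset cells that fired
    return potentials, fired.astype(float)

rng = np.random.default_rng(0)
n = 1000                               # toy population size
weights = rng.normal(0, 0.05, (n, n))  # signed weights: add or subtract charge
potentials = np.zeros(n)
spikes = (rng.random(n) < 0.1).astype(float)

for _ in range(10):
    potentials, spikes = step(weights, spikes, potentials)

# The "2 million times quicker" figure is just the ratio of step rates:
print(2e9 / 1e3)  # 2 GHz silicon / ~1 kHz biology = 2,000,000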

The reason this is up your alley is the biggest weapon on the planet isn't nukes, it's the human mind that allowed us to bang rocks together until we had nukes.

You still have to program a computer that has the same physical capability as the brain with the algorithms that make it sentient like the brain - a far harder task than building the raw hardware, which is why we have not yet done it - but when that problem is solved, it would be roughly the same relative advance as going from conventional to nuclear weapons.

A machine mind that runs at 2 GHz would be 2 million times quicker, give or take. It would make a nation that had just one - with the same capability as one human, but 2 million times quicker - unbeatable, given time to take advantage of it.

You know the idea of a Gantt chart, right? The key idea here is that all complex projects, whether a new jet fighter, an anti-ballistic missile, or some other strategic-level weapon, are limited by a single "critical path" of steps that must be done. You can put the best people in the world on that path and work them 16 hours a day, but it is still going to take you years to decades to develop a major new weapon to a deployable state.
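(For the record, the "critical path" is just the longest path through the dependency graph - a minimal sketch, with made-up tasks and durations:)

Code:
from functools import lru_cache

# Hypothetical program tasks: name -> (duration in months, dependencies)
tasks = {
    "requirements": (3, []),
    "airframe":     (18, ["requirements"]),
    "engine":       (24, ["requirements"]),
    "integration":  (9, ["airframe", "engine"]),
    "flight_test":  (12, ["integration"]),
}

@lru_cache(maxsize=None)
def finish(task):
    dur, deps = tasks[task]
    return dur + max((finish(d) for d in deps), default=0)

# The critical path sets the floor: staffing the other branches harder
# does not shorten the program below this number.
print(max(finish(t) for t in tasks))  # 48 months for this toy plan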

So if you had a super-AI that could do the key process steps and get you new prototypes in hours, where you just have to wait for them to be automatically fabricated, you could compress that timeline down to probably months per generation of weapon. You'd do similar compression steps for developing factories to build you more computers so you can have more AI nodes, factories to make you more factories, and so on.

The logical thing to do would be to develop enough defense capability against nukes that you then start a world war and beat everyone else. A few nukes getting through your defenses won't knock you out because the only thing that matters are these self replicating factory nodes and AI nodes, and just 1 of each has to survive and they can copy themselves.

All the logistic problems with invading every nation on earth at the same time and controlling every surviving human after you win go away when you can do it all with machine intelligence.

This is one scenario. There are many others. But the lure of it is very, very tempting to a lot of nations for national defense reasons.

What are the possible reasons that this won't happen? Because it will unless something incredible happens.

a. A nuclear war ends civilization first
b. It turns out that human beings have mystical 'souls' that provide us our sentience
c. All the major powers agree that AI research is too dangerous and refuse to do it and nobody cheats and everyone honors the agreement and a world police force is formed to inspect all nations.
d. It turns out that the problem is too hard and you can't just write an algorithm you can describe in a few pages and kick the ass of any human alive at a well defined task. Oh, whoops, you can.
e. It's going to take so long that you and I will both be dead of old age first.

Most board members who think about this probably just assume (e) is the answer, to be quite frank. And I can't deny the logic; progress on this seems to be accelerating dramatically, but I can't say whether it's going to continue accelerating and we hit machine sentience before 2030 or not.

Last edited by SamuelA; 01-26-2018 at 09:00 PM.
  #808  
Old 01-26-2018, 09:52 PM
Tripler is offline
Charter Member
 
Join Date: May 2000
Location: JSOTF SDMB, OL-LANL
Posts: 7,316
Alright, SamuelA, this is a hipshot: you've described 'Point "B"' knowing where we're at now. You're talking about when we get there.

I'm point-blank asking you: if we get there, how is it going to happen? We're at Point "A". Your 'Point "B"' is too esoteric and nebulous to argue without the 'how' to get there.

Tripler
Bridge that gap, brother.
  #809  
Old 01-26-2018, 10:58 PM
Miller's Avatar
Miller is offline
Sith Mod
Moderator
 
Join Date: Dec 2000
Location: Bear Flag Republic
Posts: 44,636
[Moderating]
SamuelA, saying "fuck you" to other posters is a violation of the Pit's language rules. Please avoid this in the future.

No warning issued.
[/Moderating]
  #810  
Old 01-26-2018, 11:21 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Tripler View Post
Alright, SamuelA, this is a hipshot: you've described 'Point "B"' knowing where we're at now. You're talking about when we get there.

I'm point-blank asking you: if we get there, how is it going to happen? We're at Point "A". Your 'Point "B"' is too esoteric and nebulous to argue without the 'how' to get there.

Tripler
Bridge that gap, brother.
Which 'how' do you care about? You realize that I don't realistically know. There are multiple converging paths. They all lead there. Once we get there, the paths we didn't take will probably become feasible.

You know, if during the Manhattan Project we had decided to go all-in on just one of the 3 main methods (calutrons, centrifuge enrichment, plutonium breeding), we'd still have gotten nukes. Slightly sooner, even. And once we had nukes, going back and exploring the other methods would have been a lot easier to justify. In fact, more recently, we found a fourth method.

Right now the method that feels most valid to me is to work on lower-level systems than machine sentience. We use the shit we've already demoed and adapt it to run robots that do just limited-scope tasks. Pick this weed, pick up that can, restock those shelves, pick up that rock, drill that ore vein, install that gear, drive that car.

Each task is something in the physical world that humans are currently doing. It's something where there is a correct answer, every time. It's a task you can break into smaller substeps, where you can clearly define rules for doing the task "better" (finishing without dropping anything, finishing faster, and not banging the robot arm against anything all make your solution better).

And it's a significant fraction of all jobs on Earth.

Once we get all that working real smooth, we get robots that blow past human ability at these defined tasks (they aren't just more physically capable and tireless; I expect them to be smarter. They'll find ways to do these tasks with fewer motions, less time, and fewer errors than a human would, even without their actuators being better), and then we can push it further.

Make intelligence systems that use predictive models of physical reality generated from the collective experiences of millions of robots. What I mean is that if you stick any collection of random physical objects that any robot in the pool has experience with in front of this new system, it'll be able to predict what will happen if you manipulate them.

It'll know from experience that the red rubber ball will bounce and by how much. That the chip bag will crumple and how. That the gear edges are sharp and can do damage to the robot's own wiring and hydraulic lines.

And then if you ask it to accomplish a task that requires building a Rube Goldberg machine, and write some additional task-solver modules, it'll be able to do it. Not all on its own - humans wrote the extra software to do it - but humans taking advantage of the existing knowledge and ability the machine pool has.

I think you could iterate that way until you crack things like full machine self-replication, and you could probably crack nanotechnology the same way.

Even non-sentient agents could predict how some carbon atoms are likely to move along a surface in a vacuum chamber when dragged around by atomic force microscope probes. Advanced agents could plan a sequence of steps to move the atoms to form some assembly. Really advanced agents could design an assembly that accomplishes a goal.

You could eventually bootstrap your way up to agents that design for you whole nanoscale assembly lines and armies of nanoscale robotic waldos, and eventually achieve self-replication. (Note that this is NOT what we think of as sci-fi nanobots. It's these big flat plates that are very fragile and covered with I/O ports. The machinery lives in a vacuum chamber and can never see pressure or even visible light without being destroyed. There's a maze of plumbing supplying various gases to the ports. It sucks a lot of power and there's a huge flow of coolant going in and out. The products are either a fine powder or more flat plates.)

I don't know how to go from this to what we think of as full sentience. I'm not really worried about it; I think what I have described is already way beyond human ability in many areas, and I think you would be able to build various "meta" modules that self-optimize other AIs, analyze human speech, and one day you'd reach a critical mass of complexity and self-improvement loops that gives you the AI we've wanted this entire time.

Last edited by SamuelA; 01-26-2018 at 11:25 PM.
  #811  
Old 01-26-2018, 11:41 PM
Chimera is offline
Member
 
Join Date: Sep 2002
Location: In the Dreaming
Posts: 24,689
Well, the problem is that if you can't tell us how we're getting from A to B, if you cannot offer proof of your argument or do some research to know what is being done and how, then you're just postulating. Expressing a guess. An opinion.

Don't be the guy who goes to the machine shop and says "I have an idea that's going to make us a billion dollars! Build me a machine that can move individual molecules to build larger structures."
Machinist says "Great, tell me how to build it."
Genius says "Oh no, I just gave you the idea. Now you build it."
  #812  
Old 01-26-2018, 11:59 PM
MichaelEmouse's Avatar
MichaelEmouse is offline
Guest
 
Join Date: Jan 2010
Posts: 7,390
"I have a great idea for a screenplay. I tell you what it is, you write it and we'll split 50/50."

It's really easy to enthusiastically speculate about creating particular effects on the world. One can paper over any expected difficulty and wave away possible impossibilities, since it's all happening within one's mind, according to what one wishes.

I really wish SamuelA spent as much time actually working on his ideas concretely as he does going on about them. Just taking a one-week break from this forum might do him good.

Last edited by MichaelEmouse; 01-27-2018 at 12:03 AM.
  #813  
Old 01-27-2018, 12:23 AM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,073
I hate myself right now, but he's not saying this is a progression. He's saying that it's multiple choice.

Quote:
Originally Posted by SamuelA View Post
What are the possible reasons that this won't happen? Because it will unless something incredible happens.

a. A nuclear war ends civilization first
b. It turns out that human beings have mystical 'souls' that provide us our sentience
c. All the major powers agree that AI research is too dangerous and refuse to do it and nobody cheats and everyone honors the agreement and a world police force is formed to inspect all nations.
d. It turns out that the problem is too hard and you can't just write an algorithm you can describe in a few pages and kick the ass of any human alive at a well defined task. Oh, whoops, you can.
e. It's going to take so long that you and I will both be dead of old age first.
He is, of course, the King of "Then-A-Miracle-Occurs". This thread amply demonstrates that.
  #814  
Old 01-27-2018, 12:33 AM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by MichaelEmouse View Post
I really wish SamuelA spent as much time actually working on his ideas concretely as he does going on about them. Just taking a one-week break from this forum might do him good.
You're absolutely right, and I thank you for the actually helpful advice.
  #815  
Old 01-27-2018, 01:16 AM
Chimera is offline
Member
 
Join Date: Sep 2002
Location: In the Dreaming
Posts: 24,689
Quote:
Originally Posted by Sunny Daze View Post
I hate myself right now, but he's not saying this is a progression. He's saying that it's multiple choice.



He is, of course, the King of "Then-A-Miracle-Occurs". This thread amply demonstrates that.
Best case of that was an IT project I worked on for a reallybigcompany. I took the job saying that I would stay 6 months, and if the project wasn't off the ground, I was leaving. At the 6 month mark, the manager who was acting as project manager released a four inch thick project plan covering a 14 month development cycle. Listed seven pages of people working on the project. Did not include the programmers.

The programming part of the project was allocated 30 days.

Gave my notice, moved on, people got pissed at me for doing it. Another 5-6 months later the project was shitcanned and the entire 110 person division laid off.
  #816  
Old 01-27-2018, 03:45 AM
MichaelEmouse's Avatar
MichaelEmouse is offline
Guest
 
Join Date: Jan 2010
Posts: 7,390
Quote:
Originally Posted by SamuelA View Post
You're absolutely right, and I thank you for the actually helpful advice.
I have no idea if the following applies to you, but I figured it might be useful if it resonates with you:

Sometimes, it's easy to get so focused on something that you tense up, hyperfocus, lose perspective and small things appear much bigger than they really are because you associate them with something in your past or your sense of self.

If you step away for a while - say, a two-day vacation you give yourself to enjoy something light and pleasant - you might benefit from a second, fresh look. The worst that will happen is that you'll wind up right back where you are now.

Last edited by MichaelEmouse; 01-27-2018 at 03:47 AM.
  #817  
Old 01-27-2018, 09:32 AM
Morgenstern is offline
Guest
 
Join Date: Jun 2007
Location: Southern California
Posts: 11,866
Quote:
Originally Posted by SamuelA View Post
You're absolutely right, and I thank you for the actually helpful advice.
What, you never thanked me for helpful advice; all you did was ignore me.
I really feel slighted here.
  #818  
Old 01-27-2018, 12:08 PM
Tripler is offline
Charter Member
 
Join Date: May 2000
Location: JSOTF SDMB, OL-LANL
Posts: 7,316
Quote:
Originally Posted by SamuelA View Post
Which 'how' do you care about? You realize that I don't realistically know. There are multiple converging paths. They all lead there. Once we get there, the paths we didn't take will probably become feasible.
I care about any "how." There are not multiple converging paths, and the future is infinitely disparate from what we think it is. "They" [the paths] do not necessarily lead there. I'm looking for your evidence as to why you think they do.

Quote:
Originally Posted by SamuelA View Post
You know, if during the Manhattan Project we had decided to go all-in on just one of the 3 main methods (calutrons, centrifuge enrichment, plutonium breeding), we'd still have gotten nukes. Slightly sooner, even. And once we had nukes, going back and exploring the other methods would have been a lot easier to justify. In fact, more recently, we found a fourth method.
If we hadn't gone with one or two of the main methods, we would have had two gun devices during the war. The implosion method was already proven by mathematics, but not supplied by material. Little Boy and Thin Man would have been our devices for decades until we had plutonium production online.

Quote:
Originally Posted by SamuelA View Post
Right now the method that feels most valid to me is to work on lower-level systems than machine sentience. We use the shit we've already demoed and adapt it to run robots that do just limited-scope tasks. Pick this weed, pick up that can, restock those shelves, pick up that rock, drill that ore vein, install that gear, drive that car.

Each task is something in the physical world that humans are currently doing. It's something where there is a correct answer, every time. It's a task you can break into smaller substeps, where you can clearly define rules for doing the task "better" (finishing without dropping anything, finishing faster, and not banging the robot arm against anything all make your solution better).

And it's a significant fraction of all jobs on Earth.
A "feel" statement is an opinion, and is indefensible/inarguable.

Quote:
Originally Posted by SamuelA View Post
Once we get all that working real smooth, we get robots that blow past human ability at these defined tasks (they aren't just more physically capable and tireless; I expect them to be smarter. They'll find ways to do these tasks with fewer motions, less time, and fewer errors than a human would, even without their actuators being better), and then we can push it further.

Make intelligence systems that use predictive models of physical reality generated from the collective experiences of millions of robots. What I mean is that if you stick any collection of random physical objects that any robot in the pool has experience with in front of this new system, it'll be able to predict what will happen if you manipulate them.
How do you make intelligence systems use this method? I understand your ends, but with what ways and means do you intend to effect this change?

Quote:
Originally Posted by SamuelA View Post
It'll know from experience that the red rubber ball will bounce and by how much. That the chip bag will crumple and how. That the gear edges are sharp and can do damage to the robot's own wiring and hydraulic lines.

And then if you ask it to accomplish a task that requires building a Rube Goldberg machine, and write some additional task-solver modules, it'll be able to do it. Not all on its own - humans wrote the extra software to do it - but humans taking advantage of the existing knowledge and ability the machine pool has. I think you could iterate that way until you crack things like full machine self-replication, and you could probably crack nanotechnology the same way.
The information given to the machine is only as good as the person giving that information. GIGO. Your ideal machines are prone to hacking.

Quote:
Originally Posted by SamuelA View Post
Even non-sentient agents could predict how some carbon atoms are likely to move along a surface in a vacuum chamber when dragged around by atomic force microscope probes.
Cite?

Quote:
Originally Posted by SamuelA View Post
Advanced agents could plan a sequence of steps to move the atoms to form some assembly. Really advanced agents could design an assembly that accomplishes a goal.

You could eventually bootstrap your way up to agents that design for you whole nanoscale assembly lines and armies of nanoscale robotic waldos, and eventually achieve self-replication. (Note that this is NOT what we think of as sci-fi nanobots. It's these big flat plates that are very fragile and covered with I/O ports. The machinery lives in a vacuum chamber and can never see pressure or even visible light without being destroyed. There's a maze of plumbing supplying various gases to the ports. It sucks a lot of power and there's a huge flow of coolant going in and out. The products are either a fine powder or more flat plates.)

I don't know how to go from this to what we think of as full sentience. I'm not really worried about it; I think what I have described is already way beyond human ability in many areas, and I think you would be able to build various "meta" modules that self-optimize other AIs, analyze human speech, and one day you'd reach a critical mass of complexity and self-improvement loops that gives you the AI we've wanted this entire time.
I'm sorry, but if you don't know how we get from "A" to "B", then your argument is moot; you're just postulating a utopian society without any evidence to back it up.

Tripler
Open ears.
  #819  
Old 01-27-2018, 12:37 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Tripler View Post
I care about any "how." There are not multiple converging paths, and the future is infinitely disparate from what we think it is. "They" [the paths] do not necessarily lead there. I'm looking for your evidence as to why you think they do.

How do you make intelligence systems use this method? I understand your ends, but with what ways and means do you intend to effect this change?

The information given to the machine is only as good as the person giving that information. GIGO. Your ideal machines are prone to hacking.

I'm sorry, but if you don't know how we get from "A" to "B", then your argument is moot; you're just postulating a utopian society without any evidence to back it up.

Tripler
Open ears.
Ok, I'm a little confused now. What cites do you need? Do I need to link the lectures on Udacity or one of the other AI training sites, or the papers by Google, or what? This stuff is all very new and cutting-edge. Everything I said works or will work Real Soon Now. Including planning agents that can model nanotechnology.

What are you talking about by "hacking"? Or "giving information to the machine?"

That's not what reinforcement learning is. Humans build the plumbing, but the reason the machine would "know" a bag of chips crumples is that it has subsystems that do that, and those subsystems figured it out from observation.

A simple one would just have a neural network that takes the output from the classifiers. That's the module that looks at the camera feed and labels the different parts of the image. Like "chip bag".

Other subsystems would reconstruct the geometry from a mixture of stereo cameras and lidar.

And those subsystems feed into a simulator. That's a neural network that predicts the new state of the system. It would have weights, and it would predict that the future state of the chip bag, post-pressure, is pressed inward more, with geometry distortions predicted by numbers learned from the data.

It's a very complex topic, to be honest. I can't really do it justice. I just "know" we can get these pieces to work extremely well, and to build agents that do more complex tasks. And there's hundreds of billions of dollars being poured into it.

I also "know" that the problem I have described - various common objects inside a robotic test cell, with several robotic arms and a defined goal that requires the machine to "invent" a Rube Goldberg machine to accomplish the task - is the type of problem that is very solvable with the current state of the art.
  #820  
Old 01-27-2018, 12:42 PM
Czarcasm's Avatar
Czarcasm is offline
Champion Chili Chef
Charter Member
 
Join Date: Apr 1999
Location: Portland, OR
Posts: 63,126
How could you possibly "know" this?
  #821  
Old 01-27-2018, 01:19 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
Quote:
Originally Posted by Darren Garrison View Post
Lucky you, K9friendfinder.
I don't know who that is, but he must have left quite the impression on Sam.
  #822  
Old 01-27-2018, 01:21 PM
Tripler is offline
Charter Member
 
Join Date: May 2000
Location: JSOTF SDMB, OL-LANL
Posts: 7,316
Quote:
Originally Posted by SamuelA View Post
Ok, I'm a little confused now. What cites do you need? Do I need to link the lectures on Udacity or one of the other AI training sites, or the papers by Google, or what? This stuff is all very new and cutting-edge. Everything I said works or will work Real Soon Now. Including planning agents that can model nanotechnology.
Pick any one. Go from there. . .

Quote:
Originally Posted by SamuelA View Post
What are you talking about by "hacking"? Or "giving information to the machine?"
The machines you describe are dependent on the data fed to them. For example, if the Soviets/Russians decide to feed bad data into the machine, then you'll have bad output.

Quote:
Originally Posted by SamuelA View Post
That's not what reinforcement learning is. Humans build the plumbing, but the reason the machine would "know" a bag of chips crumples is that it has subsystems that do that, and those subsystems figured it out from observation.

A simple one would just have a neural network that takes the output from the classifiers. That's the module that looks at the camera feed and labels the different parts of the image. Like "chip bag".

Other subsystems would reconstruct the geometry from a mixture of stereo cameras and lidar.

And those subsystems feed into a simulator. That's a neural network that predicts the new state of the system. It would have weights, and it would predict that the future state of the chip bag, post-pressure, is pressed inward more, with geometry distortions predicted by numbers learned from the data.

It's a very complex topic, to be honest. I can't really do it justice. I just "know" we can get these pieces to work extremely well, and to build agents that do more complex tasks. And there's hundreds of billions of dollars being poured into it.

I also "know" that the problem I have described - various common objects inside a robotic test cell, with several robotic arms and a defined goal that requires the machine to "invent" a Rube Goldberg machine to accomplish the task - is the type of problem that is very solvable with the current state of the art.
I have no idea what you are talking about in the context of the current conversation. You've leapt from plumbing to neural analysis of potato-chip bags without any reason. Responding to your one idea, though: a human must program that machine to '"know" a bag of chips crumples.' A human, with all of his/her emotional imperfections, will program that machine. And you, as a computer programmer, must appreciate that.

Tripler
Can we at least agree to hate the Soviets?
  #823  
Old 01-27-2018, 01:31 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Tripler View Post
Pick any one. Go from there. . .



The machines you describe are dependent on the data fed to them. For example, if the Soviets/Russians decide to feed bad data into the machine, then you'll have bad output.



I have no idea what you are talking about in the context of the current conversation. You've leapt from plumbing to neural analysis of potato-chip bags without any reason. Responding to your one idea, though: a human must program that machine to '"know" a bag of chips crumples.' A human, with all of his/her emotional imperfections, will program that machine. And you, as a computer programmer, must appreciate that.

Tripler
Can we at least agree to hate the Soviets?
Ok, so let's say the classifier says the environment is in state S0. That's what the classifier thinks is true "right now". S0 is just a tuple of several matrices - some for position, some for geometry, some for color, some for velocity, etc.

The simulator/predictor is a neural network such that Predictor_Convolve(S0) = the predicted state at t0 + dt. That is, it's making the prediction that after a small amount of time there will be a new state.

You can obviously keep re-running the predictor, and the predicted states are going to become increasingly uncertain for moving objects and stay pretty firm for stationary objects.

The key trick is that after dt actually passes, you feed what the environment actually did back to the predictor, and you adjust its matrix of numbers in a way that will cause it to give more accurate readings next time.

Then the other key component of this system is a planner. This is a system that guesses possible paths that might accomplish your goal. So if the goal is "shove the red ball to the left, touching nothing else", the "goal" is just a matrix of numbers that contains a shift to the red ball's position. The planner will come up with guesses as to possible sequences of robotic-arm motions that might accomplish what you want.

The planner's guesses get optimized by comparing them to what the predictor thinks will happen.

And then the system picks the best path and executes it. It uses the results from that path to update the planner.

Given enough data, the planner has "machine intuition".

This is where this starts to really work. These algorithms need not be even a tiny fraction as good as human brains. But if you can give them the collective experience of a million separate robots working for 1 year, that's a million years of experience. Or maybe 1000 real robots and 999000 simulated robots. Either way, this vast pool of data will mean that the predictor has truly "seen everything". The planner has tried many, many strategies and knows for a given configuration what type of things are actually going to work.

This is why you get superhuman performance. Your machine has far more experience doing what it does than any human alive. Also, it always does its best. At all times, it's faithfully working out the optimal answer from the data it has. It never gets tired or angry or bored.

You can see how this type of algorithm slowly gains on humans. You could build one that knows how to fight jets in a dogfight. It has millions of years of experience in aircraft simulators and a smaller amount of real flight time. So it's always going to be calculating the path that optimizes its chance of victory, using expected value calculated from the sum of the outcomes that typically happen in a given scenario.
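Stripped to its skeleton, that classifier/predictor/planner loop is model-based reinforcement learning: predict, act, compare, correct. Here is a toy, self-contained version - the one-dimensional "push the ball" world and every constant in it are invented purely for illustration:

Code:
import random

TRUE_GAIN = 0.8  # the world's hidden dynamics: how far a nudge moves the ball

def world(x, a):
    """The real environment: position x, action a, plus sensor noise."""
    return x + TRUE_GAIN * a + random.gauss(0, 0.01)

class Predictor:
    """Learns the gain online from (state, action, next state) feedback."""
    def __init__(self):
        self.gain = 0.0  # initial guess: actions do nothing
    def predict(self, x, a):
        return x + self.gain * a
    def update(self, x, a, x_next, lr=0.1):
        err = x_next - self.predict(x, a)  # what reality actually did
        if abs(a) > 0.05:                  # skip tiny actions (noisy estimate)
            self.gain += lr * err / a      # adjust toward observed dynamics

def planner(pred, x, goal, n_guesses=50):
    """Guess candidate actions; keep the one the predictor scores best."""
    return min((random.uniform(-1, 1) for _ in range(n_guesses)),
               key=lambda a: abs(pred.predict(x, a) - goal))

pred, x, goal = Predictor(), 0.0, 1.0
for _ in range(30):
    a = planner(pred, x, goal)  # plan against the learned model
    x_next = world(x, a)        # act in the real environment
    pred.update(x, a, x_next)   # feed reality back into the model
    x = x_next
print(f"final x = {x:.3f}, learned gain = {pred.gain:.2f}")

The same predict/act/correct skeleton, with real networks instead of one learned number, is what the systems he's pointing at scale up; whether it scales all the way to "machine intuition" is exactly what's in dispute here.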

Last edited by SamuelA; 01-27-2018 at 01:35 PM.
  #824  
Old 01-27-2018, 01:36 PM
Tripler is offline
Charter Member
 
Join Date: May 2000
Location: JSOTF SDMB, OL-LANL
Posts: 7,316
Quote:
Originally Posted by SamuelA View Post
Ok, so let's say the classifier says the environment is in state S0. That's what the classifier thinks is true "right now". S0 is just a tuple of several matrices - some for position, some for geometry, some for color, some for velocity, etc. . . .
You need to go basic for me. What exactly is a "classifier"? Prediction states and the rest of it rely on this basic definition.

Tripler
To me, they're the folks that say I can't say stuff.
  #825  
Old 01-27-2018, 01:51 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Tripler View Post
You need to go basic for me. What exactly is a "classifier"? Prediction states and the rest of it rely on this basic definition.
It's a neural network/digital filter that goes from sensor data to "what is the state of the environment, and how uncertain am I about it?" Image-recognition classifiers are some of the most common, but the same algorithms work on other sensor types, as well as on multiple sensors combined.

So in my example of a "Rube Goldberg constructing robot", the machine has several cameras and a lidar. There is, say, a red ball on the table, a chip bag, and a gear. The classifier converts the large digital video frames from the robot's cameras to

[objects found]
[identities for each object]
[positions in 6 axes for each object]
[velocities for each object]
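Written out as a data structure, that output might look something like this (the field names are inventions matching the list above, not any real perception API):

Code:
from dataclasses import dataclass

@dataclass
class DetectedObject:
    identity: str        # e.g. "red ball", "chip bag", "gear"
    pose: tuple          # position/orientation in 6 axes
    velocity: tuple      # per-axis velocity estimate
    confidence: float    # how uncertain the classifier is about this object

@dataclass
class ClassifierOutput:
    objects: list        # every DetectedObject found in this frame

frame = ClassifierOutput(objects=[
    DetectedObject("red ball", (0.1, 0.4, 0.0, 0.0, 0.0, 0.0), (0.0,) * 6, 0.97),
    DetectedObject("chip bag", (0.3, 0.2, 0.0, 0.0, 0.0, 0.0), (0.0,) * 6, 0.88),
])
print([o.identity for o in frame.objects])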

Last edited by SamuelA; 01-27-2018 at 01:53 PM.
  #826  
Old 01-27-2018, 02:58 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,228
Quote:
Originally Posted by SamuelA View Post
Oh. That's simple. This is right up your alley, even. The base subunit in your brain does the following about 1k times a second: an electrical signal arrives at a synapse. Vesicles mechanically dock in response and dump a neurotransmitter into a very narrow gap. Diffusion carries the neurotransmitter across, and an electric charge is added or subtracted from the receiver.

This is the same thing as Receiver = MAC(Sender). Branch(Receiver)

We can, right now, today, trivially make computer chips that do this fundamental operation in 1 clock cycle and run at ~2 GHz while doing it. Most modern GPUs run at between 1.2 and 2 GHz and contain thousands of hardware subunits doing this very operation.
That entire pontification is meaningless technobabble. I'm always amused when SamuelA launches into explaining how the brain works by comparing neurons with computer logic gates, and there you are. Remember, the brain is just a computer, because... signals! The brain executes branch instructions, according to SamuelA. He can not only tell us exactly how it works, he can even predict the performance of the future electronic brain compared to the human mind ("2 million times quicker, give or take"). That sort of prescience is, needless to say, breathtaking. Only not in a good way.

In fact, modeling how the mind really works is something that cognitive science is only in the most primitive early stages of even beginning to understand. What we do know is that the computational paradigm is only a small -- albeit important -- part of the theory of cognition. Of course it's "simple" in SamuelA's world -- so is everything. Simple, and wrong. The brain is "computational" only in the most trivial, unscientific sense of the word.

Modeling how the mind works is also of only marginal relevance to AI, as the most effective practical AI's today have been built by applying a wide range of different technologies and heuristics that have practically zero relationship to how the mind may or may not work. Their potential has also been consistently overestimated. When the first language translation systems appeared, it was widely believed that language would soon cease to be a barrier to human communication and human translators would all be out of business. And then someone tried translating "The spirit is willing but the flesh is weak" into Russian, and the machine rendered "The liquor is good but the meat has gone bad", which became emblematic of the magnitude of the contextual problem and AI over-optimism in general.

To say that SamuelA's ideas about the brain and AI are gross oversimplifications would be incorrect. They are in the category of "not even wrong", completely missing the fundamental nature of the problem. The best way I can describe it is that if there had been a SamuelA a couple of hundred years ago, he would be positing that the future of aviation will be premised on dipping yourself in glue, covering yourself with feathers, and flapping your arms real hard. It's simple!

To be clear, I believe in the future of computational intelligence and we've made big strides since the early days of AI, but we have a very long way to go. IBM's DeepQA project, for example, has impressive potential but like all other AIs it still operates only in very narrow domains of competency, and has to be painstakingly trained in each one. Further, it's almost impossible to predict the trajectory that emergent systems will take, and even less so their societal impact. No, it isn't "simple".
  #827  
Old 01-27-2018, 03:04 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
Quote:
Originally Posted by wolfpup View Post
And then someone tried translating "The spirit is willing but the flesh is weak" into Russian, and the machine rendered "The liquor is good but the meat has gone bad",
I thought that was the correct translation for Russian.

Remember, in Mother Russia, AI translates you!
  #828  
Old 01-27-2018, 03:19 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
I'm a latecomer; I read the first post, then SamuelA's first post in the thread, and then this page.

Seems like his primary point is that if we had the physical details of a brain, down to the appropriate level, we could use it to build a simulator that functions substantially the same as the original.

That position seems reasonable if we assume that we don't need to simulate down to the level of quantum interactions (which I assume becomes problematic) and that our behavior is based on physics/energy, not some unknown component like a "soul."


Is the primary disagreement with how many decades or centuries it will take for humans to be able to capture the state of a brain?

Or is the disagreement that the level of detail required is so great that capturing the state will also alter the state so the result would be invalid?
  #829  
Old 01-27-2018, 03:24 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
Quote:
Originally Posted by wolfpup View Post
In fact, modeling how the mind really works is something that cognitive science is only in the most primitive early stages of even beginning to understand.
Agreed.

Quote:
What we do know is that the computational paradigm is only a small -- albeit important -- part of the theory of cognition.
Not following this point, seems like it's all computation. Can you expand on this point for clarification?
  #830  
Old 01-27-2018, 03:44 PM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,022
Quote:
Originally Posted by RaftPeople View Post
Is the primary disagreement with how many decades or centuries it will take for humans to be able to capture the state of a brain?

Or is the disagreement that the level of detail required is so great that capturing the state will also alter the state so the result would be invalid?
The disagreement (well, for me) is the idea that freezing a brain would ever, ever leave the state of the complex chemical reaction known as a "mind" intact and recoverable.
  #831  
Old 01-27-2018, 04:02 PM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,073
For me, the further disagreement is that we should go ahead and kill folks now, instead of letting them die, so that we can save some of the data. We don't know how the brain works. You and he may think it's computational, but it's not a model that we've developed yet. We don't know how dementia works. We don't have a way to save/store a brain. We don't know how to re-animate a brain. But, sure, let's go ahead and skip to the "inevitable" and start killing folks now to "save" something, when we know nothing at all about what we're doing. All of this because, to Sam, brain = computer. Maybe someday that's true. Today it is not. Today it's tech we cannot replicate, and when it stops working, it's gone.

What he proposes is not inevitable, in this, or any number of other areas. Hand-waving away discussion because something will happen, with no thought to how, or why, is ludicrous. Refusing to talk to posters who question him is childish.
  #832  
Old 01-27-2018, 04:22 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
Quote:
Originally Posted by Darren Garrison View Post
The disagreement (well, for me) is the idea that freezing a brain would ever, ever leave the state of the complex chemical reaction known as a "mind" intact and recoverable.
We do it for embryos today - the brain is just on a much larger scale - so from a cell-preservation perspective, it seems to work.

So it sounds like you feel that cell preservation alone would not be enough. That the electro-chemical soup they reside in needs to be preserved also?

That seems accurate for preserving exact point-in-time state, but for preserving general state (e.g. personality), it's possible that it could be re-created/re-balanced due to the state of the cells that naturally maintain those things. Kind of like waking up in the morning, but where it might take 5 days of waking up for the system to get back in harmony (maybe).

Although an additional complexity is that, to simulate on the computer, you would probably need to get down to the level of neuronal DNA methylation, because that drives synapse maintenance after learning and probably many other things. Meaning if you ignored it, you might have all the synapses mapped, but the system would not maintain them, because the flag that says to maintain a synapse is set in DNA methylation.
  #833  
Old 01-27-2018, 04:23 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by RaftPeople View Post
Agreed.

Not following this point, seems like it's all computation. Can you expand on this point for clarification?
THANK YOU. God damn, it was getting on my nerves - it was basically just Wolfpup arguing bullshit and interpreting every post like I was a Soviet spy, and then the fucking moderator taking his side, and there are like 10 other morons in here who keep parroting random shit and don't take the argument seriously.

I mean, we might both be wrong, RaftPeople - it might not be "just computation" - but that's what the evidence says at the present time!
  #834  
Old 01-27-2018, 04:26 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
Quote:
Originally Posted by Sunny Daze View Post
For me, the further disagreement is that we should go ahead and kill folks now, instead of letting them die, so that we can save some of the data.
Agreed, that's a tiny bit optimistic.
  #835  
Old 01-27-2018, 04:28 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by RaftPeople View Post
We do it for embryos today - the brain is just on a much larger scale - so from a cell-preservation perspective, it seems to work.

So it sounds like you feel that cell preservation alone would not be enough. That the electro-chemical soup they reside in needs to be preserved also?

That seems accurate for preserving exact point-in-time state, but for preserving general state (e.g. personality), it's possible that it could be re-created/re-balanced due to the state of the cells that naturally maintain those things. Kind of like waking up in the morning, but where it might take 5 days of waking up for the system to get back in harmony (maybe).
You know how you can actually do quite a lot of things to someone's brain, and if they go on living, they have about the same memories and personality? You can shut it all down with anesthetics. Cut the blood supply and use cold water as blood. Throw in all kinds of drugs that have a profound effect on specific neurotransmitters. Destroy whole sections. And for the most part, most of their memories and personality stay the same, and people can, within limits, even compensate for missing portions.

So I think this evidence indicates that the synaptic weights (you measure them by simply counting how many receptors are on the receiving cell and what state they are in) and the wiring topology are probably all you actually need. The "soup", the myelination states, etc., are probably all temporary. It's like starting up a computer system again where you've cleared the RAM but the hard disk is the same - and this computer system is so robust that you can scramble a random 30% of the bits on the hard disk and it will still run the same as before.

You would physically do this count by tagging the receptors with a molecule that is visible under an electron microscope and specific to the receptor type. So your reconstruction only needs to recognize the rough shapes of the actual axons and their probable destinations (for the topology), and there's a strength estimate from counting the number of tags of a particular type at a synapse. Probably all the rest of the information doesn't even matter.

And even if that isn't enough, it doesn't matter. Minds are about change. If you can get even sorta close, I think a person could re-learn everything, much like someone post-stroke can re-learn basic tasks. Except if their brain is no longer squishy, inaccurate flesh, but neural weights in a very large, very fast, and accurate computer, it would be like re-learning everything with an IQ of 300.

Last edited by SamuelA; 01-27-2018 at 04:33 PM.
  #836  
Old 01-27-2018, 04:31 PM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,022
Quote:
Originally Posted by RaftPeople View Post
So it sounds like you feel that cell preservation alone would not be enough. That the electro-chemical soup they reside in needs to be preserved also?
My (non-confirmed, because nobody knows) suspicion is that the mind is a process, and that slicing up a preserved brain can no more revive that process than slicing up the CPU and RAM of a computer can restore the game you were playing when the power went out, no matter how fine your microtome slices.
  #837  
Old 01-27-2018, 04:38 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by RaftPeople View Post
Agreed, that's a tiny bit optimistic.
Is it? What if the 95th-percentile outcome for their illness and present stage is 1 week to live? Or 1 month? You're trading off a small amount of remaining lifespan, where the person is probably in a lot of pain and fear, for a non-zero chance of a positive outcome.

I mean, there's no negative outcome. If the cost to freeze them plus 300 years of coolant is less than the cost of 1 week of ICU care plus the cost of a funeral, then you're ahead financially.* Even if you can't ever revive them in 300 years, it's still positive EV: you take the chance that you can revive them times the utility if you succeed, and that's obviously a large positive number, however you weight it.

*the coolant is about $1000 a decade, $10k a century, and that neglects the fact that you could invest money at 1% interest to pay for next century's coolant.

There are also site and security costs, but you could easily put a million patients per underground complex and thus get the cost down considerably.
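To put rough numbers on that footnote, here's a sketch using my own figures above ($100/year of coolant, 1% interest) -- assumptions, not quotes from any actual provider:

Code:
# Back-of-the-envelope cryonics funding, using the post's own figures:
# ~$100/year of coolant and a 1% annual return. Assumptions, not quotes.
coolant_per_year = 100.0
rate = 0.01

# A perpetuity paying $100/year at 1% needs principal = payment / rate,
# so a one-time ~$10k deposit covers coolant indefinitely.
endowment = coolant_per_year / rate
print(endowment)  # 10000.0

# Expected value of freezing: p(revival) * utility(success) - net cost.
def expected_value(p_revival, utility_success, net_cost):
    return p_revival * utility_success - net_cost

# Even a 1% revival chance dominates if the assigned utility is huge.
print(expected_value(0.01, 1e7, 10000.0))  # 90000.0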

Last edited by SamuelA; 01-27-2018 at 04:40 PM.
  #838  
Old 01-27-2018, 04:40 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
Quote:
Originally Posted by Sunny Daze View Post
We don't know how the brain works. You and he may think it's computational, but it's not a model that we've developed yet.
While technically true that we don't have a model developed, there aren't too many options:
1 - follows a set of rules (e.g. physics)
2 - random (e.g. quantum)
3 - something else outside of nature

Would you argue that the brain relies on something other than physics?
  #839  
Old 01-27-2018, 04:41 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by RaftPeople View Post
While technically true that we don't have a model developed,
No it isn't, it's just not technically true that we have proven our model beyond all doubt by creating an emulation that does exactly the same thing. Yet. We have a model. There are neurons. They fire when their membrane voltage reaches a threshold. The output signals are all-or-nothing. They travel at speeds far below lightspeed, limited by retransmission at the nodes. Each synapse has a weight and either adds to or subtracts from that voltage level. Some neurons are connected to glands that can emit chemicals that go brain-wide and affect all the synapses of a specific type.

Everything here is straightforward to emulate; you just need a computer with sufficient memory, and bandwidth to that memory, to even remotely approach real-time speeds. And you need a scan of all the synapses, which is very expensive to get.
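For what it's worth, that point-neuron model is easy to write down. A minimal leaky integrate-and-fire sketch, with toy textbook parameters and no claim of biological fidelity:

Code:
import numpy as np

# Toy leaky integrate-and-fire network: each unit sums weighted
# all-or-nothing spikes and fires when its voltage crosses threshold.
# Parameters are textbook toys, not biologically calibrated.
rng = np.random.default_rng(0)
n = 100
W = rng.normal(0.0, 0.5, (n, n))   # synaptic weights (+ excites, - inhibits)
v = np.zeros(n)                    # membrane voltages
threshold, leak, v_reset = 1.0, 0.9, 0.0

spikes = rng.random(n) < 0.1       # seed some initial activity
for step in range(50):
    v = leak * v + W @ spikes.astype(float)  # integrate incoming spikes
    spikes = v >= threshold                  # all-or-nothing output
    v[spikes] = v_reset                      # reset the units that fired
print(int(spikes.sum()), "units firing after 50 steps")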

Last edited by SamuelA; 01-27-2018 at 04:45 PM.
  #840  
Old 01-27-2018, 04:58 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,228
Quote:
Originally Posted by RaftPeople View Post
Not following this point, seems like it's all computation. Can you expand on this point for clarification?
The brain is "all computation" in the trivial sense of electrochemical signaling. No one disputes that the brain is a mechanistic physical device, but that has never been a question in any credible field of study. In the actually interesting and formal meaning of computation in computer science and cognition, the essence of computation -- the essence of how computers interpret the world -- is that algorithms perform syntactic operations on abstract symbolic representations, and thereby computationally derive the semantics of how they understand the world.

One of the key questions in cognitive science, and specifically in the computational theory of mind (CTM), is the extent to which mental processes are, in fact, computational. There is strong evidence that some are, and also evidence that many are not or that we just don't know how to characterize them that way.

CTM is a strong theory, but no one pretends that it's a complete explanation of how the mind works, much less that it can all be described in terms of classic computation. Mental imagery is a good example of the controversy. Do we process images according to this computational syntactic-representational model, or do we have an internalized mental "movie screen" on which we project remembered images? There's evidence for both theories. Some have shown that the visual cortex is involved in recollections of mental imagery (supporting the latter), while others provide evidence for the former (for instance, a priori knowledge influences the interpretation of mental images, making them immune to things like the Müller-Lyer illusion). CTM remains an important pillar of cognitive science, but the computational nature of the mind remains controversial and elusive.
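To make "syntactic operations on symbolic representations" concrete, here is a toy illustration (an editorial sketch, not anything specific from the CTM literature): a rewriter that never "understands" numbers, yet derives correct sums purely by pattern-matching symbols.

Code:
# Computation as pure syntax: Peano-style addition by rewriting symbol
# strings. The rules know nothing about numbers; the arithmetic
# semantics falls out of the syntactic manipulation.
def add(x, y):
    # Numerals are nested successor symbols: 3 is S(S(S(Z)))
    if x == "Z":
        return y                           # rule 1: Z + y -> y
    assert x.startswith("S(") and x.endswith(")")
    return "S(" + add(x[2:-1], y) + ")"    # rule 2: S(x) + y -> S(x + y)

def num(n):
    return "Z" if n == 0 else "S(" + num(n - 1) + ")"

print(add(num(2), num(3)))  # S(S(S(S(S(Z))))), i.e. 5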
  #841  
Old 01-27-2018, 05:12 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by wolfpup View Post
The brain is "all computation" in the trivial sense of electrochemical signaling. No one disputes that the brain is a mechanistic physical device, but that has never been a question in any creditable field of study. In the actually interesting and formal meaning of computation in computer science and cognition, the essence of computation -- the essence of how computers interpret the world -- is that algorithms perform syntactic operations on abstract symbolic representations, and thereby computationally derive the semantics of how they understand the world.

One of the key questions in cognitive science, and specifically in the computational theory of mind (CTM), is the extent to which mental processes are, in fact, computational. There is strong evidence that some are, and also evidence that many are not or that we just don't know how to characterize them that way.
I'm going to give you another shot here, because you're actually saying something interesting. I don't quite understand how what you are saying matters. Instead of just calling me stupid, let's say for the sake of argument that I am stupid.

Suppose I've ripped open the guts of some machine I don't really understand, but I find that the wires all come together into little parts that I do understand, because all they seem to do is add things up and emit pulses. How does what you are saying prevent me from making another copy of that machine by tearing one down and slavishly duplicating every connection?

Another really fascinating question: let's say I build a machine-learning classifier real quick, but one that doesn't start out with tagged data. It just looks at camera images with a LIDAR overlay and starts to group contiguous objects together.

Say there are just 2 objects you ever show it, from different angles and distances.

At first the classifier might think there are hundreds of different objects, but suppose some really clever algorithm converges it back down to just 2, seen at different rotations.

So at the end of the process, you have a sequence of stages that goes from <input sensors> to [ X X ], where the outputs are [ 0 0 ] (neither present), [ 1 1 ] (both present), [ 1 0 ] (object A present), and [ 0 1 ] (object B present).

I'm really curious how this machine, which we could actually build today, "counts" in your computational theory. Note that we don't have to build it as a python script; we could program separate computer chips to do each stage of processing and interconnect them physically, making it resemble a miniature version of the real visual cortex.
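A stripped-down version of that thought experiment fits in a few lines. Here's a sketch -- with invented 2-D feature vectors standing in for whatever the camera+LIDAR front end would emit -- that clusters unlabeled observations into two identities and reports the presence bits:

Code:
import numpy as np

# Sketch of the untagged classifier: cluster unlabeled feature vectors
# into k=2 object identities, then report [A-present, B-present] bits.
# The "features" are invented stand-ins for a camera+LIDAR front end.
rng = np.random.default_rng(1)

def kmeans(X, k=2, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Fake training views: object A clusters near (0,0), object B near (5,5).
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centers = kmeans(X)

def presence_bits(scene_features, centers):
    bits = [0] * len(centers)
    for f in scene_features:
        bits[int(np.argmin(((centers - f) ** 2).sum(-1)))] = 1
    return bits

# A scene containing only object-B-like features:
print(presence_bits(rng.normal(5, 0.3, (3, 2)), centers))  # [0, 1] or [1, 0]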
  #842  
Old 01-27-2018, 05:13 PM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,073
You think you have a model. You might have ideas of where to start. Your core argument is that because a brain uses signals and a computer uses signals, they must in the end be equivalent.

How does the brain use those signals? Different types of signals mean different things. Sometimes the same signals mean different things. Brains re-route to work around damaged sections, sometimes. They self-repair, sometimes. I could go on, but at a high level, the point is that we don't yet understand enough about the brain to build a model of brain function to emulate. You are at step one, which seems plausible, but that's not the same thing as a model that will, in the end, be the right model. We don't know how the brain works. We need to know that in order to know what we want the computer/AI to do. Simply saying we want to replace the brain with the AI is not sufficient. It is aspirational, but it is not in any way a methodology for getting there.

A scan of all the synapses will achieve little on its own, because we don't understand what they do. It's only a step. It's like mapping the human genome: great, we've got it, but on its own, without further research, it's just data.

Last edited by Sunny Daze; 01-27-2018 at 05:13 PM.
  #843  
Old 01-27-2018, 05:58 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
Quote:
Originally Posted by SamuelA View Post
No it isn't, it's just not technically true that we have proven our model beyond all doubt by creating an emulation that does exactly the same thing. Yet. We have a model. There are synapses. They fire when they reach a voltage threshold. Signals are all or nothing. They travel at speeds much less than lightspeed, limited by retransmission nodes. Each synapse has a weight and either adds or subtracts from that voltage level.
It's more complicated than that. For example, one single neuron is an entire network all by itself. The synapses on the dendrites that receive signals trigger localized spiking/signaling (local as in just that area of the dendrite) that pre-processes information before the signal reaches the soma.

In addition, there are different types of connections, some electrical, some using neurotransmitters; then there are glia with gliotransmitters, and neuronal DNA methylation that triggers protein creation to maintain synapse strength due to learning, etc. etc. etc.

There is no current understanding of all of the pieces that either perform computation or maintain/alter the physical state that impacts computation.

I agree with you from the perspective that it's physical and could theoretically be simulated in the future, but I do not agree that we have enough information today to simulate even one single neuron properly/completely.
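To make the "one neuron is a network" point concrete, here's a deliberately cartoonish two-compartment sketch with invented parameters (real dendritic trees have many compartments and rich channel dynamics). A point-neuron that just summed raw inputs would treat both cases below identically:

Code:
import math

# Toy two-compartment neuron: each dendritic branch applies its own
# nonlinearity (standing in for local dendritic spikes) before the
# soma sees anything. Invented parameters; a cartoon, not biophysics.
def branch_output(drives, weights, gain=4.0, thresh=1.0):
    local = sum(w * d for w, d in zip(weights, drives))
    return 1.0 / (1.0 + math.exp(-gain * (local - thresh)))

def neuron_fires(branches, weights, soma_thresh=0.9):
    # The soma sums pre-processed branch outputs, not raw synapses.
    soma = sum(branch_output(d, w) for d, w in zip(branches, weights))
    return soma >= soma_thresh  # all-or-nothing output spike

w = [[0.6, 0.6, 0.6], [0.6, 0.6, 0.6]]
clustered = [[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]  # same total drive...
scattered = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]  # ...grouped differently
print(neuron_fires(clustered, w), neuron_fires(scattered, w))  # True False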
  #844  
Old 01-27-2018, 06:07 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,757
Quote:
Originally Posted by wolfpup View Post
The brain is "all computation" in the trivial sense of electrochemical signaling. No one disputes that the brain is a mechanistic physical device, but that has never been a question in any creditable field of study. In the actually interesting and formal meaning of computation in computer science and cognition, the essence of computation -- the essence of how computers interpret the world -- is that algorithms perform syntactic operations on abstract symbolic representations, and thereby computationally derive the semantics of how they understand the world.
Sure, at the higher level/functional level there are very interesting and difficult questions about how to model the brain.

But from the perspective of physical simulation, it's possible to succeed without understanding how the higher-level computation happens. Determining what level of physical detail to simulate is clearly non-trivial, though, as is capturing an accurate state at that level of detail.
  #845  
Old 01-27-2018, 10:55 PM
crowmanyclouds's Avatar
crowmanyclouds is offline
Guest
 
Join Date: Sep 2005
Location: ... hiding in my room ...
Posts: 4,761
Quote:
Originally Posted by SamuelA View Post
Is it? What if the 95th percentile outcome for their illness and present stage is 1 week to live? Or 1 month? You're trading off a small amount of remaining lifespan where the person is probably in a lot of pain and fear for a non zero positive outcome.

I mean, there's no negative outcome. If the cost to freeze them and for 300 years of coolant is less than the cost of 1 week of ICU care and the cost of a funeral, then you're ahead financially*. Even if you can't ever revive them in 300 years, it's still a positive EV, and you measure the chance that you can revive them times the utility if you succeed, and that's obviously a large positive number, however you weight it.

*the coolant is about $1000 a decade, $10k is a century, and that neglects the fact that you could invest money at 1% interest to pay for coolant next century.

There are also site and security costs, but you could easily put a million patients per underground complex and thus get the cost down considerably.
Even if your best outcome is possible, I still haven't seen you show any reason to believe that anyone 300 years from now will want to revive, say, me. (And why should I assume they would do it for noble purposes and not, again say, to make me into a robot sex slave with a "real people personality"!)

Quote:
Originally Posted by SamuelA View Post
{...} and there are like 10 other morons in here who keep just parroting random shit and who don't take the argument seriously. {...}
We're taking this pitting very seriously . . . it's just that you, and others, are ruining this pitting by letting it turn into yet another "debate" with YOU!

CMC fnord!
__________________
It has come to my attention that people are stupid.
We, the smart ones, should be coming up with plans for how to remedy this, but we're all too busy watching Battlestar Galactica. wierdaaron
  #846  
Old 01-27-2018, 11:33 PM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,022
Doctor to patient: "Well, we have two options here. Option one is that you get to live for a few more weeks--months at best. The quality of your life will go down, but we will try to manage the pain as much as possible, and you will have some more time with your family. Option two is that we cut off your head now, and maybe in a few hundred years total strangers will decide to make a computer program based off a thumbnail sketch of your memories. We'll let you think it over."

I don't know why you keep acting like this is some sort of escape from death. You will still be dead. No matter how good the little computer game based on your brain might be, you'll still be dead. You'll never know anything about it, ever, because you will be rotten, dead meat banished to eternal insensate oblivion. So why should you care whether some computer program in the future thinks that it is you? It won't be you. You won't know about it. You will be nothing. Ever again.
  #847  
Old 01-28-2018, 12:51 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
I'm starting to feel I am not doing my part, not having gotten even a single ignore, and he doesn't even know my name.

Part of this is because I am not as versed in some of these subjects as others, so I don't have much to contribute to an argument about computational models or how they relate to simulating and replicating consciousness and stuff like that. I could go to Wikipedia U and get an "I read an article on it" degree, but the nitty-gritty of it doesn't interest me quite enough to devote even that much time to it.

I like to think of myself as being a bit above average intelligence, and my interest in science and technology puts my knowledge of such things well above the average layman's, but far below an expert's. Basically, it qualifies me, with an additional 6-10 years' worth of intensive study, to actually start to understand what is being explored at the most fundamental levels.

Here's the thing, though. There are experts in these fields. There are some really smart people who have devoted their entire lives to understanding these concepts, and they have much less confidence in how those fields will develop than Sammy, who is at best a well-informed layman, does.

It's fun to explore our future, and the possibilities that may lie ahead of us. But the entire reason for that is because the future is uncertain. None of us knows what is around the next bend. I think that at some point in the future (assuming trump doesn't kill us all), we will be unrecognizable as we become more one with machines, achieve functional immortality, and spread across the galaxy and universe. As for the timeframes, or the precise paths taken to reach this state, they are many and varied, and we don't actually know which ones are viable yet.

It is hard for me to get mad at optimism. The arrogance is annoying, but it does come from a place of believing that mankind can and will achieve many great things.

I like being ignorant. It means that there are things I get to learn. As long as I acknowledge that I don't know everything and am not the expert on everything, I find that I learn something new in nearly every interaction. It is when it is assumed that one does know everything that one can no longer learn. And that is where Sammy is, he thinks he knows everything there is to know, and so refuses to learn new things.

This turns the innocence and wonder of not knowing into the contempt for learning new things that is willful ignorance. Willful ignorance leads to many irritating and antisocial behaviors, one of which is racism, which our friend has been showing signs of lately. Not the racism of hate or contempt, but the racism of ignorance. Ignorance can be cured easily, as can racism based on ignorance. Willful ignorance is not so easily treated, and really requires some level of humility on the part of the willfully ignorant in order to change.

Humility is also not something that Sammy has demonstrated. It would be the first sign of growth in him as a person.
  #848  
Old 01-28-2018, 02:47 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by k9bfriender View Post
Here's the thing though. There are experts in these fields. There are some really smart people who have devoted their entire lives to understanding these concepts, and they have much less confidence in how they will develop than Sammy, who is at best a well informed layman, does.
The actual experts are saying the same things. The actual world-renowned experts in nanotechnology think self-replication is very feasible. The actual world-renowned experts in AI think that automating half the economy will be a piece of cake with the present state of the art. (That means self-replicating macroscale factories, by the way.)

The actual experts in neuroscience have scanned and emulated sections of animal brains and have gotten promising results. They have managed to duplicate at a high level most of the behavior we see.

Fuck, the actual experts in flight think hypersonic aircraft are very possible. It's the engineers trying to deliver who are struggling.

Last edited by SamuelA; 01-28-2018 at 02:50 PM.
  #849  
Old 01-28-2018, 02:59 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
Quote:
Originally Posted by SamuelA View Post
The actual experts are saying the same things. The actual world renowned experts in nanotechnology think self replication is very feasible. The actual world renowned experts in AI think that automation of half the economy will be a piece of cake with the present state of the art. (that means self replicating macroscale factories, by the way)

The actual experts in neuroscience have scanned and emulated sections of animal brains and have gotten promising results. They have managed to duplicate at a high level most of the behavior we see.
And there is no disagreement in those fields at all?

All the experts agree that the finish line of their field is within sight?

Sure, self-replication is easy; living things do it all the time. So just do what living things do, and we are all good, right? And of course, when you do that, you will keep none of the shortfalls and limitations of living things, but have only the robust perfection of machines?

AI has come a long way, and does many things quite well, and will probably do other things better in the future. But just putting white-collar managers out of work because a computer can allocate resources better, faster, and cheaper than a person is not the same thing as actually replicating human thought.

The brain scans have been "promising" in that we are learning about things on that scale. They are not "promising" in that we now understand everything about them to the point of being able to make accurate predictions as to how they work, or even a timeframe or roadmap to seeing how they actually work.

But, that all comes back to my point. Yes, there are experts who are optimistic about their fields. But there are also experts that are not so optimistic. You only listen to the first group, and assume that the second group doesn't know what they are talking about, because they do not confirm your positions.

Ignoring the group of experts that are less optimistic about the outcomes is willful ignorance, which leads to the arrogance that many posters have indicated makes you rather off putting.



ETA: Your second edit, about hypersonic aircraft (which is a new topic), actually explains what you are lacking. You are listening to the optimistic theorists while ignoring the engineers -- the people who actually have to make theory and reality meet.

Last edited by k9bfriender; 01-28-2018 at 03:01 PM.
  #850  
Old 01-28-2018, 03:03 PM
Morgenstern is offline
Guest
 
Join Date: Jun 2007
Location: Southern California
Posts: 11,866
Quote:
Originally Posted by SamuelA View Post
...

Fuck, the actual experts in flight think hypersonic aircraft are very possible. It's the engineers trying to deliver who are struggling.

The Air Force flew the X-15 at speeds over Mach 6 more than 56 years ago. Catch up with technology, dude.