Machines of Loving Grace - God Gene/AI link?

lekatt, I may think you’re an utter nutter, but I don’t think you’re (that) stupid.

Sorry everyone, I should know better. Note to self: No replying to lekatt. Bad, Digital Stimulus, bad! I hate it when I have to whack myself on the nose with a newspaper…

Again, robots & wrenches do not require morality. A robot follows its programming. When confronted with a decision, morality does not enter into the situation; only its programming does. The morality lies with the programmer and the user. If the user makes the robot shoot everyone it can see, that is the problem of the user, not the robot. It’s no different from a human using a gun to shoot everyone they can see. Robots don’t have any more discretion than a gun does.
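To make the point concrete, here’s a minimal, purely illustrative sketch (the function and rule names are my own invention, not any real robotics API): the robot’s “decision” is nothing more than branching over a rule table its programmer supplied ahead of time, so any morality lives in the rules, not in the machine.

```python
# Hypothetical sketch: a robot "deciding" is just a lookup over
# programmer-supplied rules. Same hardware, different programming,
# different behaviour -- the discretion belongs to the programmer.

def decide_action(target_detected: bool, rules: dict) -> str:
    """Return an action by consulting the programmer-supplied rule table."""
    if target_detected:
        return rules.get("on_target", "hold")   # whatever the programmer chose
    return rules.get("default", "patrol")

restrained = {"on_target": "report", "default": "patrol"}
reckless = {"on_target": "fire", "default": "patrol"}

print(decide_action(True, restrained))  # -> "report"
print(decide_action(True, reckless))    # -> "fire"
```

The two “robots” above are identical except for the rule tables handed to them, which is the whole point: the gun analogy holds because nothing in the machine itself weighs the choice.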

I became who I am through my brain (and body) developing according to normal human growth patterns and encoding experiences in a physical format. My brain encompasses a good part of me; it did not create me.

Yeah, you’re right. Gotta stop doing that. Pass me the paper.

OK, you became who you are through your brain (and body) developing according to normal human growth patterns and encoding experiences in a physical format.

There is nothing specific here; “normal,” “patterns,” and “encoding” tell us nothing. If the experiences were in a physical format, why can’t scientists find any physical evidence of mind in the brain? There are no memory cells, no cognitive cells, etc. Actually, nothing physical has been found in the brain hinting at the mind. Scientists measure electrical energy when they “map” the brain; they do not measure the brain itself. Do we know if the electrical activity actually comes from the brain? Maybe it flows to the brain from a spiritual source. There is nothing here even close to proving anything; it is all supposition. Did you know people have been walking around with hardly any brain? Science doesn’t know, and that is the truth.

http://www.mysteries.pwp.blueyonder.co.uk/6,2.htm

I know you don’t want to be interrupted by contrary science, but we have to consider it all.

I think this is one of the few times I’m actually in agreement with lekatt. Certainly I imagine a lot of A.I. research is of the “I wonder if we can do this?” type, but I think a good amount of it (or at least, it will be if reasonable A.I.s become a reality) is about creating something equivalent to a person that we don’t need to worry about morally: things with the capabilities, intelligence-wise, of a human, that we can get to do all the things we’d prefer not to let humans do. Soldiers are a good example.

The problems will come if we get to the point where A.I.s are indistinguishable from humans when observed from the outside; a computer program equivalent to a human brain would be much easier to read on the inside, and for that reason I would imagine that the first lot of truly equal A.I.s will be slaves. For that reason, and because I imagine spiritual people would be uncertain where souls (or the like) come into it, I think there will be problems.

Unlike **lekatt**, I don’t believe truth will inevitably win out. So I suppose I’m more pessimistic or realistic than him, depending on what you think.

Balls. I knew that agreement would be premature.

Cells, no. Specific regions of the brain associated with memory, cognition and so on? Yep. Your own cite supports this; assuming it is correct (it cites no journals, after all, and thus has the legitimacy of opinion at best), the doctors in the case knew which part of the brain was causing James’s problems. Their removal of it had unforeseen consequences; but it also had the foreseen ones. And how did they know what to do? Why, they knew the hemispheres of the brain actually control the opposite side of the body; it’s as if the mind’s control of the body can be pinpointed to a particular part of the brain. Shock. You can certainly argue that the brain is still mysterious, but evidence of mind can certainly be found.

I think I may keep track of the times your own cites disagree with you. It seems to be a lot. So… 1.

I think you like to ignore anything that is contrary to what you believe, but how will you learn anything by that method? So you resort to name-calling and reputation-bashing, because you really don’t have the evidence or proof to counter contrary evidence, just theories and opinions. I would think everyone would want to know all the evidence and, eventually, the truth.

They can and have.

Wrong.

Wrong.

Wow, you can’t possibly be arguing about this and not know how neurons function, can you? Oh, and wrong.

You are arguing from ignorance. Educate yourself.

No, they know exactly where it comes from, absolutely and 100%. Nothing ‘spiritual’ has been found, nor is needed to explain any functionality.

In order to validate a hypothesis, you have to provide evidence for that hypothesis. Pointing out problems or anomalies with another hypothesis does not automatically validate yours. You have yet to provide a single piece of evidence for yours, and so it can be dismissed out of hand.

There is no such thing as contrary science; there is only science. If it really is science, and this is not, then that’s all it has to be. If it disagrees with something established, then it has to do so with evidence, not with conjecture & wishful thinking.

Dammit I did it again.

Good, I am glad you replied. Let me ask this question: how do you know specific regions of the brain are associated with memory? Is it through measuring the electrical waves of the brain? Do certain areas “light up”? Exactly how does this work?

You’re talking about autonomy, which I’m going to guess is quite far off. They aren’t making ‘soldiers’; they’re making robots: remote-controlled or programmable machines. These machines are not autonomous; they have no morality, no discretion, and no decision-making ability beyond what is programmed into them. They are no different from tanks or guns. The decisions are in the hands of the programmers & the controllers.

Now, yes. We’re a long way off from even mildly autonomous types which can be safely left to do their duties. But draw a line between robots and soldiers. Tell me at what point we have to ask them to fight instead of telling them. And at what point should we care when they die, not just because of the loss of a useful tool, but because a being has been lost? You’re right, we’re nowhere near that yet. But we may get there.

I did a search for brain memory cells and found no record of any, no need to go further.

Interesting. Now, I’d probably be with you on the idea that there are no cells used only for memory in the brain (regions, yes; a specific type of cell only for memory, no). But out of interest, how did you do this search? I put “brain memory cells” into Google, and the first page alone gives examples of specific areas of the brain used for memory. If I can find that with such a rudimentary search, what exactly did you do in your (I am sure) exhaustive, careful search? I am eager to learn your searching tips.

I found some information on neuroplasticity, or changes in neurons that were said to be memory storage. It did not say how this was known. That is what I want to know: how are these “brain” measurements made? How is an area of the brain determined to be memory, and so on? Is it really a brain measurement or a brain-wave measurement? Can you answer this for me?

In a self-imposed stab at penance, I’ll make an attempt at actually fostering discussion. My thoughts on (the basic) motivations behind creating AI are above, so I’ll go in another direction.

I’ve just now read the Brautigan poem. The way I read it, there’s little to no link to God (unless one assumes a Spinozan pantheism). Rather, “loving grace” takes on a non-theological, utopian view of nature worship, combined with a good dose of technology fetishism. The main concepts (as I read it) are: 1st stanza, harmony; 2nd stanza, peace; 3rd stanza, loving grace. Perhaps the theological allusions are intentional; I find it somewhat sad that some cannot conceive of a paradise not dependent on God.

The only thing that gives me pause is the “watched over” in the last stanza. While passive, the phrase implies potential intervention. However, the “mutually programming” found in stanza 1 belies the notion that Brautigan’s machines will have ultimate control. This is further carried forward by the phrase “cybernetic ecology” in the final stanza: while “cybernetic” does refer to the study of control systems, that’s along the lines of analyzing system processes rather than subjugation; whereas “ecology” is the study of the relations and interactions between organisms and their environment.

In short, while I haven’t heard the NPR segment, I think the poem does not make any substantial connection to God. And pop culture treatment of AI, as tremendously uninformed and misleading as it is, has to be sensational to get traction; what could gain more notoriety than linking it to “replacing God”?

On preview, I see the thread has gone its merry way. Oh, well, I did my penance.

Dagnabbit, missed the edit window. I just wanted to make sure that it was understood that the above was self-directed; I wanted to provide constructive discussion fodder rather than just object to lekatt. In no way was I implying that others are not fostering discussion.

Replying to **lekatt** by PM, rather than hijack further. Sorry all.

I think the pop culture impression of A.I.s is important though, because that will shape what we do with them. It’s one of those things where opinions and preconceptions have been formed on something that doesn’t even exist yet; A.I.s may be what people think they’ll be like because that’s what they’ll be designed to be or to avoid being.

I thought this might be of interest (seeing as how memory has become a topic). A fairly technical article concerning current memory research from Chemical & Engineering News entitled Hold That Thought (published 09/03/07).

One sentence synopsis, as given by the sub-title: Slowly but surely, scientists are unveiling the complex chemical underpinnings of memory.

Important like Heinlein’s influence and inspiration on a generation of scientists? Sure. Important like the Bush Administration’s “creating reality”? Not so much.*

AI is a technology, not “We’ll be greeted as liberators”. In other words, AI is more like a cell phone; various capabilities will be produced and/or refined that may or may not find purchase. You’re right that pop culture may serve as an inspiration for anyone entering the field. But any technology is severely constrained by what came before it and only possible in relation to what exists; ISTM that dreams of what might be have little role in what is actualized.

*My apologies for bringing politics into this. In no way do I mean to divert discussion along those lines, but it was the most fitting example I could come up with.

Thank you for steering this trainwreck back on track. At first, I was a little stoked that I had started a thread that produced a nutter response, but it does get tedious.

There was no link to the God gene/replacing God in the NPR piece; that was my own synthesis. (I wonder which brain cells worked on that?) Humanity has used myth and natural phenomena to create Gods since who knows when, and now, at the dawn of artificial intelligence, one looks around at the various futuristic treatments of the idea, and it mostly gives one (or rather, me) the heebie-jeebies. Between that and all the freaks walking around with Bluetooth headsets talking to themselves, it makes me want to find a cabin on 200 good acres and write all y’all off. Of course, I’m not the most social person. Nor a wallflower. I guess I’m one who relies on the social lubricant.

My point is, should we be preparing for a new world of actual gods that we’ve designed?

I can’t help but be reminded of Isaac Asimov’s story *The Last Question*.

I agree that what God and religion do for their followers is marginalize morals into black-and-white zones. The more indoctrination into that religion, the less thinking the follower must do, because their mind has been closed for them. There’s little gray moral area left for the follower to waffle on.

I can’t imagine that true AI will result in us just shutting our brains off morally. If anything, I fear we’ll ignore certain areas intellectually. Can they get to a state of “awareness” that is higher than ours? Will they become smarter than us? Able to think more abstractly than us? Faster? Will they develop intellectually farther than us, and if so, does that make us dependent, if not vulnerable?

But in the end, an intelligence, even one superior to our own, while it might seem divine, can never be truly omniscient, nor supply our existence with meaning any more than we can ourselves.