A memory is a physical thing.

I need to know what you mean by “yet”. There is instantaneous awareness the moment the picture (the immediate memory) is presented. (I prefer that term to sensory memory, since a memory can be conjured up without senses.) There is passage of time between the event and the presentation of the picture, but the picture is the direct cause of the awareness. The processing that follows awareness of the picture is itself reactionary.

Sure. Sentient’s link provided scientific evidence that such a thing goes on, but I don’t think that evidence is necessary. It can be proved analytically through deduction, and I believe that that’s what I’ve done here.

You’re getting very close, but here are some important qualifiers. There are indeed containers and pools in the brain where it holds information as it juggles other information, but that’s just part of the crap. The staging area for moral decision making is not found in the brain; it is found in the essence of the person. His eternal spirit. One popular way of putting it is that it is in a man’s heart. Definition number 9, and smidgeons from a couple of others. Things about innermost, core, and whatnot.

OK, this moves into a knock-down, drag-out area of debate that I think is premature. I hope we get there, but not too soon, as I don’t think the foundations are there yet.

Sorry - your inclusion of “without a human brain to both make it and interpret it” (where “it” refers to “the brain” or “a mechanism by which memory arises”) is, in essence, a reduced form of the ID argument (reduced in the sense of using “human” rather than “God”). Actually, now that I think about it more carefully, perhaps not really ID so much as a Berkeleian position on requiring an observer and interpreter. Either way, there’s a non-physical requirement for this view. I’m gonna step away from the keyboard now, as I don’t want to derail the thread into a dualism debate.

Yet.

Liberal, I’m sure we all agree that there are fundamental differences between any currently existing computers and human brains. Some of the differences you mentioned are arguable. The hotshot chess-playing computers are fed positions from classic chess games. When they make a move that works badly, they revise their strategy. No randomness needed. The strongest difference you mention is that they do not attach significance to results. I think that their failure to emote follows from this. But is being silicon-based rather than meat-based a fundamental difference?

“It” referred to the thermometer.

Ah, my mistake. Glad that I included the resolved reference in my post so that it could be corrected. However, it doesn’t change the point. If you choose “created it”, you’re basically taking an ID position. If you choose “interpret it” or “observe it”, you basically take a Berkeley position. Either way, I think you’ve just stepped outside the bounds of the OP.

And actually, I retract what I said a couple posts ago about “getting there”. If we start to debate physical vs. non-physical, I expect the thread to devolve quickly into pseudo-religious (or actually religious) mini-debates. I’d like to think that the big-picture debate is this: can cognition be based in purely physical mechanisms? The specific debate here regards the physicality of memory. Where is it that purely physical mechanisms fail memory? If there are no such failures, then we can step up a level and discuss awareness (which we’ve started). Where is it that purely physical mechanisms fail awareness? At this point, I think one part of it can be resolved by considering the difference between nuclear power plants and lower animals.

Is that acceptable?

Just to correct the record, chess computers do not “revise their strategy”. What they do is maintain an auxiliary database (similar to the alternate dictionary of a spell checker), which then serves as an extension to the process they already carry out. In other words, they still make the exact same errors in calculation, going through the same search tree as before, but then “discover” at the end of it all that there is some other move available that is, by their reckonings and formulas, superior. There are some tricks available to them, like alpha-beta pruning, but in every case in which they use such computational devices, they will always use the same ones in the same circumstances every time. The stored positions are just more data tables.
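For readers who haven’t seen it, the deterministic search described above can be sketched in a few lines. This is a toy illustration, not anything resembling Deep Blue’s actual code: the game tree and its leaf scores are invented, and real engines evaluate board positions rather than reading numbers from a list. The point it demonstrates is the one in the post — given the same tree, the search always visits the same nodes, makes the same cutoffs, and returns the same answer.

```python
# Minimax search with alpha-beta pruning over a hand-made game tree.
# Leaves are static evaluation scores (invented for illustration).

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping ("pruning")
    branches that provably cannot affect the result. Fully
    deterministic: same tree in, same value out, every time."""
    if not isinstance(node, list):          # leaf: a static score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # we will avoid this line
                break
        return value

tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 5
```

Running it a thousand times produces the same 5 a thousand times — which is the poster’s point about chess computers never doing anything “new”.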

As this writer states: “However, to the surprise and disappointment of many, chess has taught us little about building machines that offer human-like intelligence, or indeed do anything except play excellent chess. For this reason, computer chess, (as with other games, like Scrabble) is no longer of great academic interest to researchers in artificial intelligence, and has largely been replaced by more intuitive games such as Go as a testing paradigm.” And overall, the stagnation of AI research has come about because of the realization that the only thing computers are really good at is what is called a “nongeneralisable exploitation of specific features of the task domain”. I really shouldn’t call it stagnation, because there has been some work with new approaches and new languages like ATRANS, but generally speaking, AI always smacks of simulation. That is not to say that the future does not hold the promise of an artificial brain, and I do appreciate the fact that you did not engage in a slippery slope argument that because Deep Blue beat Kasparov, SNG Data type androids are right around the corner. But if artificial brains ARE developed, then they will not be computers anymore — they’ll be artificial brains. Their resemblance to computers will be no more than that between a Rolls-Royce and a chariot.

I disagree. Merely because I made reference to the fact that the thermometer required a man to build it in no way makes any sort of broader metaphysical statement. I think it would be like me saying that by using the phrase “outside the bounds”, you are attempting to steer the discussion into a debate over whether existence is a predicate. It was just a simple and appropriate verb to indicate that indeed the thermometer was made by a man, and was apropos because the point being made was that it was the man, and not the thermometer, who was responsible for all the things we’re discussing here, like cognition, interpretation, awareness, and memory.

I’m bewildered by what you’re saying here. I’m just answering whatever questions I’m asked, and commenting however I believe is appropriate. I’ve never seen a linear thread in my life. Discussion threads have an ebb and flow about them, and a certain tendency to broaden. You’re a half-step away from accusing me (quite suddenly, I must say) of a hijack, while you yourself are lock-stepped into something about nuclear reactors. As Other-wise indicated elsewhere, there are other kids playing here too. I think that arbitrary suppression of viewpoints that might be interesting is a bad idea.

I have no idea. I’m not even sure what “that” is.

:confused:

Amen to that.

I used to be a cop a long time ago. One of my strangest days was investigating a traffic accident where both drivers described the other driver’s car as blue. Neither one of the cars was blue - and they were both parked right there in plain view!!

A true Rod Serling moment…

I have some nits to pick with this (and basically agree with Hoodoo Ulove). Computers are nothing without software. Computers don’t emote? We have just not provided them with the requisite ROM chips. Our ability as humans to learn is hardcoded into our DNA. The way our brain works is hardcoded into our DNA. Providing computers with the functionality to learn is a requisite, yes. Same for emotions. But we can do that in several ways. We just don’t, because we actually like the predictability. (And even then, most people calling a helpdesk attribute a lot more randomness and emotion to computers than you’d expect, and computers do in fact sometimes get so complex that their behavior seems random even to a computer expert.)

Also, in theory, we can make a computer simulate a human brain completely in software. There’s no reason why we can’t do that, in theory. For now though, it’s probably still going to be very sloooow. But even then we could construct a simple version that works in much the same way but just has less capacity.
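To make the “simple version with less capacity” idea concrete without over-claiming (nobody has simulated a whole brain): a single neuron, in the standard textbook leaky integrate-and-fire simplification, fits in a dozen lines of software. All the constants below are arbitrary illustration values, not biological measurements.

```python
# A single "leaky integrate-and-fire" neuron: the membrane voltage
# leaks back toward rest, injected current charges it up, and
# crossing a threshold emits a spike and resets the voltage.
# Constants are arbitrary illustration values.

def simulate(input_current, steps=100, dt=1.0,
             v_rest=0.0, v_thresh=1.0, tau=10.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # leak toward rest, plus the injected current
        v += dt * ((v_rest - v) / tau + input_current)
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest          # reset after a spike
    return spikes

# A constant drive produces a perfectly regular spike train:
print(simulate(input_current=0.15))
```

Scaling that up to a brain-sized network is, of course, exactly the “very sloooow” part.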

As far as the chess thing is concerned, chess was once considered one of the highest forms of intelligent behavior achievable by man, and the ultimate test to match one person’s intelligence against the next. To be honest, I’ve never quite understood why, but there it is. Unravelling chess by teaching computers how to beat Kasparov has taught us a lot about chess, but also a lot about what human intelligence is and isn’t, and what’s special about it and what isn’t. It’s a lot like how translation software taught us about language (as well as the human brain), and again how transcription (voice recognition) software has taught us how we hear and interpret sounds. Incidentally, not too long ago there was a Dutch science team, I think, that developed a chess program that beat Deep Blue using much less advanced computers, but with software that was more closely modelled on how they discovered humans deal with chess. I think it was called Fred or something; if you’re interested, I’m willing to try googling it up.

Frankly, discoveries have come so quickly recently that I doubt many people fully realise how much we already have. It might have something to do with big projects like CYC (anyone remember that?) being taken out of the public picture and into the murky realms of military intelligence. :wink:

I think this would also fit better with the nervous-pathway representation I tried to give in my long post.

Will they be free moral agents?

Yes - I guess it just scrapes past my threshold of being aware of something (i.e. temperature). Again, I’d qualify that “yes” by saying that bees are “aware of their environment” in ways which are a whole lot “more aware”, IMO.

Ah, I haven’t mentioned morality once yet. Are bees and birds moral? Yes, I would try to establish a morality based on the physical (specifically, negative utilitarianism based on sentient suffering), but that’s advanced Gödelian set theory compared to the simple arithmetic in this thread, I’d venture.

I just completely fail to see this approach to awareness. Awareness is not something different from being conscious of something; you can’t be aware of something without a consciousness. This approach ignores the implications of asking someone something like “Are you aware that your food is burning?” You could already have been smelling it, seeing it, or whatever, but there’s a distinction being made between your consciousness being aware of the fact that you are smelling this and your actually smelling this, and the two result in different cognitive behavior.

A bee is not more aware of its environment than, say, grass swaying in the wind. Or a rock. A bee is closer to being able to be aware, sure, because it meets more requirements. But it never even gets near that level, or there is something I don’t know about bee-brains.

Can’t say. But if they are capable of hosting essential selves and of thinking in metaphorical terms (it’s a little old lady, not just an electromagnetic field), then they certainly would serve the purpose. It isn’t being homo sapiens that makes us morally special; it’s being spiritus eternus.

Arwin, I’m not sure just what nits you were picking because I have no substantial disagreement with most of what you wrote. And in fact, some of it was a restatement of what I wrote — e.g., “Also, in theory, we can make a computer simulate a human brain completely in software” is a recapitulation of “That is not to say that the future does not hold the promise of an artificial brain”.

I think most of our disagreement (and there isn’t a whole lot even there) is with respect to computers and chess. Brute force methods took over because, frankly, brute force beat strategic rules systems. (See previous cite.) Computers lack too many of the necessary ingredients to emulate the human approach. A human might make a move because he finds it to be beautiful, or innovative, or interesting, or bold, or any number of other reasons that have nothing to do directly with any inherent mathematical value of the move itself. Computers will begin playing like humans when they lose as much as they win, when they refuse to play a Smith-Morra Gambit just because they don’t like Smith, or when they play a risky gambit as Black just because it’s exciting. But as you say, and I agree, most of the work done so far with computers in AI has taught us more about other things than it has about AI. I think that AI will make progress the more it distances itself from the aforementioned nongeneralisable exploitation of specific features of the task domain. People just don’t think linearly, and not all thought is about tasks and problem solving.

With all this “presentation” going on, I can only see three choices:

  1. Awareness is a perpetual process of information becoming awareness
  2. Awareness is a perpetual process of information triggering awareness
  3. Awareness is a perpetual process of information being presented to some brain process that is perpetually extant.

But in all three we’re faced with a problem. We’ve got neurons processing pre-aware information (immediate memory), we’ve got neurons processing information we’re aware of, and we’ve got neurons processing information for long-term storage (and, if I’m reading you correctly, the processing that both precedes and follows awareness is simply reactionary).

So what makes awareness itself non-reactionary? Since it’s all just patterns of neuronal firing and chemical exchange, how come all the rest of the brain’s processes take place without awareness?

In your opinion, is “awareness” where essence might have an (ethereal) hand?

Crud.

Number three should be:
3) Awareness is a perpetual process of information being input into some brain process that is perpetually extant as awareness.

This one is pretty close to and fits my model, don’t you think? So I’m not sure I see how that wouldn’t work, or cause problems.

O-w, alternatives 1 and 2 suggest that there can be no awareness without information coming in. Do you believe that?

I know that you used that expression lightheartedly, because you and I have expressed our mutual respect. But for the sake of anyone who might think that I have indeed introduced the notion of some flurry of presentation, I want to correct the record. Each time I speak about the presentation of immediate memory, it is about the same thing — er, the presentation of immediate memory. It is not the case that I am talking about the presentation of this plus that plus the other thing.

I don’t even think of awareness as a process at all. It is merely that state which is attained upon presentation of the immediate memory image. The brain has already determined that it is something you will recognize, because that is exactly what it was hard at work doing: working up something that you could look at (metaphorically) and react to. If it were presenting you with some sort of word jumble or picture puzzle, then it is doubtful that humanity would have evolved, because it is too onerous a demand that you interpret twice, once to make sense of the event and then again to make sense of the brain teaser.

The metaphorical brain process pre-awareness of the sudden truck horn goes something like this. The brain processes the sensory stimuli into a pattern (but not one recognizable on the whole). It holds that pattern in a holding place. It scours through the imagery and symbols that it has stored, beginning first with the most significant, which it keeps on top: things that are dangerous or important, etc. Finding something that closely matches what it is holding, it retrieves it. It compares the two, and determines that you might be in mortal danger. It dispatches adrenaline and other useful chemicals, and formulates the pattern into an image or symbology you will recognize, such as “What the fuck! That’s a truck!”. It then puts that image into your cognitive space while simultaneously putting the reptilian part on full red alert. Sort of “Be prepared for anything. Disregard even breathing until further notice. More instruction to follow.”

All of this happens pre-awareness. You are still thinking about Joan’s (or Jim’s, whatever) beautiful legs, when all of a sudden you feel like a man launched from a flight deck. Your body has been prepped, the adrenaline is flowing, and you see the image of the truck predesigned to convey danger. The brain has done the best it could.

The more experience you have with the situation, the more information your image will contain. If you’ve never encountered anything like this, then you will have to figure out things like whether to turn left or right, or slow down or speed up. And that will be unfortunate, because your brain has put you in a mode more suitable for panic than for rationalizing and problem solving. But at any rate, your awareness of the event is a direct result of the event image being presented to you. Awareness is not a process, but a state.
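Read purely as an algorithm (and the post stresses it is only a metaphor, not literal brain machinery), the pipeline above is a priority-ordered template match. Everything in this sketch is invented for illustration: the template names, the similarity score, the 0.8 cutoff, the “alarm” level.

```python
# The metaphor above as a priority-ordered template match:
# scan stored templates from most to least significant and return
# the first recognizable image, plus an attached alarm level.
# All names, scores, and thresholds are invented for illustration.

def present_image(stimulus_pattern, stored_templates, similarity):
    """Return (recognizable_image, alarm_level) for the stimulus."""
    # dangerous/important templates are kept "on top"
    for template in sorted(stored_templates,
                           key=lambda t: t["significance"], reverse=True):
        if similarity(stimulus_pattern, template["pattern"]) > 0.8:
            return template["image"], template["alarm"]
    return None, 0      # nothing recognized: interpret from scratch

templates = [
    {"pattern": "loud horn", "image": "That's a truck!",
     "significance": 9, "alarm": 10},
    {"pattern": "soft hum", "image": "Just the fridge.",
     "significance": 1, "alarm": 0},
]
match = lambda a, b: 1.0 if a == b else 0.0  # crude stand-in similarity

print(present_image("loud horn", templates, match))
```

The `(None, 0)` branch corresponds to the “never encountered anything like this” case in the post, where interpretation has to happen from scratch, mid-panic.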

I don’t know why that would be a problem. There are plenty of neurons for the task.

The fact that it is a state.

Because it would hinder their completion. There is no need for awareness in the processing of sensory data or creative ideas. (Incidentally, that experience you’ve had that people call a lightbulb over your head, or sudden inspiration: that is an example of nonsensory awareness.)

I’m not sure I understand the question, so if I go off-track, just tell me. The mechanism is still a mystery, but some of Ramachandran’s work is a promising indicator that it might be the limbic system that functions as the facilitator between the spiritual man and the physical man. There is no “ethereal hand” involved until a moral decision is made. Moral decisions are not made in response to every event. The aforementioned little old lady could conceivably be one tiny figure in a large dense crowd, for which purpose she would have no more moral value to you than a lamp post.