A memory is a physical thing.

I, of course, have no problems with this version. I was taking: “…it is the location that distinguishes it, not what it’s made of. If it was in another location, it would associate two other memories,…” at face value.

Yes.

Well, to be precise, this is not strictly true. Some neurons spew neurotransmitters into the extracellular fluids instead of the synaptic gap, much like firing a shotgun into the air. But, yes, the main point continues to be that they are connected in the sense that they are interacting.

Yes, that sounds right. This is why I added a disclaimer about my level of knowledge to an earlier post. I know that I’m going to make mistakes about details with which I’m not all that familiar. Correction is welcome and appreciated. Thanks.

Oh, crap. We bottom out on the metaphysical again?

To discern something, you need a discerner. A discerner is not a discerner without something to discern. Sigh. Is there no way out of this insanity loop?

Sure. Just grant that “memories” are physical and “patterns” don’t require an interpreter. :smiley:

Actually, I’m not sure that’ll do the trick, but it might…

Not at all. I can have a memory which I haven’t retrieved, but it’s still there. Perhaps I haven’t thought about Mary for years, but were someone to ask me, I could tell him that she had red hair.

I think I might have a better grasp of what you’re getting at here than I did before, although I’m still convinced there’s a misapplication of levels. I think giving my response in the form of a question might be enough – can you “unfire” a neuron?

Fair enough. We’ll see.

I understand, and I don’t think you’re evasive. But your point had been that the claims it had made before were injurious to its reputation. A hundred researchers making a hundred different claims seems pretty much like an exacerbation of the problem, by orders of magnitude.

Actually, I didn’t. The question mark was intended to convey an inquiry and not a declaration. But I’m not sure about your categorical statement above. Suppose U is the set of two tasks, T1 and T2. And suppose that T2 is the solution to T1. Isn’t T1 more difficult than U?

I think you might be right. But if it is not a computer science problem, then how will computer science solve it?

That depends. Indistinguishable to me, no; to it, yes.

Not really, no. But I have no idea how that answers my question. :smiley:

I so regret that Other-wise and I were unable to continue our deductive chain because that is exactly the sort of matter we were addressing.

The mention of Mary, in our system, was an event that triggered an immediate memory image. The immediate memory (of which you were not aware) was compared to permanent memories, where something about Mary (yes, yes, I know) was found. The brain tweaked the immediate memory image into an image that contained familiar images of Mary and presented that to your consciousness in the form of a state of awareness. Simultaneously, it dispatched appropriate chemicals and stimulations that were a part of the composite image.

The second to last deduction we had made was that, because the brain puts the new image with all its composites back into storage and discards the immediate memory, you can never have the (exact) same memory about Mary again. The next time you experience a Mary event, you will be aware of an all new composite.

We were this close […making appropriate thumb and finger gesture…] to a conclusion as to whether memory is physical. (At this point, I suspect that it is, but you never know with deductive processes what the next inference might be.)

A stupid question from someone not philosophically trained:
If a memory is not a physical thing, then what else can it be?

One might suspect that there is more to our memories and consciousness than just the molecules in our brain, but, AFAIK, there is no proof of these things arising from anything but the molecules in our brain.

Or did someone prove the existence of non-physical things in our universe while I wasn’t watching?

What was injurious was the overreaching. Turing saying that computers would be more intelligent than humans within 50 years, etc. If you’re asking for specific claims, you get 100 people with 100 answers. If you’re asking for a single, general claim, I’ve given mine.

Then you’ve made a mistake in defining T2. If T2 is the solution to T1, then it is part of T1’s completion.

How will people in any other discipline solve it?

I don’t know how to parse this. Could you be more explicit?

Hoodoo Ulove, forgive me for once again answering you by answering someone else.

First off, Lib, I don’t think I ever commented on this deduction:

I agree with the above; no problems there that I can see.

Now then…

Lib, as you’ve pointed out, consciousness (and hence, awareness) is closed. Whatever Hoodoo was aware of when he was aware of Mary (associations and all), no one else can ever know.

At some point after Hoodoo stopped interacting with or thinking about Mary, there was still “Something About Mary” lodged deep within his brain (for the purposes of illustration, let’s say “Something About Mary”, as a permanent memory, is a specific pattern of firing in neural group A).

However, Hoodoo has forgotten “Something About Mary”; he is not currently aware of it, and hasn’t been for some time (other systems in his brain have not accessed “Something About Mary”, at least, not in a way that led to its inclusion in Hoodoo’s awareness). Then, alas, Hoodoo has a mild cerebral ischemia; in the resulting glutamate cascade, a few select neurons in the access pathways to “Something About Mary” are permanently damaged. Neural group A is still happily firing away, but all other brain systems are blocked from accessing it. “Something About Mary” will never again be part of Hoodoo’s awareness, nor help form the basis of a new composite permanent memory.

If someone were to poke a 25th century x-ray microscope into Hoodoo’s skull to take a look at neural group A, what would they see? They’d see neural group A firing away, but they would not be able to discern a pattern to the firing, at least, not one that can be interpreted as “Something About Mary”. Because consciousness is closed, that interpretation was reserved for Hoodoo and Hoodoo alone.

Is memory physical? The physical neurons in group A are intact, as is their firing pattern. But Hoodoo cannot access Neural group A, and others, with the microscope from the future, can access Neural group A, but they cannot interpret what they see. Neural group A has a message that no one can read, and a message isn’t a message if no one can read it, just as a memory that cannot be remembered is merely a bit of electrified meat.

Lib, Hoodoo, you may fire at will.

:eek:

Wow. That’s valid. That’s actually valid, and frankly is an ingenious inference for the conclusion. Unless a flaw is found in some inference that we’ve made, your conclusion is incontrovertible.

Very well done, Other-wise. Very beautiful.

Both “forgotten” and “remembered” are ambiguous as hell.

“However, Hoodoo has forgotten “Something About Mary”; he is not currently aware of it, and hasn’t been for some time (other systems in his brain have not accessed “Something About Mary”, at least, not in a way that led to its inclusion in Hoodoo’s awareness).”

In a quite ordinary sense, I hadn’t forgotten Mary at all. If you asked me, I could tell you some great stories about her.

Please send a contribution in my name to the Cerebral Ischemia Foundation.

I’m not sure what you’re saying there. That wasn’t Other-wise’s stipulation. If your brain is damaged and you are unable to access a memory, then you cannot remember it. Either it is accessible or it isn’t. The only way you can know anything of Mary is if some portion of memory about her remains accessible. […shrug…] Regardless, whatever it is that you are stipulating, Other-wise has reached a sound and very surprising conclusion based on his own stipulation. It’s really quite remarkable and, because it is deductively derived, makes perfect sense.

Incidentally, our method has removed the ambiguity of many terms.

Not disputing o-w’s conclusions. Just referring to the sentence I quoted.

oh. my. god.

You mean I’m not nuts?!?

Lib, more than anybody else I’ve ever met (er, not met) you’ve made me think. Hard. And you’ve kept me from being sloppy in that thinking.

Coming from you, that last post meant a lot. Thank you.

I really shouldn’t sleep and be involved in SDMB both. :slight_smile:

Lib, I think you are working with an antiquated model of computing. The separation of program and data is a convention for efficiency, not a hardware requirement. For one thing, the pattern in which programs get taken from memory (mostly sequential) is very different from that of data, so processors often have different data and instruction caches. Since writing into a code area by a program usually means the programmer has screwed up, typically this space is protected, and the user doing so will cause a trap. The OS doing so is fine.

When I was in high school, I made a tic-tac-toe program fit into the 4K of memory I had by changing a bunch of addition statements to subtractions, which let the same code figure out the best move for both the user and the computer. In the Jargon File, the source of the Hacker’s Dictionary, there is the story of Bill (or someone), a Real Programmer, doing much the same thing on the same computer I used.
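That addition-to-subtraction trick survives in modern form as the negamax idiom: one scoring routine serves both players by negating the score, so the move-finding code is never duplicated. Here is a minimal Python sketch of the idea; the board encoding and function names are my own illustrative assumptions, not the original 4K program.

```python
def lines(board):
    """Sum each of the 8 winning lines; board holds +1, -1, or 0."""
    triples = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
               (0, 3, 6), (1, 4, 7), (2, 5, 8),
               (0, 4, 8), (2, 4, 6)]
    return [board[a] + board[b] + board[c] for a, b, c in triples]

def negamax(board, player):
    """Best achievable outcome for `player` (+1 win, 0 draw, -1 loss)."""
    if -3 * player in lines(board):   # opponent just completed a line
        return -1
    if 0 not in board:                # board full: draw
        return 0
    best = -1
    for i in range(9):
        if board[i] == 0:
            board[i] = player
            # the sign flip: the opponent's best score, negated, is ours
            best = max(best, -negamax(board, -player))
            board[i] = 0
    return best
```

Calling `negamax(board, 1)` and `negamax(board, -1)` exercises the very same code for both sides; only the sign differs, which is essentially what turning additions into subtractions bought on the 4K machine.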

A very common way of testing computers is by generating random instructions. Compilers, of course, take high-level code and emit instructions. So the distinction you make is a convention, not real.

And, inside a processor, the Itanic used to (I don’t know if it still does) handle x86 instructions by translating them into Itanic micro-ops and executing them.

So there really is no difference between program and data, except how it is used. As a microprogrammer doing emulators (which make one machine look like another), all my input data were programs.
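The same point can be shown in a few lines of Python, where a program is literally a string, then a byte object, until something chooses to execute it (the variable names here are mine):

```python
source = "result = 6 * 7"                 # a program, held as ordinary data
code = compile(source, "<generated>", "exec")
print(type(code.co_code))                 # the compiled form is just bytes
namespace = {}
exec(code, namespace)                     # the same bytes, used as a program
print(namespace["result"])                # prints 42
```

Nothing about the bytes themselves marks them as "program"; only the decision to hand them to `exec` does.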

As for computers wanting to be right - first of all, I am surprised at Lib committing the fallacy of saying that computers want anything! But techniques like genetic algorithms and simulated annealing often begin with the wrong answer, to get to the right one. If someone came up with an effective learning method that needed the computer to be wrong, no problem.
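As a concrete (if toy) sketch of that "start wrong, end right" behavior, here is a minimal simulated-annealing loop; the objective, step size, and cooling schedule are all my own illustrative choices, not from any post above.

```python
import math
import random

def anneal(start, steps=5000, temp=10.0, cooling=0.999, seed=0):
    """Minimise (x - 3)^2, deliberately starting from a wrong answer."""
    rng = random.Random(seed)
    x = start
    cost = (x - 3.0) ** 2
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        c_cost = (candidate - 3.0) ** 2
        # accept improvements always; accept regressions with
        # probability exp(-delta / temperature), so early on the search
        # is free to be "wrong" and wander out of bad regions
        if c_cost < cost or rng.random() < math.exp((cost - c_cost) / temp):
            x, cost = candidate, c_cost
        temp *= cooling                  # cool: become pickier over time
    return x

print(anneal(start=-50.0))  # begins far from the optimum, lands near x = 3
```

The early high-temperature phase accepts plenty of objectively wrong moves; it is the cooling that gradually converts that wrongness into a right answer.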

This isn’t quite how I understand memory to work, but close. When you access your Mary memory, you are strengthening the association to it, but you are also adding new associations that connect with the original. The effect is the same as you described: the next time you think of Mary you’ll access the composite, but the original memory is still there. If you get to it by a new association, say a smell reminds you of the Mary event, you can experience the Mary event without the associations formed by your other accesses to the event.

I know the rest of you are well past this point but I’m not reading the thread often enough to keep up.