A memory is a physical thing.

What is a camera? A lens focuses light reflected from an object onto a plane to form an “image” - so what? Ah, but place an emulsion of light sensitive molecules at that plane, or perhaps an array of light-sensor circuits, and it becomes something else entirely: an image storage device. A memory of the object has been created which can be accessed even after the object itself has been destroyed. A “copy” of the object has appeared: not a literal, exact 3-D copy (like, say, that most excellent of replicators, a DNA molecule) but something else having a link to the original source which may now have been thrown on a bonfire.

Now, what is the nature of that memory? The object itself is physical - an arrangement of particles at a particular energy state (ultimately, a configuration of spacetime). The image of the object is physical too: a pattern of wave-photons of particular energies as they cross a 2-D plane. All of this would be the case even if no human had ever existed, and the lens was a naturally occurring piece of quartz casting an image on the walls of Plato’s cave. The object and the image are physical things.

But what of the memory of the object - the silver halide emulsion after immersion in various organic compound mixtures, or the configuration of logic gate switches in a silicon chip, which store the wavelength and the intensity of the light at each tiny point on that 2-D plane? Clearly, just as the molecule-comprising object and the incident photon-comprising image are physical things, so the developed emulsion and the logic gates are physical as well.

So far, our brief philosophical sprint doesn’t appear to have tipped any metaphysical hurdles. None of this stuff needs a human mind anywhere near it - the piece of quartz could even have cast the same image on the cave wall for so long that the image was ‘burned in’, providing an entirely natural instance of the formation of a “memory”. But to deviate even slightly from our course has us careering straight for a metaphorical, metaphysical brick wall: what about the data, the information? Surely that is not a physical thing?

Now, I am a physicalist, and so I say it is (or at least, that it “supervenes on” the physical), but we’re getting way ahead of ourselves. It is altogether far more difficult to move an abstract such as “data” from the metaphysical realm to the physical than our aforementioned object, image or memory. However, bear with me.

The permanent storage of a 2D image can be explained in solely physical terms, be it via silver halide molecules or CMOS sensors and Flash memory. Is there any other way of capturing and storing an image? Well, yes there is. Studies show that a biological brain can capture and store a never-before-seen image literally within a fraction of a second (although that short-term memory must be ‘reinforced’ for a few seconds afterwards, transferring it to longer-term memory, to stop it being lost).

Of course, human (or avian, or even insect) visual sensory memory differs in all kinds of ways from photography (wet or digital). Indeed, the human retina performs so much ‘pre-processing’ before it sends a signal to the visual cortex that it is actually considered part of the brain, even by anatomists. I have a memory of Darth Vader being as black as black can be, and yet when I look at the screen I saw him on in daylight, it is either mid-grey (TV) or even pale white (cinema). My retina took in that image and Photoshopped it for me, turning that grey or white into deepest black.

Clearly, I don’t store countless gigabytes of data every minute, and yet I can still remember the briefly-seen (and never-before-seen) face of the shopkeeper who sold me a newspaper yesterday: holding her in my ‘mind’s eye’, I could even reconstruct it if I could manipulate a pencil accurately enough. And yet I can’t remember the front page of the newspaper in anything like the same detail, which I would be able to if I had taken a 10-megapixel photo and transferred it to my biological hard drive.

Humans, and birds, and all kinds of other biological brains, store things in different ways to silicon brains. The brain has extremely specialised and extensive modules to process and store faces, and even simple objects are stored differently. Where the silicon brain has 2-D “pixels”, the biological brain uses 3-D “voxels” (otherwise known as geons) to form what is called a two-and-a-half-dimensional sketch. Human brains (and pigeon brains) take the raw signals from the light sensitive cells in the eye and identify edges and colours, breaking down each image into recognised components. Common configurations of these components are associated in the brain with linguistic referents (“words”): a cylindrical geon with a ‘U-shape’ geon parallel to the straight edge is a “mug”; if instead the U-shape geon stretches over the circular cross-section, we’re looking at a “bucket” (see page 35 of this enormous but illustrative PDF).

However, I would not wish to get into a discussion here about the technical details of visual cognition. All I seek to explore here is the contention that our memories of objects are not fundamentally different in nature to the objects themselves. If we can agree that this is so, we can move forward to use senses and memory as the basis for an explanation of other aspects of cognition.

Having read through this OP again I have, right now in my mind’s eye, a memory of Darth Vader holding a watering can. I can see him right there, calling to Luke in Bespin City with a green plastic watering can in his left hand! But this is not a “memory” as such. I have superimposed a memory of a watering can onto a memory of Darth Vader: I have Photoshopped my visual memories into something which was never photographed. This, I propose, is the basis of creativity; again, I would hope you understand that I don’t consider this any kind of step away from the “physical” either. Similarly, certain visual memories may be associated with other sense information, or with significant activity in the amygdala (ie. correlating with strong emotion): yet again, I would propose that this is not qualitatively different in nature.

I am not a cognitive psychologist. I’ve probably only read the same popular writers many of you have, such as Steven Pinker, Susan Greenfield, Jerry Fodor, Daniel Dennett, Roger Penrose and the like. Anyone with any expertise could almost instantly have me on the horns of a dilemma: “OK smart arse: why aren’t faces broken down into Mr Potato Head geons like watering cans? How do humans ‘understand’ Godel’s Incompleteness Theorem, clever clogs? If my feelings are just a sum of cognitive module parts, how come I feel like it’s more than that, huh meatbrain?”

As I have said elsewhere, explaining consciousness in terms of biology and computation is the challenge of this millennium, just as Newton, Darwin and Einstein tackled other phenomena in the last. The bridge between cells and mind currently contains many gaps with only a few ropes thrown tentatively across, which may well snap under pressure. But I consider that those gaps are shrinking every year, to be replaced by sturdy experimental supports, to the extent that I simply cannot see the entire bridge crumbling to nothing just because of some incredibly specific, not-quite-spanned gap.

I would ask you to approach this thread not by asking “Am I a biological computer?” but “Could I be one?”. Also, while I’ll try and answer any questions as best I can, I also reserve the right to turn it around and explore the alternatives offered by my questioners (ie. if the mind does not emerge from biology, where are you saying it does come from?). Finally (cos it’s my thread, dammit!) I’ll suggest dedicated threads for any bifurcations I think are particularly distracting, such as the contention that there are no physical things in the first place, rather than leave them here.

I have a couple of questions to start.

A. Would you use the terms “memory” and “record” as synonyms?

B. If an oil pressure gauge drops from 40 psi to 30 psi, does it (in your view) have a memory of its state at 31 psi?

C. If (B) is yes, then as a follow up, does it also have a memory of its state at (the exact square root of 947) psi?

A. Yes - if any difficulties arise therefrom I’ll let you know.

B. No. (Unless of course it’s some super-duper gauge which buffers the value read at different intervals according to programmed IF-THEN thresholds and stores that value to be printed out on a graph later: even then that only memorises a simple number “31 psi” - its state at that pressure requires rather more, I’d suggest).
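For what it’s worth, that hypothetical buffering gauge is easy to sketch. This is a made-up illustration of mine (the threshold values and function name are not any real instrument’s API): the gauge only ‘remembers’ the readings its programmed IF-THEN rules tell it to log, so an unprogrammed value like 31 psi never enters its memory at all.

```python
# Hypothetical "super-duper gauge": logs a reading only when the falling
# pressure crosses one of its programmed thresholds. Those logged values
# (and only those) constitute its "memory".

THRESHOLDS = [40, 35, 30]  # psi values the gauge is programmed to record

def run_gauge(samples, thresholds=THRESHOLDS):
    """Return the log of threshold crossings seen in a stream of readings."""
    log = []
    armed = set(thresholds)            # thresholds not yet triggered
    for psi in samples:
        for t in sorted(armed, reverse=True):
            if psi <= t:               # pressure has dropped through t
                log.append(t)
                armed.discard(t)
        # note: the gauge never records 31 psi - no IF-THEN rule covers it
    return log

readings = [40.0, 38.2, 36.5, 33.9, 31.0, 30.0]
print(run_gauge(readings))  # [40, 35, 30]
```

The point of the sketch: what gets memorised is fixed in advance by the rules, not by what the needle actually passed through.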

If the oil pressure gauge drops from 40 psi to 30 psi, and has no memory or record of its state at 31 psi, then how do we know that it ever was in a 31 psi state?

If no record or memory of such a state formed, we don’t.

I’m… flummoxed. To be sure I understand, are you telling me that the gauge needle possibly moved from 40 to 30 without passing by 31?

Ah, you forgot to tell me how we know that it was at 40 and 30: were memories of those two readings formed somehow? If so, our cognitive modules would take those two memories and put them together like Darth Vader and the watering can, forming a “memory” of 31 psi. Indeed, I struggle to conceive of an apparatus which memorised 40 and 30 which could not be modified such that it memorised 31 also: could you tell me what kind of memorisation apparatus of 40 and 30 you …ahem… had in mind?
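To make the ‘put two memories together’ move concrete, here is a sketch of my own (the function name and numbers are illustrative, not anyone’s actual apparatus): given only two memorised samples, 40 psi and 30 psi, plus the assumption that the pressure changed continuously between them, the intermediate value theorem guarantees a 31 psi state occurred; linear interpolation is simply the crudest way to ‘reconstruct’ a memory of when.

```python
# Combining two stored readings to construct a "memory" that was never
# directly recorded. Assumes pressure changed continuously (so 31 psi
# must have been passed through) and, for simplicity, linearly.

def infer_crossing(t0, p0, t1, p1, target):
    """Interpolate the time at which the pressure passed `target` psi."""
    if not (min(p0, p1) <= target <= max(p0, p1)):
        raise ValueError("target pressure never crossed between samples")
    frac = (p0 - target) / (p0 - p1)   # fractional distance from p0 to p1
    return t0 + frac * (t1 - t0)

# Memorised: 40 psi at t=0s, 30 psi at t=10s. When was it at 31 psi?
print(infer_crossing(0.0, 40.0, 10.0, 30.0, 31.0))  # 9.0
```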

Is there something specific you wish to debate here?

Can you explain how you think memory and consciousness are related?

I’m not entirely sure what the debate here is, since there are so many related topics. (On preview, I notice I’m not the only one. Metacom - the key is computational mind, I think. See below for more.) That may also be because I’m also a physicalist (with the disclaimer that the term may have some specific connotations of which I’m not aware). It strikes me as, let’s say, unwarranted to claim that memory (and cognition) does not rely on some physical substratum. However, I’ll pick up one possible point, just for discussion.

Having argued with others about Fodor’s contention that the mind cannot be computational, I often return to the thought to mull it over. For those not familiar with it, I believe it can be characterized this way, although I’m a little fuzzy on the last part: computation is, at its heart, symbol processing. Because of this, the only way to incorporate new knowledge or “correct” faulty knowledge is to “update” every symbol involved in the computation. Beyond the practical issue of symbol updating, he claims it’s impossible in principle to maintain the symbols. (It’s the “in principle” that I don’t fully grasp.) I apologize if I mischaracterize his argument, and would love to have someone correct and/or inform me.

Now, the above is actually somewhat beyond mere memory, although memory is obviously integral to it. For the mind to be computational, there must be symbols; after all, that is ultimately all computation is (see Universal Turing Machine). One type of symbol is memory (others being, e.g. abstract constructs). My question is one of kinds – it’s the willy-nilly combination and deconstruction of memories (e.g., Darth Vader with a watering can) that bothers me. It seems that a hierarchical paradigm is too weak to do the work we require of it; how does the mind maintain consistency when tasked with not only recognizing instances of a kind, but also forming abstractions of kind from instances? Is simply having the predetermined “hardware” enough (a la Pinker), giving us a foundation on which to build?
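Since the discussion leans on “computation is, at its heart, symbol processing”, a toy Turing machine may make that concrete. This is a minimal sketch of my own (not anyone’s model of mind): a tape of symbols, a current state, and a transition table are the whole machine; this one appends a ‘1’ to a unary number, i.e. increments it.

```python
# A minimal Turing machine: states, a tape of symbols, and a transition
# table mapping (state, symbol) -> (new state, symbol to write, move).

def run_tm(tape, rules, state="scan", blank="_", max_steps=1000):
    """Run a one-tape Turing machine and return the final tape contents."""
    tape = dict(enumerate(tape))       # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(pos, blank)
        state, write, move = rules[(state, sym)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

rules = {
    ("scan", "1"): ("scan", "1", "R"),   # skip over the existing 1s
    ("scan", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}
print(run_tm("111", rules))  # 1111
```

All the machine ever does is read a symbol, write a symbol, and change state - which is the sense in which “symbol processing” exhausts computation.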

More to mull over…I’m looking forward to this discussion.

There is no fundamental difference between the nature of things and the nature of memories of those things.

I think consciousness is the sorting of sensory input into memory, with all the cross-referencing and filtering that entails. As the first article I linked to concludes:

When I sit here “being conscious” (compared to when I’m unconscious or, for 13.7 billion years, nonexistent), I consider that what my brain is doing is accessing senses and memory (and a whole lot more besides, but you get the picture).

Sentient, if I send my robot (who looks just like Uma Thurman) to visit you at your home, and you two play an extended version of the Turing game, including not just talk but any test you care to give, and she passes with flying colors, would you acknowledge she is conscious?

As ‘conscious’ as a bee or a bird, yes. Are the (no pun intended) birds and the bees ‘conscious’? Pigeons appear to have, at least, visual cognitive processes very similar to humans’. And yet, there’s all kinds of reasons why pigeons don’t cry at sad songs or understand Godel’s theorem. Heck, there’s plenty of humans who don’t cry at sad songs, understand Godel, or even see or feel pain, which pigeons do.

Ultimately, if your robot could demonstrate that it could process sensory information via memory and language just like my friends, I would say it was conscious, even though its consciousness might be as different to mine as that of a bee.

The obvious question I have is – what about non-sensory based memories? Hmm…check that. I suppose one could argue that there is no such thing. After all, I personally don’t have any memory of a mathematical construct that is distinct from the same construct. Hmm, that wasn’t clear. Let’s try this – I “remember” the formula for derivation. That memory – that is, the formula itself – doesn’t change. I may remember a particular time I was thinking about it, but my memory is still sensory based (a particular time in a particular location). The actual “memory” of the formula doesn’t change, unless you posit that such constructs are always associated with sensory perceptions. And yet, I certainly believe that I can “remember” the formula for derivation without attaching any sensory memory. Bleah…I hope that was clear enough. At any rate, the point is still there – clearly, “being conscious” is more than “sorting sensory input into memory”. There are non-sensory…um…things that need to be accounted for also.

I believe there’s current research showing that memory and consciousness are indeed tied: when a person accesses a memory, they exhibit brain activity that is largely coincident with the brain activity of actually “processing” sensory input. This only makes sense, especially from an evolutionary perspective – why have a different mechanism to recall memories when one is already in place? (For one take on this, see Rick Grush’s work regarding emulators.)

If someone with expertise asked you “If my feelings are just a sum of cognitive module parts, how come I feel like it’s more than that, huh meatbrain?”, he’d be lobbing you a relatively easy one. More fundamentally, how come it feels like anything at all?

Yes, ‘higher level’ things like mathematics aren’t quite so accessible to direct memory explanations - like I said, I don’t pretend that I can explain every single aspect of human cognition this way, and senses and memory are only a starting point. This book has been highly recommended to me, but I’ve not yet got hold of it.

The so-called “hard problem”. Again, all I can ask you to do is imagine this incredible sensory-memory apparatus working and ask yourself “could we reasonably call that a ‘feeling’?”

Perhaps I’m too thick as well, but I also fail to see any deep debatable point here. A memory (of the animal sort) is a variety of program, encoded by a configuration of cells and their synaptic connections in the brain. It differs from other brain programs primarily due to some physiological and neurochemical distinctions (regions involved in the encoding process, reinforcement through long-term potentiation, etc.), and, some might argue, the sorts of stimuli that generated the program. Once you’ve got the program, there it is, sure as you’ve got electrons in a transistor or alternate magnetic polarizations on a disk.

Maybe that’s because we are … like minded? :slight_smile: Don’t worry, I’m sure a right-old disagreement will crop up here sooner or later.

(Sentient, I’m taking off my “not-arguing” hat from the other thread, ok?) Well, since the front page of a newspaper is obviously not a bunch of soggy neurons I have to assume you mean that the objects are both, fundamentally, changing patterns of atoms. So what makes one pattern a newspaper and the other a memory of a newspaper? If the difference in identity of the two patterns is inherent and non-arbitrary, how did it get there? If we impose the difference, and we’re just atom-patterns too, who differentiated us?

A) *“The neurobiological process of recollecting an experience is in some ways identical to the process of experiencing it in the first place!”*
This is hyperbole, exclamation point and all. Otherwise, we wouldn’t be able to tell the difference between a recalled memory and a currently happening event.

B) *“all consciousness can be said to be recent memory, due to the time lag between experience and the perception of experience”*
This is nonsensical. We don’t perceive experience. We perceive, and then experience those perceptions (which then enter our short-term memory). To perceive experience requires a Cartesian homunculus.

(Sentient, you’re going to love “Where Mathematics Comes From”)