Exactly. The change in state is identical to the change in voltage on various parts of the RAM cell or flip-flop. Changes in state take energy; in fact, one area of research is restructuring test techniques that switch far more memory states at once than the design planned for, so that they switch fewer. When you design a chip you need to know how much energy it takes, which depends a lot on the amount of switching. By Shannon, changes in information require energy.
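To put a rough number on it, the usual first-order formula for dynamic power is P = alpha * C * V^2 * f, where alpha is the fraction of the capacitance that actually switches each cycle. Here's a minimal sketch, with made-up numbers purely for illustration:

```python
# First-order CMOS dynamic power model: P = alpha * C * V^2 * f.
# All values below are invented for illustration, not from any real design.

def dynamic_power(alpha, capacitance_f, voltage_v, frequency_hz):
    """Estimate dynamic switching power in watts."""
    return alpha * capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical chip: 1 nF of switchable capacitance, 1.0 V supply, 1 GHz clock.
mission_mode = dynamic_power(alpha=0.2, capacitance_f=1e-9, voltage_v=1.0, frequency_hz=1e9)
test_mode = dynamic_power(alpha=0.6, capacitance_f=1e-9, voltage_v=1.0, frequency_hz=1e9)
print(f"mission mode: {mission_mode:.2f} W, test mode: {test_mode:.2f} W")  # 0.20 W vs 0.60 W
```

Triple the switching activity, triple the power; that is exactly why test modes that toggle far more states than the design planned for are a problem.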
I have even written a joke paper about base 1 arithmetic. By information theory, base 1 computers don’t need power supplies.
I didn't want to go near that one. I suppose I should throw out a disclaimer of sorts: I do research in AI. I don't have the chops you do with hardware, but I'm not a total slacker either. I also have a philosophy degree, but I don't think I have the depth of others here. What can I say? Jack of all trades, master of none.
I suspect Liberal’s objection is going to be along the lines of abstraction and induction. I further suspect you’re familiar with this, but for those who aren’t, consider Newell, Shaw, and Simon’s General Problem Solver. As this page says:
Newell continued cognitive architecture research with the SOAR project (you can download it, if you wish; it helps to have a decent computer, as the last time I tried it, it maxed out my 1GB of RAM), which is a production system with "learning" and problem-solving capabilities. On the one hand, it is a neat attempt at delving into cognition; on the other, it shows how far we still have to go.
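For anyone who hasn't run into a production system before, here's a toy sketch of the basic match-and-fire loop that SOAR elaborates enormously. This is my own trivial illustration, not SOAR code; the rules and facts are made up:

```python
# A toy production system: rules are condition -> assertion pairs matched
# against working memory; any rule whose conditions hold fires and adds a fact.
working_memory = {"hungry", "have_bread"}

productions = [
    ("make-sandwich", {"hungry", "have_bread"}, "sandwich_made"),
    ("eat-sandwich",  {"sandwich_made"},        "not_hungry"),
]

fired = True
while fired:                      # keep cycling until no rule can fire
    fired = False
    for name, conditions, assertion in productions:
        if conditions <= working_memory and assertion not in working_memory:
            print(f"firing {name}")
            working_memory.add(assertion)
            fired = True

print(working_memory)             # now contains 'sandwich_made' and 'not_hungry'
```

SOAR adds sub-goaling, chunking (its "learning"), preference-based conflict resolution, and so on, but the core loop is recognizably this.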
Of course, this is separate and distinct from the Hebbian learning and backpropagation found in artificial neural networks (ANNs). (I’m not totally ignorant in this area either; I have to say that biological neurons are a whole order of magnitude more complex.) IIRC, it has been found that human memory, at least in part, displays the same characteristics as Hopfield networks. It’s been a long time since I’ve read up on the current state of ANN research; I should do that sometime soon. I’m kinda hopeful that the recent explorations into learning via statistical association (I think google is doing something with this, as I think some Psych departments are) will pay off.
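To make the Hopfield comparison concrete, here's a toy sketch of content-addressable recall: store one pattern with the Hebbian outer-product rule, corrupt a couple of bits, and let the network settle back. My own toy example, not taken from any of the research I mentioned:

```python
# Toy Hopfield network: one stored pattern, Hebbian weights, corrupted cue.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the stored "memory" (+1/-1 units)
W = np.outer(pattern, pattern).astype(float)        # Hebbian outer-product rule
np.fill_diagonal(W, 0)                              # no self-connections

cue = pattern.copy()
cue[:2] *= -1                                       # corrupt the first two bits

state = cue.copy()
for _ in range(5):                                  # synchronous threshold updates
    state = np.where(W @ state >= 0, 1, -1)

print("recovered the original:", np.array_equal(state, pattern))  # True
```

The interesting bit is that recall is driven by content, a partial or noisy version of the memory itself, rather than by an address, and that's the characteristic people point to when comparing these networks to human memory.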
There’s more to say, but I’ve digressed (hijacked?) enough. However, to dismiss computers as flat-out not learning seems premature to me without more of an explanation (that is, a more complete definition of “learning”).
Well…then DS, I have to ask for yet another round of "bear with me here for a moment". It sounds like it requires interpretation to discern a pattern as a pattern, right? (An example would be that famous photograph by R.C. James that looks like a bunch of random black and white blotches until it suddenly "resolves" into a Dalmatian. IIRC, psychologists call these Mooney images.)
In other words, if blotches are random until they're discerned as a pattern (and a pattern must be discerned before the information "in" the pattern can be decoded), then a pattern of electrical activity in RAM chips has to be discerned as a pattern as well (if I'm reading you correctly).
I'm probably completely missing your point in post # 215, but it would seem that meaning or interpretation (which may or may not be physical) is needed not only to decode state changes but to discern state changes as state changes.
OK – I’d appreciate it if you would keep after me until I nail this (or I see the error of my ways). To me, it is the use of the verb “discern” that implies “interpretation”, at least the way I think it’s being used here. A pattern exists one way or the other; in non-volatile computer memory, when power is no longer supplied, there is still a pattern left in the circuitry. That’s how your .MP3 player works (if you have one). In some sense, the .MP3 player interprets its memory to produce sound. When we start talking about human awareness we bring something extra into the conversation, and usually we – without acknowledging it – ratchet our meaning of “pattern”, “memory”, and “interpretation” up a notch. My only contention here is that this augmentation is not necessary for memory; memory could be recorded without awareness of it (see Zoe’s example of being scalded) or any interpretation at all.
This seems weird, since when I say "I remember my wedding", I'm recalling a host of sights, sounds, feelings, etc. It seems to me that most memories we have are like that. It is our amazing cognitive capability, which is almost always active and running full-tilt, that makes it hard to talk about any other way. And it's made all the more difficult because we've already passed through the developmental stages into adulthood; we can't undo the sensory fusion our bodies and brains perform, much less the associations that are formed with other memories, concepts, etc. I'd think that an infant, if an infant could report on their memories, would have a much easier time of separating things out. Of course, if an infant could make such a report, it would already imply a level of development that would preclude the separation.
I don’t think “decoding” of memory requires “interpretation”. It’s just a mechanism. There’s a whole lot going on “behind the scenes”, automatically, including decoding. It’s the “in front of the scenes” where interpretation has a role. I think I can imagine a situation in which I’m doing “memory comparison” without actually interpreting anything. I find it difficult to come up with an example – I think one reason it is difficult is exactly because I don’t perform any interpretation. Perhaps driving is one. I’m generally not aware of the need to shift into a lower gear. I don’t interpret the sounds of the motor. I am, however, aware of them on some level, so perhaps that’s a bad example. It’s funny…I was just thinking to myself, “I need to pay more attention to my everyday actions so as to come up with a good example.” But it seems that very act of paying attention bankrupts the point of the exercise.
Maybe a better example is sorting or arranging things. I once worked on an assembly line that packaged birth control pills. (My, what a shitty job that was. Paid decently, though; otherwise, who’d do it?) My job was to spread the cases on a conveyor belt, squaring them flush against a rail that ran along the far edge of the belt. After a time, I didn’t have to pay attention at all to do my job. My hands just “knew” which cases were out of line. There was obviously some sort of mental comparison happening. There was no “interpretation”, as I think we’d normally use the word. There wasn’t really any “awareness” either – but that’s not quite accurate, since I was aware of cases that were not arranged properly. Hrmf. I think I should shy away from personal anecdotes for these explanations and try to go “closer to the metal”. Amoeba didn’t work; maybe I should try bees instead…
Well…that turned into quite a ramble. Sorry about that. I almost feel that if I give enough examples, something is bound to come out right. Did I make it clearer or is it still muddled? If it’s still muddled, next time I’ll make a valiant attempt at brevity. I promise.
Now now, you’ve shown me the panties. Don’t just run off. Rather than sending you scurrying to find a paper or article that I could understand, can I just ask you about the mechanics of how this works? I am a very old-school programmer, and haven’t kept up much with the newest computer science advances (other than in modal logic). Back in the day, there was a program that was separate from the data. Programs were merely instruction sets that waited, Von Neumann style, to execute one by one when called by the processor. They executed sequentially, being retrieved from an area in memory where they had been stored. Can you, along those lines, describe the process by which the instructions nowadays are changed on the fly, where they are held and stored, how the changed instructions are reordered, and so forth?
I don't think it's a matter of dismissing computers; rather, it is a matter of computer scientists explaining themselves. Better yet, if the computers can learn, teach *them* how to explain it. The door is where the door is. The onus is upon them to step up and knock. I understand that computer scientists are hard at work developing computational learning theories, but until something concrete is advanced, it's just a big buzz, akin to the hype about string theory.
In my opinion, the fundamental problem with computers is that they always seek to be right. Learning involves, among other things, often being wrong. Sometimes intentionally. Or as Schulte puts it, “Efficient inductive inquiry is concerned with maximizing epistemic values other than convergence to the truth”. I imagine that what is attractive to computer scientists is the fact that learning can be modelled in terms of an examination of doxastic modalities. But modelling something and doing something are not the same.
But that is, surely, as unparsimonious and anti-Ockham a proposal as any you’ve ever heard? You have an explanation of “information” right there in physical terms, and you are striving to ignore it in favour of a cherished dualism, are you not?
Do we, at least, agree that a photograph of an object is as physical as the object?
Yet again: it is the location that distinguishes it, not what it’s made of. If it was in another location, it would associate two other memories, just as telephone lines of exactly the same material can connect different houses.
And further, if a photograph is a physical object (which I can rip up and eat), is photography a physical process? If so, is not the superposition of one transparency of, say, Darth Vader and another of, say, a watering can, a physical process too? Or sticking a piece of analogue tape (sound memory) to the transparency?
DS, with your indulgence, I’m going to once again play Edward Scissorhands with your post. I’m not trying to take you out of context, it’s just that, as usual, I agree with about 92% of your post; there are only a few nagging snippets that I need to isolate and address.
They’re not quite synonyms in common parlance, but then, this isn’t exactly common parlance. I’ll try to keep in mind that we’re using them as synonyms as the discussion continues.
I don't know how important it is, but I just want to make sure we're not losing sight of the fact that we "aware interpreters" specifically wrote the .MP3 player software just like we write all software, i.e., so that its output can be readily interpreted by us.
I agree, with the caveat that if the memory remains uninterpreted, then it isn’t memory and never was.
Yeah, we've entered territory where the paradoxes start flying thick and heavy. Perhaps "discern", "interpretation", and "awareness" are synonyms; we seem to be starting to use them as such. There's something powerful and weird in the "act of paying attention" that I haven't quite put my finger on yet. For now, I'd like to point out that while you don't perform any interpretation while shifting, etc., at some point you surely did, just like in your anecdote about the birth control pills. I drive a stick myself, and when I was learning, my "act of paying attention" was riveted on the stick shift: "what's that noise?", "Did it just engage? Is that the way it's supposed to feel?", etc. It was only after a lot of interpretation/attention that the motor output patterns were established and shunted off to lower brain areas.
Again, as in shifting while driving, there’s no interpretation, but there had to be quite a bit at some point. Since you brought up Heidegger, I’m sure he would have pointed out that you were “just hammering”, with no need for interpretation or attention until, well, until they were needed.
(Hmmm. After all this, I’m not so sure “discern” and “interpret” are synonyms after all. Bah.)
One thing I want to reiterate is that pattern seems to require, if not interpretation, something very akin to it. As you said, “There wasn’t really any “awareness” either – but that’s not quite accurate, since I was aware of cases that were not arranged properly.” IOW, you were aware of a pattern (or, more accurately, the lack of the proper pattern).
Bateson once said something along the lines of: “A message isn’t a message if no one can read it”. By the same token, information/(memory) isn’t information/(memory) without an interpreter, and a pattern isn’t a pattern without…what? An interpreter? A discerner? An attentioner?
Jeez. It’s 6:00 in the morning and already I could use a stiff shot of bourbon.
Sentient, I don't cherish dualism; in fact, I can make precious little sense of it. You're saying that there is an explanation of "information" in physical terms, and I'm saying it looks like what you're calling information may not be strictly physical. Maybe I'm not understanding what you mean by "physical terms". Does it mean that something is de facto physical if it's merely associated with the physical?
Does a photograph of, say, an apple, have a specific mass, volume, etc., in the same manner that an apple has its specific mass, volume, etc.? Yes.
Sentient, this is just plain wrong. If the brain laid down specific links with specific locations for every memory association, we’d never make it past toddlerhood. I believe your original analogy is flawed; a more accurate one would be to ask: “What makes a telephone call between two occupants, rather than two other occupants?"
A single neuron can have multiple functions, and its function can change over time, just as a house can have multiple occupants and its occupants can change over time. The location of the line (and the occupants) is irrelevant; what makes it a telephone call is that meaningful information has been exchanged. Just like with neurons.
I'll give this an attempt, though I find Voyager is better than I at putting it succinctly. Yes, programs are instructions. But instructions are nothing but data (perhaps you're thinking of the Harvard vs. Princeton architectures?). Code, like any other data, can be modified, even on the fly. I think what you're getting at is that instructions must have a certain form. I don't see the issue with this; certainly, information from our senses must take a certain form also. Unless you're bringing up a different point that I'm missing?
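To make the "code is data" point concrete, here's a toy von Neumann machine in which program and data share one address space and the first instruction rewrites a later one before it runs. This is purely my own illustration; the opcodes and addresses are invented for the example:

```python
# Toy von Neumann machine: instructions and data live in the same memory,
# so an instruction can be overwritten exactly like any other value.
memory = {
    10: ("STORE_INSTR", 12, ("LOAD", 51)),  # rewrite the instruction at address 12
    11: ("LOAD", 50),                       # acc <- memory[50], i.e. 42
    12: ("LOAD", 50),                       # as originally written; will actually run as LOAD 51
    13: ("HALT",),
    50: 42,
    51: 99,
}

acc, pc = 0, 10
while True:
    instr = memory[pc]
    op = instr[0]
    if op == "LOAD":
        acc = memory[instr[1]]
    elif op == "STORE_INSTR":               # an instruction is stored just like data
        memory[instr[1]] = instr[2]
    elif op == "HALT":
        break
    pc += 1

print(acc)  # 99: the instruction at address 12 was changed before it executed
```

Real machines complicate this with caches and memory protection, but nothing in the architecture forbids it.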
All I can say is that we’re working on it. Note that I’m decidedly not saying that it’s a solved problem. That’s one of the things that pisses me off about previous AI researchers – the overblown claims that make it so hard to now be taken seriously.
One might say the same about various branches of psychology or neuroscience, I think. The difference being, IMO, that computer scientists are working on a harder problem. The only thing we really have to go on is the very subject of the other disciplines; while they simply need to explain, we need to construct. Advancements are being made, but no one has it solved yet. One thing that irritates me about (many) philosophers (not including you in this group, Liberal, just making a general statement) is that they’re so quick to be dismissive. (Granted, I get irritated with computer science people who are dismissive of philosophy also. It’s just so…shallow and blinkered.) I like to point out that computation and information theory have only existed for about 60 years; how does that compare with, say, theories of dualism?
Yes, there’s a problem here. I don’t think it’s of the character that you’re giving it, though. At one level – that is, the hardware and machine code level – we have a very brittle system. However, when you start talking about induction, you’re talking about a whole different level. I don’t think the same restrictions apply. For example, your web browser may crash from time to time – the computer itself doesn’t totally fail (well, shouldn’t, if you use a decent OS). Now, I think there are other problems with induction and abstraction formation, but I think they apply across the board when attempting to explain cognition.
I'm not quite sure what to do with your point about modeling doxastic modalities. Belief-Desire-Intention (BDI) systems are all the rage in agent-based computing right now; have been for a decade or more. But I don't think that's what your issue is…I get the feeling it's more along the lines of "a model isn't the actual". To which I would respond, in a very Turing-like fashion: if you can't tell the difference, why not? This is not to say that there is such a model yet, but I get the sense that you're objecting on principled, not practical, grounds. I'd rather not debate qualia, which is where I think this ultimately leads, as I'm not sure there can be any resolution to that argument. At the same time, I feel the need to say that I think qualia are nothing more than part of the user illusion; I pretty much dismiss the philosophical zombie argument as nonsense. More explicitly, you cannot have (it makes no sense) a being exactly like another only without qualia.
As promised, I’ll try for brevity here. I could ramble on for pages, as there’s so much to say. Please don’t be offended if my responses seem brusque. Hrmf. It’ll be hard, but here goes:
Not synonyms, just suggestive and fluid.
Irrelevant.
I think this is the crux of it. Memory is still memory, whether it is interpreted or not. A definitional matter, I think.
Yes, I think it’s a matter of not being able to separate what we as developed humans do and what the minimum is that’s required to qualify.
Again, I think this is definitional matter that is the crux of the issue. If that’s how one defines “message”, then that’s it. Take a SETI message broadcast into the void. If no one ever receives it, is it a message? Is it a pattern? You could rely on the fact that we could read it or constructed it. But that just moves the issue; are migratory patterns of birds not patterns until someone notices them? I say they are, others might say they’re not. We can’t get anywhere until we agree upon a definition.
How’d I do? Did I keep it concise and understandable?
SentientMeat, I think we're on the same track, and I think you may not have qualified "location" the way you meant to. Let me try this one.
I think location does come into it somewhere, as physical things necessarily require a location. I don’t think operation of neurons is location dependent in the sense of a fixed location, beyond the fact that they must occupy space somewhere. Rather, the functional ability of neurons is more connection dependent; I’d reference isomorphism here (thanks, dotchan).
I believe this touches on the configuration/state distinction I made earlier. Location is not totally irrelevant to neurons, as they need to be connected to work their magic. Separate from that, the multiple functions and the changes in them are, if I understand what you're saying here, a matter of continuity. An individual neuron can become part of, or drop out of, a particular aggregate through the strength of its connections. Still the same neuron, different aggregate.
No points, just questions, one of which was about the mechanics of how program instructions are modified on the fly. It wasn't really a yes/no question, so merely affirming that they can indeed be modified on the fly really isn't an answer. Let's use a simplified model and say that address 100 contains an instruction to load the accumulator with the value from address 200. In what address might the instruction to change the value in 100 be? What happens if one new instruction is insufficient? What if, instead of merely loading a value, a value's increment needs to be loaded? Suppose for this example that address 101 is already committed to some essential instruction. For that matter, what happens if all addresses presently committed are essential?
I'm sorry, but did you just say that computer scientists are working on harder problems than neuroscientists? I think you might have forgotten neurosurgeons when you dismissed neuroscientists as mere theorists who don't have to construct anything. Granted, electronic circuitry is delicate and tedious to build, but I don't think it's any more problematic than building cells or neural connections.
I'm not sure, after reading that, exactly what your problem is with what I said.
Not “a model isn’t the actual”, but a model isn’t the implementation. In other words, just because you manage to model the weather doesn’t mean you’ve made an ecosystem.
I really think there's an issue of mixing levels here. Yes, atomic operations (or even "blocks" of code) may require an unchanging order of operation once begun. However, "learning" works with objects at a higher level that might change these "blocks" prior to execution. In a sense, it's akin to the fact that damage to a single rod or cone in the retina does not prevent visual processing.
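A rough sketch of what I mean by working at a higher level (my own made-up illustration, not any real learning system): the instructions inside each block never change; what changes, based on feedback, is which block gets selected for execution.

```python
# Two fixed "blocks" of code; the higher level only changes which one is bound
# to `current` before the next execution. No instruction is ever edited.
def strategy_a(x):
    return x + 1

def strategy_b(x):
    return x * 2

current = strategy_a

for trial in range(20):
    result = current(3)
    reward = 1 if result == 6 else 0    # pretend the environment rewards an output of 6
    if reward == 0:
        current = strategy_b if current is strategy_a else strategy_a

print(current.__name__)                 # settles on strategy_b, the rewarded block
```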
I’m not sure what else to say about it; I’m not trying to evade the question, I’m just at a loss. My apologies. Perhaps Voyager can answer better than I.
Yeah, I agree that the disagreement over this definition is fueling the struggle here. I can't help reading your statement, "Memory is still memory, whether it is interpreted or not", as equivalent to "Memory is still memory, whether it is remembered or not", which strikes me as nonsensical. And I'm still not seeing the fault in the logic of "A message isn't a message if no one can read it".
Since this seems to be the core issue, I’d like to approach it carefully and take it in small bites.
Before we go into this, can we find agreement on a couple things? First, just to be clear, whatever counts as a “noticing” or “interpreting” of patterns need not involve a someone, i.e., I want to quell any assumptions (not that you have any) that noticing/interpreting inherently requires a human mind (whatever that is).
Second, I’m still thinking that just noticing a pattern entails some sort of interpretation regardless of any interpretation of the information contained in or carried by the pattern.
For example, say we had one of those “magic eye” prints that at first seems like a random bunch of dots (hence the name, “Random Dot Stereogram”). Let’s also say that the image “hidden” in the stereogram was of the English word “Hello”. In this case, the instant I detected the pattern, I’d have my interpretation: “Oh, it’s a greeting”. But what happens if I detect the pattern, but the image turns out to be a runic glyph? In this case, I can detect a pattern, but not the information it holds.
Are discerning a pattern and discerning a pattern’s informational content distinct occurrences? If so, do we need to first tackle how pattern is discerned from randomness; order from chaos?
Ask 100 AI researchers, get 100 different answers. I think that cognition is computational and that we will eventually create an AI. I don’t think that answers your question, but I don’t feel that I can supply more specific claims, especially adequately. I apologize again; I’m not trying to be intentionally evasive.
Now, now. I wasn't putting CS researchers on a higher plane. Just commenting on the fact that, using an analogy as a means of explanation, it is harder to build a house and come up with the blueprints than to just come up with the blueprints. Two tasks are more difficult than one, by definition, if the one task is included in the set of two. Please, as you've chastised me earlier, don't "assign to me motive or intent to which you are not privy".
My “problem” is that your objections seem to be directed at the wrong level. The problem you raise is not a computer science problem per se, it is a general problem I think.
Some artificial life people would argue that it is. I'm not among them. I think a model is just that: a model. But there's no reason to say that an implementation must be a model; even if it coincides in parts, it can be a separate and distinct thing. Take artificial neural networks. While some implementations are indeed models of the brain, others specifically are not and serve a totally different purpose. If the functionality of a particular AI system is such that it is indistinguishable from a cognitive being, does it matter that its foundation might be the BDI model?
Yes, “noticing” entails interpretation. Does it mean that there is no pattern? A metaphysical question, I think. It seems to me that as soon as an observer (human or not) is required you end up with a circularity, as patterns of some sort are required to make sense of “observer”.
By the way, I really enjoy “magic eye” prints. I had a phenomenology professor who said he absolutely couldn’t decipher them, much to his chagrin and frustration. Weird. I felt bad for him.