Frylock wants an expert in the field. How about someone with a PhD in Computer Science, over 40 published papers in the field, and over 20 years of teaching Theory of Computation?
Are those credentials enough?
In those 20+ years I have heard all the usual proposals from students trying to get around the Halting Problem. Including exactly yours.
Again, what you think you are saying is not at all different from the actual Halting Problem in reality. Hence, the Wikipedia article is in full agreement with me. You seem to be focusing on a semantic difference that doesn’t actually mean anything in this context.
I only gave Goldbach’s conjecture as an example of how hard things become very quickly. (The 3n+1 problem is even simpler, but that uses input. And whether a program has input makes no difference to me of course, for reasons mentioned above.)
Sure, you can write a program to prove that the program
X = 1;
halts, but so what? Can it be done for interesting problems? Of course not. Lots of easy examples of that. Note that the complexity of a proof of halting for a program grows much, much faster than the size of the program. Think Busy Beaver for rate of growth. (It has to grow faster than any computable function for obvious reasons.) So even tiny programs are off the chart right away. Checking if programs halt is hard. (!)
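For concreteness, here is the kind of tiny-but-interesting program I have in mind, using Goldbach as above (a rough sketch of my own; the helper names and search order are just illustrative):

def is_prime(n):
    # trial division; slow but fine for a sketch
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    # True if the even number n is a sum of two primes
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while goldbach_holds(n):
    n += 2
# This loop runs forever exactly if Goldbach's conjecture is true, and halts
# exactly if a counterexample exists. So deciding whether this short program
# halts is as hard as settling the conjecture itself.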
Like I’ve said, I’ve heard it all before. Some people just get stuck on something. If you want to get unstuck, okay. If not, I’ll live.
When you get a Wikipedia article with “Frylock” in the title, let me know.
Why so haughty, ftg? Frylock doesn’t, so far as I know, think he has found a way to solve “the Halting Problem”; he is not some crank one step away from announcing “they laughed at Galileo and Einstein”.
Everything you quoted him as saying in post #28 is correct; you’re just jumping the gun and assuming he’s claiming things which he isn’t.
For the most part, this thread has been full of people saying things which are true, and mistakenly assuming someone else disagrees with them. Frylock recognized this a while back and took steps to clarify the misunderstanding.
I personally think that “understanding” is not relevant and far more un-definable than it appears.
There is a continuum of systems that could mimic human intelligence, here are 2 ends of the spectrum:
A completely static lookup table that maps any input (including current state which is made up of all prior inputs and outputs) to output
A set of calculations that has no static lookup table but that arrives at the exact same output as the lookup system
Humans are somewhere in between these, our brains are an optimization of energy with trade-offs being cost of storing pre-calculated responses to situations at some level of abstraction vs cost of calculating those responses by modeling the world, etc. etc.
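To make the two ends of the spectrum concrete, here is a toy sketch (entirely my own, with made-up inputs) of a lookup-table agent versus a calculating agent that produce the same outputs:

# End 1: a static table mapping (state, input) to output, filled in ahead of time.
lookup_agent = {
    ("start", "2+2"): "4",
    ("start", "3+5"): "8",
    # ...one entry for every possible history of inputs, in principle
}

# End 2: no table at all; the same outputs are computed on demand.
def calculating_agent(state, text):
    a, b = text.split("+")
    return str(int(a) + int(b))

print(lookup_agent[("start", "2+2")])      # "4", retrieved
print(calculating_agent("start", "2+2"))   # "4", computed

The calculating agent is the “compressed” form: one small rule stands in for an unbounded table of pre-stored answers.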
The man in the CR is, I think, uninteresting but can be effective.
A system of calculations, on the other hand, is interesting due to the fact that the same mechanism can be applied to many problems (a form of compression, lossy compression, because most responses will not be the perfect response given current goals but will be good enough).
I think that what humans call “understanding” is a by-product of the complexity of the abstraction mechanisms and the incorporation of one-self into the model being operated on. But I don’t think it’s something magical or even necessary to have AI. Whether we can duplicate what we call “understanding” is certainly an open question, but I think it’s just as possible we are fooling ourselves by thinking it’s something more than what it really is.
Searle’s answer doesn’t really change the situation at all. Kurzweil formulates an interesting variation of this (his “Turing typewriter”) in the chapter I was referring to above.
Searle believes that consciousness is caused by some specific biological mechanism, rather than emerging from the combined computations of billions of neurons. Maybe he’s right. But his Chinese Room doesn’t really tell us anything about whether it’s possible in reality to create a “zombie” that can procedurally simulate consciousness without actually being conscious.
I mean, sure, it’s possible to imagine some magic set of rules or database or whatever that would be able to simulate real conversation without consciousness or understanding, but if a machine ever passes a Turing test in reality, it seems clear to me that we would have to assume that anything capable of convincingly arguing in favor of its own consciousness is in fact conscious, not just simulating consciousness with a rule set of impossible size.
That’s not true. You could have an infinitely long program that can solve for inputs which themselves are infinitely long.
It is unfortunate that modern mathematics does not handle the concept of infinities correctly (due to some overzealousness of early thinkers). All kinds of infinities (and boundless finite numbers) look the same relative to a finite number, so we’ve been successful in bunching them all together in our ideas. (The aleph sequence excluded.) However, in truth, there are many different infinities. A “singly infinitely” long program can solve for inputs which are finite but not “singly infinite” or “doubly infinite.” A “doubly infinitely” long program can solve single infinities, but not double. Etc.
It’s not all the same. Especially in relation to Goedel, it’s infuriating what idiot conclusions people take themselves to when they exclude infinities.
Sure. You’ll note that if you read my post more carefully, I said exactly what you are saying here.
I don’t know what you mean by saying “It’s not all the same”, but then, I wasn’t very precise by what I meant by saying “it’s all the same” in the first place, so, whatever.
A Turing Test passed by a digital computer would not show that it has subjective awareness. Digital computers cannot have subjective awareness. The machinery of the computer is incidental to the output; the software can be run by paper and pencil, and the output, eg on a typewriter, manually contrived. I just cannot believe that if a hypothetical, very long-lived person with an inexhaustible supply of paper and pencils dry ran the Turing Test passing algorithm (conversing with an infinitely patient subject) there would come into existence a subjectively aware entity. What would its material ground be? A billion discarded sheets? The pile still on the desk? Or the actually subjectively aware person doing the work?
Subjective awareness is meaningful – it has intrinsic meaning. There is no intrinsic meaning in what is going on inside a digital computer. The meaning has been put there by the engineers who designed it, the people who built it, and the programmers who created the software. The electrical currents with changing voltages, charges moving across gaps, and stored charges that constitute the working computer are doing the same thing as the slower pencil and paper dry-runner and there is nothing added when a computer executes an algorithm to convince us that subjective awareness has been created.
Just considering the typical contents of the conscious human mind: sensory, emotional, thinking, involuntarily predicting, comprehension and so on, then comparing that with the resources of the computer – binary memory, a store of reference data, an algorithm, and hardware to run the algorithm and control input and output mechanisms – strongly suggests we are dealing with a category error if we equate the two.
If the brain is the material ground of subjective awareness as claimed by materialist philosophy, how does it differ in essence from a digital computer? Apart from ‘I don’t know’, I think the fact it is analogue in nature could be the key. Digital circuits are really analogue but have been constrained by design to respond to threshold voltages, currents and charges. It is this constraint that reduces the computer one or more logical levels from physical nature to the embodiment of an idealisation, and that is why the work it does is essentially (and logically) the same as a pencil and paper dry run of the algorithm it executes, and that may be why the brain can be subjectively aware while digital computers cannot.
(To Alex_Dubinsky: Perhaps you only mean by “It’s not all the same” to re-emphasize what you said in the first part of your post. In that case, as I hope you can see by re-reading the post you were responding to, I never stated anything disagreeing with you to begin with. Indeed, I was making the same remark about orders of infinities that you were. [Another instance in this thread of everyone saying true things and only imagining disagreements. Only, in this case, we happened to be saying the same true things…])
I’m afraid you are assuming your conclusion. First, how do I know that you have subjective awareness (and vice versa)? Only by observing your responses to stimuli, which is exactly what the Turing Test is measuring. If a computer responds exactly like a person with subjective awareness, how can we say it does not have this? Such a computer would clearly have the ability to monitor its own thought processes, and therefore be able to report on them.
Second, consider a simulation of the brain. Assuming we have some way of measuring all neuronic connections, we would certainly be able to - with enough paper and pencils - simulate the brain. Now the brain changes state, but so does a computer. Some of those papers would have to keep track of new connections, while in a computer some of the paper would have to keep track of weights and even possibly internal reprogramming. If your model is a fixed program processing inputs into outputs, your model is too simplistic.
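Here is a toy sketch (grossly simplified, with numbers I just made up) of what I mean by some of the paper keeping track of weights and connection changes:

# hypothetical connections and activity levels for three "neurons"
weights = {("a", "b"): 0.5, ("b", "c"): -0.2}
activity = {"a": 1.0, "b": 0.0, "c": 0.0}

def step(weights, activity, rate=0.1):
    # propagate activity over the current connections
    new_activity = {n: 0.0 for n in activity}
    for (src, dst), w in weights.items():
        new_activity[dst] += w * activity[src]
    # crude Hebbian-style update: connections between co-active cells strengthen
    new_weights = {k: w + rate * activity[k[0]] * new_activity[k[1]]
                   for k, w in weights.items()}
    return new_weights, new_activity

weights, activity = step(weights, activity)

Every one of those updates could be done with pencil and paper; the state (the weights) changes as the simulation runs, just as the brain’s does.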
Why does subjective awareness have intrinsic meaning? In any case, a computer that could pass the Turing test would be far more heuristic than algorithmic.
You seem to be confusing applications with resources. If you want to be reductionist, our brain is a very complex set of neurons and connections, with input from our senses and hormones and outputs to control some of our body. All the stuff you mention comes from running these neurons.
In general use, it is pretty much impossible to know what is going on inside a modern computer. To debug a problem you need to rigidly control the input, and often to turn off many of the features inside the computer. I do this for a living.
Our brains respond to electrical and chemical signals also. However, any analog signal can be modeled by a digital signal, given enough bits of precision. Kind of like your MP3 player. If you think understanding digital logic is enough to understand the deep workings of a computer, you haven’t had the pleasure of debugging hardware. Signals couple, wave forms get misshapen by slow or fast rise and fall times, you get glitches which can lead to unpredictable behaviors, etc., etc.
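To make the “enough bits of precision” point concrete, here is a toy sketch (my own, nothing rigorous) of quantizing a continuous value onto a digital scale:

import math

def quantize(x, bits):
    # map a value in [-1, 1] onto one of 2**bits discrete levels
    levels = 2 ** bits
    step = 2.0 / levels
    return round(x / step) * step

analog = math.sin(2 * math.pi * 0.3)   # a continuous-valued sample
print(quantize(analog, 4))             # coarse: 16 levels
print(quantize(analog, 16))            # fine: 65536 levels, the error shrinks

Add bits and the digital version tracks the analog value as closely as you like.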
If you think our brains are somehow special in being able to experience subjective thoughts, you need to explain how they are fundamentally distinct from the brains of dogs, which do not. More complex, certainly, but really different?
I’d think an intelligent computer would not only have to be able to think, but be able to observe itself thinking. Clearly we can think without observation - for instance when our subconscious solves a problem for us. Does your definition of intelligence require consciousness, or is our subconscious intelligent also? Maybe an AI program would go off and find a new chess move without any more examination of the details than I have when I solve an anagram, or than we all have when we drive with our minds elsewhere.
What am I totally wrong about? I agree with you that programs of length K can solve the halting problem for all programs of length L, even when L is infinite, as long as K >= 2^L (and thus K is larger than L). I also agree with you that programs of length K cannot solve the halting problem for all programs of length K. What’s more, this is what I said in the very post you quoted (“You could not make an infinitely long program which correctly solved the Halting Problem for all inputs including infinitely long programs such as itself.”; the following mention of Cantor’s Theorem should have shown my awareness of the issue of different orders of infinity and how it plays into this).
I’m afraid you’ll have to be more explicit about what it is you think we disagree on…
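To make the lookup-table idea behind “K >= 2^L” concrete, here is a sketch (purely illustrative; nobody could actually fill the table in, and the entries below are stand-ins) of a decider that works only on programs up to some fixed length:

# For a fixed bound L there are only finitely many programs of length <= L,
# so a table of their halting answers exists, even though no algorithm can
# compute its contents.
HALTS = {
    "X = 1": True,
    "while True: pass": False,
    # ...one entry per program of length <= L
}

def bounded_halting_decider(source):
    # works only for the finitely many programs the table covers
    return HALTS[source]

No contradiction with the Halting Problem arises, because this decider is itself longer than the programs it covers and so never has to answer about itself.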
I think one of the things that makes it difficult to think that a human brain is just computation is how un-computational it feels. For example, the color red seems so different from the color blue. It doesn’t seem possible to capture that difference with 2 different numbers, or a different set of voltages. But in our brain, it’s just a different set of neurons that are firing and/or firing at a different rate.
Everything feels so real and full of depth and multi-dimensional, etc., but it’s just patterns of neurons firing, that’s all (unless there is something magic in there we aren’t aware of).
Reporting on its ‘thought processes’ would be part of the behavior that lets it pass the Test, but that doesn’t strengthen its claim to have interior, subjective consciousness. In fact the computer is using some data in its memory, along with portions of reference data and the algorithm, to formulate words which report, more or less accurately, on the contents of other, selected (by the algorithm) parts of memory. This is purely a mechanical process, and is wholly deterministic, and predictable in principle by dry-running the program.
Why does it have to be merely ‘a fixed program processing inputs into outputs?’ I said it is an algorithm. That means it can do anything that is Turing computable, including modifying the program, and take full advantage of any available memory storage technologies. This includes all possible heuristics. I agree, we could in principle simulate the brain with paper and pencil. At any given time one would be either making a mark on a sheet, erasing a mark, moving a sheet, drinking coffee, sleeping, lying on the beach on holiday etc. Are you proposing that the sum of these activities, crucially with the inclusion of the sheet marking/erasing activity, is summoning a spirit? A self-aware being? Located where and when? When is it having a subjective thought? Does it cease to be when you put down your pencil? Does it come back into subjective existence and know itself anew when you pick up your pencil? Are you in fact God that you can do this divine stunt?
Because, to paraphrase Thomas Nagel, it is like something to be a subject of experience. It matters to you. It is important to you. It is the one thing that allows you to be. Without it you are not. All meaning flows from it and is predicated on it. It is the basis of everything. If that is not intrinsically meaningful, what is?
Me too. Why is this passage relevant?
Self-awareness hinging on glitches? A bit of a stretch. Analog computation is fundamentally different from digital computation because the former is continuous while the latter is discrete; it’s the difference between real numbers and integers. And analogue is the way the world is – digital, in the computing sense, is artificial. That is where my point about computers being the embodiment of an idealisation comes from. Anyway, I’m not trying to make a strong case for the analogue/digital distinction being the crucial difference between the embodied brain and the digital computer – it’s just a suggestion.
You don’t think dogs have subjective awareness? They’re zombies, dead inside? That’s rather chauvinistic.
I didn’t mention intelligence, though to pass the Test the program would have to simulate intelligence. No-one knows if the subconscious mind is subjectively aware or not, because its experiences, inasmuch as they are separate from conscious experience (a very simplistic, and misleading, notion in my opinion), are not directly accessible to the conscious mind.