How did the universe and consciousness create themselves from nothing?

No, you act conscious, which does not prove you are conscious. It is like an animal acting aggressive and scaring off a predator, even if it isn’t actually aggressive.

BTW, one place where I agree with you is that complexity by itself does not lead to consciousness. Conscious entities may be complex, but complex systems won’t necessarily become conscious. There was an Arthur C. Clarke story about the telephone system becoming conscious, not one of his best. It might be that a quite simple mutation in a complex brain leads to consciousness, but we don’t know for sure.

As I understand it, the Chinese room cannot even emulate a Turing machine (no tape, no state) and thus is useless in the argument. Searle didn’t seem to have a clue as to how computers actually work. Turing machines can modify their programming. The Chinese room can’t.

Intelligence and consciousness, it seems to me, didn’t come from a more complex brain, but from a mutation introducing a new connection in our brains. I have been in Computer Science long enough not to buy that increasing complexity and speed leads to anything but the ability to run bigger programs faster.


I took AI in college rather closer to 704 time than to DeepQA time, and we don’t seem to have made much progress towards true AI. We have solved just about all the problems described in my class, and they are pretty much sitting in my smartphone. I was on a business trip when the 8086 was announced and the USA Today said that AI was just around the corner thanks to the vast power of that machine.

The clock rate of the processor I was on the design team for was nearly a billion times faster than the computer I used in high school. And it could obviously do a lot more faster. But qualitatively different? I don’t think so.

The basic principle used by AI researchers back when I took it was that solving a bunch of problems that looked like they require intelligence (like chess, or planning a route) would somehow lead to an intelligent system when collected together. Clearly not true. Profitable, but not true.
Our computing ranch did simulations using over a thousand processors, all connected. Probably more computing power than existed in the world when I graduated from college. No intelligence emerged that I ever noticed.

Is there a simple argument which almost trivializes subjective consciousness, which makes it just an extension of unconscious intelligence?

An intelligent animal, even if not “conscious,” will form models of its environment. It will develop models to describe the behavior of its predators, prey, and potential mates. Robotic weapons will want models of counter-weapons, and so on.

As soon as such an intelligent animal or machine develops a model of its own behavior, isn’t it well on the road to subjective consciousness?

If that’s the case, evolution cannot select for conscious experience.

I have a feeling we’ve been here before… But no, the Chinese Room is Turing complete. That is quite obvious from Searle’s original presentation; many newer versions somewhat omit it, but in the original, Searle imagined being given three batches of Chinese symbols, the script, the story, and the questions, and together with them, rules in English how to correlate them, as well as an unlimited supply of scratch paper.

Well, there’s an attempt to answer some questions about conscious experience, known as Higher Order Thought (HOT) theory, in which conscious thoughts just are those that have first-order mental states as their objects. But that’s basically just a brute postulate—it’s not any more enlightening, ultimately, than panpsychism, as no reason is given why higher order thoughts would be conscious. I mean, you accept that an animal could form models of its environment without being conscious, so why couldn’t it also form models of its own behavior the same way? Why would it need to even go the first step on the road to subjective consciousness?

Lots of things model themselves, and we don’t think about them as being conscious. A thermostat keeps track of its own temperature, and switches the heating on or off based on that. Is it conscious? Is using a thermostat slavery? I don’t think so. But then, what needs to happen in addition to self-monitoring in order to give rise to consciousness? Do things just again have to get suitably much more complex, and then… what, exactly?
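To make the point concrete, here’s the entirety of a thermostat’s “self-monitoring,” as a minimal Python sketch (the function name, set-point, and hysteresis band are my own, not any real device’s API). It tracks one internal state and one sensed value, and nothing more:

```python
# A thermostat "monitors itself" in the thinnest possible sense:
# it compares a sensed value against a set-point and flips a state.
# The names and numbers here are illustrative, not from a real device.

def thermostat_step(temp, heating_on, setpoint=20.0, hysteresis=0.5):
    """Return the new heater state given the current temperature."""
    if temp < setpoint - hysteresis:
        return True          # too cold: switch heating on
    if temp > setpoint + hysteresis:
        return False         # too warm: switch heating off
    return heating_on        # inside the dead band: keep current state

print(thermostat_step(18.0, False))  # True
print(thermostat_step(22.0, True))   # False
```

Whatever extra ingredient consciousness requires, it’s plainly not in there.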

Hmm, that’s not quite right. I think this should rather be phrased ‘conscious experiences are those that are contents of higher order thoughts’.

Maybe I’m not understanding your position.

Watson is your example of a system with emergent properties that cannot be engineered and built up from the lower levels, correct? The emergent property of intelligence only shows up at some level of complexity, and there is no way to create or build that by setting up the rules and functionality at the lower levels of the system, correct?

Watson was built in the following way:
1 - The desired end state was conceptually mapped out. The team wanted Watson to be able to answer English-language questions by searching its store of information and providing correct answers based on the input.

2 - The layers of functionality to support that goal were then iteratively built out and adjusted.

The “emergent” property did not just emerge; it was designed into the system and then built up through the layers. Each layer, including the final one, was adjusted based on measuring the correctness of the mapping from input to output until that mapping met the success criteria.

The layers of the system were measured and adjusted to arrive at the desired goal.
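That measure-and-adjust loop can be sketched in a few lines of Python. This is a toy of my own, nothing like Watson’s actual training procedure: measure the system’s accuracy on known cases, and keep whichever tuning of a layer scores best.

```python
# A toy version (mine, not IBM's) of the measure-and-adjust loop:
# tune one layer's parameter until the output meets the success criterion.

def accuracy(weight, cases):
    """Fraction of (question, answer) cases a toy scorer gets right."""
    return sum(1 for q, a in cases if round(q * weight) == a) / len(cases)

def tune(cases, candidates):
    """Pick the candidate weight that maximises measured accuracy."""
    return max(candidates, key=lambda w: accuracy(w, cases))

cases = [(1, 2), (2, 4), (3, 6)]           # the "correct" mapping is doubling
best = tune(cases, [0.5, 1.0, 2.0, 3.0])
print(best)                                 # 2.0
```

The point being: nothing in the loop requires the engineer to understand *why* a given setting works, only to measure that it does.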

How does that support your argument that Watson’s emergent property of intelligence can’t be designed and built up from the lower layers?

For the umpteenth time: if an organism acts as if it were conscious, and this is advantageous, then evolution will select for it. Why is actually being conscious significant?

Yes we have. And my question is: can the Chinese room execute rules written on the scratch paper? The significance of a stored-program computer is that it can write, modify, and execute programs, not just execute code that a person has specifically written. These days, when programs sit in memory areas protected from modification, people don’t get this. I’m an old compiler writer, and I started with a computer whose programs you could modify in place, so I do.
That the room can’t understand Chinese gets a lot less obvious when you can rewrite the program to do more than blindly respond to an input by looking up a response.
Language translation and speech recognition programs today (which did not exist when the analogy was conceived) work by building a model of the semantics of the query, not just the syntax. I don’t see how the Chinese room without modification of the rules would be able to build semantic models.

The question of this thread is how consciousness arose. A frequent contention was that it was evolutionarily selected for. To be selected, it must confer a distinct survival advantage. If it’s possible to behave ‘as if’ one were conscious without actually being conscious, being conscious does not confer a distinct survival advantage. Consequently, evolution is blind towards consciousness (by which I don’t mean to imply that evolution has a sense of sight). Hence, consciousness can’t have been selected for.

It can do anything any modern computer can do, ex hypothesi. It doesn’t need to be able to execute rules from the scratch paper to do so. If you’re not intending to claim that only computers of a certain architecture are candidates for understanding, and thus, there could be classes of Turing complete machines that can possess understanding while other classes do not, it doesn’t matter in what way it achieves Turing machine equivalence, only that it does.

A native speaker of Chinese could emulate a Turing machine. Since the Chinese Room possesses equivalent capacities, so can it. In principle, you could simulate a Turing machine with the Chinese Room by using it to implement the Game of Life cellular automaton—it simply doesn’t matter.
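As an aside, the entire Game of Life rule the Room would have to follow fits in a few lines, which makes the architecture-independence point vivid. A minimal Python sketch (the function name and the set-of-cells representation are my own choices):

```python
# Conway's Game of Life is Turing complete, yet its whole rule is tiny.
# `live` is a set of (x, y) coordinates of live cells.

from collections import Counter

def life_step(live):
    """Compute one generation of Conway's Life."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next step with exactly 3 neighbours,
    # or with 2 neighbours if it was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}   # horizontal line of three cells
print(life_step(blinker))            # flips to a vertical line of three
```

If following *that* rule book by hand doesn’t require understanding, the Room’s lack of understanding can’t hinge on which rule book it follows.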

Always been present? This poses the interesting question as to when always starts, or infinity, if you like. And what was there before the universe was created?

One theory is that time and space were created at the time of the Big Bang. But this poses the question: created from what? We are back to the law of conservation of energy, assuming that the law is always applicable.

I, uh, don’t think that’s what “always been present” would mean.

That’s one thing even the greatest minds can only pose theories about. Nobody knows.

IMO, there was no beginning or ending, just a big void. That it started with the Big Bang is the best theory we have.

Creation from nothing. How is that possible?

Once again, it’s absolutely meaningless and circular to declare with no independent basis that the components used to build intelligent systems already contain “a nontrivial part of the behavior of an intelligent agent”.

There are only two ways to interpret such a position, depending on one’s definition of that peculiar phrase. One, that it’s trivially true by definition, since the components created an intelligent system, ergo, they must have embodied essential elements of its behavior. Two, that without a demonstration of actual intelligence in these lower components – and at some level of decomposition that obviously becomes impossible – it contradicts the definition of what an emergent property is: a property which a collection or complex system has, but which the individual members do not have.

I contend that the first definition is a misleading truism, and that the second is the pertinent criticism of your fallacy.

No, I’m not, and I don’t see how that follows from the arguments above.

I trust that the “nontrivial part of the behavior of an intelligent agent” argument was put to rest above. A calculator is not intelligent by any rational definition, nor can it be described as a “nontrivial part of the behavior of an intelligent agent”, again for the reasons above. It doesn’t even mean anything.

No. That characterization is another case of a superficial truism. The real statement is: machine “A” can perform a task, while machine “B” absolutely cannot. That is a qualitative difference, not merely a quantitative one.

The answer is “yes”, though it may seem paradoxical. Are there significant qualitative differences between a 7-year-old child and a 70-year-old man? But there’s really not much difference between a 7-year-old and an 8-year-old, nor between a 69-year-old and a 70-year-old. Exactly where in this continuum of time does a whimpering child suddenly become an old person, perhaps an educated, wise and accomplished one with a storied career? Or should we conclude that this never happens, for lack of a defined transition point?

That should teach you not to pay attention to the popular media! :wink:

Though the truth is, as I’m sure you well know, a number of prominent AI researchers were also overly optimistic back in the 60s and 70s. It was understandable extrapolation from the rapid progress that had been made starting from nothing, but problems like natural language understanding soon proved intractable. That was one of the more notorious areas, perhaps because one application, natural language translation, soon revealed the vast scope of the problem domain and sometimes produced humorous results!

The AI on your smartphone exists because of a number of independent factors: it’s a pretty fast platform by the standards of most computers even a few decades ago, the systems were developed using tools and methodologies developed on today’s high-performance computers, in some cases the phone is just a thin client offloading requests to servers, and many other factors including the legacy of many decades of AI research. The fact that your phone is a small thing that fits in your pocket doesn’t diminish the significance of some of what it can do.

I’m not sure where you’re going with this, but obviously a sufficiently powerful hardware platform is a necessary but not sufficient condition for AI. I think there has been more than one sci-fi story about connecting all the computers in the universe and suddenly you have … God, or something! No, if you connect all the computers in the universe, the only thing you’re guaranteed to have is a lot of connected computers!

For interesting emergent properties like intelligence to manifest, the system must also be suitably organized, primarily meaning having appropriate software functionality. What powerful computers do, with their fast processors and large amounts of RAM, is enable such suitable software to be created and run, and underlying that, enable the development of advanced tools and methodologies that are necessary to the creation of such software. We would not be where we are today if hardware developers had not produced all the performance advances that they did.

You’re missing a critically important point. You appear to be trying to create some sort of gotcha that has me arguing that Watson’s intelligent behavior was not actually designed but somehow arose by magic. That’s not at all what I’m saying, nor is that relevant to what an emergent property is. Let’s recall the basic definition: an emergent property is a property which a collection or complex system has, but which the individual members do not have.

Consider for a moment the very nature of layers of abstraction, a concept I mentioned earlier. This is a software engineering principle that has a very specific meaning in computer science, implying complete isolation and independence of the functional layers except through formally defined interfaces between them. Each layer performs a well-defined function that is simple enough to be well understood and developed and thoroughly tested in isolation. It has no knowledge of where its inputs came from or where its outputs are going. Its only responsibilities are to accept messages from the lower layer, perform its functions, and forward the results upward. Each layer is a bit like the little man in Searle’s Chinese Room, applying rules to its inputs and creating the appropriate outputs and knowing nothing else. The capabilities of the system are due to the behavior of the layer stack in the aggregate, and are often surprising and not predictable from the functionality of any particular layer.
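To make the “each layer knows nothing else” point concrete, here is a toy Python sketch of a layer stack. It’s entirely my own illustration, nothing like Watson’s actual code: each layer is just a function from its input to its output, connected only through a fixed interface.

```python
# A toy layer stack (my own illustration): each layer sees only the
# previous layer's output and knows nothing about the overall system.

def compose_layers(layers):
    """Stack layers bottom-up into a single system."""
    def system(message):
        for layer in layers:
            message = layer(message)   # each layer: input in, output up
        return message
    return system

# Three "dumb" layers; none of them can do the whole job on its own.
tokenise = lambda text: text.split()
uppercase = lambda tokens: [t.upper() for t in tokens]
rejoin = lambda tokens: " ".join(tokens)

pipeline = compose_layers([tokenise, uppercase, rejoin])
print(pipeline("hello layered world"))   # HELLO LAYERED WORLD
```

The aggregate behavior lives in the stack, not in any one layer, which is all the emergence claim amounts to here.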

The architecture of Watson’s DeepQA engine is something like this writ large. It applies more than 100 different techniques to a dozen major different aspects of the problem, all of them independent and linked only by a common integrating architecture that the team hoped – but certainly did not know with any assurance – would produce the desired results. And moreover, even then, the system didn’t work at all well until it underwent extensive training, adding in this case adaptive enhancements that improved the performance of the system overall.

I thought I had already explained it quite clearly, but I will try again. Intelligence is characterized by an array of behaviors—as recognized by Turing. So, an intelligent agent can solve certain tasks—verbal ones, but also more general behavioral ones. Thus, anything that can solve a certain task that is contained in the array of tasks an intelligent agent is capable of solving, exhibits a nontrivial part of the behavior of an intelligent agent.

This is neither trivial, nor does it commit me to saying this part of a system is intelligent itself. For one, a rock does not show a nontrivial aspect of intelligent behavior; while intelligent beings can also fall down and are subject to gravitational forces, that is not constitutive of their intelligence. For two, such a system does not need to show other aspects of the behavior of an intelligent agent—it need not be capable of holding a discussion about the weather, for instance.

It’s the same with an individual water molecule, or a single starling. The former is not liquid, the latter does not flock; but the latter’s behavior is a necessary precondition for flocking behavior, and the former’s bonding properties are necessary for liquidity at room temperature.
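To illustrate, a single starling’s local rule can be written down without any mention of a flock. A toy one-dimensional Python sketch follows; the rules and coefficients are my own simplification, not a real boids model:

```python
# One bird's local rule (my own toy, positions and velocities are plain
# floats to keep it minimal): nudge toward the neighbours' centre and
# match their average heading. No "flock" appears anywhere in the rule.

def steer(position, velocity, neighbours, cohesion=0.05, alignment=0.1):
    """One bird's velocity update; `neighbours` is a list of (pos, vel)."""
    if not neighbours:
        return velocity                      # nobody nearby: fly straight
    avg_pos = sum(p for p, _ in neighbours) / len(neighbours)
    avg_vel = sum(v for _, v in neighbours) / len(neighbours)
    return (velocity
            + cohesion * (avg_pos - position)    # pull toward the group
            + alignment * (avg_vel - velocity))  # match the group's heading
```

Flocking only shows up when many birds run this rule against each other; the rule itself is the starling-level precondition, just as bonding properties are the molecule-level precondition for liquidity.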

So it’s the calculator’s capacity to calculate, the starling’s following certain behavioral rules, and the water molecule’s bonding properties that make them possible components of an intelligent agent, a flock, or a pool, and we can easily see how this works. What I’m asking for is the analogue of such properties for consciousness.

Practitioners of AI call systems ‘intelligent’ in a way that’s neither trivial—there’s a dividing line between AI systems and non-AI systems—nor commits them to showing that their systems are generally intelligent, which are, according to you, the only two possible interpretations of attributing some partial intelligence to a system.

You’re too trusting.

There’s a set of behaviors that we use to define ‘intelligence’. Calculating is part of that set (witness the ‘mathematical ability’ portion of any intelligence test). Hence, a calculator exhibits some of the elements of that set. This is very simple and clear.

In fact, we may just use the sections of an intelligence test as a first guide to the parts of intelligent behavior:
[ul]
[li]Verbal Intelligence.[/li]
[li]Mathematical Ability.[/li]
[li]Spatial Reasoning Skills.[/li]
[li]Visual/Perceptual Skills.[/li]
[li]Classification Skills.[/li]
[li]Logical Reasoning Skills.[/li]
[li]Pattern Recognition Skills.[/li]
[/ul]

Anything exemplifying one or more elements of this set possesses a nontrivial part of the behavior of an intelligent agent.

This is just a false assertion. Machine B, provided it functions normally, absolutely can perform that task; it just takes longer to do so. Anything else just wreaks havoc with the concept of computational equivalence, which forms the foundation of computer science.

Emergent intelligence, as I’ve seen it used in the past, seems to mean that sufficiently complex systems will become intelligent without human intervention. That I don’t buy. The way you use the term above is fine with me.
I don’t blame AI researchers, though. Why work on the real problem, which probably won’t get solved during their lifetimes, when you can work on the popular notion of AI at Google and earn a ton of money?
Also, I was in hardware design, so thank you. :slight_smile:

I fail to see the point of the analogy then. If the Chinese room is equivalent to a computer, stating that a computer can’t become intelligent because the Chinese room can’t is begging the question.
The Chinese room in its usual form cannot become intelligent - actually understand Chinese - because it was limited to responding to an input card with a lookup table. And that clearly is too simple to have emergent intelligence.
As in our last discussion this table - set of response cards - can grow without bounds since an input card can refer to an input card three back - or four back or five back. I’ll agree that any system that can select an output from all possible input sequences can mimic intelligence without being intelligent. But that’s a rather absurd requirement.

I, uh, would like to know what it does mean.

You: ‘Always been present? This poses the interesting question as to when always starts, or infinity, if you like. And what was there before the universe was created?’

I don’t think it does necessarily pose that last question. I see no reason to rule out a different possibility: that it was, in fact, always there, such that you’re asking the wrong question with What Was There Before It Was Created.

Take a classic: say a guy insists that, for all you know, he’s never beaten his wife. “This raises an interesting question,” someone replies: “When did you stop?”

No! No, it doesn’t raise that question! Heck, maybe he’s always been unmarried! Responding with a quick ‘okay, but what was it like before that’ seems to make no sense at all if it’s always been the case!

It’s a cogent argument—if Frank’s a human and can do what any human can do, and Frank can’t fly, then humans can’t fly. But that’s not actually quite the argument Searle is making.

Searle’s Chinese Room was never limited in that way. Moreover, the target of Searle’s argument is more narrowly understanding, or intentionality, rather than intelligence. Searle simply argues that no matter what program he follows, he will never understand what the shapes he manipulates—the Chinese symbols—mean; therefore, following a program is not sufficient for understanding. Since following a program is all any computer does, no computer is capable of understanding.

I think the argument isn’t actually successful, yet curiously, its conclusion is true. The argument is defeated by the so-called systems reply: Searle forms merely a part of the system; understanding (if it is possible) is not located in any of the parts, but in the whole system.

Searle replies that one could internalize the whole system, given a keen enough brain, yet still, one wouldn’t understand Chinese. That, too, is right, but doesn’t establish the conclusion that following a program is insufficient for understanding: in memorizing the program and executing it mentally, Searle essentially creates a simulation of a Chinese-speaking person in his head. Whether that person understands Chinese is not determined by whether Searle does.

Not a gotcha; I think I graciously offered up that I’m not fully understanding your position, because it did seem to lead to magic.

I believe we are in agreement at this point. Emergent properties like intelligence, flocking, liquidity, weather, etc. are:
1 - High level behavior of complex systems
2 - Not obvious or visible by studying the attributes of the components in isolation
3 - Able to be simulated, duplicated, engineered by adjusting the attributes of the lower layers and components until the correct high level behavior arises. There is no magic.
Watson’s high level behavior could be measured and compared against the desired result which allowed the team to adjust whichever layer they needed so that the system would produce the desired high level behavior. We are in agreement on this.

Things like Watson, flocking, water, etc. can all be measured so that we can build simulations.

This is a critical question:
How/what do we measure to create a conscious system? How do we detect the internally available and externally hidden attributes that are so critical to be able to adjust our system so we arrive at the right answer (e.g. conscious states)?

The thing which occurs to me, reading this thread, is that there may be a sort of self-reinforcing way in which consciousness makes itself difficult to understand: the things which tend to force themselves into our awareness are aberrations, while the things which are working smoothly go unnoticed. And this is multiply enforced by the fact that this is both how consciousness works and what it is for. And then you want to turn it on itself?

I think if we could communicate how Godel’s Incompleteness Theorem and Relativity are connected in a visually intuitive way, we could get a better handle on consciousness.