  #151  
Old 02-14-2019, 09:52 PM
SigMan is offline
Guest
 
Join Date: Aug 2015
Location: Texas
Posts: 915
That's one thing even the greatest minds can only pose theories about. Nobody knows.

IMO, there was no beginning or ending, just a big void. That it started with the Big Bang is the best theory we have.

Creation from nothing. How is that possible?
  #152  
Old 02-15-2019, 12:20 AM
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 9,646
Quote:
Originally Posted by Half Man Half Wit View Post
That something shows a nontrivial part of the behavior of an intelligent agent isn't the same statement as that something is intelligent, no matter how you look at it ...

... Whenever something emerges, the necessary preconditions of its emergence are rooted in its components.
Once again, it's absolutely meaningless and circular to declare with no independent basis that the components used to build intelligent systems already contain "a nontrivial part of the behavior of an intelligent agent".

There are only two ways to interpret such a position, depending on one's definition of that peculiar phrase. One, that it's trivially true by definition, since the components created an intelligent system, ergo, they must have embodied essential elements of its behavior. Two, that without a demonstration of actual intelligence in these lower components -- and at some level of decomposition that obviously becomes impossible -- it contradicts the definition of what an emergent property is: a property which a collection or complex system has, but which the individual members do not have.

I contend that the first definition is a misleading truism, and that the second is the pertinent criticism of your fallacy.

Quote:
Originally Posted by Half Man Half Wit View Post
So you're saying that whenever a system is called artificially intelligent---since it clearly doesn't possess the full behavior of an intelligent agent---that's just false? It's an all-or-nothing deal? People in AI research just lie, or are mistaken?
No, I'm not, and I don't see how that follows from the arguments above.

Quote:
Originally Posted by Half Man Half Wit View Post
Sure. But I was making claims about behavior of that particular assembly of logic gates. Assemble them differently, use them differently, and they'll show a different behavior. Doesn't impinge on the fact that as a calculator, it's used to do something that an intelligent being could do, and that could be used to succeed at a Turing test.
I trust that the "nontrivial part of the behavior of an intelligent agent" argument was put to rest above. A calculator is not intelligent by any rational definition, nor can it be described as a "nontrivial part of the behavior of an intelligent agent", again for the reasons above. It doesn't even mean anything.

Quote:
Originally Posted by Half Man Half Wit View Post
In other words, "Literally the only thing you add by increasing speed and power is that they can do so faster."

Doing the same thing faster is still first and foremost doing the same thing. That flatly isn't a qualitative difference.
No. That characterization is another case of a superficial truism. The real statement is: machine "A" can perform a task, while machine "B" absolutely cannot. That is a qualitative difference, not merely a quantitative one.

Quote:
Originally Posted by Half Man Half Wit View Post
But you can get to a billion times increase by doubling speed successively. So neither of those computers does something qualitatively new, yet somehow, somewhere a qualitative difference appears.

Now, I grant you that this is analogous to the argument that has consciousness just somehow, somewhere appearing once you pile on enough complexity. But it's only analogous in its fallacious nature.
The answer is "yes", though it may seem paradoxical. Are there significant qualitative differences between a 7-year-old child and a 70-year-old man? But there's really not much difference between a 7-year-old and an 8-year-old, nor between a 69-year-old and a 70-year-old. Exactly where in this continuum of time does a whimpering child suddenly become an old person, perhaps an educated, wise and accomplished one with a storied career? Or should we conclude that this never happens, for lack of a defined transition point?

Quote:
Originally Posted by Voyager View Post
I took AI in college rather closer to 704 time than to DeepQA time, and we don't seem to have made much progress towards true AI. We have solved just about all the problems described in my class, and they are pretty much sitting in my smartphone. I was on a business trip when the 8086 was announced and the USA Today said that AI was just around the corner thanks to the vast power of that machine.
That should teach you not to pay attention to the popular media!

Though the truth is, as I'm sure you well know, a number of prominent AI researchers were also overly optimistic back in the '60s and '70s. It was understandable based on extrapolation from some of the rapid progress that had been made starting from nothing, but problems like natural language understanding soon became intractable. That was one of the more notorious areas, perhaps because one of its applications, natural language translation, soon revealed the vast scope of the problem domain and sometimes led to humorous results!

The AI on your smartphone exists because of a number of independent factors: it's a pretty fast platform by the standards of most computers of even a few decades ago; the systems were developed using tools and methodologies built on today's high-performance computers; in some cases the phone is just a thin client offloading requests to servers; and there are many other factors, including the legacy of many decades of AI research. The fact that your phone is a small thing that fits in your pocket doesn't diminish the significance of some of what it can do.

Quote:
Originally Posted by Voyager View Post
The clock rate of the processor I was on the design team for was nearly a billion times faster than the computer I used in high school. And it could obviously do a lot more faster. But qualitatively different? I don't think so.

The basic principle used by AI researchers back when I took it was that solving a bunch of problems that looked like they require intelligence (like chess, or planning a route) would somehow lead to an intelligent system when collected together. Clearly not true. Profitable, but not true.
Our computing ranch did simulations using over a thousand processors, all connected. Probably more computing power than existed in the world when I graduated from college. No intelligence emerged that I ever noticed.
I'm not sure where you're going with this, but obviously a sufficiently powerful hardware platform is a necessary but not sufficient condition for AI. I think there has been more than one sci-fi story about connecting all the computers in the universe and suddenly you have ... God, or something! No, if you connect all the computers in the universe, the only thing you're guaranteed to have is a lot of connected computers!

For interesting emergent properties like intelligence to manifest, the system must also be suitably organized, primarily meaning having appropriate software functionality. What powerful computers do, with their fast processors and large amounts of RAM, is enable such suitable software to be created and run, and underlying that, enable the development of advanced tools and methodologies that are necessary to the creation of such software. We would not be where we are today if hardware developers had not produced all the performance advances that they did.

Quote:
Originally Posted by RaftPeople View Post
Maybe I'm not understanding your position.

Watson is your example of a system with emergent properties that can not be engineered and built up from the lower levels, correct? The emergent property of intelligence only shows up at some level of complexity and there is no way to create or build that by setting up the rules and functionality at the lower levels of the system, correct?
You're missing a critically important point. You appear to be trying to create some sort of gotcha that has me arguing that Watson's intelligent behavior was not actually designed but somehow arose by magic. That's not at all what I'm saying, nor is that relevant to what an emergent property is. Let's recall the basic definition: an emergent property is a property which a collection or complex system has, but which the individual members do not have.

Consider for a moment the very nature of layers of abstraction, a concept I mentioned earlier. This is a software engineering principle that has a very specific meaning in computer science, implying complete isolation and independence of the functional layers except through formally defined interfaces between them. Each layer performs a well-defined function that is simple enough to be well understood and developed and thoroughly tested in isolation. It has no knowledge of where its inputs came from or where its outputs are going. Its only responsibilities are to accept messages from the lower layer, perform its functions, and forward the results upward. Each layer is a bit like the little man in Searle's Chinese Room, applying rules to its inputs and creating the appropriate outputs and knowing nothing else. The capabilities of the system are due to the behavior of the layer stack in the aggregate, and are often surprising and not predictable from the functionality of any particular layer.
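To make the layering idea concrete, here is a minimal sketch in Python (the layer names and rules are invented purely for illustration; this is not Watson's code or any real system) of a stack in which each layer applies only its own rule to the message it receives and forwards the result upward:

Code:
# Minimal sketch of a layer stack: each layer applies its own rule to the
# message it receives and passes the result upward, knowing nothing about
# the layers above or below it. Layer names are made up for illustration.

class Layer:
    def __init__(self, name, rule):
        self.name = name
        self.rule = rule           # the layer's own well-defined function

    def handle(self, message):
        return self.rule(message)  # apply the rule, forward the result

def run_stack(layers, message):
    # Messages flow upward through the stack, one layer at a time.
    for layer in layers:
        message = layer.handle(message)
    return message

stack = [
    Layer("tokenize",  lambda text: text.lower().split()),
    Layer("filter",    lambda words: [w for w in words if len(w) > 3]),
    Layer("summarize", lambda words: f"{len(words)} significant words"),
]

print(run_stack(stack, "Each layer is like the man in the Chinese Room"))

No layer knows anything about the stack as a whole; whatever the stack does in the aggregate comes purely from the composition.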

The architecture of Watson's DeepQA engine is something like this writ large. It applies more than 100 different techniques to a dozen major different aspects of the problem, all of them independent and linked only by a common integrating architecture that the team hoped -- but certainly did not know with any assurance -- would produce the desired results. And moreover, even then, the system didn't work at all well until it underwent extensive training, adding in this case adaptive enhancements that improved the performance of the system overall.
  #153  
Old 02-15-2019, 01:33 AM
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,481
Quote:
Originally Posted by wolfpup View Post
Once again, it's absolutely meaningless and circular to declare with no independent basis that the components used to build intelligent systems already contain "a nontrivial part of the behavior of an intelligent agent".
I thought I had already explained it quite clearly, but I will try again. Intelligence is characterized by an array of behaviors---as recognized by Turing. So, an intelligent agent can solve certain tasks---verbal ones, but also more general behavioral ones. Thus, anything that can solve a certain task that is contained in the array of tasks an intelligent agent is capable of solving, exhibits a nontrivial part of the behavior of an intelligent agent.

This is neither trivial, nor does it commit me to saying this part of a system is intelligent itself. For one, a rock does not show a nontrivial aspect of intelligent behavior; while intelligent beings can also fall down and are subject to gravitational forces, that is not constitutive of their intelligence. For two, such a system does not need to show other aspects of the behavior of an intelligent agent---it need not be capable of holding a discussion about the weather, for instance.

It's the same with an individual water molecule, or a single starling. The former is not liquid, the latter does not flock; but the latter's behavior is a necessary precondition for flocking behavior, and the former's bonding properties are necessary for liquidity at room temperature.

So it's the calculator's capacity to calculate, the starling's following certain behavioral rules, and the water molecule's bonding properties that make them possible components of an intelligent agent, a flock, or a pool, and we can easily see how this works. What I'm asking for is the analogue of such properties for consciousness.

Quote:
No, I'm not, and I don't see how that follows from the arguments above.
Practitioners of AI call systems 'intelligent' in a way that's neither trivial---there's a dividing line between AI systems and non-AI systems---nor commits them to showing that their systems are generally intelligent, which are, according to you, the only two possible interpretations of attributing some partial intelligence to a system.

Quote:
I trust that the "nontrivial part of the behavior of an intelligent agent" argument was put to rest above.
You're too trusting.

Quote:
A calculator is not intelligent by any rational definition, nor can it be described as a "nontrivial part of the behavior of an intelligent agent", again for the reasons above. It doesn't even mean anything.
There's a set of behaviors that we use to define 'intelligence'. Calculating is part of that set (witness the 'mathematical ability' portion of any intelligence test). Hence, a calculator exhibits some of the elements of that set. This is very simple and clear.

In fact, we may just use the sections of an intelligence test as a first guide to the parts of intelligent behavior:
  • Verbal Intelligence.
  • Mathematical Ability.
  • Spatial Reasoning Skills.
  • Visual/Perceptual Skills.
  • Classification Skills.
  • Logical Reasoning Skills.
  • Pattern Recognition Skills.

Anything exemplifying one or more elements of this set possesses a nontrivial part of the behavior of an intelligent agent.

Quote:
No. That characterization is another case of a superficial truism. The real statement is: machine "A" can perform a task, while machine "B" absolutely cannot. That is a qualitative difference, not merely a quantitative one.
This is just a false assertion. Machine B, provided it functions normally, absolutely can perform that task; it just takes longer to do so. Anything else just wreaks havoc with the concept of computational equivalence, which forms the foundation of computer science.
  #154  
Old 02-15-2019, 03:05 AM
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 44,921
Quote:
Originally Posted by wolfpup View Post

I'm not sure where you're going with this, but obviously a sufficiently powerful hardware platform is a necessary but not sufficient condition for AI. I think there has been more than one sci-fi story about connecting all the computers in the universe and suddenly you have ... God, or something! No, if you connect all the computers in the universe, the only thing you're guaranteed to have is a lot of connected computers!

For interesting emergent properties like intelligence to manifest, the system must also be suitably organized, primarily meaning having appropriate software functionality. What powerful computers do, with their fast processors and large amounts of RAM, is enable such suitable software to be created and run, and underlying that, enable the development of advanced tools and methodologies that are necessary to the creation of such software. We would not be where we are today if hardware developers had not produced all the performance advances that they did.
Emergent intelligence, as I've seen it used in the past, seems to mean that sufficiently complex systems will become intelligent without human intervention. That I don't buy. The way you use the term above is fine with me.
I don't blame AI researchers though. Why work on the real problem, which probably won't get solved during their lifetimes, when you can work on the popular notion of AI at Google and earn a ton of money?
Also, I was in hardware design, so thank you.
  #155  
Old 02-15-2019, 03:16 AM
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 44,921
Quote:
Originally Posted by Half Man Half Wit View Post
It can do anything any modern computer can do, ex hypothesi. It doesn't need to be able to execute rules from the scratch paper to do so. If you're not intending to claim that only computers of a certain architecture are candidates for understanding, and thus, there could be classes of Turing complete machines that can possess understanding while other classes do not, it doesn't matter in what way it achieves Turing machine equivalence, only that it does.

A native speaker of Chinese could emulate a Turing machine. Since the Chinese Room possesses equivalent capacities, so can it. In principle, you could simulate a Turing machine with the Chinese Room by using it to implement the Game of Life cellular automaton---it simply doesn't matter.
I fail to see the point of the analogy then. If the Chinese room is equivalent to a computer, stating that a computer can't become intelligent because the Chinese room can't is begging the question.
The Chinese room in its usual form cannot become intelligent - actually understand Chinese - because it was limited to responding to input cards with a lookup table. And that clearly is too simple to have emergent intelligence.
As in our last discussion, this table - the set of response cards - can grow without bound, since an input card can refer to an input card three back - or four back, or five back. I'll agree that any system that can select an output from all possible input sequences can mimic intelligence without being intelligent. But that's a rather absurd requirement.
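To put a rough number on "grow without bound" (the figures below are made up purely for illustration): if there are k distinct input cards and the reply may depend on the last n cards shown, a pure lookup table needs an entry for every possible sequence:

Code:
# Rough illustration: a reply table keyed on the whole recent history.
# With k possible input cards and replies depending on the last n cards,
# the table needs k**n entries. The numbers below are assumptions.

k = 3000                      # assumed number of distinct input cards
for n in range(1, 6):
    print(f"history of {n} cards: {k**n:.2e} table entries")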
  #156  
Old 02-15-2019, 03:57 AM
Brayne Ded is offline
Guest
 
Join Date: Nov 2017
Location: Europe
Posts: 293
Always present

Quote:
Originally Posted by The Other Waldo Pepper View Post
I, uh, don’t think that’s what “always been present” would mean.
I, uh, would like to know what it does mean.
  #157  
Old 02-15-2019, 06:03 AM
The Other Waldo Pepper is offline
Guest
 
Join Date: Apr 2009
Posts: 16,132
Quote:
Originally Posted by Brayne Ded View Post
I, uh, would like to know what it does mean.

You: ‘Always been present? This poses the interesting question as to when always starts, or infinity, if you like. And what was there before the universe was created.’

I don’t think it does necessarily pose that last question. I see no reason to rule out a different possibility: that it was, in fact, always there, such that you’re asking the wrong question with What Was There Before It Was Created.

Take a classic: say a guy insists that, for all you know, he’s never beaten his wife. “This raises an interesting question,” someone replies: “When did you stop?”

No! No, it doesn’t raise that question! Heck, maybe he’s always been unmarried! Responding with a quick ‘okay, but what was it like before that’ seems to make no sense at all if it’s always been the case!
  #158  
Old 02-15-2019, 06:12 AM
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,481
Quote:
Originally Posted by Voyager View Post
I fail to see the point of the analogy then. If the Chinese room is equivalent to a computer, stating that a computer can't become intelligent because the Chinese room can't is begging the question.
It's a cogent argument---if Frank's a human and can do what any human can do, and Frank can't fly, then humans can't fly. But that's not actually quite the argument Searle is making.

Quote:
The Chinese room in its usual form cannot become intelligent - actually understand Chinese - because it was limited to responding to input card with a lookup table. And that clearly is too simple to have emergent intelligence.
Searle's Chinese Room was never limited in that way. Moreover, the target of Searle's argument is more narrowly understanding, or intentionality, rather than intelligence. Searle simply argues that no matter what program he follows, he will never understand what the shapes he manipulates---the Chinese symbols---mean; therefore, following a program is not sufficient for understanding. Since following a program is all any computer does, no computer is capable of understanding.

I think the argument isn't actually successful, yet curiously, its conclusion is true. The argument is defeated by the so-called systems reply: Searle forms merely a part of the system; understanding (if it is possible) is not located in any of the parts, but in the whole system.

Searle replies that one could internalize the whole system, given a keen enough brain, yet still, one wouldn't understand Chinese. That, too, is right, but doesn't establish the conclusion that following a program is insufficient for understanding: in memorizing the program and executing it mentally, Searle essentially creates a simulation of a Chinese-speaking person in his head. Whether that person understands Chinese is not determined by whether Searle does.
  #159  
Old 02-15-2019, 11:29 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,438
Quote:
Originally Posted by wolfpup View Post
You're missing a critically important point. You appear to be trying to create some sort of gotcha that has me arguing that Watson's intelligent behavior was not actually designed but somehow arose by magic. That's not at all what I'm saying, nor is that relevant to what an emergent property is. Let's recall the basic definition: an emergent property is a property which a collection or complex system has, but which the individual members do not have.
Not a gotcha; I think I graciously offered up that I'm not fully understanding your position, because it did seem to lead to magic.

I believe we are in agreement at this point. Emergent properties like intelligence, flocking, water, weather, etc. are:
1 - High level behavior of complex systems
2 - Not obvious or visible by studying the attributes of the components in isolation
3 - Able to be simulated, duplicated, engineered by adjusting the attributes of the lower layers and components until the correct high level behavior arises. There is no magic.


Watson's high level behavior could be measured and compared against the desired result which allowed the team to adjust whichever layer they needed so that the system would produce the desired high level behavior. We are in agreement on this.

Things like Watson, flocking, water, etc. can all be measured so that we can build simulations.

This is a critical question:
How/what do we measure to create a conscious system? How do we detect the internally available and externally hidden attributes that are so critical to be able to adjust our system so we arrive at the right answer (e.g. conscious states)?
  #160  
Old 02-15-2019, 06:45 PM
jackdavinci is offline
Guest
 
Join Date: Apr 2000
Location: Port Jefferson Sta, NY
Posts: 8,021
The thing which occurs to me, reading this thread, is that there may be a sort of self-reinforcing way in which consciousness makes itself difficult to understand. In the respect that the things which tend to want to be in our awareness are aberrations, and the things which are working smoothly go unnoticed. And this is multiply reinforced by the fact that this is both how consciousness works and what it is for. And then you want to turn it on itself?

I think if we could communicate how Godel's Incompleteness Theorem and Relativity are connected in a visually intuitive way, we could get a better handle on consciousness.
  #161  
Old 02-15-2019, 08:33 PM
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 44,921
Quote:
Originally Posted by Half Man Half Wit View Post
It's a cogent argument---if Frank's a human and can do what any human can do, and Frank can't fly, then humans can't fly. But that's not actually quite the argument Searle is making.
Is he also assuming that a human with unrestricted abilities would be able to understand the Chinese symbols? Because understanding dead languages without referents to known languages or things such as pictures has proven to be difficult.
Quote:
Searle's Chinese Room was never limited in that way. Moreover, the target of Searle's argument is more narrowly understanding, or intentionality, rather than intelligence. Searle simply argues that no matter what program he follows, he will never understand what the shapes he manipulates---the Chinese symbols---mean; therefore, following a program is not sufficient for understanding. Since following a program is all every computer does, hence, no computer is capable of understanding.
Work on language translation has shown that it can't be done effectively without semantic understanding. Now if the room works only by card lookup, the semantics have been understood outside the room. But this is clearly impossible, so no Chinese room like this could be built. Now if you allow the power of a Turing Machine, it seems that the translation - assuming it can be done - requires semantic understanding. Semantic models are clearly different from the models we humans use.
Even for the simpler case of a compiler, there is a lexical analysis and syntax phase, and then there is a semantic phase where for instance the compiler "understands" that the structure is a do loop and thus emits the appropriate code. I put understands in quotes since this is a trivial level of understanding.
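As a toy sketch of that quoted sense of "understands" (a made-up mini-compiler and a made-up target "assembly", not any real toolchain): after lexing and parsing, the semantic phase recognizes a node as a counted loop and emits loop code for it:

Code:
# Toy "semantic phase": having parsed the source into a tree, the compiler
# recognizes the node as a counted loop and emits the corresponding code.
# This is the trivial, quoted sense of "understands" from the post above.

def emit(node):
    if node["kind"] == "do_loop":                 # semantic recognition
        body = "\n".join("  " + emit(s) for s in node["body"])
        return (f"LOAD {node['start']}\nLABEL top\n{body}\n"
                f"INCR\nJUMP_IF_LE {node['end']} top")
    elif node["kind"] == "print":
        return f"PRINT {node['var']}"

loop = {"kind": "do_loop", "start": 1, "end": 10,
        "body": [{"kind": "print", "var": "i"}]}
print(emit(loop))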

Quote:
I think the argument isn't actually successful, yet curiously, its conclusion is true. The argument is defeated by the so-called systems reply: Searle forms merely a part of the system; understanding (if it is possible) is not located in any of the parts, but in the whole system.

Searle replies that one could internalize the whole system, given a keen enough brain, yet still, one wouldn't understand Chinese. That, too, is right, but doesn't establish the conclusion that following a program is insufficient for understanding: in memorizing the program and executing it mentally, Searle essentially creates a simulation of a Chinese-speaking person in his head. Whether that person understands Chinese is not determined by whether Searle does.
No problem with any of this. In fact if Searle was a super brain and internalized the system enough so that it went into his subconscious, he might find that he "understands" Chinese just as well as he understands English, even if the method for understanding is more explicit than his method of understanding English.
Anyone with children knows how they refine and debug their program for understanding and speaking, and in quite standard ways.
  #162  
Old 02-15-2019, 08:44 PM
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 44,921
Quote:
Originally Posted by RaftPeople View Post
This is a critical question:
How/what do we measure to create a conscious system? How do we detect the internally available and externally hidden attributes that are so critical to be able to adjust our system so we arrive at the right answer (e.g. conscious states)?
My old field had the concept of observability - the ease with which internal states could be observed externally. This is not automatic for complex systems, and has to be built in. The workings of our subconscious are not observable - we mostly see the outputs, except when we are dreaming, perhaps. We do have observability of conscious thoughts. At what level is an interesting question, since we don't seem to observe some thoughts - like pulling your hand away from the hot stove - until they have happened already. Latency is a big problem with digital observability also. If something happens - like an illegal operation is detected - you want to observe the internal state, but often you are getting the state a certain number of clock cycles after the event.
This applies to most of the internal state. There are also diagnostic buses which can observe without messing up calculations, but which observe only a few vital items. I wonder if our monitoring of our own thoughts works this way.
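A crude software analogue of that kind of limited, delayed visibility might look like the sketch below (all names invented; it only illustrates the idea): the module's state is hidden, a single diagnostic tap is exposed, and what you read lags the event by a fixed number of cycles:

Code:
# Sketch of limited observability: internal state is hidden except for one
# tapped signal, and reads of the tap lag the actual event by a fixed number
# of cycles (the latency problem mentioned above). All names are invented.

from collections import deque

class Module:
    def __init__(self, tap_latency=3):
        self._state = 0                        # internal state, not directly observable
        self._tap = deque([0] * tap_latency)   # diagnostic tap with fixed delay

    def step(self, inp):
        self._state = (self._state * 31 + inp) % 1000   # arbitrary internal update
        self._tap.append(self._state)          # the tap records the new state...
        return self._tap.popleft()             # ...but readers see it cycles later

m = Module()
for cycle, inp in enumerate([5, 7, 11, 13, 17]):
    print(f"cycle {cycle}: tap reads {m.step(inp)}")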
  #163  
Old 02-15-2019, 09:40 PM
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 9,646
Quote:
Originally Posted by Half Man Half Wit View Post
I thought I had already explained it quite clearly, but I will try again. Intelligence is characterized by an array of behaviors---as recognized by Turing. So, an intelligent agent can solve certain tasks---verbal ones, but also more general behavioral ones. Thus, anything that can solve a certain task that is contained in the array of tasks an intelligent agent is capable of solving, exhibits a nontrivial part of the behavior of an intelligent agent ...

... So it's the calculators capacity to calculate, the starling's following certain behavioral rules, and the water molecule's bonding properties that make them possible components of an intelligent agent, a flock, or a pool, and we can easily see how this works. What I'm asking for is the analogue of such properties for consciousness.
OK, one final comment, and then we may as well drop this as we're never going to agree.

My citing an electronic calculator to make my point about emergent properties was intended to illustrate the fact that here we have a device made up of logic gates that doesn't actually do anything interesting, yet those same logic gates can be assembled in much larger numbers and a much more sophisticated architecture to do amazing things traditionally associated with intelligent behavior. I was surprised that you took off down this rat-hole about a calculator containing "a nontrivial part of the behavior of an intelligent agent" because any remote semblance of truth such a statement might have is coincidental and utterly irrelevant, a fact that I can illustrate in the following way.

Let me introduce you to a gadget I had many years ago that was made by minicomputer maker Digital Equipment Corporation called a "Logic Lab". It was basically a box with a few dozen logic gates wired to a plugboard, and it also had some lights, a variable clock, and pushbutton and binary-state switches. It came with a bunch of patchcords, and the idea was that you could play around with it and get a feel for how logic gates worked. By itself it did nothing, and even if you wired it up to utilize every single gate and plug in it, it still did pretty much nothing except flash some pretty lights. It was just a low-level learning tool.

The argument that I made about the components of the calculator could just as easily apply to the Logic Lab, which does NOT "calculate" -- in fact, it does nothing useful at all. Yet its components, too, just like the calculator, could be used to build a stored-program computer. In fact, quite literally so, since Digital was a major maker of "Flip Chip" logic modules that they also used to build minicomputers, and except for the logic gates in the Logic Lab being slow inexpensive ones, they might well have ended up in an actual computer.
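To spell out the "same gates, different assembly" point in miniature (a Python sketch; it has nothing to do with DEC's actual Flip Chip modules): the very same NAND gate, wired in the classic nine-gate arrangement, becomes a one-bit full adder, while wired at random on a plugboard it just blinks lights:

Code:
# The identical NAND gate, wired one way, computes nothing useful; wired as
# below (the classic nine-gate arrangement, with shared sub-terms), it forms
# a one-bit full adder. The gates themselves are the same either way.

def nand(a, b):
    return 0 if (a and b) else 1

def xor_via_nand(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t)), t   # return t so it can be shared

def full_adder(a, b, cin):
    s1, t1 = xor_via_nand(a, b)              # t1 = NAND(a, b)
    total, t2 = xor_via_nand(s1, cin)        # t2 = NAND(s1, cin)
    carry = nand(t1, t2)                     # carry = (a AND b) OR (s1 AND cin)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))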

You've gone to great lengths repeatedly to show that "calculating" is in your terms part of the set of behaviors that comprise intelligence. Please explain how the Logic Lab -- which performs no function at all -- similarly "possesses a nontrivial part of the behavior of an intelligent agent".

If you want to argue that it embodies the concept of Boolean logic, then your argument reduces to the absurdity that even a single NAND gate "possesses a nontrivial part of the behavior of an intelligent agent", which is, again, both circular and obviously absurd. If that reductio ad absurdum isn't sufficient, we can take it down to the level of an individual transistor, and thence to individual atoms of silicon -- all of which you would have to argue possess some tiny latent part of the intelligence that they can be assembled to instantiate.

Quote:
Originally Posted by Half Man Half Wit View Post
Quote:
Originally Posted by wolfpup View Post
A calculator is not intelligent by any rational definition, nor can it be described as a "nontrivial part of the behavior of an intelligent agent", again for the reasons above. It doesn't even mean anything.


Quote:
Originally Posted by wolfpup View Post
No. That characterization is another case of a superficial truism. The real statement is: machine "A" can perform a task, while machine "B" absolutely cannot. That is a qualitative difference, not merely a quantitative one.

There's a set of behaviors that we use to define 'intelligence'. Calculating is part of that set (witness the 'mathematical ability' portion of any intelligence test). Hence, a calculator exhibits some of the elements of that set. This is very simple and clear.

In fact, we may just use the sections of an intelligence test as a first guide to the parts of intelligent behavior:
  • Verbal Intelligence.
  • Mathematical Ability.
  • Spatial Reasoning Skills.
  • Visual/Perceptual Skills.
  • Classification Skills.
  • Logical Reasoning Skills.
  • Pattern Recognition Skills.

Anything exemplifying one or more elements of this set possesses a nontrivial part of the behavior of an intelligent agent.


This is just a false assertion. Machine B, provided it functions normally, absolutely can perform that task; it just takes longer to do so. Anything else just wreaks havoc with the concept of computational equivalence, which forms the foundation of computer science.
"Wreaks havoc with the concept of computational equivalence'"? No, it absolutely does not. Turing machines were a brilliant but purely theoretical concept intended to model computation, not real computers in the real world. As you are aware a Turing machine is presumed to have infinite memory and infinite time to perform a given computation. Real computers are finite-state automata, a subset of idealized Turing machines, in which the number of available states and the speed at which they can be switched is crucially important to what they can really do and thus how much utility they have, if any.

So to say that two machines are each theoretically Turing complete and thus Turing equivalent ignores the theoretical infinities and simply constitutes a statement that in the context of this discussion is completely trivial and absolutely useless. It's just a fancier way of trying to say, "the IBM 704 and the Watson platform are qualitatively the same because they are both computers". It's like saying that my cordless drill and a subway tunnel-boring machine are qualitatively the same thing because they both drill holes in things.
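A back-of-the-envelope illustration of what "machine B can do it too, just slower" means in wall-clock terms (the numbers are assumptions for illustration, not benchmarks of the 704 or of Watson):

Code:
# Illustrative only: a task a fast machine finishes in a few seconds, run on
# a platform a billion times slower. The figures are assumptions, not data.

seconds_per_year = 365.25 * 24 * 3600
task_seconds = 3.0          # assume ~3 s for one answer on the fast machine
slowdown = 1e9              # assume a billion-fold slower platform

years = task_seconds * slowdown / seconds_per_year
print(f"{years:.0f} years per answer on the slow machine")   # roughly 95 years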
  #164  
Old 02-16-2019, 03:30 AM
Mijin is offline
Guest
 
Join Date: Feb 2006
Location: Shanghai
Posts: 8,813
This is why I think it's important to ground all this in what we know: what model do we have and what predictions or inferences can be made from that model?

With consciousness, too many people IMO want to handwave away any gaps in our understanding. I think it is an overreaction to the fact that the world is awash with people writing long screeds about how their consciousness interacts with the universal energy vibration to bring about quantum chakras or whatever.
But you can think all that new age nonsense is nonsense, and that there are still fundamental gaps in our understanding of consciousness at the same time.

In this case, saying that consciousness is an emergent property is a fancy way of restating the problem. "Human brains are conscious, neurons don't appear to be, so something significant happens as we connect up millions of neurons". Sure, what?
I have a hypothetical system of N neurons forming C connections... What properties am I looking for in this system to predict whether it will be conscious? If the answer is all we can do is look at the organism's behavior, then again, that means we don't have an understanding yet.
  #165  
Old 02-16-2019, 05:36 AM
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 15,620
Quote:
Originally Posted by Half Man Half Wit View Post
The question of this thread is how consciousness arose. A frequent contention was that it was evolutionarily selected for. To be selected, it must confer a distinct survival advantage. If it's possible to behave 'as if' one were conscious without actually being conscious, being conscious does not confer a distinct survival advantage. Consequently, evolution is blind towards consciousness (by which I don't mean to imply that evolution has a sense of sight). Hence, consciousness can't have been selected for.
But what kind of control group is there? If we cannot even identify what consciousness actually is, any assertions about its effect on selection are meaningless. There is no way to determine the presence of self-awareness in any living thing, so we cannot tell whether it ever made a difference in evolution. And there is no known way to determine without a doubt that a smart machine actually has consciousness or is merely doing a very good job of displaying all the observable indications (mainly, that it claims to be self-aware).

So far, we have only been able to build machines that do what we tell them to do. Sometimes, we will see AI doing some really strange stuff, but it only does stuff that we ask it to do. Van Gogh went out and did his paintings without being prompted. Heinlein wrote novels without being asked to (sort of). Eno composes those odd things mostly right out of his head (apart from the use of found objects). We have yet to build a computer that just does stuff (though some of the spybots like Alexa exhibit some very strange behavior).

Granted, every action a creature takes is (probably) externally driven in some way (or a response to internal chemistry). The behavioral drivers are obscure and complex, but it is nearly impossible to point to an action that can be undeniably identified as "entirely from within". It seems to be a combination of chemical (emotional) and intellectual factors that generate inspiration: how we would model the chemical factors to build a genuinely creative machine seems enormously difficult.

But even if we could build a creative machine, and that machine informed us that it had consciousness, could we even be sure? I tell you that I have this impenetrable bubble of consciousness and you have to take me at my word. Is my consciousness identical or comparable to yours, and how can we be sure of that?
  #166  
Old 02-16-2019, 05:57 AM
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,481
Quote:
Originally Posted by Voyager View Post
Is he also assuming that a human with unrestricted abilities would be able to understand the Chinese symbols? Because understanding dead languages without referents to known languages or things such as pictures has proven to be difficult.
No. He is exactly using the fact that humans can't understand language from being merely confronted with the symbols---that the symbols don't wear their meaning on their sleeves, so to speak---to make his case. I mean, that's how the basic argument gets off the ground!

Anyway, I'm not really interested in further explicating an argument I think is flawed, so I'll drop this thread here.

Quote:
Originally Posted by wolfpup View Post
You've gone to great lengths repeatedly to show that "calculating" is in your terms part of the set of behaviors that comprise intelligence. Please explain how the Logic Lab -- which performs no function at all -- similarly "possesses a nontrivial part of the behavior of an intelligent agent".
I already have:

Quote:
Originally Posted by Half Man Half Wit View Post
Each logical operation is a partial aspect of intelligence; that's how the analysis of thought went, historically, from Aristotle to George Boole's Laws of Thought. The individual logic gates thus carry in them the preconditions for the emergence of intelligence in the same way as water molecules underlie the emergence of fluidity: there is absolutely no mystery in the fact that you can combine them to replicate the behavior of intelligent beings ('weak AI').
Quote:
Originally Posted by wolfpup View Post
If that reductio ad absurdum isn't sufficient, we can take it down to the level of an individual transistor, and thence to individual atoms of silicon -- all of which you would have to argue possess some tiny latent part of the intelligence that they can be assembled to instantiate.
This just doesn't follow. Take the following situation: you hand me a box of puzzle pieces, and claim that, once assembled, they form a picture of the Eiffel Tower. I have no reason to disbelieve you: even though it's hard to just take a look at the pieces and know what picture they show, it's well within the realm of plausibility that they do show a picture, once assembled, and that may well be one of the Eiffel Tower.

How do I know this? Because the individual pieces carry fragments of a picture; they're the sort of thing that one could assemble such that a picture emerges out of that assembly. None of the individual pieces carry that picture; it's a property of the combined assembly. Yet, the properties of the individual pieces are such as to make that emergence possible.

The same with intelligence. Calculators, NAND-gates, and so on, are transparently the sorts of things that, once assembled in the right way, account for the emergence of intelligence. Even though it's hard to say exactly how this works, it's no mystery at all that it might. There is no extra magic sauce needed in order to turn the right assembly of such things into an intelligent agent; the properties of its sub-components suffice.

This is all I'm saying. Now, crucially, this does not incur the sort of reductio you claim: saying that puzzle pieces are the right sort of thing to form a picture of the Eiffel Tower, once assembled, does not commit me to saying that the atoms the puzzle pieces are made from carry some special Eiffel-Tower-picture forming properties: they're just like all the other atoms. The same with the transistors and molecules that form the NAND-gates. They're just building blocks, like pieces of Lego, that can be assembled to fulfill many different purposes---among them, realizing logical circuits and forming the pieces of a puzzle that shows the Eiffel Tower.

So, to sum up: logic gates, calculators and the like are the potential building blocks of intelligence in exactly the same way that puzzle pieces are the potential building blocks of a picture of the Eiffel Tower. This is hopefully uncontroversial.

Now, not any sort of building blocks enables the creation of any sort of higher-level entity. Take my earlier example of neutral atoms and electrically charged objects: no matter how you rearrange the atoms, you won't be able to create something charged. You need building blocks that carry charge.

Or, to use the earlier example, if you present me with a box of uniformly white puzzle pieces, and claim that they assemble to show a picture of the Eiffel Tower, I immediately know that that's wrong. White puzzle pieces, even once you assemble them, don't just spontaneously show a picture. They remain white; they're just not the right sort of thing to produce a picture.

So even though emergence may be difficult to predict, that doesn't mean we can't tell whether some building blocks are suitable to give rise to certain higher-level systems. Multicolored puzzle pieces can give rise to pictures, where uniformly white ones can't: there's no mystery here. It's the same with intelligence: calculators and the like form suitable building blocks for intelligence, but rocks don't, even if they're made from silicon.

Arguing for the emergence of intelligence is thus like arguing for the emergence of the picture of the Eiffel Tower: while I can't say for certain before the assembly has been completed, I can very much see how the building blocks might lead to intelligence emerging. But those arguing for the emergence of consciousness are claiming that the picture of the Eiffel Tower could spontaneously appear, once one has assembled enough white puzzle pieces: I have no reason to believe that. Nobody has shown that the building blocks have the right properties to give rise to conscious experience, so this is nothing but a brute statement of faith---at some point, something unknown is going to happen, and then, consciousness.

Quote:
"Wreaks havoc with the concept of computational equivalence'"? No, it absolutely does not. Turing machines were a brilliant but purely theoretical concept intended to model computation, not real computers in the real world.
Turing machines precisely delineate the capabilities of real-world computers. Anything within that delineation is qualitatively the same. Something capable of doing something that no (ideal) computer could, like solving the halting problem, would be qualitatively different. Anything else is merely a difference in quantity---more memory, more time. That's literally what those words mean.

Seriously, this is getting silly. You're not going to convince people to redefine the meaning of words to save your failed argument. If I mow the lawn in an hour, and you do it a billion times faster, you haven't done something qualitatively different from me; you've done the exact same thing. Just faster. You could, for instance, then do it a billion more times in the same time span: then, you would have done, literally, just more of the same. It's the same with faster computers: they can do more computations in the same time span. But that doesn't amount to doing anything qualitatively different.

Last edited by Half Man Half Wit; 02-16-2019 at 05:58 AM.
  #167  
Old 02-16-2019, 11:02 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,438
Quote:
Originally Posted by eschereal View Post
So far, we have only been able to build machines that do what we tell them to do. Sometimes, we will see AI doing some really strange stuff, but it only does stuff that we ask it to do. Van Gogh went out and did his paintings without being prompted. Heinlein wrote novels without being asked to (sort of). Eno composes those odd things mostly right out of his head (apart from the use of found objects). We have yet to build a computer that just does stuff (though some of the spybots like Alexa exhibit some very strange behavior).
The thing is, it's not even clear that nature produces brains that are doing anything other than deterministic input mapped to output. Meaning that our sense of understanding our options and "choosing" a path based on some conscious process hasn't been very well supported by research. Decisions seem to be made before we know we made them and there is a clear genetic influence in our behavior in general. Consciousness could just be the story that is told after the fact.
  #168  
Old 02-16-2019, 12:26 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,438
==================
Mental imagery:
==================
Wolfpup: "visual cortex has no role in it"
Raftpeople: "research shows visual cortex is used"

Details:
1 - Same visual cortex structures/neural circuits activated during imagery and perception
2 - Subjects with greater usage of visual cortex, based on brain scans, report higher levels of subjective quality of imagery
3 - Eye saccades are used when processing imagery just like perception, and interrupting eye saccades caused subjective interruption of imagery processing (just like with perception)

Some research examples:
https://academic.oup.com/cercor/article/22/2/372/336465
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3366182/
http://www.jneurosci.org/content/jne.../1367.full.pdf
https://pdfs.semanticscholar.org/41e...15d2d61abd.pdf

Do you have any evidence (recent) to support your position?



==================
Emergent properties:
==================
Wolfpup: We both agree about the details of emergent behavior and that it's engineerable and not magic. When we build things like Watson or weather simulations we start out not really knowing how to get to the end result, but in all cases we know how to measure the end result and we know how to measure the lower level activities that give rise to the end result.

We are able to create these things because we can measure the original system and we can measure our engineered system. We can use math and logic to determine whether our system is exhibiting the same types of behavior as our goal. For Watson we are able to measure the answer; if wrong, then it was not as intelligent as we wanted, so we drill in and adjust the lower levels. This process is known and understood; they aren't just randomly flipping bits.
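A minimal caricature of that measure-and-adjust loop (a toy with a single tunable number; it has nothing to do with how DeepQA was actually trained, and is only meant to show adjustment guided by measurement rather than random bit-flipping):

Code:
# Caricature of "measure the high-level behavior, adjust the lower level":
# a toy component with one tunable weight is nudged until the measurable
# output matches the target.

def system_output(weight, inp):
    return weight * inp                     # stand-in for the whole stack

target, inp, weight = 42.0, 6.0, 1.0
for step in range(20):
    error = system_output(weight, inp) - target   # measurable at the top level
    weight -= 0.02 * error * inp                  # adjust the lower level
print(round(weight, 3), round(system_output(weight, inp), 3))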


You keep ignoring this point, but this question is the crux of the entire debate:
How/what do we measure to determine if the emergent property of consciousness exists in our system?





You argue that consciousness emerges when intelligent complexity reaches some level, and you argue that Watson is an example of emergent intelligence with a high level of complexity.

I know you have not implied that Watson is conscious, but your arguments do seem to imply that it is trending in the right direction (complex intelligent system), which leads to the following questions:
Is Watson conscious?
Did it reach a level of complexity where consciousness emerges?
How do we measure Watson to determine if consciousness has emerged?

Last edited by RaftPeople; 02-16-2019 at 12:28 PM.
  #169  
Old 02-16-2019, 01:57 PM
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 15,620
Quote:
Originally Posted by RaftPeople View Post
The thing is, it's not even clear that nature produces brains that are doing anything other than deterministic input mapped to output. Meaning that our sense of understanding our options and "choosing" a path based on some conscious process hasn't been very well supported by research. Decisions seem to be made before we know we made them and there is a clear genetic influence in our behavior in general. Consciousness could just be the story that is told after the fact.
I do not disagree. Skinner paints a picture of behavior that is infested with myriad hard-to-see vectors. Their vast, expanding vagueness is the reason we perceive actions as arising from within: the external and genetic influences are so much work to trace, and believing in the inner voice is comforting to some.

The strict computational model does depict behavior as deterministic, but when you add in the biochemical aspect to it, the strict determinism becomes clouded by chemical effects (emotions, which are a nontrivial component of behavior). The biochemistry appears to be a strong direct and indirect motivator, which would be hard to replicate in machinery, especially because of how capricious it seems to be.

The consciousness may well be like a spectator, or it could be sort of like the live audience, which has some element of participation in the process.
  #170  
Old 02-16-2019, 02:12 PM
Mijin is offline
Guest
 
Join Date: Feb 2006
Location: Shanghai
Posts: 8,813
Quote:
Originally Posted by RaftPeople View Post
You keep ignoring this point, but this question is the crux of the entire debate:
How/what do we measure to determine if the emergent property of consciousness exists in our system?
You're absolutely right, and your post is a good one, but I want to say that even if the question of how to measure consciousness is the crux of this debate, it is just one of several fundamental questions about consciousness.

We measure explanations by their explanatory power.
If "It's an emergent property" cannot answer such questions, or allow us to make other useful inferences, then at best it's a pointless restating of the problem. At worst it's a handwave; giving some people the false impression that the problem has already been solved.
  #171  
Old 02-16-2019, 03:09 PM
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 44,921
Quote:
Originally Posted by RaftPeople View Post
The thing is, it's not even clear that nature produces brains that are doing anything other than deterministic input mapped to output. Meaning that our sense of understanding our options and "choosing" a path based on some conscious process hasn't been very well supported by research. Decisions seem to be made before we know we made them and there is a clear genetic influence in our behavior in general. Consciousness could just be the story that is told after the fact.
I knew we'd get to free will at some point. The programming that does the mapping clearly evolved, and if the appearance of consciousness evolved along with it, then there must have been an advantage. If consciousness is indeed the story told after the fact, it might be a way of organizing the story to be used as feedback for the next decision.
  #172  
Old 02-17-2019, 02:36 AM
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 9,646
Quote:
Originally Posted by Half Man Half Wit View Post
This just doesn't follow. Take the following situation: you hand me a box of puzzle pieces, and claim that, once assembled, they form a picture of the Eiffel Tower. I have no reason to disbelieve you: even though it's hard to just take a look at the pieces and know what picture they show, it's well within the realm of plausibility that they do show a picture, once assembled, and that may well be one of the Eiffel Tower.

How do I know this? Because the individual pieces carry fragments of a picture; they're the sort of thing that one could assemble such that a picture emerges out of that assembly. None of the individual pieces carry that picture; it's a property of the combined assembly. Yet, the properties of the individual pieces are such as to make that emergence possible.
No. Again, the definition of an emergent property is that it's a property with one or more fundamental attributes that its parts or components do not have, which emerge from the interactions of the parts. Since each piece of the jigsaw puzzle plainly has part of a picture on it, which you can plainly see, and can even examine under a microscope to determine the color print quality and the pixel or grain resolution and thereby predict the quality of the assembled picture, this is clearly not analogous to the emergence of a fundamental new property, and your analogy fails before it starts.

Quote:
Originally Posted by Half Man Half Wit View Post
Turing machines precisely delineate the capabilities of real-world computers. Anything within that delineation is qualitatively the same. Something capable of doing something that no (ideal) computer could, like solving the halting problem, would be qualitatively different. Anything else is merely a difference in quantity---more memory, more time. That's literally what those words mean.

Seriously, this is getting silly. You're not going to convince people to redefine the meaning of words to save your failed argument. If I mow the lawn in an hour, and you do it a billion times faster, you haven't done something qualitatively different from me; you've done the exact same thing. Just faster. You could, for instance, then do it a billion more times in the same time span: then, you would have done, literally, just more of the same. It's the same with faster computers: they can do more computations in the same time span. But that doesn't amount to doing anything qualitatively different.
We're talking past each other. And it's ironic that you think I'm trying to redefine the meaning of words since I'm precisely relying on the definition of what an "emergent property" is, and it isn't anything like in your above jigsaw puzzle example.

I note that you failed to quote the most pertinent part of my statement: that the Turing machine is defined as having a symbol tape of infinite length and, by extension, allowed infinite time to perform a specified computation. This, and the fact that real computers are not Turing machines but physically constrained finite-state machines, is crucial to the argument.

That the ancient IBM 704 and the Watson platform are constrained subsets of Turing machines and are in that sense computationally equivalent is trivially true. That is, it's a fundamental and profound observation about what computation is, but it's a useless observation about their functional capabilities. Another way to say this is that if you examine the instruction set of the IBM 704, and then the instruction set of the modern POWER7, the fundamental similarities and computational equivalence are immediately obvious -- this was the genius of Turing's insight. This does not extend to describing the functional capabilities of the physical machines themselves.

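To make the distinction concrete, here's a toy sketch in Python (purely illustrative and my own example; it has nothing to do with how either machine is actually built). The tape of a Turing machine can be modeled as a dictionary that simply grows on demand, without bound; a physical computer would have to cap that dictionary at whatever its address space and installed memory allow:
Code:
from collections import defaultdict

# A minimal Turing machine that increments a binary number on its tape.
# The tape is a dict that grows on demand -- unbounded in principle, which is
# exactly what no physical, finite-state machine can offer.
RULES = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("seek_end", "0"): ("0", +1, "seek_end"),
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", " "): (" ", -1, "carry"),
    ("carry",    "1"): ("0", -1, "carry"),
    ("carry",    "0"): ("1",  0, "halt"),
    ("carry",    " "): ("1",  0, "halt"),
}

def run(tape_str):
    tape = defaultdict(lambda: " ", enumerate(tape_str))
    head, state = 0, "seek_end"
    while state != "halt":
        tape[head], move, state = RULES[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip()

print(run("1011"))  # -> "1100", i.e. 11 + 1 = 12 in binary
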
The point I'm making here is that the scale of a computational system, which appears superficially to be a trivial property, at some level becomes profoundly non-trivial because it can give rise to emergent properties like intelligent behavior. My point, in other words, is nothing less than the argument for emergent properties, which are enabled by suitably organized complexity, for which sufficient scale is a prerequisite.

So the question at hand, with regard to machine "A" and machine "B", is the following: Are "A" and "B" to be considered qualitatively the same because their machine instruction sets are both Turing complete, or are they to be considered qualitatively different because one machine exhibits a high order of intelligent behavior and the other does not, and never could?

The answer, of course, depends on the context in which one is asking the question. I maintain that my answer addresses the question that is relevant and meaningful in the context of this discussion, an entirely different question from the one you're answering, and one which cuts to the nature of intelligence as an emergent property of suitably organized computational complexity.

Quote:
Originally Posted by RaftPeople View Post
==================
Mental imagery:
==================
Wolfpup: "visual cortex has no role in it"
Raftpeople: "research shows visual cortex is used"

Details:
1 - Same visual cortex structures/neural circuits activated during imagery and perception
2 - Subjects with greater usage of visual cortex, based on brain scans, report higher levels of subjective quality of imagery
3 - Eye saccades are used when processing imagery just like perception, and interrupting eye saccades caused subjective interruption of imagery processing (just like with perception)

Some research examples:
https://academic.oup.com/cercor/article/22/2/372/336465
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3366182/
http://www.jneurosci.org/content/jne.../1367.full.pdf
https://pdfs.semanticscholar.org/41e...15d2d61abd.pdf

Do you have any evidence (recent) to support your position?
Yes, there is research that tends to support a role of the visual cortex in mental imagery. There is also research that tends to support the opposite. None of it is conclusive, and it's seriously wrong to suggest that the issue is in any way settled. The visual cortex argument is part of a debate about mental imagery that has been going on for half a century. The two sides are the so-called quasi-pictorial (or "analog") account, which supposes we process mental images the same way we process visual perception, and the propositional account, which holds that images are stored representationally as a set of mental symbols from which the semantics are extracted computationally, making it an attractive theory for proponents of the computational theory of mind.

My comment about the visual cortex was specifically addressed to the last paragraph in #109, which claimed that optical illusions are created in "the mind's eye". But mental images are immune from optical illusions, whereas perceived images are not only subject to the illusion but also immune from cognitive influence (the illusion persists even when you know it is false). This actually supports the propositional view that no such thing as "the mind's eye" exists at all, and that mental imagery is not quasi-pictorial but computational-representational.


Quote:
Originally Posted by RaftPeople View Post
You keep ignoring this point, but this questions is the crux of the entire debate:
How/what do we measure to determine if the emergent property of consciousness exists in our system?
I haven't answered that question because I don't know the answer, and neither does anyone else. We can't really define consciousness functionally, or how it can be objectively recognized, or what role it plays in evolution. My conjecture is that it arises in the same way as intelligence -- as an emergent property -- and that it's closely and inextricably related to it. Humans have a deeper understanding of the world and its phenomena than other sentient creatures, and when we turn that attention inward to ourselves, we engage in an aspect of ontological inquiry that is sufficiently interesting that we give it a name: we call it "consciousness".

Quote:
Originally Posted by RaftPeople View Post
You argue that consciousness emerges when the level of intelligent complexity reaches some level, and you argue that Watson is an example of emergent intelligence with a high level of complexity.

I know you have not implied that Watson is conscious, but your arguments do seem to imply that it is trending in the right direction (complex intelligent system), which leads to the following questions:
Is Watson conscious?
No.
Quote:
Originally Posted by RaftPeople View Post
Did it reach a level of complexity where consciousness emerges?
No.
Quote:
Originally Posted by RaftPeople View Post
How do we measure Watson to determine if consciousness has emerged?
Maybe when Watson starts to behave like HAL in 2001.
Quote:
Originally Posted by Mijin View Post
We measure explanations by their explanatory power.
If "It's an emergent property" cannot answer such questions, or allow us to make other useful inferences, then at best it's a pointless restating of the problem. At worst it's a handwave; giving some people the false impression that the problem has already been solved.
I'm not trying to "explain" anything, much less claim that this intractable problem has been "solved", but merely to speculate on how and why consciousness seems to arise in sentient beings. That may well be dismissed as scientifically worthless, but then so are all philosophical ruminations about it. The only empirical aspect of it is the limited research on its underlying neurophysiology, which so far is equally worthless from an explanatory standpoint.
  #173  
Old 02-17-2019, 05:29 AM
Half Man Half Wit's Avatar
Half Man Half Wit Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,481
Quote:
Originally Posted by wolfpup View Post
No. Again, the definition of an emergent property is that it's a property with one or more fundamental attributes that its parts or components do not have, which emerge from the interactions of the parts.
Exactly. None of the puzzle pieces shows a picture of the Eiffel Tower; from their interaction, one emerges.

Emergence does not mean that we can't find properties in the parts that are responsible for the emergent properties (or at least, the scientifically serious notion of emergence doesn't; there's also the idea of 'strong emergence', which one could equally well call 'magic'). Wikipedia gives the example of a snowflake: the pattern the snowflake forms isn't inherent in the water molecules; yet, that it forms such a pattern is due to the bond geometry of the water molecule, which gives ice its hexagonal lattice. So in the analogy, that bond geometry is the part of the picture, and the whole snowflake is the picture of the Eiffel Tower.

Now, in order to say that consciousness emerges, and have that be anything other than a statement of faith, you would have to point to something like the bond geometry of water molecules that is possessed by the parts from which you want to build conscious experience. You'd have to somehow show that the proposed building blocks are the right sort of thing to give rise to conscious experience. You can't do that, of course; but then, saying 'consciousness emerges' is literally without any content whatsoever.

It's on exactly the same level as claiming that some pattern of movements around a campfire, coupled with burning the right herbs, and chanting the right chants, makes rain emerge: none of the individual parts have anything to do with the emergence of rain, and in no way seem to be the right building blocks for a change in weather. Yet, if one may claim that consciousness emerges from building blocks without being able to point to any properties those building blocks have by virtue of which consciousness may emerge, then one may equally well point to dancing around a campfire as leading to rain: one has as much reason to believe the one as the other.

What's worse is that this magical thinking comes in the guise of scientific sophistication. The mantra 'consciousness emerges' is uttered as if one had thereby made some headway into the mystery of conscious experience, when one has merely hidden it behind a bit of jargon. Ah, the questioner replies, it emerges; well then, I suppose that's not such a deep question after all. But precisely nothing has been done to answer it; indeed, if consciousness happens not to emerge at all, the dogma may have made a proper answer harder to find.

Quote:
We're talking past each other. And it's ironic that you think I'm trying to redefine the meaning of words since I'm precisely relying on the definition of what an "emergent property" is, and it isn't anything like in your above jigsaw puzzle example.
In every case where a property emerges, it does so by virtue of the properties of the parts of the system. That the story of precisely how something emerges may be difficult doesn't impinge on the fact that such a story exists---a story of how ant interactions give rise to an anthill; of how flocking rules dictate flock behavior; of how bonding angles produce snowflakes; of how puzzle pieces give rise to a complete picture. And, whether you like it or not, of how logic gates give rise to intelligent behavior. Without that story, and without being able to point to the properties on which the emergent properties supervene, one simply hasn't described emergence, but something magically appearing out of nothing.

Quote:
I note that you failed to quote the most pertinent part of my statement: that the Turing machine is defined as having a symbol tape of infinite length and, by extension, allowed infinite time to perform a specified computation.
Both of which are differences of quantity. I mean, if I say, 'this heap of sand is bigger than that heap of sand', I have not pointed to a qualitative difference between the two heaps, even if one is a lot bigger than the other. If I say, 'this heap of diamonds is bigger than that heap of sand', then yes, I'd be willing to accept that there's a qualitative difference between the two.

Quote:
That the ancient IBM 704 and the Watson platform are constrained subsets of Turing machines and are in that sense computationally equivalent is trivially true.
You sling around the word 'trivial' a lot, but I don't think it means what you think it means. The issue is non-trivial: an IBM 704 and Watson are both computers in a way that a rock is not.

Quote:
This does not extend to describing the functional capabilities of the physical machines themselves.
Well, let's try to root out our differences then. Take the following statement: "Given enough memory and time, any computation that Watson can perform can be performed by the IBM 704." Do you agree or disagree?

Quote:
So the question at hand, with regard to machine "A" and machine "B", is the following: Are "A" and "B" to be considered qualitatively the same because their machine instruction sets are both Turing complete, or are they to be considered qualitatively different because one machine exhibits a high order of intelligent behavior and the other does not, and never could?
But the latter simply isn't true: given enough memory and time (which is, I hope you agree, the literal definition of a difference in quantity), the IBM 704 could show exactly that behavior.

Last edited by Half Man Half Wit; 02-17-2019 at 05:31 AM.
  #174  
Old 02-17-2019, 05:50 AM
Half Man Half Wit's Avatar
Half Man Half Wit Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,481
  • Why do anthills get built? -- Because of the individual interactions between ants.
  • Why do starlings flock? -- Because the individual starlings follow the flocking rules.
  • Why does water form snowflakes? -- Because of the bond geometry of the water molecules, which gives ice its hexagonal lattice.
  • Why does the completed puzzle show a picture of the Eiffel Tower? -- Because the individual pieces show the right parts of the picture.
  • Why does Watson show intelligent behavior? -- Because its components are capable of implementing elementary logical operations.

In each case, these are, of course, just the roughest of sketches for answers. But that is enough to at least make it plausible that a full answer exists---even if it might be too long to be written down, or understood in all details by any single human being. The emergence is justified by the above answers; we can point to a distinct reason for why a certain property emerges.

This sort of thing gives us confidence with as yet unresolved issues, such as:
  • Why does an A(G)I show intelligent behavior? -- Because its components are capable of implementing elementary logical operations.

This is hypothetical: we don't as yet know if it's in fact possible to build such an AI. But we have no reason to assume the opposite: there might be some unforeseen fundamental roadblock up ahead, but if so, we have no way to see it as yet, and hence, no justification for believing in its existence.

But then, consider:
  • Why does conscious experience arise in certain systems?

Here, we have nothing to point to, at all. To assert that conscious experience simply emerges would entail a belief that some unforeseen consequences might arise such that a certain assembly of elements suddenly and spontaneously generates consciousness---that is, belief in the emergence of consciousness is not analogous to belief in the emergence of intelligence, but rather, to belief in the impossibility of intelligence emerging: in both cases, we must rely on something unknown and unforeseen happening. In both cases, hence, our belief would be wholly unjustified, since whatever would make it true, if anything does, is wholly unknown.

That might, of course, change. We could discover additional properties that make certain elements possible building blocks for conscious experience. Then, it may become sensible to assert the emergence of consciousness, just as it is sensible now to postulate the emergence of intelligence. But before then, it's really not any more sensible than the emergence of weather-altering powers from the way a shaman dances around the fire.

Last edited by Half Man Half Wit; 02-17-2019 at 05:51 AM.
  #175  
Old 02-17-2019, 02:07 PM
RaftPeople RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,438
Quote:
Originally Posted by wolfpup View Post
Yes, there is research that tends to support a role of the visual cortex in mental imagery. There is also research that tends to support the opposite. None of it is conclusive, and it's seriously wrong to suggest that the issue is in any way settled. The visual cortex argument is part of a debate about mental imagery that has been going on for half a century. The two sides are the so-called quasi-pictorial (or "analog") account, which supposes we process mental images the same way we process visual perception, and the propositional account, which holds that images are stored representationally as a set of mental symbols from which the semantics are extracted computationally, making it an attractive theory for proponents of the computational theory of mind.

My comment about the visual cortex was specifically addressed to the last paragraph in #109, which claimed that optical illusions are created in "the mind's eye". But mental images are immune from optical illusions, whereas perceived images are not only subject to the illusion but also immune from cognitive influence (the illusion persists even when you know it is false). This actually supports the propositional view that no such thing as "the mind's eye" exists at all, and that mental imagery is not quasi-pictorial but computational-representational.
A few points:
1 - You stated that the "visual cortex has no role" in mental imagery. Your cite fully admits that the visual cortex is activated during mental imagery and doesn't make any attempt to deny the possible role of the visual cortex in the function of mental imagery. That cite is arguing against a pictorial representation that needs to get re-interpreted, which is a very different point from "no role." (I recognize that you do mention the representation in the post above, but that is a very different point than "no role.")

2 - I purposefully said "recent" research because of all the data compiled in the last 20 years. That cite is based on significantly less information. Would he come to the same conclusion if he knew about, for example, the data surrounding eye saccades and mental imagery?

3 - It's a different topic that we don't need to continue in detail here, but I would say that it's entirely possible that it's not a binary situation. The brain has machinery that performs specific functions, and it's quite possible (IMO likely) that the brain feeds information back into those functional units that allow for a specific type of processing.
  #176  
Old 02-17-2019, 02:16 PM
RaftPeople RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,438
Quote:
Originally Posted by wolfpup View Post
I haven't answered that question because I don't know the answer, and neither does anyone else. We can't really define consciousness functionally, or how it can be objectively recognized, or what role it plays in evolution. My conjecture is that it arises in the same way as intelligence -- as an emergent property -- and that it's closely and inextricably related to it. Humans have a deeper understanding of the world and its phenomena than other sentient creatures, and when we turn that attention inward to ourselves, we engage in an aspect of ontological inquiry that is sufficiently interesting that we give it a name: we call it "consciousness".
I don't disagree that it's an angle of attack that shouldn't just be ignored. The issue is that you think it's a "meaningful" explanation at this point despite nothing concrete that helps us move forward in trying to solve the problem.

"Meaningful" seems like an unwarranted and unsupported description.
  #177  
Old 02-18-2019, 01:07 AM
wolfpup's Avatar
wolfpup wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 9,646
Quote:
Originally Posted by Half Man Half Wit View Post
Well, let's try to root out our differences then. Take the following statement: "Given enough memory and time, any computation that Watson can perform can be performed by the IBM 704." Do you agree or disagree?
OK, to wrap this up once and for all: the answer crucially depends on the exact question being asked. To the assertion as you make it here -- in which the naming of specific machines makes it reasonable to apply a pragmatic real-world interpretation -- the answer is no, I do not agree. Explaining the reason mostly involves repeating what I've already said, but I'll try to make it more succinct and with additional examples.

The overarching principle here is that Turing equivalence is a quality that applies -- by definition! -- to the theory of computation in the abstract, not to the physical finite-state automata that perform it. The easiest way to elucidate the difference is to say that when comparing the qualities of two machines, Turing equivalence applies to their instruction sets, not to the physical boxes, where complexity and performance are paramount criteria.

This is a fundamental point. Despite the staggering differences in capability between old computers and high-performance contemporary ones, it's actually surprising how sophisticated the instruction sets of old computers really were. In fact, some of them had a richer set of instruction capabilities than some more modern ones, particularly the ones designed during the fashionable era of RISC -- Reduced Instruction Set Computing. It's pretty obvious that all of these general-purpose instruction sets are Turing complete, since they are all capable of things like conditional branching and load and store operations on addressable memory, and can simulate any aspect of a Turing machine, ignoring capacity issues, which are not relevant in a theoretical discussion. The instruction sets are thus qualitatively the same. There is no dispute there, and this is the proper application of the Turing principle.

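To illustrate what I mean, here's a toy sketch of my own, in Python -- emphatically not the 704's or the POWER7's actual instruction set -- of roughly the minimal repertoire in question: an accumulator, addressable memory, load/store, add/subtract, and a conditional branch:
Code:
def run(program, memory):
    # Interpret a tiny accumulator machine: load/store on addressable memory,
    # add/subtract, and a conditional branch -- the ingredients named above.
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]
        pc += 1
        if   op == "LOAD":  acc = memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "ADD":   acc += memory[arg]
        elif op == "SUB":   acc -= memory[arg]
        elif op == "JZ":    pc = arg if acc == 0 else pc   # branch if accumulator is zero
        elif op == "JMP":   pc = arg                       # unconditional jump
        elif op == "HALT":  return memory

# Multiply memory[0] by memory[1] into memory[2] by repeated addition;
# memory[3] holds the constant 1.
prog = [
    ("LOAD", 0), ("JZ", 9),                   # 0-1: while memory[0] != 0:
    ("LOAD", 2), ("ADD", 1), ("STORE", 2),    # 2-4:   memory[2] += memory[1]
    ("LOAD", 0), ("SUB", 3), ("STORE", 0),    # 5-7:   memory[0] -= 1
    ("JMP", 0),                               # 8:     repeat
    ("HALT", None),                           # 9:     done
]
print(run(prog, [6, 7, 0, 1])[2])  # -> 42

Real instruction sets, from the 704's to the POWER7's, are vastly richer than this, but nothing beyond this kind of repertoire is required for Turing completeness given unbounded storage and time; everything else is capacity and speed.
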
So if your assertion were to be, "any computation that Watson (or any computer with a Turing complete instruction set) can perform can in theory be performed by any other computer with a Turing complete instruction set", I would completely agree. This is what I mean by "trivially true", and contrary to your accusation, I do understand what the word "trivially" means, and it fully applies here.

Now remove the words "in theory", and you're dealing with a completely different animal. My answer is "no", as above, for a whole host of reasons, including the following:

- "Given enough memory" is a non-starter when discussing real-world computers, because memory limitations are also imposed by its address space, which are intrinsic to the machine architecture. Of course various tricks can be imagined to work around this at the cost of performance, but then we have the next item ...

- "Given enough time" is a non-starter when discussing real-world computers, because there is always some combination of critical success criteria and pragmatism that prevents this from being an arbitrarily extensible parameter; to wit:

- It can easily be shown that the information bandwidth necessary to perform at the Watson level and meet the Jeopardy time rules exceeds by many, many orders of magnitude the bandwidth available in the IBM 704, and

- It can easily be shown that the IBM 704 is guaranteed to fail long before it could ever produce an answer, even if time were not a constraint (we don't know how to produce a system with an MTBF exceeding 2,200 years even with the best current technology). Therefore, it's guaranteed that you will never get an answer. That's a qualitative difference! (A rough back-of-the-envelope sketch of both of these points follows below.)

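Just to put rough numbers on those last two points, here's a back-of-the-envelope sketch. All the figures are my own ballpark assumptions, not measured values: on the order of 12 thousand operations per second for the 704, on the order of 80 teraflops for the Jeopardy-era Watson cluster, a few seconds to produce an answer, and an MTBF of a few hours for a vacuum-tube machine:
Code:
import math

# Ballpark assumptions (not measured figures):
ibm704_ops_per_sec = 1.2e4    # ~12 thousand operations per second
watson_ops_per_sec = 8.0e13   # ~80 teraflops for the Jeopardy-era cluster
answer_time_secs   = 3.0      # rough time budget for one Jeopardy answer
mtbf_hours_704     = 10.0     # assumed MTBF for a vacuum-tube machine

slowdown = watson_ops_per_sec / ibm704_ops_per_sec          # roughly 7 billion times
years_per_answer = answer_time_secs * slowdown / 3.15e7     # ~3.15e7 seconds per year
print(f"~{slowdown:.1e}x slower; ~{years_per_answer:,.0f} years per answer")

# Chance of getting through a single answer with no hardware failure,
# assuming a simple exponential failure model:
hours_per_answer = years_per_answer * 8760
p_survive = math.exp(-hours_per_answer / mtbf_hours_704)
print(f"P(no failure during one answer) ~ {p_survive:.1e}")  # effectively zero

On these assumptions the raw speed gap alone is nearly ten orders of magnitude, and the chance of an uninterrupted centuries-long run is effectively nil; even granting the generous 2,200-year MTBF figure, stringing together the dozens of clues in a full game would make an uninterrupted run wildly improbable.
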
One might posit a supervisory software system that periodically checkpoints the entire state of the machine, but then one has to back up the checkpoint in a failsafe manner -- and that's an incredible amount of state information. The overhead just compounds, and eventually your problem becomes that the sun will become a red giant and engulf the Earth before your computation is done and you ever get your answer. I'd say that's a significant qualitative factor, wouldn't you?

As I said, we seem to be talking past each other because you insist on a narrow theoretical view of computation as the defining characteristic of what "qualitative" means, while I'm endorsing a more pragmatic and, in scientific terms, more useful view, in support of a discussion of emergent phenomena in which qualitative differences are presumed to emerge from complexity, and hence from scale.

Here is a discussion of exactly that issue in the context of social sciences, and it's equally true in computing:
One such mechanism comes into play when the quantitative increase in some entity, usually population, reaching a certain threshold, gives rise to a qualitative change in the structure of a society ...

... In his book Science of Logic, Georg Friedrich Hegel remarked: “It is said that there are no sudden changes in nature, and the common view has it that when we speak of a growth or a destruction, we always imagine a gradual growth or disappearance. Yet we have seen cases in which the alteration of existence involves not only a transition from one proportion to another, but also a transition, by a sudden leap, into a … qualitatively different thing; an interruption of a gradual process, differing qualitatively from the preceding, the former state”
https://www.pnas.org/content/97/23/12926
Quote:
Originally Posted by RaftPeople View Post
A few points:
1 - You stated that the "visual cortex has no role" in mental imagery. Your cite fully admits that the visual cortex is activated during mental imagery and doesn't make any attempt to deny the possible role of the visual cortex in the function of mental imagery. That cite is arguing against a pictorial representation that needs to get re-interpreted, which is a very different point from "no role." (I recognize that you do mention the representation in the post above, but that is a very different point than "no role.")
My cite doesn't actually quite say that, and the problem here is that the visual cortex is an extensive and complex area; the primary cortex V1 has at least six identified subfunctional areas. The controversy, in part, is about which of the early vs later visual processing areas are involved in mental imagery. I had intended to mention pages 175 and 176 of the paper as being especially relevant, and unfortunately I forgot, what with the other stuff I was replying to. For instance:
While some neural imaging studies report activity in topographically organized cortical areas(Kosslyn et al. 1995; 1999a), most have reported that only later visual areas, the so-called visual association areas, are active in mental imagery (Charlot et al. 1992; Cocude et al. 1999; D’Esposito et al. 1997; Fletcher et al. 1996; Goldenberg et al. 1995; Howard et al. 1998; Mellet et al. 1996; 1998; Roland & Gulyas 1994b; 1995; Silbersweig & Stern 1998); but see the review in Farah (1995b) and the some of the published debate on this topic (Farah 1994; Roland & Gulyas 1994a, 1994b). Other evidence comes from clinical cases of brain damage and is even less univocal in supporting the involvement in mental imagery of the earliest, topographically organized areas of visual cortex (Roland & Gulyas 1994b). There is some reason to think that the activity associated with mental imagery occurs at many loci, including higher levels of the visual stream (Mellet et al. 1998).
See also Section 7.2, "What would it mean if all the neuroscience claims turned out to be true?", which argues against this simplistic proposal: "... [Results such as those of Kosslyn et al.] have been taken to support the view that mental images are literally two-dimensional displays projected onto primary visual cortex."

Quote:
Originally Posted by RaftPeople View Post
2 - I purposefully said "recent" research because of all the data compiled in the last 20 years. That cite is based on significantly less information. Would he come to the same conclusion if he knew about, for example, the data surrounding eye saccades and mental imagery?
Given that the author's views haven't changed, AFAIK, I suspect the answer is in the affirmative.

Quote:
Originally Posted by RaftPeople View Post
3 - It's a different topic that we don't need to continue in detail here, but I would say that it's entirely possible that it's not a binary situation. The brain has machinery that performs specific functions, and it's quite possible (IMO likely) that the brain feeds information back into those functional units that allow for a specific type of processing.
Given the organizational complexity of the visual cortex, I suspect that this is true, and accounts for the apparently contradictory evidence.

Last edited by wolfpup; 02-18-2019 at 01:09 AM.
  #178  
Old 02-18-2019, 01:55 AM
Half Man Half Wit's Avatar
Half Man Half Wit Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,481
Quote:
Originally Posted by wolfpup View Post
So if your assertion were to be, "any computation that Watson (or any computer with a Turing complete instruction set) can perform can in theory be performed by any other computer with a Turing complete instruction set", I would completely agree. This is what I mean by "trivially true", and contrary to your accusation, I do understand what the word "trivially" means, and it fully applies here.
It doesn't. Even beyond my rock example, there are many special-purpose computers whose instruction sets are not Turing complete. Consequently, it's not trivial to point to the equivalence of two computers in this sense.

Quote:
- It can easily be shown that the IBM 704 is guaranteed to fail long before it could ever produce an answer, even if time were not a constraint (we don't know how to produce a system with an MTBF exceeding 2,200 years even with the best current technology). Therefore, it's guaranteed that you will never get an answer. That's a qualitative difference!
It's not. It is still only a difference in scale---time scale. By definition, quantitative. A really big heap of sand doesn't become qualitatively different just because it has more grains in it than exist on Earth, or would take longer to assemble than the Sun's life span, or if my excavator breaks down before it's completed. It's still a heap of sand.

Quote:
Here is a discussion of exactly that issue in the context of social sciences, and it's equally true in computing:
I'm not saying that qualitative changes never occur. The leap from non-Turing complete to Turing complete computation would be a pertinent example. It's just that there's no qualitative difference in your example.

Anyway, this is really just a difference in terminology. My substantive argument is still that it's meaningless magical thinking to claim that something emerges without that claim being justified by the properties of the building blocks one proposes to use. Since that no longer seems to receive any opposition from you, I think we can drop this part as well.
  #179  
Old 02-18-2019, 04:09 AM
Pithily Effusive Pithily Effusive is offline
BANNED
 
Join Date: Feb 2019
Posts: 100
They didn't. The proposition presupposes things not in evidence; things people assume science has proved.
  #180  
Old 02-18-2019, 03:04 PM
RaftPeople RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,438
Quote:
Originally Posted by wolfpup View Post
My cite doesn't actually quite say that, and the problem here is that the visual cortex is an extensive and complex area; the primary cortex V1 has at least six identified subfunctional areas. The controversy, in part, is about which of the early vs later visual processing areas are involved in mental imagery. I had intended to mention pages 175 and 176 of the paper as being especially relevant, and unfortunately I forgot, what with the other stuff I was replying to.
You stated that the visual cortex has "no role" in visual imagery. Is that really your position, that the visual cortex has "no role"?

I can't tell what your position is based on your answer.