But that’s not the question that was asked, unless that is your way of stating that “yes, a static lookup table can be considered intelligent, it doesn’t rule out the possibility”
Please help me understand your position, that’s all I’m asking.
[quote=“DSeid, post:39, topic:485319”]
We have made tremendous progress in understanding how each of the individual cells operates, how they communicate, how systems are organized, what is active when, etc. Yet even for the simplest system there is no understanding of how qualia emerge, even if we can fully describe what is happening as it does. The ability to look at the basic structures and predict what behaviors will result is not even there - and I honestly doubt it ever will be. (This because any neural system capable of complex behaviors is massively nonlinear in its development and function and therefore follows many characteristics of chaotic systems - albeit a chaotic system impacted by regular external inputs - there are predictable attractor basins that the system falls into, but the ability to predict the behavior given only starting conditions is liable to be impossible - ignore this comment if the analogies are unfamiliar to you please.) Understanding complex behaviors and functions will not come from extreme reductionism and understanding from the bottom up, but rather from an understanding of what happens at top levels.
[/quote]
Fyi: When I said “understand” that was meant to include not just the mechanical operation but how it works at an abstract level. In other words, truly understanding how the brain works. I agree, not going to happen any time soon.
You are asking me whether a static lookup table can be considered intelligent. I feel like this is asking me whether blue can be the wind: the two terms are unrelated. Intelligence was an adjective long before the chemistry of neurons or programming languages were understood. Now, perhaps our understanding of what intelligence is has changed in the years since the word made a popular appearance, and you wish me therefore to take a position on whether or not a static lookup table can be considered intelligent now that we know more about programming languages and the chemistry of neurons. But my position is that these ideas are still, as of yet, unrelated: I do not know, nor yet need to know, what the underlying cause of intelligence is, what emanates intelligence, or what the locus of intelligence is, in order to use the word, so I feel it has not yet changed enough.
Thus, whether or not something is a static lookup table is, on the whole, a red herring for me.
Raft, my question to you then remains:
erislover, thank you, now I understand.
I’ve got to go so this is going to be short, but it’s a start.
Methods to arrive at the “right” answer can be thought of as a continuum from the computational perspective:
Precalculated answers at one end
…
Stuff in the middle
…
Completely calculated answers at the other end (meaning that to arrive at an answer, the state of every particle in the universe must be queried, calculations run, and an answer produced based on the predicted position of each particle and how it maps to the goal; every new input repeats the entire process)
The two ends of the spectrum don’t seem intelligent to me (impressive but not intelligent). To me, intelligence is what happens in the middle where a system uses all kinds of different methods to categorize information, estimate, plan, guess, etc. etc. and comes up with a good answer w.r.t. the goals.
Intelligence is an optimization strategy balancing processing time, energy spent, information retention, etc. etc.
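That continuum can be sketched in code. Here's a toy illustration of my own (everything in it is hypothetical, just one question answered three ways), with the "middle" doing what the post describes: shortcuts, estimation, and retaining what it learns:

```python
# Toy illustration of the continuum: three ways to answer "is n prime?"

# 1. Precalculated end: a static lookup table -- instant, but only for stored inputs.
PRIME_TABLE = {2: True, 3: True, 4: False, 5: True, 6: False, 7: True}

def lookup_prime(n):
    return PRIME_TABLE[n]  # fails for anything outside the table

# 2. Fully calculated end: redo the whole computation from scratch on every query.
def brute_prime(n):
    return n >= 2 and all(n % d for d in range(2, n))

# 3. The middle: cheap structural shortcuts first, a bounded search second,
#    and a cache so past work is retained (the optimization trade-off).
cache = {}

def middle_prime(n):
    if n in cache:                          # reuse information already gathered
        return cache[n]
    if n < 2 or (n > 2 and n % 2 == 0):    # quick shortcut: evens > 2 are out
        result = False
    else:                                   # only test odd divisors up to sqrt(n)
        result = all(n % d for d in range(3, int(n ** 0.5) + 1, 2))
    cache[n] = result
    return result
```

The middle version balances processing time (bounded search), energy (shortcuts), and information retention (the cache), which is the trade-off the post describes.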
Or, for that matter, other humans?
We never get to open the box. For all we know we each are the only truly conscious beings in the universe and “all you zombies” are just automata.
(On second thought, perhaps “They” would be more appropriate.)
And that goes straight back to the other-minds problem – the fact that another human being gets the right answer doesn’t prove that it actually did the things we consider to be part of intelligence, so we can’t really use that as part of the standard.
The word is NPC, I believe.
Thank you for the quick stab at an answer, Raft.
Within humans it is now fairly widely accepted that there are multiple sorts of intelligence. Any individual can have varying levels of intelligence in each domain and can be described both in terms of the levels of their individual intelligences and in terms of how broad or narrow those intelligences are - from savant to Renaissance Man.
That basic method can be used to describe non-human intelligences as well, expanding into domains that humans are completely unfamiliar with. A machine intelligence may be much higher in a very narrow domain but nonexistent in many others or moderate in several domains, some of which are overlapping with humans and some not, for example.
Raft’s point about system analysis allowing one to infer something about function may hold more true of sentience - if we, for example, eventually accept Doug Hofstadter’s POV that sentience is correlated with the degree to which an information-processing system contains multiple levels of embedded self-referential “strange loops”, then the degree to which a system has that could be used to infer its level of sentience in a less superficial and human-biased manner than by whether or not it asks questions that a person would …
You misunderstand me.
I’m not expecting them to spontaneously know things without having experienced them - I’m saying that an AI that might stand a chance of satisfying me that it is experiencing some kind of inner thought-life would have to display some kind of spontaneous/novel/unexpected interest, curiosity or emotional attachment to something it did know about - i.e. having been informed of the thing’s existence in some other context.
A related issue: what do you think about creativity in this regard? Is it a critical part of intelligence, a dimension of intelligence, or a separate thing altogether?
(I personally have the take that it is a dimension of intelligence and think that most creative thought can be described as translating and transforming an idea from one domain into a different conceptual space and finding an unexpectedly good fit.)
I think one of the biggest problems with that is determinism - computers are pretty much slavishly deterministic, unless they’re explicitly made random.
The human mind may be completely deterministic too, but with computers, the determinism is very clearly exposed - so it’s difficult to think of creativity arising - because we tend to think of it as being subject to unpredictable flashes of inspiration.
If you accept my model of creativity then it is not too difficult to imagine some program that does that: characterize concepts according to metrics and as n-dimensional objects, and characterize each domain as an n-dimensional space (e.g. the color spindle); in times of otherwise low processing demand, take the n-dimensional representations of ideas and put them through translations, rotations, and transformations into various domains according to some semi-random algorithm; if there are partial fits, then test to see if other points may also exist - presto! a flash of inspiration!
To us, human creativity seems unpredictable, but of course that is only because the doors to the room are closed. My geometric transformation model may end up being hooey, but the process clearly involves having various processing streams active at the same time and recognizing what they have to do with each other. Modelling that in some manner seems to be achievable even if not easily done.
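That geometric model can be toyed with directly. A minimal sketch of my own (every function name and number here is invented, and 2-D stands in for the n-dimensional case): concepts are points, "translation into a new domain" is a semi-random rotation, and a "fit" is cosine similarity against a target region.

```python
import math
import random

def rotate(point, theta):
    """Rotate a 2-D point by angle theta (a 'transformation into a new domain')."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def fit(a, b):
    """Cosine similarity: how well transformed concept a 'fits' target region b."""
    dot = a[0] * b[0] + a[1] * b[1]
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

def explore(concept, target, tries=200, threshold=0.99):
    """Semi-randomly transform the concept; stop at an unexpectedly good fit."""
    rng = random.Random(0)  # fixed seed: 'slavishly deterministic', as noted above
    best = None
    for _ in range(tries):
        theta = rng.uniform(0, 2 * math.pi)
        score = fit(rotate(concept, theta), target)
        if best is None or score > best[0]:
            best = (score, theta)
        if score >= threshold:
            break  # presto! a flash of inspiration
    return best
```

Note the fixed seed: the "flash of inspiration" is fully reproducible, which is exactly the exposed determinism Mangetout points to.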
Which is why part of my question in an earlier response was: is AI already here?
I am sure there’s no generalized definition of intelligence. But I think we could conceive of intelligence in specific domains. That’s pretty much my question: the domain of computers… how would they act intelligently? I notice most popular sci-fi invariably has computers acting like cold, logical humans. Perhaps this is inevitable since that is precisely what we are, er, breeding them for. But when I think about things like computer go, and watch bots like Mogo play it, I begin to wonder whether or not that’s a sound inference.
Quoth Sapo:
A fair point: Turing never claimed that an entity which fails his test should be regarded as unintelligent, only that an entity which passes it should be regarded as intelligent. For all we know, rocks might be intelligent, and just not bother us about it because they’re so wrapped up in their own philosophical musings.
You might be interested in this:
http://blog.wired.com/sterling/2009/02/rice-university.html
I want to think so. I think some genetic algorithms show intelligence in that they produce novel (not programmed in) solutions to problems. Not HAL, but not 10 PRINT “Hello World” either.
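For the curious, here is a minimal genetic-algorithm sketch (a toy OneMax example of my own, not any particular system): selection, crossover, and a one-bit mutation are enough for the population to converge on a solution that no individual was handed up front.

```python
import random

def evolve(length=20, pop_size=30, generations=100, seed=1):
    """Evolve bitstrings toward all-ones; fitness = number of 1 bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum  # a list of 0/1 ints, so sum() counts the ones
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)      # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)           # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

The all-ones answer is nowhere in the program text; it emerges from the search, which is the sense of "novel solution" meant above.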
Yes, the answer is yes.