Re-defining Artificial Intelligence

   Is the term "artificial intelligence" actually a misnomer? Consider instead that it is merely "asymptotic smartness" and not actual intelligence. This could imply that its purported potential for so-called self-awareness is no different from the delusion of an ersatz sociopath who will never have a heart or a true mind. ***Self-awareness is NOT the same as consciousness!*** 
   Perhaps the old adage of "Garbage in, garbage out" still applies, if only in a more subtle sense. I think the term AI is only a sales pitch for a ubiquitous branding campaign. 
   Here's a caveat, y'all: *Don't get BRANDED*!

How about “created cleverness”?

What is the question you’re asking? Are you asking what artificial intelligence is in an academic sense, or in a marketing sense? I have no comment with respect to AI as used by businesses for marketing purposes. AI*1 from an academic perspective is the study of how to make machines think like (or at least mimic) a human. Typically, AI researchers aren’t too concerned with the question of “Is this real intelligence or not?” At least, not yet.

*1 - AI is often confused with machine learning (ML) and computational intelligence (CI), which are two of the many sub-fields of AI. And yes, I am aware that there is an ongoing debate as to whether ML and CI are their own things, and not true sub-fields of AI.

Research in machine cognition is almost exclusively focused on autonomous heuristics, e.g. self-directed learning. Even the best minds in cognitive science admit to having no real understanding of the nature of consciousness and sapience as it emerges from the human mind (and to a greater or lesser degree in any animal with an isocortex or analogous brain structure), and thus there is no way to actually model or reproduce it as an abstract model in software.

If and when machine cognition actually achieves some observable degree of sapience and cognition independent of controls, the actual processes that drive it will likely look and work nothing like those in the human brain, simply because of the structural and functional differences between a state machine like a digital computer and an organic brain. There is a body of thought within machine cognition that true “artificial intelligence” will actually require some kind of biologic-like cortex which can exhibit the kind of plasticity and natural adaptivity that a computing machine cannot.

Stranger

Before formulating a long and serious answer you may wish to look at the OP’s posting history. That suggests that this is a drive-by post. (Though in validity it does beat his “if we evolved from monkeys why are there still monkeys” post.)

Ahh, I see. Thanks for letting me know so I don’t have to check back on the thread.

I agree. My own research is on algorithms that find algorithms. In a way, the human brain is a “machine” that is very good at this. So, if I can do this on a digital computer, then I’ve achieved some form of intelligence that is similar to a human. However, the way my algorithm finds an algorithm to solve a problem is almost certainly vastly different than how a human brain does so. And as you say, the reason for this is simply because of the differences between us.
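To make the idea of “algorithms that find algorithms” concrete, here is a toy sketch (not the poster’s actual research): a meta-algorithm that brute-force searches a tiny space of one-operator programs for one that reproduces a set of input/output examples. The operator set and search bounds are illustrative assumptions.

```python
import itertools
import operator

# Candidate building blocks for the searched-for program (illustrative only).
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def search_program(examples, max_const=5):
    """Brute-force search for a one-operator program f(x) = x OP c
    that reproduces every (input, output) example."""
    for name, op in OPS.items():
        for c in range(-max_const, max_const + 1):
            if all(op(x, c) == y for x, y in examples):
                return name, c  # found a program that fits all the data
    return None

# The "meta-algorithm" discovers a program fitting f(x) = 3x.
result = search_program([(1, 3), (2, 6), (4, 12)])
```

A human solves the same task by insight; the machine solves it by systematic enumeration — vastly different processes arriving at the same answer, which is the point of the post above.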

Excellent! I’m looking forward to your coherent and useful definitions of these as they apply to artificial minds as well as biological ones.

AI has thus far never been about building a machine consciousness. Really it has been a set of problem-solving heuristics, many of which have interesting applications in the real world. A big slice of AI has thrown up heuristics that attempt to find useful solutions to the hill-climbing problem in multi-dimensional space. A huge number of real-life problems reduce to this - where a combinatorial enumeration, or fine-grained search of the space, is infeasible. Genetic algorithms, simulated annealing, etc.
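The hill-climbing family mentioned above can be sketched in a few lines. This is a minimal simulated-annealing example on a bumpy 1-D landscape where plain greedy hill climbing could get stuck in a local minimum; the cooling schedule and step sizes are arbitrary illustrative choices.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, temp0=5.0, seed=0):
    """Minimise f by accepting uphill moves with a probability
    that shrinks as the 'temperature' cools."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        temp = temp0 * (1 - i / steps) + 1e-9   # linear cooling schedule
        cand = x + rng.gauss(0, 1)              # random neighbour
        fc = f(cand)
        # Always accept improvements; sometimes accept worse moves.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A bumpy landscape: a parabola with sinusoidal ripples on top.
bumpy = lambda x: (x - 2) ** 2 + 2 * math.sin(5 * x)
x_min, f_min = simulated_annealing(bumpy, x0=-10.0)
```

Because early high-temperature steps accept bad moves, the search can escape the ripples that would trap a pure greedy climber.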
Then you get the AI techniques that attempt to create a predictive heuristic system by nothing more than feeding it examples. Neural nets. (I don’t think anyone could have predicted that such a simple idea could have turned out to be such a productive tool.)
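The “nothing more than feeding it examples” idea really is that simple at its core. Here is a single sigmoid neuron learning the AND function purely from labelled examples via gradient descent; real networks stack many such units in layers, and the learning rate and epoch count here are arbitrary.

```python
import math
import random

# Labelled examples for the AND gate: ((input1, input2), target).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

rng = random.Random(1)
w = [rng.uniform(-1, 1) for _ in range(2)]  # two synaptic weights
b = rng.uniform(-1, 1)                      # bias
sigmoid = lambda z: 1 / (1 + math.exp(-z))

for _ in range(5000):                       # training loop: feed examples, adjust weights
    for (x1, x2), target in examples:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = y - target                    # gradient of squared error (up to a factor)
        grad = err * y * (1 - y)            # chain rule through the sigmoid
        w[0] -= 0.5 * grad * x1
        w[1] -= 0.5 * grad * x2
        b    -= 0.5 * grad

predict = lambda x1, x2: round(sigmoid(w[0] * x1 + w[1] * x2 + b))
```

No rule for AND is ever programmed in; the behaviour emerges entirely from the examples, which is why the productivity of this simple idea surprised everyone.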
Then you get the knowledge representation questions, which seek to find a synthesis of more traditional computer science matters and the rubbery nature of the real world, to form a way of representing useful things about the world in a manner that can be usefully traversed and reasoned over with tools built from formal logics.
And you have hard-core bespoke KR systems with hand-crafted tools that are directed at very specific tasks. We don’t consider a tool like Mathematica as AI, but the core came from traditional hard AI.
The above is hardly exhaustive. But the point is that nothing in here is even on the same planet as ideas about building a sentient thing. We might imagine we could build a Joi, given enough effort and enough compute, but she is little more than Eliza 90 years on.

I don’t understand OP.

Either way, the term ‘artificial intelligence’ is chauvinistic. It implies that only biological intelligence (human biological intelligence) is real and everything else is artificial.

Machine intelligence is a more neutral term. Biological intelligence is biological, machine intelligence is machine.

Since I’m not seeing a factual question here, let’s move this to IMHO.

Colibri
General Questions Moderator

Let’s ignore machine “cognition” or machine “consciousness” which are, at best, ambiguous terms, and focus on machine intelligence. A sufficiently intelligent machine may be able to mimic “consciousness.”

The similarities between artificial networks like ConvNet and organic brain tissue are enough to allow the use of the terms ‘neuron,’ ‘axon,’ and ‘synapse’ in the following. (I do not claim that organic brains are well understood, nor that artificial neural networks are always designed to deliberately resemble organic tissue.)

This is, at best, misleading. The best machine intelligences these days (e.g. AlphaZero) use convolutional neural networks. IIUC these are similar to 1990-vintage back-prop networks except
[ul][li] the networks are wider (more neurons per layer) and deeper (more neuron layers). This is possible due to faster chip speeds and specialized hardware.[/li][li] for greater efficiency and faster learning the networks support and exploit shift invariance. This was explicitly patterned after processing methods inferred for human vision processing in the cerebral occipital lobe.[/li][li] rather than the saturating sigmoid transfer function used for the axon output in 1990s-era backprop, the axon output is open-ended, perhaps even using the simple rectifier function.[/li][/ul]
Note that with this 3rd point, the neurons better mimic biology: Biological axons cannot readily fall below some minimal firing rate, but can fire much faster than their average or typical rate. The title of the 2000 paper introducing such a transfer function suggests that it was inspired by the biological brain: *“Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit.”* (I’ve not read it, but here is a non-paywalled Letter discussing that method.)
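The contrast between the saturating sigmoid and the open-ended rectifier described above is easy to see numerically:

```python
import math

# 1990s-style saturating transfer function vs. the open-ended rectifier.
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def relu(z):
    return max(0.0, z)

# The sigmoid saturates: large inputs all map close to 1,
# so doubling the input barely changes the output.
saturated = sigmoid(10) - sigmoid(5)   # a tiny difference

# The rectifier is open-ended above zero but clamps below it,
# like an axon with a minimum firing rate and no hard maximum.
open_ended = relu(10) - relu(5)        # the full difference of 5 survives
```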

Learning in such networks is accomplished by active synapses reinforcing themselves in the presence of a ‘Learn’ signal. AFAIK the learning mechanism in biologic neural tissue is not well understood, but is likely to be quite similar to this.
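The reinforcement rule described above, active synapses strengthening in the presence of a ‘Learn’ signal, can be sketched as a gated Hebbian-style update. This is a toy illustration of the mechanism as described in the post, not a claim about any real cortical learning rule.

```python
# Hebbian-style update gated by a global 'learn' signal: synapses that
# are active when the signal fires get reinforced.
def hebbian_step(weights, inputs, learn_signal, rate=0.1):
    """Strengthen each synapse in proportion to its input activity,
    but only when the learn signal is present."""
    if not learn_signal:
        return weights
    return [w + rate * x for w, x in zip(weights, inputs)]

w = [0.2, 0.2, 0.2]
w = hebbian_step(w, [1.0, 0.0, 1.0], learn_signal=True)   # active synapses grow
w = hebbian_step(w, [1.0, 1.0, 1.0], learn_signal=False)  # no signal, no change
```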

TL;DR: The method used by the best machine intelligence these days is remarkably similar to what we can infer about some types of simple biologic neural tissue.

The OP might be amused to learn that, in the realm of business, the term “artificial intelligence” has been co-opted to refer to things that I would prefer to call “analytics”. The things not only don’t need to think, they don’t even need to adapt their algorithm over time. Does it do calculations on data that detect aberrant data or (gasp!) perform predictive analysis? It’s artificial intelligence baby! Slap that on the packaging in BIG print!

I can’t agree about “artificial intelligence” being a chauvinistic term. I think that, despite the original poster’s obvious problems, he is essentially right this time. Nobody twisted the arm of the original AI researchers to force them to call it that, did they? And nobody now is running any organized campaigns to inform the public that AI has been cancelled or at least indefinitely postponed, are they?

I don’t think there’s anything unclear or chauvinistic or wrongheaded about the idea of artificial intelligence, and I don’t think there’s anything wrong with Alan Turing’s standard of achievement, that real AI is when a human being can’t tell that the AI isn’t human, when freely communicating with it over a text-only connection.

And intelligence constructed piece by piece IS artificial as opposed to natural. That’s what those words mean. Being artificial as opposed to natural is not a bad thing, just information.

Currently, AI is not only non-natural (which is just a neutral unavoidable fact), but IS also fake (which is a true and negative value judgment of its current state). Will it necessarily remain a fake, or an unattainable goal, as it currently appears? I certainly don’t know. But if anyone is currently “in the ballpark”, or even “on the same continent where the ballpark is located”, then they’re awfully good at keeping secrets.

The OP, to me, sounds like a statement of conviction that intelligence can’t and never will be artificially created, more than it sounds like a request for alternate terminology.

Having studied and thought about AI fairly deeply, I can say that there are some curiosities about the brain that I, at least, don’t understand how to achieve in an inheritable way — like humans’ consistent (?) quirk of picking out the elements of a face independently of orientation. But whatever structural components of the brain allow for this sort of thing are likely only necessary for rapid bootstrapping, and are not necessary for human-like intelligence.

The key ingredients to create human-like intelligence are:

  1. Bootstrapping (e.g., the ability to interact with other intelligences, who have learned more). *
  2. A sufficiently complex environment, along with a decent array of sensors to take this information in.
  3. A hormonal system.
  4. A sufficiently large number of neurons, arranged in a manner that allows for loops.

I don’t know if we yet have what we need on #4, though we do seem to be reaching that range to at least some extent. Minus 1-3, though, and you’re effectively just talking about a giant quadratic-regression calculator more than true AI. Add 1-3, and you need to start considering some pretty deep ethical questions.

  • This is necessary in humans because our minds become locked in on certain things at certain stages of development, and our total ability to learn in a truly flexible manner is time-limited. For an artificially constructed mind, that may not be a limitation and so this sort of bootstrapping might not be necessary, just the remaining items. But it would certainly be helpful to the artificial mind.

From the point of view of an interested, if not expert, bystander, I think the issue is defining advanced computing as intelligence. We do not have a complete understanding of human intelligence, and most humans are capable of mastering multiple tasks.

Most of the machines i have looked into are capable of mastering one or two tasks. None are capable of the many, many other tasks required to be functional in the world.

Show me a computational device that can design a spacecraft, then go grocery shopping, walk the dog, play with the kiddos, make dinner and watch BCS, have a conversation about the show and go to bed, and I will be impressed.

Would you be impressed by a “one or two tasks” machine that can watch BCS and have a conversation about it, and a separate “one or two tasks” machine that can go grocery shopping and walk the dog, and a separate “one or two tasks” machine that can make dinner — and a separate “one or two tasks” machine that can dial up the BCS machine, or the grocery-shopping machine, or the dinner-making machine?