Artificial Intelligence Overestimated?

While there are areas in the brain (cortices) which facilitate specific types of cognition or information processing, and cognitive scientists used to believe that all of the pertinent activity occurred in those regions, functional brain imaging has clearly shown that the entire brain is involved in all cognitive activities, including areas that are not considered part of higher cognitive functions such as reason or impulse control. The “functional approach” of trying to model activity in just one part of the brain is now widely considered inadequate, and no approach to date has really produced anything like human thought processes or psychology. The complexity of the brain and the interactions within it will likely defy any attempt at discrete modeling.

Read Kandel, who is not only considered one of the premier researchers in the field but who originally trained in psychiatry in the hope of developing working quantitative models of human psychology, only to realize not only that we don’t know enough about the fundamentals of cognition and memory, but that the phenomena are so complex we probably cannot reproduce them in any model significantly less complex than the brain itself. If you don’t have time to read Principles of Neural Science (and pretty much no one who isn’t a full-time student does), his In Search of Memory: The Emergence of a New Science of Mind is a great, not-too-technical discourse on the state of the art in cognition and memory (at least as of its publication in 2006) as well as a quasi-autobiography and a discourse on the challenges and tedium of research science.

Stranger

Hence, the goofy ‘human batteries’ in The Matrix; not providing ‘juice’ to power the machines, but ‘juice’ to power their creativity? (An idea that carries some controversy within the AI community.)

We don’t have a ‘pretty good simulation’ of a human brain; that’s an incredibly complex problem that is nowhere near solved. Exactly how the human brain works isn’t known, and what is known points to it being much too complicated to efficiently emulate with modern computers. So instead of being an ‘easy running-start’, trying to emulate a human brain on a modern computer is an incredibly hard problem that no one knows how to do and that probably can’t actually be done. Meanwhile, ‘starting from some other place entirely’ has already been done: AIs like Watson and AlphaGo exist, along with self-reprogramming neural networks and many other non-human-like systems.

It’s kind of weird to say that ‘we know’ a particular approach, like emulating a human brain on modern computing hardware, works when no one has been able to make it work and most people in the field think it’s not possible to make it work. (Not that emulating a human brain is impossible, but doing so on modern hardware in a way that is fast enough to interact with the world probably is).

Agreed, it’s not a simple and clean separation. At the same time, I regularly read new research that appears to show an increasing number of function-specific processing areas. Examples: object recognition, location tracking within an environment, time (duration, cycles, etc.).

I fully understand that there is also significant information flow across many areas and that the entire topic is incredibly complex, but that doesn’t mean there isn’t a somewhat finite set of underlying functions or capabilities that could be created and linked together in a way that produces a similar system.

I’ll read the link you provided, sounds interesting.

Did you just say “model the world”? Yeah, OK. I suppose if you modeled the “world” and had the computing capability to run that model then you would probably surpass a human ability to anticipate and react. Much short of that, a battalion of human operated drones with a few macros will probably beat similarly equipped AI robots 9 times out of 10.

Ghillie suits are not the only trick in the bag. You can baffle infrared profiles. You can baffle motion detectors. You can baffle anything that is programmed.

But how does it sound for its intended usage and audience, which is background music listened to by the average Joe or Jane non-music-expert?

I would bet that people playing video games or watching a movie trailer would think that level of quality is reasonable and serves its purpose. Or at minimum that it’s within the ballpark, and a few more iterations of improvement bring it to the point of being regularly useful.

Computers don’t need to create a number one hit to be considered successful, they just need to be good enough that people use them instead of a human for the same task.

I’m fairly sure you don’t understand what I meant by modelling the world. I’m not talking about simulation.

Human brains are programmed. What’s your point?

Human brains are programmed in a very loose sense: we learn stuff. But the firmware, hardware, and operating system are the result of hundreds of millions of years of evolution.

I don’t think AI has yet developed the ability to disconnect and reconnect circuits on its motherboard to facilitate different types of software.

I very much doubt that anyone is going to do psychology on a neuron-level simulation of a brain. But we don’t verify instruction sets or experiment with architectural enhancements for performance on a gate level model of a computer either. If we are able to abstract parts of the brain once we get a neuron level simulation working, we might be able to build a higher level simulator - which almost certainly will include operations we can’t imagine today looking top down. Or we might not - impossible to know until we do the lowest level version. We might be able to do psychology research on that.

We’re not talking skilled labor here, but very unskilled, and cheap, labor.
And the machines that would replace them are not highly sophisticated. They do have self-monitoring to keep them from going out of spec, but are certainly not self-repairing. Technicians are fairly cheap also.
It is nothing like sophisticated CNC machines replacing expensive machinists.
BTW this company has plenty of more automated traditional factories. They are hardly wedded to using cheap labor, and had a large engineering staff looking for opportunities to automate. But they also looked at ROI - and it wasn’t there in this case.

I know one thing for sure - no one is going to do a full brain simulation on a standard computer. If you want standard hardware, you might be able to use GPUs to simulate neurons, but you would most likely design custom chips for the job. I don’t know the optimal partition, but the number you would need would make a custom design quite cost-effective. I suspect communication would be the bottleneck, but we know that every neuron does not communicate directly with every other neuron, so we might be able to design bus architectures that resemble those the brain uses.
So don’t say a brain simulation is impossible by referring to the performance specs of the latest Intel processor.
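A rough back-of-envelope calculation makes the point about raw scale. Every number below is an assumption for illustration (common ballpark figures: ~86 billion neurons, ~10,000 synapses each, ~1 Hz average firing, ~10 operations per synaptic event), not a claim about the actual cost of brain simulation:

```python
# Back-of-envelope estimate of the compute needed for a naive
# real-time spiking-neuron brain simulation.
# All figures are rough assumptions for illustration only.

NEURONS = 8.6e10            # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron (assumption)
AVG_FIRING_HZ = 1.0         # average firing rate (assumption)
OPS_PER_SPIKE_EVENT = 10    # ops to process one synaptic event (assumption)

# Synaptic events per second of simulated time:
events_per_sec = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ
ops_per_sec = events_per_sec * OPS_PER_SPIKE_EVENT

print(f"Synaptic events/s:     {events_per_sec:.2e}")
print(f"Ops/s for real time:   {ops_per_sec:.2e}")

# Compare against a hypothetical 1-TFLOP general-purpose chip:
CHIP_FLOPS = 1e12
chips_needed = ops_per_sec / CHIP_FLOPS
print(f"1-TFLOP chips needed (ignoring communication): {chips_needed:.0f}")
```

Even under these charitable assumptions you land in the thousands of chips, before accounting for the inter-chip communication that would likely dominate, which is exactly why a partitioned custom design looks more plausible than any single standard processor.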

It could if the motherboard were made up of FPGAs. And back when I was in grad school we talked about language directed architectures, where you could dynamically microprogram a computer to be optimized for a specific language or even piece of software.
It didn’t turn out to be very effective, since you can do just as good a job at the higher level (which AIs do already), and for software it is cheaper to compile for the architecture you already have. But recent SPARC processors have features to make them run Oracle databases faster. Larry is of the generation where this was cool. It doesn’t seem to have worked any better today than back then, though.

I don’t know what you mean by that exact phrasing but it appears to be a really limited notion of what AI may be. The notion that it would be explicitly programmed with any kind of defined behaviours is probably false - it will most likely need to be trained - to learn, to fall short of goals and retry, adjusting itself and trying again, retaining what works and discarding or changing what does not.

Deep learning networks and other such constructs (not AI, but a promising step along the way) are already at a state of development where they can reconfigure themselves so as to attain goals that were not specifically thought of by the people who made them.
Trained algorithms are already capable of surpassing humans in recognition tasks.

For the record, I wasn’t saying “we know” the emulate-a-human-brain approach works; I was saying “we know” the think-like-a-human approach works. So if a guy postulates an AI that thinks in an utterly inhuman fashion, I couldn’t say much about its odds of playing diagnostician; but postulate an AI that thinks the way a human diagnostician does, and I’d figure ‘since the one thing we already know is just how well a human can do that, I guess you’d be off to a solid start.’

Here is another, from the latest Technology Review. Since smartphone microphones are now more sensitive than our ears, a grad student at MIT has developed a system which got trained on car noises, and which can diagnose failures and incipient failures in cars by the noises only. It detected that wheels were out of balance better than professional drivers did. He and his professor formed a start-up, and you may soon have an app you can buy which can tell if your car has problems.
And which won’t decide that a belt needs replacing because sales at the garage have been down this month.

The diagnostic system I mentioned earlier had a feature which allowed someone to hand insert diagnoses. It turned out that the experts in the factory pre-programmed it, so it started out ahead, and then used statistical methods to improve further. This was not intended, but it was nice.
So doing it the human way and the machine way can sometimes be combined.

1. Why is this AI? To me, it just looks like a new measuring device. One which measures audio input instead of, say, electrical inductance from a wire attached to the wheel.

2. No. If it is true AI, it will definitely be able to choose to tell you that you need a belt replaced, because this month’s profits are lower than average, and analyzing the Big Data from your credit card over the past 5 years shows that you fit the statistical model of customers who can be talked into making impulse purchases.

My assumption is the way that it is trained. Give the AI a bunch of inputs from various cars, link them to various faults, and have the AI figure out what the sonic similarities are between various audio inputs and faults. So, you know, pretty much how a human learns it.
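A toy sketch of that train-on-labeled-sounds idea. Everything here is fabricated for illustration: the “spectral features” are hand-invented three-number vectors, the fault labels are made up, and the model is a crude nearest-centroid classifier rather than the deep network a real system would use:

```python
# Toy fault classifier: learn the average "engine sound" feature
# vector (centroid) per fault label, then label a new sound by
# whichever centroid it is closest to. All data is fabricated.

import math
from collections import defaultdict

# (feature_vector, fault_label) pairs; features might be energy in
# [low_band, mid_band, high_band] of the audio spectrum (assumption).
training_data = [
    ([0.9, 0.2, 0.1], "wheel_imbalance"),
    ([0.8, 0.3, 0.2], "wheel_imbalance"),
    ([0.1, 0.9, 0.3], "worn_belt"),
    ([0.2, 0.8, 0.4], "worn_belt"),
    ([0.1, 0.2, 0.2], "healthy"),
    ([0.2, 0.1, 0.1], "healthy"),
]

def train(data):
    """Compute the mean feature vector (centroid) for each label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for features, label in data:
        for i, x in enumerate(features):
            sums[label][i] += x
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def predict(centroids, features):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: math.dist(centroids[lbl], features))

centroids = train(training_data)
print(predict(centroids, [0.85, 0.25, 0.15]))  # resembles wheel_imbalance
```

The human analogy holds at this level: neither the mechanic nor the model is handed an explicit rule; both generalize from examples of sounds paired with known faults.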

It’s “AI” because that is the term being used for things that involve machine learning, pattern matching and pretty much anything that we don’t have a straightforward algorithm/formula we can program into the system. If it was a simple as “Analog to digital converter value of 78 means loud noise so engine needs tune up”, then we wouldn’t call it “AI”.

It’s true that it’s on the low end of the continuum of things we might include in the “intelligence” bucket, but these machine learning techniques that have been developed/refined over the past 20/30 years are a significant step forward. And the rate of improving capabilities is increasing.

Our survival instincts could be evolved pretty rapidly. According to Malcolm Gladwell it takes 10,000 hours for a human to master a task; AlphaGo Zero went from nothing to superhuman Go in a matter of days of self-play. I see no reason to think our survival instinct is even a good thing for a machine, let alone something they couldn’t develop too.

AI can’t create art or music yet, but they couldn’t drive cars or answer verbal queries not too long ago either.

I don’t know when or where it’ll be cheaper to use a human, though. Even when you factor in energy, right now a kWh is maybe $0.10, and most robots don’t use that much energy.

I hope AI comes and helps us solve the big issues facing humanity (poverty, medical problems, sustainability, war & peace, etc). I’m sure the problem solving capabilities of humans will continue to grow this century. The real issue is will there be AGI and will there be an intelligence explosion. I don’t know when both those things will happen.

This is not the only person, some far more influential and well-known, who thinks maybe so.