I think this isn’t a bad working description of what a real AI system would look like. We have plenty of self-learning systems now; that’s a necessary but not sufficient requirement. (My Android SMS app learns what I typically type, so I can tell my wife I’m on my way home with two clicks.)
And thanks for saying I wrote intelligent software, but that’s true only for very weak definitions of intelligent. In industry, unless your application has learning as a requirement, you write the most effective algorithm or heuristic to solve the specific problem, since that takes less time and runs faster. There are GA (genetic algorithm) versions of some of our tools in academia, but no one would actually ever use one.
If the researcher were being honest (and not writing a grant proposal) he’d say that these things are steps to AI, not AI itself. I never had this discussion with anyone, so I can’t say for sure, but I think they knew what real AI would look like, and Habeed kind of nailed it.
We definitely have a lot of things that an AI would need in its toolbox - language processing and speech, object recognition, data access, but we don’t have the stuff that would tie it all together and know when to use each one, let alone anticipate and learn. No one talking into a smartphone is going to have the slightest illusion that they are talking to a thinking being.
Thanks. Yeah, you’d have to get there a layer at a time. You want low level systems to be dead on, rock solid reliable, where lots of people worldwide can work on them and get them to do revenue generating things. Then you can build from there. You want to start with relatively simple tasks for your learning algorithms and optimize them over years.
“What is this object in front of the camera?” Or, actually, more practically, “is this object in the class of objects the car can safely drive over, or is it the kind you want to avoid, or somewhere in between?”
“Is this component on the assembly line good enough to use or faulty?”
“This object needs to fit into this object. How do you control the robot servos so the object in the left waldo fits correctly into the object in the right waldo?”
And so on. There need to be bulletproof techniques that do these things; software libraries and architectures that let a huge number of people contribute (i.e., you don’t hand-write the neural net node simulator yourself; it’s already optimized to run at high speed on a GPU-like chip); and a competitive environment where a lot of people can contribute possible solutions and have them evaluated in simulators and in the real world.
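To make the “good enough to use or faulty?” question above concrete, here’s a minimal sketch of the kind of learned pass/fail rule involved. The feature vector, the made-up measurements, and the 0.9 cutoff are all invented for illustration; a real inspection system would be far more elaborate.
[code]
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: 200 inspected parts, 3 made-up measurements each,
# label 1 = good, 0 = faulty.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

# Logistic regression trained by plain gradient descent -- roughly the
# simplest possible "learn a pass/fail rule from examples" model.
w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability "good"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Score a new part coming down the line.
new_part = np.array([0.8, -0.3, 0.1])
p_good = 1.0 / (1.0 + np.exp(-(new_part @ w + b)))
print(f"P(good) = {p_good:.2f} ->", "use it" if p_good > 0.9 else "reject / flag for a human")
[/code]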
Phase 0 was getting computer chips fast enough and with enough memory, and software algorithms that sorta exhibit slightly intelligent behavior.
Phase 1 is getting neural networks borrowed from human brain research to the point where they are starting to be actually effective. Supposedly, the reason Google recently got their Atari-playing network sim working is that they stole a technique from nature for rapidly resolving the node states.
Phase 2 is using those networks to do low level tasks, similar to the functions in a human mind.
Phase N is the system that integrates probably on the order of 1,000 to 10,000 separate subsystems, all feeding data back and forth to each other, all self-modifying, and this system includes features that act kind of like self-awareness, long-term memory, goals, some sense of whether a set of actions was a “good” or a “bad” choice, etc. etc. etc.
One issue that is obvious is that if each sub-system is self modifying, and each subsystem can potentially self-modify itself into a state that is non functional, and each subsystem self modifies as a result of data from other self modifying subsystems…how in the world do you build a system that doesn’t rapidly and consistently fail almost immediately?
Ironically, the failures would be very different from computer crashes. The network would continue to run as the low level code, probably assembly language for a GPU, would be bulletproof and would continue to simulate changes in node states. It would just be “locked up” where subsystems have stopped producing meaningful outputs and feedback is continuing to push those subsystems into a failure mode.
TLDR, the system fails and the morality module locks up. Fortunately, a few microseconds later, other critical systems lock up too, and your would-be Skynet just sits there, none of the connected robots doing anything but maybe twitching, and the CPU fans on the server farm are all at max as your AI network does a good job of modeling a resistor.
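For what it’s worth, here’s a toy numerical illustration of that “locked up but still running” failure mode: a few subsystems feed each other outputs and self-modify their own gains based on what they receive, with no global stability check. The update rule is invented purely to show runaway feedback, not any real architecture.
[code]
import numpy as np

N = 5
rng = np.random.default_rng(1)
state = rng.normal(size=N)           # each subsystem's current output
gain = np.ones(N)                    # each subsystem's self-modified parameter
coupling = rng.normal(size=(N, N))   # who listens to whom, and how strongly

for step in range(50):
    inputs = coupling @ state
    # Each subsystem "self-modifies": it cranks its own gain based on what its
    # neighbours send it, with no global check on stability.
    gain += 0.1 * np.abs(inputs)
    # Outputs saturate (tanh), so nothing ever crashes -- the loop keeps running.
    state = np.tanh(gain * inputs)

print(state)   # typically pinned near +/-1: still "running", saying nothing
[/code]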
I don’t think anyone here is saying we have AI or are even close to AI yet! What we have today is very crude AI, with maybe the intelligence of a mouse!
I think there are many things in this thread going unanswered:
1. Will computer power keep going up until we have true AI?
2. Have we reached a wall with computer power now?
3. In the past 5 years there has been little to no progress with computers; we are hitting a wall.
4. The future of computers looks really bad, even over the next 5 years.
5. Computer stores sell old computers for some strange unknown reason, be it a cost issue or stagnating technology.
6. Computer technology is stagnating.
7. Moore’s Law is starting to come to an end now.
8. Computer stores sell old junk computers, or overpriced ones with old technology.
9. AI and future robots do not look good because of the stagnating technology of computers.
It would take billions or trillions of networked CPUs to get anywhere close to the AI of a person’s brain.
I think it’s clear that very shortly, AI will be a software problem more than a hardware problem. We can build a computer architected something like a brain–the IBM computer is along those lines–but actually programming it to be a general-purpose AI is a completely unanswered problem.
For single-threaded performance, we’re pretty close. It’s obvious that we’ve hit the gigahertz wall. Performance per cycle goes up, but very slowly. The future is massive parallelism.
That’s not true overall. As I said, single-threaded performance isn’t going up very fast. But there are lots of other factors to improve. SSDs continue to improve dramatically, for instance. HBM technology (“High Bandwidth Memory”) will give a big boost to graphics processors, and may make it down to CPUs as well.
I’d say there’s at least a couple of decades left in process improvements. Not just size but form factor as well.
That’s a problem local to you. Just order online if you’re stuck with crappy local shops.
No; the software is making leaps and bounds. We seem to have recently hit a threshold where neural nets actually make sense. They were stagnant for decades but there’s a recent resurgence in their use for image and voice recognition. And it looks like neural nets can scale with the technologies that we’ll have to switch to (massive parallelism, asynchronous, local communication, etc.).
Maybe, but they can run at a much lower clock rate. That makes things like 3D chips possible (for the reasons that Francis Vaughan mentioned). Trillions of tiny, low-speed, asynchronous processors may well be possible. The IBM machine has 4096 cores and only consumes 70 mW. And that’s just a first-gen version of that kind of design.
This presupposes that AI is the driver or ultimate goal. Our really big compute markets are driven by scientific and technical questions. Even if we did achieve a real AI, this may still remain a driver. But I remain sceptical that AI will ever be seen as a financial driver.
Technically no. But the current drive of technology that begat Moore’s Law is petering out. Every step since the 4004 has been the same deal, smaller transistors on a 2D slab of silicon. That particular trail is going cold.
Progress is there, but has slowed. That is all. Maybe to about half the pace that it was. It is still pretty astounding.
Not really; 10 nm and 7 nm processes are going to happen. But they are not coming at the Moore’s Law rate.
No store sells old computers or technology. Nobody makes them. Stores sell up to date technology. But technology has reached the point where the consumer is satisfied with a system that is now at a much lower price point than before - because of the improvements in technology.
Nope. In many ways it is as vibrant as it has ever been. But the familiar well worn paths are petering out.
Moore’s Law as Moore saw it is coming to an end. Our ability to cram transistors onto a 2D slab of silicon is getting close to the end. Moving to 3D doesn’t get us as neat a deal, since transistor count went up with the square of the inverse of feature size, while 3D only gets us linear growth with the number of layers. New technologies for logic elements are probably where any major advance lies. But we may have something of a hiatus in the meantime.
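A rough back-of-envelope version of that scaling argument (the node numbers are just illustrative):
[code]
# Shrinking feature size F scales transistor count like area / F^2;
# stacking layers only multiplies it linearly.
def transistors_2d(feature_nm, base_feature_nm=28.0):
    """Relative transistor count from a 2D shrink alone."""
    return (base_feature_nm / feature_nm) ** 2

print(transistors_2d(14))        # one full shrink, 28 nm -> 14 nm: ~4x
print(transistors_2d(7))         # 28 nm -> 7 nm: ~16x
print(4 * transistors_2d(28))    # four stacked layers at the same node: only 4x
[/code]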
As above. Nobody makes old technology, so there is none to sell. Shops are price sensitive. They stock what sells, and the margins are razor thin. Look at the gaming market to find the fast stuff. Many gamers build their own, so the market for ready built machines is mostly full of lower end systems.
You need a use case that can fund the research. The computer industry has invested many, many billions, but it did this on the back of selling a very large number of chips into PCs, cars, phones, and home entertainment systems. Billions of customers. There may simply not be the customer base to drive large-scale AI.
Likely true. In many ways this bit isn’t what matters.
Actually, it’s more driven by gaming. Scientific computing has made great leaps and bounds, to be sure, but it’s ridden on the back of gaming advances. It turns out that the same GPUs that do the bulk of the work in most games are also quite well-suited to a wide variety of scientific problems. But there’s a reason they’re called “graphics processing units” and not “vector calculation processing units”.
I’m a PhD student in AI/Machine Learning, but I think it’s a super long way off, though as a functionalist (with a bit of the Ability Hypothesis thrown in) I think it’s possible in a strict sense. I do think that once the technology is there, it’s conceivable a human-like AI can be produced by interested self-motivated people, but likely not as part of any funded research project. For one, if we produced an AI exactly like a human, you’d have to treat the thing like a real human baby for a few years before you even knew if the damn thing was working properly.
That said, I think people are missing a lot of applications of AI that are distinct from Siri-like personal assistants, but also not “hard” AI. There are a lot of planning problems that computers can help with. There are a lot of problems where the policy for “what should I do in this situation” is too hard to hand-code, either because the number of variables is too big or because you can’t precisely codify the important variables up front, so you’d rather have the system learn the policy itself.
Current planners struggle with, in effect, “unknown unknowns”: things that we don’t explicitly know, or things that we can’t explicitly model the uncertainty of (with probability). This is a huge problem in using AIs in safety-critical applications, because they can go off the rails and start doing really bad things. So there’s a big gap in AI right now for developing a sort of “meta knowledge”: planners and agents that can figure out when they have no idea what to do and ask for help. The big problem with current AI is that we just get a number out of some regression and trust it blindly: do thing x if the value is at least y. These systems have no way to assess the confidence of their assertions, and doing so is an open research problem.
Siri and the like have very low-level versions of this when they ask for disambiguation, but that’s mostly hand-coded (“I found two contacts named <x>” and such).
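Just to make the “ask for help” idea concrete, here’s a minimal sketch: instead of the usual “do x if the score is at least y”, the agent abstains and hands the decision to a human when its own confidence is low. The model, actions, and threshold are all placeholders; producing a confidence number you can actually trust is exactly the open problem.
[code]
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def act_or_ask(scores, actions, confidence_threshold=0.8):
    """scores: raw model outputs, one per candidate action (placeholder model)."""
    probs = softmax(np.asarray(scores, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] >= confidence_threshold:
        return actions[best]
    # Low confidence: abstain and escalate instead of acting blindly.
    return f"ask for help (best guess '{actions[best]}' at only {probs[best]:.0%})"

print(act_or_ask([4.0, 0.5, 0.2], ["brake", "swerve", "continue"]))  # confident: "brake"
print(act_or_ask([1.1, 1.0, 0.9], ["brake", "swerve", "continue"]))  # unsure: asks for help
[/code]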
It has been driven by gaming for about a decade. But GPUs have only had limited scientific utility: usually only single-precision floating point, and even then they often cut corners. For a while there was talk of games physics processors that would do the hard work of the games simulation, but this seemed to get overtaken by the idea of the GPGPU. Now it seems we are getting a better deal. But GPUs, or even GPGPUs, are only useful for a range of things. (In a previous life a group of colleagues were big-time lattice gauge guys, and they still keep an eye on GPUs; they only need single precision.)
The big limitation, even with more sensible floating point implementations, is the lack of memory integration. Bandwidth on and off the GPUs is poor, and the latency of transfers isn’t much chop either. If you can push your problem into the GPU’s memory for long enough to do some useful work you can win. But lots of problems won’t do that. Enough do that there are some very serious machines around with GPU acceleration.
Then you get funny devices like Intel’s Phi, that seems to be neither fish nor fowl, and a bit of a disappointment.
Well, I said that many scientific problems work well on GPUs, not all of them. And the existence of problems where the limitations of GPUs are significant is yet another illustration that it’s games that are the driver, not science.
I would argue, though, that games have always been a bigger driver of computer tech than science, even in the era before GPUs. Remember, Unix itself was created because a group of grad students had a spare computer lying around, and wanted to put an OS on it so they could use it for games.
I’ve seen the “faster hardware means AI” fallacy for decades. When the 386 came out the USA Today said AI was just around the corner - which is pretty funny. AI is not a hardware problem or else we’d have it already.
Clock speed, maybe, but power nowhere close. There are two more process nodes well along.
Software tends to expand with hardware power, so the responsiveness you see on your PC might look the same as it did five years ago even though the CPU is more powerful. I see that with my work computer (Win 7), which is much faster than the creaky old Vista computer I use at home.
Bad for PC makers maybe, since PCs are becoming a commodity, but not bad for the industry as a whole. We’re just beginning to make them useful assistants.
Really? Same amount of memory? Same amount of disk? Same number of cores? Unless your store sells antiquated machines, probably not true. Look for benchmark results to see for sure.
No and no. We recently jumped to a new process node and the next is on our roadmap.
Time for you to find a new store.
I suspect we have plenty of computing power to do AI now if someone knew how to implement real AI in software. Brain emulators will be custom hardware designs and won’t have that much to do with standard CPUs.
Grad students? You mean people in Bell Labs Area 11, though I can understand how people couldn’t tell the difference. Except that Bell Labs people get paid a lot more.
Here is something that might be interesting to you in the sense that it might stimulate you to think in different directions. Unknowns are a big issue in digital hardware simulation, especially fault simulation. We have Boolean algebras that deal with unknowns. In fact, a lot of the time spent when a processor powers up is getting rid of unknown state.
When you power up a flip-flop it starts at unknown (X). Say you’ve designed a circuit to initialize when a counter reaches a certain value. In real life, if you clock that counter long enough, it is guaranteed to reach that value at some point. But in simulation the counter starts at all Xs and stays there, so your simulated circuit never initializes. (I’ve seen a real example of this.) There has been work on dealing with circuits of this type, but it is easier to just initialize the counter to all 0s or something.
At gate level Xs are not that big a problem. But if you want to simulate at the functional level (an adder just adds, and is not simulated as a collection of gates) unknowns are a big problem and tend to propagate like weeds and poison the results - which is more or less what is happening in your example.
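If you want to see how fast Xs take over, here’s a toy three-valued (0/1/X) simulation of that counter example. The representation and gate rules are my own simplification for illustration, not any particular simulator’s.
[code]
X = "X"

def and3(a, b):
    if a == 0 or b == 0:
        return 0          # a controlling 0 gives a known answer even with an X input
    if a == X or b == X:
        return X
    return 1

def not3(a):
    return X if a == X else 1 - a

def xor3(a, b):
    return X if X in (a, b) else a ^ b

print(and3(0, X))   # 0: at gate level a controlling value can kill the unknown

# 2-bit counter whose flip-flops power up unknown. In real hardware it would
# step through 00, 01, 10, 11 and hit the init value eventually; in simulation
# it never leaves X.
q1, q0 = X, X
for cycle in range(4):
    q0, q1 = not3(q0), xor3(q1, q0)   # next-state logic of a binary up counter
    print(f"cycle {cycle}: q1={q1} q0={q0}")
# Every cycle prints q1=X q0=X -- the "initialize when the counter reaches N"
# trick never fires in the simulator.
[/code]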
Undergraduate logic design classes don’t deal with unknowns a lot, so they are unknown to many computer scientists.
Although the main architecture development costs are paid by gaming, the scientific market is now big enough to justify limited additional investments into these problems.
The NVIDIA Titan has excellent double-precision performance. All chips support slow, emulated DP (1/32 to 1/24 speed), while the Titan is 1/3 speed. This does increase chip costs but at the high end this isn’t as important, and getting DP on there makes them more interesting for compute.
NVLink is NVIDIA’s solution to the CPU-GPU link problem. The next supercomputers from Oak Ridge will feature this interconnect.
BTW, all recent NVIDIA GPUs are IEEE 754-2008 compliant (not sure offhand if they ever produce denorms, but they accept them).
Because that is all people need. Unless you are doing serious media editing you don’t need anything bigger, and spending the money won’t make your computing experience faster.
As above. Unless you are storing lots of movies, an average person will never fill a 1TB disk.
Because i3 and i5 are not old technology. They are market points, not technology. Intel released a new i5 a month ago as part of the 6[sup]th[/sup] gen release, and will release many more. If you think you need an i7 you need to learn more about real life compute results. If you only can make use of a couple of cores, doing mundane web browsing, email, and office type applications, an i3 is more than adequate. Why spend the extra money just to use more electricity?
Mostly to get the price down. An easy up-sell.
As above. Unless the customer has a workload that can use more cores, selling a machine with them is wasting money and electricity.
Same price? Same performance? Same power draw? Performance is not about clock rate.
You will usually win building up your own machine. However you need to be careful to really be comparing like with like.
Yeah, I built my own computer once. It was a valuable learning experience, and I’m glad I’ve done it. And like most valuable learning experiences, I don’t intend to ever do it again.