I think you’re significantly underestimating the routing problem. Interconnect speed on a chip isn’t the problem; the problem is getting enough space to do the routing.
Optical interconnect inside a chip doesn’t solve that problem. For connections between chips (and you’ll need lots of them), optical interconnect could maybe solve some problems, except it has never seemed to work well as a standard interface. When I was at Bell Labs there was a lot of research on optical switches.
Clock routing is not a problem. I’m quite familiar with asynchronous methods, which have been around for at least the 45 years I’m aware of, and which have also never worked very well. HP and Sun had research efforts in this area which both failed, and I’m sure tons of other companies did also. When I was in grad school we had a dataflow machine simulator, and I demonstrated that it did a terrible job simulating instruction graphs (thanks to the need for some access to memory).
Local memory will be no problem at all. Neurons talking will be.
The real brain you are reading this message with does it in a space that is relatively small, and it doesn’t have multiplexing. Every single connection needs a distinct wire, and the wires sit idle in between firings. (Each axon only runs at about a kilohertz, though admittedly a high-speed brain emulator would be drastically faster.)
So how do you solve the routing problem?
Don’t. The brain emulator would be a programmable switch fabric. Nearly all interneuron messages would travel down tiny local buses that go a very short distance and are shared among at most a few hundred synapse simulators.
Your actual brain does this whenever possible - most of the computational circuits are crammed into tightly arranged columns and rows with repeating interconnect patterns. Synapses that aren’t useful get pruned or downregulated. So your chip-emulated version would similarly place synapse state machines hanging off of tight local buses that only talk to their neighbors.
For the big long-distance connections, use frequency-division multiplexing instead of routing when possible. Frequency-divided interconnects wouldn’t add latency.
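To pin down what I mean by “don’t route, share tiny local buses,” here’s a minimal sketch in Python. Every number and name in it (the bus width, the 32 local neurons, the firing probability, the class names) is an illustrative assumption rather than a real chip spec; the point is just that a spike only needs to broadcast a small source ID on a bus shared by a few hundred synapse simulators, and since each axon fires at roughly a kilohertz the bus sits idle almost all the time.

import random

# Toy address-event local bus (all constants are illustrative assumptions,
# not a real chip spec).  One short bus is shared by a few hundred synapse
# simulators; a firing neuron just broadcasts its small local ID.

BUS_WIDTH = 256           # synapse simulators hanging off one local bus
NEURONS = 32              # local neurons that can drive this bus
FIRE_PROBABILITY = 0.001  # ~1 kHz firing rate against a much faster tick

class SynapseSim:
    """One synapse state machine snooping the local bus."""
    def __init__(self, source_id, weight):
        self.source_id = source_id
        self.weight = weight
        self.accumulated = 0.0

    def on_bus_event(self, fired_id):
        # Every synapse sees every broadcast; only the ones wired to the
        # firing neuron react, so no per-connection routing is needed.
        if fired_id == self.source_id:
            self.accumulated += self.weight

class LocalBus:
    """A very short shared bus: one small ID broadcast per spike."""
    def __init__(self, synapses):
        self.synapses = synapses

    def broadcast(self, fired_id):
        for s in self.synapses:
            s.on_bus_event(fired_id)

synapses = [SynapseSim(source_id=i % NEURONS, weight=random.random())
            for i in range(BUS_WIDTH)]
bus = LocalBus(synapses)

# Most ticks nothing fires, which is why sharing one bus among hundreds
# of synapse simulators costs essentially nothing.
for tick in range(10_000):
    for neuron_id in range(NEURONS):
        if random.random() < FIRE_PROBABILITY:
            bus.broadcast(neuron_id)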
As for asynchronous chips not working very well - no shit, nearly all computer programming implicitly demands a synchronous machine. The brain isn’t synchronous.
Oh, just saw this article. IBM’s latest ASIC neuron emulator chips contain 5.7 billion transistors and eat only 70 milliwatts of power! That’s insanely efficient, and right in line with what I’ve been saying. Don’t believe me, believe IBM.
That’s roughly four thousand times the power efficiency of normal 5.7 billion transistor chips. (high end graphics cards have about that many today and eat 300 watts running flat out and overclocked to their limits)
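For what it’s worth, the “roughly four thousand times” figure falls straight out of the two numbers quoted above (a 300 watt graphics card versus 70 milliwatts for the IBM chip); a quick check in Python:

gpu_watts = 300.0   # high-end graphics card running flat out, as quoted above
ibm_watts = 0.070   # 70 milliwatts claimed for the IBM chip
print(gpu_watts / ibm_watts)   # ~4286, i.e. "roughly four thousand times"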
With a four thousand fold reduction in waste heat, you absolutely could build the chip in layers and make chips that have trillions of transistors in a single cube. Moore’s law ain’t dead, it ain’t even resting…
“Getting by” isn’t good enough for lots of people. I don’t game or edit video, but I have a maxed-out iMac Retina with a low-res iMac as a second monitor. It was considerably more expensive than $500, but not nearly as expensive as a Pro machine. While it’s indeed fast for a lot of things, the most noticeable thing is not having lag. Lag is annoying, and (to me) that’s money well spent.
My first hard drive was only 20 megabytes, and I had one megabyte of RAM, and it was only capable of black and white. That was about $2500 IIRC. My first computer used a cassette recorder and only had 2K of RAM.
Not wanting to start a tit for tat on my hijack of Habeed’s hijack of sweat’s thread; I do get and agree with your larger point that semi-intelligence is sometimes better than no intelligence and is also often plenty good enough.
But for the use case above, I gotta point out that Outlook 2000 could do that. And it did for me on many occasions. To be sure, it was beeping, not talking, but I could have made an audio clip of a voice and had that play instead of the built-in Windows reminder chime.
OK, on double-checking, I should have said 69 years, not 70. We built machines that were better at some cognitive tasks than humans are. With the help of those machines, we designed further machines, that we couldn’t have designed on our own, that were even better at those tasks, and also good enough that they were better than us at some other tasks. The process has been repeating since then, leading ultimately to Google, Siri, Wolfram Alpha, and the like. Do you think that either the programs for those systems, or the hardware on which the programs run, could have been designed without the use of computers?
Yeah I’m a little piqued about that, LSLGuy. You could have at least responded to the massive pile of text I wrote above, normally you have intelligent things to say.
Part of the reason I’m describing specifically what a *real* artificial intelligence would look like, at the hardware and system-architecture level, is to get an idea of the scale of such a system. It’s all well and good to state you took AI courses in the 1970s, but this would be like starting school after the first bottle rockets were invented, and it’s now the 1930s and we finally have liquid-fueled rockets, and you are asking why we haven’t flown to the Moon yet and implying it is impossible.
A real AI would still be a massive machine, bigger by a significant margin than the largest supercomputer to date, and it wouldn’t be remotely feasible without custom ASIC chips specifically made for the task. (It is thousands of times less efficient to use modern CPUs or GPUs, even though they are also Turing machines and capable of running some form of neural net application.)
Or, using the rocketry analogy, we now have sounding rockets and we know an engine that can reach the Moon can be constructed, but we need a rocket the size of a building to do it and there are a lot of details about the design that have yet to be worked out.
A set of hardware that can support an AI is a “building sized” piece of equipment - quite literally.
I’ve written Electronic Design Automation Tools that were used in the design of the next generation of computers. I managed the writing of still more, and I’ve been a customer of others. No one who has ever written any, and certainly no one who has used any, would ever mistake them for intelligent. I’ve never heard any AI researchers claim these tools as being part of their domain.
Some do use the same kind of heuristics and search space algorithms used by some AI programs, but that isn’t even close to AI.
BTW, not only are computers required to design other computers, computers are required to build the next generation of computers. Every stage of a fab is computer controlled. Board assembly is computer controlled - it requires placement of some components way too small for people to do. Testing is computer controlled. Managing the factories is computer controlled, and collecting data for yield improvement and production planning is computer controlled.
@Habeed: I’ve said nothing ref your posts because it’s out of my depth.
I have a BS Comp Sci from the late 70s/ early 80s, and 15 cumulative years in business software development, most recently ending in 2011. I have faith in strong AI … eventually.
I think you are probably addressing me. When I took AI in 1971, one of our textbooks was from 1959. Great ideas back then also.
You are talking about how AI could be built. I actually agree with you that brain simulation is a more likely viable approach than figuring out how the mind works, given that we’ve made little progress on the fundamental issues in 50 years.
My argument with you is that you are grossly underestimating the difficulty of the problem. My training is in computer architecture, I studied multi-processor systems extensively in school, I cobbled some together and wrote the software to control it, and I’ve been in processor design groups for the last 20 years.
Off the top of my head I’d say we’ll have a simulation of a worm brain running within 10 years. Human brains? A lot, lot longer. A lot of the issues don’t scale with Moore’s Law, to get back on topic.
Actually, there are tons of applications that are not inherently synchronous. Compilers and processors both juggle instructions left and right. There are data and control dependencies, but that is another problem. Threading represents another example of where it is not important to be synchronous - across threads at least.
Hell, Spice simulations are not synchronous. If asynchronous computers worked well, they’d be perfect for running Spice.
One big problem with asynchronous logic is that it is a pain to test and debug. When I was involved with HRC, someone involved with research at a major corporation told me they dumped their asynchronous logic project for this very reason.
BTW, I/O is getting asynchronous. High-speed I/O these days carries the clock with the signals, which gets reconstructed on-chip, and a particular bit may appear at various times. Testers, which used to look for a 1 or a 0 at a particular time, have had to be adapted for this new regime and to be aware of protocols, not times.
Anyone who has ever laid out a clock tree would love asynchronous logic.
One problem is that until recently, we weren’t sure what *kind *of intelligence we really needed. Now we do.
What the structure of an AI would look like is a series of nodes, interconnected into modules. Data flows in, and data exits. Feedback from measurement systems tied to real factors up- and down-regulates specific activity from specific modules. In the case of humans, those factors are things like whether you got to eat, pain, pleasure, and evaluation by “fixed” sections of your neural net that do not seem to be programmable - which is why steak still tastes like steak, members of the other gender are still attractive, and poop smells bad, for almost all humans, their entire lives.
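To make the shape of that concrete, here’s a toy sketch in Python. The module names, the gain rule, and the fake reward are all invented for illustration, not a claim about how a real brain or chip works: data flows through modules, and a fixed, non-programmable feedback rule up- or down-regulates how strongly each module’s output counts in the future. The point is only the shape - computation plus a feedback loop that adjusts future behavior.

import random

# Illustrative only: a "module" is a black box of nodes whose influence is
# up- or down-regulated by a fixed (non-programmable) feedback signal, the
# way food/pain/pleasure regulate biological circuits.

class Module:
    def __init__(self, name):
        self.name = name
        self.gain = 1.0    # how strongly this module's output is weighted

    def process(self, data):
        # Stand-in for whatever computation the module's nodes actually do.
        return self.gain * sum(data) / len(data)

    def apply_feedback(self, reward):
        # Fixed regulation rule: good outcomes up-regulate the module,
        # bad outcomes down-regulate it.  The rule itself never changes.
        self.gain = max(0.1, self.gain + 0.1 * reward)

modules = [Module("vision"), Module("planning"), Module("motor")]

for step in range(100):
    data = [random.random() for _ in range(8)]
    # In a real system each module's output would feed downstream modules;
    # in this toy we just compute them.
    outputs = [m.process(data) for m in modules]
    # Pretend the environment hands back a scalar reward (ate, felt pain, ...).
    reward = random.choice([-1.0, 1.0])
    for m in modules:
        m.apply_feedback(reward)

print({m.name: round(m.gain, 2) for m in modules})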
This is the missing factor in your circuit design tools. There’s no memory where the tool saves what happened when it tried applying a rule to the board design with a specific coefficient or other numerical term that can be varied, and there’s no combinatorial graph where it can remember “if the circuit resembles THIS, and THIS happens, and THIS is also true, and so is THIS, do <strategy x1>”. You know right away why you didn’t build it that way - you’d rather the tool act consistently instead of sometimes screwing up and sometimes operating brilliantly.
Also, solutions generated by these kinds of networks, if you let ’em run wild, can be pretty fragile.
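Here is a rough sketch, in Python, of the kind of outcome memory I’m describing above; the situation keys, strategy names, and scores are made up for the example, and no real EDA tool is claimed to work this way. The tool records how each strategy worked in a coarsely described situation and later picks the one with the best remembered result, instead of always acting the same way.

from collections import defaultdict

# Toy "outcome memory" for a design tool (purely illustrative).
# Key = a coarse description of the situation,
# value = the results seen so far for each strategy tried there.

class StrategyMemory:
    def __init__(self):
        self.results = defaultdict(lambda: defaultdict(list))

    def record(self, situation, strategy, outcome):
        self.results[situation][strategy].append(outcome)

    def best_strategy(self, situation, default="strategy_x1"):
        tried = self.results.get(situation)
        if not tried:
            return default   # nothing remembered yet: act like today's tools
        return max(tried, key=lambda s: sum(tried[s]) / len(tried[s]))

memory = StrategyMemory()

# "If the circuit resembles THIS and THIS happened, do <strategy x1>."
situation = ("dense_region", "timing_violation")
memory.record(situation, "strategy_x1", outcome=0.9)
memory.record(situation, "strategy_x2", outcome=0.4)

print(memory.best_strategy(situation))   # -> strategy_x1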
The reason why we’re nowhere close to the kind of AI that makes decisions on its own, duplicates the efforts of entire humans, etc., is visible in that example I just linked: you can count the number of connections. There seem to be fewer than 100 when the thing finishes and is able to solve the level, having reached an equilibrium. (Once the simplest network exists that will solve that Mario level and get the maximum score, it can no longer get any more advanced - you have extracted all the information you can from the input data set. It won’t start plotting to take over the planet, not at this level.)
The brain you are using to parse this uses something like 86 trillion connections. So it’s like comparing a model rocket to a Saturn V. But rough estimates show we have enough materials to build a “Saturn V” in 2015…
Also, the network linked in the video develops itself from nothing. In the brain you are using, there are clearly hundreds - if not thousands - of distinct neural structures that form in every “normal” human being. The neurons start out interconnected into specific patterns and will develop into specific systems. The Mario AI uses just a single number as its score; you use your exact nutritional state, countless environmental factors, feedback from all kinds of sensors, and so on and so forth.
A neural network is very similar to Spice - it is in fact just a class of Spice simulation, where you are simulating the voltage at 86 trillion nodes. Some of the nodes are switches, like the various transistor gates in a Spice simulation. Some of the nodes can adjust their own gate thresholds and other electrical parameters according to various rules - comparable to a Spice simulation that also simulates a connected digital circuit which can change which resistors are in the circuit in response to logic.
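Here’s a minimal Python sketch of that analogy. The node count, the leak factor, the thresholds, and the crude “plasticity” rule are all assumptions for illustration, not a model of any real brain or chip: each timestep you update a “voltage” at every node from its neighbors’ spikes, and the nodes are allowed to drift their own thresholds, which is the part an ordinary circuit simulation doesn’t do.

import random

# Spice-flavoured toy network: every node holds a "voltage", integrates
# weighted spikes from the nodes it listens to, and fires (switches) when
# it crosses its own threshold.  All constants are illustrative.

N = 100
voltages   = [0.0] * N
thresholds = [1.0] * N
# Sparse random "wiring": each node listens to a handful of other nodes.
inputs  = [[random.randrange(N) for _ in range(5)] for _ in range(N)]
weights = [[random.uniform(0.0, 0.5) for _ in range(5)] for _ in range(N)]

for step in range(1000):
    fired = [v >= t for v, t in zip(voltages, thresholds)]
    new_voltages = []
    for i in range(N):
        # Leaky integration plus a little random external drive so the
        # toy network actually does something.
        v = 0.9 * voltages[i] + random.uniform(0.0, 0.2)
        for j, w in zip(inputs[i], weights[i]):
            if fired[j]:
                v += w
        if fired[i]:
            v = 0.0                # reset after firing
            thresholds[i] *= 1.01  # crude self-adjustment ("plasticity")
        new_voltages.append(v)
    voltages = new_voltages

print(sum(fired), "nodes fired on the final step")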
My “no shit” was that it isn’t commercially viable to make a chip that only runs Spice sims, and there’s inherent uncertainty - each run of the sim would give different results, and the actual charge levels in each node would fluctuate because the various parts of the chip running the sim aren’t coordinated. You couldn’t use such a sim for circuit validation, but it appears to run fine for developing intelligent behavior, like the system that is reading this message…
Well, IBM just blew past your predictions. Sure, you can move the goalposts (their solution may not be accurate enough, but it’s roughly at the scale of a “rat brain”, which is an enormously more complex system. Also, apparently, Gmail now quietly uses neural-net learning in its spam filters, which is why it has suddenly gotten a huge amount better).
Maybe they wouldn’t call them intelligent, but they should. My point all along is that we have a skewed notion of what we mean by “intelligent”. People will say in one breath that what they mean by “intelligent” is “able to do what a human does”, and then in the next breath say that they mean “something that would lead to even better computers”. But the first is useless, since we already have as many things as we need that are able to do what humans do, and the second is something that we’ve had for a very long time.
What I’m trying to get across is that we should just be thinking in terms of “able to perform cognitive tasks”, and that any machine that performs any cognitive tasks better, in some way, than a human is a success. We shouldn’t try to match humans; we should try to surpass them, in as many ways as possible. The measure of computer progress is in the number of ways that computers surpass us, and in the amount by which they surpass us in those ways. Yes, this will mean that there will also be some ways in which the computers are inferior to humans, but that doesn’t matter, because for any of those tasks, we’ll just use a human anyway.
I’m saying that what we mean when we say “artificial intelligence” is a machine that is capable of self-learning and self-improvement. That software doesn’t make itself better automatically, so it is fairly stupid. A more intelligent version of the software would contain automatic heuristics - based on statistics submitted by users of the software, it would suggest various design ideas, such as letting you know what the most common part used for a particular role was (a toy sketch of that kind of suggestion appears below, after these stages). Google does all this right now.
A more intelligent version still would self-design new software structures to make itself more optimal at doing things, such as the auto-routing functions.
A more intelligent version still would have memories, goals, etc.
So yeah, the version he worked with is intelligent, just not very intelligent, and we can do a crapton better.
So there are many, many, many intermediate stages between what we have now and “hey, I want a circuit that does this”, with the description given in plain English.
And more stages still to “I demand a new robot! It must be able to do <task x> reliably!”. That last bit would be this massive integrated system, with many many many subsystems that do the things described above. That’s a “real” AI.
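As a throwaway illustration of the very first step on that ladder, the “suggest the most common part for a given role” heuristic mentioned above is nothing more than counting over usage statistics. A Python sketch, with the roles, part numbers, and data all made up for the example:

from collections import Counter

# Hypothetical (role, part) usage statistics submitted by users of the
# design tool; every name and value here is invented for the example.
usage = [
    ("bypass_capacitor", "0.1uF_0402"),
    ("bypass_capacitor", "0.1uF_0402"),
    ("bypass_capacitor", "1uF_0603"),
    ("pullup_resistor",  "10k_0402"),
    ("pullup_resistor",  "4.7k_0402"),
    ("pullup_resistor",  "10k_0402"),
]

def suggest_part(role):
    """Suggest the part most commonly used for this role, if any."""
    counts = Counter(part for r, part in usage if r == role)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_part("bypass_capacitor"))   # -> 0.1uF_0402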
One way would be moving away from silicon-based chips to neuron chips, which would make for more true AI and true robots like the ones seen in movies.
Neuron chips that make for more true AI would replace Moore’s law. If they do away with silicon chips and use neuron-based chips, you could have truly intelligent AI by the year 2050!!
Year 2050.
AI: Good morning, BOB.
Person: Good morning.
AI: BOB, would you like your daily stock score?
Person: Yes (or no).
AI: BOB, you should leave for work 30 minutes early; there’s been a car accident and traffic is backing up!! You don’t want to be late for work.
AI: BOB, it is cold outside; you should dress warm. It is xx outside.
After work:
AI: BOB, it is Friday, and you always go to the bar. Are you planning on going tonight? Because if you are, you will be late meeting your friend.
AI: Remember, on Saturday you have a family get-together, so don’t get really drunk.
AI: Based on my readings, you are too drunk to drive. I will not allow you to drive; you must take a cab or I’m calling the police.
AI: BOB, you forgot to take your meds for your high blood pressure today.
AI: BOB, your favorite show, CSI, is on today at 8:00 PM.
AI: BOB, I’m detecting a tornado; get to the basement of your house now.
AI: BOB, you’re playing a video game, but you have an appointment with XY at 7:00 AM.
AI: BOB, I’m detecting heart-attack-like symptoms; go to the ER ASAP.
AI: BOB, downloading free movies and free video games is illegal.
AI: BOB, it is late at night; you will be very tired for work the next day.
AI: BOB, based on your personality and your likes and dislikes, you should go to this website - lots of girls with the same personality and likes and dislikes as you!! It’s called Dating XY.
AI: BOB, would you like me to play a game of chess with you?
Person: AI, what is 10 + 20?
Person: AI, how many miles is it from the Earth to the Moon?
Person: AI, what is the orbit of the Moon like, and how old is the universe since the Big Bang?
What intelligence means is a subject for a thread by itself. But I think it means something broader than doing one task very, very well.
I’m not arguing that these programs are not successful. It’s a billion dollar market and they have enabled the design of the bigger machines we all use, which would have been absolutely impossible without them. The tools have also gotten a lot better and a lot more scalable. But AI? Not hardly.
40 years ago the thought was that if you understood how to do lots of cognitive tasks that humans do, you’d either get insight into what intelligence was or else get an intelligent machine by stitching them all together. Neither was correct.
As far as cognitive tasks go, I’ve done tic-tac-toe playing machines in both software and hardware. They used optimal strategies, and could beat 6-year-old kids. But they sure as hell weren’t as intelligent as even a dull kid.
Looks to me they’ve implemented hardware neural networks, not even a new concept. Standard data analysis tools (many open source) have neural networks as an option. Another excellent heuristic for the right application, but you can build one as big as a planet and still not have AI.
Don’t believe everything you read in magazines.
We’re still trying to say whether something is or is not AI without actually defining what AI is. Or rather, we are defining it, but we keep on moving the goalposts. In practice, AI seems to always be defined as “what we don’t have”. I guarantee you that if you took a modern smartphone (and all of its associated infrastructure) back to 1970, and showed it to an AI researcher of the day, he’d have agreed that it was a real AI. You can ask it natural-language questions (even extremely difficult questions) and get natural-language answers, it can find you near-optimum routes through cities even considering traffic, it can play chess with you. But now that we have smartphones, we define AI to mean something else.