Is Moore's law starting to come to an end now?

Progress in AI has been self-accelerating for 60 or 70 years now. You’re speculating about the future when you should be simply remembering the past.

This may be true for PC chips but totally wrong for the industry in general. The explosion of mobile devices has increased the pace of production. Last year Intel sold the most chips ever, and they have a tiny market share in mobile CPUs (a market dominated by ARM).

One thing that has changed in the last ten years is the priority of designs. It used to be that CPUs were largely measured by raw performance. Now, however, performance per watt and performance per mm² are more important.

I can think of one: the human brain. You could build an ASIC-based emulator where, for every neuron firing, about 10-100 clocks occur in the emulation hardware. (That is, between neuron firings, all the separate finite state machines in the emulated brain update the state of each neuron in response to the last set of stimuli.)

So, if you can “only” run at 1 GHz, you’d have a simulated mind running at the equivalent of up to 100 MHz.

That’s pathetic, you say. I had a computer in the 90s that was quicker than that.

100 MHz is about 100,000 times faster than the existing human brain. If you had the same relative efficiency as the human brain (it would realistically be higher - even if you started with a human-like architecture, you would upgrade it), you’d be almost unfathomably quicker and smarter.

1 day for us meatbags, where we can do maybe 16 hours of mental labor before our work quality plummets precipitously, would be about 273 years for the digital replacement. Yeah, I think I’d have trouble competing in the job market with something like that… or even in the staying-alive market…
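To spell out the arithmetic, here is a rough sketch in C. The ~1 kHz figure for the effective “clock rate” of the meat brain is the assumption the whole 100,000x factor hangs on:

    #include <stdio.h>

    int main(void) {
        /* Rough figures, not measurements:
           - effective "clock rate" of the biological brain: ~1 kHz (assumed)
           - emulator hardware: 1 GHz clock, ~10 clocks per simulated firing */
        double brain_hz = 1e3;
        double emu_hz   = 1e9 / 10.0;           /* = 100 MHz equivalent */
        double speedup  = emu_hz / brain_hz;    /* ~100,000x            */

        double subjective_years = speedup * 1.0 / 365.25;  /* 1 wall-clock day */

        printf("speedup: %.0fx\n", speedup);
        printf("one day outside feels like %.1f years inside\n", subjective_years);
        return 0;
    }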

Most people don’t want to spend more than $500 on a computer, so if by high end you mean $1,000 to $1,600, then yes. But in the past we had a trickle-down system: in 5 years, a high-end system became an average system.

That’s what I was getting at. A $1,600 computer, high end for only a small number of people, would be almost obsolete in 5 years.

I got a computer in 2010 for $600; if I go to a computer store now and look at computers for $600, they have the same specs.

From what I read, most games today hardly use the CPU, while most multimedia and video editing makes no use of the video card at all.

That is, if you’re into games, look for a good video card, and if you’re into multimedia and video editing, look for a good CPU.

I also read that very few games use more than 4 cores of a CPU. That is supposed to change when DirectX 12 comes out soon.

IMHO, if you want to play games, get a game box.
If you want to do work, get a computer.

Game consoles are shit. They use graphics chips that were obsolete when they were released and are utter garbage by the end of their lifetime. The top console games often don’t even run at 1080p; 720p to 900p is more common. And mostly 30 fps instead of 60.

The general consumer isn’t very discerning and so these factors may not be too relevant. But for anyone that cares, even a modest PC is a vast improvement in quality. And as it happens, an excellent gaming PC is also an excellent work PC.

Self-accelerating? I took AI in 1971. Most of the cute apps they talked about exist today. Progress in real AI - as in a thinking machine - has been minimal to non-existent.

What AI existed 70 years ago? Turing and Shannon were writing around 1950. I don’t think you can call Dr. Asimov an AI pioneer.

What we will get in ten years or sooner are real personal digital assistants with a sense of time and location and a model of your habits - who can tell you what the traffic will be like on the way home without you asking. What we won’t get is a truly intelligent one.

Not nearly as easy as you think. If we could use these techniques to build a circuit simulation that ran at even 10% of the speed of the real system, processor designers would stand up and cheer.
The real problem is that the neurons are heavily interconnected, and the data transfer between neurons kills you. It’s bad enough on chip but when you cross chip boundaries it is even worse.
Interconnect overhead has slowed down multiprocessor systems from the beginning.

Yeah, well, the 10% version you can think of as the barely achievable bleeding-edge version. It would mean single-clock multipliers, lots of parallel logic, an unimaginable number of short interconnects, probably with light paths crossing the whole chip (the real brain is mostly white matter, remember?), and so on. Also, unlike chip emulation, you do not need it to be clock-synchronous - the brain doesn’t have a central clock - so each piece of this thing would probably self-clock with internal crystals or RC clocks, and they would not be synchronized.

I say barely achievable because the real brain does things like update its individual neuron states based on timing correlations and neurotransmitters from other neurons. So those (on average) 10 clocks in between outputs are when it’s using single-clock MACs to update its internal state based on the timing data (the equation is dt * coefficient, probably several coefficients), charge leakage, and so forth. (It’s a digital system, but it emulates the leakage of charge from the system it’s modeling.)
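For concreteness, here is the kind of per-clock update I mean, written as C instead of logic gates. The struct fields and coefficients are placeholders I made up; the real values are exactly what the measurement program described further down would have to produce:

    /* One emulated synapse's per-clock state update: a leaky integrator
       driven by incoming pulses. All names and coefficients are
       placeholders for illustration, not measured values. */
    typedef struct {
        double charge;        /* accumulated "membrane" charge        */
        double leak_coeff;    /* fraction of charge lost per clock    */
        double input_coeff;   /* weight applied to an incoming pulse  */
    } synapse_state;

    /* dt = time since the incoming pulse fired; pulse = 1.0 if a pulse
       arrived this clock, 0.0 otherwise. One multiply-accumulate each. */
    void update_synapse(synapse_state *s, double dt, double pulse)
    {
        s->charge -= s->charge * s->leak_coeff;      /* emulated charge leakage   */
        s->charge += pulse * dt * s->input_coeff;    /* the dt * coefficient term */
    }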

Each sub-neuron (basically a piece of this 3D chip equivalent to a synapse) has all of the memory it is going to operate from physically present as part of itself. One of the reasons real systems are slow is that their data sits across a bus.

And before that happens, we have to figure out how to simulate a single neuron. I’m not particularly convinced that we understand neurons completely enough to simulate them. Subtleties in the construction of neurons–such as degeneration of the myelin sheath, buildup of amyloid plaques, or tau protein abnormalities–can cause wide-ranging and poorly understood neurological disorders. How can we possibly simulate even one neuron if we don’t even know what causes them to misbehave?

I’d like to see a faithful simulation of a Caenorhabditis elegans before really concluding that hyper-speed human brain simulations are just on the horizon…

Thinking about it, that’s actually where your statement breaks down. In this “brain emulation” architecture, you have a state machine physically dedicated to each and every synapse in the real brain. Each state machine has local memory exclusively for itself, and all it does all day is cycle through a series of logic states where it updates itself and processes inputs, with a cycle delay to let other inputs arrive. The part that I don’t know (and nobody does) is exactly what all the rules would be, and what coefficients you would use, but I can say with confidence that a set of rules that would work can be discovered*

  • The reason I can say this with confidence is that the way the brain implements its rules is very expensive: a new rule would require a whole additional library of proteins to implement, and those have to be developed one random-walk improvement at a time over millions of years. So the real rule set is small and simple, and current neuroscience research indicates this.

That is, you could fit into a reasonable number of transistors a set of rules for updating the synapse-emulating state machines, and there would be roughly 300 different sets of rules (there seem to be about that many kinds of synapse in the real meatware brain). Once you build a neural emulation chip, you would then set programmable busses to mimic the actual connectivity graph of a real brain, and set each and every state machine with an initial memory state ripped from a real brain.

Let me be a little more specific about what I mean by “rules”.

A rule could be:

  1. For all time, accumulate inputs into variable I.
  2. If I > threshold, send an output pulse.
  3. If dt_output_pulse - dt_neighbor_output_pulse > time_factor, threshold -= (delta_dt * coefficient)
  4. Else, threshold += (delta_dt * coefficient)

That’s basically back-propagation and a simple integrate-and-fire model. The real rule set probably also contains extra terms like
  5. delta_charge += coefficient * (modulator_concentration_1) …
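Written out as code, rules 1-4 look roughly like this. The names and the single adaptation coefficient are placeholders I invented, and the reset after firing is my own addition, not part of the rules above:

    #include <stdbool.h>

    /* One synapse-emulating state machine running rules 1-4. */
    typedef struct {
        double I;            /* rule 1: accumulated input               */
        double threshold;    /* rule 2: firing threshold                */
        double time_factor;  /* rules 3/4: timing window for adaptation */
        double coefficient;  /* rules 3/4: adaptation rate              */
    } synapse_rules;

    /* Returns true if an output pulse is sent this cycle. */
    bool step(synapse_rules *s, double input,
              double dt_output_pulse, double dt_neighbor_output_pulse)
    {
        s->I += input;                              /* rule 1 */
        bool fire = (s->I > s->threshold);          /* rule 2 */

        double delta_dt = dt_output_pulse - dt_neighbor_output_pulse;
        if (delta_dt > s->time_factor)              /* rule 3 */
            s->threshold -= delta_dt * s->coefficient;
        else                                        /* rule 4 */
            s->threshold += delta_dt * s->coefficient;

        if (fire)
            s->I = 0.0;    /* reset after firing - my addition, not in the rules */
        return fire;
    }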

So an ASIC designer takes that rule set and creates the 300 different state-machine variants. Well, he actually writes a series of rules that let all the variants be auto-designed and auto-laid-out into a 3D chip…

Those “coefficients” are things you would have to determine using a standardized set of techniques. All the existing literature on brains is no good for this, because it’s different experiments done in different labs. You would find the coefficients by using the exact same technique in what is basically a massive science-industrial plant: a warehouse-sized room full of robotic equipment to eliminate human error, about the scale of the centrifuge hall used for the Manhattan Project, where the robots test each and every synapse combination over and over and over under a series of condition-variable permutations.

We only recently developed commercially available 3D chips, so they aren’t cubes in shape yet, but we could get enough chip area by building the hardware emulator as a cluster of separate chips in a gigantic building full of hardware. Today’s silicon density could do it, by my rough figuring; it just would cost a lot.

Finding out the coefficients using robots is obviously a massive engineering project, similar in scope to the one that developed the genome sequencing robots. You would have another set of automated tools that would slice and scan preserved samples from human brains in order to steal the connectivity graph from nature.

Yes, this chip would use some programmable buses (some connectivity might be fixed). This means there would be some spare synapse-emulating state machines buried throughout the chip, and there would be a way for new connections to form and old ones to be pruned, so the intelligent entity this chip creates can learn over time.

Oh, one final note. It might be important for the survival of the human race. If you design a high-speed version of such a chip, the programming bus - the one that lets you externally set the initial state of each and every node in the system - needs to be festooned with fuses that can be blown on command. Once you set up the initial brain to be emulated, blow the programming bus. Very important. This prevents the emulated brain from self-updating in an alien way, or from being hacked by another brain and overwritten. It is only permitted to update by internal rule sets that can no longer be changed, not by external self-surgery. A real good idea, this…

Anyways, if you missed it in my wall of text above, a real computer chip ends up running code that says things like

if (sample[32434] > threshold) { /* do something */ }
else { /* do something else */ }

Is sample[32434] in the local cache right now? It might be, but maybe this code isn’t going in order, and the last memory element touched was sample[34]. (Maybe the code is implementing an octree in a flat array; that creates those kinds of jumps.)

So you have to wait, and wait, and wait, for the memory system to bring you sample[32434]. That takes a lot of clocks.

You also don’t know the result of branches.

With the architecture I described above, there are branches, but they are limited, and you can design the chip to just calculate both results of a branch every time you consider it. That is, every time in my example above that you evaluate the boolean “dt_output_pulse - dt_neighbor_output_pulse > time_factor”, you automatically compute (delta_dt * coefficient) and take its 2’s complement using dedicated circuits. The result of the boolean then determines whether you add the positive or the negative version of the calculated value.

Your chip contains parallel ALUs so it can do this, and it only needs 1-2 clocks to do that whole section of code. The chip isn’t programmable - it emulates a specific kind of synapse and is optimized exclusively for that. And there’s never any data bus: all the state variables the chip needs to do its job are stored as part of itself, the equivalent of keeping them in “registers” in existing processor design, and it never needs additional memory (it has precisely as much as it needs for any possible logic state). It communicates with other pieces of the chip by sending impulses encoded with the destination address, with cut-through routing at the switches.
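In ordinary C, that “compute both, take the 2’s complement, let the boolean pick one” trick looks roughly like this. It’s only a sketch - the real chip would do it with dedicated adders and a mux, and the fixed-point names are the same placeholders as before:

    #include <stdint.h>

    /* Branch-free threshold update: the adjustment and its two's
       complement are both computed every time; the comparison only
       selects which one gets added. Placeholder names throughout. */
    int32_t update_threshold(int32_t threshold, int32_t delta_dt,
                             int32_t time_factor, int32_t coefficient)
    {
        int32_t adjust = delta_dt * coefficient;   /* computed unconditionally   */
        int32_t neg    = -adjust;                  /* two's complement, likewise */
        int32_t mask   = -(int32_t)(delta_dt > time_factor);  /* all 1s or all 0s */
        int32_t chosen = (neg & mask) | (adjust & ~mask);     /* select, no branch */
        return threshold + chosen;
    }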

Dual ALUs, you say! How many synapses are in the brain? There are thought to be around 1,000 per neuron on average, and around 86 billion neurons. This thing is immense. It would produce enough heat to melt itself if all that circuitry were active at once. Conveniently, state machines in the chip that haven’t been stimulated would quiet themselves down to an idle state (in the real brain, most of your tissue is just idling, waiting for an incoming action potential). And you’d need some amazing cooling technology for the high-end version of this, since you want what is basically a cube, the smaller the better, because the smaller it is, the less time it takes for a message at the speed of light to go from one end to the other. (You would use hollow optical paths for the long-distance interconnects, the equivalent of the big thick wires in the corpus callosum in the real brain; in hollow optical fiber, the leading edge of the message travels at light speed.)
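Just to put a number on “immense” (the transistors-per-state-machine figure below is a pure guess to show the scale, not an estimate from any real design):

    #include <stdio.h>

    int main(void) {
        double neurons             = 86e9;    /* ~86 billion neurons        */
        double synapses_per_neuron = 1000.0;  /* ~1000 synapses per neuron  */
        double synapses            = neurons * synapses_per_neuron;  /* 8.6e13 */

        double transistors_per_sm  = 1000.0;  /* placeholder guess per state machine */
        printf("synapses:    %.2e\n", synapses);
        printf("transistors: %.2e (at the guessed density)\n",
               synapses * transistors_per_sm);
        return 0;
    }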

I realize you’re not cheerleading for this invention, but …

That sounds far worse than useless. The last thing I need is some damn computer annoying me with info I don’t want because it’s forever misunderstanding the vagaries of my life and assuming I want this today at 9am because I asked for it at 9 am yesterday or this day last week.

It’d be like being married to a retarded (or insane) yenta. No thank you.

Admittedly, compared to most working folks I lead a life where no two days are much alike. So the one environmental factor that can’t provide good input about my current life state is the clock/calendar. Unless I pre-key in my intended schedule - eat now, drive now, sleep now, etc. - and keep the damn thing up to date as it changes.

But even when I had an office job, the only reliable info you could discern from following me around was that working was less likely between midnight and 5 am than at other times of day, and slightly less likely on weekends than weekdays. Which is hardly a deep and actionable insight.

By high end I mean Top-500 supercomputer systems.

Money spent here has gone up enormously over the last couple of decades - it used to be that $5M got you into the top 10; now you need more like $100M. This is a real measure of success - the systems have so much better bang for the dollar that really important tasks are now within grasp, and thus they are no longer just academic curiosity problems or a very restricted set of viable and cost-effective commercial ones. But it is still a limited arena. It has driven work on interconnects, but everything else is commodity.

Francis Vaugha, I was not talking about supercomputers and ultra-business-class machines that the CIA, the NSA, professional Hollywood video editors, Disneyland, and so on can spend on - computers that cost $5,000 to $10,000, with 20 of them in a room, all 18-core and running at 7GHz, editing a movie scene overnight.

What I was talking about is the trickle-down system for high-end semi-professional or gamer use: a 2015 high-end Intel Core i7 8-core CPU costing $500, a video card at $700, a 4TB hard drive at $400, 16GB of RAM, with the whole computer costing $1,600 to $2,000!!

In 5 years those trickle-down specs become almost obsolete, and this high-end computer is no longer a high-end computer. By that time the Intel Core i7 8-core CPU that cost $500 costs $100, and there is an Intel Core i9, a 12-core CPU that costs $500; the video card that cost $700 costs $100, because there is a better video card; the 4TB hard drive that cost $400 now costs $80, and :eek::eek: there is a 10TB hard drive that costs $400, and so on.

Most people who play average games, check e-mail, go on the internet, check Facebook, do some MS Office work like Word and spreadsheets, and so on can get by with a $500 computer!!! If you are a gamer or big on multimedia and video editing, you are going to need a $1,000 computer or more.

The only way the average user gets a more powerful computer, whether they need it or not, is through that trickle-down system.

If the average user does not want to spend the money on a $1,000 computer or a $1,600 computer, they will have to wait. This is how it was from the 90’s to 2010.

The more you want to pay, the better the computer you get. If you do not want to pay much and are a bit cheap, you have to wait.

Computers were improving so fast that hardware was changing faster than software, so there was a fast trickle-down system.

Every year hard drives were getting bigger, every year RAM was going up, CPUs were getting faster, and the same with video cards.

In 2002 I got a computer with a 70GB hard drive!!! Yes, a 70GB hard drive!! In 2010 I got a computer with a 1TB hard drive!!! I remember RAM of 512MB in 2006!!!

Game consoles cannot play games at 4K, but I have seen game hobbyists build 4K systems.

I have seen many game hobbyists run Grand Theft Auto on high settings or mod Skyrim to run at very high resolution.

You can’t do this with game consoles.

The new game consoles have good graphics, but playing these games on a PC gives way better graphics.

But most game hobbyists don’t buy a computer; they get the good parts and build it themselves. They don’t go and buy a Gateway, HP, or Dell computer.

If we could simulate good behavior first, then we could move on to faulty behavior. Trust me, I know that there are many more ways for something to fail than to work.

BTW, I loved the Worm Runners Digest collection a long time back …

Might not work for you, but look how many people bought the Apple Watch. A columnist for the Mercury News gave up on it because it didn’t do anything really well, including telling time.
But I think my system would get bought by a lot of people, and used by at least a few of them. Hell, having it go off and say “you have a meeting in five minutes” when I’m in another meeting would be worth the price.
Especially if I could fake the second meeting.