Is Moore's law starting to come to an end now?

Computers will continue to get faster and SSDs and RAM will continue to get bigger. But I don’t believe it will continue according to Moore’s law anymore, or at least we might fall off Moore’s law for 10 years or so until some major breakthrough happens.

Read the Ars Technica article I linked above; Intel is way behind schedule on its most recent CPU generations.

The problem is that increasing speeds increases power consumption, and tiny transistors are leaky, somewhat intentionally, since leakier transistors also switch faster. Power consumption has gone up exponentially. ASICs 20 years ago drew milliamps of current; processors today draw exponentially more than that. Enough so that supplying power during wafer probe is a real challenge, and there are some tests we can’t apply because of it. They get applied at package test.

3D designs are about stacking chips on top of one another. Most of the layers of a processor are not transistor layers but the large number of routing layers needed to send all these signals around the chip, and the routing layers would probably grow faster than linearly with the number of transistor layers, so forget it.
Though power is a problem now, it would be worse if we didn’t spend a lot of effort turning off parts of the chip that aren’t being used at a given time. This is even more important in chips that go into mobile devices, which run on batteries and probably shouldn’t burn a hole in your shirt, but desktop processors use these methods too.

Multiple-core systems are important because you can improve throughput without increasing power too much, and because it is easier to design eight 15-million-gate cores than one 120-million-gate core. We’ve also run out of clever architectural improvements, besides making caches bigger, of course, which is an easy way to use up the extra transistors.
We’re also bumping against the physical limits of how big a chip can be.

A more important reason for things slowing down is economics. Nanometer fabs are damned expensive, and only a few companies can afford them - Intel, TSMC, Global Foundries. With less competition the pressure to move to the next node is not as great, and the companies are more able to actually make some money before having to cut prices.

Moore’s Law is not only about silicon. I think it was Gordon Bell who projected it back in time. I’d not be surprised if the next way of making processors keeps it going post-silicon.

But is the problem that 500 GB, 800 GB, 1 TB and 2 TB SSDs are too costly, and that is why they don’t offer them? Or, when 2 TB, 4 TB and 10 TB SSDs come out, will 500 GB, 800 GB and 1 TB become standard for computers as prices go down?

As for hard drives or SSDs, you should always back up to two places, because hard drives and SSDs can break down. I have a 1 TB external hard drive that is almost full, and it is backed up to another 1 TB external hard drive.

I put nothing on my computer itself; I have one 1 TB external hard drive I use as main storage and another 1 TB external hard drive as backup. So I have two copies.

Hard drives are on the way out, and SSDs are the future.

It seems Apple, netbooks and some laptops are the only ones pushing SSDs, but mostly at 128 GB and 256 GB.

Maybe for desktops and other laptops they feel people need more storage and SSDs are too costly, so they are going with 1 TB hard drives. When the price comes down for 500 GB, 800 GB and 1 TB SSDs, they will start putting them in. Other than that, I have no idea why the technology seems to be stagnating when you go to the computer store to buy a new computer.

The other thing is that most computers you buy have onboard video. :( Then when you get a computer in the price range of $700 to $1,000, they put in a crappy video card, a $50 or $100 one.

And when you get a computer costing $1,600, they put in a $150 or $200 video card.

There’s also a question of what the end goal actually is. What’s the “top of the tech tree” for computers? That is, what do we want them to actually do for us, if we could have a computer do anything a computer could compute?

It seems like the obvious answer is we want:

  1. A computer that can do everything a human brain can do, either working for free, or emulating the brains of humans so that the wealthy elite can live on past biological death.
  2. A computer that can crack certain digital locks so we can steal people’s secrets until they all go to one time pad.
  3. A computer that can emulate reality well enough to create a photorealistic, interactive world that has enough visual and sound and even touch details that humans cannot distinguish it from reality.

Fortunately for us, #1-3 don’t actually need computer chips that run in series faster than the 4 GHz they run at today, nor do we need to pack all the circuits into one crazy high cube that melts from internal heat.

  1. To do #1, we would build a chip that physically resembles the brain. It would probably just be a big cube, but the idea is that inside are the processing circuits and the memory and interconnects, all next to each other. Like the human brain, at any given instant in time only about 10-20% of the processing circuits are active; the others are in a lower-power state, waiting to be stimulated by incoming signals. Also, most of the circuits are memory, not processing, so we only have to extract maybe 1% of the heat compared to if the whole cube were CPU circuitry. The cube would be interlaced with cooling channels carrying water or liquid helium or something.

  2. To break digital locks, we need a relatively small chip made with circuitry that does very exotic things at extremely low temperatures, keeping qubits isolated, using one of about a half dozen possible methods. The limiting factor here is that there’s only so much money to drive development, and no one has even an idea of how to make a Shor’s-algorithm quantum computer with enough qubits to steal everyone’s secrets; there’s a nonlinear relationship between the number of entangled qubits and how difficult it is to do.

  3. A computer powering a holodeck is an “embarrassingly parallel” task. The “game engine” could run on a cluster of computers, each one responsible for rendering the visual, sound, or physical touch details of a single piece of a realistic world. Modern games aren’t written this way, because it’s a nightmare to program, but the problem doesn’t have to be solved in series. (Example: you are standing in a brick-walled alleyway in virtual reality. The task of rendering each brick is handled by a different computer chip in a massive cluster. Information shared in common by all the chips is broadcast out on a common bus, and each chip separately contributes the result of some light rays to a single data array that represents the visual field of the human player. A single chip averages all these incoming light rays, calculated on thousands of separate chips, to create a unified picture.)

Touch would be done the same way; human body sensations are divided among separate dermatome regions. Ditto with sound: what the player hears would be a sound signal calculated for each ear by overlapping the waveforms of every sound present in the world interacting with the environment.
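
Here is a minimal sketch of that tile-per-worker idea for the visual part. It only illustrates the “embarrassingly parallel” structure; `render_tile()`, the frame dimensions, and the fake pixel math are all hypothetical placeholders, not a real renderer:

```python
# Toy "holodeck" renderer: each worker independently renders one tile of the
# player's visual field, and a single combiner stitches the results together.
# render_tile() and the scene math below are hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT, TILE = 1920, 1080, 120  # illustrative frame and tile sizes

def render_tile(tile_origin):
    """Pretend ray-tracer for one tile; returns (origin, block of pixel rows)."""
    x0, y0 = tile_origin
    pixels = [[(x0 + x + y0 + y) % 256 for x in range(TILE)] for y in range(TILE)]
    return tile_origin, pixels

def render_frame():
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    # Each tile is completely independent, so workers never need to talk to
    # each other, which is the defining property of an embarrassingly parallel job.
    with ProcessPoolExecutor() as pool:
        for (x0, y0), pixels in pool.map(render_tile, tiles):
            for dy, row in enumerate(pixels):
                frame[y0 + dy][x0:x0 + TILE] = row
    return frame

if __name__ == "__main__":
    print(len(render_frame()), "rows rendered")
```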

The biggest problem in the way of building the “holodeck” is the part that interfaces with a human. We can render photorealistic environments and realistic sound and probably generate the signals experienced by human nerves experiencing touch. (It just takes expanded versions of known techniques and/or some really low-latency programming and design for a rendering cluster.) But you have to be able to trick a given human into experiencing these things; the only way I know of that would reasonably work would be a surgically installed implant at the sensory homunculus at the top of the brain. And that sounds like a bit much to expect people to submit to, although VR porn does sound pretty sweet…

That is what happened around 2004: they hit a wall at 3 GHz. If they made a CPU run faster than 3 GHz, it started to get very hot. So they started to look into things like multi-core processors, with 2, 4 and 6 cores, and multiple threads.

There are 6-core, 8-core, 10-core and 18-core processors :) but they are too costly for the average person unless you have a lot of money.

Intel and AMD can’t find ways to lower the cost. So from 2005 to 2015 the average person is stuck with 2 or 4 cores. :(

You have been able to purchase 8+ core AMD chips for the same price as Intel quad cores for years and years now. They are not a wise purchase because the Intel quad cores are faster, both at single threaded and multithreaded tasks with “just” four cores.

Virtually all software presently being sold or in production does not efficiently use more than 2-3 cores. It becomes more difficult the more cores you want to scale to because of a principle called Amdahl’s law.
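
A minimal sketch of what Amdahl’s law says in numbers; the 80% parallel fraction below is purely an illustrative assumption:

```python
# Amdahl's law: even a mostly parallel program sees sharply diminishing
# returns as cores are added, because the serial part never speeds up.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup when `parallel_fraction` of the work scales
    perfectly across `cores` and the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for cores in (2, 4, 8, 18):
    print(cores, "cores ->", round(amdahl_speedup(0.80, cores), 2), "x")
# 2 -> 1.67x, 4 -> 2.5x, 8 -> 3.33x, 18 -> 4.09x
# With 80% parallel code, even 18 cores never reach a 5x speedup.
```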

You also have kind of a dichotomy. Certain tasks need massive computing power and scale to parallel processes very easily. Other tasks are too difficult to parallelize or not worth the trouble. So on PCs, most developers who have hugely parallel tasks write a GPU version of their code. Instead of trying to use an 8+ core CPU (they do exist), why not use a GPU instead? Teraflops of computing power, 10 or more times what the CPU can offer.
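
A back-of-the-envelope version of that CPU-vs-GPU comparison; the unit counts, clocks and FLOPs-per-cycle figures are rough illustrative assumptions, not the specs of any particular product:

```python
# Peak throughput estimate: execution units * clock * FLOPs per cycle.
# All figures below are illustrative assumptions.

def peak_gflops(units: int, clock_ghz: float, flops_per_cycle: int) -> float:
    return units * clock_ghz * flops_per_cycle

cpu = peak_gflops(units=4, clock_ghz=3.5, flops_per_cycle=16)    # quad core with wide SIMD
gpu = peak_gflops(units=2048, clock_ghz=1.0, flops_per_cycle=2)  # mid-range GPU shader cores

print(f"CPU ~{cpu:.0f} GFLOPS, GPU ~{gpu:.0f} GFLOPS, ratio ~{gpu / cpu:.0f}x")
# Roughly 0.2 TFLOPS vs 4 TFLOPS: the "10 or more times" gap mentioned above.
```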

Also, most desktop users are doing just a handful of tasks at the same time as there is just one human behind the screen. TLDR, it’s more cost effective when speccing out a decent desktop PC to use a quad core (or less…) as the main CPU. The benefit of a 6+ core chip is so marginal as to not be worth it, even if the price premium is small. (Intel will sell you one for around $350-$500, only slightly more money than their fastest quad core)

If you want one, go right ahead and buy one. There’s an 8 core Haswell-E. Oooh that’s spendy. Or you can grab the 6-core variant. Just 389 bucks, and the nicest quad core is 339 bucks.

But you’d be better served buying the nicest quad core because it’s slightly faster at individual CPU threads, and that matters far more. You’d also be better served by buying more RAM (gotta keep those chrome tabs fed!) and a faster SSD and a bigger graphics card.

Basic stuff like:

4K and 8K gaming requires faster computers.
Better weather forecasting requires faster computers.
Video editing of 4K and 8K footage requires very fast computers.
Other scientific applications.

Advanced stuff like:

AI (artificial intelligence) and robots.

Computers and robots today have the intelligence of a cockroach. The supercomputer that beat Jeopardy! was Watson.

To have a computer that can do the calculations of a human brain, for the true AI and robots you see in movies, would require very fast computers that Watson does not even come close to.

You would probably need a billion Watson supercomputers or more.

It’s the same problem for the Star Trek holodeck: our computers are too crude.

So you are saying there is really no point in getting 6, 8, 10 or 18 cores, because games and Windows are not coded properly to use them?

Would 6, 8 or 10 cores be better if one is into video editing and multimedia?

But for games and other software for the average home user, the code is not written properly to use 6, 8, 10 or 18 cores?

I’m saying that more than 4 cores is not useful for basically anyone on a desktop at all, even for video editing or multimedia. There’s a small number of people who benefit, and nobody else needs one.

The trouble with the very high end use cases for computers is that there are either not many customers, or there really isn’t any clear demand. This limits the available money.

AI capabilities are cute science fiction, but whilst people are affordable, there is scant reason to plough massive funds into AI systems apart from the pure fun of it. Reaching the technology of a Culture type of society is more than just sci-fi. Maybe, some millennium.

Weather and climate forecasting are the current big-time buyers, but there are only about five weather high-performance compute systems in the world. One might argue that that is four too many; there is only one Earth’s weather to model. Weather and climate modelling is amenable to serious parallelisation, which is why there is such useful progress. But there are limits to what can ultimately be achieved.

The Grand Challenge high performance tasks have not changed a lot from a couple of decades ago, and they are being chipped away at. But if you compare the money going into them with the money needed to sustain the current advances in compute technology, it is pitiful.

Computers became mass consumption products, and the money needed to fund their advance came from that. Home PCs and basic business PCs running spreadsheets and word processors. That drove the industry, in a self sustaining upgrade cycle. But if the average user, home or business, sees no gain in a faster machine, things can become grim. Computers appear in many other commodity products - especially cars and home entertainment. Maybe self driving cars will drive the next phase. But PCs may be moribund.

That’s patently untrue. I have a 6 core Mac Pro, and in FCP X many video filters and effects can use all 6 cores at 100 percent utilisation and scale linearly with number of cores. 3D rendering (Maya, 3D Studio Max, Cinema 4D) is another easily parallelised task that scales very well with number of cores.

FCP X, After Effects, Premiere Pro, BlackMagic Resolve and Autodesk Smoke are all written to use all cores for certain rendering and encoding / decoding tasks as well as the GPU.

I think you are completely and utterly wrong on the last bit. First, the “Culture” requires numerous things that real-life physics probably does not actually permit no matter what. However, in terms of what is achievable - fusion-drive and antimatter spacecraft, orbiting cities, Matrioshka brains, awakening the whole solar system and converting it all to sentient matter - it isn’t correct to say a millennium.

Progress is nonlinear. Moreover, think about what you need to end this whole game. You need a computer that is as smart and creative as the software, hardware, and math scientists and engineers who designed it. That computer’s effective thinking speed needs to be around 10 to 100 times the speed of those humans for there to be practical and useful gain. The nascent AI uses its skills and knowledge to build version 1.1 of itself at up to 10 times the speed it took the development team to build version 1.0. (There’s only enough of the expensive hardware for one or two entities at once, so if it thinks 100 times faster, it would issue instructions to helper humans to do the first few stages of upgrades.) The new version is even faster still…

This creates an explosion of development, a hard transition edge. If this works out this way, we’ll go from a society much like our own - some autonomous cars, some other bits of tech we don’t have now but pretty much the same, with similar national borders, jet planes, people live in cities, nobody lives on Mars…

To this explosion of unpredictability. I can’t say what will happen next. I can say what could happen, but this would just be speculation. Almost anything we know about now would be possible. Tearing the whole moon into self replicating machinery? Yep. Fusion powered spacecraft that are optimized to physical limits? Probably. Starships, if physics lets you build em? Yep. A real treatment for all human disease and illness and death? If humans are still important enough in the new world order, yeah. Even if the “treatment” just involves growing you a brand new body or copying your brain to a computer, it would fix anything.

I can describe to you a rough idea of how to build such a computing machine using the tools we have right now, today. It’s a massive endeavor, but it requires nothing we don’t either already have in a catalog that can be ordered, or that isn’t straightforward custom engineering work. Historically, there aren’t millennium-long gaps between detailed technical documents and reality. Documents that actually described real Moon rockets were written in the 1920s, not in Jules Verne’s day.

We have already achieved the goal of AI; we just didn’t realize that that was the goal. What people think they want from AI is a machine that can do the job of a person. But that won’t do anything: We already have people to do those jobs. What we really want from an AI is something that can do the cognitive jobs that humans can’t do, like searching through immense masses of information, or doing complicated calculations. What we want from an AI is Siri or Wolfram Alpha… And that’s what we have.

I built my last computer about 5 years ago, and from what I can gather, it’s not so much that individual processors are getting faster, it’s that more processors are being put into a computer. When I rebuilt my last computer, I had a quad-core CPU and one GPU. If I had spent a little more, I could have gotten 2 or 4 GPUs. I know several individuals who run their computers on the GPU rather than the CPU for more processing power. Also, at the time, the average CPU speed was around 2.5 GHz, with many people overclocking them to 3-4 GHz or higher.

IMHO, it’s not that CPUs/GPUs are getting better; they’re just sticking more of them into computers.

Also, I should add that multiple cores require larger power supplies. When quad-core processors came out, you had to estimate the capacity of your power supply to make sure everything would work. At the time, very few super-large power supplies (over 1,200 W) were available, and with increasing numbers of cores, this will need to increase. This could create a bottleneck, as standard 120 V outlets in the US just wouldn’t be able to handle it (you would need an industrial-strength outlet, like those for a washing machine or air conditioner).
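
A rough sketch of that power-supply estimate; every wattage figure below is an illustrative assumption rather than a measured draw:

```python
# Rough PSU sizing: sum estimated component draws and add headroom.
# All wattage figures are illustrative assumptions, not measurements.

component_watts = {
    "cpu_8_core": 140,       # assumed TDP of a high-core-count CPU
    "gpu": 250,              # assumed single discrete graphics card
    "motherboard_ram": 60,
    "drives_and_fans": 40,
}

total_draw = sum(component_watts.values())
recommended_psu = total_draw * 1.3   # ~30% headroom for spikes and aging

print(f"Estimated draw: {total_draw} W, suggested PSU: {recommended_psu:.0f} W")
# Even a beefy build lands well under 1,200 W on this estimate.
```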

I wasn’t really thinking about Culture-level space travel - just the dual society of humans/humanoids (as well as the Affront etc.) and the AIs all the way up to Minds. I didn’t say in a millennium, I said some millennium. I think we take our current successes as much more than they are in many ways.

Progress might be non-linear, but that isn’t the same as exponential, or even monotonic. It can stall for all sorts of reasons. The idea that progress in AI will somehow become self-accelerating presupposes that there don’t exist inherent theoretical limits on what can be achieved. When HARLIE Was One pointed out one limitation of super AI computers.

We simply don’t know. It is fun to prognosticate, but the evidence of past success does not mean that we can or will accelerate onward to such lofty heights.

One thing that helps a great deal is that energy dissipation is highly non-linear with clock speed. Halving the clock rate results in a stupendous drop in power needs. The reason we don’t go this route is that we don’t really have a good idea of how to use a huge number of processors running slowly versus a much smaller number of processors running fast. But if we took something like modern chip processes, started to build vertically, and dropped the clock rates to, say, 1/4, we may well get something like a 100-or-more-layer CPU that didn’t melt. The trick is to work out how to use it to best effect. Apart from solutions to some very specialised problems, we aren’t very good at that.
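
A quick sketch of why the drop is so stupendous, using the classic CMOS dynamic-power approximation; the capacitance, voltage and frequency values here are illustrative assumptions:

```python
# Dynamic switching power is roughly P = C * V^2 * f, and the supply voltage
# can usually be lowered along with the frequency, so power falls much faster
# than linearly with clock speed. Numbers below are illustrative assumptions.

def dynamic_power(cap_farads: float, volts: float, freq_hz: float) -> float:
    """Classic CMOS switching-power approximation P = C * V^2 * f."""
    return cap_farads * volts**2 * freq_hz

baseline = dynamic_power(1e-9, 1.2, 3e9)        # full speed at nominal voltage
half_clock = dynamic_power(1e-9, 0.9, 1.5e9)    # half speed, lowered voltage

print(f"{half_clock / baseline:.0%} of the original power")
# Roughly 28% here: half the clock for about a quarter of the power, which is
# why many slow cores can beat a few fast ones on power efficiency.
```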

Total BS.
Not only can multimedia software take advantage of all cores, but you forget the case where you are running multiple programs at a time - surfing the Internet while rendering a movie while ripping a DVD. In those cases, each program can utilize some fraction of the available cores.

For heat dissipation, there are new solutions. Intel was developing a chip with internal water cooling, and Peltier systems enable cooling below room temperature. And depending on the chips involved, there are extreme cooling solutions like liquid nitrogen.

When I tested my quad core, it was pretty straightforward: if you reduced the quad core by half, you got essentially a 2-core processor. However, the trick is: a) are you running 2 cores at full speed with 2 disabled, or b) are all 4 cores running at half speed? This is a big difference depending on the application. Quad-core processors see a huge boost when running processes that can utilize all 4 cores, such as graphics rendering. Conversely, if the process can only use one core, you will see no dropoff in scenario A but a large drop in scenario B.
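
A tiny sketch of those two scenarios, assuming perfect scaling purely for illustration:

```python
# Comparing the two "half a quad core" scenarios described above.
# Throughput is in arbitrary units (1.0 = one core at full clock);
# perfect scaling across cores is an illustrative simplification.

def throughput(cores: int, clock_fraction: float, parallel_threads: int) -> float:
    """Work per unit time when the task can use `parallel_threads` threads."""
    usable_cores = min(cores, parallel_threads)
    return usable_cores * clock_fraction

# Scenario A: 2 cores at full speed, 2 disabled.
# Scenario B: 4 cores at half speed.
for threads in (1, 4):
    a = throughput(cores=2, clock_fraction=1.0, parallel_threads=threads)
    b = throughput(cores=4, clock_fraction=0.5, parallel_threads=threads)
    print(f"{threads}-thread task: scenario A = {a}, scenario B = {b}")
# 1-thread task: A = 1.0, B = 0.5  -> the big drop shows up only in scenario B
# 4-thread task: A = 2.0, B = 2.0  -> the two look identical for parallel work
```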

So sticking to the letter of Moore’s Law, the processing power could keep doubling if the chips had more cores or you put in more chips, but in reality, it probably won’t be twice as fast.

Another way to think about Moore’s Law is size. If you could put two computers in the same space as one computer, that would also double your processing power, which is essentially what a modern two-slot graphics card does.
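
For what it’s worth, the “letter of the law” doubling looks like this; the starting count and the two-year doubling period are illustrative assumptions:

```python
# The "letter of Moore's Law" mentioned above: a count (transistors, cores,
# or whole chips in a box) doubling roughly every two years.
# Starting count and doubling period are illustrative assumptions.

def projected_count(start_count: int, years: float, doubling_period_years: float = 2.0) -> float:
    """Compound doubling: count * 2^(years / period)."""
    return start_count * 2 ** (years / doubling_period_years)

print(projected_count(4, 10))   # 4 units today -> ~128 in a decade
# More cores or more chips keeps the count on the curve, but as the thread
# notes, that is not the same as the machine actually feeling twice as fast.
```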