Is Moore's law starting to come to an end now?

What I'm saying is that in 2010, when you walked into a computer store and looked at most computers at the $500 price point, they had the specs below. Now it's 5 years later, and it's the same price and the same specs!!!

I got a computer in 2010 for $500 and it had a 1TB hard drive and 4GB of RAM!!! I saw laptops and notebooks with 128GB and 256GB SSDs in 2010!!! Now, 5 years later, it's the same 128GB and 256GB SSDs.

So with new technology, you would think there should be higher specs by now.

The point of this thread is that computer technology is stagnating.

If computer technology were still moving like it did before 2010, yes, before 2010!!! Then the 1TB hard drive of 2010 would have become a 6TB hard drive by 2015, a 12TB hard drive by 2020, and a 16TB hard drive by 2025. That would not be technology that is stagnating.

Why did hard drives go up so much in the years before 2010?

So I got a computer in 2010 with 4GB of RAM that cost $500, so why do they still sell some $500 computers with 4GB of RAM? I could see it if they were selling a $200 computer with 4GB of RAM.

So the problem is sloppy programmers and software makers that will not bring out software that uses more than 4 cores!!! Otherwise we would have 6, 8, and 12 cores by now; 12, 16, and 20 cores five years from now; and 20, 25, and 30 cores ten years from now!!!

I could build a $500 computer with twice the specs of what the computer store sells at that price.

I just got a 4TB external hard drive the other day as a backup drive for $200!!!

Gigabyte GA-78LMT motherboard: $80
GTX 950 or GTX 750 Ti: around $200 for a really good video card
HDD: $90-$120
Corsair Vengeance RAM: $100 for 8GB
AMD FX-8320 3.5GHz 8-core CPU: $100

Look at the iPad specs from 2010 to 2014:

iPad (1st generation)
Announced: January 27, 2010
Memory: 256 MB DDR RAM built into the Apple A4 package
Storage: 16, 32, or 64 GB

iPad Air 2
Announced: October 16, 2014
Memory: 2 GB LPDDR3 RAM
Storage: 16, 64, or 128 GB

The answer comes in two parts.

One - they are not the same specifications. You are concentrating on the wrong numbers. It is like saying that monitors have stagnated because everyone is still buying 24" monitors; surely by now they should be selling 50" computer monitors. What matters is that the computer is better, and that can come in many ways: cheaper, faster, smaller, lower power.

The iPad Air 2 is significantly faster and lighter, with a longer battery life than its five-year-old predecessor. The first-generation iPad was something of a miracle, in that they got the basics to work at all.

Compare the processors from 5 years ago to now. Sure, the names are much the same - i3, i5, i7 - and the number of cores is the same, but the compute speed has improved and the caches are much bigger. More than that, a lot of effort has gone into the GPUs on the chip. Sure, if you are a gamer, you won't care so much, but for just about all other uses, a modern fifth- or sixth-generation Core processor has enough graphics ability all by itself to satisfy. The Iris graphics in the top end are ridiculously good. Look at the Intel NUC boxes.

The second point remains from before: they sell what people want. If there is no value in selling something more powerful than people need, they won't. People can get away with low-spec machines, and see no reason to demand higher.

There is some sloppiness, but many problems simply don't parallelize. It is very hard to work out how to make a word processor use more than a few cores. Applications like photo editing, movie editing, animation, and of course many games do have parallelizable components. But parallel coding is not easy. It isn't hard to come up with a parallel algorithm that is no faster running across a few cores than a simple algorithm running on one core. It is also altogether too easy to mess up and create buggy code that crashes, deadlocks, or gets the wrong answer.
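
To make that concrete, here is a minimal sketch in Go (the example and its counts are mine, not from any real application) of how parallel code silently "gets the wrong answer": four goroutines bump a shared counter, and the unsynchronized version loses updates while the atomic version stays correct.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const perWorker = 100_000

	var racy int64 // plain variable: concurrent increments race
	var safe int64 // incremented atomically: always correct

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < perWorker; j++ {
				racy++                    // data race: lost updates likely
				atomic.AddInt64(&safe, 1) // safe under concurrency
			}
		}()
	}
	wg.Wait()

	// safe is always 400000; racy usually comes up short.
	fmt.Println("racy:", racy, "safe:", safe)
}
```

Run it with `go run -race` and the race detector flags the first counter immediately.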

Parallelization is extremely difficult, except for things that are very, very obviously able to be done in parallel.

The issue with parallelization is that whenever one piece of work needs an intermediate result from another, those synchronization points gut the whole idea. Sometimes you can get away with "sampling", that is, taking a sample of the current intermediate (or previous complete) state of some computation and using it as an intermediate value until your next sample. This can work for, e.g., video game rendering, where you can use the results of the previous physics computation to draw everything on screen while the physics and game logic work on churning out the next computation.
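
Here is a hedged sketch of that sampling idea in Go, shaped like a game loop; all the names (WorldState, stepPhysics, render) are invented for illustration. The renderer always draws the previous completed physics state while the next one is computed concurrently, so there is only one synchronization point per frame.

```go
package main

import "fmt"

// WorldState is a stand-in for positions, velocities, etc.
type WorldState struct {
	Tick int
}

// stepPhysics computes the next state purely from the previous one.
func stepPhysics(prev WorldState) WorldState {
	return WorldState{Tick: prev.Tick + 1}
}

// render reads a completed state; it never waits on the in-flight step.
func render(s WorldState) {
	fmt.Println("rendering tick", s.Tick)
}

func main() {
	state := WorldState{}
	for frame := 0; frame < 3; frame++ {
		next := make(chan WorldState, 1)
		go func(prev WorldState) { next <- stepPhysics(prev) }(state)

		render(state)  // draw last frame's physics while the next one computes
		state = <-next // one sync point per frame, not per object
	}
}
```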

It's more common, though, that even things that are partially parallelizable gain no real benefit from actually being parallelized, because of very low-level (and largely unimportant to the layman) details like the cost of kernel-level thread switching.
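
As a rough, hedged illustration of overhead swamping the gain (the 100,000-element size is arbitrary; only the shape of the comparison matters, not the numbers on any given machine), here the "parallel" version spawns a goroutine per tiny task and typically loses to the plain loop:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func tinyTask(x int) int { return x * x } // far too small to be worth a thread

func main() {
	const n = 100_000
	results := make([]int, n)

	start := time.Now()
	for i := 0; i < n; i++ {
		results[i] = tinyTask(i)
	}
	fmt.Println("sequential:        ", time.Since(start))

	start = time.Now()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // one goroutine per element: scheduling dominates
			defer wg.Done()
			results[i] = tinyTask(i)
		}(i)
	}
	wg.Wait()
	fmt.Println("goroutine-per-task:", time.Since(start))
}
```

And goroutines are far cheaper than kernel threads; with raw OS threads the gap would be even wider.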

Parallel computation can be a real trial by fire. There are times when I've written things that, by all rights, should have been sped up by running in parallel but weren't, because at that performance level tiny changes to the program architecture ruined everything. Most of the details require too much background to really explain to a layman, but tiny things like the overhead of atomic values, memory layout and cache contiguity/prefetch, extra heap allocation, and other normally inconsequential factors can add up when you structure a program to be both parallelizable and maintainable.
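
One of those tiny things is false sharing: two counters that happen to land on the same cache line make independent cores fight over it. This is a hedged sketch (the struct names are invented and the 64-byte line size is an assumption about typical x86 hardware) where padding the fields apart usually restores the expected speedup:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const iters = 50_000_000

// shared: both counters likely sit on the same 64-byte cache line.
type shared struct {
	a, b int64
}

// padded: filler pushes b onto a different cache line from a.
type padded struct {
	a int64
	_ [56]byte // pad out the rest of a's cache line
	b int64
}

// bench runs two independent increment loops on two goroutines.
func bench(incA, incB func()) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		for i := 0; i < iters; i++ {
			incA()
		}
	}()
	go func() {
		defer wg.Done()
		for i := 0; i < iters; i++ {
			incB()
		}
	}()
	wg.Wait()
	return time.Since(start)
}

func main() {
	var s shared
	var p padded
	fmt.Println("same cache line:     ", bench(func() { s.a++ }, func() { s.b++ }))
	fmt.Println("separate cache lines:", bench(func() { p.a++ }, func() { p.b++ }))
}
```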

Sometimes you can speed things up a little more if you're not worried about ever having to look at and understand the code again, but that's usually a bad idea. Some languages that have lightweight coroutines or green-thread schedulers, like Erlang and Go, can actually perform better in the general case, because they allow you to treat non-parallelizable-but-concurrent things and actually-parallelizable things the same way: parallelizing what can be made parallel, and running sequentially what can't. These help a little, but you're still bound by the need to describe something whose logic you can still understand in a month.
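
A small example of what Go buys you there (the tasks are invented; the sleeps stand in for network waits): concurrent-but-not-parallel I/O and genuinely parallelizable CPU work are written the same way, and the runtime sorts out which actually runs in parallel.

```go
package main

import (
	"fmt"
	"time"
)

// fetch is concurrent but mostly waiting, like a network call.
func fetch(name string, out chan<- string) {
	time.Sleep(100 * time.Millisecond)
	out <- name + ": fetched"
}

// crunch is CPU-bound and genuinely parallelizable.
func crunch(n int, out chan<- string) {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i
	}
	out <- fmt.Sprintf("crunched: %d", sum)
}

func main() {
	out := make(chan string)
	go fetch("feed", out)
	go fetch("mail", out)
	go crunch(50_000_000, out)

	for i := 0; i < 3; i++ {
		fmt.Println(<-out) // same plumbing regardless of task type
	}
}
```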

I'd actually say that there are three categories of parallelizability, not two. You've got some jobs that can't really gain much from extra processors at all. Then you've got some that can benefit from somewhere in the neighborhood of 1.5-3 processors (1.5 processors meaning that you're using one to full capacity and a second to about half capacity). And then there are the massively parallel jobs that can easily use as many processors as you can throw at them. I once had a project where I literally let my computer run for weeks on end computing something, and if I'd had more processors, I'd have gotten linear improvements in performance all the way up to about 5000 processors.
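
That third, massively parallel category looks like this worker-pool sketch (the job count and the work function are placeholders): the jobs share nothing, so throughput scales roughly linearly with worker count until you run out of cores, or of jobs.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// expensive stands in for heavy, fully independent work.
func expensive(job int) int {
	acc := 0
	for i := 0; i < 1_000_000; i++ {
		acc += (job + i) % 7
	}
	return acc
}

func main() {
	jobs := make(chan int)
	results := make(chan int)
	workers := runtime.NumCPU() // scale with however many cores exist

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- expensive(j) // no shared state between jobs
			}
		}()
	}

	go func() {
		for j := 0; j < 1000; j++ {
			jobs <- j
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	total := 0
	for r := range results {
		total += r
	}
	fmt.Println("total:", total)
}
```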

IMO, the better way to get the benefits of parallelism today is a couple of levels higher up in the work stream.

A single user is often using their computer / tablet / smartphone to accomplish only a single end-user task. It might or might not have parallelizable sub-tasks, but that's not where the gravy is. They might be doing two tasks, such as a news feed or music alongside their useful work: writing SDMB posts :). But they're not doing 12 tasks; at least, the vast majority of users aren't. Our esteemed (and wonderfully persistent) **sweat209** certainly isn't.

That computer also has some OS and housekeeping functions, e.g. backup, software updates, cloud sync, etc. Those could use additional cores, but again only a handful at most. And most of that work is not CPU-bound; one core could keep up with the RAM and off-machine connectivity speeds of several of those ancillary tasks.

The real gain of parallelism is at the servers. A search server with 10,000 cores can usefully process 10,000 independent search requests. A cloud-based word processing server can usefully employ ten thousand cores to simultaneously edit ten thousand docs for ten thousand users even if each instance of the word processing app is completely single-threaded.
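
Go's standard net/http server is a handy illustration of exactly that: each incoming request gets its own goroutine, so completely single-threaded handler code still keeps every core busy across many users. (The handler below is a made-up stand-in for "edit one user's doc".)

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handle is plain sequential code; the server parallelizes across requests.
func handle(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "edited doc for %s\n", r.URL.Path)
}

func main() {
	http.HandleFunc("/", handle)
	// ListenAndServe spawns a goroutine per connection, scheduled
	// across all available cores.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```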

It’s very obvious when you take your smartphone offline that a vast amount of its utility goes away too. Returning sorta to the OP’s lament, or at least to the sound responses to it, that will be increasingly the case with what we think of as “typical PCs”.

A typical PC in 2020 may well be not much more powerful than a 2010 PC in isolation. All the gain will be in what it’s connected to. And it will therefore be vastly more useful as long as it’s connected. And as the experts have said repeatedly above, the gain will be more in price/performance than in raw performance. We already have performance and capacity enough for the routine tasks of the typical deskbound PC user.

My guess would be that for a typical PC user, more is gained by giving a compute-intensive process its own processor than trying to parallelize that code. And it’s a lot easier. When you have a machine dedicated to one job it’s another matter, but I’m not sure we’ve gotten that much better since I learned this stuff from Dave Kuck.

And I agree with LSLGuy about servers. That’s why Google and all build their own servers. Single thread throughput is not that important.

I think the OP is mainly concerned about computer gaming performance.

Gaming performance is not driven much by CPU beyond a certain point. That point has been reached and is commonly available. Same thing for RAM. Same thing for SSD.

What chiefly matters for gaming is VRAM and GPU processing power. On that front, there has been plenty of progress.

You want lots of memory and lots of processors?
Here’s the Titan Z with 5760 CUDA cores and 12GB of video RAM.
(Newegg.ca listing 600499110)
CUDA cores can be used for general processing applications like physics.

This underestimates the amount of progress, since, as has been pointed out a few times, computing performance isn't just core counts, core frequency, and RAM.

For a more reasonable enthusiast GPU, there's the R9 390X with 2816 stream processors and 8GB of VRAM.
(Newegg.ca listings 600478784, 600566294, 600494828)

Gaming is now much more dependent on GPU & VRAM than on CPU & RAM, so consumer-level computing progress has mainly been in GPU & VRAM.

Francis (and others, but Francis brought it up):
You talk about computing being able to do really interesting stuff now; what kinds of things do you have in mind?

LSL (and others, but LSL brought it up):
In keeping with the OP's gaming-centric concerns, I do wonder when and how cloud computing will start to displace client-side processing for graphics. There have been a few noises from Microsoft, but I take it that the bottlenecks for advanced cloud-gaming graphics lie in lowering the encoding/decoding delay enough that the ping stays below 100ms, and in having a bandwidth allowance big enough to make cloud gaming affordable.

I don't know what it's like in the US, but an enthusiast in Canada can get unlimited bandwidth at a reliable 15Mbps down and 10Mbps up for CAN$64, which is pretty good. That might be enough for 1080p stream gaming, given that compression for game graphics isn't quite as efficient as it is for movies.

That depends on the game. Larger strategy games like Europa Universalis 4 are almost entirely CPU-bound, and it actively harms the AI because of optimizations they have to do to make that many agents co-exist (like only considering certain facts on the first of an in-game month).

I think this would be true only in a way: the 2015 computer has a better CPU and video card, but the hard drive and RAM are the same.

So Gateway or HP or whoever still sell a $500 computer with the same RAM and hard drive specs as a 2010 computer, but a better CPU and video card.

I got some hard drives bigger than 1TB and they stopped working after two or three years. So maybe there are engineering problems with hard drives over 1TB.

As for hard drives, they are obsolete and SSDs are the future. Any new computer I get will have an SSD and no hard drive.

I would take a 500GB SSD over a 2TB hard drive any day.

Hard drives are on the way out and SSDs are the future.

Like others have said, there are CPUs made with more than 4 cores, but for some reason they are very costly. That is why most computer stores do not have them.

Here is one awesome CPU: an 18-core CPU!!!

Intel Xeon E5 2699 v3 Processor


The problem? It's not for regular consumers, and the price may never come down for regular consumers. So you may never see this in computer stores, unless you have lots of money and build your own system.

We won’t know what the OP’s goals are until the OP bothers to tell us, which he has thus far been unwilling to do.

But IMO he seems mostly to be complaining that entry-level $500 machines sold at low-end stores appear to him to have no fancier specs than $500 entry-level machines sold there a few years ago.

As has been demonstrated, his perception is false. But true or false, it has little to do with the needs of a state of the art gamer. My bet is he’s big on saving videos and maybe doing some minor league editing on them. Hence his desire for mongo disks or SSDs in low end machines.

Maybe he’ll tell us now what his actual use cases are.

You would know a lot more about this than I.

However, EU4 & such are easily handled by common CPUs.

This Can You RUN It | Can I Run It | Can My PC Run It page says it needs an Intel Pentium IV 2.4 GHz or AMD Athlon 64 3500+, 2GB of RAM, and an NVIDIA GeForce 8800 or ATI Radeon X1900 with 512 MB of video memory, which is piddling.

Hence my statement that gaming performance is not driven much by CPU beyond a certain point and that that point has been reached.

Would you drop the issue of packing more cores into a CPU already? It's of very little utility at the consumer level. A consumer who buys a Xeon E5 2699 for consumer-level applications is like a guy who wants a pickup truck and buys a dump truck: he's a moron with far too much money.

SSDs will become common, but HDDs are here to stay for a long time. Some people build movie collections, and once 4K becomes common in porn, collections and siterips will be gargantuan.
ETA: Following LSL’s comment: Sweat, would you tell us why this matters to you? We can then better answer your questions.

Oh, you can definitely handle it with a common computer. However, EU4 and CK2 had really notable performance issues before the optimization patches that made them consider a lot of actions only on the first of the month instead of every day or week.

Your CPU also affects how fast the fastest game speed can go, which isn't too much of an issue, because you usually don't want to run speed 5 that far into the future, but it is CPU-bound.

(Also, I believe both games are single-threaded. EU4 and CK2 in particular are games where the AI could make decisions in parallel, each one taking into account only the previous day.)
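
A hedged sketch of how that could look (World, Decision, and the treasury rule are all invented for illustration): every agent decides in parallel against a read-only snapshot of the previous day, then the decisions are applied in one sequential pass, so no agent ever sees another's same-day choice.

```go
package main

import (
	"fmt"
	"sync"
)

type World struct {
	Day      int
	Treasury []int // one entry per nation, say
}

type Decision struct {
	Nation int
	Spend  int
}

// decide reads only the previous day's snapshot; it mutates nothing.
func decide(nation int, prev World) Decision {
	return Decision{Nation: nation, Spend: prev.Treasury[nation] / 10}
}

func main() {
	prev := World{Day: 1, Treasury: []int{100, 250, 80}}
	decisions := make([]Decision, len(prev.Treasury))

	var wg sync.WaitGroup
	for n := range prev.Treasury {
		wg.Add(1)
		go func(n int) { // each nation's AI runs on its own goroutine
			defer wg.Done()
			decisions[n] = decide(n, prev)
		}(n)
	}
	wg.Wait()

	// Apply all decisions sequentially to produce the next day's state.
	next := World{Day: prev.Day + 1, Treasury: append([]int(nil), prev.Treasury...)}
	for _, d := range decisions {
		next.Treasury[d.Nation] -= d.Spend
	}
	fmt.Printf("%+v\n", next)
}
```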

The start of the thread was: is Moore's law starting to come to an end now? The reply was no, it's not; computers are getting faster and better, just not as fast as before.

There's no sign we've hit a wall yet; it's just not as fast.

I'm looking at everything as if computer technology should keep going up exponentially, based on the '90s and 2000s.

I think you are missing the point of this thread. The start of the thread was some doom and gloom that computers have hit a wall and the technology is stagnating. Many people replied and said no, computers are getting faster and better, just not as fast.

Maybe I'm thinking everything needs to go up exponentially, based on the '90s and 2000s, and there is a communication breakdown here in this thread.

The point of any new technology is that it is very costly at first, and after many years it goes down in price.

Yes, the 18-core Intel Xeon E5 2699 v3 is one awesome CPU!!! But based on Moore's law and the exponential growth of the '90s and 2000s, you would be getting it on the open market for $100 in 10 years, when a new 30-core Intel Xeon is out!!!

But you are missing the point and putting philosophy into this thread, saying that most people who check their email, go on the internet, and check YouTube and Facebook would not need an Intel Xeon E5 2699 v3. You are missing the point about exponential technology growth; hard drive specs and CPU core counts are just examples of it.

This is not a thread to talk about the philosophy of the wants and needs of computer use. Heck, most people who just check their email, go on the internet, and check YouTube and Facebook can probably do that on a 10-year-old computer!!

But you are missing the point about Moore's law and the exponential growth of the '90s and 2000s. Unless that is over now, and computers will not go up exponentially, but in little steps now and then.