Computers from 2010 to mid-2015:
CPUs: some progress, but not two or three times faster.
GPUs/video cards: not that much faster; certainly not two or three times faster.
Hard drives: in 2010, 1 TB was standard; now in 2015, 1 TB or 2 TB is standard!! Anything more than 2 TB and you'll be paying a lot of money.
And no 500 GB, 800 GB, or 1 TB solid-state drives as standard.
CPUs maxed out around 3 GHz back in 2004. CPU makers have since gone to multiple cores, maxing out at about 4 cores for the average person. There are chips with more cores, like 8 or 12, but they are too costly for 90% of people. That's why when you walk into a computer store today, most machines have 2 or 4 cores; more than 4 is too costly for 90% of people.
Intel was claiming, based on Moore's law, that every two years the number of transistors per unit area would double, and that computers would therefore double in speed every 12 to 18 months. Even though Gordon Moore never actually talked about speed or performance, computers from the '90s to 2005 really were improving at an exponential rate!!
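Just to make that doubling claim concrete, here is a toy back-of-the-envelope projection in Python (the 3-billion-transistor 2010 starting figure is purely illustrative, not an actual Intel number):

[code]
# Toy projection of "transistor count doubles every two years".
# The 2010 starting count (3 billion) is only an illustrative figure.
def projected_transistors(start_count, start_year, year, doubling_period=2.0):
    return start_count * 2 ** ((year - start_year) / doubling_period)

for year in range(2010, 2021, 2):
    count = projected_transistors(3e9, 2010, year)
    print(f"{year}: ~{count / 1e9:.0f} billion transistors")
[/code]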
Some people say Moore's law will end somewhere in the 2015-to-2020 time frame. I think even Gordon Moore himself said some time ago that he, along with Intel, now predicts it will end around 2015 to 2020, at around the 10 nm to 5 nm scale.
I think even Intel is saying they are now having trouble, and predict that over the next 5 years they may get down to 8 nm or maybe even 5 nm, and even that will be hard. Beyond that, the quantum world starts to play havoc.
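(Side note: those node numbers follow from simple geometry. Shrinking linear dimensions by about 1/sqrt(2) per generation halves the area per transistor, which is what doubles density. A quick sketch, assuming that historical ratio keeps holding:

[code]
# Each full process-node step historically shrinks linear feature size by
# about 1/sqrt(2), halving the area per transistor (doubling density).
node = 14.0  # nm, the current node mentioned above
for _ in range(3):
    node /= 2 ** 0.5
    print(f"next node: ~{node:.0f} nm")
[/code]

That reproduces the familiar 14 → 10 → 7 → 5 nm sequence.)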
So my questions: do you agree that performance may not be growing as quickly as it was? And what are the implications for society and future technology?
4K and 8K gaming require faster computers
Better weather forecasting requires faster computers
Video editing of 4K and 8K footage requires very fast computers
Other scientific applications do too
As has been said many, many times here, Moore’s Law has nothing to say about processing speeds.
It’s an observation of transistor density, and it is still on track.
Speed is certainly correlated with transistor density, but it’s not linear. For a while there we were in a regime where it was close to linear, and that’s where people got the idea that Moore’s Law was about speed, but speed has since mostly leveled off while density continues to increase.
In 2010, the fastest NVIDIA GPU was the GTX 480, which has these basic specs:
3 billion transistors
1.3 TFLOPS processing
177 GB/s bandwidth
33 GP/s fillrate
42 GT/s texture rate
Today, the fastest NVIDIA GPU is the Titan X:
8 billion transistors
6.1 TFLOPS processing
336 GB/s bandwidth
96 GP/s fillrate
192 GT/s texture rate
Memory bandwidth increased the least, but that's largely because bandwidth efficiency has gone up greatly. The other specs went up by factors of roughly 3x to 4.5x.
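For anyone who wants to check the arithmetic, here are those ratios computed directly from the spec figures quoted above (just a quick sketch over the numbers listed in this post):

[code]
# Scaling factors between the 2010 and 2015 flagship GPU specs quoted above.
gtx_480 = {"transistors (B)": 3, "TFLOPS": 1.3, "bandwidth (GB/s)": 177,
           "fillrate (GP/s)": 33, "texture rate (GT/s)": 42}
titan_x = {"transistors (B)": 8, "TFLOPS": 6.1, "bandwidth (GB/s)": 336,
           "fillrate (GP/s)": 96, "texture rate (GT/s)": 192}

for spec, old in gtx_480.items():
    print(f"{spec}: {titan_x[spec] / old:.1f}x")
[/code]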
The newest Intel & AMD chips use a 14-nanometer process for the circuit features on the chip.
A hydrogen atom (the smallest one) is about 0.1 nm wide, and a silicon atom roughly 0.2 nm, so a 14 nm trace is only on the order of 70 atoms across. It's clear that they are approaching the limit for how small circuit traces on a chip can be.
But …
they have recently shown experimental versions of SSD memory chips made with a 3-D process (both across and up/down) that seem to be working. So IC chip makers may reach the limits of 2-D circuit size, but move on to 3-D circuitry (imitating Seymour Cray’s 1964 move to cordwood construction for mainframes).
Indeed, I have a 1 TB SSD in my MacBook Pro (aftermarket). Samsung just announced a 16 TB SSD coming out soon.
But yes, Moore's law does seem to be in some trouble: the latest updates from Intel have delivered only very modest performance gains, and they are delaying the switch to 10 nm.
There's nothing experimental about them. I have in my computer right now an SSD (the Samsung 850 EVO) using NAND flash with 32 layers. 48-layer flash will be available soon, and there's little reason to believe they'll stop there.
CPUs maxed out around 3 GHz back in 2004. CPU makers have since gone to multiple cores, maxing out at about 4 cores for the average person. There are chips with more cores, like 8 or 12, but they are too costly for 90% of people. That's why when you walk into a computer store today, most machines have 2 or 4 cores; more than 4 is too costly for 90% of people.
So? The cost of computing is always going down. Hell, I paid well over $500 for a Pentium processor when they first came out.
Back in 1969, when NASA was putting a man on the moon, they had multiple IBM System/360 Model 75 mainframe computers, costing about $3 million each. Nowadays your average free smartphone outperforms them by a mile.
Trust me, we’re still in the dark ages of computing power.
Maybe the problem is that computer stores carry cheap CPUs and GPUs and older, smaller SSDs like 128 GB and 256 GB, and the 500 GB, 800 GB, 1 TB, and 2 TB drives only show up in custom-built computers or machines people assemble themselves?
If that is the case, maybe that is what is giving me the idea that GPU and SSD technology is stagnating.
If that is the case, then why do computer stores carry old stuff? Unless you build your own computer or order a custom build from Dell or Apple.
I remember in 2010 we had 128 GB and 256 GB SSDs, and now in mid-2015 it is still 128 GB and 256 GB.
Lots of computers don't even have an SSD; that is how sad it is. :(
Most $600-to-$1,000 computers come with a cheap $50 video card, or a $100 one if you are lucky. :mad:
Maybe the problem is not Moore's law, but that computer stores carry old stuff.
3-D chips work for memory, but how can they work for CPUs? It's my understanding that shrinking the transistors reduces the power consumption per transistor. (This must be so, or power consumption would have grown exponentially.) If they stop shrinking the transistors and just start stacking them on top of each other, the power consumption will grow with each layer: a 100-layer CPU will use 100 times the power of a 1-layer chip. That would be the end of gains in processing power per watt. It would also be impossible to extract the heat; the chips would melt as soon as you turned them on.
Even if you put cooling passages in the die and immerse it in liquid helium, you will run into limits. Stacking can't grow endlessly; we are never going to see 10,000-layer processors.
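To put rough numbers on that argument, here is a toy model that assumes total power simply scales with layer count; the 50 W per-layer figure is an invented placeholder, not real silicon data:

[code]
# Toy model of the stacking argument: if per-layer power stops shrinking,
# total power grows linearly with the number of stacked layers.
PER_LAYER_WATTS = 50  # invented placeholder for one CPU layer

for layers in (1, 2, 10, 100):
    print(f"{layers:>3} layers: {layers * PER_LAYER_WATTS:>5} W")
[/code]

Under those assumptions, performance per watt stays flat while the heat flux through the same die footprint grows a hundredfold, which is exactly the cooling problem described above.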
Computer stores stock what people are buying. The reality is that a $300 computer will do what most people want. Only gamers and some professional users need anything faster. Gamers drive the mainstream GPU market. The Iris graphics in a current higher-end Core series CPU are more than fast enough for the vast majority of even critical non-gaming uses. (If you look at the die, the GPU occupies about twice the area of the CPU cores.)
If you want to build a nice, very fast machine, you are mad not to go with an SSD, and soon it will be the standard build, but for the moment we are in transition. Even SSDs are changing: NVMe (a direct PCIe interface) will push out SATA (including M.2), and things will settle down to a highly integrated and neat architecture.
But the single biggest danger to the semiconductor industry is that the PC has stagnated. The drivers towards faster machines that once funded progress at that pace are no longer there. The impact cloud-based computing is going to make is slowly dawning on the commercial end of town. If you need it, there is insane compute available; but if you don't, a $300 PC will do fine for a long time yet.
M.2 supports both PCIe and SATA. I have an SM951 in my home machine, and it is both amazingly compact (smaller than a stick of gum) and fast: ~2 GB/s, compared to ~0.5 GB/s for SATA. If you have the means, I highly recommend picking one up.
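If you want to sanity-check a drive yourself, a crude sequential-read test is easy to write. This sketch assumes you point it at some existing multi-gigabyte file (the filename below is a placeholder), and note that OS caching can inflate the numbers on repeat runs:

[code]
# Crude sequential-read benchmark: read a large existing file in big chunks
# and report throughput. PATH is a placeholder; use any multi-GB file on
# the drive under test.
import time

PATH = "bigfile.bin"      # placeholder path
CHUNK = 8 * 1024 * 1024   # 8 MiB per read

total = 0
start = time.perf_counter()
with open(PATH, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start
print(f"read {total / 1e9:.2f} GB in {elapsed:.1f} s "
      f"-> {total / 1e9 / elapsed:.2f} GB/s")
[/code]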