Just a random musing. I was always in awe of what a terrific combination of mechanics and electronics a magnetic HDD is, and of the precision and speed with which it performs. OTOH, since SSDs came on the market it seems a little anachronistic to still rely on mechanical means for accessing storage. It’s been quite a while since SSDs became affordable, but they are still about five times more expensive than HDDs, if I can trust my brief Amazon research. Why is it more expensive to produce SSDs than HDDs with their many moving mechanical parts, when an SSD is just silicon, a little copper and a plastic case? Maybe the production of flash memory is exceptionally elaborate, I don’t know. So, what is the reason?
They also tend to have a limited (though climbing) number of writes available before they stop working, and are more prone to failures than spinning spindle/electromechanical hard drives.
Why do they cost more? From here:
ETA: And another source:
I don’t think this is true. And the limited number of writes is taken into account in the design (there’s actually more physical space than is reported by the device, and wear-leveling algorithms spread writes out as evenly as possible).
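For the curious, here is a minimal sketch of the idea behind wear leveling and over-provisioning. It is purely illustrative, not how any particular controller actually works, and all the numbers are made up:

```python
# Illustrative wear-leveling sketch, not any real controller's algorithm.
# The drive exposes fewer blocks than it physically contains (over-provisioning),
# and every new write goes to the physical block with the fewest erases so far,
# so wear spreads out evenly instead of burning out a few hot blocks.

PHYSICAL_BLOCKS = 220   # what the flash actually contains (made-up number)
LOGICAL_BLOCKS = 200    # what the OS is told it has

erase_counts = [0] * PHYSICAL_BLOCKS

def write_block(data: bytes) -> int:
    # Pick the least-worn physical block as the target.
    target = min(range(PHYSICAL_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[target] += 1
    # ... a real controller would program `data` here and update its
    # logical-to-physical mapping table ...
    return target

for _ in range(100_000):
    write_block(b"some data")

# With this policy the most- and least-worn blocks differ by at most one erase.
print(max(erase_counts) - min(erase_counts))
```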
Pretty sure a good SSD is a lot more reliable than a spindle drive.
This is the primary answer. Flash is an integrated circuit, like CPUs, GPUs, RAM, and so on.
Modern integrated circuits are unbelievably sophisticated. They start with a perfect crystal of silicon. They then go through hundreds of optical, chemical and mechanical processing steps, each one with nanometer precision. If any one of these steps is even slightly wrong, the entire unit is junk. “Wrong”, in this case, can mean literally just a few atoms off. And although some types of chips (flash is one) can handle some number of defective components (by turning off the defective cells), many billions of components must still come out perfect.
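A common back-of-the-envelope way to see why a few defects matter so much at chip scale is the classic Poisson yield model: the fraction of defect-free dice is roughly exp(-D·A), where D is the defect density and A is the die area. The numbers in this quick sketch are invented, purely for illustration:

```python
# Back-of-the-envelope Poisson yield model: yield ≈ exp(-D * A),
# where D = defects per cm² and A = die area in cm².  The numbers below
# are made up for illustration, not any fab's real figures.
from math import exp

defect_density = 0.1                 # defects per cm² (hypothetical)
for die_area in (0.5, 1.0, 2.0):     # die sizes in cm²
    y = exp(-defect_density * die_area)
    print(f"die area {die_area} cm²: ~{y:.1%} of dice come out defect-free")
```

The same model also shows why bigger dice (more capacity per chip) get disproportionately harder to make without defect tolerance of the kind mentioned above.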
Hard drives require care in depositing the magnetic layer on the platters, but it’s a uniform coating. The other pieces of a hard drive have looser tolerances, and to a large extent are self-correcting; the heads follow the tracks even when there is variation (they must, or else hard drives wouldn’t work if there was any external movement at all, or even thermal variations).
It is true, on both accounts. The manufacturers will tell you up front that this is the case if you buy a shelf of SSDs for SAN storage. The mean time to failure is lower than for traditional drives, and there is a limited number of writes you can do before the drive is junk. I can get you cites if you really want them and don’t know this. The gap is narrowing, and I figure eventually it will be as you say, but it ain’t there yet.
Enlightening and very interesting answers so far. Thanks!
You’re operating under some very old information (in computer years anyway) and it isn’t correct anymore, on either account.
SSDs today offer MTBF ratings of 1.5 million hours and higher and cost about 1/3 as much per gigabyte as what you quoted in that five-year-old article.
This article is a little more current.
Here is just one example of a drive available today with that kind of MTBF and a price less than 1/3 what you quoted.
http://www.corsair.com/en-us/force-series-lx-256gb-sata-3-6gb-s-ssd
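As an aside, an MTBF figure like 1.5 million hours is easier to make sense of as an annualized failure rate: for a large fleet of drives, AFR ≈ hours per year / MTBF. A quick sketch of that arithmetic:

```python
# Turn a quoted MTBF into a rough annualized failure rate (AFR).
# AFR ≈ hours_per_year / MTBF is a standard approximation for a large
# population of drives; it says nothing about how long any one drive lasts.
HOURS_PER_YEAR = 24 * 365            # 8760

mtbf_hours = 1_500_000               # the figure quoted above
afr = HOURS_PER_YEAR / mtbf_hours
print(f"expected annual failure rate: ~{afr:.2%}")   # roughly 0.58%
```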
Besides MTBF, there are other ways storage can die much sooner: drops, falls, and vibration can kill an HDD quickly, and SSDs are essentially immune to those things. No moving parts means fewer things to fail, and less power consumption. When you really add up all the numbers and don’t just look at gross cost per gigabyte, SSDs aren’t even very much more expensive than HDDs anymore.
This endurance test (already two years old, counting from when it first began) hammered six SSDs for 18 months with nonstop writes greatly exceeding the amount of data most average users would write in a lifetime, and all of them, even the earliest to fail, outlasted their official endurance specifications by hundreds of terabytes. The longest-lasting of the group finally failed after writing more than 2 petabytes (two million gigabytes) of data.
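To put figures like that in perspective, here is a rough lifetime estimate from a write-endurance rating. Both numbers are assumptions for illustration, not the spec of any particular drive:

```python
# Rough endurance estimate: years of life = rated total bytes written / daily writes.
# Both figures below are made-up assumptions, not a real drive's specification.
rated_endurance_tb = 300     # hypothetical manufacturer TBW rating
daily_writes_gb = 50         # a fairly heavy desktop workload

years = (rated_endurance_tb * 1000) / (daily_writes_gb * 365)
print(f"~{years:.0f} years to exhaust the rated write endurance")   # about 16 years
```

And as the test above showed, drives often keep going well past the rated figure.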
Another point about SSD versus HDD.
SSDs have been coming down in price, but rather more slowly than we might have hoped. Part of this is that they only really became a major player in the storage space once the continual reduction in process feature size had already slowed down and Moore’s law was already failing. During this time HDDs have not stood still either, so an HDD today has significantly better capacity per dollar than one of five years ago as well. The question is really “why hasn’t the capacity-per-dollar gap shrunk?”, and the answer is basically that both technologies are still being developed, and there is no reason the ratio between the two should shrink. It could get bigger.
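To illustrate why the ratio need not shrink, here is a toy example with invented starting prices and invented rates of decline; if HDD $/GB happens to fall a bit faster than SSD $/GB, the ratio actually grows:

```python
# Toy illustration: if SSD and HDD prices per GB both fall exponentially but
# at different rates, the SSD/HDD price ratio need not shrink at all.
# Starting prices and decline rates are invented for illustration only.
ssd_price, hdd_price = 0.25, 0.03     # hypothetical $/GB today
ssd_drop, hdd_drop = 0.20, 0.25       # assumed annual price declines

for year in range(6):
    print(f"year {year}: SSD/HDD price ratio ≈ {ssd_price / hdd_price:.1f}x")
    ssd_price *= (1 - ssd_drop)
    hdd_price *= (1 - hdd_drop)
```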
Some of the justifications cited earlier are close to bogus, however. Sure, SSDs need more sophisticated controllers than a simple USB flash drive, but that complexity scales with production volume and improved processes, just like anything else. Investment in software for the controllers is a one-time cost and, like all software, is essentially free once written. It isn’t as if HDDs don’t contain very sophisticated controllers themselves.
The law of supply & demand kicks in here, and it hits you on both ends.
SSDs used to be expensive because they were a lower-demand item than hard disks, and so there was less competition and higher prices.
Now SSDs are more expensive because they are a high-demand item (in computers, cellphones, game systems, DVRs, cable boxes, anything wireless, high-end appliances like stoves, washers, sewing machines, etc.). So demand is outstripping supply, and the price stays high. The manufacturers are rushing to build more factories to produce flash memory, but those factories are expensive, take months or years to build, and need a while for workers to reach high-productivity skill levels (while hard disk factories are already built and staffed with experienced workers).
Eventually increased production of SSDs will bring their price down, while declining demand for hard disks will cause their price to increase (though as yet, that demand seems to be holding steady).
One answer is that both HDD and SSD cost per megabyte constantly improves. SSDs today are cheaper per MB than HDDs were around 2004. However, HDDs keep improving too. This can be seen in this graph:
Another answer is that storage demands are ever increasing, so it is often economically impractical to use all-SSD storage. E.g., each raw photo from a Nikon D810 or Sony A7RII camera is about 40 megabytes, and it’s easy at an event to shoot 1,000 stills – 40 gigabytes. That is just still photos, no video or anything else.
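Spelling that example out with some assumed per-gigabyte prices (placeholders, not current market figures):

```python
# Cost of storing one event's worth of raw photos, at assumed $/GB prices.
photo_size_gb = 0.04        # ~40 MB per raw file, as above
photos_per_event = 1_000
event_gb = photo_size_gb * photos_per_event    # 40 GB

prices_per_gb = {"HDD": 0.03, "SSD": 0.25}     # hypothetical prices
for kind, price in prices_per_gb.items():
    print(f"{kind}: ~${event_gb * price:.2f} to store this one event")
```

Multiply that by a year of shoots and the per-gigabyte gap adds up quickly.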
Re reliability, in general SSDs are more reliable, but they can still fail unpredictably and without warning. Just like an HDD, they must be backed up. In some studies SSDs at certain points in the life cycle had higher failure rates than some HDDs:
MTBF or Bit Error Rate (BER) is often misunderstood. It is not a typical error rate but a worst-case error rate. E.g., many HDD manufacturers list a spec of 1 failure per 10^14 bits read. 10^14 bits = 12.5 terabytes. So by that spec you might expect a failure on average every 12.5 TB. We know from actual experience this does not happen; see Empirical Measurements of Disk Failure Rates and Error Rates (Gray, 2005):
The BER is more akin to a 60,000-mile automobile warranty. It doesn’t mean the car breaks at 60,000 miles; rather, the manufacturer guarantees the car will work at least that long.
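For reference, the arithmetic behind that spec:

```python
# The arithmetic behind a "1 unrecoverable error per 10^14 bits read" spec.
spec_bits = 10**14
spec_bytes = spec_bits / 8          # 1.25e13 bytes
spec_tb = spec_bytes / 1e12         # 12.5 TB
print(f"at worst, one unrecoverable read error per ~{spec_tb:.1f} TB read")
```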
There have been a couple of recent papers summarizing SSD reliability studies. Overall not much has changed since earlier studies. In general SSDs as a class are more reliable per drive than HDDs as a class, but SSDs can still have significant failure rates, e.g., about 20% of SSDs developing uncorrectable errors over a four-year period:
Flash Reliability in Production: The Expected and the Unexpected (Schroeder, 2016):
A Large-Scale Study of Flash Memory Failures in the Field (Meza, 2015), Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems
What can one do to reduce the odds of an SSD failing?
From what I understand, 3D NAND is starting to make a real difference in terms of cost reduction, right? How much potential do the “skyscrapers” of 3D NAND have to keep increasing in height?
Anything that looks promising in the next few years?
RAW doesn’t use any compression, even lossless?
The previously stated sizes were for compressed RAW. By default, RAW on the D810 uses lossless compression and the A7RII uses visually lossless compression. Both allow uncompressed RAW, which is very roughly 2x the size.