This article from Slate.com talked a bit about something I’d never heard of before: solid-state hard drives, drives that use transistors rather than spinning magnetic disks to store data.
I haven’t kept up with computer technology as much as I used to, but this took me completely by surprise. Does anyone have one of these and can vouch for their speed? Are they as reliable (or unreliable) as the old spinning drives?
I’m looking to upgrade my computer soon as well, and I was wondering if I should invest in this. The guy only mentioned a 128GB version, which seems kind of small, but maybe he was still trying to save money. Would a PC still running XP be able to recognize the drive with no problems?
The biggest advantage of them is that, with no moving parts, they’re more robust against bumps, shakes, drops, and the like. The biggest disadvantage to them is that they’re still fairly low-capacity for their price (or expensive for their capacity, however you want to look at it). Last I heard, the speeds were comparable, but solid state has a lot more room to improve, again because of the lack of moving parts.
Enterprise-class SSD, also known as Enterprise Flash Storage, is impressive. It is expensive, but the gains in throughput due to the use of this technology can be staggering. Also, since flash storage requires a much smaller footprint and much less cooling, it is being touted as “green” in terms of data center floor space, power consumption, and climate control needs.
If you check out some of the white papers over at FusionIO (note: I am not affiliated with them and don’t buy their products, but I am familiar with what they make), you can see how the technology is used in their systems.
Solid state drives may suffer data loss over time, so they are not as reliable for long-term storage as conventional disk drives. Since backup and archiving methods are required for valuable information anyway, this shouldn’t remain much of a roadblock to further development of these devices. It shouldn’t be too long before they begin to dominate the market for PCs, though it may be some time before cost-effective large-scale devices are available. Since the current devices offer no great advantage over conventional technology, the market seems to be waiting for advances in the technology.
The most common uses of this technology are in ‘thumb drives’ and ‘memory sticks’.
Perhaps consumer-level devices suffer data loss, but I am not aware of any enterprise-level devices that don’t overprovision carefully and run tests to make sure that the life of the drive is at least comparable to a standard disk drive’s. Failure rates for EFD are much lower because of the lack of moving parts.
I am not sure if you’re talking about devices for stuffing into a notebook computer, or devices on the enterprise scale. For consumer-grade devices, performance is usually more limited by other factors, like the OS itself, and so Windows doesn’t run blazingly fast on SSD. However, when you need sustained random reads and writes, the raw throughput supplied by EFD can be breathtaking. The major holdback right now seems to be most companies treating them as a simple replacement for a typical hard disk. When connecting enterprise hardware, large-volume storage is most often attached via some sort of network, whether it be Fibre Channel SAN or GigE IP SAN, and the latency the network introduces can be a major roadblock to realizing performance gains. Of course, this networked storage provides other value, like redundancy, that attached storage cannot, so it continues to be used.
EMC’s VMAX product line also makes extensive use of EFD hardware. We do use VMAX where I work and have approximately 1TB of Flash storage available. It has made major improvements in performance possible, but it does not approach the performance gains seen by directly attaching devices to a server through PCIe.
I have no disagreement with that analysis. As you noted, all enterprise-level storage systems should have sufficient redundancy to prevent data loss from any technology. There have been questions about the long-term reliability of solid state devices in an unpowered state. Do you know of any conclusive information about that issue?
Edit: I should have noted that “cost-effective” would evaluate differently for installation of new storage devices vs. replacement of existing units.
I don’t have info on that issue, sorry. The economics of flash devices is such that, right now, they don’t lend themselves to archival storage in an unpowered state. Perhaps as the prices come down, this will be investigated more, but at this time, 1TB and larger standard disks are a much better economic solution for that sort of thing.
The article said that he configured the system so that as little as possible was on the solid-state drive (just the OS, with the data files on a conventional drive). I read the article as well, and was also tempted by the idea.
We use them all the time in industrial computers. A regular hard drive would quickly get destroyed from the vibration and dust in an industrial environment. As others have noted the capacity is smaller than a mechanical drive for the same amount of money.
As for the issue of problems with long term storage, I’ve read about that but we personally haven’t had an issue with it.
As TriPolar noted, it’s the same thing that’s in a thumb drive; it’s just in a different package and has an IDE or SATA interface instead of USB.
Solid state drives do have limited write endurance; you can’t rewrite the same cells over and over indefinitely the way you can with a mechanical drive. Solid state drives get around this somewhat with wear leveling: the controller fakes out the tracks and sectors so that you don’t keep writing to the same spot in the drive’s flash, even though you may be writing to the same track and sector as far as the software or operating system is concerned. Still, if you have an application that does an exceptional amount of writing to disk, you may have problems in the long run.
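To make that “faking out” a bit more concrete, here is a toy Python sketch of the wear-leveling idea. It’s an illustration of the concept only, not any real controller’s firmware, and every name in it is made up: the controller keeps a map from logical sectors to physical blocks and steers each rewrite to the least-worn free block, so hammering the same “sector” from software still spreads the writes across the flash.

    # Toy illustration of wear leveling, not real SSD firmware.
    # A controller keeps a logical-to-physical map and steers each
    # rewrite of a logical sector to the least-worn free physical block.

    class ToyWearLeveler:
        def __init__(self, num_physical_blocks):
            self.erase_counts = [0] * num_physical_blocks    # wear accumulated per physical block
            self.logical_to_physical = {}                    # logical sector -> physical block
            self.free_blocks = set(range(num_physical_blocks))

        def write(self, logical_sector, data):
            # Steer the write to the least-worn free block instead of overwriting in place.
            # (The data itself isn't tracked in this toy.)
            target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
            self.free_blocks.remove(target)

            # The block holding the old copy goes back into the free pool
            # with one more erase cycle on it.
            old = self.logical_to_physical.get(logical_sector)
            if old is not None:
                self.erase_counts[old] += 1
                self.free_blocks.add(old)

            self.logical_to_physical[logical_sector] = target
            return target  # where the data physically landed this time

    ssd = ToyWearLeveler(num_physical_blocks=8)
    for _ in range(5):
        print("logical sector 0 stored in physical block", ssd.write(0, b"same data"))

Run it and the same logical sector lands on a different physical block each time, which is the whole trick.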
Technically, solid state drives don’t use transistors. Each memory cell is kinda like a MOSFET-type transistor (MOSFET = metal oxide semiconductor field effect transistor), but the cell has an extra “floating” gate that is not present on a normal MOSFET. If you are trying to describe that to someone who doesn’t know much about electronics, though, I can see why you would just call it a transistor. It’s a bit more correct to say it uses “transistor-like” technology.
Yes. The drive looks like a standard mechanical hard drive from the interface’s viewpoint. You can run pretty much any operating system on one. I’ve used Windows, Linux, and even DOS on solid state drives. Our industrial computers boot to FreeDOS, then use a DOS program to boot into a custom operating system.
Take a look at www.smartmodular.com. They make SSDs (among other things). They make ones with a form factor exactly like a standard hard drive, as well as others on PC boards that can be mounted in other products.
Apple’s MacBook Air has a solid state “drive”. Calling them “drives”, btw, is like calling a calculator an electronic slide rule. But I guess that’s just the way of things.
Again I am going to question this. Most providers get around this by overprovisioning and using the fake-out techniques you are talking about. Enterprise-class devices I have seen are typically rated for 10 years at a very high sustained throughput. Compare that to the MTBF numbers of a typical multi-platter hard drive, and you’ll see that you’re not really coming out behind. Couple that with the fact that the chipsets keep track of which areas of the flash are close to their maximum write count and move data off of those areas, and you have some very solidly reliable storage. I think a lot of this problem is that people are getting caught up in theoretical limitations when they are easily worked around. If you go with a “budget” provider you may run into issues, but with enterprise-class devices you should not have longevity problems unless you’re not in the habit of replacing your hardware on any sort of a sane schedule.
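To put some rough numbers behind that kind of rating, here’s a back-of-the-envelope lifetime estimate in Python. Every figure in it (capacity, overprovisioning, cycle count, write amplification, workload) is an assumption chosen for illustration, not a spec from any vendor.

    # Back-of-the-envelope drive lifetime estimate. Every number here is an
    # assumption picked for illustration, not a spec for any real product.

    capacity_gb = 200             # user-visible capacity (assumed)
    overprovision = 0.28          # hidden spare flash the controller rotates through (assumed)
    cycles_per_cell = 10_000      # program/erase cycles each cell survives (assumed)
    write_amplification = 2.0     # internal writes per host write (assumed)
    host_writes_gb_per_day = 500  # a very write-heavy workload (assumed)

    total_flash_gb = capacity_gb * (1 + overprovision)
    lifetime_writes_gb = total_flash_gb * cycles_per_cell / write_amplification
    lifetime_years = lifetime_writes_gb / host_writes_gb_per_day / 365

    print(f"Estimated lifetime at that workload: about {lifetime_years:.0f} years")

With those made-up numbers you get several years even under constant heavy writing; at a gentler desktop write rate the same arithmetic stretches well past the useful life of the rest of the machine.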
Two of my four current rigs have 64 GB SSDs and are noticeably faster than standard drives. All are running Win7 x64.
There are still some open issues, though. For example, there is a division of opinion about whether or not to keep the system page file on an SSD, as this is presumed to decrease its life span. But I’m not sure I believe that.
There is also the issue of defragmenting. Most of what I’ve read says it’s not necessary, but since I don’t know enough about latency between blocks during a sequential read, I can’t really say for myself.
What I hear regarding SSD life is that each cell can only be written to so many times. Thus, the rapid write and rewrite activity of a page file will wear it out more quickly than everyday file access. The drives are designed so that blocks that go bad stop being used by the computer; your whole drive will probably not go bad, but the usable space on it may go down over time.
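Here is a tiny toy model in Python of that “usable space goes down” behavior, just to show the bookkeeping. The block size and write limit are made-up numbers, and a real controller would also be wear-leveling so no single block gets hammered like this.

    # Toy model of block retirement: once a block exceeds its assumed
    # write limit it is marked bad and no longer counted as usable space.

    BLOCK_SIZE_KB = 512
    WRITE_LIMIT = 3              # absurdly low limit, just to make the effect visible

    write_counts = [0] * 16      # a drive with 16 flash blocks
    bad_blocks = set()

    def write_block(index):
        if index in bad_blocks:
            raise IOError("block has been retired")
        write_counts[index] += 1
        if write_counts[index] >= WRITE_LIMIT:
            bad_blocks.add(index)    # controller quietly stops using this block

    def usable_kb():
        return (len(write_counts) - len(bad_blocks)) * BLOCK_SIZE_KB

    print("usable space before:", usable_kb(), "KB")
    for block in range(4):           # wear out a few blocks
        for _ in range(WRITE_LIMIT):
            write_block(block)
    print("usable space after: ", usable_kb(), "KB")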
Under normal use, it’s probably not enough of an issue to worry about. If your system is constantly thrashing the page file though it might become an issue in the long term.
Solid state drives don’t have to physically move a read/write head during a track-to-track seek. The latencies are small enough that they really aren’t worth worrying about. Defragmenting really isn’t going to help your performance any.
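For a sense of scale, here’s a quick comparison using assumed ballpark latencies (round figures, not measurements of any particular drive). The seek plus rotational wait is the cost fragmentation makes a mechanical drive pay over and over, and it simply isn’t there on flash.

    # Rough per-access latency comparison; ballpark figures only,
    # not measurements of any particular drive.

    hdd_seek_ms = 9.0                      # assumed average head seek on a 7200 rpm desktop drive
    hdd_rotation_ms = (60_000 / 7200) / 2  # average half-rotation wait, about 4.2 ms
    ssd_access_ms = 0.1                    # assumed typical flash random-read latency

    print(f"HDD random access: about {hdd_seek_ms + hdd_rotation_ms:.1f} ms")
    print(f"SSD random access: about {ssd_access_ms:.1f} ms")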
True, but the drives are designed for a normal level of use and for most folks this isn’t an issue. If you are really thrashing the page file or if you have an application that does a tremendous amount of disk writes you might have an issue. Otherwise it’s not worth worrying about.