My computer has a solid-state disc drive. It stopped working a week or so ago. The IT guy at work wasn’t optimistic, but sent it off to a data recovery place. Mr. IT Guy thought the $1,000-$2,000 they quoted was more than it would be worth. Actually, ten times that would still be worth it.
Anyway, the data recovery place said the system area was severely corrupted and that they don’t have any routines they can use on it, but that they’re always developing new techniques.
I thought the SSDD technology was supposed to be more reliable, but I guess not. Any advice on what I might try?
Really? I’ve always heard the chief downside to SSDDs was that they’re less reliable. It’s effectively Flash memory and has a limited number of write/erase cycles before the data corrupts.
It’s true that SSDs are more reliable, but only physically; data corruption can still occur for many reasons - power outage, virus, bad software, and so on, even bad drive firmware (the software that runs the controller chip; hard drives have that too).
I’d think that if only the system area is corrupted, the data itself is still intact and recoverable. In fact, according to my link, the system area basically tells the drive firmware the parameters of the drive, so why not copy it over from a good drive of the same model? There might be some problems (it includes the defect list, for example), but the link says it is possible.
This is not normally an issue, though; some simple calculations show why, especially once you consider that drives implement wear leveling. Writing 1 GB a day to a measly 32 GB drive rated for 10,000 write cycles per cell would take over 800 years to wear it out; even at 10 GB a day it’s still over 80 years. Scaled up to a 500 GB drive, that’s 156.25 GB per day, or 1.56 TB a day for about 8 years - only things like heavy video editing and database servers write that much. Also, wear failure shows up when writing data, and it becomes apparent immediately: drives do not just blindly write data and assume it is fine; they read it back afterwards to verify that it was written properly. You don’t discover the problem only when you later try to read the data. On top of that, drives use sophisticated error correction (even more so hard drives, which wouldn’t work at today’s densities without ECC - yes, every single read has to be error-corrected; think about that), so a single bad memory cell, or even a byte or three, won’t garble a whole sector.
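Here’s a quick back-of-the-envelope version of that calculation, as a sketch - it assumes perfect wear leveling and the 10,000-cycle rating mentioned above, so read it as a rough upper bound rather than a guarantee:

```python
# Rough wear-out estimate under ideal wear leveling.
def years_to_wear_out(capacity_gb, writes_per_day_gb, pe_cycles=10_000):
    total_writable_gb = capacity_gb * pe_cycles   # total writes the flash can absorb
    return total_writable_gb / writes_per_day_gb / 365.0

print(years_to_wear_out(32, 1))        # ~876 years: 32 GB drive, 1 GB/day
print(years_to_wear_out(32, 10))       # ~88 years: 32 GB drive, 10 GB/day
print(years_to_wear_out(500, 156.25))  # ~88 years: 500 GB drive, 156.25 GB/day
print(years_to_wear_out(500, 1562.5))  # ~9 years: 500 GB drive, 1.56 TB/day
```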
Also, before you mention it, data retention time isn’t an issue either when you consider this paper. Note that a 10-year retention time at 85 °C (a typical absolute maximum temperature) corresponds to much, much longer at 25-35 °C (typical operating temperatures); going the other way, that’s how you can simulate thousands of years at relatively moderate temperatures in a day or less of accelerated testing.
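The temperature scaling comes from an Arrhenius model. Here’s a minimal sketch of it; the 1.1 eV activation energy is my assumption (a commonly quoted ballpark for flash charge loss, but it varies by process), so treat the output as an order-of-magnitude figure:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    """Arrhenius acceleration factor between a high stress temperature and a
    lower use temperature. ea_ev is an assumed activation energy."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# 10 years demonstrated at 85 C maps to very roughly this many years at 30 C:
print(10 * acceleration_factor(30, 85))   # on the order of thousands of years
```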
No, assuming you went through all the regular protocols, unfortunately you’re pretty much boned. Drives (of any type) are like butterflies and can die at a moment’s notice. SSDs are more resistant to physical shocks, but in a desktop that’s not really all that big a concern. Over time they are just about as flaky as platter drives, reliability-wise.
As a side note, did you try slaving it to an existing working system to see if that system even recognizes it as a slave drive?
In addition to the issue of a limited number of flash write cycles, there’s also the issue of the firmware that runs the drive. Since SSDs are still relatively new, and there’s no real barrier to entry, everybody and their cousin is building them, often with buggy firmware. Sometimes this firmware causes a partial or complete loss of data.
Back in the early days, a number of SSDs stopped acting like drives and changed the name they reported to the OS to “Yatapdong Barefoot”.
Do a search for “<your brand> <your model> bricked” to see how common it is.
If the data recovery place didn’t remove any of the “void if removed” seals, you should be able to get the manufacturer to replace it (if they’re still in business) under warranty. That doesn’t do a thing for your data, though.
Some types of bricking are recoverable by installing new(er)* firmware. I’d suggest asking in the support forum of your SSD’s manufacturer to see if they have any advice.
Note: Don’t just grab the latest firmware and try that - sometimes the data layout changes, and a newer version will result in the drive being wiped. This is called a “destructive format”. An older version that is still newer than what’s on your drive now might not involve a destructive format. That’s why I suggest you ask in the manufacturer’s forum.
Here is an interesting article for those who claim that SSDs are “flaky” and unreliable; basically, SSDs generally ARE more reliable, and many of the problems that do occur are due not to the technology but to bad firmware, as I mentioned (hard drive failure rates are also much higher than the manufacturers report, and enterprise drives aren’t any more reliable). It also rehashes the other things I said, like write endurance (which doesn’t matter at all unless you go decades or centuries without upgrading, or have unusual usage patterns).
Nope, see my previous posts. (Unless you’re a hard-core video editor? Then I’d say it could be an issue, but even then drives can survive terabytes of writes per day for years, and larger drives do better since they can take more total written bytes.)
It’s been years, but as I recall, the concern people had when the drives first showed up was Windows. With its swaps and caches, it’s constantly re-writing the same area of the drive, so it could wear out the sectors with Windows while the rest of the drive would be fine.
It was never a worry of mine - I tend to upgrade every couple of years. But then I’ve yet to buy a laptop with an SSDD, as they aren’t typically in mid-range laptops.
This depends greatly on the particular flash chips used (mostly feature size and SLC vs. MLC). Other factors are how much overprovisioning the SSD has (real capacity vs. reported capacity) and any firmware problems. I’ve heard of firmware issues that caused wear leveling to not work properly for certain blocks / data.
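To make the over-provisioning idea concrete, here’s a small illustration - the capacities are hypothetical, not from any particular drive. The spare space is simply the raw flash the controller keeps in reserve beyond what it reports to the OS:

```python
# Hypothetical example: a drive built from 256 GiB of raw NAND
# but advertised to the OS as 240 GB.
raw_gb = 256 * 1024**3 / 1e9          # raw flash, GiB converted to decimal GB
advertised_gb = 240.0
spare_fraction = (raw_gb - advertised_gb) / advertised_gb
print(f"{spare_fraction:.1%} over-provisioning")   # roughly 14.5%
```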
Of interest is this ZDNet article from September 2012, in which 17% of respondents had an SSD fail within six months of installing it.
I’m sure most of those problems are firmware-related, though; the basic technology isn’t the problem. (The article also suggests only getting drives from reputable manufacturers like Intel, which has a less than 1% failure rate - or lower, since that figure includes returns of drives that hadn’t actually failed.) The same is true of hard drives; remember the IBM “Deathstar” fiasco? Indeed, the biggest problem with SSDs is that just about anyone can take some prepackaged flash chips, slap them together with a controller, and sell the result, whereas making hard drives requires a sophisticated facility - as the number of SSD brands versus hard drive brands suggests.
Actually, the OS has nothing whatsoever to do with where the data is physically stored on an SSD. The drive addresses it sees are mapped to physical flash cells by the drive’s firmware.
Most firmware will rotate data among those cells so they all experience about the same amount of wear. SSDs will usually have more physical storage than advertised (on the label and to the OS) so that they can maintain the advertised capacity even after some of the physical cells have died, and to provide scratch space for the data rotation I mentioned earlier.
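A toy sketch of that mapping idea (this is not how any real controller firmware works - real firmware deals with pages, erase blocks, garbage collection, ECC and bad-block lists - it just illustrates why the OS rewriting “one spot” doesn’t hammer one physical cell):

```python
# Toy logical-to-physical mapping with naive wear leveling.
class ToySSD:
    def __init__(self, logical_blocks, physical_blocks):
        assert physical_blocks > logical_blocks    # the extra blocks are the over-provisioning
        self.mapping = {}                          # logical block -> physical block
        self.erase_counts = [0] * physical_blocks  # wear per physical block
        self.data = [None] * physical_blocks

    def write(self, logical, payload):
        in_use = set(self.mapping.values())
        # Redirect the write to the least-worn physical block that is free.
        target = min((p for p in range(len(self.data)) if p not in in_use),
                     key=lambda p: self.erase_counts[p])
        old = self.mapping.get(logical)
        if old is not None:
            self.erase_counts[old] += 1            # old copy is invalidated and erased
            self.data[old] = None
        self.mapping[logical] = target
        self.data[target] = payload

    def read(self, logical):
        return self.data[self.mapping[logical]]

ssd = ToySSD(logical_blocks=8, physical_blocks=10)
for i in range(1000):
    ssd.write(0, f"swap file, revision {i}")       # the OS keeps rewriting "one" address
print(max(ssd.erase_counts))                       # ~100, not 1000: wear is spread around
```

Real firmware also has to cope with flash being erased in large blocks but written in smaller pages, which is where garbage collection and that over-provisioned space earn their keep.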
Finally, you might want to take a look at a product called SpinRite. It’s supposed to be able to work miracles on dead or dying drives, both solid state and the traditional platter type.
There’s a chart out there that gave failure rates for various brands. It’s now several years old, so it’s no longer accurate or relevant, but failure rates ranged from a fraction of a percent (Intel) to several percent (OCZ being the worst). That’s a huge difference.
Hard drives have been around for… how many years? 40? That’s a fairly mature technology. SSDs have had a quarter of that, and the price/performance/capacity race is a lot more competitive now. Back in the ’80s, an HDD was expensive and very few people had one - very few. Most people who even had a computer would have used floppies or maybe even a tape drive.
I’ve also had HDDs fail on me about once every decade. Had a two-year-old 500 GB drive die on me while I was finishing up school. Luckily, I had a NAS set up, and that’s where I did all my saves.
Anyway, back to the OP’s question… if the drive is still accessible, even if the system data is corrupted, the other data should be salvageable. This really should be no different from recovering data off an SD or CF card. I’ve had both of those fail on me, and I was able to recover some, if not most, of the data.
If the controller software for the drive itself is corrupted, they might be able to reflash the firmware (as mentioned earlier). If they can’t unbrick it, then you are out of luck. On a normal HDD you could swap the controller boards and hope it works, but on most of the SSDs I have seen, the controller and the data are on one board.
Maybe something more extreme, like having someone pull the controller’s flash chip off and solder on a new one, then reflash it. I’ve never tried it or heard of it being done, though.
I’m going to go ahead and be the jerk of the thread.
It is just a solid state drive (SSD). Since it physically has no disk, there is no disk in the name.
It is too late for the OP, but let it be a lesson to everyone else… backup, backup, backup. Onsite and offsite.
The market and methodologies for recovering disk drives are quite mature, but unfortunately they are still developing for SSDs. I love mine, but I know the dangers.
Stupid, basic question: why does data corrupt? It’s not like a physical matrix (disk) where the magnetic orientation begins to become unresponsive (that’s a WAG, and I don’t understand why that happens either).
Indeed. I saw the initialism in the title and wondered how the OP got hold of a Single-Sided Double Density floppy disk… and how they got it to survive this long, since that bizarre combination of sidedness and data density would have been a weird non-standard layout dating back to before the MS-DOS 3.2 days.
Indeed. My only SSD is an Intel X-25 that I’m using primarily as a boot volume, with swap turned off (and the swap partition relocated to the 1 terabyte rotary media hard drive I have as the second drive), so there’s not really that much writing and no files I would miss. I don’t back up, simply because there’s nothing on that volume I can’t restore from install media (OS, a few apps, etc.). But I’m working on building a household Network Attached Storage (NAS) device, and I’ll start backing up to that once it’s online, and everything will be tasty and golden.
The rotary hard drives I’ve had fail have gone south because they wouldn’t spin back up after having been turned off (mechanical failure, probably in the spindle bearings or the drive motor); because of loss of tracking (read failure or mechanical misalignment), so that the drive no longer knows where the data is on the platter; or because of controller board failure (a non-mechanical cause of the other two, which would also make the drive fail to register with the computer even though it’s still mechanically sound).
Controller electronics failure would be the common cause between rotary magnetic drives and SSDs. The failure modes may be similar (the system just doesn’t see the drive any more) or significantly different (loss of tracking only applies to rotary drives; the rough equivalent in an SSD would be loss of the mapping between sector addresses and flash memory cell addresses).
Preach it, Hermitian. Preach it! I’m shouting out an AMEN to backups.
At my work I back up our production database to a site that’s 7 miles away and another site that’s 400 miles away. It’s updated every 15 minutes. I also do a data dump every night and send that file to two other computers. Then there’s a weekly tape backup just for good measure.
At home I do a file backup and an image backup once a week to an external hard drive. Then about once a month I take that external hard drive to my work and bring home another one. External hard drives are dirt cheap. There’s no reason not to back up your computer frequently.
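For anyone who wants to start small: even a tiny scheduled script is better than nothing. Here’s a minimal sketch; the paths are placeholders, it assumes the external drive is already mounted at the destination path, and it only does a plain file copy, not a disk image:

```python
import shutil
from datetime import date
from pathlib import Path

# Placeholder paths - point these at your own data and your external drive.
SOURCE = Path.home() / "Documents"
DEST_ROOT = Path("/mnt/external_backup")   # e.g. an "E:/Backups" folder on Windows

def weekly_backup():
    dest = DEST_ROOT / f"backup-{date.today().isoformat()}"
    shutil.copytree(SOURCE, dest)          # copies the whole tree into a dated folder
    print(f"Backed up {SOURCE} to {dest}")

if __name__ == "__main__":
    weekly_backup()   # schedule this weekly with Task Scheduler or cron
```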