Too TRIM? When SSD Data Recovery is Impossible

Discussion in 'Reviews and Articles (Archived)' started by Rob Williams, Mar 5, 2010.

  1. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    12,080
    1
    Jan 12, 2005
    Atlantic Canada
    It goes without saying that solid-state drives are well worth the investment to give your PC some responsiveness, but with all the benefits they offer, there's one lesser-known issue that we'll talk about here. That issue is simple: as soon as you delete a file on a TRIM-enabled SSD, the data is gone, for good.

    You can read our full report here and discuss it here.
     
  2. Optix

    Optix Basket Chassis Staff Member

    1,514
    0
    Dec 15, 2009
    New Brunswick, Canada

    While I don't own an SSD, and I can't see this being an issue for me (I've always considered anything emptied from my recycle bin to be lost forever, and I'm careful enough with my work), it was really neat to read this. I had no idea that TRIM had this shortcoming. Well done!
     
  3. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    Wow.

    I saw a new post in this thread, so I clicked on the thread from the main forum index, then minimized the browser to deal with something else while it loaded. I ended up forgetting about it, so the next time I opened up the browser, and saw that dude staring me in the face, I roared laughing. Definitely caught me off guard. Good stuff.

    Thanks for the compliments also ;-)
     
  4. Optix

    Optix Basket Chassis Staff Member

    Brightening your day one stupid Google searched image at a time. :D
     
  5. Unregistered

    Unregistered Guest

    Sounds perfect.

    Another reason to upgrade to an SSD!
     
  6. Unregistered

    Unregistered Guest

    Nilfs

    My response to this is two-fold:

    1) Back up your data to protect against accidental deletion; and

    2) Use Linux with nilfs: http://www.nilfs.org/en/

    nilfs, being a log-structured filesystem, is almost like a version control system used in software engineering, except that it's done for the entire filesystem. So if you nuke a file, you can simply revert the change as if it never happened.

    Nilfs is also being optimized specifically for SSDs in some of its lower-level routines, so you should see good performance on SSDs once it's stable. It's still in development as of now, but with nilfs on a TRIM drive, you can still issue the command rm -rf / (as root) and recover from it using an out-of-band system (just mount the nilfs drive and revert).

    Even on a mechanical drive, using nilfs is better than relying on random chance with deleted files: even a very good undelete program will not be able to recover deleted files if you've done any serious amount of disk writes after you deleted them. A sufficiently high number of writes has a very high probability of mapping a block or two into the places that have just been freed up, making your data unrecoverable unless you go to the electromagnetic experts referred to in the article.

    Nilfs, on the other hand, will "fill up" your disk with filesystem revisions, and only start deleting the oldest revisions after the disk is totally full. Obviously, the log structure can go deeper depending on what percentage of your disk is full: if you write over a 1GB file with totally new data ten times, that will occupy at least 10GB of space with logs (previous versions). But the more data your disk has to keep track of in the current version, the less space is available for those previous versions; the oldest files will start getting kicked off the disk (permanently deleted) to make room for files in the current version.
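    The idea can be sketched in a few lines of Python. This is a toy model only: real nilfs versions data at the block level and exposes checkpoints, not whole-file copies, and the capacity figure here is made up.

```python
class LogStore:
    """Toy log-structured store: writes append, old revisions survive."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.log = []  # (name, data) revisions, oldest first

    def used(self):
        return sum(len(data) for _, data in self.log)

    def write(self, name, data):
        self.log.append((name, data))       # append-only: nothing is overwritten
        while self.used() > self.capacity:  # full: reclaim the oldest revisions
            self.log.pop(0)

    def read(self, name, revision=-1):
        """revision=-1 is the current version, -2 the one before it, etc."""
        versions = [d for n, d in self.log if n == name]
        return versions[revision] if versions else None

store = LogStore(capacity_bytes=64)
store.write("a.txt", b"version one")
store.write("a.txt", b"version two")   # an "overwrite" just appends
assert store.read("a.txt") == b"version two"
assert store.read("a.txt", revision=-2) == b"version one"  # revert is trivial
```

    Deleting the file would work the same way: the delete is just another log entry, and the old revision stays readable until the store is full enough that it gets reclaimed.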

    Nilfs also makes redundant a lot of popular program features, such as the "backup file" used by many text editors, where the editor appends a ~ character to the filename and saves that as the previously saved copy. Heck, it even makes version control redundant to a degree, except that true software engineering version control systems keep an unlimited amount of history (assuming sufficient disk space) and allow for special features like merging, commit messages, and branches.
     
  7. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    The first of your two recommendations is the easier one ;-)

    Most people aren't going to move over to a different OS just because of a feature like this. It's important to note regardless that file recovery under Linux with most file systems is near-impossible to begin with, and if you do get the data back, it's only going to be after a lot of hard work, and even then you'll likely get broken files with random file names. I am not sure if ext4 is that much different from ext3 to make file recovery easier, but I doubt it.

    As for NILFS, that's still a file system that most would consider bleeding-edge, or experimental, so I'm not sure even I would be willing to trust it on an OS drive. But from what you described, it sounds excellent, and I'd love to give it a test at some point in the future. I'm really curious about the performance though.
     
  8. Ron

    Ron Guest

    Mr. Ron N.

    Rob: Thanks for an excellent article backed by your individual research. But I couldn't tell by looking at the screenshots of the recovery programs you used what was actually going on. In other words, I was unable to recognize or interpret the evidence you provided to support your hypothesis. Of course, that may be because I have never used a data recovery program, but even so, the screens were not very self-explanatory. However, I'll take your word for your conclusions.

    Ron N.
     
  9. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    Hi Ron:

    Half of the problem is that the screenshots weren't helping me out too much. As it appears in them, even on the TRIM drive, the SSD is loaded with data, but really, it isn't. Back in December, when I began doing all this testing, the TRIM-affected drive basically showed as 100% empty (minus the required file system space), but when doing it again for this article, the drive showed up all pink. I'm not quite sure why, but overall, the TRIM drive simply had no data, despite listing a couple of folders and files.

    The thing I pointed out in the article to look at in both screenshots was the "Specific File Documents" count. You can see that on the non-TRIM drive, there were 288,000 files or traces of files found, while on the TRIM drive, there were just 488. And again, despite that, nothing is recoverable.

    Hope this helps.
     
  10. TheFocusElf

    TheFocusElf Obliviot

    48
    0
    Mar 1, 2005
    I have seen similar forays into the flaws of TRIM coupled with SSDs, not just in data loss but in deterioration of speed and access as well. Scary. I did enjoy the Asheron's Call 1 world builder, though! =D
     
  11. Kougar

    Kougar Techgage Staff Staff Member

    2,588
    0
    Mar 6, 2008
    Texas
    There aren't any "flaws" in TRIM that cause deterioration of speed or increased latency... whenever that happens, it's a flaw in the underlying controller itself being exposed.
     
  12. thehailo

    thehailo Obliviot

    8
    0
    Apr 14, 2010
    Well, having a background in IT security, let me fill in some of the gaps. This is kind of from a different angle than the original article, though; it's more about making sure data is actually deleted than about trying to make sure it's recoverable.

    The fundamental principle that makes recovery of data from magnetic media so possible is that it's imperfect. The fact is, modern hard drives are so dense that manufacturers are literally dealing with quantum randomness. This means that just because a head writes over a track doesn't mean it writes over every single magnetic domain in that track.

    Now, to be practical, writing over data once is enough to foil almost anyone. Past that, you need time, skill, money, and an electron microscope. Even then you're digging only for fragments of data at a very slow pace (hundreds of MB per hour). Unless you piss the NSA off, you're good with a single-pass wipe. The best practice today for government/military use is an eight-pass wipe, which pretty much beats even the electron microscope. Some people advocate as much as 35-pass wipes, but this is just overkill given modern recovery technology.

    As far as SSDs go, we won't really know for a few years, until the forensics field matures more. Until someone like the FBI/NSA gets interested, who can take equipment like an electron microscope and huge budgets and analyze them, we won't know to what extent, if any, data may be recovered after being overwritten. In the past few years we've had minor breakthroughs in data recovery from traditional RAM, but this is hit and miss even under the best circumstances, such as having physical access immediately after shutdown and having something on hand to immediately cool the RAM down. Of course, NAND is by design more permanent, and MLC and SLC have fundamental differences, so once again, we just don't know yet.

    Either way, for forensics purposes, recovering overwritten data doesn't come into play much. Most people leave traces everywhere on their system. "Deleted" files, page and hibernation files, web browser history and cache, Windows thumbnails, System Restore, and shadow copies are just a few of the places ripe for finding information users thought was long gone. I prefer EnCase for these purposes, as it's extraordinarily thorough in finding traces of long-lost data.

    One more note here deals with "secure wipe" tools. The fact is, they don't work, at least not on modern equipment. Journaling file systems don't necessarily map directly to the hard drive; that is, just because the file system points to sector 1 doesn't mean that's where the data is. The file system is duplicating bits for redundancy, moving things around for performance, and doing lots of other small tasks to keep things running. On top of this, modern hard drives also obfuscate the physical layer to some extent, quietly moving data around to bypass bad sectors, for example. This all means that no software tool can know exactly where a file is on the physical drive, or where previous traces of it are. The only sure way to securely wipe data is to either encrypt your entire drive or fill the entire drive with data, thus wiping out all deleted files. The Windows cipher command works well for this.
     
  13. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    The lesson here is that it's just easier to get rid of the actual storage device, rather than run these tools... if you are THAT worried. Thermite, anyone?

    I think the best method from both a time and effectiveness perspective is just to do a secure erase, like with the HDD Erase tool. It goes beyond simple writes and rewrites and runs vendor-specific algorithms across the drive in order to truly purge the data. After running HDD Erase on a drive, the only thing you should see on a recovery tool's drive map is: nothing.

    That's not to say that data still couldn't be recovered by those who really take it seriously, but if I were to choose one method for the best chance of purging data, it would be with a secure erase.
     
  14. thehailo

    thehailo Obliviot

    8
    0
    Apr 14, 2010
    That's how I use cipher actually. The tool has a few functions, but cipher /W wipes the free space of a drive with a three-pass wipe; once with 0x00, once with 0xff, and once with random numbers, respectively.

    When I decommission a drive, I typically just format it, attach it as a secondary drive, then run cipher on it. It's quick and easy, and since cipher is built into Windows, it's always on hand and absolutely free.
     
  15. meathead

    meathead Obliviot

    1
    0
    Aug 26, 2010
    This is completely untrue: data that has been written to an SSD is still recoverable until more than enough data has been written to the disk to completely fill it.

    The TRIM command doesn't erase blocks, it lets the drive know that blocks are no longer in use. The drive is then free to reallocate those blocks as it sees fit. Actually overwriting the physical data is unnecessary, and in the context of an SSD with a limited number of writes, wasteful.

    In other words, if a file occupies block 10, a normal file delete wouldn't actually touch that block, just the file system structures that associate it with a file. In a mechanical disk, that's enough, since the file system is responsible for file placement.

    An SSD uses virtual blocks, a lookup table, and wear leveling to increase drive lifetime. Virtual block 10, which is what the OS sees, may be physically located in block 45 on the disk, and any read request for block 10 would get redirected, using the lookup table, to physical block 45. The next time the OS writes to virtual block 10, it might be writing to physical block 38.

    With TRIM, the drive is notified right away that a block is available for reuse. The SSD can then remove that block from the lookup table and return it to the pool of available blocks. It's still physical block 45, but it's no longer associated with virtual block 10. While the data is no longer available to the OS, it still physically exists.

    Any read of virtual block 10 will now return empty data, but the data that was written to virtual block 10 still exists on physical block 45.

    Wear leveling prefers least recently used blocks, so the original data will stay intact until enough data has been written to the disk to completely fill it up. It's even worse than that. SSDs have additional reserved space. An 80GB drive may actually have 90GB of storage space. The extra space exists both to compensate for blocks that have gone bad and to make it easier for the wear leveling algorithm to reduce fragmentation.

    Add to that the fact that wear leveling algorithms are proprietary and not strictly LRU, and the only way to be sure the data is gone is to completely overwrite the disk at least twice.
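    A toy Python model of this mapping makes the point concrete. The sizes are made up, and a simple free-block queue stands in for real, proprietary wear leveling.

```python
class ToySSD:
    """Toy flash translation layer: virtual blocks mapped to physical blocks."""
    def __init__(self, num_blocks):
        self.physical = [b""] * num_blocks   # raw NAND blocks
        self.mapping = {}                    # virtual block -> physical block
        self.free = list(range(num_blocks))  # physical blocks open for reuse

    def write(self, vblock, data):
        pblock = self.free.pop(0)            # wear leveling: take a fresh block
        self.physical[pblock] = data
        self.mapping[vblock] = pblock

    def read(self, vblock):
        pblock = self.mapping.get(vblock)
        return self.physical[pblock] if pblock is not None else b""

    def trim(self, vblock):
        pblock = self.mapping.pop(vblock)    # unmap only: no erase happens here
        self.free.append(pblock)             # back of the reuse queue (LRU-ish)

ssd = ToySSD(num_blocks=4)
ssd.write(10, b"secret")
ssd.trim(10)
assert ssd.read(10) == b""        # the OS sees an empty block...
assert b"secret" in ssd.physical  # ...but the bytes still sit in physical flash
```

    Under this model, the trimmed data only disappears for real once the physical block cycles back to the front of the queue and gets erased for a new write.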
     
  16. Sillious

    Sillious Guest

    Useful Info

    Last night I was working on my newly arrived PC with a Corsair Nova series 64GB SSD, and I ran into questions. I went online searching for terms like NAND, SSD, MLC, and TRIM, and I came across this article. I can safely say I am much more educated on SSD technology now than I was yesterday.

    Thanks for all your time researching and for those who left comments.
     
  17. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    That's not how I understood it. To be fair, there's conflicting information all over the Web, and since I couldn't get much of a straight answer from various companies (conflicting information there, too), I settled on the conclusion that the block is indeed affected after the TRIM command is issued, not just the pages/LBAs. In looking around the Web, it's difficult to find definitive information about this process, but given that no data seemed to be recoverable, even in a broken manner, I settled on the conclusion that TRIM did indeed affect the raw data left in the block.

    It's kind of amazing just how complicated something seemingly simple like this is. It's just hard to get straight information, and when you look at different sources, sometimes different terms are used, complicating the situation further. Even Wikipedia doesn't clear things up that well:

    "In SSDs, a write operation can be done on the page-level, but due to hardware limitations, erase commands always affect entire blocks. As a result, writing data to SSD media is very fast as long as empty pages can be used, but slows down considerably once previously written pages need to be overwritten. Since an erase of the cells in the page is needed before it can be written again, but only entire blocks can be erased, an overwrite will initiate a read-erase-modify-write cycle: the contents of the entire block have to be stored in cache before it is effectively erased on the flash medium, then the overwritten page is modified in the cache so the cached block is up to date, and only then is the entire block (with updated page) written to the flash medium."

    I'll need to get on a new search and try to find this definitive information, because I just haven't seen it posted by an official source in a clear-cut manner anywhere. If TRIM doesn't affect the data, then data recovery -should- be possible. But across the tools we used, all of them showed the drives as completely empty after TRIM did its thing.

    Glad you enjoyed the article - nice choice on that drive :D
     
  18. orthancstone

    orthancstone Obliviot

    19
    0
    Apr 2, 2010
    Texas
    I've got an old drive I'm looking to destroy sometime in the future, and now you've gone and given me ideas :p.

    (Actually I think my current idea is to take it apart physically just to have fun with the parts, then go Office Space on anything that I'm tossing :D )
     
  19. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    Thermite is so much prettier, though! Of course, you risk serious damage to yourself if you're not careful, so I might vote for your second idea as well :D
     
