SSD performance measurement: Best practices

Discussion in 'Reader-Submitted News' started by Psi*, Aug 26, 2013.

  1. Psi*

    Psi* Tech Monkey

    751
    0
    Jun 17, 2009
    Westport, CT
    Wasn't quite sure where to post this, but here's a link to an EDN article about SSD testing. It seems out of place for EDN, but maybe they needed a little filler.

    I have kind of lost touch with how SSDs are being tested here, but thought that this might dredge that up a bit.
     
  2. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    12,080
    1
    Jan 12, 2005
    Atlantic Canada
    How's it going, Psi? I have been meaning to email you for a bit to see how things are going!

    Good article. I agree with the methods mentioned, and I'm glad that Iometer is being recommended for benchmarking (we use it, despite it being more of an enterprise tool, as this article suggests).

    I'll link Robert to this article and thread, although I am not sure he'll gain much from it given he's put a lot of time and effort into figuring out the best methodology for benchmarking SSDs. We don't usually test dirtied drives, I don't believe, but that's become a little less of an issue since TRIM came to be. It's still worth considering, though.
     
  3. Kougar

    Kougar Techgage Staff Staff Member

    2,588
    0
    Mar 6, 2008
    Texas
    Hey Psi, thanks for the find! Always interesting to read up on how others set up and perform their metric testing, given there are multiple ways to go about it.

    Our sequential write test is identical to the first four steps of their preconditioning process, but after that things diverge because we are testing for different things. The focus of their test is to find the steady-state sequential write performance of the entire SSD after writing to it for a sufficient length of time. By comparison, our test SSD also hosts the OS, so we reserve an 8GB portion of it and perform the sequential 128K writes to that specific 8GB region for 5 minutes. This still gives us the sequential write performance of a "well used" SSD, but our sequential write result would not be a substitute for the final "steady state" performance of the SSD itself.
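    For anyone wondering what such a pass looks like mechanically, here is a rough Python sketch of "rewrite a reserved region with 128K sequential writes for a fixed time, then report throughput." This is purely illustrative, not the site's actual Iometer configuration, and it goes through the OS page cache, which a real benchmark would bypass with direct I/O:

```python
# Rough sketch of the sequential-write pass described above: rewrite a
# reserved region with 128K sequential writes for a fixed time, then
# report average throughput. Illustrative only -- NOT the actual
# Iometer setup; a real benchmark would use unbuffered/direct I/O.
import os
import time

def seq_write_test(path, block_size=128 * 1024,
                   region_size=8 * 2**30, duration=300):
    """Sequentially rewrite `region_size` bytes at `path` with
    `block_size` writes until `duration` seconds pass; return MB/s."""
    block = b"\0" * block_size
    written = 0
    start = time.monotonic()
    with open(path, "wb") as f:
        while time.monotonic() - start < duration:
            f.write(block)
            written += block_size
            if f.tell() >= region_size:   # wrap within the reserved region
                f.seek(0)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    return written / elapsed / 1e6
```

    Calling `seq_write_test("testfile.bin")` with the defaults mimics the 8GB / 5-minute pass; shrinking `region_size` and `duration` turns it into a quick smoke test.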

    Steady-state testing isn't something we test for specifically, but it's something we can consider adding to our SSD testing in some fashion. Over the course of our five benchmarking runs, even the 240GB SSDs we test will have been fully written to one to two times over, so our results are more indicative of a dirtied SSD inside an average system. But we don't actually "dirty" an entire SSD by preconditioning it several times over before we begin testing.

    Preconditioning/dirtying an SSD to find its steady-state performance is worth knowing since it provides a worst-case scenario for performance. But even simple garbage collection combined with a few minutes of idle time can quickly (and drastically) change drive performance, and in real-world use, SSDs with aggressive garbage collection algorithms would never actually reach that low steady-state level, simply by virtue of having brief breaks of idle time in which to clean up the NAND.
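    In case anyone is curious how "steady state" gets called in practice: it is typically detected by running full-drive write passes and watching the per-pass throughput settle. A toy sketch of such a check, where the five-sample window and the ±10% band are illustrative thresholds rather than any particular spec:

```python
# Toy steady-state detector: feed it the throughput measured after each
# preconditioning pass; it reports "steady" once the last `window`
# samples all sit within +/- `tolerance` of their own average. The
# window size and tolerance here are illustrative assumptions.
def is_steady(samples, window=5, tolerance=0.10):
    if len(samples) < window:
        return False
    recent = samples[-window:]
    avg = sum(recent) / window
    return all(abs(s - avg) <= tolerance * avg for s in recent)

# A fresh drive starts fast, drops as it's dirtied, then levels off:
history = [520, 430, 310, 255, 248, 252, 250, 249]
print(is_steady(history))   # levels off -> True
```

    The same logic also explains the idle-time effect: a few minutes of garbage collection pushes the next samples back up, and the window stops qualifying as steady.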

    Corsair's Neutron GTX is one such drive. It may not have the most aggressive garbage collection routines of all SSDs, but it is certainly one of the top contenders I know of in that respect. Even a few minutes of idle time would completely change the results on tests with that SSD...

    When testing SSDs I actually have to finish a round of benchmarks consistently, one after the other, because if I stop for any reason in the middle of a run and come back later to finish the round, the results I get back are always higher. My point being: even a few minutes' break is enough for many modern SSDs to bounce back to, or maintain, a much higher consistent performance level, well above the absolute worst-case "steady state" level.
     
  4. Psi*

    Psi* Tech Monkey

    Hey ... I have been hunkered down working! :mad: But making money.:cool:

    I get interested in these articles as I get closer (maybe) to a new build. I guess it causes me to be social again. :eek: I also allowed myself a little dalliance by getting into marine aquariums. I have a 90G tank + 50G refugium + 20G sump. I made it through the really hot weeks without having to get a chiller, but the power bill for house AC was way high. $100s higher!

    Speaking of chillers, the next build is going to include an aquarium chiller. LOL How is that for a segue? I think I will get 1 chiller capable of cooling 4 systems. One pump. One reservoir. With some of those expensive plumbing disconnects so a box can be taken offline for whatever.

    I am liking the ASUS P9X79, which can use an SSD as a buffer (cache) for an HDD. I think I read about that here some months ago, which kind of anchored the feature in my head. As usual I am thinking about the i7-3970X (maximum smoke) and OC capability. But in the past, someone (Rob) has suggested stunningly simpler (read: much less expensive) alternatives that have gotten me through many months of very high performance.

    That was another segue ... for SSDs. I wonder what SSD might be a good pairing with the average 1 to 2 TB WD HDD on this mobo? I reboot about once a month, so loading the OS off an SSD holds little interest for me. I have some 60 GB SSDs that I have tried using as "working" drives: basically I copy a project over to one, run it (and it does run much faster!!), then copy it back to the primary HDD. That last step is what I suck at! Running much faster means at least 1 hour saved for every 5 that the program runs. This is good ... except for the part that I suck at, which really screws things up.
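    Since the copy-back step is the one that keeps getting dropped, a tiny wrapper can make it automatic: copy the project to the fast drive, run the job there, and always copy the results back, even if the run dies partway. A rough sketch, where the paths and the solver command line are made-up placeholders:

```python
# Sketch of automating the "copy to SSD, run, copy back" routine so the
# copy-back can't be forgotten. All paths and the solver command are
# hypothetical placeholders.
import shutil
import subprocess
from pathlib import Path

def run_on_scratch(project_dir, scratch_root, command):
    """Copy `project_dir` to the fast scratch drive, run `command`
    there, then copy everything back beside the original, win or lose."""
    src = Path(project_dir)
    work = Path(scratch_root) / src.name
    if work.exists():
        shutil.rmtree(work)
    shutil.copytree(src, work)
    try:
        subprocess.run(command, cwd=work, check=True)
    finally:
        # The step that keeps getting skipped, now unskippable:
        results = src.with_name(src.name + "_results")
        if results.exists():
            shutil.rmtree(results)
        shutil.copytree(work, results)
        shutil.rmtree(work)   # free the small SSD for the next project
    return results

# e.g. run_on_scratch("C:/projects/pulse_sim", "S:/scratch",
#                     ["solver.exe", "pulse_sim.in"])   # hypothetical names
```

    The `finally` block is the whole point: the results land back on the big HDD whether the solver finishes cleanly or not.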

    I don't even know the alternatives to this mobo. I have seen HDDs that have SSD buffering built in and do wonder how this compares.

    Anyway, I am interested in how you are testing SSDs and need to read Kougar's description a couple more times. Thank you. If you or anyone else has an opinion about SSD buffers, mobos, or CPUs, I would like to hear it.
     
  5. Kougar

    Kougar Techgage Staff Staff Member

    That's the wackiest idea I've heard in a while... and something I haven't even seen on the XS forums before, which is saying something! :D It sounds like a really good idea; my only concern is that a computer will significantly heat up the water, so is an aquarium chiller designed for continuously chilling heavily heated water?

    Regarding SSDs, you'd need to decide whether you're going to use it as a cache drive or as a normal drive. If it's going to be for caching only, then go for an SSD model designed specifically for caching. Just a question, but roughly how much data is an average project? If we're talking something around 30-50GB, then I wonder if a ramdrive might actually be a better option for you.

    Writing 20GB or more of data for an intensive ~5 hour project, then dumping it off before starting over on the next project, would be a pretty heavy workload (and not the best fit for a cache drive), so I'm trying to understand your needs for this system.
     
  6. Psi*

    Psi* Tech Monkey

    hahaha .. kougar ... open your mind up ... a bit.

    I currently have two WC'ed and OC'ed i7-990X (4.3GHz) systems, each equipped with an Nvidia GPGPU (one C2070 and one M2090). They sit next to each other and always have, regardless of the office location. Same mobo, pump, radiators, and fans (nearly so). The current Asus mobos are ~2 1/2 years old; the pumps are ~10 years old, though. They are 120VAC, controlled with a 5VDC relay triggered by the PSU. One of the relays failed recently ... of course it failed open, meaning no pump. Interesting what happens with a minor load and no water flow. Think choo-choo, as in steam engine. So I am thinking about an active cooling system external to the box. The aquarium has given me recent plumbing experience; perhaps I'm overconfident.

    I need more RAM for number crunching, and rather than spend the money to bump one of these up from 24GB, I am considering a new system with 64GB, with some small expectation of getting a not-even-yet-hinted-at PCIe 3 capable GPGPU in the future. PCIe 3 or not, the newer i7s are fast enough versus the 990s to make it a plus, although the increase is not that impressive by itself. I also have another M2090 lady-in-waiting.

    I have a laughably old dual Opteron 290 system with 16 GB of RAM. As a test I put a RAM drive on it and used it across the network, but I am limited by the network speed. I would like fiber, but that is much too much. And the simulation problems have grown to where they routinely run out of (24GB) RAM.

    Therefore, I believe this says to put as much speed in *the* box as I can. An OC'ed i7-3970X with a little extra water chilling shows up near 5GHz. Attractive, that is. The data written to the drive during a run varies depending on the problem. A simulation problem? Imagine literally seeing a voltage impulse travel from a CPU chip, through the CPU socket, and on through the motherboard. Typically some subset, but still a good chunk of this path, depending on what is asked. There are several channels, so crosstalk is also seen. These are what take so much RAM and run for hours/days even on the accelerator. Capturing those time steps every simulated 1 to 50 ps momentarily stops the number crunching. The more RAM the problem uses, the bigger the picture that is taken. A "picture" is probably taken several minutes apart, so this is not one continuous write, but many writes spread out.

    This software can also simulate a lightning strike. The geometry is very simple initially: just a 200' metal pole in the ground with a grounding system. But this too is an example of a large problem. The surrounding plot of land is 1000' x 1000' x 500' (deep). It ran for 7 days on the GPU! Every write of the electric and magnetic fields (two different types of capture) paused the GPU ... but I have to have them. Those pretty animations absolutely fascinated the customer all the way to Germany! They send money then. :) The design work is done from the charts and graphs, which have no hit on the speed of anything. Customers have to be talked out of that money!! :confused:

    This is the what and why. I think an inexpensive under-100GB SSD used as a buffer with the >1TB HDDs is a very worthwhile and cost-effective speed-up. I doubt the fastest SSDs, even striped, would be as useful as the significant bump from just using a good-performing SSD as a buffer.

    Currently I have 'egg pricing for a system with an i7-3970X, Asus P9X79, 64 GB of DDR3-2133 RAM, PSU, and HDD + SSD in a $240 SilverStone FT02 case ... just under $2700. I already have the video cards. Talk is cheap at this point. This is shopped just for performance.
     
    Last edited: Aug 28, 2013
  7. Psi*

    Psi* Tech Monkey

    Here's a chiller thread/review over at XS.

    I am considering one reservoir to dump the water from all four (eventually) systems into, then a radiator to drop off a few extra joules, then the chiller.
     
  8. Rob Williams

    Rob Williams Editor-in-Chief Staff Member Moderator

    *brain_splode*

    Well, the i7-4960X is due out soon (as in, next week if rumors prove true), so that'd be the more attractive choice for sure (well, it's a generation ahead). Though I can't speak to its overclockability, especially not with THAT kind of setup. I'd suspect 4.5GHz on an AIO liquid cooler would be fine, just like with the i7-3970X.

    Also, if you need a lot of RAM, and perhaps a lot of fast RAM, current and upcoming X79 boards that support IV-E should also support at least DDR3-2133 at 8x8GB. I could actually test that configuration with a board I have here, if I can find the time. If memory speed is more important, you might be able to go up to 2400 or even higher (I am really not sure at the moment).

    And damn, I can't get over the kind of calculations you are churning.
     
