hahaha .. kougar ... open your mind up ... a bit.
I currently have 2 WC-ed & OC-ed i7-990X (4.3GHz) systems, each equipped with an Nvidia GPGPU (1 C2070 & 1 M2090). They sit next to each other and always have, regardless of the office location. Same mobo, pump, radiators, fans (nearly so). The current Asus mobos are ~2 1/2 years old; these pumps are ~10 years old though. They are 120VAC, controlled with a 5VDC relay triggered by the PSU. One of the relays failed recently ... and of course it fails open, meaning no pump. Interesting what happens with even a minor load and no water flow. Think choo choo, as in steam engine. So I am thinking about an active cooling system external to the box. The aquarium has given me recent plumbing experience; perhaps that makes me overconfident.
I need more RAM for number crunching, and rather than spend the money to bump one of these up from 24GB, I am considering a new system with 64GB, with some small expectation of getting a "not even hinted at yet" PCIe 3 capable GPGPU in the future. PCIe 3 or not, the newer i7s are enough faster than the 990s to be a plus, although the increase is not that impressive by itself. I also have another M2090 lady-in-waiting.
I have a laughably old dual Opteron 290 system with 16GB RAM. As a test I put a RAM drive on it & used it across the network, but I am limited by the network speed. I would like fiber, but that is much too much. And the simulation problems have grown to where they routinely run out of (24GB) RAM.
Therefore, I believe all this says to put as much speed in *the* box as I can. OC-ed i7-3970X systems with a little extra water chilling show up near 5GHz. Attractive, that is. The data written to the drive during a run varies depending on the problem. A simulation problem? Imagine literally seeing a voltage impulse travel from a CPU chip, thru the CPU socket, and on to and thru the motherboard. Typically some subset of this path, but still a good chunk, depending on what is asked. There are several channels, so crosstalk is also seen. These are what take so much RAM and run for hours/days even on the accelerator. Capturing those time steps every simulated 1 to 50 ps momentarily stops the number crunching. The more RAM the problem uses, the bigger the picture that is taken. Pictures are typically taken several minutes apart, so this is not one continuous write but many writes spread out over the run.
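To make that write pattern concrete, here is a toy sketch. All the numbers here are made up for illustration, not taken from the actual solver: the point is just that the number crunching pauses whenever a field "picture" is captured.

```python
def run_simulation(n_steps=20_000, dt_ps=0.05, snap_interval_steps=500):
    """Toy model of the solver's write pattern. With these made-up
    defaults: 20,000 steps of 0.05 ps each = 1 ns of simulated time,
    with a field 'picture' captured every 25 simulated ps."""
    snapshots = 0
    for step in range(1, n_steps + 1):
        # ... advance the E and H fields one time step (the real work) ...
        if step % snap_interval_steps == 0:
            # number crunching pauses here while the picture is written out
            snapshots += 1
    return snapshots
```

With those defaults that is 40 separate pauses spread over the run; how much RAM the problem uses sets how big each of those burst writes is.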
This software can also simulate a lightning strike. The geometry is very simple initially: just a 200' metal pole in the ground with a grounding system. But this too is an example of a large problem. The surrounding plot of land is 1000' X 1000' X 500' (deep). It ran for 7 days on the GPU! Every write of the electric and magnetic fields (2 different types of capture) paused the GPU ... but I have to have them. Those pretty animations absolutely fascinated the customer all the way to Germany! They send money then.
The design work is done from the charts and graphs, which have no hit on the speed of anything. Customers have to be talked out of that money!!
This is the what and why. I think an inexpensive under-100GB SSD used as a buffer in front of the >1TB HDDs is a very worthwhile and cost-effective speedup. I doubt the fastest SSDs, even striped, would be as useful as the significant bump from simply using a good-performing SSD as a buffer.
Currently I have 'egg pricing for a system e/w an i7-3970X, Asus P9X79, 64GB DDR3-2133 RAM, PSU, & HDD + SSD in a $240 SilverStone FT02 case ... just under $2700. I already have the video cards. Talk is cheap at this point. This is shopped just for performance.