You are indeed correct, the calculations are not nearly so simple. I am by no means an expert, but I can clarify what I can. Sorry for the length of this reply, but it's a very complex subject and you've raised a few things I'd like to try to address properly.
You make a very fair point about blocks going unstable and such, which becomes more of a problem the smaller the NAND flash geometry gets. However, this is exactly what the spare area is for: replacing any blocks that fail. If you recall, I did not include the 6GB of spare area of the 90GB Force drive in my calculations at all, but if I had, by all rights it would increase the net lifespan of the drive. Unlike a hard drive, where spare area just sits in reserve, most SSDs (if not in fact all) use the spare area for wear leveling and for maintaining drive performance from day 1, even though the user cannot access or utilize it directly.
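Just to put rough numbers on that spare area (a back-of-envelope sketch; the actual raw NAND layout varies by model):

    # Rough over-provisioning math for the 90GB Force drive mentioned above.
    # Figures are illustrative, not manufacturer specs.
    usable_gb = 90
    spare_gb = 6
    raw_gb = usable_gb + spare_gb          # ~96GB of physical NAND on board
    op_percent = spare_gb / usable_gb * 100
    print(f"~{raw_gb}GB of physical NAND, {op_percent:.1f}% held in reserve")

So roughly 7% of the user-visible capacity is sitting there purely for wear leveling and block replacement.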
The second part that is oversimplified is the write amplification number; I used 10 just because it was extremely high. I've heard some JMicron controllers were about this high, but I'm not aware of hard evidence. Good quality controllers will attempt to minimize this number and get as close to the magic "1" as possible. Achieving 1x amplification is equivalent to saying there is a perfect 1:1 write ratio, so no overhead in additional writes is incurred.
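To make the relationship concrete (the 10GB figure and the WA values are just examples):

    # Write amplification (WA): bytes physically written to NAND divided
    # by bytes the host asked to write. Values here are examples only.
    host_writes_gb = 10
    for wa in (10.0, 1.1, 1.0):
        nand_writes_gb = host_writes_gb * wa
        print(f"WA {wa:>4}: {host_writes_gb}GB from the host -> "
              f"{nand_writes_gb:.0f}GB actually hitting the NAND")

At a WA of 10, every 10GB the host writes burns through 100GB of NAND endurance; at 1.0, the two are equal.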
Intel claims to have achieved about a 1.1x write amplification number with their G2 controllers. SandForce actually beats this with a write amplification of around 0.60. The way SandForce drives operate is almost tantamount to cheating, but it is an effective and ingenious solution, and if you understand how their controller works you will immediately realize how a number below 1 is possible.
Recall that SandForce controllers achieve their performance through real-time data compression. A user "writes" 10GB to the drive, but in effect some amount less than 10GB is actually written. How much depends on the data itself and how compressible it is; not all data can be compressed further than it already is, such as JPEGs or MP3s. To "compress" a JPEG or MP3 further you would have to delete actual data from it (bit depth, bit rate, that sort of thing), which SandForce controllers don't do. The SF-1200 will compress everything it can but will not remove any data, so it does have limitations. This is where AS SSD's numbers are useful, as they illustrate the worst-case scenario with incompressible data. But anyway, the net result is that the user has written 10GB of data, but depending on the data, 4GB, 8GB, or some other amount less than the original was actually written to the physical NAND. Enough so that the net average Anand Lal Shimpi worked out for his personal system was a write amplification of about 0.60x. Only he has access to the special software that measures actual read, write, and write amplification totals, and SandForce has him under NDA not to hand it out, unfortunately. Otherwise I would generate and quote some specific numbers from my own use.
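Here's a toy sketch of why compression pushes the number below 1. The compression ratios and workload mix are completely made up for illustration; the real controller works at a much finer granularity than whole files:

    # Toy model of SandForce-style compression: host writes are compressed
    # before hitting the NAND, so effective WA can drop below 1.0 when the
    # data is compressible. All ratios below are invented for illustration.
    workload = [
        # (label, GB written by host, fraction remaining after compression)
        ("text/logs",  5.0, 0.3),
        ("JPEGs/MP3s", 4.0, 1.0),   # already compressed: stored ~as-is
        ("documents",  1.0, 0.5),
    ]
    host_total = sum(gb for _, gb, _ in workload)
    nand_total = sum(gb * ratio for _, gb, ratio in workload)
    print(f"Host wrote {host_total:.1f}GB, NAND absorbed {nand_total:.1f}GB")
    print(f"Effective write amplification: {nand_total / host_total:.2f}")

With that invented mix, 10GB of host writes lands as 6GB on the NAND, i.e. an effective WA of 0.60, which happens to match the figure Anand worked out for his system.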
There are some other topics, but back to your original question. Another huge factor overlooked in SSD lifespan is the size of the SSD you buy. For example, if you write 10GB of data a day, a 60GB drive will wear out sooner than a 120GB drive. With wear leveling spreading writes across all the NAND, lifespan scales roughly with capacity divided by daily writes. So a small 40GB boot drive can be expected to wear out much quicker than a 120GB drive, regardless of all else; there is just less NAND to average the writes across.
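The back-of-envelope version of that math, assuming ideal wear leveling (the P/E cycle count and write figures are assumptions for illustration; real endurance varies by NAND process and controller):

    # Naive lifespan estimate: ideal wear leveling spreads writes evenly
    # across all NAND, so lifespan scales with capacity. All inputs are
    # illustrative assumptions, not manufacturer specs.
    pe_cycles = 5000              # assumed MLC program/erase cycles per cell
    daily_writes_gb = 10
    write_amplification = 1.1
    for capacity_gb in (40, 60, 120):
        total_endurance_gb = capacity_gb * pe_cycles
        nand_writes_per_day = daily_writes_gb * write_amplification
        years = total_endurance_gb / nand_writes_per_day / 365
        print(f"{capacity_gb}GB drive: ~{years:.0f} years at "
              f"{daily_writes_gb}GB/day host writes")

Double the capacity and you double the estimated lifespan, all else being equal.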
I suppose this is why SandForce makes much of the ECC, 55 bits per 512 (??). I would like to see a small (or large) SSD written to destruction, or close to it, so that any impact on performance can be assessed.
The SF-1200 offers 24 bits of ECC per 512 bytes; due to the issues with 25nm flash, the SF-2200 got bumped to 55 bits per 512 bytes of ECC protection. NAND lifespan shouldn't affect performance; in fact, if any effect were even noticeable, the drive itself would be wearing out in a matter of months. The error rate shouldn't be constant either, and I do not believe NAND write errors would have more than an extremely brief, negligible effect on performance numbers.
I have considered attempting to wear out an SSD, but there are some concerns with setting up a system to do so. Besides getting an accurate number for just how much data was being pushed through the NAND, we would need several drives built on multiple controllers: Marvell, JMicron, Intel, Samsung, just to name the big ones. (I'd personally love to see a stock JMicron drive compared against Western Digital's SSDs, which pair a JMicron controller with WD's own fully in-house firmware.) Ideally they would need to be of nearly similar capacities as well if any sort of direct comparison was to be made.
That idea has crossed my mind several times before now, and I am still toying with it. The only real hurdle is procuring similar enough drives for a meaningful comparison; otherwise it would just be a one-off, and any conclusions would be specific to the single controller type tested. I don't believe manufacturers would want to send sample units just to have them used up in a one-time endurance test, either.
I don't personally have any worries about SSD longevity as my needs are simple, but for others on a heavier write schedule it might be more problematic.
Plus they're *still* too expensive ...
Intel quotes their latest SSD 510 series drives as being capable of a minimum 5-year lifespan with 20GB a day in writes. They don't mention WHICH capacity this applies to, so to be safe I'd assume it refers to the 240GB drive. Intel made similar claims for the G2 series, although I don't recall the exact numbers off the top of my head.
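For perspective, that claim works out to a straightforward total (simple multiplication of the quoted figures, ignoring write amplification):

    # Total host writes implied by Intel's stated minimum for the 510:
    # 20GB/day sustained for 5 years.
    gb_per_day = 20
    years = 5
    total_tb = gb_per_day * 365 * years / 1000
    print(f"~{total_tb:.1f}TB of host writes over the rated lifespan")

That's roughly 36.5TB of host writes before the drive is even rated to be at risk of wearing out.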
In any event, where truly serious workloads and longevity are concerned, either a very large capacity MLC drive or, preferably, an SLC drive would be best. SLC flash has roughly a 10x lifespan over comparable MLC flash on the same fab process, but on the flip side it does cost more.
As far as price goes... that's how technology rolls. I recall the Raptor/VelociRaptor HDDs carrying some huge price premiums of their own long before SSDs came onto the scene, and that was for a much, much smaller boost in performance. I agree with ya, but at least prices are declining quickly, and I expect we will be seeing the latest, greatest, blazing fast SF-2200 powered drives at prices below current SF-1200 SSDs around the end of summer if NAND prices keep going the way they are.