On 07/21/09 03:00 PM, Nicolas Williams wrote:
> On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
>> But to put this in perspective, you would have to *delete* 20 GBytes
>
> Or overwrite (since overwrites turn into COW writes of new blocks,
> and the old blocks are released if they are not referenced by a snapshot).
>
>> of data a day on a ZFS file system for 5 years (according to Intel) to
>> reach the expected endurance. I don't know many people who delete
>> that much data continuously (I suspect that the satellite data vendors
>> might in their staging servers... not exactly a market for SSDs).
> Don't forget atime updates. If you just read, you're still writing.
> Of course, the writes from atime updates will generally be much smaller
> than the data blocks read, so you might have to read many times more
> data than you write in order to get the same effect.
>
> (Speaking of atime updates, I run my root datasets with atime updates
> disabled. I don't have hard data, but it stands to reason that things
> run faster that way. I also mount filesystems in VMs with atime
> disabled.)
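To put rough numbers on the two points quoted above, here is a
back-of-the-envelope sketch. Every constant in it is an illustrative
assumption (the per-update metadata cost and average file size in
particular are not measured ZFS figures), not a vendor spec:

    # Endurance arithmetic for the figures quoted above.
    # All constants are illustrative assumptions, not vendor specifications.

    GB = 1024 ** 3

    # Richard's figure: 20 GBytes deleted/overwritten per day for 5 years.
    daily_writes = 20 * GB
    years = 5
    total_written = daily_writes * 365 * years
    print(f"Total written over {years} years: {total_written / 1024**4:.1f} TiB")
    # -> about 35.6 TiB: the implied endurance budget of the drive.

    # Nico's point: atime updates mean reads also cost writes, but each
    # update is tiny compared with the data read.  Assume ~512 bytes of
    # metadata written per file read and a 1 MiB average file (assumed):
    atime_write = 512             # bytes written per atime update (assumed)
    avg_file = 1024 ** 2          # bytes read per file (assumed)
    reads_needed = daily_writes / atime_write * avg_file
    print(f"Reads needed for 20 GiB/day of atime writes: "
          f"{reads_needed / 1024**4:.0f} TiB/day "
          f"(write amplification from reads: {atime_write / avg_file:.3%})")

Under those assumptions you would have to read about 40 TiB a day before
atime traffic alone matched the 20 GByte/day write figure.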
You might find this useful:
http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp
It's from a year ago.
In general, though, regardless of how you tune things as described in
the article: I was involved in some destructive testing of NAND flash
memory, both SLC and MLC, in 2007. Our team found that when the device
was used as a boot disk, the volume of writes under current wear-leveling
techniques was low enough that we estimated the device would not fail
during the anticipated service life of the motherboard (5 to 7 years).
Using an SSD as a data drive or storage cache drive is an entirely different
situation. Solaris had been optimized to reduce writes to the boot disk
long before SSDs, in an attempt to maximize performance and reliability.
So, for example, using a CF card as a boot disk with unmodified Solaris,
the writes per 24 hours were so low that Mike and Krister's team calculated
a best-case device life of 779 years and a worst case, under abuse, of
approximately 68,250 hours. The calculations change with device size,
wear-leveling algorithm, etc. Current SSDs are better. But the above
calculations did not take into account random electronics failures (MTBF),
just the failure mode of exhausting the maximum write count.
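The thread does not give the team's actual inputs, so the following is
only a sketch of that kind of lifetime calculation, with placeholder
values (capacity, cycle counts, daily write volume, wear-leveling
efficiency, and write amplification are all assumptions chosen to land
in the same ballpark as the figures above):

    # Write-endurance lifetime estimate; all inputs are placeholders.
    GB = 1024 ** 3

    def lifetime_hours(capacity_bytes, pe_cycles, bytes_per_day,
                       wear_leveling_efficiency=0.9, write_amplification=2.0):
        """Hours until the rated program/erase budget is exhausted."""
        budget = capacity_bytes * pe_cycles * wear_leveling_efficiency
        consumed_per_day = bytes_per_day * write_amplification
        return budget / consumed_per_day * 24

    # Boot-disk case: 4 GB SLC-class card (100k cycles), ~640 MiB/day.
    best = lifetime_hours(4 * GB, 100_000, 640 * 1024**2)
    # Abuse case: heavy sustained writes, poorer leveling, higher WA.
    worst = lifetime_hours(4 * GB, 100_000, 17.5 * GB,
                           wear_leveling_efficiency=0.5,
                           write_amplification=4.0)
    print(f"best case:  {best / (24 * 365):,.0f} years")   # ~789 years
    print(f"worst case: {worst:,.0f} hours")               # ~68,571 hours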
So I really sleep fine at night if the SSD or CF card is a boot disk,
especially with atime disabled. If it's for a cache, well, that might
require some additional testing/modeling/calculation. If it were a
write cache for critical data, I would calculate, and then simply
replace it periodically *before* it fails.
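That replace-before-failure calculation is just the endurance budget
divided by the observed write rate, derated by a safety margin; a
minimal sketch, assuming an example rating and write rate:

    # Scheduled-replacement interval for a write-cache device.
    # All inputs are assumed examples, not a real drive's rating.
    GB = 1024 ** 3

    def replace_after_days(rated_endurance_bytes, bytes_per_day,
                           safety_factor=0.5):
        """Days of service before swapping the device, keeping a margin."""
        return rated_endurance_bytes * safety_factor / bytes_per_day

    # e.g. a drive rated for 36 TiB written, absorbing 100 GiB/day,
    # replaced once half the rated endurance is consumed:
    days = replace_after_days(36 * 1024 * GB, 100 * GB)
    print(f"replace after ~{days:.0f} days (~{days / 30:.0f} months)")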
Neal
> Yes, I'm picking nits; sorry.
>
> Nico