On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
> But to put this in perspective, you would have to *delete* 20 GBytes

Or overwrite (since overwrites turn into COW writes of new blocks,
and the old blocks are released unless a snapshot still refers to them).
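To illustrate (a rough sketch; the pool/dataset name `tank/fs` and the file are hypothetical), an in-place overwrite consumes fresh blocks, and the old blocks stay allocated only while a snapshot pins them:

```shell
# Take a snapshot, then overwrite a file in place.  ZFS writes the new
# data to fresh blocks (copy-on-write); the old blocks cannot be freed
# while the snapshot still references them.
zfs snapshot tank/fs@before
dd if=/dev/urandom of=/tank/fs/bigfile bs=1M count=100 conv=notrunc

# The snapshot's "used" value grows as it pins the overwritten blocks...
zfs get used tank/fs@before

# ...and destroying the snapshot releases them back to the pool.
zfs destroy tank/fs@before
```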

> of data a day on a ZFS file system for 5 years (according to Intel) to
> reach the expected endurance.  I don't know many people who delete
> that much data continuously (I suspect that the satellite data vendors
> might in their staging servers... not exactly a market for SSDs)

Don't forget atime updates.  If you just read, you're still writing.

Of course, the writes from atime updates will generally be far fewer
than the data blocks read, so you'd have to read many times that much
data in order to get the same wear effect.

(Speaking of atime updates, I run my root datasets with atime updates
disabled.  I don't have hard data, but it stands to reason that things
go faster that way.  I also mount filesystems in VMs with atime
disabled.)
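For example (dataset names hypothetical; `rpool/ROOT` stands in for a root dataset), disabling atime on ZFS and on an ordinary mount in a guest:

```shell
# Disable access-time updates on the root dataset; child datasets
# inherit the property unless they override it.
zfs set atime=off rpool/ROOT
zfs get -r atime rpool/ROOT

# In a Linux VM, the same idea via mount options; relatime is a common
# middle ground that updates atime only occasionally.
mount -o remount,noatime /
```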

Yes, I'm picking nits; sorry.

Nico
-- 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss