Gary Mills wrote:
We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
filer.  There's a great deal of churn in e-mail folders, with messages
appearing and being deleted frequently.  I know that ZFS uses copy-on-
write, so that blocks in use are never overwritten, and that deleted
blocks are added to a free list.  This behavior would spread the free
list all over the zpool.  As well, the Netapp uses WAFL, also a
variety of copy-on-write.  The LUNs appear as large files on the
filer.  It won't know which blocks are in use by ZFS.  It would have
to do copy-on-write each time, I suppose.  Do we have a problem here?

The Netapp has a utility that will defragment files on a volume.  It
must put them back into sequential order.  Does ZFS have any concept
of the geometry of its disks?  If so, regular defragmentation on the
Netapp might be a good thing.

If you measure this, then please share your results. There is much
speculation, but little characterization, of the "ills of COW performance."

Should ZFS and the Netapp be using the same blocksize, so that they
cooperate to some extent?


ZFS blocksize is dynamic, a power of 2, with a max size == recordsize.
Writes can also be coalesced. If you want to measure the distribution,
there are a few DTrace scripts that can do so (e.g., iosnoop).
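For example, a minimal sketch (assuming a Solaris/illumos host with DTrace and root privileges; the pool/dataset name below is hypothetical):

```shell
# Power-of-2 histogram of physical I/O sizes hitting the disks,
# using the DTrace io provider (press Ctrl-C to see the summary):
dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'

# WAFL uses 4 KB blocks; capping recordsize keeps new ZFS writes
# aligned with the filer's block size. Dataset name is hypothetical,
# and the change only affects files written afterwards:
zfs set recordsize=4k tank/imap-spool
```

Whether a smaller recordsize actually helps depends on the measured distribution; for mostly-small IMAP messages the dynamic blocksize may already be near the WAFL block size.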

I did a POC for a large e-mail server over ZFS earlier this year.  We could
handle more than 250,000 users on a T5120 message store server using
decent storage (lots of spindles). Since IMAP presents a uniquely
demanding I/O workload, we were very pleased with
how well ZFS worked.  But low-latency storage is key to sustaining
such large workloads.
-- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss