Hi Everyone,
I hope this is the right forum for this question. A customer is using
a Thumper as an NFS file server providing the mail store for multiple
Dovecot email servers. They find that when a zpool is freshly created
and populated with mailboxes, even to 80-90% of capacity, performance
is fine for the users, and backups and scrubs of the roughly 4TB of
data take a few hours. There are around 100 file systems.

After running for a couple of months, though, the zpool seems to
become "fragmented": backups take 72 hours and a scrub takes about
180 hours. They are running mirrors of 500GB disks, with about 5TB
usable per pool. Being a mail store, the reads and writes are small
and random; setting recordsize to 8k improved performance
dramatically. The backup application is Amanda. Once backups become
too tedious, the remedy is to replicate the pool and start over, and
things are fast again for a while.
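For context, the "replicate and start over" step is essentially a
whole-pool copy via zfs send/receive. A minimal sketch of what that
looks like (the pool names mailpool and newpool are made up for
illustration, not the customer's actual names):

```shell
# Take a recursive snapshot of every file system in the old pool,
# then stream the entire hierarchy into a freshly created pool.
# Writing into the new pool lays the data out contiguously again.
zfs snapshot -r mailpool@migrate
zfs send -R mailpool@migrate | zfs receive -F -d newpool
```

After cutover, the NFS shares are pointed at the new pool and the old
one is destroyed and reused for the next cycle.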
Is this expected behavior given the workload (email - small, random
reads and writes)? Are there recommended system/ZFS/NFS
configurations that mitigate this kind of fragmentation? Are there
best practices for structuring backups so they avoid a full directory
walk?
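On that last question, one approach that avoids a per-file directory
walk entirely is snapshot-based incremental backup with zfs send -i,
which streams only the blocks changed between two snapshots. A sketch,
again with hypothetical pool and path names:

```shell
# Initial full backup: recursive snapshot, then a full send stream.
zfs snapshot -r mailpool@backup-1
zfs send -R mailpool@backup-1 > /backup/mailpool-full.zfs

# Subsequent backups: send only the delta between the previous and
# current snapshots. This walks changed blocks, not the directory
# tree, so millions of small mail files are never stat'd one by one.
zfs snapshot -r mailpool@backup-2
zfs send -R -i mailpool@backup-1 mailpool@backup-2 > /backup/mailpool-incr.zfs
```

Whether that can replace Amanda here I don't know - it trades
file-level restore granularity for speed - but it sidesteps the
traversal cost entirely.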
Thanks,
bill
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss