[EMAIL PROTECTED] said:
> I feel like we're being hung out to dry here.  I've got 70TB on 9 various
> Solaris 10 u4 servers, with different data sets.  All of these are NFS
> servers.  Two servers have a ton of small files, with a lot of read and
> write updating, and NFS performance on these is abysmal.  ZFS is installed
> on SAN arrays (my first mistake).  I will test by disabling the ZIL, but if
> it turns out the ZIL needs to be on a separate device, we're hosed.

If you're using SAN arrays, you should be in good shape.  I'll echo what
Vincent Fox said: either set zfs_nocacheflush=1 (available in S10U4), or
configure the arrays to ignore cache flush (SYNC_CACHE) requests.
We do the latter here, and it makes a huge difference for NFS clients,
since it effectively puts the ZIL in NVRAM.
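
For reference, here's roughly what the /etc/system tunable looks like
(it takes effect at the next reboot, and is only safe when every pool
device sits behind a non-volatile write cache):

    * Don't send SCSI SYNCHRONIZE CACHE commands to pool devices;
    * assumes all of them are LUNs on battery-backed array cache.
    set zfs:zfs_nocacheflush = 1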

However, I'm also unhappy about having to wait for S10U6 for the separate
ZIL and/or cache-device features of ZFS.  The lack of an NV ZIL on our new
Thumper makes it painfully slow over NFS for workloads that create and
delete large numbers of files.
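
If U6 inherits the current OpenSolaris/Nevada syntax, hanging an NV
device off a pool as a separate intent log should be a one-liner; the
pool and device names here are made up:

    # zpool add tank log c4t0d0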

Here's a question:  Would having the client mount with "-o nocto" have
the same effect (for that particular client) as disabling the ZIL on the
server?  If so, it might be less drastic than losing the ZIL for everyone.

Regards,

Marion


