On Tue, 21 Nov 2006, Joe Little wrote:

> On 11/21/06, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> > Matthew B Sweeney - Sun Microsystems Inc. writes:
> > > Hi
> > > I have an application that uses NFS between a Thumper and a 4600. The
> > > Thumper exports 2 ZFS filesystems that the 4600 uses as an inqueue and
> > > outqueue.
> > >
> > > The machines are connected via a point-to-point 10GE link; all NFS is
> > > done over that link. The NFS performance doing a simple cp from one
> > > partition to the other is well below what I'd expect: 58 MB/s. I've
> > > tried some NFS tweaks, tweaks to the Neterion cards (soft rings etc.),
> > > and tweaks to the TCP stack on both sides, to no avail. Jumbo frames
> > > are enabled and working, which improves performance but doesn't make
> > > it fly.
> > >
> > > I've tested the link with iperf and have been able to sustain 5-6
> > > Gb/s. The local ZFS filesystems (12-disk stripe, 34-disk stripe)
> > > perform very well (450-500 MB/s sustained).
> > >
> > > My research points to disabling the ZIL. So far the only way I've
> > > found to disable the ZIL is through mdb:
> > > echo 'zil_disable/W 1' | mdb -kw. My question is: can I achieve this
> > > setting via a /kernel/drv/zfs.conf or /etc/system parameter?
> >
> > You may set it in /etc/system. We're thinking of renaming
> > the variable to
> >
> >     set zfs_please_corrupt_my_client's_data = 1
> >
> > Just kidding (about the name), but it will corrupt your data.
> >
> > -r
>
> Yes, we've entered this thread multiple times before, where NFS
> basically sucks compared to the relative performance locally. I'm
> waiting, ever so eagerly, for the per-pool (or was it per-FS)
> properties that give finer grained control of the ZIL, named
> "sync_deferred". Where is that by the way?
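For reference, here is what that exchange boils down to in concrete form -
a minimal sketch, assuming a build where zil_disable still exists as a
kernel variable (it is a global tunable that affects every pool on the
host, not a per-pool or per-filesystem property):

    In /etc/system (persistent, takes effect at the next reboot):

        set zfs:zil_disable = 1

    On the running kernel (immediate, lost at reboot):

        # print the current value, then set it
        echo 'zil_disable/D' | mdb -k
        echo 'zil_disable/W 1' | mdb -kw

Either way carries exactly the risk Roch describes: after a server crash
or power loss, NFS clients can be left holding writes they believe were
committed to stable storage.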
Agreed - it sucks - especially for small file use.  Here's a 5,000 ft view
of the performance while unzipping and extracting a tar archive.

First the test is run on a SPARC 280R running Build 51a with dual 900MHz
USIII CPUs and 4 GB of RAM:

$ cp emacs-21.4a.tar.gz /tmp
$ ptime gunzip -c /tmp/emacs-21.4a.tar.gz | tar xf -

real       13.092
user        2.083
sys         0.183

Next, the test is run on the same box in /tmp:

$ ptime gunzip -c /tmp/emacs-21.4a.tar.gz | tar xf -

real        2.983
user        2.038
sys         0.201

Next the test is run on an NFS mount of a ZFS filesystem on a 5-disk raidz
device over a gigabit ethernet interface with only two hosts on the VLAN
(the ZFS server is a dual-socket AMD whitebox with two dual-core 2.2GHz
CPUs):

$ ptime gunzip -c /tmp/emacs-21.4a.tar.gz | tar xf -

real     2:32.667
user        2.410
sys         0.233

Houston - we have a problem.  What OS is the ZFS-based NFS server running?
I can't say, but let's say that it's close to Update 3.

Next we move emacs-21.4a.tar.gz to the NFS server and run the test in the
same filesystem that is NFS mounted to the 280R:

$ ptime gunzip -c /tmp/emacs-21.4a.tar.gz | tar xf -

real        3.365
user        0.880
sys         0.154

No problem there!  ZFS rocks.  NFS/ZFS is a bad combination.

Happy Thanksgiving (to those stateside).

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
           Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
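For anyone who wants to repeat the comparison, the runs above roll up into
a rough Bourne shell sketch like the one below.  The tarball path and the
two target directories (/tmp/untar-test and /mnt/nfstest/untar-test) are
placeholders for illustration, not the paths used in the runs above:

    #!/bin/sh
    # Time a small-file extraction on a local filesystem and on an
    # NFS-mounted ZFS filesystem.  ptime wraps the pipeline via sh -c so
    # the reported "real" time covers gunzip and tar together.
    TARBALL=/tmp/emacs-21.4a.tar.gz
    LOCAL_DIR=/tmp/untar-test            # local filesystem
    NFS_DIR=/mnt/nfstest/untar-test      # NFS mount of a ZFS filesystem

    for dir in $LOCAL_DIR $NFS_DIR; do
            mkdir -p $dir
            cd $dir || exit 1
            echo "=== extracting in $dir ==="
            ptime sh -c "gunzip -c $TARBALL | tar xf -"
    done

The interesting ratio in the numbers above is the NFS run against the
server-local run on the very same ZFS filesystem - roughly 2:33 versus 3.4
seconds - which points at the synchronous semantics NFS imposes (the ZIL
behaviour discussed in the quoted thread) rather than at ZFS itself.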