On Oct 20, 2012, at 3:11 PM, Ivan Voras <ivo...@freebsd.org> wrote:

> On 20 October 2012 13:42, Nikolay Denev <nde...@gmail.com> wrote:
>
>> Here are the results from testing both patches:
>> http://home.totalterror.net/freebsd/nfstest/results.html
>> Both tests ran for about 14 hours (a bit too long, but I wanted to compare
>> different zfs recordsize settings), and both were done right after a fresh
>> reboot. The only noticeable difference seems to be many more context
>> switches with Ivan's patch.
>
> Thank you very much for your extensive testing!
>
> I don't know how to interpret the rise in context switches; as this is
> kernel code, I'd expect no context switches. I hope someone else can
> explain.
>
> But you have also shown that my patch doesn't do any better than Rick's
> even on a fairly large configuration, so I don't think there's value in
> adding the extra complexity, and Rick knows NFS much better than I do.
>
> There are a few other things I'm curious about, though: why does your
> load average spike almost into the twenties, and how come that with 24
> drives in RAID-10 you only push about 600 Mbit/s through the 10 Gbit/s
> Ethernet? Have you tested your drive setup locally (AESNI shouldn't be
> a bottleneck; you should be able to encrypt well into the GByte/s range)
> and the network?
>
> If you have the time, could you repeat the tests with a recent Samba
> server and a CIFS mount on the client side? This is probably not
> important, but I'm just curious how it would perform on your machine.
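For the local drive and network checks, I'm thinking of something along these
lines (the test file is just a placeholder and would need to be larger than RAM
so the ARC doesn't serve the whole read; the CIFS share and user are made up too):

    # dd if=/tank/bigfile of=/dev/null bs=1M      (local read, no NFS in the path)
    # zpool iostat tank 1                         (confirm the reads actually hit the disks)

    goliath# iperf -s
    client#  iperf -c goliath -P 4 -t 30          (raw TCP throughput over the 10G link)

    client#  mount_smbfs //guest@goliath/tank /mnt    (for the Samba/CIFS comparison)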
I've now started this test locally. From previous iozone runs I remember that
the local speeds were much better, but I'll wait for this test to finish so the
comparison is fair.

I still think there is something fishy, though. I have seen cases where I
reached 1000 MB/s over NFS (from the network stats, not the local machine
stats), yet sometimes it is very slow even for a file that is completely in
the ARC. Rick mentioned that this could be due to RPC overhead and network
round-trip time, but earlier in this thread I did a test on the server alone,
mounting the NFS-exported ZFS dataset locally and reading it with "dd":

> To take the network out of the equation I redid the test by mounting the same
> filesystem over NFS on the server:
>
> [18:23]root@goliath:~# mount -t nfs -o
> rw,hard,intr,tcp,nfsv3,rsize=1048576,wsize=1048576
> localhost:/tank/spa_db/undo /mnt
> [18:24]root@goliath:~# dd if=/mnt/data.dbf of=/dev/null bs=1M
> 30720+1 records in
> 30720+1 records out
> 32212262912 bytes transferred in 79.793343 secs (403696120 bytes/sec)
> [18:25]root@goliath:~# dd if=/mnt/data.dbf of=/dev/null bs=1M
> 30720+1 records in
> 30720+1 records out
> 32212262912 bytes transferred in 12.033420 secs (2676900110 bytes/sec)
>
> During the first run I saw several nfsd threads in top, along with dd, and
> again zero disk I/O. There was an increase in memory usage because of the
> double buffering ARC->buffer cache.
> The second run had all of the nfsd threads totally idle and read directly
> from the buffer cache.
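To separate the ZFS read path from the NFS/RPC path, it may be worth running a
direct read of the same file next to the loopback-NFS read and watching the ARC
counters; a rough sketch (the sysctl OIDs are from memory, so double-check them
on the box):

    # dd if=/tank/spa_db/undo/data.dbf of=/dev/null bs=1M     (direct ZFS read of the same file)
    # sysctl kstat.zfs.misc.arcstats.hits \
             kstat.zfs.misc.arcstats.misses \
             kstat.zfs.misc.arcstats.size                     (ARC hit/miss counters and size)

If the direct read runs at ARC speed while the loopback-NFS read does not, the
difference should be RPC/nfsd overhead rather than disk or encryption.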