On Sun, 27 Jun 2010, Rick C. Petty wrote:
> First off, many thanks to Rick Macklem for making NFSv4 possible in FreeBSD! I recently updated my NFS server and clients to v4, but have since noticed significant performance penalties. For instance, when I run "ls a b c" (where a, b, and c are empty directories) on the client, it takes up to 1.87 seconds of wall time, whereas before it always finished in under 0.1 seconds.
>
> If I repeat the test, it takes the same amount of time under v4; under v3, wall time was always under 0.01 seconds for subsequent requests, as if the directory listing were cached.
Weird, I don't see that here. The only thing I can think of is that the experimental client/server will try to do I/O at the size of MAXBSIZE by default, which might be causing a burst of traffic your net interface can't keep up with. (This can be turned down to 32K via the rsize=32768,wsize=32768 mount options. I found this necessary to avoid abysmal performance on some Macs for the Mac OS X port.) The other thing that can really slow it down is if the uid<->login-name (and/or gid<->group-name) mapping is messed up, but this would normally only show up for things like "ls -l". (Beware having multiple password database entries for the same uid, such as "root" and "toor".)
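The duplicate-uid gotcha can be checked mechanically. A minimal sketch (the here-doc sample stands in for the password file; on a real system you would feed it /etc/passwd instead):

```shell
# Print any uid that maps to more than one login name (e.g. root and toor).
# The here-doc below is illustrative sample data, not a real passwd file.
awk -F: '{ if (seen[$3]++) dup[$3] = 1 } END { for (u in dup) print u }' <<'EOF'
root:*:0:0:Charlie &:/root:/bin/csh
toor:*:0:0:Bourne-again Superuser:/root:/bin/sh
daemon:*:1:1:Owner of many system processes:/root:/usr/sbin/nologin
EOF
# prints: 0
```

Any uid printed here is one that nfsuserd cannot map back to a unique login name.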
> If I try to play an h264 video file from the filesystem using mplayer, it often jitters, and seeking introduces pauses of up to a second or so. With NFSv3 it behaved more like the file was on local disk (no noticeable pauses or jitter). Has anyone seen this behavior upon switching to v4, or does anyone have any suggestions for tuning?
>
> Both client and server are running the same GENERIC kernel, 8.1-PRERELEASE as of 2010-May-29. They are connected via gigabit. The v3 and v4 tests were performed on the exact same hardware under the same I/O, CPU, and network loads; all I did was toggle nfsv4_server_enable (and nfsuserd/nfscbd, of course). It seems like a server-side issue, because if I make an NFSv3 client mount to the NFSv4 server and run the same tests, I see only a slight improvement in performance. In both cases, my mount options were "rdirplus,bg,intr,soft" (with "nfsv4" added in the one case, obviously).
I don't recommend using "intr" or "soft" for NFSv4 mounts, but they wouldn't affect performance for trivial tests. You might want to try "nfsv4,rsize=32768,wsize=32768" and see how that works. When you did the NFSv3 mount, did you specify "newnfs" or "nfs" for the file system type? (I'm wondering if you still saw the problem with the regular "nfs" client against the server. Others have had good luck using the server for NFSv3 mounts.)
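To make that concrete, the two mounts would look something like this (server:/export and /mnt are placeholders; on 8.x the experimental client is selected with the "newnfs" file system type):

```sh
# Experimental client, NFSv4, with the I/O size capped at 32K:
mount -t newnfs -o nfsv4,rsize=32768,wsize=32768 server:/export /mnt

# Regular NFSv3 client against the same server, for comparison:
mount -t nfs -o rdirplus,bg server:/export /mnt
```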
> On the server, I have these tunables explicitly set:
>
>   kern.ipc.maxsockbuf=524288
>   vfs.newnfs.issue_delegations=1
>
> On the client, I just have the maxsockbuf setting (this is twice the default value). I'm open to trying other tunables or patches. TIA,
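For reference, assuming both knobs are runtime sysctls on this kernel, the persistent form of those settings would live in /etc/sysctl.conf:

```sh
# /etc/sysctl.conf -- values as quoted in the message above
kern.ipc.maxsockbuf=524288
vfs.newnfs.issue_delegations=1
```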
When I see abysmal NFS performance, it is usually an issue with the underlying transport. Looking at things like "netstat -i" or "netstat -s" might give you a hint? Having said that, the only difference I can think of between the two NFS subsystems that might affect the transport layer is the default I/O size, as noted above.

rick
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
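A sketch of those transport checks on FreeBSD (illustrative commands only; which counters matter is noted in the comments):

```sh
netstat -i          # per-interface counters: nonzero Ierrs/Oerrs/Coll suggest link trouble
netstat -s -p tcp   # protocol stats: watch for retransmitted packets and dropped segments
netstat -m          # mbuf usage: "denied"/"delayed" requests indicate buffer exhaustion
```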