> >>>> By the way, I'm thinking that more frequently hang might related with
> >>>> large read/write block in mount_nfs -r/-w (I use 8192, original is 1024).
> >> Has it even been considered to up these values to something bigger??
> > Read- and write-size of 32768 seems to work optimal for me:
> How did you come to this conclusion? What kind of workload?
To make a short story long ;-) Last year, just after Christmas, I got a new storage system and had an opportunity to replace our Linux NFS server with FreeBSD. I searched the archives for NFS-related tuning information and found some links suggesting the use of TCP rather than UDP and adjusting the r/w sizes. So I NFS-mounted some clients and started to copy back and forth. The December release of the (back then) CURRENT gave some "server not responding" messages, but they appeared less often with r/w sizes of 32768. The copying itself was faster as well. So I upgraded (two or three times) until I had the Feb. 18th, 2004 CURRENT, and the "server not responding" messages almost vanished. Some weeks after that the server went into production and has been rock stable! It went down once, but that was only due to a power outage that lasted a few hours; the longest uptime was 117 days before I took it down for server maintenance. The files are at most a few MB in size (images) and a few KB (thumbnails).

> This is in line with what the graphs suggest:
> Use Laaarrrrrggggeee sizes.

And use tcp as well.

regards
Claus
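P.S. In case it helps, a mount along these lines is roughly what I mean (the server name and paths are just placeholders, substitute your own):

    # TCP transport, 32768-byte read and write sizes
    mount_nfs -T -r 32768 -w 32768 nfsserver:/export/images /images

The -T flag selects TCP, and -r/-w set the read and write sizes; the same options can of course go into fstab so the mount survives a reboot.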