On Wed, 2002-11-06 at 19:52, BigBrother wrote:
> Although the man page says this, I *think* that the communication is
> done like this:
>
>     CLIENT <=> NFSIOD (CLIENT) <=> NFSIOD (SERVER) <=> NFSD
>
> which means that the NFSIODs 'speak' with each other and then pass the
> requests on to NFSD.
>
> Of course, you don't need to have too many NFSIOD on the server. In my
> case I just have 8 nfsiod running on the server and most of them are
> idle; besides, they only take 1.5MB of memory, which I can afford. So I
> think having *some* NFSIOD on the server as well is not a bad idea. Of
> course on the server you should have a lot of NFSD.
>
> In other words, running NFSIOD on the server is not a bad idea.
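For anyone who wants to try it, the daemon counts are set in rc.conf; on
a 4.x box the knobs look something like this (the -n values here are
only examples - check rc.conf(5), nfsd(8) and nfsiod(8) for your
release):

    nfs_server_enable="YES"         # run nfsd on the server
    nfs_server_flags="-u -t -n 8"   # serve UDP and TCP, 8 nfsd threads
    nfs_client_flags="-n 8"         # start 8 nfsiod daemons, as above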
NFSIOD itself is enabled by putting nfs_client_enable="YES" into
rc.conf. Empirical evidence suggests that NFSIOD is not used
server-side: I ran 4 nfsiod daemons on the server and checked their
usage time, and all were 0:00. If the server did any client-side NFS
work it would make a difference, and it may be different under other
OSes. Certainly it does no harm to have them running.

> Also monitor the mbufs on all your machines (especially the server).
> Do a 'netstat -m' from time to time and watch the peak value of mbufs
> and mbuf clusters. If it is close to the maximum, you will suffer from
> mbuf exhaustion, which will eventually make the machine unreachable
> from the network.
>
> You can change the mbuf maximum value before the kernel loads; see
> tuning(7).

That's kern.nmbclusters="<value>" in /boot/loader.conf, if anyone needs
to do this. You have to reboot for this one :-(

> Also, if you have mbuf exhaustion, try using a smaller block size on
> your NFS mounts.

Now this is interesting. I had thought mbuf cluster exhaustion was due
to a high number of connections, although I guess a high number of
connections times a large buffer size would do it too.

Thank you for your response and suggestions vis-a-vis the other NFS
stuff - I managed to get our server talking UDP. The wildcard binding
was the problem, and the -h flag to nfsd fixed it. Network usage graphs
are showing the differential between incoming and outgoing traffic to be
much smaller now, so I would say there was a lot of overhead in there,
as well as retransmissions. I am still playing with buffer sizes, but
chances are the FreeBSD default is best in this case.

Regards

--
Duncan Anker
Senior Systems Administrator
Dark Blue Sea
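P.S. For anyone else chasing mbuf exhaustion, something like this is a
starting point (the nmbclusters value is purely illustrative, and the
exact netstat output varies by release - see tuning(7)):

    # show current/peak/max mbuf cluster usage
    netstat -m | grep 'mbuf clusters'

    # raise the ceiling at the next boot (requires a reboot)
    echo 'kern.nmbclusters="32768"' >> /boot/loader.conf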