On 27/07/2014 17:55, J. Roeleveld wrote:
On 27 July 2014 18:25:24 CEST, "Stefan G. Weichinger" <li...@xunil.at> wrote:
Am 26.07.2014 04:47, schrieb walt:
So, why did the "broken" machine work normally for more than a year
without rpcbind until two days ago? (I suppose because nfs-utils was
updated to 1.3.0 ?)
The real problem here is that I have no idea how NFS works, and each
new version is more complicated because the devs are solving problems
that I don't understand or even know about.
I second your search for understanding ... my various efforts to set up
NFSv4 for sharing stuff in my LAN have also led to unstable behavior and
frustration.
Only last week I attacked this topic again, as I am starting to use
puppet here to manage my systems ... and one part of that might be
sharing /usr/portage via NFSv4. One client host mounts it without a
problem, but the thinkpads don't ... just another example ;-)
An additional factor in my context: I am using systemd ... so there are
other (different?) dependencies at work and services being started.
I'd be happy to get that working in a reliable way. I don't remember
unstable behavior with NFS (v2 back then?) when we used it at a company
I worked for in the 90s.
Stefan
I use NFS for filesharing between all wired systems at home.
Samba is only used for MS Windows and laptops.
A few things I always make sure of:
- One partition per NFS share
- No NFS share is mounted below another one
- I set the version to 3 on the clients
- I use LDAP for the user accounts to ensure the UIDs and GIDs are consistent.
These are generally good recommendations. I'd just like to make a few
observations.
The problems associated with not observing the first constraint (one
filesystem per export) can be alleviated by setting an explicit fsid.
Doing so can also help to avoid stale handles on the client side if the
backing filesystem changes - something that is very useful in a
production environment. Therefore, I tend to start at 1 and increment
with each newly added export. For example:
/export/foo *(async,no_subtree_check,fsid=1)
/export/foo/bar *(async,no_subtree_check,fsid=2)
/export/baz *(async,no_subtree_check,fsid=3)
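On the client side, such an export could be consumed with an fstab entry
along these lines (the hostname "fileserver" and the mountpoint are
illustrative, not part of my actual setup):

fileserver:/export/foo  /mnt/foo  nfs  vers=3  0 0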
If using NFSv3, I'd recommend using "nolock" as a mount option unless
there is a genuine requirement for locks to be co-ordinated. Such locks
are only advisory and are of questionable value. Using nolock simplifies
the requirements on both server and client side, and is beneficial for
performance.
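By way of illustration, a hypothetical fstab entry employing nolock
might look like this (hostname and mountpoint are placeholders):

fileserver:/export/foo  /mnt/foo  nfs  vers=3,nolock  0 0

With nolock in effect, the client handles fcntl/flock calls locally, so
neither lockd nor statd needs to be reachable for the mount to work.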
NFSv3/UDP seems to be limited to a maximum read/write block size of
32768 in Linux, which will be negotiated by default. Using TCP, the
upper bound will be the value of /proc/fs/nfsd/max_block_size on the
server. Its value may be set to 1048576 at the most. NFSv3/TCP is
problematic so I would recommend NFSv4 if TCP is desired as a transport
protocol.
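To illustrate, the block size can be pinned explicitly with the rsize
and wsize mount options, and the server-side ceiling raised before nfsd
is started (the hostname is a placeholder; adjust paths to taste):

# on the server, before starting nfsd
echo 1048576 > /proc/fs/nfsd/max_block_size

# on the client, capping UDP transfers at 32768
mount -t nfs -o vers=3,proto=udp,rsize=32768,wsize=32768 \
    fileserver:/export/foo /mnt/foo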
NFSv4 provides a useful uid/gid mapping feature that is easier to set up
and maintain than nss_ldap.
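A minimal /etc/idmapd.conf - needed on both server and clients for the
NFSv4 id mapping to work - might look like the following. The domain
name here is an assumption; its only requirement is that it matches on
both ends:

[General]
Domain = example.lan

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody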
NFSv4 requires all the exports to be under a single folder tree.
This is a myth:
http://linuxcostablanca.blogspot.co.uk/2012/02/nfsv4-myths-and-legends.html
Exports can be defined and consumed in the same manner as with NFSv3.
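To illustrate: with a sufficiently recent nfs-utils there is no need to
bind-mount everything under a single fsid=0 pseudo-root. An export such
as the earlier example can be mounted directly by its real path (the
hostname is, again, a placeholder):

/export/foo *(async,no_subtree_check,fsid=1)

mount -t nfs4 fileserver:/export/foo /mnt/foo

Absent an explicit fsid=0 export, the kernel constructs the NFSv4
pseudo filesystem automatically.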
I haven't had any issues with this in the past 7+ years, and for the
past 5+ years I have shared portage, distfiles and packages.
/etc/portage is symlinked to an NFS share as well, allowing me to create
binary packages on a single host (inside a chroot), which are then used
to update the different machines.
If anyone wants a more detailed description of my setup, let me know and
I will try to write something up.
Kind regards
Joost
--Kerin