On 01/20/14 19:55, Chris Stankevitz wrote:
> On Mon, Jan 20, 2014 at 9:10 AM, Alan McKinnon <[email protected]> wrote:
>> Most NFS servers in the real world are thus file shares and permit
>> read-only access to all users.
>
> Alan,
>
> Thank you for explaining this in english for me. I am a bit blown away
> that it is taking me so long to figure out that NFS might not be for
> me. However, it is now making sense why everybody, even linux people,
> seem to use SMB.
Indeed. The original use-case for NFS is no longer relevant, whereas
the design of smb *is* what suits most folk.

> My problem:
>
> I have a handful of users on Mac and Linux who want to share some
> files whose content is not secret, but to avoid accidents I would like
> to restrict write access to those with a password. Most users are
> probably UID 1000 on their respective machines. Normally we use git
> for this, but we have 1TB of large binary files and do not need
> versioning. So I thought "problem solved... I'll just make an NFS
> share. From your machines, just open nfs://share/ and when prompted
> for a username/password, just use one I'll supply.

NFS doesn't do passwords. It uses host-based authentication, i.e. it is
the computer that gets validated, not the human user driving the
keyboard. To diverge even further from what you are looking for, it is
the client computer that validates the human user, not the server. The
server has to trust that the client is what it claims to be.

> So this little plan of mine has hit several problems:
>
> 1. Accessing an NFS share from linux is not as simple as "Please open
> nfs://foo/bar". At least not on XFCE4 (see my post
> http://mail.xfce.org/pipermail/xfce/2014-January/033023.html). It
> seems I have to get fstab involved. Not sure about from the mac.

There is such a thing as libnfs, which lets devs build nfs applets. KDE
uses something similar; I don't know about XFCE. I do know that Mint
running Gnome doesn't, so I hold little hope for XFCE. Another wrinkle:
although you can set up NFS to let users mount and umount a share, the
entry must go in fstab, and editing that requires root ...

> 2. Opening SMB is as simple as "Please open smb://foo/bar". Perhaps
> this simplicity is due to the efforts of
> metacity/gvfs/fuse/samba/udev/polkit/consolekit.

No, it's just how smb works. It started out as a username/password
system and has stayed that way. Much like Apache basic auth, in fact.

> 3. NFS is UID based and I have no idea what the UIDs are, and worse,
> most of my users probably have the same UIDs on their system. This
> sounds like a show stopper to me.

FWIW, I'll show you how to do it in a simple way (see end), but you are
probably better off with smb, to be honest. Give each user a
username/passwd once and your maintenance ends right there.

> ===
>
>> Most NFS servers in the real world are thus file shares and permit
>> read-only access to all users.
>
> Are you saying that NFS can be configured to allow ro access to
> everyone, even those people whose UID was not known when the NFS was
> setup? If so, can the same be done for rw access?

Yes, both are possible. But in practical terms you end up giving the
access (either ro or rw) to every computer on a defined network
segment, including the CEO and the janitor as well as the intended
user group (eg sales) :-)

>> squash was invented - when root access comes over the wire, the server
>> changes it from UID=0 to something else (usually nobody) and then
>> applies Unix permissions to that account.
>
> Got'cha. If I go with NFS, I think I would be interested in is more
> of a "global squash". No matter which UID is making the connection,
> squash it over to the generic local UID 9999 which was granted rw
> access to the share.

Yeah, that's the way. Suppose you have a network 192.168.0.0/24, the
nfs server is 192.168.0.3 and you want to share /data.

/etc/exports on the server (read man 5 exports, there are gotchas):

/data 192.168.0.0/24(rw,anonuid=9999,anongid=9999,all_squash)

Then run "exportfs -ra" as root and add "nfs" to the default runlevel.

On each client, add to /etc/fstab:

192.168.0.3:/data /mnt/<somewhere> nfs noauto,user,rw 0 0

and add rpc.statd to the default runlevel (with some setups you might
need the portmap service instead).

To use the share, the user runs (as themself) in a terminal:

mount /mnt/<somewhere>

and "umount /mnt/<somewhere>" to unmount.

But wait, there's more! Go back to the nfs server.
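One aside first, since you asked about ro for everyone and rw as well:
both can live on a single exports line. A hypothetical variant of the
entry above (same example addresses; an untested sketch, and man 5
exports has warnings about how overlapping entries are matched):

```
# the world gets squashed read-only access,
# the 192.168.0.0/24 segment gets squashed read-write access
/data *(ro,all_squash,anonuid=9999,anongid=9999) 192.168.0.0/24(rw,all_squash,anonuid=9999,anongid=9999)
```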
The server applies regular Unix permissions, and nfs traffic accesses
everything as uid 9999 - there's no magic "let nfs access stuff
regardless" just because it came over the wire. So create a group with
gid 9999 and add your nfs user to it. To guarantee that that user can
always read and write the contents of /data, you must set the sgid bit
on every directory in that tree:

find /data -type d -exec chmod g+s {} \;

AND every file in it must have the correct owner and permissions. Use a
cronjob on the server that runs as frequently as you need it to
(assuming your uid 9999 user has username "nfs"):

chown -R nfs:nfs /data
find /data -type d -not -perm 2775 -exec chmod 2775 {} +
find /data -type f -not -perm 664 -exec chmod 664 {} +

There is a way to do what these crons do using POSIX acls, but trust
me, you do not want to go there - it's a maintenance nightmare for
everyone not named Chris, and for yourself two weeks later trying to
figure out what the hell the rules mean.

If you omit these crons, you must trust the users to always chmod every
file they create and to never fiddle with umask. No user in the world
does this.

Too much work? I think so. This is not the area where nfs excels. SMB
is probably better, and more familiar to users as well.

-- 
Alan McKinnon
[email protected]
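P.S. If you want to see what that normalising cron actually does before
pointing it at real data, here is a self-contained sketch. It builds a
scratch tree under a throwaway mktemp directory (the paths and file
names are illustrative, not part of the setup above) and runs the same
two find passes:

```shell
#!/bin/sh
# Scratch-tree demo of the permissions-normalising cronjob.
# Everything happens under a throwaway mktemp directory.
DATA=$(mktemp -d)
mkdir -p "$DATA/projects/big"
touch "$DATA/projects/readme" "$DATA/projects/big/blob.bin"
chmod 700 "$DATA/projects"          # deliberately wrong dir perms
chmod 600 "$DATA/projects/readme"   # deliberately wrong file perms

# The two passes from the cron: 2775 (rwxrwsr-x, sgid kept) on
# directories, 664 (rw-rw-r--) on files. The -not -perm filter means
# already-correct entries are skipped, so reruns stay cheap.
find "$DATA" -type d -not -perm 2775 -exec chmod 2775 {} +
find "$DATA" -type f -not -perm 664 -exec chmod 664 {} +

# show the result (stat -c is the GNU coreutils form)
stat -c '%a %n' "$DATA/projects" "$DATA/projects/readme"
```

The same idea, minus the scratch tree, is what goes in the crontab on
the server against /data itself.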

