I think I've discovered a different wrinkle to this issue that I haven't seen discussed here.
In my experience trying to work around this, there are three separate issues.

1. It is necessary to add the root of your NFS home directories to @{HOMEDIRS}. I have done that, and confirmed that the rules in the AppArmor profile for snap-confine expand correctly and do allow writing to ~/snap/.

2. Next, I still had issues: when trying to run a snap, the following audit messages appeared in my syslog:

  Sep 18 12:43:29 <hostname> kernel: [ 4603.119892] nfs: RPC call returned error 13
  Sep 18 12:43:29 <hostname> kernel: [ 4603.119926] audit: type=1400 audit(1600433009.543:113): apparmor="DENIED" operation="sendmsg" profile="/snap/snapd/9279/usr/lib/snapd/snap-confine" pid=4371 comm="snap-confine" laddr=<laddr> lport=837 faddr=<faddr> fport=2049 family="inet" sock_type="stream" protocol=6 requested_mask="send" denied_mask="send"

I took steps similar to https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1662552/comments/9, which is less than ideal but reported to work. In my case, I added

  network inet,
  network inet6,

to /var/lib/snapd/apparmor/snap-confine/nfs. Mysteriously, this file *vanishes* on reboot, but apparently not before the AppArmor caches are regenerated with it included (I confirmed this by putting garbage in the file and checking that AppArmor reported syntax errors on startup). I further confirmed that when running the snap, although I still get "permission denied", there are no longer *any* audit messages from AppArmor.

3. That is because there is a third, very simple issue, which has nothing to do with AppArmor. Because snap-confine is setuid, it runs as root, and like many NFS mounts mine is configured to treat root as an unprivileged user (root squashing) when reading users' home directories.
Early in its startup, snap-confine runs sc_preserve_and_sanitize_process_state():

  https://github.com/snapcore/snapd/blob/9defc23e31afee03cf5994f309422681971f42db/cmd/snap-confine/snap-confine.c#L322

This uses openat() to open a handle to the CWD, to be restored later, before calling chdir("/"). Since this happens well before sc_set_effective_identity(), it will fail if your CWD happens to be inside the NFS mount. This is easily worked around, of course, by changing to a directory outside the NFS mount, but a typical user will be sitting in their home directory, and the resulting failure is very confusing.

-- 
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1662552

Title:
  snaps don't work with NFS home

To manage notifications about this bug go to:
https://bugs.launchpad.net/snapd/+bug/1662552/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs