Harry,

sorry for my first answer; now that you have rephrased some of the original post, I
remember what your initial problem really was... More inline below...

You (Harry Putnam) wrote:
> And made these settings:

[...]

>   zfs get sharenfs z3/projects
>   NAME         PROPERTY  VALUE     SOURCE
>   z3/projects  sharenfs  on        local
> 
> The problem is that when mounted on linux client with this command:
>  (one example of 3 shares)
> 
>    mount -t nfs -o users,exec,dev,suid zfs:/projects /projects
> 
> Wheres `zfs' is the name of the server host.
> 
> All those detailed were spelled out with some care in the OP. 
> 
> The problem... also detailed in OP and slightly rewritten here:
> 
>   The trouble I see is that all files get created with: 
>      nobody:nobody
>      as UID:GID
>   even though /projects is set as normal USER:GROUP of a user
>   on the zfs/nfs server.
>   (that would be USER=reader GROUP=wheel)
> 
>   From the remote (linux client),  any attempt
>   to change uid:gid fails, even if done by root on the remote.

OK, so that ROOT can change things, you need to add options on the EXPORT
side (i.e. the ZFS server). NFS was made a bit more secure in later versions:
root on the client does not automatically get root privileges on the server
side, so that needs to be specified explicitly! You might need even more
options, more on that below; the following is just one small example from my
setup at home:

        zfs set sharenfs=root=@192.168.2 pfuetz/smb-share
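
After changing it, it is worth checking what actually got shared (nothing here
is specific to my setup, just standard checks):

        # show the property and the options it carries
        zfs get sharenfs pfuetz/smb-share

        # share(1M) without arguments lists everything currently shared,
        # including the options in effect
        share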

"man zfs" states at the sharenfs section:

     sharenfs=on | off | opts

         Controls whether the file system is shared via NFS,  and
         what  options  are  used. A file system with a "sharenfs"
         property of "off" is managed through  traditional  tools
         such  as  share(1M),  unshare(1M), and dfstab(4). Other-
         wise,  the  file  system  is  automatically  shared  and
         unshared  with  the  "zfs  share" and "zfs unshare" com-
         mands. If the property is set  to  "on",  the  share(1M)
         command  is  invoked  with  no  options.  Otherwise, the
         share(1M) command is invoked with options equivalent  to
         the contents of this property.

         When the "sharenfs" property is changed for  a  dataset,
         the dataset and any children inheriting the property are
         re-shared with the new options, only if the property was
         previously "off", or if they were shared before the pro-
         perty was changed. If the new  property  is  "off",  the
         file systems are unshared.
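
So changing the property takes effect right away; if in doubt, you can re-share
by hand with the "zfs share" / "zfs unshare" commands mentioned there (dataset
name taken from your post):

        zfs unshare z3/projects
        zfs share z3/projects
        share           # should now show the dataset with the new options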

You state that you have the user and group entries on BOTH sides (client +
server) set up identically by hand, so that should be fine; still, I would
verify it. Please also check on the ZFS server the file /etc/nsswitch.conf
for the entries for:

        passwd
        group

They should look like:

        passwd:     files
        group:      files
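
A quick way to verify the resolution on the ZFS server is getent; the user and
group names here are the ones from your post:

        # both should return an entry with the expected numeric IDs (1000 and 10)
        getent passwd reader
        getent group wheel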

So, with that knowledge, we need to consult "man share_nfs":

The options most interesting to you might be the following:

         anon=uid            Set uid to be the effective user  ID
                             of   unknown   users.   By  default,
                             unknown users are given  the  effec-
                             tive  user  ID UID_NOBODY. If uid is
                             set to -1, access is denied.

         root=access_list    Only  root  users  from  the   hosts
                             specified  in  access_list have root
                             access. See  access_list  below.  By
                             default, no host has root access, so
                             root  users   are   mapped   to   an
                             anonymous  user ID (see the anon=uid
                             option described  above).  Netgroups
                             can  be  used  if  the  file  system
                             shared is using UNIX  authentication
                             ( AUTH_SYS).
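
Combined for your dataset, that could look like the following sketch; the
192.168.2 network is only carried over from my example above, so put your
Linux client's address or network there, and the anon uid is just the 1000
from your listings:

        # rw plus root access for clients on the given (assumed) network
        zfs set sharenfs=rw,root=@192.168.2 z3/projects

        # or additionally map unknown users to uid 1000 instead of nobody
        zfs set sharenfs=rw,root=@192.168.2,anon=1000 z3/projects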

Sadly I don't know whether the Linux host (I don't actively manage Linux hosts,
so my know-how is limited here!) might also need some additional option when
mounting the NFS file systems...

> So certain things cannot be done.  For example a sandbox setup where I
> test procmail recipes will not accept the .procmailrc file since its 
> set:
>   nobody:nobody
> Instead of that USER:GID
> 
> And again, the USER:GID exists on both server and client with
> the same numeric uid:gid:
> 
>  osol (server) host:
> 
>      uname -a
>   SunOS zfs 5.11 snv_133 i86pc i386 i86pc Solaris
> 
>   root # ls -lnd /export/home/reader
>   drwxr-xr-x 60 1000 10 173 2010-03-11 12:13 /export/home/reader

Which "ls" command? /usr/gnu/bin/ls, or /usr/bin/ls? Still: Here we see, that
the file had been created by a user with User-ID: 1000 and Group-ID: 10.

What's the output of:

        ls -ld  /export/home/reader

Does that resolve and list the user-name and group-name?
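
If the names resolve, the same listing should show them instead of the numbers,
roughly like this (user and group names taken from your post):

        drwxr-xr-x 60 reader wheel 173 2010-03-11 12:13 /export/home/reader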

> -------        ---------       ---=---       ---------      -------- 
> linux (client) host:
> 
>   root # uname -a
>   Linux reader 2.6.33-gentoo #2 SMP Sun Feb 28 22:43:57 CST 2010 
>   i686 Intel(R) Celeron(R) CPU 3.06GHz GenuineIntel GNU/Linux  
> 
>   root # ls -lnd /home/reader/no_bak/procmail_ex
>   drwxr-xr-x 2 1000 10 48 Mar 11 14:02 /home/reader/no_bak/procmail_ex
> 
> 
> > The Linux host needs to be able to MOUNT the NFS-exported files.
> 
> > The /etc/auto.master file is using a later "extension" to the NFS
> > system, name "automount". This only mounts directories, when they
> > are accessed, therefore "auto-mount".
> >
> > You could also add the to-be-mounted diretories into /etc/fstab, so
> > that they are mounted ALWAYS.
> 
> I do it in the init scripts, with the same result.
> 
> > But, it seems, you might need to digg a bit around, and get some
> > intorductoriy 
> > infos on NFS AUTOMOUNT
> 
> Well, that is no doubt true...
> 
> I didn't use automounting on the linux host but am not having a
> problem mounting the shares, with the command shown above.
> I do the mount in initscript `local.start' so the shares are always
> mounted.

From the above I can only see that both client and server show 1000:10 as UID
and GID; I don't see what "names" are associated with these IDs...
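
You can check that mapping directly on BOTH hosts; the numeric IDs are the
ones from your listings:

        # run on the OpenSolaris server AND on the Linux client;
        # both should return the same user and group names
        getent passwd 1000
        getent group 10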

> But I am having the problem described above.  Even though, far as I
> know, I haven't made any changes on either end regarding exporting the
> shares or mounting them.  But the problems began somewhere a month or
> two ago.
> 
> I have made at least 2 upgrades with these settings in place on the
> server end... and the linux end.  Now at b133 on the solaris end.  
> 
> I've probably made some change and forgot it.. or something similar
> but having trouble tracking down the problem.
> 
> The settings on both ends are now as shown above.  But the problem
> with all files being created nobody:nobody persists.

That "nobody:nobody" might be related to either:

>    mount -t nfs -o users,exec,dev,suid zfs:/projects /projects

or

>   zfs get sharenfs z3/projects
>   NAME         PROPERTY  VALUE     SOURCE
>   z3/projects  sharenfs  on        local

As mentioned, I would not just set "sharenfs" to on, but add at least the
"root" option and the "suid" option...

Sadly, again, I don't know what the "users" option on the Linux client does,
so I can't say whether it might cause trouble... Same for the "exec" and
"dev" options...

        Matthias
-- 
Matthias Pfützner | Tel.: +49 700 PFUETZNER      | Durch Männer lernt man,
Lichtenbergstr.73 | mailto:matth...@pfuetzner.de | wie die Welt ist, durch
D-64289 Darmstadt | AIM: pfuetz, ICQ: 300967487  | Frauen jedoch, was sie
Germany      | http://www.pfuetzner.de/matthias/ | ist. (Cees Nooteboom)