Robert,

Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
following the import?  These messages imply that the d5110 and d5111
directories in the top-level filesystem of pool nfs-s5-p0 are not
empty.  Could you verify that 'df /nfs-s5-p0/d5110' displays
nfs-s5-p0/d5110 as the "Filesystem" (and not just nfs-s5-p0)?
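The "directory is not empty" failure happens because ZFS refuses to mount a dataset over a mountpoint directory that already contains entries. A minimal sketch of that precondition check, using a hypothetical demo path (nothing ZFS-specific, just the emptiness test itself):

```shell
# Sketch: the check ZFS effectively performs before mounting a dataset
# at its mountpoint -- refuse if the directory already has entries.
# /tmp/zfs-mountpoint-demo is a hypothetical path, not one of the pools above.
dir=/tmp/zfs-mountpoint-demo
rm -rf "$dir"
mkdir -p "$dir"
if [ -z "$(ls -A "$dir")" ]; then
  echo "empty: mount would proceed"
else
  echo "not empty: mount would fail with 'directory is not empty'"
fi
```

So if something recreated files under /nfs-s5-p0/d5110 between export and import, the import would emit exactly these messages.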

-Mark

Robert Milkowski wrote:
All pools were exported; then I tried to import them one by one and got this with
only the first pool.

bash-3.00# zpool export nfs-s5-p4 nfs-s5-s5 nfs-s5-s6 nfs-s5-s7 nfs-s5-s8
bash-3.00# zpool import nfs-s5-p4
cannot mount '/nfs-s5-p4/d5139': directory is not empty
cannot mount '/nfs-s5-p4/d5141': directory is not empty
cannot mount '/nfs-s5-p4/d5138': directory is not empty
cannot mount '/nfs-s5-p4/d5142': directory is not empty
bash-3.00# df -h /nfs-s5-p4/d5139
Filesystem             size   used  avail capacity  Mounted on
nfs-s5-p4/d5139        600G   556G    44G    93%    /nfs-s5-p4/d5139
bash-3.00# zpool export nfs-s5-p4
bash-3.00# ls -l /nfs-s5-p4/d5139
/nfs-s5-p4/d5139: No such file or directory
bash-3.00# ls -l /nfs-s5-p4/
total 0
bash-3.00# zpool import nfs-s5-p4
bash-3.00# uname -a
SunOS XXXXXXX 5.11 snv_43 sun4u sparc SUNW,Sun-Fire-V240
bash-3.00#

No problem with other pools - all other pools imported without any warnings.

The same on another server (all pools were exported first):

bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount 'nfs-s5-p0/d5112': mountpoint or dataset is busy
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00# zpool export nfs-s5-p0
bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00# zpool export nfs-s5-p0
bash-3.00# ls -la /nfs-s5-p0/
total 4
drwxr-xr-x   2 root     other        512 Jun 14 14:37 .
drwxr-xr-x  40 root     root        1024 Aug  8 11:00 ..
bash-3.00# zpool import nfs-s5-p0
cannot mount '/nfs-s5-p0/d5110': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
cannot mount 'nfs-s5-p0/d5112': mountpoint or dataset is busy
cannot mount '/nfs-s5-p0/d5111': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
bash-3.00#
bash-3.00# uname -a
SunOS XXXXX 5.11 snv_39 sun4v sparc SUNW,Sun-Fire-T200
bash-3.00#

However, all filesystems from that pool were mounted.

No problem with other pools - all other pools imported without any warnings.


All filesystems in a pool have sharenfs set (actually sharenfs is set on the pool's
top-level dataset and then inherited by the filesystems). Additionally, nfs/server was
disabled just before I exported the pools, and it started automatically when the first
pool was imported.



I believe there's already an open bug for this.
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss