Hello. I have a really weird problem with a ZFS pool on one machine, and it's
only with one pool on that machine (the other pool is fine). Non-root users
cannot access '..' in any directory where the pool is mounted, e.g.:
/a1000 on a1000
read/write/setuid/devices/nonbmand/exec/xattr/noatim
Forgot to add that a truss shows:
14960: lstat64("/a1000/..", 0xFFBFF7E8)Err#13 EACCES
[file_dac_search]
ppriv shows the error in UFS:
$ ppriv -e -D -s -file_dac_search ls -ld /a1000/..
ls[15022]: missing privilege "file_dac_search" (euid = 100, syscall = 216)
needed at ufs_ia
Bingo, they were 0750. Thanks so much, that was the one thing I didn't think
of. I thought I was going crazy :).
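For anyone who finds this thread later, here is a rough sketch of the fix, assuming the 0750 directories were the underlying UFS mount-point directories hidden beneath the ZFS mounts (that would also line up with the failure showing up in ufs_iaccess). The dataset and path names are just the ones from the example above:

# zfs umount a1000        (expose the covered UFS directory)
# ls -ld /a1000           (confirm the covered directory is 0750)
# chmod 755 /a1000        (give "other" the search bit back)
# zfs mount a1000

With 0750 on the covered directory, a '..' lookup that crosses back out of the ZFS root needs search permission on it, which non-root users don't have, hence the file_dac_search EACCES.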
Thanks again!
-Dustin
I replaced a bad disk in a RAID-Z2 pool, and now the pool won't come online.
Status shows nothing helpful at all. I don't understand why this is happening,
since I should be able to lose 2 drives, and I only replaced one!
# zpool status -v pool
pool: pool
state: UNAVAIL
scrub: none requested
config:
Okay.. I "fixed" it by powering the server off, removing the new drive, letting
the pool come up degraded, and then doing zpool replace.
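For the record, the replace step itself was nothing exotic; something like the lines below, where c1t3d0 is just a placeholder for whatever device zpool status listed as missing in the degraded pool:

# zpool status pool              (note which device is missing/UNAVAIL)
# zpool replace pool c1t3d0      (resilver onto the new disk in the same slot)
# zpool status pool              (watch the resilver run to completion)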
I'm assuming what happened was ZFS saw that the disk was online, tried to use
it, and then noticed that the checksums didn't match (of course) and marked the
pool as UNAVAIL.
Tim: I couldn't do a zpool scrub, since the pool was marked as UNAVAIL.
Believe me, I tried :)
Bob: Ya, I realized that after I clicked send. My brain was a little frazzled,
so I completely overlooked it.
Solaris 10u7 - Sun E450
ZFS pool version 10
ZFS filesystem version 3
-Dustin
Cindy: AWESOME! Didn't know about that property, I'll make sure I set it :).
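If it's the autoreplace property (just my guess, since it isn't named here), setting and checking it would look something like this; worth verifying against zpool(1M) on the release in question:

# zpool set autoreplace=on pool
# zpool get autoreplace pool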
All I did to replace the drives was to power off the machine (the failed drive
had hard-locked the SCSI bus, so I had to anyways). Once the machine was
powered off, I pulled the bad drive, inserted the new drive, and