On November 16, 2006 1:18:22 AM +1100 James McPherson
<[EMAIL PROTECTED]> wrote:
On 11/15/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
After swapping some hardware and rebooting:
SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b31acc-0de8-c1f3-84ec-935574615804
DESC: A ZFS pool failed to open. Refer to http://sun.com/msg/ZFS-8000-CS
for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
restore from backup.
# zpool status -x
all pools are healthy
And in fact they are. What gives? This message occurs on every boot
now. It didn't occur before I changed the hardware.
Sounds like an opportunity for enhancement. At the very
least the ZFS::FMA interaction should name the component
(the pool, in this case) that was flagged as marginal,
faulty, or dead.
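(A hedged aside: the detail being asked for should already be sitting
in the FMA logs, since the zfs-diagnosis ereports normally carry the
pool name and pool_guid in their payload. Assuming the event above is
still in the logs, something like this should show which pool was
diagnosed:

# fmdump -v -u 60b31acc-0de8-c1f3-84ec-935574615804
# fmdump -eV

The first prints the diagnosed fault for that EVENT-ID; the second
dumps the raw error reports (ereport.fs.zfs.*) with their full
payload, including the pool identity.)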
Does zpool status -xv show anything that zpool status -x
doesn't?
Nope.
But I see that my RAID array (a 3511) is now beeping like crazy,
practically playing a song. I suspect there's some delay that keeps
the disks from being available early in boot; then they become
available and the pool gets imported. (I do notice that, unlike SCSI
disks, a disk added to the 3511 is seen immediately on the host.)
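(If that is what's happening, the timestamps in the FMA error log
should show it; and if the fault really is only a boot-time transient,
the current diagnosis can be cleared by hand. Both steps below are a
sketch against stock Solaris FMA tooling, not a verified fix:

# fmdump -e
# fmadm repair 60b31acc-0de8-c1f3-84ec-935574615804

The first lists error reports one per line; ereport.fs.zfs.* entries
stamped at boot, before the 3511 LUNs attach, would confirm the timing
theory. The second marks the diagnosed fault repaired in fmd, though
if a fresh diagnosis is made on each boot a new event will simply take
its place.)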
-frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss