On 11/15/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
After swapping some hardware and rebooting:

SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b31acc-0de8-c1f3-84ec-935574615804
DESC: A ZFS pool failed to open.  Refer to http://sun.com/msg/ZFS-8000-CS
for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
            restore from backup.

# zpool status -x
all pools are healthy

And in fact they are.  What gives?  This message occurs on every boot now.
It didn't occur before I changed the hardware.
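
One way to dig further, assuming the usual Solaris FMA tooling (fmdump,
fmadm) is on the box -- a sketch, not verified on this system. The
EVENT-ID from the console message can be fed straight to fmdump.

To dump the detailed fault event behind that message:
# fmdump -v -u 60b31acc-0de8-c1f3-84ec-935574615804

To list the faults fmd still considers active (a stale entry here would
explain a ZFS-8000-CS on every boot despite healthy pools):
# fmadm faulty

To mark a stale fault repaired so it stops being replayed at boot:
# fmadm repair 60b31acc-0de8-c1f3-84ec-935574615804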

Sounds like an opportunity for enhancement. At the
very least, the ZFS :: FMA interaction should name the
component (the pool, in this case) that was noted to be
marginal, faulty, or dead.


Does 'zpool status -xv' show anything that 'zpool status -x'
doesn't?
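
For reference (hedged, from the zpool man page as I recall it): -x
limits the output to pools that are unhealthy or otherwise unavailable,
while -v adds verbose data-error detail, e.g. the list of files with
permanent errors. So:

# zpool status -x     (only pools with problems)
# zpool status -xv    (same set of pools, plus per-error detail)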

James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
             http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson