This morning we got a fault management message from one of our production 
servers stating that a fault in one of our pools had been detected and repaired. 
Looking into the event with fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
  100%  fault.fs.zfs.device

        Problem in: zfs://pool=vol02/vdev=179e471c0732582
           Affects:   zfs://pool=vol02/vdev=179e471c0732582
               FRU: -
          Location: -

My question is: how do I relate the vdev name above (179e471c0732582) to an 
actual drive? I've checked this ID against the device names (cXtYdZ - 
obviously no match) and against all of the disk serial numbers. I've also tried 
all of the "zpool list" and "zpool status" options with no luck.

I'm sure I'm missing something obvious here, but if anyone can point me in the 
right direction I'd appreciate it!