I just ran zpool scrub on an active pool on an x4170 running S10U7 with the
latest patches. iostat immediately dropped to 0 for all of the pool's devices,
and all processes associated with that pool were hard locked; e.g., kill -9 on
a hung zpool status process was ineffective. However, other zpools on the
system, such as the root pool, continued to work.

Neither init 6 nor reboot was able to take the system all the way down, though
reboot did get further. After a hard reset the system came back up cleanly and
a subsequent zpool scrub succeeded, but I am now concerned about when it is
safe to run a scrub. The most notable pool operation taken before the scrub
was attempted was appending another mirror vdev to the pool, which already
contained two mirrors.
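
For reference, the vdev was appended with the usual zpool add syntax, along
these lines (pool and device names below are placeholders, not the actual
ones):

  zpool add tank mirror c2t0d0 c2t1d0   # attach a third mirror vdev to the pool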

Are there known bugs I can track regarding zpool scrub locking up a system?
And if it happens again, what useful information can I gather before or during
the system reset to help track this down further?
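
In case it helps, this is the sort of data I would plan to collect next time,
assuming the console stays responsive and dump/savecore are configured; these
are standard Solaris commands, nothing scrub-specific, so please correct me if
there is something better:

  echo "::threadlist -v" | mdb -k   # stacks of the hung threads
  echo "::spa -v" | mdb -k          # per-pool SPA state
  fmdump -eV                        # recent FMA error events
  reboot -d                         # force a panic/crash dump on the way down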

Thanks.