Blake Irvin wrote:
> I'm also very interested in this. I'm having a lot of pain with status
> requests killing my resilvers. In the example below I was trying to test to
> see if timf's auto-snapshot service was killing my resilver, only to find
> that calling zpool status seems to be the issue:
>
workaround: don't run zpool status as root.
 -- richard

> [EMAIL PROTECTED] ~]# env LC_ALL=C zpool status $POOL | grep " in progress"
>  scrub: resilver in progress, 0.26% done, 35h4m to go
>
> [EMAIL PROTECTED] ~]# env LC_ALL=C zpool status $POOL
>   pool: pit
>  state: DEGRADED
> status: One or more devices could not be opened. Sufficient replicas exist for
>         the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>    see: http://www.sun.com/msg/ZFS-8000-D3
>  scrub: resilver in progress, 0.00% done, 484h39m to go
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         pit           DEGRADED     0     0     0
>           raidz2      DEGRADED     0     0     0
>             c2t0d0    ONLINE       0     0     0
>             c3t0d0    ONLINE       0     0     0
>             c3t1d0    ONLINE       0     0     0
>             c2t1d0    ONLINE       0     0     0
>             spare     DEGRADED     0     0     0
>               c3t3d0  UNAVAIL      0     0     0  cannot open
>               c3t7d0  ONLINE       0     0     0
>             c2t2d0    ONLINE       0     0     0
>             c3t5d0    ONLINE       0     0     0
>             c3t6d0    ONLINE       0     0     0
>         spares
>           c3t7d0      INUSE     currently in use
>
> errors: No known data errors
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
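
For anyone polling progress in a loop (as in Blake's first command), a small sketch of pulling out just the "N% done" figure so the poll can run from an unprivileged account per Richard's workaround. The parse_progress helper is my own illustration, not something from the thread:

```shell
# parse_progress: read `zpool status` output on stdin and print only the
# completion percentage from the "scrub: ... in progress, N% done" line.
parse_progress() {
  grep ' in progress' | sed -n 's/.*, \([0-9.]*%\) done.*/\1/p'
}

# Run as a regular (non-root) user, per the workaround above:
#   zpool status pit | parse_progress
#   0.26%
```

Whether an unprivileged `zpool status` avoids the resilver stall on your build is an assumption worth testing; the point is only that status polling does not need root.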