On Mon, Sep 7, 2009 at 12:05, Chris Gerhard <chris.gerh...@sun.com> wrote:
> Looks like this bug:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
>
> Workaround: Don't run zpool status as root.

I'm not, and yet the scrub continues. To be more specific, here's a
complete current interaction with zpool status:

w...@box:~$ zpool status pool
  pool: pool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in
        a degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: scrub in progress for 39h37m, 100.00% done, 0h0m to go
config:
        NAME         STATE     READ WRITE CKSUM
        pool         DEGRADED     0     0     0
          raidz2     DEGRADED     0     0     0
            c8d1     ONLINE       0     0     0
            c8d0     ONLINE       0     0     0
            c12t4d0  ONLINE       0     0     0
            c12t3d0  ONLINE       0     0     0
            c12t2d0  ONLINE       0     0     0
            c12t0d0  OFFLINE      0     0     0
        logs
          c10d0      ONLINE       0     0     0

errors: No known data errors
w...@box:~$

Running the same command again immediately shows the same thing. In other
words, the scrub is not restarting, just never finishing.

iostat shows this:

    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  303.9    0.0 12380.2    0.0 33.0  2.0  108.5    6.6 100 100 c8d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c9d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c10d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c7d1
  303.9    0.0 12348.2    0.0 33.0  2.0  108.5    6.6 100 100 c8d1
  366.9    0.0 13627.8    0.0  0.0  4.6    0.0   12.5   0  51 c12t2d0
  351.9    0.0 12956.0    0.0  0.0  4.3    0.0   12.2   0  58 c12t3d0
  369.9    0.0 13787.8    0.0  0.0  6.8    0.0   18.3   0  72 c12t4d0

while rwtop shows about 3 MB/s to and from applications.

Will

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss