I have a pool composed of a single raidz2 vdev, which is currently
degraded (missing a disk):
config:

        NAME         STATE     READ WRITE CKSUM
        pool         DEGRADED     0     0     0
          raidz2     DEGRADED     0     0     0
            c8d1     ONLINE       0     0     0
            c8d0     ONLINE       0     0     0
            c12t4d0  ONLINE       0     0     0
            c12t3d0  ONLINE       0     0     0
            c12t2d0  ONLINE       0     0     0
            c12t0d0  OFFLINE      0     0     0
        logs
          c10d0      ONLINE       0     0     0

errors: No known data errors
I have it scheduled for periodic scrubs, via root's crontab:
20 2 1 * * /usr/sbin/zpool scrub pool
but this scrub was kicked off manually.
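For readers unfamiliar with the crontab field order, the entry above fires at 02:20 on the first of every month:

```shell
# crontab(5) fields: minute hour day-of-month month day-of-week command
# 20 2 1 * *  ->  02:20 on the 1st of each month
20 2 1 * * /usr/sbin/zpool scrub pool
```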
Last night I checked its status and saw:
scrub: scrub in progress for 20h32m, 100.00% done, 0h0m to go
This morning I see:
scrub: scrub in progress for 31h10m, 100.00% done, 0h0m to go
It's been reporting 100% done for over 10 hours, yet the scrub hasn't
finished. "zpool iostat -v pool 10" shows the pool doing between 50 and
120 MB/s of reads, while userspace applications are only doing a few
megabytes per second of I/O, as measured by the DTraceToolkit script
"rwtop" ("app_r: 4469 KB, app_w: 4579 KB").
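To track this over time, one could pull the elapsed-time and percent
fields out of the scrub status line, e.g. with sed (a sketch; the sample
line is the one from this morning's "zpool status"):

```shell
# Parse the scrub progress line reported by "zpool status".
line='scrub: scrub in progress for 31h10m, 100.00% done, 0h0m to go'
# Elapsed time: everything between "in progress for " and the next comma.
elapsed=$(echo "$line" | sed 's/.*in progress for \([^,]*\),.*/\1/')
# Percent complete: the number immediately before "% done".
pct=$(echo "$line" | sed 's/.*, \([0-9.]*\)% done.*/\1/')
echo "elapsed=$elapsed pct=$pct"
# -> elapsed=31h10m pct=100.00
```

Logging that output from cron every few minutes would show whether the
reported percentage ever moves, or whether it sits at 100.00% while the
elapsed counter keeps climbing.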
What can cause this kind of behavior, and how can I make my pool
finish scrubbing?
Will
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss