I am planning to update my OpenIndiana oi_151a9 system by overwriting
its boot disks using the latest OpenIndiana Hipster live USB image.
My system (16 Xeon cores, 128GB RAM) has an attached SAS2 storage
pool made up of 4 mirror vdevs (8 1TB SAS2 disks). These drives have
never been the speediest (I think ~9ms seek time), but they have no
errors and all of them appear healthy, with similar response times. I
normally do a scrub of the pool once a week.
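For reference, the weekly scrub is just driven from root's crontab
with something along these lines (the schedule shown here is only
illustrative):

   # run a scrub of 'tank' early every Sunday morning
   0 3 * * 0 /usr/sbin/zpool scrub tank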
This storage pool is used for regular software development work, but
it also serves as a backup area for nightly 'zfs send' streams from
two other systems. It has quite a lot of snapshots, and since it is
an older pool version, it does not benefit from the snapshot
optimizations that newer zfs pool versions enjoy.
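For what it's worth, the old pool version and the feature flags it is
missing can be confirmed with something like the following (output
omitted here):

   zpool get version tank
   zpool upgrade -v                    # feature flags this zfs software supports
   zpool get all tank | grep feature@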
Yesterday I did a scrub of the pool using OpenIndiana oi_151a9 and it
completed in about two hours. After booting into the OpenIndiana
Hipster live USB image I decided to do a scrub using the modern zfs
software and it seems to be taking an incredible amount of time,
although nothing seems overtly wrong:
jack@openindiana:/jack$ zpool status tank
  pool: tank
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(5) for details.
  scan: scrub in progress since Sat Apr 18 17:08:33 2020
        3.33G scanned at 62.6K/s, 2.98G issued at 56.1K/s, 1.03T total
        0 repaired, 0.28% done, no estimated completion time
config:

        NAME                         STATE     READ WRITE CKSUM
        tank                         ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c4t50000393E8CA21FAd0p0  ONLINE       0     0     0
            c8t50000393D8CA34B2d0p0  ONLINE       0     0     0
          mirror-1                   ONLINE       0     0     0
            c5t50000393E8CA2066d0p0  ONLINE       0     0     0
            c9t50000393E8CA2196d0p0  ONLINE       0     0     0
          mirror-2                   ONLINE       0     0     0
            c6t50000393D8CA82A2d0p0  ONLINE       0     0     0
            c10t50000393E8CA2116d0p0 ONLINE       0     0     0
          mirror-3                   ONLINE       0     0     0
            c7t50000393D8CA2EEAd0p0  ONLINE       0     0     0
            c11t50000393D8CA828Ed0p0 ONLINE       0     0     0

errors: No known data errors
jack@openindiana:/jack$ iostat -xnE
                    extended device statistics
    r/s    w/s    kr/s    kw/s    wait  actv  wsvc_t  asvc_t  %w  %b device
    1.4    0.0     5.6     0.0     0.0   0.0     0.1     0.7   0   0 lofi1
    0.0    0.0     0.1     0.0     0.0   0.0     0.0     0.2   0   0 lofi2
    0.2    0.2     1.6     0.8     0.0   0.0     0.0     0.0   0   0 ramdisk1
    0.4    0.0     3.6     0.0     0.0   0.0     0.1     1.6   0   0 c12t0d0
    0.0    0.0     0.1     0.0     0.0   0.0     0.0     0.7   0   0 c13t0d0
    0.0    0.0     0.1     0.0     0.0   0.0     0.2     0.9   0   0 c13t1d0
    0.0    0.0     0.0     0.0     0.0   0.0     0.0     0.0   0   0 c13t2d0
  236.8    6.5  7021.9   383.9     0.0   1.8     0.0     7.2   0  89 c4t50000393E8CA21FAd0
  232.8    6.4  6763.2   379.2     0.0   1.7     0.0     7.2   0  87 c5t50000393E8CA2066d0
  229.8    6.6  6650.1   384.4     0.0   1.7     0.0     7.4   0  88 c6t50000393D8CA82A2d0
  236.8    6.4  7099.0   382.8     0.0   1.8     0.0     7.3   0  90 c7t50000393D8CA2EEAd0
  234.5    6.5  6881.9   383.9     0.0   1.7     0.0     7.2   0  88 c8t50000393D8CA34B2d0
  234.6    6.4  6984.4   379.2     0.0   1.7     0.0     7.1   0  87 c9t50000393E8CA2196d0
  232.4    6.6  6908.4   384.4     0.0   1.9     0.0     7.9   0  95 c10t50000393E8CA2116d0
  235.4    6.4  6949.4   382.8     0.0   1.8     0.0     7.3   0  89 c11t50000393D8CA828Ed0
 1875.6   50.5 55335.9  3064.9 22949.2  14.2 11914.9     7.4  98  99 tank
What I see is quite a lot of small read activity, and also a fair
amount of write activity. The scrub has yet to pick up and start
reading the actual data at a higher bandwidth. Perhaps the scrub is
also updating the pool to take advantage of modern features available
within the bounds of its zfs version?
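If it would help to diagnose this, I could also check from the live
environment whether the new sequential ("sorted") scrub code is what
is actually running, with something like the line below (this assumes
the zfs_scan_legacy tunable exists in this Hipster build; 0 should
mean the sorted scrub is in use):

   # print the current value of the legacy-scrub tunable from the running kernel
   echo "zfs_scan_legacy/D" | mdb -k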
Is this normal, expected behavior? When the scrub with the newer
software completes, can subsequent scrubs be expected to be faster,
or will they always be this incredibly slow?
Thanks,
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Public Key, http://www.simplesystems.org/users/bfriesen/public-key.txt