On Jun 23, 2009, at 11:50 AM, Richard Elling wrote:
>> (2) is there some reasonable way to read in multiples of these
>> blocks in a single IOP? Theoretically, if the blocks are in
>> chronological creation order, they should be (relatively)
>> sequential on the drive(s). Thus, ZFS should be able to read in
>> several of them without forcing a random seek. That is, you should
>> be able to get multiple blocks in a single IOP.
>
> Metadata is prefetched. You can look at the hit rate in kstats.
> Stuart, you might post the output of "kstat -n vdev_cache_stats"
> I regularly see cache hit rates in the 60% range, which isn't bad
> considering what is being cached.
# kstat -n vdev_cache_stats
module: zfs                             instance: 0
name:   vdev_cache_stats                class:    misc
        crtime                          129.03798177
        delegations                     25873382
        hits                            114064783
        misses                          182253696
        snaptime                        960064.85352608
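For reference, the hit rate can be computed directly from the hits and misses counters above (a quick sanity check; this ignores delegations, which are neither hits nor misses):

```python
# vdev cache hit rate from the kstat counters pasted above
hits = 114_064_783
misses = 182_253_696
hit_rate = hits / (hits + misses)
print(f"vdev cache hit rate: {hit_rate:.1%}")  # roughly 38.5%
```

So this pool is currently seeing a hit rate closer to 38% than the 60% range mentioned above.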
Here are also some zpool iostat numbers taken during this resilver:
# zpool iostat ldas-cit1 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ldas-cit1   16.9T  3.49T    165    134  5.17M  1.58M
ldas-cit1   16.9T  3.49T    225    237  1.28M  1.98M
ldas-cit1   16.9T  3.49T    288    317  1.53M  2.26M
ldas-cit1   16.9T  3.49T    174    269  1014K  1.68M
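Relating this back to the multiple-blocks-per-IOP question: dividing read bandwidth by read ops gives the average read size per operation. A back-of-the-envelope check using the first sample line (treating it as representative):

```python
# Average read size per operation from the first iostat sample:
# 5.17 MiB/s of read bandwidth over 165 read ops/s.
read_bw_kib = 5.17 * 1024   # read bandwidth in KiB/s
read_ops = 165              # read operations per second
avg_read_kib = read_bw_kib / read_ops
print(f"average read size: {avg_read_kib:.1f} KiB")  # roughly 32 KiB
```

The later samples work out even smaller (around 6 KiB per op), suggesting mostly small, seek-bound reads during the resilver.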
And here is the pool configuration:
# zpool status ldas-cit1
  pool: ldas-cit1
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 96h49m, 63.69% done, 55h12m to go
config:

        NAME              STATE     READ WRITE CKSUM
        ldas-cit1         DEGRADED     0     0     0
          raidz2          DEGRADED     0     0     0
            c0t1d0        ONLINE       0     0     0
            c1t1d0        ONLINE       0     0     0
            c3t1d0        ONLINE       0     0     0
            c4t1d0        ONLINE       0     0     0
            c5t1d0        ONLINE       0     0     0
            c6t1d0        ONLINE       0     0     0
            c0t2d0        ONLINE       0     0     0
            c1t2d0        ONLINE       0     0     0
            c3t2d0        ONLINE       0     0     0
            c4t2d0        ONLINE       0     0     0
            c5t2d0        ONLINE       0     0     0
            spare         DEGRADED     0     0     0
              replacing   DEGRADED     0     0     0
                c6t2d0s0/o  FAULTED    0     0     0  corrupted data
                c6t2d0    ONLINE       0     0     0
              c6t0d0      ONLINE       0     0     0
            c0t3d0        ONLINE       0     0     0
            c1t3d0        ONLINE       0     0     0
            c3t3d0        ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            c4t3d0        ONLINE       0     0     0
            c5t3d0        ONLINE       0     0     0
            c6t3d0        ONLINE       0     0     0
            c0t4d0        ONLINE       0     0     0
            c1t4d0        ONLINE       0     0     0
            c3t4d0        ONLINE       0     0     0
            c5t0d0        ONLINE       0     0     0
            c5t4d0        ONLINE       0     0     0
            c6t4d0        ONLINE       0     0     0
            c0t5d0        ONLINE       0     0     0
            c1t5d0        ONLINE       0     0     0
            c3t5d0        ONLINE       0     0     0
            c4t5d0        ONLINE       0     0     0
            c5t5d0        ONLINE       0     0     0
            c6t5d0        ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            c0t6d0        ONLINE       0     0     0
            c1t6d0        ONLINE       0     0     0
            c3t6d0        ONLINE       0     0     0
            c4t6d0        ONLINE       0     0     0
            c5t6d0        ONLINE       0     0     0
            c6t6d0        ONLINE       0     0     0
            c0t7d0        ONLINE       0     0     0
            c1t7d0        ONLINE       0     0     0
            c3t7d0        ONLINE       0     0     0
            c4t7d0        ONLINE       0     0     0
            c5t7d0        ONLINE       0     0     0
            c6t7d0        ONLINE       0     0     0
            c0t0d0        ONLINE       0     0     0
            c1t0d0        ONLINE       0     0     0
            c3t0d0        ONLINE       0     0     0
        spares
          c6t0d0          INUSE     currently in use

errors: No known data errors
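Incidentally, the "55h12m to go" estimate is consistent with simple linear extrapolation from the elapsed time and percent complete (a sanity check, not a claim about how ZFS computes it):

```python
# Sanity-check the resilver ETA from "96h49m, 63.69% done":
# project total time at the current rate, subtract elapsed time.
elapsed_h = 96 + 49 / 60      # 96h49m elapsed, in hours
done = 0.6369                 # fraction complete
total_h = elapsed_h / done    # projected total duration
remaining_h = total_h - elapsed_h
h, m = int(remaining_h), round(remaining_h % 1 * 60)
print(f"estimated remaining: {h}h{m:02d}m")  # prints "estimated remaining: 55h12m"
```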
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss