On Wed, 22 Jul 2009, Roch wrote:

Hi Bob, did you consider running the 2 runs with

echo zfs_prefetch_disable/W0t1 | mdb -kw

and seeing if performance is constant between the 2 runs (and low)?
That would help narrow down the cause a bit. Sorry, I'd do it for
you, but you have the setup etc...

Revert with:

echo zfs_prefetch_disable/W0t0 | mdb -kw

-r

I see that if I update my test script so that prefetch is disabled before the first cpio is executed, the read performance of the first cpio reported by 'zpool iostat' is similar to what has been normal for the second cpio case (i.e. 32MB/second). This seems to indicate that prefetch is effectively disabled if the file has ever been read before. However, there is a new wrinkle: the second cpio completes twice as fast with prefetch disabled, even though 'zpool iostat' reports the same consistent throughput. The difference goes away if I triple the number of files.
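For reference, the whole procedure can be sketched as a script. This is a hypothetical sketch, not my actual test script: the filesystem name and mountpoint are assumptions, and it needs root on Solaris with mdb and ZFS available.

```shell
#!/bin/sh
# Hypothetical sketch of the test procedure; the filesystem name and
# mountpoint are assumptions. Requires root on Solaris (mdb -kw, zfs).
FS=testpool/files
MNT=/testpool/files

# Disable ZFS file-level prefetch before the first read pass.
echo zfs_prefetch_disable/W0t1 | mdb -kw

# Unmount/mount so the first cpio pass does not read cached data.
zfs umount $FS
zfs mount $FS

cd $MNT || exit 1

echo "Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'"
find . -type f | time cpio -C 131072 -o > /dev/null

echo "Doing second 'cpio -C 131072 -o > /dev/null'"
find . -type f | time cpio -C 131072 -o > /dev/null

# Revert: re-enable prefetch.
echo zfs_prefetch_disable/W0t0 | mdb -kw
```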

With 3000 8.2MB files:
Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
14443520 blocks

real    3m41.61s
user    0m0.44s
sys     0m8.12s

Doing second 'cpio -C 131072 -o > /dev/null'
14443520 blocks

real    1m50.12s
user    0m0.42s
sys     0m7.21s

Now if I increase the number of files to 9000 8.2MB files:

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
144000768 blocks

real    35m51.47s
user    0m4.46s
sys     1m20.11s

Doing second 'cpio -C 131072 -o > /dev/null'
144000768 blocks

real    35m22.41s
user    0m4.40s
sys     1m14.22s

Notice that with 3X the files, the throughput is dramatically reduced and the elapsed time is essentially the same for both passes.
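As a sanity check on that observation: cpio reports 512-byte blocks, so the effective rate can be computed from the figures above. A quick back-of-the-envelope sketch (the MiB/s values are derived by me, not from the runs themselves):

```shell
# cpio block counts are 512-byte units; convert blocks plus wall-clock
# time (minutes, seconds) into MiB/s.
mib_per_s() {
    awk -v blocks="$1" -v min="$2" -v sec="$3" \
        'BEGIN { printf "%.1f\n", blocks * 512 / (min * 60 + sec) / 1048576 }'
}

# 9000-file runs: both passes land at roughly 32-33 MiB/s,
# consistent with what 'zpool iostat' showed.
mib_per_s 144000768 35 51.47   # first pass  -> 32.7
mib_per_s 144000768 35 22.41   # second pass -> 33.1
```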

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss