Bob: Sun V490, 4x 1.35 GHz processors, 32 GB RAM, Solaris 10u7, working with a raidz1
zpool made up of 6x 146 GB SAS drives on a J4200. Results from running your
script:

# zfs-cache-test.ksh pool2
zfs create pool2/zfscachetest
Creating data file set (6000 files of 8192000 bytes) under /pool2/zfscachetest 
...
Done!
zfs unmount pool2/zfscachetest
zfs mount pool2/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
96000512 blocks

real    5m32.58s
user    0m12.75s
sys     2m56.58s

Doing second 'cpio -C 131072 -o > /dev/null'
96000512 blocks

real    17m26.68s
user    0m12.97s
sys     4m34.33s

Feel free to clean up with 'zfs destroy pool2/zfscachetest'.
#

Same results as you are seeing: the second (cached) pass is roughly three times slower than the initial cold read.
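
For anyone reading along, here is a rough ksh sketch of the two-pass sequence the test appears to run, reconstructed only from the output above; the real zfs-cache-test.ksh may create and walk the file set differently, and the pool name, file count, and file size are simply those from this run:

#!/bin/ksh
# Rough reconstruction of the test sequence shown above; details of the
# real zfs-cache-test.ksh (file creation, traversal order) may differ.
POOL=${1:-pool2}
FS=$POOL/zfscachetest
DIR=/$FS

zfs create $FS

# Create the data set: 6000 files of 8192000 bytes each (roughly 46 GiB total).
typeset -i i=0
while (( i < 6000 )); do
    dd if=/dev/urandom of=$DIR/file$i bs=8192000 count=1 2>/dev/null
    (( i += 1 ))
done

# Unmount/remount so the first read pass starts with a cold ARC.
zfs unmount $FS
zfs mount $FS

cd $DIR
print "Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'"
time find . -type f | cpio -C 131072 -o > /dev/null

# Second pass reads the same files, which should now be cached in the ARC.
print "Doing second 'cpio -C 131072 -o > /dev/null'"
time find . -type f | cpio -C 131072 -o > /dev/null

print "Feel free to clean up with 'zfs destroy $FS'."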

Thanks Randy