I have no idea. I downloaded the script from Bob without modifications and ran it specifying only the name of our pool. Should I have changed something to run the test?

We have two kinds of X4500/X4540: those running Solaris 10 10/08, and two running snv_117 for ZFS quotas. Worth trying on both?

Lund

Ross wrote:
Jorgen,

Am I right in thinking the numbers here don't quite work? 48M blocks is just
9,000 files, isn't it, not 93,000?
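A quick way to sanity-check that arithmetic (a sketch only, assuming cpio is reporting its default 512-byte block unit; the script's actual flags, e.g. -B or -C, would change the unit):

```shell
# How many 8192000-byte files does the reported block count account for?
# Assumes cpio's default 512-byte block unit (an assumption, not verified
# against the script's actual cpio invocation).
blocks=48000247
bytes_per_file=8192000
echo $(( blocks * 512 / bytes_per_file ))   # prints 3000
```

Under that assumption the transcript accounts for roughly 3,000 files' worth of data; even with -B (5,120-byte blocks) it would be about 30,000, so either way the count looks far short of 93,000 files of this size.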

I'm asking because I had to repeat a test earlier: I edited the script with
vi, but when I ran it, it was still using the old parameters. I ignored that as
a one-off, but I'm wondering if your test has done something similar.

Ross


X4540 running snv_117

# ./zfs-cache-test.ksh zpool1
zfs create zpool1/zfscachetest
creating data file set 93000 files of 8192000 bytes
under /zpool1/zfscachetest ...
done
zfs unmount zpool1/zfscachetest
zfs mount zpool1/zfscachetest

doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks

real    4m7.13s
user    0m9.27s
sys     0m49.09s

doing second 'cpio -o > /dev/null'
48000247 blocks

real    4m52.52s
user    0m9.13s
sys     0m47.51s
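As a rough gauge of what those timings mean (again a sketch, assuming cpio's default 512-byte blocks and rounding the elapsed times to whole seconds):

```shell
# Approximate throughput of each cpio pass over the same data set.
# Assumes 512-byte cpio blocks; elapsed times rounded from the transcript.
bytes=$(( 48000247 * 512 ))
echo "first pass:  $(( bytes / 247 / 1000000 )) MB/s"   # real 4m07.13s
echo "second pass: $(( bytes / 293 / 1000000 )) MB/s"   # real 4m52.52s
```

That works out to roughly 99 MB/s cold versus 83 MB/s on the repeat: the second pass, which should benefit from the ARC, is actually slower than the first, which is presumably the behaviour this test script is meant to surface.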

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Jorgen Lundman       | <lund...@lundman.net>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)
