On Tue, Jul 14, 2009 at 11:09:32AM -0500, Bob Friesenhahn wrote:
> On Tue, 14 Jul 2009, Jorgen Lundman wrote:
>
>> I have no idea. I downloaded the script from Bob without modifications 
>> and ran it specifying only the name of our pool. Should I have changed 
>> something to run the test?
>
> If your system has quite a lot of memory, the number of files should be 
> increased to at least match the amount of memory.
>
>> We have two kinds of x4500/x4540, those with Sol 10 10/08, and 2 
>> running svn117 for ZFS quotas. Worth trying on both?
>
> It is useful to test as much as possible in order to fully understand  
> the situation.
>
> Since results often get posted without system details, the script is  
> updated to dump some system info and the pool configuration.  Refresh  
> from
>
> http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Whitebox quad-core Phenom, 8 GB RAM, RAID-Z (3x1TB + 3x1.5TB) SATA drives via an AOC-USAS-L8i:

System Configuration: Gigabyte Technology Co., Ltd. GA-MA770-DS3
System architecture: i386
System release level: 5.11 snv_111b
CPU ISA list: amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86

Pool configuration:
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3t7d0  ONLINE       0     0     0
            c3t6d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0

errors: No known data errors

zfs create pool/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /pool/zfscachetest ...
Done!
zfs unmount pool/zfscachetest
zfs mount pool/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    4m59.33s
user    0m21.83s
sys     2m56.05s

Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    8m28.11s
user    0m22.66s
sys     3m13.26s

Feel free to clean up with 'zfs destroy pool/zfscachetest'.
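
For what it's worth, the timings above work out like this (a quick back-of-the-envelope sketch; I'm assuming cpio's "blocks" count is in 512-byte units, which lines up with the 3000 x 8192000-byte file set the script creates):

```python
# Rough throughput math for the two cpio passes reported above.
# Assumption: cpio reports 512-byte blocks; 48000256 * 512 ~= 24.58 GB,
# which matches 3000 files * 8192000 bytes plus a little header overhead.

BLOCK_BYTES = 512
total_bytes = 48000256 * BLOCK_BYTES           # ~24.58 GB read per pass

first_pass_s = 4 * 60 + 59.33                  # real 4m59.33s (cold, after remount)
second_pass_s = 8 * 60 + 28.11                 # real 8m28.11s (second read)

first_mb_s = total_bytes / first_pass_s / 1e6
second_mb_s = total_bytes / second_pass_s / 1e6

print(f"first (cold) pass:  {first_mb_s:.0f} MB/s")   # ~82 MB/s
print(f"second pass:        {second_mb_s:.0f} MB/s")  # ~48 MB/s
print(f"slowdown:           {second_pass_s / first_pass_s:.2f}x")
```

So the second pass, which one would hope to benefit from the ARC, is actually about 1.7x slower here (~48 MB/s vs ~82 MB/s cold), which is the behavior the script is meant to expose.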
