On Tue, Jul 14, 2009 at 08:54:36AM +0200, Ross wrote:
> Ok, build 117 does seem a lot better.  The second run is slower,
> but not by such a huge margin.
Hm, I can't confirm this on my system:

SunOS fred 5.11 snv_117 sun4u sparc SUNW,Sun-Fire-V440
The system has 16GB of Ram, pool is mirrored over two FUJITSU-MBA3147NC.

>-1007: sudo ksh zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (4000 files of 8192000 bytes) under /rpool/zfscachetest 
...
Done!
zfs unmount rpool/zfscachetest
zfs mount rpool/zfscachetest

Doing initial (unmount/mount) 'tar to /dev/null'

real    5m12.61s
user    0m0.30s
sys     1m28.36s

Doing second 'tar to /dev/null'

real    11m13.93s
user    0m0.22s
sys     1m37.41s

Feel free to clean up with 'zfs destroy rpool/zfscachetest'.
 user=2.32 sec, sys=343.41 sec, elapsed=23:39.41 min, cpu use=24.3%

And here's what arcstat.pl has to say when starting the second read:

    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c  
11:53:26   11K   895      7    41    0   854  100    13  100    13G   13G  
11:53:27   12K   832      6    39    0   793  100    13  100    13G   13G  
11:53:28   11K   832      7    39    0   793  100    13  100    13G   13G  
11:53:29   11K   832      7    39    0   793  100    13   76    13G   13G  
11:53:30   12K   896      7    42    0   854  100    14  100    13G   13G  
11:53:31   11K   832      7    39    0   793  100    13  100    13G   13G  
11:53:32   11K   768      6    36    0   732  100    12  100    13G   13G  
11:53:33   11K   832      7    39    0   793  100    13  100    13G   13G  
11:53:34    7K   497      7   253    3   244   99     4   11    13G   13G  
11:53:35    5K   385      7   385    7     0    0     0    0    13G   13G  
11:53:36    5K   374      7   374    7     0    0     0    0    13G   13G  
11:53:37    5K   368      7   368    7     0    0     0    0    13G   13G  
11:53:38    4K   340      7   340    7     0    0     0    0    13G   13G  
11:53:39    5K   383      7   383    7     0    0     0    0    13G   13G  
11:53:40    5K   406      7   406    7     0    0     0    0    13G   13G  
11:53:41    4K   360      7   360    7     0    0     0    0    13G   13G  
11:53:42    4K   328      7   328    7     0    0     0    0    13G   13G  
11:53:43    4K   346      7   346    7     0    0     0    0    13G   13G  
11:53:44    4K   346      7   346    7     0    0     0    0    13G   13G  
11:53:45    4K   319      7   319    7     0    0     0    0    13G   13G  
11:53:47    4K   337      7   337    7     0    0     0    0    13G   13G  

I used tar in this run instead of cpio, just to give it a try...
[time (find . -type f | xargs -i tar cf /dev/null {} )]
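Note that the two runs differ in command shape: the tar run above spawns one tar process per file via xargs, while Bob's script streams the whole file set through a single cpio with a large block size. A minimal sketch of both patterns on a throwaway directory (using the portable `xargs -I{}` spelling in place of the GNU `-i` shorthand; file names and sizes here are just placeholders):

```shell
#!/bin/sh
# Sketch of the two read patterns compared in this thread.
set -e
dir=$(mktemp -d)
for i in 1 2 3; do
    dd if=/dev/zero of="$dir/f$i" bs=1024 count=8 2>/dev/null
done
cd "$dir"

# Pattern 1 (the tar run): one tar invocation per file.
find . -type f | xargs -I{} tar cf /dev/null {}

# Pattern 2 (Bob's script): a single cpio stream, 128 KiB I/O size.
if command -v cpio >/dev/null 2>&1; then
    find . -type f | cpio -o -C 131072 > /dev/null 2>&1
fi

cd /
rm -rf "$dir"
```

The per-file tar variant pays fork/exec overhead for every file, so its user/sys split is not directly comparable with the cpio run, though as the timings above show, the cache behaviour dominates either way.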

Another run with Bob's new script (rpool/zfscachetest was not destroyed before
this run, so the file-creation step is skipped and the wall-clock time below is lower):

>-1008: sudo ksh zfs-cache-test.ksh.1
zfs unmount rpool/zfscachetest
zfs mount rpool/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
64000512 blocks

real    4m40.25s
user    0m7.96s
sys     1m28.62s

Doing second 'cpio -C 131072 -o > /dev/null'
64000512 blocks

real    11m0.08s
user    0m7.37s
sys     1m38.58s

Feel free to clean up with 'zfs destroy rpool/zfscachetest'.
 user=15.35 sec, sys=187.87 sec, elapsed=15:43.65 min, cpu use=21.5%

Not much difference from the "tar" run...

Kurt
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
