Hi,
Here is the result on a Dell Precision T5500 with 24 GB of RAM and two hard disks in a mirror (SATA, 7200 rpm, NCQ).
[glehm...@marvin2 tmp]$ uname -a
SunOS marvin2 5.11 snv_117 i86pc i386 i86pc Solaris

[glehm...@marvin2 tmp]$ pfexec ./zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /rpool/zfscachetest ...
Done!
zfs unmount rpool/zfscachetest
zfs mount rpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks

real    8m19,74s
user    0m6,47s
sys     0m25,32s

Doing second 'cpio -o > /dev/null'
48000247 blocks

real    10m42,68s
user    0m8,35s
sys     0m30,93s

Feel free to clean up with 'zfs destroy rpool/zfscachetest'.

HTH,

Gaëtan
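For scale, a rough conversion of the run above: the 48000247-block figure is consistent with cpio's 512-byte block accounting (48000247 x 512 bytes is roughly 24.6 GB, i.e. the 3000 x 8192000-byte file set plus headers), which works out to about 49 MB/s on the cold pass and about 38 MB/s on the cached pass, roughly 1.3x slower. Something like this does the arithmetic (elapsed times converted to seconds):

  # 8m19,74s = 499.74 s (cold), 10m42,68s = 642.68 s (cached)
  echo "scale=1; 48000247*512/1000000/499.74" | bc    # ~49 MB/s
  echo "scale=1; 48000247*512/1000000/642.68" | bc    # ~38 MB/s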
On 13 Jul 2009, at 01:15, Scott Lawson wrote:

Bob,

Output of my run for you. The system is an M3000 with 16 GB RAM and one zpool called test1, which is contained on a RAID 1 volume on a 6140 with 7.50.13.10 firmware on the RAID controllers. The RAID 1 is made up of two 146 GB 15K FC disks.

This machine is brand new with a clean install of S10 05/09. It is destined to become an Oracle 10 server with ZFS filesystems for zones and DB volumes.

[r...@xxx /]#> uname -a
SunOS xxx 5.10 Generic_139555-08 sun4u sparc SUNW,SPARC-Enterprise

[r...@xxx /]#> cat /etc/release
Solaris 10 5/09 s10s_u7wos_08 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 30 March 2009

[r...@xxx /]#> prtdiag -v | more
System Configuration: Sun Microsystems sun4u Sun SPARC Enterprise M3000 Server
System clock frequency: 1064 MHz
Memory size: 16384 Megabytes

Here is the run output for you.

[r...@xxx tmp]#> ./zfs-cache-test.ksh test1
zfs create test1/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /test1/zfscachetest ...
Done!
zfs unmount test1/zfscachetest
zfs mount test1/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks

real    4m48.94s
user    0m21.58s
sys     0m44.91s

Doing second 'cpio -o > /dev/null'
48000247 blocks

real    6m39.87s
user    0m21.62s
sys     0m46.20s

Feel free to clean up with 'zfs destroy test1/zfscachetest'.

Looks like a 25% performance loss for me. I was seeing around 80 MB/s sustained on the first run and around 60 MB/s sustained on the second.

/Scott.

Bob Friesenhahn wrote:

There has been no forward progress on the ZFS read performance issue for a week now. A 4X reduction in file read performance due to having read the file before is terrible, and of course the situation is considerably worse if the file was previously mmapped as well. Many of us have sent a lot of money to Sun and were not aware that ZFS is sucking the life out of our expensive Sun hardware.

It is trivially easy to reproduce this problem on multiple machines. For example, I reproduced it on my Blade 2500 (SPARC), which uses a simple mirrored rpool. On that system there is a 1.8X read slowdown from the file being accessed previously.

In order to raise visibility of this issue, I invite others to see if they can reproduce it in their ZFS pools. The script at

  http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh

implements a simple test. It requires a fair amount of disk space to run, but the main requirement is that the disk space consumed be more than available memory so that file data gets purged from the ARC. The script needs to run as root since it creates a filesystem and uses mount/umount. The script does not destroy any data. (A rough sketch of the script's core steps appears further below, after the quoted output.)

There are several adjustments which may be made at the front of the script. The pool 'rpool' is used by default, but the name of the pool to test may be supplied via an argument similar to:

# ./zfs-cache-test.ksh Sun_2540
zfs create Sun_2540/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /Sun_2540/zfscachetest ...
Done!
zfs unmount Sun_2540/zfscachetest
zfs mount Sun_2540/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks

real    2m54.17s
user    0m7.65s
sys     0m36.59s

Doing second 'cpio -o > /dev/null'
48000247 blocks

real    11m54.65s
user    0m7.70s
sys     0m35.06s

Feel free to clean up with 'zfs destroy Sun_2540/zfscachetest'.

And here is a similar run on my Blade 2500 using the default rpool:

# ./zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /rpool/zfscachetest ...
Done!
zfs unmount rpool/zfscachetest
zfs mount rpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks

real    13m3.91s
user    2m43.04s
sys     9m28.73s

Doing second 'cpio -o > /dev/null'
48000247 blocks

real    23m50.27s
user    2m41.81s
sys     9m46.76s

Feel free to clean up with 'zfs destroy rpool/zfscachetest'.

I am interested to hear about systems which do not suffer from this bug.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
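For anyone who just wants the gist without fetching the script, here is a minimal sketch of what its core steps look like, pieced together from the output quoted above. The variable names, the dd-based file creation, and the use of /dev/urandom as the data source are assumptions on my part; the actual zfs-cache-test.ksh at the URL above is the authoritative version and keeps its tunables at the front.

  #!/bin/ksh
  # Minimal sketch of zfs-cache-test.ksh's core steps -- not the real script.
  POOL=${1:-rpool}              # pool to test; 'rpool' by default
  FS=$POOL/zfscachetest
  NFILES=3000                   # number of files (adjustable in the real script)
  FILESIZE=8192000              # bytes per file; total must exceed RAM

  zfs create $FS || exit 1
  MNT=$(zfs get -H -o value mountpoint $FS)

  print "Creating data file set ($NFILES files of $FILESIZE bytes) under $MNT ..."
  i=0
  while (( i < NFILES )); do
      dd if=/dev/urandom of=$MNT/file.$i bs=$FILESIZE count=1 >/dev/null 2>&1
      (( i += 1 ))
  done
  print "Done!"

  # Unmount and remount so the first pass starts with a cold ARC.
  zfs unmount $FS
  zfs mount $FS
  cd $MNT || exit 1

  print "Doing initial (unmount/mount) 'cpio -o > /dev/null'"
  time find . -type f | cpio -o > /dev/null

  # Second pass re-reads the same files; since the file set exceeds RAM,
  # it should run at roughly the same speed as the first pass.
  print "Doing second 'cpio -o > /dev/null'"
  time find . -type f | cpio -o > /dev/null

  print "Feel free to clean up with 'zfs destroy $FS'."

The unmount/mount is what guarantees the first cpio pass reads from disk rather than from the ARC, and the file set is sized to exceed RAM so the second pass cannot be served from cache either; ideally the two passes would take about the same time, and the large slowdowns reported in this thread are the behaviour being demonstrated.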
--
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
http://voxel.jouy.inra.fr    http://www.itk.org
http://www.mandriva.org    http://www.bepo.fr
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss