Bob,

On Sun, Jul 12, 2009 at 23:38, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> There has been no forward progress on the ZFS read performance issue for a
> week now. A 4X reduction in file read performance due to having read the
> file before is terrible, and of course the situation is considerably worse
> if the file was previously mmapped as well. Many of us have sent a lot of
> money to Sun and were not aware that ZFS is sucking the life out of our
> expensive Sun hardware.
>
> It is trivially easy to reproduce this problem on multiple machines. For
> example, I reproduced it on my Blade 2500 (SPARC), which uses a simple
> mirrored rpool. On that system there is a 1.8X read slowdown from the file
> being accessed previously.
>
> In order to raise visibility of this issue, I invite others to see if they
> can reproduce it in their ZFS pools. The script at
>
> http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh
>
> implements a simple test.
--($ ~)-- time sudo ksh zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /rpool/zfscachetest ...
Done!
zfs unmount rpool/zfscachetest
zfs mount rpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks

real    4m7.70s
user    0m24.10s
sys     1m5.99s

Doing second 'cpio -o > /dev/null'
48000247 blocks

real    1m44.88s
user    0m22.26s
sys     0m51.56s

Feel free to clean up with 'zfs destroy rpool/zfscachetest'.

real    10m47.747s
user    0m54.189s
sys     3m22.039s

This is an M4000 with 32 GB RAM and two HDs in a mirror.

Alexander

-- 
[[ http://zensursula.net ]]
[ Soc. => http://twitter.com/alexs77 | http://www.plurk.com/alexs77 ]
[ More => http://zyb.com/alexws77 ]
[ Chat => Jabber: alexw...@jabber80.com | Google Talk: a.sk...@gmail.com ]
[ More => AIM: alexws77 ]
[ $[ $RANDOM % 6 ] = 0 ] && rm -rf / || echo 'CLICK!'

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
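P.S. For anyone comparing their two cpio passes: a quick way to turn the 'real' times into a ratio, sketched below with the numbers from my run hardcoded (the to_secs helper is just my own throwaway, not part of Bob's script):

```shell
#!/bin/sh
# Convert a time(1)-style "MmSS.SSs" value to seconds.
to_secs() {
    # e.g. "4m7.70s" -> 247.7
    echo "$1" | awk -Fm '{ sub(/s$/, "", $2); print $1 * 60 + $2 }'
}

cold=$(to_secs "4m7.70s")      # first (unmount/mount) pass
cached=$(to_secs "1m44.88s")   # second pass

# Ratio > 1 means the second pass was faster; < 1 would show Bob's slowdown.
awk -v a="$cold" -v b="$cached" 'BEGIN { printf "speedup: %.2fx\n", a / b }'
```

So on this box the second read was roughly 2.4x faster, i.e. the cache helped here rather than hurting.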