I added a second LUN of identical size as a mirror and reran the test.
Results are more in line with yours now.
./zfs-cache-test.ksh test1
System Configuration: Sun Microsystems sun4u Sun SPARC Enterprise M3000 Server
System architecture: sparc
System release level: 5.10 Generic_139555-08
CPU ISA list: sparcv9+vis2 sparcv9+vis sparcv9 sparcv8plus+vis2 sparcv8plus+vis sparcv8plus sparcv8 sparcv8-fsmuld sparcv7 sparc
Pool configuration:
  pool: test1
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jul 15 11:38:54 2009
config:

        NAME                                       STATE     READ WRITE CKSUM
        test1                                      ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B80005622640000039B4A257E11d0  ONLINE       0     0     0
            c3t600A0B8000336DE2000004394A258B93d0  ONLINE       0     0     0

errors: No known data errors
zfs create test1/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /test1/zfscachetest ...
Done!
zfs unmount test1/zfscachetest
zfs mount test1/zfscachetest
Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real 3m25.13s
user 0m2.67s
sys 0m28.40s
Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real 8m53.05s
user 0m2.69s
sys 0m32.83s
Feel free to clean up with 'zfs destroy test1/zfscachetest'.
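(For scale, taking cpio's block count as 512-byte units: 48000256 blocks x 512 bytes is about 24.6 GB, so the first pass ran at roughly 120 MB/s (205 s) and the second at roughly 46 MB/s (533 s), about a 2.6X slowdown on re-read.)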
Scott Lawson wrote:
Bob,
Output of my run for you. System is an M3000 with 16 GB RAM and one zpool called test1, which is contained on a RAID 1 volume on a 6140 with 7.50.13.10 firmware on the RAID controllers. The RAID 1 volume is made up of two 146 GB 15K FC disks.
This machine is brand new with a clean install of S10 05/09. It is destined to become an Oracle 10 server with ZFS filesystems for zones and DB volumes.
[r...@xxx /]#> uname -a
SunOS xxx 5.10 Generic_139555-08 sun4u sparc SUNW,SPARC-Enterprise
[r...@xxx /]#> cat /etc/release
Solaris 10 5/09 s10s_u7wos_08 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 30 March 2009
[r...@xxx /]#> prtdiag -v | more
System Configuration: Sun Microsystems sun4u Sun SPARC Enterprise M3000 Server
System clock frequency: 1064 MHz
Memory size: 16384 Megabytes
Here is the run output for you.
[r...@xxx tmp]#> ./zfs-cache-test.ksh test1
zfs create test1/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /test1/zfscachetest ...
Done!
zfs unmount test1/zfscachetest
zfs mount test1/zfscachetest
Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks
real 4m48.94s
user 0m21.58s
sys 0m44.91s
Doing second 'cpio -o > /dev/null'
48000247 blocks
real 6m39.87s
user 0m21.62s
sys 0m46.20s
Feel free to clean up with 'zfs destroy test1/zfscachetest'.
Looks like about a 25% performance loss for me. I was seeing around 80 MB/s sustained on the first run and around 60 MB/s sustained on the second.
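(That squares with the block counts: 48000247 blocks x 512 bytes is about 24.6 GB, giving 24.6 GB / 289 s = ~85 MB/s for the first pass and 24.6 GB / 400 s = ~61 MB/s for the second.)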
/Scott.
Bob Friesenhahn wrote:
There has been no forward progress on the ZFS read performance issue
for a week now. A 4X reduction in file read performance due to
having read the file before is terrible, and of course the situation
is considerably worse if the file was previously mmapped as well.
Many of us have sent a lot of money to Sun and were not aware that
ZFS is sucking the life out of our expensive Sun hardware.
It is trivially easy to reproduce this problem on multiple machines.
For example, I reproduced it on my Blade 2500 (SPARC) which uses a
simple mirrored rpool. On that system there is a 1.8X read slowdown
from the file being accessed previously.
In order to raise visibility of this issue, I invite others to see if
they can reproduce it in their ZFS pools. The script at
http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh
implements a simple test. It requires a fair amount of disk space to
run, but the main requirement is that the disk space consumed be more
than available memory so that file data gets purged from the ARC. The
script needs to run as root since it creates a filesystem and uses
mount/umount. The script does not destroy any data.
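In outline, the test amounts to something like this (a rough sketch for illustration only, not the actual script; the variable names and the use of /dev/urandom as the data source are my guesses -- fetch the real thing from the URL above):

    #!/bin/ksh
    # Sketch of the cache test: write more file data than fits in RAM,
    # then read it all twice and compare cold-cache vs. re-read times.
    POOL=${1:-rpool}                # pool under test, default rpool
    FS=$POOL/zfscachetest
    zfs create $FS
    i=0
    while [ $i -lt 3000 ]; do       # 3000 files of 8192000 bytes each
        dd if=/dev/urandom of=/$FS/file.$i bs=8192 count=1000 2>/dev/null
        i=$((i+1))
    done
    zfs unmount $FS                 # remount so the first pass starts
    zfs mount $FS                   # with a cold cache
    cd /$FS
    time find . -type f | cpio -o > /dev/null   # initial (cold) read
    time find . -type f | cpio -o > /dev/null   # second (previously read)

The point of the remount is to invalidate any cached file data, so the first pass measures pure disk throughput and the second pass measures the cost of re-reading data the ARC has already seen.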
There are several adjustments which may be made at the front of the
script. The pool 'rpool' is used by default, but the name of the
pool to test may be supplied via an argument similar to:
# ./zfs-cache-test.ksh Sun_2540
zfs create Sun_2540/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /Sun_2540/zfscachetest ...
Done!
zfs unmount Sun_2540/zfscachetest
zfs mount Sun_2540/zfscachetest
Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks
real 2m54.17s
user 0m7.65s
sys 0m36.59s
Doing second 'cpio -o > /dev/null'
48000247 blocks
real 11m54.65s
user 0m7.70s
sys 0m35.06s
Feel free to clean up with 'zfs destroy Sun_2540/zfscachetest'.
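(That is 2m54s for the cold pass versus 11m55s for the re-read: the 4X reduction mentioned above.)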
And here is a similar run on my Blade 2500 using the default rpool:
# ./zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (3000 files of 8192000 bytes) under /rpool/zfscachetest ...
Done!
zfs unmount rpool/zfscachetest
zfs mount rpool/zfscachetest
Doing initial (unmount/mount) 'cpio -o > /dev/null'
48000247 blocks
real 13m3.91s
user 2m43.04s
sys 9m28.73s
Doing second 'cpio -o > /dev/null'
48000247 blocks
real 23m50.27s
user 2m41.81s
sys 9m46.76s
Feel free to clean up with 'zfs destroy rpool/zfscachetest'.
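(Here it is 13m04s versus 23m50s, the 1.8X slowdown mentioned above.)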
I am interested to hear about systems which do not suffer from this bug.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us,
http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
--
_________________________________________________________________________
Scott Lawson
Systems Architect
Information Communication Technology Services
Manukau Institute of Technology
Private Bag 94006
South Auckland Mail Centre
Manukau 2240
Auckland
New Zealand
Phone : +64 09 968 7611
Fax : +64 09 968 7641
Mobile : +64 27 568 7611
mailto:sc...@manukau.ac.nz
http://www.manukau.ac.nz
__________________________________________________________________________
perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
__________________________________________________________________________
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss