On Mon, Jul 13, 2009 at 9:34 AM, Bob
Friesenhahn<bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 13 Jul 2009, Alexander Skwar wrote:
>>
>> Still on S10 U7 Sparc M4000.
>>
>> So I'm now inline with the other results - the 2nd run is WAY slower. 4x
>> as slow.
>
> It would be good to see results from a few OpenSolaris users running a
> recent 64-bit kernel, and with fast storage to see if this is an OpenSolaris
> issue as well.

Indeed it is.  Using ldoms with tmpfs as the backing store for the virtual
disks, I see the results below.
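
The virtual disk setup was roughly along these lines (a sketch only; the
file size, volume name, and guest domain name "ldg1" are assumptions, not
the exact commands used):

  (on the control domain - a file in tmpfs backs the virtual disk)
  # mkfile 20g /tmp/vdisk0.img
  # ldm add-vdsdev /tmp/vdisk0.img vdisk0@primary-vds0
  # ldm add-vdisk vdisk0 vdisk0@primary-vds0 ldg1

  (in the guest - build the test pool on the new disk)
  # zpool create testpool c0d1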

With S10u7:

# ./zfs-cache-test.ksh testpool
zfs create testpool/zfscachetest
Creating data file set (300 files of 8192000 bytes) under
/testpool/zfscachetest ...
Done!
zfs unmount testpool/zfscachetest
zfs mount testpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
4800025 blocks

real    0m30.35s
user    0m9.90s
sys     0m19.81s

Doing second 'cpio -o > /dev/null'
4800025 blocks

real    0m43.95s
user    0m9.67s
sys     0m17.96s

Feel free to clean up with 'zfs destroy testpool/zfscachetest'.

# ./zfs-cache-test.ksh testpool
zfs unmount testpool/zfscachetest
zfs mount testpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
4800025 blocks

real    0m31.14s
user    0m10.09s
sys     0m20.47s

Doing second 'cpio -o > /dev/null'
4800025 blocks

real    0m40.24s
user    0m9.68s
sys     0m17.86s

Feel free to clean up with 'zfs destroy testpool/zfscachetest'.


When I move the zpool to a 2009.06 ldom, I see the results below.
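
The move itself amounted to something like the following (a sketch; the
guest domain names ldg1/ldg2 and the volume name are assumptions, and the
guest may need a reboot for the vdisk change to take effect):

  (in the S10u7 guest)
  # zpool export testpool

  (on the control domain - hand the same backing store to the 2009.06 guest)
  # ldm remove-vdisk vdisk0 ldg1
  # ldm add-vdisk vdisk0 vdisk0@primary-vds0 ldg2

  (in the 2009.06 guest)
  # zpool import testpool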

# /var/tmp/zfs-cache-test.ksh testpool
zfs create testpool/zfscachetest
Creating data file set (300 files of 8192000 bytes) under
/testpool/zfscachetest ...
Done!
zfs unmount testpool/zfscachetest
zfs mount testpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
4800025 blocks

real    0m30.09s
user    0m9.58s
sys     0m19.83s

Doing second 'cpio -o > /dev/null'
4800025 blocks

real    0m44.21s
user    0m9.47s
sys     0m18.18s

Feel free to clean up with 'zfs destroy testpool/zfscachetest'.

# /var/tmp/zfs-cache-test.ksh testpool
zfs unmount testpool/zfscachetest
zfs mount testpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
4800025 blocks

real    0m29.89s
user    0m9.58s
sys     0m19.72s

Doing second 'cpio -o > /dev/null'
4800025 blocks

real    0m44.40s
user    0m9.59s
sys     0m18.24s

Feel free to clean up with 'zfs destroy testpool/zfscachetest'.

Notice that in these runs the user+sys time of the first cpio run adds up
to roughly the elapsed time - the rate was choked by CPU, which
"prstat -mL" confirms.  The second run appears to be slow because of a
lock: we just demonstrated that the I/O path can deliver more (so it is
not an I/O bottleneck), and "prstat -mL" shows cpio sleeping for a
significant amount of time.
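
The prstat invocation was something along these lines, run while the test
is going (a sketch; the -p list and the 5 second interval are arbitrary
choices, only -mL matters):

  # prstat -mL -p $(pgrep cpio) 5

During the first run the USR+SYS columns account for most of the time;
during the second run the SLP column is large, which is what points at a
lock rather than the I/O path.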

FWIW, I hit another bug if I turn off primarycache.

http://defect.opensolaris.org/bz/show_bug.cgi?id=10004

This causes really abysmal performance - but equally so for repeat runs!
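
For reference, caching was turned off with something like this before the
run below (the dataset name is the one the test script creates), and can
be restored afterwards with 'zfs inherit primarycache':

  # zfs set primarycache=none testpool/zfscachetest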

# /var/tmp/zfs-cache-test.ksh testpool
zfs unmount testpool/zfscachetest
zfs mount testpool/zfscachetest

Doing initial (unmount/mount) 'cpio -o > /dev/null'
4800025 blocks

real    4m21.57s
user    0m9.72s
sys     0m36.30s

Doing second 'cpio -o > /dev/null'
4800025 blocks

real    4m21.56s
user    0m9.72s
sys     0m36.19s

Feel free to clean up with 'zfs destroy testpool/zfscachetest'.


That bug report contains more detail about the configuration.  One thing
it does not cover is that the S10u7 ldom has 2048 MB of RAM and the
2009.06 ldom has 2024 MB of RAM.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/