Cross-posting to zfs-discuss.

By my math, here's what you're getting:

4.6MB/sec on writes to ZFS.
2.2MB/sec on reads from ZFS.
90MB/sec on reads from the block device.


What is c0t1d0 - I assume it's a hardware RAID LUN,
but how many disks, and what type of LUN?

What version of Solaris (cat /etc/release)?

Please send "zpool status" output.

I'm not yet sure what's broken here, but there's something
pathologically wrong with the I/O rates to the device
during the ZFS tests. In both cases, the wait queue is
getting backed up, with horrific wait-queue latency numbers.
On the read side, I don't understand why we're seeing
4-5 seconds of zero disk activity in between bursts of
a small number of reads.

I just did a quick test on an X4600 (older 8 socket AMD box),
running Solaris nv103. Single disk ZFS.

For writes:
# ptime dd if=/dev/urandom of=/tzp/TESTFILE bs=1024k count=512
512+0 records in
512+0 records out

real       11.869
user        0.001
sys        11.861
#
# bc -l
(1024*1024*512) / 11.9
45115202.68907563025210084033

So that's 45MB/sec on the write. Did an unmount/mount
of the ZFS, and then the read:

# ptime dd if=/tzp/TESTFILE of=/dev/null bs=1024k
512+0 records in
512+0 records out

real        2.696
user        0.000
sys         0.411
# bc -l
(1024*1024*512) / 2.69
199580264.68401486988847583643


So that's about 200MB/sec on the read. I did this several times,
with unmounts/mounts in between, to make sure I could
replicate the numbers.
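
For reference, the same throughput arithmetic in one shot (awk instead
of bc, using the full ptime "real" values rather than the rounded ones):

```shell
# Bytes moved by each dd run: 512 records of 1MB each.
# Divide by the ptime "real" values above; results are decimal MB/sec.
awk 'BEGIN {
    bytes = 1024 * 1024 * 512
    printf "write: %.1f MB/sec\n", bytes / 11.869 / 1e6
    printf "read:  %.1f MB/sec\n", bytes / 2.696  / 1e6
}'
```

That prints roughly 45.2 for the write and 199.1 for the read, matching
the bc numbers above.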


Do me a favor - capture "kstat -n arcstats" before the tests,
after the write test, and after the read test.
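
To eyeball what moved between snapshots, something like this works
(just a sketch - "arcdiff" and the arc.before/arc.after file names are
hypothetical, and it assumes kstat's usual name/value column layout):

```shell
# On the test box (Solaris):
#   kstat -n arcstats > arc.before     # before the tests
#   kstat -n arcstats > arc.after      # after a test
#
# Print any counter whose value changed between the two snapshots.
arcdiff() {
    awk 'NR == FNR { before[$1] = $2; next }
         ($1 in before) && ($2 != before[$1]) {
             printf "%-24s %s -> %s\n", $1, before[$1], $2
         }' "$1" "$2"
}
```

Then "arcdiff arc.before arc.after" should show which ARC counters
(hits, misses, size, etc.) moved during a given test.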

Sorry - I need to think about this a bit more.
Something is seriously broken, but I'm not yet
sure what it is. Unless you're running an older
Solaris version, and/or missing patches.

Thanks,
/jim




_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
