On Sep 27, 2009, at 11:49 AM, Paul Archer <p...@paularcher.org> wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some
point, I was zeroing out the drive (with
'dd if=/dev/zero of=/dev/dsk/c7d0'), and iostat showed me that the
drive was only writing at around 3.5MB/sec. *And* it showed reads of
about the same 3.5MB/sec even during the dd.
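For reference, a minimal version of that write test with an explicit
block size and a fixed amount of data so it can be timed (the bs and
count values below are placeholders, not what was actually run):

   time dd if=/dev/zero of=/dev/dsk/c7d0 bs=1024k count=1024   # destructive, as the zeroing above was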
This same hardware and even the same zpool have been run under Linux
with zfs-fuse and under BSD, and with BSD at least, performance was
much better. A complete resilver under BSD took 6 hours. Right now
zpool is estimating this resilver to take 36 hours.
Could this be a driver problem? Something to do with the fact that
this is a very old SATA card (LSI 150-6)?
This is driving me crazy. I finally got my zpool working under
Solaris so I'd have some stability, and I've got no performance.
It appears your controller is preventing ZFS from enabling the write
cache.
I'm not familiar with that model. You will need to find a way to
enable the drives' write cache manually.
My controller, while normally a full RAID controller, has had its
BIOS turned off, so it's acting as a simple SATA controller. Plus,
I'm seeing this same slow performance with dd, not just with ZFS.
And I wouldn't think that write caching would make a difference when
using dd (especially with /dev/zero as the input).
I don't think you got what I said. Because the controller normally
runs as a RAID controller, it controls the SATA drives' on-board
write cache, and it may not allow the OS to enable or disable that
cache.
Using 'dd' to the raw disk will also show the same poor performance if
the HD on-board write-cache is disabled.
The other thing that's weird is the writes. I am seeing writes in
that 3.5MB/sec range during the resilver, *and* I was seeing the
same thing during the dd.
Was the 'dd' to the raw disk? Either way, it shows the HDs aren't set
up properly.
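As a sketch, the raw-device version of that test would go through
/dev/rdsk rather than /dev/dsk; the exact whole-disk node name
(p0 here) is an assumption and depends on the disk's label:

   time dd if=/dev/zero of=/dev/rdsk/c7d0p0 bs=1024k count=1024   # destructive; device path assumed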
This is from the resilver, but again, the dd was similar. c7d0 is
the device in question:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
   30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
   80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
   80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0
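That column layout matches Solaris 'iostat -xn' extended device
statistics; a command along these lines, with an arbitrary 5-second
interval, produces it:

   iostat -xn 5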
Try using 'format -e' on the drives: go into 'cache', then
'write-cache', and display the current state. You can try to manually
enable it from there.
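For reference, a sketch of that sequence on one drive; the menu item
names can vary by release (it appears as 'write_cache' on some):

   # format -e c7d0            (or pick the disk from format's menu)
   format> cache
   cache> write_cache
   write_cache> display
   write_cache> enable         (if display reports the cache is disabled)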
-Ross