On Sep 27, 2009, at 8:49 AM, Paul Archer wrote:

Problem is that while it's back, the performance is horrible. It's resilvering at about 3.5MB/sec (according to iostat). At one point I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), and iostat showed the drive writing at only around 3.5MB/sec, *and* it showed reads of about the same 3.5MB/sec even during the dd.

This same hardware, and even the same zpool, has been run under Linux with zfs-fuse and under BSD, and with BSD at least, performance was much better: a complete resilver under BSD took 6 hours. Right now zpool is estimating this resilver will take 36 hours.

Could this be a driver problem? Something to do with the fact that this is a very old SATA card (LSI 150-6)? This is driving me crazy. I finally got my zpool working under Solaris so I'd have some stability, and now I've got no performance.
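
(For reference, the time estimate comes from zpool status, and the throughput numbers come from watching the disks with iostat, roughly like this:

        zpool status        (resilver progress and the time-to-go estimate)
        iostat -xn 5        (per-device throughput and service times)

The 5-second interval is arbitrary; any interval shows the same pattern.)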



It appears your controller is preventing ZFS from enabling the write cache.

I'm not familiar with that model; you will need to find a way to enable the drives' write cache manually.
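
One way to check and flip it by hand under Solaris, assuming the controller passes the cache commands through to the disks, is format's expert mode, roughly:

        format -e
        (select c7d0 from the disk list)
        format> cache
        cache> write_cache
        write_cache> display
        write_cache> enable

If the card hides the disks behind its own logic, those menu entries may be missing or the commands may fail, which would itself be a useful data point.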


My controller, while normally a full RAID controller, has had its BIOS turned off, so it's acting as a simple SATA controller. Plus, I'm seeing the same slow performance with dd, not just with ZFS. And I wouldn't think that write caching would make a difference when using dd (especially when writing from /dev/zero).
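
(One caveat on the dd comparison: dd defaults to 512-byte writes, and going through the buffered /dev/dsk path isn't the same as streaming to the raw disk. A fairer raw-throughput check would be something like the following, against the raw device with a large explicit block size; the device path here is only an example.)

        dd if=/dev/zero of=/dev/rdsk/c7d0p0 bs=1024k count=1024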

The other thing that's weird is the writes. I'm seeing writes in that 3.5MB/sec range during the resilver, *and* I was seeing the same thing during the dd. The output below is from the resilver, but again, the dd looked similar. c7d0 is the device in question:

   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   0.0  238.0    0.0  476.0  0.0  1.0    0.0    4.1   0  99 c12d1
  30.8   37.8 3302.4 3407.2 14.1  2.0  206.0   29.2 100 100 c7d0
  80.4    0.0 3417.6    0.0  0.3  0.3    3.3    3.2   8  14 c8d0
  80.4    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c9d0
  80.6    0.0 3417.6    0.0  0.3  0.3    3.4    3.2   9  14 c10d0
  80.6    0.0 3417.6    0.0  0.3  0.3    3.3    3.1   9  14 c11d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12t0d0

This is the bottleneck. 29.2 ms average service time is slow.
As you can see, this causes a backup in the queue, which is
seeing an average service time of 206 ms.
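
In round numbers: c7d0 is servicing 30.8 + 37.8 ≈ 69 I/Os per second at 29.2 ms apiece, which works out to about two I/Os in flight at all times (matching actv = 2.0) for only ~6.5 MB/s of combined read + write traffic. The other disks in the set (c8d0-c11d0) are turning their I/Os around in roughly 3 ms.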

The problem could be the disk itself or anything in the path
to that disk, including software.  But first, look for hardware
issues via
        iostat -E
        fmadm faulty
        fmdump -eV
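
In particular, nonzero Hard Errors or Transport Errors counts in the iostat -E summary, or a stream of recent disk/transport ereports in the fmdump -eV output, would point at the drive or the path to it; a clean bill of health there makes the controller driver the more likely suspect.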

 -- richard




Paul Archer
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
