So, after *much* wrangling, I managed to take one of my drives offline, relabel/repartition it (because I saw that the first sector was 34, not 256, and realized there could be an alignment issue), and get it back into the pool.
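(In case anyone wants the gory details, the sequence was roughly the following. The device name is just an example and the format step is from memory, so treat it as a sketch rather than a recipe:

   zpool offline datapool c6d0s0    # take the affected disk out of service
   format -e c6d0                   # relabel; under 'partition', set slice 0 to start on an aligned sector
   zpool replace datapool c6d0s0    # put it back in, which kicks off the resilver
)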

Problem is that while it's back, the performance is horrible. It's resilvering at about 3.5MB/sec (according to iostat). And at one point, while I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), iostat showed the drive writing at only around 3.5MB/sec. *And* it showed reads of about the same 3.5MB/sec even during the dd.
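(A raw-device run with a bigger block size would take the /dev/dsk block layer and dd's 512-byte default out of the picture; something like the following, assuming the p0 whole-disk node exists for this controller, and obviously triple-checking the target before zeroing anything:

   dd if=/dev/zero of=/dev/rdsk/c7d0p0 bs=1024k count=1024
   iostat -xn 5      # in another window, to watch kw/s
)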

This same hardware, and even this same zpool, has been run under Linux with zfs-fuse and under BSD, and with BSD at least, performance was much better. A complete resilver under BSD took 6 hours; right now zpool is estimating this one will take 36 hours.
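(For anyone who wants to double-check my numbers, the resilver rate and estimate are easy enough to watch with something like:

   zpool iostat -v datapool 5    # per-disk bandwidth during the resilver
   zpool status datapool         # percent done and the time estimate
)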

Could this be a driver problem? Something to do with the fact that this is a very old SATA card (LSI 150-6)?
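(I'm not sure how to rule the driver in or out; about all I know to check is which driver the card is bound to and whether the disks are logging errors, along the lines of:

   prtconf -D     # look for what driver the SATA controller is bound to
   iostat -En     # per-device error counts, plus model/firmware info
   cfgadm -al     # whether the ports even show up as sata attachment points
)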

This is driving me crazy. I finally got my zpool working under Solaris so I'd have some stability, and I've got no performance.

Paul Archer



On Friday, Paul Archer wrote:

Since I got my ZFS pool working under Solaris (I talked on this list last week about moving it from Linux & BSD to Solaris, and the pain that was), I'm seeing very good reads, but nada for writes.

Reads:

r...@shebop:/data/dvds# rsync -aP young_frankenstein.iso /tmp
sending incremental file list
young_frankenstein.iso
^C1032421376  20%   86.23MB/s    0:00:44

Writes:

r...@shebop:/data/dvds# rsync -aP /tmp/young_frankenstein.iso yf.iso
sending incremental file list
young_frankenstein.iso
^C  68976640   6%    2.50MB/s    0:06:42


This is pretty typical of what I'm seeing.
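(A plain dd into the filesystem would take rsync out of the picture entirely; something like this, with the filename just a scratch file to delete afterward:

   dd if=/dev/zero of=/data/dvds/ddtest.img bs=1024k count=1024
   rm /data/dvds/ddtest.img
)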


r...@shebop:/data/dvds# zpool status -v
 pool: datapool
state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
       still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
       pool will no longer be accessible on older software versions.
scrub: none requested
config:

       NAME        STATE     READ WRITE CKSUM
       datapool    ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c2d0s0  ONLINE       0     0     0
           c3d0s0  ONLINE       0     0     0
           c4d0s0  ONLINE       0     0     0
           c6d0s0  ONLINE       0     0     0
           c5d0s0  ONLINE       0     0     0

errors: No known data errors

 pool: syspool
state: ONLINE
scrub: none requested
config:

       NAME        STATE     READ WRITE CKSUM
       syspool     ONLINE       0     0     0
         c0d1s0    ONLINE       0     0     0

errors: No known data errors

(This is while running an rsync from a remote machine to a ZFS filesystem)
r...@shebop:/data/dvds# iostat -xn 5
                   extended device statistics
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  11.1    4.8  395.8  275.9  5.8  0.1  364.7    4.3   2   5 c0d1
   9.8   10.9  514.3  346.4  6.8  1.4  329.7   66.7  68  70 c5d0
   9.8   10.9  516.6  346.4  6.7  1.4  323.1   66.2  67  70 c6d0
   9.7   10.9  491.3  346.3  6.7  1.4  324.7   67.2  67  70 c3d0
   9.8   10.9  519.9  346.3  6.8  1.4  326.7   67.2  68  71 c4d0
   9.8   11.0  493.5  346.6  3.6  0.8  175.3   37.9  38  41 c2d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
                   extended device statistics
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0d1
  64.6   12.6 8207.4  382.1 32.8  2.0  424.7   25.9 100 100 c5d0
  62.2   12.2 7203.2  370.1 27.9  2.0  375.1   26.7  99 100 c6d0
  53.2   11.8 5973.9  390.2 25.9  2.0  398.8   30.5  98  99 c3d0
  49.4   10.6 5398.2  389.8 30.2  2.0  503.7   33.3  99 100 c4d0
  45.2   12.8 5431.4  337.0 14.3  1.0  247.3   17.9  52  52 c2d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0


Any ideas?

Paul

