On 03/25/10 11:23 PM, Bruno Sousa wrote:
On 25-3-2010 9:46, Ian Collins wrote:
On 03/25/10 09:32 PM, Bruno Sousa wrote:
On 24-3-2010 22:29, Ian Collins wrote:

On 02/28/10 08:09 PM, Ian Collins wrote:

I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly and I notice a considerable
imbalance of both free space and write operations.  The pool is
currently feeding a tape backup while receiving a large filesystem.

Is this imbalance normal?  I would expect a more even distribution as
the pool configuration hasn't been changed since creation.

The system is running Solaris 10 update 7.

                  capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
tank          15.9T  2.19T     87    119  2.34M  1.88M
   raidz2      2.90T   740G     24     27   762K  95.5K
   raidz2      3.59T  37.8G     20      0   546K      0
   raidz2      3.58T  44.1G     27      0  1.01M      0
   raidz2      3.05T   587G      7     47  24.9K  1.07M
   raidz2      2.81T   835G      8     45  30.9K   733K
------------  -----  -----  -----  -----  -----  -----
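
A quick way to quantify the skew is to turn the used/avail columns into
a fill percentage per vdev; a minimal awk sketch, assuming only G and T
suffixes appear at this pool size:

zpool iostat -v tank | awk '
    # convert values like "2.90T" / "740G" to GiB
    function gib(s) { return (s ~ /T$/) ? s * 1024 : s + 0 }
    /raidz/ { u = gib($2); a = gib($3)
              printf "raidz2 #%d  %5.1f%% full\n", ++n, 100 * u / (u + a) }'

On the figures above that works out to roughly 80% full for the first
vdev against 99% for the second and third, which makes the skew easy to
track over time.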


This system has since been upgraded, but the imbalance is getting
worse:

zpool iostat -v tank | grep raid
    raidz2      3.60T  28.5G    166     41  6.97M   764K
    raidz2      3.59T  33.3G    170     35  7.35M   709K
    raidz2      3.60T  26.1G    173     35  7.36M   658K
    raidz2      1.69T  1.93T    129     46  6.70M   610K
    raidz2      2.25T  1.38T    124     54  5.77M   967K

Is there any way to determine how this is happening?

I may have to resort to destroying and recreating some large
filesystems, but there's no way to determine which ones to target.
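
Size is only a rough proxy (there's no simple per-dataset breakdown of
which vdevs a filesystem's blocks landed on), but listing the biggest
filesystems at least narrows the candidates:

# largest filesystems first; -S sorts descending on the given property
zfs list -r -o name,used -S used tank | head
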
Hi,

As far as I know this is "normal" behaviour in ZFS...
So what we need is some sort of "rebalance" task that moves data around
multiple vdevs in order to achieve the best performance possible...
Take a look at
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425
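
Until something like that exists, the usual workaround is to rewrite the
data yourself so the allocator lays it out again across all vdevs; a
rough sketch, with a hypothetical dataset name:

# copy the dataset within the pool; the receive allocates fresh blocks
# spread across whatever free space the vdevs have today
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data.new

Once the copy is verified, the original can be destroyed and the new
dataset renamed into place, though this needs enough free space to hold
a second copy while both exist.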


It would be if drives had been added, but this pool hasn't been
changed since it was created.
Hi,

Have you never experienced any faulted drives, or something similar? So
far I have only seen imbalance when the vdevs have changed, when a hot
spare is used, and I think even during the replacement of one disk in a
raidz2 group.
There has been one faulted drive, but a hot spare kicked in. At the
moment, I have another replaced by a spare, but I/O to that vdev is
unaffected:

  raidz2      1.69T  1.94T    126     46  6.54M   598K
    spare         -      -    121     34  1.09M  31.4K
      c0t4d0      -      -     34     23  1.37M  98.7K
      c7t5d0      -      -      0     78      0   786K
    c1t2d0        -      -     34     23  1.37M  98.8K
    c4t3d0        -      -     37     24  1.44M  98.9K
    c5t3d0        -      -     36     24  1.44M  98.8K
    c6t2d0        -      -     36     23  1.42M  98.9K
    c7t1d0        -      -     36     24  1.42M  98.8K
    c6t1d0        -      -     36     23  1.44M  98.7K
    c4t1d0        -      -     36     23  1.43M  98.6K

The vdev with the most free space has never lost a drive.
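
For anyone wanting to check the same thing, the pool's command history
records replacements; something like:

# administrative actions since pool creation; -i would also show
# internally logged events such as automatic spare activation
zpool history tank | grep -i replace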

Cheers,

--
Ian.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
