On 03/25/10 09:32 PM, Bruno Sousa wrote:
On 24-3-2010 22:29, Ian Collins wrote:
On 02/28/10 08:09 PM, Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly, and I noticed a considerable
imbalance in both free space and write operations. The pool is
currently feeding a tape backup while receiving a large filesystem.

Is this imbalance normal? I would expect a more even distribution, as
the pool configuration hasn't been changed since creation.
The system is running Solaris 10 update 7.
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
tank          15.9T  2.19T     87    119  2.34M  1.88M
  raidz2      2.90T   740G     24     27   762K  95.5K
  raidz2      3.59T  37.8G     20      0   546K      0
  raidz2      3.58T  44.1G     27      0  1.01M      0
  raidz2      3.05T   587G      7     47  24.9K  1.07M
  raidz2      2.81T   835G      8     45  30.9K   733K
------------  -----  -----  -----  -----  -----  -----
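As a side note, the skew is easier to see as a percentage of each
vdev. A rough, untested sketch (assumes used and avail are the 2nd
and 3rd columns as above, and only understands G and T suffixes):

zpool iostat -v tank | awk '
  function gb(s) {                      # crude size parser, G/T only
    if (s ~ /T$/) return s * 1024       # work in GB
    return s + 0                        # "740G" -> 740
  }
  /raidz2/ {
    u = gb($2); a = gb($3)
    printf("%-10s %5.1f%% full\n", $1, 100 * u / (u + a))
  }'

which here works out to roughly 78% full on the emptiest raidz2
versus 99% on the fullest.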
This system has since been upgraded, but the imbalance is getting worse:
zpool iostat -v tank | grep raid
  raidz2      3.60T  28.5G    166     41  6.97M   764K
  raidz2      3.59T  33.3G    170     35  7.35M   709K
  raidz2      3.60T  26.1G    173     35  7.36M   658K
  raidz2      1.69T  1.93T    129     46  6.70M   610K
  raidz2      2.25T  1.38T    124     54  5.77M   967K
Is there any way to determine how this is happening?
I may have to resort to destroying and recreating some large
filesystems, but there's no way to determine which ones to target.
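Short of that, the best I can do is a size heuristic, since nothing
reports which vdevs a given dataset's blocks ended up on. Something
like (untested; the list is sorted ascending, so the biggest
candidates end up at the bottom):

# rough proxy for which rewrites would move the most data;
# there is no per-dataset, per-vdev breakdown to go on
zfs list -r -t filesystem -o name,used -s used tank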
Hi,
As far as I know this is "normal" behaviour in ZFS...
What we need is some sort of "rebalance" task that moves data around
multiple vdevs in order to achieve the best possible performance...
Take a look at
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425
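Until such a feature exists, the only manual workaround I know of is
to rewrite a dataset inside the pool with send/receive, since freshly
written blocks get spread according to the current free space on each
vdev. Very roughly (hypothetical dataset name; it needs enough free
space for a second copy plus a quiet moment to swap the names):

zfs snapshot tank/somefs@rebalance
zfs send tank/somefs@rebalance | zfs receive tank/somefs.new
# send an incremental for anything written since, then swap:
zfs rename tank/somefs tank/somefs.old
zfs rename tank/somefs.new tank/somefs
zfs destroy -r tank/somefs.old

Painful for multi-terabyte filesystems, but it's all we have today...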
That would be the case if drives had been added, but this pool hasn't
been changed since it was created.
--
Ian.