> This system has since been upgraded, but the imbalance is getting worse:
> 
> zpool iostat -v tank | grep raid
>   raidz2      3.60T  28.5G    166     41  6.97M   764K
>   raidz2      3.59T  33.3G    170     35  7.35M   709K
>   raidz2      3.60T  26.1G    173     35  7.36M   658K
>   raidz2      1.69T  1.93T    129     46  6.70M   610K
>   raidz2      2.25T  1.38T    124     54  5.77M   967K
>
> Is there any way to determine how this is happening?
>
> I may have to resort to destroying and recreating some large 
> filesystems, but there's no way to determine which ones to target...
> 
> -- 
> Ian.

Hi, if you have had faulted disks in some raidsets, that would explain the 
imbalance, as ZFS "avoids" writing to them while they are in the faulted state.
I've encountered a similar imbalance, but in my case it was due to later changes 
in the pool configuration: vdevs were added after the first ones got too full.
Either way, this is a real issue, as your writes will definitely get slower once 
the first raidsets fill up. In my case writes went from 1.2GB/s down to 
40-50KB/s, and freeing up some space made the problem go away (total pool 
capacity was around 60% at the time).
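To watch how lopsided the vdevs are getting, you can turn the alloc/free 
columns of `zpool iostat -v` into a per-vdev fullness percentage. This is just 
a sketch: the `vdev_fullness` helper is hypothetical, and it assumes the 
alloc and free values are fields 2 and 3 of each raidz line, as in the paste 
above.

```shell
#!/bin/sh
# Hypothetical helper: report per-vdev fullness from `zpool iostat -v`
# output. Parses human-readable sizes (K/M/G/T suffixes) in the alloc
# and free columns of each raidz line.
vdev_fullness() {
  awk '
    # Convert a size like "3.60T" or "28.5G" to bytes.
    function tosize(s,    n, u) {
      n = s + 0
      u = substr(s, length(s), 1)
      if (u == "K") n *= 1024
      else if (u == "M") n *= 1024 ^ 2
      else if (u == "G") n *= 1024 ^ 3
      else if (u == "T") n *= 1024 ^ 4
      return n
    }
    /raidz/ {
      alloc = tosize($2)
      free  = tosize($3)
      printf "%s: %.0f%% full\n", $1, 100 * alloc / (alloc + free)
    }'
}

# Usage: zpool iostat -v tank | vdev_fullness
```

Run against the numbers in the quote above, the first three raidsets come out 
around 99% full while the last two sit near 47% and 62%, which makes the 
imbalance (and the write slowdown) easy to spot at a glance.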


Yours
Markus Kovero
 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss