On 03/26/10 12:16 AM, Bruno Sousa wrote:
Well... I'm pretty much certain that I faced something similar at my job.
We had a server with 2 raidz2 groups, each with 3 drives, and one drive
failed and was replaced by a hot spare. However, the distribution of data
between the 2 raidz2 groups started to become imbalanced.
You were right.
I cleared about 25% of the data from the pool (2TB) and started to load
it back. Using zpool iostat I could see the vdev with the active spare
was performing significantly fewer writes than the others, even though it
had the most free space. So I detached the faulted drive to bring the
spare into the pool and the write load balance is now biased to that vdev:
              capacity     operations    bandwidth
  vdev     alloc   free   read  write   read  write
  raidz2   2.91T   730G      0     75      0  3.14M
  raidz2   2.93T   715G      0    125     76  3.42M
  raidz2   2.92T   721G      0    140     25  3.46M
  raidz2   1.54T  2.08T      0    391      0  5.24M
  raidz2   2.36T  1.26T      0    116      0  3.25M
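For anyone wanting to reproduce this, a rough sketch of the commands
involved (the pool name "tank" and the device name are placeholders, not
the actual names from this system):

  # watch per-vdev allocation and I/O every 5 seconds; the vdev with the
  # active spare shows noticeably fewer writes than its peers
  zpool iostat -v tank 5

  # confirm which drive is faulted and which hot spare is in use
  zpool status tank

  # once the spare has resilvered, detach the faulted drive so the spare
  # becomes a permanent member of that raidz2 vdev
  zpool detach tank c1t5d0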
I assume this is due to the fact that during the resilvering (triggered by
the hot spare kicking in) the vdev was somehow prevented from accepting new
writes; therefore, during that time only one group was receiving new data,
and that led to the data imbalance across the 2 raidz2 groups.
Or does ZFS deliberately reduce writes to the degraded vdev, to cut the
eventual resilver time once the faulted drive is replaced?
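One way to test that theory the next time a spare kicks in (just a sketch;
"tank" is again a placeholder pool name) is to watch the resilver and the
per-vdev write counters side by side:

  # in one terminal: resilver progress and which vdev is degraded
  zpool status -v tank

  # in another: per-vdev write ops/bandwidth every 10 seconds; if the
  # degraded raidz2 shows near-zero writes while resilvering, new
  # allocations really are being steered to the other vdevs
  zpool iostat -v tank 10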
--
Ian.