I notice you use the word "volume", which really isn't accurate or
appropriate here.  In ZFS terms these are top-level VDEVs (a "volume"
in ZFS means a zvol, i.e. a block device carved out of the pool).

If all of these VDEVs are part of the same pool, which as I recall you
said they are, then writes are striped across all of them (with a bias
toward the emptier, i.e. less full, VDEVs).
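
If you want to see how uneven the allocation currently is, "zpool
iostat -v" shows allocated and free space per top-level VDEV (the pool
name below is just a placeholder):

    # Per-VDEV capacity; the alloc/free columns make the imbalance
    # between the old and new VDEVs obvious.
    zpool iostat -v tank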

You probably want to "zfs send" the oldest dataset (ZFS terminology
for a file system) into a new dataset.  That oldest dataset was most
likely created when there were only 2 top-level VDEVs.  If you have
multiple datasets created when you had only 2 VDEVs, then send/receive
them all (serially, one after the other).  If you have room for the
snapshots too, then send all of it and delete the source dataset when
done.  I think this will achieve what you want.
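
Roughly something like this; the pool and dataset names are just
placeholders, so adjust for your layout:

    # Snapshot the old dataset (recursively, so -R has something to
    # send), then replicate it, snapshots included, into a new dataset.
    zfs snapshot -r tank/olddata@migrate
    zfs send -R tank/olddata@migrate | zfs receive tank/newdata

    # Once you've verified the copy, free the space held by the source.
    zfs destroy -r tank/olddata

If you don't have room for the snapshots, send just the one snapshot
without -R and let the old snapshots die with the source dataset.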

You may want to get a bit more specific: from among those oldest
datasets, find the smallest one and send/receive it first.  That way
the send/receive completes in less time, and when you delete the
source dataset you've created more free space on the entire pool
without the risk of a single dataset exceeding your 10 TiB of
workspace.
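
A quick way to line up the candidates by age and size (again, "tank"
is a placeholder):

    # List datasets sorted by creation time, oldest first, with their
    # space usage; pick a small, old one to migrate first.
    zfs list -r -t filesystem -s creation -o name,used,creation tank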

ZFS's copy-on-write design really wants no less than 20% of the pool
free, because data is never updated in place; a new copy is always
written to disk.
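
You can keep an eye on that with "zpool list"; the CAP column shows
how full the pool is overall:

    zpool list tank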

You might also want to consider turning on compression on your new
datasets, especially if you have free CPU cycles to spare.  I don't
know how compressible your data is, but if it's fairly compressible,
say lots of text, then you should see some added benefit when you copy
the old data into the new datasets.  Saving space and then deleting
the source dataset leaves the pool with more free room, which in turn
lets ZFS balance writes better on the next (and the next) dataset
copy.
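
For instance (hypothetical names again; with a plain send the new
dataset inherits properties from its parent, so setting compression
there before receiving is enough, while "send -R" carries the source's
properties along instead; "on" means lzjb on current bits, and gzip is
available if you have even more CPU to burn):

    # Enable compression on the parent so newly received datasets
    # inherit it; blocks are compressed as they are written.
    zfs set compression=on tank
    # After the copy, see how much you actually gained.
    zfs get compressratio tank/newdata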

HTH.

On Tue, Aug 3, 2010 at 22:48, Eduardo Bragatto <edua...@bragatto.com> wrote:

> On Aug 3, 2010, at 10:08 PM, Khyron wrote:
>
>  Long answer: Not without rewriting the previously written data.  Data
>> is being striped over all of the top level VDEVs, or at least it should
>> be.  But there is no way, at least not built into ZFS, to re-allocate the
>> storage to perform I/O balancing.  You would basically have to do
>> this manually.
>>
>> Either way, I'm guessing this isn't the answer you wanted but hey, you
>> get what you get.
>>
>
> Actually, that was the answer I was expecting, yes. The real question,
> then, is: what data should I rewrite? I want to rewrite data that's written
> on the nearly full volumes so they get spread to the volumes with more space
> available.
>
> Should I simply do a " zfs send | zfs receive" on all ZFSes I have? (we are
> talking about 400 ZFSes with about 7 snapshots each, here)... Or is there a
> way to rearrange specifically the data from the nearly full volumes?
>
>
> Thanks,
> Eduardo Bragatto
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
"You can choose your friends, you can choose the deals." - Equity Private

"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
