On Mon, Mar 7, 2011 at 1:50 PM, Yaverot <yave...@computermail.net> wrote:
> 1. While performance isn't my top priority, doesn't using slices make a 
> significant difference?

Write caching will be disabled on devices that use slices. It can be
turned back on with format -e.
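Re-enabling it is an interactive step in format's expert mode. Roughly
(the disk name c0t1d0 is only an example; the menus may vary by build):

  # format -e
  (select the disk, e.g. c0t1d0)
  format> cache
  cache> write_cache
  write_cache> enable
  write_cache> display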

> 2. Doesn't snv_134 that I'm running already account for variances in these 
> nominally-same disks?

It will allow some small differences. I'm not sure exactly how large a
difference is allowed.

> 3. The market refuses to sell disks under $50, therefore I won't be able to 
> buy drives of 'matching' capacity anyway.

You can always use a larger drive. If you think you may want to go
back to smaller drives, though, make sure that the autoexpand zpool
property is disabled.
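To check it and turn it off if needed (the pool name "tank" here is just
a placeholder), something like:

  # zpool get autoexpand tank
  # zpool set autoexpand=off tank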

> 3. Assuming I want to do such an allocation, is this done with quota & 
> reservation? Or is it snapshots as you suggest?

I think Edward misspoke when he said to use snapshots, and probably
meant reservation.

I've taken to creating a dataset called "reserved" and giving it a 10G
reservation. (10G isn't a special value; feel free to use 5% of your
pool size or whatever else you're comfortable with.) It's unmounted
and doesn't contain anything, but it ensures that there is a chunk of
space I can make available if needed. Because it doesn't contain
anything, there's no concern about deallocating blocks when it's
destroyed. Alternatively, the reservation can be reduced to make space
available.
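Setting it up is a one-liner, for example (again assuming a pool called
"tank"; adjust the size to taste):

  # zfs create -o reservation=10G -o mountpoint=none tank/reserved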

> Would it make more sense to make another filesystem in the pool, fill it 
> enough and keep it handy to delete? Or is there some advantage to zfs destroy 
> (snapshot) over zfs destroy (filesystem)? While I am thinking about the 
> system and have extra drives, like now, is the time to make plans for the 
> next "system is full" event.

If a dataset contains data, the blocks will have to be freed when it's
destroyed. If it's an empty dataset with a reservation, the only
change is to fiddle some accounting bits.
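So when the pool does fill up, getting space back from the reserved
dataset is just a property change, along the lines of (still assuming
"tank"):

  # zfs set reservation=5G tank/reserved       (shrink it)
  # zfs set reservation=none tank/reserved     (or drop it entirely)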

I seem to remember seeing a fix for 100% full pools a while ago, so
this may not be as critical as it used to be, but it's a nice safety
net to have.

-B

-- 
Brandon High : bh...@freaks.com
