On Tue, Feb 16, 2010 at 04:47:11PM -0800, Christo Kutrovsky wrote:
> One of the ideas that sparkled is have a "max devices" property for
> each data set, and limit how many mirrored devices a given data set
> can be spread on. I mean if you don't need the performance, you can
> limit (minimize) the device, should your capacity allow this. 

There have been some good responses around better ways to do damage
control.  I thought I'd respond separately, with a different use case
for essentially the same facility.

If your suggestion were implemented, it would take the form of an
alternative allocation policy, applied when selecting vdevs and
metaslabs for writes.  There is scope in future development for
several such policies addressing different requirements, and there
are some nice XXX comments marking where "cool stuff could go here"
accordingly.
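For concreteness, here's a rough sketch of what a per-dataset "max
devices" policy might look like at the vdev-selection step.  This is
entirely hypothetical and not the actual ZFS allocator; every name in
it (mock_vdev_t, pick_vdev, and so on) is invented for illustration.

    /*
     * Hypothetical sketch, not real ZFS code: a per-dataset "max devices"
     * property constraining which top-level vdevs a write may land on.
     */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct mock_vdev {
            int             vd_id;          /* top-level vdev index */
            uint64_t        vd_free;        /* free space, in bytes */
    } mock_vdev_t;

    /*
     * Choose a vdev for the next allocation, but only consider the first
     * `max_devices` vdevs (the dataset's spread limit).  Within that
     * subset, fall back to the usual "most free space" heuristic.
     */
    static mock_vdev_t *
    pick_vdev(mock_vdev_t *vdevs, int nvdevs, int max_devices)
    {
            int limit = (max_devices > 0 && max_devices < nvdevs) ?
                max_devices : nvdevs;
            mock_vdev_t *best = &vdevs[0];

            for (int i = 1; i < limit; i++) {
                    if (vdevs[i].vd_free > best->vd_free)
                            best = &vdevs[i];
            }
            return (best);
    }

    int
    main(void)
    {
            mock_vdev_t pool[] = {
                    { 0, 400ULL << 30 }, { 1, 900ULL << 30 },
                    { 2, 700ULL << 30 }, { 3, 950ULL << 30 },
            };
            int nvdevs = sizeof (pool) / sizeof (pool[0]);

            /* Dataset limited to two devices: vdevs 2 and 3 are ignored. */
            mock_vdev_t *vd = pick_vdev(pool, nvdevs, 2);
            printf("allocate on vdev %d\n", vd->vd_id);
            return (0);
    }

The point is just that the dataset property narrows the candidate set
before the existing selection heuristics run; everything downstream
stays the same.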

One of these is power saving with MAID-style pools, whereby the
majority of disks (vdevs) in a pool would be idle and spun down most
of the time.  That requires expressing very similar kinds of
preferences about what data goes where (and when).
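In the same hypothetical style, a MAID-ish policy might prefer vdevs
that are already spinning and only wake an idle one when no active
vdev can satisfy the allocation.  Again, none of these names come
from the real code.

    /*
     * Hypothetical sketch, not the real ZFS allocator: prefer vdevs that
     * are already spun up, waking an idle one only as a last resort.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct maid_vdev {
            int             vd_id;
            bool            vd_spinning;    /* disk currently spun up? */
            uint64_t        vd_free;        /* free space, in bytes */
    } maid_vdev_t;

    static maid_vdev_t *
    maid_pick(maid_vdev_t *vdevs, int nvdevs, uint64_t asize)
    {
            maid_vdev_t *best = NULL;

            /* First pass: only vdevs that are already spinning. */
            for (int i = 0; i < nvdevs; i++) {
                    if (vdevs[i].vd_spinning && vdevs[i].vd_free >= asize &&
                        (best == NULL || vdevs[i].vd_free > best->vd_free))
                            best = &vdevs[i];
            }
            if (best != NULL)
                    return (best);

            /* Second pass: accept the cost of spinning up an idle vdev. */
            for (int i = 0; i < nvdevs; i++) {
                    if (vdevs[i].vd_free >= asize &&
                        (best == NULL || vdevs[i].vd_free > best->vd_free))
                            best = &vdevs[i];
            }
            return (best);
    }

    int
    main(void)
    {
            maid_vdev_t pool[] = {
                    { 0, true,  64ULL << 20 },      /* spinning, nearly full */
                    { 1, false, 800ULL << 30 },     /* idle, lots of space   */
                    { 2, true,  300ULL << 30 },     /* spinning, has room    */
            };
            int n = sizeof (pool) / sizeof (pool[0]);

            maid_vdev_t *vd = maid_pick(pool, n, 1ULL << 20);
            printf("allocate on vdev %d (%s)\n", vd->vd_id,
                vd->vd_spinning ? "already spinning" : "must spin up");
            return (0);
    }

Both this and the "max devices" case are really the same facility:
a per-dataset (or per-pool) bias applied when choosing where writes
land.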

AIX's LVM (not the nasty Linux knock-off) had similar layout
preferences, for different purposes: you could mark LVs with an
allocation preference for the centre of the spindles for performance,
or other options, and then re-lay out the data accordingly.  I say
"had"; it presumably still does, but I haven't touched it in 15 years
or more.

--
Dan.
