On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
<nicolas.willi...@sun.com> wrote:
> On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
>> On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
>> <nicolas.willi...@sun.com> wrote:
>> > On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
>> >> > >> pool-shrinking (and an option to shrink disk A when i want disk B to
>> >> > >> become a mirror, but A is a few blocks bigger)
>> >> >  This may be interesting... I'm not sure how often you need to shrink a 
>> >> > pool
>> >> >  though?  Could this be classified more as a Home or SME level feature?
>> >>
>> >> Enterprise level especially in SAN environments need this.
>> >>
>> >> Projects own their own pools and constantly grow and *shrink* space.
>> >> And they have no downtime available for that.
>> >
>> > Multiple pools on one server only makes sense if you are going to have
>> > different RAS for each pool for business reasons.  It's a lot easier to
>> > have a single pool though.  I recommend it.
>>
>> Other scenarios for multiple pools include:
>>
>> - Need independent portability of data between servers.  For example,
>> in a HA cluster environment, various workloads will be mapped to
>> various pools.  Since ZFS does not do active-active clustering, a
>> single pool for anything other than a simple active-standby cluster is
>> not useful.
>
> Right, but normally each head in a cluster will have only one pool
> imported.

Not necessarily.  Suppose I have a group of servers with a bunch of
zones.  Each zone represents a service group that needs to
independently fail over between servers.  In that case, I may have a
zpool per zone.  It seems this is how it is done in the real world.[1]

1. Upton, Tom. "A Conversation with Jason Hoffman." ACM Queue,
January/February 2008, p. 9.
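For what it's worth, the zpool-per-zone failover I'm describing needs
nothing beyond stock ZFS and zones commands; the pool and zone names
here (zoneA-pool, zoneA) are made up for illustration:

```shell
# On the node giving up the service (or after it has been fenced),
# release the pool so another head can take it over:
zpool export zoneA-pool

# On the takeover node, import the pool and boot the zone that
# lives on it:
zpool import zoneA-pool
zoneadm -z zoneA boot
```

In practice a cluster framework would drive these steps, but the
underlying mechanism is just pool export/import.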

> The Sun Storage 7xxx do this.  One pool per-head, two pools altogether
> in a cluster.

Makes sense for your use case.  If you are looking at a zpool per
zone, it is likely a zpool created on a LUN provided by a Sun Storage
7xxx that is presented to multiple hosts.  That is, ZFS on top of ZFS.
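A rough sketch of that layering, assuming the 7xxx presents the LUN
over iSCSI; the discovery address and device name below are purely
illustrative:

```shell
# Point the initiator at the appliance and enable SendTargets
# discovery (address is an example):
iscsiadm add discovery-address 192.168.10.50:3260
iscsiadm modify discovery --sendtargets enable

# Build a per-zone pool on the LUN the appliance presents
# (device name is illustrative):
zpool create zoneA-pool c2t600144F0AAAA0001d0
```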

>> - Array based copies are needed.  There are times when copies of data
>> are performed at a storage array level to allow testing and support
>> operations to happen "on different spindles".  For example, in a
>> consolidated database environment, each database may be constrained to
>> a set of spindles so that each database can be replicated or copied
>> independent of the various others.
>
> This gets you back into managing physical space allocation.  Do you
> really want that?  If you're using zvols you can do "array based copies"
> of you zvols.  If you're using filesystems then you should just use
> normal backup tools.

There are times when you have no real choice.  If a regulation, or a
lawyer's interpretation of a regulation, says that you need physically
separate components, then you need physically separate components.  If
your disaster recovery requirements mean you need a copy of the data
at a different site, and array-based copies have historically been
used, it is unlikely that a "while true ; do zfs send | ssh | zfs
receive" loop will be adopted in the first round of implementation.
And in any case, zvols don't do it today.
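To spell out what that ad hoc loop would look like, here is a rough
sketch; the dataset and host names (tank/db, drhost) and the 60-second
interval are illustrative:

```shell
# Seed the remote side once with a full stream:
zfs snapshot tank/db@base
zfs send tank/db@base | ssh drhost zfs receive tank/db

# Then ship incremental streams in a loop:
prev=base
while true; do
    cur=repl-$(date +%s)
    zfs snapshot tank/db@$cur
    zfs send -i tank/db@$prev tank/db@$cur | \
        ssh drhost zfs receive -F tank/db
    prev=$cur
    sleep 60
done
```

Even done carefully, the recovery point is roughly the loop interval,
plus it leaves snapshot cleanup and error handling as exercises.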

When you have a smoking hole, the gap in transactions left by normal
backup tools is not always good enough, especially if some of that
smoke is coming from the tape library.  Array-based replication tends
to let you keep much tighter tolerances on just how many committed
transactions you are willing to lose.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/