On Wed, 26 Nov 2008, Paul Sobey wrote:
>
> Pointers to additional info are most welcome!
>
> 1. Do these kinds of self-imposed limitations make any sense in a zfs
> world?

Depending on your backup situation, they may make just as much sense 
as before.  For zfs this is simply implemented by applying a quota to 
each filesystem in the pool.
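
For example (pool and filesystem names below are illustrative, not from 
the original setup), a per-project filesystem with a space cap might look 
like:

    # create a filesystem per project and cap how much of the pool it may use
    zfs create tank/projects/alpha
    zfs set quota=500G tank/projects/alpha

    # confirm the quota took effect
    zfs get quota tank/projects/alpha

Quotas can be raised or lowered later without moving any data, which is 
the main practical difference from carving up space ahead of time.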

> 2. What is the 'logical corruption boundary' for a zfs system - the
> filesystem or the zpool?

The entire pool.

> 3. Are there scenarios apart from latency sensitive applications (e.g.
> Oracle logs) that warrant separate zpools?

I can't think of any reason for separate zpools other than to limit 
the exposure to catastrophic risk (e.g. total pool failure) or because 
parts of the storage may be moved to a different system.

The size of the overall pool is much less important than the design of 
its vdevs (two-way mirror, three-way mirror, raidz, raidz2).  Golden 
Rule: "The pool is only as strong as its weakest vdev".  The number of 
vdevs in the pool, and the performance of the individual devices 
comprising each vdev, determine the pool's performance.  More vdevs 
mean better multi-user performance since more I/Os can be active 
at once.  With an appropriate design, a larger pool will deliver more 
performance without sacrificing reliability.
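
As a sketch of that idea (the disk names are illustrative), a pool built 
from several two-way mirror vdevs spreads I/O across all of them:

    # four two-way mirror vdevs in one pool; zfs stripes writes across them all
    zpool create tank \
      mirror c0t0d0 c1t0d0 \
      mirror c0t1d0 c1t1d0 \
      mirror c0t2d0 c1t2d0 \
      mirror c0t3d0 c1t3d0

    # inspect the resulting layout and health
    zpool status tank

Adding another mirror vdev later grows both capacity and available IOPS 
without weakening the redundancy of the existing vdevs.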

Given that your storage is entirely subservient to one system ("Thor"), it 
likely makes sense to put all of its devices in one pool, since (barring 
zfs implementation bugs) the reliability of the pool will be dictated by 
the reliability of that system.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
