On Wed, 26 Nov 2008, Bob Friesenhahn wrote:
>> 1. Do these kinds of self-imposed limitations make any sense in a zfs
>> world?
>
> Depending on your backup situation, they may make just as much sense as 
> before.  For zfs this is simply implemented by applying a quota to each 
> filesystem in the pool.
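
Just so I'm reading that right, I take it you mean something along these 
lines - the pool and filesystem names here are made up purely for the sake 
of example:

   zfs create tank/projects
   zfs set quota=500G tank/projects
   zfs get quota tank/projects

i.e. the old per-filesystem size limits simply become per-dataset quotas 
inside a single pool.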

We're more worried about the idea of a single zfs filesystem somehow 
becoming corrupt. From what you say below, the pool is the boundary where 
that might happen, not the individual filesystem. So, from a 
risk-of-corruption point of view, it seems no riskier to create a single 
5TB pool than ten 500GB ones - is that correct?

>> 2. What is the 'logical corruption boundary' for a zfs system - the
>> filesystem or the zpool?
>
> The entire pool.
>
>> 3. Are there scenarios apart from latency-sensitive applications (e.g.
>> Oracle logs) that warrant separate zpools?
>
> I can't think of any reason for separate zpools other than to limit the 
> exposure to catastrophic risk (e.g. total pool failure) or because parts of 
> the storage may be moved to a different system.
>
> The size of the overall pool is much less important than the design of its 
> vdevs (two-way mirror, three-way mirror, raidz, raidz2).  Golden Rule: "The 
> pool is only as strong as its weakest vdev".  The number of vdevs in the 
> pool, and the performance of the individual devices comprising each vdev, 
> determine the pool's performance.  More vdevs result in better multi-user 
> performance since more I/Os can be active at once.  With an appropriate 
> design, a larger pool will deliver more performance without sacrificing 
> reliability.

Given that we have a load of available disks (I can't remember the exact 
number for an X4540), is it better to chop the storage into a few raidz 
vdevs rather than putting everything into one? Are there any metrics I can 
use to guide me on the performance-tuning side of this?
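
For concreteness, I was picturing something like the following sketch - the 
disk names are invented and the grouping into 6-disk raidz2 vdevs is just 
an assumption on my part, not a layout I've seen recommended anywhere:

   zpool create tank \
       raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
       raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
       raidz2 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 \
       ...

and then watching the per-vdev breakdown with something like

   zpool iostat -v tank 5

while running a representative workload, to compare one layout against 
another.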

> Given that your pool is entirely subservient to one system ("Thor") it likely 
> makes sense to put its devices in one pool since (barring zfs implementation 
> bugs) the reliability of the pool will be dictated by the reliability of that 
> system.

The Thor is an X4540 - we're rather pleased with it so far. On a slightly 
off-topic note: do people find the top-loading design of these easy to 
work with? It strikes me that there's a lot of torque on those rails when 
the chassis is fully extended - presumably they're better at the bottom of 
a rack than the top?!

Paul

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
