On Tue, Apr 8, 2008 at 9:55 AM,  <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM:
>
>
>  >  In our environment, the politically and administratively simplest
>  > approach to managing our storage is to give each separate group at
>  > least one ZFS pool of their own (into which they will put their various
>  > filesystems). This could lead to a proliferation of ZFS pools on our
>  > fileservers (my current guess is at least 50 pools and perhaps up to
>  > several hundred), which leaves us wondering how well ZFS handles this
>  > many pools.
>  >
>  >  So: is ZFS happy with, say, 200 pools on a single server? Are there any
>  > issues (slow startup, say, or peculiar IO performance) that we'll run
>  > into? Has anyone done this in production? If there are issues, is there
>  > any sense of what the recommended largest number of pools per server is?
>  >
>
>  Chris,
>
>       Well, I have done testing with filesystems and not as much with
>  pools -- I believe the core design premise for ZFS is that administrators
>  would use few pools and many filesystems.  I would think that Sun would
>  recommend that you make one large pool (or a few) and divvy out filesystems
>  with reservations to the groups (to which they can add sub-filesystems).
>  As far as ZFS filesystems are concerned, my testing has shown that the
>  mount time and I/O overhead for multiple filesystems scale pretty
>  linearly -- timing 10 mounts extrapolates pretty well to 100 and 1000.
>  After you hit some level (depending on processor and memory), the mount
>  time, I/O, and write/read batching spike up pretty heavily.  This is one
>  of the reasons I take a strong stance against the recommendation that
>  people use reservations and filesystems as user/group quotas (ignoring
>  that the functionality is by no means at parity).
>
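
(For the archives, Wade's suggestion translated into commands -- pool
name "tank", group names, and sizes are all made up:

   zfs create tank/groupA                 # one filesystem per group
   zfs set reservation=500G tank/groupA   # guarantee the group its space
   zfs create tank/groupA/projects        # the group nests its own children

-- i.e. filesystems with reservations, rather than a pool per group.)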

Not to beat a dead horse too much, but the lack of per-user quotas --
combined with the mount limits, whether on the clients or in the
per-filesystem mount time mentioned above -- lets us use ZFS heavily
for second-tier storage, where quotas can sit at a logical group
level, but not for first-tier use, which still demands per-user
quotas. That remains an unmet requirement.
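
For example (group name and size invented), a single quota covers a
whole group's tree, children and all:

   zfs set quota=1T tank/groupA    # caps groupA and everything under it

but there is nothing comparable per user within a filesystem.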

As to your original question, with enough LUN carving you can
artificially create many pools. However, for ease of management,
performance, and reliability, the better approach is to put as many
drives as possible into a redundant configuration in as few pools as
possible, split the disk space among top-level ZFS filesystems (one
per group), and then let each group divvy up its space with further
nested ZFS filesystems.
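
Concretely, something like this (device names and vdev layout purely
illustrative):

   # one large redundant pool instead of many LUN-carved ones
   zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
   # a top-level filesystem per group
   zfs create tank/groupA
   zfs create tank/groupB
   # each group nests further filesystems as it sees fit
   zfs create tank/groupA/home
   zfs create tank/groupA/scratch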

>  -Wade
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
