| Hi Chris, I would have thought that managing multiple pools (you
| mentioned 200) would be an absolute administrative nightmare. If you
| give more details about your storage needs (number of users, space
| required, etc.), it might become clearer what you're thinking of
| setting up.

 Every university department has to face the issue of how to allocate
disk space to people. Here, we handle storage allocation decisions
through the relatively simple method of selling fixed-size chunks of
storage to faculty (either single professors or groups of them) for a
small one-time fee.

(We use fixed-size chunks partly because they are simpler to administer
and to price, and partly because they match our current model in our
Solaris 8 + DiskSuite + constant-sized partitions environment.)

 So, we are always going to have a certain number of logical pools of
storage space to manage. The question is whether to handle them as
separate ZFS pools or aggregate them into fewer ZFS pools and then
administer them as sub-hierarchies using quotas[*], and our current
belief is that doing the former is simpler to administer and simpler to
explain to users.
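
(As a concrete sketch of the two approaches, using hypothetical pool,
device, and group names: with separate pools each group gets its own
'zpool create', while aggregation means one shared pool plus a quota'd
filesystem per group.)

    # one pool per group (our current leaning):
    zpool create profsmith mirror c2t1d0 c3t1d0
    zpool create profjones mirror c2t2d0 c3t2d0

    # versus one big pool with per-group quotas:
    zpool create tank mirror c2t1d0 c3t1d0 mirror c2t2d0 c3t2d0
    zfs create tank/profsmith
    zfs set quota=100g tank/profsmith
    zfs create tank/profjones
    zfs set quota=100g tank/profjones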

 200 pools on a single server is probably pessimistic (hopefully there
will be significantly fewer), but could happen if people go wild with
separate pools and there is a failover situation where a single physical
server has to handle several logical fileservers at once.

| Also, I see you were considering 200 pools on a single
| server. Considering that you'll want redundancy in each pool, if
| you're forming your pools from complete physical disks, you are
| looking at 400 disks minimum if you use a simple 2-disk mirror for
| each pool. I think it's not recommended to use partial disk slices to
| form pools -- use whole disks.

 We're not going to use local disk storage on the fileservers for
various reasons, including failover and easier long-term storage
management and expansion. We have pretty much settled on iSCSI
(mirroring each ZFS vdev across two controllers, so our fileservers do
not panic if we lose a single controller). The fixed-size chunks will be
done at the disk level, either as slices from a single LUN on Solaris or
as individual LUNs sliced out of each disk on the iSCSI target.

(Probably the latter, because it lets us use more slices per disk, and
we have a number of 'legacy' 35 GB disk chunks that we cannot really
give free size upgrades to.)

(Using full disks as the chunk size is infeasible for several reasons.)
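
To make the controller-mirroring setup above concrete, here is a minimal
sketch with hypothetical Solaris device names, where the c2 LUNs come
from one iSCSI controller and the c3 LUNs from the other; losing either
controller then leaves every mirror vdev with a working half:

    # each mirror vdev pairs a LUN from controller A (c2...) with
    # its twin from controller B (c3...):
    zpool create profsmith mirror c2t0d0 c3t0d0 mirror c2t0d1 c3t0d1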

        - cks
[*: we've experimented, and quotas turn out to work better than reservations
    for this purpose. If anyone wants more details, see
    http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSReservationsVsQuotas
]
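
(The one-line version of the difference, with a hypothetical pool and
filesystem: a quota is just a ceiling on how much space can be used,
while a reservation also takes the space away from everything else in
the pool up front.)

    zfs set quota=100g tank/profsmith        # hard cap on usage
    zfs set reservation=100g tank/profsmith  # carves 100 GB out of the pool now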