Hi Chris, I would have thought that managing multiple pools (you mentioned 200) 
would be an absolute administrative nightmare. If you give more details about 
your storage needs (number of users, space required, etc.) it might become 
clearer what you're thinking of setting up.

Also, I see you were considering 200 pools on a single server. Considering that 
you'll want redundancy in each pool, if you're forming your pools from complete 
physical disks, you are looking at 400 disks minimum if you use a simple 2-disk 
mirror for each pool. I believe it's not recommended to use partial disk slices 
to form pools -- use whole disks.
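
For instance, a minimal 2-disk mirror pool built from whole disks might look 
like this (the pool name and the Solaris-style device names are just 
placeholders):

```shell
# Sketch: a simple redundant pool from two whole disks.
# 'mypool' and the c#t#d# device names are hypothetical;
# note there is no slice suffix (e.g. s0) -- whole disks only.
zpool create mypool mirror c0t0d0 c0t1d0
```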

I'll be bold here and make some assumptions. You have a lot of students/staff 
there and you have a need, say, for 10TB of data. You could create one pool, 
using sixteen 1TB disks. The pool could be formed as two RAIDZ2 vdevs, each 
vdev containing five disks for data and two for parity. Additionally, for extra 
safety, you could add two hot spares.

Something like:

zpool create unipool raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 raidz2 
disk8 disk9 disk10 disk11 disk12 disk13 disk14 spare disk15 disk16

This arrangement would give 10TB of double-parity redundant storage: each of 
the two RAIDZ2 vdevs can survive two disks failing, and when a disk or two 
fail in a vdev, the hot spares would be pulled in. If you need more capacity, 
Sun's X4500 can house 48 disks for around 24TB of storage.
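
Once the pool is built you can inspect the vdev layout and the state of the 
spares at any time; assuming the pool name above:

```shell
# Show pool health, the two raidz2 vdevs, and the hot spares
zpool status unipool

# Quick capacity overview
zpool list unipool
```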

Then, according to your needs, you could create multiple filesystems, using 
different mountpoints and access permissions of your choosing.
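
As a rough sketch (the filesystem names, mountpoints and quota values below 
are made up for illustration):

```shell
# Carve the single pool into per-purpose filesystems.
zfs create unipool/home
zfs set mountpoint=/export/home unipool/home

# Child filesystems inherit the mountpoint prefix automatically.
zfs create unipool/home/students
zfs set quota=500g unipool/home/students   # cap what students can consume

zfs create unipool/projects
zfs set compression=on unipool/projects
```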

Using the approach of one pool and multiple filesystems avoids the problems 
associated with one pool filling up while other pools sit under-used. The 
pool provides shared capacity across all the filesystems.

You will also want to consider using snapshots to help avoid data loss, and 
these can be used very elegantly to perform incremental backups to another 
large capacity backup server on your network -- see zfs send / recv with the -i 
option, for example.
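
A rough sketch of that incremental send/recv cycle (the snapshot names and the 
'backuphost'/'backuppool' names are invented for the example):

```shell
# Take a baseline snapshot and send it in full to the backup server.
zfs snapshot unipool/home@monday
zfs send unipool/home@monday | ssh backuphost zfs recv backuppool/home

# Later: snapshot again, then send only the blocks changed since monday.
zfs snapshot unipool/home@tuesday
zfs send -i unipool/home@monday unipool/home@tuesday | \
    ssh backuphost zfs recv backuppool/home
```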

I'm not an expert with ZFS yet, so I hope that others correct any mistakes I 
may have made here. The above is just based on my current limited experience 
and knowledge of ZFS :)

Anyway, hope it helps.
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss