Hi!

 Today I have ten computers running Xen and Linux, each with 2 disks of 500G in 
RAID1. Each node sees only its own RAID1 volume, I don't have live migration of 
my virtual machines... and moving data from one hypervisor to another is 
painful...

 Now that I've discovered this awesome file system, I want ZFS to manage all my 
disks in a network environment.

 But I don't know the best way to combine all 20 disks into one big pool with 
10T of capacity.
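 (For reference, here is a sketch of two possible layouts; the device names 
c2t0d0 etc. are placeholders for whatever the 20 disks appear as inside the 
OpenSolaris domU. Note that only a plain stripe yields the full 10T; redundant 
layouts cost capacity:)

```shell
# Option 1: 10 mirrored pairs -- ~5T usable, each pair survives one disk failure.
# (Continue pairing through c2t18d0/c2t19d0.)
zpool create tank \
  mirror c2t0d0 c2t1d0 \
  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0

# Option 2: two 10-disk raidz2 vdevs -- ~8T usable, each vdev survives two
# disk failures:
zpool create tank \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
  raidz2 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 c2t16d0 c2t17d0 c2t18d0 c2t19d0
```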

 My first contact with Solaris was with OpenSolaris 2008.11, as a virtual 
machine (paravirtual domU) on a Linux (Debian 5.0) dom0. I also have more 
OpenSolaris installs on real machines for testing...

 I'm thinking of exporting all 20 disks through the AoE protocol, and on the 
dom0 where I run the OpenSolaris domU (in HA through Xen), I'll write its 
configuration file (zfs01.cfg) with 20 block devices of 500G. Then, inside the 
OpenSolaris domU, I'll share the pool via iSCSI targets and/or NFS back to the 
domUs of my cluster...  Is this a good idea?
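 (What I have in mind for zfs01.cfg is something like the sketch below -- the 
kernel/ramdisk paths and the /dev/etherd/* device names are assumptions about 
how the AoE exports show up on my Debian dom0, not tested values:)

```python
# zfs01.cfg -- hypothetical Xen domU config sketch for the OpenSolaris guest.
name    = "zfs01"
memory  = 4096
kernel  = "/boot/osol/platform/i86xpv/kernel/unix"   # placeholder path
ramdisk = "/boot/osol/boot_archive"                  # placeholder path

# One phy: entry per AoE-backed block device, 20 entries in total
# (e0.0 .. e19.0 are assumed AoE shelf.slot names):
disk = [ 'phy:/dev/etherd/e0.0,xvdb,w',
         'phy:/dev/etherd/e1.0,xvdc,w',
         'phy:/dev/etherd/e2.0,xvdd,w' ]

vif = [ 'bridge=xenbr0' ]
```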

 Or is it better to install one OpenSolaris per hypervisor, and leave it to the 
ten OpenSolaris domUs to export and share the disks in their own way (not using 
AoE anymore)? But I imagine I need only one OpenSolaris (in HA) with one big 
pool...

 In summary, what is the best way to make a pool out of twenty disks spread 
across the network?

 Thanks for any enlightenment!

Regards,
Thiago
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
