On Sat, Mar 7, 2009 at 11:05 AM, Thiago C. M. Cordeiro | World Web <[email protected]> wrote:
> Hi!
>
> Today I have ten computers running Xen and Linux, each with two 500G disks in
> RAID1. Each node sees only its own RAID1 volume, so I have no live migration
> of my virtual machines, and moving data from one hypervisor to another is a
> painful task.
>
> Now that I have discovered this awesome file system, I want ZFS to manage
> all my disks in a network environment.
>
> But I don't know the best way to make a pool using all 20 disks as one
> big pool with 10T of capacity.
>
> My first contact with Solaris was with OpenSolaris 2008.11 as a
> virtual machine (paravirtual domU) on a Linux (Debian 5.0) dom0. I also have
> OpenSolaris on real machines for testing.
>
> I'm thinking of exporting all 20 disks through the AoE protocol. On the
> dom0 that runs the OpenSolaris domU (in HA through Xen), I will write its
> configuration file (zfs01.cfg) with 20 block devices of 500G, and inside the
> OpenSolaris domU I will share the pool via iSCSI targets and/or NFS back to
> the domUs of my cluster. Is this a good idea?
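[For reference, one possible shape of the pool setup described above, sketched as OpenSolaris-era commands. The device names are hypothetical placeholders for however the AoE-exported disks appear in the domU, and two raidz2 vdevs is only one of several reasonable layouts for 20 disks; it yields roughly 8T usable rather than the full 10T, in exchange for double-parity redundancy.]

```shell
# Sketch only: assumes the 20 AoE-exported 500G disks appear in the
# OpenSolaris domU as c3t0d0 .. c3t19d0 (hypothetical device names).
# Two 10-disk raidz2 vdevs: ~8T usable, survives two disk failures per vdev.
zpool create tank \
    raidz2 c3t0d0  c3t1d0  c3t2d0  c3t3d0  c3t4d0 \
           c3t5d0  c3t6d0  c3t7d0  c3t8d0  c3t9d0 \
    raidz2 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 \
           c3t15d0 c3t16d0 c3t17d0 c3t18d0 c3t19d0

# Share a filesystem back to the cluster domUs over NFS ...
zfs create tank/vms
zfs set sharenfs=on tank/vms

# ... or carve out a zvol and expose it as an iSCSI target
# (shareiscsi was the pre-COMSTAR mechanism in this OpenSolaris release).
zfs create -V 100G tank/iscsi-vol0
zfs set shareiscsi=on tank/iscsi-vol0
```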
I share a three-disk pool over NFS for some VMware ESXi based hosting. The apps that run on these VMs generate considerable disk I/O, and ZFS + NFS is working fine for me. I intend to experiment with iSCSI later, when I free up some machines for such an experiment.

--
Sriram
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
