On 2013-09-10 12:50, wessels wrote:
That's quite some offtopic discussion my original question triggered. Since the original question remains unanswered I'll add my 2cents to "the zfs on a cloud instance" discussion as well. ... So Illumos is certainly not the last word in operating systems like ZFS is in file systems.
As a counterargument in defence of hypervisor virtualization, and in line with your reference to live migration, I'd also mention file systems like VMFS, which can be shared from the same storage by several hosts. Besides sharing free space across the VM farm, this also makes it easier to re-launch tasks (VMs) on nodes other than their default one.

For example, I happen to see many Intel MFSYS boxes deployed, which include 6 compute blades and 1 or 2 (redundant) storage controllers. The controllers manage "pools" on up to 14 HDDs mounted in the chassis, and LUNs from these pools are dedicated to, or shared by, the compute nodes. The nodes' networking is 1 to 4 1-Gbit links each, while their internal storage links run at 3 or 6(?) Gbit/s and there is no PCI expansion, so we can't really get comparable bandwidth via iSCSI/NFS served by one host (as would likely be the case with Hyper-V shared storage, for example).

On one hand, the chassis controller does not expose raw disks, so we can't run ZFS directly on that layer; though we can (and sometimes do) run ZFS on the LUNs provided by the chassis, in fixed configurations (one LUN - one ZFS pool - one *Solaris derivative with some zones in it). On the other hand, we can set up ESXi on the nodes and share the common disk space as one big VMFS (or a few, for at least some local redundancy). This way any VM can run on any host, whether by live migration or a cold one (restart or sudden death of a host), and without copying data.

And then we too face the problem of running ZFS inside VMs (though usually we can make one VM with many zones jointly doing a particular job, and compared to appservers I wouldn't say ZFS is extremely hungry), as well as the uneasiness of keeping checksummed data on opaque storage devices. Well, at least we can mirror across two VMFS pools served by two different controllers (by default) :)

Since these machines, with all their perks, are a successful building block in many of our solutions despite some drawbacks, I thought it suitable to mention them in the VM vs. bare-metal context, as a fact of life I cope with regularly ;)

//Jim
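P.S. In case it helps to picture it, here's a rough sketch of what the "one LUN - one ZFS pool" layout and the cross-controller mirror look like from inside the guest. The device and zone names below are made up for illustration; your LUN/vdisk naming will of course differ:

  # one pool on a single LUN exported by the chassis storage controller
  zpool create tank c2t0d0

  # or mirrored across two LUNs/vdisks backed by different controllers (or VMFS datastores)
  zpool create tank mirror c2t0d0 c3t0d0

  # a zone living in that pool, doing its share of the job
  zfs create tank/zones
  zonecfg -z web1 'create; set zonepath=/tank/zones/web1; set autoboot=true'
  zoneadm -z web1 install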