> Ewan (and team),
>
> When we spoke on the phone last week, you mentioned that Citrix was
> going down the path of using a Utility VM to run Nova in. I've been
> kicking around the ideas of using both and I wanted to run some of the
> Pros and Cons I came up with to maybe get your viewpoints on it.
>
> Nova Compute within dom0
>
> Pros:
> Ease of management
> Can be installed/updated as an RPM.
> Shares Hypervisor Auth/Networking
> Direct access to all operations
>
> Cons:
> Shares resources with dom0
> Currently reliant on packages installed into dom0. (python 2.4 vs 2.6
> which Nova requires)
> Additional packages would have to be installed that may deviate from
> factory build of XenServer
>
> Nova Compute within a VM residing on the Hypervisor
>
> Pros:
> Can be installed/updated as a brand new VM.
> Can be deployed in a similar manner as our images.
> Can utilize extra resources on the server.
> Dependencies can be added without having to worry about breaking
> XenServer
>
> Cons:
> Additional authentication/networking for every host in the Cloud
> Another layer for System Operations to worry about.
> Indirect access for VM operations.
To the tradeoffs I would add:

o In dom0 == simpler solution in general.

o In a VM == more reliable: a failure inside the appliance VM cannot
  bring down either dom0 or the customer workloads.

o In a VM == restartable nova-compute: you can just reboot the VM if it
  goes wrong. (Either the Ops team do this, or you could even do it
  automatically, with a heartbeat.)

o In a VM == slightly better security properties (disaggregation between
  dom0 and the appliance VM). Marginal benefit IMHO, but if you have good
  intrusion detection etc. you might see a defence-in-depth benefit.

o In a VM == can be tested in a reduced-hardware configuration (i.e. N
  VMs, all acting as if they're in a cloud, but all actually resident on
  the same host).

I would highlight the important pros that you mention above:

o You don't need to install Python 2.6 inside dom0 in parallel with the
  2.4 that's already there.

o The VM can have its own resources dedicated. You can cap it to stop
  disk streaming from impacting pre-existing customer workloads, and it's
  not in competition with the critical resources in dom0.

Those two are the clinchers for me.

> We'll need to get something put together so we can begin testing
> multiple compute nodes together. If we go down the path of putting
> together a dedicated VM for Nova, have you guys had any ideas about
> communication to dom0? Dedicated communication bridge from Nova to dom0?

There is a network called the "guest installer network". This is
something that you can set up that connects directly between dom0 and a
VM. Dom0 (and therefore xapi) is reachable on 192.168.128.1 on this
network, and the guest sees a private DHCP server that gives it
192.168.128.2.

> Authentication?

Through this channel, you're just seeing xapi as normal, so you still
need to authenticate as normal. Nova Compute would need an appropriate
xapi password.

We are planning to create a PAM module for the VM that authenticates
against xapi through this channel. The VM would have this PAM module
installed, and then you end up with the same credentials working for both
the appliance VM and dom0. This could alleviate your concern above about
having two sets of credentials to manage. The idea would be that the
appliance VM wouldn't have a password of its own at all -- it would just
use dom0's credentials. (I've put a rough sketch of what this looks like
from the Nova side at the end of this mail.)

> A consistent setup per XenServer would be ideal to limit the amount of
> maintenance we'd have to do over thousands of XenServer hosts once this
> is out in production.

Yes, absolutely. I think that the separate VM is a help here too. I'd
rather upgrade a VM by installing a new one and destroying the old one
than make sure that a bulk RPM upgrade worked. Maybe the difference is
marginal as long as all your RPMs are well packaged, but the truth is,
they rarely are. It's great to be able to test a fixed, complete
environment, and not worry about the pre-existing software when you roll
into production.

Cheers,

Ewan.
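P.S. In case it helps with your multi-node testing plans, here's a
minimal sketch of what talking to xapi over the guest installer network
could look like from inside the appliance VM. It assumes the XenAPI
Python bindings (XenAPI.py) are available in the VM; the credentials and
helper names below are placeholders for illustration only -- with the PAM
arrangement described above, the password would simply be dom0's, not a
separate one. The 192.168.128.1 address is the guest installer network
address mentioned earlier.

    import XenAPI

    # dom0 (and therefore xapi) on the guest installer network, as above.
    XAPI_URL = "http://192.168.128.1"

    # Placeholder credentials: with the PAM module described above these
    # would just be dom0's credentials, not a separate password.
    XAPI_USER = "root"
    XAPI_PASSWORD = "dom0-password"

    def get_xapi_session():
        """Open an authenticated xapi session from inside the appliance VM."""
        session = XenAPI.Session(XAPI_URL)
        session.xenapi.login_with_password(XAPI_USER, XAPI_PASSWORD)
        return session

    def list_vms():
        """Example xapi call: fetch all VM records visible to this session."""
        session = get_xapi_session()
        try:
            return session.xenapi.VM.get_all_records()
        finally:
            session.xenapi.session.logout()

Nothing clever there -- the point is just that, from the appliance VM's
side, xapi looks exactly the same as it does from any other client; only
the address and the source of the credentials change.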