They do not. They receive a link-local IP address that is used for host agent 
to VR communication. All VR commands are proxied through the host agent, and 
host agent to VR communication is over SSH.
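
To make the KVM flow concrete, here is a minimal sketch (not CloudStack's
actual agent code) of what proxying a command to a VR over its link-local
address could look like; the key path, SSH port, and address below are
assumptions based on a typical system VM setup:

import subprocess

def run_on_vr(link_local_ip, command,
              key_file="/root/.ssh/id_rsa.cloud",  # assumed system VM key location
              ssh_port=3922):                      # assumed system VM SSH port
    """Run a command on a VR reachable only via this host's link-local network."""
    result = subprocess.run(
        ["ssh", "-p", str(ssh_port), "-i", key_file,
         "-o", "StrictHostKeyChecking=no",
         "root@" + link_local_ip, command],
        capture_output=True, text=True, check=True)
    return result.stdout

# Example: hypothetical link-local address; only the host agent can reach it.
print(run_on_vr("169.254.3.27", "ip addr show eth0"))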


________________________________
From: Rafael Weingärtner <rafaelweingart...@gmail.com>
Sent: Friday, January 12, 2018 1:42 PM
To: dev
Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer

But we are already using this design in VMware deployments (not sure about
KVM). The management network is already an isolated network used only by
system VMs and ACS. Unless we are attacked by some internal agent, we are
safe from customer attacks through the management network. Also, we can (if
we don't already) restrict access to only these management interfaces on the
system VMs (VRs, SSVM, console proxy, and others to come).



Can someone confirm if VRs receive management IPs in KVM deployments?

On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed <sah...@cloudops.com> wrote:

> The reason we used link-local in the first place was to isolate the VR
> from directly accessing the management network. This provides another layer
> of security in case of a VR exploit. Moving to management IPs will also have
> the side effect of making all VRs visible to each other. Are we okay
> accepting this?
>
> Thanks,
> -Syed
>
> On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey <tmac...@gmail.com> wrote:
>
> > dom0 already has a DHCP server listening for requests on internal
> > management networks. I'd be wary of trying to manage it from an external
> > service like CloudStack lest it get reset upon a XenServer patch. This
> > alone makes me favor option #2. I also think option #2 simplifies
> > network design for users.
> >
> > Agreed on making this as consistent across flows as possible.
> >
> >
> >
> > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > It looks reasonable to manage VRs via the management IP network. We
> > > should focus on using the same workflow for different deployment
> > > scenarios.
> > >
> > >
> > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion <pd...@cloudops.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We need to start an architecture discussion about running SystemVMs and
> > > > Virtual Routers as HVM instances on XenServer. With the recent
> > > > Meltdown-Spectre vulnerabilities, one of the mitigation steps is
> > > > currently to run VMs as HVM on XenServer to contain a user-space
> > > > attack from a guest OS.
> > > >
> > > > A recent hotfix from Citrix XenServer (XS71ECU1009) enforces that VMs
> > > > start as HVM. This is currently problematic for Virtual Routers and
> > > > SystemVMs because CloudStack uses the PV "OS boot options" to
> > > > preconfigure the VR's eth0 (cloud_link_local). When using HVM, the "OS
> > > > boot options" are not accessible to the VM, so the VR fails to be
> > > > properly configured.
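
For context, the PV "OS boot options" are just a flat key=value string the
system VM parses at boot to configure eth0 on the link-local network. A rough
sketch of that mechanism (the key names here are illustrative, not an exact
copy of CloudStack's systemvm boot args):

def parse_boot_args(cmdline):
    """Split 'k1=v1 k2=v2 ...' into a dict, ignoring tokens without '='."""
    return dict(tok.split("=", 1) for tok in cmdline.split() if "=" in tok)

# Illustrative example of the kind of arguments passed via PV boot options;
# an HVM guest never sees this string, which is exactly the problem described.
example = "template=domP type=router name=r-1234-VM eth0ip=169.254.3.27 eth0mask=255.255.0.0"
args = parse_boot_args(example)
print(args["eth0ip"], args["eth0mask"])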
> > > >
> > > > I currently see two potential approaches for this:
> > > > 1. Run a DHCP server in dom0, managed by CloudStack, so the VR's eth0
> > > > would receive its network configuration at boot (see the sketch after
> > > > this list).
> > > > 2. Change the current way of managing VRs and SVMs on XenServer,
> > > > potentially doing the same as with VMware: use pod management networks
> > > > and assign a pod IP to each VR.
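
As a thought experiment for approach #1, CloudStack would essentially have to
push a static DHCP reservation into dom0 for each VR's eth0 MAC. A minimal
sketch, assuming a dnsmasq instance in dom0 started with --dhcp-hostsfile
pointing at the file below (the path, MAC, and IP values are hypothetical):

import subprocess

DHCP_HOSTS_FILE = "/etc/dnsmasq.d/cloud-vr.hosts"  # hypothetical path

def pin_vr_lease(mac, link_local_ip, vr_name):
    """Append a static MAC -> IP reservation in dnsmasq's dhcp-hostsfile format."""
    with open(DHCP_HOSTS_FILE, "a") as f:
        f.write("{},{},{}\n".format(mac, link_local_ip, vr_name))
    # dnsmasq re-reads its dhcp-hostsfile on SIGHUP, so no restart is needed.
    subprocess.run(["pkill", "-HUP", "dnsmasq"], check=False)

pin_vr_lease("02:00:4c:1f:00:01", "169.254.3.27", "r-1234-VM")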
> > > >
> > > > I don't know how it is implemented in KVM; maybe cloning the KVM
> > > > approach would work too. Could someone explain how it works on this
> > > > thread?
> > > >
> > > > I'm a bit of a fan of the potential #2 approach because it could
> > > > facilitate VR monitoring and logging, although a migration path for an
> > > > existing cloud could be complex.
> > > >
> > > > Cheers,
> > > >
> > > >
> > > > Pierre-Luc
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>



--
Rafael Weingärtner
