2013/12/10 Clint Byrum <cl...@fewbar.com>

> Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
> > And one more thing,
> >
> > Sandy Walsh pointed to the client Rackspace developed and uses - [1],
> > [2]. Its design is somewhat different and can be expressed by the
> > following formula:
> >
> > App -> Host (XenStore) <-> Guest Agent
> >
> > (taken from the wiki [3])
> >
> > It has an obvious disadvantage - it is hypervisor-dependent and
> > currently implemented for Xen only. On the other hand, such a design
> > should not have the shared-facility vulnerability, as the agent
> > accesses the server not directly but via XenStore (which, AFAIU, is
> > compute-node based).
> >
>
> I don't actually see any advantage to this approach. It seems to me that
> it would be simpler to expose and manage a single network protocol than
> it would be to expose hypervisor-level communications for all hypervisors.
>

I think the Rackspace agent design could be expanded as follows:

Controller (Savanna/Trove) <-> AMQP/ZeroMQ <-> Agent on Compute host <->
XenStore <-> Guest Agent

That is somewhat speculative, because if I understood it correctly the
published code covers only the second part of the exchange:

Python API / CMD interface <-> XenStore <-> Guest Agent
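
For concreteness, here is a rough Python sketch of what that host-side
half of the exchange might look like. The xenstore paths, the JSON
layout and the use of the xenstore-read/xenstore-write CLI tools are my
assumptions based on the wiki [3], not taken from the published code:

    import json
    import subprocess
    import time
    import uuid

    def call_guest_agent(dom_id, command, timeout=30):
        # Request/response keys under the guest's xenstore data/
        # subtree are an assumption; the real agent protocol may
        # use different paths and payloads.
        req_id = str(uuid.uuid4())
        req_path = '/local/domain/%s/data/host/%s' % (dom_id, req_id)
        resp_path = '/local/domain/%s/data/guest/%s' % (dom_id, req_id)
        # Post the request where the guest agent polls for it.
        subprocess.check_call(
            ['xenstore-write', req_path, json.dumps(command)])
        # Poll for the agent's reply, then clean both keys up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                raw = subprocess.check_output(['xenstore-read', resp_path])
            except subprocess.CalledProcessError:
                time.sleep(0.5)  # no reply yet
                continue
            subprocess.call(['xenstore-rm', req_path])
            subprocess.call(['xenstore-rm', resp_path])
            return json.loads(raw)
        raise Exception('guest agent did not reply within %s s' % timeout)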

Assuming I got it right: while more complex, such a design removes
pressure from the AMQP/ZeroMQ providers: on the 'Agent on Compute' you
can easily control the number of messages emitted by the guest with
throttling. That is easy to do since the agent runs on a compute host.
In the worst case, if it happens to be abused by a guest, it affects
that compute host only and not a whole segment of OpenStack.
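
For illustration, a minimal sketch of the kind of per-guest throttle
the 'Agent on Compute' could apply before forwarding anything to
AMQP/ZeroMQ (a plain token bucket; all names here are made up):

    import time

    class GuestThrottle(object):
        # Each guest may emit `rate` messages per second on average,
        # with bursts of up to `burst`; anything above that is dropped
        # on the compute host, so the queue never sees the flood.
        def __init__(self, rate=5.0, burst=20.0):
            self.rate = rate
            self.burst = burst
            self.buckets = {}  # guest id -> (tokens, last timestamp)

        def allow(self, guest_id):
            now = time.time()
            tokens, last = self.buckets.get(guest_id, (self.burst, now))
            tokens = min(self.burst, tokens + (now - last) * self.rate)
            if tokens < 1.0:
                self.buckets[guest_id] = (tokens, now)
                return False  # guest is flooding, drop the message
            self.buckets[guest_id] = (tokens - 1.0, now)
            return True

A misbehaving guest then costs you one dict entry and some dropped
messages on its own compute host, nothing more.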


