That's a great idea. How about the proposal below be changed so that the 
metadata-proxy forwards the /connect-style calls to Marconi queue A, and the 
response-style URLs go to queue B.

The agent wouldn't need to know which Marconi queues it's talking to then, 
and could always talk to the same endpoint.

Any of the servers (Savanna/Trove) that wanted to control the agents would then 
just have to push into Marconi queue A and get responses from queue B.

HTTP is then used all the way through the process, which should make things 
easy to implement and scale.
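The server side of the two-queue scheme could be sketched roughly as below, assuming a Marconi v1-style HTTP API; the base URL, queue names, header values, and message shape here are illustrative assumptions, not settings from any real deployment:

```python
# Sketch of the Savanna/Trove side: push commands onto queue A, poll
# queue B for the agents' responses. All endpoint details are assumed.
import json
import urllib.request
import uuid

MARCONI = "http://marconi.example:8888/v1"  # hypothetical Marconi endpoint
CLIENT_ID = str(uuid.uuid4())               # Marconi expects a Client-ID header

def queue_url(base, queue):
    """Build the messages URL for a named queue."""
    return "%s/queues/%s/messages" % (base, queue)

def command_message(command, args, ttl=300):
    """Wrap a command for the agent as a Marconi message body."""
    return [{"ttl": ttl, "body": {"command": command, "args": args}}]

def push_command(command, args):
    """POST a command onto queue A; the metadata proxy relays it onward."""
    req = urllib.request.Request(
        queue_url(MARCONI, "queue-a"),
        data=json.dumps(command_message(command, args)).encode(),
        headers={"Content-Type": "application/json", "Client-ID": CLIENT_ID},
    )
    return urllib.request.urlopen(req)

def poll_responses():
    """GET agent responses back off queue B."""
    req = urllib.request.Request(
        queue_url(MARCONI, "queue-b"),
        headers={"Client-ID": CLIENT_ID},
    )
    return json.load(urllib.request.urlopen(req))
```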

Thanks,
Kevin

________________________________________
From: Sylvain Bauza [sylvain.ba...@gmail.com]
Sent: Thursday, December 12, 2013 11:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

Why couldn't the notifications be handled by Marconi?

Handling the security issues would be up to Marconi's team, since delivering a 
messaging service between VMs is part of their mission statement.

On 12 Dec 2013 22:09, "Fox, Kevin M" <kevin....@pnnl.gov> wrote:
Yeah, I think the extra NIC is unnecessary too. There is already a working 
route to 169.254.169.254, with a metadata proxy -> server running on it.

So... let's brainstorm for a minute and see if there are enough pieces already 
in place to do most of the work.

We already have:
  * An HTTP channel out from private VMs, past network namespaces, all the way 
to the node running the neutron-metadata-agent.

We need:
  * Some way to send a command, plus arguments, to the VM to execute some action 
and get a response back.

OpenStack has focused on REST APIs for most things, and I think that is a great 
tradition to continue. This allows the custom agent plugins to be written in 
any language that can speak HTTP (all of them?) on any platform.

A REST API running in the VM wouldn't be accessible from the outside on a 
private network, though.

Random thought: could some glue "unified guest agent" be written to bridge the 
gap?

How about something like the following:

The "unified guest agent" starts up and makes an HTTP request to 
169.254.169.254/unified-agent/<cnc_type_from_configfile>/connect.
If at any time the connection returns, it will auto-reconnect.
It will block as long as possible, and the data returned will be an HTTP 
request. The request will have a special header with a request id.
The HTTP request will be forwarded to localhost:<someportfromconfigfile>, and 
the response will be posted to 
169.254.169.254/unified-agent/<cnc_type>/response/<response_id>
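The agent loop above could be sketched roughly like this; the header name, ports, and payload shape are assumptions I've filled in for illustration, since the proposal doesn't pin them down:

```python
# Minimal sketch of the "unified guest agent" long-poll loop. The
# X-Agent-Request-Id header, LOCAL_PORT, and the JSON payload layout
# are hypothetical; only the URL scheme comes from the proposal.
import json
import urllib.request

METADATA = "http://169.254.169.254/unified-agent"
CNC_TYPE = "trove"   # would come from the agent's config file
LOCAL_PORT = 8080    # localhost service the agent fronts, also from config

def response_url(request_id):
    """Where the agent posts the local service's answer."""
    return "%s/%s/response/%s" % (METADATA, CNC_TYPE, request_id)

def handle_one(payload):
    """Forward one proxied request to localhost, post the reply back."""
    request_id = payload["headers"]["X-Agent-Request-Id"]  # assumed header
    local = urllib.request.urlopen(
        "http://localhost:%d%s" % (LOCAL_PORT, payload["path"]))
    urllib.request.urlopen(urllib.request.Request(
        response_url(request_id), data=local.read(), method="POST"))

def run():
    """Block on /connect; reconnect whenever the long poll returns."""
    while True:
        try:
            conn = urllib.request.urlopen(
                "%s/%s/connect" % (METADATA, CNC_TYPE))
            handle_one(json.load(conn))
        except OSError:
            continue  # auto-reconnect, as the proposal requires
```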

The neutron-proxy-server would need to be modified slightly so that, if it sees 
a /unified-agent/<cnc_type>/* request, it looks in the unified-agent section of 
its config file, finds the ip/port to contact for the given <cnc_type>, and 
forwards the request to that server instead of the regular metadata one.
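A rough sketch of that routing change, with the config section and option names invented for illustration:

```python
# Hypothetical routing helper for the modified neutron-metadata-proxy:
# /unified-agent/<cnc_type>/* goes to the backend registered for that
# cnc_type; everything else falls through to normal metadata handling.
import configparser

def load_backends(path):
    """Read cnc_type -> "ip:port" mappings from a [unified-agent] section."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return dict(cfg["unified-agent"]) if "unified-agent" in cfg else {}

def route(path, backends):
    """Return (host_port, rewritten_path), or None for metadata traffic."""
    parts = path.lstrip("/").split("/")
    if len(parts) >= 2 and parts[0] == "unified-agent":
        cnc_type = parts[1]
        if cnc_type in backends:
            return backends[cnc_type], "/" + "/".join(parts[2:])
    return None  # not ours: hand off to the regular metadata server
```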

Once this is in place, Savanna or Trove can have their web API registered with 
the proxy as the server for the "savanna" or "trove" cnc_type. They will be 
contacted by the agents as they come up, and will be able to make web requests 
to them and get responses back.
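Registering could then be plain configuration on the proxy node, e.g. a section like the following (section name, service names, and ports are all illustrative):

```ini
# Hypothetical unified-agent section of the neutron-metadata-proxy config,
# mapping each cnc_type to the service endpoint that owns it.
[unified-agent]
savanna = 10.0.0.10:8386
trove = 10.0.0.11:8779
```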

What do you think?

Thanks,
Kevin

________________________________________
From: Ian Wells [ijw.ubu...@cack.org.uk]
Sent: Thursday, December 12, 2013 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 12 December 2013 19:48, Clint Byrum <cl...@fewbar.com> wrote:
Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
> On 12/10/2013 03:49 PM, Ian Wells wrote:
> > On 10 December 2013 20:55, Clint Byrum <cl...@fewbar.com> wrote:
> I've read through this email thread with quite a bit of curiosity, and I
> have to say what Ian says above makes a lot of sense to me. If Neutron
> can handle the creation of a "management vNIC" that has some associated
> iptables rules governing it that provides a level of security for guest
> <-> host and guest <-> $OpenStackService, then the transport problem
> domain is essentially solved, and Neutron can be happily ignorant (as it
> should be) of any guest agent communication with anything else.
>

Indeed, I think it could work; however, I think the NIC is unnecessary.

It seems likely that even with a second NIC, said address will be something
like 169.254.169.254 (or the IPv6 equivalent?).

There *is* no IPv6 equivalent, which is one standing problem. Another is that 
(and admittedly you can quibble about this problem's significance) you need a 
router on a network to be able to get to 169.254.169.254 - I raise that because 
the obvious use case for multiple networks is to have a net which is *not* 
attached to the outside world, so that you can layer e.g. a private DB service 
behind your app servers.

Neither of these is a criticism of your suggestion so much as a standing 
issue with the current architecture.
--
Ian.


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
