On 12/12/2013 10:24 AM, Dmitry Mescheryakov wrote:
Clint, Kevin,

Thanks for reassuring me :-) I just wanted to make sure that having direct access from VMs to a single facility is not a dead end in terms of security and extensibility. And since it is not, I agree it is much simpler (and hence better) than a hypervisor-dependent design.


Then returning to two major suggestions made:
 * Salt
 * Custom solution specific to our needs

The custom solution could be built on top of oslo.messaging, which gives us RPC working over different messaging systems. And that is exactly what we need: RPC into the guest supporting various transports. What it lacks at the moment is security: it has neither authentication nor ACLs.
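
To make that gap concrete, here is a minimal sketch of a guest-side RPC
server on oslo.messaging. The topic, server name and handler are
hypothetical, and call signatures have shifted a bit across
oslo.messaging releases; the point is that nothing here authenticates
the caller or checks an ACL:

    from oslo_config import cfg
    import oslo_messaging

    class GuestEndpoint(object):
        # Hypothetical handler; a real agent would whitelist commands.
        def execute(self, ctxt, command):
            return {'status': 'ok', 'command': command}

    # The transport (RabbitMQ, Qpid, ZeroMQ, ...) comes from config.
    transport = oslo_messaging.get_rpc_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='guest_agent',
                                   server='instance-0001')
    server = oslo_messaging.get_rpc_server(transport, target,
                                           endpoints=[GuestEndpoint()])
    server.start()
    # Any client able to reach the transport can invoke execute() --
    # there is no authentication or ACL check anywhere in this path.
    server.wait()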

Salt also provides an RPC service, but it has a couple of disadvantages: it is tightly coupled to ZeroMQ, and it needs a server process to run. A single transport option (ZeroMQ) is a limitation we really want to avoid: OpenStack can be deployed with various messaging providers, and we can't limit the choice to a single option in the guest agent. Though that could change in the future, it is an obstacle to consider.
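
For contrast with Salt's single transport, a sketch of how the
transport becomes a pure deployment choice under oslo.messaging (the
broker URL is hypothetical; the agent code itself does not change):

    from oslo_config import cfg
    import oslo_messaging

    # The messaging driver is selected by URL/config, not hard-coded:
    transport = oslo_messaging.get_rpc_transport(
        cfg.CONF, url='rabbit://user:pass@broker:5672/')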

Running yet another server process within OpenStack, as has already been pointed out, is expensive. It means another server to deploy and take care of, and +1 to overall OpenStack complexity. And it does not look like that could be fixed any time soon.

For the given reasons, I favor an agent based on oslo.messaging.


An agent based on oslo.messaging is, however, a potential security attack vector and a possible scalability problem: we do not want the guest agents communicating over the same RPC servers as the rest of OpenStack.
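
One plausible mitigation, sketched under the assumption that a deployer
stands up a dedicated broker for guest traffic (the URL is
hypothetical), is to give the agents a transport of their own, so a
flood from a guest never touches the control-plane bus:

    from oslo_config import cfg
    import oslo_messaging

    # Guest agents get an isolated broker, separate from the one the
    # rest of OpenStack uses:
    guest_transport = oslo_messaging.get_rpc_transport(
        cfg.CONF, url='rabbit://agent:secret@guest-broker:5672/guests')
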
Thanks,

Dmitry



2013/12/11 Fox, Kevin M <kevin....@pnnl.gov>

    Yeah. It's likely that the metadata server stuff will get more
    scalable/hardened over time. If it isn't enough now, let's fix it
    rather than coming up with a new system to work around it.

    I like the idea of using the network since all the hypervisors
    have to support network drivers already. They also already have to
    support talking to the metadata server. This keeps OpenStack out
    of the hypervisor driver business.

    Kevin

    ________________________________________
    From: Clint Byrum [cl...@fewbar.com]
    Sent: Tuesday, December 10, 2013 1:02 PM
    To: openstack-dev
    Subject: Re: [openstack-dev] Unified Guest Agent proposal

    Excerpts from Dmitry Mescheryakov's message of 2013-12-10 12:37:37 -0800:
    > >> What is the exact scenario you're trying to avoid?
    >
    > It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server
    > (Salt / Our own self-written server). Looking at the design, it doesn't
    > look like the attack could be somehow contained within a tenant it is
    > coming from.
    >

    We can push a tenant-specific route for the metadata server, and a
    tenant-specific endpoint for in-agent things. Still simpler than
    hypervisor-aware guests. I haven't seen anybody ask for this yet,
    though I'm sure if they run into these problems it will be the next
    logical step.
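
    (As an illustration of the tenant-specific route idea: Neutron can
    push a per-subnet host route toward the metadata IP over DHCP. A
    sketch with python-neutronclient; the credentials, subnet ID and
    next hop are hypothetical:)

        from neutronclient.v2_0 import client as neutron_client

        neutron = neutron_client.Client(
            username='admin', password='secret', tenant_name='admin',
            auth_url='http://keystone:5000/v2.0')
        # Route metadata traffic to a tenant-local endpoint:
        neutron.update_subnet('SUBNET_ID', {'subnet': {
            'host_routes': [{'destination': '169.254.169.254/32',
                             'nexthop': '10.0.0.1'}]}})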

    > In the current OpenStack design I see only one similarly vulnerable
    > component - metadata server. Keeping that in mind, maybe I just
    > overestimate the threat?
    >

    Anything you expose to the users is "vulnerable". By using the
    localized hypervisor scheme you're now making the compute node itself
    vulnerable. Only now you're asking that an already complicated thing
    (nova-compute) add another job, rate limiting.
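
    (For a sense of the extra job being asked of nova-compute, a
    minimal token-bucket rate limiter; a generic sketch, not nova
    code:)

        import time

        class TokenBucket(object):
            # Allow `rate` requests/sec with bursts up to `capacity`.
            def __init__(self, rate, capacity):
                self.rate, self.capacity = float(rate), float(capacity)
                self.tokens, self.stamp = float(capacity), time.time()

            def allow(self):
                now = time.time()
                self.tokens = min(self.capacity, self.tokens +
                                  (now - self.stamp) * self.rate)
                self.stamp = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return True
                return False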





_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
