On 03/19/2014 10:54 PM, Joe Gordon wrote:
On Wed, Mar 19, 2014 at 9:25 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:

> An advance on the changes required to have a py->c++ compiled rootwrap
> as a mitigation POC for Havana/Icehouse:
>
> https://github.com/mangelajo/shedskin.rootwrap/commit/e4167a6491dfbc71e2d0f6e28ba93bc8a1dd66c0
>
> The current translation output is included. It looks doable (I killed
> almost 80% of the translation problems), but there are two big stones:
>
> 1) As Joe said, no support for subprocess (we're interested in Popen);
>    I'm using a dummy os.system() call for the test.
>
> 2) No logging support.
>
> I'm not sure how complicated it would be to get those modules
> implemented for Shed Skin.

Before sorting out whether we can get those working under Shed Skin: do you have any preliminary performance numbers from neutron when using this?
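[A sketch of the dummy os.system() stand-in mentioned above; the `run_command` helper name is illustrative, not the actual rootwrap code, and this is written as Python 3 for testability even though the thread targets Python 2:]

```python
import os

def run_command(cmd):
    # Shed Skin can translate os.system but not subprocess.Popen, so
    # this stand-in trades stdout/stderr capture for translatability:
    # only the command's exit status comes back.
    return os.system(cmd)

status = run_command("true")
```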
Sure, totally agree. I'm trying to put up a conversion without 1 & 2, to run a benchmark on it, and then I'll post the results. I suppose we couldn't use it in neutron itself without Popen support (not sure), but at least I could get an estimate from the previous numbers and the new ones.

Best,
Miguel Ángel.
On 03/18/2014 09:14 AM, Miguel Angel Ajo wrote:

Hi Joe, thank you very much for the positive feedback.

I plan to spend a day during this week on the Shed Skin compatibility for rootwrap (I'll branch it, and tune/cut down as necessary) to make it compile under Shed Skin [1]: nothing done yet.

It's a short-term alternative until we can have a rootwrap agent, together with its integration in neutron (for Juno).

As for the compiled rootwrap: if it works, and if it looks good (security wise), then we'd have a solution for Icehouse/Havana.

Help in [1] is really welcome ;-) I'm available in #openstack-neutron as 'ajo'.

Best regards,
Miguel Ángel.

[1] https://github.com/mangelajo/shedskin.rootwrap

On 03/18/2014 12:48 AM, Joe Gordon wrote:

On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo <mangel...@redhat.com> wrote:

> I have included on the etherpad the option to write a sudo plugin (or
> several), specific to OpenStack.
>
> And this is a test with Shed Skin; I suppose that in more complicated
> dependency scenarios it should perform better.
>
> [majopela@redcylon tmp]$ cat <<EOF >test.py
> import sys
> print "hello world"
> sys.exit(0)
> EOF
>
> [majopela@redcylon tmp]$ time python test.py
> hello world
>
> real    0m0.016s
> user    0m0.015s
> sys     0m0.001s

This looks very promising!
A few gotchas:

* Very limited library support:
  https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
* no logging
* no six
* no subprocess
* no *args support
* https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions

That being said, I did a quick comparison with great results:

$ cat tmp.sh
#!/usr/bin/env bash
echo $0 $@
ip a

$ time ./tmp.sh foo bar > /dev/null

real    0m0.009s
user    0m0.003s
sys     0m0.006s

$ cat tmp.py
#!/usr/bin/env python
import os
import sys
print sys.argv
print os.system("ip a")

$ time ./tmp.py foo bar > /dev/null

min:
real    0m0.016s
user    0m0.004s
sys     0m0.012s

max:
real    0m0.038s
user    0m0.016s
sys     0m0.020s

$ shedskin tmp.py && make
$ time ./tmp foo bar > /dev/null

real    0m0.010s
user    0m0.007s
sys     0m0.002s

Based on these results, I think a deeper dive into making rootwrap support Shed Skin is worthwhile.

[majopela@redcylon tmp]$ shedskin test.py
*** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[analyzing types..]
********************************100%
[generating c++ code..]
[elapsed time: 1.59 seconds]
[majopela@redcylon tmp]$ make
g++ -O2 -march=native -Wno-deprecated -I. -I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp /usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp /usr/lib/python2.7/site-packages/shedskin/lib/re.cpp /usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre -o test
[majopela@redcylon tmp]$ time ./test
hello world

real    0m0.003s
user    0m0.000s
sys     0m0.002s

----- Original Message -----
> We had this same issue with the dhcp-agent. Code was added that
> paralleled the initial sync here: https://review.openstack.org/#/c/28914/
> That made things a good bit faster, if I remember correctly.
> Might be worth doing something similar for the l3-agent.
>
> Best,
>
> Aaron

On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon <joe.gord...@gmail.com> wrote:

On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon <joe.gord...@gmail.com> wrote:

> I looked into the Python-to-C options and haven't found anything
> promising yet. I tried Cython and RPython on a trivial hello world
> app, but got similar startup times to standard Python.

The one thing that did work was adding '-S' when starting Python:

    -S  Disable the import of the module site and the site-dependent
        manipulations of sys.path that it entails.

Using 'python -S' didn't appear to help in devstack, because the PBR-generated console script re-adds the site dir itself:

    #!/usr/bin/python -S
    # PBR Generated from u'console_scripts'

    import sys
    import site
    site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')

I am not sure if we can do that for rootwrap.

jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
hello world

real    0m0.021s
user    0m0.000s
sys     0m0.020s
jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
hello world

real    0m0.021s
user    0m0.000s
sys     0m0.020s
jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
hello world

real    0m0.010s
user    0m0.000s
sys     0m0.008s
jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
hello world

real    0m0.010s
user    0m0.000s
sys     0m0.008s

On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <mangel...@redhat.com> wrote:

> Hi Carl, thank you, good idea.
>
> I started reviewing it, but I will do it more carefully tomorrow morning.
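[The effect of -S can be checked directly: without it the interpreter imports the site module at startup, with it that import (and the sys.path work it does) is skipped. A quick reproduction, using python3 here for convenience:]

```shell
# Normal startup imports the site module, which does most of the
# sys.path manipulation and accounts for part of the startup cost.
python3 -c 'import sys; print("site" in sys.modules)'      # True

# With -S the site module is never imported.
python3 -S -c 'import sys; print("site" in sys.modules)'   # False
```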
----- Original Message -----
> All,
>
> I was writing down a summary of all of this and decided to just do it
> on an etherpad. Will you help me capture the big picture there? I'd
> like to come up with some actions this week to try to address at least
> part of the problem before Icehouse releases.
>
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
>
> Carl
>
> On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo <majop...@redhat.com> wrote:
>
>> Hi Yuri & Stephen, thanks a lot for the clarification.
>>
>> I'm not familiar with UNIX domain sockets at a low level, but I wonder
>> if authentication could be achieved just with permissions (only users
>> in group "neutron" or group "rootwrap" accessing this service).
>>
>> I find it an interesting alternative to the other proposed solutions,
>> but there are some challenges associated with it, which could make it
>> more complicated:
>>
>> 1) Access control, file-system-permission based or token based.
>>
>> 2) stdout/stderr/return-code encapsulation/forwarding to the caller;
>>    if we have a simple/fast RPC mechanism we can use, it's a matter
>>    of serializing a dictionary.
>>
>> 3) Client-side implementation for 1 + 2.
>>
>> 4) It would need to accept new domain socket connections in green
>>    threads, to avoid spawning a new process to handle each new
>>    connection.
>>
>> The advantages:
>>  * We wouldn't need to break the only-python rule.
>>  * We don't need to rewrite/translate rootwrap.
>>
>> The disadvantages:
>>  * It needs changes on the client side (neutron + other projects).
>>
>> Cheers,
>> Miguel Ángel.
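[Editor's sketch of the permission-based access control in 1): the daemon binds a UNIX socket and tightens its file mode so the kernel rejects connect() from other users. The path and mode are illustrative; a real deployment would chown the socket to a trusted group (e.g. "rootwrap") and use 0o660.]

```python
import os
import socket
import stat

SOCK_PATH = "/tmp/rootwrap-demo.sock"  # illustrative path

# Clean up any stale socket from a previous run, bind, then tighten
# permissions before accepting connections; the kernel enforces the
# file mode when a client calls connect().
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
os.chmod(SOCK_PATH, 0o600)  # owner-only for this demo
server.listen(1)

mode = stat.S_IMODE(os.stat(SOCK_PATH).st_mode)

server.close()
os.unlink(SOCK_PATH)
```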
> On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>
>> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> <stephen.g...@theguardian.com> wrote:
>>
>>> Hi,
>>>
>>> Given that Yuriy says explicitly 'unix socket', I don't think he
>>> means 'MQ' when he says 'RPC'. I think he just means a daemon
>>> listening on a unix socket for execution requests. This seems like
>>> a reasonably sensible idea to me.
>>
>> Yes, you're right.
>>
>> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>>
>>> I thought of this option, but didn't consider it, as it's somehow
>>> risky to expose an RPC end executing privileged (even filtered)
>>> commands.
>>
>> The multiprocessing module has some means to do RPC securely over
>> UNIX sockets. It does this by passing a token along with messages.
>> It should be secure, because with UNIX sockets we don't need anything
>> stronger since MITM attacks are not possible.
>>
>>> If I'm not wrong, once you have credentials for messaging, you can
>>> send messages to any end, even filtered; I somehow see this as a
>>> higher-risk option.
>>
>> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> local UNIX socket with very simple RPC over it.
>>
>>> And btw, if we add RPC in the middle, it's possible that all those
>>> system call delays increase, or don't decrease as much as would be
>>> desirable.
>>
>> Every call to rootwrap would require the following.
>>
>> Client side:
>>  - new client socket;
>>  - one message sent;
>>  - one message received.
>> Server side:
>>  - accepting a new connection;
>>  - one message received;
>>  - one fork-exec;
>>  - one message sent.
>>
>> This looks way simpler than passing through sudo and rootwrap, which
>> requires three execs and a whole lot of configuration files opened
>> and parsed.
>>
>> --
>> Kind regards, Yuriy.
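[The message flow above can be sketched end to end. A socketpair stands in for a client connect() plus server accept() on the daemon's bound UNIX socket, and JSON stands in for "serializing a dictionary"; in the real agent the server side would of course run the request through the rootwrap filters before executing it. Python 3 sketch:]

```python
import json
import socket
import subprocess

# A connected pair stands in for client connect() + server accept().
client, server = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Client side: one message sent.
client.sendall(json.dumps({"cmd": ["echo", "hello"]}).encode())

# Server side: one message received, one fork-exec, one message sent.
request = json.loads(server.recv(4096).decode())
proc = subprocess.run(request["cmd"], capture_output=True, text=True)
server.sendall(json.dumps({"returncode": proc.returncode,
                           "stdout": proc.stdout,
                           "stderr": proc.stderr}).encode())

# Client side: one message received.
reply = json.loads(client.recv(4096).decode())
client.close()
server.close()
```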
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev