Re: [lxc-devel] Getting some hooks into the container configuration

2012-06-07 Thread Serge Hallyn
Quoting Stéphane Graber (stgra...@ubuntu.com):
> On 05/25/2012 04:17 AM, Matthijs Kooijman wrote:
> > Hi Stéphane,
> > 
> >>  - stop: Is run after the container died
> >> [...]
> >> Potential other hooks include pre-start and post-stop
> > What would be the difference between stop and post-stop, if stop also
> > runs _after_ the container died?
> > 
> > Gr.
> > 
> > Matthijs
> 
> It'd be run after the umount has been done.
> 
> But that got me to go and read the OpenVZ definition of these and made
> me catch a "small" detail I had missed.
> 
> The start and stop hooks in OpenVZ are actually run in the container's
> namespaces.
> 
> Basically the timeline would be:
>  - HOOK: pre-start (host namespace)

For an encrypted container rootfs, we'd probably want to hook in
roughly here.  It needs to happen before the lxc fstab entries get
mounted, but it should be done in the container namespace, before
all other mounts happen.

By 'host namespace' did you mean 'pre-pivotroot'?

>  - LXC: mount rootfs and fstab entries
>  - HOOK: mount (host namespace)

Special bind mounts with mount propagation from the host could be
done here.  But again, they should be done in the container
namespace, before the pivot-root.

Special device creation could happen here too.

>  - LXC: spawn init
>  - HOOK: start (container namespace)

Hm, how do you see this lining up with init's exec?  Does init
get stopped on exec with ptrace, or does the hook just run in
parallel with init?

>  - USER: do whatever they want in the container
>  - LXC: stops the container
>  - HOOK: stop (container namespace)

Dunno.

>  - LXC: kill the container
>  - HOOK: umount (host namespace)

Not sure.  umounting should get done automatically by the
namespace disappearing.

>  - LXC: umount rootfs and fstab entries
>  - HOOK: post-stop (host namespace)

ping an admin?  not sure.

> Based on OpenVZ documentation, if we aim at implementing something
> similar, then:
>  - "start" would be run inside the container (but script lives outside
> of it) and called right before init is spawned.
>  - "stop" would be run inside the container (but script lives outside of
> it) and called right after init dies.
> 
> I must admit never having used these two and I'm a bit unsure whether
> they are really that useful and whether we can even implement them with
> the current state of things.
> 
> 
> Something else I didn't mention in my original post is the behavior on
> exit failure for the hooks. OpenVZ typically treats any non-zero return
> code as a failure and tries to kill the container but without calling
> any additional hook.
> For example, a failure in the "start" hook will cause the container to
> be shutdown and unmounted but without calling the stop, umount or
> post-stop hooks.

I think that's reasonable.

> (I'm mostly looking at
> http://download.openvz.org/doc/OpenVZ-Users-Guide.pdf in the "OpenVZ
> Action Scripts" section)

-serge

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Lxc-devel mailing list
Lxc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-devel


[lxc-devel] liblxc api

2012-06-07 Thread Serge Hallyn
Hi,

we've been talking (including a session at UDS) about getting a nice
API out of liblxc, with exports to python etc.  This could be used: to
easily replace some of the tools which are currently scripts;  to
increase code sharing with other projects - most immediately arkose and
libvirt;  and to make automated testing easier.

I tried my hand at implementing the start of that at
https://code.launchpad.net/~serge-hallyn/+junk/lxcwithapi .
The api is in src/lxc/lxccontainer.h, and the testcases in src/tests show
how to use it.  For instance, you could

        struct lxc_container *c;
        c = lxc_container_new("p1");
        if (!c->load_config(c, NULL)) {
                EXIT_FAIL("Loading configuration file %s for %s\n",
                    c->configfile, c->name);
        }

        c->want_daemonize(c);
        if (!c->start(c, 0, NULL)) {
                EXIT_FAIL("failed to start container %s\n", c->name);
        }

        if (!c->wait(c, "RUNNING")) {
                EXIT_FAIL("failed waiting for %s to be RUNNING\n",
                c->name);
        }

        INFO("container %s is running\n", c->name);
        lxc_container_put(c);

or, presumably in python (once the wrappers are there)

        c = LXC.lxc()
        c.want_daemonize()
        c.start()
        c.wait("RUNNING")

The bits of the API I've implemented may be all we need for arkose to
switch from using the lxc-start etc. binaries to using the API (not
sure, Stéphane can yell at me if not).

I had to do a little bit of shuffling liblxc code around to let me
reuse it, but far less so than I'd expected!

Does this seem worthwhile?  Comments - or competing implementations -
greatly appreciated.

thanks,
-serge



Re: [lxc-devel] OT: cgroups - memory controller behavior

2012-06-07 Thread Zhu Yanhai
2012/6/2 Rayson Ho :
> I'm from the Open Grid Scheduler Project (the official open source
> Grid Engine maintainer in the post Sun world), and we are using
> cgroups as the job container in our scheduler.
>
> Since LXC also uses cgroups, I was wondering if the developers want to
> change the behavior of "memory.memsw.limit_in_bytes" of the memory
> controller?
>
> With the "memory.memsw.limit_in_bytes" limit set, when the processes
> in the cgroup exceed the limit, the OOM killer picks a process and
> kills it. What we want is the setrlimit() behavior, which actually
> sets the upper bound of the process' data segment size, so sbrk &
> malloc would get an error instead.
>
> See Grid Engine cgroups Integration:
> http://blogs.scalablelogic.com/2012/05/grid-engine-cgroups-integration.html
>
> If LXC is not setting the "memory.memsw.limit_in_bytes" limit,
> then maybe it is not a real concern...
>
> Rayson
>
> 
> Open Grid Scheduler / Grid Engine
> http://gridscheduler.sourceforge.net/
>
> Scalable Grid Engine Support Program
> http://www.scalablelogic.com/
>
Hi,
I think your question is somewhat similar to this one:
http://www.spinics.net/lists/cgroups/msg02622.html
IMHO the global overcommit policy doesn't work for memcg, which is
why you can't make sbrk/mmap fail when a container is nearly full.

--
Regards,
Zhu Yanhai



Re: [lxc-devel] liblxc api

2012-06-07 Thread Matthew Franz
Absolutely, especially the Python bit!  I wrote a pseudo-OO wrapper
for OpenVZ (it basically makes a bunch of vzctl calls at a higher
layer of abstraction so you can start/stop/configure large numbers of
containers) and was thinking about writing something similar for LXC
that called the lxc-* tools, but I always feel a little weird about
calling .sh from .py, so the .c wrappers would be awesome.

- mdf

On Thu, Jun 7, 2012 at 5:57 PM, Serge Hallyn  wrote:
> Hi,
>
> we've been talking (including a session at UDS) about getting a nice
> API out of liblxc, with exports to python etc.  This could be used: to
> easily replace some of the tools which are currently scripts;  to
> increase code sharing with other projects - most immediately arkose and
> libvirt;  and to make automated testing easier.
>
> I tried my hand at implementing the start of that at
> https://code.launchpad.net/~serge-hallyn/+junk/lxcwithapi .
> The api is in src/lxc/lxccontainer.h, and the testcases in src/tests show
> how to use it.  For instance, you could
>
>        struct lxc_container *c;
>        c = lxc_container_new("p1");
>        if (!c->load_config(c, NULL)) {
>                EXIT_FAIL("Loading configuration file %s for %s\n",
>                    c->configfile, c->name);
>        }
>
>        c->want_daemonize(c);
>        if (!c->start(c, 0, NULL)) {
>                EXIT_FAIL("failed to start container %s\n", c->name);
>        }
>
>        if (!c->wait(c, "RUNNING")) {
>                EXIT_FAIL("failed waiting for %s to be RUNNING\n",
>                c->name);
>        }
>
>        INFO("container %s is running\n", c->name);
>        lxc_container_put(c);
>
> or, presumably in python (once the wrappers are there)
>
>        c = LXC.lxc()
>        c.want_daemonize()
>        c.start()
>        c.wait("RUNNING")
>
> The bits of the api i've implemented may be all we need for arkose to
> switch from using lxc-start etc binaries to using the api (not sure,
> Stéphane can yell at me if not).
>
> I had to do a little bit of shuffling liblxc code around to let me
> reuse it, but far less so than I'd expected!
>
> Does this seem worthwhile?  Comments - or competing implementations -
> greatly appreciated.
>
> thanks,
> -serge
>



-- 
--
Matthew Franz
mdfr...@gmail.com
