LXC in a production environment, very brave =)
Put the package on hold; there are several sites that document how to do this:
http://forums.debian.net/viewtopic.php?t=240
I would also suggest you consider using Debian rather than Ubuntu, as I suspect
you would have fewer problems with a Debian container =)
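In case it helps, this is roughly how a package gets held back with dpkg (the package name lxc is only a guess here; substitute whatever package keeps getting upgraded on your box):

# echo "lxc hold" | dpkg --set-selections
# dpkg --get-selections | grep hold        (verify it is now listed as hold)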
On 11/09/2010 05:40 PM, Sagar Dixit wrote:
> Thanks, Daniel, for the reply. I'll go through the cgroup documentation. I was
> just curious to know what happens if I spawn 2 LXC containers and load one
> container with lots of CPU-intensive processes, causing its CPU usage to
> exceed some threshold, which may lead to a 'denial / delay of service' type of
Thanks, Daniel, for the reply. I'll go through the cgroup documentation. I was
just curious to know what happens if I spawn 2 LXC containers and load one
container with lots of CPU-intensive processes, causing its CPU usage to
exceed some threshold, which may lead to a 'denial / delay of service' type of
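If a concrete sketch helps (the container names c1 and c2, the core numbers, and the config path are examples, not something from this thread): with the cpu controller the shares are relative weights, so when both containers are busy each one gets CPU time in proportion to its weight rather than one starving the other; for a hard boundary you can pin the two containers to disjoint cores through the cpuset controller in each container's config:

# in /var/lib/lxc/c1/config (assumed path) - keep c1 on core 0
lxc.cgroup.cpuset.cpus = 0
lxc.cgroup.cpuset.mems = 0

# in /var/lib/lxc/c2/config - keep c2 on core 1
lxc.cgroup.cpuset.cpus = 1
lxc.cgroup.cpuset.mems = 0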
On 11/09/2010 01:24 AM, Yuji NISHIDA wrote:
> # rpm -q glibc
> glibc-2.8-3.x86_64
> glibc-2.8-3.i686
>
> Could that be the reason?
> I needed to add these defines in utmp.c; then it works.
>
> #define TFD_CLOEXEC O_CLOEXEC
> #define TFD_NONBLOCK O_NONBLOCK
Hmm, weird. The man page says it is supported since
On 11/08/2010 10:22 PM, Sagar Dixit wrote:
> Hi,
>
> I am curious to know if there is fine-grained resource control in LXC
> containers. For example, can I allocate some fixed share of the CPU (some %) to a
> container and control this configuration while creating the LXC container? Or
> can I update it dynamically?
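For what it's worth, the knob being asked about looks roughly like this (a sketch only; the container name c1 and the numbers are made up): cpu.shares is a relative weight rather than a hard percentage, it can be set in the config used at creation time and changed later on a running container with lxc-cgroup:

lxc.cgroup.cpu.shares = 512          (in the container's config at creation time)

# lxc-cgroup -n c1 cpu.shares 256    (lower the weight while c1 is running)
# lxc-cgroup -n c1 cpu.shares        (print the current value)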
# rpm -q glibc
glibc-2.8-3.x86_64
glibc-2.8-3.i686
Could that be the reason?
I needed to add these defines in utmp.c; then it works.
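/* the glibc 2.8 headers here do not define the TFD_* flags for timerfd_create(),
 * so fall back to the equivalent O_* values */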
#define TFD_CLOEXEC O_CLOEXEC
#define TFD_NONBLOCK O_NONBLOCK
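For reference, a quick way to check whether the installed glibc headers ship those flags at all (assuming the usual include path):

# grep TFD_ /usr/include/sys/timerfd.h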
--
Yuji Nishida
nish...@nict.go.jp
On 2010/11/09, at 0:10, Daniel Lezcano wrote:
> On 11/08/2010 10:48 A