The idea works, you just have to make the /dev/kvm device
visible in the lxc with something like:
lxc.cgroup.devices.allow = c 10:* rwm
The kvm device has a minor number that could be detected before the lxc
configuration is written, to avoid the *, which gives the lxc visibility to all
the miscellaneous devices.
Of course
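The minor detection mentioned above could be sketched like this, assuming /proc/misc lists the kvm device (kvm is a misc device, so its major is 10); the variable name is only for illustration:

```shell
# Read the dynamic minor assigned to the kvm misc device from /proc/misc
# (each line is "<minor> <name>") and emit a device-specific allow rule
# instead of the wildcard "c 10:* rwm".
kvm_minor=$(awk '$2 == "kvm" {print $1}' /proc/misc)
echo "lxc.cgroup.devices.allow = c 10:${kvm_minor} rwm"
```

If the kvm module is not loaded, the awk match finds nothing and the minor stays empty, so the rule should only be generated on a host where /dev/kvm actually exists.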
On my machines there are also sshd and the other things that are usually there:
[Clown1.1.1> ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 21:47 ?        00:00:00 init [2]
root       253     1  0 21:47 ?        00:00:00 /usr/sbin/rsyslogd -c4
root       264     1
Hello,
You can have either 147 machines, divided into 3 connected square grids of 49,
running OLSR, or 75 machines (3 square grids of 25) running OSPF, all on ONE
good (Intel Core i7) PC.
Here is the 3*25 topo:
Clown1.1.1
|
o o o o o
o o o o o
o o o o o Cloonix1
o o o o o
o o o o o
\
Hello,
Some of you may remember that an ospf (quagga) network running in lxc
virtual machines did not work as well as the same network in uml virtual
machines.
I have identified a culprit: the watchquagga process, but I do not know why
this process regularly kills my normal ospfd process when run in an lxc.
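One way to test whether watchquagga is really the culprit is simply to keep it from running. This is only a sketch: on Debian-packaged quagga the switch is usually in /etc/quagga/debian.conf, but the path and variable name are assumptions and may differ on other distributions:

```shell
# Assumed location of the watchquagga switch on Debian-style installs;
# check your own distribution before relying on this path.
conf=/etc/quagga/debian.conf
if [ -f "$conf" ]; then
    # Turn watchquagga off so it can no longer restart/kill ospfd,
    # then restart quagga to pick up the change.
    sed -i 's/^watchquagga_enable=.*/watchquagga_enable=no/' "$conf"
    /etc/init.d/quagga restart
fi
```

If the ospfd processes then stay up in the lxc machines, that would confirm watchquagga as the source of the instability rather than ospfd itself.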
Hello,
I have tests that seem to show that when many lxc machines emit icmp
messages at the same time, the rate is measured globally and some messages
get dropped.
The following setting in each of the lxc:
echo 0 > /proc/sys/net/ipv4/icmp_ratemask
permits lots of icmp messages at the same time.
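Applying the setting in every running container by hand is tedious; a sketch of a loop that does it, assuming lxc-attach and lxc-ls are available (the --running flag varies with the lxc version):

```shell
# Clear the icmp rate-limit mask in every running lxc container, so that
# no icmp type is rate limited inside them.
if command -v lxc-ls >/dev/null 2>&1; then
    for vm in $(lxc-ls --running); do
        lxc-attach -n "$vm" -- sh -c 'echo 0 > /proc/sys/net/ipv4/icmp_ratemask'
    done
fi
```

The mask is a bitmap of icmp types to rate limit; writing 0 disables limiting for all types, which is handy for tests but probably not something to leave on in production.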
Hello,
I have 2 exactly identical topologies running ospf: one based on uml
virtual machines and one based on lxc virtual machines.
The 2 topologies are available at clownix.net (version 10).
My problem is that the lxc-based demo does not work well: it stays stable
only for moments before reloading.