Re: [lxc-devel] did the new kernel 2.6.36 support a full sysfs namespace for tun/tap device?
Thank you! I've found that kernel 2.6.35.9 has the sysfs tagging feature, which supports opening a tap/tun device directly in the VM.

2010/12/7 Daniel Lezcano:
> On 12/07/2010 11:10 AM, 贺鹏 wrote:
>> Hi, all:
>> Did the new kernel 2.6.36 support a full sysfs namespace for tun/tap device?
>
> I am not sure, but yes, it should. sysfs per namespace is in place since 2.6.35 AFAIR.

--
hepeng
ICT
___
Lxc-devel mailing list
Lxc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-devel
[lxc-devel] macvlan configuration
Hi all:

I use macvlan as the network configuration for my LXC VMs. Here is my configuration:

lxc.network.type = macvlan
lxc.network.macvlan.mode = vepa
lxc.network.flags = up
lxc.network.link = tap0

I configured 4 virtual eths in these VMs, each with tap0 as its network link. I use a tap device as the link for the macvlan, and I run a process that reads and writes this tap device. When I write an ARP request to the tap device, the macvlan doesn't broadcast the packet to the network interfaces in the VM.

I put some debug code in macvlan.c and ran this test, and I found the skb does not even enter the macvlan_handle_frame function.

Could someone tell me if there is anything wrong in my config? thx~~

--
hepeng
ICT
Re: [lxc-devel] macvlan configuration
The problem is that a tun/tap device needs the read()/write() functions to send packets into the network stack, but I was using send()/recv(). Problem solved.

2010/12/9 贺鹏:
> I use a tap device as the link for the macvlan, and I run a process
> to read and write this tap device. When I write an ARP request to the
> tap device, the macvlan didn't broadcast the packet to the network
> interface in the VM.

--
hepeng
ICT
[lxc-devel] Packet loss when high network traffic load
Hi,

I'm experiencing some packet loss under high network traffic. Here is the scenario: I have one host with one guest running as a proxy (squid).

When I start downloading 10-15 DVD images from my client, with the right http_proxy option set:

wget http://ftp.proxad.net/mirrors/ftp.mandriva.com/MandrivaLinux/official/iso/2010.1/mandriva-linux-free-2010-spring-i586.iso -O /dev/null &

I can see some packet loss and latency pinging the host:

menil...@antares:~$ ping -c 30 balblair.u06

--- balblair.u06.univ-nantes.prive ping statistics ---
30 packets transmitted, 27 received, 10% packet loss, time 33082ms
rtt min/avg/max/mdev = 6.264/8.009/9.007/0.818 ms

Without downloading anything:

--- balblair.u06.univ-nantes.prive ping statistics ---
30 packets transmitted, 30 received, 0% packet loss, time 29042ms
rtt min/avg/max/mdev = 0.955/1.068/1.285/0.073 ms

Here is the network config part of the LXC guest:

# the network
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = V2003
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = BA:BE:BA:BE:04:01
lxc.network.veth.pair = squid-lmb-clus

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = V2002
lxc.network.name = eth1
lxc.network.mtu = 1500
lxc.network.hwaddr = BA:BE:BA:BE:04:02
lxc.network.veth.pair = squid-lmb-serv

Where V2002 and V2003 are bridge interfaces:

r...@balblair:~# brctl show
V2002  8000.0024e86f3e03  no  eth2.2002
                              squid-lmb-serv
V2003  8000.0024e86f3e03  no  eth2.2003
                              squid-lmb-clus

Is this the expected behaviour, or am I missing some configuration somewhere?

Regards.
Re: [lxc-devel] Packet loss when high network traffic load
09.12.2010 18:29, Menil Jean-Philippe wrote:
> Hi,
>
> I'm experiencing some packet loss under high network traffic.
> Here is the scenario: I have one host with one guest running as a proxy (squid).
> [...]
> --- balblair.u06.univ-nantes.prive ping statistics ---
> 30 packets transmitted, 27 received, 10% packet loss, time 33082ms
> rtt min/avg/max/mdev = 6.264/8.009/9.007/0.818 ms

How about trying the same without lxc?

> Is this the expected behaviour, or am I missing some configuration somewhere?

I think when you saturate your link, some packet loss is to be expected.

/mjt
Re: [lxc-devel] Packet loss when high network traffic load
On 09/12/2010 16:36, Michael Tokarev wrote:
> How about trying the same without lxc?
>
> I think when you saturate your link, some packet loss is to be expected.

Hmmm, I must apologize. I simply reached the limit of the link between my workstation and the switch. If I ping the host from another PC, there's no problem at all.

Sorry for the noise.