You might need to check the physical NIC instead of br-ex or br-tun. You can disable LRO on the physical interface with "ethtool -K ethX lro off" (note the capital -K: lowercase -k only displays the offload settings, it cannot change them).
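A minimal sketch of that check-then-disable sequence, with ethX standing in for your physical NIC name (find the real one with `ip link`). The commands are printed rather than executed so they can be reviewed first:

```shell
# Hypothetical interface name -- substitute your physical NIC (see 'ip link'):
IFACE="ethX"

# Lowercase -k only *displays* the current offload settings:
SHOW_CMD="ethtool -k $IFACE"

# Uppercase -K changes them; features reported as [fixed] cannot be toggled.
# The change does not survive a reboot, so persist it in the interface
# configuration once it proves helpful:
SET_CMD="ethtool -K $IFACE lro off"

# Print the commands for review; drop the echoes to run them for real.
echo "$SHOW_CMD"
echo "$SET_CMD"
```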
*Rahul Sharma*
*MS in Computer Science, 2016*
College of Computer and Information Science, Northeastern University
Mobile: 801-706-7860
Email: rahulsharma...@gmail.com
Linkedin: www.linkedin.com/in/rahulsharmaait

On Mon, Sep 28, 2015 at 2:48 AM, applyhhj <apply...@163.com> wrote:

> Hi Sharma,
>
> Thank you very much for your reply; you really helped me a lot. After
> pinging the VM with larger packets, it turns out that when the -s
> parameter is set above 1430 the VM cannot be pinged at all, through
> either the external or the internal IPs. So I checked the settings of
> both the physical network interfaces and the virtual NICs with the
> "ethtool -k" command; there are many differences between those NICs.
> First, the settings of eth0 in the VM differ from those of br-tun (all
> bridges have the same settings). The settings for the bridge are listed
> below, where the bolded items differ from eth0 in the VM. I suspect the
> problem may be caused by the different *tx-gre-segmentation* setting,
> but I am not sure, and I do not know how to change the setting to test
> it. Does anyone have any idea about this problem? Thank you very much!!
>
> Features for br-tun:
>
> *rx-checksumming: off [fixed]*
> tx-checksumming: on
> tx-checksum-ipv4: off [fixed]
> tx-checksum-ip-generic: on
> tx-checksum-ipv6: off [fixed]
> tx-checksum-fcoe-crc: off [fixed]
> tx-checksum-sctp: off [fixed]
> scatter-gather: on
> tx-scatter-gather: on
> tx-scatter-gather-fraglist: on
> tcp-segmentation-offload: on
> tx-tcp-segmentation: on
> tx-tcp-ecn-segmentation: on
> tx-tcp6-segmentation: on
> udp-fragmentation-offload: on
> generic-segmentation-offload: on
> generic-receive-offload: on
> large-receive-offload: off [fixed]
> rx-vlan-offload: off [fixed]
> *tx-vlan-offload: on*
> ntuple-filters: off [fixed]
> receive-hashing: off [fixed]
> highdma: on
> *rx-vlan-filter: off [fixed]*
> vlan-challenged: off [fixed]
> *tx-lockless: on [fixed]*
> netns-local: off [fixed]
> tx-gso-robust: off [fixed]
> tx-fcoe-segmentation: off [fixed]
> *tx-gre-segmentation: on*
> *tx-ipip-segmentation: on*
> *tx-sit-segmentation: on*
> *tx-udp_tnl-segmentation: on*
> fcoe-mtu: off [fixed]
> *tx-nocache-copy: off*
> loopback: off [fixed]
> rx-fcs: off [fixed]
> rx-all: off [fixed]
> tx-vlan-stag-hw-insert: off [fixed]
> rx-vlan-stag-hw-parse: off [fixed]
> rx-vlan-stag-filter: off [fixed]
> l2-fwd-offload: off [fixed]
> busy-poll: off [fixed]
>
> Regards,
> hjh
>
> 2015-09-28
> ------------------------------
> applyhhj
> ------------------------------
> *From:* Rahul Sharma <rahulsharma...@gmail.com>
> *Sent:* 2015-09-28 02:41
> *Subject:* Re: [Openstack] Data transmission failure between VM and outside machines
> *To:* "Mike Spreitzer" <mspre...@us.ibm.com>
> *Cc:* "applyhhj" <apply...@163.com>, "openstack" <openstack@lists.openstack.org>
>
> Hi hjh,
>
> If you are able to ping that particular instance (which I expect you
> can, since the initial TCP handshake completes), the other thing that
> might cause such an issue is the MTU size.
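The MTU hypothesis fits the 1430-byte cutoff reported above. A back-of-the-envelope check in shell, assuming Neutron GRE tunnels: the 42-byte overhead figure (inner Ethernet header plus a keyed GRE header plus the outer IP header) is an assumption about this deployment, not something confirmed in the thread:

```shell
# 'ping -s 1430' payload plus the ICMP (8) and IP (20) headers the VM adds:
INNER=$((1430 + 8 + 20))        # 1458-byte inner IP packet

# Assumed GRE encapsulation overhead on the tunnel network:
#   inner Ethernet header (14) + GRE header with key (8) + outer IP (20)
OVERHEAD=$((14 + 8 + 20))       # 42 bytes

ON_WIRE=$((INNER + OVERHEAD))
echo "$ON_WIRE"                 # 1500 -- exactly the default physical MTU,
                                # so anything above -s 1430 no longer fits
```

Under that assumption, a payload of 1430 lands exactly on a 1500-byte physical MTU, and anything larger must be fragmented or dropped, which matches the observed behaviour.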
> We have also seen such an issue when the physical host's NIC was unable
> to deal correctly with fragmented packets. You can test this by first
> sending small ping packets, then increasing their size toward 1500 and
> seeing whether they still get through. In our case, we had to disable
> Large Receive Offload (LRO) on the NIC and then it worked fine. Another
> option is increasing the MTU on the NIC itself. Do give it a try and
> see if it helps.
>
> Thanks.
>
> On Sun, Sep 27, 2015 at 12:29 PM, Mike Spreitzer <mspre...@us.ibm.com> wrote:
>
>> > From: "applyhhj" <apply...@163.com>
>> > To: "openstack" <openstack@lists.openstack.org>
>> > Date: 09/27/2015 11:16 AM
>> > Subject: [Openstack] Data transmission failure between VM and outside machines
>> >
>> > Hi, I have set up an OpenStack cloud and launched VMs in it. At
>> > first everything went very well, but yesterday evening our lab lost
>> > power and all the servers were shut down. This morning I turned all
>> > the nodes back on and tried to connect to a VM by ssh, but failed.
>> > I used netstat to check the status of port 22; it shows that a
>> > connection between the VM and a machine on the external network can
>> > be established, but the ssh process just gets stuck after
>> > "SSH2_MSG_KEXINIT sent". I also set up a RabbitMQ server in the VM
>> > and the same thing happened: when connecting to it through the web
>> > UI, netstat shows established connections, yet the browser receives
>> > no data and shows a blank page. The same happens when I ssh from one
>> > VM to another over the internal 192.168.1.0/24 network. Does anyone
>> > know how to fix this problem? Thank you!
>> > By the way, the br-ex bridge can only be brought up manually, so
>> > after booting the network node I brought up br-ex and restarted all
>> > the relevant network services on the network node. Please help me
>> > with this problem. Thank you very much!!
>>
>> When faced with networking mysteries like that, my next step is
>> usually to start taking packet traces at various points.
>>
>> Regards,
>> Mike
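Mike's packet-trace suggestion can be sketched as below. Every device name here is an assumption (the tap device, physical NIC, and bridge names vary per deployment; find the real ones with `ip link` and `ovs-vsctl show`), and the commands are printed rather than executed so they can be reviewed first:

```shell
# Hypothetical device names -- substitute the real ones for your deployment:
TAP="tapXXXXXXXX"   # the affected VM's tap device on the compute node
NIC="eth0"          # physical NIC carrying the tunnel traffic
EXT="br-ex"         # external bridge on the network node

# Capture the same ssh attempt at each point, then compare the pcaps to
# see where packets stop or arrive mangled:
CAP_TAP="tcpdump -ni $TAP -s0 -w vm_tap.pcap port 22"
CAP_NIC="tcpdump -ni $NIC -s0 -w compute_nic.pcap proto gre"
CAP_EXT="tcpdump -ni $EXT -s0 -w net_brex.pcap port 22"

# Drop the echoes to run the captures for real (each needs its own terminal):
echo "$CAP_TAP"
echo "$CAP_NIC"
echo "$CAP_EXT"
```

Comparing where the large ssh key-exchange packets disappear (tap device vs. physical NIC vs. external bridge) narrows the fault to one hop.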
_______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack