You can use "ethtool -k eth0" to view the current offload settings, and "ethtool -K eth0 gro off" to turn GRO off.
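A quick sketch of that check (device name eth0 is just an example; substitute your NIC). The sample `ethtool -k` output is inlined here so the parsing step is reproducible; on a real host you would pipe `ethtool -k eth0` instead:

```shell
# Sample of the relevant lines from `ethtool -k eth0` output
# (inlined so this snippet runs anywhere; real hosts: ethtool -k eth0)
sample='rx-checksumming: on
generic-receive-offload: on
generic-segmentation-offload: on'

# Extract the GRO state the same way you would eyeball the real output
gro_state=$(printf '%s\n' "$sample" | awk -F': ' '/generic-receive-offload/ {print $2}')
echo "GRO is $gro_state"

# On a real host, to disable it and confirm:
#   ethtool -K eth0 gro off
#   ethtool -k eth0 | grep generic-receive-offload
```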
On Fri, Oct 25, 2013 at 3:03 PM, Martinx - ジェームズ <thiagocmarti...@gmail.com> wrote:

> Hi Rick,
>
> On 25 October 2013 13:44, Rick Jones <rick.jon...@hp.com> wrote:
>
>> On 10/25/2013 08:19 AM, Martinx - ジェームズ wrote:
>>
>>> I think I can say... "YAY!!" :-D
>>>
>>> With "LibvirtOpenVswitchDriver" my internal communication is now
>>> double! It goes from ~200 (with LibvirtHybridOVSBridgeDriver) to
>>> *_400Mbit/s_* (with LibvirtOpenVswitchDriver)! Still far from 1Gbit/s
>>> (my physical path limit), but more acceptable now.
>>>
>>> The command "ethtool -K eth1 gro off" still makes no difference.
>>
>> Does GRO happen if there isn't RX CKO on the NIC?
>
> Ouch! I missed that lesson... hehe
>
> No idea. How can I check / test this?
>
> If I disable RX CKO (using ethtool?) on the NIC, how can I verify
> whether GRO is actually happening or not?
>
> Anyway, I'm googling about all this stuff right now. Thanks for
> pointing it out!
>
> Refs:
>
> * JLS2009: Generic receive offload - http://lwn.net/Articles/358910/
>
>> Can your NIC peer into a GRE tunnel (?) to do CKO on the encapsulated
>> traffic?
>
> Again, no idea... No idea... :-/
>
> Listen, maybe this sounds too dumb on my part but, it is the first time
> I'm talking about this stuff (like "NIC peer into GRE"?, or GRO / CKO)...
>
> GRE tunnels sound too damn complex and problematic... I guess it is
> time to try VXLAN (or NVP?)...
>
> If you guys say: VXLAN is a completely different beast (i.e. it does
> not touch ANY GRE tunnel), and it works smoothly (without GRO / CKO /
> MTU / lag / low-speed troubles and issues), I'll move to it right now
> (are the VXLAN docs ready?).
>
> NOTE: I don't want to hijack this thread because of other problems with
> my OpenStack environment (internal communication vs the "Directional
> network performance issues with Neutron + OpenvSwitch" thread subject);
> please let me know if this becomes a problem for you guys.
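On the "how can I verify whether GRO is actually happening" question quoted above, one common sketch (my suggestion, not from the thread): capture on the receiving interface during a transfer and look at the segment lengths tcpdump reports. With GRO active, the kernel merges segments before the capture point, so you see TCP segments far larger than the 1500-byte MTU. The capture line below is a sample so the length check is reproducible:

```shell
# Real host (hypothetical interface/port): watch segment sizes on the receiver
#   tcpdump -ni eth1 -c 20 'tcp port 5001'
# Sample capture line, parsed the same way you'd read the real output:
line='12:00:00.000001 IP 10.0.0.1.5001 > 10.0.0.2.33000: Flags [.], length 26064'

# Pull out the reported segment length
len=$(printf '%s\n' "$line" | sed -n 's/.*length \([0-9]*\)$/\1/p')

# A segment bigger than the MTU is a strong hint that GRO coalesced frames
[ "$len" -gt 1500 ] && echo "GRO likely active (segment ${len} > 1500-byte MTU)"
```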
>>> So, there is only 1 remaining problem: when traffic passes through
>>> L3 / Namespace, it is still useless. Even the SSH connection into my
>>> Instances, via their Floating IPs, is slow as hell; sometimes it just
>>> stops responding for a few seconds, and comes back online again
>>> "out of nothing"...
>>>
>>> I just noticed a weird "behavior": when I run "apt-get update" from
>>> instance-1, it is slow as I said, plus its ssh connection (where I'm
>>> running apt-get update) stops responding right after I run "apt-get
>>> update" AND _all my other ssh connections stop working too!_ For
>>> a few seconds... This means that when I run "apt-get update" from
>>> within instance-1, the SSH session of instance-2 is affected too!!
>>> There is something pretty bad going on at L3 / Namespace.
>>>
>>> BTW, do you think ~400Mbit/s intra-VM communication (GRE tunnel) on
>>> top of 1Gbit ethernet is acceptable?! It is still less than half...
>>
>> I would suggest checking for individual CPUs maxing out during the
>> 400 Mbit/s transfers.
>
> Okay, I will.
>
>> rick jones
>
> Thiago
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
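On Rick's per-CPU suggestion quoted above, a quick sketch (assumes `mpstat` from the sysstat package; CPU number and figures below are illustrative): a single core pinned near 0% idle while the others sit mostly free during the transfer is the classic sign of a single-threaded softirq/tunnel bottleneck, which would cap throughput well below line rate.

```shell
# Real host: run this during the 400 Mbit/s transfer and watch the %idle column
#   mpstat -P ALL 1
# Sample mpstat data line (CPU 2, %idle is the last column), parsed the same
# way you'd eyeball the real output:
sample='12:00:01 PM    2   1.00   0.00   5.00   0.00   0.00  93.00   0.00   0.00   1.00'

# Grab the %idle figure (last field)
idle=$(printf '%s\n' "$sample" | awk '{print $NF}')

# Flag the core as saturated if it is under 5% idle
awk -v i="$idle" 'BEGIN { exit !(i < 5) }' && echo "CPU 2 looks maxed out (idle ${idle}%)"
```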