Oops... it seems I got confused.

The pasted output is indeed from the node; I was looking somewhere else.

Thanks a lot for noticing that, Adrian!

I will turn it off on the nodes and test again!

Should it be off on both the nodes and the VMs?
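In case it helps, here is a sketch of turning the three offloads off on every relevant interface in one go. It only prints the ethtool commands (drop the leading echo to actually apply them), and the vnet* names for the per-VM tap devices are an assumption; check `ip link` on your nodes for the real names:

```shell
# Dry run: print the ethtool commands for the hypervisor NIC and the
# per-VM tap devices (vnet* names are an assumption; check `ip link`).
# Remove the leading `echo` to actually apply the settings.
for ifname in eth1 vnet0 vnet1; do
    echo ethtool -K "$ifname" gro off gso off tso off
done
```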

Regards,

George


That shows that those 3 offload settings are enabled.
On 16/12/2014 19:01, "Georgios Dimitrakakis" wrote:

I believe that they are already disabled.

Here is the ethtool output:

# ethtool --show-offload eth1
Features for eth1:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: off
        tx-checksum-unneeded: off
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off
        tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on [fixed]
tx-vlan-offload: on [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
loopback: off [fixed]

Regards,

George

Disable offloading on the nodes with:

ethtool -K interfaceName gro off gso off tso off

And then try it again
On 16/12/2014 18:36, "Georgios Dimitrakakis" wrote:

Hi all!

In my OpenStack installation (Icehouse, using nova legacy
networking) the VMs talk to each other over a 1 Gbps network
link.

My issue is that although file transfers between physical
(hypervisor) nodes can saturate that link, transfers between VMs
reach much lower speeds, e.g. 30 MB/s (approx. 240 Mbps).

My tests consist of scp-ing a large image file (approx. 4 GB)
between the nodes and between the VMs.
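(A raw TCP benchmark such as iperf, assuming it is installed on both VMs, would rule out disk and SSH cipher overhead; 10.0.0.12 below is a placeholder for the receiving VM's address. This sketch just prints the command for each side:)

```shell
# Raw TCP throughput test, one command per VM; this prints the
# commands rather than running them. 10.0.0.12 is a placeholder.
echo "receiving VM: iperf -s"
echo "sending VM:   iperf -c 10.0.0.12 -t 30"
```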

I have updated my images to use the e1000 NIC driver, but the
results remain the same.

What other limiting factors could there be?

Does it have to do with the disk driver I am using? Does the
filesystem of the hypervisor node play a significant role?

Any ideas on how to get closer to saturating the 1 Gbps link?

Best regards,

George

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

