Which OS and network cards are you using?
On 14/12/2014 19:35, "Gonzalo Aguilar Delgado" wrote:
> Hi all,
>
> The question is: why do I have to restart neutron-plugin-openvswitch-agent
> to recover networking for new machines?
>
>
> I found a problem creating machines in a new node I've built. And [...]
> doesn't fail on any other circumstances. And I'm
> running ceph! It needs a reliable network.
>
> What are you looking at?
>
>
> On Sun, Dec 14, 2014 at 9:11, Adrián Norte Fernández <
> adr...@bashlines.com> wrote:
>
> Which OS and network cards are you using?
>         Interface "qvo377f7953-d2"
>     Port "eth0"
>         Interface "eth0"
>     Port "qvod1b2e6dc-7f"
>         tag: 1
>         Interface "qvod1b2e6dc-7f"
> Bridge br-ex
>     Port br-ex
>
You can do this before launching:
glance image-update \
--property hw_disk_bus=scsi \
--property hw_cdrom_bus=ide \
--property hw_vif_model=e1000 \
f16-x86_64-openstack-sda
It's from
http://docs.openstack.org/user-guide/content/cli_manage_images.html
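As a quick sanity check, the stored properties can be listed afterwards
(same example image name as above):

glance image-show f16-x86_64-openstack-sda | grep hw_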
But if you need to change it
Blazje wrote:
> Hi !
>
> It depends on what type of flag you want to add, but generally all the
> work is done on the compute host.
>
> For example, if you want to make some changes to the CPU config, you
> should take a look at
>
> nova/virt/libvirt/driver.py
>
> Regards
>
> Blazje
>
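If you are unsure where that file lives on your compute host, something
like this can locate it (the paths are only common guesses for packaged
and devstack installs; adjust to your system):

find /usr/lib /usr/share /opt/stack -name driver.py -path "*nova/virt/libvirt*" 2>/dev/null
grep -n cpu /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py | head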
Have you tried disabling offloading on the network cards?
On 15/12/2014 18:21, "André Aranha" wrote:
> Our kernel version on the controller is 3.13.0-37-generic, on the
> ComputeNode it is 3.13.0-24-generic, and on the NetworkNode it is
> 3.13.0-35-generic.
>
> On 13 December 2014 at 04:39, Min Pae wrote:
>>
Disable offloading on the nodes with:

ethtool -K interfaceName gro off gso off tso off

and then try it again.
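A small sketch of that applied to several interfaces and then verified
(the interface names are placeholders; use the ones your nodes actually
have):

for ifc in eth0 eth1; do
    ethtool -K "$ifc" gro off gso off tso off
done
ethtool --show-offload eth0 | grep -E '(generic-receive|generic-segmentation|tcp-segmentation)-offload'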
On 16/12/2014 18:36, "Georgios Dimitrakakis" wrote:
>
> Hi all!
>
> In my OpenStack installation (Icehouse, using nova legacy networking) the
> VMs are talking to each other over a 1G
That shows that those three offload settings are enabled.
On 16/12/2014 19:01, "Georgios Dimitrakakis" wrote:
> I believe that they are already disabled.
>
> Here is the ethtool output:
>
> # ethtool --show-offload eth1
> Features for eth1:
> rx-checksumming: on
> tx-checksumming: on
> tx-
Disabling it only on the nodes should boost the speed, but disabling it in
the VMs as well improves the speed greatly.
On 16/12/2014 19:13, "Georgios Dimitrakakis" wrote:
> Ooops... It seems that I have been confused...
>
> The pasted part is indeed from the node when I was looking somewhere
> else...
Try enabling GSO and TSO but keeping GRO disabled.
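For example (eth1 is just the interface from the earlier output):

ethtool -K eth1 gso on tso on gro off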
On 16/12/2014 19:38, "Georgios Dimitrakakis" wrote:
> I have changed that on both the node and the VMs, and it actually made
> things worse.
>
> I did that on both eth1 and br100 interfaces on the physical node.
>
> The transfer speed n
Switch to the rtl8139 driver and try again.
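With the image-property approach shown earlier, that would be something
like (the image name is again only the earlier example):

glance image-update --property hw_vif_model=rtl8139 f16-x86_64-openstack-sda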
On 16/12/2014 20:09, "Georgios Dimitrakakis" wrote:
> Changing
>
> gso on
> tso on
> gro off
>
>
> got me back to the initial status.
>
>
> Although now it starts at approximately 65-70 MB/s for a few seconds but
> then it drops down to 30 MB/s
>
> Re
I use Neutron, so...
On 16/12/2014 20:27, "Rick Jones" wrote:
> On 12/16/2014 11:09 AM, Georgios Dimitrakakis wrote:
>
>> Changing
>>
>> gso on
>> tso on
>> gro off
>>
>>
>> got me back to the initial status.
>>
>>
>> Although now it starts at approximately 65-70 MB/s for a few seconds
>> but
Read the names carefully again :)
I was suggesting what I used to do in the past when I had this problem on a
new OpenStack install.
On 16/12/2014 20:35, "Rick Jones" wrote:
> On 12/16/2014 11:28 AM, Adrián Norte Fernández wrote:
>
>> I use Neutron, so...
>>
>
Why don't you use a hook?
On 02/03/2015 15:47, "Sandy Walsh" wrote:
> Enable notifications on the compute nodes and you'll get
> compute.instance.create.end and compute.instance.delete.end notifications
> for these operations. It sounds like you only have them enabled on the
> scheduler/api nodes
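For reference, enabling those notifications on a compute node is usually a
nova.conf change along these lines (option names as in the Icehouse-era
defaults; double-check against your release, and note that messagingv2 may
need to be plain messaging on older setups):

crudini --set /etc/nova/nova.conf DEFAULT notification_driver messagingv2
crudini --set /etc/nova/nova.conf DEFAULT notification_topics notifications
crudini --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
service nova-compute restart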