Hi all.
Recently I tested VMs using OVS-DPDK and found a new problem, as follows:

My test environment:
Host:
Linux version 3.10.0-229.14.1.el7.x86_64 (buil...@kbuilder.dev.centos.org) (gcc 
version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Tue Sep 15 15:05:51 UTC 
2015
DPDK: version 2.2
OVS: version 2.5
QEMU: version 2.3.1

Guest: Linux version 3.10.0-229.el7.x86_64 (buil...@kbuilder.dev.centos.org) 
(gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) ) #1 SMP Fri Mar 6 
11:36:42 UTC 2015

OVS processes:

1 S root      61984      1  0  80   0 - 11923 poll_s 14:29 ?        00:00:31 
ovsdb-server -v --remote=punix:/usr/local/var/run/openvswitch/db.sock 
--remote=db:Open_vSwitch,Open_vSwitch,manager_options 
--private-key=db:Open_vSwitch,SSL,private_key 
--certificate=db:Open_vSwitch,SSL,certificate 
--bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
5 S root      61998      1 99  80   0 - 2745735 poll_s 14:29 ?      12:25:52 
/usr/local/sbin/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 400 -- 
unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach

OVS port configuration:

    Bridge "br1"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "br1"
            Interface "br1"
                type: internal
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "vxlan-1"
            Interface "vxlan-1"
                type: vxlan
                options: {remote_ip="7.0.0.2"}
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
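For reference, a port layout like the one above may have been created with ovs-vsctl commands along these lines; this is a sketch reconstructed from the names shown, and `datapath_type=netdev` is an assumption for the DPDK-enabled bridges:

```shell
# Sketch of ovs-vsctl commands behind the configuration above.
# Bridge and port names are taken from the output; datapath_type=netdev
# is an assumption for the userspace (DPDK) datapath.
ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev
ovs-vsctl add-port br1 dpdk0 -- set Interface dpdk0 type=dpdk

ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 vxlan-1 -- \
    set Interface vxlan-1 type=vxlan options:remote_ip=7.0.0.2
ovs-vsctl add-port br0 vhost-user-0 -- \
    set Interface vhost-user-0 type=dpdkvhostuser
```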


The NUMA, hugepage, and vCPU configuration in the domain XML:

<memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu>
    <numa>
      <cell id='0' cpus='0-3' memory='4000000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>

Test steps:
Step 1: create a VM attached to port vhost-user-0.
Step 2: create another 15 VMs in the same way as step 1.
Step 3: destroy the 15 VMs created in step 2.
Step 4: repeat steps 2 and 3.
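The loop can be sketched as follows (the domain names vm1..vm16 are hypothetical; run() is a dry-run stub that only prints the commands, so the sketch runs anywhere -- change its body to `virsh "$@"` to actually execute it):

```shell
#!/bin/sh
# Repro-loop sketch. Domain names vm1..vm16 are hypothetical.
# run() is a dry-run stub; change its body to `virsh "$@"` to execute.
run() { echo "virsh $*"; }

# Step 1: start the VM attached to vhost-user-0.
run start vm1

# Steps 2-4: repeatedly start and then destroy the other 15 VMs.
for round in 1 2; do
    for i in $(seq 2 16); do run start "vm$i"; done
    for i in $(seq 2 16); do run destroy "vm$i"; done
done
```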

Sometimes I then find that I can no longer reach the VM created in step 1; every time this happens, it occurs during step 2.

The guest logs are as follows:

localhost kernel: virtio_net virtio0: output.0:id 222 is not a head!
localhost kernel: net eth0: Unexpected TXQ (0) queue failure: -5
localhost kernel: net eth0: Unexpected TXQ (0) queue failure: -5
localhost kernel: net eth0: Unexpected TXQ (0) queue failure: -5
Rebooting the VM recovers it.

This is clearly a network problem. My guess is that it is related to hugepages, but I am not sure, because the hugepages were already mapped when ovs-dpdk was started with "/usr/local/sbin/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 400 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach". How are the hugepages reclaimed when a VM is destroyed? Has anybody met the same problem? I do not know whether this is a bug in OVS-DPDK or in DPDK.
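One way to narrow this down is to compare the host's hugepage accounting before creating and after destroying the VMs; if HugePages_Free does not return to its pre-test value, the vhost-user memory regions are not being released. The mount point /dev/hugepages below is the usual default and is an assumption here:

```shell
# Host-side hugepage accounting; compare before and after destroying VMs.
grep -i '^hugepages' /proc/meminfo

# Files backing guest memory on the hugepage mount (default mount point
# /dev/hugepages is an assumption; adjust if mounted elsewhere).
ls -l /dev/hugepages 2>/dev/null || true
```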

Thanks
Eric wang

Wang Huaxia ---JD.COM

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss