2014-05-22 10:20 GMT+08:00 t22330033 <t22330...@gmail.com>:

>
> 2014-05-21 0:25 GMT+08:00 Qin, Xiaohong <xiaohong....@emc.com>:
>
> Can you session into the dhcp namespace to run tcpdump on that tap
>> interface and see whether DHCP traffic is being exchanged between the
>> dnsmasq process and the VM?
>>
>>
> tcpdump in the dhcp namespace captures nothing, no matter whether I start
> an instance from the nova controller or the compute node. however, the
> network of a VM on the controller node is OK, while VMs on the compute
> node have no network.
>
>
sorry, I think I made a mistake here. tcpdump did capture the DHCP packets
when starting a VM from the controller node; the output just appeared so
slowly that I thought there was nothing. I'm sure there are still no DHCP
packets for a VM started on the compute node.
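
for completeness, the capture I ran was along these lines (namespace and tap
names are the ones listed earlier in this thread; the port filter limits the
output to DHCP traffic):

```shell
# capture DHCP (bootp) packets on the dnsmasq tap inside the dhcp namespace
sudo ip netns exec qdhcp-3fc234e5-335f-463d-ba1d-bcf1bdd8f479 \
    tcpdump -ni tapef85f5c3-c5 -v port 67 or port 68
```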



> another finding regarding the manually-created bridge that works for vxlan:
> if I set the fail-mode to secure, it won't send vxlan packets anymore. I
> saw that the bridges created by neutron are all set to secure. is that the
> root cause?
>
>
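for reference, the fail-mode can be inspected and changed with ovs-vsctl
(bridge name assumed from my manual setup above). with fail_mode=secure and
no controller-installed flows, the bridge forwards nothing, which would
explain the behavior I saw:

```shell
# show the current fail-mode ("secure", "standalone", or empty if unset)
ovs-vsctl get-fail-mode br-vxlan
# switch to secure: the bridge stops forwarding unless flows are installed
ovs-vsctl set-fail-mode br-vxlan secure
# clear it again, reverting to standalone (normal MAC-learning) behavior
ovs-vsctl del-fail-mode br-vxlan
```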
>>
>>
>> Dennis
>>
>>
>>
>> *From:* discuss [mailto:discuss-boun...@openvswitch.org] *On Behalf Of *
>> t22330033
>> *Sent:* Tuesday, May 20, 2014 12:20 AM
>> *To:* discuss@openvswitch.org
>> *Subject:* Re: [ovs-discuss] network problem with vxlan
>>
>>
>>
>> 2014-05-19 23:58 GMT+08:00 Qin, Xiaohong <xiaohong....@emc.com>:
>>
>> You have to start with the dhcp problem in your case. “ip netns” on
>> your controller node should list a dhcp namespace,
>>
>>
>>
>> ip netns
>>
>> qdhcp-3fc234e5-335f-463d-ba1d-bcf1bdd8f479
>>
>> qrouter-6df76d30-17fc-4024-8d01-4cfe007ab531
>>
>>
>>
>> then session into that dhcp namespace,
>>
>>
>>
>> sudo ip netns exec qdhcp-3fc234e5-335f-463d-ba1d-bcf1bdd8f479 ifconfig
>>
>>
>>
>> it should list a tap interface,
>>
>>
>>
>> tapef85f5c3-c5 Link encap:Ethernet  HWaddr fa:16:3e:53:ad:f2
>>
>>
>>
>> then check that dnsmasq is launched against that interface,
>>
>>
>>
>> dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces
>> --interface=tapef85f5c3-c5 -
>>
>>
>>
>> everything on the controller behaves the same as you described, and I
>> don't have problems with instances running on the controller node. only
>> those on the compute node have network problems.
>>
>>
>>
>>
>>
>> Also, I don’t see that an ovs physical bridge has been created in your
>> show output.
>>
>>
>>
>> do you mean br-ex? the output is from the compute node; I thought there
>> are only br-tun and br-int on the compute node.
>>
>> I also manually created a bridge to compare with those created by
>> openstack, and I can see vxlan packets being sent out through the
>> physical NIC. I don't understand what the difference is. here is what I
>> did:
>>
>> # ip tuntap add mode tap vnet0
>> # ip link set vnet0 up
>> # ovs-vsctl add-br br-vxlan
>> # ovs-vsctl add-port br-vxlan vnet0
>> # ovs-vsctl add-port br-vxlan tep0 -- set interface tep0 type=vxlan
>> options:remote_ip=172.31.0.125
>> # qemu-system-x86_64 -m 64 -net nic -net
>> tap,ifname=vnet0,script=no,downscript=no -hda cirros-0.3.2-x86_64-disk.img
>> -display vnc=:1
>>
>>
>> when I run ping in the guest VM, I can capture vxlan packets carrying ARP
>> requests on eth0 of the host. the output of ovs-vsctl show for my
>> manually created bridge is as follows:
>>
>>     Bridge br-vxlan
>>         Controller "tcp:172.31.0.125:6633"
>>         Port "vnet0"
>>             Interface "vnet0"
>>         Port "tep0"
>>             Interface "tep0"
>>                 type: vxlan
>>                 options: {remote_ip="172.31.0.125"}
>>         Port br-vxlan
>>             Interface br-vxlan
>>                 type: internal
>>
>> but ovs-ofctl dump-flows shows no flows at all:
>>
>>     OFPST_FLOW reply (OF1.3) (xid=0x2):
>>
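an empty flow table combined with fail_mode=secure means the bridge drops
all traffic until a controller installs flows. as an experiment one could
install a single NORMAL flow by hand on my test bridge (a sketch only, not
what neutron's agents actually install):

```shell
# with fail_mode=secure and no controller flows, the bridge forwards nothing;
# a lowest-priority NORMAL flow restores ordinary MAC-learning switching
ovs-ofctl -O OpenFlow13 add-flow br-vxlan "priority=0,actions=NORMAL"
# the flow table should now list the NORMAL flow
ovs-ofctl -O OpenFlow13 dump-flows br-vxlan
```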
>>
>>
>
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
