I think I've identified a problem: when I create a new network and
subnet via "neutron net-create" and "neutron subnet-create", the
networks are created fine, and then I start the DHCP service on the
network node:
/etc/init.d/neutron-dhcp-agent start
It seems to add a tap interface to br-int:
#
I think I'm getting a little closer.. I see that on my neutron network
node my DHCP server (dnsmasq) is on 10.200.0.3:
# ip netns
qrouter-eefaa5c8-95fc-4b3d-a8d2-27eebc449337
qdhcp-e4448083-ec61-4293-ad0e-62239986965f
# ip netns exec qdhcp-e4448083-ec61-4293-ad0e-62239986965f ifconfig
lo
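For anyone following along, here is a quick way to double-check what that DHCP namespace actually has (the network ID is the one from this thread; "ip" output tends to be more complete than "ifconfig" inside namespaces):

```shell
# On the network node: list the ports OVS has on the integration bridge
ovs-vsctl list-ports br-int

# Show addresses inside the DHCP namespace; the tap device
# should carry 10.200.0.3
ip netns exec qdhcp-e4448083-ec61-4293-ad0e-62239986965f ip addr

# Confirm dnsmasq is actually listening on UDP port 67 in the namespace
ip netns exec qdhcp-e4448083-ec61-4293-ad0e-62239986965f netstat -lnup | grep ':67'
```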
And additionally, here's my "iptables -L" output on the compute node
running the VM, in case anyone can see anything in it that might block
ARP responses... I didn't modify this manually; it looks like Open
vSwitch did all the tweaking. Remember, 10.200.0.3 is the dnsmasq
server and
10.200.
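One general point worth noting here (not specific to this setup): iptables only matches IP traffic, so ARP requests and replies are never touched by those rules; if ARP is being dropped, it is happening at layer 2 (ebtables rules, VLAN tagging, or bridge wiring). A quick way to check on the compute node; the tap name below is a placeholder, take the real one from "ovs-vsctl list-ports br-int":

```shell
# Watch ARP directly on the VM's tap device (tapXXXXXXXX is hypothetical)
tcpdump -n -e -i tapXXXXXXXX arp

# See whether any layer-2 filtering rules exist at all
ebtables -L
```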
No one can see anything wrong?
Here's "ovs-vsctl show" on the neutron network node, if it helps:
# ovs-vsctl show
52702cef-6433-4627-ade8-51561b4e8126
Bridge "br-eth2"
Port "phy-br-eth2"
Interface "phy-br-eth2"
Port "eth2"
Interface "eth2"
Port
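When reading "ovs-vsctl show" in a VLAN deployment, the part that actually matters for connectivity is usually the OpenFlow rules that translate between the internal VLAN tag on br-int and the provider VLAN (200 here) on the physical bridge. A sketch of how to inspect them (bridge names are the ones from this thread):

```shell
# Dump the flow rules doing the VLAN translation on each bridge
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-eth2

# A healthy setup shows mod_vlan_vid actions rewriting the internal tag
# to VLAN 200 on the way out, and back to the internal tag on the way in.
```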
And one more thing, I think I found something. On the network node, I
see this on phy-br-eth2:
# tcpdump -n -e -vv -i phy-br-eth2
tcpdump: WARNING: phy-br-eth2: no IPv4 address assigned
tcpdump: listening on phy-br-eth2, link-type EN10MB (Ethernet), capture
size 65535 bytes
15:23:47.394584 fa:16:3e:5d:1
Sorry, one more... ;)
While trying to ping the dnsmasq host (10.200.0.3) from the VM
(statically configured to 10.200.0.6), if I tcpdump specifically on vlan
200 on the network node:
# tcpdump -ni eth2 vlan 200
tcpdump: WARNING: eth2: no IPv4 address assigned
tcpdump: verbose output suppresse
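Capturing on each hop the frame should traverse can narrow down where it gets eaten: if the pings show up on phy-br-eth2 but never appear tagged on eth2 (or vice versa), that points at the bridge wiring rather than the wire. A sketch, using the interface names from this thread (phy-br-eth2/int-br-eth2 are the two ends of the veth pair between br-eth2 and br-int):

```shell
# Network node: one capture per hop, run in separate terminals
tcpdump -n -e -i phy-br-eth2 arp or icmp   # br-eth2 side of the veth pair
tcpdump -n -e -i int-br-eth2 arp or icmp   # br-int side of the veth pair
tcpdump -n -e -i eth2 vlan 200             # the physical trunk itself
```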
I tried additionally checking the dnsmasq instance on the network node
while trying to ping it from the VM (I set the VM's IP statically to
10.200.0.6 and tried to ping dnsmasq which is 10.200.0.3). From the
network node, I did:
# ip netns
qdhcp-e4448083-ec61-4293-ad0e-62239986965f
[root@clou
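It can also help to take the VM side out of the picture by pinging in the other direction, from inside the DHCP namespace toward the VM (IDs and addresses below are the ones from this thread):

```shell
# From the network node, ping the VM from inside the DHCP namespace
ip netns exec qdhcp-e4448083-ec61-4293-ad0e-62239986965f ping -c 3 10.200.0.6

# Check whether an ARP entry for the VM was learned at all
ip netns exec qdhcp-e4448083-ec61-4293-ad0e-62239986965f ip neigh
```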
Hi Y'all,
I'm making some progress on my neutron VLAN deployment issues, but it's
still not working as expected. I have my compute node's data port
connected to a switchport that is a trunk, allowing VLANs 200-209 to
flow over the trunk. The neutron node also has its internal data port
on th
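For a trunk carrying VLANs 200-209, the agent configuration and the network creation have to agree on the physical network label and VLAN range. A minimal sketch, assuming the OVS plugin and a label of "physnet1" (both the label and the network name are assumptions, not from the original posts):

```shell
# Plugin config on the network and compute nodes (ovs_neutron_plugin.ini):
#   [OVS]
#   tenant_network_type = vlan
#   network_vlan_ranges = physnet1:200:209
#   bridge_mappings = physnet1:br-eth2

# Create a network pinned to VLAN 200 on that trunk
neutron net-create vlan200-net \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 200
```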
Good day.
I've successfully installed and configured a bare-metal copy of
OpenStack (1 compute, 1 controller and 1 neutron server).
But then I was asked to repeat the configuration under VirtualBox (with
software QEMU as the hypervisor). It is mostly working, but I'm
completely stuck with neutron internal ne