Have a look at this:
https://answers.launchpad.net/quantum/+question/227321
___
It's not really obvious, but I believe the iscsi_ip_address needs to be set in
nova.conf on the **controller** - just want to check you did it there.
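For example (the address below is only an illustration - it should be the IP
that the iSCSI targets are exported on):
# /etc/nova/nova.conf on the controller
iscsi_ip_address=192.168.1.10
and then restart the nova services so the new value is picked up.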
___
Is nova configured to use cinder? In nova.conf:
volume_api_class=nova.volume.cinder.API
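A quick way to double-check (path is the usual Ubuntu location - adjust if
yours differs):
$ grep volume_api_class /etc/nova/nova.conf
volume_api_class=nova.volume.cinder.API
and restart nova-api and nova-compute after changing it.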
___
It seems the IPs for the iSCSI targets are set at the time the volumes were created:
$ mysql
mysql> use cinder;
mysql> select provider_location from volumes;
Try creating a new volume - does it get the new iscsi_ip_address?
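For example (the volume name and 1GB size are just placeholders):
$ cinder create --display-name iscsi-test 1
mysql> select display_name, provider_location from volumes;
The provider_location of the new volume should contain the new
iscsi_ip_address, while older volumes keep the address they were created with.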
___
Hi,
I don't have an answer, but here are a couple of troubleshooting tips:
- Use iptables-save -c to see which chains are being hit. Do a ping and run
iptables-save -c again to see which counters increased.
- Use tcpdump to find out where the packets are getting lost. You could start
with the external interface and work your way along the path.
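Something along these lines (eth0 is just an example - use whatever interfaces
are on the path):
$ sudo iptables-save -c | grep SNAT
$ sudo tcpdump -n -i eth0 icmp
Run the ping, re-run iptables-save -c and compare the counters, and move the
tcpdump from interface to interface until the ICMP packets stop showing up.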
How are you doing the additional SNATing outside of OpenStack in order for
addresses on 192.168.0.0/16 to access the internet?
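For example, if it is iptables on the gateway box, it would be something like
this (eth0 standing in for the internet-facing interface):
$ sudo iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth0 -j MASQUERADE
but I'm only guessing - it would help to see your actual setup.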
___
Can you try using -e with tcpdump to see the ethernet headers - it may be ARPs
from the router to ff:ff:ff:ff:ff:ff that are not getting across in that
direction. You should continue tcpdumping on the devices along the path to the
instance to see where the arp request (or reply) stops.
In my setup I see the arp replies being answered by the instance - not dnsmasq.
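For example (the device names are placeholders - use whatever is on the path
in your setup):
$ sudo tcpdump -e -n -i br-ex arp
$ sudo tcpdump -e -n -i tapXXXXXXXX-XX arp
You should see the who-has requests to ff:ff:ff:ff:ff:ff and the is-at replies
coming back from the instance's MAC.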
___
I'm not sure how to rectify that. You may have to delete the bad row from the
DB and restart the agents:
mysql> use quantum;
mysql> select * from ovs_tunnel_endpoints;
...
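If you do go that route, it would be something like this (the IP is just an
example, and the service name may differ on your distro):
mysql> delete from ovs_tunnel_endpoints where ip_address='10.0.0.99';
$ sudo service quantum-plugin-openvswitch-agent restart
Do the restart on every node running the OVS agent so the tunnels get rebuilt.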
___
Hi,
I reckon this is because dnsmasq is isolated in the dhcp namespace. You can use
DHCP to push out a specific nameserver:
$ quantum subnet-update sub1 --dns_nameservers 8.8.4.4 8.8.8.8
I can't seem to figure out the syntax to pass just one (list=false is not
working). Or you could use Horizon.
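You can check what actually got stored with:
$ quantum subnet-show sub1
and the instances should pick up the new nameservers the next time they renew
their DHCP lease (or on reboot).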
The reference to 10.0.2.15 is strange. This happens to be the address the
VirtualBox DHCP server gives to its VMs configured for NAT. Are you using
VirtualBox? Even so, I have never found this to be a problem because a Quantum
DHCP namespace is isolated from the main one. Can you provide the ...
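To see it for yourself, you can look inside the namespace (the network UUID is
a placeholder):
$ sudo ip netns
$ sudo ip netns exec qdhcp-<network-uuid> ip addr
The 10.0.2.15 address should not appear in there if the namespace really is
isolated.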
Those extensions are specific to the Nicira NVP plugin:
https://github.com/openstack/python-quantumclient/commit/d77f86218e4c0c2f5371accce64605e7cfff41c5
___
The bridge qbr876fed87-40 is supposed to be a Linux bridge and should not
appear in 'ovs-vsctl show'.
What is the value of BRCOMPAT in /etc/default/openvswitch-switch?
What OS and version is this, and how did you install OVS?
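To compare the two views and check the setting:
$ brctl show
$ sudo ovs-vsctl show
$ grep BRCOMPAT /etc/default/openvswitch-switch
The qbrXXXXXXXX-XX bridges should only be listed by brctl; if they show up in
ovs-vsctl too, brcompat is probably interfering.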
___
BRCOMPAT should be 'no'. What version of Ubuntu is this?
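i.e. /etc/default/openvswitch-switch should contain:
BRCOMPAT=no
and openvswitch-switch needs a restart (sudo service openvswitch-switch
restart) for it to take effect.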
___
Hi,
The ping error "connect: Network is unreachable" means a route could not be
found.
The gateway 10.245.124.253 for the external subnet is not in the subnet CIDR
10.245.124.64/26.
So I guess a default route was not set up here:
netnode$ ip netns exec qrouter-<router-uuid> route -n
You will need to create the subnet again so that the gateway is inside the CIDR.
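The gateway has to be an address inside the subnet, so either the gateway or
the CIDR is wrong. For example, if 10.245.124.253 really is the router, the
CIDR would need to be wider - something like this (ext-net and the allocation
pool are just placeholders for your setup):
$ quantum subnet-create --name ext-subnet --gateway 10.245.124.253 \
    --disable-dhcp --allocation-pool start=10.245.124.100,end=10.245.124.200 \
    ext-net 10.245.124.0/24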
Not sure - try clearing the cookie for the Horizon IP.
___
... or iptables. Anyway, try reducing the MTU for now.
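One way to reduce it for all the instances is to push a lower MTU out via DHCP
- roughly like this (the value 1454 and the file path are just examples):
# /etc/quantum/dnsmasq-quantum.conf
dhcp-option-force=26,1454
# in /etc/quantum/dhcp_agent.ini
dnsmasq_config_file = /etc/quantum/dnsmasq-quantum.conf
then restart the quantum-dhcp-agent and have the instances renew their leases.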
Darragh.
___
... stay at their default 1500. This may be a more practical
way, as long as all the hardware between the endpoints can cope with this MTU
size.
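i.e. something like this on the interfaces carrying the tunnel traffic (eth1
is only an example, and the switch ports have to allow the larger frames too):
$ sudo ip link set dev eth1 mtu 1550
so the full 1500-byte instance frames plus the encapsulation overhead still fit.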
I can't say if this is a bug yet, but it needs to be documented.
Darragh.
___
size: 2 option: 26:mtu 05:ae
FYI, here is what Cisco says about MTU when using VXLAN:
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-702975.html#wp989
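For what it's worth, the 05:ae in the option dump above is hex for 1454, and
the VXLAN encapsulation adds about 50 bytes of outer headers:
  14 (Ethernet) + 20 (IP) + 8 (UDP) + 8 (VXLAN) = 50 bytes
so the usual options are to bring the instance MTU down to around 1450, or
raise the underlying network to 1550 or more.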
Darragh.
___
Can the nodes ping each other via the switch? You will need to get that working
first.
___
... all - take off the 'proto GRE' from tcpdump. Or try again
with the crossover to see how that worked.
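i.e. just capture everything on that interface and filter by eye - something
like (eth1 being whichever NIC carries the tunnel traffic):
$ sudo tcpdump -n -e -i eth1
If GRE is flowing you should see IP protocol 47 packets between the two node
addresses.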
___
... GRE for some reason. Are you
using the same IP addresses for the switch connection as you used for the
direct connection? Could you post 'ovs-vsctl show' and 'ip link' from both
nodes?