Re: [Openstack] Strange: lost physical connectivity to compute hosts when using native (ryu) openflow interface

2017-05-31 Thread Gustavo Randich
Hi Kevin, I confirm that applying the patch fixes the problem.

Sorry for the inconvenience.


On Tue, May 30, 2017 at 9:36 PM, Kevin Benton  wrote:

> Do you have that patch already in your environment? If not, can you
> confirm it fixes the issue?
>
> On Tue, May 30, 2017 at 9:49 AM, Gustavo Randich <
> gustavo.rand...@gmail.com> wrote:
>
>> While dumping OVS flows as you suggested, we finally found the cause of
>> the problem: our br-ex OVS bridge lacked the "secure" fail-mode setting.
>>
>> Maybe the issue is related to this:
>> https://bugs.launchpad.net/neutron/+bug/1607787
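>>
>> For reference, the fail mode can be checked and set with something like:
>>
>> # ovs-vsctl get-fail-mode br-ex
>> # ovs-vsctl set-fail-mode br-ex secure
>>
>> (with "secure", the bridge does not fall back to standalone MAC-learning
>> behaviour and replace the controller-installed flows when the connection
>> to the controller is lost)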
>>
>> Thank you
>>
>>
>> On Fri, May 26, 2017 at 6:03 AM, Kevin Benton  wrote:
>>
>>> Sorry about the long delay.
>>>
>>> Can you dump the OVS flows before and after the outage? This will let us
>>> know whether the flows Neutron set up are getting wiped out.
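>>>
>>> e.g. something like this on the affected host (add "-O OpenFlow13" if
>>> ovs-ofctl complains about the protocol version):
>>>
>>> # ovs-ofctl dump-flows br-ex
>>> # ovs-ofctl dump-flows br-int
>>> # ovs-ofctl dump-flows br-tun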
>>>
>>> On Tue, May 2, 2017 at 12:26 PM, Gustavo Randich <
>>> gustavo.rand...@gmail.com> wrote:
>>>
Hi Kevin, here is some information about this issue:

- if the network outage lasts less than ~1 minute, connectivity to the
  host and instances is restored automatically without problems

- otherwise:

  - upon outage, "ovs-vsctl show" reports "is_connected: true" on all
    bridges (br-ex / br-int / br-tun)

  - after ~1 minute, "ovs-vsctl show" no longer shows "is_connected:
    true" on any bridge

  - upon restoring the physical interface (outage fixed):

    - "ovs-vsctl show" again reports "is_connected: true" on all
      bridges (br-ex / br-int / br-tun)

    - access to the host and VMs is NOT restored, although some pings
      are sporadically answered by the host (~1 out of 20)

  - to restore connectivity, we:

    - execute "ifdown br-ex; ifup br-ex" -> access to the host is
      restored, but not to the VMs

    - restart neutron-openvswitch-agent -> access to the VMs is restored

Thank you!




 On Fri, Apr 28, 2017 at 5:07 PM, Kevin Benton  wrote:

> With the network down, does ovs-vsctl show that it is connected to the
> controller?
>
> On Fri, Apr 28, 2017 at 2:21 PM, Gustavo Randich <
> gustavo.rand...@gmail.com> wrote:
>
>> Exactly, we access via a tagged interface, which is part of br-ex
>>
>> # ip a show vlan171
>> 16: vlan171:  mtu 9000 qdisc
>> noqueue state UNKNOWN group default qlen 1
>> link/ether 8e:14:8d:c1:1a:5f brd ff:ff:ff:ff:ff:ff
>> inet 10.171.1.240/20 brd 10.171.15.255 scope global vlan171
>>valid_lft forever preferred_lft forever
>> inet6 fe80::8c14:8dff:fec1:1a5f/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> # ovs-vsctl show
>> ...
>> Bridge br-ex
>> Controller "tcp:127.0.0.1:6633"
>> is_connected: true
>> Port "vlan171"
>> tag: 171
>> Interface "vlan171"
>> type: internal
>> ...
>>
>>
>> On Fri, Apr 28, 2017 at 3:03 PM, Kevin Benton 
>> wrote:
>>
>>> Ok, that's likely not the issue then. I assume the way you access
>>> each host is via an IP assigned to an OVS bridge or an interface that
>>> somehow depends on OVS?
>>>
>>> On Apr 28, 2017 12:04, "Gustavo Randich" 
>>> wrote:
>>>
Hi Kevin, we are using the default listen address, the loopback
interface:

 # grep -r of_listen_address /etc/neutron
 /etc/neutron/plugins/ml2/openvswitch_agent.ini:#of_listen_address
 = 127.0.0.1
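
(If it ever needed to be set explicitly, the option lives in the [ovs]
section of openvswitch_agent.ini -- roughly:

[ovs]
of_listen_address = 127.0.0.1
of_listen_port = 6633

-- but we are on the defaults.)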


 tcp/127.0.0.1:6640 -> ovsdb-server
 /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info
 --remote=punix:/var/run/openvswitch/db.sock
 --private-key=db:Open_vSwitch,SSL,private_key
 --certificate=db:Open_vSwitch,SSL,certificate
 --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir
 --log-file=/var/log/openvswitch/ovsdb-server.log
 --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor

 Thanks




 On Fri, Apr 28, 2017 at 5:00 AM, Kevin Benton 
 wrote:

> Are you using an of_listen_address value of an interface being
> brought down?
>
> On Apr 25, 2017 17:34, "Gustavo Randich" <
> gustavo.rand...@gmail.com> wrote:
>
>> (using Mitaka / Ubuntu 16 / Neutron DVR / OVS / VXLAN /
>> l2_population)
>>
>> This sounds very strange (to me): recently, after a switch
>> outage, we lost connectivity to all our Mitaka hosts. We had to log in
>> via iLO host by host and restart the networking service to regain
>> access, then restart neutron-openvswitch-agent to regain access to the
>> VMs.
>>
>> At first glance we

[Openstack] List user:project associations.

2017-05-31 Thread Ken D'Ambrosio
Hi!  I'm looking for a way to see which users are associated with which 
projects.  The dashboard does it pretty nicely, but I'd prefer to do this from the 
CLI.  Unfortunately, while "openstack role assignment list" seems to be 
what I'd want, it requires *both* a project and a user, which means that 
in order to map everything, I'd have to iterate through every project 
for every user -- about as inefficient a way as I can imagine.  Surely 
there's a better way?
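
(To spell out the brute force I mean, roughly:

  for p in $(openstack project list -f value -c ID); do
    for u in $(openstack user list -f value -c ID); do
      openstack role assignment list --project "$p" --user "$u"
    done
  done

i.e. one API call per user/project pair.)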


Thanks,

-Ken



Re: [Openstack] Newton openstack designate installation

2017-05-31 Thread Graham Hayes
On 30/05/17 11:56, Michel Labarre wrote:
> Hi
> I installed Designate on Newton OpenStack on CentOS 7.3.
> After completing the Designate setup as indicated in
> https://docs.openstack.org/project-install-guide/dns/ocata/install-rdo.html
> (I have not found a Newton-specific doc):
>  - All processes are running (central, api, mdns, worker, producer, sink).
>  - I created a zone with my domain.
>  - I updated my network to indicate the domain name (updated with --dns-domain).
>  - I updated my server.conf on the 3 HA neutron servers (added the
> `[designate]` group directives, added a final dot to my `[default]`
> `dns_domain` attribute, and added the driver
> `external_dns_driver = designate` in `[default]`).
>  - I updated my ml2 plugin to add `dns` to the `extension_drivers` attribute.
>  - I restarted the neutron servers.
> 
> All commands such as 'openstack dns service list', 'openstack zone list',
> and 'openstack recordset list xxx' work fine.
> Now, when I create a VM from the dashboard, the VM is created without
> problems (as without Designate) but no recordset is created. I don't see
> any call to port 9001 in the API log. It seems that the Designate plugin
> is not being called...
> Any idea? Thank you very much
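> 
> For reference, the neutron-side changes above look roughly like this
> (the exact [designate] auth options depend on the release, so this is
> only a sketch; values are placeholders):
> 
> # neutron.conf (server.conf in our layout)
> [DEFAULT]
> external_dns_driver = designate
> dns_domain = example.com.
> 
> [designate]
> url = http://controller:9001/v2
> # ... auth credentials as per the install guide ...
> 
> # ml2 plugin config
> [ml2]
> extension_drivers = port_security,dns   # keeping the drivers already listed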

Can you try to create a port on the network and supply the
"--dns_name " parameter? Just to see if the issue is in Nova
or Neutron.
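
Something like this (names are placeholders; the OSC flag is spelled
"--dns-name"):

  openstack port create --network <network> --dns-name test-record test-port

and then check whether a recordset appears in "openstack recordset list
<zone>".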

Are there any logs in Neutron that show a failed connection to Designate?

Thanks,

Graham





Re: [Openstack] Strange: lost physical connectivity to compute hosts when using native (ryu) openflow interface

2017-05-31 Thread Kevin Benton
No prob. Thanks for replying.

On May 31, 2017 10:11 AM, "Gustavo Randich" 
wrote:

> Hi Kevin, I confirm that applying the patch fixes the problem.
>
> Sorry for the inconvenience.