[openstack-dev] OpenStack Kuryr IRC meetings change

2016-08-31 Thread Antoni Segura Puimedon
Hi Kuryrs!

Infra merged the change that was agreed on in this week's IRC meeting [1].

Starting immediately, all subsequent OpenStack Kuryr IRC meetings will be
held on Mondays at 14:00 UTC in #openstack-meeting-4.

We'll also try to publish an agenda on the wiki [2] by Friday at the latest.
Everybody is welcome to add items to it (though if there are too many, some
may get pushed back a week).

Have fun coding and reviewing!

Antoni Segura Puimedon

PS: There is a nice .ics file that you can use to add the meetings to your
calendar [3].

[1] https://review.openstack.org/362852
[2] https://wiki.openstack.org/wiki/Meetings/Kuryr
[3] http://eavesdrop.openstack.org/#Kuryr_Project_Meeting


[openstack-dev] [tricircle]How to address TC's concerns in Tricircle big-tent application

2016-08-31 Thread joehuang
Hello, team,

During the last weekly meeting, we discussed how to address the TC's concerns
in the Tricircle big-tent application. After the weekly meeting, the proposal
was co-prepared by our contributors:
https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E

The more feasible approach is to divide Tricircle into two independent and
decoupled projects. Only the project that deals with networking automation
will try to become a big-tent project; the Nova/Cinder API-GW will be removed
from the scope of the big-tent application and put into another project:


TricircleNetworking: Dedicated to cross-Neutron networking automation in
multi-region OpenStack deployments; runs with or without TricircleGateway. It
will try to become a big-tent project in the current application:
https://review.openstack.org/#/c/338796/.

TricircleGateway: Dedicated to providing an API gateway for those who need a
single Nova/Cinder API endpoint in multi-region OpenStack deployments; runs
with or without TricircleNetworking. It will live as a non-big-tent,
non-official OpenStack project, just like Tricircle's status today, and will
not pursue big-tent status unless consensus can be achieved in the OpenStack
community, including the Arch WG and the TC; then we can decide how to get it
on board in OpenStack. A new repository needs to be applied for this project.

To remove some overlapping implementation in the Nova/Cinder API-GW for
global objects like flavors and volume types, we can configure one region as
the master region; all global objects (flavor, volume type, server group,
etc.) will be managed in the master Nova/Cinder service. In the Nova
API-GW/Cinder API-GW, all requests for these global objects will be forwarded
to the master Nova/Cinder, thus getting rid of any overlapping API
implementation.
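
A minimal sketch of that forwarding idea (the endpoint map, resource list and
handler below are purely illustrative, not Tricircle code):

    import requests

    # Illustrative only: which regions exist and which resources are
    # treated as global would really come from Tricircle's own data.
    REGION_ENDPOINTS = {
        'master': 'http://master.example.com:8774/v2.1',
        'region-b': 'http://region-b.example.com:8774/v2.1',
    }
    GLOBAL_RESOURCES = ('flavors', 'os-server-groups')

    def forward(region, resource, path, token):
        # Requests for global objects always go to the master Nova/Cinder;
        # everything else is routed to the region the client asked for.
        target = 'master' if resource in GLOBAL_RESOURCES else region
        url = '%s/%s' % (REGION_ENDPOINTS[target], path)
        return requests.get(url, headers={'X-Auth-Token': token})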

For more information, please refer to the proposal draft
https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E.
Your thoughts are welcome; let's have more discussion in this week's meeting.

Best Regards
Chaoyi Huang(joehuang)

From: joehuang
Sent: 24 August 2016 16:35
To: openstack-dev
Subject: [openstack-dev][tricircle]agenda of weekly meeting Aug.24

Hello, team,



Agenda of Aug.24 weekly meeting:


# progress review and concerns on features like microversions, policy
control, dynamic pod binding, and cross-pod L2 networking

# How to address the TC's concerns in the Tricircle big-tent application

# open discussion


How to join:

# IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
every Wednesday, starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang ( joehuang )


[openstack-dev] [tricircle]agenda of weekly meeting Aug.31

2016-08-31 Thread joehuang
Hello, team,



Agenda of Aug.31 weekly meeting; let's continue the topics:


# progress review and concerns on features like microversions, policy
control, dynamic pod binding, and cross-pod L2 networking

# How to address the TC's concerns in the Tricircle big-tent application:

https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E

# open discussion


How to join:

# IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
every Wednesday, starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang ( joehuang )


From: joehuang
Sent: 24 August 2016 16:35
To: openstack-dev
Subject: [openstack-dev][tricircle]agenda of weekly meeting Aug.24

Hello, team,



Agenda of Aug.24 weekly meeting:


# progress review and concerns on features like microversions, policy
control, dynamic pod binding, and cross-pod L2 networking

# How to address the TC's concerns in the Tricircle big-tent application:
https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E

# open discussion


How to join:

# IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
every Wednesday, starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang ( joehuang )


Re: [openstack-dev] [kuryr] Python 3 usage and kuryr-kubernetes

2016-08-31 Thread Jaume Devesa
Hi Toni,

I am +1 on a). Kuryr-kubernetes is an asynchronous service. We are not talking
about some binary/unicode mismatch or list/generators return type that can be
solved with the 'six' library or some 'utils' module. If we go with Python 2,
that will force us to redo the whole core of the project when we move to
Python 3.
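
(For contrast, the kind of py2/py3 difference that *can* be bridged with the
'six' library looks like the small illustrative snippet below; asyncio's
async/await syntax has no such shim.)

    import six

    def to_text(value):
        # six.binary_type is str on py2 and bytes on py3;
        # six.text_type is unicode on py2 and str on py3.
        if isinstance(value, six.binary_type):
            return value.decode('utf-8')
        return six.text_type(value)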

Is there a policy that prevents running some services on Python 2 and some on
Python 3 in distros? What's the reason behind it?

Regards,

Jaume Devesa
Software Engineer @ Midokura

On Aug 30 2016, at 4:44 pm, Antoni Segura Puimedon  wrote:

> Hi fellow kuryrs!
>
> This email is gonna be a bit long, so I'll put it in parts.
>
> Kubernetes integration components
> ==
>
> As you know, we are now in the process of upstreaming the Kuryr Kubernetes
> PoC that the Kuryr team at Midokura did. This PoC upstreaming effort has two
> main parts:
>
> Kuryr watcher: Based on Python3 asyncio, it connects to the ?watch=true
> Kubernetes resource endpoints, then passes the seen events to translators
> that end up calling Neutron. With the Neutron resource information returned
> by the translators, the watching coroutines update the resource that
> originated the event.
>
> Kuryr CNI: Py2 and Py3 compatible. It is called by Kubernetes' Kubelet with
> the noop container so that the CNI driver does the network plumbing for it.
> Basically we use openstack/kuryr binding code to bind Pod veths to Neutron
> ports.
>
> Upstream Deployment design
> ==
>
> In the Kuryr-Kubernetes integration vision, Kuryr CNI is installed wherever
> Kubelet is, and the Kuryr watcher (or watchers once we support HA) runs in a
> container somewhere that can access the Kubernetes, Neutron and Keystone
> APIs (which do not need to be able to reach the watcher host on anything
> other than established connections). The idea behind allowing it to be in a
> non-privileged container somewhere is that in this way you do not need to
> make Neutron/Keystone accessible from the Kubernetes worker nodes, just like
> for a lot of Nova compute deployments (obviously, depending on your
> networking vendor, you have rabbitmq agent access to Neutron).
>
> If one does not need the extra isolation for the Kuryr Watcher, the Watcher
> containers could even be started by Kubernetes, and the CNI driver would
> just leave the watcher container in the Host networking instead of on the
> Overlay, so Kubernetes would manage the integration deployment.
>
> OpenStack distributions: when the rubber meets the road
> ==
>
> If the OpenStack distros, however, would prefer not to run the Kuryr Watcher
> containerized, or if they want to, as they probably should, build their own
> container (rather than the upstream kuryr/kubernetes one in Dockerhub that
> is based on Alpine Linux), they would need to have Python 3.5 support. I
> understand that at the moment, of the most popular OpenStack distros, only
> one supports Python 3.5.
>
> You can imagine where this is heading... These are the options that I can
> see:
>
> a) Work with the OpenStack distributions to ensure Python 3.5 support is
> reached soon for Kuryr and its dependencies (some listed below):
>
>   * babel
>   * oslo.concurrency
>   * oslo.config
>   * oslo.log
>   * oslo.utils
>   * pbr
>   * pyroute2
>   * python-neutronclient
>   * python-keystoneclient
>
> This also implies that distros should adopt a policy about having some
> OpenStack services running in Python2 and some in Python3, as I think the
> best is to have each project move at its own speed (within reason).
>
> b) As Ilya Chukhnakov from Mirantis proposed, drop Python3 for now and
> reimplement it with python-requests and eventlet. He'll work on a PoC to see
> its feasibility and how it compares to the asyncio based one.
>
> Personal position
> =
>
> I see this as a good opportunity for the OpenStack community at large to
> start having Python3-first (and even Python3-only) services and to allow
> OpenStack projects to take full advantage of all the good things Python3 has
> to offer and move forward with the rest of the Python community.
>
> There have been some efforts in the past in some projects [1][2], but it
> seems implementation was deferred indefinitely, probably due to the same
> distribution issue that we face now.
>
> In light of the recent discussions in this mailing list and the decision
> taken by the Technical Committee [3] about alternative languages, I think it
> would be very good for the community to set an official plan and incentivize
> the projects to move to Python3 in future releases (unfortunately, library
> projects like clients and oslo will most likely have to keep Python2 for
> longer, but it is probably for the best).
>
> While 

Re: [openstack-dev] [kuryr] Python 3 usage and kuryr-kubernetes

2016-08-31 Thread Antoni Segura Puimedon
On Wed, Aug 31, 2016 at 9:23 AM, Jaume Devesa  wrote:

> Hi Toni,
>
>
> I am +1 on a). Kuryr-kubernetes is an asynchronous service. We are not
> talking about some binary/unicode mismatch or list/generators return type
> that can be solved with the 'six' library or some 'utils' module.
>


> If we go with Python 2, that will force us to redo the whole core of the
> project when we move to Python 3.
>

No question about that. I feel like going with Python2 now potentially
means that OSt projects will not be using py3-only syntax for a long while.


>
> Is there a policy that prevents running some services on Python 2 and some
> on Python 3 in distros? What's the reason behind it?
>

I think right now nobody has a policy yet. But there are probably three
options for distros:

x) Do not use py3 until they can support a py3-only OSt.
y) Allow a different stack per OSt service, with the libraries and clients
supporting both py2 and py3 stacks. Services are supported on either py2
or py3. In this case I assume that once a non-library/client project has
mature py3 support, it can deprecate py2 after a cycle (which I think is
a good incentive for developers, who would see a clear path forward to only
having to support one stack, since right now supporting py3 means you may
end up having to support two stacks ad infinitum).
z) Like (y), but for projects with mature py3 support, support both py2 and
py3.

I support (y), but it would be nice if the community had a
recommendation for OSt upstream and its downstreams.



>
>
> Regards,
>
> Jaume Devesa
> Software Engineer @ Midokura
>
> On Aug 30 2016, at 4:44 pm, Antoni Segura Puimedon 
> wrote:
>
>> Hi fellow kuryrs!
>>
>> This email is gonna be a bit long, so I'll put it in parts.
>>
>> Kubernetes integration components
>> ==
>>
>> As you know, we are now in the process of upstreaming the Kuryr
>> Kubernetes PoC that the Kuryr team at Midokura did. This PoC upstreaming
>> effort has two main parts:
>>
>> Kuryr watcher: Based on Python3 asyncio, it connects to the ?watch=true
>> Kubernetes resource endpoints, then passes the seen events to translators
>> that end up calling Neutron. With the Neutron resource information returned
>> by the translators, the watching coroutines update the resource that
>> originated the event.
>>
>> Kuryr CNI: Py2 and Py3 compatible. It is called by Kubernetes' Kubelet
>> with the noop container so that the CNI driver does the network plumbing
>> for it. Basically we use openstack/kuryr binding code to bind Pod veths to
>> Neutron ports.
>>
>> Upstream Deployment design
>> ==
>>
>> In the Kuryr-Kubernetes integration vision, Kuryr CNI is installed
>> wherever Kubelet is and the Kuryr watcher (or watchers once we support HA)
>> runs in a container somewhere that can access the Kubernetes, Neutron and
>> Keystone APIs (which do not need to be able to reach the watcher host on
>> anything other than established connections). The idea behind allowing it
>> to be in a non-privileged container somewhere is that in this way you do
>> not need to make Neutron/Keystone accessible from the Kubernetes worker
>> nodes, just like for a lot of Nova compute deployments (obviously,
>> depending on your networking vendor, you have rabbitmq agent access to
>> Neutron).
>>
>> If one does not need the extra isolation for the Kuryr Watcher, the
>> Watcher containers could even be started by Kubernetes and the CNI driver
>> would just leave the watcher container in the Host networking instead of
>> on the Overlay, so Kubernetes would manage the integration deployment.
>>
>> OpenStack distributions: when the rubber meets the road
>> ==
>>
>> If the OpenStack distros, however, would prefer not to run Kuryr Watcher
>> containerized or they want to, as they probably should, build their own
>> container (rather than the upstream kuryr/kubernetes one in Dockerhub that
>> is based on Alpine Linux), they would need to have Python 3.5 support. I
>> understand that at the moment, of the most popular OpenStack distros, only
>> one supports Python 3.5.
>>
>> You can imagine where this is heading... These are the options that I can
>> see:
>>
>> a) Work with the OpenStack distributions to ensure python3.5 support is
>> reached soon for Kuryr and its dependencies (some listed below):
>>
>>- babel
>>- oslo.concurrency
>>- oslo.config
>>- oslo.log
>>- oslo.utils
>>- pbr
>>- pyroute2
>>- python-neutronclient
>>- python-keystoneclient
>>
>> This also implies that distros should adopt a policy about having some
>> OpenStack services running in Python2 and some in Python3, as I think the
>> best is to have each project move at its own speed (within reason).
>>
>> b) As Ilya Chukhnakov from Mirantis proposed, drop Python3 for now and
>> reimplement it with python-requests and eventlet. He'll work on a PoC to
>> see its feasibility and how it compares to the asyncio based one.
>> Personal po
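
For readers unfamiliar with the watcher pattern described in the quoted
message, a minimal sketch of the asyncio watch loop (the endpoint path,
translator and update step are illustrative, not the actual Kuryr code):

    import asyncio
    import json

    import aiohttp

    K8S_API = 'http://127.0.0.1:8080'  # hypothetical Kubernetes API URL

    async def translate(event):
        # Placeholder for a translator that would call Neutron and return
        # the resulting Neutron resource information.
        return {'neutron': 'port-info-would-go-here'}

    async def watch(resource):
        # The ?watch=true endpoint streams one JSON event per line.
        url = '%s/api/v1/%s?watch=true' % (K8S_API, resource)
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                async for line in response.content:
                    event = json.loads(line.decode())
                    neutron_info = await translate(event)
                    # A real watcher would update (annotate) the
                    # originating Kubernetes resource here.
                    print(event.get('type'), neutron_info)

    asyncio.get_event_loop().run_until_complete(watch('pods'))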

Re: [openstack-dev] tacker vnf-create is not bringing up all the interfaces

2016-08-31 Thread Sridhar Ramaswamy
Both the OpenWRT and Cirros images just bring up their first interface (eth0)
to automatically do DHCP (client). In either case, please reach out to us in
the #tacker IRC channel to get some help resolving this issue.

On Mon, Aug 29, 2016 at 5:27 AM, Abhilash Goyal 
wrote:

> The default cirros image that is added at the time of installation.
> On 29 Aug 2016 16:25, "Zhi Chang"  wrote:
>
>> My cirros image works OK. What version is your cirros image?
>>
>>
>> -- Original --
>> *From: * "Abhilash Goyal";
>> *Date: * Mon, Aug 29, 2016 06:43 PM
>> *To: * "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject: * Re: [openstack-dev] tacker vnf-create is not bringing
>> up all the interfaces
>>
>> Hello Chang,
>> thanks a lot, this image worked. Could you guide me to do the same for the
>> cirros image?
>>
>>
>> On Mon, Aug 29, 2016 at 2:54 PM, Zhi Chang 
>> wrote:
>>
>>> The OpenWRT image should have DHCP enabled on its first NIC.
>>>
>>>
>>> -- Original --
>>> *From: * "Abhilash Goyal";
>>> *Date: * Mon, Aug 29, 2016 05:35 PM
>>> *To: * "OpenStack Development Mailing List (not for usage questions)"<
>>> openstack-dev@lists.openstack.org>;
>>> *Subject: * Re: [openstack-dev] tacker vnf-create is not bringing up
>>> all the interfaces
>>>
>>> Hi Chang,
>>>
>>> I am using the
>>> https://downloads.openwrt.org/chaos_calmer/15.05/x86/kvm_guest/openwrt-15.05-x86-kvm_guest-combined-ext4.img.gz
>>> image of OpenWRT.
>>> This feature is not working for Cirros either.
>>> This feature is not working for Cirros either.
>>>
>>> On Mon, Aug 29, 2016 at 2:25 PM, Zhi Chang 
>>> wrote:
>>>
 Hi, Goyal.

 What version is your OpenWRT image? You can get an OpenWRT image
 from this: https://drive.google.com/open?id=0B-ruQ8Tx46wSMktKV3JLRWhnLTA


 Thanks
 Zhi Chang

 -- Original --
 *From: * "Abhilash Goyal";
 *Date: * Mon, Aug 29, 2016 05:18 PM
 *To: * "openstack-dev";
 *Subject: * [openstack-dev] tacker vnf-create is not bringing up all
 the interfaces

 [Tacker]
 Hello team,
 I am trying to make an OpenWRT VNF through tacker using this vnf-d
 .
 The VNF is spawning successfully, but it should have 3 connection points,
 with the first one in the management network; this is not happening. It
 comes up with the default network configuration and, because of this, IPs
 are not getting assigned to it automatically.
 Guidance would be appreciated.

 --
 Abhilash Goyal




>>>
>>>
>>> --
>>> Abhilash Goyal
>>>
>>>
>>
>>
>> --
>> Abhilash Goyal
>>
>>


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Bogdan Dobrelya
On 31.08.2016 03:52, joehuang wrote:
> Cells is a good enhancement for Nova scalability, but there are some issues
> in deploying Cells for massively distributed edge clouds:
>
> 1) Using RPC for inter-datacenter communication will make inter-DC
> troubleshooting and maintenance difficult, and cause some critical issues in
> operation. There is no CLI, RESTful API or other tool to manage a child cell
> directly. If the link between the API cell and a child cell is broken, then
> the child cell in the remote edge cloud is unmanageable, whether locally or
> remotely.
>
> 2) The challenge of security management for inter-site RPC communication.
> Please refer to the slides [1] for challenge 3, "Securing OpenStack over the
> Internet": over 500 pin holes had to be opened in the firewall to allow this
> to work, including ports for VNC and SSH for CLIs. Using RPC in cells for
> edge clouds will face the same security challenges.
>
> 3) Only Nova supports cells, but Nova is not the only project that needs to
> support edge clouds; Neutron and Cinder should be taken into account too.
> How would Neutron support service function chaining in edge clouds? Using
> RPC? How would it address the challenges mentioned above? And Cinder?
>
> 4) Using RPC for the production integration of hundreds of edge clouds is
> quite a challenging idea; it's a basic requirement that these edge clouds
> may be bought from multiple vendors, for hardware, software or both.
>
> That means using cells in production for massively distributed edge clouds
> is quite a bad idea. If Cells provided a RESTful interface between the API
> cell and child cells, it would be much more acceptable, but it would still
> not be enough; the same applies to Cinder and Neutron. Alternatively, just
> deploy a lightweight OpenStack instance in each edge cloud, for example one
> rack. The question then is how to manage the large number of OpenStack
> instances and provision services.
>
> [1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

I agree that the RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign, including
handling of corner cases, on both sides, *especially* RPC call clients. Or
maybe it just has to be abandoned and replaced by a more cloud-friendly
pattern.
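
For context, this is roughly what the blocking RPC call pattern in question
looks like with oslo.messaging (a sketch; the transport configuration and
topic are placeholders):

    from oslo_config import cfg
    import oslo_messaging as messaging

    # The 'call' below blocks on a reply queue; it is this round trip over
    # the message bus that becomes fragile when it has to cross WAN links
    # between data centers.
    transport = messaging.get_transport(cfg.CONF)  # e.g. rabbit://...
    target = messaging.Target(topic='compute', version='4.0')
    client = messaging.RPCClient(transport, target)
    result = client.call({}, 'ping', arg='hello')  # blocks until a reply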

> 
> Best Regards
> Chaoyi Huang(joehuang)
> 
> 
> From: Andrew Laski [and...@lascii.com]
> Sent: 30 August 2016 21:03
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> On Tue, Aug 30, 2016, at 05:36 AM, lebre.adr...@free.fr wrote:
>> Dear all
>>
>> Sorry for my lack of reactivity, I've been out for the last few days.
>>
>> According to the different replies, I think we should enlarge the
>> discussion and not stay on the vCPE use-case, which is clearly specific
>> and represents only one use-case among the ones we would like to study.
>> For instance, we are in touch with NRENs in France and Poland that are
>> interested in deploying up to one rack in each of their largest PoPs in
>> order to provide a distributed IaaS platform (for further information you
>> can take a look at the presentation we gave during the last summit [1] [2]).
>>
>> The two questions were:
>> 1./ Understand whether the fog/edge computing use case is in the scope of
>> the Architecture WG and if not, do we need a massively distributed WG?
> 
> Besides the question of which WG this might fall under is the question
> of how any of the work groups are going to engage with the project
> communities. There is a group of developers pushing forward on cellsv2
> in Nova there should be some level of engagement between them and
> whomever is discussing the fog/edge computing use case. To me it seems
> like there's some level of overlap between the efforts even if cellsv2
> is not a full solution. But whatever conversations are taking place
> about fog/edge or large-scale distributed use cases seem to be
> happening in channels that I am not aware of, and I haven't heard any
> other cells developers mention them either.
> 
> So let's please find a way for people who are interested in these use
> cases to talk to the developers who are working on similar things.
> 
> 
>> 2./ How can we coordinate our actions with the ones performed in the
>> Architecture WG?
>>
>> Regarding 1./, according to the different reactions, I propose to write a
>> first draft in an etherpad to present the main goal of the Massively
>> distributed WG and how people interested in such discussions can interact
>> (I will paste the link to the etherpad by tomorrow).
>>
>> Regarding 2./, I mentioned the Architecture WG because we do not want to
>> develop additional software layers like Tricircle or other solutions (at
>> least for the moment).
>> The goal of the WG is to conduct studies and experiments to identify to
>> what extent current mechanisms can satisfy the needs of such a massi

Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Jay Pipes

On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.


Apologies to all. BT Internet has been out most of the time in the house 
I've been staying at in Cheshire while on holiday and so I've been 
having to trek to a Starbucks to try and get work done. :(



Keep in mind feature freeze is more or less Thursday 9/1.

Also keep in mind the goal from the midcycle:

"Jay's personal goal for Newton is for the resource tracker to be
writing inventory and allocation data via the placement API. Get the
data pumping into the placement API in Newton so we can start using it
in Ocata."


Indeed, that is the objective...


1. The ResourceTracker work starts here:

https://review.openstack.org/#/c/358797/

That relies on the placement service being in the service catalog and
will be optional for Newton.


Technically, the original revision of that patch *didn't* require the 
placement API service to be in the service catalog. If it wasn't there, 
the scheduler reporting client wouldn't bomb out, it would just log a 
warning and an admin could restart nova-compute when a placement API 
service entry was added to Keystone's service catalog.


But then I was asked to "robustify" things instead of using a simple error 
marker variable in the reporting client to indicate an unrecoverable 
problem with connectivity to the placement service. And the patch I 
pushed for that robustification failed all over the place, quite 
predictably. I was trying to keep the patch size to a minimum originally 
and incrementally add robust retry logic and better error handling. I 
also, as noted in the commit message, used the exact same code we were 
using in the Cinder volume driver for finding the service endpoint via 
the keystone service catalog:


https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L71-L83

That has been changed in the latest patch from Sean to use the 
keystoneauth1.session.Session object instead of a requests.Session 
object directly. Not sure why, but that's fine I suppose.
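
For readers following along, this is roughly how a keystoneauth1 Session 
resolves an endpoint from the service catalog rather than taking a hardcoded 
URL (a sketch; the credentials and names are placeholders, not the patch 
under review):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    # endpoint_filter makes the session look the endpoint up in the
    # service catalog, so no placement URL is configured by hand.
    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement'})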


There are details to be sorted about if/when to retry connecting to the
placement service with or without requiring a restart of nova-compute, but
I don't think those are too hairy.


They are indeed hairy, as most retry logic tends to be. I would be happy 
to get confirmation that we can iteratively add retry logic to followup 
patches, instead of doing it all up front in the 358797 patch.
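
To illustrate the kind of incremental retry logic being discussed (purely a 
sketch, not the code under review):

    import logging
    import time

    LOG = logging.getLogger(__name__)

    def call_with_retries(func, attempts=3, delay=1.0):
        # Retry a reporting call a few times, then log and give up rather
        # than bombing out the whole nova-compute process.
        for attempt in range(1, attempts + 1):
            try:
                return func()
            except IOError:
                if attempt == attempts:
                    LOG.warning('Giving up contacting the placement '
                                'service after %d attempts', attempts)
                    return None
                time.sleep(delay)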



Jay is working on changes that go on top of that series to push the
inventory and allocation data from the resource tracker to the placement
service.


Yes, I have the inventory patch ready to go but have not pushed it yet 
because I was trying to get the initial patch agreed-upon. I will push 
the inventory patch up shortly as soon as I rebase onto Sean's latest 
revision.


Best,
-jay



Re: [openstack-dev] [tricircle]How to address TC's concerns in Tricircle big-tent application

2016-08-31 Thread Duncan Thomas
If you're going with the approach of having a master region (which seems
sensible), you're going to want an admin API that checks that the setup of
all the regions matches, in terms of these objects existing in all regions,
for validation.

On 31 August 2016 at 10:16, joehuang  wrote:

> Hello, team,
>
> During the last weekly meeting, we discussed how to address the TC's
> concerns in the Tricircle big-tent application. After the weekly meeting,
> the proposal was co-prepared by our contributors:
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E
>
> The more feasible approach is to divide Tricircle into two independent and
> decoupled projects. Only the project that deals with networking automation
> will try to become a big-tent project; the Nova/Cinder API-GW will be
> removed from the scope of the big-tent application and put into another
> project:
>
> *TricircleNetworking:* Dedicated to cross-Neutron networking automation
> in multi-region OpenStack deployments; runs with or without TricircleGateway.
> It will try to become a big-tent project in the current application:
> https://review.openstack.org/#/c/338796/.
>
>
> *TricircleGateway:* Dedicated to providing an API gateway for those who need
> a single Nova/Cinder API endpoint in multi-region OpenStack deployments; it
> runs with or without TricircleNetworking. It will live as a non-big-tent,
> non-official OpenStack project, just like Tricircle's status today, and will
> not pursue big-tent status unless consensus can be achieved in the OpenStack
> community, including the Arch WG and the TC; then we can decide how to get
> it on board in OpenStack. A new repository needs to be applied for this
> project.
>
> To remove some overlapping implementation in the Nova/Cinder API-GW for
> global objects like flavors and volume types, we can configure one region
> as the master region; all global objects (flavor, volume type, server
> group, etc.) will be managed in the master Nova/Cinder service. In the
> Nova API-GW/Cinder API-GW, all requests for these global objects will be
> forwarded to the master Nova/Cinder, thus getting rid of any overlapping
> API implementation.
>
> For more information, please refer to the proposal draft
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E.
> Your thoughts are welcome; let's have more discussion in this week's
> meeting.
>
> Best Regards
> Chaoyi Huang(joehuang)
>
> *From:* joehuang
> *Sent:* 24 August 2016 16:35
> *To:* openstack-dev
> *Subject:* [openstack-dev][tricircle]agenda of weekly meeting Aug.24
>
> Hello, team,
>
>
>
> Agenda of Aug.24 weekly meeting:
>
>
> # progress review and concerns on features like microversions, policy
> control, dynamic pod binding, and cross-pod L2 networking
>
> # How to address the TC's concerns in the Tricircle big-tent application
>
> # open discussion
>
>
>
> How to join:
>
>
>
> # IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
> every Wednesday, starting at 13:00 UTC.
>
>
>
> If you have other topics to be discussed in the weekly meeting, please
> reply to this mail.
>
>
>
> Best Regards
>
> Chaoyi Huang ( joehuang )
>


--
Duncan Thomas


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Jay Pipes

On 08/31/2016 01:57 AM, Bogdan Dobrelya wrote:

I agree that the RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign, including
handling of corner cases, on both sides, *especially* RPC call clients. Or
maybe it just has to be abandoned and replaced by a more cloud-friendly
pattern.


++

-jay



Re: [openstack-dev] [manila] [security] [tc] Add the vulnerability:managed tag to Manila

2016-08-31 Thread John Spray
On Tue, Aug 30, 2016 at 6:07 PM, Jeremy Stanley  wrote:
> Ben has proposed[1] adding manila, manila-ui and python-manilaclient
> to the list of deliverables whose vulnerability reports and
> advisories are overseen by the OpenStack Vulnerability Management
> Team. This proposal is an assertion that the requirements[2] for the
> vulnerability:managed governance tag are met by these deliverables.
> As such, I wanted to initiate a discussion evaluating each of the
> listed requirements to see how far along those deliverables are in
> actually fulfilling these criteria.
>
> 1. All repos for a covered deliverable must meet the criteria or
> else none do. Easy enough, each deliverable has only one repo so
> this isn't really a concern.
>
> 2. We need a dedicated point of contact for security issues. Our
> typical point of contact would be a manila-coresec team in
> Launchpad, but that doesn't exist[3] (yet). Since you have a fairly
> large core review team[4], you should pick a reasonable subset of
> those who are willing to act as the next line of triage after the
> VMT hands off a suspected vulnerability report under embargo. You
> should have at least a couple of active volunteers for this task so
> there's good coverage, but more than 5 or so is probably pushing the
> bounds of information safety. Not all of them need to be core
> reviewers, but enough of them should be so that patches proposed as
> attachments to private bugs can effectively be "pre-approved" in an
> effort to avoid delays merging at time of publication.
>
> 3. The PTL needs to agree to act as a point of escalation or
> delegate this responsibility to a specific liaison. This is Ben by
> default, but if he's not going to have time to serve in that role
> then he should record a dedicated Vulnerability Management Liaison
> in the CPLs list[5].
>
> 4. Configure sharing[6][7][8] on the defect trackers for these
> deliverables so that OpenStack Vulnerability Management team
> (openstack-vuln-mgmt) has "Private Security: All". Once the
> vulnerability:managed tag is approved for them, also remove the
> "Private Security: All" sharing from any other teams (so that the
> VMT can redirect incorrectly reported vulnerabilities without
> prematurely disclosing them to manila reviewers).
>
> 5. Independent security review, audit, or threat analysis... this is
> almost certainly the hardest to meet. After some protracted
> discussion on Kolla's application for this tag, it was determined
> that projects should start supplying threat analyses to a central
> security-analysis[9] repo where they can be openly reviewed and
> ultimately published. No projects have actually completed this yet,
> but there is some process being finalized by the Security Team which
> projects will hopefully be able to follow. You may want to check
> with them on the possibility of being an early adopter for that
> process.

Given that all the drivers live in the Manila repo, will this
requirement for security audits apply to them? Given the
variety of technologies and network protocols involved in talking to
external storage systems, this strikes me as probably the hardest
part.

John

> 6. Covered deliverables need tests we can rely on to be able to
> evaluate whether privately proposed security patches will break the
> software. A cursory look shows many jobs[10] running in our upstream
> CI for changes to these repos, so that requirement is probably
> addressed (I did not yet check whether those
> unit/functional/integration tests are particularly extensive).
>
> So in summary, it looks like there are still some outstanding
> requirements not yet met for the vulnerability:managed tag but I
> don't see any insurmountable challenges there. Please let me know if
> any of the above is significantly off-track.
>
> [1] https://review.openstack.org/350597
> [2] 
> https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
> [3] https://launchpad.net/~manila-coresec
> [4] https://review.openstack.org/#/admin/groups/213,members
> [5] 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
> [6] https://launchpad.net/manila/+sharing
> [7] https://launchpad.net/manila-ui/+sharing
> [8] https://launchpad.net/pythonmanilaclient/+sharing
> [9] 
> https://git.openstack.org/cgit/openstack/security-analysis/tree/doc/source/templates/
> [10] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml
>
> --
> Jeremy Stanley
>

Re: [openstack-dev] [tricircle]How to address TC's concerns in Tricircle big-tent application

2016-08-31 Thread joehuang
Hello, Duncan,

You got the idea! We have planned to use the admin API (one of the services
added in Tricircle) to manage the routing from the master region to the other
regions. For these global objects, it will make sure the routing (or mapping)
is correct and that the objects have the same attributes; for example, it will
make sure the volume type and extra specs match from the master region to the
specified regions.
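
A minimal sketch of that kind of cross-region check using python-cinderclient 
(a keystoneauth1 session is assumed, and the comparison is deliberately 
simplistic):

    from cinderclient import client as cinder_client

    def volume_types(sess, region):
        cinder = cinder_client.Client('2', session=sess, region_name=region)
        return {vt.name: vt.extra_specs for vt in cinder.volume_types.list()}

    def check_regions(sess, master, other_regions):
        # Compare each region's volume types and extra specs against the
        # master region's view of the same global objects.
        expected = volume_types(sess, master)
        for region in other_regions:
            if volume_types(sess, region) != expected:
                print('Region %s does not match master %s' % (region, master))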

Thank you very much.

Best Regards
Chaoyi Huang(joehuang)


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 31 August 2016 17:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]How to address TC's concerns in
Tricircle big-tent application

If you're going with the approach of having a master region (which seems
sensible), you're going to want an admin API that checks that the setup of
all the regions matches, in terms of these objects existing in all regions,
for validation.

On 31 August 2016 at 10:16, joehuang <joehu...@huawei.com> wrote:
Hello, team,

During the last weekly meeting, we discussed how to address the TC's concerns
in the Tricircle big-tent application. After the weekly meeting, the proposal
was co-prepared by our contributors:
https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E

The more feasible approach is to divide Tricircle into two independent and
decoupled projects. Only the project that deals with networking automation
will try to become a big-tent project; the Nova/Cinder API-GW will be removed
from the scope of the big-tent application and put into another project:


TricircleNetworking: Dedicated to cross-Neutron networking automation in
multi-region OpenStack deployments; runs with or without TricircleGateway. It
will try to become a big-tent project in the current application:
https://review.openstack.org/#/c/338796/.

TricircleGateway: Dedicated to providing an API gateway for those who need a
single Nova/Cinder API endpoint in multi-region OpenStack deployments; runs
with or without TricircleNetworking. It will live as a non-big-tent,
non-official OpenStack project, just like Tricircle's status today, and will
not pursue big-tent status unless consensus can be achieved in the OpenStack
community, including the Arch WG and the TC; then we can decide how to get it
on board in OpenStack. A new repository needs to be applied for this project.

To remove some overlapping implementation in the Nova/Cinder API-GW for
global objects like flavors and volume types, we can configure one region as
the master region; all global objects (flavor, volume type, server group,
etc.) will be managed in the master Nova/Cinder service. In the Nova
API-GW/Cinder API-GW, all requests for these global objects will be forwarded
to the master Nova/Cinder, thus getting rid of any overlapping API
implementation.

For more information, please refer to the proposal draft
https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E.
Your thoughts are welcome; let's have more discussion in this week's meeting.

Best Regards
Chaoyi Huang(joehuang)

From: joehuang
Sent: 24 August 2016 16:35
To: openstack-dev
Subject: [openstack-dev][tricircle]agenda of weekly meeting Aug.24

Hello, team,



Agenda of Aug.24 weekly meeting:


# progress review and concerns on features like microversions, policy
control, dynamic pod binding, and cross-pod L2 networking

# How to address the TC's concerns in the Tricircle big-tent application

# open discussion


How to join:

# IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting
every Wednesday, starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang ( joehuang )





--
Duncan Thomas


[openstack-dev] [new][nova] python-novaclient 6.0.0 release (newton)

2016-08-31 Thread no-reply
We are content to announce the release of:

python-novaclient 6.0.0: Client library for OpenStack Compute API

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-novaclient

With package available at:

https://pypi.python.org/pypi/python-novaclient

Please report issues through launchpad:

https://bugs.launchpad.net/python-novaclient

For more details, please see below.

6.0.0
^^^^^


New Features
************

* The 2.37 microversion is now supported. This introduces the
  following changes:

  * CLI: The **--nic** value for the **nova boot** command now takes two
    special values, 'auto' and 'none'. If --nic is not specified, the CLI
    defaults to 'auto'.

  * Python API: The **nics** kwarg is required when creating a server
    using the *novaclient.v2.servers.ServerManager.create* API. The
    **nics** value can be a list of dicts or a string with value 'auto'
    or 'none'.


Upgrade Notes
*************

* The ability to update the following network-related resources via
  the "nova quota-update" and "nova quota-class-update" commands is
  now deprecated:

  * Fixed IPs

  * Floating IPs

  * Security Groups

  * Security Group Rules

  By default the quota and limits CLIs will not update or show those
  resources using microversion >= 2.36. You can still use them,
  however, by specifying "--os-compute-api-version 2.35". Quota
  information for network resources should be retrieved from python-
  neutronclient or python-openstackclient.

* With the 2.37 microversion, the **nics** kwarg is required when
  creating a server using the
  *novaclient.v2.servers.ServerManager.create* API. The **nics** value
  can be a list of dicts or an enum string with one of the following
  values:

  * **auto**: This tells the Compute service to automatically
allocate a network for the project if one is not available and
then associate an IP from that network with the server. This is
the same behavior as passing nics=None before the 2.37
microversion.

  * **none**: This tells the Compute service to not allocate any
networking for the server.
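
To illustrate the new requirement (this example is not part of the release
notes; a keystoneauth1 session 'sess' is assumed):

    from novaclient import client

    nova = client.Client('2.37', session=sess)
    image = nova.glance.find_image('cirros')
    flavor = nova.flavors.find(name='m1.tiny')
    # At 2.37 'nics' must be passed explicitly: a list of dicts, or the
    # special strings 'auto' / 'none'.
    server = nova.servers.create(name='vm1', image=image, flavor=flavor,
                                 nics='auto')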


Deprecation Notes
*****************

* The following commands are now deprecated:

  * dns-create

  * dns-create-private-domain

  * dns-create-public-domain

  * dns-delete

  * dns-delete-domain

  * dns-domains

  * dns-list

  * fixed-ip-get

  * fixed-ip-reserve

  * fixed-ip-unreserve

  * floating-ip-create

  * floating-ip-delete

  * floating-ip-list

  * floating-ip-pool-list

  * floating-ip-bulk-create

  * floating-ip-bulk-delete

  * floating-ip-bulk-list

  * network-create

  * network-delete

  * network-disassociate

  * network-associate-host

  * network-associate-project

  * network-list

  * network-show

  * scrub

  * secgroup-create

  * secgroup-delete

  * secgroup-list

  * secgroup-update

  * secgroup-add-group-rule

  * secgroup-delete-group-rule

  * secgroup-add-rule

  * secgroup-delete-rule

  * secgroup-list-rules

  * secgroup-list-default-rules

  * secgroup-add-default-rule

  * secgroup-delete-default-rule

  * tenant-network-create

  * tenant-network-delete

  * tenant-network-list

  * tenant-network-show

  With the 2.36 microversion these will fail in the API. The CLI will
  fallback to passing the 2.35 microversion to ease the transition.
  Network resource information should be retrieved from python-
  neutronclient or python-openstackclient.

Changes in python-novaclient 5.1.0..6.0.0
-----------------------------------------

c915a02 Fix test_trigger_crash_dump_in_locked_state_nonadmin test
f7dc91d Fixes TypeError on Client instantiation
8ebf859 Updated from global requirements
7d72ed2 Updated from global requirements
744c9b6 Replace functions 'Dict.get' and 'del' with 'Dict.pop'
8751475 Removed unused 'install_venv' module
a14e30e [functional] Do not discover same resources for every test
3b834f2 functional tests fail if cirros image not exist
7a2c412 Updated from global requirements
01de5a9 Pick first image if can't find the specific image
030ce53 Add support for v2.37 and auto-allocated networking
20b721e Cap image API deprecated methods at 2.35
4215e3f Cap baremetal python APIs at 2.35
aaebeb0 Deprecate all the nova-network functions in the python API
578c398 Deprecate network-* commands and clamp to microversion 2.35
9c5c8a6 [functional] Skip tests if API doesn't support microversion
c3b5365 Updated from global requirements
65c1fba Store api_version object in one place
4f0871f Add eggs to gitignore list


Diffstat (except docs and test files)
-------------------------------------

.gitignore |   1 +
novaclient/__init__.py |   2 +-
novaclient/api_versions.py |  22 ++
novaclient/client.py   |   3 +-
novaclient/v2/client.py|  12 +-
novaclient/v2/co

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 11:57, Bogdan Dobrelya  wrote:


> I agree that the RPC design pattern, as it is implemented now, is a major
> blocker for OpenStack in general. It requires a major redesign,
> including handling of corner cases, on both sides, *especially* RPC call
> clients. Or maybe it just has to be abandoned and replaced by a more
> cloud-friendly pattern.
>


Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the issues
and the design goals of the replacement, we're unlikely to make progress on
a replacement - even if somebody takes the heroic approach and writes a
full replacement themselves, the odds of getting community buy-in are very
low.


[openstack-dev] [neutron]Should segments become a default service plugin?

2016-08-31 Thread Hong Hui Xiao
Hi all,

Before routed networks, the functions in [1] were used to manage network
segments. Routed networks introduce a plugin [2] to manage network segments,
so now there are 2 code paths doing the same job. Everywhere except in ml2,
the segments plugin is used to manage network segments.


Patches [3] and [4] will make ml2 use the segments plugin to create/delete
segments. This makes the code simpler and fixes the conflict issue mentioned
in [4]. Since ml2 would then always use segments, this brings the requirement
that the segments plugin always be available; otherwise, the segments plugin
needs to be constructed when used (see the sketch below).
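
For illustration, a rough sketch of the two alternatives (the 'segments'
lookup key and the fallback are assumptions, not the actual patch):

    from neutron import manager
    from neutron.services.segments import plugin as segments_plugin

    def get_segments_plugin():
        # If the segments plugin is a default service plugin, it can
        # simply be looked up; otherwise construct it on demand.
        plugins = manager.NeutronManager.get_service_plugins()
        if 'segments' in plugins:
            return plugins['segments']
        return segments_plugin.Plugin()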


I made patch [5] to set the segments plugin as a default service plugin.


[1] neutron.db.segments_db
[2] neutron.services.segments.plugin
[3] https://review.openstack.org/#/c/317358/
[4] https://review.openstack.org/#/c/359147/
[5] https://review.openstack.org/#/c/363527/


--

HongHui Xiao



[openstack-dev] [new][senlin] senlin-dashboard 0.4.0 release (newton)

2016-08-31 Thread no-reply
We are ecstatic to announce the release of:

senlin-dashboard 0.4.0: Senlin Dashboard

This release is part of the newton release series.

Please report issues through launchpad:

https://bugs.launchpad.net/senlin-dashboard

For more details, please see below.

0.4.0
^^^^^


New Features
************

* Paginated list for node objects.


Bug Fixes
*********

* Fixed link to cluster in receiver table.

* Fixed installation documentation when using devstack environment.

* Fixed the display of long names which could break the table
  layout.

* Fixed node detail page view.

Changes in senlin-dashboard 0.2.0..0.4.0
----------------------------------------

cc66565 Release notes for newton-3 milestone
85cb85e Use constraints in tox.ini
6d5a2b5 Prevent long names breaking table layouts
ce0bf02 Fix home-page url in setup.cfg file
faf923e Add "*.swo" to ".gitignore" file
7885425 Fix senlin-dashboard devstack installation guide
26552e8 Fix test_create_policy fail
2f20327 Add Apple OS X ".DS_Store" to ".gitignore" file
efcd7e4 Add unit test for senlin-dashboard rest api
6290784 Imported Translations from Zanata
c004a4b Fix click into node detail bug
3917d8f Updated from global requirements
56aa97b Show name_or_id when profile name is white space
c6e2cd5 show the policy type in policy detail page
0b841a9 There should a space between the two buttons in "Manage Policies" form
fdf73a2 show cluster info in node detail page
bc0068f Imported Translations from Zanata
4509c5b Fix senlin-dashboard unit test
8e06b74 Imported Translations from Zanata
ac8d360 Add deletion action in angular senlin receiver table
c847c87 Add Angular senlin receiver details use registry
7a189ac Add Angular Senlin Receivers Table use registry
6cf9f53 Add missing cluster link in receiver table
f7646c2 Updated from global requirements
fda63a1 Enable eslint and karma test(Javascript test)
088196f Enable pagination in node table
d0baa9f Imported Translations from Zanata
5609185 Horizon selects are now themable: Senlin Panels
f0dcaa0 Updated from global requirements
a7f0512 Imported Translations from Zanata
b560871 Delete duplicate period
89998c8 modify help_text message
d111794 Imported Translations from Zanata
c6d4f2d Add more details in cluster details page
659944b Remove unnecessary 'url' in cluster detail context
8d6d428 Update .gitignore for JetBrains(PyCharm) users
2b1ec4f Add unit test for cluster policy mamagement
4e9c0cb Use entities.reverse() rather sorted(.., reverse=True)
f023a5e Remove unnecessary params
c0ae127 Enable pagination in receiver table
ac409d8 Updated from global requirements
1ad2b6c Remove unnecessary dot
fd1fad3 Imported Translations from Zanata
c7cd770 Enable pagination in policy table
9d7b886 Enable pagination in cluster table
1eeef57 Enable pagination in profile table
22e48c8 Add sort key and dir for node_list
58767ac Update Cluster&Node status
85ecd90 Add sort key and dir for cluster_list
e202c44 Add sort key and dir for profile_list
19468aa Add reno for release notes management
fb66a95 Imported Translations from Zanata
f6df4b9 Add node update action
af47677 Fix empty `Timestamp` column in cluster/node event tables
b3da5a2 Add cluster policies in cluster detail page
e809df5 Fix empty Create/Update column in cluster/profile... tables
ff361f3 Add assert for api test
4b95512 Fix test_cluster_event in clusters test.
c6bc55c Remove policies which had attached to cluster while attaching policy
82596f5 Remove useless variable assignment


Diffstat (except docs and test files)
-------------------------------------

.eslintrc  |  54 ++
.gitignore |   8 +
README.rst |  21 +-
_50_senlin.py.example  |   2 +
package.json   |  29 +
releasenotes/notes/.placeholder|   0
...cluster-link-for-receiver-3146134204071eb5.yaml |   3 +
.../install-instruction-246a41feb20955ec.yaml  |   3 +
.../notes/long-names-a03acbfbc159850d.yaml |   3 +
.../notes/node-detail-view-e605e24a6978be18.yaml   |   3 +
.../node-list-pagination-acf1f0d161f7097f.yaml |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 277 +++
releasenotes/source/index.rst  |   8 +
.../source/locale/ja/LC_MESSAGES/releasenotes.po   |  21 +
.../locale/zh_CN/LC_MESSAGES/releasenotes.po   |  21 +
releasenotes/source/unreleased.rst |   5 +
senlin_dashboard/api/rest/__init__.py  |  23 +
senlin_dashboard/api/rest/senlin.py|  71 ++
senlin_dashboard/api/senlin.py | 185 -
senlin_dashboard/api/utils.py  |  40 +
senlin_dashboard/cluster/clusters/forms.py |  23 +-
senlin_dashboard/cluster/clusters/tables.py|  25 +-
senlin_dashbo

Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-31 Thread Michał Dulko
On 08/25/2016 07:52 PM, Andrew Laski wrote:
>  On Thu, Aug 25, 2016, at 12:22 PM, Everett Toews wrote:
>> Top posting with general comment...
>>
>> It sounds like there's some consensus in Nova-land around these
>> traits (née "capabilities"). The API Working Group [4] is
>> also aware of similar efforts in Cinder [1][2] and Glance [3].
>
> To be clear, we're looking at exposing both traits and capabilities in
> Nova. This puts us in a weird spot where I think our concept of traits
> aligns with Cinder's capabilities, but I don't see any match for the
> Nova concept of capabilities. So I'm still open to naming suggestions
> but I think capabilities most accurately describes what it is. Dean
> has it right, I think, that what we really have are 'api capabilities'
> and 'host capabilities'. But people will end up just using
> 'capabilities' and cause confusion.

I think I need to clarify this a bit. In Cinder we already have a
resource called "capabilities". It returns the possible hardware features of
a particular volume backend, like compression or QoS support. This is
returned in a form similar to Glance's Metadata Catalog API (aka
Graffiti), so it should be easily consumable by Horizon to produce a
structured UI letting an admin define meaningful volume type metadata that
will enable particular backend options. As it's based on internal host
and backend names, it's a rather admin-facing API. This is what in current
Nova's definition would be called "traits", right?

Now what we're looking to also expose is the possible actions per
deployment, volume type, or maybe even a particular volume: an API that
will make the answers to questions like "can I create a volume backup in
this cloud?" or "can volumes of this type be included in consistency
groups?" easily discoverable. These are more like Nova's "capabilities".

>> If these are truly the same concepts being discussed across projects,
>> it would be great to see consistency in the APIs and have the
>> projects come together under a new guideline. I encourage the
>> projects and people to propose such a guideline and for someone to
>> step up and champion it. Seems like good fodder for a design session
>> proposal at the upcoming summit.
>
> Here's what all of these different things look like to me:
>
> Cinder is looking to expose hardware capabilities. This pretty closely
> aligns with what traits are intending to do in Nova. This answers the
> question of "can I create a resource that needs/does X in this
> deployment?" However in Nova we ultimately want users to be able to
> specify which traits they want for their instance. That may be
> embedded in a flavor or arbitrarily specified in the request but a
> trait is not implicitly available to all resources like it seems it is
> in Cinder. We assume there could be a heterogeneous environment so
> without requesting a trait there's no guarantee of getting it.

Requesting "traits" in Cinder is still based on an admin-defined volume
types and there are no plans to change that yet, so I think that's one
of the main differences - in Nova's case "traits" API must be user-facing.

> Nova capabilities are intended to answer the question of "as user Y
> with resource X what can I do with it?" This is dependent on user
> authorization, hardware "traits" where the resource lives, and service
> version. I didn't see an analog to this in any of the proposals below.
> And one major difference between this and the other proposals is that,
> if possible, we would like the response to map to the API action that
> will perform that capability. So if a user can perform a resize on
> their instance the response might include 'POST
> .../servers//action -d resize' or whatever form we come up with.

Yup, that's basically what [1] wants to implement in Cinder. I think we
should hold up this patch until either we come up with consistent
cross-project solution, or agree that all the projects should go their
own way on this topic.

> The Glance concept of value discovery maps closely to what Nova
> capabilities are in intent in that it answers the question of "what
> can I do in this API request that will be valid?" But the scope is
> completely different in that it doesn't answer the question of which
> API requests can be made, just what values can be used in this
> specific call.
>
>
> Given the above I find that I don't have the imagination required to
> consolidate those into a consistent API concept that can be shared
> across projects. Cinder capabilities and Nova traits could potentially
> work, but the rest seem too different to me. And if we change
> traits->capabilities then we should find another name for what is
> currently Nova capabilities.
>
> -Andrew

I see similarities between Nova's and Cinder's problem space and I
believe we can come up with a consistent API here. This sounds like a
topic suitable for a cross-project discussion at the Design Summit.

[1] https://review.openstack.org/#/c/350310


[openstack-dev] [new][senlin] python-senlinclient 1.0.0 release (newton)

2016-08-31 Thread no-reply
We are tickled pink to announce the release of:

python-senlinclient 1.0.0: OpenStack Clustering API Client Library

This release is part of the newton release series.

For more details, please see below.

1.0.0
^^^^^


New Features
************

* A new command 'senlin cluster-collect' and its corresponding OSC
  plugin command have been added. This new command can be used to
  aggregate a specific property across a cluster.

* A new CLI command 'senlin cluster-run' and a new OSC plugin
  command 'openstack cluster run' have been added. Use the 'help'
  command to find out how to use it.

* The senlin CLI 'node-delete' and the OSC plugin command 'cluster
  node delete' now output the action IDs on success. Error
  messages are printed when appropriate.

* The senlinclient now supports API micro-versioning. The currently
  supported version is 'clustering 1.2'.

* A policy-validate command has been added to the senlin command line.
  OSC support has been added as well.

* A profile-validate command has been added to the command line. It can
  be used to validate the spec of a profile without creating it.

* Support for Python 3.5 has been verified and gated.
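
For those trying the new commands above, the built-in help is the
quickest reference, e.g. (hypothetical invocations):

    % senlin help cluster-collect
    % senlin help cluster-run
    % openstack cluster run --help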


Bug Fixes
*********

* The cluster policy list command was broken by new SDK changes and
  then fixed. The 'enabled' field has been renamed to 'is_enabled'.


Other Notes
***********

* The 'senlin' CLI will be removed in April 2017. This message is
  now explicitly printed when senlin CLI commands are invoked.

* The receiver creation command (both senlin CLI and OSC plugin
  command) now allow 'cluster' and 'action' to be left unspecified if
  the receiver type is not 'webhook'.

Changes in python-senlinclient 0.5.0..1.0.0
-------------------------------------------

df8b937 Release notes for newton-3 milestone
2bf28e5 Clarify senlin CLI deprecation message
6e41ab2 Fix output from cluster/node delete
473129a Revise params of 'receiver create' command
09a03b3 Remove *openstack/common* from flake8 excclude list in tox.ini
6b812ff OSC plugin command for cluster-run
94e6ec9 Updated from global requirements
71d0f6c Fix "openstack cluster policy binding list "
951f7ee Fix cluster-policy-list
a5f8b22 Add 'cluster-run' command
74958f8 Updated from global requirements
e950249 Updated from global requirements
57aa17c Fix get_node
49f18b0 Fix _show_node error
2eca413 Remove mox3 in test-requirement.txt of senlinclient
7b83eec py3:Rmove six.iteritems/iterkeys in python-senlinclient
e3e30ca Imported Translations from Zanata
3a212cd Add policy validate operation to senlinclient
e450261 Add profile validate operation to senlinclient
1ee0877 Imported Translations from Zanata
a619b79 Reorder required parameters
92f5273 Remove 'ProfileAction' and related arguments
1c6496c Remove local cached version of Resource class
f7ecc4b Add cluster_collect support
ae07c57 Add support to micro-version
46f1962 Updated from global requirements
6a015d9 Remove discover from test-requirements
7a286d3 Fix typo
69ba1c6 Fix nodes display in cluster-show
dd21b7a Update senlinclient for new sdk version
8dc1136 Updated from global requirements
bb68b27 Updated from global requirements
3595a65 Add Python 3.5 classifier and venv
cae3ad5 Updated from global requirements
5c83678 Prints  '-' instead of 'None' when data is None
27048ae Updated from global requirements
35d401a Updated from global requirements
986f648 Imported Translations from Zanata
f1b2791 Delete extra space
4437ed8 Use osc_lib instead of cliff
b202160 Updated from global requirements
9891067 Use osc-lib instead of openstackclient
b5dc38d Enhance message related to OS_PROJECT_DOMAIN_NAME
1351bcb Imported Translations from Zanata
8cc30e1 Imported Translations from Zanata
84916ac Fix openstack cluster resize
10ac9cb Remove unused POT file


Diffstat (except docs and test files)
-------------------------------------

.../notes/cli-deprecation-241b9569b85f8fbd.yaml    |    4 +
.../notes/cluster-collect-a9d1bc8c2e799c7c.yaml    |    5 +
.../cluster-policy-list-42ff03ef25d64dd1.yaml      |    4 +
.../notes/cluster-run-210247ab70b289a5.yaml        |    5 +
.../notes/deletion-output-a841931367a2689d.yaml    |    5 +
.../notes/micro-version-a292ce3b00d886af.yaml      |    4 +
.../notes/policy-validate-193a5ebb7db3440a.yaml    |    4 +
.../notes/profile-validate-587f1a964e93c0bf.yaml   |    4 +
.../notes/python-3.5-c9fd8e34c4046357.yaml         |    3 +
.../notes/receiver-create-8305d4efbdf35f35.yaml    |    5 +
releasenotes/source/conf.py                        |    6 +-
.../locale/zh_CN/LC_MESSAGES/releasenotes.po       |   40 +
requirements.txt                                   |    9 +-
senlinclient/cliargs.py                            |   30 -
senlinclient/common/format_utils.py                |    4 +-
senlinclient/common/sdk.py                         |  107 +-
senlinclient/common/utils.py                       |   25 +-
senlinclient/locale/senlinclient.pot               | 1013 
.../zh_CN/LC_MESSAGES/senlinclient-log-info.po 

[openstack-dev] [Neutron] Missing functional job in check queue

2016-08-31 Thread Jakub Libosvar

Hi all,

you might have noticed there was a time period where we ran our gate 
without the functional job. It was my goof: I appended the ubuntu-trusty 
suffix to the functional job [1], but that suffix by default makes the job 
run on stable branches only. It should be fixed by now [2].


Sorry for the inconvenience caused.

Jakub

[1] https://review.openstack.org/#/c/359843/
[2] https://review.openstack.org/#/c/363531/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Jay Pipes

On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.

Keep in mind feature freeze is more or less Thursday 9/1.

Also keep in mind the goal from the midcycle:

"Jay's personal goal for Newton is for the resource tracker to be
writing inventory and allocation data via the placement API. Get the
data pumping into the placement API in Newton so we can start using it
in Ocata."

1. The ResourceTracker work starts here:

https://review.openstack.org/#/c/358797/

That relies on the placement service being in the service catalog and
will be optional for Newton. There are details to be sorted about
if/when to retry connecting to the placement service with or without
requiring a restart of nova-compute, but I don't think those are too hairy.

Jay is working on changes that go on top of that series to push the
inventory and allocation data from the resource tracker to the placement
service.


OK, just pushed a new revision for 358797 that fixes up Sean's patch with 
unit tests and pep8 fixes, and addresses the review comment from Alex Xu 
regarding a missing log statement.


Also, pushed the patch that adds inventory recording from the resource 
tracker, rebased onto Sean's patch:


https://review.openstack.org/363583

It needs unit tests added. Will try to get to those this evening but if 
someone is interested in helping, that would be cool.
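
For reviewers without placement API context, the inventory write boils
down to a PUT of a payload along these lines (a hedged sketch; field
names follow the placement API as proposed and may differ in the final
patch):

    # PUT /resource_providers/{uuid}/inventories
    payload = {
        'resource_provider_generation': 1,
        'inventories': {
            'VCPU': {'total': 16, 'reserved': 0, 'allocation_ratio': 16.0},
            'MEMORY_MB': {'total': 32768, 'reserved': 512,
                          'allocation_ratio': 1.5},
            'DISK_GB': {'total': 500, 'reserved': 0,
                        'allocation_ratio': 1.0},
        },
    }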


Best,
-jay


Chris Dent pointed out that there is remaining work to do with the
allocation objects in the placement API, but those can be worked in
parallel to the RT work Jay is doing.

2. Chris is going to cleanup the devstack change that adds the placement
service:

https://review.openstack.org/#/c/342362/

The main issue is there isn't a separate placement database, at least
not by default, so Chris has to take that into account. In Newton, by
default, the Nova API DB will be used for the placement service. You can
optionally configure a separate placement database with the API DB
schema, but we're not going to test with that as the default in devstack
in Newton since that's most likely not what deployers would be doing in
Newton as the placement service is still optional.

3. I'm going to work on a job that runs in the experimental queue and
enables the placement service. So by default in Newton devstack the
placement service will not be configured or running. With the
experimental queue job we can test the Nova changes with and without the
placement service to make sure we didn't completely screw something up.

--

If I've left something out please add it here.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ipam] max_fixed_ips_per_port

2016-08-31 Thread Gary Kotton
Hi,
In commit 37277cf4168260d5fa97f20e0b64a2efe2d989ad [i] and commit 
fc661571765054ff09e41aa6c7fc32f80fd0a98d [ii] this variable was marked as 
deprecated. A number of neutron plugins make use of this to ensure that only 
one IP address can be configured per port per subnet. There are cases where the 
DHCP implementations only enable one MAC:IP mapping, so setting this 
configuration variable makes life very simple for us.
The deprecations and possible removal in N, O or P could cause issues.
Would it be possible to do one of the two options:

1.   revert the two patches below with the deprecation warnings [iii]

2.   Or add a callback that would enable plugins to add their own 
validation logic (I need to push a patch if the option above is a no-go).
Thanks
Gary

[i] 
https://github.com/openstack/neutron/commit/37277cf4168260d5fa97f20e0b64a2efe2d989ad
[ii] 
https://github.com/openstack/neutron/commit/fc661571765054ff09e41aa6c7fc32f80fd0a98d
[iii] https://review.openstack.org/#/c/363599/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ipam] max_fixed_ips_per_port

2016-08-31 Thread Gary Kotton
I posted https://review.openstack.org/363602 . Deprecating this via a Mitaka 
release note was silly.
So please let me know what you guys think.

I think that keeping the configuration variable is the sanest option. 
Implementing a callback in each and every plugin that wants to enforce this is 
overkill.


From: Gary Kotton 
Reply-To: OpenStack List 
Date: Wednesday, August 31, 2016 at 2:36 PM
To: OpenStack List 
Subject: [openstack-dev] [neutron][ipam] max_fixed_ips_per_port

Hi,
In commit 37277cf4168260d5fa97f20e0b64a2efe2d989ad [i] and commit 
fc661571765054ff09e41aa6c7fc32f80fd0a98d [ii] this variable was marked as 
deprecated. A number of neutron plugins make use of this to ensure that only 
one IP address can be configured per port per subnet. There are cases where the 
DHCP implementations only enable one MAC:IP mapping, so setting this 
configuration variable makes life very simple for us.
The deprecations and possible removal in N, O or P could cause issues.
Would it be possible to do one of the two options:

1.  revert the two patches below with the deprecation warnings [iii]

2.  Or add a callback that would enable plugins to add their own validation 
logic (I need to push a patch if the option above is a no-go).
Thanks
Gary

[i] 
https://github.com/openstack/neutron/commit/37277cf4168260d5fa97f20e0b64a2efe2d989ad
[ii] 
https://github.com/openstack/neutron/commit/fc661571765054ff09e41aa6c7fc32f80fd0a98d
[iii] https://review.openstack.org/#/c/363599/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Sylvain Bauza



On 31/08/2016 13:08, Jay Pipes wrote:

On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.

Keep in mind feature freeze is more or less Thursday 9/1.

Also keep in mind the goal from the midcycle:

"Jay's personal goal for Newton is for the resource tracker to be
writing inventory and allocation data via the placement API. Get the
data pumping into the placement API in Newton so we can start using it
in Ocata."

1. The ResourceTracker work starts here:

https://review.openstack.org/#/c/358797/

That relies on the placement service being in the service catalog and
will be optional for Newton. There are details to be sorted about
if/when to retry connecting to the placement service with or without
requiring a restart of nova-compute, but I don't think those are too 
hairy.


Jay is working on changes that go on top of that series to push the
inventory and allocation data from the resource tracker to the placement
service.


OK, just pushed a new revision for 358797 that fixes up Sean's patch 
with unit tests and pep8 fixes, and addresses the review comment from Alex 
Xu regarding a missing log statement.


Also, pushed the patch that adds inventory recording from the resource 
tracker, rebased onto Sean's patch:


https://review.openstack.org/363583

It needs unit tests added. Will try to get to those this evening but 
if someone is interested in helping, that would be cool.




I'm already working on adding UTs to 
https://review.openstack.org/#/c/363061, which aims to do the same 
thing.


-sylvain


Best,
-jay


Chris Dent pointed out that there is remaining work to do with the
allocation objects in the placement API, but those can be worked in
parallel to the RT work Jay is doing.

2. Chris is going to cleanup the devstack change that adds the placement
service:

https://review.openstack.org/#/c/342362/

The main issue is there isn't a separate placement database, at least
not by default, so Chris has to take that into account. In Newton, by
default, the Nova API DB will be used for the placement service. You can
optionally configure a separate placement database with the API DB
schema, but we're not going to test with that as the default in devstack
in Newton since that's most likely not what deployers would be doing in
Newton as the placement service is still optional.

3. I'm going to work on a job that runs in the experimental queue and
enables the placement service. So by default in Newton devstack the
placement service will not be configured or running. With the
experimental queue job we can test the Nova changes with and without the
placement service to make sure we didn't completely screw something up.

--

If I've left something out please add it here.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Sean Dague
On 08/31/2016 05:07 AM, Jay Pipes wrote:
> On 08/29/2016 12:40 PM, Matt Riedemann wrote:
>> I've been out for a week and not very involved in the resource providers
>> work, but after talking about the various changes up in the air at the
>> moment a bunch of us thought it would be helpful to lay out next steps
>> for the work we want to get done this week.
> 
> Apologies to all. BT Internet has been out most of the time in the house
> I've been staying at in Cheshire while on holiday and so I've been
> having to trek to a Starbucks to try and get work done. :(
> 
>> Keep in mind feature freeze is more or less Thursday 9/1.
>>
>> Also keep in mind the goal from the midcycle:
>>
>> "Jay's personal goal for Newton is for the resource tracker to be
>> writing inventory and allocation data via the placement API. Get the
>> data pumping into the placement API in Newton so we can start using it
>> in Ocata."
> 
> Indeed, that is the objective...
> 
>> 1. The ResourceTracker work starts here:
>>
>> https://review.openstack.org/#/c/358797/
>>
>> That relies on the placement service being in the service catalog and
>> will be optional for Newton.
> 
> Technically, the original revision of that patch *didn't* require the
> placement API service to be in the service catalog. If it wasn't there,
> the scheduler reporting client wouldn't bomb out, it would just log a
> warning and an admin could restart nova-compute when a placement API
> service entry was added to Keystone's service catalog.
> 
> But then I was asked to "robustify" things instead of using a simple error
> marker variable in the reporting client to indicate an unrecoverable
> problem with connectivity to the placement service. And the patch I
> pushed for that robustification failed all over the place, quite
> predictably. I was trying to keep the patch size to a minimum originally
> and incrementally add robust retry logic and better error handling. I
> also, as noted in the commit message, used the exact same code we were
> using in the Cinder volume driver for finding the service endpoint via
> the keystone service catalog:
> 
> https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L71-L83
> 
> That has been changed in the latest patch from Sean to use the
> keystoneauth1.session.Session object instead of a requests.Session
> object directly. Not sure why, but that's fine I suppose.

Because the requests code literally couldn't work: it had no keystone
auth pieces. With keystoneauth sessions we actually get the token
handling, and still get the low-level interface that was asked for.

The cinder code isn't a good analog here, because it's actually acting
on behalf of a user in their request context.

https://github.com/openstack/nova/blob/1abb6f7b4e190c6ef3f409c7d418fda1c857423e/nova/volume/cinder.py#L71
only works because the context is user-generated, and we can convert our
context back into something we can send to the keystone client.

This doesn't work in the placement API case, because we're not doing it
with user context, we're doing it behind the scenes with a service user.
Doing context.elevated() and then trying to do this kind of call just
doesn't work (which is what the original patch did) because you can't just
conjure keystone admin credentials that way. If you could, we'd have a
crazy security issue. :)

Neutron is a better analog here, because we have to do some actions
without a user context.
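
For the curious, the service-user pattern with keystoneauth looks
roughly like this (a minimal sketch; the option names and values are
placeholders, not Nova's actual config options):

    from keystoneauth1 import identity, session

    auth = identity.Password(
        auth_url='http://keystone.example.com:5000/v3',
        username='nova',                  # placeholder service user
        password='secret',
        project_name='service',
        user_domain_name='Default',
        project_domain_name='Default')
    sess = session.Session(auth=auth)
    # The session injects the token and can resolve the endpoint from
    # the service catalog:
    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement'})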

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [FFE] Horizon Profiler feature (and openstack/osprofiler integration)

2016-08-31 Thread Timur Sufiev
Hello, folks!

I'd like to ask for a feature-freeze exception for the Horizon Profiler
feature [1], which was demoed long ago (during the Portland midcycle, Feb
2016) and is finally ready. The actual request applies to the 3 patches [2]
that provide the bulk of the Profiler functionality.

It is quite a useful feature that is aimed mostly at developers; thus it is
constrained within the Developer dashboard and disabled by default, so it
shouldn't have any impact on user-facing Horizon capabilities.

[1]
https://blueprints.launchpad.net/horizon/+spec/openstack-profiler-at-developer-dashboard
[2]
https://review.openstack.org/#/q/topic:bp/openstack-profiler-at-developer-dashboard+status:open
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Is this a bug in metadata proxy...

2016-08-31 Thread Paul Michali
Hi,

I had seen something and was not sure if this was a subtle bug or not.

I have a Liberty-based OpenStack setup. The account that is setting up
processes was user=neutron, group=neutron; however, the metadata_agent.ini
config file was set up for a different group. So there were
metadata_proxy_user=neutron and metadata_proxy_group=foo config settings.

This ini file was used by the metadata agent process, but it was not
included in the DHCP agent process (not sure if I should have included the
metadata_agent.ini in the startup of DHCP or should have added these two
metadata proxy settings to neutron.conf, so that they were available to
DHCP).

In any case, here is what I saw happen...

I created a subnet (not using a router in this setup). It looks like DHCP
starts up the metadata proxy daemon, and the DHCP configuration is
used, which does NOT include the metadata_proxy_user/group, so the current
user's uid and gid (neutron/neutron) are used for the
metadata_proxy_user/group settings.

The proxy calls drop_privileges(), and because the group is different,
the daemon can no longer access the log file. An OSError occurs
with permission denied on the log file for this process, and the process
exits without any indication.

When I then try to use metadata services it fails (obviously). Looking, we
see that the metadata service is running (but the proxy is not, and I don't
see a way for an end user to check that - is there a way?).

Looking in the proxy log, the initial startup messages are seen, showing
all the configuration settings, and then there is nothing more. No
indication that it is lowering privileges to run under some other
user/group, that there was a fatal error, or that it is working and ready
to process requests. Nothing more appears in the log, as it was working and
there were no metadata proxy requests occurring.

I was only able to figure it out, by first checking to see if the proxy was
running, and then manually trying to start the proxy, using the command
line in the log, under a debugger, to find out that there was a permission
denied error.

So, it is likely a misconfiguration error on the user's part, but it was
really hard to figure that out.

Should/could we somehow indicate if there is an error lowering privs?

Is there a (user) way to tell if proxy is running?

Is there some documentation indicating that the proxy user/group settings
need to be available for both the metadata agent and for other agents that
may spawn the proxy (DHCP, L3)?

Regards,

PCM
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Michael Still
There is a quick sketch of what a service account might look like at
https://review.openstack.org/#/c/363606/ -- I need to do some more fiddling
to get the new option group working, but I could do that if we wanted to
try and get this into Newton.

Michael

On Wed, Aug 31, 2016 at 7:54 AM, Matt Riedemann 
wrote:

> On 8/30/2016 4:36 PM, Michael Still wrote:
>
>> Sorry for being slow on this one, I've been pulled into some internal
>> things at work.
>>
>> So... Talking to Matt Riedemann just now, it seems like we should
>> continue to pass the user authentication details through to the plugin
>> when we have them. The problem is what to do in the case where we do
>> not (which is mostly going to be when the instance itself makes a
>> metadata request).
>>
>> I think what you're saying though is that the middleware won't let any
>> requests through if they have no auth details? Is that correct?
>>
>> Michael
>>
>>
>>
>>
>> On Fri, Aug 26, 2016 at 12:46 PM, Adam Young wrote:
>>
>> On 08/22/2016 11:11 AM, Rob Crittenden wrote:
>>
>> Adam Young wrote:
>>
>> On 08/15/2016 05:10 PM, Rob Crittenden wrote:
>>
>> Review https://review.openstack.org/#/c/317739/
>> added a new dynamic
>> metadata handler to nova. The basic gist is that rather
>> than serving
>> metadata statically, it can be done dynamically, so that
>> certain values
>> aren't provided until they are needed, mostly for
>> security purposes
>> (like credentials to enroll in an AD domain). The
>> metadata is
>> configured as URLs to a REST service.
>>
>> Very little is passed into the REST call, mostly UUIDs
>> of the
>> instance, image, etc. to ensure a stable API. What this
>> means though
>> is that the REST service may need to make calls into
>> nova or glance to
>> get information, like looking up the image metadata in
>> glance.
>>
>> Currently the dynamic metadata handler _can_ generate
>> auth headers if
>> an authenticated request is made to it, but consider
>> that a common use
>> case is fetching metadata from within an instance using
>> something like:
>>
>> % curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json
>>
>> This will come into the nova metadata service
>> unauthenticated.
>>
>> So a few questions:
>>
>> 1. Is it possible to configure paste (I'm a relative
>> newbie) so that both
>> authenticated and unauthenticated requests are accepted,
>> such that IF
>> an authenticated request comes in, those credentials can
>> be used,
>> otherwise fall back to something else?
>>
>>
>>
>> Only if they are on different URLs, I think.  It's auth_token
>> middleware
>> for all services but Keystone.  For Keystone, the rules are
>> similar, but the
>> implementation is a little different.
>>
>>
>> Ok. I'm fine with the unauthenticated path if we can
>> just create a separate service user for it.
>>
>> 2. If an unauthenticated request comes in, how best to
>> obtain a token
>> to use? Is it best to create a service user for the REST
>> services
>> (perhaps several), use a shared user, something else?
>>
>>
>>
>> No unauthenticated requests, please.  If the call is to
>> Keystone, we
>> could use the X509 Tokenless approach, but if the call comes
>> from the
>> new server, you won't have a cert by the time you need to
>> make the call,
>> will you?
>>
>>
>> Not sure which cert you're referring to, but yeah, the metadata
>> service is unauthenticated. The requests can come in from the
>> instance which has no credentials (via http://169.254.169.254/).
>>
>> Shared service users are probably your best bet.  We can
>> limit the roles
>> that they get.  What are these calls you need to make?
>>
>>
>> To glance for image metadata, Keystone for project information
>> and nova for instance information. The REST call passes in
>> various UUIDs for these so they need to be dereferenced. There
>> is no guarantee that these would be cal

Re: [openstack-dev] [manila] [security] [tc] Add the vulnerability:managed tag to Manila

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 10:18:57 +0100 (+0100), John Spray wrote:
[...]
> Given that all the drivers live in the Manila repo, is this
> requirement for security audits going to apply to them?  Given the
> variety of technologies and network protocols involved in talking to
> external storage systems, this strikes me as probably the hardest
> part.
[...]

Perhaps, but if you look at the templates the Security Group has
started putting together, they're not for a rigorous code audit:

https://git.openstack.org/cgit/openstack/security-analysis/tree/doc/source/templates/

The idea, as I understand it, is that projects will document their
architecture and security model, and call out risks they can
identify posed by different parts thereof. So for drivers in the
manila repo, this is perhaps things like risk in leaking credentials
for the storage backend to unauthorized parties or failure to
enforce isolation of data access between distinct privilege domains.
Then others outside manila can review and provide feedback on this
information, ask questions, and help refine it. Then it eventually
becomes a living document of the project's design and risk profile
both to benefit the users/community at large and also to make it
easier for the VMT and the manila-coresec team to classify reports
of potential vulnerabilities.

If anything, I suspect in-tree drivers for proprietary backends are
going to be harder to map to requirement #6 (ability for the VMT to
test fixes), but the way it's been handled in other already
grandfathered-in projects is that the core security review team for
that project subscribes a subject matter expert to the bug and they
use their knowledge/access to vet proposed fixes under embargo and
then we take their word for it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Matthew Booth
I've just (re-)written an email filter which splits gerrit emails into
those from CI and those from real people. In general I'm almost never
interested in bot mail, but it comprises about 80% of gerrit email.

Having looked carefully through some gerrit emails from real people and
CIs, I unfortunately can't find any features which distinguish the CIs.
Consequently my filter is just a big enumeration of currently known CIs. This
is kinda ugly, and will obviously get out of date.

Is there anything I missed? Or is it possible to unsubscribe from gerrit
mail from bots? Or is there any other good way to achieve what I'm looking
for which doesn't involve maintaining my own bot list? If not, would it be
feasible to add something?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jordan Pittier
On Wed, Aug 31, 2016 at 3:44 PM, Matthew Booth  wrote:
>
> Is there anything I missed? Or is it possible to unsubscribe from gerrit
> mail from bots? Or is there any other good way to achieve what I'm looking
> for which doesn't involve maintaining my own bot list? If not, would it be
> feasible to add something?
>

Most (all?) messages from CI have the lines:

"Patch Set X:
Build (succeeded|failed)."

Not super robust, but that's a start.
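
So a filter can key on that shape; a minimal sketch in Python (the
pattern encodes the observed convention above, not a guaranteed format):

    import re

    CI_COMMENT = re.compile(r'Patch Set \d+:\s*\n+\s*Build (succeeded|failed)')

    def is_ci_mail(body):
        # Heuristic: human review comments rarely match this exact shape.
        return bool(CI_COMMENT.search(body))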

-- 
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] The State of the NFS Driver ...

2016-08-31 Thread Sean McGinnis
Thanks for the write up Jay. This is useful.

Added [Cinder] tag to subject line...

On Tue, Aug 30, 2016 at 10:50:38AM -0500, Jay S. Bryant wrote:
> All,
> 
> I wanted to follow up on the e-mail thread [1] on Cloning support in
> the NFS driver.  The purpose of this e-mail is to provide the plan
> for the NFS driver going forward as I see it.
> 
> First, I am aware that the driver has gone quite some time without
> care and feeding.  For a number of reasons, the Public Cloud team
> within IBM is currently dependent upon the NFS driver working
> properly for the cloud environment we are building.  Given our
> current dependence on the driver we are planning on picking up the
> driver and maintaining it.
> 
> The first step in this process was getting the existing patch that
> adds snapshot support for NFS [2] rebased.  I did this work a couple
> of weeks ago and also got all the unit tests working for the unit
> test environment on the master branch.  I now see that it is in
> merge conflict again, I plan to continue to keep the patch
> up-to-date.
> 
> Erlon has been investigating issues with attaching snapshots.  It
> appears that this may be related to AppArmor running on the system
> where the VM is running and attachment is being attempted.  I am
> hoping to look into the other questions posed in the patch review in
> the next week or two.
> 
> The next step is to create a dependent patch, upon the snapshot
> patch, to implement cloning.  I am planning to also undertake this
> work.  I am assuming that getting the cloning support in place
> shouldn't be too difficult once snapshots are working as it will be
> just a matter of using the support from the remotefs driver.
> 
> The last piece of work we have in flight is working on adding QoS
> support to the NFS driver.  We have the following spec proposed to
> get that work started: [3]
> 
> So, we are in the process of bringing the NFS driver up to good
> standing.  During this process we would greatly appreciate reviews
> and input from those of you who have previously worked on the driver
> in order to expedite integration of the necessary changes. I feel it
> is in the best interest of the community to get the driver updated
> and supported given that it is the 4th most used driver according to
> our user survey.  I think it would not look good to our users if it
> were to suddenly be removed.
> 
> Thanks to all of your for your support in this effort!
> 
> Jay
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/102193.html
> 
> [2] https://review.openstack.org/#/c/147186/
> 
> [3] https://review.openstack.org/361456
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is this a bug in metadata proxy...

2016-08-31 Thread ZZelle
Hi,

Are you sure metadata_proxy_user==neutron?

neutron-metadata-proxy must be able to connect to the metadata-agent socket
and watch its log files, and the neutron user should be able to do both with
the usual file permissions.

Otherwise the metadata proxy is generally no longer able to:
- watch the log [1], so you should set metadata_proxy_watch_log=False
- connect to the metadata agent because of socket permissions, so you
should set the metadata_proxy_socket_mode option [2] in order to let the
metadata agent set the correct perms on the metadata socket.

If you provide metadata_proxy_user/group in both the l3/dhcp-agent and
metadata-agent configs, then neutron should be able to deduce suitable
metadata_proxy_watch_log and metadata_proxy_socket_mode values, as
sketched below.



[1] https://review.openstack.org/#/c/161494/
[2] https://review.openstack.org/#/c/165115/

Cédric/ZZelle

On Wed, Aug 31, 2016 at 2:16 PM, Paul Michali  wrote:

> Hi,
>
> I had seen something and was not sure if this was a subtle bug or not.
>
> I have a Liberty-based OpenStack setup. The account that is setting up
> processes was user=neutron, group=neutron; however, the metadata_agent.ini
> config file was set up for a different group. So there were
> metadata_proxy_user=neutron and metadata_proxy_group=foo config settings.
>
> This ini file was used by the metadata agent process, but it was not
> included in the DHCP agent process (not sure if I should have included the
> metadata_agent.ini in the startup of DHCP or should have added these two
> metadata proxy settings to neutron.conf, so that they were available to
> DHCP).
>
> In any case, here is what I saw happen...
>
> I created a subnet (not using a router in this setup). It looks like DHCP
> starts up the metadata proxy daemon, and the DHCP configuration is
> used, which does NOT include the metadata_proxy_user/group, so the current
> user's uid and gid (neutron/neutron) are used for the
> metadata_proxy_user/group settings.
>
> The proxy calls drop_privileges(), and because the group is different,
> the daemon can no longer access the log file. An OSError occurs
> with permission denied on the log file for this process, and the process
> exits without any indication.
>
> When I then try to use metadata services it fails (obviously). Looking, we
> see that the metadata service is running (but the proxy is not, and I don't
> see a way for an end user to check that - is there a way?).
>
> Looking in the proxy log, the initial startup messages are seen, showing
> all the configuration settings, and then there is nothing more. No
> indication that it is lowering privileges to run under some other
> user/group, that there was a fatal error, or that it is working and ready
> to process requests. Nothing more appears in the log, as it was working and
> there were no metadata proxy requests occurring.
>
> I was only able to figure it out, by first checking to see if the proxy
> was running, and then manually trying to start the proxy, using the command
> line in the log, under a debugger, to find out that there was a permission
> denied error.
>
> So, it is likely a misconfiguration error on the user's part, but it was
> really hard to figure that out.
>
> Should/could we somehow indicate if there is an error lowering privs?
>
> Is there a (user) way to tell if proxy is running?
>
> Is there some documentation indicating that the proxy user/group settings
> need to be available for both the metadata agent and for other agents that
> may spawn the proxy (DHCP, L3)?
>
> Regards,
>
> PCM
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][devstack][all] Deprecation of external_network_bridge and CI impact

2016-08-31 Thread Sean M. Collins
Hi,

This probably should have been advertised more widely, before the merge
happened. I would like to apologize for an after-the-fact e-mail
explaining what may be going on for some jobs that are broken.

I recently merged a change to DevStack -
https://review.openstack.org/346282

It's a little cryptic since it's a revert-of-a-revert. However, if you
take a look at the original commit[1], you can get an idea of what is
going on.

Basically, we were relying on a Neutron setting that has been deprecated
since Liberty [2]. Post 346282, we no longer use that deprecated setting
and instead create networks the "correct" way.

Some jobs that were relying on provider attributes for their networking
may be seeing errors similar to what happened to Shade [3]. Basically,
Shade was trying to create a public network using the same provider
attributes as the network that, post 346282, we now create during a
DevStack run [4].
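
For anyone unblocking a job, the replacement is to create the network
with explicit provider attributes, roughly like this (a sketch; the
network type and physnet values are assumptions that depend on local
config):

    % neutron net-create public --router:external \
          --provider:network_type flat --provider:physical_network public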

I know jroll is currently also trying to figure out how to unblock
Ironic's CI, since they too were using the provider networking API
extension. I imagine there may be some other jobs that are broken
(networking-generic-switch seems to be very sensitive), so please take a
look at the links and hopefully that will help.

[1]: https://review.openstack.org/#/c/343072/

[2]: https://bugs.launchpad.net/neutron/+bug/1511578

[3]: 
http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/console.html#_2016-08-30_18_56_58_838512

[4]: 
http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/logs/devstacklog.txt.gz#_2016-08-30_18_46_38_671
 

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Mike Bayer

We need to decide how to handle this:

https://review.openstack.org/#/c/362991/


Basically, PyMySQL normally raises an error message like this:

(pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: 
a foreign key constraint fails (`vaceciqnzs`.`resource_entity`, 
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` 
(`id`))')


for some reason, PyMySQL 0.7.7 is now raising it like this:

(pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child 
row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`, 
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` 
(`id`))')


This impacts oslo.db's "exception re-handling" functionality, which tries 
to classify this exception as a DBNonExistentConstraint exception.   It 
also breaks oslo.db's test suite locally, but in a downstream project 
would only impact its ability to intercept this exception appropriately.


now that "23000" there looks like a bug.  The above gerrit proposes to 
work around it.  However, if we didn't push out the above gerrit, we'd 
instead have to change requirements:


https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z

It seems like at least one or the other would be needed for Newton.
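
For concreteness, the workaround amounts to tolerating an optional
leading SQLSTATE when handling the server message; a minimal sketch of
the idea (not the actual oslo.db patch):

    def normalize_mysql_message(msg):
        # PyMySQL 0.7.7 prepends the 5-char SQLSTATE (e.g. '23000') to
        # the server message; strip it so existing matching keeps working.
        if len(msg) > 5 and msg[:5].isdigit():
            return msg[5:]
        return msg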





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Ihar Hrachyshka

Mike Bayer  wrote:


We need to decide how to handle this:

https://review.openstack.org/#/c/362991/


Basically, PyMySQL normally raises an error message like this:

(pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a  
foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT  
`foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')


for some reason, PyMySQL 0.7.7 is now raising it like this:

(pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child  
row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,  
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`  
(`id`))')


this impacts oslo.db's "exception re-handling" functionality which tries  
to classify this exception as a DBNonExistentConstraint exception.   It  
also breaks oslo.db's test suite locally, but in a downstream project  
would only impact its ability to intercept this exception appropriately.


now that "23000" there looks like a bug.  The above gerrit proposes to  
work around it.  However, if we didn't push out the above gerrit, we'd  
instead have to change requirements:


https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z

It seems like at least one or the other would be needed for Newton.


Unless the bug is fixed in the next pymysql release, it's not either/or:  
both will be needed, plus a minimal oslo.db version bump.


I suggest we:
- block 0.7.7 to unblock upper-constraints updates;
- land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all  
stable branches;

- release new oslo.db releases for L-N;
- at least for N, bump minimal version of the library in  
global-requirements.txt;

- sync the bump to all consuming projects;
- later, maybe unblock 0.7.7.

In the meantime, interested parties may work with pymysql folks to get the  
issue fixed. It may take a while, so I would not make this step part of our  
short term plan.


Now, I understand that this does not really sound ideal, but I assume we  
are not in requirements freeze yet (the deadline for that is tomorrow), and  
this plan will solve the issue for users of all versions of pymysql.
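
For illustration, blocking the release is a one-line exclusion in the
requirements repo (the version numbers here are assumptions):

    PyMySQL>=0.6.2,!=0.7.7    # global-requirements.txt
    PyMySQL===0.7.6           # upper-constraints.txt pin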


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][ironic] bifrost 2.0.0 release (newton)

2016-08-31 Thread no-reply
We are chuffed to announce the release of:

bifrost 2.0.0: Deployment of physical machines using OpenStack Ironic
and Ansible

This release is part of the newton release series.

For more details, please see below.

2.0.0
^^^^^


New Features
************

* Allows creating VMs with custom names instead of using testvm or
  NODE_BASE and sequential prefixes. This can be achieved by passing
  the TEST_VM_NODE_NAMES env var.

* The ironic install role has been split into 3 phases. "install"
  phase installs all ironic packages and dependencies. "bootstrap"
  phase generates configs and initializes the ironic db. "start" phase
  starts all ironic services and dependencies. Each phase is run by
  default and can be skipped by defining skip_package_install,
  skip_bootstrap and skip_start respectively.

* Add support for kvm acceleration for the VMs created by bifrost-
  create-vm-nodes. The default domain type for the created VMs is qemu,
  which uses tcg acceleration. In order to use kvm acceleration, users
  need to set VM_DOMAIN_TYPE to kvm.

* A new playbook was added to redeploy nodes. The playbook
  transitions each node's provision state to 'available', waiting for
  the nodes to reach that state.  Next, the playbook deploys the
  nodes, waiting for the nodes to reach provision state 'active'.  The
  playbook is redeploy-dynamic.yaml in the playbooks directory.
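
As a usage note for the VM-creation features above, both knobs are plain
environment variables, e.g. (hypothetical values; the list format for
node names is an assumption):

    % export TEST_VM_NODE_NAMES="controller0 compute0"
    % export VM_DOMAIN_TYPE=kvm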


Upgrade Notes
*************

* A new test playbook, test-bifrost.yaml, has been added. This
  playbook merges the functionality of the existing test-bifrost-
  dynamic.yaml and test-bifrost-dhcp.yaml playbooks.

* Bifrost has changed to using TinyIPA as the default IPA image for
  testing. TinyIPA has a smaller footprint for downloads and memory
  utilization. Users can continue to utilize CoreOS or diskimage-
  builder based IPA images, however this was done to improve testing
  performance and reliability. If the pre-existing IPA image is
  removed, bifrost will automatically re-download the file upon being
  updated in an installation process.  Otherwise, the pre-existing IPA
  image will be utilized.


Deprecation Notes
*****************

* test-bifrost-dynamic.yaml and test-bifrost-dhcp.yaml have been
  superseded by test-bifrost.yaml and will be removed in the Ocata
  release.


Other Notes
***********

* A new install_dib variable has been introduced to the ironic
  install role to control installation of disk image builder and dib-
  utils. To maintain the previous behavior, install_dib will default
  to the value of create_image_via_dib.

Changes in bifrost 1.0.0..2.0.0
-------------------------------

8e99369 Update IPA info in troubleshooting.rst
92a2d28 Install the net-tools package in scripts/env-setup.sh
101551a Split --syntax-check and --list-tasks steps in test-bifrost.sh
28797b0 Specify node_network_info is a dict
2a6a7f7 Remove 'auth' fact initialization from bifrost-deploy-nodes-dynamic
a75b1a8 Update release notes for Newton
3775863 Use upper-constraints for all tox targets
356954b Fix /etc/hosts before starting the rabbitmq server
2b6cf52 Restore stable-2.0 as the default Ansible branch
4d59ba5 Add SUSE support in scripts/env-setup.sh
939c244 Allow to define vms with independent names
5506f5e Only set hostname on 127.0.0.1 if not present in /etc/hosts
6fa28b1 Introduce support for kvm acceleration
1141182 Fix release notes formatting issue
1d19d28 Change Vagrant VM to mirror memory/cpu in CI
292f3d6 Fix typo when querying the python version in scripts/env-setup.sh
8c947c5 Set OS_AUTH_TOKEN to dummy string, instead of empty space
84414b3 Initialize 'auth' to empty dict on bifrost_configdrives_dynamic
2b9ccfc Remove auth line to fallback on default(omit) behaviour
e3f8f86 Fix DHCP test scenario
d9dbbd3 Change Bifrost to TinyIPA as the default
dc732e6 Updated from global requirements
97b3998 Fix some spelling mistakes
353a1c0 Remove discover from test-requirements
7b109a6 Updated from global requirements
82d7650 Change none to noop network interface for bifrost
ba9e746 Disable flat network driver
2e0ff63 Unify test playbooks
ddb3965 Fix testing script permission settings for logs
32ea65d Increase timeout for downloading IPA ramdisk and kernel
155bb48 Updated from global requirements
ab0755b Correct name for test
022d05e split ironic install role into install,bootstrap,start phases
b8a97b8 Add redeploy-dynamic playbook
e7fc06a Make ansible installation directory configurable
3f21388 Updated from global requirements
1dc6d85 Make boolean usage consistent across playbooks
9d8e8de Unify testing scripts
10f3ba5 Make booleans in templates explicit
241180e Remove invalid directory_mode from ironic install
e1d20a6 Document that ssh_public_key_path must be set
995b779 Fix Bug #1583539 - rpm part
b208b84 introduce install_dib varible
7585b5c Install libssl-dev and libffi-dev
9b0106b Use constraints for all the things
d87b5cf Updated from global requirements
4e41737 Add pycrypto to requirements
3308b31 Add Ubuntu 16.04 defaults for ironic 

Re: [openstack-dev] [Horizon] [FFE] Horizon Profiler feature (and openstack/osprofiler integration)

2016-08-31 Thread David Lyle
As a developer feature, I would vote for merging in early Ocata rather
than as a FFE. Since the risk falls on users and operators, who won't
generally benefit from the feature, I don't see the upside outweighing
the potential risk.  It's not a localized change either.

That said, I think the profiler work will be extremely valuable in
Ocata and beyond. Thanks for your continued efforts on bringing it to
life.

David

On Wed, Aug 31, 2016 at 6:14 AM, Timur Sufiev  wrote:
> Hello, folks!
>
> I'd like to ask for a feature-freeze exception for the Horizon Profiler
> feature [1], which was demoed long ago (during the Portland midcycle, Feb
> 2016) and is finally ready. The actual request applies to the 3 patches [2]
> that provide the bulk of the Profiler functionality.
>
> It is quite a useful feature that is aimed mostly at developers; thus it is
> constrained within the Developer dashboard and disabled by default, so it
> shouldn't have any impact on user-facing Horizon capabilities.
>
> [1]
> https://blueprints.launchpad.net/horizon/+spec/openstack-profiler-at-developer-dashboard
> [2]
> https://review.openstack.org/#/q/topic:bp/openstack-profiler-at-developer-dashboard+status:open
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] The State of the NFS Driver ...

2016-08-31 Thread Jay S. Bryant

On 08/30/2016 08:50 PM, Matt Riedemann wrote:

On 8/30/2016 10:50 AM, Jay S. Bryant wrote:

All,

I wanted to follow up on the e-mail thread [1] on Cloning support in the
NFS driver.  The purpose of this e-mail is to provide the plan for the
NFS driver going forward as I see it.

First, I am aware that the driver has gone quite some time without care
and feeding.  For a number of reasons, the Public Cloud team within IBM
is currently dependent upon the NFS driver working properly for the
cloud environment we are building.  Given our current dependence on the
driver we are planning on picking up the driver and maintaining it.

The first step in this process was getting the existing patch that adds
snapshot support for NFS [2] rebased.  I did this work a couple of weeks
ago and also got all the unit tests working for the unit test
environment on the master branch.  I now see that it is in merge
conflict again, I plan to continue to keep the patch up-to-date.

Erlon has been investigating issues with attaching snapshots. It
appears that this may be related to AppArmor running on the system where
the VM is running and attachment is being attempted.  I am hoping to
look into the other questions posed in the patch review in the next week
or two.

The next step is to create a dependent patch, upon the snapshot patch,
to implement cloning.  I am planning to also undertake this work.  I am
assuming that getting the cloning support in place shouldn't be too
difficult once snapshots are working as it will be just a matter of
using the support from the remotefs driver.

The last piece of work we have in flight is working on adding QoS
support to the NFS driver.  We have the following spec proposed to get
that work started: [3]

So, we are in the process of bringing the NFS driver up to good
standing.  During this process we would greatly appreciate reviews and
input from those of you who have previously worked on the driver in
order to expedite integration of the necessary changes. I feel it is in
the best interest of the community to get the driver updated and
supported given that it is the 4th most used driver according to our
user survey.  I think it would not look good to our users if it were to
suddenly be removed.

Thanks to all of your for your support in this effort!

Jay

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102193.html 



[2] https://review.openstack.org/#/c/147186/

[3] https://review.openstack.org/361456





IMO priority #1 is getting the NFS job passing consistently; who is 
working on that? Last I checked it was failing a bunch because it was 
running snapshot and clone tests, which obviously don't work since 
that support isn't implemented in the driver. I think configuring 
tempest in the devstack-plugin-nfs repo is fairly straightforward; 
someone just needs to do it.
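
Concretely, that tempest configuration amounts to turning off the
unimplemented features, e.g. (a sketch; the options are from tempest's
volume-feature-enabled group, and the 'clone' flag is an assumption):

    [volume-feature-enabled]
    snapshot = False
    clone = False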


But at least that gets you closer to a clean NFS job run, which gets it 
out of the experimental queue (possibly) and in as a non-voting job in 
Cinder, so you can see if you're regressing anything (or if anything 
else regresses it once you have clean CI runs).


My 2 cents.


Matt,

This is good feedback.  I will put a story on our backlog for it 
and try to get that working ASAP.


Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Mike Bayer



On 08/31/2016 10:48 AM, Ihar Hrachyshka wrote:


Unless we fix the bug in next pymysql, it’s not either/or but both will
be needed, and also minimal oslo.db version bump.


upstream issue:

https://github.com/PyMySQL/PyMySQL/issues/507

PyMySQL tends to be very responsive to issues (plus I think I'm a 
committer anyway, so I could even commit a fix myself, I suppose).



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Julien Danjou
On Wed, Aug 31 2016, Ihar Hrachyshka wrote:

> I suggest we:
> - block 0.7.7 to unblock upper-constraints updates;
> - land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all 
> stable
> branches;
> - release new oslo.db releases for L-N;
> - at least for N, bump minimal version of the library in
> global-requirements.txt;
> - sync the bump to all consuming projects;
> - later, maybe unblock 0.7.7.
>
> In the meantime, interested parties may work with pymysql folks to get the
> issue fixed. It may take a while, so I would not make this step part of our
> short term plan.
>
> Now, I understand that this does not really sound ideal, but I assume we are
> not in requirements freeze yet (the deadline for that is tomorrow), and this
> plan will solve the issue for users of all versions of pymysql.

This is also my plan and proposal. At least the Gnocchi gate is broken
now until oslo.db is fixed, so we're really eager to see things moving… :)
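
For the blocking step, the exclusion would presumably look something
like this in global-requirements.txt (the lower bound shown is only
illustrative; keep whatever is there today):

    PyMySQL!=0.7.7,>=0.6.2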

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Joshua Harlow

Duncan Thomas wrote:

On 31 August 2016 at 11:57, Bogdan Dobrelya <bdobre...@mirantis.com> wrote:

I agree that RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or may be it just have to be abandoned to be replaced by a more
cloud friendly pattern.



Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the
issues and the design goals of the replacement, we're unlikely to make
progress on a replacement - even if somebody takes the heroic approach
and writes a full replacement themselves, the odds of getting community
buy-in are very low.


+2 to that, there are a bunch of technologies that could replace
rabbit+RPC, e.g. gRPC, and then there is HTTP/2 and thrift and ... so a
writeup IMHO would help at least clear the waters a little bit, and
explain the blockers of the current RPC design pattern (which is
multidimensional, because most people are probably thinking RPC == rabbit
when it's actually more than that now, i.e. zeromq and amqp1.0 and ...)
and try to centralize on a better replacement.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Doug Hellmann
Excerpts from Ihar Hrachyshka's message of 2016-08-31 16:48:09 +0200:
> Mike Bayer  wrote:
> 
> > We need to decide how to handle this:
> >
> > https://review.openstack.org/#/c/362991/
> >
> >
> > Basically, PyMySQL normally raises an error message like this:
> >
> > (pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a  
> > foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT  
> > `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')
> >
> > for some reason, PyMySQL 0.7.7 is now raising it like this:
> >
> > (pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child  
> > row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,  
> > CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`  
> > (`id`))')
> >
> > this impacts oslo.db's "exception re-handling" functionality which tries  
> > to classify this exception as a DBNonExistentConstraint exception.   It  
> > also breaks oslo.db's test suite locally, but in a downstream project  
> > would only impact its ability to intercept this exception appropriately.
> >
> > now that "23000" there looks like a bug.  The above gerrit proposes to  
> > work around it.  However, if we didn't push out the above gerrit, we'd  
> > instead have to change requirements:
> >
> > https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z
> >
> > It seems like at least one or the other would be needed for Newton.
> 
> Unless we fix the bug in next pymysql, it’s not either/or but both will be  
> needed, and also minimal oslo.db version bump.
> 
> I suggest we:
> - block 0.7.7 to unblock upper-constraints updates;
> - land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all  
> stable branches;
> - release new oslo.db releases for L-N;
> - at least for N, bump minimal version of the library in  
> global-requirements.txt;
> - sync the bump to all consuming projects;
> - later, maybe unblock 0.7.7.
> 
> In the meantime, interested parties may work with pymysql folks to get the  
> issue fixed. It may take a while, so I would not make this step part of our  
> short term plan.
> 
> Now, I understand that this does not really sound ideal, but I assume we  
> are not in requirements freeze yet (the deadline for that is tomorrow), and  
> this plan will solve the issue for users of all versions of pymysql.

Even if we were frozen, this seems like the sort of thing we'd want to
deal with through a patch release.

I've already created the stable/newton branch for oslo.db, so we'll need
to backport the fix to have a 4.13.1 release.

Doug

> 
> Ihar
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][FFE]Support a param to specify subnet or fixed IP when creating port

2016-08-31 Thread David Lyle
I agree this seems reasonable and a good addition for users with minimal risk.

David

On Tue, Aug 30, 2016 at 10:29 AM, Rob Cresswell (rcresswe)
 wrote:
> I’m happy to allow this personally, but wanted to get others' input and give
> people the chance to object.
>
> My reasoning for allowing this:
> - It’s high level, doesn’t affect any base horizon lib features.
> - It is mature code, has multiple patch sets and a +2
>
> I’ll give it a few days to allow others a chance to speak up, then we can
> move forward.
>
> Rob
>
>> On 29 Aug 2016, at 07:17, Kenji Ishii  wrote:
>>
>> Hi, horizoners
>>
>> I'd like to request a feature freeze exception for the feature.
>> (This is a bug ticket, but the content described in the ticket is a new
>> feature.)
>> https://bugs.launchpad.net/horizon/+bug/1588663
>>
>> This is implemented by the following patch.
>> https://review.openstack.org/#/c/325104/
>>
>> It is useful to be able to create a port using the subnet or IP address
>> which a user wants to use.
>> And this has already been reviewed by many reviewers, so I think the risk
>> in this patch is very low.
>>
>> ---
>> Best regards,
>> Kenji Ishii
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 16:13:16 +0200 (+0200), Jordan Pittier wrote:
> Most(all?) messages from CI have the lines:
> 
> "Patch Set X:
> Build (succeeded|failed)."
> 
> Not super robust, but that's a start.

Also, we have naming conventions for third-party CI accounts that
suggest they should end in " CI", so you could match on that.
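
As a rough sketch (a heuristic, not an official interface), a local
mail filter could combine both signals:

    import re

    BUILD_RE = re.compile(r"Patch Set \d+:\s+Build (succeeded|failed)")
    CI_NAME_RE = re.compile(r" CI\b")

    def looks_like_ci(from_name, body):
        # Match either the conventional account-name suffix or the
        # stock build-result comment body.
        return bool(CI_NAME_RE.search(from_name) or BUILD_RE.search(body))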

Further, we have configuration in Gerrit to not send E-mail for
comments from accounts in
https://review.openstack.org/#/admin/groups/270,members so if you
are seeing E-mail from Gerrit for third-party CI system comments, see
whether they're in that group already (in which case let the Infra
team know, because we might have a bug to look into) or ask one of
the members of
https://review.openstack.org/#/admin/groups/440,members to add the
stragglers (or even volunteer to become part of that coordinators
group and help maintain the list).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]How to address TCs concerns in Tricircle big-tent application

2016-08-31 Thread Monty Taylor
On 08/31/2016 02:16 AM, joehuang wrote:
> Hello, team,
> 
> During last weekly meeting, we discussed how to address TCs concerns in
> Tricircle big-tent application. After the weekly meeting, the proposal
> was co-prepared by our
> contributors: 
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E
> 
> The more doable way is to divide Tricircle into two independent and
> decoupled projects, only one of the projects which deal with networking
> automation will try to become an big-tent project, And Nova/Cinder
> API-GW will be removed from the scope of big-tent project application,
> and put them into another project:
> 
> *TricircleNetworking:* Dedicated for cross Neutron networking automation
> in multi-region OpenStack deployment, run without or with
> TricircleGateway. Try to become big-tent project in the current
> application of https://review.openstack.org/#/c/338796/.

Great idea.

> *TricircleGateway:* Dedicated to provide API gateway for those who need
> single Nova/Cinder API endpoint in multi-region OpenStack deployment,
> run without or with TricircleNetworking. Live as non-big-tent,
> non-offical-openstack project, just like Tricircle toady’s status. And
> not pursue big-tent only if the consensus can be achieved in OpenStack
> community, including Arch WG and TCs, then decide how to get it on board
> in OpenStack. A new repository is needed to be applied for this project.
> 
> 
> And consider to remove some overlapping implementation in Nova/Cinder
> API-GW for global objects like flavor, volume type, we can configure one
> region as master region, all global objects like flavor, volume type,
> server group, etc will be managed in the master Nova/Cinder service. In
> Nova API-GW/Cinder API-GW, all requests for these global objects will be
> forwarded to the master Nova/Cinder, then to get rid of any API
> overlapping-implementation.
> 
> More information, you can refer to the proposal draft
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E,
> 
> your thoughts are welcome, and let's have more discussion in this weekly
> meeting.

I think this is a great approach Joe.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Matthew Booth
On Wed, Aug 31, 2016 at 5:31 PM, Jeremy Stanley  wrote:

> On 2016-08-31 16:13:16 +0200 (+0200), Jordan Pittier wrote:
> > Most(all?) messages from CI have the lines:
> >
> > "Patch Set X:
> > Build (succeeded|failed)."
> >
> > Not super robust, but that's a start.
>
> Also we have naming conventions for third-party CI accounts that
> suggest they should end in " CI" so you could match on that.
>

Yeah, all except 'Jenkins' :) This makes me a bit nervous because all mail
still comes from 'rev...@openstack.org', with the name component set to the
name of the CI. I was nervous of false positives, so I chose to name them
all in full.


>
> Further, we have configuration in Gerrit to not send E-mail for
> comments from accounts in
> https://review.openstack.org/#/admin/groups/270,members so if you
> are seeing E-mail from Gerrit for third-party CI system comments see
> whether they're in that group already (in which case let the Infra
> team know because we might have a bug to look into) or ask one of
> the members of
> https://review.openstack.org/#/admin/groups/440,members to add the
> stragglers (or even volunteer to become part of that coordinators
> group and help maintain the list).
>

This sounds interesting. All the CIs I get gerrit spam from are on that
list except Jenkins. Do I have to enable something specifically to exclude
them? I mashed all the links I could find seemingly related to gerrit
settings and I couldn't find anything which looked promising.

Thanks again,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [FFE] Horizon Profiler feature (and openstack/osprofiler integration)

2016-08-31 Thread Timur Sufiev
David,

I understand your concerns. Early Ocata instead of late Newton makes sense
to me. Thank you for the quick response.

On Wed, Aug 31, 2016 at 5:54 PM David Lyle  wrote:

> As a developer feature, I would vote for merging in early Ocata rather
> than as a FFE. Since the potential risk is to users and operators and
> they won't generally benefit from the feature, I don't see the upside
> outweighing the potential risk.  It's not a localized change either.
>
> That said, I think the profiler work will be extremely valuable in
> Ocata and beyond. Thanks for your continued efforts on bringing it to
> life.
>
> David
>
> On Wed, Aug 31, 2016 at 6:14 AM, Timur Sufiev 
> wrote:
> > Hello, folks!
> >
> > I'd like to ask for a feature-freeze exception for the Horizon Profiler
> > feature [1], which was demoed long ago (during the Portland midcycle, Feb
> > 2016) and is finally ready. The actual request applies to the 3 patches [2]
> > that provide the bulk of the Profiler functionality.
> >
> > It is quite a useful feature, aimed mostly at developers; thus it is
> > constrained within the Developer dashboard and disabled by default, so it
> > shouldn't have any impact on user-facing Horizon capabilities.
> >
> > [1]
> >
> https://blueprints.launchpad.net/horizon/+spec/openstack-profiler-at-developer-dashboard
> > [2]
> >
> https://review.openstack.org/#/q/topic:bp/openstack-profiler-at-developer-dashboard+status:open
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Davanum Srinivas
On Wed, Aug 31, 2016 at 12:09 PM, Doug Hellmann  wrote:
> Excerpts from Ihar Hrachyshka's message of 2016-08-31 16:48:09 +0200:
>> Mike Bayer  wrote:
>>
>> > We need to decide how to handle this:
>> >
>> > https://review.openstack.org/#/c/362991/
>> >
>> >
>> > Basically, PyMySQL normally raises an error message like this:
>> >
>> > (pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a
>> > foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT
>> > `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')
>> >
>> > for some reason, PyMySQL 0.7.7 is now raising it like this:
>> >
>> > (pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child
>> > row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,
>> > CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`
>> > (`id`))')
>> >
>> > this impacts oslo.db's "exception re-handling" functionality which tries
>> > to classify this exception as a DBNonExistentConstraint exception.   It
>> > also breaks oslo.db's test suite locally, but in a downstream project
>> > would only impact its ability to intercept this exception appropriately.
>> >
>> > now that "23000" there looks like a bug.  The above gerrit proposes to
>> > work around it.  However, if we didn't push out the above gerrit, we'd
>> > instead have to change requirements:
>> >
>> > https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z
>> >
>> > It seems like at least one or the other would be needed for Newton.
>>
>> Unless we fix the bug in next pymysql, it’s not either/or but both will be
>> needed, and also minimal oslo.db version bump.
>>
>> I suggest we:
>> - block 0.7.7 to unblock upper-constraints updates;
>> - land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all
>> stable branches;
>> - release new oslo.db releases for L-N;
>> - at least for N, bump minimal version of the library in
>> global-requirements.txt;
>> - sync the bump to all consuming projects;
>> - later, maybe unblock 0.7.7.
>>
>> In the meantime, interested parties may work with pymysql folks to get the
>> issue fixed. It may take a while, so I would not make this step part of our
>> short term plan.
>>
>> Now, I understand that this does not really sound ideal, but I assume we
>> are not in requirements freeze yet (the deadline for that is tomorrow), and
>> this plan will solve the issue for users of all versions of pymysql.
>
> Even if we were frozen, this seems like the sort of thing we'd want to
> deal with through a patch release.
>
> I've already create the stable/newton branch for oslo.db, so we'll need
> to backport the fix to have a 4.13.1 release.

+1 to 4.13.1

Thanks,
Dims

>
> Doug
>
>>
>> Ihar
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> On 31 August 2016 at 11:57, Bogdan Dobrelya  wrote:
> 
> > I agree that RPC design pattern, as it is implemented now, is a major
> > blocker for OpenStack in general. It requires a major redesign,
> > including handling of corner cases, on both sides, *especially* RPC call
> > clients. Or may be it just have to be abandoned to be replaced by a more
> > cloud friendly pattern.
> >
> 
> 
> Is there a writeup anywhere on what these issues are? I've heard this
> sentiment expressed multiple times now, but without a writeup of the issues
> and the design goals of the replacement, we're unlikely to make progress on
> a replacement - even if somebody takes the heroic approach and writes a
> full replacement themselves, the odds of getting community buy-in are very
> low.

Right, this is exactly the sort of thing I'd like to gather a group of
design-minded folks around in an Architecture WG. Oslo is busy with the
implementations we have now, but I'm sure many oslo contributors would
like to come up for air and talk about the design issues, and come up
with a current design, and some revisions to it, or a whole new one,
that can be used to put these summit hallway rumors to rest.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Chris Dent



On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.


There was another hangout today where we caught up on where we are.
Some notes were added to the etherpad
https://etherpad.openstack.org/p/placement-next

There is code either merged or pending merge that allows the
resource tracker to ensure that resource providers exist and have
the correct inventory.

The major concern and blocker at this point is setting and deleting
allocations, for which the assistance of Jay is required. Some
details follow with a summary of Jay's todos at the bottom.

There are two patches, starting at
https://review.openstack.org/#/c/363209/

The first is a hack to get the object side handling for
AllocationList.create_all and delete_all. As noted in the comments
there we're not sure about the atomicity in create_all and need Jay
to determine if what's there can be made to work, or as suggested we
need a mondo SQL thing to get it right. If the latter, we need Jay
to write it :)

I'm going to carry on with those patches now and try to add some
generation handling back in to protect against inventory changing
out from under us while making allocations, but I'm not confident of
getting it to anything more than possibly adequate, and great would
be better.
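
For anyone unfamiliar, the generation handling is plain optimistic
concurrency. A hedged, SQLAlchemy-flavoured sketch of the idea (not
the code under review; the names here are illustrative):

    # Bump the provider generation only if nobody else has. Zero rows
    # updated means inventory changed under us, so re-read and retry
    # the allocation.
    result = conn.execute(
        rp_table.update().
        where(rp_table.c.id == rp_id).
        where(rp_table.c.generation == expected_gen).
        values(generation=expected_gen + 1))
    if result.rowcount == 0:
        raise Exception('concurrent provider update, retry')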

During that I'm also going to try to adjust things so that we can
update existing allocations, not just create them, as we've
determined that's required. set_all, not create_all, basically.

The other missing piece is the client side of setting and deleting
allocations, from the resource tracker. We'd like Jay to start this
too or if we're all lucky maybe it is started already?
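
For reference, my reading of the in-flight placement API (treat this
as a sketch, it may well change) is that setting allocations amounts
to something like:

    PUT /allocations/{consumer_uuid}
    {
        "allocations": [
            {
                "resource_provider": {"uuid": "<rp_uuid>"},
                "resources": {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 20}
            }
        ]
    }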

And finally there's a question we didn't know how to answer: What
will the process be for healing instances that already exist before
the placement service is started, and thus have no allocations?

So to summarize Jay's to do list (please and thank you very much):

* Look at https://review.openstack.org/#/c/363209/ and decide if it
  is good enough to get rolling or needs to be completely altered.
* If the latter, alter it.
* Write the allocation client.
* Consult on healing instance allocations.

Meanwhile several people are involved in related clean up patches in
both nova and devstack to smooth off rough edges while we pushed a
lot of code.

Thanks to everyone today for pushing so hard. We look pretty close to
getting the must-haves happening.

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread lebre . adrien
As promised, I just wrote a first draft at 
https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow, in particular pointers towards
articles/ETSI specifications/use cases.

Comments/remarks welcome. 
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is
probably a bit too ambitious for one summit, because point 3, ''Gaps in
OpenStack'', looks to me like a major action that will probably last more than
just one summit, but I think you gave the right directions!

- Mail original -
> De: "joehuang" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Mercredi 31 Août 2016 08:48:01
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> Hello, Joshua,
> 
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even be at the thousands level.
> 
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenant's network; for example, to support a
> database cluster distributed across several clouds, the
> inter-connection for data replication is needed.
> 
> There are different thoughts, proposals or projects to tackle the
> challenge; architecture-level discussion is necessary to see if
> these designs and proposals can fulfill the demands. If there are
> lots of proposals, it's good to compare the pros and cons, and see
> in which scenarios each proposal works and in which it doesn't work
> very well.
> 
> So I suggest having at least two successive dedicated design summit
> sessions to discuss this f2f. All thoughts, proposals or projects to
> tackle this kind of problem domain could be collected now, and the
> topics to be discussed could be as follows:
> 
>    0. Scenario
>    1. Use cases
>    2. Requirements in detail
>    3. Gaps in OpenStack
>    4. Proposals to be discussed
> 
>   Architecture-level proposal discussion:
>    1. Proposals
>    2. Pros and cons comparison
>    3. Challenges
>    4. Next steps
> 
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
> 
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deploying Cells for massively distributed edge
> > clouds:
> >
> > 1) Using RPC for inter-data-center communication will make inter-DC
> > troubleshooting and maintenance difficult, and cause some critical
> > issues in operation. There is no CLI, RESTful API or other tool to
> > manage a child cell directly. If the link between the API cell and
> > child cells is broken, then the child cell in the remote edge cloud
> > is unmanageable, whether locally or remotely.
> >
> > 2) The challenge of security management for inter-site RPC
> > communication. Please refer to the slides[1] for challenge 3,
> > "Securing OpenStack over the Internet": over 500 pin holes had to be
> > opened in the firewall to allow this to work, including ports for
> > VNC and SSH for CLIs. Using RPC in cells for edge clouds will face
> > the same security challenges.
> >
> > 3) Only Nova supports cells. But Nova is not the only service that
> > needs to support edge clouds; Neutron and Cinder should be taken into
> > account too. How would Neutron support service function chaining in
> > edge clouds? Using RPC? How would it address the challenges mentioned
> > above? And Cinder?
> >
> > 4) Using RPC to do production integration for hundreds of edge
> > clouds is quite a challenging idea; it's a basic requirement that
> > these edge clouds may be bought from multiple vendors, for hardware,
> > software or both.
> >
> > That means using cells in production for massively distributed edge
> > clouds is quite a bad idea. If Cells provided a RESTful interface
> > between the API cell and child cells, it would be much more
> > acceptable, but still not enough; the same applies to Cinder and
> > Neutron. Or just deploy a lightweight OpenStack instance in each edge
> > cloud, for example, one rack. The question then is how to manage the
> > large number of OpenStack instances and provision services.
> >
> > [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
> >
> > Best Regards
> > Chaoyi Huang(joehuang)
> >
> 
> Very interesting questions,
> 
> I'm starting to think that the API you want isn't really nova,
> neutron,
> or cinder at this point though. At some point it feels like the
> efforts
> you are spending in things like service chaining (there is a south
> park
> episode I almost linked here, but decided I proba

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 18:54, Joshua Harlow  wrote:

> Duncan Thomas wrote:
>
>> On 31 August 2016 at 11:57, Bogdan Dobrelya <bdobre...@mirantis.com> wrote:
>>
>> I agree that RPC design pattern, as it is implemented now, is a major
>> blocker for OpenStack in general. It requires a major redesign,
>> including handling of corner cases, on both sides, *especially* RPC
>> call
>> clients. Or may be it just have to be abandoned to be replaced by a
>> more
>> cloud friendly pattern.
>>
>>
>>
>> Is there a writeup anywhere on what these issues are? I've heard this
>> sentiment expressed multiple times now, but without a writeup of the
>> issues and the design goals of the replacement, we're unlikely to make
>> progress on a replacement - even if somebody takes the heroic approach
>> and writes a full replacement themselves, the odds of getting community
>> buy-in are very low.
>>
>
> +2 to that, there are a bunch of technologies that could replace the
> rabbit+rpc, aka, gRPC, then there is http2 and thrift and ... so a writeup
> IMHO would help at least clear the waters a little bit, and explain the
> blocker of the current RPC design pattern (which is multidimensional
> because most people are probably thinking RPC == rabbit when it's actually
> more than that now, ie zeromq and amqp1.0 and ...) and try to centralize on
> a better replacement.
>
>
Is anybody who dislikes the current pattern(s) and implementation(s)
volunteering to start this documentation? I really am not aware of the
issues, and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] cells v2 next steps

2016-08-31 Thread Matt Riedemann
Just to recap a call with Laski, Sean and Dan, the goal for the next 24 
hours with cells v2 is to get this nova change landed:


https://review.openstack.org/#/c/356138/

That depends on a set of grenade changes:

https://review.openstack.org/#/q/topic:setup_cell0_before_migrations

There are similar devstack changes to those:

https://review.openstack.org/#/q/topic:cell0_db

cell0 is optional in newton, so we don't want to add a required change 
in grenade that forces an upgrade to newton to require cell0.


And since cell0 is optional in newton, we don't want devstack in newton 
running with cell0 in all jobs.


So the plan is for Dan (or someone) to add a flag to devstack, mirrored 
in grenade, that will be used to conditionally create the cell0 database 
and run the simple_cell_setup command.
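
Something along these lines, I'd imagine (the toggle and variable
names below are made up for illustration; only the nova-manage
command itself is real, and its flags should be checked against the
final patch):

    if [[ "$NOVA_CONFIGURE_CELL0" == "True" ]]; then
        # Create the cell0 database and map it, then run the simple
        # cell setup.
        recreate_database nova_cell0
        nova-manage cell_v2 simple_cell_setup \
            --transport-url "$NOVA_TRANSPORT_URL"
    fi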


Then I'm going to set that flag in devstack-gate and from select jobs in 
project-config, so one of the grenade jobs (either single node or 
multi-node grenade), and then the placement-api job which is non-voting 
in the nova check queue and is our new dumping ground for running 
optional things, like the placement service and cell0.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 17:59:43 +0100 (+0100), Matthew Booth wrote:
> On Wed, Aug 31, 2016 at 5:31 PM, Jeremy Stanley  wrote:
[...]
> > Also we have naming conventions for third-party CI accounts that
> > suggest they should end in " CI" so you could match on that.
> 
> Yeah, all except 'Jenkins' :)
[...]

Right, that was mainly because there were more than a few people who
expressed a desire to be able to receive E-mail messages on comments
from the "Jenkins" account but not from third-party CI systems.

> All the CIs I get gerrit spam from are on that list except
> Jenkins. Do I have to enable something specifically to exclude
> them?
[...]

No, as I understand it, since we set capability.emailReviewers="deny
group Third-Party CI" in the global Gerrit configuration, it should
avoid sending E-mail for any of their comments.
https://review.openstack.org/Documentation/access-control.html#capability_emailReviewers
I guess we should troubleshoot that.
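
For reference, in the All-Projects project.config that should
correspond to a stanza like this (from memory, so verify against the
actual config):

    [capability]
        emailReviewers = deny group Third-Party CI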
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Suresh Vinapamula
Hi,

I am upgrading from Juno to Kilo and from that to Liberty.

I understand I need to run nova-manage db migrate_flavor_data before
upgrading from Kilo to Liberty, so that VMs spawned while the system was
on Juno get their flavor data migrated to the Kilo format.

Depending on the number of computes, the complete upgrade can
potentially span a long duration, days if not months.

While migrate_flavor_data seems to migrate the flavor metadata of VMs
spawned before the upgrade procedure, it doesn't seem to migrate it for
VMs spawned during the upgrade procedure, more specifically after the
OpenStack controller upgrade and before the compute upgrade. Am I
missing something here, or is it by intention?

Since the compute upgrade procedure could last for days, would it be
practical to block spawning workload VMs for that long? Otherwise, the
next upgrade will fail, right?

thanks
Suresh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] The State of the NFS Driver ...

2016-08-31 Thread Erlon Cruz
Hi Jay,

Thanks for the update. I can take a look at the NFS job; it will need some
care, like configuring the slave to be Ubuntu Xenial and setting up
AppArmor, so that when you finish the cloning support we have an operational
job.
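
Roughly what I have in mind for the AppArmor part (a sketch only;
profile names can differ between distros):

    # See whether libvirt's profile is enforcing on the slave:
    sudo aa-status | grep libvirt

    # For a CI slave (not production!) the blunt option is to switch
    # the profile to complain mode:
    sudo aa-complain /etc/apparmor.d/usr.sbin.libvirtd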

Erlon

On Wed, Aug 31, 2016 at 11:50 AM, Jay S. Bryant <
jsbry...@electronicjungle.net> wrote:

> On 08/30/2016 08:50 PM, Matt Riedemann wrote:
>
>> On 8/30/2016 10:50 AM, Jay S. Bryant wrote:
>>
>>> All,
>>>
>>> I wanted to follow up on the e-mail thread [1] on Cloning support in the
>>> NFS driver.  The purpose of this e-mail is to provide the plan for the
>>> NFS driver going forward as I see it.
>>>
>>> First, I am aware that the driver has gone quite some time without care
>>> and feeding.  For a number of reasons, the Public Cloud team within IBM
>>> is currently dependent upon the NFS driver working properly for the
>>> cloud environment we are building.  Given our current dependence on the
>>> driver we are planning on picking up the driver and maintaining it.
>>>
>>> The first step in this process was getting the existing patch that adds
>>> snapshot support for NFS [2] rebased.  I did this work a couple of weeks
>>> ago and also got all the unit tests working for the unit test
>>> environment on the master branch.  I now see that it is in merge
>>> conflict again, I plan to continue to keep the patch up-to-date.
>>>
>>> Erlon has been investigating issues with attaching snapshots. It
>>> appears that this may be related to AppArmor running on the system where
>>> the VM is running and attachment is being attempted.  I am hoping to
>>> look into the other questions posed in the patch review in the next week
>>> or two.
>>>
>>> The next step is to create a dependent patch, upon the snapshot patch,
>>> to implement cloning.  I am planning to also undertake this work.  I am
>>> assuming that getting the cloning support in place shouldn't be too
>>> difficult once snapshots are working as it will be just a matter of
>>> using the support from the remotefs driver.
>>>
>>> The last piece of work we have in flight is working on adding QoS
>>> support to the NFS driver.  We have the following spec proposed to get
>>> that work started: [3]
>>>
>>> So, we are in the process of bringing the NFS driver up to good
>>> standing.  During this process we would greatly appreciate reviews and
>>> input from those of you who have previously worked on the driver in
>>> order to expedite integration of the necessary changes. I feel it is in
>>> the best interest of the community to get the driver updated and
>>> supported given that it is the 4th most used driver according to our
>>> user survey.  I think it would not look good to our users if it were to
>>> suddenly be removed.
>>>
>>> Thanks to all of your for your support in this effort!
>>>
>>> Jay
>>>
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-Augu
>>> st/102193.html
>>>
>>> [2] https://review.openstack.org/#/c/147186/
>>>
>>> [3] https://review.openstack.org/361456
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> IMO priority #1 is getting the NFS job passing consistently, who is
>> working on that? Last I checked it was failing a bunch because it was
>> running snapshot and clone tests, which obviously don't work since that
>> support isn't implemented in the driver. I think configuring tempest in the
>> devstack-plugin-nfs repo is fairly straightforward, someone just needs to
>> do it.
>>
>> But at least that gets you closer to a clean NFS job run which gets it
>> out of the experimental queue (possibly) and as a non-voting job in Cinder
>> so you can see if you're regressing anything (or if anything else regresses
>> it once you have clean CI runs).
>>
>> My 2 cents.
>>
>> Matt,
>
> This is good feedback.  I will put a story on our backlog on this for it
> and try to get that working ASAP.
>
> Jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Dan Smith
> While migrate_flavor_data seem to flavor migrate meta data of the VMs
> that were spawned before upgrade procedure, it doesn't seem to flavor
> migrate for the VMs that were spawned during the upgrade procedure more
> specifically after openstack controller upgrade and before compute
> upgrade. Am I missing something here or is it by intention?

You can run the flavor migration as often as you need, and can certainly
run it after your last compute is upgraded before you start to move into
liberty.
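
In other words, something like the following, repeated until it
reports nothing left to migrate (exact flags vary by release, so
check "nova-manage db migrate_flavor_data -h" on your deployment
first):

    nova-manage db migrate_flavor_data --max-number 50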

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Ian Wells
On 31 August 2016 at 10:12, Clint Byrum  wrote:

> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > On 31 August 2016 at 11:57, Bogdan Dobrelya 
> wrote:
> >
> > > I agree that RPC design pattern, as it is implemented now, is a major
> > > blocker for OpenStack in general. It requires a major redesign,
> > > including handling of corner cases, on both sides, *especially* RPC
> call
> > > clients. Or may be it just have to be abandoned to be replaced by a
> more
> > > cloud friendly pattern.
> >
> >
> > Is there a writeup anywhere on what these issues are? I've heard this
> > sentiment expressed multiple times now, but without a writeup of the
> issues
> > and the design goals of the replacement, we're unlikely to make progress
> on
> > a replacement - even if somebody takes the heroic approach and writes a
> > > full replacement themselves, the odds of getting community buy-in are very
> > low.
>
> Right, this is exactly the sort of thing I'd like to gather a group of
> design-minded folks around in an Architecture WG. Oslo is busy with the
> implementations we have now, but I'm sure many oslo contributors would
> like to come up for air and talk about the design issues, and come up
> with a current design, and some revisions to it, or a whole new one,
> that can be used to put these summit hallway rumors to rest.
>

I'd say the issue is comparatively easy to describe.  In a call sequence:

1. A sends a message to B
2. B receives messages
3. B acts upon message
4. B responds to message
5. A receives response
6. A acts upon response

... you can have a fault at any point in that message flow (consider
crashes or program restarts).  If you ask for something to happen, you wait
for a reply, and you don't get one, what does it mean?  The operation may
have happened, with or without success, or it may not have gotten to the
far end.  If you send the message, does that mean you'd like it to cause an
action tomorrow?  A year from now?  Or perhaps you'd like it to just not
happen?  Do you understand what Oslo promises you here, and do you think
every person who ever wrote an RPC call in the whole OpenStack solution
also understood it?
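
To make that concrete, a minimal oslo.messaging-shaped sketch of the
caller's dilemma (names and setup are illustrative, and the transport
is assumed to be configured):

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='compute', server='host-1')
    client = messaging.RPCClient(transport, target, timeout=30)
    try:
        client.call({}, 'resize_instance', instance='<uuid>')
    except messaging.MessagingTimeout:
        # Steps 2-6 above are indistinguishable here: the request may
        # never have arrived, may have been acted on with no reply,
        # or the reply may have been lost.  Blind retry risks acting
        # twice; giving up risks acting zero times.
        pass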

I have opinions about other patterns we could use, but I don't want to push
my solutions here, I want to see if this is really as much of a problem as
it looks and if people concur with my summary above.  However, the right
approach is most definitely to create a new and more fitting set of oslo
interfaces for communication patterns, and then to encourage people to move
to the new ones from the old.  (Whether RabbitMQ is involved is neither
here nor there, as this is really a question of Oslo APIs, not their
implementation.)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Deprecating old CLI in python-fuelclient

2016-08-31 Thread Roman Prykhodchenko
Fuelers,

We are proud to announce that we finally managed to reach the point where the
old CLI in python-fuelclient (aka the old Fuel Client) can be deprecated and
replaced with fuel2, a cliff-based CLI. Support for the full set of commands
required to operate Fuel has already been implemented in fuel2. After the
deprecation plan is carried out, users will no longer be able to use the old
commands unless they install an older version of python-fuelclient from PyPI.

I have published a specification [1] that describes in detail what changes
are going to be made and how different users can live with those changes. The
specification also contains a table comparing the old and new CLI, so the
migration process should be as smooth as possible.


References:

1. https://review.openstack.org/#/c/361049


- romcheg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] don't wait to the last minute

2016-08-31 Thread Doug Hellmann
Folks, we've had more than the usual number of validation errors
and -1s for version number choice on patches in openstack/releases
this week. Please don't wait to the last minute to submit your
milestone 3 tag request, and keep an eye on responses in case you
need to rework it.

Being present in #openstack-release is a good way to ensure you're
aware of any review issues.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] FFE request for ec2-api integration

2016-08-31 Thread Sven Anderson
Hi,

I'm working on the integration of the puppet-ec2api module. It is a
(probably) very straightforward task. The only current impediment is
that puppet CI is not yet deploying and running tempest on
puppet-ec2api. I'm currently working on getting the ec2 credentials
created within puppet-tempest, which are needed to run tempest on
ec2api. Once this is done, it should be a very quick thing.
Here are the changes that are not yet ready/merged. The change for THT
is still missing.

https://review.openstack.org/#/c/357971
https://review.openstack.org/#/c/356442
https://review.openstack.org/#/c/336562

I'd like to formally request an FFE for this.

Thanks,

Sven

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Summit planning etherpad

2016-08-31 Thread Rob Cresswell
Hi all,

Etherpad for planning summit sessions: 
https://etherpad.openstack.org/p/horizon-ocata-summit

Please note the sessions have been requested, not scheduled, so the actual 
number we get may not be the same.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE request for ec2-api integration

2016-08-31 Thread Emilien Macchi
On Wed, Aug 31, 2016 at 4:31 PM, Sven Anderson  wrote:
> Hi,
>
> I'm working on the integration of the puppet-ec2api module. It is a
> (probably) very straight forward task. The only thing that is a current
> impediment is that puppet CI is currently not deploying and running
> tempest on puppet-ec2api. I'm currently working on getting the ec2
> credentials created within puppet-tempest, which are needed to run
> tempest on ec2api. Once this is done, it should be very quick thing.
> Here the changes that are not yet ready/merged. The change for THT is
> still missing.
>
> https://review.openstack.org/#/c/357971
> https://review.openstack.org/#/c/356442
> https://review.openstack.org/#/c/336562
>
> I'd like to formally request an FFE for this.
>
> Thanks,
>
> Sven
>

I'll have the same kind of remark as I gave for Ceph RGW.

I haven't seen any effort to bring EC2API support into TripleO during
the Newton cycle, and I don't think it is the right choice to push the
feature at the end of the cycle, so close to release.
We're currently overloaded with all the FFEs and bugs we're trying to
land/fix; I don't think adding more bits to the stack will help.
As a retrospective for next time, I suggest starting the work at the
beginning of the cycle so we stop pushing all we can at the end of
cycles.

My 2 cents again.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Reviews in queue for newton-3

2016-08-31 Thread Nikhil Komawar
Hi all,


I've proposed a release patch [1] where I am collecting all the
reviews that are in the queue that you'd like in Newton-3. Please leave
a comment there and I will try to get to reviewing them soon. Based on
progress, time available, the freeze, etc., a determination will be
made about the feasibility of each making it into Newton; if the
review link is posted there in the next 4 hours, you can expect a note
indicating whether or not it will make it.

Thanks for your cooperation, and I appreciate all the help in getting
the Newton release ready!


[1] https://review.openstack.org/#/c/363930/


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Dan Smith
> Thanks Dan for your response. While I do run that before I start my
> move to liberty, what I see is that it doesn't seem to flavor migrate
> meta data for the VMs that are spawned after controller upgrade from
> juno to kilo and before all computes upgraded from juno to kilo. The
> current work around is to delete those VMs that are spawned after
> controller upgrade and before all computes upgrade, and then initiate
> liberty upgrade. Then it works fine.

I can't think of any reason why that would be, or why it would be a
problem. Instances created after the controllers are upgraded should not
have old-style flavor info, so they need not be touched by the migration
code.

Maybe filing a bug describing what you see is in order?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Suresh Vinapamula
>> While migrate_flavor_data seem to flavor migrate meta data of the VMs
>> that were spawned before upgrade procedure, it doesn't seem to flavor
>> migrate for the VMs that were spawned during the upgrade procedure more
>> specifically after openstack controller upgrade and before compute
>> upgrade. Am I missing something here or is it by intention?

>You can run the flavor migration as often as you need, and can certainly
>run it after your last compute is upgraded before you start to move into
>liberty.
>
>--Dan


Thanks, Dan, for your response. While I do run that before I start my
move to Liberty, what I see is that it doesn't seem to migrate the
flavor metadata for VMs that are spawned after the controller upgrade
from Juno to Kilo and before all computes are upgraded from Juno to
Kilo. The current workaround is to delete the VMs that were spawned
after the controller upgrade and before all computes were upgraded, and
then initiate the Liberty upgrade. Then it works fine.


Suresh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] FF is active now

2016-08-31 Thread Vitaly Gridnev
Dear team,

So, the N3 release is almost done; from now on, features can't be merged
without a feature freeze exception. The Feature Freeze status can be
found at [0], along with several features that already have FFEs. An FFE
can be requested by asking on the openstack-dev mailing list.

[0] https://etherpad.openstack.org/p/sahara-review-priorities
[1] https://review.openstack.org/#/c/363932/

-- 
Best Regards,
Vitaly Gridnev,
Project Technical Lead of OpenStack DataProcessing Program (Sahara)
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Suresh Vinapamula
Sure, will file a bug with my observations.

On Wed, Aug 31, 2016 at 2:17 PM, Dan Smith  wrote:

> > Thanks Dan for your response. While I do run that before I start my
> > move to liberty, what I see is that it doesn't seem to flavor migrate
> > meta data for the VMs that are spawned after controller upgrade from
> > juno to kilo and before all computes upgraded from juno to kilo. The
> > current work around is to delete those VMs that are spawned after
> > controller upgrade and before all computes upgrade, and then initiate
> > liberty upgrade. Then it works fine.
>
> I can't think of any reason why that would be, or why it would be a
> problem. Instances created after the controllers are upgraded should not
> have old-style flavor info, so they need not be touched by the migration
> code.
>
> Maybe filing a bug is in order describing what you see?
>
> --Dan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Matt Riedemann

On 8/31/2016 12:30 PM, Chris Dent wrote:



On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.


There was another hangout today where we caught up on where we are.
Some notes were added to the etherpad
https://etherpad.openstack.org/p/placement-next

There is code either merged or pending merge that allows the
resource tracker to ensure that resource providers exist and have
the correct inventory.

The major concern and blocker at this point is setting and deleting
allocations, for which the assistance of Jay is required. Some
details follow with a summary of Jay's todos at the bottom.

There are two patches, starting at
https://review.openstack.org/#/c/363209/

The first is a hack to get the object side handling for
AllocationList.create_all and delete_all. As noted in the comments
there we're not sure about the atomicity in create_all and need Jay
to determine if what's there can be made to work, or as suggested we
need a mondo SQL thing to get it right. If the latter, we need Jay
to write it :)

I'm going to carry on with those patches now and try to add some
generation handling back in to protect against inventory changing
out from under us while making allocations, but I'm not confident
of getting it anything more that possible adequate and great would
be better.

During that I'm also going to try to adjust things so that we can
update an existing allocation, not just create them, as we've
determined that's required. set_all, not create_all, basically.

The other missing piece is the client side of setting and deleting
allocations, from the resource tracker. We'd like Jay to start this
too or if we're all lucky maybe it is started already?

And finally there's a question we didn't know how to answer: What
will the process be for healing instances that already exist before
the placement service is started, and thus have no allocations?

So to summarize Jay's to do list (please and thank you very much):

* Look at https://review.openstack.org/#/c/363209/ and decide if it
  is good enough to get rolling or needs to be completely altered.
* If the latter, alter it.
* Write the allocation client.
* Consult on healing instance allocations.


I think the healing thing is something we can deal with after feature 
freeze, right? I just don't want to become distracted by it.




Meanwhile several people are involved in related clean up patches in
both nova and devstack to smooth off rough edges while we pushed a
lot of code.

Thanks to everyone today for pushing so hard. We look pretty close to
getting the must haves happening.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Matt Riedemann

On 8/31/2016 4:17 PM, Dan Smith wrote:

Thanks Dan for your response. While I do run that before I start my
move to liberty, what I see is that it doesn't seem to migrate the
flavor metadata for the VMs that are spawned after the controller
upgrade from juno to kilo and before all computes are upgraded from
juno to kilo. The current workaround is to delete those VMs that were
spawned after the controller upgrade and before all computes were
upgraded, and then initiate the liberty upgrade. Then it works fine.


I can't think of any reason why that would be, or why it would be a
problem. Instances created after the controllers are upgraded should not
have old-style flavor info, so they need not be touched by the migration
code.
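
If it helps with the bug triage, one rough way to spot instances still
carrying old-style flavor info is to look for the legacy system-metadata
keys directly. A sketch (the key prefix and table layout are assumptions
from the Kilo-era schema, so verify against your database before relying
on them):

    from sqlalchemy import create_engine, text

    # Example DSN only; point this at your actual nova database.
    engine = create_engine("mysql+pymysql://nova:secret@dbhost/nova")

    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT DISTINCT instance_uuid "
            "FROM instance_system_metadata "
            "WHERE `key` LIKE 'instance_type_%' AND deleted = 0"
        ))
        for row in rows:
            print(row.instance_uuid)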

Maybe filing a bug is in order describing what you see?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Also, are you running with the latest kilo patch update? There were some 
bug fixes backported after the release from what I remember.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Clint Byrum
Excerpts from Ian Wells's message of 2016-08-31 12:30:45 -0700:
> On 31 August 2016 at 10:12, Clint Byrum  wrote:
> 
> > Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > > On 31 August 2016 at 11:57, Bogdan Dobrelya 
> > wrote:
> > >
> > > > I agree that the RPC design pattern, as it is implemented now, is a
> > > > major blocker for OpenStack in general. It requires a major redesign,
> > > > including handling of corner cases, on both sides, *especially* RPC
> > > > call clients. Or maybe it just has to be abandoned and replaced by a
> > > > more cloud-friendly pattern.
> > >
> > >
> > > Is there a writeup anywhere on what these issues are? I've heard this
> > > sentiment expressed multiple times now, but without a writeup of the
> > > issues and the design goals of the replacement, we're unlikely to make
> > > progress on a replacement - even if somebody takes the heroic approach
> > > and writes a full replacement themselves, the odds of getting community
> > > buy-in are very low.
> >
> > Right, this is exactly the sort of thing I'd like to gather a group of
> > design-minded folks around in an Architecture WG. Oslo is busy with the
> > implementations we have now, but I'm sure many oslo contributors would
> > like to come up for air and talk about the design issues, and come up
> > with a current design, and some revisions to it, or a whole new one,
> > that can be used to put these summit hallway rumors to rest.
> >
> 
> I'd say the issue is comparatively easy to describe.  In a call sequence:
> 
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
> 
> ... you can have a fault at any point in that message flow (consider
> crashes or program restarts).  If you ask for something to happen, you wait
> for a reply, and you don't get one, what does it mean?  The operation may
> have happened, with or without success, or it may not have gotten to the
> far end.  If you send the message, does that mean you'd like it to cause an
> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
> happen?  Do you understand what Oslo promises you here, and do you think
> every person who ever wrote an RPC call in the whole OpenStack solution
> also understood it?
> 
> I have opinions about other patterns we could use, but I don't want to push
> my solutions here, I want to see if this is really as much of a problem as
> it looks and if people concur with my summary above.  However, the right
> approach is most definitely to create a new and more fitting set of oslo
> interfaces for communication patterns, and then to encourage people to move
> to the new ones from the old.  (Whether RabbitMQ is involved is neither
> here nor there, as this is really a question of Oslo APIs, not their
> implementation.)

I think it's about time we get some Architecture WG meetings started,
and put "Document RPC design" on the agenda.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread John Villalovos
On Wed, Aug 31, 2016 at 11:58 AM, Jeremy Stanley  wrote:
> No, as I understand it, since we set capability.emailReviewers="deny
> group Third-Party CI" in the global Gerrit configuration it should
> avoid sending E-mail for any of their comments.
> https://review.openstack.org/Documentation/access-control.html#capability_emailReviewers
> I guess we should troubleshoot that.


I also see emails from at least the Cisco CI:

Cisco CI has posted comments on this change.

Change subject: Allow suppressing ramdisk logs collection
..


Patch Set 2:

Build failed. For help on isolating this failure, please contact
cisco-openstack-neutron...@cisco.com. To re-run, post a
'cisco-ironic-recheck' comment.

- tempest-dsvm-ironic-pxe_iscsi_cimc
http://192.133.158.2:8080/job/tempest-dsvm-ironic-pxe_iscsi_cimc/3269
: FAILURE in 2h 02m 36s
- tempest-dsvm-ironic-pxe_ucs
http://192.133.158.2:8080/job/tempest-dsvm-ironic-pxe_ucs/2684 :
FAILURE in 1h 55m 32s

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] python-client push bug?

2016-08-31 Thread Tim Hinrichs
Hi all,

As I was sanity checking the latest python-client, which we need to release
by tomorrow, I may have found a bug that we should fix before releasing.
If it's a bug in the server, that can be fixed later, but if it's a bug in
the client, we should get that fixed now.

https://bugs.launchpad.net/congress/+bug/1619065

Masahito: could you double-check that I'm running the right commands in the
client?

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-31 Thread Monty Taylor
On 08/25/2016 04:14 PM, Sean Dague wrote:
> On 08/25/2016 01:13 PM, Steve Martinelli wrote:
>> The keystone team is pursuing a trigger-based approach to support
>> rolling, zero-downtime upgrades. The proposed operator experience is
>> documented here:
>>
>>   http://docs.openstack.org/developer/keystone/upgrading.html
>>
>> This differs from Nova and Neutron's approaches to solve for rolling
>> upgrades (which use oslo.versionedobjects), however Keystone is one of
>> the few services that doesn't need to manage communication between
>> multiple releases of multiple service components talking over the
>> message bus (which is the original use case for oslo.versionedobjects,
>> and for which it is aptly suited). Keystone simply scales horizontally
>> and every node talks directly to the database.
>>
>> Database triggers are obviously a new challenge for developers to write,
>> honestly challenging to debug (being side effects), and are made even
>> more difficult by having to hand write triggers for MySQL, PostgreSQL,
>> and SQLite independently (SQLAlchemy offers no assistance in this case),
>> as seen in this patch:
>>
>>   https://review.openstack.org/#/c/355618/
>>
>> However, implementing an application-layer solution with
>> oslo.versionedobjects is not an easy task either; refer to Neutron's
>> implementation:
>>
>>
>> https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db
>>
>>
>> Our primary concern at this point are how to effectively test the
>> triggers we write against our supported database systems, and their
>> various deployment variations. We might be able to easily drop SQLite
>> support (as it's only supported for our own test suite), but should we
>> expect variation in support and/or actual behavior of triggers across
>> the MySQLs, MariaDBs, Perconas, etc, of the world that would make it
>> necessary to test each of them independently? If you have operational
>> experience working with triggers at scale: are there landmines that we
>> need to be aware of? What is it going to take for us to say we support
> *zero* downtime upgrades with confidence?
> 
> I would really hold off doing anything triggers related until there was
> sufficient testing for that, especially with potentially dirty data.
> 
> Triggers also really bring in a whole new DSL that people need to learn
> and understand, not just across this boundary, but in the future
> debugging issues. And it means that any errors happening here are now in
> a place outside of normal logging / recovery mechanisms.
> 
> There is a lot of value, in hard problem spaces like zero-downtime
> upgrades, in keeping to common patterns between projects, because there are
> limited folks with the domain knowledge, and splitting that even further
> makes it hard to make this more universal among projects.

I said this the other day in the IRC channel, and I'm going to say it
again here. I'm going to do it as bluntly as I can - please keep in
mind that I respect all of the humans involved.

I think this is a monstrously terrible idea.

There are MANY reasons for this - but I'm going to limit myself to two.

OpenStack is One Project


Nova and Neutron have an approach for this. It may or may not be ideal -
but it exists right now. While it can be satisfying to discount the
existing approach and write a new one, I do not believe that is in the
best interests of OpenStack as a whole. To diverge in _keystone_ - which
is one of the few projects that must exist in every OpenStack install -
when there exists an approach in the two other most commonly deployed
projects - is such a terrible example of the problems inherent in
Conway's Law that it makes me want to push up a proposal to dissolve all
of the individual project teams and merge all of the repos into a single
repo.

Make the oslo libraries Nova and Neutron are using better. Work with the
Nova and Neutron teams on a consolidated approach. We need to be driving
more towards an OpenStack that behaves as if it wasn't written by
warring factions of developers who barely communicate.

Even if the idea was one I thought was good technically, the above would
still trump that. Work with Nova and Neutron. Be more similar.

PLEASE

BUT - I also don't think it's a good technical solution. That isn't
because triggers don't work in MySQL (they do) - but because we've spent
the last six years explicitly NOT writing raw SQL. We've chosen an
abstraction layer (SQLAlchemy) which does its job well.

IF this were going to be accompanied by a corresponding shift in
approach to not support any backends but MySQL and to start writing our
database interactions directly in SQL in ALL of our projects - I could
MAYBE be convinced. Even then I think doing it in triggers is the wrong
place to put logic.

"Database triggers are obviously a new challenge for developers to
write, honestly challenging to debug (being side effects), and are made
even more difficult by having to hand 
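
For concreteness, the kind of per-backend DDL at issue looks roughly
like this. SQLAlchemy will happily attach and emit it, but the trigger
body itself is raw, dialect-specific SQL (the table, columns, and
trigger here are illustrative, not keystone's actual schema or patch):

    import sqlalchemy as sa
    from sqlalchemy import event

    metadata = sa.MetaData()
    user = sa.Table(
        "user", metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("password", sa.String(128)),       # legacy column
        sa.Column("password_hash", sa.String(128)),  # new column
    )

    # MySQL-only trigger keeping the new column in sync while old and
    # new code versions write concurrently during a rolling upgrade.
    sync_trigger = sa.DDL("""
    CREATE TRIGGER user_sync_password BEFORE UPDATE ON user
    FOR EACH ROW
    BEGIN
        IF NEW.password_hash IS NULL THEN
            SET NEW.password_hash = NEW.password;
        END IF;
    END
    """)

    # Emit the trigger after table creation on MySQL only; PostgreSQL
    # and SQLite would each need their own hand-written variant.
    event.listen(user, "after_create",
                 sync_trigger.execute_if(dialect="mysql"))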

[openstack-dev] Deprecated fields in upgrade.

2016-08-31 Thread Suresh Vinapamula
Hi,

What is the typical protocol/guideline followed in the community to handle
deprecated fields during upgrade procedure?

Should the fields be removed by the user/admin before the upgrade is
initiated, or would the various *-manage db_sync commands,
migrate_flavor_data, etc., or any other command take care of that
seamlessly?

For example, compute_port in the compute endpoint URL is deprecated and
removed in the Liberty version. But keystone-manage db_sync doesn't seem
to take care of it while upgrading from kilo, and kilo happened to have
compute_port in the compute endpoint URL. I see a deprecation warning in
juno also, and I didn't check further back whether it was already taken
care of in the upgrade procedure.

Is there a typical guideline on who handles deprecated fields during
upgrade procedure? Should it be the user or tool that does the version
upgrade of data?

thanks
Suresh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] important instructions for Newton Milestone #3 Release today 8/31/2016 @ ~23:30 UTC

2016-08-31 Thread Steven Dake (stdake)
Hey folks,

Milestone 3 will be submitted for tagging to the release team today around my 
end of work day.  All milestone 3 blueprints and bugs will be moved to rc1 in 
case they don't make the August 31st (today) deadline.

We require fernet in rc1, so if there is anything that can be done to 
accelerate Shuan's work there, please chip in.  I'd like this to be our highest 
priority blueprint merge.  The earlier it merges (when functional) the more 
time we have to test the changes.  Please iterate on this review and review 
daily until merged.

We have made tremendous progress in milestone 3.  We ended up carrying over 
some blueprints as FFEs to rc1 which are all in review state right now and 
nearly complete.

The extension for features concludes September 15th, 2016, when rc1 is tagged.  
If features don't merge by that time, they will be retargeted for Ocata.  When 
we submit the rc1 tag, master will branch.  After rc1, we will require bug 
backports from master to newton (and mitaka and liberty if appropriate).

We have a large bug backlog.  If folks could tackle that, it would be 
appreciated.  I will be spending most of my time doing that sort of work and 
would appreciate everyone on the team to contribute.  Tomorrow afternoon I will 
have all the rc1 bugs prioritized as seems fitting.

Please do not workflow+1 any blueprint work in the kolla repo until rc1 has 
been tagged.  Master of kolla is frozen for new features not already listed in 
the rc1 milestone.  Master of kolla-kubernetes is open for new features as we 
have not made a stable deliverable out of this repository (a 1.0.0 release).  
As a result, no branch will be made of the kolla-kubernetes repository (I 
think..).  If a branch is made, I'll request it be deleted.

If you have a bug that needs fixing and it doesn't need a backport, just use 
TrivialFix to speed up the process.  If it needs a backport, please use a bug 
id.  After rc1, all patches will need backports so everything should have a bug 
id.  I will provide further guidance after rc1.
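
For clarity, the two commit message forms would look something like this
(the summary lines and bug number below are made up):

    Fix typo in haproxy config template

    TrivialFix

versus, for anything that needs a backport:

    Fix haproxy reload on reconfigure

    Closes-Bug: #1234567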

A big shout out goes to our tremendous community that has pulled off 3 
milestones on schedule and in functional working order for the Kolla repository 
while maintaining 2 branches and releasing 4 z streams on a 45 day schedule.  
Fantastic work everyone!

Kolla-kubernetes also deserves a shout out – we have a functional compute-kit 
kubernetes underlay that deploys Kolla containers using mostly native 
kuberenetes functionality  We are headed towards a fully Kubernetes 
implementation.  The deployment lacks the broad feature-set of kolla-ansible 
but uses the json API to our containers and is able to spin up nova virtual 
machines with full network (OVS) connectivity – which is huge!

Cheers!
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread James Bottomley
On Tue, 2016-08-30 at 03:08 +, joehuang wrote:
> Hello, Jay,
> 
> Sorry, I don't know why my mail agent (Microsoft Outlook Web App) did 
> not carry the thread message-id information in the reply.  I'll check 
> and avoid creating a new thread when replying in an existing thread.

It's a common problem with outlook.  Microsoft created their own
threading standards for email which are adopted by no-one.  Whenever
you get these headers in your email:

Thread-topic: 
Thread-index: 

And not these:

In-reply-to:
References: 

It usually means exchange has decided the other end is a microsoft
entity and it doesn't need to use the internet standard reply types. 

Unfortunately, this isn't fixable in outlook because Exchange (the MTA),
not outlook (the MUA), does the threading.  There are some thoughts
floating around the internet on how to fix exchange; if you're lucky
and you have exchange 2003, this might fix it:

https://support.microsoft.com/en-us/kb/908027

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] don't wait to the last minute

2016-08-31 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-08-31 16:07:13 -0400:
> Folks, we've had more than the usual number of validation errors
> and -1s for version number choice on patches in openstack/releases
> this week. Please don't wait to the last minute to submit your
> milestone 3 tag request, and keep an eye on responses in case you
> need to rework it.
> 
> Being present in #openstack-release is a good way to ensure you're
> aware of any review issues.
> 
> Doug
> 

You may also find it useful to run "tox -e validate" on your patch
before you submit it (commit the patch locally, then run the
validator).
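
For example, from a checkout of openstack/releases (the deliverable file
name below is made up):

    $ git add deliverables/newton/my-deliverable.yaml
    $ git commit -m "my-deliverable: newton milestone 3"
    $ tox -e validate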

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 18:58:31 + (+), Jeremy Stanley wrote:
> On 2016-08-31 17:59:43 +0100 (+0100), Matthew Booth wrote:
> > On Wed, Aug 31, 2016 at 5:31 PM, Jeremy Stanley  wrote:
> [...]
> > > Also we have naming conventions for third-party CI accounts that
> > > suggest they should end in " CI" so you could match on that.
> > 
> > Yeah, all except 'Jenkins' :)
> [...]
> 
> Right, that was mainly because there were more than a few people who
> expressed a desire to be able to receive E-mail messages on comments
> from the "Jenkins" account but not from third-party CI systems.
> 
> > All the CIs I get gerrit spam from are on that list except
> > Jenkins. Do I have to enable something specifically to exclude
> > them?
> [...]
> 
> No, as I understand it, since we set capability.emailReviewers="deny
> group Third-Party CI" in the global Gerrit configuration it should
> avoid sending E-mail for any of their comments.
> https://review.openstack.org/Documentation/access-control.html#capability_emailReviewers
> I guess we should troubleshoot that.

I spoke with Khai about it a bit in IRC, and he suggests the
description in the docs is quite literal. Basically it avoids
sending you those messages if you are only a reviewer on the change
or a watcher of the project, but if you're a change owner/author or
have starred the change you probably still receive them.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-31 Thread Giulio Fidente

On 08/30/2016 10:50 PM, Steven Hardy wrote:

On Tue, Aug 30, 2016 at 03:25:30PM -0400, Emilien Macchi wrote:

Here's my 2 cents:

The patch in puppet-ceph has been here for a long time now and it still
doesn't work (as of a recent update today, puppet-ceph is not idempotent
when deploying the RGW service; it must be fixed in order to get a
successful deployment).
Puppet CI is still not gating on Ceph RGW (scenario004 is still in
progress, with really little progress recently on making it work).


This does sound concerning, Giulio, can you provide any feedback on work
in-progress or planned to improve this?


we invested quite some time today testing and updating the patches as needed

I've got a successful deployment where, by just adding the Member role 
to my user, I could use the regular swiftclient to operate against RadosGW


This is by pulling in:

https://review.openstack.org/#/c/347956/
https://review.openstack.org/#/c/363164/

https://review.openstack.org/#/c/334081/ (and its dependencies)

https://review.openstack.org/#/c/289027/

Emilien, can you re-evaluate the status of the puppet-ceph and 
puppet-tripleo submissions?



My opinion says we should not push to have it in Newton. Work on it
was not pushed hard during the cycle, so I see zero reason to push
for it now that the cycle is ending.


agreed, this might not have been pushed much during the cycle as other 
priorities needed attention too, but it seems to be an interesting 
feature for those deploying Ceph, and it is in decent state; also, as per 
Steven's comment below, it'll be optional in TripleO, and we'll continue to 
deploy Swift by default, so it's not going to have a great impact on 
other existing work



I agree this is being proposed too late, but given it will be disabled by
default that does mitigate the risk somewhat.

Giulio - can you confirm this will just be a new service template and
puppet profile, and that it's not likely to require rework outside of the
composable services interface?  If so I'm inclined to say OK even if we
know the puppet module needs work.


no rework of the composable services interface will be needed. The tht 
submission, in addition to adding the new service template, adds an 
output to the endpoint map for the new service; the puppet submission 
adds a new role


https://review.openstack.org/#/c/289027/

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-sfc] need help on requesting release for networking-sfc

2016-08-31 Thread Cathy Zhang
CC OpenStack alias.

From: Cathy Zhang
Sent: Wednesday, August 31, 2016 5:19 PM
To: Armando Migliaccio; Ihar Hrachyshka; Cathy Zhang
Subject: need help on requesting release for networking-sfc

Hi Armando/Ihar,

I would like to submit a request for a networking-sfc release. I did this for 
the previous branch release by submitting a bug request in Launchpad, and I see 
that other subprojects, such as L2GW, did this in Launchpad for the mitaka 
release too.
But the Neutron stadium link 
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process
 states that "A sub-project owner proposes a patch to openstack/releases 
repository with the intended git hash. The Neutron release liaison should be 
added in Gerrit to the list of reviewers for the patch".

Could you advise which way I should go or should I do both?

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Support for an "undo" operation for migrating back resource_properties_data

2016-08-31 Thread Crag Wolfe
I'm working on a migrate utility for
https://review.openstack.org/#/c/363415 . Quick summary: that means
moving resource.properties_data and event.properties_data to a new
table, resource_properties_data. Migrating to the new model is easy. The
questions come up with the inverse operation.

1) Would we even want to support undoing a migrate? I lean towards "no"
but if the answer is "yes," the next question comes up:

2) We need to indicate somewhere which resource_properties_data rows
were migrated to begin with. The reason being, we shouldn't support
trying to migrate recent (non-legacy) resource_properties_data data
backwards into the legacy columns in the resource and event tables.
There are a couple of ways to do that: a) add an is_legacy column to
resource_properties_data; b) add another table which stores the ids of
the events and resources that have been migrated; or c) for the super
paranoid, the same as b) but also storing an extra copy of the original
data (partially motivated by the unfortunate situation we have of
events.properties_data being a PickleType and something going wrong with
the conversion to a Json column, not that I see that happening). c) also
opens up another can of worms with encrypt/decrypt operations. I lean
towards "b" here (well, after null set ;-).

Thanks,
--Crag

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]How to address TCs concerns in Tricircle big-tent application

2016-08-31 Thread joehuang
Hello, Monty,

Thank you very much for your guide and encouragement, then let's move on this 
direction.

Best regards
Chaoyi Huang (joehuang)

From: Monty Taylor [mord...@inaugust.com]
Sent: 01 September 2016 0:37
To: joehuang; openstack-dev
Subject: Re: [openstack-dev][tricircle]How to address TCs concerns in Tricircle 
big-tent application

On 08/31/2016 02:16 AM, joehuang wrote:
> Hello, team,
>
> During last weekly meeting, we discussed how to address TCs concerns in
> Tricircle big-tent application. After the weekly meeting, the proposal
> was co-prepared by our
> contributors: 
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E
>
> The more doable way is to divide Tricircle into two independent and
> decoupled projects, only one of the projects which deal with networking
> automation will try to become an big-tent project, And Nova/Cinder
> API-GW will be removed from the scope of big-tent project application,
> and put them into another project:
>
> *TricircleNetworking:* Dedicated for cross Neutron networking automation
> in multi-region OpenStack deployment, run without or with
> TricircleGateway. Try to become big-tent project in the current
> application of https://review.openstack.org/#/c/338796/.

Great idea.

> *TricircleGateway:* Dedicated to provide API gateway for those who need
> single Nova/Cinder API endpoint in multi-region OpenStack deployment,
> run without or with TricircleNetworking. Live as non-big-tent,
> non-offical-openstack project, just like Tricircle toady’s status. And
> not pursue big-tent only if the consensus can be achieved in OpenStack
> community, including Arch WG and TCs, then decide how to get it on board
> in OpenStack. A new repository is needed to be applied for this project.
>
>
> And consider to remove some overlapping implementation in Nova/Cinder
> API-GW for global objects like flavor, volume type, we can configure one
> region as master region, all global objects like flavor, volume type,
> server group, etc will be managed in the master Nova/Cinder service. In
> Nova API-GW/Cinder API-GW, all requests for these global objects will be
> forwarded to the master Nova/Cinder, then to get rid of any API
> overlapping-implementation.
>
> More information, you can refer to the proposal draft
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E,
>
> your thoughts are welcome, and let's have more discussion in this weekly
> meeting.

I think this is a great approach Joe.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-sfc] need help on requesting release for networking-sfc

2016-08-31 Thread Armando M.
On 31 August 2016 at 17:31, Cathy Zhang  wrote:

> CC OpenStack alias.
>
>
>
> *From:* Cathy Zhang
> *Sent:* Wednesday, August 31, 2016 5:19 PM
> *To:* Armando Migliaccio; Ihar Hrachyshka; Cathy Zhang
> *Subject:* need help on requesting release for networking-sfc
>
>
>
> Hi Armando/Ihar,
>
>
>
> I would like to submit a request for a networking-sfc release. I did this for
> previous branch release by submitting a bug request in launchpad before.
> I see that other subproject, such as L2GW, did this in Launchpad for mitaka
> release too.
>
> But the Neutron stadium link http://docs.openstack.org/
> developer/neutron/stadium/sub_project_guidelines.html#sub-
> project-release-process states that “A sub-project owner proposes a patch
> to openstack/releases repository with the intended git hash. The Neutron
> release liaison should be added in Gerrit to the list of reviewers for the
> patch”.
>
>
>
> Could you advise which way I should go or should I do both?
>

Consider the developer documentation the most up to date process, so please
go ahead with a patch against the openstack/releases repo.


>
>
> Thanks,
>
> Cathy
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] cells v2 next steps

2016-08-31 Thread Matt Riedemann

On 8/31/2016 1:44 PM, Matt Riedemann wrote:

Just to recap a call with Laski, Sean and Dan, the goal for the next 24
hours with cells v2 is to get this nova change landed:

https://review.openstack.org/#/c/356138/

That depends on a set of grenade changes:

https://review.openstack.org/#/q/topic:setup_cell0_before_migrations

There are similar devstack changes to those:

https://review.openstack.org/#/q/topic:cell0_db

cell0 is optional in newton, so we don't want to add a required change
in grenade that forces an upgrade to newton to require cell0.

And since cell0 is optional in newton, we don't want devstack in newton
running with cell0 in all jobs.

So the plan is for Dan (or someone) to add a flag to devstack, mirrored
in grenade, that will be used to conditionally create the cell0 database
and run the simple_cell_setup command.

Then I'm going to set that flag in devstack-gate and from select jobs in
project-config, so one of the grenade jobs (either single node or
multi-node grenade), and then the placement-api job which is non-voting
in the nova check queue and is our new dumping ground for running
optional things, like the placement service and cell0.
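
In local.conf terms, the flag would presumably look something like this
(the variable name is hypothetical, not necessarily what will merge):

    [[local|localrc]]
    # Hypothetical devstack flag gating the optional cell0 setup.
    NOVA_CONFIGURE_CELL0=True

When enabled, devstack would create the cell0 database and then run
"nova-manage cell_v2 simple_cell_setup" as described above.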



FYI, this is the change I'm using to test the grenade/devstack series:

https://review.openstack.org/#/c/363971/

That's similar to what's proposed in the job updates in project-config:

https://review.openstack.org/#/c/363937/

We have a dependency chain going on now where the top devstack change 
depends on a nova change that depends on the top grenade change, so it's 
all kind of self-testing.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Adam Young

On 08/30/2016 05:36 PM, Michael Still wrote:
Sorry for being slow on this one, I've been pulled into some internal 
things at work.


So... Talking to Matt Riedemann just now, it seems like we should 
continue to pass through the user authentication details when we have 
them to the plugin. The problem is what to do in the case where we do 
not (which is mostly going to be when the instance itself makes a 
metadata request).


I think what you're saying though is that the middleware won't let any 
requests through if they have no auth details? Is that correct?



Yes, that is correct.


Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young wrote:


On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/
 added a new
dynamic
metadata handler to nova. The basic jist is that
rather than serving
metadata statically, it can be done dyamically, so
that certain values
aren't provided until they are needed, mostly for
security purposes
(like credentials to enroll in an AD domain). The
metadata is
configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs
of the
instance, image, etc. to ensure a stable API. What
this means though
is that the REST service may need to make calls into
nova or glance to
get information, like looking up the image metadata in
glance.

Currently the dynamic metadata handler _can_ generate
auth headers if
an authenticated request is made to it, but consider
that a common use
case is fetching metadata from within an instance
using something like:

% curl
http://169.254.169.254/openstack/2016-10-06/vendor_data2.json


This will come into the nova metadata service
unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative
newbie) both
authenticated and unauthenticated requests are
accepted such that IF
an authenticated request comes it, those credentials
can be used,
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's
auth_token middleware
for all services but Keystone.  For Keystone, the rules are
similar, but the
implementation is a little different.


Ok. I'm fine with the unauthenticated path if we can
just create a separate service user for it.

2. If an unauthenticated request comes in, how best to
obtain a token
to use? Is it best to create a service user for the
REST services
(perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to
Keystone, we
could use the X509 Tokenless approach, but if the call
comes from the
new server, you won't have a cert by the time you need to
make the call,
will you?


Not sure which cert you're referring too but yeah, the
metadata service is unauthenticated. The requests can come in
from the instance which has no credentials (via
http://169.254.169.254/).

Shared service users are probably your best bet.  We can
limit the roles
that they get.  What are these calls you need to make?


To glance for image metadata, Keystone for project information
and nova for instance information. The REST call passes in
various UUIDs for these so they need to be dereferenced. There
is no guarantee that these would be called in all cases but it
is a possibility.

rob


I guess if config_drive is True then this isn't really
a problem as
the metadata will be there in the instance already.

thanks

rob


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Adam Young

On 08/31/2016 07:56 AM, Michael Still wrote:
There is a quick sketch of what a service account might look like at 
https://review.openstack.org/#/c/363606/ -- I need to do some more 
fiddling to get the new option group working, but I could do that if 
we wanted to try and get this into Newton.


So, I don't think we need it.  I think that doing an identity for the 
new node *in order* to register it with an IdP is backwards: register 
it, and use the identity from the IdP via Federation.


Anything authenticated should be done from the metadata server or from 
Nova itself, based on the token used to launch the workflow.
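
To illustrate the two request classes the vendordata service has to cope
with, a hedged sketch (the X-Identity-Status header is what
keystonemiddleware's auth_token sets on validated requests; the route,
port, and payload fields are assumptions, not the actual nova contract):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/vendor_data", methods=["POST"])
    def vendor_data():
        if request.headers.get("X-Identity-Status") == "Confirmed":
            # Authenticated path: nova forwarded the caller's identity.
            caller = request.headers.get("X-User-Id")
        else:
            # Instance-originated fetches arrive with no credentials;
            # any follow-up calls to nova/glance/keystone must use the
            # service's own credentials instead.
            caller = None

        body = request.get_json(force=True)
        return jsonify({
            "instance-id": body.get("instance-id"),
            "join-secret": "generated-per-instance",  # placeholder value
        })

    if __name__ == "__main__":
        app.run(port=8090)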




Michael

On Wed, Aug 31, 2016 at 7:54 AM, Matt Riedemann 
<mrie...@linux.vnet.ibm.com> wrote:


On 8/30/2016 4:36 PM, Michael Still wrote:

Sorry for being slow on this one, I've been pulled into some
internal
things at work.

So... Talking to Matt Riedemann just now, it seems like we should
continue to pass through the user authentication details when
we have
them to the plugin. The problem is what to do in the case
where we do
not (which is mostly going to be when the instance itself makes a
metadata request).

I think what you're saying though is that the middleware won't
let any
requests through if they have no auth details? Is that correct?

Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young
<ayo...@redhat.com> wrote:

On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review
https://review.openstack.org/#/c/317739/

> added a new
dynamic
metadata handler to nova. The basic jist is
that rather
than serving
metadata statically, it can be done
dyamically, so that
certain values
aren't provided until they are needed, mostly for
security purposes
(like credentials to enroll in an AD domain). The
metadata is
configured as URLs to a REST service.

Very little is passed into the REST call,
mostly UUIDs
of the
instance, image, etc. to ensure a stable API.
What this
means though
is that the REST service may need to make
calls into
nova or glance to
get information, like looking up the image
metadata in
glance.

Currently the dynamic metadata handler _can_
generate
auth headers if
an authenticated request is made to it, but
consider
that a common use
case is fetching metadata from within an
instance using
something like:

% curl
http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

   
>

This will come into the nova metadata service
unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a
relative
newbie) both
authenticated and unauthenticated requests are
accepted
such that IF
an authenticated request comes it, those
credentials can
be used,
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's
auth_token
middleware
for all services but Keystone.  For Keystone, the rules are
similar, but the
implementation is a little different.


Ok. I'm fine with the unauthenticated path if we can
just create a separate service user for it.

2. If an unauthenticated request comes in, how best to
obtain a token to use? Is it best to create a service user
for the REST services (perhaps several), use a shared user,
something else?

Re: [openstack-dev] [heat] Support for an "undo" operation for migrating back resource_properties_data

2016-08-31 Thread Steve Baker

On 01/09/16 12:57, Crag Wolfe wrote:

I'm working on a migrate utility for
https://review.openstack.org/#/c/363415 . Quick summary: that means
moving resource.properties_data and event.properties_data to a new
table, resource_properties_data. Migrating to the new model is easy. The
questions come up with the inverse operation.

1) Would we even want to support undoing a migrate? I lean towards "no"
but if the answer is "yes," the next question comes up:
No, OpenStack hasn't supported data migration downgrades for a while 
now. Migration failures are ideally fixed by failing forward. As a last 
resort rollbacks can be performed by restoring the database from backup.

2)

(redacted)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
I just pointed out the issues for RPC, which is used between the API cell and 
child cells if we deploy child cells in edge clouds. Since this thread is about 
massively distributed clouds, the RPC issues inside the current 
Nova/Cinder/Neutron are not the main focus (they could be another important and 
interesting topic), for example, how to guarantee the reliability of RPC 
messages:

> Cells is a good enhancement for Nova scalability, but there are some issues
> in deploying Cells for massively distributed edge clouds:
>
> 1) Using RPC for inter-data center communication will bring difficulty
> in inter-DC troubleshooting and maintenance, and some critical issues in
> operation.  There is no CLI, RESTful API, or other tool to manage a child
> cell directly. If the link between the API cell and child cells is broken,
> then the child cell in the remote edge cloud is unmanageable, no matter
> locally or remotely.
>
> 2) The challenge in security management for inter-site RPC communication.
> Please refer to the slides[1] for challenge 3, "Securing OpenStack over
> the Internet": over 500 pin holes had to be opened in the firewall to allow
> this to work, including ports for VNC and SSH for CLIs. Using RPC in cells
> for edge clouds will face the same security challenges.
>
> 3) Only Nova supports cells. But not only Nova needs to support edge clouds;
> Neutron and Cinder should be taken into account too. How about Neutron
> supporting service function chaining in edge clouds? Using RPC? How to
> address the challenges mentioned above? And Cinder?
>
> 4) Using RPC to do the production integration for hundreds of edge clouds is
> quite a challenging idea; it's a basic requirement that these edge clouds may
> be bought from multiple vendors, hardware/software or both.
> That means using cells in production for massively distributed edge clouds
> is quite a bad idea. If Cells provided a RESTful interface between the API
> cell and child cells, it would be much more acceptable, but it's still not
> enough; similar in Cinder, Neutron. Or just deploy a lightweight OpenStack
> instance in each edge cloud, for example, one rack. The question is how to
> manage the large number of OpenStack instances and provision services.
>
> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

That's also my suggestion: to collect all candidate proposals, then discuss 
these proposals and compare their pros and cons at the Barcelona summit.

I propose to use the Nova/Cinder/Neutron RESTful APIs for inter-site 
communication for edge clouds, and to provide the Nova/Cinder/Neutron API as 
the umbrella for all edge clouds. This is the pattern of Tricircle: 
https://github.com/openstack/tricircle/

If there are other proposals, please don't hesitate to share them, and let's compare.

Best Regards
Chaoyi Huang(joehuang)


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 01 September 2016 2:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 31 August 2016 at 18:54, Joshua Harlow 
mailto:harlo...@fastmail.com>> wrote:
Duncan Thomas wrote:
On 31 August 2016 at 11:57, Bogdan Dobrelya 
mailto:bdobre...@mirantis.com>
>> wrote:

I agree that RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or may be it just have to be abandoned to be replaced by a more
cloud friendly pattern.



Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the
issues and the design goals of the replacement, we're unlikely to make
progress on a replacement - even if somebody takes the heroic approach
and writes a full replacement themselves, the odds of getting community
by-in are very low.

+2 to that, there are a bunch of technologies that could replace the 
rabbit+rpc, e.g. gRPC; then there is HTTP/2 and Thrift and ... so a writeup IMHO 
would help at least clear the waters a little bit, and explain the blockers of 
the current RPC design pattern (which is multidimensional, because most people 
are probably thinking RPC == rabbit when it's actually more than that now, i.e. 
zeromq and amqp1.0 and ...) and try to centralize on a better replacement.


Is anybody who dislikes the current pattern(s) and implementation(s) 
volunteering to start this documentation? I really am not aware of the issues, 
and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-31 Thread Vincent.Chao
Hi Neutrons,

I met this situation once in the Liberty release.
Here is the thing.
When create_port_chain() is called
(in networking-sfc/networking_sfc/services/sfc/drivers/ovs/driver.py),
it goes down the following code path:
   -> _thread_update_path_nodes()
   -> _update_path_node_flowrules()
   -> _update_path_node_port_flowrules()
   -> _build_portchain_flowrule_body()
   -> _update_path_node_next_hops()
   -> _get_port_subnet_gw_info_by_port_id()
   -> _get_port_subnet_gw_info() -> raises exc.SfcNoSubnetGateway
If you didn't give the network a router, it raises SfcNoSubnetGateway.
Control then goes back to create_port_chain() in plugin.py, which catches
the exception as sfc_exc.SfcDriverError.
In this exception handler, delete_port_chain() is called.
But due to a synchronization problem between the DB and the OVS bridge,
that deletion fails.
I hope this info could help anyone who uses the Liberty version.
Next time, don't forget to give the network a router before creating a port chain.
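
For example (router and chain names illustrative):

    $ neutron router-create sfc-router
    $ neutron router-interface-add sfc-router <subnet-id>
    # the gateway lookup in _get_port_subnet_gw_info() now succeeds:
    $ neutron port-chain-create --port-pair-group PG1 \
        --flow-classifier FC1 PC1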

I don't see this code path in the master branch.
It may be better in mitaka.

Thanks
Vincent



2016-08-31 2:19 GMT+08:00 Cathy Zhang :

> Hi Alioune,
>
>
>
> It is weird that when you create a port chain, you get a “chain delete
> failed” error message.
>
> We never had this problem. Chain deletion is only involved when you do
> “delete chain” or “update chain”.
>
> Not sure which networking code file combination you are using or whether
> it is because your system is not properly cleaned up or not properly
> installed.
>
> We are going to release the networking-sfc mitaka version soon.
>
> I would suggest that you wait a little bit and then use the official
> released mitaka version and reinstall the feature on your system.
>
>
>
> Thanks,
>
> Cathy
>
>
>
> *From:* Alioune [mailto:baliou...@gmail.com]
> *Sent:* Tuesday, August 30, 2016 8:03 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Cathy Zhang; Mohan Kumar; Henry Fourie
> *Subject:* Re: [openstack-dev][neutron][networking-sfc] Unable to create
> openstack SFC
>
>
>
> Hi,
>
> Have you received my previous email ?
>
>
>
> Regards,
>
>
>
> On 15 August 2016 at 13:39, Alioune  wrote:
>
> Hi all,
>
> I'm trying to launch OpenStack SFC as explained in [1] by creating 2 SFs, 1
> Web Server (DST) and the DHCP namespace as the SRC.
>
> I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
> the neutron L2-agent runs correctly.
>
> I followed the process by creating the classifier, port pairs and port_group
> but I got an error message "delete_port_chain failed." when creating
> port_chain [2]
>
> I tried to create the neutron ports with and without the option
> "--no-security-groups" then tcpdpump on SFs tap interfaces but the ICMP
> packets don't go through the SFs.
>
>
>
> Can anyone advise how to fix that?
>
> What's your channel on IRC ?
>
>
>
> Regards,
>
>
>
>
>
> [1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
>
> [2]
>
> vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
>
> delete_port_chain failed.
>
> vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
>
> #!/bin/bash
>
>
>
> neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
> --flow-classifier FC1 PC1
>
>
>
> [3] Output OVS Flows
>
>
>
> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
>
> OFPST_FLOW reply (OF1.3) (xid=0x2):
>
>  cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146,
> n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
>
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5,
> n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,20)
>
>  cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141,
> n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,22)
>
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=3, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=4, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=8617.106s, table=4, n_packets=0,
> n_bytes=0, priority=1,tun_id=0x40e actions=push_vlan:0x8100,set_
> field:4097->vlan_vid,resubmit(,10)
>
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=6, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=10, n_packets=0,
> n_bytes=0, priority=1 actions=learn(table=20,hard_
> timeout=300,priority=1,cookie=0xbc2e9105125301dc,NXM_OF_
> VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0-
> >NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],
> output:NXM_OF_IN_PORT[]),output:1
>
>  cookie=0xbc2e9105125301dc, duration=9615.378s, table=20, n_packets=5,
> n_bytes=490, priority=0 actions=resubmit(,22)
>
>  cookie=0xbc2e9105125301dc, duration=9615.342s, table=22, n_packets=

Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Ed Leafe
On Aug 31, 2016, at 12:30 PM, Chris Dent  wrote:

> So to summarize Jay's to do list (please and thank you very much):
> 
> * Look at https://review.openstack.org/#/c/363209/ and decide if it
>  is good enough to get rolling or needs to be completely altered.
> * If the latter, alter it.

I took Jay’s stuff in 363209 and added back the logic to delete existing 
allocations. The tests are all passing for me locally, so now we just have to 
verify that this is indeed the behavior we need.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-31 Thread Yujun Zhang
Hi, Ifat,

The static configuration contains definitions of `entities` and their
`relationships`, while the scenario templates contain a definition section
which includes `entities` and `relationships` between them. An outline of
these two formats is below.

static configuration

- entities
  - {entity}
  - {entity}

for each entity

- name:
  id:
  relationship:
- {relationship}
- {relationship}

scenario templates

- definitions
  - entities
- {entity}
- {entity}
  - relationships
- {relationship}
- {relationship}

Though serving different purposes, they both

   1. describe entities and relationships
   2. use a dedicated key (id/template_id) to reference the items
   3. include a source entity and a target entity in each relationship

The main differences between the two are

   - scenario: defines rules (entity and relationship matching); a graph
   update is triggered when entities are added by a datasource.
   - static configuration: defines rules and also adds entities to the graph.

The rule definitions are common to these two modules. We could define the
static configuration using the same format as the scenario templates, and
then simulate an entity discovery from the same file.

By reusing the template parsing engine and workflow, we could reduce the
maintenance work and bring in new features more easily.

We can discuss it further if anything is unclear.

On Tue, Aug 30, 2016 at 11:07 PM Afek, Ifat (Nokia - IL) <
ifat.a...@nokia.com> wrote:

> Hi Yujun,
>
> From: Yujun Zhang
> Date: Monday, 29 August 2016 at 11:59
>
> entities:
>   - type: switch
>     name: switch-1
>     id: switch-1 # should be same as name
>     state: available
>     relationships:
>       - type: nova.host
>         name: host-1
>         id: host-1 # should be same as name
>         is_source: true # entity is `source` in this relationship
>         relation_type: attached
>       - type: switch
>         name: switch-2
>         id: switch-2 # should be same as name
>         is_source: false # entity is `target` in this relationship
>         relation_type: backup
>
>
> I think that’s the idea, instead of making this assumption in the code.
>
> But I wonder why the static physical configuration file use a different
> format from vitrage template definitions[1]
>
> [1]
> https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst
>
>
> What do you mean? The purpose of the templates is to describe the
> condition-action behaviour, whereas the purpose of the static configuration
> is to define resources to be added to vitrage graph. Can you please explain
> how you would make the formats more similar?
>
> Best Regards,
> Ifat.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] important instructions for Newton Milestone #3 Release today 8/31/2016 @ ~23:30 UTC

2016-08-31 Thread Vikram Hosakote (vhosakot)
Great work kolla and kolla-kubernetes communities!

Big thanks to the openstack-infra team as well :)

Regards,
Vikram Hosakote
IRC:  vhosakot

From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, August 31, 2016 at 6:37 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [kolla] important instructions for Newton Milestone #3 
Release today 8/31/2016 @ ~23:30 UTC

Hey folks,

Milestone 3 will be submitted for tagging to the release team today around my 
end of work day.  All milestone 3 blueprints and bugs will be moved to rc1 in 
the case they don't make the August 31st(today) deadline.

We require fernet in rc1, so if there is anything that can be done to 
accelerate Shuan's work there, please chip in.  I'd like this to be our highest 
priority blueprint merge.  The earlier it merges (when functional) the more 
time we have to test the changes.  Please iterate on this review and review 
daily until merged.

We have made tremendous progress in milestone 3.  We ended up carrying over 
some blueprints as FFEs to rc1 which are all in review state right now and 
nearly complete.

The extension for features concludes September 15th, 2016, when rc1 is tagged.  
If features don't merge by that time, they will be retargeted for Ocata.  When 
we submit the rc1 tag, master will branch.  After rc1, we will require bug 
backports from master to newton (and mitaka and liberty if appropriate).

We have a large bug backlog.  If folks could tackle that, it would be 
appreciated.  I will be spending most of my time doing that sort of work and 
would appreciate everyone on the team to contribute.  Tomorrow afternoon I will 
have all the rc1 bugs prioritized as seems fitting.

Please do not workflow+1 any blueprint work in the kolla repo until rc1 has 
been tagged.  Master of kolla is frozen for new features not already listed in 
the rc1 milestone.  Master of kolla-kubernetes is open for new features as we 
have not made a stable deliverable out of this repository (a 1.0.0 release).  
As a result, no branch will be made of the kolla-kubernetes repository (I 
think..).  If a branch is made, I'll request it be deleted.

If you have a bug that needs fixing and it doesn't need a backport, just use 
TrivialFix to speed up the process.  If it needs a backport, please use a bug 
id.  After rc1, all patches will need backports so everything should have a bug 
id.  I will provide further guidance after rc1.

A big shout out goes to our tremendous community that has pulled off 3 
milestones on schedule and in functional working order for the Kolla repository 
while maintaining 2 branches and releasing 4 z streams on a 45 day schedule.  
Fantastic work everyone!

Kolla-kubernetes also deserves a shout out - we have a functional compute-kit 
kubernetes underlay that deploys Kolla containers using mostly native 
kubernetes functionality.  We are headed towards a full Kubernetes 
implementation.  The deployment lacks the broad feature-set of kolla-ansible 
but uses the json API to our containers and is able to spin up nova virtual 
machines with full network (OVS) connectivity - which is huge!

Cheers!
-steve


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
Some evaluation aspects were added to the etherpad 
https://etherpad.openstack.org/p/massively-distributed_WG_description for 
massively distributed edge clouds, so we can evaluate each proposal. Your 
comments on these considerations are welcome:

- Security management over the WAN: how to manage inter-site communication 
and edge clouds securely.
- Fail-safe: each edge cloud should be able to run independently; a crash in 
one edge cloud should not impact the running and operation of other edge 
clouds.
- Maintainability: installation/upgrade/patching of each edge cloud should be 
manageable independently; operators shouldn't have to upgrade all edge 
clouds at the same time.
- Manageability: no islands, even if some links are broken.
- Easy integration: need to support easy integration of multiple vendors 
across hundreds or thousands of edge clouds.
- Consistency: eventually consistent information (stable status) should be 
achievable across the distributed system.

I also prepared a skeleton for the candidate proposals discussion, 
https://etherpad.openstack.org/p/massively-distributed_WG_candidate_proposals_ocata,
and linked it into the etherpad mentioned above.

Considering that Tricircle is moving to be divided into two projects, 
TricircleNetworking and TricircleGateway 
(https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E),
I listed these two sub-projects in the etherpad; the two projects can work 
together or separately.

Best Regards
Chaoyi Huang(joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 01 September 2016 1:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

As promised, I just wrote a first draft at 
https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow, in particular pointers towards 
articles/ETSI specifications/use-cases.

Comments/remarks welcome.
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is 
probably a bit too ambitious for one summit, because point 3, ''Gaps in 
OpenStack'', looks to me like a major action that will probably last more than 
just one summit, but I think you gave the right directions!

- Original Message -
> From: "joehuang" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, 31 August 2016 08:48:01
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
>
> Hello, Joshua,
>
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even be at the thousands level.
>
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenant's network, for example, to support
> database cluster distributed in several clouds, the inter-connection
> for data replication is needed.
>
> There are different thoughts, proposals or projects to tackle the
> challenge; architecture-level discussion is necessary to see if
> these designs and proposals can fulfill the demands. If there are
> lots of proposals, it's good to compare the pros and cons, and see
> in which scenarios each proposal works and in which it can't work
> very well.
>
> So I suggest having at least two successive dedicated design summit
> sessions to discuss that f2f; all thoughts, proposals or
> projects to tackle this kind of problem domain could be collected
> now. The topics to be discussed could be as follows:
>
> 0. Scenario
> 1, Use cases
> 2, Requirements in detail
> 3, Gaps in OpenStack
> 4, Proposal to be discussed
>
> Architecture level proposal discussion
> 1, Proposals
> 2, Pros and cons comparison
> 3, Challenges
> 4, next step
>
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
>
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deployment Cells for massively distributed edge
> > clouds:
> >
> > 1) using RPC for inter-data center communication will bring the
> > difficulty in inter-dc troubleshooting and maintenance, and some
> > critical issue in operation. No CLI or restful API or other tools
> > to manage a child cell directly. If the link between the API cell
> > and child cells is broken, then the child cell in the remote edge
> > cloud is unmanageable, no matter locally or remotely.
> >
> > 2). The challenge in security management for inter-site RPC
> > communication.

[openstack-dev] Upstream - Barcelona

2016-08-31 Thread Adam Lawson
I seem to have missed the thread where upstream opportunities were being
announced and/or opened. Who do I contact to get in on this? I had table
duty last year and couldn't do it.

//adam


[openstack-dev] what permission is required to create a Keystone trust

2016-08-31 Thread Matt Jia
Hi,

I am experimenting with the Keystone Trusts feature using a script which
creates a trust between two users.

import keystoneclient.v3 as keystoneclient
#import swiftclient.client as swiftclient


auth_url_v3 = 'http://xxxt.com:5000/v3/'


demo = keystoneclient.Client(auth_url=auth_url_v3,
                             username='demo',
                             password='openstack',
                             project='demo')
import pdb; pdb.set_trace()
alt_demo = keystoneclient.Client(auth_url=auth_url_v3,
                                 username='alt_demo',
                                 password='openstack',
                                 project='alt_demo')

trust = demo.trusts.create(trustor_user=demo.user_id,
                           trustee_user=alt_demo.user_id,
                           project=demo.tenant_id)

When I run this script, I got this error:

Traceback (most recent call last):
  File "test_os_trust_1.py", line 20, in 
project=demo.tenant_id)
  File "/usr/lib/python2.7/site-packages/keystoneclient/v3/contrib/trusts.py",
line 75, in create
**kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 72,
in func
return f(*args, **new_kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 328,
in create
self.key)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 151,
in _create
return self._post(url, body, response_key, return_raw, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 165,
in _post
resp, body = self.client.post(url, body=body, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py",
line 635, in post
return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py",
line 621, in _cs_request
return self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py",
line 596, in request
resp = super(HTTPClient, self).request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/baseclient.py",
line 21, in request
return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line
318, in inner
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line
354, in request
raise exceptions.from_response(resp, method, url)
keystoneclient.openstack.common.apiclient.exceptions.Forbidden: You are not
authorized to perform the requested action. (HTTP 403) (Request-ID:
req-6898b073-d467-4f2a-acc0-c4c0ca15970a)

Can anyone explain what sort of permission is required for the demo user to
create a trust?
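
(For reference, trusts.create also accepts role_names and impersonation
arguments; a minimal sketch of the same call with an explicit role,
assuming a 'Member' role that the demo user actually holds on the
project:)

# Sketch: delegate a specific role; 'Member' is illustrative and must
# be a role the trustor really has on the project.
trust = demo.trusts.create(trustor_user=demo.user_id,
                           trustee_user=alt_demo.user_id,
                           project=demo.tenant_id,
                           role_names=['Member'],
                           impersonation=True)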

Cheers, Matt


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Joshua Harlow

joehuang wrote:

I just pointed out the issues with RPC as used between the API cell and
child cells if we deploy child cells in edge clouds. Since this thread
is about massively distributed clouds, the RPC issues inside current
Nova/Cinder/Neutron are not the main focus (they could be another
important and interesting topic), for example, how to guarantee the
reliability of rpc messages:


+1 although I'd like to also discuss this, but so be it, perhaps a 
different topic :)




 > Cells is a good enhancement for Nova scalability, but there are
 > some issues in deployment Cells for massively distributed edge
 > clouds:
 >
 > 1) using RPC for inter-data center communication will bring the
 > difficulty in inter-dc troubleshooting and maintenance, and some
 > critical issue in operation. No CLI or restful API or other tools
 > to manage a child cell directly. If the link between the API cell
 > and child cells is broken, then the child cell in the remote edge
 > cloud is unmanageable, no matter locally or remotely.
 >
 > 2). The challenge in security management for inter-site RPC
 > communication. Please refer to the slides[1] for the challenge 3:
 > Securing OpenStack over the Internet. Over 500 pin holes had to be
 > opened in the firewall to allow this to work - includes ports for
 > VNC and SSH for CLIs. Using RPC in cells for edge clouds will face
 > the same security challenges.
 >
 > 3) only nova supports cells. But not only Nova needs to support
 > edge clouds; Neutron and Cinder should be taken into account too.
 > How about Neutron supporting service function chaining in edge
 > clouds? Using RPC? How to address the challenges mentioned above?
 > And Cinder?
 >
 > 4). Using RPC to do the production integration for hundreds of
 > edge clouds is quite a challenging idea; it's a basic requirement
 > that these edge clouds may be bought from multiple vendors,
 > hardware/software or both. That means using cells in production
 > for massively distributed edge clouds is quite a bad idea. If
 > Cells provided a RESTful interface between the API cell and child
 > cells, it would be much more acceptable, but it's still not
 > enough; similarly for Cinder and Neutron. Or just deploy a
 > lightweight OpenStack instance in each edge cloud, for example,
 > one rack. The question is how to manage the large number of
 > OpenStack instances and provision services.
 >
 > [1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf


That's also my suggestion: to collect all candidate proposals, then
discuss these proposals and compare their pros and cons at the
Barcelona summit.

I propose to use the Nova/Cinder/Neutron restful APIs for inter-site
communication for edge clouds, and to provide the Nova/Cinder/Neutron
API as the umbrella for all edge clouds. This is the pattern of
Tricircle: https://github.com/openstack/tricircle/



What is the REST API for tricircle?

When looking at the github I see:

''Documentation: TBD''

Getting a feel for its REST API would really be helpful in determining how 
much of a proxy/request router it is vs. being an actual API. I don't 
really want/like a proxy/request router (if that wasn't obvious, ha).


Looking at say:

https://github.com/openstack/tricircle/blob/master/tricircle/nova_apigw/controllers/server.py

That doesn't inspire me so much, since that appears to be more of a 
fork/join across many different clients, creating a nova-like API 
out of the joined results of those clients (which feels sort of, umm, 
wrong). This is where I start to wonder about what the right API is 
here; trying to map one `create_server` top-level API onto M child 
calls feels a little off (because that mapping will likely never be 
correct due to the nature of the child clouds, i.e. you have to assume 
a very strictly homogeneous nature to even get close to this working).
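
To make the shape of that concern concrete, here is a deliberately
simplified sketch of the fork/join pattern, with hypothetical
child-cloud clients (this is not Tricircle's actual code):

from concurrent import futures

def create_server(child_clients, server_spec):
    # Fan the single top-level request out to every child cloud.
    with futures.ThreadPoolExecutor(max_workers=len(child_clients)) as pool:
        results = list(pool.map(
            lambda client: client.create_server(server_spec),
            child_clients))
    # Join the per-cloud results into one nova-like response. Any
    # divergence between the child clouds (API versions, extensions,
    # flavors) has to be papered over right here, which is the fragile
    # part.
    return {'servers': results}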


Were there other alternative ways of doing this that were discussed?

Perhaps even a new API that doesn't try to 1:1 map onto child calls, 
something along the lines of an API that more directly suits what 
this project is trying to do (vs. trying to completely hide that there 
are M child calls being made underneath).


I get the idea of becoming an uber-openstack-API and trying to unify X 
other openstacks under this uber-API, but it just feels like the wrong 
way to tackle this.


-Josh



Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Joshua Harlow






We need to decide how to handle this:

https://review.openstack.org/#/c/362991/


Basically, PyMySQL normally raises an error message like this:

(pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a
foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT
`foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')

for some reason, PyMySQL 0.7.7 is now raising it like this:

(pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child
row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`
(`id`))')

This impacts oslo.db's "exception re-handling" functionality, which tries
to classify this exception as a DBNonExistentConstraint exception.  It
also breaks oslo.db's test suite locally, but in a downstream project
would only impact its ability to intercept this exception appropriately.
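
For context, that classification boils down to regex matching on the
driver's error message; a minimal sketch (not oslo.db's actual filter
code) of a match that tolerates the leaked SQLSTATE prefix:

import re

# Tolerate an optional leaked five-digit SQLSTATE (e.g. "23000") before
# the human-readable text, so both message formats classify the same way.
FK_FAILURE_RE = re.compile(
    r"^(?:\d{5})?Cannot add or update a child row: "
    r"a foreign key constraint fails")

def is_fk_failure(message):
    return bool(FK_FAILURE_RE.match(message))

assert is_fk_failure("Cannot add or update a child row: "
                     "a foreign key constraint fails (`t`.`c`)")
assert is_fk_failure("23000Cannot add or update a child row: "
                     "a foreign key constraint fails (`t`.`c`)")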

now that "23000" there looks like a bug.  The above gerrit proposes to
work around it.  However, if we didn't push out the above gerrit, we'd
instead have to change requirements:

https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z

It seems like at least one or the other would be needed for Newton.

Unless we fix the bug in the next pymysql release, it's not either/or: both
will be needed, plus a minimal oslo.db version bump.

I suggest we:
- block 0.7.7 to unblock upper-constraints updates (see the sketch after
this list);
- land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all
stable branches;
- release new oslo.db releases for L-N;
- at least for N, bump minimal version of the library in
global-requirements.txt;
- sync the bump to all consuming projects;
- later, maybe unblock 0.7.7.
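
Concretely, the blocking step would be an exclusion along these lines in
global-requirements.txt (a sketch; the minimum version shown is
illustrative):

PyMySQL>=0.6.2,!=0.7.7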

In the meantime, interested parties may work with pymysql folks to get the
issue fixed. It may take a while, so I would not make this step part of our
short term plan.

Now, I understand that this does not really sound ideal, but I assume we
are not in requirements freeze yet (the deadline for that is tomorrow), and
this plan will solve the issue for users of all versions of pymysql.

Even if we were frozen, this seems like the sort of thing we'd want to
deal with through a patch release.

I've already created the stable/newton branch for oslo.db, so we'll need
to backport the fix to have a 4.13.1 release.


+1 to 4.13.1


I'll get a release review up once 
https://review.openstack.org/#/c/363828/ merges (seems to be on its way 
to merging).


-Josh



Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Joshua Harlow


+1 to 4.13.1


I'll get a release review up once
https://review.openstack.org/#/c/363828/ merges (seems to be on its way
to merging).


https://review.openstack.org/#/c/364063/

Enjoy!

-Josh



Re: [openstack-dev] [kolla] Requirement for bug when reno is present

2016-08-31 Thread Steven Dake (stdake)


From: Dave Walker <em...@daviey.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, August 30, 2016 at 4:24 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] Requirement for bug when reno is present


On 30 August 2016 at 11:42, Paul Bourke <paul.bou...@oracle.com> wrote:
Kolla,

Do people feel we still want to require a bug-id in the commit message for 
features when reno notes are present? My understanding is that until now we've 
required people to add bugs for non-trivial features in order to track them as 
part of releases. Does/should reno supersede this?

-Paul

I guess you raised this because of my recent comment on a change you did... but 
actually, I agree with you.  I don't think it is a good process, but 
standardisation is key.

The issue comes around because Kolla wanted to use bugs to track potential 
backports to stable/*.  However, I think this is generally overrated and the 
Change-ID is suitable for this purpose.
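
For example (the Change-Id below is made up): since cherry-picks preserve
the Change-Id footer, a single Gerrit query surfaces the master change
together with every stable backport of it:

https://review.openstack.org/#/q/change:I0123456789abcdef0123456789abcdef01234567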

Open to other options in the future.  Can you explain how Change-Ids can be 
used to track which bugs need to be backported?


I really hate raising bugs "just because", when realistically many of them are 
not bugs and contain a one-line "Add this $feature" style bug description.  It 
just burns through Launchpad bug numbers, and they will likely never be looked 
at again.

We shouldn't be using bugs for features IMO and definitely not for release 
candidates where an FFE hasn't been given.

Regards
-steve


--
Kind Regards,
Dave Walker


  1   2   >