Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Alex Xu
I'm trying hard to catch up on the discussion since I missed a few
messages... it is really hard...

In my mind, the request group is only about binding traits and resource
classes together. I've also been thinking about whether we need an
explicit tree structure to describe the request. So the proximity
parameter sounds right to me.

2018-04-19 6:45 GMT+08:00 Eric Fried :

> > I have a feeling we're just going to go back and forth on this, as we
> > have for weeks now, and not reach any conclusion that is satisfactory to
> > everyone. And we'll delay, yet again, getting functionality into this
> > release that serves 90% of use cases because we are obsessing over the
> > 0.01% of use cases that may pop up later.
>
> So I vote that, for the Rocky iteration of the granular spec, we add a
> single `proximity={isolate|any}` qparam, required when any numbered
> request groups are specified.  I believe this allows us to satisfy the
> two NUMA use cases we care most about: "forced sharding" and "any fit".
> And as you demonstrated, it leaves the way open for finer-grained and
> more powerful semantics to be added in the future.
>
> -efried
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
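[Editorial illustration] As a concrete sketch of the proposal above — hedged: `proximity` is only the parameter proposed in this thread, not an existing placement API parameter, and the resource classes and amounts are illustrative — a granular request with two numbered groups could look like:

```python
from urllib.parse import urlencode

# Two numbered request groups asking for one VGPU each, plus un-numbered
# resources. The proposed `proximity` qparam states whether the numbered
# groups must be satisfied by different providers ("isolate") or by any
# providers that fit ("any").
params = [
    ("resources", "VCPU:2,MEMORY_MB:2048"),
    ("resources1", "VGPU:1"),
    ("resources2", "VGPU:1"),
    ("proximity", "isolate"),
]
query = "GET /allocation_candidates?" + urlencode(params, safe=":,")
print(query)
```

Under the "no default" suggestion discussed later in the thread, omitting `proximity` while numbered groups are present would return a 400.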


Re: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string

2018-04-19 Thread Szwed, Maciej
Hi Hamdy,
Thanks for quick action.

Regards
Maciej

From: Hamdy Khader [mailto:ham...@mellanox.com]
Sent: Tuesday, April 17, 2018 12:51 PM
To: OpenStack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string


Hi,

I think you're right, will drop the split and push change soon.



Regards,

Hamdy


From: Szwed, Maciej <maciej.sz...@intel.com>
Sent: Monday, April 16, 2018 4:51 PM
To: OpenStack-dev@lists.openstack.org
Subject: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string


Hi,

I'm wondering why the Os-brick implementation of NVMe-oF in
os_brick/initiator/connectors/nvme.py, line 97, does a split on 'nqn'.
Connection properties, including 'nqn', are provided by the Cinder driver,
and when a user wants to implement a new driver that uses NVMe-oF, he/she
needs to create the NQN string with an additional string and dot preceding
the desired NQN string. This additional string is unused across the whole
NVMe-oF implementation, which creates confusion for people creating a new
Cinder driver. What was its purpose? Can we drop that split?



Regards,

Maciej
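[Editorial illustration] To make the described confusion concrete — a sketch only: the actual property name, prefix, and split position in os_brick's nvme.py may differ from what is assumed here:

```python
# Cinder driver authors reportedly have to prepend an extra token and a
# dot to the NQN they actually want, because os-brick splits the value
# once on the first dot and keeps only the remainder.
connection_properties = {
    # Hypothetical value: "extra-token." is the unused prefix in question.
    "nqn": "extra-token.nqn.2014-08.org.nvmexpress:uuid:0123",
}

# The split being questioned in this thread: the prefix is discarded and
# never used anywhere else in the NVMe-oF code path.
target_nqn = connection_properties["nqn"].split(".", 1)[1]
print(target_nqn)
```

Dropping the split, as agreed above, would let drivers pass the desired NQN directly.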


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Balázs Gibizer



On Thu, Apr 19, 2018 at 12:45 AM, Eric Fried wrote:

>> I have a feeling we're just going to go back and forth on this, as we
>> have for weeks now, and not reach any conclusion that is satisfactory to
>> everyone. And we'll delay, yet again, getting functionality into this
>> release that serves 90% of use cases because we are obsessing over the
>> 0.01% of use cases that may pop up later.
>
> So I vote that, for the Rocky iteration of the granular spec, we add a
> single `proximity={isolate|any}` qparam, required when any numbered
> request groups are specified.  I believe this allows us to satisfy the
> two NUMA use cases we care most about: "forced sharding" and "any fit".
> And as you demonstrated, it leaves the way open for finer-grained and
> more powerful semantics to be added in the future.


Can the proximity param specify the relationship between the un-numbered
and the numbered groups as well, or only between numbered groups?

Besides that, I'm +1 on proximity={isolate|any}

Cheers,
gibi



-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Chris Dent

On Wed, 18 Apr 2018, Eric Fried wrote:


I have a feeling we're just going to go back and forth on this, as we
have for weeks now, and not reach any conclusion that is satisfactory to
everyone. And we'll delay, yet again, getting functionality into this
release that serves 90% of use cases because we are obsessing over the
0.01% of use cases that may pop up later.


So I vote that, for the Rocky iteration of the granular spec, we add a
single `proximity={isolate|any}` qparam, required when any numbered
request groups are specified.  I believe this allows us to satisfy the
two NUMA use cases we care most about: "forced sharding" and "any fit".
And as you demonstrated, it leaves the way open for finer-grained and
more powerful semantics to be added in the future.


The three most important priorities for me (highest last) are:

* being able to move forward quickly so we can learn from our
  mistakes sooner than later and not cause backlogs in our progress

* the common behavior should require the least syntax. Since (I
  hope) the common behavior has nothing to do with nested, and the
  syntax under discussion only comes into play on granular requests,
  it's not really germane here. But it bears repeating that we are
  outside the domain of useful stuff for most cloudy people, here.

* the API needs to have an easy mental process for translating from
  human utterances to a set of query parameters and vice versa. This
  is why I tend to prefer a single query parameter (like either of
  the two original proposals in this thread, or 'proximity') to
  encoded parameters (like 'resources1{s,d}') or taking the leap
  into a complex JSON query structure in a POST.

One of the advantages of microversions is that we can easily change
it later if we want. It can mean that the underlying data query code
may need to branch more, but that's the breaks, and isn't really
that big of a deal if we're maintaining our tests well.

It is more than likely that we will eventually have to move to POST
at some point (and at that point it wouldn't be completely wrong to
investigate graphql). But we should put that off and let ourselves
progress there in a stepwise fashion.

Let's take one or two use cases, solve for them in what we hope is a
flexible fashion, and move on. If we get it wrong we can fix it. And
it'll be okay. Let's not maintain this painful illusion that we're
writing stone tablets.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Sylvain Bauza
2018-04-19 10:38 GMT+02:00 Balázs Gibizer :

>
>
> On Thu, Apr 19, 2018 at 12:45 AM, Eric Fried  wrote:
>
>>  I have a feeling we're just going to go back and forth on this, as we
>>>  have for weeks now, and not reach any conclusion that is satisfactory to
>>>  everyone. And we'll delay, yet again, getting functionality into this
>>>  release that serves 90% of use cases because we are obsessing over the
>>>  0.01% of use cases that may pop up later.
>>>
>>
>> So I vote that, for the Rocky iteration of the granular spec, we add a
>> single `proximity={isolate|any}` qparam, required when any numbered
>> request groups are specified.  I believe this allows us to satisfy the
>> two NUMA use cases we care most about: "forced sharding" and "any fit".
>> And as you demonstrated, it leaves the way open for finer-grained and
>> more powerful semantics to be added in the future.
>>
>
> Can the proximity param specify relationship between the un-numbered and
> the numbered groups as well or only between numbered groups?
> Besides that I'm +1 about proxyimity={isolate|any}
>
>
What's the default behaviour if we aren't providing the proximity qparam?
Isolate or any?


> Cheers,
> gibi
>
>
>
>> -efried
>>


[openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native

2018-04-19 Thread Andrea Frittoli
Dear all,

a quick update on the current status.

Zuul has been fixed to use the correct branch for roles coming from
different repositories [1].
The backport of the devstack patches to support multinode jobs is almost
complete. All stable/queens patches are merged, stable/pike patches are
almost all approved and going through the gate [2].

The two facts above mean that now the "devstack-tempest" base job defined
in Tempest can be switched to use the "orchestrate-devstack" role and thus
function as a base for multinode jobs [3].
It also means that work on writing grenade jobs in zuulv3 native format can
now be resumed [4].

Kind regards

Andrea Frittoli

[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129217.html
[2]
https://review.openstack.org/#/q/topic:multinode_zuulv3+(status:open+OR+status:merged)
[3] https://review.openstack.org/#/c/545724/
[4]
https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3


On Mon, Mar 12, 2018 at 2:08 PM Andrea Frittoli wrote:

> Dear all,
>
> post-PTG updates:
>
> - the devstack patches for multinode support are now merged on master. You
> can now build your multinode zuulv3 native devstack/tempest test jobs using
> the same base jobs as for single node, and setting a multinode nodeset.
> Documentation landed as well, so you can now find docs on roles [0], jobs
> [1] and a migration guide [2] which will show you which base jobs to start
> with and how to migrate those devstack-gate flags from legacy jobs to the
> zuul v3 jobs.
>
> - the multinode patches including switching of test-matrix (on master) and
> start including the list of devstack services in the base jobs. In doing so
> I used the new neutron service names. That may be causing issues to
> devstack-plugins looking for old service names, so if you encounter an
> issue please reach out in the openstack-qa / openstack-infra rooms. We
> could still roll back to the old names, however the beginning of the cycle
> is probably the best time to sort out issues related to the new names and
> new logic in the neutron - devstack code.
>
> Coming up next:
>
> - backport of devstack patches to stable (queens and pike), so we can
> switch the Tempest job devstack multinode mode and develop grenade zuulv3
> native jobs. I do not plan on backporting the new neutron names to any
> stable branch, let me know if there is any reason to do otherwise.
> - work on grenade is at very early stages [3], so far I got devstack
> running successfully on stable/queens from the /opt/stack/old folder using
> the zuulv3 roles. Next up is actually doing the migration and running all
> relevant checks.
>
> Andrea Frittoli (andreaf)
>
> [0] https://docs.openstack.org/devstack/latest/zuul_roles.html
> [1] https://docs.openstack.org/devstack/latest/zuul_jobs.html
> [2] https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
>
> [3]
> https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3
>
>
>
> On Tue, Feb 20, 2018 at 9:22 PM Andrea Frittoli wrote:
>
>> Dear all,
>>
>> updates:
>>
>> - host/group vars: zuul now supports declaring host and group vars in the
>> job definition [0][1] - thanks corvus and infra team!
>>   This is a great help towards writing the devstack and tempest base
>> multinode jobs [2][3]
>>   * NOTE: zuul merges dict variables through job inheritance. Variables
>> in host/group_vars override global ones. I will write some examples further
>> clarify this.
>>
>> - stable/pike: devstack ansible changes have been backported to
>> stable/pike, so we can now run zuulv3 jobs against stable/pike too - thank
>> you tosky!
>>   next change in progress related to pike is to provide tempest-full-pike
>> for branchless repositories [4]
>>
>> - documentation: devstack now publishes documentation on its ansible
>> roles [5].
>>   More devstack documentation patches are in progress to provide jobs
>> reference, examples and a job migration how-to [6].
>>
>>
>> Andrea Frittoli (andreaf)
>>
>> [0]
>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host_vars
>>
>> [1]
>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.group_vars
>>
>> [2] https://review.openstack.org/#/c/545696/
>> [3] https://review.openstack.org/#/c/545724/
>> [4] https://review.openstack.org/#/c/546196/
>> [5] https://docs.openstack.org/devstack/latest/roles.html
>> [6] https://review.openstack.org/#/c/545992/
>>
>>
>> On Mon, Feb 19, 2018 at 2:46 PM Andrea Frittoli <andrea.fritt...@gmail.com> wrote:
>>
>>> Dear all,
>>>
>>> updates:
>>> - tempest-full-queens and tempest-full-py3-queens are now available for
>>> testing of branchless repositories [0]. They are used for tempest and
>>> devstack-gate. If you own a tempest plugin in a branchless repo, you may
>>> consider adding similar jobs to your plugin if you use it for tests on
>>> stable/queen as well.
>>> - if you have migrated jobs based on devstack-tempest please let me
>>> know, I'm building ref

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
gibi-

> Can the proximity param specify the relationship between the un-numbered
> and the numbered groups as well, or only between numbered groups?
> Besides that, I'm +1 on proximity={isolate|any}

Remembering that the resources in the un-numbered group can be spread
around the tree and sharing providers...

If applying "isolate" to the un-numbered group means that each resource
you specify therein must be satisfied by a different provider, then you
should have just put those resources into numbered groups.

If "isolate" means that *none* of the numbered groups will land on *any*
of the providers satisfying the un-numbered group... that could be hard
to reason about, and I don't know if it's useful.

So thus far I've been thinking about all of these semantics only in
terms of the numbered groups (although Jay's `can_split` was
specifically aimed at the un-numbered group).

That being the case (is that a bikeshed on the horizon?) perhaps
`granular_policy={isolate|any}` is a more appropriate name than `proximity`.

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
Chris-

Thanks for this perspective.  I totally agree.

> * the common behavior should require the least syntax.

To that point, I had been assuming "any fit" was going to be more common
than "explicit anti-affinity".  But I think this is where we are having
trouble agreeing.  So since, as you point out, we're in the weeds to
begin with when talking about nested, IMO mriedem's suggestion (no
default, require behavior to be specified) is a reasonable compromise.

> it'll be okay. Let's not maintain this painful illusion that we're
> writing stone tablets.

This.  I, for one, was being totally guilty of that.

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
Sylvain-

> What's the default behaviour if we aren't providing the proximity qparam?
> Isolate or any?

What we've been talking about, per mriedem's suggestion, is that the
qparam is required when you specify any numbered request groups.  There
is no default.  If you don't provide the qparam, 400.

(Edge case: the qparam is meaningless if you only provide *one* numbered
request group - assuming it has no bearing on the un-numbered group.  In
that case omitting it might be acceptable... or 400 for consistency.)

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Balázs Gibizer



On Thu, Apr 19, 2018 at 2:27 PM, Eric Fried wrote:

> gibi-
>
>> Can the proximity param specify the relationship between the un-numbered
>> and the numbered groups as well, or only between numbered groups?
>> Besides that, I'm +1 on proximity={isolate|any}
>
> Remembering that the resources in the un-numbered group can be spread
> around the tree and sharing providers...
>
> If applying "isolate" to the un-numbered group means that each resource
> you specify therein must be satisfied by a different provider, then you
> should have just put those resources into numbered groups.
>
> If "isolate" means that *none* of the numbered groups will land on *any*
> of the providers satisfying the un-numbered group... that could be hard
> to reason about, and I don't know if it's useful.
>
> So thus far I've been thinking about all of these semantics only in
> terms of the numbered groups (although Jay's `can_split` was
> specifically aimed at the un-numbered group).

Thanks for the explanation. Now it makes sense to me to limit the
proximity param to the numbered groups.

> That being the case (is that a bikeshed on the horizon?) perhaps
> `granular_policy={isolate|any}` is a more appropriate name than
> `proximity`.

The policy term is more general than proximity, so the
granular_policy=any query fragment isn't descriptive enough any more.



gibi



-efried



Re: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client?

2018-04-19 Thread Doug Hellmann
Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200:
> Swift has had storage policies for a while now. These are enabled by 
> setting the 'X-Storage-Policy' header on a container.
> 
> It looks to me like this is not possible using openstack-client (even in 
> master branch) - while there is a 'set' operation for containers this 
> will *only* set  'Meta-*' type headers.
> 
> It seems to me that adding this would be highly desirable. Is it in the 
> pipeline? If not I might see how much interest there is at my end for 
> adding such - as (famous last words) it looks pretty straightforward to do.
> 
> regards
> 
> Mark
> 

I can't imagine why we wouldn't want to implement that and I'm not
aware of anyone working on it. If you're interested and have time,
please do work on the patch(es).

Doug

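[Editorial illustration] For reference, what the proposed client support would do under the hood is a plain container PUT carrying the header (a sketch: the endpoint, token, and policy name are placeholders, and note that Swift only honors the policy when the container is created, not on later updates):

```python
import http.client

# Headers for creating a container with a non-default storage policy.
headers = {
    "X-Auth-Token": "AUTH_tk-placeholder",   # placeholder token
    "X-Storage-Policy": "gold",              # hypothetical policy name
}

def create_container(conn: http.client.HTTPConnection) -> None:
    # PUT /v1/<account>/<container> creates the container; the
    # X-Storage-Policy header is only honored at creation time.
    conn.request("PUT", "/v1/AUTH_account/my-container", headers=headers)

# conn = http.client.HTTPConnection("swift.example.com", 8080)
# create_container(conn)
```

An `openstack container set` subcommand would therefore need to pass the header on create rather than on a generic "set" of an existing container.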


[openstack-dev] [docs] Documentation meeting minutes for 2018-04-18

2018-04-19 Thread Petr Kovar
===
#openstack-doc: docteam
===


Meeting started by pkovar at 16:02:48 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-04-18-16.02.log.html
.



Meeting summary
---

* Open discussion  (pkovar, 16:04:23)

* docs PTL availability in April/May  (pkovar, 16:05:32)
  * pkovar to have limited online presence for the next 3 weeks
(pkovar, 16:06:10)
  * back in mid-May  (pkovar, 16:06:23)
  * will check email  (pkovar, 16:06:43)

* Vancouver Summit  (pkovar, 16:08:27)
  * Will have a shared 10+10 mins project update slot with i18n, see the
published schedule  (pkovar, 16:08:31)
  * LINK:

https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21627/docsi18n-project-onboarding
(pkovar, 16:09:09)
  * Frank (eumel8) started to fill out the content for project updates
on I18n part.  (ianychoi, 16:09:29)
  * stephenfin to talk about docs tooling updates  (pkovar, 16:10:16)

* Bug Triage Team  (pkovar, 16:19:01)
  * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams
(pkovar, 16:19:06)
  * if folks want to help, sign up  (pkovar, 16:19:55)
  * if folks want to help, sign up at
https://wiki.openstack.org/wiki/Documentation/SpecialityTeams for
the next slot  (pkovar, 16:20:36)
  * for the next cycle, we need to decide if we want to retire ha guide
which is pretty much unmaintained with more and more bugs being
filed  (pkovar, 16:25:39)

* Replacing pbr's autodoc feature with sphinxcontrib-apidoc  (pkovar,
  16:25:43)
  * LINK:
http://lists.openstack.org/pipermail/openstack-dev/2018-April/128986.html
(pkovar, 16:25:50)
  * kudos to stephenfin for spearheading this  (pkovar, 16:26:00)
  * LINK: https://review.openstack.org/#/c/509297/  (ianychoi, 16:31:09)



Meeting ended at 16:34:04 UTC.



People present (lines said)
---

* pkovar (61)
* ianychoi (22)
* stephenfin (5)
* openstack (4)
* openstackgerrit (1)



Generated by `MeetBot`_ 0.1.4




[openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig.

2018-04-19 Thread Harald Jensås
Hi,

When configuring TripleO deployments with nodes on routed ctlplane
networks we need to pass some per-network properties to the
NetworkConfig resource[1] in THT. We get the ``ControlPlaneIp``
property using get_attr, but the NIC configs need a couple of more
parameters[2], for example: ``ControlPlaneSubnetCidr``,
``ControlPlaneDefaultRoute`` and ``DnsServers``.

Since queens these templates are jinja templated, to generate things
from from network_data.yaml. When using routed ctlplane networks, the
parameters ``ControlPlaneSubnetCidr`` and ``ControlPlaneDefaultRoute``
will be different. So we need to use static per-role
Net::SoftwareConfig templates, and add parameters such as
``ControlPlaneDefaultRouteLeafX``.

The values the use need to pass in for these are already available in
the neutron ctlplane network configuration on the undercloud. So
ideally we should not need to ask the user to provide them in
parameter_defaults, we should resolve the correct values automatically.


: We can get the port ID using get_attr:

 {get_attr: [<server>, addresses, <network>, 0, port]}

: From there outside of heat we can get the subnet_id:

 openstack port show 2fb4baf9-45b0-48cb-8249-c09a535b9eda \
 -f yaml -c fixed_ips

 fixed_ips: ip_address='172.20.0.10', subnet_id='2b06ae2e-423f-4a73-97ad-4e9822d201e5'

: And finally we can get the gateway_ip and cidr of the subnet:

  openstack subnet show 2b06ae2e-423f-4a73-97ad-4e9822d201e5 \
  -f yaml -c gateway_ip -c cidr

 cidr: 172.20.0.0/26
 gateway_ip: 172.20.0.62


The problem is getting there using heat ...
a couple of ideas:

a) Use heat's ``external_resource`` to create a port resource,
   and then  a external subnet resource. Then get the data
   from the external resources. We probably would have to make
   it possible for a ``external_resource`` depend on the server
   resource, and verify that these resource have the required
   attributes.

b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as
   well probably) to include the data.

   If we do this we should probably aim to be in parity with
   what is made available to clients getting the configuration
   from dhcp. (mtu, dns_domain, dns_servers, prefixlen,
   gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode
   etc.)

c) Create a new heat function to read properties of any
   openstack resource, without having to make use of the
   external_resource in heat.




[1] https://github.com/openstack/tripleo-heat-templates/blob/9727a0d813
f5078d19b605e445d1c0603c9e777c/puppet/role.role.j2.yaml#L383-L389
[2] https://github.com/openstack/tripleo-heat-templates/blob/9727a0d813
f5078d19b605e445d1c0603c9e777c/network/config/single-nic-
vlans/role.role.j2.yaml#L21-L27

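[Editorial illustration] The CLI steps above can be sketched as a single lookup chain (illustrative only: the dicts stand in for the `openstack port show` / `openstack subnet show` responses quoted above, and treating ``ControlPlaneSubnetCidr`` as the bare prefix length is an assumption):

```python
# Given a port's fixed_ips entry, find its subnet and pull out the values
# the NIC configs need (cidr prefix length and default route).
port = {
    "fixed_ips": [
        {"ip_address": "172.20.0.10",
         "subnet_id": "2b06ae2e-423f-4a73-97ad-4e9822d201e5"},
    ],
}
subnets = {
    "2b06ae2e-423f-4a73-97ad-4e9822d201e5": {
        "cidr": "172.20.0.0/26",
        "gateway_ip": "172.20.0.62",
    },
}

subnet = subnets[port["fixed_ips"][0]["subnet_id"]]
control_plane_subnet_cidr = subnet["cidr"].split("/")[1]  # prefix length
control_plane_default_route = subnet["gateway_ip"]
```

Options (a)-(c) above differ only in which layer (heat external resources, server/port attributes, or a new heat function) performs this chain.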


[openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Matthew Booth
We've had inconsistent naming of recreate/evacuate in Nova for a long
time, and it will persist in a couple of places for a while more.
However, I've proposed the following to rename 'recreate' to
'evacuate' everywhere with no rpc/api impact here:

https://review.openstack.org/560900

One of the things which is renamed is the driver 'supports_recreate'
capability, which I've renamed to 'supports_evacuate'. The above
change updates this for in-tree drivers, but as noted in review this
would impact out-of-tree drivers. If this might affect you, please
follow the above in case it merges.

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

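[Editorial illustration] For out-of-tree driver maintainers, the impact is roughly the following — a hypothetical sketch, since the exact attribute handling depends on the final form of the review above:

```python
class MyOutOfTreeDriver:
    # Before the rename, Nova consulted the 'supports_recreate' key; after
    # the proposed change it consults 'supports_evacuate'. Carrying both
    # keys during the transition keeps the driver loadable on either side
    # of the merge.
    capabilities = {
        "supports_recreate": True,  # pre-rename key (older Nova)
        "supports_evacuate": True,  # post-rename key (if the review merges)
    }

driver = MyOutOfTreeDriver()
```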


[openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Doug Hellmann
Today is the deadline for proposing a release for the Rocky-1 milestone.
Please don't forget to include your libraries (client or otherwise) as
well.

Doug



Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-19 Thread Tobias Urdin
Two different setups, very basic.

AggregateInstanceExtraSpecsFilter
RetryFilter
AvailabilityZoneFilter
ComputeFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
ServerGroupAntiAffinityFilter
ServerGroupAffinityFilter

RetryFilter
AvailabilityZoneFilter
RamFilter
ComputeFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
ServerGroupAntiAffinityFilter
ServerGroupAffinityFilter

On 04/18/2018 06:34 PM, Chris Friesen wrote:
> On 04/18/2018 09:17 AM, Artom Lifshitz wrote:
>
>> To that end, we'd like to know what filters operators are enabling in
>> their deployment. If you can, please reply to this email with your
>> [filter_scheduler]/enabled_filters (or
>> [DEFAULT]/scheduler_default_filters if you're using an older version)
>> option from nova.conf. Any other comments are welcome as well :)
> RetryFilter
> ComputeFilter
> AvailabilityZoneFilter
> AggregateInstanceExtraSpecsFilter
> ComputeCapabilitiesFilter
> ImagePropertiesFilter
> NUMATopologyFilter
> ServerGroupAffinityFilter
> ServerGroupAntiAffinityFilter
> PciPassthroughFilter
>


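[Editorial illustration] For anyone collecting these survey replies, the filter lists go into the option named earlier in the thread (illustrative nova.conf fragment only; substitute your own filter list, and use `[DEFAULT]/scheduler_default_filters` on older releases as noted above):

```ini
[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```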


[openstack-dev] [packaging-rpm][meeting] Proposal for new meeting time

2018-04-19 Thread Javier Pena
Hello fellow packagers,

During today's meeting [1], we discussed the schedule conflicts some of us have 
with the current meeting slot. As a result, I would like to propose a new 
meeting time:

- Wednesdays, 1 PM UTC (3 PM CEST)

So far, dirk and jruzicka agreed with the change. If you have an issue, please 
reply now.

Regards,
Javier Peña



Re: [openstack-dev] [Release-job-failures][freezer][release] Pre-release of openstack/freezer-dr failed

2018-04-19 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-04-19 09:38:31 -0400:
> Excerpts from zuul's message of 2018-04-19 13:22:40 +:
> > Build failed.
> > 
> > - release-openstack-python 
> > http://logs.openstack.org/c9/c9263cde360d37654c4298c496cd9af251f23ce7/pre-release/release-openstack-python/541ad7d/
> >  : FAILURE in 3m 48s
> > - announce-release announce-release : SKIPPED
> > - propose-update-constraints propose-update-constraints : SKIPPED
> > 
> 
> This failure seems to be caused by a failure to install libvirt when
> trying to build the sdist under tox.
> 
> Doug

It looks like the problem is that freezer-dr is not using the
constraints list, so it is getting libvirt 4.2.0. Thanks to Matt
Thode (prometheanfire) for helping debug that!

Freezer team, I suggest you add constraints to the freezer-dr repository
before the next milestone so the next release job run passes.

Doug

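[Editorial illustration] The usual way projects apply the constraints list is through tox (illustrative tox.ini fragment; the constraints URL is a placeholder and stable branches typically pin their own):

```ini
[testenv]
deps =
  -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
```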


Re: [openstack-dev] [Release-job-failures] Pre-release of openstack/freezer-dr failed

2018-04-19 Thread Doug Hellmann
Excerpts from zuul's message of 2018-04-19 13:22:40 +:
> Build failed.
> 
> - release-openstack-python 
> http://logs.openstack.org/c9/c9263cde360d37654c4298c496cd9af251f23ce7/pre-release/release-openstack-python/541ad7d/
>  : FAILURE in 3m 48s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> 

This failure seems to be caused by a failure to install libvirt when
trying to build the sdist under tox.

Doug



[openstack-dev] [nova][placement] Scheduler VM distribution

2018-04-19 Thread Andrey Volkov
Hello,

From my understanding, we have a race between the scheduling
process and host weight update.

I made a simple experiment. On a 50 fake-host environment,
40 VMs were booted; they should have been placed one per host.
The hosts are equal to each other in terms of inventory.

img=6fedf6a1-5a55-4149-b774-b0b4dccd2ed1
flavor=1
for i in {1..40}; do
nova boot --flavor $flavor --image $img --nic none vm-$i;
sleep 1;
done

The following distribution resulted:

mysql> select resource_provider_id, count(*) from allocations where
resource_class_id = 0 group by 1;

+--+--+
| resource_provider_id | count(*) |
+--+--+
|1 |2 |
|   18 |2 |
|   19 |3 |
|   20 |3 |
|   26 |2 |
|   29 |2 |
|   33 |3 |
|   36 |2 |
|   41 |1 |
|   49 |3 |
|   51 |2 |
|   52 |3 |
|   55 |2 |
|   60 |3 |
|   61 |2 |
|   63 |2 |
|   67 |3 |
+--+--+
17 rows in set (0.00 sec)

And the question is:
If we have atomic resource allocation, what is the reason
to use compute_nodes.* for weight calculation?

A log of the behavior I described is here: http://ix.io/18cw

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Jay Pipes

On 04/19/2018 09:15 AM, Matthew Booth wrote:

We've had inconsistent naming of recreate/evacuate in Nova for a long
time, and it will persist in a couple of places for a while more.
However, I've proposed the following to rename 'recreate' to
'evacuate' everywhere with no rpc/api impact here:

https://review.openstack.org/560900

One of the things which is renamed is the driver 'supports_recreate'
capability, which I've renamed to 'supports_evacuate'. The above
change updates this for in-tree drivers, but as noted in review this
would impact out-of-tree drivers. If this might affect you, please
follow the above in case it merges.


I have to admit, Matt, I'm a bit confused by this. I was under the 
impression that we were trying to *remove* uses of the term "evacuate" 
as much as possible because that term is not adequately descriptive of 
the operation and terms like "recreate" were more descriptive?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Migration of Bandit

2018-04-19 Thread Luke Hinds
All,

Please note that Bandit's code, issues, and docs will be migrated from
OpenStack to PyCQA.

This is expected to happen next week.

No changes are required in any projects or CI, as Bandit will still be
available via PyPI, and projects / CI are already set up to consume it that
way via tox.

READMEs and Key Wiki pages will be updated to inform any visitors of the
new home and how to contribute / raise issues.

Cheers,

Luke
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native

2018-04-19 Thread James E. Blair
Andrea Frittoli  writes:

> Dear all,
>
> a quick update on the current status.
>
> Zuul has been fixed to use the correct branch for roles coming from
> different repositories [1].
> The backport of the devstack patches to support multinode jobs is almost
> complete. All stable/queens patches are merged, stable/pike patches are
> almost all approved and going through the gate [2].
>
> The two facts above mean that now the "devstack-tempest" base job defined
> in Tempest can be switched to use the "orchestrate-devstack" role and thus
> function as a base for multinode jobs [3].
> It also means that work on writing grenade jobs in zuulv3 native format can
> now be resumed [4].
>
> Kind regards
>
> Andrea Frittoli
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129217.html
> [2]
> https://review.openstack.org/#/q/topic:multinode_zuulv3+(status:open+OR+status:merged
> )
> [3] https://review.openstack.org/#/c/545724/
> [4]
> https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3

Also, shortly after this update, we made a change to make it slightly
easier for folks with devstack plugin jobs.  You should no longer need
to set the LIBS_FROM_GIT variable manually; instead, just specify the
project in `required-projects`, and the devstack job will set it
automatically.

See https://review.openstack.org/548331 for an example.
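In job terms that looks roughly like the following (the job name and plugin repository are made up for illustration; see the linked review for a real instance):

```yaml
- job:
    name: my-plugin-tempest            # hypothetical job name
    parent: devstack-tempest
    required-projects:
      # Listing the plugin repo here now implies LIBS_FROM_GIT for it,
      # so the devstack run installs it from the Zuul-prepared checkout.
      - openstack/my-devstack-plugin   # hypothetical plugin repo
```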

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
Thanks to everyone who contributed to this discussion.  With just a
teeny bit more bikeshedding on the exact syntax [1], we landed on:

group_policy={none|isolate}

I have proposed this delta to the granular spec [2].

-efried

[1]
http://p.anticdent.org/logs/openstack-placement?dated=2018-04-19%2013:48:39.213790#a1c
[2] https://review.openstack.org/#/c/562687/

On 04/19/2018 07:38 AM, Balázs Gibizer wrote:
> 
> 
> On Thu, Apr 19, 2018 at 2:27 PM, Eric Fried  wrote:
>> gibi-
>>
>>>  Can the proximity param specify relationship between the un-numbered
>>> and
>>>  the numbered groups as well or only between numbered groups?
>>>  Besides that I'm +1 about proximity={isolate|any}
>>
>> Remembering that the resources in the un-numbered group can be spread
>> around the tree and sharing providers...
>>
>> If applying "isolate" to the un-numbered group means that each resource
>> you specify therein must be satisfied by a different provider, then you
>> should have just put those resources into numbered groups.
>>
>> If "isolate" means that *none* of the numbered groups will land on *any*
>> of the providers satisfying the un-numbered group... that could be hard
>> to reason about, and I don't know if it's useful.
>>
>> So thus far I've been thinking about all of these semantics only in
>> terms of the numbered groups (although Jay's `can_split` was
>> specifically aimed at the un-numbered group).
> 
> Thanks for the explanation. Now it makes sense to me to limit the
> proximity param to the numbered groups.
> 
>>
>> That being the case (is that a bikeshed on the horizon?) perhaps
>> `granular_policy={isolate|any}` is a more appropriate name than
>> `proximity`.
> 
> The policy term is more general than proximity therefore the
> granular_policy=any query fragment isn't descriptive enough any more.
> 
> 
> gibi
> 
>>
>> -efried
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Chris Friesen

On 04/19/2018 08:33 AM, Jay Pipes wrote:

On 04/19/2018 09:15 AM, Matthew Booth wrote:

We've had inconsistent naming of recreate/evacuate in Nova for a long
time, and it will persist in a couple of places for a while more.
However, I've proposed the following to rename 'recreate' to
'evacuate' everywhere with no rpc/api impact here:

https://review.openstack.org/560900

One of the things which is renamed is the driver 'supports_recreate'
capability, which I've renamed to 'supports_evacuate'. The above
change updates this for in-tree drivers, but as noted in review this
would impact out-of-tree drivers. If this might affect you, please
follow the above in case it merges.


I have to admit, Matt, I'm a bit confused by this. I was under the impression
that we were trying to *remove* uses of the term "evacuate" as much as possible
because that term is not adequately descriptive of the operation and terms like
"recreate" were more descriptive?


This is a good point.

Personally I'd prefer to see it go the other way and convert everything to the 
"recreate" terminology, including the external API.


From the CLI perspective, it makes no sense that "nova evacuate" operates after 
a host is already down, but "nova evacuate-live" operates on a running host.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Scheduler VM distribution

2018-04-19 Thread Jay Pipes

Hello, Andrey! Comments inline...

On 04/19/2018 10:27 AM, Andrey Volkov wrote:

Hello,

 From my understanding, we have a race between the scheduling
process and host weight update.

I made a simple experiment. On the 50 fake host environment
it was asked to boot 40 VMs those should be placed 1 on each host.
The hosts are equal to each other in terms of inventory.

img=6fedf6a1-5a55-4149-b774-b0b4dccd2ed1
flavor=1
for i in {1..40}; do
nova boot --flavor $flavor --image $img --nic none vm-$i;
sleep 1;
done

The following distribution was gotten:

mysql> select resource_provider_id, count(*) from allocations where 
resource_class_id = 0 group by 1;


+--+--+
| resource_provider_id | count(*) |
+--+--+
|                    1 |        2 |
|                   18 |        2 |
|                   19 |        3 |
|                   20 |        3 |
|                   26 |        2 |
|                   29 |        2 |
|                   33 |        3 |
|                   36 |        2 |
|                   41 |        1 |
|                   49 |        3 |
|                   51 |        2 |
|                   52 |        3 |
|                   55 |        2 |
|                   60 |        3 |
|                   61 |        2 |
|                   63 |        2 |
|                   67 |        3 |
+--+--+
17 rows in set (0.00 sec)

And the question is:
If we have an atomic resource allocation what is the reason
to use compute_nodes.* for weight calculation?


The resource allocation is only atomic in the placement service, since 
the placement service prevents clients from modifying records that have 
changed since the client read information about the record (it uses a 
"generation" field in the resource_providers table records to provide 
this protection).


What seems to be happening is that a scheduler thread's view of the set 
of HostState objects used in weighing is stale at some point in the 
weighing process. I'm going to guess and say you have 3 scheduler 
processes, right?


In other words, what is happening is something like this:

(Tx indicates a period in sequential time)

T0: thread A gets a list of filtered hosts and weighs them.
T1: thread B gets a list of filtered hosts and weighs them.
T2: thread A picks the first host in its weighed list
T3: thread B picks the first host in its weighed list (this is the same 
host as thread A picked)
T4: thread B increments the num_instances attribute of its HostState 
object for the chosen host (done in the 
HostState._consume_from_request() method)
T5: thread A increments the num_instances attribute of its HostState 
object for the same chosen host.


So, both thread A and B choose the same host because at the time they 
read the HostState objects, the num_instances attribute was 0 and the 
weight for that host was the same (2.0 in the logs).
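The T0..T5 interleaving above can be sketched in a few lines of Python. This is a toy model of the stale-view problem, not Nova code; the host names and dict structure are made up:

```python
import copy

# Two scheduler workers each hold their own in-memory view of host state,
# refreshed only periodically -- this models the stale HostState views.
hosts = {"host-a": {"num_instances": 0}, "host-b": {"num_instances": 0}}

def pick_host(view):
    # Weigh hosts by instance count and pick the least loaded one
    # (a stand-in for the real filter/weigh pipeline).
    return min(view, key=lambda name: view[name]["num_instances"])

# T0/T1: threads A and B both read the host state before either places a VM.
view_a = copy.deepcopy(hosts)
view_b = copy.deepcopy(hosts)

# T2/T3: both pick the top host in their weighed list -- the same host,
# because neither view reflects the other's in-flight placement.
choice_a = pick_host(view_a)
choice_b = pick_host(view_b)

# T4/T5: each thread bumps num_instances only in its *own* HostState copy,
# so the double placement goes unnoticed until the views are refreshed.
view_a[choice_a]["num_instances"] += 1
view_b[choice_b]["num_instances"] += 1
```

Only a view refresh (or an atomic claim in a shared store, as placement does with its generation field) breaks this pattern.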


I'm not aware of any effort to fix this behaviour in the scheduler.

Best,
-jay


There is a custom log of behavior I described: http://ix.io/18cw

--
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-19 Thread Paul Belanger
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration. We are now 2
weeks away from replacement date.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html
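For anyone behind such a filter, switching git-review to HTTPS is a small configuration change. A sketch; the exact option names depend on your git-review version, and you may also need an HTTP password generated in your Gerrit settings:

```shell
# Tell git-review to talk to Gerrit over HTTPS instead of ssh/29418.
git config --global gitreview.scheme https
git config --global gitreview.port 443
```

After that, re-running `git review -s` inside a repository should regenerate the `gerrit` remote using the HTTPS endpoint.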

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Matthew Booth
On 19 April 2018 at 15:33, Jay Pipes  wrote:
> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>>
>> We've had inconsistent naming of recreate/evacuate in Nova for a long
>> time, and it will persist in a couple of places for a while more.
>> However, I've proposed the following to rename 'recreate' to
>> 'evacuate' everywhere with no rpc/api impact here:
>>
>> https://review.openstack.org/560900
>>
>> One of the things which is renamed is the driver 'supports_recreate'
>> capability, which I've renamed to 'supports_evacuate'. The above
>> change updates this for in-tree drivers, but as noted in review this
>> would impact out-of-tree drivers. If this might affect you, please
>> follow the above in case it merges.
>
>
> I have to admit, Matt, I'm a bit confused by this. I was under the
> impression that we were trying to *remove* uses of the term "evacuate" as
> much as possible because that term is not adequately descriptive of the
> operation and terms like "recreate" were more descriptive?

I'm ambivalent, tbh, but I think it's better to pick one. I thought
we'd picked 'evacuate' based on the TODOs from Matt R:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n3093

Incidentally, this isn't at all core to what I'm working on, but I'm
about to start poking it and thought I'd tidy up as I go (as is my
wont). If there's discussion to be had I don't mind dropping this and
moving on.

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Matthew Booth
On 19 April 2018 at 16:46, Chris Friesen  wrote:
> On 04/19/2018 08:33 AM, Jay Pipes wrote:
>>
>> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>>>
>>> We've had inconsistent naming of recreate/evacuate in Nova for a long
>>> time, and it will persist in a couple of places for a while more.
>>> However, I've proposed the following to rename 'recreate' to
>>> 'evacuate' everywhere with no rpc/api impact here:
>>>
>>> https://review.openstack.org/560900
>>>
>>> One of the things which is renamed is the driver 'supports_recreate'
>>> capability, which I've renamed to 'supports_evacuate'. The above
>>> change updates this for in-tree drivers, but as noted in review this
>>> would impact out-of-tree drivers. If this might affect you, please
>>> follow the above in case it merges.
>>
>>
>> I have to admit, Matt, I'm a bit confused by this. I was under the
>> impression
>> that we were trying to *remove* uses of the term "evacuate" as much as
>> possible
>> because that term is not adequately descriptive of the operation and terms
>> like
>> "recreate" were more descriptive?
>
>
> This is a good point.
>
> Personally I'd prefer to see it go the other way and convert everything to
> the "recreate" terminology, including the external API.
>
> From the CLI perspective, it makes no sense that "nova evacuate" operates
> after a host is already down, but "nova evacuate-live" operates on a running
> host.

A bit OT, but evacuate-live probably shouldn't exist at all for a
variety of reasons. The implementation is shonky, it's doing
orchestration in the CLI, and the name is misleading, as you say.

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-04-19 Thread Chris Dent



Greetings OpenStack community,

As it was just edleafe and I today, we had a quick meeting and went back to 
other things. The main actions were to select one guideline to publish and one 
guideline to freeze. These are listed below. We also briefly discussed that 
though we have not planned any official time and space in Vancouver, we hope to 
engage with anyone interested in APIs in whatever space we can find in the 
lovely hallways of the Vancouver Convention Centre.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Update the errors guidance to use service-type for code
  https://review.openstack.org/#/c/554921/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Matt Riedemann

On 4/19/2018 11:06 AM, Matthew Booth wrote:

I'm ambivalent, tbh, but I think it's better to pick one. I thought
we'd picked 'evacuate' based on the TODOs from Matt R:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n3093

Incidentally, this isn't at all core to what I'm working on, but I'm
about to start poking it and thought I'd tidy up as I go (as is my
wont). If there's discussion to be had I don't mind dropping this and
moving on.


For reference, I started this rolling ball:

https://review.openstack.org/#/c/508190/

The internal 'recreate' argument to rebuild was always a thorn in my 
side so I renamed it to evacuate because that's what the operation is 
called in the API, how it shows up in bug reports, and how we talk about 
it in IRC. We don't talk about the "recreate" operation, we talk about 
evacuate.


Completely re-doing the end-user API experience with evacuate and 
rebuild including internal plumbing changes is orthogonal to this 
cleanup IMO because we can do the cleanup now to avoid existing 
maintainer confusion rather than hold it up for something that no one is 
working on.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Matt Riedemann

On 4/19/2018 10:46 AM, Chris Friesen wrote:
 From the CLI perspective, it makes no sense that "nova evacuate" 
operates after a host is already down, but "nova evacuate-live" operates 
on a running host.


http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/

If people feel this strongly about the name of the "nova 
host-evacuate-live" CLI, they should propose changes to rename it (or 
deprecate it if it's dangerous and shouldn't exist).


How about deprecating "nova host-evacuate-live" and just adding a --batch 
option to the existing "nova live-migration" CLI if people want to 
retain the functionality but hate the other name.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate

2018-04-19 Thread Jay Pipes

On 04/19/2018 12:27 PM, Matt Riedemann wrote:

On 4/19/2018 11:06 AM, Matthew Booth wrote:

I'm ambivalent, tbh, but I think it's better to pick one. I thought
we'd picked 'evacuate' based on the TODOs from Matt R:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985 

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n3093 



Incidentally, this isn't at all core to what I'm working on, but I'm
about to start poking it and thought I'd tidy up as I go (as is my
wont). If there's discussion to be had I don't mind dropping this and
moving on.


For reference, I started this rolling ball:

https://review.openstack.org/#/c/508190/

The internal 'recreate' argument to rebuild was always a thorn in my 
side so I renamed it to evacuate because that's what the operation is 
called in the API, how it shows up in bug reports, and how we talk about 
it in IRC. We don't talk about the "recreate" operation, we talk about 
evacuate.


Completely re-doing the end-user API experience with evacuate and 
rebuild including internal plumbing changes is orthogonal to this 
cleanup IMO because we can do the cleanup now to avoid existing 
maintainer confusion rather than hold it up for something that no one is 
working on.


I was only asking a question. I wasn't trying to hold anything up.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client?

2018-04-19 Thread Dean Troyer
On Thu, Apr 19, 2018 at 7:51 AM, Doug Hellmann  wrote:
> Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200:
>> Swift has had storage policies for a while now. These are enabled by
>> setting the 'X-Storage-Policy' header on a container.
>>
>> It looks to me like this is not possible using openstack-client (even in
>> master branch) - while there is a 'set' operation for containers this
>> will *only* set  'Meta-*' type headers.
>>
>> It seems to me that adding this would be highly desirable. Is it in the
>> pipeline? If not I might see how much interest there is at my end for
>> adding such - as (famous last words) it looks pretty straightforward to do.
>
> I can't imagine why we wouldn't want to implement that and I'm not
> aware of anyone working on it. If you're interested and have time,
> please do work on the patch(es).

The primary thing that hinders Swift work like this is that OSC does not
use swiftclient, as it wasn't a standalone thing yet when I wrote that
bit (lifting much of the actual API code from swiftclient). We
decided a while ago to not add that dependency and drop the
OSC-specific object code and use the SDK when we start using SDK for
everything else, after there is an SDK 1.0 release.

Moving forward on this today using either OSC's api.object code or the
SDK would be fine, with the same SDK caveat we have with Neutron,
since SDK isn't 1.0 we may have to play catch-up and maintain multiple
SDK release compatibilities (which has happened at least twice).
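Whichever client layer ends up implementing it, the Swift-side contract is just one extra header on the container-create PUT. A minimal sketch of what such a patch would send (the helper function and the "gold" policy name are illustrative, not existing OSC code; only the X-Storage-Policy header and the create-time restriction come from Swift):

```python
def container_create_headers(storage_policy=None):
    """Build headers for a Swift container-create (PUT) request.

    Swift only honours X-Storage-Policy when the container is first
    created; the policy cannot be changed on an existing container.
    """
    headers = {}
    if storage_policy is not None:
        headers["X-Storage-Policy"] = storage_policy
    return headers

# e.g. pass these to the object-store API's container PUT call:
headers = container_create_headers(storage_policy="gold")
```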

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits

2018-04-19 Thread Emilien Macchi
Greetings,

As you probably know mcornea on IRC, Marius Cornea has been contributing to
TripleO for a while, especially on the upgrade bits.
Part of the quality team, he's always testing real customer scenarios and
brings a lot of good feedback in his reviews, and quite often takes care of
fixing complex bugs when it comes to advanced upgrade scenarios.
He's very involved in tripleo-upgrade repository where he's already core,
but I think it's time to let him +2 on other tripleo repos for the patches
related to upgrades (we trust people's judgement for reviews).

As usual, we'll vote!

Thanks everyone for your feedback and thanks Marius for your hard work and
involvement in the project.
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits

2018-04-19 Thread John Fulton
+1

On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi  wrote:
> Greetings,
>
> As you probably know mcornea on IRC, Marius Cornea has been contributing on
> TripleO for a while, specially on the upgrade bits.
> Part of the quality team, he's always testing real customer scenarios and
> brings a lot of good feedback in his reviews, and quite often takes care of
> fixing complex bugs when it comes to advanced upgrades scenarios.
> He's very involved in tripleo-upgrade repository where he's already core,
> but I think it's time to let him +2 on other tripleo repos for the patches
> related to upgrades (we trust people's judgement for reviews).
>
> As usual, we'll vote!
>
> Thanks everyone for your feedback and thanks Marius for your hard work and
> involvement in the project.
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Project Teams Gathering- Denver September 10-14th

2018-04-19 Thread Kendall Waters

All aboard! Next stop Denver!

The fourth Project Teams Gathering [1] will be held September 10-14th back at 
the Renaissance Stapleton Hotel [2] in Denver, Colorado (3801 Quebec Street, 
Denver, Colorado 80207). The Project Teams Gathering (PTG) is an event 
organized by the OpenStack Foundation. It provides meeting facilities allowing 
the various technical community groups working with OpenStack (operators, 
development teams, user workgroups, SIGs) to meet in-person, exchange and get 
work done in a productive setting. As you may have heard, this time around the 
Ops Meetup will be co-located with the Denver PTG. We're excited to have these 
two communities under one roof. Registration, travel support program, and the 
discounted hotel block are now live!  
 

REGISTRATION AND HOTEL
Registration is now available here: https://denver2018ptg.eventbrite.com 
 

Ticket prices for this PTG will be tiered, and are significantly subsidized to 
help cover part of the overall event cost:
Early Bird: USD $199 (Deadline May 11 at 6:59 UTC)
Regular: USD $399 (Deadline August 23 at 6:59 UTC)
Late/Onsite: USD $599
 
We've reserved a very limited block of discounted hotel rooms at $149/night USD 
(does not include breakfast) with the Renaissance Denver Stapleton Hotel where 
the event will be held. Please move quickly to reserve a room with 2 queen 
beds[3] or 1 king bed[4] by August 20th or until they sell out!

TRAIN NEAR HOTEL
You may be curious about the train noise situation around the hotel. This was 
due to an unsafe crossing requiring human flaggers and trains signalling using 
horns. After a meeting held in February of 2018, the Director for the RTD 
project stated that “The gate crossings are complete, operational and safe, and 
we feel that it’s appropriate at this time to remove the requirements to have 
grade crossing attendants at those crossings.” Regulatory approvals for the A, 
B and G commuter rail lines have a contracted deadline of June 2nd, 2018 to be 
approved by Federal Railroad Administration Commissioners. Also worth noting, 
right after we left the PTG last September, the hotel installed sound reduction 
windows throughout the property which should help with an overall quality of 
stay for guests. 
 
USA VISA APPLICATIONS
Please note: Due to recent delays in the visa system, please allow as much time 
as possible for the application process if a visa is required in order to 
travel to the United States. We normally recommend applying no later than 60 
days prior to the event.

If you are unsure whether you require a visa or not, please visit this page [5] 
to see if your country is a part of the Visa Waiver Program. If it is not one 
of the countries listed, you will need to obtain a Visa to enter the U.S.

To supplement your Visa application, we can also provide you with a Visa 
Invitation Letter on official OpenStack Foundation letterhead. Requests for 
invitation letters may be submitted here [6] and must be received by Friday, 
August 24, 2018.
 
TRAVEL SUPPORT PROGRAM
The OpenStack Travel Support Program's aim is to facilitate participation of 
key contributors to the OpenStack Project Teams Gathering (PTG) covering costs 
for travel, accommodation, and event pass. Please fill out this form [7]  to 
apply; the application deadline for the first round of sponsorships is July 
1st. If you are interested in donating to the Travel Support Program, you can 
do so on the Eventbrite page [8].
 
SPONSORSHIP
The PTGs are critical to the OpenStack release cycle and community, and 
sponsorship of these events is a public demonstration of your commitment to the 
continued growth and success of OpenStack. Since this is a working event and we 
strive to maintain a distraction-free environment for teams, we have created 
sponsorship packages that are community focused so that all sponsors receive 
prominent recognition for their ongoing support of OpenStack without impacting 
productivity. If your organization is interested in sponsoring the Stein PTG in 
Denver, please review the sponsorship prospectus and contract, and send any 
questions to p...@openstack.org.


Feel free to reach out to me directly with any questions, looking forward to 
seeing everyone in Denver!

Cheers,
Kendall 

Kendall Waters
OpenStack Marketing
kend...@openstack.org 
[1] www.openstack.org/ptg  
[2] 
http://www.marriott.com/hotels/travel/densa-renaissance-denver-stapleton-hotel/ 

[3] 
http://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Team%20Gathering%20Two%20Queen%20Beds%5Edensa%60opnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes

Re: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Eric K
Thank you, Doug. Question: do we need to do a client library release prior
to R-3? The practice seems to change from cycle to cycle.

On 4/19/18, 6:15 AM, "Doug Hellmann"  wrote:

>Today is the deadline for proposing a release for the Rocky-1 milestone.
>Please don't forget to include your libraries (client or otherwise) as
>well.
>
>Doug
>





Re: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Eric K
Specifically, for client libraries using the cycle-with-intermediary release
model.

On 4/19/18, 10:52 AM, "Eric K"  wrote:

>Thank you, Doug. Question: do we need to do a client library release prior
>to R-3? The practice seems to change from cycle to cycle.
>
>On 4/19/18, 6:15 AM, "Doug Hellmann"  wrote:
>
>>Today is the deadline for proposing a release for the Rocky-1 milestone.
>>Please don't forget to include your libraries (client or otherwise) as
>>well.
>>
>>Doug
>>
>
>





Re: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits

2018-04-19 Thread Juan Antonio Osorio
+1 :D hell yeah!

On Thu, 19 Apr 2018, 20:05 John Fulton,  wrote:

> +1
>
> On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi 
> wrote:
> > Greetings,
> >
> > As you probably know mcornea on IRC, Marius Cornea has been contributing
> on
> > TripleO for a while, specially on the upgrade bits.
> > Part of the quality team, he's always testing real customer scenarios and
> > brings a lot of good feedback in his reviews, and quite often takes care
> of
> > fixing complex bugs when it comes to advanced upgrades scenarios.
> > He's very involved in tripleo-upgrade repository where he's already core,
> > but I think it's time to let him +2 on other tripleo repos for the
> patches
> > related to upgrades (we trust people's judgement for reviews).
> >
> > As usual, we'll vote!
> >
> > Thanks everyone for your feedback and thanks Marius for your hard work
> and
> > involvement in the project.
> > --
> > Emilien Macchi
> >


Re: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Doug Hellmann
Libraries do need to release by R-3 so that we have something to use as
a branch point for the stable branches.

We encourage releases earlier than that, for a couple of reasons.

First, because of the way the CI system works, libraries are generally
not used in test jobs unless they are released. (We should not be
testing services with unreleased versions of clients except on
patches to the client library itself.) This means nothing can use the
client modifications until they are actually released.

Second, releasing early and often gives us more time to fix issues,
so we aren't rushing around at deadline trying to solve a problem
while the gate is full of other last minute patches for other
projects.

So, you don't *have* to release a client library this week, but it is
strongly encouraged. And really, is there any reason to wait, if you have
patches that haven't been released?
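For anyone who hasn't done one recently, a release proposal is just a small
patch to the openstack/releases repository. A rough sketch of a deliverable
entry follows; the project name, version, and hash are placeholders, and the
exact schema should be checked against that repository's own documentation:

```yaml
# deliverables/rocky/python-exampleclient.yaml (hypothetical deliverable)
---
launchpad: python-exampleclient
release-model: cycle-with-intermediary
team: example
type: library
releases:
  - version: 1.2.0          # next semver-appropriate version
    projects:
      - repo: openstack/python-exampleclient
        hash: 0123456789abcdef0123456789abcdef01234567  # commit to tag
```

The release team's validation jobs will check the version number and hash when
the patch is proposed, so errors surface in review rather than at tag time.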

Excerpts from Eric K's message of 2018-04-19 10:53:53 -0700:
> Specifically, for client library using the cycle-with-intermediary release
> model.
> 
> On 4/19/18, 10:52 AM, "Eric K"  wrote:
> 
> >Thank you, Doug. Question: do we need to do a client library release prior
> >to R-3? The practice seems to change from cycle to cycle.
> >
> >On 4/19/18, 6:15 AM, "Doug Hellmann"  wrote:
> >
> >>Today is the deadline for proposing a release for the Rocky-1 milestone.
> >>Please don't forget to include your libraries (client or otherwise) as
> >>well.
> >>
> >>Doug
> >>
> >
> >
> 



Re: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Matt Riedemann

On 4/19/2018 1:15 PM, Doug Hellmann wrote:

Second, releasing early and often gives us more time to fix issues,
so we aren't rushing around at deadline trying to solve a problem
while the gate is full of other last minute patches for other
projects.


Yup, case in point: I waited too long to release python-novaclient 10.x 
in Queens and it prevented us from being able to include it in 
upper-constraints for Queens because it negatively impacted some other 
projects due to backward incompatible changes in the 10.x series of 
novaclient. So learn from my mistakes.


--

Thanks,

Matt



Re: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Eric K
Got it thanks a lot, Doug and Matt!

On 4/19/18, 11:34 AM, "Matt Riedemann"  wrote:

>On 4/19/2018 1:15 PM, Doug Hellmann wrote:
>> Second, releasing early and often gives us more time to fix issues,
>> so we aren't rushing around at deadline trying to solve a problem
>> while the gate is full of other last minute patches for other
>> projects.
>
>Yup, case in point: I waited too long to release python-novaclient 10.x
>in Queens and it prevented us from being able to include it in
>upper-constraints for Queens because it negatively impacted some other
>projects due to backward incompatible changes in the 10.x series of
>novaclient. So learn from my mistakes.
>
>-- 
>
>Thanks,
>
>Matt
>





Re: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found

2018-04-19 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2018-04-18 13:44:08 -0400:
> Hi all,
> 
> We have a number of stable branch jobs failing[0] with an error about pep8
> not being importable[1], when it's clearly installed[2]. We first saw this
> when installing networking-generic-switch on queens in our multinode
> grenade job. We hacked a fix there[3], as we couldn't figure it out and
> thought it was a fluke. Now it's showing up elsewhere.
> 
> I suspected a new pycodestyle was the culprit (maybe it kills off the pep8
> package somehow?) but pinning pycodestyle back a version didn't seem to
> help.
> 
> Any ideas what might be going on here? I'm completely lost.
> 
> P.S. if anyone has the side question of why pep8 is being imported at
> install time, it seems that pbr iterates over any entry points under
> 'distutils.commands' for any installed package. flake8 has one of these
> which must import pep8 to be resolved. I'm not sure *why* pbr needs to do
> this, but I'll assume it's necessary.
> 
> [0] https://review.openstack.org/#/c/557441/
> [1]
> http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_48_01_508
> [2]
> http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_822
> [3] https://review.openstack.org/#/c/561358/
> 
> // jim

Reading through that log more carefully, I see an early attempt to pin
pycodestyle <= 2.3.1 [1], followed later by pycodestyle == 2.4.0 being
pulled in as a dependency of flake8-import-order==0.12 when neutron's
test-requirements.txt is installed [2]. Then later when ironic's
test-requirements.txt is installed pycodestyle is downgraded to 2.3.1
[3].

Reproducing those install & downgrade steps, I see that pycodestyle
2.4.0 claims to own pep8.py but pycodestyle 2.3.1 does not [4]. So that
explains why pep8 is not re-installed when pycodestyle is downgraded.

I think the real problem here is that we have linter dependencies listed
in the test-requirements.txt files for our projects, and they are
somehow being installed without the constraints. I don't think they need
to be installed for devstack at all, so one way to fix it would be to
move those dependencies to the tox.ini section for running pep8, or to
have devstack look at the blacklisted packages and skip installing them.
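The tox.ini side of that suggestion can be sketched as follows; this is
illustrative only, with made-up dependency lists, not a drop-in for any
particular project:

```ini
# tox.ini (sketch): linters live only in the pep8 env, so a plain
# "pip install -r test-requirements.txt" (as devstack effectively does)
# never pulls in flake8/pycodestyle and cannot trigger the downgrade.
[testenv]
deps =
  -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt

[testenv:pep8]
deps =
  flake8
  flake8-import-order
commands = flake8 {posargs}
```

The trade-off is that linter versions are then pinned per-env rather than in
test-requirements.txt, so projects have to keep those lists in sync themselves.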

Doug

[1] 
http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_39_00_392
[2] 
http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_44_56_527
[3] 
http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_120
[4] http://paste.openstack.org/show/719580/



Re: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline

2018-04-19 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-04-19 13:34:44 -0500:
> On 4/19/2018 1:15 PM, Doug Hellmann wrote:
> > Second, releasing early and often gives us more time to fix issues,
> > so we aren't rushing around at deadline trying to solve a problem
> > while the gate is full of other last minute patches for other
> > projects.
> 
> Yup, case in point: I waited too long to release python-novaclient 10.x 
> in Queens and it prevented us from being able to include it in 
> upper-constraints for Queens because it negatively impacted some other 
> projects due to backward incompatible changes in the 10.x series of 
> novaclient. So learn from my mistakes.
> 

Thanks, Matt, that's a perfect example of what we're trying to avoid.

Doug



[openstack-dev] [infra] Removal of debian-jessie, replaced by debian-stable (stretch)

2018-04-19 Thread Paul Belanger
Greetings,

Now that we have debian-stable (stretch) nodesets online for nodepool, I'd like
to propose we start the process of removing debian-jessie.  As far as I can see,
there are really only 2 projects using debian-jessie:

  * ARA
  * ansible-hardening

I've already proposed patches to update their jobs to debian-stable, replacing
debian-jessie:

  https://review.openstack.org/#/q/topic:debian-stable

You'll also notice we are not using debian-stretch directly for the nodeset;
this is on purpose, so that when the next release of Debian (buster) happens we
don't need to make a bunch of in-repo changes to projects. We only need to
update the label of the nodeset from debian-stretch to debian-buster.
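The aliasing described here can be sketched as a Zuul nodeset definition
(illustrative only; the real definition lives in openstack-zuul-jobs, and the
node name below is an assumption):

```yaml
# Jobs reference the stable "debian-stable" nodeset name; only the
# underlying nodepool label changes when a new Debian release lands.
- nodeset:
    name: debian-stable
    nodes:
      - name: primary
        label: debian-stretch  # later flipped to debian-buster in one place
```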

Of course, we'd need to give a fair amount of notice when we plan to make that
change, but given this nodeset isn't part of our LTS platform (ubuntu / centos),
I believe this will help us in openstack-infra migrate projects to the latest
distro as fast as possible.

Thoughts?
Paul



[openstack-dev] [docs] When should we say 'run as root' in the docs?

2018-04-19 Thread Matt Riedemann
How loose are we with saying things like, "you should run this as root" 
in the docs?


I was triaging this nova bug [1] which is saying that the docs should 
tell you to run nova-status (which implies also nova-manage) as root, 
but isn't running admin-level CLIs implied that you need root, or 
something with access to those commands (sudo)?


I'm not sure how prescriptive we should be with stuff like this in the 
docs because if we did start saying this, I feel like we'd have to say 
it everywhere.


[1] https://bugs.launchpad.net/nova/+bug/1764530

--

Thanks,

Matt



Re: [openstack-dev] [docs] When should we say 'run as root' in the docs?

2018-04-19 Thread William M Edmonds


Matt Riedemann  wrote on 04/19/2018 06:11:58 PM:
> How loose are we with saying things like, "you should run this as root"
> in the docs?
>
> I was triaging this nova bug [1] which is saying that the docs should
> tell you to run nova-status (which implies also nova-manage) as root,
> but isn't running admin-level CLIs implied that you need root, or
> something with access to those commands (sudo)?
>
> I'm not sure how prescriptive we should be with stuff like this in the
> docs because if we did start saying this, I feel like we'd have to say
> it everywhere.
>
> [1] https://bugs.launchpad.net/nova/+bug/1764530

Maybe instead of treating this as a docs bug, we should fix the command to
return a nicer error when run as non-root. Presumably the caller has root
access, but forgot they were logged in as something else or forgot sudo.
Dumping that stack trace on them is more likely to confuse than anything.
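As a sketch of that idea (helper names here are hypothetical; nova's real CLI
entry points are wired differently), the command could check its effective UID
up front and print a short hint instead of dumping a traceback:

```python
import os
import sys


def check_root(euid=None):
    """Return an error message when not running as root, else None.

    Hypothetical helper illustrating the suggestion above; the euid
    parameter exists mainly to make the check testable.
    """
    if euid is None:
        euid = os.geteuid()
    if euid != 0:
        return ("This command must be run as root (or via sudo); "
                "re-run with elevated privileges.")
    return None


def run_status_checks():
    """Hypothetical entry point: fail fast with a hint, not a traceback."""
    msg = check_root()
    if msg:
        print(msg, file=sys.stderr)
        return 2  # non-zero exit code so scripts can detect the failure
    # ... proceed with the real upgrade checks here ...
    return 0
```

A check like this would sit at the very top of the command, before any config
or database access that actually needs the elevated privileges.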


[openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets

2018-04-19 Thread Paul Belanger
Greetings,

With ubuntu-bionic release around the corner we'll be starting discussions about
migrating jobs from ubuntu-xenial to ubuntu-bionic.

One topic I'd like to raise is around job migrations from legacy to native
zuulv3.  Specifically, I'd like to propose we do not add legacy-ubuntu-bionic
nodesets into openstack-zuul-jobs. Projects should be working towards moving
away from the legacy format, as they were just copypasta from our previous JJB
templates.

Projects would still be free to move them intree, but I would highly encourage
projects do not do this, as it only delays the issue.

The good news is the majority of jobs have already been moved to native zuulv3
jobs, but there are still some projects depending on the legacy nodesets.
Tox-based jobs, for example, would not be affected; it would mostly be dsvm
based jobs that haven't been switched to use the new devstack jobs for zuulv3.

-Paul



Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-19 Thread Sangho Shin
Dear Neutron-Release team,

I wonder if any of you can add me to the networking-onos-release group.
It seems that Vikram is busy. :-)

Thank you,

Sangho



> On 19 Apr 2018, at 9:18 AM, Sangho Shin  wrote:
> 
> Ian, 
> 
> Thank you so much for your help.
> I have requested Vikram to add me to the release team. 
> He should be able to help me. :-)
> 
> Sangho
> 
> 
>> On 19 Apr 2018, at 8:36 AM, Ian Wienand  wrote:
>> 
>> On 04/19/2018 01:19 AM, Ian Y. Choi wrote:
>>> By the way, since the networking-onos-release group has no neutron
>>> release team group, I think infra team can help to include neutron
>>> release team and neutron release team can help to create branches
>>> for the repo if there is no reponse from current
>>> networking-onos-release group member.
>> 
>> This seems sane and I've added neutron-release to
>> networking-onos-release.
>> 
>> I'm hesitant to give advice on branching within a project like neutron
>> as I'm sure there's stuff I'm not aware of; but members of the
>> neutron-release team should be able to get you going.
>> 
>> Thanks,
>> 
>> -i
> 




[openstack-dev] [sdk][masakari] Need newer version of openstacksdk

2018-04-19 Thread Patil, Tushar
Hi SDK team,

A few weeks back we moved the "instance_ha" service code from 
python-masakariclient into the openstacksdk project in patch [1], and it got merged. 
Currently, masakari-monitors is totally broken and requires a newer version of 
openstacksdk. We have proposed a patch [2] in masakari-monitors to fix the 
issue, but we cannot merge it until a newer version of openstacksdk is available.

Request you to please release newer version of openstacksdk.

Thank you.

[1] : https://review.openstack.org/#/c/555710
[2] : https://review.openstack.org/#/c/546492

Best Regards,
Tushar Patil



Re: [openstack-dev] [neutron][horizon][l2gw] Unable to create a floating IP

2018-04-19 Thread Cory Hawkless
I’m also seeing this issue, but with routers and networks.
The apache server running horizon logs the following

ERROR horizon.tables.base Error while checking action permissions.
Traceback (most recent call last):
  File "/usr/share/openstack-dashboard/horizon/tables/base.py", line 1389, in 
_filter_action
return action._allowed(request, datum) and row_matched
  File "/usr/share/openstack-dashboard/horizon/tables/actions.py", line 139, in 
_allowed
self.allowed(request, datum))
  File 
"/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/networks/tables.py",
 line 85, in allowed
usages = quotas.tenant_quota_usages(request, targets=('network', ))
  File "/usr/share/openstack-dashboard/horizon/utils/memoized.py", line 95, in 
wrapped
value = cache[key] = func(*args, **kwargs)
  File "/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", 
line 419, in tenant_quota_usages
_get_tenant_network_usages(request, usages, disabled_quotas, tenant_id)
  File "/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", 
line 320, in _get_tenant_network_usages
details = neutron.tenant_quota_detail_get(request, tenant_id)
  File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", 
line 1457, in tenant_quota_detail_get
response = neutronclient(request).get('/quotas/%s/details' % tenant_id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
354, in get
headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
331, in retry_request
headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
294, in do_request
self._handle_fault_response(status_code, replybody, resp)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
269, in _handle_fault_response
exception_handler_v20(status_code, error_body)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 93, 
in exception_handler_v20
request_ids=request_ids)
Forbidden: User does not have admin privileges: Cannot GET resource for non 
admin tenant.
Neutron server returns request_ids: ['req-3db6924c-1937-4c34-b5fa-bd3ae52f0c10']

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, 9 April 2018 10:03 PM
To: OpenStack List 
Subject: [openstack-dev] [neutron][horizon][l2gw] Unable to create a floating IP

Hi,
From Queens onwards we have an issue with horizon and L2GW: we are unable to 
create a floating IP. This does not occur when using the CLI, only via horizon. 
The error received is
‘Error: User does not have admin privileges: Cannot GET resource for non admin 
tenant. Neutron server returns request_ids: 
['req-f07a3aac-0994-4d3a-8409-1e55b374af9d']’
This is due to: 
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/db/l2gateway/l2gateway_db.py#L316
This worked in Ocata and I'm not sure what has changed since then ☹. Maybe in the 
past the Ocata quotas were not checking L2GW.
Any ideas?
Thanks
Gary