[openstack-dev] [vitrage] Cinder Datasource

2016-04-12 Thread Weyl, Alexey (Nokia - IL)
Hi,

Here is the design of the Cinder datasource of Vitrage.

Currently the Cinder datasource handles only volumes.
This datasource listens to Cinder volume notifications on the oslo bus, and
updates the topology accordingly.
Currently a Cinder volume can be attached only to an instance (by Cinder design).
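
For readers who want to see what "listening on the oslo bus" amounts to, a
minimal sketch of such a notification listener is below. This is illustrative
only - the topic, the event-type list and the process_event() callback are
assumptions, not the actual Vitrage code:

    from oslo_config import cfg
    import oslo_messaging

    # Hypothetical set of Cinder volume events the datasource reacts to.
    VOLUME_EVENTS = ('volume.create.end', 'volume.attach.end',
                     'volume.detach.end', 'volume.delete.end')

    def process_event(event_type, payload):
        # Placeholder: hand the payload to the entity-graph processor.
        print(event_type, payload.get('volume_id'))

    class VolumeEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type in VOLUME_EVENTS:
                process_event(event_type, payload)

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [VolumeEndpoint()], executor='threading')
    listener.start()
    listener.wait()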

Future Steps:
We want to research what other data we can bring in from Cinder.
For example:
1. Which zone we can connect the volume to
2. Which image we can connect the volume to

Alexey



Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-12 Thread Balázs Gibizer
> -Original Message-
> From: Juvonen, Tomi (Nokia - FI/Espoo) [mailto:tomi.juvo...@nokia.com]
> Sent: April 11, 2016 09:06
> 
> Hi,
> 
> Looking at the discussion so far:
> -Suggestion to keep extended information for maintenance somewhere
> outside Nova.
> -Notifications about Nova state changes.
> 
> So how about the whole maintenance logic being triggered by the Nova
> API disable/enable service notification, but otherwise the business logic
> living outside Nova?!

I think in this scenario the module that holds the business logic outside of
Nova can be used by the admin to trigger the maintenance, and one of the
business logic pieces would be to set the respective service(s) to disabled in
OpenStack.
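
For illustration, the "trigger" piece of such an external module could be as
small as the following sketch (the client construction and the reason string
are placeholders, not a definitive implementation):

    from novaclient import client

    # Admin credentials; all values are placeholders.
    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://controller:5000/v2.0')

    # Start maintenance: disable the service so the scheduler stops
    # placing new instances on the host.
    nova.services.disable_log_reason('compute-1', 'nova-compute',
                                     'planned maintenance window')

    # ... migrate/evacuate instances, perform the maintenance ...

    # End maintenance: re-enable the service.
    nova.services.enable('compute-1', 'nova-compute')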

> 
> -Extended host information needed by maintenance should be outside of
> Nova (extended information like maintenance window, more precise
> maintenance state and version information) -Extended server information
> needed by maintenance should be outside of Nova (configuration for
> automatic actions in different use cases) -The communication and action flow
> with the server owner and admin should be outside of Nova.
> 
> One thing now: as it is accepted that host fault monitoring is to be
> external, that might also be the best fit for some of this maintenance logic.
> Monitoring SW is also the place with the best knowledge about the host
> state, and if we look to build any automated actions on fault scenarios, then
> surely maintenance would be close to that too. Monitoring also needs to
> know which host is in maintenance. The logic is very similar from the server
> point of view when looking at server actions and communication with the
> server owner.
> 
> Might this be the way to go?

What impact does this solution have on Nova? As far as I can see, it is very
limited, if not zero.

Cheers,
Gibi

> 
> Br,
> Tomi
> 
> > -Original Message-
> > From: EXT Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > Sent: Friday, April 08, 2016 2:38 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> >
> > On Fri, Apr 08, 2016 at 09:52:31AM +, Balázs Gibizer wrote:
> > > > -Original Message-
> > > > From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > > > Sent: April 07, 2016 15:42
> > > >
> > > > The only gap based on my limited understanding is that nova is not
> > > > emitting events on compute host state changes. This knowledge is still
> > > > kept inside nova as some service states. If that info is posted to
> > > > oslo messaging, a lot of usage scenarios can be enabled and we can
> > > > avoid too much churn to nova itself.
> > >
> > > Nova does not really know the state of the compute host; it knows only
> > > the state of the nova-compute service running on the compute host. In
> > > Mitaka we added a notification about the service status [2].
> > > Also there is a proposal about a notification about hypervisor info
> > > changes [1].
> > >
> > > Cheers,
> > > Gibi
> > >
> > > [1] https://review.openstack.org/#/c/299807/
> > > [2]
> > > http://docs.openstack.org/developer/nova/notifications.html#existing
> > > -
> > versioned-notifications
> > >
> >
> > Thanks for sharing, Balázs. The Mitaka service status notification
> > looks pretty useful, I'll try it.
> >
> > Regards,
> >   Qiming
> >
> > > >
> > > > Regards,
> > > >   Qiming
> > > >
> > > >
> > > >



Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-12 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

> -Original Message-
> From: EXT Balázs Gibizer [mailto:balazs.gibi...@ericsson.com]
> Sent: Tuesday, April 12, 2016 10:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> 
> > -Original Message-
> > From: Juvonen, Tomi (Nokia - FI/Espoo) [mailto:tomi.juvo...@nokia.com]
> > Sent: April 11, 2016 09:06
> >
> > Hi,
> >
> > Looking at the discussion so far:
> > -Suggestion to keep extended information for maintenance somewhere
> > outside Nova.
> > -Notifications about Nova state changes.
> >
> > So how about the whole maintenance logic being triggered by the Nova
> > API disable/enable service notification, but otherwise the business logic
> > living outside Nova?!
> 
> I think in this scenario the module that holds the business logic outside
> of Nova can be used by the admin to trigger the maintenance, and one of the
> business logic pieces would be to set the respective service(s) to disabled
> in OpenStack.

Yes.
> 
> >
> > -Extended host information needed by maintenance should be outside of
> > Nova (extended information like maintenance window, more precise
> > maintenance state and version information) -Extended server information
> > needed by maintenance should be outside of Nova (configuration for
> > automatic actions in different use cases) -The communication and action
> > flow with the server owner and admin should be outside of Nova.
> >
> > One thing now: as it is accepted that host fault monitoring is to be
> > external, that might also be the best fit for some of this maintenance
> > logic. Monitoring SW is also the place with the best knowledge about the
> > host state, and if we look to build any automated actions on fault
> > scenarios, then surely maintenance would be close to that too. Monitoring
> > also needs to know which host is in maintenance. The logic is very similar
> > from the server point of view when looking at server actions and
> > communication with the server owner.
> >
> > Might this be the way to go?
> 
> What impact does this solution have on Nova? As far as I can see, it is
> very limited, if not zero.

If the conclusion is really this, then essentially no changes to Nova.

> 
> Cheers,
> Gibi
> 
> >
> > Br,
> > Tomi
> >
> > > -Original Message-
> > > From: EXT Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > > Sent: Friday, April 08, 2016 2:38 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > 
> > > Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> > >
> > > On Fri, Apr 08, 2016 at 09:52:31AM +, Balázs Gibizer wrote:
> > > > > -Original Message-
> > > > > From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> > > > > Sent: April 07, 2016 15:42
> > > > >
> > > > > The only gap based on my limited understanding is that nova is not
> > > > > emitting events on compute host state changes. This knowledge is
> > > > > still kept inside nova as some service states. If that info is
> > > > > posted to oslo messaging, a lot of usage scenarios can be enabled
> > > > > and we can avoid too much churn to nova itself.
> > > >
> > > > Nova does not really know the state of the compute host; it knows only
> > > > the state of the nova-compute service running on the compute host. In
> > > > Mitaka we added a notification about the service status [2].
> > > > Also there is a proposal about a notification about hypervisor info
> > > > changes [1].
> > > >
> > > > Cheers,
> > > > Gibi
> > > >
> > > > [1] https://review.openstack.org/#/c/299807/
> > > > [2]
> > > > http://docs.openstack.org/developer/nova/notifications.html#existing
> > > > -
> > > versioned-notifications
> > > >
> > >
> > > Thanks for sharing, Balázs. The Mitaka service status notification
> > > looks pretty useful, I'll try it.
> > >
> > > Regards,
> > >   Qiming
> > >
> > > > >
> > > > > Regards,
> > > > >   Qiming
> > > > >
> > > > >
> > > > >

Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Dina Belova
Matt,

Thanks for sharing the information about your benchmark. Indeed we need to
follow up on this topic (I'll attend the summit). Let's try to collect as
much information as possible prior to Austin, so we have more facts to work
with. I'll try to figure out why the local context cache did not work, at
least in my environment, and based on the results it most probably did not
act as expected during your benchmarking either.

Cheers,
Dina

On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer  wrote:

> On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova  wrote:
>
>> Hey, openstackers!
>>
>> Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
>> using this set of changes (that's currently on review - some final steps
>> are required there to finish the work) and OSprofiler.
>>
>> Some preliminary results (all in one OpenStack node) can be found here
>> (raw OSprofiler reports are not yet merged to some place and can be found
>> here). The full plan of what's going to be tested can be found in the docs
>> as well. In short, I wanted to take a look at how Keystone changed its
>> DB/cache usage from Liberty to Mitaka, keeping in mind that there were
>> several changes introduced:
>>
>>- federation support was added (and made the DB schema a bit more complex)
>>- Keystone moved to oslo.cache usage
>>- local context cache was introduced during Mitaka
>>
>> First of all - *good job on making Keystone less DB-intensive with
>> caching turned on*! If Keystone caching is turned on, the number of DB
>> queries made to the Keystone DB in Mitaka is on average half of what it is
>> in Liberty, comparing the same requests and topologies. Thanks to the
>> Keystone community for making it happen :)
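
For reference, the caching behaviour discussed in this thread is controlled
by the [cache] section of keystone.conf - roughly like the following sketch
(values are illustrative only, assuming the Memcached backend):

    [cache]
    enabled = true
    backend = dogpile.cache.memcached
    memcache_servers = 127.0.0.1:11211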
>>
>> Although I faced *two strange issues* during my experiments, and I'm
>> kindly asking you, folks, to help me here:
>>
>>    - I've created bug #1567403 to share information - when caching is
>>    turned on, the local context cache should cache identical function
>>    calls within one API request so as not to ping Memcache too often.
>>    Although I saw such calls, Keystone still used Memcache to gather this
>>    information. Could someone take a look at this and help me figure out
>>    what I am observing? At first sight the local context cache should work
>>    fine, but for some reason I do not see it being used (see the sketch
>>    after this list for the intended behaviour).
>>    - One more filed bug - #1567413 - is about the opposite situation :)
>>    When I turned the cache off explicitly in the keystone.conf file, I
>>    still observed some of the values being fetched from Memcache... Your
>>    help is very appreciated!
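
To make the first issue concrete: the intended behaviour of the local context
cache is essentially per-request memoization, something like the sketch below
(names are illustrative, not the actual Keystone implementation):

    def memoize_in_request(func):
        """Cache results on a request-local dict so that repeated,
        identical calls within one API request hit Memcache only once."""
        def wrapper(context, *args):
            cache = context.setdefault('_local_cache', {})
            key = (func.__name__,) + args
            if key not in cache:
                cache[key] = func(context, *args)  # falls through to Memcache
            return cache[key]
        return wrapper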
>>
>> Thanks in advance and sorry for a long email :)
>>
>> Cheers,
>> Dina
>>
>>
> Dina,
>
> Thanks for starting this conversation. I had some weird perf results
> comparing Liberty to an RC release of Mitaka, but I was holding them until
> someone else confirmed what I saw. I'm testing token creation and
> validation. From what I saw, token validation slowed down in Mitaka. After
> my benchmark runs, the traffic to memcache in Mitaka was 8x what it was in
> Liberty. That implies more caching, but 8x is a lot, and even memcache
> references are not free.
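
For anyone wanting to reproduce such numbers, the benchmark boils down to
timing repeated v3 token validations. A rough sketch with plain requests
(the endpoint and credentials are placeholders):

    import time
    import requests

    KEYSTONE = 'http://keystone:5000/v3'
    AUTH = {'auth': {
        'identity': {'methods': ['password'],
                     'password': {'user': {'name': 'admin',
                                           'domain': {'id': 'default'},
                                           'password': 'secret'}}},
        'scope': {'project': {'name': 'admin',
                              'domain': {'id': 'default'}}}}}

    # Token creation.
    resp = requests.post(KEYSTONE + '/auth/tokens', json=AUTH)
    token = resp.headers['X-Subject-Token']

    # Token validation, repeated to get a stable average.
    N = 1000
    start = time.time()
    for _ in range(N):
        requests.get(KEYSTONE + '/auth/tokens',
                     headers={'X-Auth-Token': token,
                              'X-Subject-Token': token})
    print('avg validation: %.2f ms' % ((time.time() - start) * 1000.0 / N))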
>
> I know some of the Keystone folks are looking into this, so it will be good
> to follow up on it. Maybe we could talk about this at the summit?
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [packstack] Update packstack core list

2016-04-12 Thread Martin Magr
Greetings guys, 


  I will have to step down from the PTL responsibility. TBH I haven't had
time to work on Packstack lately and I probably won't have it in the future
because of my other responsibilities. So from my point of view it is not
right for me to keep leading the project (though I'd like to contribute/do
reviews from time to time).

Thanks for understanding,
Martin


- Original Message -
From: "Emilien Macchi" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Javier Pena" , "David Moreau Simard" 

Sent: Wednesday, March 16, 2016 3:50:37 PM
Subject: Re: [openstack-dev] [packstack] Update packstack core list

On Wed, Mar 16, 2016 at 6:35 AM, Alan Pevec  wrote:
> 2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
>>> ...
>>> - Martin Mágr
>>> - Iván Chavero
>>> - Javier Peña
>>> - Alan Pevec
>>>
>>> I have a doubt about Lukas, he's contributed an awful lot to
>>> Packstack, just not over the last 90 days. Lukas, will you be
>>> contributing in the future? If so, I'd include him in the proposal as
>>> well.
>>
>> Thanks, yeah I do plan to contribute just haven't had time lately for
>> packstack.
>
> I'm also adding David Simard who recently contributed integration tests.
>
> Since there haven't been any -1 votes for a week, I went ahead and
> implemented the group membership changes in gerrit.
> Thanks to the past core members, we will welcome you back on the next
>
> One more topic to discuss is whether we need a PTL election. I'm not sure
> we need a formal election yet, and the de-facto PTL has been Martin Magr,
> so if there are no other proposals let's just name Martin our overlord?

Packstack is not part of the OpenStack big tent, so de facto it does not
need a PTL to work.
It's up to the project team to decide whether or not a PTL is needed.

Oh and of course, go ahead Martin ;-)

> Cheers,
> Alan
>



-- 
Emilien Macchi





Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-04-12 Thread Michael Still
On 12 Apr 2016 12:19 AM, "Sean Dague"  wrote:
>
> On 04/11/2016 10:08 AM, Ed Leafe wrote:
> > On 04/11/2016 08:38 AM, Julien Danjou wrote:
> >
> >> There are a lot of assumptions in oslo.log about Nova, such as talking
> >> about "instance" and "context" in a lot of the code by default. There's
> >> even a dependency on oslo.context. >.<
> >>
> >> That's an issue for projects that are not Nova, where we end up
> >> having configuration options talking about "instances" and with default
> >> values referring to that.
> >> I'm at least taking that as a serious UX issue for the telemetry
> >> projects.
> >>
> >> I'd love to sanitize that library a bit. So, is this an option, or
would
> >> I be better off starting something new?
> >
> > The nova team spent a lot of time in Mitaka starting to clean up the
> > config options that were scattered all over the codebase, and improve
> > the help text for each of them so that you didn't need to grep the
> > source code to find out what they did.
> >
> > I could see a similar effort for oslo.log (and maybe other oslo
> > projects), and I would be happy to help out.
>
> This isn't so much about scattered options, oslo.log options are all in
> one place already, it's about the application specific ones that are
> embedded.
>
> I agree that "instance" being embedded all the way back to oslo.log is
> weird. Ideally we'd have something like "resource" that, if specified,
> would be the primary resource the request was acting on. Or we could even
> just build some custom loggers Nova-side to inject the instance when we
> have it.

Instance was put in there a long time ago, before Oslo existed. It was then
just forklifted out blindly. I like the idea of changing to something like
"resource", or even better "resources", but we do need to present this
information in a structured way. We need to remember this was added because
of specific concerns around the operational usability of our logging.
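
A sketch of the "custom loggers" idea - injecting the primary resource with a
plain stdlib LoggerAdapter instead of baking "instance" into oslo.log
(illustrative only, not an existing Nova or oslo.log API):

    import logging

    class ResourceAdapter(logging.LoggerAdapter):
        """Prefix every message with the primary resource of the request."""
        def process(self, msg, kwargs):
            res = self.extra.get('resource')
            if res:
                msg = '[%s: %s] %s' % (res['type'], res['uuid'], msg)
            return msg, kwargs

    LOG = ResourceAdapter(
        logging.getLogger(__name__),
        {'resource': {'type': 'instance', 'uuid': 'some-uuid'}})
    LOG.warning('live migration failed')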

> I'm not sure why oslo.context is an issue though. That's mostly about
> putting in the common information about the identity of the requester
> into the stream.

Michael


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-12 Thread Steven Hardy
On Tue, Apr 12, 2016 at 12:01:39PM +1200, Steve Baker wrote:
>On 12/04/16 11:48, Jeremy Stanley wrote:
> 
>  On 2016-04-12 11:43:06 +1200 (+1200), Steve Baker wrote:
> 
>  Can I suggest a sub-team for
>  os-collect-config/os-refresh-config/os-apply-config? I ask since
>  these tools also make up the default heat agent, and there is
>  nothing in them which is TripleO specific.
> 
>  Could make sense similarly for diskimage-builder, as there is a lot
>  of TripleO/Infra cross-over use and contribution happening there.
> 
>+1, this tool is general purpose and has diverse contributors and
>consumers

This already exists:

https://review.openstack.org/#/admin/groups/115,members
https://github.com/openstack-infra/project-config/blob/master/gerrit/acls/openstack/diskimage-builder.config

It's already in use and a number of folks not (or no longer) active in
tripleo-core have been added as they are involved with DiB.

There is also already a sub-team for os-apply-config but it currently only
contains tripleo-core:
https://github.com/openstack-infra/project-config/blob/master/gerrit/acls/openstack/os-apply-config.config
https://review.openstack.org/#/admin/groups/142,members

For os-*-config I'm not sure we gain much, as the contribution rate is
pretty low - there are maybe one or two outstanding patches right now
across all three projects.  Thus it could be argued that there's little
justification for actively managing a sub-team in that case.

Steve



Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-12 Thread 张晨
Thanks for the answer, I finally located the implementation in
nova.network.linux_net.create_ovs_vif_port.

But how can Nova execute ovs-vsctl on the compute-node hypervisor when it
runs just on the control node?

At 2016-04-12 13:13:48, "Sławek Kapłoński"  wrote:
>Hello,
>
>I don't know ODL and how it works, but for the OVS agent it is nova-compute
>that adds the port to the OVS bridge (see for example nova/virt/libvirt/vif.py)
>
>-- 
>Pozdrawiam / Best regards
>Sławek Kapłoński
>sla...@kaplonski.pl
>
>On Tuesday, 12 April 2016 12:31:01 CEST, 张晨 wrote:
>> Hello everyone,
>> 
>> 
>> I have a question about Neutron. I have learned that the ovs-agent receives
>> the update-port RPC notification and updates the ovsdb data for the VM port.
>> 
>> 
>> But what is the situation when I use an SDN controller instead of the OVS
>> mechanism driver? I found nowhere in ODL that adds the VM port to OVS.
>> 
>> 
>> I asked the author of the related ODL plugin, but he told me that OpenStack
>> adds the VM port to OVS.
>> 
>> 
>> Then, where is the implementation in OpenStack that adds the VM port to OVS
>> when I'm using ODL replacing the OVS mechanism driver?
>> 
>> 
>> Thanks


Re: [openstack-dev] [puppet][ceph] Puppet-ceph is now a formal member of puppet-openstack

2016-04-12 Thread Bogdan Dobrelya
Great news and good job!

On 04/11/2016 11:24 PM, Andrew Woodward wrote:
> It's been a while since we started the puppet-ceph module on stackforge
> as a friend of OpenStack. Since then Ceph's usage in OpenStack has
> increased greatly and we have both the puppet-openstack deployment
> scenarios as well as check-tripleo running against the module.
> 
> We've been receiving leadership from the puppet-openstack team for a
> while now and our small core team has struggled to keep up. As such we
> have added the puppet-openstack cores to the review ACL's in gerrit and
> have been formally added to the puppet-openstack project in governance. [1]
> 
> I thank the puppet-openstack team for their support and, I am glad to
> see the module move under their leadership.
> 
> [1] https://review.openstack.org/300191
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community
> 
> 
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Paul Bourke
I've said in the past I'm not a fan of nitpicking docs. That said, I feel
it's important for spelling and grammar to be correct. The quickstart guide
is many people's first point of contact with the project, and rightly or
wrongly it will give an overall impression of the quality of the project.


When I review doc changes I try not to nitpick on the process being
described - e.g. if an otherwise fine patch is already 5 iterations in and
the example given to configure a service could be done in 3 fewer lines of
bash, I'll usually comment but still +2. If on the other hand it is rife
with typos (which by the way is easily solved with a spellchecker) and reads
really badly, I will flag it.


-Paul

On 11/04/16 19:27, Steven Dake (stdake) wrote:

My proposal was for docs-only patches, not code contributions with docs.
Obviously we want a high bar for code contributions.  This is part of the
reason we have the DocImpact flag (for folks that don't feel comfortable
writing documentation, perhaps because of ESL or other reasons).

We already have a way to decouple code from docs with DocImpact.

Regards
-steve

On 4/11/16, 6:17 AM, "Michał Jastrzębski"  wrote:


So one way to approach it is to decouple docs from code and make it 2
reviews. We can -1 code without docs and ask for a separate patchset,
depending on the one in question, with the docs. Then we can nitpick all we
want :) The new contributor will get his/her code merged, at least one
patchset, so it will work better for morale, and we'll be able to keep a
high bar for the QSG and other docs. There is a possibility that the author
will abandon the docs patch after the code merges, but well, we can take
over the docs review.

What do you think, guys? I'd really like to keep a high quality standard all
the way and not scare off new committers at the same time.

Cheers,
Michal

On 11 April 2016 at 03:50, Steven Dake (stdake)  wrote:



On 4/11/16, 1:38 AM, "Gerard Braad"  wrote:


Hi,

On Mon, Apr 11, 2016 at 4:20 PM, Steven Dake (stdake) 
wrote:

On 4/11/16, 12:54 AM, "Gerard Braad"  wrote:
as

at the moment getting an environment up and running according to the
quickstart guide is hit and miss

I don't think deployment is hit or miss as long as the QSG is
followed to a T :)


Maybe saying "at the moment" was incorrect, as my deployment according
to the QSG was a few weeks ago. Sorry about this... as you guys have put
a lot of effort into it recently.



I agree we need more clarity in what belongs in the QSG.

This can be a separate discussion (Not intending to hijack this thread).


I am not a core reviewer, but I would keep it as-is. I do not see a need for


Even though you're not a core reviewer, your comments are valued.  The
reason I addressed core reviewers specifically is that they have +2
permissions, and I would like more leniency on new documentation in files
outside those listed above (philosophy document, QSG), with a public
statement of such.


a lower bar. Although, documentation is the entry point into a
community (as a user and potential contributor) and therefore it should
be of a high quality. Maybe I should provide more suggestions
instead of just an indication of 'change this for that'.


The issue I see with our QSG is that it has the highest bar for review
passage of any file in the repository.  Any QSG change typically requires 10
or more patch sets to make it through the core reviewer gauntlet.  This
discourages people from writing new documentation.  I don't want this to
carry over into other parts of the documentation that are as of yet
unwritten.  I'd like new documentation to be OK with misspellings, grammar
errors, formatting problems, ESL authors, and that sort of thing.

The QSG should tolerate none of these types of errors at this point - it
must be absolutely perfect (at least in English:) as to not cause
confusion to new operators.

Regards
-steve



regards,


Gerard



Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Thierry Carrez

Tony Breeds wrote:

On Mon, Apr 11, 2016 at 03:49:16PM -0500, Matt Riedemann wrote:

A few people have been asking about planning for the nova midcycle for
newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work the
best. R-14 is close to the US July 4th holiday, R-13 is during the week of
the US July 4th holiday, and R-12 is the week of the n-2 milestone.


Thanks for starting this now. It really helps to know these things early.

This cycle *may* be harder than typical with:
 https://www.openstack.org/summit/austin-2016/summit-schedule/events/9478

Having said that, either of those options work for me.


No, this cycle (Newton) is not affected. The proposed change would 
happen after the Barcelona summit (5-month Ocata cycle, new event 
inserted between the Barcelona summit and the North America summit in May). 
So feel free to plan mid-cycle events for Newton if you think your team 
would benefit from having them.


--
Thierry Carrez (ttx)



Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-12 Thread Dmitry Tantsur

On 04/05/2016 12:24 PM, Dmitry Tantsur wrote:

Hi!

I'd like to propose Anton to the ironic-inspector core reviewers team.
His stats are pretty nice [1], he's making meaningful reviews and he's
pushing important things (discovery, now tempest).

Members of the current ironic-inspector-team and everyone interested,
please respond with your +1/-1. A lazy consensus will be applied: if
nobody objects by the next Tuesday, the change will be in effect.


The change is in effect, congratulations! :)



Thanks

[1] http://stackalytics.com/report/contribution/ironic-inspector/60






Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-12 Thread Sam Betts (sambetts)
Congrats Anton!!! 

Sam

On 12/04/2016 10:52, "Dmitry Tantsur"  wrote:

>On 04/05/2016 12:24 PM, Dmitry Tantsur wrote:
>> Hi!
>>
>> I'd like to propose Anton to the ironic-inspector core reviewers team.
>> His stats are pretty nice [1], he's making meaningful reviews and he's
>> pushing important things (discovery, now tempest).
>>
>> Members of the current ironic-inspector-team and everyone interested,
>> please respond with your +1/-1. A lazy consensus will be applied: if
>> nobody objects by the next Tuesday, the change will be in effect.
>
>The change is in effect, congratulations! :)
>
>>
>> Thanks
>>
>> [1] http://stackalytics.com/report/contribution/ironic-inspector/60
>>




Re: [openstack-dev] [ironic] [api] Fixing bugs in old microversions

2016-04-12 Thread Vladyslav Drok
Thank you Jay, Dmitry and Sean for your input! At yesterday's Ironic
meeting the consensus was to leave the removal possibility in older API
versions and not to bikeshed with new microversions.
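
For context, the guard in the proposed (and ultimately dropped) fix was
essentially a version check on the patch path - roughly like this sketch,
not the actual patch:

    MIN_VERSION_FOR_NAME = (1, 5)

    def check_patch_allowed(patch, requested_version):
        # Reject 'remove' on /name for clients pinned below 1.5,
        # where node names did not exist yet.
        for op in patch:
            if (op.get('path') == '/name' and op.get('op') == 'remove'
                    and requested_version < MIN_VERSION_FOR_NAME):
                raise ValueError(
                    'Node names are not available before API 1.5')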

Vlad

On Mon, Apr 11, 2016 at 5:36 PM, Jay Pipes  wrote:

> On 04/11/2016 10:11 AM, Sean Dague wrote:
>
>> On 04/11/2016 09:54 AM, Jay Pipes wrote:
>>
>>> On 04/11/2016 09:48 AM, Dmitry Tantsur wrote:
>>>
 On 04/11/2016 02:00 PM, Jay Pipes wrote:

> On 04/11/2016 04:48 AM, Vladyslav Drok wrote:
>
>> Hi all!
>>
>> There is a bug in the Ironic API that allows removing a node name using
>> any API version, while node names were only added in version 1.5. There
>> are concerns that fixing this might be a breaking change, and I'm not sure
>> how to proceed with that. Here is a change that fixes the bug by simply
>> forbidding a PATCH remove request on the /name path if the requested API
>> version is less than 1.5. Is it enough to just mention this in a release
>> note, maybe both in the fixes and upgrade sections? Bumping the API
>> microversion to fix a previous microversion seems weird to me.
>>
>> Any suggestions?
>>
>
> I think the approach you've taken -- just fix it and not add a new
> microversion -- is the correct approach.
>

 Do we really allow breaking API changes, covering old microversions?

>>>
>>> Generally we have said that if a patch is fixing only an error response
>>> code (as would be the case here -- changing from a 202 to a 400 when
>>> name is attempted to be changed) then it doesn't need a microversion.
>>>
>>> Sean, am I remembering that correctly?
>>>
>>
>> No, in Nova land a 2xx -> 4xx would use a microversion. These sorts of
>> things actually break people (we've seen it happen in Tempest / Shade).
>>
>> Fixing a 5xx does not, as the server is never supposed to 5xx. 5xx is
>> always a bug.
>>
>
> OK, my apologies Vlad and Dmitry. This is why I defer to Sean :)
>
> Best,
> -jay
>
>


Re: [openstack-dev] [Horizon][stable] proposing Rob Cresswell for Horizon stable core

2016-04-12 Thread Matthias Runge

On 12/04/16 05:12, Tony Breeds wrote:
> On Thu, Apr 07, 2016 at 12:01:31PM +0200, Matthias Runge wrote:
>> Hello,
>> 
>> I'm proposing Rob Cresswell to become stable core for Horizon. I
>> thought, in the past all PTLs were in the stable team, but this
>> doesn't seem to be true any more.
> 
> This *may* have been true when the project specific team were
> created, which was before my time, but it isn't true now.
> 
>> Please chime in with +1/-1
> 
> -1
> 
> As with core status in other parts of OpenStack, it's merit /
> evidence based. That is to say, if you're doing good work and showing
> an understanding of the stable policy, then great, let's do this
> thing.
> 
> A quick check of reviewstats:
> 
> stable-liberty-horizon-120.txt :
> http://paste.openstack.org/show/493706 stable-kilo-horizon-120.txt
> : http://paste.openstack.org/show/493707
> 
> Shows that Rob has done 2 stable reviews in the last 120 days.
> 
> This is absolutely no reflection on Rob's contribution to Horizon
> as a whole.
> 
We never took review stats for stable branches as a basis for proposing
or voting for someone to become stable-maint.

In the past, people were proposed (and approved) to become stable
maintainers because people were convinced the new person would care
about stable status in Horizon.

And after approval, someone would act as a mentor to educate the new
core on the stable review guidelines. That's a quite lightweight process;
it hasn't failed us so far, and we're in a situation where
we're desperately looking for stable reviewers and cores.

--
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Paul Argiry, Charles Cachera, Michael Cunningham,
Michael O'Neill



[openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after portbinding failed

2016-04-12 Thread Andreas Scheuring
Hi together,
I wanted to start a discussion about a live migration problem that currently
exists in the Nova-Neutron communication.

Basics Live Migration and Nova - Neutron communication
--
On a high level, Nova Live Migration happens in 3 stages. (--> is what's 
happening from network perspective)
#1 pre_live_migration
   --> libvirtdriver: nova plugs the interface (for ovs hybrid sets up the 
linuxbridge + veth and connects it to br-int)
#2 live_migration_operation
   --> instance is being migrated (using libvirt with the domain.xml that is 
currently active on the migration source)
#3 post_live_migration
   --> binding:host_id is being updated for the port
   --> libvirtdriver: domain.xml is being regenerated  
More details can be found here [1]

The problem - portbinding fails
---
With this flow, ML2 portbinding is triggered in post_live_migration. At this 
point, the instance has already been migrated and is active on the migration 
destination.
Part of the port-binding is happening in the mechanism drivers, where the vif 
information for the port (vif-type, vif-details,..) is being updated.
If this portbinding fails, port will get the binding:vif_type "binding_failed".
After that the nova libvirt driver starts generating the domain xml again to 
persist it. Part of this generation is also generating the interface 
definition. 
This fails as the vif_type is "binding_failed". Nova will set the instance to 
error state. --> There is no rollback, as it's already too late!

Just a remark: There is no explicit check for the vif_type binding_failed. I 
have the feeling that it (luckily) fails by accident when generating the xml.

--> Ideally we would trigger the portbinding before the migration starts - in
pre_live_migration. Then, if binding fails, we could abort the migration before
it even started. The instance would still be
active and fully functional on the source host. I have a WIP patchset out
proposing this change [2]
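
Roughly, the idea in [2] is to move the port update forward and bail out while
a rollback is still trivial. A simplified sketch with python-neutronclient
(the abort exception and the client setup are simplified placeholders):

    from neutronclient.v2_0 import client as neutron_client

    def bind_port_on_destination(neutron, port_id, dest_host):
        """Called from pre_live_migration: bind first, migrate second."""
        neutron.update_port(port_id,
                            {'port': {'binding:host_id': dest_host}})
        port = neutron.show_port(port_id)['port']
        if port.get('binding:vif_type') == 'binding_failed':
            # The instance is still healthy on the source - abort the
            # migration instead of discovering the failure afterwards.
            raise RuntimeError('port %s cannot be bound on %s'
                               % (port_id, dest_host))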


The impact
--
Patchset [2] proposes updating the host_id already in pre_live_migration.
During migration, the port would already be owned by the migration target
(although the guest is still active on the source).
Technically this works fine for all the reference implementations, but it
could be a problem for some third-party mech drivers if they shut down the
port on the source and activate it on the target - although the instance is
still on the source.

Any thoughts on this?


Additional use cases that would be enabled with this change
---
When updating the host_id in pre_live_migration, we could modify the domain.xml
with the new vif information before live migration (see patch [2] and nova spec
[4]).
This enables the following use cases:

#1 Live migration between nodes that run different l2 agents
   E.g. you could migrate an instance from an ovs node to an lb node and vice
versa. This could be used as an l2 agent transition strategy!
#2 Live migration with the macvtap agent
   It would enable the macvtap agent to live migrate instances between hosts
that use a different physical_interface_mapping. See bug [3]

--> #2 is the use case that got me thinking about this whole topic

Potential other solutions
-
#1 Have something like simultaneous portbinding - on migration, a port is bound
to 2 hosts (like a dvr port can be today).
This would require some database refactoring (work has already been
started in the DVR context [7]),
and the REST API would need to be changed so that not a single
binding but a list of bindings is returned. Of course, also create & update
that list.

#2 Execute portbinding without saving it to the DB
We could also introduce a new API (like update port, with a live migration
flag) that would run through the portbinding code and return the port
information for the target node, but would not persist this information. So on
port-show you would still get the old information. The update would only happen
if the migration flag is not present (in post_live_migration like today).
Alternatively, the generated portbinding could be stored in the port context
and be used on the final port_update instead of running through all the code
paths again.


Other efforts in the area nova neutron live migration
-
Just for reference, those are the other activities around nova-neutron live 
migration I'm aware of. But non of them is related to this IMO.

#1 ovs-hybrid plug: wait for the vif-plug event before doing live migration,
see patches [5]
--> On nova plug, this creates the linuxbridge and the veth pair and plugs it
into the br-int. This plug is detected by the ovs agent, which then reports
the device as up,
which again triggers this vif-plug event. This does not solve the problem, as
portbinding is not involved anyhow. This patch ca

Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Tony Breeds
On Tue, Apr 12, 2016 at 11:34:48AM +0200, Thierry Carrez wrote:

> No, this cycle (Newton) is not affected. The proposed change would happen
> after the Barcelona summit (5-month Ocata cycle, new event inserted between
> the Barcelona and the North America summit in May). So feel free to plan
> mid-cycle events for Newton if you think your team would benefit from having
> them.

Ahh thanks for the clarification.

Yours Tony.




Re: [openstack-dev] [ironic] [inspector] Proposing Anton Arefiev (aarefiev) for ironic-inspector-core

2016-04-12 Thread Anton Arefiev
Thanks everyone. I'm very glad to join the team.
I'm looking forward to meeting you all in Austin.

On Tue, Apr 12, 2016 at 12:54 PM, Sam Betts (sambetts) 
wrote:

> Congrats Anton!!!
>
> Sam
>
> On 12/04/2016 10:52, "Dmitry Tantsur"  wrote:
>
> >On 04/05/2016 12:24 PM, Dmitry Tantsur wrote:
> >> Hi!
> >>
> >> I'd like to propose Anton to the ironic-inspector core reviewers team.
> >> His stats are pretty nice [1], he's making meaningful reviews and he's
> >> pushing important things (discovery, now tempest).
> >>
> >> Members of the current ironic-inspector-team and everyone interested,
> >> please respond with your +1/-1. A lazy consensus will be applied: if
> >> nobody objects by the next Tuesday, the change will be in effect.
> >
> >The change is in effect, congratulations! :)
> >
> >>
> >> Thanks
> >>
> >> [1] http://stackalytics.com/report/contribution/ironic-inspector/60
> >>
> >>
>
>
>



-- 
Best regards,
Anton Arefiev
Mirantis, Inc.


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Sylvain Bauza



Le 11/04/2016 22:49, Matt Riedemann a écrit :
A few people have been asking about planning for the nova midcycle for 
newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 
work the best. R-14 is close to the US July 4th holiday, R-13 is 
during the week of the US July 4th holiday, and R-12 is the week of 
the n-2 milestone.


R-16 is too close to the summit IMO, and R-10 is pushing it out too 
far in the release. I'd be open to R-14 though but don't know what 
other people's plans are.




Yeah, agreed. Using the R-15 or R-14 weeks could be nice for non-priority
specs that could be reviewed in a sprint before their FeatureFreeze.


Using the R-11 week means that we would already have hit the non-prio FF, but
that also means that we could use that sprint time for reviewing the
priorities like we did previously (which worked great).


So, I'm not sold on which week we should prefer, just commenting
on what we could discuss during that midcycle.



As far as a venue is concerned, I haven't heard any offers from 
companies to host yet. If no one brings it up by the summit, I'll see 
if hosting in Rochester, MN at the IBM site is a possibility.


[1] http://releases.openstack.org/newton/schedule.html



Just rephrasing what you said on IRC: we expect to have around 40 people
for 3 days, in case people wonder whether they can support us.


-Sylvain



Re: [openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after portbinding failed

2016-04-12 Thread Rossella Sblendido
On 04/12/2016 12:05 PM, Andreas Scheuring wrote:
> Hi together, 
> I wanted to start a discussion about a live migration problem that currently
> exists in the Nova-Neutron communication.
> 
> Basics Live Migration and Nova - Neutron communication
> --
> On a high level, Nova Live Migration happens in 3 stages. (--> is what's 
> happening from network perspective)
> #1 pre_live_migration
>--> libvirtdriver: nova plugs the interface (for ovs hybrid sets up the 
> linuxbridge + veth and connects it to br-int)
> #2 live_migration_operation
>--> instance is being migrated (using libvirt with the domain.xml that is 
> currently active on the migration source)
> #3 post_live_migration
>--> binding:host_id is being updated for the port
>--> libvirtdriver: domain.xml is being regenerated  
> More details can be found here [1]
> 
> The problem - portbinding fails
> ---
> With this flow, ML2 portbinding is triggered in post_live_migration. At this 
> point, the instance has already been migrated and is active on the migration 
> destination.
> Part of the port-binding is happening in the mechanism drivers, where the vif 
> information for the port (vif-type, vif-details,..) is being updated.
> If this portbinding fails, port will get the binding:vif_type 
> "binding_failed".
> After that the nova libvirt driver starts generating the domain xml again to 
> persist it. Part of this generation is also generating the interface 
> definition. 
> This fails as the vif_type is "binding_failed". Nova will set the instance to 
> error state. --> There is no rollback, as it's already too late!
> 
> Just a remark: There is no explicit check for the vif_type binding_failed. I 
> have the feeling that it (luckily) fails by accident when generating the xml.
> 
> --> Ideally we would trigger the portbinding before the migration started - 
> in pre_live_migration. Then, if binding fails, we could abort migration 
> before it even started. The instance would still be
> active and fully functional on the source host. I have a WIP patchset out 
> proposing this change [2]
> 
> 
> The impact
> --
> Patchset [2] propose updating the host_id already in pre_live_migration. 
> During migration, the port would already be owned by the migration target 
> (although the guest is still active on the source)
> Technically this works fine for all the reference implementations, but this 
> could be a problem for some third party mech drivers, if they shut down the 
> port on the source and activate it on the target - although instance is still 
> on the source
> 
> Any thoughts on this?

+1 on this anyway let's hear back from third party drivers maintainers.

> 
> 
> Additional use cases that would be enabled with this change
> ---
> When updating the host_id in pre_live_migration, we could modify the 
> domain.xml with the new vif information before live migration (see patch [2] 
> and nova spec [4]).
> This enables the following use cases
> 
> #1 Live Migration between nodes that run different l2 agents
>E.g. you could migrate a instance from an ovs node to an lb node and vice 
> versa. This could be used as l2 agent transition strategy!
> #2 Live Migration with macvtap agent
>It would enable the macvtap agent to live migrate instances between hosts, 
> that use a different physical_interface_mapping. See bug [3]
> 
> --> #2 is the use case that made me thinking about this whole topic
> 
> Potential other solutions
> -
> #1 Have something like simultaneous portbinding - On migration, a port is 
> bound to 2 hosts (like a dvr port can today).
> Therefore some database refactorings would be required (work has already been 
> started in the DVR context [7])
> And the Rest API would need to be changed in a way, that there's not a single 
> binding, but a list of bindings returned. Of course also create & update that 
> list.
>

I don't like this one. It would require lots of code changes and I am
not sure it would solve the problem completely. The model of having a
port bound to two hosts just because it's migrating is confusing.


> #2 Execute portbinding without saving it to the DB
> We could also introduce a new API (like update port, with a live migration
> flag) that would run through the portbinding code and return the port
> information for the target node, but would not persist this information. So
> on port-show you would still get the old information. The update would only
> happen if the migration flag is not present (in post_live_migration like
> today).
> Alternatively, the generated portbinding could be stored in the port context
> and be used on the final port_update instead of running through all the
> code paths again.
> 

Another possible solution is to apply the same strategy we use for
instance creation. Nova should wait to get a

Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-12 Thread Ihar Hrachyshka

张晨  wrote:

Thx for the answer, i finally locate the implementation in  
nova.network.linux_net.create_ovs_vif_port


but how could nova execute the ovs-vsctl for the compute-node hypervisor  
just in the control-node?




Nova still runs nova-compute service on your hypervisor to manage instances  
scheduled to the node. That service is the one that will prepare the VIF  
port.
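
For reference, what nova-compute runs locally is approximately the following
(a simplification of nova.network.linux_net.create_ovs_vif_port, which
executes the command as root via Nova's utils):

    import subprocess

    def create_ovs_vif_port(bridge, dev, iface_id, mac, instance_id):
        # Plug the device into the integration bridge and tag it with the
        # Neutron port metadata that the agents use to wire it up.
        subprocess.check_call([
            'ovs-vsctl', '--', '--if-exists', 'del-port', dev,
            '--', 'add-port', bridge, dev,
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id])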


Ihar



Re: [openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after portbinding failed

2016-04-12 Thread Kevin Benton
We can't change the host_id until after the migration or it will break
l2pop and other drivers that use the host as a location indicator (e.g. many
top-of-rack drivers do this to determine which switch port should be wired
up).

There is already a patch that went in to inform Neutron of the destination
host here for proactive DVR wiring: https://review.openstack.org/#/c/275420/
During this port update phase, we can validate the destination host is
'bindable' with the given port info and fail it otherwise. This should
block Nova from continuing.

However, we have to figure out how ML2 will know if something is
'bindable'. The only interface we have right now is bind_port. It is
possible that we can do a faked bind_port attempt using what the port
host_id would look like after migration. It's made clear in the ML2 driver
API that bind_port results may not be committed:
https://github.com/openstack/neutron/blob/4440297a2ff5a6893b748c2400048e840283c718/neutron/plugins/ml2/driver_api.py#L869

So the workflow would be something like:
* Nova calls Neutron port update with the destination host in the binding
details
* In ML2 port update, the destination host is placed into a copy of the
port in the host_id field and bind_port is called.
* If bind_port is unsuccessful, it fails the port update for Nova to
prevent migration.
* If bind_port is successful, the results of the port update are committed
(with the original host_id and the new host_id in the destination_host
field).
* Workflow continues as normal here.

So this heavily exploits the fact that bind_port is supposed to be
mutation-free in the ML2 driver API. We may encounter drivers that don't
follow this now, but they are already exposed to other bugs if they mutate
state so I think the onus would be on them to fix their driver.
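
In code, the dry-run check could look something like the sketch below inside
the ML2 plugin (heavily simplified; _make_candidate_context is a hypothetical
helper, not an existing ML2 method):

    import copy

    def _is_bindable_on_host(self, plugin_context, port, dest_host):
        # Pretend the port already lives on the destination and ask the
        # mechanism drivers to bind it, without committing anything.
        candidate = copy.deepcopy(port)
        candidate['binding:host_id'] = dest_host
        bind_context = self._make_candidate_context(plugin_context, candidate)
        self.mechanism_manager.bind_port(bind_context)
        # The result lands on the context's binding record; just check
        # that it was not left as 'binding_failed'.
        return bind_context.vif_type != 'binding_failed'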

Cheers,
Kevin Benton

On Tue, Apr 12, 2016 at 3:33 AM, Rossella Sblendido 
wrote:

> On 04/12/2016 12:05 PM, Andreas Scheuring wrote:
> > Hi together,
> > I wanted to start discussion about Live Migration problem that currently
> exists in the nova neutron communication.
> >
> > Basics Live Migration and Nova - Neutron communication
> > --
> > On a high level, Nova Live Migration happens in 3 stages. (--> is what's
> happening from network perspective)
> > #1 pre_live_migration
> >--> libvirtdriver: nova plugs the interface (for ovs hybrid sets up
> the linuxbridge + veth and connects it to br-int)
> > #2 live_migration_operation
> >--> instance is being migrated (using libvirt with the domain.xml
> that is currently active on the migration source)
> > #3 post_live_migration
> >--> binding:host_id is being updated for the port
> >--> libvirtdriver: domain.xml is being regenerated
> > More details can be found here [1]
> >
> > The problem - portbinding fails
> > ---
> > With this flow, ML2 portbinding is triggered in post_live_migration. At
> this point, the instance has already been migrated and is active on the
> migration destination.
> > Part of the port-binding is happening in the mechanism drivers, where
> the vif information for the port (vif-type, vif-details,..) is being
> updated.
> > If this portbinding fails, port will get the binding:vif_type
> "binding_failed".
> > After that the nova libvirt driver starts generating the domain xml
> again to persist it. Part of this generation is also generating the
> interface definition.
> > This fails as the vif_type is "binding_failed". Nova will set the
> instance to error state. --> There is no rollback, as it's already too late!
> >
> > Just a remark: There is no explicit check for the vif_type
> binding_failed. I have the feeling that it (luckily) fails by accident when
> generating the xml.
> >
> > --> Ideally we would trigger the portbinding before the migration
> started - in pre_live_migration. Then, if binding fails, we could abort
> migration before it even started. The instance would still be
> > active and fully functional on the source host. I have a WIP patchset
> out proposing this change [2]
> >
> >
> > The impact
> > --
> > Patchset [2] propose updating the host_id already in pre_live_migration.
> > During migration, the port would already be owned by the migration
> target (although the guest is still active on the source)
> > Technically this works fine for all the reference implementations, but
> this could be a problem for some third party mech drivers, if they shut
> down the port on the source and activate it on the target - although
> instance is still on the source
> >
> > Any thoughts on this?
>
> +1 on this anyway let's hear back from third party drivers maintainers.
>
> >
> >
> > Additional use cases that would be enabled with this change
> > ---
> > When updating the host_id in pre_live_migration, we could modify the
> domain.xml with the new vif information before live migration (see patch
> [2] and

Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-04-12 Thread Marcus Furlong
On 11 April 2016 at 22:15, Emilien Macchi  wrote:
> On Mon, Apr 11, 2016 at 1:57 AM, Marcus Furlong  wrote:
>> On 29 March 2016 at 09:53, Emilien Macchi  wrote:
>>> Puppet OpenStack team has the immense pleasure to announce the release
>>> of 24 Puppet modules.
>
>> Also (related to the above), are there any plans to do point releases
>> including the bugfixes from the stable/mitaka branches? I ask because
>> currently, if modules are installed from puppet forge, you still need
>> to apply patches locally for things that have been fixed in the git
>> stable branch. I've just noticed that I have patches for nearly every
>> module for liberty. This is because after the initial puppet forge
>> release, there were no further updates to any of the modules.
>
> What we do currently is backporting bugfix in stable branches (ex:
> stable/liberty) but we don't produce tags quite often. It might change
> in the future but until now we don't have much sub-releases between
> cycles.
> I would suggest you to deploy modules from git/branch (using your
> stable version) and if any backport is required, feel free to submit
> it and our team will review it according to our backport policy:
> https://wiki.openstack.org/wiki/Puppet/Backport_policy

Is there any point in releasing the modules on Puppet Forge if they are
mostly going to stay in a broken state (i.e. the initial release)? That
seems likely to reflect badly on the project.

Cheers,
Marcus.

-- 
Marcus Furlong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][horizon] metric-list not complete and show metrics in horizon dashboard

2016-04-12 Thread Safka, JaroslavX
Hi all,

I found strange behavior in the command 'ceilometer metric-list', the
ceilometer database and horizon.
First, the command 'ceilometer metric-list'.
It displays only "Name cpu.cpu" and that is all. But if I look into the
ceilometer database, I see this:
MariaDB [ceilometer]> select * from meter;
+----+---------------------------+------------+-----------+
| id | name                      | type       | unit      |
+----+---------------------------+------------+-----------+
| 58 | cpu                       | cumulative | ns        |
|  1 | cpu.cpu                   | gauge      | jiffies   |
| 74 | cpu.delta                 | delta      | ns        |
| 71 | cpu_util                  | gauge      | %         |
...
| 64 | disk.usage                | gauge      | B         |
| 62 | disk.write.bytes          | cumulative | B         |
| 72 | disk.write.bytes.rate     | gauge      | B/s       |
| 49 | disk.write.requests       | cumulative | request   |
| 66 | disk.write.requests.rate  | gauge      | request/s |
| 14 | hugepages.free_hugepages  | gauge      | None      |
| 23 | hugepages.nr_hugepages    | gauge      | None      |
...

I have also proper resources and samples in db.

*
And my question is: how are the database table 'meter' and the command
metric-list connected?
Second question: how can I propagate the metrics to the horizon dashboard
"Resource usage"?
(I'm only able to see the cpu metric from a started worker image)

Can you please point me out?
Thanks a lot!
Jarek

Background:
I'm writing a plugin which connects collectd and ceilometer, and I want to see
the collectd metrics in horizon or at least in the ceilometer shell.


--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][horizon] metric-list not complete and show metrics in horizon dashboard

2016-04-12 Thread Matthias Runge
On 12/04/16 13:47, Safka, JaroslavX wrote:
> Hi all,
> 

> Second question: how can I propagate the metrics to the horizon dashboard
> "Resource usage"?
> (I'm only able to see the cpu metric from a started worker image)

Jaroslav,

I would suggest you look at gnocchi instead of ceilometer for Horizon
integration. We (in Horizon) have not managed to deprecate ceilometer yet
[1], but that being said, Ceilometer is not the right tool for
displaying (lots of) metering data in horizon.


[1] https://review.openstack.org/#/c/272644/

-- 
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Paul Argiry, Charles Cachera, Michael Cunningham,
Michael O'Neill

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][puppet] Puppet OpenStack 8.0.0 release (mitaka)

2016-04-12 Thread Emilien Macchi
On Tue, Apr 12, 2016 at 7:45 AM, Marcus Furlong  wrote:
> On 11 April 2016 at 22:15, Emilien Macchi  wrote:
>> On Mon, Apr 11, 2016 at 1:57 AM, Marcus Furlong  wrote:
>>> On 29 March 2016 at 09:53, Emilien Macchi  wrote:
 Puppet OpenStack team has the immense pleasure to announce the release
 of 24 Puppet modules.
>>
>>> Also (related to the above), are there any plans to do point releases
>>> including the bugfixes from the stable/mitaka branches? I ask because
>>> currently, if modules are installed from puppet forge, you still need
>>> to apply patches locally for things that have been fixed in the git
>>> stable branch. I've just noticed that I have patches for nearly every
>>> module for liberty. This is because after the initial puppet forge
>>> release, there were no further updates to any of the modules.
>>
>> What we do currently is backporting bugfix in stable branches (ex:
>> stable/liberty) but we don't produce tags quite often. It might change
>> in the future but until now we don't have much sub-releases between
>> cycles.
>> I would suggest you to deploy modules from git/branch (using your
>> stable version) and if any backport is required, feel free to submit
>> it and our team will review it according to our backport policy:
>> https://wiki.openstack.org/wiki/Puppet/Backport_policy
>
> Is there any point in releasing the modules on Puppet Forge if they are
> mostly going to stay in a broken state (i.e. the initial release)? That
> seems likely to reflect badly on the project.

Like other projects, we work in iterations where each release contains
new features, bugfixes, etc. When we fix bugs after a release, we do
our best to backport them to stable branches and provide a new release
when possible. Most of our operators do not run the master branch
directly, and I think we're doing a good job at backports, thanks to
CI and reviewers.
During Liberty & Mitaka, release management was handled by very few
people. It will change during the Newton cycle: we are working hard to
transfer release management to the OpenStack release team, and
hopefully that will help us provide more releases than before.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][tempest] disabling 'full' tempest tests for ceilometer changes in CI

2016-04-12 Thread Ryota Mibu
Hi,


Can we disable 'full' tempest tests for ceilometer?

- gate-tempest-dsvm-ceilometer-mongodb-full
- gate-tempest-dsvm-ceilometer-mysql-full
- gate-tempest-dsvm-ceilometer-mysql-neutron-full
- gate-tempest-dsvm-ceilometer-postgresql-full
- gate-tempest-dsvm-ceilometer-es-full

We've merged the ceilometer test cases from the tempest repo into the
ceilometer/aodh repos as tempest plugins [1-2].
So I suppose we can disable these jobs for ceilometer checks and gates once
we enable the ceilometer tempest tests with the migrated code [3].
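
(For context: tempest discovers such plugins through the 'tempest.test_plugins'
setuptools entry point. A minimal registration in the plugin package's
setup.cfg looks roughly like this; the module path below is hypothetical:)

    [entry_points]
    tempest.test_plugins =
        ceilometer_tests = ceilometer.tests.tempest.plugin:CeilometerTempestPlugin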

However, this will also stop the tempest tests that live outside the
ceilometer directory from running in the gate jobs for ceilometer, as the
current setup kicks off the 'full' tempest suite even for ceilometer changes.
Let me know if there is any problem.
I might be missing the original intention of the Jenkins job setup for
ceilometer.

[1] https://blueprints.launchpad.net/ceilometer/+spec/tempest-plugin
[2] https://blueprints.launchpad.net/ceilometer/+spec/tempest-aodh-plugin
[3] https://review.openstack.org/#/c/303921/


Thanks,
Ryota


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [API]Make API errors conform to the common error message without microversion

2016-04-12 Thread Xie, Xianshan
Hi, Duncan & michael,
Thanks a lot for your replies.

I definitely agree with you that microversions are the best approach to solve
the backwards compat issue, and neutron is also going to adopt them [1]. But I
think it will take a long time to fully introduce them into neutron.
So IMO, we can continue this discussion and then implement this feature in
parallel with the microversion work.

Actually, according to the design [2], only a slight change will be needed once
microversions land, i.e. replace the new header with the microversion to
control the final format of the error message in the wsgi interface.
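
(For reference, the API-WG errors guideline shapes an error response body
roughly like the following; the field values here are invented for
illustration:)

    {
        "errors": [
            {
                "request_id": "req-c1ffd8ca-...",
                "code": "neutron.network.not_found",
                "status": 404,
                "title": "Network not found",
                "detail": "Network 5a3f... could not be found.",
                "links": [{"rel": "help",
                           "href": "http://developer.openstack.org/..."}]
            }
        ]
    }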

[1] https://review.openstack.org/#/c/136760/
[2] https://review.openstack.org/#/c/298704

Best regards,
xiexs

-Original Message-
From: michael mccune [mailto:m...@redhat.com] 
Sent: Monday, April 11, 2016 11:11 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [API]Make API errors conform to the 
common error message without microversion

please forgive my lack of direct knowledge about the neutron process and how 
this fits in. i'm just commenting from the perspective of someone looking at 
this from the api-wg.

On 04/11/2016 09:52 AM, Duncan Thomas wrote:
> So by adding the handling of a header to change the behaviour of the 
> API, you're basically implementing a subset of microversions, with a 
> non-standard header (See the API WG spec on non-proliferation of 
> headers). You'll find it takes much of the work that implementing 
> microversions does, and explodes your API test matrix some more.
>
> Sounds like something that should go on hold until microversions is 
> done, assuming that microversions are desired anyway. Standard error 
> messages are not such a big win that they're worth non-standard 
> headers and yet more API weirdness that needs to sit around 
> potentially for a very long time (see the API WG rules on removing 
> APIs, which is basically never)
>

i think this advice sounds reasonable. adding a side-channel around 
microversions sounds like work that would itself need a microversion bump when 
it is finally removed ;)

i also agree with the reasoning about the benefit from the standardized error 
messages. it is nice to get a standard error message produced, but i think 
adding microversions is probably a bigger win in the near term because it will 
make these other transitions smoother.

> On 8 April 2016 at 11:23, Xie, Xianshan wrote:
>
> Hi, all,
>
> We are attempting to make the neutron API conform to the common
> error message format recommended by API-WG [1]. As this change will
> introduce a new error format into neutron which is different from
> existing  format [2], we should think of some solutions to preserve
> the backward compat.
>
> The easiest way to do that is microversion, just like the cinder
> does [3] although which is still in progress. But unfortunately,
> there are many projects in which the microversion haven't been
> landed yet, e.g. neutron,  glance, keystone etc. Thus during the
> interim period we have to find other approaches to keep the backward
> compat.
>
> According to the discussion, a new header would be a good idea to
> resolve this issue [4], we think.
> For instance:
> curl -X DELETE "http://xxx:9696/v2.0/networks/xxx"; -H
> "Neutron-Common-Error-Format: True"
>
> But we haven't decided which header name will be used yet.
> So how do you think which is the best appropriate one?
> A: Neutron-Common-Error-Format
> B: OpenStack-Neutron-Common-Error-Format
> C: other (Could you please specify it? Thanks in advance)
>
> Any comments would be appreciated.
>
> [1] http://specs.openstack.org/openstack/api-wg/guidelines/errors.html
> [2] https://review.openstack.org/#/c/113570/
> [3] https://review.openstack.org/#/c/293306/
> [4] https://bugs.launchpad.net/neutron/+bug/1554869
>
> Best regards,
> xiexs
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> --
> Duncan Thomas
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [ceilometer][tempest] disabling 'full' tempest tests for ceilometer changes in CI

2016-04-12 Thread gordon chung
i'd be in favour of dropping the full cases -- never really understood 
why we ran all the tests everywhere. ceilometer/aodh are present at the 
end of the workflow so i don't think we need to be concerned with any of 
the other tests, only the ones explicitly related to ceilometer/aodh.

On 12/04/2016 8:12 AM, Ryota Mibu wrote:
> Hi,
>
>
> Can we disable 'full' tempest tests for ceilometer?
>
>  - gate-tempest-dsvm-ceilometer-mongodb-full
>  - gate-tempest-dsvm-ceilometer-mysql-full
>  - gate-tempest-dsvm-ceilometer-mysql-neutron-full
>  - gate-tempest-dsvm-ceilometer-postgresql-full
>  - gate-tempest-dsvm-ceilometer-es-full
>
> We've merged ceilometer test cases from tempest repo to ceilometer/aodh repo 
> as tempest plugin [1-2].
> So, I suppose we can disable these jobs for ceilometer checks and gates, once 
> we enabled ceilometer tempest tests with migrated codes [3].
>
> But, it will stop other tempest tests not in ceilometer dir of tempest in 
> gate jobs for ceilometer, as current setup is to kick 'full' tempest tests 
> even for ceilometer changes.
> Let me know if there is any problem.
> I might miss the original intention of Jenkins job setup for ceilometer.
>
> [1] https://blueprints.launchpad.net/ceilometer/+spec/tempest-plugin
> [2] https://blueprints.launchpad.net/ceilometer/+spec/tempest-aodh-plugin
> [3] https://review.openstack.org/#/c/303921/
>
>
> Thanks,
> Ryota
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][tempest] disabling 'full' tempest tests for ceilometer changes in CI

2016-04-12 Thread Chris Dent

On Tue, 12 Apr 2016, gordon chung wrote:


i'd be in favour of dropping the full cases -- never really understood
why we ran all the tests everywhere. ceilometer/aodh are present at the
end of the workflow so i don't think we need to be concerned with any of
the other tests, only the ones explicitly related to ceilometer/aodh.


+1
--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][horizon] metric-list not complete and show metrics in horizon dashboard

2016-04-12 Thread gordon chung


On 12/04/2016 7:47 AM, Safka, JaroslavX wrote:

> *
> And my question is: how are the database table 'meter' and the command
> metric-list connected?

i assume you mean meter-list. it uses a combination of data from meter 
table and resource table[1]. this is because it lists all the meters and 
the resources associated with it. yes, it's complicated (which is why 
we'd recommend Gnocchi or your own special solution)

> Second question: how can I propagate the metrics to the horizon dashboard
> "Resource usage"?
> (I'm only able to see the cpu metric from a started worker image)

horizon uses ceilometerclient to grab data so i imagine they are using 
some combination of resource-list, sample-list, meter-list. as Matthias 
mentioned, you probably shouldn't rely on the existing horizon interface 
as the hope is to deprecate the view in horizon since no one is really 
sure what it's designed to show.

>
> Background:
> I'm writing a plugin which connects collectd and ceilometer, and I want to see
> the collectd metrics in horizon or at least in the ceilometer shell.

is this an addition to the work that Emma did[2]? i'm assuming so, given
your locale/company.

[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L539-L549
[2] https://github.com/openstack/collectd-ceilometer-plugin

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [API]Make API errors conform to the common error message without microversion

2016-04-12 Thread Ihar Hrachyshka

Xianshan  wrote:


Hi, Duncan & michael,
Thanks a lot for your replies.

I definitely agree with you that microversions are the best approach to
solve the backwards compat issue, and neutron is also going to adopt them
[1]. But I think it will take a long time to fully introduce them into
neutron.
So IMO, we can continue this discussion and then implement this feature
in parallel with the microversion work.

Actually, according to the design [2], only a slight change will be needed
once microversions land, i.e. replace the new header with the microversion
to control the final format of the error message in the wsgi interface.


[1] https://review.openstack.org/#/c/136760/
[2] https://review.openstack.org/#/c/298704


If no one is going to help with microversioning, it won’t ever happen. I  
suggest we consolidate whichever resources we have to get it done, instead  
of working on small API iterations as proposed in the thread.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Versioned notification work continues in Newton

2016-04-12 Thread Balázs Gibizer
Hi, 

I just want to let the community know that the versioned notification
work we started in Mitaka is planned to be continued in Newton. 
The planned goals for Newton:
* Transform the most important notifications to the new format [1]
* Help others to use the new framework adding new notifications [2], [3], [4]
* Further help notification consumers by adding json schemas for the 
notifications [5]
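
To give a feeling for the target format, a versioned notification payload
looks roughly like this (structure per the versioned notification framework;
the values here are invented):

    {
        "priority": "INFO",
        "event_type": "instance.update",
        "publisher_id": "nova-compute:compute-1",
        "payload": {
            "nova_object.name": "InstanceUpdatePayload",
            "nova_object.namespace": "nova",
            "nova_object.version": "1.0",
            "nova_object.data": {"uuid": "...", "state": "active"}
        }
    }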

I will be in Austin during the whole summit week to discuss these ideas. 
I will restart the notification subteam meeting [6] after the summit to have
a weekly synchronization point. All this work is followed up on the etherpad 
[7].

Cheers,
Gibi

[1] 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-newton
 
[2] https://blueprints.launchpad.net/nova/+spec/add-swap-volume-notifications 
[3] https://blueprints.launchpad.net/nova/+spec/expose-quiesce-unquiesce-api 
[4] https://blueprints.launchpad.net/nova/+spec/hypervisor-notification 
[5] https://blueprints.launchpad.net/nova/+spec/json-schema-for-notifications 
[6] https://wiki.openstack.org/wiki/Meetings/NovaNotification 
[7] https://etherpad.openstack.org/p/nova-versioned-notifications 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after portbinding failed

2016-04-12 Thread Andreas Scheuring


> Another possible solution is to apply the same strategy we use for
> instance creation. Nova should wait to get a confirmation from Neutron
> before declaring the migration successful.

But that wouldn't solve the problem, I think. The problem is that the
port binding happens after the migration has already finished. Today nova
already detects if it fails (when generating the xml on the target) and sets
the instance to error state. So there's already something like that.

But I want to prevent such an invalid migration right from the beginning
(which is technically possible).

Thanks!


-- 
-
Andreas (IRC: scheuran) 

On Di, 2016-04-12 at 12:33 +0200, Rossella Sblendido wrote:
> On 04/12/2016 12:05 PM, Andreas Scheuring wrote:
> > Hi together, 
> > I wanted to start discussion about Live Migration problem that currently 
> > exists in the nova neutron communication.
> > 
> > Basics Live Migration and Nova - Neutron communication
> > --
> > On a high level, Nova Live Migration happens in 3 stages. (--> is what's 
> > happening from network perspective)
> > #1 pre_live_migration
> >--> libvirtdriver: nova plugs the interface (for ovs hybrid sets up the 
> > linuxbridge + veth and connects it to br-int)
> > #2 live_migration_operation
> >--> instance is being migrated (using libvirt with the domain.xml that 
> > is currently active on the migration source)
> > #3 post_live_migration
> >--> binding:host_id is being updated for the port
> >--> libvirtdriver: domain.xml is being regenerated  
> > More details can be found here [1]
> > 
> > The problem - portbinding fails
> > ---
> > With this flow, ML2 portbinding is triggered in post_live_migration. At 
> > this point, the instance has already been migrated and is active on the 
> > migration destination.
> > Part of the port-binding is happening in the mechanism drivers, where the 
> > vif information for the port (vif-type, vif-details,..) is being updated.
> > If this portbinding fails, port will get the binding:vif_type 
> > "binding_failed".
> > After that the nova libvirt driver starts generating the domain xml again 
> > to persist it. Part of this generation is also generating the interface 
> > definition. 
> > This fails as the vif_type is "binding_failed". Nova will set the instance 
> > to error state. --> There is no rollback, as it's already too late!
> > 
> > Just a remark: There is no explicit check for the vif_type binding_failed. 
> > I have the feeling that it (luckily) fails by accident when generating the 
> > xml.
> > 
> > --> Ideally we would trigger the portbinding before the migration started - 
> > in pre_live_migration. Then, if binding fails, we could abort migration 
> > before it even started. The instance would still be
> > active and fully functional on the source host. I have a WIP patchset out 
> > proposing this change [2]
> > 
> > 
> > The impact
> > --
> > Patchset [2] propose updating the host_id already in pre_live_migration. 
> > During migration, the port would already be owned by the migration target 
> > (although the guest is still active on the source)
> > Technically this works fine for all the reference implementations, but this 
> > could be a problem for some third party mech drivers, if they shut down the 
> > port on the source and activate it on the target - although instance is 
> > still on the source
> > 
> > Any thoughts on this?
> 
> +1 on this anyway let's hear back from third party drivers maintainers.
> 
> > 
> > 
> > Additional use cases that would be enabled with this change
> > ---
> > When updating the host_id in pre_live_migration, we could modify the 
> > domain.xml with the new vif information before live migration (see patch 
> > [2] and nova spec [4]).
> > This enables the following use cases
> > 
> > #1 Live Migration between nodes that run different l2 agents
> >E.g. you could migrate a instance from an ovs node to an lb node and 
> > vice versa. This could be used as l2 agent transition strategy!
> > #2 Live Migration with macvtap agent
> >It would enable the macvtap agent to live migrate instances between 
> > hosts, that use a different physical_interface_mapping. See bug [3]
> > 
> > --> #2 is the use case that made me thinking about this whole topic
> > 
> > Potential other solutions
> > -
> > #1 Have something like simultaneous portbinding - On migration, a port is 
> > bound to 2 hosts (like a dvr port can today).
> > Therefore some database refactorings would be required (work has already 
> > been started in the DVR context [7])
> > And the Rest API would need to be changed in a way, that there's not a 
> > single binding, but a list of bindings returned. Of course also create & 
> > update that list.
> >
> 
> I don't like this one. This would require lots of code changes and I am
> not sure 

[openstack-dev] [nova] Nova API sub-team meeting

2016-04-12 Thread Alex Xu
Hi,

We have our weekly Nova API meeting tomorrow. The meeting is held
Wednesdays at 1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after portbinding failed

2016-04-12 Thread Andreas Scheuring
Hi Kevin, thanks for your input! See my comments in line



-- 
-
Andreas (IRC: scheuran) 

On Di, 2016-04-12 at 04:12 -0700, Kevin Benton wrote:
> We can't change the host_id until after the migration or it will break
> l2pop other drivers that use the host as a location indicator (e.g.
> many top of rack drivers do this to determine which switch port should
> be wired up).

I was assuming something like that. Thanks for clarification.


> There is already a patch that went in to inform Neutron of the
> destination host here for proactive DVR
> wiring: https://review.openstack.org/#/c/275420/ During this port
> update phase, we can validate the destination host is 'bindable' with
> the given port info and fail it otherwise. This should block Nova from
> continuing.
> 
> 
> However, we have to figure out how ML2 will know if something is
> 'bindable'. The only interface we have right now is bind_port. It is
> possible that we can do a faked bind_port attempt using what the port
> host_id would look like after migration. It's made clear in the ML2
> driver API that bind_port results may not be
> committed: 
> https://github.com/openstack/neutron/blob/4440297a2ff5a6893b748c2400048e840283c718/neutron/plugins/ml2/driver_api.py#L869

Agree, that was one of the alternatives I had in mind as well. The
introduced API looks good and could be used I think!

> So the workflow would be something like:
> * Nova calls Neutron port update with the destination host in the
> binding details
> * In ML2 port update, the destination host is placed into a copy of
> the port in the host_id field and bind_port is called.
> * If bind_port is unsuccessful, it fails the port update for Nova to
> prevent migration.
> * If bind_port is successful, the results of the port update are
> committed (with the original host_id and the new host_id in the
> destination_host field).
Ideally the host update would then return the new binding information
instead of the current one. That way, I could use this data to modify the
domain.xml with the relevant information before migration.
> * Workflow continues as normal here.
> 
> 
> So this heavily exploits the fact that bind_port is supposed to be
> mutation-free in the ML2 driver API. We may encounter drivers that
> don't follow this now, but they are already exposed to other bugs if
> they mutate state so I think the onus would be on them to fix their
> driver.
> 
> 
> Cheers,
> Kevin Benton
> 

> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-12 Thread Jim Rollenhagen
On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
> Maybe we can continue the discussion here, as there's no enough time in the
> irc meeting :)

Someone mentioned this would make a good summit session, as there are a
few competing proposals that are all good options. I do welcome
discussion here until then, but I'm going to put it on the schedule. :)

// jim

> 
> On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu  wrote:
> 
> >
> > Ironic is currently using shellinabox to provide a serial console, but
> > it's not compatible
> > with nova, so I would like to propose a new console type and a custom HTTP
> > proxy [1]
> > which validates the token and connects to the ironic console from nova.
> >
> > On Horizon side, we should add support for the new console type [2] as
> > well, here are some screenshots from my local environment.
> >
> >
> >
> > [screenshots omitted]
> >
> > Additionally, shellinabox console port management should be improved in
> > ironic: instead of being manually specified, we should introduce a dynamic
> > allocation/deallocation [3] mechanism.
> >
> > Functionality is being implemented in Nova, Horizon and Ironic:
> > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
> > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
> > https://review.openstack.org/#/q/status:open+topic:bug/1526371
> >
> >
> > PS: to achieve this goal, we can also add a new console driver in ironic
> > [4], but I think it doesn't conflict with this, as shellinabox is capable
> > of integrating with nova, and we should support all console drivers.
> >
> >
> > [1] https://blueprints.launchpad.net/nova/+spec/shellinabox-http-proxy
> > [2]
> > https://blueprints.launchpad.net/horizon/+spec/ironic-shellinabox-console
> > [3] https://bugs.launchpad.net/ironic/+bug/1526371
> > [4] https://bugs.launchpad.net/ironic/+bug/1553083
> >
> > --
> > Best Regards,
> > Zhenguo Niu
> >
> 
> 
> 
> -- 
> Best Regards,
> Zhenguo Niu




> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-12 Thread Andreas Scheuring
I must admit, I really like this idea of getting rid of all the devstack
params. It's always a mess looking up the functionality of the various
variables in the code when trying out something new.

I also understand the concern that was raised by somebody (couldn't find
it anymore) that the Neutron config defaults are not necessarily the
values that make sense for devstack. So what about having some code in
the neutron devstack plugin defining the default values without using
fancy variables - so just using iniset?

On the other hand I find it useful to have access to some variables like
HOST_IP or PHYSICAL_NETWORK. I don't want to hardcode my IP into
every setting, and maybe I want to define an interface or bridge mapping
that makes use of the PHYSICAL_NETWORK var.
However it's still possible to define your own vars in local.conf and
use them in post-config, e.g.:
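
(Assuming variables like PHYSICAL_NETWORK really are expanded in post-config
sections - that's the behaviour I'd be relying on; eth1 is just an example:)

    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    [linux_bridge]
    physical_interface_mappings = $PHYSICAL_NETWORK:eth1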

But you already kind of convinced me to do it the post-config way with the
macvtap integration... cause then it's just a matter of documentation...

Thanks


-- 
-
Andreas (IRC: scheuran) 

On Fr, 2016-04-08 at 15:07 +, Sean M. Collins wrote:
> Prior to the introduction of local.conf, the only way to configure
> OpenStack components was to introduce code directly into DevStack, so
> that DevStack would pick it up then inject it into the configuration
> file.
> 
> This was because DevStack writes out new configuration files on each
> run, so it wasn't possible for you to make changes to any configuration
> file (nova.conf, neutron.conf, ml2_plugin.ini, etc..).
> 
> So, someone who wanted to set the Linux Bridge Agent's
> physical_interface_mappings setting for Neutron would have to use
> $LB_INTERFACE_MAPPINGS in DevStack, which would then be invoked by
> DevStack[1].
> 
> The local.conf functionality was introduced quite a while back, and
> I think it's time to have a conversation about why we should start
> moving away from the previous practice of declaring variables in
> DevStack, and then having them injected into the configuration files.
> 
> The biggest issue is: There is a disconnect between the developers
> using DevStack and someone who is an operator or who has been editing
> OpenStack conf files directly. So, for example I can tell you all about
> how DevStack has a bunch of variables for configuring Neutron (which is
> Not a Good Thing™), and how those go into DevStack and then end up coming
> out the other side in a Neutron configuration file.
> 
> Really, I would like to get rid of the intermediate layer (DevStack)
> and get both Devs and Deployers to be able to just say: Here's my
> neutron.conf - let's diff mine and yours and see what we need to sync.
> 
> Matt Kassawara and I have had this issue, since he's coming from the
> OSAD side, and I'm coming from the DevStack side. We both know what the
> Neutron configuration should end up as, but DevStack having its own set
> of variables and how those variables are handled and eventually rendered
> as a Neutron config file makes things more difficult than it needs to
> be, since Matt has to now go and learn about how DevStack handles all
> these Neutron specific variables.
> 
> The Neutron refactor[2] that I am working on, I am trying to configure
> as little as possible in DevStack. Neutron should be able to, out of the
> box, Just Work™. If it can't, then that needs to be fixed in Neutron.
> 
> Secondly, the Neutron refactor will be getting rid of all the things
> like $LB_INTERFACE_MAPPINGS - I would *much* prefer that someone using
> DevStack actually set the apropriate line in their local.conf
> 
> Such as:
> 
> [[post-config|/$Q_PLUGIN_CONF_FILE]]
> [linux_bridge]
> physical_interface_mappings = foo:bar
> 
> 
> The advantage of this is, when someone is working with DevStack, the
> things they are configuring are the same as all the other OpenStack 
> documentation.
> 
> For example, someone could read the Networking Guide, read the example
> configuration[3] and the only thing they'd need to learn is our syntax
> for specifying what file the contents go in (the 
> "[[post-config|/$Q_PLUGIN_CONF_FILE]]" piece).
> 
> Thoughts?
> 
> [1]: 
> https://github.com/openstack-dev/devstack/blob/1195a5b7394fc5b7a1cb1415978e9997701f5af1/lib/neutron_plugins/linuxbridge_agent#L63
> 
> [2]: https://review.openstack.org/168438
> 
> [3]: 
> http://docs.openstack.org/liberty/networking-guide/scenario-classic-lb.html#example-configuration
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Change openstack.node to openstack.cluster

2016-04-12 Thread Weyl, Alexey (Nokia - IL)
Hi,

After some discussion, we have decided to change "openstack.node" to 
"openstack.cluster" as the "type" of the openstack cluster in the graph, to 
conform with the standard openstack terminology.

Alexey

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][glance] glance_store 0.13.1 release (mitaka)

2016-04-12 Thread no-reply
We are tickled pink to announce the release of:

glance_store 0.13.1: OpenStack Image Service Store Library

This release is part of the mitaka stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/glance_store

With package available at:

https://pypi.python.org/pypi/glance_store

Please report issues through launchpad:

http://bugs.launchpad.net/glance-store

For more details, please see below.

Changes in glance_store 0.13.0..0.13.1
--

33c08d8 Update .gitreview for stable/mitaka
4600adb Mock swiftclient's functions in tests

Diffstat (except docs and test files)
-

.gitreview  |  1 +
2 files changed, 47 insertions(+), 3 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] doc8 0.7.0 release

2016-04-12 Thread no-reply
We are content to announce the release of:

doc8 0.7.0: Style checker for Sphinx (or other) RST documentation

For more details, please see below.

Changes in doc8 0.6.0..0.7.0


a96c31f Skip long line check for rst definition list term
f6cb930 Remove argparse from requirements
5abd304 Remove support for py33/py26
8ed2fba remove python 2.6 trove classifier
0e7e4b1 Put py34 first in the env order of tox
b0dd12d Deprecated tox -downloadcache option removed
864e093 Added the launchpad bug url and fixed one typo
1791030 Update .gitreview for new namespace
5115b86 Use a more relevant launchpad home-page url
17f56bb Fix grammar issue in README.rst
65138ae Fix invalid table formatting
64f22e4 Fix issue of checking max_len for directives

Diffstat (except docs and test files)
-

.gitignore       |  2 +-
.gitreview       |  2 +-
CONTRIBUTING.rst |  3 +++
README.rst       | 28 ++--
requirements.txt |  1 -
setup.cfg        |  5 +
tox.ini          |  5 +
9 files changed, 76 insertions(+), 26 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 86a2ba2..8fcc512 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +4,0 @@
-argparse



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-12 Thread Jim Rollenhagen
On Thu, Apr 07, 2016 at 06:36:20AM -0400, Sean Dague wrote:
> On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > Hi Nova, Ops, stackers,
> >  
> > I am trying to figure out different use cases and requirements there
> > would be for host maintenance and would like to get feedback and
> > transfer all this to spec and discussion what could and should land for
> > Nova or other places.
> >  
> > As working in OPNFV Doctor project that has the Telco perspective about
> > related requirements, I started to draft a spec based on something
> > smaller that would be nice to have in Nova and less complicated to have
> > it in single cycle. Anyhow the feedback from Nova API team was to look
> > this as a whole and gather more. This is why asking this here and not
> > just trough spec, to get input for requirements and use cases with wider
> > audience. Here is the draft spec proposing first just maintenance window
> > to be added:
> > https://review.openstack.org/296995/
> >  
> > Here is link to OPNFV Doctor requirements:
> > http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
> > http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
> > http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance
> >  
> > Here is what I could transfer as use cases, but would ask feedback to
> > get more:
> >  
> > As admin I want to set maintenance period for certain host.
> >  
> > As admin I want to know when host is ready to actions to be done by admin
> > during the maintenance. Meaning physical resources are emptied.
> >  
> > As owner of a server I want to prepare for maintenance to minimize downtime,
> > keep capacity on needed level and switch HA service to server not
> > affected by
> > maintenance.
> >  
> > As owner of a server I want to know when my servers will be down because of
> > host maintenance as it might be servers are not moved to another host.
> >  
> > As owner of a server I want to know if host is to be totally removed, so
> > instead of keeping my servers on host during maintenance, I want to move
> > them
> > to somewhere else.
> >  
> > As owner of a server I want to send acknowledgement to be ready for host
> > maintenance and I want to state if servers are to be moved or kept on host.
> > Removal and creating of server is in owner's control already. Optionally
> > server
> > Configuration data could hold information about automatic actions to be
> > done
> > when host is going down unexpectedly or in controlled manner. Also
> > actions at
> > the same if down permanently or only temporarily. Still this needs
> > acknowledgement from server owner as he needs time for application level
> > controlled HA service switchover.
> 
> While I definitely understand the value of these in a deployment, I'm a
> bit concerned of baking all this structured data into Nova itself. As it
> effectively means putting some degree of a ticket management system in
> Nova that's specific to a workflow you've decided on here. Baked in
> workflow is hard to change when the needs of an industry do.
> 
> My counter proposal on your spec was to provide a free form field
> associated with maintenance mode which could contain a url linking to
> the details. This could be a jira ticket, or a REST url for some other
> service. This would actually be much like how we handle images in Nova,
> with a url to glance to find more info.

FWIW, this is what we do in ironic. A maintenance boolean, and a
maintenance_reason text field that operators can dump text/links/etc in.

As an example:
$ ironic node-set-maintenance $uuid on --reason "Dead fiber // ticket 123 // 
jroll 2016/04/12"

It's worked well for Rackspace's deployment, at least, and I seem to
remember others being happy with it as well.

// jim

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas and the need for reservation

2016-04-12 Thread Andrew Laski


On Tue, Apr 5, 2016, at 09:57 AM, Ryan McNair wrote:
> >It is believed that reservation help to to reserve a set of resources
> >beforehand and hence eventually preventing any other upcoming request
> >(serial or parallel) to exceed quota if because of original request the
> >project might have reached the quota limits.
> >
> >Questions :-
> >1. Does reservation in its current state as used by Nova, Cinder, Neutron
> >help to solve the above problem ?
> 
> In Cinder the reservations are useful for grouping quota
> for a single request, and if the request ends up failing
> the reservation gets rolled back. The reservations also
> rollback automatically if not committed within a certain
> time. We also use reservations with Cinder nested quotas
> to group a usage request that may propagate up to a parent
> project in order to manage commit/rollback of the request
> as a single unit.
> 
> >
> >2. Is it consistent, reliable ?  Even with reservation can we run into
> >in-consistent behaviour ?
> 
> Others can probably answer this better, but I have not
> seen the reservations be a major issue. In general with
> quotas we're not doing the check and set atomically which
> can get us in an inconsistent state with quota-update,
> but that's unrelated to the reservations.
> 
> >
> >3. Do we really need it ?
> >
> 
> Seems like we need *some* way of keeping track of usage
> reserved during a particular request and a way to easily
> roll that back at a later time. I'm open to alternatives
> to reservations, just wondering what the big downside of
> the current reservation system is.

Jay goes into it a little bit in his response to another quota thread
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090560.html
and I share his thoughts here.

With a reservation system you're introducing eventual consistency into
the system rather than being strict because reservations are not tied to
a concrete thing. You can't do a point in time check of whether the
reserved resources are going to eventually be used if something happens
like a service restart and a request is lost. You have to have no
activity for the duration of the expiration time to let things settle
before getting a real view of quota usages.

Instead if you tie quota usage to the resource records then you can
always get a view of what's actually in use.

One thing that should probably be clarified in all of these discussion
is what exactly is the quota on. I see two answers: the quota is against
the actual resource usage, or the quota is against the records tracking
usage. Since we currently track quotas with a reservation system I think
it's fair to say that we're not actually tracking against resources like
disk/RAM/CPU being in use. I would consider the resource to be in use as
soon as there's a db record for something like an instance or volume
which indicates that there will be consumption of those resources. What
that means in practice is that quotas can be committed right away and
there doesn't seem to be any need for a reservation system.
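
To illustrate, here's a minimal sketch of quota checked directly against
resource records in one transaction. The schema, limit handling and names
are all invented, and a real implementation would also need row locking or
retries to be race-free:

    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Volume(Base):
        __tablename__ = 'volumes'
        id = sa.Column(sa.Integer, primary_key=True)
        project_id = sa.Column(sa.String(36), index=True)
        size = sa.Column(sa.Integer, nullable=False)

    class OverQuota(Exception):
        pass

    def create_volume(session, project_id, size, limit_gb):
        # Usage is simply whatever the resource records add up to:
        # no reservation rows that can leak, expire or drift.
        with session.begin():
            used = session.query(
                sa.func.coalesce(sa.func.sum(Volume.size), 0)
            ).filter(Volume.project_id == project_id).scalar()
            if used + size > limit_gb:
                raise OverQuota('%s: %d + %d > %d GB'
                                % (project_id, used, size, limit_gb))
            session.add(Volume(project_id=project_id, size=size))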



> 
> - Ryan McNair (mc_nair)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] pbr 1.9.0 release

2016-04-12 Thread no-reply
We are satisfied to announce the release of:

pbr 1.9.0: Python Build Reasonableness

With source available at:

http://git.openstack.org/cgit/openstack-dev/pbr

Please report issues through launchpad:

http://bugs.launchpad.net/pbr

For more details, please see below.

Changes in pbr 1.8.1..1.9.0
---

d4e29cb Updated from global requirements
9b5f422 package: fix wrong catch in email parsing
b4d2158 Do not convert git tags when searching history
139110c Include wsgi_scripts in generated wheels
3897786 Correct the indentation in the classifiers example
11ffee9 Do not silently drop markers that fail to evaluate
87dcab8 Clarifications around tags and version numbers
3ee5171 Correct typo - s/enabeld/enabled/
09c6529 Use apt-cache generated packages to provide build deps
639edb4 fix some variable names
1facf73 Don't attempt to test with 0.6c11 with Py3
d19459d Support entry point patching on setuptools < 12
8992a9a Updated from global requirements
da9ab10 Split changelog on nulls instead of (
578b51b Add libjpeg and liberasurecode for tests
7898882 Handle markers to support sdist on pip < 6
fb0e9de Deprecated tox -downloadcache option removed
4afcabe passenv integration environment variables re-enabling integration tests
768c534 Enable pep8 H405 tests
379e42a Add patch to properly get all commands from dist
6ae3d1b doc: Remove 'MANIFEST.in'
3348fdc doc: Trivial cleanup of 'index.rst'
ff48260 doc: Add deprecation note for 'requirements-pyN'
2508161 doc: Restructure 'Requirements' section
934491c doc: Restructure 'Usage' section
04afb06 doc: Add details of manifest generation
31a0e97 Support git://, git+ssh://, git+https:// without -e flag.
a5d46d5 More support Sphinx >=1.3b1 and <1.3.1
3198305 Fix docs for markers
b8cbe78 Do not error when running pep8 with py3
00b8d5a Ensure changelog log output is written if it already exists
0aeca90 Cleanup jeepyb and pypi-mirror special casing
a4cf44a Add build_sphinx test coverage

Diffstat (except docs and test files)
-

MANIFEST.in           |  13 -
pbr/builddoc.py       |   4 +-
pbr/cmd/main.py       |  10 +-
pbr/core.py           |  12 +-
pbr/git.py            |  55 ++-
pbr/packaging.py      |  90 +++--
pbr/util.py           |  43 ++-
test-requirements.txt |  24 +-
tools/integration.sh  |  14 +-
tox.ini               |   4 +-
23 files changed, 1075 insertions(+), 459 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 5802d7c..7cbebf4 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4,3 +4,3 @@
-coverage>=3.6
-discover
-fixtures>=1.3.1
+coverage>=3.6 # Apache-2.0
+discover # BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
@@ -8,9 +8,9 @@ hacking<0.11,>=0.10.0
-mock>=1.2
-python-subunit>=0.0.18
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-six>=1.9.0
-testrepository>=0.0.18
-testresources>=0.2.4
-testscenarios>=0.4
-testtools>=1.4.0
-virtualenv
+mock>=1.2 # BSD
+python-subunit>=0.0.18 # Apache-2.0/BSD
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+six>=1.9.0 # MIT
+testrepository>=0.0.18 # Apache-2.0/BSD
+testresources>=0.2.4 # Apache-2.0/BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
+virtualenv # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][reno] Broken releasenotes in glance_store

2016-04-12 Thread Andreas Jaeger
On 2016-04-12 08:31, Andreas Jaeger wrote:
> On 2016-04-12 03:33, Nikhil Komawar wrote:
>> To close this:
>>
>> This has been fixed as a part of the earlier opened bug
>> https://bugs.launchpad.net/glance-store/+bug/1568767 and other is
>> duplicated.
>>
> 
> Yes, it's fixed now.

And you have release notes:

http://docs.openstack.org/releasenotes/glance_store/

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Sean McGinnis
Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings,
but wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start
planning for it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work
out pretty well, but I also think it might be useful to hold it a little
earlier in the cycle to keep some momentum going and make sure things
stay pretty focused for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16, R-15. We probably want to avoid US
Independence Day R-13, and milestone weeks R-18 and R-12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it
would be good to try to avoid an overlap there. Our linked midcycle
session worked out well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host,
just let me know. We need a facility with wifi, able to hold ~30-40
people, wifi, close to an airport. And wifi.

At some point I still think it would be nice for our international folks
to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
Collins or somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Michael Johnson
Armando,

Is there any way we can move the "Neutron: Development track: future
of *-aas projects" session?  It overlaps with the LBaaS talk:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/6893?goback=1

Michael


On Mon, Apr 11, 2016 at 9:56 PM, Armando M.  wrote:
> Hi folks,
>
> A provisional schedule for the Neutron project is available [1]. I am still
> working with the session chairs and going through/ironing out some details
> as well as gathering input from [2].
>
> I hope I can get something more final by the end of this week. In the
> meantime, please free to ask questions/provide comments.
>
> Many thanks,
> Armando
>
> [1]
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Generic solution for bare metal testing

2016-04-12 Thread Jim Rollenhagen
On Thu, Apr 07, 2016 at 02:42:09AM +, Jeremy Stanley wrote:
> On 2016-04-06 18:33:06 +0300 (+0300), Igor Belikov wrote:
> [...]
> > I suppose there are security issues when we talk about running
> > custom code on bare metal slaves, but I'm not sure I understand
> > the difference from running custom code on a virtual machine if
> > bare metal nodes are isolated, don't contain any sensitive data
> > and follow a regular redeployment procedure.
> [...]
> 
> With a virtual machine, you can delete it and create a new one.
> Nothing remains behind.
> 
> With a physical machine, arbitrary code running in the scope of a
> test with root access can do _nasty_ things like backdoor your
> server firmware with shims that even masquerade as the firmware
> updater and persist through redeployments that include firmware
> refreshes.
> 
> Physical servers persist, and are therefore vulnerable in this
> scenario in ways which virtual servers are not.

Right, it's a huge effort to run a secure bare metal cloud running
arbitrary code. Homogeneous hardware and vendor cooperation are a must,
and that's only part of it.

I don't foresee the infra team having the resources to take on such a
task any time soon (but of course, I'm not well-informed on the infra
team's workload).

Another option for baremetal in the gate is baremetal flavors in other
public clouds - Rackspace has one (OnMetal) but doesn't yet support
custom images, and others have launched or are working on one. Once
there are two clouds that support baremetal with custom images, we could
put those resources in the upstream CI pool.

// jim

> -- 
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-12 Thread Chris Dent


This discussion needs to be happening on openstack-dev too, so
cc'ing that list in as well. The top of the thread is at
http://lists.openstack.org/pipermail/openstack/2016-April/015864.html

On Tue, 12 Apr 2016, Chris Dent wrote:


On Tue, 12 Apr 2016, Nadya Shakhat wrote:


   I'd like to discuss one question with you. Perhaps, you remember that
in Liberty we decided to get rid of transformers on polling agents [1]. I'd
like to describe several issues we are facing now because of this decision.
1. pipeline.yaml inconsistency.


The original idea with centralizing the transformers to just the
notification agents was to allow a few different things, only one of
which has happened:

* Make the pollster code less complex with few dependencies,
 easing deployment options (this has happened), maintenance
 and custom pollsters.

 With the transformers in the pollsters they must maintain a
 considerable amount of state that makes effective use of eventlet
  (or whatever the chosen concurrency solution is) more difficult.

 The ideal pollster is just something that spits out a dumb piece
 of identified data every interval. And nothing else.

* Make it far easier to use and conceptualize the use of pollsters
 outside of the ceilometer environment as simple data collectors.
 In that context transformation would occur only close to the data
 consumption not at the data production.

 This, following the good practice of services doing one thing
 well.

* Migrate away from the pipeline.yaml that conflated sources and
 sinks to a model that is good both for computers and humans:

 * sources over here
 * sinks over here

That these other things haven't happened means we're in an awkward
situation.

Are the options the following?

* Do what you suggest and pull transformers back into the pollsters.
 Basically revert the change. I think this is the wrong long term
 solution but might be the best option if there's nobody to do the
 other options.

* Implement a pollster.yaml for use by the pollsters and consider
 pipeline.yaml as the canonical file for the notification agents as
  that's where the actual _pipelines_ are. Somewhere in there, kill
  interval as a concept on the pipeline side.

 This of course doesn't address the messaging complexity. I admit
 that I don't understand all the issues there but it often feels
 like we are doing that aspect of things completely wrong, so I
 would hope that before we change things there we consider all the
 options.

What else?

One probably crazy idea: What about figuring out the desired end-meters
of common transformations and making them into dedicated pollsters?
Encapsulating that transformation not at the level of the polling
manager but at the individual pollster.
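
To make the "ideal pollster" point above concrete, here is a minimal
sketch in Python. This is not Ceilometer's actual plugin interface; the
function names and the publish callback are invented for illustration:

    import time

    def poll_cpu_util(resource_id):
        # Stands in for reading a real hypervisor or system counter.
        return {'name': 'cpu_util', 'resource_id': resource_id,
                'value': 42.0, 'timestamp': time.time()}

    def run(publish, resource_ids, interval=60):
        # Every interval, emit one dumb piece of identified data per
        # resource and hand it off. No transformer state is kept here;
        # all transformation happens downstream, close to consumption.
        while True:
            for rid in resource_ids:
                publish(poll_cpu_util(rid))
            time.sleep(interval)

    # Example: run(print, ['instance-0001'], interval=10)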




--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent   tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Flavio Percoco

On 11/04/16 16:53 +, Adrian Otto wrote:

Amrith,

I respect your point of view, and agree that the idea of a common compute API
is attractive… until you think a bit deeper about what that would mean. We
seriously considered a “global” compute API at the time we were first
contemplating Magnum. However, what we came to learn through the journey of
understanding the details of how such a thing would be implemented was that such
an API would either be (1) the lowest common denominator (LCD) of all compute
types, or (2) an exceedingly complex interface. 


You expressed a sentiment below that trying to offer choices for VM, Bare Metal
(BM), and Containers for Trove instances “adds considerable complexity”.
Roughly the same complexity would accompany the use of a comprehensive compute
API. I suppose you were imagining an LCD approach. If that’s what you want,
just use the existing Nova API, and load different compute drivers on different
host aggregates. A single Nova client can produce VM, BM (Ironic), and
Container (libvirt-lxc) instances all with a common API (Nova) if it’s
configured in this way. That’s what we do. Flavors determine which compute type
you get.
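
To illustrate, a minimal sketch of that flavor-driven usage with a
Mitaka-era python-novaclient (the credentials, image name, and flavor
name below are made up, and the flavors are assumed to be mapped to
host aggregates as described above):

    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    image = nova.images.find(name='fedora-23')
    # A flavor assumed to be mapped (via a host aggregate) to Ironic
    # nodes; swap in a libvirt or libvirt-lxc flavor for a VM/container.
    flavor = nova.flavors.find(name='baremetal.gold')

    # The create() call is the same regardless of compute type; the
    # flavor decides what you get.
    server = nova.servers.create(name='db-node-1', image=image,
                                 flavor=flavor)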

If what you meant is that you could tap into the power of all the unique
characteristics of each of the various compute types (through some modular
extensibility framework) you’ll likely end up with complexity in Trove that is
comparable to integrating with the native upstream APIs, along with the
disadvantage of waiting for OpenStack to continually catch up to the pace of
change of the various upstream systems on which it depends. This is a recipe
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are
sufficiently different than what the Nova API already offers. Containers APIs
have limited similarities, so when you try to make a universal interface to all
of them, you end up with a really complicated mess. It would be even worse if
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
approach is to offer the upstream native APIs for the different container
orchestration engines (COE), and compose Bays for them to run on that are built
from the compute types that OpenStack supports. We do this by using different
Heat orchestration templates (and conditional templates) to arrange a COE on
the compute type of your choice. With that said, there are still gaps where not
all storage or network drivers work with Ironic, and there are non-trivial
security hurdles to clear to safely use Bays composed of libvirt-lxc instances
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum,
and if it does, create a bay with the flavor type specified for whatever
compute type you want, and then use the native API for the COE you selected for
that bay. Start your instance on the COE, just like you use Nova today. This
way, you have low complexity in Trove, and you can scale both the number of
instances of your data nodes (containers), and the infrastructure on which they
run (Nova instances).
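
A rough sketch of that suggested flow, assuming a Mitaka-era
python-magnumclient (the constructor arguments, baymodel ID, and field
names are assumptions and may differ between releases):

    from magnumclient.v1 import client as magnum_client

    magnum = magnum_client.Client(username='demo', api_key='secret',
                                  project_name='demo',
                                  auth_url='http://controller:5000/v2.0')

    # Create a bay whose baymodel points at the compute type you want,
    # e.g. a Kubernetes baymodel built on an Ironic-backed flavor.
    bay = magnum.bays.create(name='trove-bay',
                             baymodel_id='k8s-baremetal-baymodel-uuid',
                             node_count=3)

    # From here, talk to the COE's own native API (Kubernetes in this
    # example) to start the database containers; Magnum is only used to
    # provision the infrastructure.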



I've been researching this area and I've reached pretty much the same
conclusion. I've had moments of wondering whether creating bays is something
Trove should do but I now think it should.

The need to handle the native API is the part I find a bit painful, as it
means more code needs to live in Trove for us to provide these provisioning
facilities. I wonder if a common *library* would help here, at least to handle
those "simple" cases. Anyway, I look forward to chatting with you all about 
this.

It'd be great if you (and other magnum folks) could join this session:

https://etherpad.openstack.org/p/trove-newton-summit-container

Thanks for chiming in, Adrian.
Flavio


Regards,

Adrian




   On Apr 11, 2016, at 8:47 AM, Amrith Kumar  wrote:

   Monty, Dims, 


   I read the notes and was similarly intrigued about the idea. In particular,
   from the perspective of projects like Trove, having a common Compute API is
   very valuable. It would allow the projects to have a single view of
   provisioning compute, as we can today with Nova and get the benefit of bare
   metal through Ironic, VMs through Nova VMs, and containers through
   nova-docker.

   With this in place, a project like Trove can offer database-as-a-service on
   a spectrum of compute infrastructures as any end-user would expect.
   Databases don't always make sense in VMs, and while containers are great
   for quick and dirty prototyping, and VMs are great for much more, there
   are databases that will in production only be meaningful on bare metal.

   Therefore, if there is a move towards offering a common API for VMs,
   bare-metal and containers, that would be huge.

   Without such a mechanism, consuming containers in Trove adds considerable
   complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a
   working prototype of Trove leveraging Ironic, VM

Re: [openstack-dev] [stackalytics] Proposal for some code/feature changes

2016-04-12 Thread Ilya Shakhat
Hi Nikhil,

2016-04-12 5:59 GMT+03:00 Nikhil Komawar :

> Hello,
>
> I was hoping to make some changes to the stackalytics dashboard
> specifically of this type [1] following my requested suggestions here
> [2]; possibly add a few extra columns for +0s and just Bot +1s. I think
> having this info gives much clearer picture of the kind of reviews
> someone is/wants to be involved in. I couldn't find documentation in the
> README or anywhere else, and the minimal amount of docstrings is making
> it difficult for me to figure out the changes.
>
> What's the best possible route to accomplish this?
>

Well, I see two different metrics here: the first counts +0s and the second
is additional analysis over existing reviews.

Counting +0s or comments is something that is asked from time to time and
something that I'd like to avoid. The reason is that retrieving comments
leads to higher load on Gerrit and slows down the update cycle.

However Stackalytics already retrieves comments for some projects (those
that have external CIs, like nova), so we can try this new metric there as
an experiment. The metric should probably be kept separate from "reviews" so as
not to skew the current numbers. As for implementation, the changes would be in
both the processor and the dashboard; the code would be similar to the existing
review counting.

The second feature is counting votes against patch-sets posted by bots.
This one should be easier to implement and all data is already available.
In a report like [1], every vote record can be extended with info from its
patch-set; the filtering should take into account the author's company, as all
bots are assigned to '*robots'.
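
As a sketch of that second metric (the field names here are assumed for
illustration, not Stackalytics' actual schema):

    def count_votes_on_bot_patch_sets(votes, patch_sets):
        # Patch sets authored by bots: all bots belong to the
        # '*robots' company in the Stackalytics user registry.
        bot_patch_sets = {ps['id'] for ps in patch_sets
                          if ps['author_company'] == '*robots'}
        return sum(1 for vote in votes
                   if vote['patch_set_id'] in bot_patch_sets)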

Thanks,
Ilya


>
> [1] http://stackalytics.com/report/contribution/astara-group/30
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/091836.html
>
> --
>
> Thanks,
> Nikhil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Flavio Percoco

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if Trove had
access to a single Compute API, there were other significant complications
further down the road, and I didn’t pursue the project further at the time.



Adrian, Amrith,

I've spent enough time researching this area during the last month, and my
conclusion is pretty much the above. There's no silver bullet in this area and
I'd argue there shouldn't be one. Containers, bare metal and VMs differ so much
(feature-wise) that it'd not be good, as far as deploying databases goes,
for there to be one compute API. Containers allow for a different deployment
architecture than VMs and so does bare metal.



We will be discussing Trove and Containers in Austin [1] and I’ll try and close
the loop with you on this while we’re in Town. I still would like to come up
with some way in which we can offer users the option of provisioning database
as containers.


As the person leading this session, I'm also looking forward to providing such
provisioning facilities to Trove users. Let's do this.

Cheers,
Flavio



Thanks,



-amrith



[1] https://etherpad.openstack.org/p/trove-newton-summit-container



From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Monday, April 11, 2016 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)

Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)



Amrith,



I respect your point of view, and agree that the idea of a common compute API
is attractive… until you think a bit deeper about what that would mean. We
seriously considered a “global” compute API at the time we were first
contemplating Magnum. However, what we came to learn through the journey of
understanding the details of how such a thing would be implemented was that such
an API would either be (1) the lowest common denominator (LCD) of all compute
types, or (2) an exceedingly complex interface. 




You expressed a sentiment below that trying to offer choices for VM, Bare Metal
(BM), and Containers for Trove instances “adds considerable complexity”.
Roughly the same complexity would accompany the use of a comprehensive compute
API. I suppose you were imagining an LCD approach. If that’s what you want,
just use the existing Nova API, and load different compute drivers on different
host aggregates. A single Nova client can produce VM, BM (Ironic), and
Container (libvirt-lxc) instances all with a common API (Nova) if it’s
configured in this way. That’s what we do. Flavors determine which compute type
you get.



If what you meant is that you could tap into the power of all the unique
characteristics of each of the various compute types (through some modular
extensibility framework) you’ll likely end up with complexity in Trove that is
comparable to integrating with the native upstream APIs, along with the
disadvantage of waiting for OpenStack to continually catch up to the pace of
change of the various upstream systems on which it depends. This is a recipe
for disappointment.



We concluded that wrapping native APIs is a mistake, particularly when they are
sufficiently different than what the Nova API already offers. Containers APIs
have limited similarities, so when you try to make a universal interface to all
of them, you end up with a really complicated mess. It would be even worse if
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
approach is to offer the upstream native APIs for the different container
orchestration engines (COE), and compose Bays for them to run on that are built
from the compute types that OpenStack supports. We do this by using different
Heat orchestration templates (and conditional templates) to arrange a COE on
the compute type of your choice. With that said, there are still gaps where not
all storage or network drivers work with Ironic, and there are non-trivial
security hurdles to clear to safely use Bays composed of libvirt-lxc instances
in a multi-tenant environment.



My suggestion to get what you want for Trove is to see if the cloud has Magnum,
and if it does, create a bay with the flavor type specified for whatever
compute type you want, and then use the native API for the COE you selected for
that bay. Start your instance on the COE, just like you use Nova today. This
way, you have low complexity in Trove, and you can scale both the number of
instances of your data nodes (containers), and the infrastructure on which they
run (Nova instances).



Regards,



Adrian







   On Apr 11, 2016, at 8:47 AM, Amrith Kumar  wrote:



   Monty, Dims, 


   I read the notes and was similarly intrigued abou

Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread D'Angelo, Scott
I'll throw this out there: Fort Collins HPE site is available.

Scott D'Angelo (scottda)

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Tuesday, April 12, 2016 8:05 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] Newton Midcycle Planning

Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings,
but wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start
planning for it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work
out pretty well, but I also think it might be useful to hold it a little
earlier in the cycle to keep some momentum going and make sure things
stay pretty focused for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16 and R-15. We probably want to avoid US Independence
Day R-13, and milestone weeks R-18 and R-12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it
would be good to try to avoid an overlap there. Our linked midcycle
session worked out well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host,
just let me know. We need a facility with wifi, able to hold ~30-40
people, wifi, close to an airport. And wifi.

At some point I still think it would be nice for our international folks
to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
Collins or somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] RDO Mitaka packages released

2016-04-12 Thread Rich Bowen
The RDO community is pleased to announce the general availability of the
RDO build for OpenStack Mitaka for RPM-based distributions - CentOS
Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building
private, public, and hybrid clouds and Mitaka is the 13th release from
the OpenStack project (http://openstack.org), which is the work of more
than 2500 contributors from around the world.
(Source: http://stackalytics.com/ )

See Red Hat Stack
(http://redhatstackblog.redhat.com/2016/03/21/learn-whats-coming-in-openstack-mitaka/)
for a brief overview of what's new in Mitaka.

The RDO community project (https://www.rdoproject.org/) curates,
packages, builds, tests, and maintains a complete OpenStack component
set for RHEL and CentOS Linux and is a founding member of the CentOS
Cloud Infrastructure SIG
(https://wiki.centos.org/SpecialInterestGroup/Cloud). The Cloud
Infrastructure SIG focuses on delivering a great user experience for
CentOS Linux users looking to build and maintain their own on-premise,
public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack
Platform, is 100% open source, with all code changes going upstream first.

For a complete list of what's in RDO, see the RDO projects yaml file
(https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml).


Getting Started

There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware,
try the RDO QuickStart (http://rdoproject.org/Quickstart). You can run
RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use the TripleO Quickstart
(https://www.rdoproject.org/tripleo/) and you'll be running a production
cloud in short order.

Finally, if you want to try out OpenStack, but don't have the time or
hardware to run it yourself, visit TryStack (http://trystack.org/),
where you can use a free public OpenStack instance, running RDO
packages, to experiment with the OpenStack management interface and API,
launch instances, configure networks, and generally familiarize yourself
with OpenStack.


Getting Help

The RDO Project participates in a Q&A service at
http://ask.openstack.org. For more developer-oriented content, we
recommend joining the rdo-list mailing list
(https://www.redhat.com/mailman/listinfo/rdo-list). Remember to post a
brief introduction about yourself and your RDO story. You can also find
extensive documentation on the RDO docs site
(https://www.rdoproject.org/documentation).

We also welcome comments and requests on the CentOS Mailing lists
(https://lists.centos.org/) and the CentOS IRC Channels ( #centos on
irc.freenode.net ), however we have a more focused audience in the RDO
venues.


Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO
community pages (https://www.rdoproject.org/community/) and the CentOS
Cloud SIG page (https://wiki.centos.org/SpecialInterestGroup/Cloud). See
also the RDO packaging documentation
(https://www.rdoproject.org/packaging/).

Join us in #rdo on the Freenode IRC network, and follow us at
@RDOCommunity (http://twitter.com/rdocommunity) on Twitter.

And, if you're going to be in Austin for the OpenStack Summit two weeks
from now, join us on Thursday at 4:40pm for the RDO community BoF
(https://goo.gl/P6kyWR).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Anita Kuno
On 04/11/2016 09:45 PM, Tony Breeds wrote:
> On Mon, Apr 11, 2016 at 03:49:16PM -0500, Matt Riedemann wrote:
>> A few people have been asking about planning for the nova midcycle for
>> newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work the
>> best. R-14 is close to the US July 4th holiday, R-13 is during the week of
>> the US July 4th holiday, and R-12 is the week of the n-2 milestone.
> 
> Thanks for starting this now. It really helps to know these things early.
> 
> This cycle *may* be harder than typical with:
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9478
> 
> Having said that, either of those options work for me.
> 
>> As far as a venue is concerned, I haven't heard any offers from companies to
>> host yet. If no one brings it up by the summit, I'll see if hosting in
>> Rochester, MN at the IBM site is a possibility.
> 
> +1 would Rochester again.  The drive from MSP was trivial ;P

Also consider the local international airport; it is not bad.

Thanks,
Anita.

> 
> Yours Tony.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [API]Make API errors conform to the common error message without microversion

2016-04-12 Thread Brandon Logan
As a note, there will be a design session around the API refactor
efforts going on.  Microversioning will be a topic.

On Tue, 2016-04-12 at 14:59 +0200, Ihar Hrachyshka wrote:
> Xianshan  wrote:
> 
> > Hi, Duncan & Michael,
> > Thanks a lot for your replies.
> >
> > Definitely I agree with you that microversioning is the best approach to
> > solve the backwards-compatibility problem,
> > and Neutron is also going to adopt it [1]. But I think it will take a long
> > time to fully introduce it into Neutron.
> > So IMO, we can continue this discussion and then implement this feature
> > in parallel with microversioning.
> >
> > Actually, according to the design [2], only a slight change will be needed
> > once microversioning lands, i.e.
> > replacing the 'new header' with the microversion to control the final
> > format of the error message in the WSGI interface.
> >
> > [1] https://review.openstack.org/#/c/136760/
> > [2] https://review.openstack.org/#/c/298704
> 
> If no one is going to help with microversioning, it won’t ever happen. I  
> suggest we consolidate whichever resources we have to get it done, instead  
> of working on small API iterations as proposed in the thread.
> 
> Ihar
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread ClaytonLuce, Timothy
And I'll add that NetApp's Raleigh, NC site is available too.  Nothing better than summer in 
the South :)

-Original Message-
From: D'Angelo, Scott [mailto:scott.dang...@hpe.com] 
Sent: Tuesday, April 12, 2016 10:39 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Cinder] Newton Midcycle Planning

I'll throw this out there: Fort Collins HPE site is available.

Scott D'Angelo (scottda)

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Tuesday, April 12, 2016 8:05 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] Newton Midcycle Planning

Hey Cinder team (and those interested),

We've had a few informal conversations on the channel and in meetings, but 
wanted to capture some things here and spread awareness.

I think it would be good to start planning for our Newton midcycle.
These have been incredibly productive in the past (at least in my
opinion) so I'd like to get it on the schedule so folks can start planning for 
it.

For Mitaka we held our midcycle in the R-10 week. That seemed to work out 
pretty well, but I also think it might be useful to hold it a little earlier in 
the cycle to keep some momentum going and make sure things stay pretty focused 
for the rest of the cycle.

For reference, here is the current release schedule for Newton:

http://releases.openstack.org/newton/schedule.html

R-10 puts us in the last week of July.

I would have a conflict R-16 and R-15. We probably want to avoid US Independence 
Day R-13, and milestone weeks R-18 and R-12.

So potential weeks look like:

* R-17
* R-14
* R-11
* R-10
* R-9

Nova is in the process of figuring out their date. If we have that, it would be 
good to try to avoid an overlap there. Our linked midcycle session worked out 
well, but probably better if they don't conflict.

We also need to work out locations. Anyone able and willing to host, just let 
me know. We need a facility with wifi, able to hold ~30-40 people, wifi, close 
to an airport. And wifi.

At some point I still think it would be nice for our international folks to be 
able to do a non-US midcycle, but I'm fine if we end up back in Ft Collins or 
somewhere similar.

Thanks!

Sean (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Anita Kuno
On 04/12/2016 10:05 AM, Sean McGinnis wrote:
> Hey Cinder team (and those interested),
> 
> We've had a few informal conversations on the channel and in meetings,
> but wanted to capture some things here and spread awareness.
> 
> I think it would be good to start planning for our Newton midcycle.
> These have been incredibly productive in the past (at least in my
> opinion) so I'd like to get it on the schedule so folks can start
> planning for it.
> 
> For Mitaka we held our midcycle in the R-10 week. That seemed to work
> out pretty well, but I also think it might be useful to hold it a little
> earlier in the cycle to keep some momentum going and make sure things
> stay pretty focused for the rest of the cycle.
> 
> For reference, here is the current release schedule for Newton:
> 
> http://releases.openstack.org/newton/schedule.html
> 
> R-10 puts us in the last week of July.
> 
> I would have a conflict R-16 and R-15. We probably want to avoid US
> Independence Day R-13, and milestone weeks R-18 and R-12.
> 
> So potential weeks look like:
> 
> * R-17
> * R-14
> * R-11
> * R-10
> * R-9
> 
> Nova is in the process of figuring out their date. If we have that, it
> would be good to try to avoid an overlap there. Our linked midcycle
> session worked out well, but probably better if they don't conflict.

Thank you!

> 
> We also need to work out locations. Anyone able and willing to host,
> just let me know. We need a facility with wifi, able to hold ~30-40
> people, wifi, close to an airport. And wifi.

Working wiki, yeah?

> 
> At some point I still think it would be nice for our international folks
> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
> Collins or somewhere similar.
> 
> Thanks!
> 
> Sean (smcginnis)

Thanks Sean, appreciate this being discussed now.

Thanks again,
Anita.

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Morgan Fainberg
Fixes have been proposed for both of these bugs.

Cheers,
--Morgan

On Tue, Apr 12, 2016 at 12:38 AM, Dina Belova  wrote:

> Matt,
>
> Thanks for sharing the information about your benchmark. Indeed we need to
> follow up on this topic (I'll attend the summit). Let's try to collect as
> much information as possible prior to Austin so we have more facts to work with.
> I'll try to figure out why the local context cache did not work, at least in my
> environment; judging by the results, it most probably did not act as
> expected during your benchmarking either.
>
> Cheers,
> Dina
>
> On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer 
> wrote:
>
>> On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova 
>> wrote:
>>
>>> Hey, openstackers!
>>>
>>> Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
>>> using this set of changes
>>> 
>>>  (that's
>>> currently on review - some final steps are required there to finish the
>>> work) and OSprofiler.
>>>
>>> Some preliminary results (all in one OpenStack node) can be found here
>>> 
>>>  (raw
>>> OSprofiler reports are not yet merged to some place and can be found
>>> here ). The full plan
>>> 
>>>  of
>>> what's going to be tested can be found in the docs as well. In short, I
>>> wanted to take a look at how Keystone's DB/cache usage changed from
>>> Liberty to Mitaka, keeping in mind that there were several changes
>>> introduced:
>>>
>>>- federation support was added (and made DB scheme a bit more
>>>complex)
>>>- Keystone moved to oslo.cache usage
>>>- local context cache was introduced during Mitaka
>>>
>>> First of all - *good job on making Keystone less DB-intensive in the case
>>> of caching turned on*! With Keystone caching turned on, the number of DB
>>> queries made to the Keystone DB in Mitaka is on average half of what it is in
>>> Liberty, comparing the same requests and topologies. Thanks to the Keystone
>>> community for making it happen :)
>>>
>>> Although, I faced *two strange issues* during my experiments, and I'm
>>> kindly asking you, folks, to help me here:
>>>
>>>- I've created #1567403
>>> bug to share
>>>information - when I turned caching on, the local context cache should cache
>>>identical function calls within an API request so as not to ping Memcache too often.
>>>Although I saw such calls, Keystone still used Memcache to gather this
>>>information. Could someone take a look at this and help me figure out what I
>>>am observing? At first sight the local context cache should work fine, but for
>>>some reason I do not see it being used.
>>> for
>>>some reason I do not see it's being used.
>>>- One more filed bug - #1567413
>>> - is about the
>>>opposite situation :) When I turned the cache off explicitly in the
>>>keystone.conf file, I still observed some of the values being fetched from
>>>Memcache... Your help is very much appreciated!
>>>Memcache... Your help is very appreciated!
>>>
>>> Thanks in advance and sorry for a long email :)
>>>
>>> Cheers,
>>> Dina
>>>
>>>
>> Dina,
>>
>> Thanks for starting this conversation. I had some weird perf results
>> comparing L to an RC release of Mitaka, but I was holding them until
>> someone else confirmed what I saw. I'm testing token creation and
>> validation. From what I saw, token validation slowed down in Mitaka. After
>> doing my benchmark runs, the traffic to memcache in Mitaka was 8x what
>> it was in Liberty. That implies more caching but 8x is a lot and even
>> memcache references are not free.
>>
>> I know some of the Keystone folks are looking into this so it will be
>> good to follow-up on it. Maybe we could talk about this at the summit?
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Morgan Fainberg
Sorry, missed the copy/paste of links:

https://bugs.launchpad.net/keystone/+bug/1567403 [0]
https://bugs.launchpad.net/keystone/+bug/1567413 [1]

[0]
https://review.openstack.org/#/q/I4857cfe1e62d54c3c89a0206ffc895c4cf681ce5,n,z
[1] https://review.openstack.org/#/c/304688/
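
For context, the local context cache that [0] addresses is essentially
per-request memoization in front of Memcache. A minimal sketch of the
idea (not Keystone's actual implementation; all names are invented):

    import functools

    def local_context_cached(func):
        @functools.wraps(func)
        def wrapper(context, *args):
            cache = context.setdefault('cache', {})
            key = (func.__name__,) + args
            if key not in cache:
                cache[key] = func(context, *args)  # miss: hit Memcache/DB
            return cache[key]
        return wrapper

    @local_context_cached
    def get_project(context, project_id):
        print('slow lookup for %s' % project_id)  # stands in for Memcache
        return {'id': project_id}

    ctx = {}                # one request-local context
    get_project(ctx, 'p1')  # slow path, populates the local cache
    get_project(ctx, 'p1')  # served from the request-local dict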

--Morgan

On Tue, Apr 12, 2016 at 8:16 AM, Morgan Fainberg 
wrote:

> Fixes have been proposed for both of these bugs.
>
> Cheers,
> --Morgan
>
> On Tue, Apr 12, 2016 at 12:38 AM, Dina Belova 
> wrote:
>
>> Matt,
>>
>> Thanks for sharing the information about your benchmark. Indeed we need
>> to follow up on this topic (I'll attend the summit). Let's try to collect
>> as much information as possible prior to Austin so we have more facts to work with.
>> I'll try to figure out why the local context cache did not work, at least in my
>> environment; judging by the results, it most probably did not act as
>> expected during your benchmarking either.
>>
>> Cheers,
>> Dina
>>
>> On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer 
>> wrote:
>>
>>> On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova 
>>> wrote:
>>>
 Hey, openstackers!

 Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
 using this set of changes
 
  (that's
 currently on review - some final steps are required there to finish the
 work) and OSprofiler.

 Some preliminary results (all in one OpenStack node) can be found here
 
  (raw
 OSprofiler reports are not yet merged to some place and can be found
 here ). The full plan
 
  of
 what's going to be tested can be found in the docs as well. In short, I
 wanted to take a look at how Keystone's DB/cache usage changed from
 Liberty to Mitaka, keeping in mind that there were several changes
 introduced:

- federation support was added (and made DB scheme a bit more
complex)
- Keystone moved to oslo.cache usage
- local context cache was introduced during Mitaka

 First of all - *good job on making Keystone less DB-intensive in the case
 of caching turned on*! With Keystone caching turned on, the number of DB
 queries made to the Keystone DB in Mitaka is on average half of what it is in
 Liberty, comparing the same requests and topologies. Thanks to the Keystone
 community for making it happen :)

 Although, I faced *two strange issues* during my experiments, and I'm
 kindly asking you, folks, to help me here:

- I've created #1567403
 bug to share
information - when I turned caching on, the local context cache should cache
identical function calls within an API request so as not to ping Memcache
too often.
Although I saw such calls, Keystone still used Memcache to gather this
information. Could someone take a look at this and help me figure out
what I am observing? At first sight the local context cache should work
fine, but for some reason I do not see it being used.
- One more filed bug - #1567413
 - is about the
opposite situation :) When I turned the cache off explicitly in the
keystone.conf file, I still observed some of the values being fetched
from Memcache... Your help is very much appreciated!

 Thanks in advance and sorry for a long email :)

 Cheers,
 Dina


>>> Dina,
>>>
>>> Thanks for starting this conversation. I had some weird perf results
>>> comparing L to an RC release of Mitaka, but I was holding them until
>>> someone else confirmed what I saw. I'm testing token creation and
>>> validation. From what I saw, token validation slowed down in Mitaka. After
>>> doing my benchmark runs, the traffic to memcache in Mitaka was 8x what
>>> it was in Liberty. That implies more caching but 8x is a lot and even
>>> memcache references are not free.
>>>
>>> I know some of the Keystone folks are looking into this so it will be
>>> good to follow-up on it. Maybe we could talk about this at the summit?
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>> __
>> OpenStack Development Mailing List (not for usage qu

Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-12 Thread Duncan Thomas
HP facility just outside Dublin (Ireland) is available again, depending on
dates

On 12 April 2016 at 17:05, Sean McGinnis  wrote:

> Hey Cinder team (and those interested),
>
> We've had a few informal conversations on the channel and in meetings,
> but wanted to capture some things here and spread awareness.
>
> I think it would be good to start planning for our Newton midcycle.
> These have been incredibly productive in the past (at least in my
> opinion) so I'd like to get it on the schedule so folks can start
> planning for it.
>
> For Mitaka we held our midcycle in the R-10 week. That seemed to work
> out pretty well, but I also think it might be useful to hold it a little
> earlier in the cycle to keep some momentum going and make sure things
> stay pretty focused for the rest of the cycle.
>
> For reference, here is the current release schedule for Newton:
>
> http://releases.openstack.org/newton/schedule.html
>
> R-10 puts us in the last week of July.
>
> I would have a conflict R-16 and R-15. We probably want to avoid US
> Independence Day R-13, and milestone weeks R-18 and R-12.
>
> So potential weeks look like:
>
> * R-17
> * R-14
> * R-11
> * R-10
> * R-9
>
> Nova is in the process of figuring out their date. If we have that, it
> would be good to try to avoid an overlap there. Our linked midcycle
> session worked out well, but probably better if they don't conflict.
>
> We also need to work out locations. Anyone able and willing to host,
> just let me know. We need a facility with wifi, able to hold ~30-40
> people, wifi, close to an airport. And wifi.
>
> At some point I still think it would be nice for our international folks
> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
> Collins or somewhere similar.
>
> Thanks!
>
> Sean (smcginnis)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #78

2016-04-12 Thread Emilien Macchi
On Mon, Apr 11, 2016 at 10:19 AM, Emilien Macchi  wrote:
> Hi,
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
>
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
>
> As usual, feel free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160412
>
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.

We did our meeting, notes are available here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-04-12-15.00.html

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-12 Thread Fox, Kevin M
Have a look at this script:
https://review.openstack.org/#/c/158003/1/bin/neutron-local-port

Nova does basically the same thing but plugs it into the vm.
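
(nova-compute runs on the compute node itself, so the ovs-vsctl call is
local to that host.) Roughly, the plugging boils down to something like
this simplified sketch of nova.network.linux_net.create_ovs_vif_port;
the real code runs the command through utils.execute with rootwrap
rather than subprocess:

    import subprocess

    def create_ovs_vif_port(bridge, dev, iface_id, mac, instance_id):
        # Add the device to the integration bridge and tag it with the
        # Neutron port ID so the agent/controller can recognize it.
        subprocess.check_call([
            'ovs-vsctl', '--', '--if-exists', 'del-port', dev,
            '--', 'add-port', bridge, dev,
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id])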

Thanks,
Kevin


From: 张晨 [zhangchen9...@126.com]
Sent: Tuesday, April 12, 2016 1:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

Thanks for the answer, I finally located the implementation in 
nova.network.linux_net.create_ovs_vif_port

but how could nova execute the ovs-vsctl for the compute-node hypervisor from just 
the control node?






At 2016-04-12 13:13:48, "Sławek Kapłoński"  wrote:
>Hello,
>
>I don't know ODL and how it works, but with the ovs-agent, nova-compute is the part
>which adds the port to the ovs bridge (see for example nova/virt/libvirt/vif.py)
>
>--
>Pozdrawiam / Best regards
>Sławek Kapłoński
>sla...@kaplonski.pl
>
>On Tuesday, 12 April 2016 at 12:31:01 CEST, 张晨 wrote:
>> Hello everyone,
>>
>>
>> I have a question about Neutron. I've learned that the ovs-agent receives the
>> update-port RPC notification and updates the ovsdb data for the VM port.
>>
>>
>> But what is the situation when I use SDN controllers instead of the OVS
>> mechanism driver? I found nowhere in ODL that adds the VM port to ovs.
>>
>>
>> I asked the author of the related ODL plugin, but he told me that OpenStack
>> adds the VM port to ovs.
>>
>>
>> Then, where is the implementation in OpenStack that adds the VM port to ovs
>> when I'm using ODL in place of the OVS mechanism driver?
>>
>>
>> Thanks





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-12 Thread Ruby Loo
Yes, I think it would be good to have a summit session on that. However,
before the session, it would really be helpful if the folks with proposals
got together and/or reviewed each other's proposals, and summarized their
findings. You may find after reviewing the proposals, that eg only 2 are
really different. Or they several have merit because they are addressing
slightly different issues. That would make it easier to
present/discuss/decide at the session.

--ruby


On 12 April 2016 at 09:17, Jim Rollenhagen  wrote:

> On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
> > Maybe we can continue the discussion here, as there's no enough time in
> the
> > irc meeting :)
>
> Someone mentioned this would make a good summit session, as there's a
> few competing proposals that are all good options. I do welcome
> discussion here until then, but I'm going to put it on the schedule. :)
>
> // jim
>
> >
> > On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu 
> wrote:
> >
> > >
> > > Ironic is currently using shellinabox to provide a serial console, but
> > > it's not compatible
> > > with nova, so I would like to propose a new console type and a custom
> > > HTTP proxy [1]
> > > which validates the token and connects to the ironic console from nova.
> > >
> > > On Horizon side, we should add support for the new console type [2] as
> > > well, here are some screenshots from my local environment.
> > >
> > >
> > >
> > > ​
> > >
> > > Additionally, shellinabox console port management should be improved in
> > > ironic: instead of ports being manually specified, we should introduce a
> > > dynamic allocation/deallocation [3] mechanism.
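
(One simple way to allocate such ports dynamically, as a sketch rather
than ironic's implementation: bind port 0 and let the kernel pick a
free one.)

    import socket

    def allocate_console_port():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(('', 0))  # port 0 asks the kernel for any free port
        port = sock.getsockname()[1]
        sock.close()        # note: racy; real code should hold the
        return port         # socket or track the reservation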
> > >
> > > Functionality is being implemented in Nova, Horizon and Ironic:
> > > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
> > > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
> > > https://review.openstack.org/#/q/status:open+topic:bug/1526371
> > >
> > >
> > > PS: to achieve this goal, we can also add a new console driver in ironic
> > > [4], but I think it doesn't conflict with this, as shellinabox is capable
> > > of integrating with nova, and we should support all console drivers.
> > >
> > >
> > > [1] https://blueprints.launchpad.net/nova/+spec/shellinabox-http-proxy
> > > [2]
> > >
> https://blueprints.launchpad.net/horizon/+spec/ironic-shellinabox-console
> > > [3] https://bugs.launchpad.net/ironic/+bug/1526371
> > > [4] https://bugs.launchpad.net/ironic/+bug/1553083
> > >
> > > --
> > > Best Regards,
> > > Zhenguo Niu
> > >
> >
> >
> >
> > --
> > Best Regards,
> > Zhenguo Niu
>
>
>
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
Flavio, and whoever else is available to attend,

We have a summit session for instance users listed here that could be part of 
the solution:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485

Please attend if you can.
--

+1 for a basic common abstraction. The app catalog could really use it too. 
We'd like to be able to host container orchestration templates and hand them 
over to Magnum for launching.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, April 12, 2016 5:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

On 11/04/16 16:53 +, Adrian Otto wrote:
>Amrith,
>
>I respect your point of view, and agree that the idea of a common compute API
>is attractive… until you think a bit deeper about what that would mean. We
>seriously considered a “global” compute API at the time we were first
>contemplating Magnum. However, what we came to learn through the journey of
>understanding the details of how such a thing would be implemented was that such
>an API would either be (1) the lowest common denominator (LCD) of all compute
>types, or (2) an exceedingly complex interface.
>
>You expressed a sentiment below that trying to offer choices for VM, Bare Metal
>(BM), and Containers for Trove instances “adds considerable complexity”.
>Roughly the same complexity would accompany the use of a comprehensive compute
>API. I suppose you were imagining an LCD approach. If that’s what you want,
>just use the existing Nova API, and load different compute drivers on different
>host aggregates. A single Nova client can produce VM, BM (Ironic), and
>Container (libvirt-lxc) instances all with a common API (Nova) if it’s
>configured in this way. That’s what we do. Flavors determine which compute type
>you get.
>
>If what you meant is that you could tap into the power of all the unique
>characteristics of each of the various compute types (through some modular
>extensibility framework) you’ll likely end up with complexity in Trove that is
>comparable to integrating with the native upstream APIs, along with the
>disadvantage of waiting for OpenStack to continually catch up to the pace of
>change of the various upstream systems on which it depends. This is a recipe
>for disappointment.
>
>We concluded that wrapping native APIs is a mistake, particularly when they are
>sufficiently different than what the Nova API already offers. Containers APIs
>have limited similarities, so when you try to make a universal interface to all
>of them, you end up with a really complicated mess. It would be even worse if
>we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
>approach is to offer the upstream native APIs for the different container
>orchestration engines (COE), and compose Bays for them to run on that are built
>from the compute types that OpenStack supports. We do this by using different
>Heat orchestration templates (and conditional templates) to arrange a COE on
>the compute type of your choice. With that said, there are still gaps where not
>all storage or network drivers work with Ironic, and there are non-trivial
>security hurdles to clear to safely use Bays composed of libvirt-lxc instances
>in a multi-tenant environment.
>
>My suggestion to get what you want for Trove is to see if the cloud has Magnum,
>and if it does, create a bay with the flavor type specified for whatever
>compute type you want, and then use the native API for the COE you selected for
>that bay. Start your instance on the COE, just like you use Nova today. This
>way, you have low complexity in Trove, and you can scale both the number of
>instances of your data nodes (containers), and the infrastructure on which they
>run (Nova instances).


I've been researching this area and I've reached pretty much the same
conclusion. I've had moments of wondering whether creating bays is something
Trove should do but I now think it should.

The need to handle the native API is the part I find a bit painful, as it
means more code needs to live in Trove for us to provide these provisioning
facilities. I wonder if a common *library* would help here, at least to handle
those "simple" cases. Anyway, I look forward to chatting with you all about 
this.

It'd be great if you (and other magnum folks) could join this session:

https://etherpad.openstack.org/p/trove-newton-summit-container

Thanks for chiming in, Adrian.
Flavio

>Regards,
>
>Adrian
>
>
>
>
>On Apr 11, 2016, at 8:47 AM, Amrith Kumar  wrote:
>
>Monty, Dims,
>
>I read the notes and was similarly intrigued about the idea. In particular,
>from the perspective of projects like Trove, having a common Compute API is
>very valuable. It would allow the projects to have a sin

Re: [openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

2016-04-12 Thread Dina Belova
Thank you, Morgan, for the quick fixes proposed!

On Tue, Apr 12, 2016 at 6:19 PM, Morgan Fainberg 
wrote:

> Sorry Missed the copy/paste of links:
>
> https://bugs.launchpad.net/keystone/+bug/1567403 [0]
> https://bugs.launchpad.net/keystone/+bug/1567413 [1]
>
> [0]
> https://review.openstack.org/#/q/I4857cfe1e62d54c3c89a0206ffc895c4cf681ce5,n,z
> [1] https://review.openstack.org/#/c/304688/
>
> --Morgan
>
> On Tue, Apr 12, 2016 at 8:16 AM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>> Fixes have been proposed for both of these bugs.
>>
>> Cheers,
>> --Morgan
>>
>> On Tue, Apr 12, 2016 at 12:38 AM, Dina Belova 
>> wrote:
>>
>>> Matt,
>>>
>>> Thanks for sharing the information about your benchmark. Indeed we need
>>> to follow up on this topic (I'll attend the summit). Let's try to collect
>>> as much information as possible prior to Austin so we have more facts to work with.
>>> I'll try to figure out why the local context cache did not work, at least in my
>>> environment; judging by the results, it most probably did not act as
>>> expected during your benchmarking either.
>>>
>>> Cheers,
>>> Dina
>>>
>>> On Mon, Apr 11, 2016 at 10:57 PM, Matt Fischer 
>>> wrote:
>>>
 On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova 
 wrote:

> Hey, openstackers!
>
> Recently I was trying to profile Keystone (OpenStack Liberty vs
> Mitaka) using this set of changes
> 
>  (that's
> currently on review - some final steps are required there to finish the
> work) and OSprofiler.
>
> Some preliminary results (all in one OpenStack node) can be found here
> 
>  (raw
> OSprofiler reports are not yet merged to some place and can be found
> here ). The full plan
> 
>  of
> what's going to be tested can be found in the docs as well. In short, I
> wanted to take a look at how Keystone's DB/cache usage changed from
> Liberty to Mitaka, keeping in mind that there were several changes
> introduced:
>
>- federation support was added (and made DB scheme a bit more
>complex)
>- Keystone moved to oslo.cache usage
>- local context cache was introduced during Mitaka
>
> First of all - *good job on making Keystone less DB-intensive in the case
> of caching turned on*! With Keystone caching turned on, the number of DB
> queries made to the Keystone DB in Mitaka is on average half of what it is in
> Liberty, comparing the same requests and topologies. Thanks to the Keystone
> community for making it happen :)
>
> Although, I faced *two strange issues* during my experiments, and I'm
> kindly asking you, folks, to help me here:
>
> - I've created bug #1567403
>   <https://bugs.launchpad.net/keystone/+bug/1567403> to share the
>   information: when I turned caching on, the local context cache should
>   cache identical function calls within an API request so as not to
>   ping Memcache too often. Although I observed such calls, Keystone
>   still used Memcache to gather this information. Could someone take a
>   look at this and help me figure out what I am observing? At first
>   sight the local context cache should work fine, but for some reason I
>   do not see it being used.
> - One more filed bug - #1567413
>   <https://bugs.launchpad.net/keystone/+bug/1567413> - is about the
>   opposite situation :) When I turned caching off explicitly in the
>   keystone.conf file, I still observed some values being fetched from
>   Memcache... Your help is very appreciated!
>
> Thanks in advance and sorry for a long email :)
>
> Cheers,
> Dina
>
>
 Dina,

 Thanks for starting this conversation. I had some weird perf results
 comparing L to an RC release of Mitaka, but I was holding them until
 someone else confirmed what I saw. I'm testing token creation and
 validation. From what I saw, token validation slowed down in Mitaka. After
 doing my benchmark runs, the traffic to memcache was 8x in Mitaka from what
 it was in Liberty. That implies more caching but 8x is a lot and even
 memcache references are not free.

 I know some of the Keystone folks are looking into this so it will be
 good to follow up on it. Maybe we could talk about this at the summit?




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-

Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Matt Riedemann



On 4/11/2016 5:54 PM, Michael Still wrote:

On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann
mailto:mrie...@linux.vnet.ibm.com>> wrote:

A few people have been asking about planning for the nova midcycle
for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
R-11 work the best. R-14 is close to the US July 4th holiday, R-13
is during the week of the US July 4th holiday, and R-12 is the week
of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too
far in the release. I'd be open to R-14 though but don't know what
other people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll
see if hosting in Rochester, MN at the IBM site is a possibility.


Intel at Hillsboro had expressed an interest in hosting the N mid-cycle
last release, so they might still be an option? I don't recall any other
possible hosts in the queue, but it's possible I've missed someone.

Michael

--
Rackspace Australia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Tracy Jones is also looking into whether or not VMware could host in 
Palo Alto again.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Ryan Hallisey
I'm definitely guilty of nit picking docs.  I'm fine with holding back and 
following up with a patch to fix any grammar/misspelling mistakes. +1

-Ryan  

- Original Message -
From: "Paul Bourke" 
To: openstack-dev@lists.openstack.org
Sent: Tuesday, April 12, 2016 4:49:09 AM
Subject: Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

I've said in the past I'm not a fan of nitpicking docs. That said, I 
feel it's important for spelling and grammar to be correct. The 
quickstart guide is the first point of contact for many people to the 
project, and rightly or wrongly it will give an overall impression of 
the quality of the project.

When I review doc changes I try not to nitpick on the process being 
described - e.g. if an otherwise fine patch is already 5 iterations in 
and the example given to configure a service could be done in 3 lines 
less bash, I'll usually comment but still +2. If, on the other hand, it is
rife with typos (which, by the way, is easily solved with a spellchecker)
and reads really badly, I will flag it.

-Paul

On 11/04/16 19:27, Steven Dake (stdake) wrote:
> My proposal was for docs-only patches not code contributions with docs.
> Obviously we want a high bar for code contributions.  This is part of the
> reason we have the DocImpact flag (for folks that don't feel comfortable
> writing documentation because perhaps of ESL, or other reasons).
>
> We already have a way to decouple code from docs with DocImpact.
>
> Regards
> -steve
>
> On 4/11/16, 6:17 AM, "Michał Jastrzębski"  wrote:
>
>> So one way to approach this is to decouple docs from code and make it
>> 2 reviews. We can -1 code without docs and ask for a separate docs
>> patchset, depending on the one in question. Then we can nitpick all we
>> want :) The new contributor will get his/her code merged, at least one
>> patchset, so it will be better for morale, and we'll be able to keep a
>> high bar for the QSG and other docs. There is a possibility that the
>> author will abandon the docs patch after the code merges, but well, we
>> can take over the docs review.
>>
>> What do you think, guys? I'd really like to keep a high quality
>> standard all the way and not scare off new committers at the same time.
>>
>> Cheers,
>> Michal
>>
>> On 11 April 2016 at 03:50, Steven Dake (stdake)  wrote:
>>>
>>>
>>> On 4/11/16, 1:38 AM, "Gerard Braad"  wrote:
>>>
 Hi,

 On Mon, Apr 11, 2016 at 4:20 PM, Steven Dake (stdake) 
 wrote:
> On 4/11/16, 12:54 AM, "Gerard Braad"  wrote:
> as
>> at the moment getting an environment up-and-running according to the
>> quickstart guide is a hit and miss
> I don't think deployment is hit or miss as long as the QSG is
> followed to a T :)

 Maybe saying "at the moment" was incorrect, as my deployment
 according to the QSG was a few weeks ago. Sorry about this... as
 you guys have put a lot of effort into it recently.


> I agree we need more clarity in what belongs in the QSG.
 This can be a separate discussion (Not intending to hijack this thread).


 I am not a core reviewer, but I would keep it as-is. I do not see a need for
>>>
>>> Even though you're not a core reviewer, your comments are valued.  The
>>> reason I addressed core reviewers specifically is that they have +2
>>> permissions and I would like more leniency on new documentation in
>>> files outside those listed above (philosophy document, QSG), with a
>>> public statement of such.
>>>
 a lower bar. That said, documentation is the entry point into a
 community (as a user and potential contributor) and therefore it should
 be of a high quality. Maybe I could provide more suggestions
 instead of just indicating 'change this for that'.
>>>
>>> The issue I see with our QSG is it has the highest bar for review
>>> passage
>>> of any file in the repository.  Any QSG change typically requires 10 or
>>> more patch sets to make it through the core reviewer gauntlet.  This
>>> discourages people from writing new documentation.  I don't want this to
>>> carry over into other parts of the documentation that are as of yet
>>> unwritten.  I'd like new documentation to be ok with misspellings,
>>> grammar
>>> errors, formatting problems, ESL authors, and that sort of thing.
>>>
>>> The QSG should tolerate none of these types of errors at this point - it
>>> must be absolutely perfect (at least in English :) so as not to cause
>>> confusion for new operators.
>>>
>>> Regards
>>> -steve
>>>

 regards,


 Gerard

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>

Re: [openstack-dev] Does Neutron itself add VM ports to ovs?

2016-04-12 Thread Sławek Kapłoński
Hello,

It's the nova-compute service which configures it. This service runs on the
compute node: http://docs.openstack.org/developer/nova/architecture.html
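
Roughly speaking, the plugging itself boils down to an ovs-vsctl call made
on the compute node. Here is a simplified sketch of what nova's
create_ovs_vif_port does (illustrative, not the exact code):

    import subprocess

    def create_ovs_vif_port(bridge, dev, iface_id, mac, instance_id):
        # Plug the tap device into the integration bridge and tag it with
        # the Neutron port UUID so the L2 agent / SDN controller can
        # recognize the port and wire up its flows.
        subprocess.check_call([
            'ovs-vsctl', '--', '--if-exists', 'del-port', dev, '--',
            'add-port', bridge, dev, '--',
            'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:iface-status=active',
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id,
        ])

This is why ODL does not need to add the port itself: nova-compute plugs
the device locally, and the controller reacts to the new port appearing in
ovsdb.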

-- 
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia wtorek, 12 kwietnia 2016 16:16:25 CEST 张晨 pisze:
> Thx for the answer, I finally located the implementation in
> nova.network.linux_net.create_ovs_vif_port.
>
> But how could nova execute ovs-vsctl for the compute-node hypervisor
> from just the control node?
> At 2016-04-12 13:13:48, "Sławek Kapłoński"  wrote:
> >Hello,
> >
> >I don't know ODL and how it works, but with the ovs-agent, nova-compute
> >is the part which adds the port to the ovs bridge (see for example
> >nova/virt/libvirt/vif.py)
> >> Hello everyone,
> >> 
> >> 
> >> I have a question about Neutron. I learn that the ovs-agent receives the
> >> update-port rpc notification,and updates ovsdb data for VM port.
> >> 
> >> 
> >> But what is the situation when i use SDN controllers instead of OVS
> >> mechanism driver? I found no where in ODL to add the VM port to ovs.
> >> 
> >> 
> >> I asked the author of the related ODL plugin, but he told me that
> >> OpenStack
> >> adds the VM port to ovs.
> >> 
> >> 
> >> Then, where is the implementation in OpenStack to  add the VM port to
> >> ovs,
> >> when i'm using ODL replacing the OVSmechanism driver?
> >> 
> >> 
> >> Thanks

signature.asc
Description: This is a digitally signed message part.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Newton design summit etherpad wiki created

2016-04-12 Thread Matt Riedemann
I've stubbed out the newton design summit etherpad wiki so people can 
start populating this now.


https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton design summit etherpad wiki created

2016-04-12 Thread Anita Kuno
On 04/12/2016 12:45 PM, Matt Riedemann wrote:
> I've stubbed out the newton design summit etherpad wiki so people can
> start populating this now.
> 
> https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads
> 
Keep in mind we still haven't fully addressed all the consequences of
the wiki spam event from February. If you have never before had an
account on the wiki you won't be able to create one. If folks can
partner up and help out others that are having wiki account issues that
would be great.

We hope to continue to make progress on the wiki security issue and to
discuss it at summit.

If folks get really stuck with the wiki, please talk to us in
#openstack-infra.

Thanks Matt,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2016-04-12 Thread Ruby Loo
Hi,

We are nerdy to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 04.04.2016):
- Ironic: 200 bugs (+4) + 163 wishlist items. 25 new, 136 in progress (+1),
0 critical, 26 high and 18 incomplete (+2)
- Inspector: 13 bugs (-1) + 15 wishlist items (-1). 1 new, 6 in progress
(-1), 0 critical, 4 high and 0 incomplete (-1)
- Nova bugs with Ironic tag: 16 (+1). 1 new (+1), 0 critical, 0 high

Network isolation (Neutron/Ironic work) (jroll)
===
- Tests are green, needs core love
- The main meat:
- https://review.openstack.org/#/c/285852/
- (deva) some concerns on p31, brought up in ironic-neutron meeting
this morning. needs another rev to fix.
- https://review.openstack.org/#/c/206244/
- https://review.openstack.org/#/c/206144
- https://review.openstack.org/#/c/213262/

Live upgrades (lucasagomes, lintan)
===
- Propose a spec to discuss:
- https://review.openstack.org/#/c/299245/

Node filter API and claims endpoint (jroll, devananda, lucasagomes)
===
- expect an update on this spec before summit (hopefully this week)

Oslo (lintan)
=
- In order to make use of oslo-config-generator, we should make ironic-lib
expose the options too.
- Patch has landed: https://review.openstack.org/#/c/297549/ so we need
to release a new version of ironic-lib
- release request: https://review.openstack.org/#/c/304229/

Testing/Quality (jlvillal/krtaylor)
===
- Grenade: No update this week.

Inspector (dtantsur)
===
- Reprocessing stored node introspection data API merged, inspector client
patch in review

webclient (krotscheck / betherly)
=
- v1.1 has been released.

Drivers:

CIMC and UCSM (sambetts)

- Both CI down for 1hr because of HW upgrade, then should be back up and
testing.
- UCSM driver needs to prove itself then will be made voting, but seems to
be getting good results after a race condition was fixed on Monday morning.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Telco Working Group meeting for Wednesday April 6th CANCELLED

2016-04-12 Thread Steve Gordon


- Original Message -
> From: "Calum Loudon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> ,
> "openstack-operators" 
> Sent: Wednesday, April 6, 2016 6:09:16 AM
> Subject: Re: [openstack-dev] Telco Working Group meeting for Wednesday April  
> 6th CANCELLED
> 
> Thanks Steve
> 
> I agree with moving to the PWG.
> 
> On that topic, do you know what's happened to some of the user stories we
> proposed, specifically https://review.openstack.org/#/c/290060/ and
> https://review.openstack.org/#/c/290347/?  Neither shows up in
> https://review.openstack.org/#/q/status:open+project:openstack/openstack-user-stories

This query includes status:open, and those two reviews were merged already so 
they don't show up.
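
If you drop the status:open term, or query for merged changes explicitly,
they should appear, e.g.:

https://review.openstack.org/#/q/status:merged+project:openstack/openstack-user-stories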

> but there is a https://review.openstack.org/#/c/290991/ which seems to be a
> copy of https://review.openstack.org/#/c/290060/ with the template help text
> added back in and no mention of the original?

From Shamail's comment in 290991:

This change should be used to discuss and refine the concept. Can the user 
story owner please make a minor change to show ownership?

Basically they opened new reviews with a minor change to trigger further 
discussion. I'm not in love with this approach versus just discussing it on the 
original move request but it is the way it is being done for now. W.r.t. 290060 
I believe you probably meant to include another link but I imagine the 
situation is the same.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-12 Thread Hongbin Lu
Hi all,

We discussed this in our last team meeting, and we were in disagreement. Some 
of us preferred option #1, others preferred option #2. I would suggest leaving 
this topic to the design summit so that our team members have more time to 
research each option. If we are still in disagreement, I will let the core 
team vote (hopefully we will have the whole core team at the design summit).

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: April-11-16 4:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

+1 for "#1: Mesos and Marathon". Most deployments that I am aware of have 
this setup. Also, we can provide a few lines of instructions on how to run 
Chronos on top of Marathon.
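
For example, an app definition along these lines POSTed to Marathon would
keep Chronos running under Marathon (an untested sketch - the image name
and the --master/--zk_hosts values depend on your cluster):

    {
      "id": "/chronos",
      "cpus": 0.5,
      "mem": 512,
      "instances": 1,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "mesosphere/chronos",
          "network": "HOST"
        }
      },
      "args": ["--master", "zk://10.0.0.1:2181/mesos",
               "--zk_hosts", "10.0.0.1:2181"]
    }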

honestly I don't see how #2 will work, because Marathon installation is 
different from Aurora installation.

---
Egor


From: Kai Qiang Wu mailto:wk...@cn.ibm.com>>
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Sent: Sunday, April 10, 2016 6:59 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

#2 seems more flexible, and if it can be proven to "make the SAME mesos bay 
work with multiple frameworks", it would be great. That means one mesos bay 
should support multiple frameworks.




Thanks


Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay




My preference is #1, but I don’t feel strongly about excluding #2. I would 
agree to go with #2 for now and switch back to #1 if there is demand from 
users. As for Ton’s suggestion to push Marathon into the introduced 
configuration hook, I think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
I would agree that #2 is the most flexible option, providing a well defined 
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use this 
new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other 
frameworks but not Marathon.
Ton,


From: Adrian Otto mailto:adrian.o...@rackspace.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


On Apr 8, 2016, at 3:15 PM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:

Hi team,
I would like to give an update on this thread. In our last team meeting, we 
discussed several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have two 
mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos frameworks, 
such as Chronos. With this option, Magnum team doesn’t need to maintain extra 
framework configuration. However, users need to do it themselves.
This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but those two mesos frameworks cannot share 
resources (a key feature of mesos is to have different frameworks running on 
the same cluster to increase resource utilization). Which option do you 
prefer? Or do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon.

---
Egor
__

Re: [openstack-dev] Telco Working Group meeting for Wednesday April 6th CANCELLED

2016-04-12 Thread Shamail


> On Apr 12, 2016, at 1:12 PM, Steve Gordon  wrote:
> 
> 
> 
> - Original Message -
>> From: "Calum Loudon" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> ,
>> "openstack-operators" 
>> Sent: Wednesday, April 6, 2016 6:09:16 AM
>> Subject: Re: [openstack-dev] Telco Working Group meeting for Wednesday April 
>>6th CANCELLED
>> 
>> Thanks Steve
>> 
>> I agree with moving to the PWG.
>> 
>> On that topic, do you know what's happened to some of the user stories we
>> proposed, specifically https://review.openstack.org/#/c/290060/ and
>> https://review.openstack.org/#/c/290347/? Neither shows up in
>> https://review.openstack.org/#/q/status:open+project:openstack/openstack-user-stories
> 
> This query includes status:open, and those two reviews were merged already so 
> they don't show up.
> 
>> but there is a https://review.openstack.org/#/c/290991/ which seems to be a
>> copy of https://review.openstack.org/#/c/290060/ with the template help text
>> added back in and no mention of the original?
> 
> From Shamail's comment in 290991:
> 
>This change should be used to discuss and refine the concept. Can the user 
> story owner please make a minor change to show ownership?
> 
> Basically they opened new reviews with a minor change to trigger further 
> discussion. I'm not in love with this approach versus just discussing it on 
> the original move request but it is the way it is being done for now. W.r.t. 
> 290060 I believe you probably meant to include another link but I imagine the 
> situation is the same.
Yeah, unfortunately, this approach was needed when we changed the workflow.  A 
minor change would be recommended for now.  

The template recently changed and you could update the story to the new 
template (if it isn't already updated) and that would suffice.
> 
> -Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Jeremy Stanley
On 2016-03-31 15:15:23 -0400 (-0400), michael mccune wrote:
[...]
> * what is the process for performing an analysis
> 
> * how will an analysis be formally recognized and approved
> 
> * who will be doing these analyses

I intentionally didn't specify when writing the
vulnerability:managed tag description but instead only gave an
example, as the details of who can review a project and how will
vary depending on its scope, language, and so on. I was trying to
keep it vague enough to be applicable to all sorts of projects, but
I see now that lack of specificity is leading to additional
confusion (which makes me fear we'll be forced instead to encode
every possible solution in our tag description).

> * does it make sense to keep the analysis process strictly limited
> to the vmt
[...]

Not at all. Providing security feedback to projects seems beneficial
regardless of whether they're going to have vulnerability reporting
overseen by the OpenStack VMT or plan to handle that on their own.
On the other hand, some volunteers may choose to limit their
assistance to projects applying for the vulnerability:managed tag as
a means of keeping from getting spread too thin.

> ultimately, having a third-party review of a project is a worthy
> goal, but this has to be tempered with the reality that a single
> team will not be able to scale out and provide thorough analyses
> for all projects. to that extent, the ossp should work, initially,
> to help a few teams get these analyses completed and in the
> process create a set of useful tools (docs, guides, diagrams,
> foil-hat wearing pamphlets) to help further the effort.
> 
> i would like to propose that the threat analysis portion of the
> vulnerability:managed tag be modified with the goal of having the
> project teams create their own analyses, with an extended
> third-party review to be performed afterwards. in this respect,
> the scale issue can be addressed, as well as the issue of project
> domain knowledge. it makes much more sense to me to have the
> project team creating the initial work here as they will know the
> areas, and architectures, that will need the most attention.
[...]

This seems fine. The issue mostly boils down to the fact that the
VMT used to perform a cursory security review (skim really) of a
project's source code checking for obvious smells to indicate that
it might not be in a mature enough state to cover officially for
vulnerability reporting oversight without creating a ton of
additional work for ourselves the first time someone pointed an
analyzer at it. As a means of scaling the VMT's capacity without
scaling the size of the VMT, in an effort to handle the "big tent"
model, this sanity check seemed like something we could easily
delegate to other interested volunteers to reduce our own workload
and help us cover more and a wider variety of projects.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Newton design summit etherpad wiki created

2016-04-12 Thread Anita Kuno
On 04/12/2016 01:05 PM, Anita Kuno wrote:
> On 04/12/2016 12:45 PM, Matt Riedemann wrote:
>> I've stubbed out the newton design summit etherpad wiki so people can
>> start populating this now.
>>
>> https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads
>>
> Keep in mind we still haven't fully addressed all the consequences of
> the wiki spam event from February. If you have never before had an
> account on the wiki you won't be able to create one. If folks can
> partner up and help out others that are having wiki account issues that
> would be great.
> 
> We hope to continue to make progress on the wiki security issue and to
> discuss it at summit.
> 
> If folks get really stuck with the wiki, please talk to us in
> #openstack-infra.
> 
> Thanks Matt,
> Anita.
> 

It was an oversight on my part to not include the link to the discussion
on the spam event in my prior post:
http://lists.openstack.org/pipermail/openstack-infra/2016-February/003791.html

My apologies,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Jeremy Stanley
On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
> If a team has already done a TA (e.g. as part of an internal
> product TA) (and produced all the documentation) would this meet
> the requirements?
> 
> I ask, as Designate looks like it meets nearly  all the current
> requirements - the only outstanding question in my mind was the
> Threat Analysis

Seems fine to me, though in the interest of openness that
documentation should probably be licensed such that it can be
published somewhere for the whole community to read (once any
glaring deficiencies are addressed anyway).
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Barrett, Carol L
Hi Folks - I'm looking into the options for Intel to host in Hillsboro Oregon. 
Stay tuned for more details.
Thanks
Carol

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Tuesday, April 12, 2016 9:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Newton midcycle planning



On 4/11/2016 5:54 PM, Michael Still wrote:
> On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann 
> mailto:mrie...@linux.vnet.ibm.com>> wrote:
>
> A few people have been asking about planning for the nova midcycle
> for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
> R-11 work the best. R-14 is close to the US July 4th holiday, R-13
> is during the week of the US July 4th holiday, and R-12 is the week
> of the n-2 milestone.
>
> R-16 is too close to the summit IMO, and R-10 is pushing it out too
> far in the release. I'd be open to R-14 though but don't know what
> other people's plans are.
>
> As far as a venue is concerned, I haven't heard any offers from
> companies to host yet. If no one brings it up by the summit, I'll
> see if hosting in Rochester, MN at the IBM site is a possibility.
>
>
> Intel at Hillsboro had expressed an interest in hosting the N 
> mid-cycle last release, so they might still be an option? I don't 
> recall any other possible hosts in the queue, but its possible I've missed 
> someone.
>
> Michael
>
> --
> Rackspace Australia
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Tracy Jones is also looking into whether or not VMware could host in Palo Alto 
again.

-- 

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Steven Dake (stdake)


On 4/12/16, 10:37 AM, "Jeremy Stanley"  wrote:

>On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
>> If a team has already done a TA (e.g. as part of an internal
>> product TA) (and produced all the documentation) would this meet
>> the requirements?
>> 
>> I ask, as Designate looks like it meets nearly  all the current
>> requirements - the only outstanding question in my mind was the
>> Threat Analysis
>
>Seems fine to me, though in the interest of openness that
>documentation should probably be licensed such that it can be
>published somewhere for the whole community to read (once any
>glaring deficiencies are addressed anyway).
>-- 
>Jeremy Stanley

Perhaps the security team could accept reviews against their documentation
to add the threat analysis documentation?

Regards,
-steve

>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl] [security][tc] Tidy up language in section 5 of the vulnerability:managed tag

2016-04-12 Thread Jeremy Stanley
On 2016-04-02 14:40:57 + (+), Steven Dake (stdake) wrote:
[...]
> IANAL and writing these things correctly is hard to do properly );
> involving the community around the pain points of the tagging
> process is what I'm after.
[...]

Nobody on the VMT is a lawyer either, and when I wrote the original
text I wanted to make sure it provided sufficient guidance on our
expectations while still being inclusive and without needing a
lawyer to interpret. As it stands, the VMT and TC still make the
final call on whether an application is sufficiently convincing, so
the point of the application criteria are to make sure projects know
what sort of convincing we're looking for.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Clark, Robert Graham
On 12/04/2016 18:37, "Jeremy Stanley"  wrote:



>On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
>> If a team has already done a TA (e.g. as part of an internal
>> product TA) (and produced all the documentation) would this meet
>> the requirements?
>> 
>> I ask, as Designate looks like it meets nearly  all the current
>> requirements - the only outstanding question in my mind was the
>> Threat Analysis
>
>Seems fine to me, though in the interest of openness that
>documentation should probably be licensed such that it can be
>published somewhere for the whole community to read (once any
>glaring deficiencies are addressed anyway).
>-- 
>Jeremy Stanley

In some cases this may be feasible, in others less so. TA in general
tends to be implementation specific which is why, when discussing how
the Security Project would be performing TA work within OpenStack, we
decided that it should be reflective of a best-practice deployment for
whatever project was being evaluated.[1][2]

There are two OpenStack vendors I know of that do in depth functional
threat analysis on OpenStack projects. I have been highly involved in
the development of TA at HPE and various colleagues in the Security
project have been involved with the TA process at Rackspace. When
evaluating our documentation sets together at the mid-cycle [3], it was
felt that in both cases, some degree of "normalization" would need to be
performed before either of us would be ready to share these documents
externally.

-Rob [Security]

[1] 
https://openstack-security.github.io/collaboration/2016/01/16/threat-analysis.html
[2] https://openstack-security.github.io/threatanalysis/2016/02/07/anchorTA.html
[3] 
https://openstack-security.github.io/mid-cycle/2016/01/15/mitaka-midcycle.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [infra] The same SRIOV / NFV CI failures missed a regression, why?

2016-04-12 Thread Jeremy Stanley
On 2016-04-05 00:45:20 -0400 (-0400), Jay Pipes wrote:
> The proposal is to have the hardware companies donate hardware and
> sysadmins to setup and maintain a *single* third-party CI lab
> environment running the *upstream infra CI toolset* in one
> datacenter at first, moving to multiple datacenters eventually.
> This lab environment would contain hardware that the vendors
> intend to ensure is functionally tested in certain projects --
> mostly Nova and Neutron around specialized PCI devices and SR-IOV
> NICs that have zero chance of being tested functionally in the
> cloudy gate CI environments.

This is great. I always love to see increased testing of OpenStack
(and often more insights come from setting up the test environment
and designing the tests than result from running them on proposed
changes later).

> The thing I am proposing the upstream Infra team members would be
> responsible for is guiding/advising on the creation and
> installation of the CI tools and helping to initially get the CI
> system reporting to the upstream Jenkins/Zuul system. That's it.
> No long-term maintenance, no long-term administration of the
> hardware in this lab environment. Just advice and setup help.

We already have a vibrant and active community around these tools
and concepts and I welcome any additional participation there.

> The vendors would continue to be responsible for keeping the CI
> jobs healthy and the lab environment up and running. It's just
> instead of 12 different external CI systems, there would be 1
> spawning jobs on lots of different types of hardware. I'm hoping
> that reducing the number of external CI systems will enable the
> vendors to jointly improve the quality of the tests because they
> will be able to focus on creating tests instead of keeping 12
> different CI systems up and running.
> 
> Hope that better explains the proposal.

It does. I'm glad to see the implications in the original proposal
of being partially staffed by the foundation turned out not to be
integral. Your call for a "single CI system" also seemed to imply
combining this and the upstream CI rather than simply combining
multiple third-party CI systems into a single third-party CI system,
which I now get is not actually the case. Thanks for clarifying!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tc] [security] threat analysis, tags, and the road ahead

2016-04-12 Thread Hayes, Graham
On 12/04/2016 18:39, Jeremy Stanley wrote:
> On 2016-04-01 15:50:57 + (+), Hayes, Graham wrote:
>> If a team has already done a TA (e.g. as part of an internal
>> product TA) (and produced all the documentation) would this meet
>> the requirements?
>>
>> I ask, as Designate looks like it meets nearly  all the current
>> requirements - the only outstanding question in my mind was the
>> Threat Analysis
>
> Seems fine to me, though in the interest of openness that
> documentation should probably be licensed such that it can be
> published somewhere for the whole community to read (once any
> glaring deficiencies are addressed anyway).
>

Definitely - I have a request in to open the analysis for public
distribution.

My feeling is that it should land somewhere in our docs, and be
covered by the same license as them. (I *think* that is MIT for
us)

- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][ceph] Puppet-ceph is now a formal member of puppet-openstack

2016-04-12 Thread David Moreau Simard
I'm definitely glad we got puppet-ceph to where it is today and I have
complete trust in the Puppet-OpenStack team to take it to the next
level.

With that announcement, I have to do one of my own -- I am stepping
down as a puppet-ceph core.

My day-to-day work does not involve Ceph anymore and as such my
knowledge about it has stagnated.
Doing thoughtful and informed reviews has been taking more time and
effort than I can afford right now.

I am still going to be around and feel free to poke me whenever necessary!

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Mon, Apr 11, 2016 at 5:24 PM, Andrew Woodward  wrote:
> It's been a while since we started the puppet-ceph module on stackforge as a
> friend of OpenStack. Since then Ceph's usage in OpenStack has increased
> greatly and we have both the puppet-openstack deployment scenarios as well
> as check-tripleo running against the module.
>
> We've been receiving leadership from the puppet-openstack team for a while
> now and our small core team has struggled to keep up. As such we have added
> the puppet-openstack cores to the review ACLs in gerrit and have been
> formally added to the puppet-openstack project in governance. [1]
>
> I thank the puppet-openstack team for their support and, I am glad to see
> the module move under their leadership.
>
> [1] https://review.openstack.org/300191
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Armando M.
On 12 April 2016 at 07:08, Michael Johnson  wrote:

> Armando,
>
> Is there any way we can move the "Neutron: Development track: future
> of *-aas projects" session?  I overlaps with the LBaaS talk:
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/6893?goback=1
>
> Michael
>

Swapped with the first slot of the day. I also loaded etherpads here:

https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Neutron

Cheers,
Armando

>
>
> On Mon, Apr 11, 2016 at 9:56 PM, Armando M.  wrote:
> > Hi folks,
> >
> > A provisional schedule for the Neutron project is available [1]. I am
> still
> > working with the session chairs and going through/ironing out some
> details
> > as well as gathering input from [2].
> >
> > I hope I can get something more final by the end of this week. In the
> > meantime, please free to ask questions/provide comments.
> >
> > Many thanks,
> > Armando
> >
> > [1]
> >
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
> > [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Dedicated CI job for live-migration testing status update

2016-04-12 Thread Timofei Durakov
Hello,

A separate job for testing live-migration with different storage
configurations (no shared storage/NFS/Ceph) was implemented during Mitaka and
is available to be executed using the experimental pipeline.

The job is ready but shows the same stability as
gate-tempest-dsvm-multinode-full.
Bugs like [1][2] affect its stability. So the main idea for now is to make
this job run against the latest libvirt/qemu versions. Markus Zoeller and
Tony Breeds are working on the implementation of a devstack plugin for that
[3]. Once the plugin is ready it will be possible to check. Another option
is to use Xenial images in the experimental job, which contain newer
libvirt/qemu versions. The infra team already added the ability to use
16.04 for experimental jobs [4], so I'm going to try this approach.
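
Once the plugin is ready, enabling it in a job should be a one-liner in
local.conf (a sketch, assuming the usual devstack plugin convention):

    [[local|localrc]]
    enable_plugin devstack-plugin-additional-pkg-repos \
        https://git.openstack.org/openstack/devstack-plugin-additional-pkg-repos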

Work items:
- Cover negative test cases for live-migration (to check that all rollback
logic works well) - in progress;
- Check state of VM after migration
- Live migration for instance under workload

Timofey.

[1] - https://bugs.launchpad.net/nova/+bug/1535232
[2] - https://bugs.launchpad.net/nova/+bug/1524898
[3] -
https://review.openstack.org/#/q/project:openstack/devstack-plugin-additional-pkg-repos
[4] - https://review.openstack.org/#/c/302949/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Flavio Percoco wrote:

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
think it was Vancouver), there’s likely no silver bullet in this area.
After that conversation, and some further experimentation, I found that
even if Trove had access to a single Compute API, there were other
significant complications further down the road, and I didn’t pursue the
project further at the time.



Adrian, Amrith,

I've spent enough time researching this area during the last month and my
conclusion is pretty much the above. There's no silver bullet in this area
and I'd argue there shouldn't be one. Containers, bare metal and VMs differ
in such a way (feature-wise) that it'd not be good, as far as deploying
databases goes, for there to be one compute API. Containers allow for a
different deployment architecture than VMs and so does bare metal.


Just some thoughts from me, but why focus on the 
compute/container/baremetal API at all?


I'd almost like a way that just describes how my app should be 
interconnected, what is required to get it going, and the features 
and/or scheduling requirements for the different parts of that app.


To me it feels like this isn't a compute API or really a heat API but 
something else. Maybe it's closer to the docker compose API/template 
format or something like it.
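
Something like this, just to make the idea concrete (a made-up format,
not any real project's syntax):

    app: my-database
    components:
      db:
        runtime: container        # or vm, or baremetal
        image: mysql:5.6
        scheduling:
          anti_affinity: true
      backup-agent:
        runtime: vm
        flavor: m1.small
        connects_to: [db]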


Perhaps such a thing needs a new project. I'm not sure, but it does feel 
like, as developers, we should be able to make such a thing that still 
exposes the more advanced functionality of the underlying API so that it 
can be used if really needed...


Maybe this is similar to an app-catalog, but that doesn't quite feel 
like it's the right thing either so maybe somewhere in between...


IMHO it'd be nice to have a unified story around what this thing is, so 
that we as a community can drive (as a single group) toward that; maybe 
this is where the product working group can help and we as a developer 
community can also try to unify behind...


P.S. name for project should be 'silver' related, ha.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Matt Riedemann



On 4/11/2016 11:56 PM, Armando M. wrote:

Hi folks,

A provisional schedule for the Neutron project is available [1]. I am
still working with the session chairs and going through/ironing out some
details as well as gathering input from [2].

I hope I can get something more final by the end of this week. In the
meantime, please free to ask questions/provide comments.

Many thanks,
Armando

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
[2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



FYI, I have the nova/neutron cross-project session for Wednesday at 11am 
in the schedule:


https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [opnfv-tech-discuss][NFV] BoF on NFV Orchestration

2016-04-12 Thread Sridhar Ramaswamy
Greetings,

I'm soliciting input and, if you are interested in this topic, your presence
at the upcoming BoF session on NFV Orchestration during the Austin summit:

https://www.openstack.org/summit/austin-2016/summit-schedule/events/8468

We had a packed session for a similar event at the Tokyo summit, with much
great input shared on this topic. I've created an etherpad with a few
cross-organizational topics to get the discussion going:


https://etherpad.openstack.org/p/austin-2016-nfv-orchestration-bof

Please share your thoughts, either here in the ML or etherpad to make this
event a success.

thanks,
Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Armando M.
On 12 April 2016 at 12:16, Matt Riedemann 
wrote:

>
>
> On 4/11/2016 11:56 PM, Armando M. wrote:
>
>> Hi folks,
>>
>> A provisional schedulefor the Neutron project is available [1]. I am
>> still working with the session chairs and going through/ironing out some
>> details as well as gathering input from [2].
>>
>> I hope I can get something more final by the end of this week. In the
>> meantime, please free to ask questions/provide comments.
>>
>> Many thanks,
>> Armando
>>
>> [1]
>>
>> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
>> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> FYI, I have the nova/neutron cross-project session for Wednesday at 11am
> in the schedule:
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089


Thanks,

Surprisingly this does not show up when searching by the Neutron tag, even
though I can see the session has been tagged with both Nova and Neutron.
I wonder if I am doing something wrong.


>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-12 Thread Dan Prince
On Mon, 2016-04-11 at 05:54 -0400, John Trowbridge wrote:
> Hola OOOers,
> 
> It came up in the meeting last week that we could benefit from a CI
> subteam with its own meeting, since CI is taking up a lot of the main
> meeting time.
> 
> I like this idea, and think we should do something similar for the
> other
> informal subteams (tripleoclient, UI), and also add a new subteam for
> tripleo-quickstart (and maybe one for releases?).
> 
> We should make separate ACLs for these subteams as well. The informal
> approach of adding cores who can +2 anything but are told to only +2
> what they know doesn't scale very well.

+1 for some subteams. I think they make good sense for projects which
are adopted upstream into projects like TripleO. I wouldn't say we have
to go crazy and do it for everything but for projects like quickstart
it seems fine to me.

Dan

> 
> - trown
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-12 Thread Jeff Peeler
On Mon, Apr 11, 2016 at 3:37 AM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> The reviewers in Kolla tend to nit-pick the quickstart guide to death during
> reviews.  I'd like to keep that high bar in place for the QSG, because it is
> our most important piece of documentation at present.  However, when new
> contributors see the nitpicking going on in reviews, I think they may get
> discouraged about writing documentation for other parts of Kolla.
>
> I'd prefer if the core reviewers held a lower bar for docs not related to
> the philosophy or quiickstart guide document.  We can always iterate on
> these new documents (like the operator guide) to improve them and raise the
> bar on their quality over time, as we have done with the quickstart guide.
> That way contributors don't feel nitpicked to death and avoid improving the
> documentation.
>
> If you are a core reveiwer and agree with this approach please +1, if not
> please –1.

I'm fine with relaxing the reviews on documentation. However, there's
a difference between having a missed comma versus the whole patch
being littered with misspellings. In general in the former scenario I
try to comment and leave the code review set at 0, hoping the
contributor fixes it. The danger is that a 0 vote people sometimes
miss, but it doesn't block progress.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-12 Thread Anita Kuno
On 04/12/2016 03:23 PM, Armando M. wrote:
> On 12 April 2016 at 12:16, Matt Riedemann 
> wrote:
> 
>>
>>
>> On 4/11/2016 11:56 PM, Armando M. wrote:
>>
>>> Hi folks,
>>>
>>> A provisional schedule for the Neutron project is available [1]. I am
>>> still working with the session chairs and going through/ironing out some
>>> details as well as gathering input from [2].
>>>
>>> I hope I can get something more final by the end of this week. In the
>>> meantime, please free to ask questions/provide comments.
>>>
>>> Many thanks,
>>> Armando
>>>
>>> [1]
>>>
>>> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
>>> [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>>>
>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> FYI, I have the nova/neutron cross-project session for Wednesday at 11am
>> in the schedule:
>>
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9089
> 
> 
> Thanks,
> 
> Surprisingly this does not show up when searching by Neutron tag, even
> though I can see the sessions it's been tagged with both Nova and Neutron.
> I wonder if I am doing something wrong.

The title for that session includes "Nova: Neutron".
So it comes up when searching for Neutron (without the colon) or Nova:
(with the colon), but not Neutron: (with the colon).

Hopefully the web folks will have this straightened out for Barcelona.

Thanks,
Anita.

> 
> 
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-12 Thread Matt Riedemann



On 4/1/2016 8:45 AM, Sean Dague wrote:

The glance v2 work is currently blocked as there is no active spec; it would
be great if someone from the glance team could get that rolling again.

I started digging back through the patches in detail to figure out if
there are some infrastructure bits we could get in early regardless.

#1 - new methods for glance xenserver plugin

Let's take a simplified approach on this patch -
https://review.openstack.org/#/c/266933 and only change the
xenapi/etc/xapi.d/plugins/ content in the following ways.

- add upload/download_vhd_glance2 methods. Don't add an api parameter.
Add these methods mostly via copy/paste as we're optimizing for deleting
v1 not for fixing v1.


How are we planning on deleting the v1 image API? That would also mean 
deleting Nova's images API which is a proxy for glance v1. I didn't 
think we deleted Nova APIs? We can certainly deprecate it once we have 
glance v2 support.




That will put some infrastructure in place so we can just call the v2
actions based on decision from higher up the stack.

#2 - move discover major version back to glanceclient -
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108

I don't understand why this was ever in nova. This really should be

glanceclient.discover... something. It uses internal methods from
glanceclient and internal structures of the content returned.

Caching, if desired, should also be on the glanceclient side.
glanceclient.reset_version() could exist to clear any caching.

#3 - Ideally we'd also have a

client = glanceclient.AutoClient(endpoint, ... ) which basically does
glanceclient.discover and returns us the right client automatically.
client.version provides access to the version information if you need to
figure out what version of a client you have.
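
To make that concrete, here is a rough sketch of what such a factory could
look like (neither AutoClient nor a public discovery helper exists in
glanceclient today; both names below are assumptions about the proposal):

# Sketch only: discover_major_version and auto_client are assumed names
# for the proposal above, not existing glanceclient APIs.
import glanceclient

def discover_major_version(endpoint):
    # In the proposal this would live in glanceclient and probe the
    # endpoint's /versions document; hardcoded here to keep the sketch short.
    return 2

def auto_client(endpoint, **kwargs):
    version = discover_major_version(endpoint)
    client = glanceclient.Client(str(version), endpoint, **kwargs)
    client.version = version  # callers can still branch on the version
    return client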


This starts to get to a point where the parts of versioning that
glanceclient should know about are in glanceclient, and when nova still
needs to know things it can ask for client.version.

For instance make _extract_query_params -
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L448
become an instance method that can

if self._client.version >= 2:
...
else:
...
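
Fleshed out a little, that could look like the sketch below (the filter
translation shown is illustrative only, not the final v1/v2 mapping):

# Sketch of the branch above; the 'properties' handling is illustrative.
def _extract_query_params(self, kwargs):
    filters = dict(kwargs.get('filters', {}))
    custom = filters.pop('properties', {})
    if self._client.version >= 2:
        # v2 filters are flat and case sensitive.
        filters.update(custom)
    else:
        # v1 passes custom properties as 'property-<name>' query params.
        for name, value in custom.items():
            filters['property-%s' % name] = value
    return {'filters': filters}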


This isn't the whole story to get us home; however, chunking up some of
these pieces I think makes landing the rest of the story much
simpler. In nearly every case (except for the alt link in the image
view) we can easily have access to a real glance client. And the code
will be a ton easier to understand with some of the glanceclient
specific details behind the glanceclient interface.

-Sean



--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Cinder] About snapshot Rollback?

2016-04-12 Thread Erlon Cruz
I didn't see that mentioned. Do you mean legacy volumes and snapshots?

On Mon, Apr 11, 2016 at 3:58 PM, Duncan Thomas 
wrote:

> Ok, you're right about device naming by UUID.
>
> So we have two advantages compared to the existing system:
>
> - Keeping the same volume id (and therefore disk UUID) makes reverting a
> VM much easier since device names inside the instance stay the same
> - Can significantly reduce the amount of copying required on some backends
>
> These do seem like solid reasons to consider the feature.
>
> If you can solve the backwards compatibility problem mentioned further up
> this thread, then I think there's a strong case for considering adding this
> API.
>
> The next step is a spec and a PoC implementation.
>
>
>
> On 11 April 2016 at 20:57, Erlon Cruz  wrote:
>
>> You are right, the instance should be shut down or the device
>> unmounted before the 'revert' or before removing the old device. That should be
>> enough to avoid corruption. I think the device naming is not a problem if
>> you use the same volume (at least the disk UUID will be the same).
>>
>> On Mon, Apr 11, 2016 at 2:39 PM, Duncan Thomas 
>> wrote:
>>
>>> You can't just change the contents of a volume under the instance though
>>> - at the very least you need to do an unmount in the instance, and a detach
>>> is preferable, otherwise you've got data corruption issues.
>>>
>>> At that point, the device naming problems are identical.
>>>
>>> On 11 April 2016 at 20:22, Erlon Cruz  wrote:
>>>
 The actual user workflow is:

  1 - User creates a volume(s)
  2 - User attaches the volume to an instance
  3 - User creates a snapshot
  4 - Something happens, causing the need for a revert
  5 - User creates a volume(s) from the snapshot(s)
  6 - User detaches the old volumes
  7 - User attaches the new volumes (and prays that they get the same id) - Nova
 should have the ability to honor supplied device names (vdc, vdd, etc.),
 which does not always happen [1]. But does the volume keep the same UUID in the
 system? Several applications use that to boot.

 The suggested workflow would be simpler from a user's POV:

  1 - User creates a volume(s)
  2 - User attaches the volume to an instance
  3 - User creates a snapshot
  4 - Something happens, causing the need for a revert
  5 - User reverts the snapshot(s)


  [1] https://goo.gl/Kusfne
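
 A hypothetical sketch of what step 5 could look like from the client side
 if Cinder grew a revert API (no such API exists today; the
 revert_to_snapshot name and call are assumptions for illustration):

 # Hypothetical sketch only: Cinder has no revert API yet, so the
 # revert_to_snapshot() call below is an assumed future interface.
 from cinderclient import client

 cinder = client.Client('2', 'user', 'password', 'project',
                        'http://controller:5000/v2.0')
 volume = cinder.volumes.get('volume-uuid')
 snapshot = cinder.volume_snapshots.get('snapshot-uuid')

 # The volume keeps its UUID, so guests that mount or boot by disk UUID
 # would be unaffected by the revert.
 cinder.volumes.revert_to_snapshot(volume, snapshot)  # assumed call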

 On Fri, Apr 8, 2016 at 5:07 AM, Ivan Kolodyazhny 
 wrote:

> Hi Chenzongliang,
>
> I still don't understand what is difference between proposed feature
> and 'restore volume from snapshot'? Could you please explain it?
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Thu, Apr 7, 2016 at 12:00 PM, Chenzongliang <
> chenzongli...@huawei.com> wrote:
>
>> Dear Cruz:
>>
>>
>>
>>  Thanks for your kind support. I will review the previous spec
>> according to the following links. There may be more user scenarios we
>> should consider, such as backup, create volume from snapshot, consistency
>> groups, and so on. We will spend some time gathering the user scenarios
>> and determining what to do in the next step.
>>
>>
>>
>> Sincerely,
>>
>> zongliang chen
>>
>>
>>
>> *From:* Erlon Cruz [mailto:sombra...@gmail.com]
>> *Sent:* April 5, 2016 2:50
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Cc:* Zhangli (ISSP); Shenhong (C)
>> *Subject:* Re: [openstack-dev] [Cinder] About snapshot Rollback?
>>
>>
>>
>> Hi Chen,
>>
>>
>>
>> Not sure if I got you right, but I brought this topic up in
>> #openstack-cinder some days ago. The idea is to be able to roll back a
>> snapshot in Cinder. Today what is possible is to create a volume from
>> a snapshot. From the user's point of view, this is not ideal, as there are
>> several cases, if not the majority, where the purpose of the snapshot is
>> to revert to a desired state, not to keep the original volume. For some
>> backends, keeping the original volume means space consumption. This space
>> problem becomes more pronounced when we think about consistency groups. For
>> consistency groups, some backends might have to copy an entire filesystem
>> for each snapshot, consuming space and time. So, I think it would be
>> desirable to have the ability to revert snapshots.
>>
>>
>>
>> I know there have been efforts in the past [1] to implement that, but
>> for some reason the work was stopped. If you want to retake the effort,
>> please create a spec [2] so everybody can provide feedback.
>>
>>
>>
>> Erlon
>>
>>
>>
>>
>>
>> [1]
>> https://blueprints.launchpad.net/cinder/+spec/cinder-volume-rollback-snapshot
>>
>> [2] https://github.com/openstack/cinder-specs
>>
>>
>>
>> On Thu, Mar 24, 

[openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Hongbin Lu
Hi all,

In short, some Magnum team members proposed to store TLS certificates in the 
Keystone credential store. As Magnum PTL, I want to get agreement (or 
non-disagreement) from the OpenStack community in general, and the Keystone 
community in particular, before approving the direction.

In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for storing 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
constantly received requests to decouple Magnum from Barbican (because users 
normally don't have Barbican installed in their clouds). Some Magnum team 
members proposed to leverage Keystone credential store as a Barbican 
alternative [1]. Therefore, I want to confirm the Keystone team's position 
on this proposal (I remember someone from Keystone mentioned that this is an 
inappropriate use of Keystone. May I ask for further clarification?). Thanks 
in advance.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
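
For reference, a minimal sketch of what storing a certificate blob in the
Keystone credential store could look like with python-keystoneclient (the
auth values and the 'certificate' type string are assumptions for
illustration):

# Minimal sketch, assuming a reachable Keystone v3 endpoint; all auth
# values and the 'certificate' type string are illustrative only.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='magnum', password='secret',
                   project_name='service',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)

# /v3/credentials stores an opaque blob per user; Magnum could keep the
# TLS certificate (or a reference to it) in that blob.
keystone.credentials.create(user=sess.get_user_id(),
                            type='certificate',
                            blob='-----BEGIN CERTIFICATE-----\n...')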

Best regards,
Hongbin


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
I think part of the problem is that containers are mostly orthogonal to VMs/bare 
metal. Containers are a package for a single service. Multiple containers can run on a 
single VM/bare-metal host. Orchestration like Kubernetes comes in to turn a 
pool of VMs/bare metal into a system that can easily run multiple containers.

So, rather than concern itself with supporting launching both through a COE and 
through Nova, which are two totally different code paths, an OpenStack advanced 
service like Trove could just use a Magnum COE and have a UI that asks which 
existing Magnum COE to launch in, or alternately kick off the "Launch new 
Magnum COE" workflow in Horizon, then follow up with the Trove launch workflow. 
Trove then would support being able to use containers, users could potentially 
pack more containers onto their VMs than just Trove's, and it still would work 
with both bare metal and VMs the same way, since Magnum can launch on either. 
I'm afraid supporting both container and non-container deployment with Trove 
will be a large effort with very little code sharing. It may be easiest to have 
a flag version where non-container deployments are upgraded to containers and 
then non-container support is dropped.

As for the app-catalog use case, the app-catalog project 
(http://apps.openstack.org) is working on some of that.

Thanks,
Kevin
 
From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 12:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:
> On 11/04/16 18:05 +, Amrith Kumar wrote:
>> Adrian, thx for your detailed mail.
>>
>>
>>
>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>> think it
>> was Vancouver), there’s likely no silver bullet in this area. After that
>> conversation, and some further experimentation, I found that even if
>> Trove had
>> access to a single Compute API, there were other significant
>> complications
>> further down the road, and I didn’t pursue the project further at the
>> time.
>>
>
> Adrian, Amrith,
>
> I've spent enough time researching this area during the last month
> and my
> conclusion is pretty much the above. There's no silver bullet in this
> area and
> I'd argue there shouldn't be one. Containers, bare metal, and VMs differ
> in such
> a way (feature-wise) that it would not be good, as far as deploying
> databases goes,
> for there to be one compute API. Containers allow for a different
> deployment
> architecture than VMs, and so does bare metal.

Just some thoughts from me, but why focus on the
compute/container/baremetal API at all?

I'd almost like a way that just describes how my app should be
interconnected, what is required to get it going, and the features
and/or scheduling requirements for the different parts of that app.

To me it feels like this isn't a compute API or really a heat API but
something else. Maybe it's closer to the docker compose API/template
format or something like it.

Perhaps such a thing needs a new project. I'm not sure, but it does feel
like, as developers, we should be able to make such a thing that
still exposes the more advanced functionality of the underlying API so
that it can be used if really needed...

Maybe this is similar to an app-catalog, but that doesn't quite feel
like it's the right thing either so maybe somewhere in between...

IMHO it'd be nice to have a unified story around what this thing is, so
that we as a community can drive (as a single group) toward that; maybe
this is where the product working group can help and we as a developer
community can also try to unify behind it...

P.S. name for project should be 'silver' related, ha.

-Josh



Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-12 Thread Sean Dague
On 04/12/2016 03:37 PM, Matt Riedemann wrote:
> 
> 
> On 4/1/2016 8:45 AM, Sean Dague wrote:
>> The glance v2 work is currently blocked as there is no active spec,
>> would be great if someone from the glance team could get that rolling
>> again.
>>
>> I started digging back through the patches in detail to figure out if
>> there are some infrastructure bits we could get in early regardless.
>>
>> #1 - new methods for glance xenserver plugin
>>
>> Let's take a simplified approach on this patch -
>> https://review.openstack.org/#/c/266933 and only change the
>> xenapi/etc/xapi.d/plugins/ content in the following ways.
>>
>> - add upload/download_vhd_glance2 methods. Don't add an api parameter.
>> Add these methods mostly via copy/paste as we're optimizing for deleting
>> v1 not for fixing v1.
> 
> How are we planning on deleting the v1 image API? That would also mean
> deleting Nova's images API which is a proxy for glance v1. I didn't
> think we deleted Nova APIs? We can certainly deprecate it once we have
> glance v2 support.

This is pretty specific here in reference to the xenserver plugin as listed
in that patch. At some point we'll just delete all the things that
talk glance v1 in it, and you'll have to have glance v2 for it to work. That will
drop a bunch of untested code.

That being said, in general I think this still holds. The Glance team
wants to delete their v1 API entirely. We should be thinking about how
the Nova code ends up such that it's easy to delete all the v1
interfacing code in our tree in the cycle when Glance does that, to get rid
of the debt. So abstraction models are way less interesting than a very
high-level v1 / v2 branch, with all the code being distinct and clean for
each path.

-Sean

>> That will put some infrastructure in place so we can just call the v2
>> actions based on decision from higher up the stack.
>>
>> #2 - move discover major version back to glanceclient -
>> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108
>>
>>
>> I don't understand why this was ever in nova. This really should be
>>
>> glanceclient.discover... something. It uses internal methods from
>> glanceclient and internal structures of the content returned.
>>
>> Caching, if desired, should also be on the glanceclient side.
>> glanceclient.reset_version() could exist to clear any caching.
>>
>> #3 - Ideally we'd also have a
>>
>> client = glanceclient.AutoClient(endpoint, ... ) which basically does
>> glanceclient.discover and returns us the right client automatically.
>> client.version provides access to the version information if you need to
>> figure out what version of a client you have.
>>
>>
>> This starts to get to a point where the parts of versioning that
>> glanceclient should know about are in glanceclient, and when nova still
>> needs to know things it can ask for client.version.
>>
>> For instance make _extract_query_params -
>> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L448
>>
>> become an instance method that can
>>
>> if self._client.version >= 2:
>> ...
>> else:
>> ...
>>
>>
>> This isn't the whole story to get us home; however, chunking up some of
>> these pieces I think makes landing the rest of the story much
>> simpler. In nearly every case (except for the alt link in the image
>> view) we can easily have access to a real glance client. And the code
>> will be a ton easier to understand with some of the glanceclient
>> specific details behind the glanceclient interface.
>>
>> -Sean
>>
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-12 Thread Matt Riedemann



On 4/6/2016 6:15 AM, Mikhail Fedosin wrote:

Hello! Thanks for bringing this topic up.

First of all, as I mentioned before, great work was done in Mitaka,
so Glance v2 adoption in Nova is not a question of "if", and not even
a question of "when" (in Newton), but a question of "how".

There is a set of commits that makes this work:
1. Xen plugin
https://review.openstack.org/#/c/266933
Sean gave us several good suggestions on how we can improve it. In short:

  * Make this only add new glance method calls upload_vhd_glance2 /
download_vhd_glance2, which do the v2 work
  * Don't refactor existing code to do common code here; copy / paste /
update instead. We want the final code to be optimized for v1
delete, not for v1 fixing (this was done in previous patchsets, but
then I made the refactor to reduce the amount of code)

2. 'Show' image info
https://review.openstack.org/#/c/228578
Another 'schema-based' handler is added there. It transforms glance v2
image output to the format adopted in nova.image.

We have to take into account that image properties in v1 are passed as
HTTP headers, which makes them case insensitive. In v2, image info is
passed as a JSON document, and 'MyProperty' and 'myproperty' are two
different properties. Thanks to Brian Rosmaita who noticed it:
http://lists.openstack.org/pipermail/openstack-dev/2016-February/087519.html

Also, in v1 a user can create custom properties like 'owner' or
'created_at', and they are stored in a special dictionary, 'properties'. v2
images have a flat structure, which means that all custom properties are
located on the same level as the base properties. This means that if a v1
image has a custom property whose name coincides with the name of a
base property, that property will be ignored in v2.
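
A rough sketch of the kind of normalization such a handler has to do when
presenting a v2 image in the v1-style format Nova expects (the base
property set below is abbreviated and illustrative):

# Illustrative sketch: collapse a flat glance v2 image record into the
# v1-style dict Nova's image code expects; base properties abbreviated.
BASE_PROPERTIES = {'id', 'name', 'status', 'owner', 'created_at',
                   'updated_at', 'size', 'visibility', 'checksum'}

def v2_image_to_v1_view(v2_image):
    view = {'properties': {}}
    for key, value in v2_image.items():
        if key in BASE_PROPERTIES:
            view[key] = value
        else:
            # Custom properties live in the nested 'properties' dict in v1.
            view['properties'][key] = value
    return view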

3. Listing of artifacts in v2 way
https://review.openstack.org/#/c/238309
There I added additional handlers that transform v1 image filters into
v2 ones, along with sorting parameters.

'download' and 'delete' patches are included in #238309 since they are
trivial

4. 'creating' and 'updating' images'
https://review.openstack.org/#/c/259097

What was added there (a short sketch follows this list):

  * transformation to two-step image creation (creation of the instance in
the db + file uploading)
  * a special handler for creating active images with size '0' without
image data
  * the ability to set a custom location for an image
(the 'show_multiple_locations' option must be enabled in the glance config
for doing that)
  * a special handler to remove custom properties from the image:
the purge_props flag in v1 vs. the props_to_remove list in v2
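
A minimal sketch of that two-step flow with python-glanceclient's v2 API
('client' is assumed to be an authenticated v2 glanceclient instance):

# Step 1: create the image record in the database (status 'queued').
image = client.images.create(name='my-image',
                             disk_format='qcow2',
                             container_format='bare')

# Step 2: upload the bits; the image moves 'saving' -> 'active'.
with open('my-image.qcow2', 'rb') as image_data:
    client.images.upload(image.id, image_data)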

What else has to be done:

  * Splitting into 2 patches, 'create' and 'update', is required to make it
easier to review.
  * Matt suggested that it's better not to hardcode method logic for the v1
and v2 APIs. Rather, we should create a common base class which
is subclassed for v1/v2-specific callback (abstract) methods, and
then we could have a factory that, given the version, provides the
client impl we're going to deal with (sketched below).
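
A hedged sketch of that base-class-plus-factory shape (the class and
method names below are made up for illustration):

# Illustrative only: the names are invented; the real split would live
# in nova.image.glance.
import abc

class ImageService(abc.ABC):
    @abc.abstractmethod
    def _extract_query_params(self, kwargs):
        """Version-specific translation of nova filters to glance ones."""

class V1ImageService(ImageService):
    def _extract_query_params(self, kwargs):
        ...  # v1-specific handling

class V2ImageService(ImageService):
    def _extract_query_params(self, kwargs):
        ...  # v2-specific handling

def get_image_service(version):
    # Factory: given the discovered major version, return the right impl.
    return {1: V1ImageService, 2: V2ImageService}[version]()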


If we're going to literally delete all of the 'if version == 1' paths in 
the nova code a couple of releases from now, maybe this doesn't 
matter; it just seemed cleaner to me as a way to abstract the common 
parts and subclass the version-specific handling.




5. Also we have a bug: https://bugs.launchpad.net/nova/+bug/1539698

Thanks to Samuel Matzek who found it. There is a fix,
https://review.openstack.org/#/c/274203/ , but it has drawn conflicting
opinions. If you can suggest a better solution, I'll be happy :)


If you have any questions about how it was done feel free to send me
emails (mfedo...@mirantis.com) or ping me
in IRC (mfedosin)

And finally I really want to thank you all for supporting this
transition to v2 - it's a big update for OpenStack and without community
help it cannot be done.

Best regards,
Mikhail Fedosin




On Wed, Apr 6, 2016 at 9:35 AM, Nikhil Komawar <nik.koma...@gmail.com> wrote:

Inline comment.

On 4/1/16 10:16 AM, Sean Dague wrote:
 > On 04/01/2016 10:08 AM, Monty Taylor wrote:
 >> On 04/01/2016 08:45 AM, Sean Dague wrote:
 >>> The glance v2 work is currently blocked as there is no active spec,
 >>> would be great if someone from the glance team could get that
rolling
 >>> again.
 >>>
 >>> I started digging back through the patches in detail to figure
out if
 >>> there are some infrastructure bits we could get in early
regardless.
 >>>
 >>> #1 - new methods for glance xenserver plugin
 >>>
 >>> Let's take a simplified approach on this patch -
 >>> https://review.openstack.org/#/c/266933 and only change the
 >>> xenapi/etc/xapi.d/plugins/ content in the following ways.
 >>>
 >>> - add upload/download_vhd_glance2 methods. Don't add an api
parameter.
 >>> Add these methods mostly via copy/paste as we're optimizing for deleting
 >>> v1 not for fixing v1.
