Re: [openstack-dev] [ceilometer][tempest] disabling 'full' tempest tests for ceilometer changes in CI

2016-04-13 Thread GHANSHYAM MANN
+1, that makes sense.

Also, as Ceilometer and Aodh will be running tests from their plugins, I have
initiated a patch to remove those from Tempest:

- https://review.openstack.org/#/c/304992/
Regards
Ghanshyam Mann
+818011120698


On Tue, Apr 12, 2016 at 9:47 PM, Chris Dent  wrote:
> On Tue, 12 Apr 2016, gordon chung wrote:
>
>> i'd be in favour of dropping the full cases -- never really understood
>> why we ran all the tests everywhere. ceilometer/aodh are present at the
>> end of the workflow so i don't think we need to be concerned with any of
>> the other tests, only the ones explicitly related to ceilometer/aodh.
>
>
> +1
> --
> Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
> freenode: cdent tw: @anticdent


Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-13 Thread Juvonen, Tomi (Nokia - FI/Espoo)
> -Original Message-
> From: EXT Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Tuesday, April 12, 2016 4:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> 
> On Thu, Apr 07, 2016 at 06:36:20AM -0400, Sean Dague wrote:
> > On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > > Hi Nova, Ops, stackers,
> > >
> > > I am trying to figure out the different use cases and requirements there
> > > would be for host maintenance, and would like to get feedback, transfer
> > > all of this to a spec, and discuss what could and should land in Nova or
> > > other places.
> > >
> > > Working in the OPNFV Doctor project, which has the Telco perspective on
> > > the related requirements, I started to draft a spec based on something
> > > smaller that would be nice to have in Nova and less complicated to land
> > > in a single cycle. Anyhow, the feedback from the Nova API team was to
> > > look at this as a whole and gather more. This is why I am asking here
> > > and not just through the spec: to get input on requirements and use
> > > cases from a wider audience. Here is the draft spec, proposing at first
> > > just a maintenance window to be added:
> > > https://review.openstack.org/296995/
> > >
> > > Here are the OPNFV Doctor requirements:
> > > http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
> > > http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
> > > http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance
> > >
> > > Here is what I could derive as use cases, but I would ask for feedback
> > > to get more:
> > >
> > > As admin I want to set a maintenance period for a certain host.
> > >
> > > As admin I want to know when the host is ready for the actions to be
> > > done by admin during the maintenance, meaning the physical resources
> > > are emptied.
> > >
> > > As owner of a server I want to prepare for maintenance to minimize
> > > downtime, keep capacity at the needed level, and switch the HA service
> > > to a server not affected by the maintenance.
> > >
> > > As owner of a server I want to know when my servers will be down
> > > because of host maintenance, as it might be that servers are not moved
> > > to another host.
> > >
> > > As owner of a server I want to know if the host is to be removed
> > > entirely, so that instead of keeping my servers on the host during
> > > maintenance, I can move them somewhere else.
> > >
> > > As owner of a server I want to send an acknowledgement that I am ready
> > > for host maintenance, and I want to state whether my servers are to be
> > > moved or kept on the host. Removal and creation of servers is in the
> > > owner's control already. Optionally, server configuration data could
> > > hold information about automatic actions to be taken when a host goes
> > > down, unexpectedly or in a controlled manner, and likewise whether it
> > > is down permanently or only temporarily. Still, this needs
> > > acknowledgement from the server owner, as he needs time for an
> > > application-level controlled HA service switchover.
> >
> > While I definitely understand the value of these in a deployment, I'm a
> > bit concerned about baking all this structured data into Nova itself, as
> > it effectively means putting some degree of a ticket management system
> > into Nova that's specific to a workflow you've decided on here. Baked-in
> > workflow is hard to change when the needs of an industry do.
> >
> > My counter proposal on your spec was to provide a free-form field
> > associated with maintenance mode which could contain a URL linking to
> > the details. This could be a jira ticket, or a REST URL for some other
> > service. This would actually be much like how we handle images in Nova,
> > with a URL to glance to find more info.
> 
> FWIW, this is what we do in ironic. A maintenance boolean, and a
> maintenance_reason text field that operators can dump text/links/etc in.
> 
> As an example:
> $ ironic node-set-maintenance $uuid on --reason "Dead fiber // ticket 123
> // jroll 2016/04/12"
> 
> It's worked well for Rackspace's deployment, at least, and I seem to
> remember others being happy with it as well.
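
For illustration, the same flag can be driven from Python. A minimal sketch,
assuming the python-ironicclient API of the time (get_client() and
node.set_maintenance()); the credentials and UUID are placeholders:

# Hedged sketch: flip the maintenance flag on a node and attach a
# free-form reason, mirroring the CLI example quoted above.
from ironicclient import client

ironic = client.get_client(
    1,  # Ironic API version
    os_username='admin', os_password='secret',
    os_tenant_name='admin', os_auth_url='http://keystone:5000/v2.0')

# maint_reason is free-form text; operators can embed ticket URLs or ids.
ironic.node.set_maintenance(
    'NODE_UUID', True,
    maint_reason='Dead fiber // ticket 123 // jroll 2016/04/12')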

Thanks Jim, I can understand that this is enough for the basic need. Anyhow,
looking at the Telco requirements (the linked OPNFV requirements), it is a lot
more complicated. Also, I think that if we look at all kinds of user and
operator needs, there could be a configurable "maintenance engine" to support
different kinds of scenarios, from simpler IT cases to complicated Telco
cases. This could also help with all kinds of upgrades, if you know which
hosts are already at a certain level...

If it turns out that we already have all the APIs in Nova that we need, and we
do not want more content in Nova, then it would just be something new (or some
other existing project) running in OpenStack that would own the configurable
maintenance.

[openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-13 Thread koshiya maho
Hi All,

I have submitted patches [1] for returning the request-id to the caller,
on which Brant has raised concerns [2]:

Brant’s concern:
We've tried a couple of times to have the client return metadata for the
lists returned, and we wound up reverting the change both times because it
broke some users --
https://review.openstack.org/#/c/285549/

I don't know what the solution is to the problem, but this isn't going to
work. Maybe we need to add a flag or new functions, or start a whole new
client library, so that applications can opt in to this.


I have tried this manually in my local environment and found that it does not
fail, either with the patch referenced by Brant or with my changes.

Steps:
(1)clone openstack-infra/shade project
(2)create tox environment
(3)apply my patch (or above patch) to .tox environment
(4)run tests -- $tox -e py27 shade.tests.functional.test_xxx

You can see all tests are passing without any error. 
I have also verified that the _ListWithMeta class is returned from
keystoneclient, by adding pdb in the keystoneclient/base.py _list method.
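
For reference, a minimal sketch of the idea behind _ListWithMeta; the class
name and values here are illustrative only, the real implementation lives in
the patches under [1]:

# A list subclass that behaves like a plain list but also carries the
# request id(s) of the underlying API call(s).
class ListWithMeta(list):
    def __init__(self, values, request_ids):
        super(ListWithMeta, self).__init__(values)
        # One logical call may span several HTTP requests (pagination),
        # so keep a list of ids rather than a single string.
        self.request_ids = request_ids

projects = ListWithMeta(['demo', 'admin'], ['req-2f8c4e5c-...'])
assert projects == ['demo', 'admin']  # still a plain list to old callers
print(projects.request_ids)           # new metadata for opted-in callers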

I would like to request all keystone cores to give their suggestions on this.

[1] https://review.openstack.org/#q,topic:bp/return-request-id-to-caller,n,z
[2] https://review.openstack.org/#/c/261188/

Thank you,

--
Maho Koshiya
E-Mail : koshiya.m...@po.ntts.co.jp





[openstack-dev] [neutron][taas] Problem receiving mirrored ingress traffic and a solution suggestion

2016-04-13 Thread Simhon Doctori שמחון דוקטורי
Hi Anil and all,

Continuing the discussion from IRC about ingress traffic to a VM not being
mirrored: indeed, it does look like the bug mentioned at
https://bugs.launchpad.net/tap-as-a-service/+bug/1544176.
I am using Liberty, OVS 2.0.2, Devstack, single node.

As I mentioned, the problem is due to a flow rule whose match includes the
VLAN tag. Since the VM port receives its data after OVS has stripped the
virtual network's VLAN, there is no reason to match on a VLAN, and this rule
never gets any hits:

cookie=0x0, duration=59625.138s, table=0, n_packets=0, n_bytes=0,
idle_age=59625, priority=20,dl_vlan=3,dl_dst=fa:16:3e:d3:60:16
actions=NORMAL,mod_vlan_vid:3901,output:11

IMHO, the solution should be a rule with no VLAN in the match AND an action
whose output port is the destination port. Since you already match on the
destination MAC, why not output to the destination VM interface, together
with the patch-int-tap interface? This rule works for me:

cookie=0x0, duration=20.422s, table=0, n_packets=42, n_bytes=3460,
idle_age=1, priority=20,dl_dst=fa:16:3e:d3:60:16
actions=output:14,mod_vlan_vid:3901,output:11

As you can see, there is no VLAN in the match, and there are two output
ports: 14 for the VM interface, and 11 for the patch interface together with
the VLAN.

Simhon Doctori
imVision Technologies.


[openstack-dev] [vitrage] vitrage weekly meeting today

2016-04-13 Thread Afek, Ifat (Nokia - IL)
Hi,

Due to daylight saving time issues, the Vitrage meeting today will be at
9:00 UTC, which might be an hour later than usual.

Ifat.





Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-13 Thread Yuiko Takada
Hi,

I also want to discuss this at a summit session.

2016-04-13 0:41 GMT+09:00 Ruby Loo :

> Yes, I think it would be good to have a summit session on that. However,
> before the session, it would really be helpful if the folks with proposals
> got together and/or reviewed each other's proposals, and summarized their
> findings.
>

I've summarized all of the related proposals.

(1)Add driver using Socat
https://review.openstack.org/#/c/293827/

* Pros:
- No impact on other components
- No need to change any other Ironic drivers (like IPMIShellinaboxConsole)
- No need to bump the API microversion/RPC

* Cons:
- Does not output a log file

(2)Add driver starting ironic-console-server
https://review.openstack.org/#/c/302291/
(There is no spec, yet)

* Pros:
- No impact on other components
- Outputs a log file
- No need to change any other Ironic drivers (like IPMIShellinaboxConsole)
- No new Ironic services required, only adds tools

* Cons:
- Need to bump the API microversion/RPC

(3)Add a custom HTTP proxy to Nova
https://review.openstack.org/#/c/300582/

* Pros:
- No changes needed to the Ironic API

* Cons:
- Needs Nova API changes (microversion bump)
- Needs Horizon changes
- Does not output a log file

(4)Add Ironic-ipmiproxy server
https://review.openstack.org/#/c/296869/

* Pros:
- No impact on other components
- Outputs a log file
- IPMIShellinaboxConsole will also be available via Horizon

* Cons:
- Needs IPMIShellinaboxConsole changes?
- Need to bump the API microversion/RPC

If there is any mistake, please let me know.


Best Regards,
Yuiko Takada

2016-04-13 0:41 GMT+09:00 Ruby Loo :

> Yes, I think it would be good to have a summit session on that. However,
> before the session, it would really be helpful if the folks with proposals
> got together and/or reviewed each other's proposals, and summarized their
> findings. You may find after reviewing the proposals, that eg only 2 are
> really different. Or they several have merit because they are addressing
> slightly different issues. That would make it easier to
> present/discuss/decide at the session.
>
> --ruby
>
>
> On 12 April 2016 at 09:17, Jim Rollenhagen  wrote:
>
>> On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
>> > Maybe we can continue the discussion here, as there's not enough time in
>> the
>> > irc meeting :)
>>
>> Someone mentioned this would make a good summit session, as there's a
>> few competing proposals that are all good options. I do welcome
>> discussion here until then, but I'm going to put it on the schedule. :)
>>
>> // jim
>>
>> >
>> > On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu 
>> wrote:
>> >
>> > >
>> > > Ironic is currently using shellinabox to provide a serial console, but
>> > > it's not compatible
>> > > with nova, so I would like to propose a new console type and a custom
>> HTTP
>> > > proxy [1]
>> > > which validate token and connect to ironic console from nova.
>> > >
>> > > On Horizon side, we should add support for the new console type [2] as
>> > > well, here are some screenshots from my local environment.
>> > >
>> > > Additionally, shellinabox console ports management should be improved
>> in
>> > > ironic, instead of manually specified, we should introduce dynamically
>> > > allocation/deallocation [3] mechanism.
>> > >
>> > > Functionality is being implemented in Nova, Horizon and Ironic:
>> > > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
>> > > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
>> > > https://review.openstack.org/#/q/status:open+topic:bug/1526371
>> > >
>> > >
>> > > PS: to achieve this goal, we can also add a new console driver in
>> ironic
>> > > [4], but I think it doesn't conflict with this, as shellinabox is
>> capable
>> > > to integrate with nova, and we should support all console drivers.
>> > >
>> > >
>> > > [1]
>> https://blueprints.launchpad.net/nova/+spec/shellinabox-http-proxy
>> > > [2]
>> > >
>> https://blueprints.launchpad.net/horizon/+spec/ironic-shellinabox-console
>> > > [3] https://bugs.launchpad.net/ironic/+bug/1526371
>> > > [4] https://bugs.launchpad.net/ironic/+bug/1553083
>> > >
>> > > --
>> > > Best Regards,
>> > > Zhenguo Niu
>> > >
>> >
>> >
>> >
>> > --
>> > Best Regards,
>> > Zhenguo Niu

Re: [openstack-dev] [TripleO][CI] Ability to reproduce failures

2016-04-13 Thread Steven Hardy
On Tue, Apr 12, 2016 at 11:08:28PM +0200, Gabriele Cerami wrote:
> On Fri, 2016-04-08 at 16:18 +0100, Steven Hardy wrote:
> 
> > Note we're not using devtest at all anymore, the developer script
> > many
> > folks use is tripleo.sh:
> 
> So, I followed the flow of the gate jobs starting from jenkins builder
> script, and it seems like it's using devtest (or maybe something I
> consider to be devtest but it's not, is devtest the part that creates
> some environments, wait for them to be locked by gearman, and so on ?)

So I think the confusion may stem from the fact that ./docs/TripleO-ci.rst is
out of date.  Derek can confirm, but I think although there may be a few
residual devtest pieces associated with managing the testenv VMs, there's
nothing related to devtest used in the actual CI run itself anymore.

See this commit:

https://github.com/openstack-infra/tripleo-ci/commit/a85deb848007f0860ac32ac0096c5e45fe899cc5

Since then we've moved to using tripleo.sh to drive most steps of the CI
run, and many developers are using it also.  Previously the same was true
of the devtest.sh script in tripleo-incubator, but now that is totally
deprecated and unused (that it still exists in the repo is an oversight).

> What I meant with "the script I'm using (created by Sagi) is not
> creating the same enviroment" is that is not using the same test env
> (with gearman and such) that the ci scripts are currently using.

Sure, I guess my point is that for 99% of issues, the method used to create
the VM is not important.  We use a slightly different method in CI to
manage the VMs than in most developer environments, but if the requirement
is to reproduce CI failures, you mostly care about deploying the exact same
software, not so much how virsh was driven to create the VMs.

Thanks for digging into this, it's great to have some fresh eyes
highlighting these sorts of issues! :)

Steve



Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Thierry Carrez

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to the neutron sfc stuff, where you just say what needs
to connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to produce a
final set of deployment scripts and then just run it through the meat
grinder. :)

It would be awesome to use. It may be very difficult to implement.

If you ignore the non-container use case, I think it might be fairly easily
mappable to all three COEs though.


This feels like Heat with a more readable descriptive language. I don't
really like this approach, because you end up with the lowest common
denominator of the COEs' functionality. They are all different, and they are
at the peak of the differentiation phase. The LCD is bound to be pretty
basic.


This approach may be attractive for us as infrastructure providers, but I
also know it is not attractive to users who used Kubernetes before and want
to continue to use Kubernetes (and don't really want to care about whether
OpenStack is running under the hood). They don't want to learn another
descriptor language or API; they just want to learn the Kubernetes
description model and API and take advantage of its unique capabilities.


In summary, this may be a good solution for *existing* OpenStack users to
start playing with containerized workloads. But it is not a good solution to
attract the container cool kids to using OpenStack as their base
infrastructure provider. For those, we need to make it as transparent and
simple as possible for them to use their usual tools to deploy on top of
OpenStack clouds. The more they can ignore that we are there, the better.


--
Thierry Carrez (ttx)



[openstack-dev] [magnum] Microversions usage

2016-04-13 Thread Jamie Hannaford
I recently discovered that Magnum supports microversions [1] - but it doesn't
seem to be incrementing the API version in accordance with new functionality
releases. Is this due to the v1 API not being locked yet? What is the status
of freezing the API, so that we can move to a microversions-based release
cycle?


The API-WG has recently published a guideline [2] that is very slightly 
different from how Magnum has implemented it. I'm in the process of filing a 
few bugs to address this.


Jamie


[1] https://review.openstack.org/#/c/184975/

[2] 
https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html
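
For illustration, the guideline in [2] negotiates the version with a single
header. A minimal sketch, assuming "container-infra" is Magnum's service type
and using placeholder endpoint/token values:

# Hedged sketch of a guideline-style microversion request against Magnum.
import requests

resp = requests.get(
    'http://magnum-api:9511/v1/bays',
    headers={
        'X-Auth-Token': 'TOKEN',
        # API-WG form: "OpenStack-API-Version: <service-type> <version>"
        'OpenStack-API-Version': 'container-infra 1.1',
    })
# A compliant server echoes the negotiated version back in the same header.
print(resp.headers.get('OpenStack-API-Version'))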





Re: [openstack-dev] [vitrage] vitrage weekly meeting today

2016-04-13 Thread Afek, Ifat (Nokia - IL)
Meeting minutes, for those of you who missed it: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-04-13-09.00.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-04-13-09.00.log.html

I'll send an update next week regarding the meeting time.

Ifat.

> -Original Message-
> From: EXT Afek, Ifat (Nokia - IL) [mailto:ifat.a...@nokia.com]
> Sent: Wednesday, April 13, 2016 11:10 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [vitrage] vitrage weekly meeting today
> 
> Hi,
> 
> Due to daylight saving time issues - Vitrage meeting today will be at
> 9:00 UTC, which might be an hour later than usual.
> 
> Ifat.
> 





Re: [openstack-dev] [kolla][vote] Nit-picking documentation changes

2016-04-13 Thread Martin André
On Tue, Apr 12, 2016 at 10:05 PM, Steve Gordon  wrote:

> - Original Message -
> > From: "Jeff Peeler" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> >
> > On Mon, Apr 11, 2016 at 3:37 AM, Steven Dake (stdake) 
> > wrote:
> > > Hey folks,
> > >
> > > The reviewers in Kolla tend to nit-pick the quickstart guide to death
> > > during reviews.  I'd like to keep that high bar in place for the QSG,
> > > because it is our most important piece of documentation at present.
> > > However, when new contributors see the nitpicking going on in reviews,
> > > I think they may get discouraged about writing documentation for other
> > > parts of Kolla.
> > >
> > > I'd prefer if the core reviewers held a lower bar for docs not related
> > > to the philosophy or quickstart guide document.  We can always iterate
> > > on these new documents (like the operator guide) to improve them and
> > > raise the bar on their quality over time, as we have done with the
> > > quickstart guide.  That way contributors don't feel nitpicked to death
> > > and avoid improving the documentation.
> > >
> > > If you are a core reviewer and agree with this approach please +1, if
> > > not please -1.
> >
> > I'm fine with relaxing the reviews on documentation. However, there's
> > a difference between having a missed comma versus the whole patch
> > being littered with misspellings. In general in the former scenario I
> > try to comment and leave the code review set at 0, hoping the
> > contributor fixes it. The danger is that a 0 vote people sometimes
> > miss, but it doesn't block progress.
>
> My typical experience with (very) occasional drive by commits to
> operational project docs (albeit not Kolla) is that the type of nit that
> comes up is more typically -1 thanks for adding X, can you also add Y and
> Z. Before you know it a simple drive by commit to flesh out one area has
> become an expectation to write an entire chapter.
>

That's because you're a native speaker and you write proper English to
begin with :)

We should be asking ourselves a simple question when reviewing a
documentation patch: "does it make the documentation better?" Often the
answer is yes; that's why I'm trying to ask for additional improvements in
follow-up patches.

Regarding spelling or grammatical mistakes, why not fix them now, while the
patch is still hot, when we spot one in new documentation being written?
It's more time-consuming to fix them later. If needed, a native speaker can
take over the patch and correct the English.

Martin


> -Steve
>


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-13 Thread tie...@vn.fujitsu.com
Hi

>From: Yuiko Takada [mailto:yuikotakada0...@gmail.com] 
> 
>I've summarized all of related proposals.
>

Thanks for your summary, Yuiko Takada. It is great. I would like to add some
more comments below.

>(4)Add Ironic-ipmiproxy server
>https://review.openstack.org/#/c/296869/
>
>* Pros: 
>- There is no influence to other components
>- Output log file
>- IPMIShellinaboxConsole will be also available via Horizon
>
>* Cons: 
>- Need IPMIShellinaboxConsole changes?
>- Need to bump API microversion/RPC

* Pros:
- No impact on other components
- Outputs a log file
- IPMIShellinaboxConsole will also be available via Horizon (backward
compatibility)
- The proxy can serve across Ironic conductors (in the case of multiple
conductors)

* Cons:
- Needs IPMIShellinaboxConsole changes (only adds some code; does not change
its existing behavior)
- Need to bump the API microversion/RPC

For the other specs, I currently don't have any comments, as Yuiko Takada's
summary is very detailed.

Best regards,
Dao Cong Tien

>
>
>From: Yuiko Takada [mailto:yuikotakada0...@gmail.com] 
>Sent: Wednesday, April 13, 2016 3:47 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [ironic][nova][horizon] Serial console support 
>for ironic instances
>
>Hi,
>
>I also want to discuss about it at summit session.
>
>2016-04-13 0:41 GMT+09:00 Ruby Loo :
>Yes, I think it would be good to have a summit session on that. However, 
>before the session, it would really be helpful if the folks with proposals got 
>together and/or reviewed each other's proposals, and summarized their 
>findings. 
> 
>I've summarized all of related proposals.
>
>(1)Add driver using Socat
>https://review.openstack.org/#/c/293827/
>
>* Pros: 
>- There is no influence to other components
>- Don't need to change any other Ironic drivers(like IPMIShellinaboxConsole)
>- Don't need to bump API microversion/RPC
>
>* Cons: 
>- Don't output log file
>
>(2)Add driver starting ironic-console-server
>https://review.openstack.org/#/c/302291/
>(There is no spec, yet)
>
>* Pros: 
>- There is no influence to other components
>- Output log file
>- Don't need to change any other Ironic drivers(like IPMIShellinaboxConsole)
>- No adding any Ironic services required, only add tools
>
>* Cons:
>- Need to bump API microversion/RPC
>
>(3)Add a custom HTTP proxy to Nova
>https://review.openstack.org/#/c/300582/
>
>* Pros: 
>- Don't need any change to Ironic API
>
>* Cons: 
>- Need Nova API changes(bump microversion)
>- Need Horizon changes
>- Don't output log file
>
>(4)Add Ironic-ipmiproxy server
>https://review.openstack.org/#/c/296869/
>
>* Pros: 
>- There is no influence to other components
>- Output log file
>- IPMIShellinaboxConsole will be also available via Horizon
>
>* Cons: 
>- Need IPMIShellinaboxConsole changes?
>- Need to bump API microversion/RPC
>
>If there is any mistake, please give me comment.
>
>
>Best Regards,
>Yuiko Takada
>
>2016-04-13 0:41 GMT+09:00 Ruby Loo :
>Yes, I think it would be good to have a summit session on that. However, 
>before the session, it would really be helpful if the folks with proposals got 
>together and/or reviewed each other's proposals, and summarized their 
>findings. You may find after reviewing the proposals, that eg only 2 are 
>really different. Or they several have merit because they are addressing 
>slightly different issues. That would make it easier to present/discuss/decide 
>at the session.
>
>--ruby
>
>
>On 12 April 2016 at 09:17, Jim Rollenhagen  wrote:
>On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
>> Maybe we can continue the discussion here, as there's not enough time in the
>> irc meeting :)
>
>Someone mentioned this would make a good summit session, as there's a
>few competing proposals that are all good options. I do welcome
>discussion here until then, but I'm going to put it on the schedule. :)
>
>// jim
>
>>
>> On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu  wrote:
>>
>> >
>> > Ironic is currently using shellinabox to provide a serial console, but
>> > it's not compatible
>> > with nova, so I would like to propose a new console type and a custom HTTP
>> > proxy [1]
>> > which validate token and connect to ironic console from nova.
>> >
>> > On Horizon side, we should add support for the new console type [2] as
>> > well, here are some screenshots from my local environment.
>> >
>> > Additionally, shellinabox console ports management should be improved in
>> > ironic, instead of manually specified, we should introduce dynamically
>> > allocation/deallocation [3] mechanism.
>> >
>> > Functionality is being implemented in Nova, Horizon and Ironic:
>> > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
>> > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
>> > https://review.openstack.org/#/q/status:open+topic:bug/1526371
>> >
>> >
>> > PS: to achieve this goal, we can also add a new console driver in iro

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Michał Dulko
On 04/13/2016 11:16 AM, Thierry Carrez wrote:
> Fox, Kevin M wrote:
>> I think my head just exploded. :)
>>
>> That idea's similar to neutron sfc stuff, where you just say what
>> needs to connect to what, and it figures out the plumbing.
>>
>> Ideally, it would map somehow to heat & docker COE & neutron sfc to
>> produce a final set of deployment scripts and then just runs it
>> through the meat grinder. :)
>>
>> It would be awesome to use. It may be very difficult to implement.
>>
>> If you ignore the non container use case, I think it might be fairly
>> easily mappable to all three COE's though.
>
> This feels like Heat with a more readable descriptive language. I
> don't really like this approach, because you end up with the lowest
> common denominator between COE's functionality. They are all
> different. And they are at the peak of the differentiation phase. 

Are we able to define that lowest common denominator at this stage?
Maybe that subset of features is still valuable?



Re: [openstack-dev] [Neutron] Newton blueprints call for action

2016-04-13 Thread Ilya Chukhnakov
Hello everyone!

Count me in for the VLAN-aware VMs. I already have a [seemingly working]
proof-of-concept for the OVS driver case and expect to submit it for review
in a few days.


> On 13 Apr 2016, at 12:51, Oleg Bondarev  wrote:
> 
> 
> -- Forwarded message --
> From: Sukhdev Kapur <sukhdevka...@gmail.com>
> Date: Mon, Apr 11, 2016 at 7:33 AM
> Subject: Re: [openstack-dev] [Neutron] Newton blueprints call for action
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Cc: Bence Romsics, yan.songm...@zte.com.cn
>
> 
> 
> Hi Rossella, 
> 
> Good to hear that you will follow through with this. Ironic is looking for 
> this API as well for bare metal deployments. We would love to work with you 
> to make sure that this API/implementation works for all servers (VMs as
> well as BMs).
> 
> Thanks
> -Sukhdev
> 
> 
> On Wed, Apr 6, 2016 at 4:32 AM, Rossella Sblendido wrote:
> 
> 
> On 04/05/2016 05:43 AM, Armando M. wrote:
> >
> > With this email I would like to appeal to the people in CC to report
> > back their interest in continuing working on these items in their
> > respective capacities, and/or the wider community, in case new owners
> > need to be identified.
> >
> > I look forward to hearing back, hoping we can find the right resources
> > to resume progress, and bring these important requirements to completion
> > in time for Newton.
> 
> Count me in for the vlan-aware-vms. We have now a solid design, it's
> only a matter of putting it into code. I will help any way I can, I
> really want to see this feature in Newton.
> 
> cheers,
> 
> Rossella
> 


Re: [openstack-dev] [magnum] Microversions usage

2016-04-13 Thread Eli Qiao

Hi Jamie

On 2016年04月13日 17:26, Jamie Hannaford wrote:


I recently discovered that Magnum supports microversions [1] - but it
doesn't seem to be incrementing the API in accordance with new 
functionality releases. Is this due to the v1 API not being locked 
yet? What is the status of freezing the API, so that we can move to a 
microversions-based release cycle?




Microversions are only used after the API is frozen (to avoid breaking SDK
users), but for now the team doesn't think the Magnum APIs are mature yet.

I remember suggesting a microversion bump in one of the Magnum meetings;
that was the answer I got.


The API-WG has recently published a guideline [2] that is 
very slightly different from how Magnum has implemented it. I'm in the 
process of filing a few bugs to address this.





Yes, I agree we should align with the API-WG guideline. Feel free to submit
patches to address the differences; I am glad to help review (yes, I've been
involved in Nova's microversion development).

Thanks,
Eli.


Jamie


[1] https://review.openstack.org/#/c/184975/

[2] 
https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html 





--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China


[openstack-dev] [neutron][taas] service show omits the network id

2016-04-13 Thread Simhon Doctori שמחון דוקטורי
Hi,

Although the network id is essential argument when creating the service, it
is not shown when doing show for the service. Either using cli or rest-api.












{
  "tap_services": [
    {
      "tenant_id": "619ce6d9192c494fbf3dd7947ef78f9f",
      "port_id": "2250affe-6a38-4678-b4ab-969c36cc6f12",
      "description": "",
      "name": "tapServicePort",
      "id": "5afd1d73-0d8c-4931-bc6c-c8388ba6508f"
    }
  ]
}

(neutron) tap-service-show 5afd1d73-0d8c-4931-bc6c-c8388ba6508f
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description |                                      |
| id          | 5afd1d73-0d8c-4931-bc6c-c8388ba6508f |
| name        | tapServicePort                       |
| port_id     | 2250affe-6a38-4678-b4ab-969c36cc6f12 |
| tenant_id   | 619ce6d9192c494fbf3dd7947ef78f9f     |
+-------------+--------------------------------------+


Is this on purpose, or a bug?

Simhon Doctori
imVision Technologies.


Re: [openstack-dev] [Neutron] Newton Design summit schedule - Draft

2016-04-13 Thread John Schwarz
Hi guys,

Note that the wiki page's timestamp-to-session-title mapping was a bit
outdated, so I've corrected the discrepancies where I've seen them.
Specifically, the "future of Neutron architecture" and "future of Neutron
client" sessions were swapped.
John.

On Tue, Apr 12, 2016 at 9:47 PM, Armando M.  wrote:
>
>
> On 12 April 2016 at 07:08, Michael Johnson  wrote:
>>
>> Armando,
>>
>> Is there any way we can move the "Neutron: Development track: future
>> of *-aas projects" session?  It overlaps with the LBaaS talk:
>>
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/6893?goback=1
>>
>> Michael
>
>
> Swapped with the first slot of the day. I also loaded etherpads here:
>
> https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Neutron
>
> Cheers,
> Armando
>>
>>
>>
>> On Mon, Apr 11, 2016 at 9:56 PM, Armando M.  wrote:
>> > Hi folks,
>> >
>> > A provisional schedule for the Neutron project is available [1]. I am
>> > still
>> > working with the session chairs and going through/ironing out some
>> > details
>> > as well as gathering input from [2].
>> >
>> > I hope I can get something more final by the end of this week. In the
>> > meantime, please free to ask questions/provide comments.
>> >
>> > Many thanks,
>> > Armando
>> >
>> > [1]
>> >
>> > https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Neutron%3A
>> > [2] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
>> >
>> >



-- 
John Schwarz,
Red Hat.



[openstack-dev] [release][oslo] oslo.privsep 1.5.0 release (newton)

2016-04-13 Thread no-reply
We are tickled pink to announce the release of:

oslo.privsep 1.5.0: OpenStack library for privilege separation

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.3.0..1.5.0


559e035 Updated from global requirements
8b2563f Updated from global requirements
030f36f Switch to msgpack for serialization
bd0baf5 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_privsep/comm.py  | 50 +--
requirements.txt  |  3 ++-
test-requirements.txt |  2 +-
5 files changed, 30 insertions(+), 50 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3e6d186..4d4dd9a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.config>=3.4.0 # Apache-2.0
+oslo.config>=3.9.0 # Apache-2.0
@@ -13,0 +14 @@ greenlet>=0.3.2 # MIT
+msgpack-python>=0.4.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index a8302e2..3c4d32f 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ mock>=1.2 # BSD
-fixtures>=1.3.1 # Apache-2.0/BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD





[openstack-dev] [release][ironic] ironic-lib 1.3.0 release (newton)

2016-04-13 Thread no-reply
We are content to announce the release of:

ironic-lib 1.3.0: Ironic common library

This release is part of the newton release series.

With package available at:

https://pypi.python.org/pypi/ironic-lib

For more details, please see below.

Changes in ironic-lib 1.1.0..1.3.0
--

9eaad70 Updated from global requirements
733a40b Explore config options to oslo-config-generator
20720a8 Clean up test-requirements
042aa9a use wipefs to erase FS meta information
313f559 Updated from global requirements
3c4af65 Move eventlet to test-requirements. Remove greenlet.
c0d87c8 Tests to not depend on psmisc to be installed

Diffstat (except docs and test files)
-

etc/rootwrap.d/ironic-lib.filters |  5 +++
ironic_lib/disk_partitioner.py|  5 +++
ironic_lib/disk_utils.py  | 52 +--
ironic_lib/utils.py   |  5 +++
requirements.txt  |  4 +--
setup.cfg |  6 
test-requirements.txt | 10 ++
9 files changed, 56 insertions(+), 76 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 5cfcbd3..4bd9546 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6,2 +5,0 @@ pbr>=1.6 # Apache-2.0
-eventlet!=0.18.3,>=0.18.2 # MIT
-greenlet>=0.3.2 # MIT
@@ -9 +7 @@ oslo.concurrency>=3.5.0 # Apache-2.0
-oslo.config>=3.4.0 # Apache-2.0
+oslo.config>=3.9.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 09e18c2..c96af19 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6 +6 @@ coverage>=3.6 # Apache-2.0
-discover # BSD
+eventlet!=0.18.3,>=0.18.2 # MIT
@@ -8 +8,2 @@ hacking<0.11,>=0.10.0
-oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+mock>=1.2 # BSD
+os-testr>=0.4.1 # Apache-2.0
@@ -10,3 +10,0 @@ oslotest>=1.10.0 # Apache-2.0
-pylint==1.4.5 # GNU GPL v2
-simplejson>=2.2.0 # MIT
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
@@ -15,2 +12,0 @@ testtools>=1.4.0 # MIT
-mox3>=0.7.0 # Apache-2.0
-os-testr>=0.4.1 # Apache-2.0





[openstack-dev] [release][neutron] python-neutronclient 4.2.0 release (newton)

2016-04-13 Thread no-reply
We are pumped to announce the release of:

python-neutronclient 4.2.0: CLI and Client Library for OpenStack
Networking

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/python-neutronclient

With package available at:

https://pypi.python.org/pypi/python-neutronclient

Please report issues through launchpad:

http://bugs.launchpad.net/python-neutronclient

For more details, please see below.

4.2.0
^

Adding new QoS DSCP marking rule commands.


New Features


* New create, update, list, show, and delete commands are added for
  QoS DSCP marking rule functionality.

Changes in python-neutronclient 4.1.1..4.2.0


fc2950c Change try..except to assertRaises in UT
726293d Updated from global requirements
1b8af3e Change --no-gateway help text
d8fa792 Log SHA1 hash of X-Auth-Token value
4012d8b Remove APIParamsCall decorator
7180444 Fix assertNotEqual parameters
2f571ac organize the release notes consistently
3052b61 Update reno for stable/mitaka
2db432f Add parser options for description on resources
5d28651 Updated from global requirements
8ff7d5c Adding DSCP marking changes to neutronclient
7621f6e Fixed typos in securitygroup.py
150cc4c Do not print 'Created' message when using non-table formatter

Diffstat (except docs and test files)
-

neutronclient/common/utils.py  |  12 +-
neutronclient/neutron/v2_0/__init__.py |   5 +-
neutronclient/neutron/v2_0/floatingip.py   |   5 +-
neutronclient/neutron/v2_0/network.py  |   6 +-
neutronclient/neutron/v2_0/port.py |   6 +-
.../neutron/v2_0/qos/bandwidth_limit_rule.py   |  10 +-
.../neutron/v2_0/qos/dscp_marking_rule.py  | 112 
neutronclient/neutron/v2_0/qos/rule.py |   5 +
neutronclient/neutron/v2_0/router.py   |  11 +-
neutronclient/neutron/v2_0/securitygroup.py|  16 +-
neutronclient/neutron/v2_0/subnet.py   |   8 +-
neutronclient/neutron/v2_0/subnetpool.py   |   6 +-
neutronclient/shell.py |  16 ++
.../unit/qos/test_cli20_bandwidth_limit_rule.py| 137 +
neutronclient/v2_0/client.py   | 307 +++--
releasenotes/notes/dscp_qos-4a26d3c0363624b0.yaml  |   6 +
releasenotes/source/index.rst  |  13 +-
releasenotes/source/mitaka.rst |   6 +
releasenotes/source/unreleased.rst |   5 +
requirements.txt   |   2 +-
test-requirements.txt  |   2 +-
33 files changed, 614 insertions(+), 492 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c1142eb..cc6fd9d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@ pbr>=1.6 # Apache-2.0
-cliff!=1.16.0,>=1.15.0 # Apache-2.0
+cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 7bece79..a989350 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -14 +14 @@ python-subunit>=0.0.18 # Apache-2.0/BSD
-reno>=0.1.1 # Apache2
+reno>=1.6.2 # Apache2





Re: [openstack-dev] [vitrage] Cinder Datasource

2016-04-13 Thread Erlon Cruz
Can you give a bit more context? Where is the design you mentioned? By
datasource, do you mean the Cinder service?
There is already some work [1] to allow Cinder to attach volumes to bare
metal servers.


[1] https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova

On Tue, Apr 12, 2016 at 4:02 AM, Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> Hi,
>
> Here is the design of the Cinder datasource of Vitrage.
>
> Currently Cinder datasource is handling only Volumes.
> This datasource listens to cinder volumes notifications on the oslo bus,
> and updates the topology accordingly.
> Currently Cinder Volume can be attached only to instance (Cinder design).
>
> Future Steps:
> We want to perform research on what other data we can bring from Cinder.
> For example:
> 1. To what zone we can connect the volume
> 2. To what image we can connect the volume
>
> Alexey
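
For context, a minimal sketch of consuming Cinder volume notifications from
the oslo bus with oslo.messaging's notification listener; the topic and the
endpoint body are illustrative, not Vitrage's actual code:

import oslo_messaging
from oslo_config import cfg

class VolumeEndpoint(object):
    # Only pass volume.* events (e.g. volume.create.end, volume.attach.end).
    filter_rule = oslo_messaging.NotificationFilter(event_type=r'volume\..*')

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # A real datasource would update its topology graph here.
        print('got %s for volume %s' % (event_type, payload.get('volume_id')))

transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [VolumeEndpoint()], executor='threading')
listener.start()
listener.wait()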


Re: [openstack-dev] [python-keystoneclient] Return request-id to caller

2016-04-13 Thread David Stanek
On Wed, Apr 13, 2016 at 3:26 AM koshiya maho 
wrote:

>
> My request to all keystone cores to give their suggestions about the same.
>
>
I'll test this a little and see if I can figure out how it breaks.

Overall, I'm not really a fan of this design. It's just a hack to add
attributes where they don't belong. Long term, I think this will be hard to
maintain.

-- David


Re: [openstack-dev] [TripleO][CI] Ability to reproduce failures

2016-04-13 Thread Derek Higgins
On 13 April 2016 at 09:58, Steven Hardy  wrote:
> On Tue, Apr 12, 2016 at 11:08:28PM +0200, Gabriele Cerami wrote:
>> On Fri, 2016-04-08 at 16:18 +0100, Steven Hardy wrote:
>>
>> > Note we're not using devtest at all anymore, the developer script
>> > many
>> > folks use is tripleo.sh:
>>
>> So, I followed the flow of the gate jobs starting from jenkins builder
>> script, and it seems like it's using devtest (or maybe something I
>> consider to be devtest but it's not, is devtest the part that creates
>> some environments, wait for them to be locked by gearman, and so on ?)
>
> So I think the confusion may step from the fact ./docs/TripleO-ci.rst is
> out of date.  Derek can confirm, but I think although there may be a few
> residual devtest pieces associated with managing the testenv VMs, there's
> nothing related to devtest used in the actual CI run itself anymore.
>
> See this commit:
>
> https://github.com/openstack-infra/tripleo-ci/commit/a85deb848007f0860ac32ac0096c5e45fe899cc5
>
> Since then we've moved to using tripleo.sh to drive most steps of the CI
> run, and many developers are using it also.  Previously the same was true
> of the devtest.sh script in tripleo-incubator, but now that is totally
> deprecated and unused (that it still exists in the repo is an oversight).
Devtest was used to generate the images that host the testenvs, and we
should soon be getting rid of this. So I wouldn't spend a lot of time
looking at devtest; it isn't used during the actual CI runs.

>
>> What I meant with "the script I'm using (created by Sagi) is not
>> creating the same enviroment" is that is not using the same test env
>> (with gearman and such) that the ci scripts are currently using.
>
> Sure, I guess my point is that for 99% of issues, the method used to create
> the VM is not important.  We use a slightly different method in CI to
> manage the VMs than in most developer environments, but if the requirement
> is to reproduce CI failures, you mostly care about deploying the exact same
> software, not so much how virsh was driven to create the VMs.

Yes, most of our errors can be reproduced outside of the CI environment,
but there are cases where we need an identical setup (specifically for some
transient errors). What we should do, I think, is set aside a single test
env host for anybody debugging these transient errors. I can get something
together for this. It won't be for general-purpose development; instead it
should be limited to debugging transient errors that don't reproduce
elsewhere.

>
> Thanks for digging into this, it's great to have some fresh eyes
> highlighting these sorts of issues! :)
Yes, it's been a while since we had an up-to-date description of how
everything ties together, so thanks for this. I've read through it and
added a few comments, but in general what you have documented is on the
right track.


>
> Steve
>


[openstack-dev] [release][sahara] sahara-tests 0.1.0 release (newton)

2016-04-13 Thread davanum
We are gleeful to announce the release of:

sahara-tests 0.1.0: Sahara tests

This is the first release of sahara-tests. This release is part of the
newton release series.

For more details, please see below.

0.1.0
^


New Features


* Sahara API tests have been imported from Tempest and made
  available using the Tempest Plugin Interface.


Other Notes
***

* OpenStack reno integration was added for managing release notes

Changes in sahara-tests 15bccab79070dba41efd5df33cf504418a1a3731..0.1.0
---

baf526e Convert the imported API tests to Tempest Plugin interface
af89d1b first release note added
d3f3337 Add missed file with mapr-5.1.0 to mitaka
6cb7766 Add reno to sahara-test
61bf654 Updated from global requirements
83be5c3 Move default templates to framework repo
d6aa428 Move image_username to cluster section
973c94d Add MapR 510 test
5aeacc2 Add check of scaling for Ambari
0e381e2 Refactoring of runner.py
c2929d0 Fix py34 tests
97cdcc2 Fix for default templates(fake, transient)
1e610c9 Fix pylint
f024153 Enable all pep8 checks
a29e922 Adding ability run test several times
fd2583d Add mitaka folder with yaml files
fd8a89f Use tempest.lib code in tempest
497d70a Remove MapR 4.0.1 tempest test
3f159e7 [sahara] adding new plugin versions to sahara tempest tests
4788870 Fix H404/405 violations for api tests(1/3)
a17e0ab Full response for DataProcessingClient methods
007c69a Use the prefix-embedded rand_name method
d2991ae Initial class creds creation in test base class
fb04e36 Remove redundant calls to clear_isolated_creds
3da9f95 Decouple Sahara templates test from vanilla plugin
348a13c Remove migrated utils code
79c1e2f Add UUIDs to all tempest tests and gate check
525438e Drop the legacy and un-used _interface
619c59d Change tempest NotFound exc to tempest-lib exc
d7d1481 Change data_processing client to return one value and update tests
94a0bc5 Split resource_setup for data_processing tests
6c9429c Allow to specify the list of sahara enabled plugins
b6942af Migrate data_processing API tests to resource_* fixtures
9a72f4e Sahara: add API tests for jobs
17cc65e Sahara: preparations for job tests
761e50b Add client response checking for data processing service
d1e4a63 Sahara: minor changes for API tests
0ac34cb Sahara: add API tests for job binaries
fb4aa6b Enable E251,E265 rules ignore H402
8e42eff Sahara: preparations for job binary tests
5ebcc0a Sahara: add API tests for cluster templates
d0d20f1 Sahara: minor changes for API tests
fc830da Sahara: add API tests for job binary internals
3349718 Sahara: preparations for job binary internal tests
f996a31 Sahara: add API tests for data sources
6470386 Enable H302 rule everywhere
365b3de Sahara: editing licenses
5ab02c1 Sahara: preparations for data source tests
a24a61c Sahara: preparations for new tests
7a7cf87 fix sahara base class
2863005 Rename Savanna to Sahara
9d2532f Savanna: add API client and tests for plugins
f184736 Add missing isolated_cred cleanup to savanna tests
aa4cbb7 Add simple node group tmpl API test for Savanna
34af822 Add missed yaml files for liberty
a87f96e rename sahara-tests to sahara_tests in setup.cfg
b01dd35 Add autoregistering of image
e2a333c Add parameter for ssh conection
60dea7d Updated from global requirements
3d6970f Add ability export results to file
1f156ac Fix README for sahara-scenario
ad6b110 Updated from global requirements
8c87380 Adding ability use default templates
b75ab20 Add MapR-FS support to sahara scenario framework
e4b27f1 Fix .gitignore
b420b8f Fix using proxy node for checks
a894fc3 Add pylint to tox
e885b96 Disable ssl_verify as default
96ab039 Use ostestr instead of the custom pretty_tox.sh
625d28a Fix scenario tests for correct output to swift
a0e56ce Updated from global requirements
34d0b88 Updated from global requirements
94ad6fc Updated from global requirements
8d1f2f3 Add more infomation when create cluster failed for scenario test
b7d1531 Fix READMEs location for sahara_tests
d4f1be0 Add CDH 5.5.0 scenario test
06dcee6 Define variables via args in scenario tests
c587b4d Update MapReduce job
4105d34 Fix gates
5df0221 Fix .gitreview after repo renaming
a8e3262 Updated from global requirements
a1ab987 Updated from global requirements
87a18ab Add scenario for Spark 1.6.0
d25d1c4 Override auth version for swiftclient
239e2e9 Add EDP job flow for spark-1.0.0
7725559 Add missed files for kilo and liberty
65b1076 Add bashate check in pep8 env
5a4de20 Move gate scripts from sahara repo
ed28163 Fix imports in custom checks
9c8bd1e Fix ini sourcecode in README.rst
3a5c763 Fix project name in .coveragerc
d850637 Restructure of sahara-tests repository
60f59a1 Refactoring of tox.ini
b6acd74 Separate requirements
5892b77 Fix all python-jobs check
63b5f85 Updated from global requirements
fcb9e0a Merge "Added Keystone and RequestID headers to CORS middleware"
b13822b Fix wrong file path in scenario test README.rst

Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Fox, Kevin M
Barbican is the abstraction layer. It's pluggable, like nova, neutron, cinder, etc.

Thanks,
Kevin


From: rezroo
Sent: Tuesday, April 12, 2016 11:00:30 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates

Interesting conversation, and I think I have more of a question than a comment. 
With my understanding of OpenStack architecture, I don't understand the point 
about making "Magnum dependent on Barbican". Wouldn't this issue be completely 
resolved using a driver model, such as delegating the task to a separate class 
specified in magnum.conf, with a reference implementation using the Barbican API 
(like the VIF driver of nova, or nova's chance vs. filter scheduler)? If people 
want choice, we know how to give them choice - decouple, and have a base 
implementation. The rest is up to them. That's the framework's architecture. 
What am I missing?
Thanks,
Reza

On 4/12/2016 9:16 PM, Fox, Kevin M wrote:
Ops are asking for you to make it easy for them to make their security weak. 
And as a user of other folks' clouds, I'd have no way to know the cloud is in 
that mode. That seems really bad for app developers/users.

Barbican, like some of the other services, won't become common if folks keep 
trying to reimplement it so they don't have to depend on it. Folks said the same 
things about Keystone. Ultimately it was worth making it a dependency.

Keystone doesn't support encryption, so you are asking for new functionality 
duplicating Barbican either way.

And we do understand the point of what you are trying to do. We just don't see 
eye to eye on it being a good thing to do. If you are invested enough in 
setting up an HA setup where you would need a clustered solution, Barbican's not 
that much of an extra lift compared to the other services you've already had to 
deploy. I've deployed both HA setups and Barbican before. HA is way worse.

Thanks,
Kevin



From: Adrian Otto
Sent: Tuesday, April 12, 2016 8:06:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates

Please don't miss the point here. We are seeking a solution that allows a 
location to place a client-side encrypted blob of data (a TLS cert) that 
multiple magnum-conductor processes on different hosts can reach over the 
network.

We *already* support using Barbican for this purpose, as well as storage in 
flat files (not as secure as Barbican, and only works with a single conductor) 
and are seeking a second alternative for clouds that have not yet adopted 
Barbican, and want to use multiple conductors. Once Barbican is common in 
OpenStack clouds, both alternatives are redundant and can be deprecated. If 
Keystone depends on Barbican, then we have no reason to keep using it. That 
will mean that Barbican is core to OpenStack.

Our alternative to using Keystone is storing the encrypted blobs in the Magnum 
database which would cause us to add an API feature in magnum that is the exact 
functional equivalent of the credential store in Keystone. That is something we 
are trying to avoid by leveraging existing OpenStack APIs.

--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews <dolph.math...@gmail.com> wrote:


On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad <lbrags...@gmail.com> wrote:
Keystone's credential API pre-dates barbican. We started talking about having 
the credential API backed by barbican after barbican was a thing. I'm not sure 
if any work has been done to move the credential API in this direction. From a 
security perspective, I think it would make sense for keystone to be backed by 
barbican.

+1

And regarding the "inappropriate use of keystone," I'd agree... without this 
spec, keystone is entirely useless as any sort of alternative to Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi all,

In short, some Magnum team members proposed to store TLS certificates in 
Keystone credential store. As Magnum PTL, I want to get agreements (or 
non-disagreement) from OpenStack community in general, Keystone community in 
particular, before approving the direction.

In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for storing 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
constantly received requests to decouple Magnum from Barbican (because users 
normally don’t have Barbican installed in their clouds). Some Magnum team 
members proposed to leverage Keystone credential store as a Barbi

Re: [openstack-dev] [Nova][Neutron] [Live Migration] Prevent invalid live migration instead of failing and setting instance to error state after port binding failed

2016-04-13 Thread Andreas Scheuring
After a great chat with Kevin, we agreed to follow up on the multiple
binding approach until the summit and see if this is the right
direction. [1]

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-04-13.log.html#t2016-04-13T09:43:46

-- 
-
Andreas (IRC: scheuran) 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Fox, Kevin M
It partially depends on whether you're following the lightweight container 
methodology. Can the nova API support unix sockets or bind mounts between 
containers in the same pod? Would it be reasonable to add that functionality? 
It's pretty different from nova's usual use cases.

Thanks,
Kevin



From: Peng Zhao
Sent: Tuesday, April 12, 2016 11:33:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Agreed.

IMO, OpenStack is an open framework to different technologies and use cases. 
Different architectures for different things make sense. Some may say that 
using nova to launch docker images with hypervisor is weird, but it can be seen 
as “Immutable IaaS”.


-
Hyper_ Secure Container Cloud




On Wed, Apr 13, 2016 1:43 PM, Joshua Harlow <harlo...@fastmail.com> wrote:

Sure, so that helps, except it still has the issue of bumping up against
the mismatch of the API(s) of nova. This is why I'd rather have a template
kind of format (as say the input API) that allows for (optionally)
expressing such container-specific capabilities/constraints. Then some
project that can understand that template/format can, if needed, talk to a
COE (or similar project) to translate that template 'segment' into a
realized entity using the capabilities/constraints that the template
specified. Overall it starts to feel like maybe it is time to change the
upper and lower systems and shake things up a little ;)

Peng Zhao wrote:
> I'd take the idea further. Imagine a typical Heat template, what you
> need to do is:
>
> - replace the VM id with Docker image id
> - nothing else
> - run the script with a normal heat engine
> - the entire stack gets deployed in seconds
>
> Done!
>
> Well, that sounds like nova-docker. What about cinder and neutron? They
> don't work well with Linux container! The answer is Hypernova
> (https://github.com/hyperhq/hypernova) or Intel ClearContainer, seamless
> integration with most OpenStack components.
>
> Summary: minimal changes to interface and upper systems, much smaller
> image and much better developer workflow.
>
> Peng
>
> -
> Hyper_ Secure Container Cloud
>
> On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow harlo...@fastmail.com wrote:
>
> Fox, Kevin M wrote:
> > I think part of the problem is containers are mostly orthogonal to
> > vms/bare metal. Containers are a package for a single service.
> > Multiple can run on a single vm/bare metal host. Orchestration like
> > Kubernetes comes in to turn a pool of vm's/bare metal into a system
> > that can easily run multiple containers.
>
> Is the orthogonal part a problem because we have made it so or is it
> just how it really is? Brainstorming starts here: Imagine a descriptor
> language like (which I stole from
> https://review.openstack.org/#/c/210549 and modified):
>
> ---
> components:
>  - label: frontend
>    count: 5
>    image: ubuntu_vanilla
>    requirements: high memory, low disk
>    stateless: true
>  - label: database
>    count: 3
>    image: ubuntu_vanilla
>    requirements: high memory, high disk
>    stateless: false
>  - label: memcache
>    count: 3
>    image: debian-squeeze
>    requirements: high memory, no disk
>    stateless: true
>  - label: zookeeper
>    count: 3
>    image: debian-squeeze
>    requirements: high memory, medium disk
>    stateless: false
> backend: VM
> networks:
>  - label: frontend_net
>    flavor: "public network"
>    associated_with:
>     - frontend
>  - label: database_net
>    flavor: high bandwidth
>    associated_with:
>     - database
>  - label: backend_net
>    flavor: high bandwidth and low latency
>    associated_with:
>     - zookeeper
>     - memchache
> constraints:
>  - ref: container_only
>    params:
>     - frontend
>  - ref: no_colocated
>    params:
>     - database
>     - frontend
>  - ref: spread
>    params:
>     - database
>  - ref: no_colocated
>    params:
>     - database
>     - frontend
>  - ref: spread
>    params:
>     - memcache
>  - ref: spread
>    params:
>     - zookeeper
>  - ref: isolated_network
>    params:
>     - frontend_net
>     - database_net
>     - backend_net
> ...
>
> Now nothing in the above is about container, or baremetal or vms,
> (although an 'advanced' constraint can be that a component must be on a
> container, and it must say be deployed via docker image XYZ...);
> instead it's just about the constraints that a user has on their
> deployment and the components associated with it. It can be left up to
> some consuming project of that format to decide how to turn that
> desired description into an actual description (aka a full expanding
> of that format into an actual deployment plan), possibly say by
> optimizing for density (packing as many things container) or
> optimizing for security (by using VMs) or optimizing for performance
> (by using bare-metal).
>
> So, rather then concern itself with supporting launching

Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-13 Thread Fox, Kevin M
I favor the solutions that also enable logs.

Thanks,
Kevin


From: Yuiko Takada
Sent: Wednesday, April 13, 2016 1:47:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic][nova][horizon] Serial console support for 
ironic instances

Hi,

I also want to discuss it at a summit session.

2016-04-13 0:41 GMT+09:00 Ruby Loo <rlooya...@gmail.com>:
Yes, I think it would be good to have a summit session on that. However, before 
the session, it would really be helpful if the folks with proposals got 
together and/or reviewed each other's proposals, and summarized their findings.

I've summarized all of the related proposals.

(1) Add driver using Socat
https://review.openstack.org/#/c/293827/

* Pros:
- No influence on other components
- No changes needed to other Ironic console drivers (like IPMIShellinaboxConsole)
- No API microversion/RPC bump needed

* Cons:
- Does not write a console log file

(2) Add driver starting ironic-console-server
https://review.openstack.org/#/c/302291/
(There is no spec yet)

* Pros:
- No influence on other components
- Writes a console log file
- No changes needed to other Ironic console drivers (like IPMIShellinaboxConsole)
- No new Ironic services required, only adds tools

* Cons:
- Needs an API microversion/RPC bump

(3) Add a custom HTTP proxy to Nova
https://review.openstack.org/#/c/300582/

* Pros:
- No changes needed to the Ironic API

* Cons:
- Needs Nova API changes (microversion bump)
- Needs Horizon changes
- Does not write a console log file

(4) Add Ironic-ipmiproxy server
https://review.openstack.org/#/c/296869/

* Pros:
- No influence on other components
- Writes a console log file
- IPMIShellinaboxConsole will also be available via Horizon

* Cons:
- Needs IPMIShellinaboxConsole changes?
- Needs an API microversion/RPC bump

If there is any mistake, please let me know.


Best Regards,
Yuiko Takada

2016-04-13 0:41 GMT+09:00 Ruby Loo <rlooya...@gmail.com>:
Yes, I think it would be good to have a summit session on that. However, before 
the session, it would really be helpful if the folks with proposals got 
together and/or reviewed each other's proposals, and summarized their findings. 
You may find after reviewing the proposals, that eg only 2 are really 
different. Or they several have merit because they are addressing slightly 
different issues. That would make it easier to present/discuss/decide at the 
session.

--ruby


On 12 April 2016 at 09:17, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:
On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
> Maybe we can continue the discussion here, as there's not enough time in the
> IRC meeting :)

Someone mentioned this would make a good summit session, as there's a
few competing proposals that are all good options. I do welcome
discussion here until then, but I'm going to put it on the schedule. :)

// jim

>
> On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu <niu.zgli...@gmail.com> wrote:
>
> >
> > Ironic is currently using shellinabox to provide a serial console, but
> > it's not compatible
> > with nova, so I would like to propose a new console type and a custom HTTP
> > proxy [1]
> > which validates the token and connects to the ironic console from nova.
> >
> > On the Horizon side, we should add support for the new console type [2] as
> > well, here are some screenshots from my local environment.
> >
> >
> >
> > [screenshots not preserved in the archive]
> >
> > Additionally, shellinabox console port management should be improved in
> > ironic: instead of being manually specified, we should introduce a dynamic
> > allocation/deallocation [3] mechanism.
> >
> > Functionality is being implemented in Nova, Horizon and Ironic:
> > https://review.openstack.org/#/q/topic:bp/shellinabox-http-proxy
> > https://review.openstack.org/#/q/topic:bp/ironic-shellinabox-console
> > https://review.openstack.org/#/q/status:open+topic:bug/1526371
> >
> >
> > PS: to achieve this goal, we can also add a new console driver in ironic
> > [4], but I think it doesn't conflict with this, as shellinabox is capable
> > to integrate with nova, and we should support all console drivers.
> >
> >
> > [1] https://blueprints.launchpad.net/nova/+spec/shellinabox-http-proxy
> > [2]
> > https://blueprints.launchpad.net/horizon/+spec/ironic-shellinabox-console
> > [3] https://bugs.launchpad.net/ironic/+bug/1526371
> > [4] https://bugs.launchpad.net/ironic/+bug/1553083
> >
> > --
> > Best Regards,
> > Zhenguo Niu
> >
>
>
>
> --
> Best Regards,
> Zhenguo Niu




> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Developm

Re: [openstack-dev] [release][sahara] sahara-tests 0.1.0 release (newton)

2016-04-13 Thread Luigi Toscano
On Wednesday 13 of April 2016 09:42:25 dava...@gmail.com wrote:
> We are gleeful to announce the release of:
> 
> sahara-tests 0.1.0: Sahara tests
> 
> This is the first release of sahara-tests. This release is part of the
> newton release series.
> 
> For more details, please see below.

Small correction: sahara-tests is branchless, like Tempest, and it contains 
tests which support Sahara from Liberty onwards. How can this be highlighted 
in place of "part of the xyz release series" for the next releases?

-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread rezroo

Hi Kevin,

I understand that this is how it is now. My question is how bad would it 
be to wrap the Barbican client library calls in another class and claim, 
for all practical purposes, that Magnum has no direct dependency on 
Barbican? What is the negative of doing that?


Anyone who wants to use another mechanism should be able to do that with 
a simple change to the Magnum conf file. Nothing more complicated. 
That's the essence of my question.
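
To make the question concrete, here is a minimal sketch of the kind of 
driver lookup I mean (purely illustrative - the config option, group and 
entry-point namespace are made up, not Magnum's actual code):

    from oslo_config import cfg
    from stevedore import driver

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('cert_manager_type', default='barbican',
                    help='Backend used to store TLS certificates.')],
        group='certificates')

    def get_cert_manager():
        # Entry points would map names to classes, e.g.
        # barbican = magnum.common.cert_manager:BarbicanCertManager
        return driver.DriverManager(
            namespace='magnum.cert_manager.backend',
            name=CONF.certificates.cert_manager_type).driver

Switching backends would then be a one-line change to magnum.conf.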


Appreciate your thoughts and insight.

Reza

On 4/13/2016 6:46 AM, Fox, Kevin M wrote:
Barbican is the abstraction layer. It's pluggable, like nova, neutron, 
cinder, etc.


Thanks,
Kevin

*From:* rezroo
*Sent:* Tuesday, April 12, 2016 11:00:30 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates


Interesting conversation, and I think I have more of a question than a 
comment. With my understanding of OpenStack architecture, I don't 
understand the point about making "Magnum dependent on Barbican". 
Wouldn't this issue be completely resolved using a driver model, such 
as delegating the task to a separate class specified in magnum.conf, 
with a reference implementation using the Barbican API (like the VIF driver 
of nova, or nova's chance vs. filter scheduler)? If people want choice, 
we know how to give them choice - decouple, and have a base 
implementation. The rest is up to them. That's the framework's 
architecture. What am I missing?

Thanks,
Reza

On 4/12/2016 9:16 PM, Fox, Kevin M wrote:
Ops are asking for you to make it easy for them to make their 
security weak. And as a user of other folks' clouds, I'd have no way 
to know the cloud is in that mode. That seems really bad for app 
developers/users.


Barbican, like some of the other services, won't become common if 
folks keep trying to reimplement it so they don't have to depend on 
it. Folks said the same things about Keystone. Ultimately it was 
worth making it a dependency.


Keystone doesn't support encryption, so you are asking for new 
functionality duplicating Barbican either way.


And we do understand the point of what you are trying to do. We just 
don't see eye to eye on it being a good thing to do. If you are 
invested enough in setting up an HA setup where you would need a 
clustered solution, Barbican's not that much of an extra lift compared 
to the other services you've already had to deploy. I've deployed both 
HA setups and Barbican before. HA is way worse.


Thanks,
Kevin


*From:* Adrian Otto
*Sent:* Tuesday, April 12, 2016 8:06:03 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates


Please don't miss the point here. We are seeking a solution that 
allows a location to place a client-side encrypted blob of data (a 
TLS cert) that multiple magnum-conductor processes on different hosts 
can reach over the network.


We *already* support using Barbican for this purpose, as well as 
storage in flat files (not as secure as Barbican, and only works with 
a single conductor) and are seeking a second alternative for clouds 
that have not yet adopted Barbican, and want to use multiple 
conductors. Once Barbican is common in OpenStack clouds, both 
alternatives are redundant and can be deprecated. If Keystone depends 
on Barbican, then we have no reason to keep using it. That will mean 
that Barbican is core to OpenStack.


Our alternative to using Keystone is storing the encrypted blobs in 
the Magnum database which would cause us to add an API feature in 
magnum that is the exact functional equivalent of the credential 
store in Keystone. That is something we are trying to avoid by 
leveraging existing OpenStack APIs.


--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews wrote:




On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad wrote:


Keystone's credential API pre-dates barbican. We started talking
about having the credential API backed by barbican after barbican was
a thing. I'm not sure if any work has been done to move the
credential API in this direction. From a security perspective, I
think it would make sense for keystone to be backed by barbican.


+1

And regarding the "inappropriate use of keystone," I'd agree... 
without this spec, keystone is entirely useless as any sort of 
alternative to Barbican:


https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu wrote:

Hi all,

In short, some Magnum team members proposed to store TLS
certificates in Keystone credential store. As Magnum PTL, I
want to get agreements (or non-disagree

Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Clayton O'Neill
On Wed, Apr 13, 2016 at 10:26 AM, rezroo  wrote:
> Hi Kevin,
>
> I understand that this is how it is now. My question is how bad would it be
> to wrap the Barbican client library calls in another class and claim, for
> all practical purposes, that Magnum has no direct dependency on Barbican?
> What is the negative of doing that?
>
> Anyone who wants to use another mechanism should be able to do that with a
> simple change to the Magnum conf file. Nothing more complicated. That's the
> essence of my question.

For us, the main reason we’d want to be able to deploy without
Barbican is mostly to lower the initial barrier of entry.  We’re not
running anything else that would require Barbican for a multi-node
deployment, so for us to do a realistic evaluation of Magnum, we’d
have to get two “new to us” services up and running in a development
environment.  Since we’re not running Barbican or Magnum, that’s a big
time commitment for something we don’t really know if we’d end up
using.  From that perspective, something that’s less secure might be
just fine in the short term.  For example, I’d be completely fine with
storing certificates in the Magnum database as part of an evaluation,
knowing I had to switch from that before going to production.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-13 Thread Ryan Brady
On Mon, Apr 11, 2016 at 5:54 AM, John Trowbridge  wrote:

> Hola OOOers,
>
> It came up in the meeting last week that we could benefit from a CI
> subteam with its own meeting, since CI is taking up a lot of the main
> meeting time.
>
> I like this idea, and think we should do something similar for the other
> informal subteams (tripleoclient, UI), and also add a new subteam for
> tripleo-quickstart (and maybe one for releases?).
>
> > We should make separate ACLs for these subteams as well. The informal
> approach of adding cores who can +2 anything but are told to only +2
> what they know doesn't scale very well.
>

+1 to subteams for selected projects.

I think there should be a clearly defined practice of ensuring there are
enough reviewers so that a subteam core doesn't need to +A their own
patches.  I don't know if that's a standing rule in tripleo core, but I
think it should be explicit in subteams.

-r


> - trown
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][sahara] sahara-tests 0.1.0 release (newton)

2016-04-13 Thread Davanum Srinivas
Luigi, @vgridnev,

http://git.openstack.org/cgit/openstack/releases/tree/deliverables/newton/sahara-tests.yaml
needs to move into _independent/deliverables/newton/sahara-tests.yaml
for next time.

Thanks,
Dims

On Wed, Apr 13, 2016 at 10:10 AM, Luigi Toscano  wrote:
> On Wednesday 13 of April 2016 09:42:25 dava...@gmail.com wrote:
>> We are gleeful to announce the release of:
>>
>> sahara-tests 0.1.0: Sahara tests
>>
>> This is the first release of sahara-tests. This release is part of the
>> newton release series.
>>
>> For more details, please see below.
>
> Small correction, sahara-tests is branchless, like Tempest, and it contains
> tests which supports Sahara from Liberty upwards. How can this be highlighted
> in place of "part of the xyz release series" for the next releases?
>
> --
> Luigi



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][swift] Privileges for new user

2016-04-13 Thread Alexey Deryugin
Hi!

As you may know, there's an issue with access to swift for newly created
users. Here's a bug [0].
As mentioned in a comment [1] it can be fixed by two different ways:

1. Add role SwiftOperator to all newly created users.
2. Add role __member__ to swift operator_roles parameter.

So, I'd like to discuss which way is preferred, or whether there is a more
preferable solution.
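
For concreteness, option 2 would roughly mean extending the keystoneauth
section of the swift proxy-server.conf along these lines (role names here
follow the bug report and may differ per deployment):

    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    # Adding the default member role alongside SwiftOperator gives newly
    # created users object access without an extra role assignment.
    operator_roles = admin, SwiftOperator, __member__

Option 1 would instead leave operator_roles alone and assign the
SwiftOperator role at user-creation time.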

[0] https://bugs.launchpad.net/fuel/+bug/1561241
[1] https://bugs.launchpad.net/fuel/+bug/1561241/comments/2

-- 
Kind Regards,
Alexey Deryugin,
Intern Deployment Engineer,
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Emilien Macchi
Hi,

Current OpenStack Infra Periodic jobs do not send e-mails (only
periodic-stable do), so I propose to create periodic-ci-reports
mailing list [1] and to use it when our periodic jobs fail [2].
If accepted, people who care about periodic jobs can subscribe to this
new ML and, thanks to e-mail filters, get quick feedback on failures.

The use-case is described in [2], please use Gerrit to give feedback.

Thanks,

[1] https://review.openstack.org/305326
[2] https://review.openstack.org/305278
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Joshua Harlow

Thierry Carrez wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what
needs to connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to
produce a final set of deployment scripts and then just run it
through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.

If you ignore the non container use case, I think it might be fairly
easily mappable to all three COE's though.


This feels like Heat with a more readable descriptive language. I don't
really like this approach, because you end up with the lowest common
denominator between COE's functionality. They are all different. And
they are at the peak of the differentiation phase. The LCD is bound to
be pretty basic.

This approach may be attractive for us as infrastructure providers, but
I also know this is not attractive to users who used Kubernetes before
and wants to continue to use Kubernetes (and don't really want to care
about whether OpenStack is running under the hood). They don't want to
learn another descriptor language or API, they just want to learn the
Kubernetes description model and API and take advantage of its unique
capabilities.

In summary, this may be a good solution for *existing* OpenStack users
to start playing with containerized workloads. But it is not a good
solution to attract the container cool kids to using OpenStack as their
base infrastructure provider. For those we need to make it as
transparent and simple to use their usual tools to deploy on top of
OpenStack clouds. The more they can ignore we are there, the better.



I get the feeling of 'the more they can ignore we are there, the 
better.' but it just feels like at that point we have accepted our own 
fate in this arena vs trying to actually have an impact in it... Do 
others feel that it is already at the point where we can no longer 
attract the cool kids? Is the tipping point of that happening already past?


I'd like for openstack to still attract the cool kids, and not just 
attract the cool kids by accepting 'the more they can ignore we are 
there, the better' as our fate... I get that someone has to provide the 
equivalent of roads, plumbing and water but man, it feels like we can 
also provide other things still ;)


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API doc import, and next steps

2016-04-13 Thread Sean Dague
I think we've gotten the automatic converters for the wadl files to
about as good as we're going to get. The results right now are here -
https://review.openstack.org/#/c/302500/

There remain many issues in the content (there are many issues in the
source content, and a few crept in during imperfect translation),
however at some point we need to just call the automatic translation
effort "good enough", commit it, and start fixing the docs in chunks. I
think we are at that stage.

Once we get those bits committed, it's time to start fixing what
remains. I started an etherpad for the rough guide here -
https://etherpad.openstack.org/p/nova-api-docs-in-rst there are a few
global level things, but a bunch of this is a set of verifications and
fixes that will have to happen for every *.inc file.

for every file in api-ref/sources/*.inc

1. Verify methods
 1. Do all methods of the resource currently exist?
 2. Rearrange methods in order (sorted by URL)
  1. GET
  2. POST
  3. PUT
  4. DELETE
  5. i.e. for servers.inc GET /servers, POST /servers, GET
 /servers/details, GET /servers/{id}, PUT /servers/{id},
 DELETE /servers/{id}
2. Verify all parameters (see the example after this list)
 1. Are all parameters that exist in the resource listed?
 2. Are all parameters referencing the right lookup value in
parameters.yaml?
  1. name, id are common issues, will need $foo_name and $foo_id
 created
 3. Add microversion parameters at the end of the table in order of
introduction
  1. min_ver: 2.10 is a valid parameter key
3. Examples
 1. Is there an example for every request / response that has a body?
 2. Is there an English description of the change in question
explaining the action that it would have?
4. Body Text
 1. Is the introduction text for each section well formatted
(lists and headers were stripped in the processing)?
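
For anyone picking up one of these bugs, here is an illustrative (made-up)
fragment of the target end state: a parameter table in a *.inc file
referencing lookup keys in parameters.yaml, with min_ver on
microversion-only fields:

    .. rest_parameters:: parameters.yaml

       - server_id: server_id
       - locked: locked

    # matching entries in parameters.yaml
    server_id:
      description: |
        The UUID of the server.
      in: path
      required: true
      type: string
    locked:
      description: |
        True if the server is locked.
      in: body
      required: false
      type: boolean
      min_ver: 2.10

The field names above are examples only; check each resource against the
actual code.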

My feeling is that we should probably create a fleet of bugs which is 1
per source file and phase, with a set of api-ref tags. This will give us
easy artifacts to hand off to people, and know which ones are getting
done and which ones remain. A lot of this work is pretty easy, just
takes some time.

I'd like to get the base patches landed in the next day or so, so that we
can start chugging through these fixes pre summit, and do a virtual doc
sprint post summit to push through to completion.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Hongbin Lu
I think there are two questions here:

1.   Should Magnum decouple from Barbican?

2.   Which option should Magnum use to achieve #1 (leverage the Keystone 
credential store, or one of the other alternatives [1])?
For question #1, Magnum team has thoughtfully discussed it. I think we all 
agreed that Magnum should decouple from Barbican for now (I didn’t hear any 
disagreement from any of our team members). What we are currently debating is 
question #2. That is which approach we should use to achieve the goal. The 
first option is to store TLS credentials in Keystone. The second option is to 
store the credentials in Magnum DB. The third option is to eliminate the need 
to store TLS credentials (e.g. switch to another non-TLS authentication 
mechanism). What we want to know is if Keystone team allows us to pursue the 
first option. If it is disallowed, I will suggest Magnum team to pursue other 
options.
So, for the original question, does the Keystone team allow us to store 
encrypted data in Keystone? One point of view is that if the data to be stored 
is already encrypted, there will be no disagreement from the Keystone side (so 
far, all the concerns are about the security implications of storing 
un-encrypted data). Could the Keystone team confirm whether it agrees (or 
doesn't disagree) with this point of view?

[1] https://etherpad.openstack.org/p/magnum-barbican-alternative
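
To ground the first option, a minimal sketch of the "encrypt client-side,
then store the blob in /v3/credentials" flow (the credential type string is
invented, and the exact create() signature should be verified against
python-keystoneclient):

    from cryptography.fernet import Fernet
    from keystoneclient.v3 import client as ks_client

    def store_tls_cert(ks, user_id, project_id, cert_pem):
        # ks is an authenticated ks_client.Client(session=...).
        # Encrypt before the data leaves Magnum; Keystone only ever
        # sees an opaque blob.
        key = Fernet.generate_key()
        blob = Fernet(key).encrypt(cert_pem)
        cred = ks.credentials.create(user=user_id, type='magnum-tls',
                                     blob=blob, project=project_id)
        # The Fernet key must stay with Magnum, never with Keystone.
        return cred.id, key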

Best regards,
Hongbin

From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
Sent: April-13-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates



On Tue, Apr 12, 2016 at 8:06 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
Please don't miss the point here. We are seeking a solution that allows a 
location to place a client-side encrypted blob of data (a TLS cert) that 
multiple magnum-conductor processes on different hosts can reach over the 
network.

We *already* support using Barbican for this purpose, as well as storage in 
flat files (not as secure as Barbican, and only works with a single conductor) 
and are seeking a second alternative for clouds that have not yet adopted 
Barbican, and want to use multiple conductors. Once Barbican is common in 
OpenStack clouds, both alternatives are redundant and can be deprecated. If 
Keystone depends on Barbican, then we have no reason to keep using it. That 
will mean that Barbican is core to OpenStack.

Our alternative to using Keystone is storing the encrypted blobs in the Magnum 
database which would cause us to add an API feature in magnum that is the exact 
functional equivalent of the credential store in Keystone. That is something we 
are trying to avoid by leveraging existing OpenStack APIs.

Is it really unreasonable to make Magnum depend on Barbican? I know I discussed 
this with you previously, but I would like to know how much pushback you're 
really seeing on saying "Barbican is important for these security reasons in a 
scaled-up environment and here is why we made this choice to depend on it". 
Secure by default is far better than an option that is significantly 
sub-optimal.

So, is Barbican support really hampering Magnum in significant ways? If so, 
what can we do to improve the story to make Barbican compelling instead of 
needing this alternative?

+1 to Dolph's comment on Barbican being more mature *and* another +1 for the 
comment that credentials being un-encrypted in keystone makes storing secure 
credentials in keystone significantly less desirable.

These questions are intended to just fill in some blanks I am seeing so we have 
a complete story and can look at prioritizing work/specs/etc.

--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews <dolph.math...@gmail.com> wrote:

On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad <lbrags...@gmail.com> wrote:
Keystone's credential API pre-dates barbican. We started talking about having 
the credential API backed by barbican after barbican was a thing. I'm not sure 
if any work has been done to move the credential API in this direction. From a 
security perspective, I think it would make sense for keystone to be backed by 
barbican.

+1

And regarding the "inappropriate use of keystone," I'd agree... without this 
spec, keystone is entirely useless as any sort of alternative to Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi all,

In short, some Magnum team members proposed to store TLS certificates in 
Keystone credential store. As Magnum PTL, I want to get agreements (or 
non-disagreement) from OpenStack community in general, Keystone community in 
particular, before approving the direction.

In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store f

Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Ian Cordasco
 

-Original Message-
From: Clayton O'Neill 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: April 13, 2016 at 09:39:38
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates

> On Wed, Apr 13, 2016 at 10:26 AM, rezroo wrote:
> > Hi Kevin,
> >
> > I understand that this is how it is now. My question is how bad would it be
> > to wrap the Barbican client library calls in another class and claim, for
> > all practical purposes, that Magnum has no direct dependency on Barbican?
> > What is the negative of doing that?
> >
> > Anyone who wants to use another mechanism should be able to do that with a
> > simple change to the Magnum conf file. Nothing more complicated. That's the
> > essence of my question.
>  
> For us, the main reason we’d want to be able to deploy without
> Barbican is mostly to lower the initial barrier of entry. We’re not
> running anything else that would require Barbican for a multi-node
> deployment, so for us to do a realistic evaluation of Magnum, we’d
> have to get two “new to us” services up and running in a development
> environment. Since we’re not running Barbican or Magnum, that’s a big
> time commitment for something we don’t really know if we’d end up
> using. From that perspective, something that’s less secure might be
> just fine in the short term. For example, I’d be completely fine with
> storing certificates in the Magnum database as part of an evaluation,
> knowing I had to switch from that before going to production.

In that case, why not instead use an NFS mount that all magnum conductors 
share to store the certificates, the same way someone evaluating Glance without 
wanting Swift, Ceph, or something else more robust might use NFS 
+ the default filesystem store? That doesn't require adding yet more code to 
store something in the database or in Keystone.

Further, the other contention here is that people want to run Magnum on old 
deployments of OpenStack which most likely wouldn't even have Keystone v3 
deployed. So I'm still failing to see how this solution solves anything at all.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Lance Bragstad
I think we need to ask who we are lowering the barrier of entry for. Are we
going down this path because we want developers to have less things to do
to stand up a development environment? Or do we want to make it easy for
people to realistically test? If you're going to realistically vet magnum,
why not make that PoC as realistic as possible, as in deploying with
barbican. As an operator, I think it would be better to have an honest
assessment of the work required to deploy magnum, even if it costs a little
extra time. I'd rather hit roadblocks with the realistic approach early
than reassure my team everything will work correctly when we didn't test
what we planned to offer to our customers. In my experience, having
roadblocks pop up later after commitment has been made is expensive and
stressful.

On Wed, Apr 13, 2016 at 9:37 AM, Clayton O'Neill  wrote:

> On Wed, Apr 13, 2016 at 10:26 AM, rezroo  wrote:
> > Hi Kevin,
> >
> > I understand that this is how it is now. My question is how bad would it
> be
> > to wrap the Barbican client library calls in another class and claim, for
> > all practical purposes, that Magnum has no direct dependency on Barbican?
> > What is the negative of doing that?
> >
> > Anyone who wants to use another mechanism should be able to do that with
> a
> > simple change to the Magnum conf file. Nothing more complicated. That's
> the
> > essence of my question.
>
> For us, the main reason we’d want to be able to deploy without
> Barbican is mostly to lower the initial barrier of entry.  We’re not
> running anything else that would require Barbican for a multi-node
> deployment, so for us to do a realistic evaluation of Magnum, we’d
> have to get two “new to us” services up and running in a development
> environment.  Since we’re not running Barbican or Magnum, that’s a big
> time commitment for something we don’t really know if we’d end up
> using.  From that perspective, something that’s less secure might be
> just fine in the short term.  For example, I’d be completely fine with
> storing certificates in the Magnum database as part of an evaluation,
> knowing I had to switch from that before going to production.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] RPC Communication Errors Might Lead to a Bad State

2016-04-13 Thread Shoham Peller
Hi all,

There are some cases that a communication failure between the different
nova services, might cause a bad state in the system.

For example, when "shelving" a VM, nova-api puts the VM's task_state as
"shelving", sends an RPC to nova-compute, which shelves the VM, and resets
its task_state in the DB.
But, if for some reason, nova-compute didn't get the message (e.g. the RPC
service was down, there's a bug in the RPC service, nova-compute was down,
there was a temporary network malfunction), the VM is now stuck as
"shelving", and the user can't perform any operation on the stuck VM.
This example applies to a couple of scenarios in the system that involve
communication between different services.

From nova-api's point of view, all it does is send a message through
RPC, and neither actually checks that the message was received, nor waits
to get a reply or an acknowledgement from the receiver.
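
For readers less familiar with the mechanics, the difference is
oslo.messaging's cast() versus call(). A simplified sketch (the method and
argument names are illustrative, not nova's actual compute RPC API):

    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='compute', server='compute-1')
    client = messaging.RPCClient(transport, target)
    ctxt = {}  # stand-in for a real request context

    # cast(): returns as soon as the message is handed to the transport;
    # there is no acknowledgement that nova-compute ever processed it.
    client.cast(ctxt, 'shelve_instance', instance_uuid='...')

    # call(): blocks until the server replies or the timeout fires, so
    # the caller at least knows the message was received and handled.
    client.prepare(timeout=60).call(ctxt, 'shelve_instance',
                                    instance_uuid='...')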

Of course, to solve this, a user can "reset-state" on a VM, and try to run
the action again, but this is error-prone and doesn't scale.

Possible solutions might be:

   - nova-api should receive an acknowledgement from nova-compute. It is
   unclear to me why today it uses a non-reply mechanism - probably to free
   the worker as fast as it can.
   - Change the task_state mechanism to prevent this kind of stuck state
   from staying in the DB. nova-compute can be the one that writes the
   task_state to the DB; this is not enough of course, but maybe there's
   another way?
   - nova-api could start a timer for the action to complete. If the
   shelving operation hasn't completed in X seconds, it will clean it up by
   itself and roll back or try again.

What do you think about the problem and the solutions?

Thanks,
Shoham Peller
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton blueprints call for action

2016-04-13 Thread Armando M.
On 13 April 2016 at 03:18, Ilya Chukhnakov  wrote:

> Hello everyone!
>
> Count me in for the VLAN aware VMS. I already have a [seemingly working]
> proof-of-concept for the OVS driver case and expect to submit it for the
> review in a few days.
>
>
Please provide feedback on [1], as that should be the entry point to
discuss your PoC idea.

[1] https://review.openstack.org/#/c/273954/


>
> On 13 Apr 2016, at 12:51, Oleg Bondarev  wrote:
>
>
> -- Forwarded message --
> From: Sukhdev Kapur 
> Date: Mon, Apr 11, 2016 at 7:33 AM
> Subject: Re: [openstack-dev] [Neutron] Newton blueprints call for action
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Cc: Bence Romsics , yan.songm...@zte.com.cn
>
>
> Hi Rossella,
>
> Good to hear that you will follow through with this. Ironic is looking for
> this API as well for bare metal deployments. We would love to work with you
> to make sure that this API/Implementation works for all servers ( VMs as
> well BMs)
>
> Thanks
> -Sukhdev
>
>
> On Wed, Apr 6, 2016 at 4:32 AM, Rossella Sblendido 
> wrote:
>
>>
>>
>> On 04/05/2016 05:43 AM, Armando M. wrote:
>> >
>> > With this email I would like to appeal to the people in CC to report
>> > back their interest in continuing working on these items in their
>> > respective capacities, and/or the wider community, in case new owners
>> > need to be identified.
>> >
>> > I look forward to hearing back, hoping we can find the right resources
>> > to resume progress, and bring these important requirements to completion
>> > in time for Newton.
>>
>> Count me in for the vlan-aware-vms. We have now a solid design, it's
>> only a matter of putting it into code. I will help any way I can, I
>> really want to see this feature in Newton.
>>
>> cheers,
>>
>> Rossella
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RPC Communication Errors Might Lead to a Bad State

2016-04-13 Thread Dan Smith
>   * nova-api should receive an acknowledgement from nova-compute. It is
> unclear to me why today it uses a non-reply mechanism - probably to
> free the worker as fast as it can.

Yes, wherever possible, we want the API to return immediately and let
the action complete later. Making a wholesale change to blocking calls
from the API to any other service is not a good idea, IMHO.

>   * Change the task_state mechanism to prevent this kind of a stuck
> state to stay in the DB. nova-compute can be the one that writes the
> task_state to the DB, but this is not enough of course, but maybe
> there's another way?

The task_state being set in the API is our way of limiting/locking the
operation so that if the request is queued for a long time, a user
doesn't reissue the command a bunch of times and add load to the API
and/or jam up the queue with a thousand requests to do the same
operation just because it's taking a while.

>   * nova-api could start a timer for the action to complete. If the
> shelving operation hasn't completed in X seconds, it will clean it
> by itself and rollback\try-again.

I have wanted to make a change for a while that involves a TTL on
messages, along with a deadline record so that we can know when to retry
or revert things that were in flight. This requires a lot of machinery
to accomplish, and is probably interwoven with the task concept we've
had on the back burner for a while. The complexity of moving nova to
this sort of scheme means that nobody has picked it up as of yet, but
it's certainly in the minds of many of us as something we need to do
before too long.
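
Purely to illustrate the shape of such machinery (a sketch, not an existing
nova mechanism - the deadline_* DB helpers are hypothetical):

    import datetime

    def cast_with_deadline(client, ctxt, db, instance_uuid, method, ttl=300):
        # Record when this in-flight operation must be finished by.
        expires = (datetime.datetime.utcnow() +
                   datetime.timedelta(seconds=ttl))
        db.deadline_create(ctxt, instance_uuid, method, expires=expires)
        client.cast(ctxt, method, instance_uuid=instance_uuid)

    def expire_stuck_operations(ctxt, db):
        # Run periodically: revert anything whose deadline has passed.
        now = datetime.datetime.utcnow()
        for rec in db.deadline_get_expired(ctxt, now):
            db.instance_update(ctxt, rec.instance_uuid,
                               {'task_state': None})
            db.deadline_delete(ctxt, rec.id)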

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-13 Thread Jason Rist
On 04/13/2016 08:39 AM, Ryan Brady wrote:
> On Mon, Apr 11, 2016 at 5:54 AM, John Trowbridge  wrote:
>
> > Hola OOOers,
> >
> > It came up in the meeting last week that we could benefit from a CI
> > subteam with its own meeting, since CI is taking up a lot of the main
> > meeting time.
> >
> > I like this idea, and think we should do something similar for the other
> > informal subteams (tripleoclient, UI), and also add a new subteam for
> > tripleo-quickstart (and maybe one for releases?).
> >
> > We should make separate ACLs for these subteams as well. The informal
> > approach of adding cores who can +2 anything but are told to only +2
> > what they know doesn't scale very well.
> >
>
> +1 to subteams for selected projects.
>
> I think there should be a clearly defined practice of ensuring there is
> enough reviewers so that a subteam core doesn't need to +A their own
> patches.  I don't know if that's a standing rule in tripleo core, but I
> think it should be explicit in subteams.
>
> -r
>
Another +1 to this.
-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] os-vif status report

2016-04-13 Thread Daniel P. Berrange
I won't be present at the forthcoming Austin summit, so to prepare other
people in case there are f2f discussions, this is a rough status report
on the os-vif progress.


os-vif core
---

NB by os-vif core, I mean the python packages in the os_vif/ namespace.

The object model for describing the various different VIF backend
configurations is defined well enough that it should cover all the
VIF types currently used by Nova libvirt driver, and probably all
those needed by other virt drivers. The only exception is that we
do not have a representation for the vmware 'dvs' VIF type. There's
no real reason why not, other than the fact that we're concentrating
on converting the libvirt nova driver first. These are dealt with
by the os_vif.objects.VIFBase object and its subclasses.


We now have an object model for describing client host capabilities.
This is dealt with by the os_vif.objects.HostInfo versioned object
and the objects it uses. Currently this object provides details of all
the os-vif plugins that are installed on the host, and which VIF
config objects each supports.  The intent is that the HostInfo
object is serialized to JSON, and passed to Neutron by Nova when
creating a port.  This allows Neutron to dynamically decide which
plugin and which VIF config it wants to use for creating the port.
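
Roughly, building and serializing that advertisement looks like the sketch
below (based on my reading of the object names above; exact field names may
differ):

    from oslo_serialization import jsonutils
    from os_vif.objects import host_info

    # Advertise a single plugin that accepts VIFBridge config objects.
    info = host_info.HostInfo(plugin_info=[
        host_info.HostPluginInfo(
            plugin_name='linux_bridge',
            vif_info=[host_info.HostVIFInfo(
                vif_object_name='VIFBridge',
                min_version='1.0', max_version='1.0')])])

    # The serialized form Nova would hand to Neutron at port-create time.
    payload = jsonutils.dumps(info.obj_to_primitive())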


The os_vif.PluginBase class which all plugins must inherit from
has been enhanced so that plugins can declare configuration
parameters they wish to support. This allows config options for
the plugins to be included directly in the nova.conf file in
a dedicated section per plugin. For example, the linux bridge
plugin will have its parameters in a "[os_vif_linux_bridge]"
section in nova.conf.  This lets us setup the deprecations
correctly, so that when upgrading from older Nova, existing
settings in nova.conf still apply to the plugins provided
by os-vif.


os-vif reference plugins


Originally the intention was that all plugins would live outside
of the os-vif package. During discussions at the Nova mid-cycle
meeting there was a strong preference to have the linux bridge
and openvswitch plugin implementations be distributed as part of
the os-vif package directly.

As such we now have 'vif_plug_linux_bridge' and 'vif_plug_ovs'
python packages as part of the os-vif module. Note that these
are *not* under the os_vif python namespace, as the intention
was to keep their code structured as if they were separate,
so we can easily split them out again in future in we need to.

Both the linux bridge and ovs plugins have now been converted
over to use oslo.privsep instead of rootwrap for all the places
where they need to run privileged commands.


os-vif extra plugins


Jay has had GIT repositories created to hold the plugins for all
the other VIF types the libvirt driver needs to support to have
feature parity with Mitaka and earlier. AFAIK, no one has done
any work to actually get the code for these working. This is not
a blocker, since the way the Nova integration is written allows
us to incrementally convert each VIF type over to use os-vif, so
we avoid the need for a "big bang".


os-vif Nova integration
---

I have a patch up for review against Nova that converts the libvirt
driver to use os-vif. It only does the conversion for linux bridge
and openvswitch, all other vif types fallback to using the current
code, as mentioned above.  The unit tests for this pass locally,
but I've not been able to verify it's working correctly when run for
real. There are almost certainly privsep-related integration tasks to
shake out - possibly as little as just installing the rootwrap filter
needed to allow use of privsep. My focus right now is ironing this
out so that I can verify linux bridge + ovs work with os-vif.


There is a new job defined in the experimental queue that can verify
Nova against os-vif git master, so we can get forewarning
if something in os-vif will cause Nova to break. This should also
let us verify that the integration is actually working in Nova CI
before allowing it to actually merge.


os-vif Neutron integration
--

As mentioned earlier we now have a HostInfo versioned object defined
in os-vif which Nova will populate. We need to extend the Neutron API
to accept this object when nova creates a port. This lets Neutron know
which VIF plugins are available and the configs they require.

Once Neutron has this information, instead of sending back the current
unstructured port binding dict, it will be able to send back a serialized
os_vif.objects.VIFBase subclass which formally describes the VIF it wants
Nova to use. This might be possible by just defining a VIF_TYPE_OS_VIF
and putting the VIFBase serialized data in another port binding metadata
field. Alternatively it might be desirable to extend the Neutron API to
more explicitly represent os-vif.
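
As a rough sketch, the serialized object Neutron hands back might be
produced like this (field values are made up; the Neutron-side plumbing
is exactly the open question above):

    # Rough sketch; field values are invented and the Neutron-side
    # plumbing is the open design question described above.
    import os_vif
    from os_vif.objects import vif

    os_vif.initialize()  # one-time os-vif setup

    my_vif = vif.VIFBridge(
        id='bd2c0f4e-9d9c-449e-8a43-1c51f9d4c26b',
        address='fa:16:3e:aa:bb:cc',
        bridge_name='qbrbd2c0f4e-9d',
        vif_name='tapbd2c0f4e-9d',
    )
    # A JSON-friendly primitive that could be carried in the port binding.
    primitive = my_vif.obj_to_primitive()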

None of the Neutron integration has been started, or even written up.

Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Ian Cordasco
 

-Original Message-
From: Lance Bragstad 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: April 13, 2016 at 10:24:18
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates

> I think we need to ask who we are lowering the barrier of entry for. Are we
> going down this path because we want developers to have less things to do
> to stand up a development environment? Or do we want to make it easy for
> people to realistically test? If you're going to realistically vet magnum,
> why not make that PoC as realistic as possible, as in deploying with
> barbican. As an operator, I think it would be better to have an honest
> assessment of the work required to deploy magnum, even if it costs a little
> extra time. I'd rather hit roadblocks with the realistic approach early
> than reassure my team everything will work correctly when we didn't test
> what we planned to offer to our customers. In my experience, having
> roadblocks pop up later after commitment has been made is expensive and
> stressful.

I agree with you, but there is a feeling among some that they want to /try/ 
Magnum without Barbican. With Magnum supporting a filesystem storage driver, 
like Glance's filesystem storage driver, I think this can be accommodated for 
proofs of concept (e.g., that Magnum "works" and serves the user's needs). From 
an operational perspective, it will be very misleading, especially to 
management, when the idea of Magnum goes from PoC to supported and requires 
Barbican and some (Soft or not) HSM which needs to be deployed.

Keep in mind that Magnum's templates to deploy its COEs also have dependencies 
on other services that a small cloud may not deploy (e.g., Neutron) or features 
thereof that may not be enabled (e.g., LBaaS). So there may be yet more 
requests to make Magnum adoption easier, and those requests will impact usage of 
the deployed COE more than anything else.

(And yes, let's not forget that this thread started regarding adoption, not 
simplifying PoC deployments which, while certainly related, are not the same 
thing.)

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Adam Young

On 04/12/2016 03:43 PM, Hongbin Lu wrote:


Hi all,

In short, some Magnum team members proposed to store TLS certificates 
in Keystone credential store. As Magnum PTL, I want to get agreements 
(or non-disagreement) from OpenStack community in general, Keystone 
community in particular, before approving the direction.


In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for 
storing TLS certificates.



No it does not.

Nothing requires "secure storing of certificates."

What is required is "secure storing of private keys."  Period. Nothing 
else needs to be securely stored.


Next step is the "signing" of X509 certificates, and this requires a 
CA.  Barbican is the OpenStack abstraction for a CA, but still requires 
a "real" implementation to back it.  Dogtag is available for this role.



Now, what Keystone can and should do is provide a way to map an X509 
Certificate to a user.  This is actually much better done using the 
Federation approach than the Credentials store.
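
For instance (illustrative only -- the exact remote attribute depends on
how the web server exposes the client certificate), a Federation mapping
rule of roughly this shape could map the subject CN onto a local user:

    {
        "rules": [
            {
                "local": [{"user": {"name": "{0}"}}],
                "remote": [{"type": "SSL_CLIENT_S_DN_CN"}]
            }
        ]
    }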


Credentials kinda suck.  They should die in a fire.  They can't, but 
they should. Different rant though.


So, to nail it down specifically:  Keystone's  sole role here is to map  
the Subject from an X509 certificate to a user_id.  If you try to do 
anything more than that with Keystone, you are in a state of sin.


So, if what you want to do is to store an X509 Certificate in the 
Keystone Credentials API, go for it, but I don't know what it would buy 
you, as only the "owner" of that cert would then be able to retrieve it.



If, on the other hand, what you want to do is to decouple the 
request/approval of X509 from Barbican, I would suggest you use 
Certmonger.  It is an operating system level tool for exactly this 
purpose.  And then we should make sure that Barbican can act as a CA for 
Certmonger (I know that Dogtag can already).



There is nothing Magnum specific about this.  We need to solve the Cert 
story for OpenStack in general.  We need TLS for The Message Broker and 
the Database connections as well as any HTTPS servers we have.





Currently, we leverage Barbican for this purpose, but we constantly 
received requests to decouple Magnum from Barbican (because users 
normally don’t have Barbican installed in their clouds). Some Magnum 
team members proposed to leverage Keystone credential store as a 
Barbican alternative [1]. Therefore, I want to confirm what is 
Keystone team position for this proposal (I remembered someone from 
Keystone mentioned this is an inappropriate use of Keystone. Would I 
ask for further clarification?). Thanks in advance.


[1] 
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store


Best regards,

Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Matthew Treinish
On Wed, Apr 13, 2016 at 10:59:10AM -0400, Emilien Macchi wrote:
> Hi,
> 
> Current OpenStack Infra Periodic jobs do not send e-mails (only
> periodic-stable do), so I propose to create periodic-ci-reports
> mailing list [1] and to use it when our periodic jobs fail [2].
> If accepted, people who care about periodic jobs would like to
> subscribe to this new ML so they can read quick feedback from
> failures, thanks to e-mail filters.

So a big motivation behind openstack-health was to make doing this not
necessary. In practice the ML posts never get any real attention from people
and things just sit. [3] So, instead of trying to create another ML here
I think it would be better to figure out why openstack-health isn't working
for doing this and figure out how to improve it.

-Matt Treinish

> 
> The use-case is described in [2], please use Gerrit to give feedback.
> 
> Thanks,
> 
> [1] https://review.openstack.org/305326
> [2] https://review.openstack.org/305278
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-February/086706.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Ihar Hrachyshka

Matthew Treinish  wrote:


On Wed, Apr 13, 2016 at 10:59:10AM -0400, Emilien Macchi wrote:

Hi,

Current OpenStack Infra Periodic jobs do not send e-mails (only
periodic-stable do), so I propose to create periodic-ci-reports
mailing list [1] and to use it when our periodic jobs fail [2].
If accepted, people who care about periodic jobs would like to
subscribe to this new ML so they can read quick feedback from
failures, thanks to e-mail filters.


So a big motivation behind openstack-health was to make doing this not
necessary. In practice the ML posts never get any real attention from
people and things just sit. [3] So, instead of trying to create another
ML here I think it would be better to figure out why openstack-health
isn't working for doing this and figure out how to improve it.

-Matt Treinish


What I like in ML is that you get notified, instead of going to a website  
each day to check job health, while it passes. In the end, you go there not  
daily, but weekly or monthly, and so the failure response is not immediate.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] BoF OVN call for ideas/discussion points

2016-04-13 Thread David Medberry
Hi,

There is a Birds of a Feather session on OVN at Austin.
If you've got experience or questions or issues with OVN, you can register
them here:

https://etherpad.openstack.org/p/AUS-BoF-OVN

and participate in the session:
Wednesday, April 27, 1:50pm-2:30pm

https://www.openstack.org/summit/austin-2016/summit-schedule/events/6871?goback=1

Thanks!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova quota statistics counting issue

2016-04-13 Thread Dmitry Stepanenko
Hi Team,

I worked on the nova quota statistics issue (
https://bugs.launchpad.net/nova/+bug/1284424) happening when nova-*
processes are restarted while instances are being removed, and was able to
reproduce it. For the repro I used devstack and started nova-api and
nova-compute in separate screen windows, killing them with ctrl+c. I found
the issue happens if nova-* processes are killed after the instance was
deleted but right before the quota commit procedure finishes.

We discussed these results with Markus Zoeller and decided that even though
killing nova processes is a somewhat exotic event, this should still be fixed
because quota counting affects billing and is very important for us.

So, we need to introduce some mechanism that will prevent us from reaching
inconsistent quota states. In other words, this mechanism should guarantee
that the instance create/remove operation and the quota usage recount
operation either both happen or neither does.

Any ideas how to do that properly?
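
One illustrative possibility (a sketch only -- the model names are
hypothetical stand-ins, not actual Nova code) is to couple both updates
in a single database transaction so a killed process can never leave
them half-applied:

    # Sketch: atomically pair the instance soft-delete with the quota
    # usage decrement. Models are hypothetical stand-ins for Nova's schema.
    from sqlalchemy import Boolean, Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        uuid = Column(String(36), primary_key=True)
        deleted = Column(Boolean, default=False)

    class QuotaUsage(Base):
        __tablename__ = 'quota_usages'
        project_id = Column(String(36), primary_key=True)
        resource = Column(String(64), primary_key=True)
        in_use = Column(Integer, default=0)

    engine = create_engine('sqlite:///:memory:')
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)

    def delete_instance(instance_uuid, project_id):
        # Both rows commit together; any crash before commit rolls back
        # both, so usage can never diverge from the instances table.
        with Session() as session, session.begin():
            session.get(Instance, instance_uuid).deleted = True
            session.get(QuotaUsage, (project_id, 'instances')).in_use -= 1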

Kind regards,
Dmitry
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packstack] Update packstack core list

2016-04-13 Thread Javier Pena


- Original Message -
> 
> Hello,
> 
> I would like to step up as PTL if everybody is ok with it.
> 

Go for it Iván!

Javier

> Cheers,
> Ivan
> 
> - Original Message -
> > From: "Martin Magr" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Cc: "Javier Pena" , "David Moreau Simard"
> > , "Alan Pevec" ,
> > "Ivan Chavero" 
> > Sent: Tuesday, April 12, 2016 1:52:38 AM
> > Subject: Re: [openstack-dev] [packstack] Update packstack core list
> > 
> > Greetings guys,
> > 
> > 
> >   I will have to step down from PTL responsibility. TBH I haven't had time
> >   to work on Packstack lately and I probably won't have it in the future
> >   because of my other responsibilities. So from my point of view it is not
> >   correct to lead the project (though I'd like to contribute/do review from
> >   time to time).
> > 
> > Thanks for understanding,
> > Martin
> > 
> > 
> > - Original Message -
> > From: "Emilien Macchi" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Cc: "Javier Pena" , "David Moreau Simard"
> > 
> > Sent: Wednesday, March 16, 2016 3:50:37 PM
> > Subject: Re: [openstack-dev] [packstack] Update packstack core list
> > 
> > On Wed, Mar 16, 2016 at 6:35 AM, Alan Pevec  wrote:
> > > 2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
> > >>> ...
> > >>> - Martin Mágr
> > >>> - Iván Chavero
> > >>> - Javier Peña
> > >>> - Alan Pevec
> > >>>
> > >>> I have a doubt about Lukas, he's contributed an awful lot to
> > >>> Packstack, just not over the last 90 days. Lukas, will you be
> > >>> contributing in the future? If so, I'd include him in the proposal as
> > >>> well.
> > >>
> > >> Thanks, yeah I do plan to contribute just haven't had time lately for
> > >> packstack.
> > >
> > > I'm also adding David Simard who recently contributed integration tests.
> > >
> > > Since there hasn't been -1 votes for a week, I went ahead and
> > > implemented group membership changes in gerrit.
> > > Thanks to the past core members, we will welcome you back on the next
> > >
> > One more topic to discuss is whether we need a PTL election? I'm not sure we
> > need a formal election yet and the de-facto PTL has been Martin Magr, so if
> > there aren't other proposals let's just name Martin our overlord?
> > 
> > Packstack is not part of the OpenStack big tent so de facto it does not
> > need a PTL to work.
> > It's up to the project team to decide if whether or not a PTL is needed.
> > 
> > Oh and of course, go ahead Martin ;-)
> > 
> > > Cheers,
> > > Alan
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > --
> > Emilien Macchi
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-13 Thread John Trowbridge


On 04/11/2016 05:54 AM, John Trowbridge wrote:
> Hola OOOers,
> 
> It came up in the meeting last week that we could benefit from a CI
> subteam with its own meeting, since CI is taking up a lot of the main
> meeting time.
> 
> I like this idea, and think we should do something similar for the other
> informal subteams (tripleoclient, UI), and also add a new subteam for
> tripleo-quickstart (and maybe one for releases?).
> 
> We should make separate ACLs for these subteams as well. The informal
> approach of adding cores who can +2 anything but are told to only +2
> what they know doesn't scale very well.
> 
> - trown
> 

I went ahead and did this for tripleo-quickstart[1], and added Lars to
the tripleo-quickstart core team. It is relatively painless for anyone
else wanting to do the same.

- trown

[1] https://review.openstack.org/#/c/304145/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Fox, Kevin M
For evaluation, you should be able to throw it on a single machine with the 
file backend and skip barbican. Why do you need to do a partially hardened 
config? (magnum ha but insecure)

Thanks
Kevin

From: Clayton O'Neill [clay...@oneill.net]
Sent: Wednesday, April 13, 2016 7:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates

On Wed, Apr 13, 2016 at 10:26 AM, rezroo  wrote:
> Hi Kevin,
>
> I understand that this is how it is now. My question is how bad would it be
> to wrap the Barbican client library calls in another class and claim, for
> all practical purposes, that Magnum has no direct dependency on Barbican?
> What is the negative of doing that?
>
> Anyone who wants to use another mechanism should be able to do that with a
> simple change to the Magnum conf file. Nothing more complicated. That's the
> essence of my question.

For us, the main reason we’d want to be able to deploy without
Barbican is mostly to lower the initial barrier of entry.  We’re not
running anything else that would require Barbican for a multi-node
deployment, so for us to do a realistic evaluation of Magnum, we’d
have to get two “new to us” services up and running in a development
environment.  Since we’re not running Barbican or Magnum, that’s a big
time commitment for something we don’t really know if we’d end up
using.  From that perspective, something that’s less secure might be
just fine in the short term.  For example, I’d be completely fine with
storing certificates in the Magnum database as part of an evaluation,
knowing I had to switch from that before going to production.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Matthew Treinish
On Wed, Apr 13, 2016 at 06:22:50PM +0200, Ihar Hrachyshka wrote:
> Matthew Treinish  wrote:
> 
> > On Wed, Apr 13, 2016 at 10:59:10AM -0400, Emilien Macchi wrote:
> > > Hi,
> > > 
> > > Current OpenStack Infra Periodic jobs do not send e-mails (only
> > > periodic-stable do), so I propose to create periodic-ci-reports
> > > mailing list [1] and to use it when our periodic jobs fail [2].
> > > If accepted, people who care about periodic jobs would like to
> > > subscribe to this new ML so they can read quick feedback from
> > > failures, thanks to e-mail filters.
> > 
> > So a big motivation behind openstack-health was to make doing this not
> > necessary. In practice the ML posts never get any real attention from
> > people
> > and things just sit. [3] So, instead of trying to create another ML here
> > I think it would be better to figure out why openstack-health isn't working
> > for doing this and figure out how to improve it.
> > 
> > -Matt Treinish
> 
> What I like in ML is that you get notified, instead of going to a website
> each day to check job health, while it passes. In the end, you go there not
> daily, but weekly or monthly, and so the failure response is not immediate.
> 

So, sure I understand the attraction of an active notification. But, for that to
work that assumes people actively looking at things and taking action as soon as
they get an email. That very rarely happens, even for people who look at the
failures regularly. Let's take a look at the stable maint list:

http://lists.openstack.org/pipermail/openstack-stable-maint/2016-February/thread.html
http://lists.openstack.org/pipermail/openstack-stable-maint/2016-March/thread.html

That's 2 months; where is the immediate response on those? Only a handful
have a single isolated failure, and looking at them they are mostly transient
failures (like a down mirror, git failure, upstream release, etc.) which were
likely blocking other development and were almost certainly fixed without looking
at the ML results. The majority of the failures just sat for a long time.

The issue with the ML is that it's not actually used as a notification but
instead people only look at the details periodically. The ML provides a really
bad interface for visualizing these results over time. That's why using
openstack-health is a better solution for doing this. Now, I don't pretend that
it's complete or perfect, it's still a relatively young project. But, the thing
is everyone can help us work on fixing any issues or gaps with it:

http://git.openstack.org/cgit/openstack/openstack-health/

Instead of pretending the ML works for doing this, I think it'll be better if we
concentrate on making the dashboard for this better.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Douglas Mendizábal

Hi Reza,

The Barbican team has already abstracted python-barbicanclient into a
general purpose key-storage library called Castellan [1]

There are a few OpenStack projects that have planned to integrate or
are currently integrating with Castellan to avoid a hard dependency on
Barbican.

There are some tradeoffs to choosing Castellan over
python-barbicanclient and Castellan may not be right for everyone.
Also, the only complete implementation of Castellan is currently the
Barbican implementation, so even though integrating with Castellan
does not result in a direct dependency, there is still work to be done
to have a working non-barbican solution.
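
For illustration, storing a TLS private key through Castellan's generic
interface looks roughly like the following (a sketch: the request context
and backend configuration are elided, and the Barbican backend is
assumed):

    # Rough sketch of Castellan usage; configuration and context setup
    # are elided/illustrative.
    from castellan import key_manager
    from castellan.common.objects import private_key

    manager = key_manager.API()  # the configured key manager backend

    def store_tls_key(context, key_bytes):
        # Wrap the raw key material in a managed object and store it; the
        # returned UUID is persisted instead of the secret itself.
        key = private_key.PrivateKey(algorithm='RSA',
                                     bit_length=2048,
                                     key=key_bytes)
        return manager.store(context, key)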

- Douglas Mendizábal

[1] http://git.openstack.org/cgit/openstack/castellan/

On 4/13/16 9:26 AM, rezroo wrote:
> Hi Kevin,
> 
> I understand that this is how it is now. My question is how bad 
> would it be to wrap the Barbican client library calls in another 
> class and claim, for all practical purposes, that Magnum has no 
> direct dependency on Barbican? What is the negative of doing that?
> 
> Anyone who wants to use another mechanism should be able to do
> that with a simple change to the Magnum conf file. Nothing more 
> complicated. That's the essence of my question.
> 
> Appreciate your thoughts and insight.
> 
> Reza
> 
> On 4/13/2016 6:46 AM, Fox, Kevin M wrote:
>> Barbican is the abstraction layer. It's pluggable like nova, 
>> neutron, cinder, etc.
>> 
>> Thanks, Kevin
>>
>> 
>> *From:* rezroo
>> *Sent:* Tuesday, April 12, 2016 11:00:30 PM *To:* 
>> openstack-dev@lists.openstack.org *Subject:* Re: [openstack-dev] 
>> [magnum][keystone][all] Using Keystone /v3/credentials to store 
>> TLS certificates
>> 
>> Interesting conversation, and I think I have more of a question 
>> than a comment. With my understanding of OpenStack architecture, 
>> I don't understand the point about making "Magnum dependent on 
>> Barbican". Wouldn't this issue be completely resolved using a 
>> driver model, such as delegating the task to a separate class 
>> specified in magnum.conf, with a reference implementation using 
>> Barbian API (like the vif driver of nova, or nova chance vs. 
>> filter scheduler)? If people want choice, we know how to give 
>> them choice - decouple, and have a base implementation. The rest 
>> is up to them. That's the framework's architecture. What am I 
>> missing? Thanks, Reza
>> 
>> On 4/12/2016 9:16 PM, Fox, Kevin M wrote:
>>> Ops are asking for you to make it easy for them to make their 
>>> security weak. And as a user of other folks' clouds, I'd have
>>> no way to know the cloud is in that mode. That seems really
>>> bad for app developers/users.
>>> 
>>> Barbican, like some of the other services, won't become common 
>>> if folks keep trying to reimplement it so they don't have to 
>>> depend on it. Folks said the same things about Keystone. 
>>> Ultimately it was worth making it a dependency.
>>> 
>>> Keystone doesn't support encryption, so you are asking for new 
>>> functionality duplicating Barbican either way.
>>> 
>>> And we do understand the point of what you are trying to do.
>>> We just don't see eye to eye on it being a good thing to do. If
>>> you are invested enough in setting up an HA setup where you
>>> would need a clustered solution, Barbican's not that much of an
>>> extra lift compared to the other services you've already had
>>> to deploy. I've deployed both HA setups and Barbican before. HA
>>> is way worse.
>>> 
>>> Thanks, Kevin
>>> 
>>> 
>>> *From:* Adrian Otto
>>> *Sent:* Tuesday, April 12, 2016 8:06:03 PM *To:* OpenStack 
>>> Development Mailing List (not for usage questions) *Subject:* 
>>> Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
>>> /v3/credentials to store TLS certificates
>>> 
>>> Please don't miss the point here. We are seeking a solution 
>>> that allows a location to place a client side encrypted blob
>>> of data (A TLS cert) that multiple magnum-conductor processes
>>> on different hosts can reach over the network.
>>> 
>>> We *already* support using Barbican for this purpose, as well 
>>> as storage in flat files (not as secure as Barbican, and only 
>>> works with a single conductor) and are seeking a second 
>>> alternative for clouds that have not yet adopted Barbican, and 
>>> want to use multiple conductors. Once Barbican is common in 
>>> OpenStack clouds, both alternatives are redundant and can be 
>>> deprecated. If Keystone depends on Barbican, then we have no 
>>> reason to keep using it. That will mean that Barbican is core 
>>> to OpenStack.
>>> 
>>> Our alternative to using Keystone is storing the encrypted 
>>> blobs in the Magnum database which would cause us to add an
>>> API feature in magnum that is the exact functional equivalent
>>

[openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-13 Thread Colette Alexander
Hi everyone!

Quick summary of where we're at with leadership training: dates are
confirmed as available with ZingTrain, and we're finalizing trainers with
them right now. *June 28/29th in Ann Arbor, Michigan.*

https://etherpad.openstack.org/p/Leadershiptraining

Has updated info, a sample itinerary that's currently being worked on, and
a request for current/recent past TC to please sign up/reserve your spot if
you know you can attend. I'm also probably bugging you on IRC this week a
lot, for good measure. I'm currently waiting on an address for the training
site so I can give decent hotel recs. Keep an eye on the etherpad for that
in the next two days.

I would *really* love to get as many current and past TC members to this as
possible, but totally understand that is dependent on personal schedules
and travel budgets. *If you are uninterested or unable to attend, please
let me know ASAP,* as I'd like to get a count of how many spots are
potentially open for non-TC members of the community who'd like to attend.
I would like to confirm open space on this list by next Friday, April 22nd,
to allow community members who are interested to sign up throughout the end
of April and into early May.

Do let me know if you have any other questions, too.

-colette

(gothicmindfood)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-04-13 Thread Amrith Kumar
Andreas, Lana, Mike, Matt, and others who've been active on this thread,

I've been following this conversation about installation documentation and core 
vs. non-core projects from afar and was under the impression that the changes 
being proposed would take effect for Newton and moving forward.

Today I was informed that after a lot of effort and testing, the installation 
guide for Trove/Mitaka which is ready and up for review[1] has been placed on 
hold pending the outcome of your discussions in Austin.

The documentation that is now available and ready for review is for the Mitaka 
series and should not, I believe, be held up because there is now a proposal 
afoot to put non-core project installation guides somewhere else. If we choose 
to do that, that's a conversation for Newton, I believe, and I believe that the 
Trove installation guide for Mitaka should be considered for inclusion along 
with the other Mitaka documentation.

The lack of installation guides for a project is a serious challenge for 
deployers and users, and much work has been expended getting the Trove 
documentation ready and thoroughly tested on Ubuntu, RDO and SUSE.

I'm therefore requesting that the doc team consider this set of documentation 
for the Mitaka series and make it available with the other install guides for 
other projects after it has been reviewed, and not hold it subject to the 
outcome of some Newton focused discussion that is to happen in Austin.

Thanks,

-amrith


[1] https://review.openstack.org/#/c/298929/

> -Original Message-
> From: Andreas Jaeger [mailto:a...@suse.com]
> Sent: Monday, April 04, 2016 2:42 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore
> - What about big tent?
> 
> On 04/04/2016 12:12 PM, Thierry Carrez wrote:
> > Doug Hellmann wrote:
> >>> [...]
> >>> We would love to add all sufficiently mature projects to the
> >>> installation guide because it increases visibility and adoption by
> >>> operators, but we lack resources to develop a source installation
> >>> mechanism that retains as much simplicity as possible for our
> >>> audience.
> >>
> >> I think it would be a big mistake to try to create one guide for
> >> installing all OpenStack projects. As you say, testing what we have
> >> now is already a monumental task and impedes your ability to make
> >> changes.  Adding more projects, with ever more dependencies and
> >> configuration issues to the work the same team is doing would bury
> >> the current documentation team. So I think focusing on the DefCore
> >> list, or even a smaller list of projects with tight installation
> >> integration requirements, makes sense for the team currently
> >> producing the installation guide.
> >
> > Yes, the base install guide should ideally serve as a reference to
> > reach that first step where you have all the underlying services
> > (MySQL,
> > Rabbit) and a base set of functionality (starterkit:compute ?)
> > installed and working. That is where we need high-quality,
> > proactively-checked, easy to understand content.
> >
> > Then additional guides (ideally produced by each project team with
> > tooling and mentoring from the docs team) can pick up from that base
> > first step, assuming their users have completed that first step
> > successfully.
> >
> 
> Fully agreed.
> 
> I just wrote a first draft spec for all of this and look forward to
> reviews.
> 
> I'll enhance some more tomorrow, might copy a bit from above (saw this too
> late).
> 
> https://review.openstack.org/301284
> 
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Jeremy Stanley
On 2016-04-13 12:58:47 -0400 (-0400), Matthew Treinish wrote:
> So, sure I understand the attraction of an active notification.
[...]
> Instead, of pretending the ML work for doing this I think it'll be
> better if we concentrate on making the dashboard for this better.

Mentioned in IRC as well, but would an RSS/ATOM feed be a good
compromise between active notification and focus on the dashboard as
an entry point to researching job failures?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Matthew Treinish
On Wed, Apr 13, 2016 at 05:22:28PM +, Jeremy Stanley wrote:
> On 2016-04-13 12:58:47 -0400 (-0400), Matthew Treinish wrote:
> > So, sure I understand the attraction of an active notification.
> [...]
> > Instead, of pretending the ML work for doing this I think it'll be
> > better if we concentrate on making the dashboard for this better.
> 
> Mentioned in IRC as well, but would an RSS/ATOM feed be a good
> compromise between active notification and focus on the dashboard as
> an entry point to researching job failures?

That sounds like a really good idea, I like it. It enables individuals who say
they want notifications for failures to subscribe to something and receive them,
but in the normal case there is no extra noise being injected. We can add the
rss links off of http://status.openstack.org/openstack-health to give the
appearance of uniformity (regardless of how we actually implement the feeds).
We can also have it generate links from jobs to openstack-health pages, etc. This
definitely feels like a better approach to me.
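
As a rough sketch of what generating such a feed could look like
(assuming the third-party feedgen library; the job names, URLs, and data
source here are all made up):

    # Rough sketch, assuming the third-party 'feedgen' library; the job
    # data and URLs are invented for illustration.
    from feedgen.feed import FeedGenerator

    fg = FeedGenerator()
    fg.id('http://status.openstack.org/openstack-health/periodic')
    fg.title('Periodic job failures')
    fg.link(href='http://status.openstack.org/openstack-health/',
            rel='alternate')
    fg.description('Recently failing periodic CI jobs')

    failures = [{'name': 'periodic-tempest-dsvm-neutron-full',
                 'url': 'http://status.openstack.org/openstack-health/'
                        '#/job/periodic-tempest-dsvm-neutron-full'}]
    for job in failures:
        fe = fg.add_entry()
        fe.id(job['url'])
        fe.title('FAILURE: %s' % job['name'])
        fe.link(href=job['url'])

    print(fg.atom_str(pretty=True).decode())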

-Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-04-13 Thread Andreas Jaeger
On 04/13/2016 07:17 PM, Amrith Kumar wrote:
> Andreas, Lana, Mike, Matt, and others who've been active on this thread,
> 
> I've been following this conversation about installation documentation and 
> core vs. non-core projects from afar and was under the impression that the 
> changes being proposed would take effect for Newton and moving forward.
> 
> Today I was informed that after a lot of effort and testing, the installation 
> guide for Trove/Mitaka which is ready and up for review[1] has been placed on 
> hold pending the outcome of your discussions in Austin.

> The documentation that is now available and ready for review is for the 
> Mitaka series and should not, I believe, be held up because there is now a 
> proposal afoot to put non-core project installation guides somewhere else. If 
> we choose to do that, that's a conversation for Newton, I believe, and I 
> believe that the Trove installation guide for Mitaka should be considered for 
> inclusion along with the other Mitaka documentation.

Amrith, I'm a bit surprised by this email and request. So, let me give
some more context.

There's a spec out:
https://review.openstack.org/#/c/290053 for this work which came very
late. Bogdan asked on the 23rd of March, and I commented on the spec
with -1 on the 27th of March that this is a post-Mitaka topic. Then, on
the 29th of March, your referenced change gets submitted -without any
followup discussion on the spec.

Would you have taken a code change under these conditions for trove itself?

While I applaud your team's work, the documentation team also needs to
review content you propose for consistency - and that takes time. We're
still fleshing out some details for some of the guides for Mitaka.

> The lack of installation guides for a project is a serious challenge for 
> deployers and users, and much work has been expended getting the Trove 
> documentation ready and thoroughly tested on Ubuntu, RDO and SUSE.
> 
> I'm therefore requesting that the doc team consider this set of documentation 
> for the Mitaka series and make it available with the other install guides for 
> other projects after it has been reviewed, and not hold it subject to the 
> outcome of some Newton focused discussion that is to happen in Austin.

I'm glad about the work the team has done and will not block this going
in on my own. IMHO we have the following options:

1) Wait until Austin and speed track this change afterwards based on the
outcome of the discussion there if possible.
2) Take the change in with the explicit understanding that it might be
taken out again based on the general Install Guide discussion.
3) Do nothing for Mitaka.

I'm happy to take my -2 away from the change after the spec has been
approved and we've decided which of the options to take - and for that
I defer to Lana and Matt.

So, let's discuss how to move forward on the documentation list with the
docs team and see what they suggest,

Andreas

> Thanks,
> 
> -amrith
> 
> 
> [1] https://review.openstack.org/#/c/298929/
> 
>> -Original Message-
>> From: Andreas Jaeger [mailto:a...@suse.com]
>> Sent: Monday, April 04, 2016 2:42 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore
>> - What about big tent?
>>
>> On 04/04/2016 12:12 PM, Thierry Carrez wrote:
>>> Doug Hellmann wrote:
> [...]
> We would love to add all sufficiently mature projects to the
> installation guide because it increases visibility and adoption by
> operators, but we lack resources to develop a source installation
> mechanism that retains as much simplicity as possible for our
> audience.

 I think it would be a big mistake to try to create one guide for
 installing all OpenStack projects. As you say, testing what we have
 now is already a monumental task and impedes your ability to make
 changes.  Adding more projects, with ever more dependencies and
 configuration issues to the work the same team is doing would bury
 the current documentation team. So I think focusing on the DefCore
 list, or even a smaller list of projects with tight installation
 integration requirements, makes sense for the team currently
 producing the installation guide.
>>>
>>> Yes, the base install guide should ideally serve as a reference to
>>> reach that first step where you have all the underlying services
>>> (MySQL,
>>> Rabbit) and a base set of functionality (starterkit:compute ?)
>>> installed and working. That is where we need high-quality,
>>> proactively-checked, easy to understand content.
>>>
>>> Then additional guides (ideally produced by each project team with
>>> tooling and mentoring from the docs team) can pick up from that base
>>> first step, assuming their users have completed that first step
>>> successfully.
>>>
>>
>> Fully agreed.
>>
>> I just wrote a first draft spec for all of this and look forward 

[openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread Matt Riedemann

I'd like to propose that we make Andrey Kurilin core on python-novaclient.

He's been doing a lot of the maintenance the last several months and a 
lot of times is the first to jump on any major issue, does a lot of the
microversion work, and is also working on cleaning up docs and helping 
me with planning releases.


His work is here [1].

Review stats for the last 4 months (although he's been involved in the 
project longer than that) [2].


Unless there is disagreement I plan to make Andrey core by the end of 
the week.


[1] 
https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient

[2] http://stackalytics.com/report/contribution/python-novaclient/120

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-04-13 Thread Jonathan D. Proulx
On Wed, Apr 13, 2016 at 05:17:18PM +, Amrith Kumar wrote:

:Today I was informed that after a lot of effort and testing, the installation 
guide for Trove/Mitaka which is ready and up for review[1] has been placed on 
hold pending the outcome of your discussions in Austin.

I've not been following this thread at all so apologies if I'm
confused.

As an operator and a (former? it's been a while) docs contributor it
seems to me that the Newton proposal makes sense (given my brief
reading).  I don't see why Mitaka docs that are already written and
tested should be held up on that though; seems the point of coordinated
releases is so others can rely on major structures being stable through
them.

My $0.02,
-Jon

:
:The documentation that is now available and ready for review is for the Mitaka 
series and should not, I believe, be held up because there is now a proposal 
afoot to put non-core project installation guides somewhere else. If we choose 
to do that, that's a conversation for Newton, I believe, and I believe that the 
Trove installation guide for Mitaka should be considered for inclusion along 
with the other Mitaka documentation.
:
:The lack of installation guides for a project is a serious challenge for 
deployers and users, and much work has been expended getting the Trove 
documentation ready and thoroughly tested on Ubuntu, RDO and SUSE.
:
:I'm therefore requesting that the doc team consider this set of documentation 
for the Mitaka series and make it available with the other install guides for 
other projects after it has been reviewed, and not hold it subject to the 
outcome of some Newton focused discussion that is to happen in Austin.
:
:Thanks,
:
:-amrith
:
:
:[1] https://review.openstack.org/#/c/298929/
:
:> -Original Message-
:> From: Andreas Jaeger [mailto:a...@suse.com]
:> Sent: Monday, April 04, 2016 2:42 PM
:> To: OpenStack Development Mailing List (not for usage questions)
:> 
:> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore
:> - What about big tent?
:> 
:> On 04/04/2016 12:12 PM, Thierry Carrez wrote:
:> > Doug Hellmann wrote:
:> >>> [...]
:> >>> We would love to add all sufficiently mature projects to the
:> >>> installation guide because it increases visibility and adoption by
:> >>> operators, but we lack resources to develop a source installation
:> >>> mechanism that retains as much simplicity as possible for our
:> >>> audience.
:> >>
:> >> I think it would be a big mistake to try to create one guide for
:> >> installing all OpenStack projects. As you say, testing what we have
:> >> now is already a monumental task and impedes your ability to make
:> >> changes.  Adding more projects, with ever more dependencies and
:> >> configuration issues to the work the same team is doing would bury
:> >> the current documentation team. So I think focusing on the DefCore
:> >> list, or even a smaller list of projects with tight installation
:> >> integration requirements, makes sense for the team currently
:> >> producing the installation guide.
:> >
:> > Yes, the base install guide should ideally serve as a reference to
:> > reach that first step where you have all the underlying services
:> > (MySQL,
:> > Rabbit) and a base set of functionality (starterkit:compute ?)
:> > installed and working. That is where we need high-quality,
:> > proactively-checked, easy to understand content.
:> >
:> > Then additional guides (ideally produced by each project team with
:> > tooling and mentoring from the docs team) can pick up from that base
:> > first step, assuming their users have completed that first step
:> > successfully.
:> >
:> 
:> Fully agreed.
:> 
:> I just wrote a first draft spec for all of this and look forward to
:> reviews.
:> 
:> I'll enhance some more tomorrow, might copy a bit from above (saw this too
:> late).
:> 
:> https://review.openstack.org/301284
:> 
:> Andreas
:> --
:>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
:>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
:>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
:>HRB 21284 (AG Nürnberg)
:> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
:> 
:> 
:> __
:> OpenStack Development Mailing List (not for usage questions)
:> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
:> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
:
:__
:OpenStack Development Mailing List (not for usage questions)
:Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-04-13 Thread Jonathan D. Proulx
On Wed, Apr 13, 2016 at 01:52:38PM -0400, Jonathan D. Proulx wrote:

:I've not been following this thread at all so appologies if I'm
:confused.

 Reading follow-up emails relating to the timing of various submissions, I
 back away slowly, clearly not having all the context on this one.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-04-13 Thread Amrith Kumar
Andreas,

Thanks for your email. I am aware of the reviews you describe below but I was 
still under the impression that the status from the emails on openstack-docs 
(Mitaka Install Guide testing) [1] and [2] was still valid.

The understanding I had from those email threads is that the door hadn't yet 
closed. But I'll defer to the doc team; I think you understand the motivation 
for my request, and I respect (and fully admit that I don't understand) the 
complexities involved in releasing documentation.

I trust that if it is at all possible, you will accommodate the request. Of 
your options below, I would request #2 if at all possible.

Thanks,

-amrith


[1] http://lists.openstack.org/pipermail/openstack-docs/2016-March/008385.html
[2] http://lists.openstack.org/pipermail/openstack-docs/2016-March/008387.html



> -Original Message-
> From: Andreas Jaeger [mailto:a...@suse.com]
> Sent: Wednesday, April 13, 2016 1:46 PM
> To: Amrith Kumar ; OpenStack Development Mailing List
> (not for usage questions) 
> Cc: mkassaw...@gmail.com; Lana Brindley ; Mike
> Perez ; openstack-d...@lists.openstack.org
> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore
> - What about big tent?
> 
> On 04/13/2016 07:17 PM, Amrith Kumar wrote:
> > Andreas, Lana, Mike, Matt, and others who've been active on this
> > thread,
> >
> > I've been following this conversation about installation documentation
> and core vs. non-core projects from afar and was under the impression that
> the changes being proposed would take effect for Newton and moving
> forward.
> >
> > Today I was informed that after a lot of effort and testing, the
> installation guide for Trove/Mitaka which is ready and up for review[1]
> has been placed on hold pending the outcome of your discussions in Austin.
> 
> > The documentation that is now available and ready for review is for the
> Mitaka series and should not, I believe, be held up because there is now a
> proposal afoot to put non-core project installation guides somewhere else.
> If we choose to do that, that's a conversation for Newton, I believe, and
> I believe that the Trove installation guide for Mitaka should be
> considered for inclusion along with the other Mitaka documentation.
> 
> Amrith, I'm a bit surprised by this email and request. So, let me give
> some more context.
> 
> There's a spec out:
> https://review.openstack.org/#/c/290053 for this work which came very
> late. Bogdan asked on the 23rd of March, and I commented on the spec with
> -1 on the 27th of March that this is a post-Mitaka topic. Then, on the
> 29th of March, your referenced change was submitted - without any follow-up
> discussion on the spec.
> 
> Would you have taken a code change under these conditions for trove
> itself?
> 
> While I applaud your team's work, the documentation team also needs to
> review content you propose for consistency - and that takes time. We're
> still fleshing out some details for some of the guides for Mitaka.
> 
> > The lack of installation guides for a project is a serious challenge for
> deployers and users, and much work has been expended getting the Trove
> documentation ready and thoroughly tested on Ubuntu, RDO and SUSE.
> >
> > I'm therefore requesting that the doc team consider this set of
> documentation for the Mitaka series and make it available with the other
> install guides for other projects after it has been reviewed, and not hold
> it subject to the outcome of some Newton focused discussion that is to
> happen in Austin.
> 
> I'm glad about the work the team has done and will not block this going in
> on my own. IMHO we have the following options:
> 
> 1) Wait until Austin and speed track this change afterwards based on the
> outcome of the discussion there if possible.
> 2) Take the change in with the explicit understanding that it might be
> taken out again based on the general Install Guide discussion.
> 3) Do nothing for Mitaka.
> 
> I'm happy to take my -2 away from the change after the spec has been
> approved and we've decided which of the options to take - and for that I
> defer to Lana and Matt.
> 
> So, let's discuss how to move forward on the documentation list with the
> docs team and see what they suggest,
> 
> Andreas
> 
> > Thanks,
> >
> > -amrith
> >
> >
> > [1] https://review.openstack.org/#/c/298929/
> >
> >> -Original Message-
> >> From: Andreas Jaeger [mailto:a...@suse.com]
> >> Sent: Monday, April 04, 2016 2:42 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> 
> >> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover
> >> Defcore
> >> - What about big tent?
> >>
> >> On 04/04/2016 12:12 PM, Thierry Carrez wrote:
> >>> Doug Hellmann wrote:
> > [...]
> > We would love to add all sufficiently mature projects to the
> > installation guide because it increases visibility and adoption by
> > operators, but we lack resources to develop a source installation

Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread Jay Pipes

Big +1.

On 04/13/2016 01:53 PM, Matt Riedemann wrote:

I'd like to propose that we make Andrey Kurilin core on python-novaclient.

He's been doing a lot of the maintenance the last several months and a
lot of times is the first to jump on any major issue, does a lot of the
microversion work, and is also working on cleaning up docs and helping
me with planning releases.

His work is here [1].

Review stats for the last 4 months (although he's been involved in the
project longer than that) [2].

Unless there is disagreement I plan to make Andrey core by the end of
the week.

[1]
https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient

[2] http://stackalytics.com/report/contribution/python-novaclient/120



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Clint Byrum
Excerpts from Douglas Mendizábal's message of 2016-04-13 10:01:21 -0700:
> 
> Hi Reza,
> 
> The Barbican team has already abstracted python-barbicanclient into a
> general purpose key-storage library called Castellan [1]
> 
> There are a few OpenStack projects that have planned to integrate or
> are currently integrating with Castellan to avoid a hard dependency on
> Barbican.
> 
> There are some tradeoffs to choosing Castellan over
> python-barbicanclient and Castellan may not be right for everyone.
> Also, the only complete implementation of Castellan is currently the
> Barbican implementation, so even though integrating with Castellan
> does not result in a direct dependency, there is still work to be done
> to have a working non-barbican solution.

From an outsider's perspective with no real stake in this debate,
this sounds like a very reasonable way for Magnum to proceed, with
a pre-dependency that they would move their file-based approach into
Castellan.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Clint Byrum
Excerpts from Clayton O'Neill's message of 2016-04-13 07:37:16 -0700:
> On Wed, Apr 13, 2016 at 10:26 AM, rezroo  wrote:
> > Hi Kevin,
> >
> > I understand that this is how it is now. My question is how bad would it be
> > to wrap the Barbican client library calls in another class and claim, for
> > all practical purposes, that Magnum has no direct dependency on Barbican?
> > What is the negative of doing that?
> >
> > Anyone who wants to use another mechanism should be able to do that with a
> > simple change to the Magnum conf file. Nothing more complicated. That's the
> > essence of my question.
> 
> For us, the main reason we’d want to be able to deploy without
> Barbican is mostly to lower the initial barrier of entry.  We’re not
> running anything else that would require Barbican for a multi-node
> deployment, so for us to do a realistic evaluation of Magnum, we’d
> have to get two “new to us” services up and running in a development
> environment.  Since we’re not running Barbican or Magnum, that’s a big
> time commitment for something we don’t really know if we’d end up
> using.  From that perspective, something that’s less secure might be
> just fine in the short term.  For example, I’d be completely fine with
> storing certificates in the Magnum database as part of an evaluation,
> knowing I had to switch from that before going to production.
> 

I'd say there's a perfectly reasonable option already for evaluation
purposes, and that is the existing file-based backend. For multiple
nodes, I wonder how poorly an evaluation will go if one simply rsyncs
that directory every few minutes.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-13 Thread gordon chung
hi Nadya,

copy/pasting full original message with comments inline to clarify some 
comments.

i think a lot of the confusion is because we use pipeline.yaml across 
both polling and notification agents when really it only applies to 
latter. just an fyi, we've had an open work item to create a 
polling.yaml file... just the issue of 'resources'.

> Hello colleagues,
>
> I'd like to discuss one question with you. Perhaps, you remember that
> in Liberty we decided to get rid of transformers on polling agents [1]. I'd
> like to describe several issues we are facing now because of this decision.
> 1. pipeline.yaml inconsistency.
> Ceilometer pipeline consists from the two basic things: source and
> sink. In source, we describe how to get data, in sink - how to deal with
> the data. After the refactoring described in [1], on polling agents we
> apply only "source" definition, on notification agents we apply only "sink"
> one. It causes the problems described in the mailing thread [2]: the "pipe"
> concept is actually broken. To make it work more or less correctly, the
> user should care that from a polling agent he/she doesn't send duplicated
> samples. In the example below, we send "cpu" Sample twice each 600 seconds
> from a compute agents:
>
> sources:
> - name: meter_source
> interval: 600
> meters:
> - "*"
> sinks:
> - meter_sink
> - name: cpu_source
> interval: 60
> meters:
> - "cpu"
> sinks:
> - cpu_sink
> - cpu_delta_sink
>
> If we apply the same configuration on notification agent, each "cpu" Sample
> will be processed by all of the 3 sinks. Please refer to the mailing thread
> [2] for more details.
> As I understood from the specification, the main reason for [1] was
> to make the pollster code more readable. That's why I call this change a
> "refactoring". Please correct me if I've missed anything here.

i don't know about more readable. it was also to offload work from 
compute nodes and all the stuff cdent mentions.

>
> 2. Coordination stuff.
> TBH, coordination for notification agents is the most painful thing for
> me because of several reasons:
>
> a. A stateless service has become stateful. Here I'd like to note that tooz
> usage for central agents and alarm-evaluators may be called "optional". If
> you want to have these services scalable, it is recommended to use tooz,
> i.e. install Redis/Zookeeper. But you may leave your puppets unchanged and
> everything continues to work with one service (central agent or
> alarm-evaluator) per cloud. If we are talking about the notification agent,
> that's not the case. You must change the deployment: either rewrite the
> puppets for notification agent deployment (to have only one notification
> agent per cloud) or make a tooz installation with Redis/Zookeeper required.
> One more option: remove transformations completely - that's what we've done
> in our company's product by default.

the polling change is not related to the coordination work in 
notification. the coordination work was to handle HA / multiple 
notification agents. regardless of the polling change, this must exist.
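
fwiw, the tooz side of this is small. roughly what an agent does is the 
following (a sketch assuming a redis backend, with made-up member and 
group names -- not the actual agent code):

    # a notification agent joining a coordination group via tooz.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'redis://localhost:6379', b'notification-agent-1')
    coordinator.start()
    try:
        coordinator.create_group(b'ceilometer-notification').get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(b'ceilometer-notification').get()
    # group membership decides which agent owns which IPC queue; each
    # agent has to heartbeat periodically or it is evicted.
    coordinator.heartbeat()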

>
> b. RabbitMQ high utilisation. As you know, tooz does only one part of the
> coordination for a notification agent. In Ceilometer, we use an IPC queue
> mechanism to be sure that samples for one metric from one resource are
> processed by exactly one notification agent (to make it possible to use a
> local cache). I'd like to remind you that without coordination (but with
> [1] applied) each compute agent polls each instance and sends the result
> as one message to a notification agent. The

this is not entirely accurate. pre-polling change, the polling agents 
published one message per sample. now the polling agents publish one 
message per interval (multiple samples).

> notification agent processes all the samples and sends as many messages to
> a collector as there are sinks defined (2-4, not many). If [1] is not
> applied, one "publishing" round is skipped. But with [1] and coordination
> (the most recommended deployment), the amount of publications increases
> dramatically because we publish each Sample as a separate message. Instead
> of 3-5 "publish" calls, we do 1+2*instance_amount_on_compute publishings
> per compute. And it's by design, i.e. it's not a bug but a feature.

i don't think the maths is right but regardless, IPC is one of the 
standard use cases for message queues. the concept of using queues to 
pass around and distribute work is essentially what it's designed for. 
if rabbit or any message queue service can't provide this function, it 
does worry me.

>
> c. Samples ordering in the queues. It may be considered a corner case,
> but anyway I'd like to describe it here too. We have a lot of
> order-sensitive transformers (cpu.delta, cpu_util), but we can guarantee
> message ordering only in the "main" polling queue, not in the IPC queues. In
> the picture below (hope it will be displayed) there are 3 agents A1, A2 and
> A3 and 3 time-ordered m

Re: [openstack-dev] [packstack] Update packstack core list

2016-04-13 Thread Alan Pevec
>> I would like to step up as PTL if everybody is ok with it.
> 
> Go for it Iván!

+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Douglas Mendizábal
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi Hongbin,

I have to admit that it's a bit disappointing that the Magnum team
chose to decouple from Barbican, although I do understand that our
team needs to do a better job of documenting detailed how-tos for
deploying Barbican.

I'm not sure that I understand the Threat Model you're trying to
protect against, and I have not spent a whole lot of time researching
Magnum architecture so please forgive me if my assumptions are wrong.

So that we're all on the same page, I'm going to summarize the TLS
use-case as I understand it:

The magnum-conductor is a single process that may be scalable at some
point in the future. [1]

When the magnum-conductor is asked to provision a new bay the
following things happen:
1. A new self-signed root CA is created.  This results in a Root CA
Certificate and its associated key
2. N number of nodes are created to be part of the new bay.  For each
node, a new x509 certificate is provisioned and signed by the Root CA
created in 1.  This results in a certificate and key pair for each node.
3. The conductor then needs to store all generated keys in a secure
location.
4. The conductor would also like to store all generated Certificates
in a secure location, although this is not strictly necessary since
Certificates contain no secret information as pointed out by Adam
Young elsewhere in this thread.

Currently the conductor is using python-barbicanclient to store the
Root CA and Key in Barbican and associates those secrets via a
Certificate Container and then stores the container URI in the
conductor database.

Since most users of Magnum are unwilling or unable to deploy Barbican,
the Magnum team would like an alternative mechanism for storing all
keys as well as the Certificates.

Additionally, since magnum-conductor may be more than one process in
the future, the alternative storage must be available to many
magnum-conductors.

Now, in the proposed Keystone alternative the magnum-conductor will
have a (symmetric?) encryption key.  Let's call this key the DEK
(short for data-encryption-key).  How the DEK is stored and replicated
to other magnum-conductors is outside of the scope of the proposed
alternative solution.
The magnum-conductor will use the DEK to encrypt all Certificates and
Keys and then store the resulting ciphertexts using the Keystone
credentials endpoint.

This begs the question: If you're pre-encrypting all this data with
the DEK, why do you need to store it in an external system?  I see no
security benefit of using Keystone credentials over just storing these
ciphertexts in a table in the database that all magnum-conductors will
already have access to.

I think a better alternative would be to integrate with Castellan and
develop a new Castellan implementation where the DEK is specified in a
config file, and the ciphertexts are stored in a database.  Let's call
this new implementation LocalDEKAndDBKeyManager.

With this approach the deployer could specify the
LocalDEKAndDBKeyManager class as the implementation of Castellan to be
used for their deployment, and then the DEK and db connection string
could be specified in the config as well.
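
To make that concrete, here is a bare-bones sketch of the idea. The
class name comes from above; the Fernet choice, the dict standing in
for a database table, and everything else here are illustrative
assumptions, not Magnum or Castellan code:

    import uuid

    from cryptography.fernet import Fernet


    class LocalDEKAndDBKeyManager(object):
        def __init__(self, dek):
            # dek: a urlsafe-base64 32-byte key read from the config
            # file; every conductor must be given the same one.
            self._fernet = Fernet(dek)
            self._table = {}  # stand-in for a ciphertext table in the DB

        def store(self, plaintext):
            # Encrypt before the secret ever reaches shared storage.
            ref = str(uuid.uuid4())
            self._table[ref] = self._fernet.encrypt(plaintext)
            return ref  # one reference kept per cert/key, as noted below

        def get(self, ref):
            return self._fernet.decrypt(self._table[ref])


    # Any conductor holding the same DEK can read back a key stored by
    # another conductor.
    manager = LocalDEKAndDBKeyManager(Fernet.generate_key())
    ref = manager.store(b'-----BEGIN RSA PRIVATE KEY-----...')
    assert manager.get(ref).startswith(b'-----BEGIN RSA PRIVATE KEY-----')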

By introducing the Castellan abstraction you would lose the ability to
group secrets into containers, so you'd have to store separate
references for each cert and key instead of just one barbican
reference for both.  Also, you would probably have to write the
Castellan integration in a way that always uses a context that is
generated from the config file which will result in all keys being
owned by the Magnum service tenant instead of the user's tenant when
using Barbican as a backend.

The upshot is that a deployer could choose the existing Barbican
implementation instead, and other projects may be able to make use of
the LocalDEKAndDBKeyManager.

- - Douglas Mendizábal

[1] http://docs.openstack.org/developer/magnum/#architecture

On 4/13/16 10:14 AM, Hongbin Lu wrote:
> I think there are two questions here:
> 
> 1.   Should Magnum decouple from Barbican?
> 
> 2.   Which options Magnum should use to achieve #1 (leverage 
> Keystone credential store, or other alternatives [1])?
> 
> For question #1, Magnum team has thoughtfully discussed it. I think
> we all agreed that Magnum should decouple from Barbican for now (I
> didn’t hear any disagreement from any of our team members). What we
> are currently debating is question #2. That is which approach we
> should use to achieve the goal. The first option is to store TLS
> credentials in Keystone. The second option is to store the
> credentials in Magnum DB. The third option is to eliminate the need
> to store TLS credentials (e.g. switch to another non-TLS
> authentication mechanism). What we want to know is if Keystone team
> allows us to pursue the first option. If it is disallowed, I will
> suggest Magnum team to pursue other options.
> 
> So, for the original question, does Keystone team allow us to
> store encrypted da

[openstack-dev] [trove] Trove weekly meeting notes

2016-04-13 Thread Amrith Kumar
The minutes of the Trove weekly meeting are at

http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-04-13-18.00.html

AGREED:

We decided that changes from the proposal bot (requirements, translations) can 
be approved on master by a single +2.

The python34 gate jobs can be changed from non-gating to gating

The list of specification reviews that are currently outstanding, and that we 
should prioritize is provided below.

255437 - Enable CouchDB replication in Trove
256057 - Implementing CEPH as a backend for backups
256079 - Add support for hbase in Trove
263980 - Configuration Groups for Couchdb Implements: blueprint couchdb-configuration-...
294213 - Multi-Region Support
295274 - Separate trove image build project based on libguestfs tools
298994 - Replication/cluster locality
302416 - Image Upgrade
302952 - extending trove to better utilize storage capabilities


Other Newton projects for which specs have not yet been submitted ... please 
submit them as soon as possible so that we can have productive discussions in 
Austin.

Thanks,

-amrith
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Emilien Macchi
On Wed, Apr 13, 2016 at 12:13 PM, Matthew Treinish  wrote:
> On Wed, Apr 13, 2016 at 10:59:10AM -0400, Emilien Macchi wrote:
>> Hi,
>>
>> Current OpenStack Infra Periodic jobs do not send e-mails (only
>> periodic-stable do), so I propose to create periodic-ci-reports
>> mailing list [1] and to use it when our periodic jobs fail [2].
>> If accepted, people who care about periodic jobs would like to
>> subscribe to this new ML so they can read quick feedback from
>> failures, thanks to e-mail filters.
>
> So a big motivation behind openstack-health was to make doing this not
> necessary. In practice the ML posts never get any real attention from people
> and things just sit. [3] So, instead of trying to create another ML here
> I think it would be better to figure out why openstack-health isn't working
> for doing this and figure out how to improve it.

I like openstack-health, I use it mostly every day.
Though I miss notifications, like we have with emails.
Something we could investigate is RSS support in openstack-health.

> -Matt Treinish
>
>>
>> The use-case is described in [2], please use Gerrit to give feedback.
>>
>> Thanks,
>>
>> [1] https://review.openstack.org/305326
>> [2] https://review.openstack.org/305278
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086706.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread Ken'ichi Ohmichi
+1
Thanks for implementing microversion support, Andrey.


2016-04-13 10:53 GMT-07:00 Matt Riedemann :
> I'd like to propose that we make Andrey Kurilin core on python-novaclient.
>
> He's been doing a lot of the maintenance the last several months and a lot
> of times is the first to jump on any major issue, does a lot of the
> microversion work, and is also working on cleaning up docs and helping me
> with planning releases.
>
> His work is here [1].
>
> Review stats for the last 4 months (although he's been involved in the
> project longer than that) [2].
>
> Unless there is disagreement I plan to make Andrey core by the end of the
> week.
>
> [1]
> https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient
> [2] http://stackalytics.com/report/contribution/python-novaclient/120
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-04-13 Thread Doug Hellmann
Excerpts from Julien Danjou's message of 2016-04-11 15:38:44 +0200:
> Hi,
> 
> There's a lot of assumptions in oslo.log about Nova, such as talking
> about "instance" and "context" in a lot of the code by default. There's
> even a dependency on oslo.context. >.<
> 
> That's an issue for projects that are not Nova, where we end up
> having configuration options talking about "instances" and with default
> values referring to that.
> I'm at least taking that as being a serious UX issue for telemetry
> projects.
> 
> I'd love to sanitize that library a bit. So, is this an option, or would
> I be better off starting something new?
> 

I refer you to this as-yet-unimplemented spec from kilo. :-)

http://specs.openstack.org/openstack/oslo-specs/specs/kilo/app-agnostic-logging-parameters.html

Ronald and I were going to spend some time this cycle working on it. We
would love to have your help.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-04-13 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-04-11 10:16:23 -0400:
> On 04/11/2016 10:08 AM, Ed Leafe wrote:
> > On 04/11/2016 08:38 AM, Julien Danjou wrote:
> > 
> >> There's a lot of assumptions in oslo.log about Nova, such as talking
> >> about "instance" and "context" in a lot of the code by default. There's
> >> even a dependency on oslo.context. >.<
> >>
> >> That's an issue for projects that are not Nova, where we end up
> >> having configuration options talking about "instances" and with default
> >> values referring to that.
> >> I'm at least taking that as being a serious UX issue for telemetry
> >> projects.
> >>
> >> I'd love to sanitize that library a bit. So, is this an option, or would
> >> I be better off starting something new?
> > 
> > The nova team spent a lot of time in Mitaka starting to clean up the
> > config options that were scattered all over the codebase, and improve
> > the help text for each of them so that you didn't need to grep the
> > source code to find out what they did.
> > 
> > I could see a similar effort for oslo.log (and maybe other oslo
> > projects), and I would be happy to help out.
> 
> This isn't so much about scattered options, oslo.log options are all in
> one place already, it's about the application specific ones that are
> embedded.
> 
> I agree that "instance" being embedded all the way back to oslo.log is
> weird. Ideally we'd have something like "resource" that if specified
> would be the primary resource the request was acting on. Or could even
> just build some custom loggers Nova side to inject the instance when we
> have it.
> 
> I'm not sure why oslo.context is an issue though. That's mostly about
> putting in the common information about the identity of the requester
> into the stream.

The context is also the place, frequently, where we know the id of the
resource on which action is being taken.
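
For illustration, a generic version of that could be as small as a
stdlib LoggerAdapter; the "resource" key and the format string below
are made up for the sketch and are not the oslo.log interface:

    import logging

    logging.basicConfig(
        format='%(asctime)s %(levelname)s [%(resource)s] %(message)s')


    class ResourceAdapter(logging.LoggerAdapter):
        def process(self, msg, kwargs):
            # Stamp every record with the adapter's primary resource.
            kwargs.setdefault('extra', {})['resource'] = self.extra['resource']
            return msg, kwargs


    log = ResourceAdapter(logging.getLogger(__name__),
                          {'resource': 'volume-6fa0de42'})
    log.warning('detach failed, retrying')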

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread Andrew Laski
+1

On Wed, Apr 13, 2016, at 01:53 PM, Matt Riedemann wrote:
> I'd like to propose that we make Andrey Kurilin core on
> python-novaclient.
> 
> He's been doing a lot of the maintenance the last several months and a 
> lot of times is the first to jump on any major issue, does a lot of the
> microversion work, and is also working on cleaning up docs and helping 
> me with planning releases.
> 
> His work is here [1].
> 
> Review stats for the last 4 months (although he's been involved in the 
> project longer than that) [2].
> 
> Unless there is disagreement I plan to make Andrey core by the end of 
> the week.
> 
> [1] 
> https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient
> [2] http://stackalytics.com/report/contribution/python-novaclient/120
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Georgy Okrokvertskhov
At Mirantis we are playing with different technologies to explore possible
ways of using containers. Recently we did some POC kind of work for
integration existing OpenStack components with containers technologies.
Here is a link  for a demo. In this POC Nova
API can schedule a container via Nova Mesos driver which is quite similar
to nova-docker concept. The most interesting part is that Neutron manages
Docker networks and Cinder creates a volume which can be attached to the
container. Mesos is not necessary here, so the same work can be done with
existing nova-docker driver.

We did not try to address all possible cases for containers. This POC
covers a very specific use case where someone has a limited number of
applications which can be executed in both VMs and containers. The
application is self-contained, so there is no need for the complex
orchestration which Kubernetes or Marathon provides.

-Gosha

On Wed, Apr 13, 2016 at 8:05 AM, Joshua Harlow 
wrote:

> Thierry Carrez wrote:
>
>> Fox, Kevin M wrote:
>>
>>> I think my head just exploded. :)
>>>
>>> That idea's similar to neutron sfc stuff, where you just say what
>>> needs to connect to what, and it figures out the plumbing.
>>>
>>> Ideally, it would map somehow to heat & docker COE & neutron sfc to
>>> produce a final set of deployment scripts and then just runs it
>>> through the meat grinder. :)
>>>
>>> It would be awesome to use. It may be very difficult to implement.
>>>
>>> If you ignore the non container use case, I think it might be fairly
>>> easily mappable to all three COE's though.
>>>
>>
>> This feels like Heat with a more readable descriptive language. I don't
>> really like this approach, because you end up with the lowest common
>> denominator between COE's functionality. They are all different. And
>> they are at the peak of the differentiation phase. The LCD is bound to
>> be pretty basic.
>>
>> This approach may be attractive for us as infrastructure providers, but
>> I also know this is not attractive to users who used Kubernetes before
>> and wants to continue to use Kubernetes (and don't really want to care
>> about whether OpenStack is running under the hood). They don't want to
>> learn another descriptor language or API, they just want to learn the
>> Kubernetes description model and API and take advantage of its unique
>> capabilities.
>>
>> In summary, this may be a good solution for *existing* OpenStack users
>> to start playing with containerized workloads. But it is not a good
>> solution to attract the container cool kids to using OpenStack as their
>> base infrastructure provider. For those we need to make it as
>> transparent and simple to use their usual tools to deploy on top of
>> OpenStack clouds. The more they can ignore we are there, the better.
>>
>>
> I get the feeling of 'the more they can ignore we are there, the better.'
> but it just feels like at that point we have accepted our own fate in this
> arena vs trying to actually have an impact in it... Do others feel that
> it is already at the point where we can no longer attract the cool kids?
> Is the tipping point already past?
>
> I'd like for openstack to still attract the cool kids, and not just
> attract the cool kids by accepting 'the more they can ignore we are there,
> the better' as our fate... I get that someone has to provide the equivalent
> of roads, plumbing and water but man, it feels like we can also provide
> other things still ;)
>
> -Josh
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Director of Performance Engineering,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday April 14th at 17:00 UTC

2016-04-13 Thread Ken'ichi Ohmichi
Hi,

Please reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, April 14th at 17:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_April_14th_2016_.281700_UTC.29

Anyone is welcome to add items to the agenda.

To help people figure out what time 17:00 UTC is in other timezones the
next meeting will be at:

12:00 EST
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread melanie witt

On Wed, 13 Apr 2016 12:53:12 -0500, Matt Riedemann wrote:

I'd like to propose that we make Andrey Kurilin core on python-novaclient.

He's been doing a lot of the maintenance the last several months and a
lot of times is the first to jump on any major issue, does a lot of the
microversion work, and is also working on cleaning up docs and helping
me with planning releases.

His work is here [1].

Review stats for the last 4 months (although he's been involved in the
project longer than that) [2].

Unless there is disagreement I plan to make Andrey core by the end of
the week.

[1]
https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient

[2] http://stackalytics.com/report/contribution/python-novaclient/120


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tacker] Invalid command sfc-create

2016-04-13 Thread Victor Mehmeri
Hi all,

I am trying to follow this walkthrough here: 
https://github.com/trozet/sfc-random/blob/master/tacker_sfc_walkthrough.txt

But when I get to this point: tacker sfc-create --name mychain --chain 
testVNF1, I get the error:

Invalid command u'sfc-create --name'

'tacker help' doesn't even list any command related to sfc.  My devstack 
local.conf file has this line:

enable_plugin tacker https://git.openstack.org/openstack/tacker stable/liberty

Is the reason that I don't have sfc-related commands that I am pointing to the 
liberty version? Should I point to master and rerun stack.sh?

Thanks in advance,

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-04-13 Thread Julien Danjou
On Wed, Apr 13 2016, Doug Hellmann wrote:

>> I'm not sure why oslo.context is an issue though. That's mostly about
>> putting in the common information about the identity of the requester
>> into the stream.
>
> The context is also the place, frequently, where we know the id of the
> resource on which action is being taken.

There's a bunch of projects that have no intention of using
oslo.context, so depending and referring to it by default is something
I'd love to fade away.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Invalid command sfc-create

2016-04-13 Thread Tim Rozet
Hi Victor,
You can use the local.conf that's in the sfc-random repo.  The sfc functionality 
is not in upstream Tacker yet.  It is here:
https://github.com/trozet/sfc-random/blob/master/local.conf#L2


Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Victor Mehmeri" 
To: openstack-dev@lists.openstack.org
Sent: Wednesday, April 13, 2016 4:38:10 PM
Subject: [openstack-dev] [Tacker] Invalid command sfc-create

Hi all,

I am trying to follow this walkthrough here:
https://github.com/trozet/sfc-random/blob/master/tacker_sfc_walkthrough.txt

But when I get to this point: tacker sfc-create --name mychain --chain
testVNF1, I get the error:

Invalid command u'sfc-create --name'

'tacker help' doesn't even list any command related to sfc. My devstack
local.conf file has this line:

enable_plugin tacker https://git.openstack.org/openstack/tacker stable/liberty

Is the reason that I don't have sfc-related commands that I am pointing to the
liberty version? Should I point to master and rerun stack.sh?

Thanks in advance,

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker] Invalid command sfc-create

2016-04-13 Thread Victor Mehmeri
Thanks, Tim!

Victor

-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: 13. april 2016 16:01
To: OpenStack Development Mailing List (not for usage questions); Victor Mehmeri
Subject: Re: [openstack-dev] [Tacker] Invalid command sfc-create

Hi Victor,
You can use the local.conf that's in the sfc-random repo.  The sfc functionality 
is not in upstream Tacker yet.  It is here:
https://github.com/trozet/sfc-random/blob/master/local.conf#L2


Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Victor Mehmeri" 
To: openstack-dev@lists.openstack.org
Sent: Wednesday, April 13, 2016 4:38:10 PM
Subject: [openstack-dev] [Tacker] Invalid command sfc-create

Hi all,

I am trying to follow this walkthrough here:
https://github.com/trozet/sfc-random/blob/master/tacker_sfc_walkthrough.txt

But when I get to this point: tacker sfc-create --name mychain --chain
testVNF1, I get the error:

Invalid command u'sfc-create --name'

'tacker help' doesn't even list any command related to sfc. My devstack
local.conf file has this line:

enable_plugin tacker https://git.openstack.org/openstack/tacker stable/liberty

Is the reason that I don't have sfc-related commands that I am pointing to the
liberty version? Should I point to master and rerun stack.sh?

Thanks in advance,

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [all] Tagging sessions for other tracks

2016-04-13 Thread Nikhil Komawar
Hello everyone,

The summit schedule for Glance is up [1]. I expect no-to-minimal changes
at this point (besides descriptions), so I was trying to gauge some
interest from other teams regarding tagging a subset of these sessions
with the respective tracks. I've tagged one FB session for
app-catalog+murano and one W session for ops feedback. Maybe there are
more interested parties?

Please let me know soon.

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Glance%3A

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Andrey Kurilin for python-novaclient core

2016-04-13 Thread Sylvain Bauza



Le 13/04/2016 19:53, Matt Riedemann a écrit :
I'd like to propose that we make Andrey Kurilin core on 
python-novaclient.


He's been doing a lot of the maintenance the last several months and a 
lot of times is the first to jump on any major issue, does a lot of the
microversion work, and is also working on cleaning up docs and helping 
me with planning releases.


His work is here [1].

Review stats for the last 4 months (although he's been involved in the 
project longer than that) [2].


Unless there is disagreement I plan to make Andrey core by the end of 
the week.


[1] 
https://review.openstack.org/#/q/owner:akurilin%2540mirantis.com+project:openstack/python-novaclient

[2] http://stackalytics.com/report/contribution/python-novaclient/120



+1.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Newton Midcycle Planning

2016-04-13 Thread Sean McGinnis
On Tue, Apr 12, 2016 at 09:05:18AM -0500, Sean McGinnis wrote:

All folks interested in attending the Cinder Newton Midcycle - please
fill out this survey so we can collect data to help finalize a date and
location:

https://www.surveymonkey.com/r/L2CY3RL

Thanks!

Sean

> Hey Cinder team (and those interested),
> 
> We've had a few informal conversations on the channel and in meetings,
> but wanted to capture some things here and spread awareness.
> 
> I think it would be good to start planning for our Newton midcycle.
> These have been incredibly productive in the past (at least in my
> opinion) so I'd like to get it on the schedule so folks can start
> planning for it.
> 
> For Mitaka we held our midcycle in the R-10 week. That seemed to work
> out pretty well, but I also think it might be useful to hold it a little
> earlier in the cycle to keep some momentum going and make sure things
> stay pretty focused for the rest of the cycle.
> 
> For reference, here is the current release schedule for Newton:
> 
> http://releases.openstack.org/newton/schedule.html
> 
> R-10 puts us in the last week of July.
> 
> I would have a conflict R-16, R-15. We probably want to avoid US
> Independence Day R-13, and milestone weeks R-18 and R-12.
> 
> So potential weeks look like:
> 
> * R-17
> * R-14
> * R-11
> * R-10
> * R-9
> 
> Nova is in the process of figuring out their date. If we have that, it
> would be good to try to avoid an overlap there. Our linked midcycle
> session worked out well, but probably better if they don't conflict.
> 
> We also need to work out locations. Anyone able and willing to host,
> just let me know. We need a facility with wifi, able to hold ~30-40
> people, wifi, close to an airport. And wifi.
> 
> At some point I still think it would be nice for our international folks
> to be able to do a non-US midcycle, but I'm fine if we end up back in Ft
> Collins or somewhere similar.
> 
> Thanks!
> 
> Sean (smcginnis)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Dolph Mathews
On Wed, Apr 13, 2016 at 2:37 PM, Emilien Macchi  wrote:

> On Wed, Apr 13, 2016 at 12:13 PM, Matthew Treinish 
> wrote:
> > On Wed, Apr 13, 2016 at 10:59:10AM -0400, Emilien Macchi wrote:
> >> Hi,
> >>
> >> Current OpenStack Infra Periodic jobs do not send e-mails (only
> >> periodic-stable do), so I propose to create periodic-ci-reports
> >> mailing list [1] and to use it when our periodic jobs fail [2].
> >> If accepted, people who care about periodic jobs would like to
> >> subscribe to this new ML so they can read quick feedback from
> >> failures, thanks to e-mail filters.
> >
> > So a big motivation behind openstack-health was to make doing this not
> > necessary. In practice the ML posts never get any real attention from
> people
> > and things just sit. [3] So, instead of trying to create another ML here
> > I think it would be better to figure out why openstack-health isn't
> working
> > for doing this and figure out how to improve it.
>
> I like openstack-health, I use it mostly every day.
> Though I miss notifications, like we have with emails.
> Something we could investigate is RSS support in openstack-health.
>

I guess everyone is different. Outside of automated systems, I don't
interface with RSS/atom anymore myself (~since Google Reader was shut down).

I've investigated every mailing-list based notification I've ever received,
however I don't feel compelled to respond to the mailing list thread (if
that is a metric anyone is looking at here).

OpenStack Health is cool, but I certainly won't check it with any
regularity.


> > -Matt Treinish
> >
> >>
> >> The use-case is described in [2], please use Gerrit to give feedback.
> >>
> >> Thanks,
> >>
> >> [1] https://review.openstack.org/305326
> >> [2] https://review.openstack.org/305278
> > [3]
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086706.html
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Community App Catalog IRC meeting Thursday April 14th

2016-04-13 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for April 14th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to it if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

One topic that might be of interest if you are not a regular attendee
will be our summit plans.  Since the Community App Catalog (which is
not Murano - different things!) crosses over into many different
spaces in OpenStack we would love to get more conversations going with
different projects and working groups.  ESPECIALLY around the subject
of developing applications for OpenStack.  Join us if you can!

Looking forward to seeing everyone there tomorrow!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] service show omits the network id

2016-04-13 Thread Anil Rao
Hi Simhon,

We are in the process of removing the network-id argument from the 
tap-service-create API. There is a patch that has been submitted with this 
change and it is currently under review. We expect it to be merged very soon.

Thanks,
Anil

From: Simhon Doctori שמחון דוקטורי [mailto:simh...@gmail.com]
Sent: Wednesday, April 13, 2016 4:46 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron][taas] service show omits the network id

Hi,
Although the network id is an essential argument when creating the service, it 
is not shown when doing a show on the service, either using the cli or the 
rest-api.

{
  "tap_services": [
{
  "tenant_id": "619ce6d9192c494fbf3dd7947ef78f9f",
  "port_id": "2250affe-6a38-4678-b4ab-969c36cc6f12",
  "description": "",
  "name": "tapServicePort",
  "id": "5afd1d73-0d8c-4931-bc6c-c8388ba6508f"
}
  ]
}

neutron) tap-service-show 5afd1d73-0d8c-4931-bc6c-c8388ba6508f
+-+--+
| Field   | Value|
+-+--+
| description |  |
| id  | 5afd1d73-0d8c-4931-bc6c-c8388ba6508f |
| name| tapServicePort   |
| port_id | 2250affe-6a38-4678-b4ab-969c36cc6f12 |
| tenant_id   | 619ce6d9192c494fbf3dd7947ef78f9f |
+-+--+

Is this on purpose or a bug?
Simhon Doctori
imVision Technologies.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Ian Wienand

On 04/14/2016 03:22 AM, Jeremy Stanley wrote:

Mentioned in IRC as well, but would an RSS/ATOM feed be a good
compromise between active notification and focus on the dashboard as
an entry point to researching job failures?


For myself, simply ordering by date on the log page as per [1] would
make it one step easier to write a local cron job to pick up the
latest.

-i

[1] https://review.openstack.org/#/c/301989/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][monasca][cloudkitty][neutron][stacktach] removing python 2.6 classifiers from package metadata

2016-04-13 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-02-10 09:59:54 -0500:
> We stopped running tests under python 2.6 a while back, and I submitted
> a bunch of patches to projects that still had the python package
> classifier indicating support for python 2.6. Most of those merged, but
> quite a few are still open [1]. Please take a look at the list and if
> you find any for your project merge them before the next milestone tag.
> 
> Thanks,
> Doug
> 
> [1] https://review.openstack.org/#/q/status:open++topic:remove-py26-classifier

We still have a few projects that are claiming Python 2.6 support
incorrectly. Please merge these patches before doing any more releases:

https://review.openstack.org/#/q/status:open++topic:remove-py26-classifier

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Matthew Treinish
On Wed, Apr 13, 2016 at 05:12:08PM -0500, Dolph Mathews wrote:
> On Wed, Apr 13, 2016 at 2:37 PM, Emilien Macchi  wrote:
> 
> > On Wed, Apr 13, 2016 at 12:13 PM, Matthew Treinish 
> > wrote:
> > > On Wed, Apr 13, 2016 at 10:59:10AM -0400, Emilien Macchi wrote:
> > >> Hi,
> > >>
> > >> Current OpenStack Infra Periodic jobs do not send e-mails (only
> > >> periodic-stable do), so I propose to create periodic-ci-reports
> > >> mailing list [1] and to use it when our periodic jobs fail [2].
> > >> If accepted, people who care about periodic jobs would like to
> > >> subscribe to this new ML so they can read quick feedback from
> > >> failures, thanks to e-mail filters.
> > >
> > > So a big motivation behind openstack-health was to make doing this not
> > > necessary. In practice the ML posts never get any real attention from
> > people
> > > and things just sit. [3] So, instead of trying to create another ML here
> > > I think it would be better to figure out why openstack-health isn't
> > working
> > > for doing this and figure out how to improve it.
> >
> > I like openstack-health, I use it mostly every day.
> > Though I miss notifications, like we have with emails.
> > Something we could investigate is RSS support in openstack-health.
> >
> 
> I guess everyone is different. Outside of automated systems, I don't
> interface with RSS/atom anymore myself (~since Google Reader was shut down).
> 
> I've investigated every mailing-list based notification I've ever received,
> however I don't feel compelled to respond to the mailing list thread (if
> that is a metric anyone is looking at here).

TBH, I don't think that's a metric that's relevant here. The argument that was
being made earlier was that people want a ML to report results to because it
acts as a notification system for periodic gate failures. The contention was
that it enables people to respond quickly when things start to fail. But, what I
was saying is that past experience has shown that in practice this is never the
case. No one is actually on call for dealing with failures when they come in. So
what ends up really happening is that failures just sit on the list and repeat
every day. The ML is also a pretty bad interface for dealing with this sort of
thing over time. This problem was a big part of why openstack-health was
developed.

The RSS/atom idea is a compromise to provide an alternative for everyone who
still says they want a notification on failures but would be more tightly
integrated with the rest of the systems we're working on here.

FWIW, I started doodling on doing this here:

https://review.openstack.org/305496

it's still pretty far from complete, but it's a starting point.
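
To give a sense of scale: generating the feed itself is the easy part. With
the feedgen library it is roughly the following (illustrative only, not what
the patch above does, and the urls/titles are made up):

    from feedgen.feed import FeedGenerator

    fg = FeedGenerator()
    fg.id('http://status.openstack.org/openstack-health/#/periodic')
    fg.title('Failing periodic jobs')
    fg.link(href='http://status.openstack.org/openstack-health/',
            rel='alternate')
    fg.description('One entry per failing periodic job run')

    # in reality each entry would be built from the subunit2sql data
    fe = fg.add_entry()
    fe.id('http://logs.openstack.org/periodic/some-job/deadbeef/')
    fe.title('periodic-tempest-dsvm-full FAILED')
    fe.link(href='http://logs.openstack.org/periodic/some-job/deadbeef/')

    print(fg.atom_str(pretty=True).decode('utf-8'))

The hard part is deciding which job runs feed it and how often it gets
regenerated, which is what still needs figuring out.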


> 
> OpenStack Health is cool, but I certainly won't check it with any
> regularity.

This is actually the behavior I think we want to address. One of the goals for
the project is to make it the go to spot for investigating anything related to
the gate. Identifying the gaps that are preventing it from being a useful tool
for you and other people is important for this goal. So I'm gonna put you on the
spot, what do you think is missing from making this really useful for you today?

-Matt Treinish

> > >>
> > >> The use-case is described in [2], please use Gerrit to give feedback.
> > >>
> > >> Thanks,
> > >>
> > >> [1] https://review.openstack.org/305326
> > >> [2] https://review.openstack.org/305278
> > > [3]
> > http://lists.openstack.org/pipermail/openstack-dev/2016-February/086706.html



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-13 Thread Edward Leafe
On Apr 12, 2016, at 6:07 PM, Bhandaru, Malini K  
wrote:

> Intel would be pleased to host the Nova midcycle meetup either at San 
> Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11 (July 
> 18-22) as preferred by the Nova community.

In July? Oregon > San Antonio


-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-13 Thread Dan Smith
> In July? Oregon > San Antonio

In any month of any year, Oregon > San Antonio :P

+1 for Hillsboro again from me, but I'm just a tad biased :)

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] meeting topics for 4/14/2016 networking-sfc project IRC meeting

2016-04-13 Thread Cathy Zhang
Hi everyone,
 Here are some topics I have in mind for tomorrow's meeting discussion. Feel 
free to add more.
 Meeting Info: Every Thursday 1700 UTC on #openstack-meeting-4

1. Meeting time change from Thursday 1700 UTC to Wednesday 1700 UTC?

2. Launch pad bug scrub

3. Launch pad Blueprint update

4. Remove Source port "mandatory" requirement in the FC API

5. Consistent Repository rule for networking-sfc related drivers: 
Northbound Tacker driver and Southbound ONOS driver, ODL driver, OVD driver

6. new field "priority"  in flow-classifier

7. Move the generation of the data path chain path ID from the Driver 
component to the networking-sfc plugin component

8. Define VNF type (l2 or l3) param in service-function-param

9. Networking-sfc SFC driver for OVN

10.   Networking-sfc SFC driver for ODL

11.   Networking-sfc integration with ONOS completion status update

12.   Tacker Driver for networking-sfc

13.   Dynamic service chain update without service interruption


Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tricircle] Error runnig py27

2016-04-13 Thread joehuang
Hi, Khayam,

@mock.patch('self.app.post_json')

No “self.” needed.
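
To expand on that: mock.patch() takes an import path string and tries to
import it, which is why you get "ImportError: No module named self". To patch
an attribute on an object you already hold, mock.patch.object is the usual
tool. A minimal, self-contained illustration (FakeApp and the other names are
stand-ins, not the Tricircle code):

    import unittest

    import mock  # the external mock package used in the py27 tox env


    class FakeApp(object):
        """Stand-in for the app object in the original test."""
        def post_json(self, url, body, expect_errors=False):
            return 'real response'


    class PodTest(unittest.TestCase):
        def setUp(self):
            self.app = FakeApp()

        def test_post_exp(self):
            # patch.object takes the live object, so no import lookup
            # happens on a 'self.' path.
            with mock.patch.object(self.app, 'post_json') as mock_post:
                # stand-in for db_exc.DBDuplicateEntry
                mock_post.side_effect = ValueError
                self.assertRaises(ValueError, self.app.post_json,
                                  '/v1.0/pods', dict(pod=None))


    if __name__ == '__main__':
        unittest.main()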

Best Regards
Chaoyi Huang ( Joe Huang )

From: Khayam Gondal [mailto:khayam.gon...@gmail.com]
Sent: Wednesday, April 13, 2016 2:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: joehuang; Zhiyuan Cai
Subject: [Tricircle] Error runnig py27

Hi I am writing a test for exception . Following is my testing function.

@mock.patch('self.app.post_json')
def test_post_exp(self, mock_get, mock_http_error_handler):
    mock_response = mock.Mock()
    mock_response.raise_for_status.side_effect = db_exc.DBDuplicateEntry
    mock_get.return_value = mock_response
    mock_http_error_handler.side_effect = db_exc.DBDuplicateEntry
    with self.assertRaises(db_exc.DBDuplicateEntry):
        self.app.post_json(
            '/v1.0/pods',
            dict(pod=None),
            expect_errors=True)

But when I run tox -epy27 it shows:

  File "/home/khayam/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1206, in _importer
    thing = __import__(import_path)
ImportError: No module named self

Can someone guide me on what's wrong here? I have already installed the latest 
versions of mock and python-dev.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-13 Thread Cathy Zhang
Hi everyone,
Per Armando's request, Louis and I are looking into the following features for 
Newton cycle.

* Neutron Common FC used for SFC, QoS, Tap as a service etc.,
* OVS Agent extension
Some of you might know that we already developed a FC in the networking-sfc 
project, and QoS also has a FC. It makes sense to have one common FC in Neutron 
that could be shared by the SFC, QoS, Tap-as-a-service, etc. features.
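
For reference, the FC in networking-sfc today is roughly a resource like the
following (a subset of its fields, with made-up values; illustrative, not a
spec):

    # a flow classifier matching HTTP traffic between two subnets
    flow_classifier = {
        'name': 'web-traffic',
        'ethertype': 'IPv4',
        'protocol': 'tcp',
        'source_ip_prefix': '10.0.0.0/24',
        'destination_ip_prefix': '10.0.1.0/24',
        'destination_port_range_min': 80,
        'destination_port_range_max': 80,
        'logical_source_port': '<uuid of the source neutron port>',
        'l7_parameters': {},
    }

A common Neutron FC would factor a resource like this out of networking-sfc so
that SFC, QoS, and Tap-as-a-service can all consume the same definition.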
Different features may extend the OVS agent and add different new OVS flow 
tables to support their new functionality. A mechanism is needed to ensure 
consistent OVS flow table modification when multiple features co-exist. AFAIK, 
there is some preliminary work on this, but it is not a complete solution yet.
We would like to start this effort by collecting requirements and then posting 
specifications for review. If any of you would like to join this effort, please 
chime in. We can set up a meet-up session at the Summit to discuss this 
face-to-face.
Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][keystone] keystoneauth1 2.6.0 release (newton)

2016-04-13 Thread no-reply
We are psyched to announce the release of:

keystoneauth1 2.6.0: Authentication Library for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystoneauth

With package available at:

https://pypi.python.org/pypi/keystoneauth1

Please report issues through launchpad:

http://bugs.launchpad.net/keystoneauth

For more details, please see below.

Changes in keystoneauth1 2.5.0..2.6.0
-

1aa6667 Allow to send different recorders to betamax
038b614 Fix doc build if git is absent
5b5a4c0 Updated from global requirements
8b3ed3e Updated from global requirements

Diffstat (except docs and test files)
-

keystoneauth1/fixture/keystoneauth_betamax.py | 4 ++--
setup.cfg | 2 +-
test-requirements.txt | 6 +++---
4 files changed, 13 insertions(+), 8 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index ad06c08..6fa68e9 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +10 @@ discover # BSD
-fixtures>=1.3.1 # Apache-2.0/BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
@@ -12 +12 @@ mock>=1.2 # BSD
-oslo.config>=3.7.0 # Apache-2.0
+oslo.config>=3.9.0 # Apache-2.0
@@ -19 +19 @@ pycrypto>=2.6 # Public Domain
-reno>=0.1.1 # Apache2
+reno>=1.6.2 # Apache2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Peng Zhao
Well, it is a myth that Docker is linux-container specific. It was born with
cgroups/namespaces, but the image is an app-centric way to package, nothing
particular to linux containers.
For openstack, given its virtualization roots, it is an easy win in places
that require strong isolation and multi-tenancy. And that creates new patterns
for consuming technologies.
Peng
- Hyper_ Secure Container 
Cloud


On Wed, Apr 13, 2016 9:49 PM, Fox, Kevin M kevin@pnnl.gov wrote:
It partially depends on if you're following the lightweight container
methodology. Can the nova api support unix sockets or bind mounts between
containers in the same pod? Would it be reasonable to add that functionality?
It's pretty different to nova's usual use cases.

Thanks,
Kevin




From: Peng Zhao
Sent: Tuesday, April 12, 2016 11:33:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform –
Containers/Bare Metal? (Re: Board of Directors Meeting)

Agreed.
IMO, OpenStack is an open framework for different technologies and use cases.
Different architectures for different things make sense. Some may say that
using nova to launch docker images with a hypervisor is weird, but it can be
seen as “Immutable IaaS”.

- Hyper_ Secure Container 
Cloud


On Wed, Apr 13, 2016 1:43 PM, Joshua Harlow harlo...@fastmail.com wrote:
Sure, so that helps, except it still has the issue of bumping up against the
mismatch of the API(s) of nova. This is why I'd rather have a template kind of
format (as say the input API) that allows for (optionally) expressing such
container specific capabilities/constraints. Then some project that can
understand that template/format can if needed talk to a COE (or similar
project) to translate that template 'segment' into a realized entity using the
capabilities/constraints that the template specified. Overall it starts to
feel like maybe it is time to change the upper and lower systems and shake
things up a little ;)

Peng Zhao wrote:
> I'd take the idea further. Imagine a typical Heat template, what you
> need to do is:
>
> - replace the VM id with Docker image id
> - nothing else
> - run the script with a normal heat engine
> - the entire stack gets deployed in seconds
>
> Done!
>
> Well, that sounds like nova-docker. What about cinder and neutron? They
> don't work well with Linux containers! The answer is Hypernova
> (https://github.com/hyperhq/hypernova) or Intel ClearContainer, seamless
> integration with most OpenStack components.
>
> Summary: minimal changes to interface and upper systems, much smaller
> image and much better developer workflow.
>
> Peng
>
> - Hyper_ Secure Container Cloud
>
> On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow harlo...@fastmail.com
> wrote:
>
> Fox, Kevin M wrote:
> > I think part of the problem is containers are mostly orthogonal to
> > vms/bare metal. Containers are a package for a single service.
> > Multiple can run on a single vm/bare metal host. Orchestration like
> > Kubernetes comes in to turn a pool of vm's/bare metal into a system
> > that can easily run multiple containers.
>
> Is the orthogonal part a problem because we have made it so or is it
> just how it really is? Brainstorming starts here:
>
> Imagine a descriptor language like (which I stole from
> https://review.openstack.org/#/c/210549 and modified):
>
> ---
> components:
>   - label: frontend
>     count: 5
>     image: ubuntu_vanilla
>     requirements: high memory, low disk
>     stateless: true
>   - label: database
>     count: 3
>     image: ubuntu_vanilla
>     requirements: high memory, high disk
>     stateless: false
>   - label: memcache
>     count: 3
>     image: debian-squeeze
>     requirements: high memory, no disk
>     stateless: true
>   - label: zookeeper
>     count: 3
>     image: debian-squeeze
>     requirements: high memory, medium disk
>     stateless: false
> backend: VM
> networks:
>   - label: frontend_net
>     flavor: “public network”
>     associated_with:
>       - frontend
>   - label: database_net
>     flavor: high bandwidth
>     associated_with:
>       - database
>   - label: backend_net
>     flavor: high bandwidth and low latency
>     associated_with:
>       - zookeeper
>       - memcache
> constraints:
>   - ref: container_only
>     params:
>       - frontend
>   - ref: no_colocated
>     params:
>       - database
>       - frontend
>   - ref: spread
>     params:
>       - database
>   - ref: no_colocated
>     params:
>       - database
>       - frontend
>   - ref: spread
>     params:
>       - memcache
>   - ref: spread
>     params:
>       - zookeeper
>   - ref: isolated_network
>     params:
>       - frontend_net
>       - database_net
>       - backend_net
> ...
>
> Now nothing in the above is about container, or baremetal or vms,
> (although an 'advanced' constraint can be that a component must be on a
>  instead it's just about the constraints that a user has on their
> deployment and the components associated with it. It can be left up
> to some cons
