Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Ihar Hrachyshka

Doug Hellmann  wrote:


Excerpts from Alan Pevec's message of 2016-03-22 20:19:44 +0100:
The release team discussed this at the summit and agreed that it didn't  
really matter. The only folks seeing the auto-generated versions are  
those doing CD from git, and they should not be mixing different  
branches of a project in a given environment. So I don't think it is  
strictly necessary to raise the major version, or give pbr the hint to  
do so.


ok, I'll send confused RDO trunk users here :)
That means that until the first Newton milestone tag is pushed, master will
have a misleading version. The Newton schedule is not defined yet, but the 1st
milestone is normally 1 month after the Summit, and 2 months from now is a
rather large window.

Cheers,
Alan


Are you packaging unreleased things in RDO? Because those are the only
things that will have similar version numbers. We ensure that whatever
is actually tagged has good, non-overlapping versions.
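For CD consumers, the auto-generated version pbr produces between tags can be approximated from `git describe` output. The helper below is a deliberately simplified, hypothetical model of that scheme (pbr's real algorithm also honors pre-release markers and Sem-Ver commit hints), not pbr's actual code:

```python
def dev_version(describe: str) -> str:
    """Approximate a pbr-style dev version from `git describe --tags` output.

    Simplified model: when not exactly on a tag, bump the patch level of
    the last tag and append .devN, where N is the commit count since it.
    """
    if "-" not in describe:
        # Exactly on a tag: the version is the tag itself.
        return describe
    tag, commits, _sha = describe.rsplit("-", 2)
    major, minor, patch = (int(p) for p in tag.split("."))
    return "%d.%d.%d.dev%s" % (major, minor, patch + 1, commits)

dev_version("2.0.0-14-gabc123")  # -> "2.0.1.dev14"
```

This is why untagged master commits after an RC can carry versions that look "behind" the branch they sit on: the dev version is derived from the most recent reachable tag, not from the branch name.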


RDO comes in two flavours. One is ‘classic’ RDO that builds from tarballs.

But there’s also another thing in RDO called Delorean which is RDO packages  
built from upstream git heads (both master and stable/*).


More info:  
http://blogs.rdoproject.org/7834/delorean-openstack-packages-from-the-future


It allows installing the latest and greatest from upstream heads. Two  
options are available: running from ‘current’, which is the unvalidated latest  
heads, or relying on ‘current-passed-ci’, which points to the latest build that  
was validated by CI.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] HowTo: Compose a local bundle file

2016-03-24 Thread Nikolay Starodubtsev
Hi wangzhh,
I've just sent a draft of the documentation about bundles to review. It can
be found here:
https://review.openstack.org/#/c/296929/
Please contact me if you have any questions.
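While the documentation is under review, here is a rough sketch of composing a bundle file locally. A bundle is a small JSON document listing package names; the field names and package names below are inferred from the public app-servers bundle and are assumptions, so verify them against the doc under review:

```python
import json

# Hypothetical package names -- substitute the packages you actually need.
bundle = {
    "Packages": [
        {"Name": "io.murano.apps.apache.Tomcat"},
        {"Name": "io.murano.apps.apache.ApacheHttpServer"},
    ]
}

# Write the bundle next to the package zip files it references.
with open("my-apps.bundle", "w") as f:
    json.dump(bundle, f, indent=2)

# `murano bundle-import ./my-apps.bundle` should then look for the listed
# packages in the same folder before falling back to apps.openstack.org.
```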





Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2016-03-24 2:24 GMT+03:00 Serg Melikyan :

> Hi wangzhh,
>
> You can use python-muranoclient in order to download bundle from
> apps.openstack.org and then use it somewhere else for the import:
>
> murano bundle-save app-servers
>
> you can find more about this option in corresponding spec [0].
>
> Generally a local bundle is no different from a remote one; you can
> take a look at the same bundle's [1] internals. If you download
> this file and then try to execute:
>
> murano bundle-import ./app-servers.bundle
>
> murano will try to find all mentioned packages in the local folder
> before going to apps.openstack.org.
>
> References:
> [0]
> http://specs.openstack.org/openstack/murano-specs/specs/liberty/bundle-save.html
> [1] http://storage.apps.openstack.org/bundles/app-servers.bundle
>
> On Wed, Mar 23, 2016 at 1:48 AM, 王正浩  wrote:
> >
> > Hi Serg Melikyan!
> >   I'm a programmer from China, and I have a question about the Application
> Servers Bundle (bundle)
> https://apps.openstack.org/#tab=murano-apps&asset=Application%20Servers%20Bundle
> >   I want to import a bundle from a local bundle file. Could you tell
> me how to create a bundle file? Is there any doc explaining it?
> >   Thanks!
> >
> >
> >
> >
> > --
> > --
> > Best Regards,
> > wangzhh
>
>
>
>
> --
> Serg Melikyan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-24 Thread Sergey Kraynev
Rabi,

Good point. I suppose the root cause of it is a gap in our documentation.
Unfortunately I cannot find any clear description of the
differences, or of how it should work (especially with examples), in our
documentation. [1]

Maybe we need to improve it by adding more examples?

[1] 
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr
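To make the documentation gap concrete, here is a toy model (not Heat's actual code) of how a ResourceGroup resolves attribute paths in template versions 2014-10-16 and later: the remaining path components are applied to each member's value, which is why an index after a string attribute picks a character rather than a list element:

```python
def rg_get_attr(member_outputs, attr, *path):
    """Toy model of ResourceGroup attribute resolution (2014-10-16+).

    Collect `attr` from every group member, then apply any remaining
    path components to each collected value in turn.
    """
    values = [m[attr] for m in member_outputs]
    for key in path:
        values = [v[key] for v in values]
    return values

members = [{"rg_a_public_ip": "10.0.0.4"}, {"rg_a_public_ip": "10.0.0.5"}]

rg_get_attr(members, "rg_a_public_ip")     # ["10.0.0.4", "10.0.0.5"]
rg_get_attr(members, "rg_a_public_ip", 0)  # ["1", "1"] -- first char of each!
```

Under this model, selecting one member's whole attribute needs the `resource.N` form instead of a trailing index.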

On 24 March 2016 at 08:39, Rabi Mishra  wrote:
>> On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
>> >Hello,
>> >It looks similar to the issue which was discussed here [1]
>> >I suppose that the root cause is incorrect use of get_attr for your
>> >case.
>> >Probably you got a "list" instead of a "string".
>> >F.e. if I do something similar:
>> >outputs:
>> >  rg_1:
>> >value: {get_attr: [rg_a, rg_a_public_ip]}
>> >  rg_2:
>> >value: {get_attr: [rg_a, rg_a_public_ip, 0]}
>> >
>> >  rg_3:
>> >value: {get_attr: [rg_a]}
>> >  rg_4:
>> >value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
>> >where rg_a is also resource group which uses custom template as
>> >resource.
>> >the custom template has output value rg_a_public_ip.
>> >The output for it looks like [2]
>> >So as you can see, in the first case (as it is used in your example),
>> >get_attr returns a list with one element.
>> >rg_2 is also wrong, because it takes the first character from the string
>> >with the IP address.
>>
>> Shouldn't rg_2 and rg_4 be equivalent?
>
> They are the same for template version 2013-05-23. However, they behave
> differently from the next version (2014-10-16) onward and return a list of
> characters. I think this is due to the fact that the `get_attr` function
> mapping changed in 2014-10-16.
>
>
> 2013-05-23 -  
> https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L70
> 2014-10-16 -  
> https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L291
>
> This makes me wonder why a template author would do something like
> {get_attr: [rg_a, rg_a_public_ip, 0]} when they can easily do
> {get_attr: [rg_a, resource.0.rg_a_public_ip]} or {get_attr: [rg_a,
> resource.0, rg_a_public_ip]}
> for specific resource attributes.
>
> I understand that {get_attr: [rg_a, rg_a_public_ip]} can be useful when we
> just want to use the list of attributes.
>
>
>>
>> {get_attr: [rg_a, rg_a_public_ip]} should return a list of all
>> rg_a_public_ip attributes (one list item for each resource in the group),
>> then the 0 should select the first item from that list?
>>
>> If it's returning the first character of the first element, that sounds
>> like a bug to me?
>>
>> Steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-24 Thread Yolanda Robla Mota

Hi
I have also been talking with diskimage-builder cores and they suggested 
we better start with the element on magnum. So I moved the change here:

https://review.openstack.org/#/c/296719/

About diskimage-size, I'll dig a bit more on it. It can be for two causes:
1. To generate the fedora-atomic image, I first install a fedora-minimal 
image, and install ostree packages and the dependencies there. Based on 
that, i execute the ostree commands, and then I update the grub config 
to boot with the new image. But the old contents are still there, so 
this is potentially increasing the size. I will take a look at possible 
cleanup solutions.
2. The way diskimage-builder compresses the image. It can be different 
from the one that Fedora is using, and less effective. I saw discussions 
in diskimage-builder about that, and there have been recent improvements 
as https://review.openstack.org/290944 . Is worth investigating a bit 
more as well.
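On point 2, one quick experiment is recompressing the DIB output with `qemu-img convert -c`, which rewrites a qcow2 image with compressed data clusters. The tool name and flags are standard qemu-img, but whether this actually closes the 871MB-vs-486MB gap is an open question; the sketch below only builds and runs the command:

```python
import subprocess

def build_recompress_cmd(src, dst):
    """Build a qemu-img command that rewrites a qcow2 image with
    compression enabled for all data clusters (-c)."""
    return ["qemu-img", "convert", "-c", "-O", "qcow2", src, dst]

def recompress(src, dst):
    # Requires qemu-img installed; compare the sizes of src and dst after.
    subprocess.check_call(build_recompress_cmd(src, dst))

# Example (not run here):
# recompress("fedora-atomic.qcow2", "fedora-atomic-compressed.qcow2")
```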


Best
Yolanda

El 23/03/16 a las 18:10, Ton Ngo escribió:


Hi Yolanda,
Thank you for making a huge improvement from the manual process of 
building the Fedora Atomic image.
Although Atomic does publish a public OpenStack image that is being 
considered in this patch:

https://review.openstack.org/#/c/276232/
in the past we have run into many situations where we need an image 
with a specific version of certain software
for features or bug fixes (Kubernetes, Docker, Flannel, ...). So the 
automated and customizable build process will be very helpful.

With respect to where to land the patch, I think diskimage-builder is 
a reasonable target.
If it does not land there, Magnum does currently have 2 sets of 
diskimage-builder elements for Mesos image
and Ironic image, so it is also reasonable to submit the patch to 
Magnum. With the new push to reorganize
into drivers for COE and distro, the elements would be a natural fit 
for Fedora Atomic.


As for the periodic image build, it's a good idea to stay current with the 
distro, but we should avoid the situation
where something new in the image breaks a COE and we are stuck for 
a while until a fix is made. So instead of
an automated periodic build, we might want to stage the new image to 
make sure it's good before switching.


One question: I notice the image built by DIB is 871MB, similar to the 
manually built image, while the
public image from Atomic is 486MB. It might be worthwhile to 
understand the difference.


Ton Ngo,



From: Yolanda Robla Mota 
To: 
Date: 03/23/2016 04:12 AM
Subject: [openstack-dev] [magnum] Generate atomic images using 
diskimage-builder






Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html.
The image needs to be built manually, uploaded to fedorapeople, and then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow that magnum images are generated on infra, using
diskimage-builder element. This element also has the ability to consume
any tree we need, so images can be customized on demand. I generated one
image using this element, and uploaded to fedora people. The image has
passed tests, and has been validated by several people.

So I'm raising this topic to decide what the next steps should be. This
change to generate fedora-atomic images has not yet landed in
diskimage-builder, and we have two options here:
- add this element to the generic diskimage-builder elements, as I'm doing now
- keep this element in magnum itself. We could have a directory in the magnum
project, called "elements", and keep the fedora-atomic element there. This
would give us more control over the element's behaviour, and would allow
updating the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be to
periodically generate images using a magnum job, and upload these images
to OpenStack Infra mirrors. Currently the image is based on Fedora F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the tests
can be changed to consume these internal images. This way the tests
can be a bit faster (I know that the bottleneck is the functional
testing, but reducing the download time can help), and tests can
be more reliable, because we will be removing an external dependency.

So i'd like to get m

Re: [openstack-dev] [trove] OpenStack Trove meeting minutes (2016-03-23)

2016-03-24 Thread Luigi Toscano


- Original Message -
> On Wed, Mar 23, 2016 at 07:52:10PM +, Amrith Kumar wrote:
> > The meeting bot died during the meeting and therefore the logs on eavesdrop
> > are useless. So I've had to get "Old-Fashioned-Logs(tm)".
> > 
> > Action Items:
> > 
> >  #action [all] If you have a patch set that you intend to resume
> > work on, please put an update in it to that effect so we don't go abandon
> > it under you ...
> >  #action [all]  if any of the abandoned patches looks like
> > something you would like to pick up feel free
> >  #action cp16net reply to trove-dashboard ML question for RC2
> >  #action [all] please review changes [3], [4], and link [5] in
> > agenda and update the reviews
> > 
> > Agreed:
> > 
> >  #agreed flaper87 to WF+1 the patches in question [3] and [4]
> > 
> > Meeting agenda is at
> > https://wiki.openstack.org/wiki/Trove/MeetingAgendaHistory#Trove_Meeting.2C_March_23.2C_2016
> > 
> > Meeting minutes (complete transcript) is posted at
> > 
> > https://gist.github.com/amrith/5ce3e4a0311f2cc4044c
> 
> I'm still unsure of the value of adding these tests, and would love some
> pointers.
> 
> The current stable/liberty branch of trove fails
> gate-trove-scenario-functional-dsvm-mysql with a summary of "FAILED (SKIP=23,
> errors=13)"[1].
> 
> The 2 reviews in question fail with
> "FAILED (SKIP=5, errors=2)"[2] and
> "FAILED (SKIP=18, errors=1)"[3].
> 
> Granted these are heading in the right direction (in terms of fails).  There
> are
> no dependencies between the 2 reviews, so I'll assume that if they're both
> merged that number won't regress.
> 
> If you look at the logs all 3 runs failed "instance_resize_flavor"
> [4][5][6]
> 
> So even with the changes merged you still don't end up with a gate job (even
> on
> the experimental queue) that you can "just use".
> 
> Yes you have improved testing coverage, but is it meaningful?
> 
> I'm not blocking you from merging them I just don't understand the benefit.

The benefit is to allow people using Liberty to have some meaningful results 
using them.

Without the fixes, you can't even run the basic instance creation tests, and 
you have broken code shipped with the release.
With them, you can run at least some basic tests, like instance creation, for 
datastores not supported by the current set of integration tests.
Not all the tests run by the gate, but some, compared to the current, totally 
broken status.


-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-24 Thread Steven Hardy
On Thu, Mar 24, 2016 at 01:39:01AM -0400, Rabi Mishra wrote:
> > On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
> > >Hello,
> > >It looks similar to the issue which was discussed here [1]
> > >I suppose that the root cause is incorrect use of get_attr for your
> > >case.
> > >Probably you got a "list" instead of a "string".
> > >F.e. if I do something similar:
> > >outputs:
> > >  rg_1:
> > >    value: {get_attr: [rg_a, rg_a_public_ip]}
> > >  rg_2:
> > >    value: {get_attr: [rg_a, rg_a_public_ip, 0]}
> > >                  
> > >  rg_3:
> > >    value: {get_attr: [rg_a]}
> > >  rg_4:
> > >    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
> > >where rg_a is also resource group which uses custom template as
> > >resource.
> > >the custom template has output value rg_a_public_ip.
> > >The output for it looks like [2]
> > >So as you can see, in the first case (as it is used in your 
> > > example),
> > >get_attr returns a list with one element.
> > >rg_2 is also wrong, because it takes the first character from the string
> > >with the IP address.
> > 
> > Shouldn't rg_2 and rg_4 be equivalent?
> 
> They are the same for template version 2013-05-23. However, they behave
> differently from the next version (2014-10-16) onward and return a list of
> characters. I think this is due to the fact that the `get_attr` function
> mapping changed in 2014-10-16.

Ok, I guess it's way too late to fix it, but it still sounds like a
backwards incompatible regression to me.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-24 Thread Steven Hardy
On Wed, Mar 23, 2016 at 05:05:17PM -0400, Zane Bitter wrote:
> On 23/03/16 13:35, Steven Hardy wrote:
> >On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:
> >>Hello,
> >>It looks similar to the issue which was discussed here [1]
> >>I suppose that the root cause is incorrect use of get_attr for your 
> >> case.
> >>Probably you got a "list" instead of a "string".
> >>F.e. if I do something similar:
> >>outputs:
> >>  rg_1:
> >>value: {get_attr: [rg_a, rg_a_public_ip]}
> >>  rg_2:
> >>value: {get_attr: [rg_a, rg_a_public_ip, 0]}
> >>
> >>  rg_3:
> >>value: {get_attr: [rg_a]}
> >>  rg_4:
> >>value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
> 
> There's actually another option here too that I personally prefer:
> 
>   rg_5:
> value: {get_attr: [rg_a, resource.0, rg_a_public_ip]}
> 
> >>where rg_a is also resource group which uses custom template as 
> >> resource.
> >>the custom template has output value rg_a_public_ip.
> >>The output for it looks like [2]
> >>So as you can see, in the first case (as it is used in your example),
> >>get_attr returns a list with one element.
> >>rg_2 is also wrong, because it takes the first character from the string
> >>with the IP address.
> >
> >Shouldn't rg_2 and rg_4 be equivalent?
> 
> Nope, rg_2 returns the first character of each member's IP, i.e.:
> 
>   [ip[0], ip[0], ...]
> 
> If this makes no sense, imagine that rg_a_public_ip is actually a map rather
> than a string. If you want to pick one key out of the map on each member and
> return the list of all of them, then you just have to add the key as the
> next argument to get_attr. This makes get_attr on a resource group work
> somewhat differently to other resources, but it's the only sensible way to
> express this in a template:

Ah, yeah I was thinking of the returned list containing simple string IP's
vs a map, this makes more sense now - thanks!

It has to be said, this RG attributes interface is really confusing, even
for all of us, so we probably need to improve the docs for users.

I did create this example template a long time ago, but it seems based on
this discussion that it's in need of an update and more comments (probably
in addition to improving the resource docs as well).

https://github.com/openstack/heat-templates/blob/master/hot/resource_group/resource_group.yaml

Cheers,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Thierry Carrez

Anita Kuno wrote:

[...]
So some items that have been raised thus far:
- permissions: having a bot on gerrit with +2 +A is something we would
like to avoid
- "unsanctioned" bots (bots not in infra config files) in channels
shared by multiple teams (meeting channels, the -dev channel)
- forming a dependence on bots and expecting infra to maintain them ex
post facto (example: bot soren maintained until soren didn't)
- causing irritation for others due to the presence of an echoing bot
which eventually infra will be asked or expected to mediate
- duplication of features, both meetbot and purplebot log channels and
host the archives in different locations
- canonical bot doesn't get maintained


So it feels like people write their own bot rather than contribute to 
the already-existing infrastructure bots (statusbot and meetbot) -- is 
there a reason for that, beyond avoiding the infra contribution process?


I was faced with such a decision when I coded up the #success feature, 
but I ended up just adding the feature to the existing statusbot, rather 
than creating my own successbot. Are there technical limitations in the 
existing bots that prevent people from adding the features they want there?


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Cinder-Nova API meeting

2016-03-24 Thread Ildikó Váncsa
Hi All,

As was discussed several times on this mailing list, there is room for 
improvement regarding the Cinder-Nova interaction. To fix these issues we 
would like to create a cross-project spec to capture the problems and ways to 
solve them. The current activity is captured on this etherpad: 
https://etherpad.openstack.org/p/cinder-nova-api-changes

Before writing up several specs, we will have a meeting next Wednesday to 
synchronize with the two teams, and to discuss the way forward with everyone 
who's interested in this work.

The meeting will be held in the #openstack-meeting-cp channel on March 30, at 
21:00 UTC.

Please let me know if you have any questions.

See you next week!
Best Regards,
/Ildikó

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] About snapshot Rollback?

2016-03-24 Thread Chenzongliang
Hi all:
We are considering adding a function, rollback_snapshot, for use alongside 
backups. In the end user's scenario, if a VM fails, we hope that we can use a 
snapshot to recover the volume's data, because it can quickly recover our VM. 
If we use the remote (backup) data for recovery instead, we will spend more 
time.
But I'm not sure: if the data is recovered on the backend, does the host need 
to rescan the volumes? Also, if a volume has been extended, can it still be 
rolled back?

   I want to know whether this topic has been discussed before, or whether 
you have other recommendations for us?

   Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [vote] Managing bug backports to Mitaka branch

2016-03-24 Thread Paul Bourke

Hey Steve,

Thanks for sending this around - I was occupied during the meeting and 
wanted to be sure what I was asked to vote on! It seems like it's not a 
big deal after all; also, considering it's standard practice, it's a no 
brainer. +1


Cheers,
-Paul

On 24/03/16 03:55, Swapnil Kulkarni wrote:

On Thu, Mar 24, 2016 at 12:10 AM, Steven Dake (stdake)  wrote:

We had an emergency voting session on this proposal on IRC in our team
meeting today and it passed as documented in the meeting minutes[1].  I was
asked to have a typical vote and discussion on irc by one of the
participants of the vote, so please feel free to discuss and vote again.  I
will leave discussion and voting open until March 30th.  If the voting is
unanimous prior to that time, I will close voting.  The original vote will
stand unless there is a majority that oppose this process in this formal
vote.  (formal votes > informal irc meeting votes).

Thanks,
-steve

[1]
http://eavesdrop.openstack.org/meetings/kolla/2016/kolla.2016-03-23-16.30.log.html

look for timestamp 16:51:05

From: Steven Dake 
Reply-To: OpenStack Development Mailing List

Date: Tuesday, March 22, 2016 at 10:12 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [kolla] Managing bug backports to Mitaka branch

Thierry (ttx in the irc log at [1]) proposed the standard way projects
typically handle backports of newton fixes that should be fixed in an rc,
while also maintaining the information in our rc2/rc3 trackers.

Here is an example bug with the process applied:
https://bugs.launchpad.net/kolla/+bug/1540234

To apply this process, the following happens:

Any individual may propose a newton bug for backport potential by specifying
the tag 'rc-backport-potential' in the Newton 1 milestone.
Core reviewers review the rc-backport-potential bugs.

CRs review [3] on a daily basis for new rc backport candidates.
If the core reviewer thinks the bug should be backported to stable/mitaka
(or belongs in the rc), they use the "Target to series" button, select mitaka,
and save.
  They copy the state of the bug, but set the Mitaka milestone target to
"mitaka-rc2".
Finally they remove the rc-backport-potential tag from the bug, so it isn't
re-reviewed.

The purpose of this proposal is to do the following:

Allow the core reviewer team to keep track of bugs needing attention for the
release candidates in [2] by looking at [3].
Allow master development to proceed un-impeded.
Not single thread on any individual for backporting.
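The daily triage step could even be scripted. The sketch below models bugs as plain dicts for illustration only; a real script would pull them from Launchpad (e.g. via launchpadlib), and the field names here are assumptions:

```python
def backport_candidates(bugs):
    """Pick the bugs a core reviewer still needs to look at: tagged
    rc-backport-potential and not yet targeted to the mitaka series."""
    return [b for b in bugs
            if "rc-backport-potential" in b.get("tags", ())
            and "mitaka" not in b.get("series", ())]

bugs = [
    {"id": 1540234, "tags": ["rc-backport-potential"], "series": []},
    {"id": 1540235, "tags": ["rc-backport-potential"], "series": ["mitaka"]},
    {"id": 1540236, "tags": [], "series": []},
]
backport_candidates(bugs)  # -> only bug 1540234
```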

I'd like further discussion on this proposal at our Wednesday meeting, so
I've blocked off a 20 minute timebox for this topic.  I'd like wide
agreement from the core reviewers to follow this best practice, or
alternately lets come up with a plan b :)

If you're a core reviewer and won't be able to make our next meeting, please
respond on this thread with your thoughts. Let's also not apply the process
until the conclusion of the discussion at Wednesday's meeting.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I was not able to attend the meeting yesterday. I am +1 on this.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Anastasia Kravets
Hi, team!

If you remember, we've created a cookbook for the ec2-api service. After the 
last discussion I've refactored it and added specs.
The final version is located on the cloudscaling github: 
https://github.com/cloudscaling/cookbook-openstack-ec2.
How do we proceed to integrate our cookbook into your project?

Regards,
Anastasia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Cross-repository testing

2016-03-24 Thread Sylwester Brzeczkowski
Hi all!

We want to enable proper testing for Nailgun (fuel-web) extensions. We want
to implement a script which will clone all the repos required for a test run
and actually run the tests.

The script will be placed in the openstack/fuel-web [0] repo and should work
locally and also on the openstack CI (gate jobs). All Nailgun extensions
should have Nailgun in their requirements, so I think it's ok.

What do you think about the idea? Is it a good approach?
Am I missing some already existing solutions for this problem?

PS:
I have already sent one email about this [1], but I decided to broaden the
audience, since the question is not Fuel-specific; it's more about asking for
feedback. Please read that email if you're interested in the details.

[0] https://github.com/openstack/fuel-web
[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/089660.html
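A minimal sketch of the kind of script meant here. The repo names, workdir layout, and tox invocation are placeholders, not an agreed design:

```python
import os
import subprocess

GIT_BASE = "https://git.openstack.org"

def clone_cmds(repos, workdir):
    """Build the git-clone commands for every repo a test run needs."""
    return [
        ["git", "clone", "%s/%s" % (GIT_BASE, repo),
         os.path.join(workdir, repo.split("/")[-1])]
        for repo in repos
    ]

def run_extension_tests(repos, workdir):
    for cmd in clone_cmds(repos, workdir):
        subprocess.check_call(cmd)
    # Run the extension's tests with Nailgun checked out alongside it.
    subprocess.check_call(["tox"], cwd=os.path.join(workdir, "fuel-web"))

# Example repo set ("fuel-nailgun-extension-foo" is a hypothetical name):
# run_extension_tests(
#     ["openstack/fuel-web", "openstack/fuel-nailgun-extension-foo"],
#     "/tmp/ci")
```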

-- 
*Sylwester Brzeczkowski*
Python Software Engineer
Product Development-Core : Product Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Block subtractive schema changes

2016-03-24 Thread stuart . mclaren


I think this makes sense (helps us spot things which could impact upgrade).


Hi Glance Team,

I have registered a blueprint [1] for blocking subtractive schema changes.
Cinder and Nova already support blocking subtractive schema operations. I 
would like to add similar support here.

Please let me know your opinion on the same.

[1] https://blueprints.launchpad.net/glance/+spec/block-subtractive-operations
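In Nova and Cinder, "blocking" amounts to a test that walks migration scripts and fails on subtractive operations. A rough sketch of the idea — the banned-token list and the check itself are illustrative, not the actual Nova/Cinder test code:

```python
import re

# Operations treated as subtractive: they can destroy data or break
# services still running the previous release during a rolling upgrade.
BANNED = ("drop_table", "drop_column", "drop_index",
          "alter_column", "rename_table")

def subtractive_ops(migration_source):
    """Return the banned operations found in a migration script's source."""
    return [op for op in BANNED
            if re.search(r"\b%s\b" % op, migration_source)]

migration = "def upgrade(migrate_engine):\n    images.drop_column('deleted')\n"
subtractive_ops(migration)  # ['drop_column']
```

A real version of this would iterate over every file in the migrations directory and fail the test suite when the returned list is non-empty, with an explicit exemption mechanism for reviewed exceptions.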


Thank you,

Abhishek Kekane


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Launch of an instance from a bootable volume fails on Xen env

2016-03-24 Thread Eugen Block

Hi,


Has this been resolved?  Is anyone working on this issue?


I had traced down the effect and found a possible solution, see  
https://bugs.launchpad.net/nova/+bug/1560965


If you're just trying to start via Horizon, changing  
/srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py ought to fix this for  
you:


---cut here---
control1:/srv/www/openstack-dashboard/openstack_dashboard/dashboards/project/instances/workflows # diff -u5 create_instance.py.dist  
create_instance.py

--- create_instance.py.dist 2016-03-18 10:40:51.123942306 +0100
+++ create_instance.py  2016-03-24 11:49:00.404537704 +0100
@@ -119,11 +119,11 @@
  help_text=_("Volume size in gigabytes "
  "(integer value)."))

 device_name = forms.CharField(label=_("Device Name"),
                               required=False,
-                              initial="vda",
+                              initial="",
                               help_text=_("Volume mount point (e.g. 'vda' "
                                           "mounts at '/dev/vda'). Leave "
                                           "this field blank to let the "
                                           "system choose a device name "
                                           "for you."))
@@ -878,20 +878,23 @@
 dev_source_type_mapping = {
 'volume_id': 'volume',
 'volume_snapshot_id': 'snapshot'
 }
 dev_mapping_2 = [
-{'device_name': device_name,
+{
  'source_type': dev_source_type_mapping[source_type],
  'destination_type': 'volume',
  'delete_on_termination':
  bool(context['delete_on_terminate']),
  'uuid': volume_source_id,
  'boot_index': '0',
  'volume_size': context['volume_size']
  }
 ]
+if device_name:
+dev_mapping_2.append({"device_name": device_name})
+
 else:
 dev_mapping_1 = {context['device_name']: '%s::%s' %
  (context['source_id'],
  bool(context['delete_on_terminate']))
  }
@@ -900,20 +903,22 @@
 exceptions.handle(request, msg)

 elif source_type == 'volume_image_id':
 device_name = context.get('device_name', '').strip() or None
 dev_mapping_2 = [
-{'device_name': device_name,  # None auto-selects device
+{
  'source_type': 'image',
  'destination_type': 'volume',
  'delete_on_termination':
  bool(context['delete_on_terminate']),
  'uuid': context['source_id'],
  'boot_index': '0',
  'volume_size': context['volume_size']
  }
 ]
+if device_name:
+dev_mapping_2.append({"device_name": device_name})

 netids = context.get('network_id', None)
 if netids:
 nics = [{"net-id": netid, "v4-fixed-ip": ""}
 for netid in netids]

---cut here---

I'm not aware of which side effects this change may have. I started a  
similar thread on the mailing list this week:  
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090009.html
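For API users hitting the same problem, the equivalent workaround is to omit the device name (or set it explicitly) in the block device mapping instead of letting a hard-coded "vda" through. A sketch in the style of python-novaclient's block_device_mapping_v2 — the exact client call is an assumption, so check your novaclient version:

```python
def boot_volume_bdm(volume_id, device_name=None, delete_on_terminate=False):
    """Build a block_device_mapping_v2 entry for booting from a volume.

    Leaving device_name unset lets the hypervisor choose (xvda on Xen);
    forcing 'vda' is what triggers the failure described above.
    """
    bdm = {
        "uuid": volume_id,
        "source_type": "volume",
        "destination_type": "volume",
        "boot_index": "0",
        "delete_on_termination": delete_on_terminate,
    }
    if device_name:
        bdm["device_name"] = device_name  # e.g. "xvda" on Xen
    return [bdm]

# Hypothetical usage with a novaclient instance `nova`:
# nova.servers.create(name, image=None, flavor=flavor,
#                     block_device_mapping_v2=boot_volume_bdm(vol_id))
```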


Regards,
Eugen


Zitat von "Benjamin, Arputham" :


Launch of an instance from a bootable volume fails on Xen env.
The root cause of this issue is that Nova is mapping the disk_dev  
/disk_bus to vda/virtio

instead of xvda/xen. (Below is the session output showing the launch error)

Has this been resolved?  Is anyone working on this issue?

Thanks,
Benjamin

2016-03-08 15:07:51.430 3070 INFO nova.virt.block_device  
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8  
976963ca04df48c79f0c87ff7a330d47 310cb58241964e0a92bc939ec1c6a0ff -  
- -] [instance: d45a5b7b-314f-4bfa-893d-3498e04f04fa] Booting with  
volume 1d33ba84-9ce2-467d-97c5-973a7ed48456 at /dev/vda
2016-03-08 15:07:55.863 3070 INFO nova.virt.libvirt.driver  
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8  
976963ca04df48c79f0c87ff7a330d47 310cb58241964e0a92bc939ec1c6a0ff -  
- -] [instance: d45a5b7b-314f-4bfa-893d-3498e04f04fa] Creating image
2016-03-08 15:07:55.864 3070 WARNING nova.virt.libvirt.driver  
[req-b5033c12-196f-411e-8b19-6d15b1e7a5b8  
976963ca04df48c79f0c87ff7a330d47 310cb58241964e0a92bc939ec1c6a0ff -  
- -] [instance: d45a5b7b-314f-4bfa-893d-3498e04f04fa] File injection  
into a boot from volume instance is not supported
2016-03-08 15:08:01.430 3070 IN

Re: [openstack-dev] [Fuel] Wiping node's disks on delete

2016-03-24 Thread Alexander Gordeev
On Wed, Mar 23, 2016 at 7:49 PM, Dmitry Guryanov 
wrote:

>
> I have no objections against clearing bios boot partition, but could
> you describe scenario, how non-efi system will boot with valid
> BIOS_grub and wiped boot code in MBR
>


I fully agree that it's impossible to boot a non-UEFI system without stage1
on the disk. That said, it doesn't mean we shouldn't wipe the dedicated
BIOS_grub partition.

But... How about network booting over PXE? I'm not quite sure whether it's
still technically possible. I've read that stage1 just contains an LBA48
pointer to stage1.5 or stage2. So I can imagine a case where somebody has
tweaked the PXE loader so that it jumps to a predefined LBA48 pointer where
stage1.5/2 resides, bypassing stage1 entirely.

I know that the partitioning layout for the first 2 partitions is always the
same for all target nodes. The actual partition boundaries may vary slightly
due to alignment, depending on the hardware itself. If all nodes are equipped
with identical hardware (which is almost always true for real deployments),
then the BIOS_grub partition resides at the same LBA48 address on every node.
So, even though it may sound tricky and require a lot of manual steps, it's
still possible. No? Did I miss something?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proactive backporting

2016-03-24 Thread Rossella Sblendido


On 03/23/2016 05:52 PM, Ihar Hrachyshka wrote:
> Hey folks,
> 
> some update on proactive backporting for neutron, and a call for action
> from subteam leaders.
> 
> As you probably know, lately we started to backport a lot of bug fixes
> in latest stable branch (liberty atm) + became more systematic in
> getting High+ bug fixes into older stable branch (kilo atm).
> 
> I work on some tooling lately to get the process a bit more straight:
> 
> https://review.openstack.org/#/q/project:openstack-infra/release-tools+owner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22
> 
> 
> I am at the point where I can issue a single command and get the list of
> bugs fixed in master since previous check, with Wishlist bugs filtered
> out [since those are not applicable for backporting]. The pipeline looks
> like:
> 
> ./bugs-fixed-since.py neutron  |
> ./lp-filter-bugs-by-importance.py --importance=Wishlist neutron |
> ./get-lp-links.py
> 
> For Kilo, we probably also need to add another filter for Low impact bugs:
> 
> ./lp-filter-bugs-by-importance.py --importance=Low neutron
> 
> There are more ideas on how to automate the process (specifically, kilo
> backports should probably be postponed till Liberty patches land and be
> handled in a separate workflow pipeline since old-stable criteria are
> different; also, the pipeline should fully automate ‘easy' backport
> proposals, doing cherry-pick and PS upload for the caller).

Wow, great work, thanks a lot!

> 
> However we generate the list of backport candidates, in the end the bug
> list is briefly triaged and categorized and put into the etherpad:
> 
> https://etherpad.openstack.org/p/stable-bug-candidates-from-master
> 
> I backport some fixes that are easy to cherry-pick myself. (easy == with
> a press of a button in gerrit UI)
> 
> Still, we have a lot of backport candidates that require special
> attention in the etherpad.
> 
> I ask folks that cover specific topics in our community (f.e. Assaf for
> testing; Carl and Oleg for DVR/L3; John for IPAM; etc.) to look at the
> current list, book some patches for your subteams to backport, and make
> sure the fixes land in stable.

I've gone through the L2 and ML2 section. I backported missing patches
and added comments where I think no backport is needed.

cheers,

Rossella

> 
> Note that the process generates a lot of traffic on stable branches, and
> that’s why we want more frequent releases. We can’t achieve that on kilo
> since kilo stable is still in the integrated release mode, but starting
> from Liberty we should release more often. It’s on my todo to document
> release process in neutron devref.
> 
> For your reference, it’s just a matter of calling inside
> openstack/releases repo:
> 
> ./tools/new_release.sh liberty neutron bugfix
> 
> FYI I just posted a new Liberty release patch at:
> https://review.openstack.org/296608
> 
> Thanks for attention,
> 
> Ihar Hrachyshka  wrote:
> 
>> Ihar Hrachyshka  wrote:
>>
>>> Rossella Sblendido  wrote:
>>>
 Hi,

 thanks Ihar for the etherpad and for raising this point.
 .


 On 12/18/2015 06:18 PM, Ihar Hrachyshka wrote:
> Hi all,
>
> just wanted to note that the etherpad page [1] with backport
> candidates
> has a lot of work for those who have cycles for backporting relevant
> pieces to Liberty (and Kilo for High+ bugs), so please take some on
> your
> plate and propose backports, then clean up from the page. And please
> don’t hesitate to check the page for more worthy patches in the
> future.
>
> It can’t be a one man army if we want to run the initiative in long
> term.

 I completely agree, it can't be one man army.
 I was thinking that maybe we can be even more proactive.
 How about adding as requirement for a bug fix to be merged to have
 the backport to relevant branches? I think that could help
>>>
>>> I don’t think it will work. First, not everyone should be required to
>>> care about stable branches. It’s my belief that we should avoid
>>> formal requirements that mechanically offload burden from stable team
>>> to those who can’t possibly care less about master.
>>
>> Of course I meant ‘about stable branches’.
>>
>> Ihar
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: op

Re: [openstack-dev] [jacket] Introduction to jacket, a new project

2016-03-24 Thread Kevin.ZhangSen
Hi Ghanshyam,

Thank you for your suggestions; I agree that a service outside OpenStack is 
better. In fact, I am considering which API Jacket should offer, and offering 
the OpenStack API directly would be best. The open question is how to keep API 
compatibility with OpenStack, with as little code modification as possible, 
whenever OpenStack updates its API version. I think this will be an interesting 
question, but we can try to let Jacket offer the OpenStack API.



My replies to your other queries are inline below. If there are any questions, 
please let me know. Thank you again.


Best Regards,
Kevin (Sen Zhang)


At 2016-03-24 11:58:47, "GHANSHYAM MANN"  wrote:
>Hi Kevin,
>
>The idea behind jacket is nice, but I'm not sure how feasible and
>valuable it would be. For API translation and gateways, there are
>many options available; one I remember is Aviator (based on a Ruby gem) [1],
>though I'm not sure how active it is now.
>
>As your idea is more about absorbing all the differences between
>clouds, a few queries:
>
> 1. Different clouds have very different API models and feature
>sets; how worthwhile is it to provide missing/different features
>at the jacket layer? You would end up with yet another cloud layer.
Kevin: We will provide the commonly used functions to let users manage the 
different clouds just like one kind of cloud, for example VM 
management (create/destroy/start/stop/restart/rebuild...) and volume 
management (create/delete/backup/snapshot). As for networking, there is a 
solution that uses Neutron to offer network functions through an overlay 
virtual network built on the provider clouds' virtual networks. This will be 
another project, not part of Jacket.
> 2. To support that idea through the standard OpenStack API, you need to
>insert a jacket driver into every component, which means another
>layer gets inserted there. Maintainability of that is another
>issue for each OpenStack component.
Kevin: I agree with you. Having Jacket offer the OpenStack API will be better.
>IMO, a layer outside OpenStack that can do all of this would be
>nicer: something that redirects API calls at the top level and does
>all the conversion, absorbs the differences, etc.
>
>[1] https://github.com/aviator/aviator
>
>Regards
>Ghanshyam Mann
>
>
>On Wed, Mar 16, 2016 at 9:58 PM, zs  wrote:
>> Hi Gordon,
>>
>> Thank you for your suggestion.
>>
>> I think jacket is different from tricircle. Because tricircle focuses on
>> OpenStack deployment across multiple sites, but jacket focuses on how to
>> manage the different clouds just like one cloud.  There are some
>> differences:
>> 1. Account management and API model: Tricircle faces multiple OpenStack
>> instances which can share one Keystone and have the same API model, but
>> jacket will face the different clouds which have the respective service and
>> different API model. For example, VMware vCloud Director has no volume
>> management like OpenStack and AWS, jacket will offer a fake volume
>> management for this kind of cloud.
>> 2. Image management: One image just can run in one cloud, jacket need
>> consider how to solve this problem.
>> 3. Flavor management: Different clouds have different flavors, which users
>> cannot control. Jacket will face this problem, while tricircle will not
>> have it.
>> 4. Legacy resource adoption: Because of the different API models, this will
>> be a huge challenge for jacket.
>>
>> I think it may be a good solution for jacket to unify the API model
>> for the different clouds, and then use tricircle to offer management of
>> VMs at large scale.
>>
>> Best Regards,
>> Kevin (Sen Zhang)
>>
>>
>> At 2016-03-16 19:51:33, "gordon chung"  wrote:
>>>
>>>
>>>On 16/03/2016 4:03 AM, zs wrote:
 Hi all,

 There is a new project "jacket" to manage multiple clouds. The jacket
 wiki is: https://wiki.openstack.org/wiki/Jacket
   Please review it and give your comments. Thanks.

 Best Regards,

 Kevin (Sen Zhang)


>>>
>>>i don't know exact details of either project, but i suggest you
>>>collaborate with tricircle project[1] because it seems you are
>>>addressing the same user story (and in a very similar fashion). not sure
>>>if it's a user story for OpenStack itself, but no point duplicating
>>> efforts.
>>>
>>>[1] https://wiki.openstack.org/wiki/Tricircle
>>>
>>>cheers,
>>>
>>>--
>>>gord
>>>
>>>__
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo

Re: [openstack-dev] [Glance] Block subtractive schema changes

2016-03-24 Thread Dulko, Michal
On Thu, 2016-03-24 at 10:45 +, stuart.mcla...@hp.com wrote:
> I think this makes sense (helps us spot things which could impact upgrade).
> 
> >Hi Glance Team,
> >
> >I have registered a blueprint [1] for blocking subtractive schema changes.
> >Cinder and Nova are already supporting blocking of subtractive schema 
> >operations. Would like to add similar support here.
> >
> >Please let me know your opinion on the same.
> >
> >[1] 
> >https://blueprints.launchpad.net/glance/+spec/block-subtractive-operations
> >
> >
> >Thank you,
> >
> >Abhishek Kekane

You'll probably need some way to actually perform such migrations when
needed. In Cinder we've introduced guidelines [1], which allow us to
ALTER or DROP a column with a process stretching through 2-3 releases.

Nova does a little better by not allowing nova-compute to access the DB
(nova-conductor is acting as a proxy).

Also note that a unit test won't protect you in all cases. For example, it
won't detect DB-specific migrations written in plain SQL, as in [2].

[1] 
http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html
[2] https://review.openstack.org/#/c/190300
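The kind of guard discussed above can be sketched as a test helper that walks migration scripts and flags subtractive operations. This is a hypothetical illustration (names and patterns are invented, not the actual Nova/Cinder test code); note the raw-SQL patterns, since a purely sqlalchemy-level check misses plain-SQL migrations:

```python
import re
from pathlib import Path

# Illustrative patterns for "subtractive" schema operations.
BANNED_PATTERNS = [
    re.compile(r"\bdrop_column\b"),
    re.compile(r"\bop\.drop_table\b"),
    re.compile(r"ALTER\s+TABLE\s+\w+\s+DROP", re.IGNORECASE),
]

def find_subtractive_ops(migrations_dir):
    """Return (filename, lineno, line) for each banned operation found."""
    violations = []
    for script in sorted(Path(migrations_dir).glob("*.py")):
        for lineno, line in enumerate(script.read_text().splitlines(), 1):
            if any(p.search(line) for p in BANNED_PATTERNS):
                violations.append((script.name, lineno, line.strip()))
    return violations
```

A unit test would then assert that `find_subtractive_ops(...)` returns an empty list for the project's migrations directory.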
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Block subtractive schema changes

2016-03-24 Thread Flavio Percoco

On 24/03/16 05:42 +, Kekane, Abhishek wrote:

Hi Glance Team,



I have registered a blueprint [1] for blocking subtractive schema changes.

Cinder and Nova are already supporting blocking of subtractive schema
operations. Would like to add similar support here.



Please let me know your opinion on the same.



[1] https://blueprints.launchpad.net/glance/+spec/block-subtractive-operations




Hey Abhishek,

Thanks for submitting this! Could you please submit a spec here[0]. That way we
can go through the details and approve accordingly. At first glance, I think it
makes sense.

Flavio

[0] http://git.openstack.org/cgit/openstack/glance-specs/




Thank you,



Abhishek Kekane


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Amrith Kumar
Thanks for the posting, because it tells me that there is a way to contribute 
bots to infra (which I didn't know).

I have experimented with some bots, mostly against gerrit and trying to use the 
rather lightweight Launchpad API.

The one bot that I still operate looks at gerrit and periodically generates an 
email with a list of patch sets to review. The only 'smarts' in it are that it 
waits a reasonable amount of time to send the email, either every 12 hours or 
when there are five or more patch sets to look at.
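The batching rule described above boils down to a small decision function. This is just a sketch of the logic (the names and structure are invented, not the actual bot's code): send the digest once five or more patch sets are pending, or after 12 hours if anything at all is waiting.

```python
import time

REVIEW_THRESHOLD = 5          # send as soon as this many patch sets pile up
MAX_WAIT_SECONDS = 12 * 3600  # ...or after 12 hours, whichever comes first

def should_send_digest(pending_reviews, last_sent_ts, now=None):
    """Decide whether the digest bot should email now."""
    now = time.time() if now is None else now
    if len(pending_reviews) >= REVIEW_THRESHOLD:
        return True
    # Otherwise only send if something is waiting and enough time passed.
    return bool(pending_reviews) and (now - last_sent_ts) >= MAX_WAIT_SECONDS
```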

I have recently been thinking that it would be a good idea to add some 
capabilities to help streamline the handling of patches in Trove, for example 
around the translations suggested by Zanata. Another bot would 
update a web page based on patches to watch for a specific milestone (see the other 
conversation on tagging and searching reviews: 
http://openstack.markmail.org/thread/qh7u3sxmtpwkdzas)

Two specific things I'd like to know are below.

(a) Are there any guidelines around contributing bots; what is considered 
acceptable and what is not? I see mention of 'core reviewers must be human' (I 
was thinking my Zanata bot would +2 things, so it's good to know if 
that's not advisable), and

(b) What bots are available that we could look to use? If others have found 
bots that were worth contributing, it would be good to look there before 
reinventing the wheel.

Thanks, and sorry if this is tangential to the initial intent of your email.

-amrith

> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Wednesday, March 23, 2016 4:27 PM
> To: openstack-dev@lists.openstack.org >> OpenStack Development Mailing
> List 
> Subject: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other
> 
> Bots are very handy for doing repetitive tasks, we agree on that.
> 
> Bots also require permissions to execute certain actions, require
> maintenance to ensure they operate as expected and do create output which
> is music to some and noise to others. Said output is often archieved
> somewhere which requires additional decisions.
> 
> This thread is intended to initiate a conversation about bots. So far we
> have seen developers want to use bots in Gerrit[0] and in IRC[1]. The
> conversation starts there but isn't limited to these tools if folks have
> usecases for other bots.
> 
> I included an item on the infra meeting agenda for yesterday's meeting
> (March 22, 2016) and discovered there was enough interest[2] in a
> discussion to take it to the list, so here it is.
> 
> So some items that have been raised thus far:
> - permissions: having a bot on gerrit with +2 +A is something we would
> like to avoid
> - "unsanctioned" bots (bots not in infra config files) in channels shared
> by multiple teams (meeting channels, the -dev channel)
> - forming a dependence on bots and expecting infra to maintain them ex
> post facto (example: bot soren maintained until soren didn't)
> - causing irritation for others due to the presence of an echoing bot
> which eventually infra will be asked or expected to mediate
> - duplication of features, both meetbot and purplebot log channels and
> host the archives in different locations
> - canonical bot doesn't get maintained
> 
> It is possible that the bots that infra currently maintains have features
> of which folks are unaware, so if someone was willing to spend some time
> communicating those features to folks who like bots we might be able to
> satisfy their needs with what infra currently operates.
> 
> Please include your own thoughts on this topic, hopefully after some
> discussion we can aggregate on some policy/steps forward.
> 
> Thank you,
> Anita.
> 
> 
> [0]
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-
> infra.2016-03-09.log.html#t2016-03-09T15:21:01
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.html
> [2]
> http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-
> 19.02.log.html
> timestamp 19:53
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Messaging: everything can talk to everything, and that is a bad thing

2016-03-24 Thread Flavio Percoco

On 22/03/16 17:20 -0400, Adam Young wrote:

On 03/22/2016 09:15 AM, Flavio Percoco wrote:

   On 21/03/16 21:43 -0400, Adam Young wrote:

   I had a good discussion with the Nova folks in IRC today.

My goal was to understand what could talk to what, and the short answer,
according to dansmith, is:

   " any node in nova land has to be able to talk to the queue for any
   other one for the most part: compute->compute, compute->conductor,
   conductor->compute, api->everything. There might be a few exceptions,
   but not worth it, IMHO, in the current architecture."

   Longer conversation is here:
   http://eavesdrop.openstack.org/irclogs/%23openstack-nova/
   %23openstack-nova.2016-03-21.log.html#t2016-03-21T17:54:27

   Right now, the message queue is a nightmare.  All sorts of sensitive
   information flows over the message queue: Tokens (including admin) are
   the most obvious.  Every piece of audit data. All notifications and all
   control messages.

   Before we continue down the path of "anything can talk to anything" can
   we please map out what needs to talk to what, and why?  Many of the use
   cases seem to be based on something that should be kicked off by the
   conductor, such as "migrate, resize, live-migrate" and it sounds like
   there are plans to make that happen.

   So, let's assume we can get to the point where, if node 1 needs to talk
   to node 2, it will do so only via the conductor.  With that in place,
   we can put an access control rule in place:


   I don't think this is going to scale well. Eventually, this will require
   evolving the conductor to some sort of message scheduler, which is pretty
   much
   what the message bus is supposed to do.


I'll limit this to what happens with Rabbit and QPID (AMQP 1.0) and leave 0 (ZeroMQ)
out of it for now.  I'll use rabbit as shorthand for both of these, but the rules are
the same for qpid.


Sorry for the pedantic nitpick, but it's not Qpid. I'm afraid calling it Qpid
will just confuse people on what we're really talking about here. The amqp1
driver is based on the AMQP 1.0 protocol, which is brokerless. The library used
in oslo.messaging is qpid-proton (A.K.A Proton). Qpid is just the name of the
Apache Foundation family that these projects belong to (including Qpidd the old
broker which we don't support anymore in oslo.messaging).



For, say, a migrate operation, the call goes to API, controller, and eventually
down to one of the compute nodes.  Source? Target?  I don't know the code well
enough to say, but let's say it is the source.  It sends an RPC message to the
target node.  The message goes to  the central broker right now, and then back
down to the targen node.  Meanwhile, the source node has set up a reply queue
and that queue name has gone into the message.  The target machine responds  by
getting a reference to the response queue and sends a message.  This message
goes up to the broker, and then down to the the source node.

A man in the middle could sit there and also read off the queue. It could
modify a message, with its own response queue, and happily transfer things back
and forth.

So, we have the HMAC proposal, which then puts crypto and key distribution all
over the place.  Yes, it would guard against a MITM attack, but the cost in
complexity and processor time is high.


Rabbit does not have a very flexible ACL scheme: basically, one regex per Rabbit
user.  However, we could easily spin up a new queue for direct node-to-node
communication that did meet an ACL regex.  For example, if the
regex said that a node could only read/write queues that have its name in
them, then to make a request and response queue between node-1 and node-2 we could
create the queues


node-1-node-2
node-1-node-2--reply


So, instead of a single request queue, there are two.  And the conductor could tell
the target node: start listening on this queue.
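The per-user regex ACL idea above can be illustrated with a small sketch (purely illustrative; RabbitMQ's actual permissions are configured per vhost/user with read/write/configure regexes, and this just shows the pattern itself): a node may only touch queues that embed its own name, such as node-1-node-2 and node-1-node-2--reply.

```python
import re

def node_queue_pattern(node_name):
    """Build the hypothetical per-node ACL regex: only queues whose
    name contains this node's name (as a whole token) are allowed."""
    return re.compile(r"^.*\b%s\b.*$" % re.escape(node_name))

# node-1 may access its own request/reply queues, nothing else.
acl = node_queue_pattern("node-1")
```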


Or, we could pass the message through the conductor.  The request message goes
from node-1 to the conductor, where the conductor validates the business logic of the
message, then puts it into the message queue for node-2.  Responses can then go
directly back from node-2 to node-1 the way they do now.

OR...we could set up a direct socket between the two nodes, with the socket set
up info going over the broker.  OR we could use a web server,  OR send it over
SNMP.  Or SMTP, OR TFTP.  There are many ways to get the messages from node to
node.

If  we are going to use the message broker to do this, we should at least make
it possible to secure it, even if it is not the default approach.

It might be possible to use a broker specific technology to optimize this, but
I am not a Rabbit expert.  Maybe there is some way of filtering messages?




   1.  Compute nodes can only read from the queue compute.
   -novacompute-.localdomain
   2.  Compute nodes can only write to response queues in the RPC vhost
   3.  Com

Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Alan Pevec
2016-03-24 2:21 GMT+01:00 Robert Collins :
> Trunk will rapidly exceed mitaka's versions, leading to no confusion too.

That's the case now: RC1 tags are reachable from both branches and
master has more patches, generating a higher .devN part. But once RC2
and final tags are pushed, the generated version will be higher on the
stable/mitaka branch:
>>> from packaging.version import Version, parse
>>> rc2=Version("13.0.0.0rc2")
>>> master=Version("13.0.0.0rc2.dev9")
>>> master > rc2
False
>>> ga=Version("13.0.0")
>>> master > ga
False

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][stable][sr-iov] Status of physical_device_mappings

2016-03-24 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, March 23, 2016 8:01 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][nova][stable][sr-iov] Status of
> physical_device_mappings
> 
> +tags for stable and nova
> 
> Hi Vladimir, comments inline. :)
> 
> On 03/21/2016 05:16 AM, Vladimir Eremin wrote:
> > Hey OpenStackers,
> >
> > I've recently found out, that changing of use neutron sriov-agent in
> Mitaka from optional to required[1] makes a kind of regression.
> 
> While I understand that it is important for you to be able to associate
> more than one NIC to a physical network, I see no evidence that there
> was a *regression* in Mitaka. I don't see any ability to specify more
> than one NIC for a physical network in the Liberty Neutron SR-IOV ML2
> agent:
> 
> https://github.com/openstack/neutron/blob/stable/liberty/neutron/common/
> utils.py#L223-L225
> 
> > Before Mitaka, there was possible to use any number of NICs with one
> Neutron physnet just by specifying pci_passthrough_whitelist in nova:
> >
> >  [default]
> >  pci_passthrough_whitelist = { "devname": "eth3",
> > "physical_network": "physnet2"},{ "devname": "eth4",
> > "physical_network": "physnet2"},
> >
> > which means, that eth3 and eth4 will be used for physnet2 in some
> manner.
> 
> Yes, *in Nova*, however from what I can tell, this functionality never
> existed in the parse_mappings() function in neutron.common.utils module.
> 
> > In Mitaka, there also required to setup neutron sriov-agent as well:
> >
> >  [sriov_nic]
> >  physical_device_mappings = physnet2:eth3
> >
> > The problem actually is that it's impossible to specify this mapping as
> "physnet2:eth3,physnet2:eth4" due to implementation details, so it is
> clearly a regression.
> 
> A regression means that a change broke some previously-working
> functionality. This is not a regression, since there apparently was
> never such functionality in Neutron.
This may have worked in the past if you did not use the neutron SR-IOV NIC agent.
In Liberty it was optional and not used with Intel NICs, but in Mitaka it is now 
required.
I do not have a Liberty system to hand to test, but perhaps that is how it 
worked (assuming it did work) in Liberty but not in Mitaka?
> 
> > I've filed bug[2] for it and proposed a patch[3]. Originally
> physical_device_mappings is converted to dict, where physnet name goes
> to key, and interface name to value:
> >
> >  >>> parse_mappings('physnet2:eth3')
> >  {'physnet2': 'eth3'}
> >  >>> parse_mappings('physnet2:eth3,physnet2:eth4')
> >  ValueError: Key physnet2 in mapping: 'physnet2:eth4' not unique
> >
> > I've changed it a bit, so interface name is stored in list, so now
> this case is working:
> >
> >  >>> parse_mappings_multi('physnet2:eth3,physnet2:eth4')
> >  {'physnet2': ['eth3', 'eth4']}
> >
> > I'd like to see this fix[3] in master and Mitaka branch.
> 
> I understand you really want this functionality in Mitaka. And I will
> leave it up to the stable team to determine whether this code should be
> backported to stable/mitaka. However, I will point out that this is a
> new feature, not a bug fix for a regression. There is no regression
> because the ability for Neutron to use more than one NIC with a physnet
> was never supported as far as I can tell.
> 
> Best,
> -jay
> 
> > Moshe Levi also proposed to refactor this part of code to remove
> physical_device_mappings and reuse data that nova provides somehow. I'll
> file the RFE as soon as I figure out how it should work.
> >
> > [1]:
> > http://docs.openstack.org/liberty/networking-guide/adv_config_sriov.ht
> > ml
> > [2]: https://bugs.launchpad.net/neutron/+bug/1558626
> > [3]: https://review.openstack.org/294188
> >
> > --
> > With best regards,
> > Vladimir Eremin,
> > Fuel Deployment Engineer,
> > Mirantis, Inc.
> >
> >
> >
> >
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
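For reference, the parse_mappings_multi behavior discussed above can be sketched as a standalone approximation (the real patch lives in neutron.common.utils; this just demonstrates the behavior change, with values collected into lists so the same physnet key may appear more than once):

```python
def parse_mappings_multi(mapping_str):
    """Parse 'key:value,key:value' into {key: [value, ...]}.

    Unlike the original parse_mappings, a repeated key is allowed and
    its values are accumulated in a list.
    """
    mappings = {}
    for mapping in mapping_str.split(','):
        mapping = mapping.strip()
        if not mapping:
            continue
        key, sep, value = mapping.partition(':')
        if not sep or not key or not value:
            raise ValueError("Invalid mapping: '%s'" % mapping)
        mappings.setdefault(key.strip(), []).append(value.strip())
    return mappings
```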

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Block subtractive schema changes

2016-03-24 Thread Kekane, Abhishek


-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: 24 March 2016 17:21
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Block subtractive schema changes

On 24/03/16 05:42 +, Kekane, Abhishek wrote:
>Hi Glance Team,
>
> 
>
>I have registered a blueprint [1] for blocking subtractive schema changes.
>
>Cinder and Nova are already supporting blocking of subtractive schema 
>operations. Would like to add similar support here.
>
> 
>
>Please let me know your opinion on the same.
>
> 
>
>[1] 
>https://blueprints.launchpad.net/glance/+spec/block-subtractive-operati
>ons
>


Hey Abhishek,

Thanks for submitting this! Could you please submit a spec here[0]. That way we 
can go through the details and approve accordingly. At first glance, I think it 
makes sense.

Hi Flavio,

I will create a spec and submit it ASAP.

Thank You,

Abhishek Kekane

Flavio

[0] http://git.openstack.org/cgit/openstack/glance-specs/
>
> 
>
>Thank you,
>
> 
>
>Abhishek Kekane
>
>
>__
>Disclaimer: This email and any attachments are sent in strictest 
>confidence for the sole use of the addressee and may contain legally 
>privileged, confidential, and proprietary data. If you are not the 
>intended recipient, please advise the sender by replying promptly to 
>this email and then delete and destroy this email and any attachments 
>without any further use, copying or forwarding.

>___
>___ OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] proactive backporting

2016-03-24 Thread Ihar Hrachyshka

Rossella Sblendido  wrote:




On 03/23/2016 05:52 PM, Ihar Hrachyshka wrote:

Hey folks,

some update on proactive backporting for neutron, and a call for action
from subteam leaders.

As you probably know, lately we started to backport a lot of bug fixes
in the latest stable branch (liberty atm) and became more systematic in
getting High+ bug fixes into the older stable branch (kilo atm).

I've been working on some tooling lately to make the process a bit more streamlined:

https://review.openstack.org/#/q/project:openstack-infra/release-tools+owner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22


I am at the point where I can issue a single command and get the list of
bugs fixed in master since the previous check, with Wishlist bugs filtered
out [since those are not applicable for backporting]. The pipeline looks
like:

./bugs-fixed-since.py neutron  |
./lp-filter-bugs-by-importance.py --importance=Wishlist neutron |
./get-lp-links.py

For Kilo, we probably also need to add another filter for Low impact bugs:

./lp-filter-bugs-by-importance.py --importance=Low neutron
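As a stand-in for what the importance filter does, here is a small illustrative sketch. The real lp-filter-bugs-by-importance.py queries Launchpad; this only shows the filtering logic on already-fetched data, and the bug names are placeholders.

```python
# Illustrative stand-in for lp-filter-bugs-by-importance.py: drop bugs whose
# importance disqualifies them as backport candidates. The real script talks
# to Launchpad; here (bug, importance) pairs are supplied directly.
def backport_candidates(fixed_bugs, excluded=("Wishlist",)):
    return [bug for bug, importance in fixed_bugs if importance not in excluded]

fixed = [("bug-1", "High"), ("bug-2", "Wishlist"), ("bug-3", "Low")]
print(backport_candidates(fixed))                               # liberty: ['bug-1', 'bug-3']
print(backport_candidates(fixed, excluded=("Wishlist", "Low"))) # kilo: ['bug-1']
```

The kilo case simply adds "Low" to the excluded importances, matching the extra filter stage in the pipeline above.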

There are more ideas on how to automate the process (specifically, kilo
backports should probably be postponed till Liberty patches land and be
handled in a separate workflow pipeline since old-stable criteria are
different; also, the pipeline should fully automate ‘easy’ backport
proposals, doing cherry-pick and PS upload for the caller).


Wow, great work, thanks a lot!


However we generate the list of backport candidates, in the end the bug
list is briefly triaged and categorized and put into the etherpad:

https://etherpad.openstack.org/p/stable-bug-candidates-from-master

I backport some fixes that are easy to cherry-pick myself. (easy == with
a press of a button in gerrit UI)

Still, we have a lot of backport candidates that require special
attention in the etherpad.

I ask folks that cover specific topics in our community (f.e. Assaf for
testing; Carl and Oleg for DVR/L3; John for IPAM; etc.) to look at the
current list, book some patches for your subteams to backport, and make
sure the fixes land in stable.


I've gone through the L2 and ML2 section. I backported missing patches
and added comments where I think no backport is needed.


Thanks Rossella!

I hope that others will join the effort too. If everyone takes one or two  
patches from the list, it should be low effort for everyone and great  
impact for our users.


Thanks again,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Extending API via Plugins

2016-03-24 Thread Serg Melikyan
Hi wangzhh,

we had several discussions about having plugins for API, for example
we could move the CF Broker API into a plugin instead of having it as
part of murano. Unfortunately we didn't find many use-cases at that
time, so we didn't proceed beyond just talking about it.

Can you share your use-case for extending API?

P.S. I think it's better to discuss this topic in separate thread from
discussion about bundles.
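For reference, a rough pure-Python sketch of the lookup that stevedore would provide for the approach wangzhh describes below. Everything here is hypothetical: the entry-point namespace and class come from wangzhh's proposal, the registry dict stands in for what `stevedore.extension.ExtensionManager` would load from setup.cfg, and murano's API does not currently work this way.

```python
# Pure-Python sketch of the plugin lookup stevedore would provide via
# ExtensionManager('murano.api.v1.extensions'). The ENTRY_POINTS dict below
# stands in for what setup.cfg would declare. All names are hypothetical.
class TestAPI:
    """Hypothetical extension controller (murano.api.v1.extensions.test:testAPI)."""
    def index(self):
        return {"extension": "test", "status": "available"}

# setup.cfg equivalent:
#   murano.api.v1.extensions =
#       test = murano.api.v1.extensions.test:testAPI
ENTRY_POINTS = {"test": TestAPI}

def load_api_extension(name):
    """Instantiate the controller registered under *name*, as stevedore would."""
    try:
        return ENTRY_POINTS[name]()
    except KeyError:
        raise LookupError("no API extension named %r" % name)

print(load_api_extension("test").index())
```

The appeal of the entry-point approach is that adding a new API only requires declaring it in a package's setup.cfg; the server discovers it at load time instead of hard-coding routes.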

On Wed, Mar 23, 2016 at 8:56 PM, 王正浩  wrote:
> Thanks a lot!
> And I have another question.
> When I extend the API, I find it a little inconvenient. Would it be a good idea
> to use stevedore? That way, all the APIs would be treated as plugins.
>
> We can extend API like this:
> 1. Add a namespace such as
>    murano.api.v1.extensions =
>        test = murano.api.v1.extensions.test:testAPI
>    to setup.cfg.
> 2. Add a file named test.py and implement the class testAPI
>
> -- Original --
> From:  "Serg Melikyan";
> Date:  Thu, Mar 24, 2016 07:24 AM
> To:  "OpenStack Development Mailing
> List";
> Cc:  "王正浩";
> Subject:  [murano] HowTo: Compose a local bundle file
>
> Hi wangzhh,
>
> You can use python-muranoclient in order to download bundle from
> apps.openstack.org and then use it somewhere else for the import:
>
> murano bundle-save app-servers
>
> you can find more about this option in corresponding spec [0].
>
> Generally a local bundle is no different from the remote one; you can
> take a look at the same bundle's [1] internals. If you download
> this file and then try to execute:
>
> murano bundle-import ./app-servers.bundle
>
> murano will try to find all mentioned packages in the local folder
> before going to apps.openstack.org.
>
> References:
> [0]
> http://specs.openstack.org/openstack/murano-specs/specs/liberty/bundle-save.html
> [1] http://storage.apps.openstack.org/bundles/app-servers.bundle
>
> On Wed, Mar 23, 2016 at 1:48 AM, 王正浩  wrote:
>>
>> Hi Serg Melikyan!
>>   I'm a programmer from China, and I have a question about the Application
>> Servers Bundle (bundle)
>> https://apps.openstack.org/#tab=murano-apps&asset=Application%20Servers%20Bundle
>>   I want to import a bundle from a local bundle file. Could you tell me
>> how to create a bundle file? Is there any doc explaining it?
>>   Thanks!
>>
>>
>>
>>
>> --
>> --
>> Best Regards,
>> wangzhh
>
>
>
>
> --
> Serg Melikyan



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-24 Thread Steve Gordon
- Original Message -
> From: "Kai Qiang Wu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, March 17, 2016 10:54:13 AM
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting 
> single/multiple OS distro
> 
> HI Steve,
> 
> Some points to highlight here:
> 
> 1> There is some work/discussion about dynamic COE support across
> different OS distros.

Yes, and I think that to best achieve that while reducing the workload on the 
core Magnum team it needs to be easier to relay to those distro maintainers 
what the requirements/expectations are for a given Magnum release - ideally in 
such a way that they can easily discover the expectations for e.g. Newton from 
Magnum's planning output (specs) or somewhere in the developer docs rather than 
having them communicated on an ad hoc basis (what happens today). This would 
involve setting out, up front, the expected minimums for key components (probably on a 
per-baytype basis) similar to the way we expose the expected minimum Libvirt 
versions for Nova's libvirt driver.

> 2>  For atomic, we did have many requirements before; it is an old story, and
> some seemed not to meet our needs (we once asked in the atomic IRC channel or
> community), so we built some images ourselves. But if the atomic community
> could provide related support, it would be more beneficial for both (as we use
> it, it would be tested daily by our jenkins and developers)

An interesting opportunity here might be seeing whether there is a way Magnum 
can be used to provide additional integration testing of the Fedora packages 
*before* they are pushed to updates (and hence rolled into the official Fedora 
Atomic image) - for example if Magnum could have been used to vet the 
kubernetes build I linked earlier in our discussion that would have the benefit 
of both ensuring it didn't break Magnum and also increasing the confidence of 
the Fedora folks (via karma in the build system) to push from updates-testing 
to updates.

There would be some non-trivial work to automate here though as I believe we 
would need an image based on updates-testing (which would need to be updated 
quite frequently) and somewhere to actually trigger/run the tests. I raise it 
here simply because I know that the Fedora folks are also interested in putting 
more testing around this process.

> Maybe for the requirements, need some clear channel, like:
> 
> 
> 1>  What's the official channel to raise requirements with the Atomic community?
> Is it GitHub or something else that can be easily tracked?
> 
> 2> What's the normal process to deal with such requirements, and how do we
> coordinate?
> 
> 3> Others

For the Fedora Atomic image the cloud working group is the right place:

https://fedoraproject.org/wiki/Cloud
https://fedorahosted.org/cloud/report
https://lists.fedoraproject.org/admin/lists/cloud.lists.fedoraproject.org/

As I was getting at above, though, if there are known requirements, surely they 
can also be published for the consumption of all interested COEs rather than 
needing to be manually pushed by Magnum folks to each one individually (though 
this is of course fine *as well* for those COEs where a Magnum contributor is 
specifically interested in acting as a liaison)?

-Steve

> 
> Follow your heart. You are miracle!
> 
> 
> 
> From: Steve Gordon 
> To:   "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 17/03/2016 09:24 pm
> Subject:  Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
> 
> 
> 
> - Original Message -
> > From: "Kai Qiang Wu" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> 
> > Sent: Tuesday, March 15, 2016 3:20:46 PM
> > Subject: Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
> >
> > Hi  Stdake,
> >
> > There is a patch about Atomic 23 support in Magnum.  And atomic 23 uses
> > kubernetes 1.0.6, and docker 1.9.1.
> > From Steve Gordon, I learnt they have a two-weekly release. To me it
> > seems each Atomic 23 release has little difference (minor changes).
> > The major rebases/updates may still have to wait for e.g. Fedora Atomic
> > 24.
> 
> Well, the emphasis here is on *may*. As was pointed out in that same thread
> [1] rebases certainly can occur although those builds need to get karma in
> the fedora build system to be pushed into updates and subsequently included
> in the next rebuild (e.g. see [2] for a newer K8S build). The main point is
> that if a rebase involves introducing some element of backwards
> incompatibility then that would have to wait to the next major (F24) -
> outside of that there is some flexibility.
> 
> > So maybe we do not need to test every Atomic 23 two-weekly release.
> > Pick one, or update the old one when we find it is integrated with a new kubernetes
> > or docker, etcd, etc. If other small changes(not include security

Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-24 Thread Steve Gordon


- Original Message -
> From: "Kai Qiang Wu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, March 23, 2016 9:10:30 PM
> Subject: Re: [openstack-dev] [magnum] Generate atomic images  
> using diskimage-builder
> 
> 1) +1.  diskimage-builder may be a better place for external consumption.
> 
> 2) For the (big) image size difference, I think we need to understand what
> causes it.
> Maybe the Red Hat folks know something about it.

It's worth asking on cl...@lists.fedoraproject.org but I believe the relevant 
inputs to the process they use which you might be able to compare against are 
here:

https://git.fedorahosted.org/cgit/fedora-atomic.git/tree/README.md
https://git.fedorahosted.org/cgit/spin-kickstarts.git/tree/fedora-cloud-atomic.ks

Thanks,

Steve

> 
> Kai Qiang Wu (吴开强  Kennan)
> IBM China System and Technology Lab, Beijing
> 
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
>  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
> 100193
> 
> Follow your heart. You are miracle!
> 
> 
> 
> From: "Ton Ngo" 
> To:   "OpenStack Development Mailing List \(not for usage questions
> \)" 
> Date: 24/03/2016 01:12 am
> Subject:  Re: [openstack-dev] [magnum] Generate atomic images using
> diskimage-builder
> 
> 
> 
> Hi Yolanda,
> Thank you for making a huge improvement from the manual process of building
> the Fedora Atomic image.
> Although Atomic does publish a public OpenStack image that is being
> considered in this patch:
> https://review.openstack.org/#/c/276232/
> in the past we have run into many situations where we need an image with a
> specific version of certain software
> for features or bug fixes (Kubernetes, Docker, Flannel, ...). So the
> automated and customizable build process
> will be very helpful.
> 
> With respect to where to land the patch, I think diskimage-builder is a
> reasonable target.
> If it does not land there, Magnum does currently have 2 sets of
> diskimage-builder elements for Mesos image
> and Ironic image, so it is also reasonable to submit the patch to Magnum.
> With the new push to reorganize
> into drivers for COE and distro, the elements would be a natural fit for
> Fedora Atomic.
> 
> As for periodic image build, it's a good idea to stay current with the
> distro, but we should avoid the situation
> where something new in the image breaks a COE and we are stuck for a while
> until a fix is made. So instead of
> an automated periodic build, we might want to stage the new image to make
> sure it's good before switching.
> 
> One question: I notice the image built by DIB is 871MB, similar to the
> manually built image, while the
> public image from Atomic is 486MB. It might be worthwhile to understand the
> difference.
> 
> Ton Ngo,
> 
> 
> From: Yolanda Robla Mota 
> To: 
> Date: 03/23/2016 04:12 AM
> Subject: [openstack-dev] [magnum] Generate atomic images using
> diskimage-builder
> 
> 
> 
> Hi
> I wanted to start a discussion on how Fedora Atomic images are being
> built. Currently the process for generating the atomic images used on
> Magnum is described here:
> http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html.
> The image needs to be built manually, uploaded to fedorapeople, and then
> consumed from there in the magnum tests.
> I have been working on a feature to allow diskimage-builder to generate
> these images. The code that makes it possible is here:
> https://review.openstack.org/287167
> This will allow magnum images to be generated on infra, using the
> diskimage-builder element. This element also has the ability to consume
> any tree we need, so images can be customized on demand. I generated one
> image using this element and uploaded it to fedorapeople. The image has
> passed tests and has been validated by several people.
> 
> So I'm raising this topic to decide what the next steps should be. The
> change to generate fedora-atomic images has not yet landed in
> diskimage-builder. We have two options here:
> - add this element to the generic diskimage-builder elements, as I'm doing now
> - generate this element internally in magnum: we can have a directory
> in the magnum project, called "elements", holding the fedora-atomic element.
> This will give us more control over the element's behaviour, and will
> allow us to update the element without waiting for external reviews.
> 
> Once the code for diskimage-builder has landed, another step can be to
> periodical

[openstack-dev] [Magnum] Fwd: Fedora Atomic Two Week Delayed

2016-03-24 Thread Steve Gordon
Just an FYI for folks trying to follow the two-weekly Fedora Atomic update 
cycle.

-Steve

- Forwarded Message -
> From: "Adam Miller" 
> To: "Fedora Cloud SIG" , 
> rel-...@lists.fedoraproject.org,
> atomic-de...@projectatomic.io, atomic-annou...@projectatomic.io
> Sent: Wednesday, March 23, 2016 1:29:36 PM
> Subject: Fedora Atomic Two Week Delayed
> 
> Hello all,
> There is currently an issue with Fedora composes because of an
> unexpected change to the dnf API which has broken lorax[0], we are
> currently waiting for this to land in stable[1] and will get a release
> out after that.
> 
> This kind of failure is something we don't currently handle gracefully
> but we have a plan of action that will be worked on asap to resolve
> that. Once that is in place, we hope in the future the issue is caught
> with (hopefully) enough lead time to resolve it before release time.
> 
> On behalf of the Fedora Release Engineering team, apologies to anyone
> this may have caused a negative impact upon.
> 
> Thank you,
> -AdamM
> 
> [0] - https://bugzilla.redhat.com/show_bug.cgi?id=1312087
> [1] - https://bodhi.fedoraproject.org/updates/FEDORA-2016-910ddbf4c8
> ___
> cloud mailing list
> cl...@lists.fedoraproject.org
> http://lists.fedoraproject.org/admin/lists/cl...@lists.fedoraproject.org
> 

-- 
Steve Gordon,
Principal Product Manager,
Red Hat OpenStack Platform

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Flavio Percoco

On 23/03/16 16:27 -0400, Anita Kuno wrote:

Bots are very handy for doing repetitive tasks, we agree on that.

Bots also require permissions to execute certain actions, require
maintenance to ensure they operate as expected and do create output
which is music to some and noise to others. Said output is often
archived somewhere, which requires additional decisions.

This thread is intended to initiate a conversation about bots. So far we
have seen developers want to use bots in Gerrit[0] and in IRC[1]. The
conversation starts there but isn't limited to these tools if folks have
usecases for other bots.

I included an item on the infra meeting agenda for yesterday's meeting
(March 22, 2016) and discovered there was enough interest[2] in a
discussion to take it to the list, so here it is.

So some items that have been raised thus far:
- permissions: having a bot on gerrit with +2 +A is something we would
like to avoid


To be honest, I wouldn't mind having a bot +2/+A in specific cases. An example
would be requirements syncs that have passed the gate, or translations. I
normally ninja-approve those and don't really mind doing it, but I wouldn't
mind having those patches approved automatically since they don't really require
a review.

Flavio


- "unsanctioned" bots (bots not in infra config files) in channels
shared by multiple teams (meeting channels, the -dev channel)
- forming a dependence on bots and expecting infra to maintain them ex
post facto (example: bot soren maintained until soren didn't)
- causing irritation for others due to the presence of an echoing bot
which eventually infra will be asked or expected to mediate
- duplication of features, both meetbot and purplebot log channels and
host the archives in different locations
- canonical bot doesn't get maintained

It is possible that the bots that infra currently maintains have
features of which folks are unaware, so if someone was willing to spend
some time communicating those features to folks who like bots we might
be able to satisfy their needs with what infra currently operates.

Please include your own thoughts on this topic, hopefully after some
discussion we can aggregate on some policy/steps forward.

Thank you,
Anita.


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-03-09.log.html#t2016-03-09T15:21:01
[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.html
[2]
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.02.log.html
timestamp 19:53

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-24 Thread Assaf Muller
On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto  wrote:
> On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
>  wrote:
>> Migration script has been submitted, v1 is not going anywhere from 
>> stable/liberty or stable/mitaka, so it’s about to disappear from master.
>>
>> I’m thinking in this order:
>>
>> - remove jenkins jobs
>> - wait for heat to remove their jenkins jobs ([heat] added to this thread, 
>> so they see this coming before the job breaks)
>
> magnum is relying on lbaasv1.  (with heat)

Is there anything blocking you from moving to v2?

>
>> - remove q-lbaas from devstack, and any references to lbaas v1 in 
>> devstack-gate or infra defaults.
>> - remove v1 code from neutron-lbaas
>>
>> Since newton is now open for commits, this process is going to get started.
>>
>> Thanks,
>> doug
>>
>>
>>
>>> On Mar 8, 2016, at 11:36 AM, Eichberger, German  
>>> wrote:
>>>
>>> Yes, it’s Database only — though we changed the agent driver in the DB from 
>>> V1 to V2 — so if you bring up a V2 with that database it should reschedule 
>>> all your load balancers on the V2 agent driver.
>>>
>>> German
>>>
>>>
>>>
>>>
>>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>>>
 So this looks like only a database migration, right?

 -Original Message-
 From: Eichberger, German [mailto:german.eichber...@hpe.com]
 Sent: Tuesday, March 08, 2016 12:28 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
 weready?

 Ok, for what it’s worth we have contributed our migration script: 
 https://review.openstack.org/#/c/289595/ — please look at this as a 
 starting point and feel free to fix potential problems…

 Thanks,
 German




 On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:

> As far as I recall, you can specify the VIP when creating the LB, so you 
> will end up with the same IPs.
>
> -Original Message-
> From: Eichberger, German [mailto:german.eichber...@hpe.com]
> Sent: Monday, March 07, 2016 8:30 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
> weready?
>
> Hi Sam,
>
> So if you have some 3rd party hardware you only need to change the
> database (your steps 1-5) since the 3rd party hardware will just keep
> load balancing…
>
> Now for Kevin’s case with the namespace driver:
> You would need a 6th step to reschedule the loadbalancers with the V2 
> namespace driver — which can be done.
>
> If we want to migrate to Octavia or (from one LB provider to another) it 
> might be better to use the following steps:
>
> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
> Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
> Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format
> file into some scripts which recreate the load balancers with your
> provider of choice —
>
> 6. Run those scripts
>
> The problem I see is that we will probably end up with different VIPs
> so the end user would need to change their IPs…
>
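Step 5 of the process above could be sketched like this. This is purely illustrative: the exported field names and the v1-to-v2 mapping are invented for the example, and a real migration would also have to handle flavors, pools, listeners, members and health monitors.

```python
# Hypothetical sketch of step 5: turn exported LBaaS v1 data into v2
# 'neutron lbaas-loadbalancer-create' commands. The export format here is
# invented; a real tool must map all v1 attributes to their v2 equivalents.
import json

def v1_export_to_v2_commands(export_json):
    data = json.loads(export_json)
    commands = []
    for vip in data["vips"]:
        commands.append(
            "neutron lbaas-loadbalancer-create --name %s --vip-address %s %s"
            % (vip["name"], vip["address"], vip["subnet_id"]))
    return commands

export = json.dumps(
    {"vips": [{"name": "web-lb", "address": "10.0.0.5",
               "subnet_id": "private-subnet"}]})
for cmd in v1_export_to_v2_commands(export):
    print(cmd)
```

Passing the old VIP address explicitly when recreating the load balancer is what would let tenants keep their existing IPs, per Samuel's earlier point.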
> Thanks,
> German
>
>
>
> On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>
>> As for a migration tool.
>> Due to model changes and deployment changes between LBaaS v1 and LBaaS 
>> v2, I am in favor of the following process:
>>
>> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
>> Health Monitors , Members) into some JSON format file(s) 2. Delete LBaaS 
>> v1 3.
>> Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back
>> over LBaaS v2 (need to allow moving from flavor1-->flavor2, need to
>> make room for some custom modifications for mapping between v1 and v2
>> models)
>>
>> What do you think?
>>
>> -Sam.
>>
>>
>>
>>
>> -Original Message-
>> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>> Sent: Friday, March 04, 2016 2:06 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>> weready?
>>
>> Ok. Thanks for the info.
>>
>> Kevin
>> 
>> From: Brandon Logan [brandon.lo...@rackspace.com]
>> Sent: Thursday, March 03, 2016 2:42 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>> weready?
>>
>> Just for clarity, V2 did not reuse tables, all the tables it uses are 
>> only for it.  The main problem is that v1 and v2 both have a pools 
>> resource, but v1 and v2's pool resource have different attributes.  With 
>> the way neutron wsgi works, if bot

Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Doug Hellmann
Excerpts from Ihar Hrachyshka's message of 2016-03-24 08:44:35 +0100:
> Doug Hellmann  wrote:
> 
> > Excerpts from Alan Pevec's message of 2016-03-22 20:19:44 +0100:
> >>> The release team discussed this at the summit and agreed that it didn't  
> >>> really matter. The only folks seeing the auto-generated versions are  
> >>> those doing CD from git, and they should not be mixing different  
> >>> branches of a project in a given environment. So I don't think it is  
> >>> strictly necessary to raise the major version, or give pbr the hint to  
> >>> do so.
> >>
> >> ok, I'll send confused RDO trunk users here :)
> >> That means until first Newton milestone tag is pushed, master will
> >> have misleading version. Newton schedule is not defined yet but 1st
> >> milestone is normally 1 month after Summit, and 2 months from now is
> >> rather large window.
> >>
> >> Cheers,
> >> Alan
> >
> > Are you packaging unreleased things in RDO? Because those are the only
> > things that will have similar version numbers. We ensure that whatever
> > is actually tagged have good, non-overlapping, versions.
> 
> RDO comes in two flavours. One is ‘classic’ RDO that builds from tarballs.
> 
> But there’s also another thing in RDO called Delorean which is RDO packages  
> built from upstream git heads (both master and stable/*).
> 
> More info:  
> http://blogs.rdoproject.org/7834/delorean-openstack-packages-from-the-future
> 
> It allows to install the latest and greatest from upstream heads. Two  
> options are available: running from ‘current’ which is unvalidated latest  
> heads, or rely on ‘current-passed-ci’ that points to the latest build that  
> was validated by CI.
> 
> Ihar
> 

The assumption we're working under is that someone would choose
between three options when installing, regardless of the form of
the thing they install (direct from git, wheels, self-made packages
of another type, distro packages, whatever):

1. Tagged versions from any branch.
2. Untagged versions on a stable branch.
3. Untagged versions on the master branch.

Option 1 is clear, and always produces deployments that are
reproducible, with versions distinct and increasing over time.

Options 2 and 3 sometimes, right around release cycle boundaries,
produce the same version numbers in different branches for a short
period of time. The lack of distinct version numbers can produce
confusion (in humans and robots), but we thought it extremely
unlikely that anyone would mix option 2 and 3, because that way
lies madness for upgrades and understanding what you actually have
running in your cloud. Choosing either 2 or 3 is fine. Mixing is
not, and the *way* someone installs the code doesn't change the
fact that it's a bad idea to cross the streams.
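A toy illustration of why options 2 and 3 collide right at a cycle boundary. This is not pbr's actual algorithm (which also honors Sem-Ver commit headers and pre-release tags), just the shape of its version inference: last tag on the branch, patch bumped, plus a .devN commit count.

```python
# Toy model (not pbr's real code) of post-release version inference.
# Right after stable/mitaka is cut, master and the stable branch share
# the same last tag, so their inferred untagged versions look alike
# until the first Newton milestone tag lands on master.
def inferred_version(last_tag, commits_since_tag):
    major, minor, patch = (int(part) for part in last_tag.split("."))
    return "%d.%d.%d.dev%d" % (major, minor, patch + 1, commits_since_tag)

print(inferred_version("8.0.0", 12))  # master right after branching: 8.0.1.dev12
print(inferred_version("8.0.0", 12))  # stable/mitaka at the same point: 8.0.1.dev12
```

This is exactly the short window Doug describes: tagged versions (option 1) never collide, but untagged builds from two branches that share a most-recent tag can.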

So, if Delorean includes packages built from untagged commits in
multiple branches, placed in the same package repository where an
automated installation tool can't tell stable/mitaka from master,
that would be bad. Please tell me that's not what happens?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] The same SRIOV / NFV CI failures missed a regression, why?

2016-03-24 Thread Matt Riedemann
We have another mitaka-rc-potential bug [1] due to a regression when 
detaching SR-IOV interfaces in the libvirt driver.


There were two NFV CIs that ran on the original change [2].

Both failed with the same devstack setup error [3][4].

So it sucks that we have a regression, it sucks that no one watched for 
those CI results before approving the change, and it really sucks in 
this case since the bug was specifically reported by Mellanox for SR-IOV, 
whose CI failed in [4]. But it happens.


What I'd like to know is, have the CI problems been fixed? There is a 
change up to fix the regression [5] and this time the Mellanox CI check 
is passing [6]. The Intel NFV CI hasn't reported, but with the mellanox 
one also testing the suspend scenario, it's probably good enough.


[1] https://bugs.launchpad.net/nova/+bug/1560860
[2] https://review.openstack.org/#/c/262341/
[3] 
http://intel-openstack-ci-logs.ovh/compute-ci/refs/changes/41/262341/7/compute-nfv-flavors/20160215_232057/screen/n-sch.log.gz
[4] 
http://144.76.193.39/ci-artifacts/262341/7/Nova-ML2-Sriov/logs/n-sch.log.gz

[5] https://review.openstack.org/#/c/296305/
[6] 
http://144.76.193.39/ci-artifacts/296305/1/Nova-ML2-Sriov/testr_results.html.gz


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2016-03-24 09:12:36 -0400:
> On 23/03/16 16:27 -0400, Anita Kuno wrote:
> >Bots are very handy for doing repetitive tasks, we agree on that.
> >
> >Bots also require permissions to execute certain actions, require
> >maintenance to ensure they operate as expected and do create output
> >which is music to some and noise to others. Said output is often
> >archived somewhere, which requires additional decisions.
> >
> >This thread is intended to initiate a conversation about bots. So far we
> >have seen developers want to use bots in Gerrit[0] and in IRC[1]. The
> >conversation starts there but isn't limited to these tools if folks have
> >usecases for other bots.
> >
> >I included an item on the infra meeting agenda for yesterday's meeting
> >(March 22, 2016) and discovered there was enough interest[2] in a
> >discussion to take it to the list, so here it is.
> >
> >So some items that have been raised thus far:
> >- permissions: having a bot on gerrit with +2 +A is something we would
> >like to avoid
> 
> To be honest, I wouldn't mind having a bot +2A on specific cases. An example
> would be requirements syncs that have passed the gate, or translations. I
> normally ninja-approve those and I don't really mind doing it, but I
> wouldn't mind having those patches approved automatically since they don't
> really require a review.

The problem with automating some of those things is we have to take
into account schedules, outages, gate issues, and other reasons
when we should *not* take automated actions. We can code some of
those things into the bots, but I expect there will always be special
cases.

To look at patches generated by bots (Zanata, requirements updates,
etc.), we purposefully have those bots introduce the patches because
producing them is a repetitive task, ripe for automation.  We do
not let the bots automatically approve those patches so we can
retain control over *when* changes land, because knowing when it's
OK to merge a change requires knowledge outside of what the bot
will have.

This is also why we've been working so hard to put together a
reviewable release process. Building the package is highly automatable.
Knowing when it's OK to build the package is not.

Doug

> 
> Flavio
> 
> >- "unsanctioned" bots (bots not in infra config files) in channels
> >shared by multiple teams (meeting channels, the -dev channel)
> >- forming a dependence on bots and expecting infra to maintain them ex
> >post facto (example: bot soren maintained until soren didn't)
> >- causing irritation for others due to the presence of an echoing bot
> >which eventually infra will be asked or expected to mediate
> >- duplication of features, both meetbot and purplebot log channels and
> >host the archives in different locations
> >- canonical bot doesn't get maintained
> >
> >It is possible that the bots that infra currently maintains have
> >features of which folks are unaware, so if someone was willing to spend
> >some time communicating those features to folks who like bots we might
> >be able to satisfy their needs with what infra currently operates.
> >
> >Please include your own thoughts on this topic, hopefully after some
> >discussion we can aggregate on some policy/steps forward.
> >
> >Thank you,
> >Anita.
> >
> >
> >[0]
> >http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-03-09.log.html#t2016-03-09T15:21:01
> >[1]
> >http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.html
> >[2]
> >http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.02.log.html
> >timestamp 19:53
> >
> 



[openstack-dev] [nova][neutron] Update domain network xml on live migration

2016-03-24 Thread Andreas Scheuring
Hi, 
On Live Migration, I want to update the domain.xml with the required
network interface definition for the target node. For code, please see
the prototype [2]

I'm reaching out for feedback about the right way to implement this!


Use Cases/Problem:
=
#1 Live Migration with Neutron Macvtap agent (see bug [1])
#2 Live Migration cross compute nodes that run different l2 agents for
agent transitioning.

More details on the use cases see further below.

Today, neither works!
Reason: to get the interface information for the target node, Nova needs
to query Neutron. But Neutron can return this information only when the
binding:host_id attribute of the corresponding port is set to the target
host. This update happens during the live migration process, but it
happens too late - in post_live_migration. I need this information in
pre_live_migration!


Proposal

Update the port binding in pre_live_migration instead of in
post_live_migration. Then the virt driver can just query the instance's
ports to receive the network interface information for the target node
and update the migration XML accordingly.

I posted a working prototype here [2]
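
The proposed pre_live_migration binding update can be sketched roughly as
follows. This is a hedged sketch, not the prototype's actual code: `neutron`
is assumed to be any client object exposing Neutron's update_port() call
(e.g. python-neutronclient's Client), and the helper name is made up here.

```python
def point_bindings_at_target(neutron, port_ids, dest_host):
    """Update binding:host_id of the instance's ports to the migration
    target, so a subsequent port query returns target-side VIF details.

    Sketch only: error handling and the rollback on a failed migration
    (resetting binding:host_id to the source host) are omitted.
    """
    body = {'port': {'binding:host_id': dest_host}}
    return [neutron.update_port(port_id, body) for port_id in port_ids]


class FakeNeutron:
    """Stand-in for a Neutron client, so the sketch runs without a cloud."""

    def __init__(self):
        self.calls = []

    def update_port(self, port_id, body):
        # Record the call and echo back what a port update would return.
        self.calls.append((port_id, body))
        return {'port': {'id': port_id,
                         'binding:host_id': body['port']['binding:host_id']}}


neutron = FakeNeutron()
updated = point_bindings_at_target(neutron, ['port-1', 'port-2'], 'dest-node')
```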

Open Questions: 
* Is there a reason for doing the portbinding after migration? Some
races or so?
* Where to do the cleanup on a failed migration (set the binding:host_id
to the migration source again)?
* Status of the port during migration when binding:host_id = dest_host:
  * ovs-hybrid plug: Status = ACTIVE
  * ovs-non-hybrid, lb, macvtap plug: Status = BUILD (goes to ACTIVE as
soon as libvirt has created the instance container and, with it, the
network device on the target). Is this a problem?


Alternatives

* Let Neutron implement a new API to request the port details for a
certain migration target
  * without changing the port
  * or storing the target details internally, e.g. in port.migration_port
* Allow port binding to 2 hosts in parallel (rkukura proposed a patchset
some time ago - need to contact him)



More details on use Cases/Problems:
=
#1 For correct live migration with the Neutron Macvtap agent, the same
physical_interface_mapping must be deployed on each node. If one node
wants to use another mapping, migration fails or the instance is
migrated into the wrong network. This happens because, for macvtap, the
name of the interface to place the macvtap upon is hard-coded into the
domain.xml. For proper migration, I must be able to specify the
interface that is used on the target side! For more details, see [1].

#2 Live Migration cross compute nodes that run different l2 agents.
E.g. one node runs the ovs agent, another one runs the lb agent. I want
to be able to live migrate between those 2 nodes. This could be
interesting as transition strategy from one l2 agent to another one
without the need of shutting down an instance. (Assuming the ML2 Neutron
plugin is being used)

Any feedback is welcome! 
Thank you!




[1] https://bugs.launchpad.net/neutron/+bug/1550400
[2] https://review.openstack.org/#/c/297100/



-- 
-
Andreas (IRC: scheuran) 





[openstack-dev] [containers][horizon][magnum-ui] - Stable version for Liberty?

2016-03-24 Thread Marcos Fermin Lobo
Hi all,

I have a question about the magnum-ui plugin for Horizon. I see that there is
a tarball for the stable/liberty version, but it is very simple - just index
views, without any "create" actions.

I also see a lot of work in the master branch of this project, but it is not
compatible with Horizon Liberty. My question to the people in charge of this
project is: when the code is stable, do you plan to backport all the
functionality to the Liberty version, or just target Mitaka?

Thank you.

Regards,
Marcos.


Re: [openstack-dev] [Infra] Cross-repository testing

2016-03-24 Thread Jeremy Stanley
On 2016-03-24 11:44:16 +0100 (+0100), Sylwester Brzeczkowski wrote:
> We want to enable proper testing for Nailgun (fuel-web) extensions. We want
> to implement a script which will clone all required repos for test and
> actually run the tests.
> 
> The script will be placed in openstack/fuel-web [0] repo and should work
> locally and also on openstack CI (gate jobs). All Nailgun Extensions should
> have Nailgun in its requirements, so I think it's ok.
> 
> What do you think about the idea? Is it a good approach?
> Am I missing some already existing solutions for this problem?
[...]

This is basically what devstack-gate does for testing OpenStack
deployments with DevStack. The logic for getting change dependencies
right is far more complex than simply cloning a bunch of
repositories. You need to make sure you try to fetch appropriate Zuul
refs for all of them when available, fallback to sane branch
selections when not all of the repos have the same branches, make
use of repo caches for improved performance and update them to
current state from our remote mirror, et cetera.

Zuul includes a standalone command-line tool which also performs
these operations, and we make it available on all our job workers at
/usr/zuul-env/bin/zuul-cloner. Check job macros in the
project-config directory for numerous examples of creating clonemaps
and invoking zuul-cloner in jobs.
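
A job typically supplies zuul-cloner with a clonemap telling it where to put
each repository. The entries below are illustrative assumptions, not copied
from project-config:

```yaml
# clonemap.yaml - illustrative only; adjust names/destinations to your jobs.
clonemap:
  - name: 'openstack/fuel-web'
    dest: 'fuel-web'
  # a regex entry can map a whole family of extension repos
  - name: 'openstack/(fuel-nailgun-extension-.*)'
    dest: '\1'
```

The job then runs something like `/usr/zuul-env/bin/zuul-cloner -m
clonemap.yaml git://git.openstack.org openstack/fuel-web ...`, which fetches
the appropriate Zuul refs when available and falls back to sane branches
otherwise.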
-- 
Jeremy Stanley



Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Amrith Kumar
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Thursday, March 24, 2016 9:13 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other
> 
> On 23/03/16 16:27 -0400, Anita Kuno wrote:
> >Bots are very handy for doing repetitive tasks, we agree on that.
> >
> >Bots also require permissions to execute certain actions, require
> >maintenance to ensure they operate as expected and do create output
> >which is music to some and noise to others. Said output is often
> >archived somewhere, which requires additional decisions.
> >
> >This thread is intended to initiate a conversation about bots. So far
> >we have seen developers want to use bots in Gerrit[0] and in IRC[1].
> >The conversation starts there but isn't limited to these tools if folks
> >have usecases for other bots.
> >
> >I included an item on the infra meeting agenda for yesterday's meeting
>(March 22, 2016) and discovered there was enough interest[2] in a
> >discussion to take it to the list, so here it is.
> >
> >So some items that have been raised thus far:
> >- permissions: having a bot on gerrit with +2 +A is something we would
> >like to avoid
> 
> To be honest, I wouldn't mind having a bot +2A on specific cases. An
> example would be requirements syncs that have passed the gate or
> translations. I normally ninja-approve those and I don't really mind doing
> it, but I wouldn't mind having those patches approved automatically since
> they don't really require a review.
> 

[amrith] I'm strongly against that in Trove (and I can see similar issues for 
other projects that have guest images with Python packages installed). I know 
that +2A for requirements patches wasn't the thrust of this email thread, but I 
just want to point out that in cases like Trove, where there are guest images 
with requirements of their own, patches with requirements changes have very 
material and consequential impacts and often get -2'ed until we can resolve 
dependent issues. And when some things like this have slipped through the 
cracks, very, very bad things have happened.

> Flavio
> 
> >- "unsanctioned" bots (bots not in infra config files) in channels
> >shared by multiple teams (meeting channels, the -dev channel)
> >- forming a dependence on bots and expecting infra to maintain them ex
> >post facto (example: bot soren maintained until soren didn't)
> >- causing irritation for others due to the presence of an echoing bot
> >which eventually infra will be asked or expected to mediate
> >- duplication of features, both meetbot and purplebot log channels and
> >host the archives in different locations
> >- canonical bot doesn't get maintained
> >
> >It is possible that the bots that infra currently maintains have
> >features of which folks are unaware, so if someone was willing to spend
> >some time communicating those features to folks who like bots we might
> >be able to satisfy their needs with what infra currently operates.
> >
> >Please include your own thoughts on this topic, hopefully after some
> >discussion we can aggregate on some policy/steps forward.
> >
> >Thank you,
> >Anita.
> >
> >
> >[0]
> >http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-
> >infra.2016-03-09.log.html#t2016-03-09T15:21:01
> >[1]
> >http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.ht
> >ml
> >[2]
> >http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.
> >02.log.html
> >timestamp 19:53
> >
> 
> --
> @flaper87
> Flavio Percoco


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-24 Thread Zane Bitter

On 24/03/16 01:39, Rabi Mishra wrote:

On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:

Hello,
It looks similar on issue, which was discussed here [1]
I suppose, that the root cause is incorrect using get_attr for your
case.
Probably you got "list"  instead of "string".
F.e. if I do something similar:
outputs:
  rg_1:
value: {get_attr: [rg_a, rg_a_public_ip]}
  rg_2:
value: {get_attr: [rg_a, rg_a_public_ip, 0]}

  rg_3:
value: {get_attr: [rg_a]}
  rg_4:
value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
where rg_a is also resource group which uses custom template as
resource.
the custom template has output value rg_a_public_ip.
The output for it looks like [2]
So as you can see, that in first case (like it is used in your example),
get_attr returns list with one element.
rg_2 is also wrong, because it takes the first character from the string
with the IP address.


Shouldn't rg_2 and rg_4 be equivalent?


They are the same for template version 2013-05-23. However, they behave
differently from the next version (2014-10-16) onward and return a list of
characters. I think this is because the `get_attr` function mapping changed
in 2014-10-16.


2013-05-23 -  
https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L70
2014-10-16 -  
https://github.com/openstack/heat/blob/master/heat/engine/hot/template.py#L291


Correct. I think that's probably the source of confusion.


This makes me wonder why a template author would do something like
{get_attr: [rg_a, rg_a_public_ip, 0]} when they can easily do

This is why: https://bugs.launchpad.net/heat/+bug/1341048
You can do things with 2014-10-16 that aren't possible with 2013-05-23.


{get_attr: [rg_a, resource.0.rg_a_public_ip]} or {get_attr: [rg_a, resource.0, 
rg_a_public_ip]}
for specific resource attributes.


Yep, these are the only correct ways of doing this in 2014-10-16 and up, 
and the first is the only efficient way of doing it in 2013-05-23 (note: 
the second way works only in 2014-10-16 and up).
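
To summarize in template form, reusing the resource and attribute names from
the example earlier in the thread (a sketch, not tested against either
template version):

```yaml
# Sketch for heat_template_version: 2014-10-16
outputs:
  all_ips:
    # list of rg_a_public_ip values, one per group member
    value: {get_attr: [rg_a, rg_a_public_ip]}
  first_ip:
    # works in 2013-05-23 and later
    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
  first_ip_alt:
    # path form, works only in 2014-10-16 and later
    value: {get_attr: [rg_a, resource.0, rg_a_public_ip]}
```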



I understand that {get_attr: [rg_a, rg_a_public_ip]} can be useful when we
just want to use the list of attributes.


Indeed.

- ZB




Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-24 Thread Sean McGinnis
On Wed, Mar 23, 2016 at 05:22:34PM -0700, melanie witt wrote:
> On Mar 23, 2016, at 16:56, melanie witt  wrote:
> 
> > I may have found a workaround for the scroll jumping after reading through 
> > the upstream issue comments [1]: use the "Slow" setting in the preferences 
> > for Render. Click the gear icon in the upper right corner of the diff view 
> > and click the Render switch to "Slow" and then Apply. It seems to be 
> > working for me so far.
> 
> I realized Apply only works for the current screen, you must use Save to make 
> the setting stick for future screens [2].
> 
> -melanie

This was a definite improvement for me, but Gerrit still occasionally
decides to jump around a bit. I would nevertheless recommend changing this
setting to at least improve the situation.

> 
> [2] 
> https://gerrit-review.googlesource.com/Documentation/user-review-ui.html#diff-preferences







Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-24 Thread Hongbin Lu


> -Original Message-
> From: Assaf Muller [mailto:as...@redhat.com]
> Sent: March-24-16 9:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 -
> are weready?
> 
> On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto
>  wrote:
> > On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
> >  wrote:
> >> Migration script has been submitted, v1 is not going anywhere from
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >>
> >> I’m thinking in this order:
> >>
> >> - remove jenkins jobs
> >> - wait for heat to remove their jenkins jobs ([heat] added to this
> >> thread, so they see this coming before the job breaks)
> >
> > magnum is relying on lbaasv1.  (with heat)
> 
> Is there anything blocking you from moving to v2?

A blueprint was created for that: 
https://blueprints.launchpad.net/magnum/+spec/migrate-to-lbaas-v2 . It will be 
picked up by contributors once it is approved. Please give us some time to 
finish the work.

> 
> >
> >> - remove q-lbaas from devstack, and any references to lbaas v1 in
> devstack-gate or infra defaults.
> >> - remove v1 code from neutron-lbaas
> >>
> >> Since newton is now open for commits, this process is going to get
> started.
> >>
> >> Thanks,
> >> doug
> >>
> >>
> >>
> >>> On Mar 8, 2016, at 11:36 AM, Eichberger, German
>  wrote:
> >>>
> >>> Yes, it’s Database only — though we changed the agent driver in the
> DB from V1 to V2 — so if you bring up a V2 with that database it should
> reschedule all your load balancers on the V2 agent driver.
> >>>
> >>> German
> >>>
> >>>
> >>>
> >>>
> >>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
> >>>
>  So this looks like only a database migration, right?
> 
>  -Original Message-
>  From: Eichberger, German [mailto:german.eichber...@hpe.com]
>  Sent: Tuesday, March 08, 2016 12:28 AM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> 
>  Ok, for what it’s worth we have contributed our migration script:
>  https://review.openstack.org/#/c/289595/ — please look at this as
> a
>  starting point and feel free to fix potential problems…
> 
>  Thanks,
>  German
> 
> 
> 
> 
>  On 3/7/16, 11:00 AM, "Samuel Bercovici" 
> wrote:
> 
> > As far as I recall, you can specify the VIP in creating the LB so
> you will end up with same IPs.
> >
> > -Original Message-
> > From: Eichberger, German [mailto:german.eichber...@hpe.com]
> > Sent: Monday, March 07, 2016 8:30 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> >
> > Hi Sam,
> >
> > So if you have some 3rd party hardware you only need to change
> the
> > database (your steps 1-5) since the 3rd party hardware will just
> > keep load balancing…
> >
> > Now for Kevin’s case with the namespace driver:
> > You would need a 6th step to reschedule the loadbalancers with
> the V2 namespace driver — which can be done.
> >
> > If we want to migrate to Octavia or (from one LB provider to
> another) it might be better to use the following steps:
> >
> > 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> > Health Monitors , Members) into some JSON format file(s) 2.
> Delete LBaaS v1 3.
> > Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON
> > format file into some scripts which recreate the load balancers
> > with your provider of choice —
> >
> > 6. Run those scripts
> >
> > The problem I see is that we will probably end up with different
> > VIPs so the end user would need to change their IPs…
> >
> > Thanks,
> > German
> >
> >
> >
> > On 3/6/16, 5:35 AM, "Samuel Bercovici" 
> wrote:
> >
> >> As for a migration tool.
> >> Due to model changes and deployment changes between LBaaS v1 and
> LBaaS v2, I am in favor for the following process:
> >>
> >> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> >> Health Monitors , Members) into some JSON format file(s) 2.
> Delete LBaaS v1 3.
> >> Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1
> >> back over LBaaS v2 (need to allow moving from falvor1-->flavor2,
> >> need to make room to some custom modification for mapping
> between
> >> v1 and v2
> >> models)
> >>
> >> What do you think?
> >>
> >> -Sam.
> >>
> >>
> >>
> >>
> >> -Original Message-
> >> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> >> Sent: Friday, March 04, 2016 2:06 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Neutron][

Re: [openstack-dev] [cinder][nova] Cinder-Nova API meeting

2016-03-24 Thread Sean McGinnis
On Thu, Mar 24, 2016 at 09:01:28AM +, Ildikó Váncsa wrote:
> Hi All,
> 
> As it was discussed several times on this mailing list there is room for 
> improvements regarding the Cinder-Nova interaction. To fix these issues we 
> would like to create a cross-project spec to capture the problems and ways to 
> solve them. The current activity is captured on this etherpad: 
> https://etherpad.openstack.org/p/cinder-nova-api-changes
> 
> Before writing up several specs we will have a meeting next Wednesday to 
> synchronize and discuss with the two teams and everyone who's interested in 
> this work the way forward.
> 
> The meeting will be held on the #openstack-meeting-cp channel, on March 30, 
> 2100UTC.
> 
> Please let me know if you have any questions.

I will be travelling during this time and will not be able to attend,
but I look forward to seeing where the discussion goes. I think it is
time well spent. Thanks for arranging this!

> 
> See you next week!
> Best Regards,
> /Ildikó
> 



Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-24 Thread Zane Bitter

On 24/03/16 04:29, Steven Hardy wrote:

On Thu, Mar 24, 2016 at 01:39:01AM -0400, Rabi Mishra wrote:

On Wed, Mar 23, 2016 at 05:25:57PM +0300, Sergey Kraynev wrote:

Hello,
It looks similar on issue, which was discussed here [1]
I suppose, that the root cause is incorrect using get_attr for your
case.
Probably you got "list"  instead of "string".
F.e. if I do something similar:
outputs:
  rg_1:
value: {get_attr: [rg_a, rg_a_public_ip]}
  rg_2:
value: {get_attr: [rg_a, rg_a_public_ip, 0]}

  rg_3:
value: {get_attr: [rg_a]}
  rg_4:
value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}
where rg_a is also resource group which uses custom template as
resource.
the custom template has output value rg_a_public_ip.
The output for it looks like [2]
So as you can see, that in first case (like it is used in your example),
get_attr returns list with one element.
rg_2 is also wrong, because it takes the first character from the string
with the IP address.


Shouldn't rg_2 and rg_4 be equivalent?


They are the same for template version 2013-05-23. However, they behave
differently from the next version (2014-10-16) onward and return a list of
characters. I think this is because the `get_attr` function mapping changed
in 2014-10-16.


Ok, I guess it's way too late to fix it, but it still sounds like a
backwards incompatible regression to me.


It is backwards incompatible, but it's not a regression. This is the 
exact reason why we have versioned templates.


It's an intentional feature, done by the book with a template version 
bump, which you +2'd at the time :)


https://bugs.launchpad.net/heat/+bug/1341048
https://git.openstack.org/cgit/openstack/heat/commit/?id=eca6faa83e13a43546f7533b52e0fd3071fd97d4
https://git.openstack.org/cgit/openstack/heat/commit/?id=b4903673cd24efd631df79d33cfe4bd07a4243d6

- ZB



Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-24 Thread Adrian Otto


> On Mar 24, 2016, at 7:48 AM, Hongbin Lu  wrote:
> 
> 
> 
>> -Original Message-
>> From: Assaf Muller [mailto:as...@redhat.com]
>> Sent: March-24-16 9:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 -
>> are weready?
>> 
>> On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto
>>  wrote:
>>> On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
>>>  wrote:
 Migration script has been submitted, v1 is not going anywhere from
>> stable/liberty or stable/mitaka, so it’s about to disappear from master.
 
 I’m thinking in this order:
 
 - remove jenkins jobs
 - wait for heat to remove their jenkins jobs ([heat] added to this
 thread, so they see this coming before the job breaks)
>>> 
>>> magnum is relying on lbaasv1.  (with heat)
>> 
>> Is there anything blocking you from moving to v2?
> 
> A blueprint was created for that: 
> https://blueprints.launchpad.net/magnum/+spec/migrate-to-lbaas-v2 . It will 
> be picked up by contributors once it is approved. Please give us some time 
> to finish the work.

Approved.

 - remove q-lbaas from devstack, and any references to lbaas v1 in
>> devstack-gate or infra defaults.
 - remove v1 code from neutron-lbaas
 
 Since newton is now open for commits, this process is going to get
>> started.
 
 Thanks,
 doug
 
 
 
> On Mar 8, 2016, at 11:36 AM, Eichberger, German
>>  wrote:
> 
> Yes, it’s Database only — though we changed the agent driver in the
>> DB from V1 to V2 — so if you bring up a V2 with that database it should
>> reschedule all your load balancers on the V2 agent driver.
> 
> German
> 
> 
> 
> 
>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>> 
>> So this looks like only a database migration, right?
>> 
>> -Original Message-
>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>> Sent: Tuesday, March 08, 2016 12:28 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
>> are weready?
>> 
>> Ok, for what it’s worth we have contributed our migration script:
>> https://review.openstack.org/#/c/289595/ — please look at this as
>> a
>> starting point and feel free to fix potential problems…
>> 
>> Thanks,
>> German
>> 
>> 
>> 
>> 
>> On 3/7/16, 11:00 AM, "Samuel Bercovici" 
>> wrote:
>> 
>>> As far as I recall, you can specify the VIP in creating the LB so
>> you will end up with same IPs.
>>> 
>>> -Original Message-
>>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>>> Sent: Monday, March 07, 2016 8:30 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
>> are weready?
>>> 
>>> Hi Sam,
>>> 
>>> So if you have some 3rd party hardware you only need to change
>> the
>>> database (your steps 1-5) since the 3rd party hardware will just
>>> keep load balancing…
>>> 
>>> Now for Kevin’s case with the namespace driver:
>>> You would need a 6th step to reschedule the loadbalancers with
>> the V2 namespace driver — which can be done.
>>> 
>>> If we want to migrate to Octavia or (from one LB provider to
>> another) it might be better to use the following steps:
>>> 
>>> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
>>> Health Monitors , Members) into some JSON format file(s) 2.
>> Delete LBaaS v1 3.
>>> Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON
>>> format file into some scripts which recreate the load balancers
>>> with your provider of choice —
>>> 
>>> 6. Run those scripts
>>> 
>>> The problem I see is that we will probably end up with different
>>> VIPs so the end user would need to change their IPs…
>>> 
>>> Thanks,
>>> German
>>> 
>>> 
>>> 
>>> On 3/6/16, 5:35 AM, "Samuel Bercovici" 
>> wrote:
>>> 
 As for a migration tool.
 Due to model changes and deployment changes between LBaaS v1 and
>> LBaaS v2, I am in favor for the following process:
 
 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
 Health Monitors , Members) into some JSON format file(s) 2.
>> Delete LBaaS v1 3.
 Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1
 back over LBaaS v2 (need to allow moving from falvor1-->flavor2,
 need to make room to some custom modification for mapping
>> between
 v1 and v2
 models)
 
 What do you think?
 
 -Sam.
 
 
 
 
 -Original Message-
 From: Fox, Kevin M [mailto:kevin@pnnl.gov]
 Sent: Frida

Re: [openstack-dev] [heat] Issue with validation and preview due to get_attr==None

2016-03-24 Thread Sergey Kraynev
Zane, I like your idea. For example, we could discuss some steps for it
during a summit session (if needed).

Also, I have another question, which has probably come up many times:
can we somehow improve our existing approach to validation?
We do validation twice - before create and during it.
One issue I also see is that the first validation is a synchronous
operation, and it takes a lot of time for huge stacks. Maybe we need a
separate state, such as VALIDATING, for stacks? And maybe that would also
help solve our current issue with built-in functions?

On 23 March 2016 at 23:11, Zane Bitter  wrote:
> On 23/03/16 13:14, Steven Hardy wrote:
>>
>> Hi all,
>>
>> I'm looking for some help and additional input on this bug:
>>
>> https://bugs.launchpad.net/heat/+bug/1559807
>
>
> Hmm, I was wondering how this ever worked, but it appears you're making
> particularly aggressive use of the list_join and map_merge Functions there -
> where you're not only getting the elements in the list of things to merge
> (as presumably originally envisioned) but actually getting the list itself
> from an intrinsic function. If we're going to support that then those
> functions need to handle the fact that the input argument may be None, just
> as they do for the list members (see the ensure_string() and ensure_map()
> functions inside the result() methods of those two Functions).
>
>
>> Basically, we have multiple issues due to the fact that we consider
>> get_attr to resolve to None at any point before a resource is actually
>> instantiated.
>>
>> It's due to this:
>>
>>
>> https://github.com/openstack/heat/blob/master/heat/engine/hot/functions.py#L163
>>
>> This then causes problems during validation of several intrinsic
>> functions,
>> because if they reference get_attr, they have to contain hacks and
>> special-cases to work around the validate-time None value (or, as reported
>> in the bug, fail to validate when all would be fine at runtime).
>>
>>
>> https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1333
>>
>> I started digging into fixes, and there are probably a few possible
>> approaches, e.g setting stack.Stack.strict_validate always to False, or
>> reworking the intrinsic function validation to always work with the
>> temporary None value.
>>
>> However, it's a more widespread issue than just validation - this affects
>> any action which happens before the actual stack gets created, so things
>> like preview updates are also broken, e.g consider this:
>>
>> resources:
>>   random:
>>     type: OS::Heat::RandomString
>>
>>   config:
>>     type: OS::Heat::StructuredConfig
>>     properties:
>>       group: script
>>       config:
>>         foo: {get_attr: [random, value]}
>>
>>   deployment:
>>     type: OS::Heat::StructuredDeployment
>>     properties:
>>       config:
>>         get_resource: config
>>       server: "dummy"
>>
>> On update, nothing is replaced, but if you do e.g:
>>
>>heat stack-update -x --dry-run
>>
>> You see this:
>>
>> | replaced  | config| OS::Heat::StructuredConfig |
>>
>> Which occurs due to the false comparison between the current value of
>> "random" and the None value we get from get_attr in the temporary stack
>> used for preview comparison:
>>
>> https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L528
>>
>> after_props.get(key) returns None, which makes us falsely declare the
>> "config" resource gets replaced :(
>>
>> I'm looking for ideas on how we solve this - it's clearly a major issue
>> which completely invalidates the results of validate and preview
>> operations
>> in many cases.
>
>
> I've been thinking about this (for about 2 years).
>
> My first thought (it seemed like a good idea at the time, 2 years ago, for
> some reason) was for Function objects themselves to take on the types of
> their return values, so e.g. a Function returning a list would have a
> __getitem__ method and generally act like a list. Don't try this at home,
> BTW, it doesn't work.
>
> I now think the right answer is to return some placeholder object (but not
> None). Then the validating code can detect the placeholder and do some
> checks. e.g. we would be able to say that the placeholder for get_resource
> on a Cinder volume would have type 'cinder.volume' and any property with a
> custom constraint would check that type to see if it matches (and fall back
> to accepting any text type if the placeholder doesn't have a type
> associated). get_param would get its type from the parameter schema
> (including any custom constraints). For get_attr we could make it part of
> the attribute schema.
>
> The hard part obviously would be getting this to work with deeply-nested
> trees of data and across nested stacks. We could probably get the easy parts
> going and incrementally improve from there though. Worst case we just return
> None and get the same behaviour as now.
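Zane's typed-placeholder idea might be sketched like this (hypothetical code, not Heat's implementation; the Placeholder class and type names are made up for illustration):

```python
class Placeholder(object):
    """Stands in for a value that is unknown until the stack exists."""

    def __init__(self, type_hint=None):
        # e.g. 'cinder.volume' for get_resource on a volume, or a
        # primitive type name from a parameter/attribute schema.
        self.type_hint = type_hint


TYPE_MAP = {'string': str, 'list': list, 'map': dict}


def validate_property(value, expected_type):
    if isinstance(value, Placeholder):
        # Check the hint when present; with no hint, fall back to
        # accepting anything, which degrades to today's None behaviour.
        return value.type_hint is None or value.type_hint == expected_type
    # Real values: check the concrete type; unknown custom constraints
    # accept any value, mirroring the "fall back" described above.
    return isinstance(value, TYPE_MAP.get(expected_type, object))
```

Validation could then reject a placeholder whose hint contradicts the property schema while still letting untyped placeholders through.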
>
> cheers,
> Zane.
>
>
> ___

[openstack-dev] [StoryBoard] A few new things StoryBoard does these days

2016-03-24 Thread Zara Zaimeche

Hi, all,

We've been working on StoryBoard 
(https://storyboard.openstack.org/#!/dashboard/stories) for a while, but 
haven't really updated the list; here's a roundup.


New things in storyboard over the last few months:

Notification:

* storyboard.openstack.org sends email notifications!

Finally! If you are subscribed to a project, project-group, or story, 
and it changes, you'll get an email. At the moment these are pretty 
basic, and off by default. To turn them on, log in (in the header), 
click 'profile' toward the bottom of the sidebar, and you should see a 
checkbox on the bottom left below 'preferences'. This is just an on-off 
switch; you will receive emails for all changes to a subscribed 
resource, or none. We plan to refine it further; feedback welcome for 
what changes people would like to see most! Thanks so much to everyone 
for helping to get this show on the road. :)


Dashboard:

* column for 'stories assigned to me'

Useful as a shortcut to see your planned work

* column for 'stories created by me'

Useful as a shortcut to see issue reports you've filed.

* subscriptions

Now you can see the list of things you're subscribed to, all in one 
place. These are accessible from the star-shaped button on the dashboard 
submenu on the sidebar (click 'dashboard' and the submenu should fold out)


Boards and Automatic Worklists:

*Manual worklists

These allow you to order tasks, by dragging cards up and down in the 
worklist. They allow for more finely grained personal priorities than 
the global 'priority' button. To make one, click 'create new...' in the 
header. Once created, you can edit to make it public or private, and 
choose who is able to move/delete items (or the worklist itself) by 
setting owners and users. Best to play around.


To see your current worklists, click 'dashboard' in the sidebar, and the 
second icon in the dashboard submenu (it looks like some horizontal 
lines, and is not great; patches welcome!)


*Boards

These are a display of several worklists on a page, which can visualise 
the progress of tasks. You can create due dates which can be shared 
across boards, and then apply them to items in a worklist in a board. 
It's all hard to explain in words, so here's a board:


https://storyboard.openstack.org/#!/board/1

Things in a board you have permissions for can be moved, deleted, 
reordered, etc. Boards are created and accessed from the same places as 
worklists. At the moment they only show manual worklists, and you can't 
import an existing worklist into a board.


* AUTOMATIC WORKLISTS

You can now also have an automatically updating worklist filtered by, 
say "all tasks in the project 'StoryBoard', that are in review" and so 
on. I suggest trying it out and seeing what's around (when creating a 
worklist, there's a checkbox for 'automatic'; check it and see what 
happens). These worklists cannot be reordered.


We want to make all boards and worklists more discoverable in the 
future; they're a bit tucked away at the moment. We're also working 
toward automatic worklists in boards...


* Task Notes

Now you can add notes to a task. This lets you say 'patch in review 
here', etc, a bit more easily than putting a long list of urls in the 
story description. There's a funny-looking button next to the task title 
on the story detail page; clicking it should allow you to add notes. 
It's white if there are no notes and black-on-red if there are some.


That was a very long email, sorry. We tend to be around in #storyboard 
on freenode, though it's a holiday weekend over here in the UK, so... I 
picked a bad time to send this! :D New contributors and patches always 
welcome; the codebase is a mix of python, angularjs, and html/css. Most 
of this is SotK's work, and I just yell about it so he gets the credit 
he deserves; he has done an amazing job; thanks, SotK! And thanks again 
to everyone who has been involved so far, you're the best!  If I've 
missed anything, please say. :)


Best Wishes and Happy Task-Tracking,

Zara

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Maintaining httplib2 python library

2016-03-24 Thread Matthew Treinish
On Mon, Mar 14, 2016 at 10:37:52AM -0400, Sean Dague wrote:
> On 03/14/2016 10:24 AM, Ian Cordasco wrote:
> >  
> > 
> > -Original Message-
> > From: Davanum Srinivas 
> > Reply: OpenStack Development Mailing List (not for usage questions) 
> > 
> > Date: March 14, 2016 at 09:18:50
> > To: OpenStack Development Mailing List (not for usage questions) 
> > 
> > Subject:  [openstack-dev] [all] Maintaining httplib2 python library
> > 
> >> Team,
> >>  
> >> fyi, http://bitworking.org/news/2016/03/an_update_on_httplib2
> >>  
> >> We have httplib2 in our global requirements and lots of projects are
> >> using it[1]. Is there anyone willing to step up?
> > 
> > Is it really worth our time to dedicate extra resources to that? Glance has 
> > been discussing (though it's been a low priority) switching all of our 
> > dependence on httplib2 to requests (and maybe urllib3 directly) as 
> > necessary.
> > 
> > We have other tools and libraries we can use without taking over 
> > maintenance of yet another library.
> > 
> > I think the better question than "Can people please maintain this for the 
> > community?" is "What benefits does httplib2 have over something that is 
> > actively maintained (and has been actively maintained) like urllib3, 
> > requests, etc.?"
> > 
> > And then we can (and should) also ask "Why have we been using this? How 
> > much work do cores think it would be to remove this from our global 
> > requirements?"
> 
> +1.
> 
> Here is the non comprehensive list of usages based on what trees I
> happen to have checked out (which is quite a few, but not all of
> OpenStack for sure).
> 
> I think before deciding to take over ownership of an upstream lib (which
> is a large commitment over space and time), we should figure out the
> migration cost. All the uses in Tempest come from usage in Glance IIRC
> (and dealing with chunked encoding).

No, that's actually the one use case that httplib2 isn't used for in tempest.
The code that deals with chunking, based on glanceclient, uses httplib. Tempest uses
httplib2 for all its API traffic (except when uploading or downloading images
from glance). That being said, Jordan Pittier has a WIP patch up switching the
tempest usage from httplib2 to urllib3:
tempest usage from httplib2 to urllib3:

https://review.openstack.org/#/c/295900/

> 
> Neutron seems to use it for a couple of proxies, but that seems like
> requests/urllib3 might be sufficient.
> 
> In Horizon it's only used for a couple of tests.
> 
> EC2 uses it as a proxy client to the Nova metadata service. Again, I
> can't imagine that requests wouldn't be sufficient.
> 
> Trove doesn't seem to actually use it (though it's listed), though maybe
> wsgi_intercept uses it directly?
> 
> run_tests.py:from wsgi_intercept.httplib2_intercept import install as
> wsgi_install
> 
> python-muranoclient lists it as a requirement, there is no reference in
> the source tree for it.
> 
> 
> I suspect Glance is really the lynchpin here (as it actually does some
> low level stuff with it). If there can be a Glance plan to get off of
> it, the rest can follow pretty easily.
> 
>   -Sean
> 


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deleting a cluster in Sahara SQL/PyMYSQL Error

2016-03-24 Thread Mike Bayer
I'd recommend filing a bug in Launchpad against Sahara for that. Can you 
reproduce it?




On 03/23/2016 07:10 PM, Jerico Revote wrote:

Hello,

When trying to delete a cluster in sahara,
I'm getting the following error:


code 500 and message 'Internal Server Error'
2016-03-23 17:25:21.651 18827 ERROR sahara.utils.api
[req-d797bbc8-7932-4187-a428-565f9d834f8b ] Traceback (most recent
call last):
OperationalError: (pymysql.err.OperationalError) (2014, 'Command Out
of Sync')
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
[req-377ef364-f2c7-4343-b32c-3741bfc0a05b ] DB exception wrapped.
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
Traceback (most recent call last):
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py",
line 1139, in _execute_context
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 context)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py",
line 450, in do_execute
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 cursor.execute(statement, parameters)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 132,
in execute
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 result = self._query(query)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 271,
in _query
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 conn.query(q)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
726, in query
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
861, in _read_query_result
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 result.read()
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
1064, in read
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 first_packet = self.connection._read_packet()
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
825, in _read_packet
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 packet = packet_type(self)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
242, in __init__
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 self._recv_packet(connection)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
248, in _recv_packet
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 packet_header = connection._read_bytes(4)
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line
839, in _read_bytes
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
 if len(data) < num_bytes:
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
TypeError: object of type 'NoneType' has no len()
2016-03-23 17:25:35.803 18823 ERROR oslo_db.sqlalchemy.exc_filters
2016-03-23 17:25:35.808 18823 ERROR sahara.utils.api
[req-377ef364-f2c7-4343-b32c-3741bfc0a05b ] Request aborted with
status code 500 and message 'Internal Server Error'
2016-03-23 17:25:35.809 18823 ERROR sahara.utils.api
[req-377ef364-f2c7-4343-b32c-3741bfc0a05b ] Traceback (most recent
call last):
OperationalError: (pymysql.err.OperationalError) (2014, 'Command Out
of Sync')


As a result, sahara clusters are stuck in the "Deleting" state.
Any idea what this could mean? Thanks


dpkg -l | grep -i sahara
ii  python-sahara1:3.0.0-0ubuntu1~cloud0
all  OpenStack data processing cluster as a service - library
ii sahara-api   1:3.0.0-0ubuntu1~cloud0
  all  OpenStack data processing cluster as a service - API
ii sahara-common1:3.0.0-0ubuntu1~cloud0
  all  OpenStack data processing cluster as a service - common
files


Regards,

Jerico





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [openstack-ansible] OpenVSwitch support

2016-03-24 Thread Curtis
Hi,

I'm in the process of building OPNFV style labs [1]. I'd prefer to
manage these labs with openstack-ansible, but I will need things like
OpenVSwitch.

I know there was talk of supporting OVS in some fashion [2] but I'm
wondering what the current status or thinking is. If it's desirable by
the community to add OpenVSwitch support, and potentially other OPNFV
related features, I have time to contribute to work on them (as best I
can, at any rate).

Let me know what you think,
Curtis.

[1]: https://www.opnfv.org/
[2]: https://etherpad.openstack.org/p/osa-neutron-dvr

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][all][ptl][stable] requirements updates in stable/mitaka libraries

2016-03-24 Thread Doug Hellmann
We have branched the openstack/requirements repository as the first
step to unfreezing requirements changes in master. Creating the
branch triggered the requirements update bot to propose changes to
repositories with stable/mitaka branches that were out of sync
relative to the point where we branched. Most of those repositories
are for libraries, but there are one or two server projects in the
list as well.

These updates are mostly changes we made to the requirements list
after it was frozen in order to address issues such as bad versions
of libraries, needed patch updates, etc. Some may also reflect
requirements updates that were not merged into the repositories
before their stable branches were created. We want to have the
stable branch requirements updates cleaned up to avoid the limbo
state we were in with liberty for so long, where no one was confident
of merging requirements changes in the stable branch.

We need project teams to review and land the patches, then prepare
release requests for the affected projects as soon as possible.
Please look through the list of patches [1] and give them a careful
but speedy review, then submit a change to openstack/releases for
the new release using a minimum-version level update (increment the
Y from X.Y.Z and reset Z to 0) reflecting the change in requirements.
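The versioning rule above (increment Y, reset Z) amounts to something like this hypothetical helper (not an OpenStack tool, just an illustration):

```python
def next_minor(version):
    """'Minimum-version level update': X.Y.Z -> X.(Y+1).0."""
    major, minor, _patch = (int(part) for part in version.split('.'))
    return '{0}.{1}.0'.format(major, minor + 1)
```

So a library last released at 2.3.1 whose only changes are requirements updates would be proposed to openstack/releases as 2.4.0.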

Thanks,
Doug

[1] 
https://review.openstack.org/#/q/is:open+branch:stable/mitaka+topic:openstack/requirements

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] OpenVSwitch support

2016-03-24 Thread Kevin Carter
I believe the OVS bits are being worked on, however I don't remember by whom 
and I don't know the current state of the work. Personally, I'd welcome the 
addition of other neutron plugin options and if you have time to work on any of 
those bits I'd be happy to help out where I can and review the PRs.

--

Kevin Carter
IRC: cloudnull



From: Curtis 
Sent: Thursday, March 24, 2016 10:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack-ansible] OpenVSwitch support

Hi,

I'm in the process of building OPNFV style labs [1]. I'd prefer to
manage these labs with openstack-ansible, but I will need things like
OpenVSwitch.

I know there was talk of supporting OVS in some fashion [2] but I'm
wondering what the current status or thinking is. If it's desirable by
the community to add OpenVSwitch support, and potentially other OPNFV
related features, I have time to contribute to work on them (as best I
can, at any rate).

Let me know what you think,
Curtis.

[1]: https://www.opnfv.org/
[2]: https://etherpad.openstack.org/p/osa-neutron-dvr

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] OpenVSwitch support

2016-03-24 Thread Truman, Travis
Michael Gugino and I were going to work on OVS and I believe we both got
sidetracked. I'll echo Kevin's offer to help out where possible, but at
the moment, I've been tied up with other items.

On 3/24/16, 11:55 AM, "Kevin Carter"  wrote:

>I believe the OVS bits are being worked on, however I don't remember by
>whom and I don't know the current state of the work. Personally, I'd
>welcome the addition of other neutron plugin options and if you have time
>to work on any of those bits I'd be happy to help out where I can and
>review the PRs.
>
>--
>
>Kevin Carter
>IRC: cloudnull
>
>
>
>From: Curtis 
>Sent: Thursday, March 24, 2016 10:33 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [openstack-ansible] OpenVSwitch support
>
>Hi,
>
>I'm in the process of building OPNFV style labs [1]. I'd prefer to
>manage these labs with openstack-ansible, but I will need things like
>OpenVSwitch.
>
>I know there was talk of supporting OVS in some fashion [2] but I'm
>wondering what the current status or thinking is. If it's desirable by
>the community to add OpenVSwitch support, and potentially other OPNFV
>related features, I have time to contribute to work on them (as best I
>can, at any rate).
>
>Let me know what you think,
>Curtis.
>
>[1]: https://www.opnfv.org/
>[2]: https://etherpad.openstack.org/p/osa-neutron-dvr
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Alan Pevec
> So, if Delorean includes packages built from untagged commits in

Nit clarification: let's call it RDO Trunk repository (Delorean is a
tool, recently renamed https://github.com/openstack-packages/dlrn )

> multiple branches, placed in the same package repository where an
> automated installation tool can't tell stable/mitaka from master,
> that would be bad. Please tell me that's not what happens?

It is not what happens; there are separate repositories for packages
built from master and from stable branches, but it is still confusing, and
I'm not sure why it is such a big deal to just push one empty commit with
Sem-Ver: api-break when we have that nice pbr automagic.
Previously, pre-versioning was used and the version had to be bumped
explicitly in setup.cfg when opening master for the new version, e.g.
https://review.openstack.org/#q,Ib634eb7acb64ff1d7be49852972295074b11557a,n,z
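The empty commit Alan mentions can be shaped like this (a sketch in a throwaway repo; in practice the commit would be proposed to the project's master branch, where pbr reads the "Sem-Ver:" footer when computing the next auto-generated version):

```shell
# Scratch repo just to show the commit shape.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "Open master for Newton

Sem-Ver: api-break"
git log -1 --format=%B
```

pbr understands several Sem-Ver footers (e.g. feature, bugfix); api-break is the one that forces the major-version bump discussed here.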

Another example (not release:managed project but still) where we have
this confusion is gnocchi:
openstack/gnocchi > master $ python ./setup.py --version
2.0.1.dev61
openstack/gnocchi > stable/2.0 $ python ./setup.py --version
2.0.3.dev13
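The confusing part is that, by dev-version ordering, master currently sorts *below* the stable branch. A simplified comparison (ignoring full PEP 440 rules):

```python
def parse_dev(version):
    # Split '2.0.1.dev61' into ((2, 0, 1), 61) for tuple comparison.
    release, dev = version.split('.dev')
    return tuple(int(part) for part in release.split('.')), int(dev)


# stable/2.0 looks "newer" than master until a Newton tag is pushed
assert parse_dev('2.0.3.dev13') > parse_dev('2.0.1.dev61')
```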

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-24 Thread Andrey Kurilin
Hi Zago,


On Fri, Mar 18, 2016 at 6:35 PM, Zaro  wrote:

> There is a new web and android interface that's being developed by
> guys at Google however it's not available on our version of Gerrit.  I
> think it probably won't be ready until Gerrit ver 2.13+
>
> web interface (polymer+gerrit = polygerrit):
> This interface is already very usable in Gerrit 2.12.  Currently it's
> only available on the gerrit-review.googlesource.com server.  If you
> like you can try it out by creating an account there and access this
> new UI is "https://gerrit-review.googlesource.com/?polygerrit";.  I'm
> sure they would welcome your feedback.
>
>
The polygerrit UI looks better, but it is still unusable from a mobile device
:(


> The android interface will be announced soon it's still in early
> development at this time.
>

What kind of interface will it be? An Android app, or a web-based application
that could potentially work for other devices too?


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Flavio Percoco

On 24/03/16 09:42 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2016-03-24 09:12:36 -0400:

On 23/03/16 16:27 -0400, Anita Kuno wrote:
>Bots are very handy for doing repetitive tasks, we agree on that.
>
>Bots also require permissions to execute certain actions, require
>maintenance to ensure they operate as expected and do create output
>which is music to some and noise to others. Said output is often
>archived somewhere, which requires additional decisions.
>
>This thread is intended to initiate a conversation about bots. So far we
>have seen developers want to use bots in Gerrit[0] and in IRC[1]. The
>conversation starts there but isn't limited to these tools if folks have
>usecases for other bots.
>
>I included an item on the infra meeting agenda for yesterday's meeting
>(March 22, 2016) and discovered there was enough interest[2] in a
>discussion to take it to the list, so here it is.
>
>So some items that have been raised thus far:
>- permissions: having a bot on gerrit with +2 +A is something we would
>like to avoid

To be honest, I wouldn't mind having a bot +2A on specific cases. An example
would be requirements syncs that have passed the gate, or translations. I
normally ninja-approve those and I don't really mind doing it but, I wouldn't
mind having those patches approved automatically since they don't really require
a review.


The problem with automating some of those things is we have to take
into account schedules, outages, gate issues, and other reasons
when we should *not* take automated actions. We can code some of
those things into the bots, but I expect there will always be special
cases.

To look at patches generated by bots (Zanata, requirements updates,
etc.), we purposefully have those bots introduce the patches because
producing them is a repetitive task, ripe for automation.  We do
not let the bots automatically approve those patches so we can
retain control over *when* changes land, because knowing when it's
OK to merge a change requires knowledge outside of what the bot
will have.

This is also why we've been working so hard to put together a
reviewable release process. Building the package is highly automatable.
Knowing when it's OK to build the package is not.


Agreed with all the above! Those are very good arguments for not auto-approving
patches in those cases. We had good control over these patches for Glance during
Mitaka: they were normally approved really quickly unless there were outages.

That said, I'm still not against the concept of having a bot +2A a patch for
very specific cases. I guess I just don't have a good example at hand now.

Flavio


Doug



Flavio

>- "unsanctioned" bots (bots not in infra config files) in channels
>shared by multiple teams (meeting channels, the -dev channel)
>- forming a dependence on bots and expecting infra to maintain them ex
>post facto (example: bot soren maintained until soren didn't)
>- causing irritation for others due to the presence of an echoing bot
>which eventually infra will be asked or expected to mediate
>- duplication of features, both meetbot and purplebot log channels and
>host the archives in different locations
>- canonical bot doesn't get maintained
>
>It is possible that the bots that infra currently maintains have
>features of which folks are unaware, so if someone was willing to spend
>some time communicating those features to folks who like bots we might
>be able to satisfy their needs with what infra currently operates.
>
>Please include your own thoughts on this topic, hopefully after some
>discussion we can aggregate on some policy/steps forward.
>
>Thank you,
>Anita.
>
>
>[0]
>http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-03-09.log.html#t2016-03-09T15:21:01
>[1]
>http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.html
>[2]
>http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.02.log.html
>timestamp 19:53
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] NUMA + SR-IOV

2016-03-24 Thread Sergey Nikitin
Hi, folks.

I want to start a discussion about NUMA + SR-IOV environments. I have a
two-socket server. It has two NUMA nodes and only one SR-IOV PCI device.
This device is associated with the first NUMA node. I booted a set of VMs
with SR-IOV support. Each of these VMs was booted on the first NUMA node.
As I understand it, this happens for better performance (a VM should be
booted on the NUMA node that has the PCI device assigned to it) [1].

But this behavior leaves my two-socket machines half-populated. What if I
don't care about SR-IOV performance? I just want every VM from *any* of the
NUMA nodes to use this single SR-IOV PCI device.

But I can't do that because of the behavior of numa_topology_filter. In this
filter we check whether the current host has the required PCI device [2]. But we
require this device to be in some NUMA cell on this host. That is hardcoded
here [3]. If we did *not* pass the variable "cells" to the method
support_requests() [4], we would boot the VM on the current host whenever it
has the required PCI device *on the host* (possibly not in the same NUMA node).

So my question is:
Is it correct that we *always* want to boot the VM in the NUMA node associated
with the requested PCI device, with no choice for the user?
Or should we give the user a choice and let them boot a VM with a PCI
device associated with another NUMA node?


[1]
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html
[2]
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/numa_topology_filter.py#L85
[3]
https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L1246-L1247
[4] https://github.com/openstack/nova/blob/master/nova/pci/stats.py#L277
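A simplified sketch of the behavior described above (hypothetical code, not Nova's actual support_requests()): the PCI check can be satisfied per NUMA cell, as hardcoded today, or host-wide, as the question proposes.

```python
def supports_requests(pools, requested, cells=None):
    """pools: NUMA cell id -> count of free PCI devices of the type.

    cells=None means "anywhere on the host"; a list restricts the
    check to those NUMA cells, which is what the filter hardcodes.
    """
    if cells is None:
        return sum(pools.values()) >= requested
    return sum(pools.get(cell, 0) for cell in cells) >= requested


pools = {0: 1, 1: 0}  # one SR-IOV VF, attached to NUMA node 0
assert not supports_requests(pools, 1, cells=[1])  # per-cell: node 1 fails
assert supports_requests(pools, 1)                 # host-wide: succeeds
```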
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Flavio Percoco

On 24/03/16 14:28 +, Amrith Kumar wrote:

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: Thursday, March 24, 2016 9:13 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

On 23/03/16 16:27 -0400, Anita Kuno wrote:
>Bots are very handy for doing repetitive tasks, we agree on that.
>
>Bots also require permissions to execute certain actions, require
>maintenance to ensure they operate as expected and do create output
>which is music to some and noise to others. Said output is often
>archived somewhere, which requires additional decisions.
>
>This thread is intended to initiate a conversation about bots. So far
>we have seen developers want to use bots in Gerrit[0] and in IRC[1].
>The conversation starts there but isn't limited to these tools if folks
>have usecases for other bots.
>
>I included an item on the infra meeting agenda for yesterday's meeting
>(March 22, 2016) and discovered there was enough interest[2] in a
>discussion to take it to the list, so here it is.
>
>So some items that have been raised thus far:
>- permissions: having a bot on gerrit with +2 +A is something we would
>like to avoid

To be honest, I wouldn't mind having a bot +2A on specific cases. An
example would be requirements syncs that have passed the gate, or
translations. I normally ninja-approve those and I don't really mind doing
it, but I wouldn't mind having those patches approved automatically since
they don't really require a review.



[amrith] I'm strongly against that in Trove (and I can see similar issues for
other projects that have guest images with python packages installed). I know
that +2A for requirement patches wasn't the thrust of this email thread, but I 
just want to point out that in cases like Trove where there are guest images 
with requirements of their own, the patches with requirement changes have very 
material and consequential impacts and often get -2'ed while we can resolve 
dependent issues. And when some things like this have slipped through the 
cracks, very very bad things have happened.


This is a very good point that I was not aware of. As mentioned in my email to
Doug, my point is that I'm not against the concept of having a bot approving
some patches. I do see your point on this one and perhaps requirements approvals
were not the best example. :)

Also, I do think gerrit bots (or any?) should be a per-project decision. There's
no one size fits all. "Quod Erat Demonstrandum" given Trove's case.

Flavio



>- "unsanctioned" bots (bots not in infra config files) in channels
>shared by multiple teams (meeting channels, the -dev channel)
>- forming a dependence on bots and expecting infra to maintain them ex
>post facto (example: bot soren maintained until soren didn't)
>- causing irritation for others due to the presence of an echoing bot
>which eventually infra will be asked or expected to mediate
>- duplication of features, both meetbot and purplebot log channels and
>host the archives in different locations
>- canonical bot doesn't get maintained
>
>It is possible that the bots that infra currently maintains have
>features of which folks are unaware, so if someone was willing to spend
>some time communicating those features to folks who like bots we might
>be able to satisfy their needs with what infra currently operates.
>
>Please include your own thoughts on this topic, hopefully after some
>discussion we can aggregate on some policy/steps forward.
>
>Thank you,
>Anita.
>
>
>[0]
>http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-
>infra.2016-03-09.log.html#t2016-03-09T15:21:01
>[1]
>http://lists.openstack.org/pipermail/openstack-dev/2016-March/089509.ht
>ml
>[2]
>http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.
>02.log.html
>timestamp 19:53
>

--
@flaper87
Flavio Percoco



--
@flaper87
Flavio Percoco




[openstack-dev] [nova] No spec approvals for new things until after the summit

2016-03-24 Thread Matt Riedemann
This was discussed in the nova meeting today [1] but I need to make 
everyone aware of it here too.


There will be no spec approvals for *new* things until after the summit.

We've been carrying a large backlog of specs/blueprints/priorities for 
several releases now and need to start flushing some of that out of the 
queue before taking on new debt.


So until after the summit (about the next 5 weeks), we're only 
re-approving previously approved specs from mitaka. We already have 21 
of those re-approved [2]. When you count specless blueprints, we already 
have 29 approved blueprints for newton [3]. Several of those are just 
mass cleanup efforts though.


This is our current list of open previously-approved specs [4]. I expect 
a few more to show up in there with glance v2 and volume multi-attach.


Anyway, the big point here is we have too much stuff in the backlog and 
we need to focus our attention at landing those things early in newton 
before adding more new specs to the backlog.


There might be some exceptional cases of things that were nearly 
approved but didn't land in mitaka due to nits, but have an obvious path 
to landing early in Newton - those can be discussed on a case-by-case 
basis (some of the scheduler work with resource providers and inventory 
falls into this category).


But for the most part, expect that new specs will not be approved (or 
reviewed really) until after the summit. People can still propose new 
specs, but don't expect them to get much attention right now.


As for what *new* things we'll approve after the summit or how long 
we'll have the window open for those, I don't know yet. We'll be 
discussing that at the summit. Part of it is going to depend on what we 
can get done in the next 4-5 weeks. I expect new specs related to 
existing priorities are going to get, well, priority.


"But what do I do while waiting for my spec to be approved?", you ask. 
Well, there are a few things:


* Help review the things that are approved so we can get the backlog to 
shrink, since that's the top issue right now.
* Help with bug triage (another huge backlog issue is unresolved bugs 
that need to be triaged again or potentially just closed now).

* Get POC code up for your spec.
* Help with some of the mass specless bp efforts, like config option 
cleanup, removing mox usage, etc.
* Help with some of the testing gap efforts, like live migration or 
latest libvirt+qemu.


I realize this isn't a popular stance, but the team really needs to 
focus on clearing out the backlog of things that we've said repeatedly, 
in some cases for several releases now, that we were going to make a 
priority and still haven't gotten done.


[1] 
http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-03-24-14.00.log.html

[2] http://specs.openstack.org/openstack/nova-specs/specs/newton/
[3] https://blueprints.launchpad.net/nova/newton
[4] 
https://review.openstack.org/#/q/project:openstack/nova-specs+status:open+previously-approved


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] NUMA + SR-IOV

2016-03-24 Thread Nikola Đipanov
On 03/24/2016 04:18 PM, Sergey Nikitin wrote:
> 
> Hi, folks.
> 
> I want to start a discussion about NUMA + SR-IOV environment. I have a
> two-sockets server. It has two NUMA nodes and only one SR-IOV PCI
> device. This device is associated with the first NUMA node. I booted a
> set of VMs with SR-IOV support. Each of these VMs was booted on the
> first NUMA node. As I understand it happened for better performance (VM
> should be booted in NUMA node which has PCI device for this VM) [1]. 
> 
> But this behavior leaves my 2-sockets machines half-populated. What if I
> don't care about SR-IOV performance? I just want every VM from *any* of
> NUMA nodes to use this single SR-IOV PCI device.
> 
> But I can't do it because of behavior of numa_topology_filter. In this
> filter we want to know if current host has required PCI device [2]. But
> we want to have this device *only* in some numa cell on this host. It is
> hardcoded here [3]. If we do *not* pass variable "cells" to the method
> support_requests() [4] we will boot VM on the current host, if it has
> required PCI device *on host* (maybe not in the same NUMA node). 
> 
> So my question is:
> Is it correct that we *always* want to boot VM in NUMA node associated
> with requested PCI device and user has no choice?
> Or should we give a choice to the user and let him boot a VM with PCI
> device, associated with another NUMA node?
> 

This has come up before, and the fact that it keeps coming up tells me
that we should probably do something about it.

Potentially it makes sense to be lax by default unless the user specifies
that they want to make sure that the device is on the same NUMA node,
but that is not backwards compatible.

It does not make sense to ask the user to specify that they don't care, IMHO:
unless you know there is a problem (and users have nowhere near
enough information to tell), there is no reason for you to specify it -
it's just not a sensible UI IMHO.

My 0.02 cents.

N.

> 
> [1]
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/input-output-based-numa-scheduling.html
> [2]
> https://github.com/openstack/nova/blob/master/nova/scheduler/filters/numa_topology_filter.py#L85
> [3]
> https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L1246-L1247
> [4] https://github.com/openstack/nova/blob/master/nova/pci/stats.py#L277




Re: [openstack-dev] [nova] nova-specs review tracking etherpad

2016-03-24 Thread Matt Riedemann



On 3/23/2016 4:14 PM, Matt Riedemann wrote:

I've started an etherpad [1] similar to what we had in mitaka. There are
some useful review links in the top for open reviews and fast-approve
re-proposals.

I'm also trying to keep a list of how many things we're re-approving
since that's our backlog from mitaka (and some further back). I'd like
to have that context so we can prioritize the specs for newton given
what we haven't yet landed from previous releases. One of the themes I'd
like to work on in newton is flushing the backlog before taking on new
work, at least for non-priority blueprints.

The etherpad is also trying to categorize things by sub-team (virt
drivers, other projects, etc). So as you come across things when
reviewing specs feel free to update that etherpad so we get an idea of
what we're doing for newton.

[1] https://etherpad.openstack.org/p/newton-nova-spec-review-tracking



We discussed this in the nova meeting today and decided it would be 
easier to keep the spec review tracking in the same etherpad as the 
other review priorities, so let's do that here:


https://etherpad.openstack.org/p/newton-nova-priorities-tracking

We're also going to flush out the subteam sections in there so we start 
with a clean slate for newton. Subteams should start filling those back 
in. The idea is anything that was just cruft from mitaka will be cleaned 
out and we can start fresh. You can also get to the old etherpad if you 
need to.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [openstack-ansible] OpenVSwitch support

2016-03-24 Thread Curtis
OK thanks everyone. I will put on my learning hat and start digging into this.

Thanks,
Curtis.

On Thu, Mar 24, 2016 at 10:07 AM, Truman, Travis
 wrote:
> Michael Gugino and I were going to work on OVS and I believe we both got
> sidetracked. I'll echo Kevin's offer to help out where possible, but at
> the moment, I've been tied up with other items.
>
> On 3/24/16, 11:55 AM, "Kevin Carter"  wrote:
>
>>I believe the OVS bits are being worked on, however I don't remember by
>>whom and I don't know the current state of the work. Personally, I'd
>>welcome the addition of other neutron plugin options and if you have time
>>to work on any of those bits I'd be happy to help out where I can and
>>review the PRs.
>>
>>--
>>
>>Kevin Carter
>>IRC: cloudnull
>>
>>
>>
>>From: Curtis 
>>Sent: Thursday, March 24, 2016 10:33 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: [openstack-dev] [openstack-ansible] OpenVSwitch support
>>
>>Hi,
>>
>>I'm in the process of building OPNFV style labs [1]. I'd prefer to
>>manage these labs with openstack-ansible, but I will need things like
>>OpenVSwitch.
>>
>>I know there was talk of supporting OVS in some fashion [2] but I'm
>>wondering what the current status or thinking is. If it's desirable by
>>the community to add OpenVSwitch support, and potentially other OPNFV
>>related features, I have time to contribute to work on them (as best I
>>can, at any rate).
>>
>>Let me know what you think,
>>Curtis.
>>
>>[1]: https://www.opnfv.org/
>>[2]: https://etherpad.openstack.org/p/osa-neutron-dvr
>>
>>
>
>



-- 
Blog: serverascode.com



Re: [openstack-dev] [nova] NUMA + SR-IOV

2016-03-24 Thread Czesnowicz, Przemyslaw


> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: Thursday, March 24, 2016 4:34 PM
> To: Sergey Nikitin ; OpenStack Development Mailing
> List (not for usage questions) 
> Cc: Czesnowicz, Przemyslaw 
> Subject: Re: [openstack-dev] [nova] NUMA + SR-IOV
> 
> On 03/24/2016 04:18 PM, Sergey Nikitin wrote:
> >
> > Hi, folks.
> >
> > I want to start a discussion about NUMA + SR-IOV environment. I have a
> > two-sockets server. It has two NUMA nodes and only one SR-IOV PCI
> > device. This device is associated with the first NUMA node. I booted a
> > set of VMs with SR-IOV support. Each of these VMs was booted on the
> > first NUMA node. As I understand it happened for better performance
> > (VM should be booted in NUMA node which has PCI device for this VM)
> [1].
> >
> > But this behavior leaves my 2-sockets machines half-populated. What if
> > I don't care about SR-IOV performance? I just want every VM from *any*
> > of NUMA nodes to use this single SR-IOV PCI device.
> >
> > But I can't do it because of behavior of numa_topology_filter. In this
> > filter we want to know if current host has required PCI device [2].
> > But we want to have this device *only* in some numa cell on this host.
> > It is hardcoded here [3]. If we do *not* pass variable "cells" to the
> > method
> > support_requests() [4] we will boot VM on the current host, if it has
> > required PCI device *on host* (maybe not in the same NUMA node).
> >
> > So my question is:
> > Is it correct that we *always* want to boot VM in NUMA node associated
> > with requested PCI device and user has no choice?
> > Or should we give a choice to the user and let him boot a VM with PCI
> > device, associated with another NUMA node?
> >

The rationale for choosing this behavior was that if you require a NUMA
topology for your VM and you request an SR-IOV device as well, then this is a
high-performance application and it should be configured appropriately.

Similarly, if you request hugepages your VM will be confined to one NUMA node
(unless specified otherwise), and if there is no single NUMA node with enough
resources it won't be created.
 

> 
> This has come up before, and the fact that it keeps coming up tells me that
> we should probably do something about it.
> 
> Potentially it makes sense to be lax by default unless user specifies that 
> they
> want to make sure that the device is on the same NUMA node, but that is
> not backwards compatible.
> 
> It does not make sense to ask user to specify that they don't care IMHO, as
> unless you know there is a problem (and users have nowhere near enough
> information to tell), there is no reason for you to specify it - it's just not
> sensible UI IMHO.
> 

Yes, this did come up a few times; having a way to specify the requirement is
probably a good idea.
If it were done the way you propose, that would change the behavior for
existing users; I'm not sure how big a problem that is.

Przemek

> My 0.02 cents.


 


> 
> N.
> 
> >
> > [1]
> > https://specs.openstack.org/openstack/nova-
> specs/specs/kilo/implemente
> > d/input-output-based-numa-scheduling.html
> > [2]
> >
> https://github.com/openstack/nova/blob/master/nova/scheduler/filters/n
> > uma_topology_filter.py#L85
> > [3]
> >
> https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L
> 1
> > 246-L1247 [4]
> > https://github.com/openstack/nova/blob/master/nova/pci/stats.py#L277



[openstack-dev] [Nova] Mitaka RC2 available

2016-03-24 Thread Thierry Carrez

Due to release-critical issues spotted in Nova during RC1 testing, a
new release candidate was created for Mitaka. You can find the RC2 
source code tarball at:


https://tarballs.openstack.org/nova/nova-13.0.0.0rc2.tar.gz

Unless new release-critical issues are found that warrant a last-minute
release candidate respin, this tarball will be formally released as the
final "Mitaka" version on April 7th. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the mitaka release branch at:
http://git.openstack.org/cgit/openstack/nova/log/?h=stable/mitaka

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug

and tag it *mitaka-rc-potential* to bring it to the Nova release crew's
attention.

--
Thierry Carrez (ttx)



[openstack-dev] [cinder][openstackclient] Required name option for volumes, snapshots and backups

2016-03-24 Thread Ivan Kolodyazhny
Hi team,

From the Cinder point of view, the volume, snapshot, and backup APIs do
not require the name param. But python-openstackclient requires the name param
for these entities.

I'm going to fix this inconsistency with patch [1]. Unfortunately, it's a
bit more than changing required params to not required: we have to change
the CLI signatures. E.g. for creating a volume: from [2] to [3].

Is it acceptable? What is the right way to do such changes for OpenStack
Client?


[1] https://review.openstack.org/#/c/294146/
[2] http://paste.openstack.org/show/491771/
[3] http://paste.openstack.org/show/491772/
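The kind of signature change involved can be sketched with argparse — a hypothetical illustration of turning a required positional into an optional one, not python-openstackclient's actual code:

```python
# Illustrative sketch (not python-openstackclient's real code) of
# relaxing a required positional <name> into an optional one.
import argparse

def build_parser(name_required):
    parser = argparse.ArgumentParser(prog="volume create")
    if name_required:
        parser.add_argument("name")             # old: name is mandatory
    else:
        parser.add_argument("name", nargs="?")  # new: name may be omitted
    parser.add_argument("--size", type=int, required=True)
    return parser

# With the relaxed signature, "volume create --size 1" parses cleanly
# and name defaults to None, matching what the Cinder API accepts.
args = build_parser(name_required=False).parse_args(["--size", "1"])
print(args.name, args.size)  # None 1
```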

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


Re: [openstack-dev] [containers][horizon][magnum-ui] - Stable version for Liberty?

2016-03-24 Thread Adrian Otto
Marcos,

Great question. The current intent is to backport security fixes and critical
bugs, and to focus on master for new feature development. Although we would
love to expand the scope to backport functionality, I'm not sure that's
realistic without an increased level of commitment from that group of
contributors. With that said, I am willing to approve backporting basic
features to previous stable branches on a case-by-case basis.

Adrian

On Mar 24, 2016, at 6:55 AM, Marcos Fermin Lobo
<marcos.fermin.l...@cern.ch> wrote:

Hi all,

I have a question about the magnum-ui plugin for Horizon. I see that there is a
tarball for the stable/liberty version, but it is very simple: just index
views, without any "create" actions.

But I see a lot of work in the master branch for this project, and it is not
compatible with Horizon Liberty. My question to the people in charge of this
project is: when the code is stable, do you plan to backport all the
functionality to the Liberty version, or just target Mitaka?

Thank you.

Regards,
Marcos.



Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Chris Dent

On Thu, 24 Mar 2016, Alan Pevec wrote:


Another example (not release:managed project but still) where we have
this confusion is gnocchi:
openstack/gnocchi > master $ python ./setup.py --version
2.0.1.dev61
openstack/gnocchi > stable/2.0 $ python ./setup.py --version
2.0.3.dev13


Sorry, I've never understood this feature of pbr (pre- and post-versioning,
is it?). Don't these problems go away if everyone just steps up and uses
explicit version numbers in the projects (in __version__)[1]?

No guessing required. No weird magic in pbr that few people
understand.

If some kind of automatic control and validation is required, do it
during the explicit tagging stage?

What is the goal of pre versioning?

[1] The fact that I can't look in the code for __version__ gives me
rage face.

--
Chris Dent   (╯°□°)╯︵┻━┻    http://anticdent.org/
freenode: cdent                         tw: @anticdent


Re: [openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Samuel Cassiba
On Thu, Mar 24, 2016 at 3:33 AM, Anastasia Kravets 
wrote:

> Hi, team!
>
> If you remember, we've created a cookbook for the ec2-api service. After the
> last discussion I refactored it and added specs.
> The final version is located on the cloudscaling GitHub:
> https://github.com/cloudscaling/cookbook-openstack-ec2.
> How do we proceed to integrate our cookbook into your project?
>
> Regards,
> Anastasia
>
>
>


Hi Anastasia,

That's great news! We'll have to go through the process of getting a new
repo added under our project. Would you be able to attend Monday's meeting
to discuss it further?

Thanks,

Samuel


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Doug Hellmann
Excerpts from Alan Pevec's message of 2016-03-24 17:06:54 +0100:
> > So, if Delorean includes packages built from untagged commits in
> 
> Nit clarification: let's call it RDO Trunk repository (Delorean is a
> tool, recently renamed https://github.com/openstack-packages/dlrn )

OK.

> 
> > multiple branches, placed in the same package repository where an
> > automated installation tool can't tell stable/mitaka from master,
> > that would be bad. Please tell me that's not what happens?
> 
> It is not what happens, there are separate repositories for packages
> built from master and stable branches

Good.

> but it is still confusing, and
> I'm not sure why it is such a big deal to just push one empty commit with
> Sem-Ver: api-break when we have that nice PBR automagic?

There are a couple of reasons, mostly having to do with the fact
that we're trying to reduce the amount of work it takes to handle
releases so we can start encouraging more projects to do them more
often.

First, every extra step that we have to take during the release
process is another opportunity for it to go wrong, especially
if we have to do them in a particular sequence. We have, this cycle,
mostly eliminated the in-tree sequential operations. There are still
a few gotchas in the steps the release team actually goes through,
but the negative side-effects of getting those wrong (or in the
wrong order) is mostly small.

As you point out below, we used to require a patch immediately after
a branch to change the version numbering from pre-versioning (declared
in setup.cfg) to post-versioning (declared using tags). That required
a higher level of coordination, even with a small number of teams
very engaged in the release process. We now have too many teams,
releasing too many deliverables, with too few people who understand
how we manage releases. We mostly have everyone up to speed on
SemVer and requesting releases, but not quite.

The second reason is that there's not necessarily an API break at
every release cycle, and not all projects increment their major
version number just because of the cycle. So we would actually end
up doing something different for each project, depending on how
they treat release boundaries, and that's yet another thing to keep
track of, review carefully, and educate folks about when preparing
the release and stable branches.

> Previously pre-versioning was used and version had to be bumped
> explicitly in setup.cfg when opening master for the new version e.g.
> https://review.openstack.org/#q,Ib634eb7acb64ff1d7be49852972295074b11557a,n,z
> 
> Another example (not release:managed project but still) where we have
> this confusion is gnocchi:
> openstack/gnocchi > master $ python ./setup.py --version
> 2.0.1.dev61
> openstack/gnocchi > stable/2.0 $ python ./setup.py --version
> 2.0.3.dev13

Let's turn the question around: Why do you (or anyone) want to
package things that are not tagged as releasable by the contributors
creating them? What are those packages used for?

Doug

> 
> Cheers,
> Alan
> 



Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-03-24 17:25:33 +:
> On Thu, 24 Mar 2016, Alan Pevec wrote:
> 
> > Another example (not release:managed project but still) where we have
> > this confusion is gnocchi:
> > openstack/gnocchi > master $ python ./setup.py --version
> > 2.0.1.dev61
> > openstack/gnocchi > stable/2.0 $ python ./setup.py --version
> > 2.0.3.dev13
> 
> Sorry, I've never understood this feature of pbr (pre and post versioning
> is it?). Don't these problems go away if everyone just steps up and uses
> explicit version numbers in the projects (in __version__)[1].
> 
> No guessing required. No weird magic in pbr that few people
> understand.
> 
> If some kind of automatic control and validation is required, do it
> during the explicit tagging stage?
> 
> What is the goal of pre versioning?

Pre-versioning is what you're describing, declaring the version number
statically before releasing something. We use post-versioning, which is
to tag something that already exists to give it a version number (more
details in http://docs.openstack.org/developer/pbr/#version if you
care).

One goal is to have a single place where the version is declared,
and then to use that to derive the metadata that goes into the
distributable artifact created from the inputs. That's possible
with pre-versioning, by placing the version in setup.cfg, except
that you have to predict in advance what the right version number
is in order to get SemVer correct, and that requires managing what
types of patches you land in what order. For example, if you
pre-declare a bug-fix release X.Y.1 and then land a feature, your
version number is no longer correct and also needs to be updated.

That conflict leads us to things like updating the version number
when patches are submitted, which in turn leads to merge conflicts.
This issue was the source of the keyword-based bumping feature of
pbr, but using that also requires contributors to know about it and
remember to apply it and reviewers to remember to require it. It's
much less of a burden on the average contributor for us to just use
post-versioning and review what is being released and the tag being
applied to make sure the version is correct. That's what we've done
with the release automation this cycle.
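As a rough sketch of the mechanism (pbr's real logic handles more cases), the dev version is derived from the most recent git tag plus the number of commits since it — shown here in a throwaway repository:

```shell
# Create a throwaway repo to show the raw data post-versioning uses.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag 2.0.0
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "a bug fix"
git describe --tags   # prints 2.0.0-1-g<sha>: one commit past tag 2.0.0
# pbr derives 2.0.1.dev1 from this (next patch release, 1 dev commit);
# a commit whose message carries a "Sem-Ver: api-break" footer would
# make it compute 3.0.0.dev1 instead.
```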

> 
> [1] The fact that I can't look in the code for __version__ gives me
> rage face.

Having a __version__ inside the package actually has no bearing on
what version the packaging system thinks is associated with the
dist. The name is just a convention some projects have used. Rather
than hard-coding a value, it's better to use pkg_resources to ask
for the version of the current package.
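For example (assuming setuptools, which provides pkg_resources, is installed):

```python
# Ask the packaging metadata for an installed distribution's version
# instead of relying on a hard-coded __version__ attribute.
import pkg_resources

version = pkg_resources.get_distribution("setuptools").version
print(version)  # whatever setuptools version is installed
```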

Doug



Re: [openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Ronald Bradford
Samuel,

Could you detail when your IRC meeting is? [1] indicates it's on Tuesdays.

[1] https://wiki.openstack.org/wiki/Meetings/EC2API

Regards

Ronald

On Thu, Mar 24, 2016 at 1:36 PM, Samuel Cassiba  wrote:

> On Thu, Mar 24, 2016 at 3:33 AM, Anastasia Kravets 
> wrote:
>
>> Hi, team!
>>
>> If you remember, we've created a cookbook for ec2-api service. After last
>> discussion I’ve refactored it, have added specs.
>> The final version is located on cloudscaling github:
>> https://github.com/cloudscaling/cookbook-openstack-ec2.
>> How do we proceed to integrate our cookbook to your project?
>>
>> Regards,
>> Anastasia
>>
>>
>>
>
>
> Hi Anastasia,
>
> That's great news! We'll have to go through the process of getting a new
> repo added under our project. Would you be able to attend Monday's meeting
> to discuss it further?
>
> Thanks,
>
> Samuel
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Chris Dent

On Thu, 24 Mar 2016, Doug Hellmann wrote:

[good explanation snipped]

Thanks for the detailed explanation.

As with so many of these bits of automation they come with a variety
of compromises. After a while it starts to seem like maintaining
those compromises becomes as important as solving the original goal.


[1] The fact that I can't look in the code for __version__ gives me
rage face.


Having a __version__ inside the package actually has no bearing on
what version the packaging system thinks is associated with the
dist. The name is just a convention some projects have used. Rather
than hard-coding a value, it's better to use pkg_resources to ask
for the version of the current package.


Yeah, I know. I guess I have become accustomed to __version__ as the
canonical source of version authority (used by tooling) because...hrmmm
what's the best way to put this...it's clear, it's just _there_.

--
Chris Dent  (╯°□°)╯︵┻━┻  http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Congress] Push Type Driver implementation

2016-03-24 Thread Tim Hinrichs
I tried the doctor driver again.  This time I was successful!  I'm still
getting an error when listing the datasources, though.  I tried updating and
reinstalling my client, but no change.

// Create the datasource
$ openstack congress datasource create doctor doctor
+-+--+
| Field   | Value|
+-+--+
| config  | None |
| description | None |
| driver  | doctor   |
| enabled | True |
| id  | 3717095c-25a7-4fe2-8f18-25d845b11c60 |
| name| doctor   |
| type| None |
+-+--+

// Push data
$ curl -g -i -X PUT
http://localhost:1789/v1/data-sources/3717095c-25a7-4fe2-8f18-25d845b11c60/tables/events/rows
-H "User-Agent: python-congressclient" -H "Content-Type: application/json"
-H "Accept: application/json" -d  '[
>   {
> "id": "0123-4567-89ab",
> "time": "2016-02-22T11:48:55Z",
> "type": "compute.host.down",
> "details": {
> "hostname": "compute1",
> "status": "down",
> "monitor": "zabbix1",
> "monitor_event_id": "111"
> }
>   }
> ]'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 0
X-Openstack-Request-Id: req-47c6dfdf-74cd-4101-829a-657b6aea1e2c
Date: Thu, 24 Mar 2016 18:28:31 GMT


// Ask for contents of table that we pushed
$ openstack congress datasource row list doctor events
+----------------+----------------------+-------------------+----------+--------+---------+------------------+
| id             | time                 | type              | hostname | status | monitor | monitor_event_id |
+----------------+----------------------+-------------------+----------+--------+---------+------------------+
| 0123-4567-89ab | 2016-02-22T11:48:55Z | compute.host.down | compute1 | down   | zabbix1 | 111              |
+----------------+----------------------+-------------------+----------+--------+---------+------------------+


// List the datasources
$ openstack congress datasource list
'NoneType' object has no attribute 'items'

Tim


On Thu, Mar 17, 2016 at 5:56 PM Tim Hinrichs  wrote:

> I tried the doctor driver out.  I just added the file to
> congress/datasources, and set up /etc/congress/congress.conf to include
> congress.datasources.doctor_driver.DoctorDriver.
>
> I could create a new doctor driver, but afterwards I couldn't list all the
> datasources, and I couldn't push any data to it.  See transcript below.
>
> $ openstack congress datasource create doctor doctor
> +-+--+
> | Field   | Value|
> +-+--+
> | config  | None |
> | description | None |
> | driver  | doctor   |
> | enabled | True |
> | id  | 906c6327-15f1-4f3c-aa51-1590540c06b9 |
> | name| doctor   |
> | type| None |
> +-+--+
>
> $ openstack congress datasource list
> 'NoneType' object has no attribute 'items'
>
> The other problem I saw was that the schema was fixed for the doctor
> driver.  So I tried to create a push driver that would accept any
> collection of tuples.  This wouldn't allow the user to push arbitrary JSON,
> but they could push any tuples they'd like.  While experimenting, I fixed
> the problem mentioned above by adding a single (unnecessary) configuration
> option.  Then I ran into a Datasource not found problem.  I pushed the code
> to review so we can all take a look.
>
> https://review.openstack.org/294348
>
> $ curl -g -i -X PUT
> http://localhost:1789/v1/data-sources/push/tables/data/rows -H
> "User-Agent: python-congressclient" -H "Content-Type: application/json" -H
> "Accept: application/json" -d '[[1]]'
> HTTP/1.1 404 Not Found
> Content-Type: application/json; charset=UTF-8
> Content-Length: 102
> X-Openstack-Request-Id: req-23432974-b107-4657-9bbc-c2e05fd25a98
> Date: Thu, 17 Mar 2016 21:13:03 GMT
>
> {"error": {"message": "Not Found::Datasource not found push",
> "error_data": null, "error_code": 404}}
>
> Masahito: do you know what the Datasource Not Found problem is? If not,
> could you look into it? I ran into it with the Doctor Driver too.
>
> Tim
>
>
> On Thu, Mar 17, 2016 at 2:31 AM Masahito MUROI <
> muroi.masah...@lab.ntt.co.jp> wrote:
>
>> Hi folks,
>>
>> This[1] is the driver I mentioned at meeting. It is used for OPNFV
>> D

Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-03-24 18:25:10 +:
> On Thu, 24 Mar 2016, Doug Hellmann wrote:
> 
> [good explanation snipped]
> 
> Thanks for the detailed explanation.
> 
> As with so many of these bits of automation they come with a variety
> of compromises. After a while it starts to seem like maintaining
> those compromises becomes as important as solving the original goal.

Sure, but I hope we haven't reached that point yet, only one cycle
into a multi-cycle effort to make the releases easier. :-)

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Samuel Cassiba
On Thu, Mar 24, 2016 at 11:25 AM, Ronald Bradford 
wrote:

> Samuel,
>
> Could you detail when your IRC meeting is, [1] indicates it's on Tuesdays.
>
> [1] https://wiki.openstack.org/wiki/Meetings/EC2API
>
> Regards
>
> Ronald
>

> On Thu, Mar 24, 2016 at 1:36 PM, Samuel Cassiba  wrote:
>
>> On Thu, Mar 24, 2016 at 3:33 AM, Anastasia Kravets 
>> wrote:
>>
>>> Hi, team!
>>>
>>> If you remember, we've created a cookbook for the ec2-api service. After
>>> the last discussion I’ve refactored it and added specs.
>>> The final version is located on cloudscaling github:
>>> https://github.com/cloudscaling/cookbook-openstack-ec2.
>>> How do we proceed to integrate our cookbook to your project?
>>>
>>> Regards,
>>> Anastasia
>>>
>>>
>>>
>>>
>>
>>
>> Hi Anastasia,
>>
>> That's great news! We'll have to go through the process of getting a new
>> repo added under our project. Would you be able to attend Monday's meeting
>> to discuss it further?
>>
>> Thanks,
>>
>> Samuel
>>

Hi Ronald,

Pardon the confusion. I was referring to the Chef OpenStack meeting[1],
which takes place on Mondays.

[1] https://wiki.openstack.org/wiki/Meetings/ChefCookbook

Thanks,

Samuel


[openstack-dev] [QA] Request for design session ideas of Austin Summit

2016-03-24 Thread Ken'ichi Ohmichi
Hi

We have a Design Summit next month, and now we are trying to get ideas
for QA sessions.
There is an etherpad for ideas and it is great if writing your ideas
on the etherpad:

https://etherpad.openstack.org/p/newton-qa-summit-topics

After getting ideas, we will arrange them into available slots for QA sessions.

Thanks in advance and see you in Austin :-)
Ken Ohmichi



Re: [openstack-dev] [chef] ec2-api cookbook

2016-03-24 Thread Anastasia Kravets
Yes, I’ll try my best to join you this Monday!

Thanks,
Anastasia

> On 24 Mar 2016, at 20:36, Samuel Cassiba  wrote:
> 
> On Thu, Mar 24, 2016 at 3:33 AM, Anastasia Kravets  > wrote:
> Hi, team!
> 
> If you remember, we've created a cookbook for the ec2-api service. After the
> last discussion I’ve refactored it and added specs.
> The final version is located on cloudscaling github: 
> https://github.com/cloudscaling/cookbook-openstack-ec2 
> .
> How do we proceed to integrate our cookbook to your project?
> 
> Regards,
> Anastasia
> 
> 
> 
> 
> Hi Anastasia,
> 
> That's great news! We'll have to go through the process of getting a new repo 
> added under our project. Would you be able to attend Monday's meeting to 
> discuss it further?
> 
> Thanks,
> 
> Samuel


[openstack-dev] [ironic] Newton is open!

2016-03-24 Thread Jim Rollenhagen
Hi Ironickers,

We've successfully released ironic and all of its subprojects, and cut
the stable/mitaka branch for those.

Newton is now open for development!

As a reminder, please propose sessions for the Austin summit here:
https://etherpad.openstack.org/p/ironic-newton-summit

And I've started brain dumping things we may want to work on during
Newton here, with the hopes of trimming it down and creating a list of
priorities. Feel free to contribute, but note that this list is already
more work than we can take on. :)
https://etherpad.openstack.org/p/ironic-newton-summit

// jim



Re: [openstack-dev] [ironic] Newton is open!

2016-03-24 Thread Jim Rollenhagen
On Thu, Mar 24, 2016 at 12:03:34PM -0700, Jim Rollenhagen wrote:
> Hi Ironickers,
> 
> We've successfully released ironic and all of its subprojects, and cut
> the stable/mitaka branch for those.
> 
> Newton is now open for development!
> 
> As a reminder, please propose sessions for the Austin summit here:
> https://etherpad.openstack.org/p/ironic-newton-summit
> 
> And I've started brain dumping things we may want to work on during
> Newton here, with the hopes of trimming it down and creating a list of
> priorities. Feel free to contribute, but note that this list is already
> more work than we can take on. :)
> https://etherpad.openstack.org/p/ironic-newton-summit

Correction: https://etherpad.openstack.org/p/ironic-newton-priorities

> // jim
> 


[openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-24 Thread Jim Rollenhagen
Hey all,

I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She runs
the Bifrost project, gives super valuable reviews, is beginning to lead
the boot from volume efforts, and is clearly an expert in this space.

All in favor say +1 :)

// jim



Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Robert Collins
On 25 March 2016 at 01:11, Alan Pevec  wrote:
> 2016-03-24 2:21 GMT+01:00 Robert Collins :
>> Trunk will rapidly exceed mitaka's versions, leading to no confusion too.
>
> That's the case now, RC1 tags are reachable from both branches and
> master has more patches, generating higher .devN part. But once RC2
> and final tags are pushed, generated version will be higher on
> stable/mitaka branch:
> >>> from packaging.version import Version, parse
> >>> rc2 = Version("13.0.0.0rc2")
> >>> master = Version("13.0.0.0rc2.dev9")
> >>> master > rc2
> False
> >>> ga = Version("13.0.0")
> >>> master > ga
> False

Those versions are not the versions that pbr will generate.

mitaka gets backports and local requirements changes only, so it will
get less commits than master.

Say mitaka gets 10 commits, and master 50.

mitaka  13.0.1.dev10
master 13.0.1.dev50

As soon as someone pushes a commit that adds a feature (sem-ver:
feature), master will bump to 13.1.0.devN
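[Editor's sketch, not from the thread] The ordering Robert describes can be checked with the third-party packaging library (the same one used earlier in the thread); the .devN counts are the hypothetical ones from his example:

```python
# Hedged sketch: PEP 440 ordering of the dev versions pbr would generate
# in Robert's example. Requires the third-party "packaging" library;
# version numbers are hypothetical.
from packaging.version import Version

mitaka = Version("13.0.1.dev10")   # stable/mitaka after 10 commits
master = Version("13.0.1.dev50")   # master after 50 commits
bumped = Version("13.1.0.dev3")    # master after a "Sem-Ver: feature" commit

# Dev counts order within a release, and the feature bump outranks both:
assert mitaka < master < bumped
```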

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Ian Cordasco
 

-Original Message-
From: Robert Collins 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 23, 2016 at 20:20:30
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [release] [pbr] semver on master branches after 
RC WAS Re: How do I calculate the semantic version prior to a release?

> On 24 March 2016 at 10:36, Ian Cordasco wrote:
> >
> >
>  
> > The project will build wheels first. The wheels generated tend to look
> > something like 13.0.0.0rc2.dev10 when they're built because of pbr.
> >
> > If someone is doing CD with the openstack-ansible project and they deploy
> > mitaka once it has a final tag, then they decide to upgrade to run master,
> > they could run into problems upgrading. That said, I think my team is the
> > only team doing this. (Or at least, none of the other active members of
> > the IRC channel talk about doing this.) So it might not be anything more
> > than a "nice to have" especially since no one else from the project has
> > chimed in.
>  
> So when we discussed this in Tokyo, we had the view that folk *either*
> run master -> master, or they run release->release, or rarely
> release->alpha-or-beta.
>  
> We didn't think it made sense that folk would start with a stable use
> case and evolve that into an unstable one, *particularly* early in the
> unstable cycle.
>  
> So - if there's one team doing it, I think its keep-both-pieces time.
>  
> If its going to be more common than that, we can do the legwork to
> make tags early on every time. But I don't think we've got supporting
> evidence that its more than you so far :).

Right, I'm not convinced it's more than the ~15 or 20 of us who work on OSA
and care about upgrading and catching regressions early.

--  
Ian Cordasco




Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Robert Collins
On 25 March 2016 at 07:25, Chris Dent  wrote:
> On Thu, 24 Mar 2016, Doug Hellmann wrote:
>
> [good explanation snipped]
>
> Thanks for the detailed explanation.
>
> As with so many of these bits of automation they come with a variety
> of compromises. After a while it starts to seem like maintaining
> those compromises becomes as important as solving the original goal.
>
>>> [1] The fact that I can't look in the code for __version__ gives me
>>> rage face.
>>
>>
>> Having a __version__ inside the package actually has no bearing on
>> what version the packaging system thinks is associated with the
>> dist. The name is just a convention some projects have used. Rather
>> than hard-coding a value, it's better to use pkg_resources to ask
>> for the version of the current package.
>
>
> Yeah, I know. I guess I have become accustomed to __version__ as the
> canonical source of version authority (used by tooling) because...hrmmm
> what's the best way to put this...it's clear, it's just _there_.

There's a stock recipe for pbr-using projects that lets you have a
https://www.python.org/dev/peps/pep-0396/ compliant __version__ (even
though that PEP is deferred :)) which will use either pkg_resources
metadata, or PEP 345 metadata, or git history, depending on what's
available. That's entirely suitable for querying via automation (import
the module, consult mod.__version__). Or you can alternatively query
the standard (https://www.python.org/dev/peps/pep-0345/) metadata
generated from python setup.py egg_info, if you prefer.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Robert Collins
On 25 March 2016 at 08:23, Ian Cordasco  wrote:
>

> Right, I'm not convinced it's more than the ~15 or 20 of us who work on OSA
> and care about upgrading and catching regressions early.

However, Doug and I just uncovered a concern on IRC: having PyPI be
ahead of master for more than very short periods seems like a poor
idea.

When we tag a final release on a branch without master being higher
than that final release, folk running things installed from master
will now be 'upgradable' to what's on PyPI, even though that's older
from a human perspective.

So, I think we do need some aggressive means to bump master up past
the versions reserved for a stable branch, at the stable branches
creation.
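[Editor's sketch, not from the thread] The failure mode Robert describes follows from PEP 440 ordering, under which a final release outranks any dev build of the same release (version numbers here are hypothetical):

```python
# Hedged sketch: why a final tag on stable can "outrank" master's
# pbr-generated dev versions. Requires the third-party "packaging"
# library; version numbers are hypothetical.
from packaging.version import Version

pypi_final = Version("13.0.0")             # tagged release on PyPI
master_dev = Version("13.0.0.0rc2.dev42")  # CD build from master

# An upgrade run would replace the master build with the older-by-history
# but greater-by-version final release:
assert master_dev < pypi_final
```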

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [puppet] puppet-trove remove templated guestagent.conf

2016-03-24 Thread Matt Fischer
Right now puppet-trove can configure guestagent.conf in two ways: first, via
config options in the guestagent class, and second, via a templated file that
taskmanager.pp handles by default [1]. I'd like to drop the templated
behavior, but it's not backwards compatible, so I'd like to discuss it.

First, the templated file is essentially a fork of the
trove_guestagent_config options. Options have been added and moved to
different sections over time, and the template was never updated. I have a
fix up for some of this [2], but there's more work to do.

Second, I believe the templated file is unnecessary. If you just want to set
guestagent.conf, but not run the service or install the packages, you'd just
do this:

  class { '::trove::guestagent':
    enabled        => false,
    manage_service => false,
    ensure_package => absent,
  }

Lastly, forcing guestagent.conf to reuse settings from the taskmanager
limits how you can partition credentials for Rabbit. Since the guest agent
runs on VMs, I'd like to use different Rabbit credentials for it than for
the taskmanager, which runs in my control plane. Using the templated file
this is not possible, since settings are inherited from trove::taskmanager.

This change is not backwards compatible, so it would need a deprecation
cycle.

So with all that said, is there a reason to keep the current way of doing
things?


[1] -
https://github.com/openstack/puppet-trove/blob/2ccdb978fffe990e28512069a4c4f69465ace942/manifests/taskmanager.pp#L299-L304
[2] - https://review.openstack.org/#/c/297293/2


Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-24 Thread Chris K
Big +1 from /me. Julia will be a great addition to the team.

-Chris

On Thu, Mar 24, 2016 at 12:08 PM, Jim Rollenhagen 
wrote:

> Hey all,
>
> I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She runs
> the Bifrost project, gives super valuable reviews, is beginning to lead
> the boot from volume efforts, and is clearly an expert in this space.
>
> All in favor say +1 :)
>
> // jim
>


Re: [openstack-dev] [release][all][ptl][stable] requirements updates in stable/mitaka libraries

2016-03-24 Thread Robert Collins
I think this is reasonable to do; we may need to follow up on this
depending on the result of the other thread about version
mgmt-after-a-branch.

-Rob

On 25 March 2016 at 04:05, Doug Hellmann  wrote:
> We have branched the openstack/requirements repository as the first
> step to unfreezing requirements changes in master. Creating the
> branch triggered the requirements update bot to propose changes to
> repositories with stable/mitaka branches that were out of sync
> relative to the point where we branched. Most of those repositories
> are for libraries, but there are one or two server projects in the
> list as well.
>
> These updates are mostly changes we made to the requirements list
> after it was frozen in order to address issues such as bad versions
> of libraries, needed patch updates, etc. Some may also reflect
> requirements updates that were not merged into the repositories
> before their stable branches were created. We want to have the
> stable branch requirements updates cleaned up to avoid the limbo
> state we were in with liberty for so long, where no one was confident
> of merging requirements changes in the stable branch.
>
> We need project teams to review and land the patches, then prepare
> release requests for the affected projects as soon as possible.
> Please look through the list of patches [1] and give them a careful
> but speedy review, then submit a change to openstack/releases for
> the new release using a minor-version-level update (increment the
> Y from X.Y.Z and reset Z to 0) reflecting the change in requirements.
>
> Thanks,
> Doug
>
> [1] 
> https://review.openstack.org/#/q/is:open+branch:stable/mitaka+topic:openstack/requirements
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [fuel] Component Leads Elections

2016-03-24 Thread Serg Melikyan
Hi folks,

I'd like to announce that we're running the Component Leads elections.
Detailed information is available on wiki [0].

Component Lead: defines architecture of a particular module or
component in Fuel, resolves technical disputes in their area of
responsibility. All design specs that impact a component must be
approved by the corresponding component lead [1].

Fuel has three large sub-teams, with roughly comparable codebases,
that need dedicated component leads:

* fuel-library
* fuel-web
* fuel-ui

Nominees propose their candidacy by sending an email to the
openstack-dev@lists.openstack.org mailing-list, with the subject:
"[fuel] <component> lead candidacy"
(for example, "[fuel] fuel-library lead candidacy").

Timeline:
* March 25 - March 31: Open candidacy for Component leads positions
* April 1 - April 7: Component leads elections

References
[0] https://wiki.openstack.org/wiki/Fuel/Elections_Spring_2016
[1] https://specs.openstack.org/openstack/fuel-specs/policy/team-structure.html

-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-24 Thread Yolanda Robla Mota
I've worked with Julia on bifrost and she is amazing. She also was very 
helpful on debugging Ironic problems, and she has extensive knowledge of 
the system.

Big +1 from me

El 24/03/16 a las 20:34, Chris K escribió:

Big +1 from /me. Julia will be a great addition to the team.

-Chris

On Thu, Mar 24, 2016 at 12:08 PM, Jim Rollenhagen 
mailto:j...@jimrollenhagen.com>> wrote:


Hey all,

I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She
runs
the Bifrost project, gives super valuable reviews, is beginning to
lead
the boot from volume efforts, and is clearly an expert in this space.

All in favor say +1 :)

// jim



--
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-m...@hpe.com



Re: [openstack-dev] [containers][horizon][magnum-ui] - Stable version for Liberty?

2016-03-24 Thread Jim Rollenhagen
On Thu, Mar 24, 2016 at 05:17:09PM +, Adrian Otto wrote:
> Marcos,
> 
> Great question. The current intent is to backport security fixes and critical 
> bugs, and to focus on master for new feature development. Although we would 
> love to expand scope to backport functionality, I’m not sure it’s realistic 
> without an increased level of commitment from that group of contributors. 
> With that said, I am willing to approve backporting of basic features to
> previous stable branches on a case-by-case basis.

According to stable branch policy, features are not valid backports[0].

Of course, Magnum[1] doesn't have the stable:follows-policy tag[2], so
perhaps you're free to do whatever you want here. :)

// jim

[0] 
http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
[1] http://governance.openstack.org/reference/projects/magnum.html
[2] http://governance.openstack.org/reference/tags/stable_follows-policy.html

> 
> Adrian
> 
> On Mar 24, 2016, at 6:55 AM, Marcos Fermin Lobo 
> mailto:marcos.fermin.l...@cern.ch>> wrote:
> 
> Hi all,
> 
> I have a question about the magnum-ui plugin for Horizon. I see that there is
> a tarball for the stable/liberty version, but it is very simple: just index
> views, without any "create" action.
> 
> But I see a lot of work in the master branch for this project, which is not
> compatible with Horizon Liberty. My question to the people in charge of this
> project is: when the code is stable, do you plan to backport all the
> functionality to the Liberty version, or just go to Mitaka?
> 
> Thank you.
> 
> Regards,
> Marcos.


Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-24 Thread Jeremy Stanley
On 2016-03-24 21:11:06 +0100 (+0100), Yolanda Robla Mota wrote:
> I've worked with Julia on bifrost and she is amazing. She also was very
> helpful on debugging Ironic problems, and she has extensive knowledge of the
> system.
> Big +1 from me

Also, she's been a great source of Ironic knowledge we've relied on
in Infra on multiple occasions, so consider this an outside
recommendation.
-- 
Jeremy Stanley



Re: [openstack-dev] Bots and Their Effects: Gerrit, IRC, other

2016-03-24 Thread Jim Rollenhagen
On Thu, Mar 24, 2016 at 09:56:16AM +0100, Thierry Carrez wrote:
> Anita Kuno wrote:
> >[...]
> >So some items that have been raised thus far:
> >- permissions: having a bot on gerrit with +2 +A is something we would
> >like to avoid
> >- "unsanctioned" bots (bots not in infra config files) in channels
> >shared by multiple teams (meeting channels, the -dev channel)
> >- forming a dependence on bots and expecting infra to maintain them ex
> >post facto (example: bot soren maintained until soren didn't)
> >- causing irritation for others due to the presence of an echoing bot
> >which eventually infra will be asked or expected to mediate
> >- duplication of features, both meetbot and purplebot log channels and
> >host the archives in different locations
> >- canonical bot doesn't get maintained
> 
> So it feels like people write their own bot rather than contribute to the
> already-existing infrastructure bots (statusbot and meetbot) -- is there a
> reason for that, beyond avoiding the infra contribution process ?

I tend to find that bots like this are a great little hackday project or
a good way to pick up a new programming language, so maybe that's part
of the reason. :)

// jim

> I was faced with such a decision when I coded up the #success feature, but I
> ended up just adding the feature to the existing statusbot, rather than
> create my own successbot. Are there technical limitations in the existing
> bots that prevent people from adding the features they want there ?
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-24 Thread Villalovos, John L
+1 for me. Julia has been an awesome resource and person in the Ironic 
community :)

John

-Original Message-
From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com] 
Sent: Thursday, March 24, 2016 12:09
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

Hey all,

I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She runs
the Bifrost project, gives super valuable reviews, is beginning to lead
the boot from volume efforts, and is clearly an expert in this space.

All in favor say +1 :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread David Moreau Simard
On Thu, Mar 24, 2016 at 1:58 PM, Doug Hellmann  wrote:
> Let's turn the question around: Why do you (or anyone) want to
> package things that are not tagged as releasable by the contributors
> creating them? What are those packages used for?

I know Ubuntu provides similar trunk repositories [1] but I'll reply
to this specific bit from the RDO perspective.

TL;DR:
It's a lot of work to package OpenStack and we can't realistically
ship quickly if we only start working once a stable release is done.
Consumers (end users and deployment projects) of RDO packages have the
same need - to keep up with trunk to do a stable release ASAP.

The long version:
RDO is first and foremost a community packaging effort of OpenStack
for Red Hat based distributions.

There are two main reasons why we keep up with trunk for packages
throughout the whole cycle:

#1 Consumers (i.e., end users and deployment projects) are also in their
development cycles and want/need to keep up with trunk.
We want to be able to provide RDO packages for these consumers to
enable them to develop and test their tooling against the latest code
from trunk.
For example, the gate jobs for puppet module master branches run
integration tests using the trunk repositories [2] while their stable
branches use the stable repositories.

There are a lot of changes introduced in a cycle: new features,
deprecations, removals, non-backwards-compatible changes, etc.
Until these consumers install a package that contains one of these
changes, they won't know how to use it, they won't know they're
broken, etc.
By continually providing up-to-date packages, they are able to test
them right away and it allows them to spread the necessary work
throughout the whole cycle, which brings me to the next point.

#2 It's a lot of work and we want to release quickly.
Seriously, just packaging and testing the packaging itself is a lot of work.
New projects, new libraries, new dependencies, new configuration
files, etc -- and we're not even installing them or configuring them.

If we started packaging OpenStack only once the official stable
release was out, it would take us several weeks or months to get
stable RDO packaging out.
Mitaka is due in April and we're also on target to be releasing in
April. Releasing the packaging for Mitaka months later is not
something we want to do.

This brings us back to the consumers who install and configure
OpenStack with RDO packages, ex:
- TripleO
- Packstack
- Kolla
- Puppet-OpenStack

If we ship RDO packaging months after a release, this means these
projects can't develop and test against this release for months.
This means their own release will also be delayed, almost bringing us
to the next official release of OpenStack.

At the risk of tooting my own horn a bit here, I went into a bit more
depth on how RDO keeps up with trunk in a talk I gave recently
[3] if you're interested.

[1]: 
http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/mitaka_versions.html
[2]: 
https://github.com/openstack/puppet-openstack-integration/blob/b33ddbe34dacd4244cea5f6b8e674db5c8c939d3/manifests/repos.pp#L27
[3]: https://www.youtube.com/watch?v=XAWLm3jP7Mg&t=647

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?

2016-03-24 Thread Jim Rollenhagen
On Wed, Mar 23, 2016 at 06:02:47PM -0400, Doug Hellmann wrote:
> Excerpts from Jim Rollenhagen's message of 2016-03-23 14:46:16 -0700:
> > On Thu, Mar 24, 2016 at 07:14:35AM +1000, Lana Brindley wrote:
> > > 
> > > Hi Mike, and sorry I missed you on IRC to discuss this there. That said, 
> > > I think it's great that you took this to the mailing list, especially 
> > > seeing the conversation that has ensued.
> > > 
> > > More inline ...
> > > 
> > > On 24/03/16 01:06, Mike Perez wrote:
> > > > Hey all,
> > > > 
> > > > I've been talking to a variety of projects about the lack of install
> > > > guides. This came from me not having a great experience trying out
> > > > projects in the big tent.
> > > > 
> > > > Projects like Manila have proposed install docs [1], but they were 
> > > > rejected
> > > > by the install docs team because it's not in defcore. One of Manila's 
> > > > goals of
> > > > getting these docs accepted is to apply for the operators tag
> > > > ops:docs:install-guide [2] so that it helps their maturity level in the 
> > > > project
> > > > navigator [3].
> > > > 
> > > > Adrian Otto expressed to me that he has the same issue for Magnum.
> > > > Personally, I think it's funny that a project that gets keynote time
> > > > at the OpenStack conference can't be in the install docs.
> > > > 
> > > > As seen from the Manila review [1], the install docs team is suggesting 
> > > > these
> > > > to be put in their developer guide.
> > > 
> > > As Steve pointed out, these now have solid plans to go in. That was 
> > > because both projects opened a conversation with us and we worked with 
> > > them over time to give them the docs they required.
> > > 
> > > > 
> > > > I don't think this is a great idea. Mainly because they are for 
> > > > developers,
> > > > operators aren't going to be looking in there for install information. 
> > > > Also the
> > > > Developer doc page [4] even states "This page contains documentation 
> > > > for Python
> > > > developers, who work on OpenStack itself".
> > > 
> > > I agree, but it's a great place to start. In fact, I've just merged a 
> > > change to the Docs Contributor Guide (on the back of a previous mailing 
> > > list conversation) that explicitly states this:
> > > 
> > > http://docs.openstack.org/contributor-guide/quickstart/new-projects.html
> > > 
> > > > 
> > > > The install docs team doesn't want to be swamped with everyone in big 
> > > > tent
> > > > giving them their install docs, to be verified, and eventually likely 
> > > > to be
> > > > maintained by the install docs team.
> > > 
> > > Which is exactly why we're very selective. Sadly, documenting every big 
> > > tent project's install process is no small task.
> > 
> > I'd love to have some sort of plugin system, where teams can be
> > responsible for their own install guide repo, with a single line in the
> > RST for the install guide to have it include the repo in the build.
> > 
> > // jim
> 
> Why do we need to have one install guide? Why not separate guides for
> the peripheral projects?

That's fine too. What I really want is:

* A link into the ironic install guide from the official install guide.
* A publishing location that is not at /developer.
* To be able to meet the install guide criteria here:
  http://www.openstack.org/software/releases/liberty/components/ironic

// jim

> 
> Doug
> 
> > 
> > > 
> > > > 
> > > > However, as an operator, when I go to docs.openstack.org under install
> > > > guides, I should know how to install any of the big tent projects. These
> > > > are projects accepted by the Technical Committee.
> > > > 
> > > > Lets consider the bigger picture of things here. If we don't make this
> > > > information accessible, projects have poor adoption and get less 
> > > > feedback
> > > > because people can't attempt to install them to begin reporting bugs.
> > > 
> > > I agree. This has been an issue for several cycles now, but with all our 
> > > RST conversions now (mostly) behind us, I feel like we can dedicate the 
> > > Newton cycle to improving how we do things. Exactly how that happens will 
> > > need to be determined by the docs team in the Austin Design Summit, and I 
> > > strongly suggest you intend to attend that session once we have it 
> > > scheduled, as your voice is important in this conversation.
> > > 
> > > Lana
> > > 
> > > - -- 
> > > Lana Brindley
> > > Technical Writer
> > > Rackspace Cloud Builders Australia
> > > http://lanabrindley.com

Re: [openstack-dev] [fuel] Component Leads Elections

2016-03-24 Thread Dmitry Borodaenko
Serg,

Thanks for agreeing to officiate this cycle's component lead elections
for us!

-- 
Dmitry Borodaenko


On Thu, Mar 24, 2016 at 12:55:57PM -0700, Serg Melikyan wrote:
> Hi folks,
> 
> I'd like to announce that we're running the Component Leads elections.
> Detailed information is available on wiki [0].
> 
> Component Lead: defines the architecture of a particular module or
> component in Fuel and resolves technical disputes in their area of
> responsibility. All design specs that impact a component must be
> approved by the corresponding component lead [1].
> 
> Fuel has three large sub-teams, with roughly comparable codebases,
> that need dedicated component leads:
> 
> * fuel-library
> * fuel-web
> * fuel-ui
> 
> Nominees propose their candidacy by sending an email to the
> openstack-dev@lists.openstack.org mailing-list, with the subject:
> "[fuel]  lead candidacy"
> (for example, "[fuel] fuel-library lead candidacy").
> 
> Timeline:
> * March 25 - March 31: Open candidacy for Component leads positions
> * April 1 - April 7: Component leads elections
> 
> References
> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Spring_2016
> [1] 
> https://specs.openstack.org/openstack/fuel-specs/policy/team-structure.html
> 
> -- 
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Jeremy Stanley
On 2016-03-25 08:28:16 +1300 (+1300), Robert Collins wrote:
> However, Doug and I just uncovered a concern on IRC: having PyPI be
> ahead of master for more than very short periods seems like a poor
> idea.
> 
> When we tag a final release on a branch, without master being higher
> than that final release, folk running things installed from master
> will now be 'upgradable' to what's on PyPI, even though that's older
> from a human perspective.
> 
> So, I think we do need some aggressive means to bump master up past
> the versions reserved for a stable branch, at the stable branches
> creation.

I could be misunderstanding, but isn't this why we have a release
pipeline job which merges the release tag into master? So that ~as
soon as there is a new release, post-versioning in the master branch
is parsed by PBR as being at or later than the most recent release?
-- 
Jeremy Stanley
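[Editorial note: to illustrate the ordering concern Robert raises above, here is a
hedged, much-simplified sketch of PEP 440-style ordering for plain X.Y.Z[.devN]
strings only. pip and pbr implement the full PEP 440 rules; this toy parser is
not their actual code. It shows why a master branch still identifying as
1.9.0.devN compares lower than a 2.0.0 tag cut on the release branch, and why
merging the release tag back into master fixes that.]

```python
def parse(version):
    """Toy ordering key for versions of the form X.Y.Z or X.Y.Z.devN.

    This is a simplification of PEP 440 for illustration only; real tools
    (pip, pbr) handle many more forms (rc, post, epoch, local versions).
    """
    release, _, dev = version.partition(".dev")
    nums = tuple(int(part) for part in release.split("."))
    # A dev build sorts *before* the final release it leads up to,
    # so use (0, N) for .devN and (1,) for a final release.
    suffix = (0, int(dev)) if dev else (1,)
    return nums + (suffix,)

# Master not yet bumped past the release: PyPI's 2.0.0 looks "newer".
print(parse("1.9.0.dev10") < parse("2.0.0"))   # master would be "upgraded"
# After the release tag is merged back into master, master sorts higher.
print(parse("2.0.1.dev3") > parse("2.0.0"))
```

Both comparisons print True: a 1.9.0.dev10 install would be "upgradable" to the
older-by-content 2.0.0 on PyPI, while a post-merge 2.0.1.dev3 would not.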

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kite] Seeking core reviewers

2016-03-24 Thread Ronald Bradford
As part of the initiative to replace all legacy Oslo Incubator code in
OpenStack with graduated libraries, we are working through all projects.
I have been unable to find any IRC contact information on the wiki, or
any kite-core group members, to help with several reviews [1] for the project.

Any assistance from kite contributors appreciated.

[1]
https://review.openstack.org/#/q/status:open+project:openstack/kite+branch:master+topic:oslo_incubator_cleanup


Regards

Ronald
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kite] Seeking core reviewers

2016-03-24 Thread Ian Cordasco
-Original Message-
From: Ronald Bradford 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 24, 2016 at 16:16:22
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  [openstack-dev] [kite] Seeking core reviewers

> As part of the initiative to replace all legacy Oslo Incubator code in
> OpenStack with graduated libraries, we are working through all projects.
> I have been unable to find any IRC contact information on the wiki or
> kite-core group in order to help with several reviews [1] for the project.
>  
> Any assistance from kite contributors appreciated.
>  
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/kite+branch:master+topic:oslo_incubator_cleanup
>   

I believe Kite, which was started by the Barbican folks, is no longer actively
developed or maintained. You should find them in #openstack-barbican.

Cheers,
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [pbr] semver on master branches after RC WAS Re: How do I calculate the semantic version prior to a release?

2016-03-24 Thread Ian Cordasco
 

-Original Message-
From: Jeremy Stanley 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 24, 2016 at 16:15:38
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [release] [pbr] semver on master branches after 
RC WAS Re: How do I calculate the semantic version prior to a release?

> On 2016-03-25 08:28:16 +1300 (+1300), Robert Collins wrote:
> > However, Doug and I just uncovered a concern on IRC: having PyPI be
> > ahead of master for more than very short periods seems like a poor
> > idea.
> >
> > When we tag a final release on a branch, without master being higher
> > than that final release, folk running things installed from master
> > will now be 'upgradable' to what's on PyPI, even though that's older
> > from a human perspective.
> >
> > So, I think we do need some aggressive means to bump master up past
> > the versions reserved for a stable branch, at the stable branches
> > creation.
>  
> I could be misunderstanding, but isn't this why we have a release
> pipeline job which merges the release tag into master? So that ~as
> soon as there is a new release, post-versioning in the master branch
> is parsed by PBR as being at or later than the most recent release?

Also, this only affects libraries and not the server projects. Besides, hasn't 
this community always asserted that no one installs from PyPI anyway (in spite 
of evidence that projects allow just that, based on user feedback)?

--  
Ian Cordasco
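[Editorial note: on the thread's original question, "How do I calculate the
semantic version prior to a release?" - roughly, pbr derives a development
version from the most recent git tag plus the number of commits since it, as
reported by `git describe`. The sketch below is hypothetical and simplified: it
is NOT pbr's actual algorithm, which also honors Sem-Ver commit footers,
pre-release tags, and the full PEP 440 grammar.]

```python
def dev_version(describe: str) -> str:
    """Derive a .devN version from `git describe --tags` output.

    Simplified sketch for output of the form "X.Y.Z-<commits>-g<sha>",
    e.g. "2.0.0-14-gabc123". Real pbr handles many more cases.
    """
    tag, commits, _sha = describe.rsplit("-", 2)
    major, minor, patch = (int(part) for part in tag.split("."))
    # Target the next patch release, marked as a .devN pre-version so it
    # sorts after the tagged release but before the next real one.
    return "%d.%d.%d.dev%d" % (major, minor, patch + 1, int(commits))

print(dev_version("2.0.0-14-gabc123"))  # 2.0.1.dev14
```

So 14 commits past a 2.0.0 tag yields 2.0.1.dev14 under these assumptions.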


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

