Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-13 Thread Chmouel Boudjnah
On Fri, Jun 13, 2014 at 4:47 AM, Xuhan Peng  wrote:

> Since I'm working on the neutron client code change, looking at your code
> change to nova client, it looks like only X-Auth-Token is taken care of in
> http_log_req. There are also the password in the header and the token ID in
> the response. Any particular reason that they are not being taken care of?



I believe this is from keystoneclient, which is taken care of by Morgan.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about modifying instance attribute(such as cpu-QoS, disk-QoS ) without shutdown the instance

2014-06-13 Thread Jian Wen
Add dynamic adjust disk qos support
https://blueprints.launchpad.net/nova/+spec/dynamic-adjust-disk-qos

It's not approved yet.


2014-04-09 0:45 GMT+08:00 Trump.Zhang :

> Such as QoS attributes of vCPU, Memory and Disk, including IOPS limit,
> Bandwidth limit, etc.
>
>
> 2014-04-08 23:04 GMT+08:00 Jay Pipes :
>
> On Tue, 2014-04-08 at 08:30 +, Zhangleiqiang (Trump) wrote:
>> > Hi, Stackers,
>> >
>> >   For Amazon, after calling the ModifyInstanceAttribute API, the
>> > instance must be stopped.
>> >
>> >   In fact, the hypervisor can adjust these attributes online, but
>> > Amazon and OpenStack do not support it.
>> >
>> >   So I want to know your advice about introducing the
>> > capability of adjusting these instance attributes online?
>>
>> What kind of attributes?
>>
>> Best,
>> -jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best,

Jian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Reviews - we need your help!

2014-06-13 Thread Macdonald-Wallace, Matthew
Thanks Tomas,

http://bit.ly/1lsg3SH now contains the missing projects and has been re-ordered 
slightly so that you see outdated reviews first, then the Jenkins/-1 stuff.

Cheers,

Matt

> -Original Message-
> From: Tomas Sedovic [mailto:tsedo...@redhat.com]
> Sent: 12 June 2014 19:03
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [TripleO] Reviews - we need your help!
> 
> On 12/06/14 16:02, Macdonald-Wallace, Matthew wrote:
> > FWIW, I've tried to make a useful dashboard for this using Sean
> > Dague's gerrit-dash-creator [0].
> >
> >
> >
> > Short URL is http://bit.ly/1l4DLFS long url is:
> >
> >
> >
> >
> https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack
> %2Ftripleo-incubator+OR+project%3Aopenstack%2Ftripleo-image-
> elements+OR+project%3Aopenstack%2Ftripleo-heat-
> templates+OR+project%3Aopenstack%2Ftripleo-
> specs+OR+project%3Aopenstack%2Fos-apply-
> config+OR+project%3Aopenstack%2Fos-collect-
> config+OR+project%3Aopenstack%2Fos-refresh-
> config+OR+project%3Aopenstack%2Fdiskimage-
> builder%29+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-
> 1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-
> Review%3E%3D-
> 2%252cself+branch%3Amaster+status%3Aopen&title=TripleO+Reviews&Your+a
> re+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%
> 3Aself&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-
> Review%3C%3D-
> 1+limit%3A100&Changes+with+no+code+review+in+the+last+48hrs=NOT+label
> %3ACode-
> Review%3C%3D2+age%3A48h&Changes+with+no+code+review+in+the+last+5+
> days=NOT+label%3ACode-
> Review%3C%3D2+age%3A5d&Changes+with+no+code+review+in+the+last+7+d
> ays=NOT+label%3ACode-Review%!
>  3C%3D2+age
> %3A7d&Some+adjustment+required+%28-1+only%29=label%3ACode-
> Review%3D-1+NOT+label%3ACode-Review%3D-
> 2+limit%3A100&Dead+Specs+%28-2%29=label%3ACode-Review%3C%3D-2
> >
> >
> >
> > I'll add it to my fork and submit a PR if people think it useful.
> 
> I was about to mention this, too. The gerrit-dash-creator is fantastic.
> 
> This one is missing the Tuskar-related projects (openstack/tuskar,
> openstack/tuskar-ui and openstack/python-tuskarclient) and also openstack/os-
> cloud-config, though.
> 
> 
> >
> >
> >
> > Matt
> >
> >
> >
> > [0] https://github.com/sdague/gerrit-dash-creator
> >
> >
> >
> > *From:*James Polley [mailto:j...@jamezpolley.com]
> > *Sent:* 12 June 2014 06:08
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* [openstack-dev] [TripleO] Reviews - we need your help!
> >
> >
> >
> > During yesterday's IRC meeting, we realized that our review stats are
> > starting to slip again.
> >
> > Just after summit, our stats were starting to improve. In the
> > 2014-05-20 meeting, the TripleO  "Stats since the last revision
> > without -1 or -2"[1] looked like this:
> >
> > 1st quartile wait time: 1 days, 1 hours, 11 minutes
> >
> > Median wait time: 6 days, 9 hours, 49 minutes
> >
> > 3rd quartile wait time: 13 days, 5 hours, 46 minutes
> >
> >
> >
> > As of yesterdays meeting, we have:
> >
> > 1st quartile wait time: 4 days, 23 hours, 19 minutes
> >
> > Median wait time: 7 days, 22 hours, 8 minutes
> >
> > 3rd quartile wait time: 13 days, 19 hours, 17 minutes
> >
> >
> >
> > This really hurts our velocity, and is especially hard on people
> > making their first commit, as it can take them almost a full work week
> > before they even get their first feedback.
> >
> > To get things moving, we need everyone to make a special effort to do
> > a few reviews every day. It would be most helpful if you can look for
> > older reviews without a -1 or -2 and help those reviews get over the line.
> >
> > If you find reviews that are just waiting for a simple fix - typo or
> > syntax fixes, simple code fixes, or a simple rebase - it would be even
> > more helpful if you could take a few minutes to make those patches,
> > rather than just leaving the review waiting for the attention of the
> > original submitter.
> >
> > Please keep in mind that these stats are based on all of our projects,
> > not just tripleo-incubator. To save you heading to the wiki, here's a
> > handy link that shows you all open code reviews in all our projects:
> >
> > bit.ly/1hQco1N 
> >
> > If you'd prefer the long version:
> > https://review.openstack.org/#/q/status:open+%28project:openstack/trip
> > leo-incubator+OR+project:openstack/tuskar+OR+project:openstack/tuskar-
> > ui+OR+project:openstack-infra/tripleo-ci+OR+project:openstack/os-apply
> > -config+OR+project:openstack/os-collect-config+OR+project:openstack/os
> > -refresh-config+OR+project:openstack/os-cloud-config+OR+project:openst
> > ack/tripleo-image-elements+OR+project:openstack/tripleo-heat-templates
> > +OR+project:openstack/diskimage-builder+OR+project:openstack/python-tu
> > skarclient+OR+project:openstack/tripleo-specs%29,n,z
> >  > pleo-incubator+OR+project

Re: [openstack-dev] NFV in OpenStack use cases and context

2014-06-13 Thread Sylvain Bauza
Hi Yathi,

On 12/06/2014 20:53, Yathiraj Udupi (yudupi) wrote:
> Hi Alan, 
>
> Our Smart (Solver) Scheduler blueprint
> (https://blueprints.launchpad.net/nova/+spec/solver-scheduler ) has been
> in the works in the Nova community since late 2013.  We have demoed at the
> Hong Kong summit, as well as the Atlanta summit,  use cases using this
> smart scheduler for better, optimized resource placement with complex
> constrained scenarios.  So to let you know this work was started as a
> smart way of doing scheduling, applicable in general and not limited to
> NFV.  Currently we feel NFV is a killer app for driving this blueprint and
> the work ahead; however, it is applicable to all kinds of resource placement
> scenarios. 
>
> We will be very interested in finding out more about your blueprints that
> you are referring to here, and see how it can be integrated as part of our
> future roadmap. 

Indeed, Smart Scheduler is something that could help NFV use-cases. My
only concern is about the necessary steps for providing such a feature,
with regards to the scheduler breakout that is coming.

Could you please make sure the current nova-spec [1] takes into
account all the other efforts around the scheduler, like the scheduler
forklift [2], on-demand resource reporting [3] and others? It also seems the
spec is not following the defined template; could you please fix it? That
would make your proposal easier to review.

The Gantt and Nova scheduler teams hold a weekly meeting every
Tuesday at 3pm UTC. If you have a chance to join, it would be great
to discuss your proposal, identify all the milestones for it and
potentially track progress.

Thanks,
-Sylvain

[1] https://review.openstack.org/#/c/96543/
[2] https://review.openstack.org/82133 and
https://review.openstack.org/89893
[3] https://review.openstack.org/97903



> Thanks,
> Yathi. 
>
>
> On 6/12/14, 10:55 AM, "Alan Kavanagh"  wrote:
>
>> Hi Ramki
>>
>> Really like the smart scheduler idea, we made a couple of blueprints that
>> are related to ensuring you have the right information to build a
>> constrained based scheduler. I do however want to point out that this is
>> not NFV specific but is useful for all applications and services of which
>> NFV is one. 
>>
>> /Alan
>>
>> -Original Message-
>> From: ramki Krishnan [mailto:r...@brocade.com]
>> Sent: June-10-14 6:06 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
>> Subject: Re: [openstack-dev] NFV in OpenStack use cases and context
>>
>> Hi Steve,
>>
>> Forgot to mention, the "Smart Scheduler (Solver Scheduler) enhancements
>> for NFV: Use Cases, Constraints etc." is a good example of an NFV use
>> case deep dive for OpenStack.
>>
>> https://urldefense.proofpoint.com/v1/url?u=https://docs.google.com/documen
>> t/d/1k60BQXOMkZS0SIxpFOppGgYp416uXcJVkAFep3Oeju8/edit%23heading%3Dh.wlbcla
>> gujw8c&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=%2FZ35AkRhp2kCW4Q3MPeE%2BxY2b
>> qaf%2FKm29ZfiqAKXxeo%3D%0A&m=vTulCeloS8Hc59%2FeAOd32Ri4eqbNqVE%2FeMgNRzGZn
>> z4%3D%0A&s=836991d6daab66b519de3b670db8af001144ddb20e636665b395597aa118538
>> f
>>
>> Thanks,
>> Ramki
>>
>> -Original Message-
>> From: ramki Krishnan
>> Sent: Tuesday, June 10, 2014 3:01 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Chris Wright; Nicolas Lemieux; Norival Figueira
>> Subject: RE: NFV in OpenStack use cases and context
>>
>> Hi Steve,
>>
>> We have OpenStack gap analysis documents in ETSI NFV under member-only
>> access. I can work on getting a public version of the documents (at
>> least a draft) to fuel the kick start.
>>
>> Thanks,
>> Ramki
>>
>> -Original Message-
>> From: Steve Gordon [mailto:sgor...@redhat.com]
>> Sent: Tuesday, June 10, 2014 12:06 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Chris Wright; Nicolas Lemieux
>> Subject: Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and
>> context
>>
>> - Original Message -
>>> From: "Steve Gordon" 
>>> To: "Stephen Wong" 
>>>
>>> - Original Message -
 From: "Stephen Wong" 
 To: "ITAI MENDELSOHN (ITAI)" ,
 "OpenStack Development Mailing List (not for usage questions)"
 

 Hi,

 Perhaps I have missed it somewhere in the email thread? Where is
 the use case => bp document we are supposed to do for this week? Has
 it been created yet?

 Thanks,
 - Stephen
>>> Hi,
>>>
>>> Itai is referring to the ETSI NFV use cases document [1] and the
>>> discussion is around how we distill those - or a subset of them - into
>>> a more consumable format for an OpenStack audience on the Wiki. At
>>> this point I think the best approach is to simply start entering one
>>> of them (perhaps #5) into the Wiki and go from there. Ideally this
>>> would form a basis for discussing the format etc.
>>>
>>> Thanks,
>>>
>>> Steve
>>>
>>> [1]
>>> http://www.etsi.or

Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints

2014-06-13 Thread Sylvain Bauza
On 12/06/2014 15:32, Gary Kotton wrote:
> Hi,
> There is a mid-cycle sprint in July for Nova and Neutron. Anyone
> interested in maybe getting one together in Europe/Middle East around
> the same dates? If people are willing to come to this part of the
> world I am sure that we can organize a venue for a few days. Anyone
> interested? If we can get a quorum then I will be happy to try and
> arrange things.
> Thanks
> Gary
>


Hi Gary,

Wouldn't it be more interesting to have a mid-cycle sprint *before* the
Nova one (which is targeted after juno-2), so that we could discuss
some topics and report status to the other folks, allowing a second
run?

There is already a proposal in Paris for hosting some OpenStack sprints,
see https://wiki.openstack.org/wiki/Sprints/ParisJuno2014

-Sylvain

>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-13 Thread Robert Collins
On 12 June 2014 23:59, Sean Dague  wrote:

> The only thing it makes harder is you have to generate your own token to
> run the curl command. The rest is there. Because everyone is running our
> servers at debug levels, it means the clients are going to be running
> debug level as well (yay python logging!), so this is something I don't
> think people realized was a huge issue.
>
>> Anyway I have sent a patch for swiftclient for this in :
>>
>> https://review.openstack.org/#/c/99632/1
>>
>> Personally I don't like that SHA1 much, and I'd rather use the
>> first 16 bytes of the token (like we did in the swift server)
>
> Using a well known hash means you can verify it was the right thing if
> you have access to the original data. Just taking the first 16 bytes
> doesn't give you that, so I think the hash provides slightly more
> debugability.

Would it be possible to salt it? e.g. make a 128bit salt and use that.
The same token used twice will log with the same salt, but you won't
have the rainbow table weakness.

The length of tokens isn't a particularly strong defense against
rainbow tables AIUI: if folk realise we have tokens exposed, they will
just use a botnet to build a table specifically targeting us.
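
For what it's worth, a minimal sketch of what that could look like on the
client side (illustrative only, not the actual swiftclient patch; names are
made up):

import hashlib
import os

# One salt per process: the same token logs identically within a run, but a
# pre-computed (rainbow) table is useless against the logged value.
_LOG_SALT = os.urandom(16)  # 128-bit salt, as suggested above


def mask_token(token):
    """Return a salted digest of an auth token, suitable for debug logs."""
    digest = hashlib.sha1(_LOG_SALT + token.encode('utf-8')).hexdigest()
    return '{SHA1-salted}%s' % digest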

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-13 Thread Flavio Percoco

On 12/06/14 16:27 +, Janczuk, Tomasz wrote:

What exactly is the core set of functionalities Marconi expects all
implementations to support? (I understand it is a subset of the HTTP APIs
Marconi exposes?)


Correct. What the exact calls are, we still don't know. We've talked
about them a bit and this thread is showing how urgent it is to figure
that out.



On 6/12/14, 4:56 AM, "Flavio Percoco"  wrote:


On 11/06/14 16:26 -0700, Devananda van der Veen wrote:

On Tue, Jun 10, 2014 at 1:23 AM, Flavio Percoco 
wrote:

Against:

 • Makes it hard for users to create applications that work across
   multiple clouds, since critical functionality may or may not be
   available in a given deployment. (counter: how many users need
   cross-cloud compatibility? Can they degrade gracefully?)




The OpenStack Infra team does.


This is definitely unfortunate but I believe it's a fair trade-off. I
believe the same happens in other services that have support for
different drivers.


I disagree strongly on this point.

Interoperability is one of the cornerstones of OpenStack. We've had
panels about it at summits. Designing an API which is not
interoperable is not "a fair tradeoff" for performance - it's
destructive to the health of the project. Where other projects have
already done that, it's unfortunate, but let's not plan to make it
worse.

A lack of interoperability not only prevents users from migrating
between clouds or running against multiple clouds concurrently, it
hurts application developers who want to build on top of OpenStack
because their applications become tied to specific *implementations*
of OpenStack.



What I meant to say is that, based on a core set of functionalities,
all extra functionalities are part of the "fair trade-off". It's up to
the cloud provider to choose what storage driver/features they want to
expose. Nonetheless, they'll all expose the same core set of
functionalities. I believe this is true also for other services, which
I'm not trying to use as an excuse but as a reference of what the
reality of non-opinionated services is. Marconi is opinionated w.r.t
the API and the core set of functionalities it wants to support.

You make really good points that I agree with. Thanks for sharing.

--
@flaper87
Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgpkWCGuCray4.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] performance

2014-06-13 Thread Flavio Percoco

On 28/04/14 17:41 +, Janczuk, Tomasz wrote:

Hello,

Have any performance numbers been published for Marconi? I have asked this 
question before 
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/031004.html) but 
there were none at that time.



This is a work-in-progress. As soon as the benchmark tool is ready,
the code and results will be shared here.

--
@flaper87
Flavio Percoco


pgphwCjVfz4g3.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] performance

2014-06-13 Thread Flavio Percoco

On 12/06/14 17:20 +, Janczuk, Tomasz wrote:

Hello,

I was wondering if there is any update on the performance of Marconi? Are
results of any performance measurements available yet?



Oops, I think I mistakenly replied to your old email... I'll blame my
email client. :D

That said, no measurements available yet. Those will be reported here
as soon as they're done. FWIW, they're targeted for J-2.




Thanks,
Tomasz Janczuk
@tjanczuk
HP

On 4/29/14, 1:01 PM, "Janczuk, Tomasz"  wrote:


Hi Flavio,

Thanks! I also added some comments to the performance test plan at
https://etherpad.openstack.org/p/marconi-benchmark-plans we talked about
yesterday.

Thanks,
Tomasz

On 4/29/14, 2:52 AM, "Flavio Percoco"  wrote:


On 28/04/14 17:41 +, Janczuk, Tomasz wrote:

Hello,

Have any performance numbers been published for Marconi? I have asked
this question before
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/031004.ht
m
l) but there were none at that time.




Hi Tomasz,

Some folks in the team are dedicated to working on this and producing
results asap. The details and results will be shared as soon as
possible.

Thanks a lot for your interest, I'll make sure you're in the loop as
soon as we have them.

--
@flaper87
Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgp0Fp5vjfpGx.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-13 Thread Daniel P. Berrange
On Thu, Jun 12, 2014 at 09:57:41PM +, Adrian Otto wrote:
> Containers Team,
> 
> The nova-docker developers are currently discussing implementation
> options for supporting mounting of Cinder volumes in containers, and
> creation of unprivileged containers-in-containers. Both of these
> currently require CAP_SYS_ADMIN[1], which is problematic because, if
> granted within a container, it can lead to an escape from the
> container back into the host.

NB it is fine for a container to have CAP_SYS_ADMIN if user namespaces
are enabled and the root user remapped.
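
For anyone who hasn't seen it, the remapping is just a uid/gid map in the
container config. A minimal sketch, assuming LXC 1.x syntax and
illustrative ranges:

  # Container root (uid/gid 0) maps onto unprivileged host ids
  # 100000-165535, so CAP_SYS_ADMIN inside != root on the host.
  lxc.id_map = u 0 100000 65536
  lxc.id_map = g 0 100000 65536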

Also, we should remember that mounting filesystems is not the only use
case for exposing block devices to containers. Some applications will
happily use raw block devices directly without needing to format and
mount any filesystem on them (eg databases).

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-13 Thread Matthew Booth
On 13/06/14 05:27, Angus Lees wrote:
> On Thu, 12 Jun 2014 05:06:38 PM Julien Danjou wrote:
>> On Thu, Jun 12 2014, Matthew Booth wrote:
>>> This looks interesting. It doesn't have hooks for fencing, though.
>>>
>>> What's the status of tooz? Would you be interested in adding fencing
>>> hooks?
>>
>> It's maintained and developed; we have plans to use it in Ceilometer and
>> other projects. Joshua also wants to use it for Taskflow.
>>
>> We are blocked for now by https://review.openstack.org/#/c/93443/ and by
>> the lack of resource to complete that request obviously, so help
>> appreciated. :)
>>
>> As for fencing hooks, it sounds like a good idea.
> 
> As far as I understand these things, in distributed-locking-speak "fencing" 
> just means "breaking someone else's lock".

No. It means forcibly preventing a delinquent node from further use of
the shared resource. An example of a fencing solution is a remote power
switch. No trust is required.

Matt

-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-13 Thread Thierry Carrez
Stefano Maffulli wrote:
> On Thu 12 Jun 2014 03:53:35 PM PDT, Morgan Fainberg wrote:
>> I understand your concerns here, and I totally get what you’re driving
>> at, but in the packaging world wouldn’t this make sense to call it
>> "python-bash8"? Now the binary, I can agree (for reasons outlined)
>> should probably not be named ‘bash8’, but the name of the “command”
>> could be separate from the packaging / project name.
> 
> As a user, I hate to have to follow the abstruse reasoning of a random 
> set of developers forcing a packager to pick a name for the package 
> that is different than the executable. A unicorn dies every time 
> `apt-get install sillypackage && sillypackage` results in "File not 
> found". Dang!  that was my favorite unicorn.

Well, in this precise case, no random set of developers is forcing any
packager to do anything.

We develop a Python library called "bash8" (with an executable named
"bash8") and a packager is trying to force us to rename it to something
else, because its distribution doesn't like the name we chose.

It's also interesting to note that in all cases (whether we follow
Thomas suggestion or not), the Debian *binary* package will be named
"python-bash8", which is apparently what you don't like. That's not
because "a random set of developers is forcing upstream", that's due to
Debian's own Python packaging rules.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] RabbitMQ (AMQP 0.9) driver for Marconi

2014-06-13 Thread Flavio Percoco

On 11/06/14 18:01 +, Janczuk, Tomasz wrote:

Thanks Flavio, some comments inline below.

On 6/11/14, 5:15 AM, "Flavio Percoco"  wrote:



 1.  Marconi exposes HTTP APIs that allow messages to be listed without
consuming them. This API cannot be implemented on top of AMQP 0.9, which
implements strict queueing semantics.


I believe this is quite an important endpoint for Marconi. It's not
about listing messages but getting batches of messages. Whether it is
through claims or not doesn't really matter. What matters is giving
the user the opportunity to get a set of messages, do some work and
decide what to do with those messages afterwards.


The sticky point here is that this Marconi endpoint allows messages to
be obtained *without* consuming them in the traditional messaging system
sense: the messages remain visible to other consumers. It could be argued
that such semantics can be implemented on top of AMQP by first getting the
messages and then immediately releasing them for consumption by others,
before the Marconi call returns. However, even that is only possible for
messages that are at the front of the queue - the "paging" mechanism using
markers cannot be supported.


What matters is whether the listing functionality is useful or not.
Let's not think about it as "listing" or "paging" but about getting
batches of messages that are still available for others to process in
parallel. As mentioned in my previous email, AMQP has been a good way
to analyze the extra set of features Marconi exposes in the API but I
don't want to make the choice of usability based on whether
traditional messaging systems support it and how it could be
implemented there.


 5.  The Marconi message consumption API creates a "claim ID" for a set of
consumed messages, up to a limit. In the AMQP 0.9 model (as well as SQS
and Azure Queues), "claim ID" maps onto the concept of "delivery tag",
which has a 1-1 relationship with a message. Since there is no way to
represent the 1-N mapping between claimID and messages in the AMQP 0.9
model, it effectively restricts consumption of messages to one per
claimID. This in turn prevents batch consumption benefits.

 6.  Marconi message consumption acknowledgment requires both claimID
and messageID to be provided. The messageID concept is missing in AMQP 0.9.
In order to implement this API, assuming the artificial 1-1 restriction
of claim-message mapping from #5 above, this API could be implemented by
requiring that messageID === claimID. This is really a workaround.



These 2 points represent quite a change in the way Marconi works and a
trade-off in terms of batch consumption (as you mentioned). I believe
we can have support for both things. For example, claimID+suffix where
the suffix points to a specific claimed message.

I don't want to start an extended discussion about this here but lets
keep in mind that we may be able to support both. I personally think
Marconi's claim's are reasonable as they are, which means I currently
like them better than SQS's.


What are the advantages of the Marconi model for claims over the SQS and
Azure Queue model for acknowledgements?

I think the SQS and Azure Queue model is both simpler and more flexible.
But the key advantage is that it has been around for a while, has been
proven to work, and people understand it.

1. SQS and Azure require only one concept to acknowledge a message
(receipt handle/pop receipt) as opposed to Marconi's two concepts (message
ID + claim ID). SQS/Azure model is simpler.


TBH, I'm not exactly sure where you're going with this. I mean, the
model may look simpler but it's not necessarily better nor easier to
implement. Keeping both, messages and claims, separate in terms of IDs
and management is flexible and powerful enough, IMHO. But I'm probably
missing your point.

I don't believe requiring the messageID+claimID to delete a specific,
claimed message is hard.
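
To make that concrete, the flow against the current v1 HTTP API looks
roughly like this (queue name and IDs are made up):

  # Claim a batch of up to 10 messages for 300 seconds:
  POST /v1/queues/jobs/claims?limit=10
  {"ttl": 300, "grace": 60}

  # 201 Created; each returned message carries an href such as:
  #   /v1/queues/jobs/messages/51db6f78c508f17ddc924357?claim_id=51db7067821e72...

  # Ack (delete) one specific claimed message with messageID + claimID:
  DELETE /v1/queues/jobs/messages/51db6f78c508f17ddc924357?claim_id=51db7067821e72...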



2. Similarly to Marconi, SQS and Azure allow individual claimed messages
to be deleted. This is a wash.


Calling it a wash is neither helpful nor friendly. Why do you think it
is a wash?

Claiming a message does not delete the message, which means consumers
may want to delete it before the claim is released. Do you have a
better way to do it?



3. SQS and Azure allow a batch of messages *up to a particular receipt
handle/pop receipt* to be deleted. This is more flexible than Marconi's
mechanism of deleting all messages associated with a particular claim, and
works very well for the most common scenario of in-order message delivery.


Pop semantics are on their way to the codebase. Limited claim deletes
sound like an interesting thing; let's talk about it. Want to submit a
new spec?


 7.  RabbitMQ message acknowledgment MUST be sent over the same AMQP
channel instance on which the message was originally received. This
requires that the two Marconi HTTP calls that receive and acknowledge a
message are affinitized to the same Marconi backend. It either
substantially complicates 

Re: [openstack-dev] [all] gerrit-dash-creator - much easier process for creating client side dashboards

2014-06-13 Thread Giulio Fidente

On 05/31/2014 03:56 PM, Sean Dague wrote:

We're still working on a way to make it possible to review in server
side gerrit dashboards more easily to gerrit. In the mean time I've put
together a tool that makes it easy to convert gerrit dashboard
definitions into URLs that you can share around.


a bit off topic, but useful to better use and share the 'shortened' URLs
we get:


is anyone aware of a service that allows us to 1) update an existing
'shortened' URL, 2) customize the 'shortened' URL a bit, and 3) that has an API?


the reason for the third is that every time a change is merged into a
.dash file we would update the URLs automatically
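
For context, a .dash file is just an ini-style definition that the tool
expands into the long gerrit URL. A rough sketch (queries are illustrative):

  [dashboard]
  title = TripleO Review Inbox
  description = Open TripleO reviews
  foreach = (project:openstack/tripleo-incubator OR project:openstack/diskimage-builder) status:open NOT owner:self

  [section "Needs final +2"]
  query = label:Code-Review>=2 NOT label:Code-Review<=-1

  [section "No code review in the last 5 days"]
  query = NOT label:Code-Review<=2 age:5d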


--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

2014-06-13 Thread Flavio Percoco

On 12/06/14 10:39 -0700, Arnaud Legendre wrote:

+1 Good job Nikhil!


Fuck Yeah! +1



━━━
From: "Brian Rosmaita" 
To: "OpenStack Development Mailing List (not for usage questions)"

Sent: Thursday, June 12, 2014 10:15:07 AM
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core

+1
━━━
From: Kuvaja, Erno [kuv...@hp.com]
Sent: Thursday, June 12, 2014 9:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core



+1



From: Alex Meade [mailto:mr.alex.me...@gmail.com]
Sent: 12 June 2014 13:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Nominating Nikhil Komawar for Core



+100 it's about time!



On Thu, Jun 12, 2014 at 3:26 AM, Mark Washenberger <
mark.washenber...@markwash.net> wrote:

   Hi folks,




   I'd like to nominate Nikhil Komawar to join glance-core. His code and
   review contributions over the past years have been very helpful and he's
   been taking on a very important role in advancing the glance tasks work.




   If anyone has any concerns, please let me know. Otherwise I'll make the
   membership change next week (which is code for, when someone reminds me
   to!)




   Thanks!

   markwash


   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


pgp79DpruLyUU.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-13 Thread Matthew Booth
On 12/06/14 21:38, Joshua Harlow wrote:
> So just a few thoughts before going to far down this path,
> 
> Can we make sure we really, really understand the use-case where we think
> this is needed. I think it's fine that this use-case exists, but I just
> want to make it very clear to others why it's needed and why distributed
> locking is the only *correct* way.

An example use of this would be side-loading an image from another
node's image cache rather than fetching it from glance, which would have
very significant performance benefits in the VMware driver, and possibly
other places. The copier must take a read lock on the image to prevent
the owner from ageing it during the copy. Holding a read lock would also
assure the copier that the image it is copying is complete.

> This helps set a good precedent for others that may follow down this path
> that they also clearly explain the situation, how distributed locking
> fixes it and all the corner cases that now pop-up with distributed locking.
> 
> Some of the questions that I can think of at the current moment:
> 
> * What happens when a node goes down that owns the lock, how does the
> software react to this?

This can be well defined according to the behaviour of the backend. For
example, it is well defined in zookeeper when a node's session expires.
If the lock holder is no longer a valid node, it would be fenced before
deleting its lock, allowing other nodes to continue.

Without fencing it would not be possible to safely continue in this case.

> * What resources are being locked; what is the lock target, what is its
> lifetime?

These are not questions for a locking implementation. A lock would be
held on a name, and it would be up to the api user to ensure that the
protected resource is only used while correctly locked, and that the
lock is not held longer than necessary.

> * What resiliency do you want this lock to provide (this becomes a
> critical question when considering memcached, since memcached is not
> really the best choice for a resilient distributing locking backend)?

What does resiliency mean in this context? We really just need the lock
to be correct.

> * What do entities that try to acquire a lock do when they can't acquire
> it?

Typically block, but if a use case emerged for trylock() it would be
simple to implement. For example, in the image side-loading case we may
decide that if it isn't possible to immediately acquire the lock it
isn't worth waiting, and we just fetch it from glance anyway.
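
To make the API quoted below concrete, here is a rough sketch of what the
driver interface might look like (lock/unlock/fence come from the original
proposal; everything else is illustrative):

import abc


class DistributedLockDriver(object):
    """Sketch of the proposed lock API; backends could be DB, ZK, memcached."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def lock(self, name, fence_info):
        """Block until the named lock is acquired.

        fence_info describes how to fence this holder if it becomes
        delinquent (e.g. a vSphere session id, or IPMI details).
        """

    @abc.abstractmethod
    def unlock(self, name):
        """Release the named lock."""


class FencingDriver(object):
    """Sketch of a pluggable fencing backend."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def fence(self, fence_info):
        """Forcibly cut a delinquent holder off from the shared resource."""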

> A useful thing I wrote up a while ago, might still be useful:
> 
> https://wiki.openstack.org/wiki/StructuredWorkflowLocks
> 
> Feel free to move that wiki if u find it useful (its sorta a high-level
> doc on the different strategies and such).

Nice list of implementation pros/cons.

Matt

> 
> -Josh
> 
> -Original Message-
> From: Matthew Booth 
> Organization: Red Hat
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Thursday, June 12, 2014 at 7:30 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [nova] Distributed locking
> 
>> We have a need for a distributed lock in the VMware driver, which I
>> suspect isn't unique. Specifically it is possible for a VMware datastore
>> to be accessed via multiple nova nodes if it is shared between
>> clusters[1]. Unfortunately the vSphere API doesn't provide us with the
>> primitives to implement robust locking using the storage layer itself,
>> so we're looking elsewhere.
>>
>> The closest we seem to have in Nova currently are service groups, which
>> currently have 3 implementations: DB, Zookeeper and Memcached. The
>> service group api currently provides simple membership, but for locking
>> we'd be looking for something more.
>>
>> I think the api we'd be looking for would be something along the lines of:
>>
>> Foo.lock(name, fence_info)
>> Foo.unlock(name)
>>
>> Bar.fence(fence_info)
>>
>> Note that fencing would be required in this case. We believe we can
>> fence by terminating the other Nova's vSphere session, but other options
>> might include killing a Nova process, or STONITH. These would be
>> implemented as fencing drivers.
>>
>> Although I haven't worked through the detail, I believe lock and unlock
>> would be implementable in all 3 of the current service group drivers.
>> Fencing would be implemented separately.
>>
>> My questions:
>>
>> * Does this already exist, or does anybody have patches pending to do
>> something like this?
>> * Are there other users for this?
>> * Would service groups be an appropriate place, or a new distributed
>> locking class?
>> * How about if we just used zookeeper directly in the driver?
>>
>> Matt
>>
>> [1] Cluster ~= hypervisor
>> -- 
>> Matthew Booth
>> Red Hat Engineering, Virtualisation Team
>>
>> Phone: +442070094448 (UK)
>> GPG ID:  D33C3490
>> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
>>
>> ___

Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

2014-06-13 Thread henry hly
Hi car1,

In the link:
https://docs.google.com/document/d/1jCmraZGirmXq5V1MtRqhjdZCbUfiwBhRkUjDXGt5QUQ/edit,
 there are some words like: "When the node is being scheduled to host the
SNAT, a new namespace and internal IP address will be assigned to host the
SNAT service.  Any nova instance VM that is connected to the router will
have this new SNAT IP as its external gateway address. "

Can a nova VM see this secondary IP? I think that even on the node hosting
the SNAT, the IR still exists. So a VM on this node will also see the IP of
the IR interface and send packets to the IR first; the IR will then redirect
the traffic to the SNAT on the same node (but in a different namespace). Is
that right?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-13 Thread Julien Danjou
On Thu, Jun 12 2014, Jay Pipes wrote:

> This is news to me. When was this decided and where can I read about
> it?

Originally https://wiki.openstack.org/wiki/Oslo/blueprints/service-sync
was proposed, presented and accepted back at the Icehouse summit in
HKG. That's what led to tooz's creation and development since then.
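
For those who haven't looked at tooz yet, basic usage is along these lines
(a sketch assuming the ZooKeeper backend; the API may still move around):

from tooz import coordination

# One coordinator per process, identified by a unique member id.
coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181', b'compute-1')
coord.start()

# Locks are named; fencing hooks would have to be layered on top.
lock = coord.get_lock(b'image-cache-1234')
if lock.acquire(blocking=True):
    try:
        pass  # work on the shared resource goes here
    finally:
        lock.release()

coord.stop()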

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-13 Thread Kevin Benton
If these tokens are variable length up to 4k, the search space becomes much
too large to construct any kind of useful table. Rainbow tables become
infeasible for A-z0-9 variable-length password sets above 10 chars if you
include every permutation. Unless the tokens are generated in a very
predictable manner that excludes a ton of possibilities, we shouldn't have
to worry about rainbow tables.
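
(Back of the envelope: even the old fixed-length UUID-style tokens are 32 hex
characters, i.e. a keyspace of 16^32 = 2^128, roughly 3.4 x 10^38; the
multi-kilobyte PKI tokens are far beyond that.)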

--
Kevin Benton


On Fri, Jun 13, 2014 at 12:52 AM, Robert Collins 
wrote:

> On 12 June 2014 23:59, Sean Dague  wrote:
>
> > The only thing it makes harder is you have to generate your own token to
> > run the curl command. The rest is there. Because everyone is running our
> > servers at debug levels, it means the clients are going to be running
> > debug level as well (yay python logging!), so this is something I don't
> > think people realized was a huge issue.
> >
> >> Anyway I have sent a patch for swiftclient for this in :
> >>
> >> https://review.openstack.org/#/c/99632/1
> >>
> >> Personally I don't think I like much that SHA1 and i'd rather use the
> >> first 16 bytes of the token (like we did in swift server)
> >
> > Using a well known hash means you can verify it was the right thing if
> > you have access to the original data. Just taking the first 16 bytes
> > doesn't give you that, so I think the hash provides slightly more
> > debugability.
>
> Would it be possible to salt it? e.g. make a 128bit salt and use that.
> The same token used twice will log with the same salt, but you won't
> have the rainbow table weakness.
>
> The length of tokens isn't a particularly strong defense against
> rainbow tables AIUI: if folk realise we have tokens exposed, they will
> just use a botnet to build a table specifically targetting us.
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repository update and the way forward

2014-06-13 Thread Carlos Gonçalves
You can see formatted versions (HTML) once Jenkins finishes building.
Open the gate-neutron-specs-docs link provided by Jenkins and browse to the 
blueprint you want to read.

Carlos Goncalves

On 13 Jun 2014, at 04:36, YAMAMOTO Takashi  wrote:

>> Since Juno-1 is about to close, I wanted to give everyone an update on
>> Neutron's usage of the specs repository. These are observations from
>> using this since a few weeks before the Summit. I thought it would be
>> good to share with the broader community to see if other projects
>> using spec repositories had similar thoughts, and I also wanted to
>> share this info for BP submitters and reviewers.
> 
> it would be better if there's an easy way for reviewers to
> see formatted versions of specs rather than raw *.rst, especially
> for diagrams.  Do other projects have such machinery?
> 
> YAMAMOTO Takashi
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review guidelines for API patches

2014-06-13 Thread Day, Phil
Hi Chris,

>The documentation is NOT the canonical source for the behaviour of the API, 
>currently the code should be seen as the reference. We've run into issues 
>before where people have tried to align code to fit the documentation and 
>made backwards incompatible changes (although this is not one).

I’ve never seen this defined before – is this published as official OpenStack
or Nova policy?

Personally I think we should be putting as much effort into reviewing the API
docs as we do API code, so that we can say that the API docs are the canonical
source for behavior.  Not being able to fix bugs in, say, input validation that
escape code reviews, because the fixes break backwards compatibility, seems to
be a weakness to me.


Phil



From: Christopher Yeoh [mailto:cbky...@gmail.com]
Sent: 13 June 2014 04:00
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Review guidelines for API patches

On Fri, Jun 13, 2014 at 11:28 AM, Matt Riedemann 
mailto:mrie...@linux.vnet.ibm.com>> wrote:


On 6/12/2014 5:58 PM, Christopher Yeoh wrote:
On Fri, Jun 13, 2014 at 8:06 AM, Michael Still 
mailto:mi...@stillhq.com>
>> wrote:

In light of the recent excitement around quota classes and the
floating ip pollster, I think we should have a conversation about the
review guidelines we'd like to see for API changes proposed against
nova. My initial proposal is:

  - API changes should have an associated spec


+1

  - API changes should not be merged until there is a tempest change to
test them queued for review in the tempest repo


+1

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

We do have some API change guidelines here [1].  I don't want to go overboard 
on every change and require a spec if it's not necessary, i.e. if it falls into 
the 'generally ok' list in that wiki.  But if it's something that's not 
documented as a supported API (so it's completely new) and is pervasive (going 
into novaclient so it can be used in some other service), then I think that 
warrants some spec consideration so we don't miss something.

To compare, this [2] is an example of something that is updating an existing 
API but I don't think warrants a blueprint since I think it falls into the 
'generally ok' section of the API change guidelines.

So really I see this as a new feature, not a bug fix. Someone thought that detail 
was supported when writing the documentation but it never was. The 
documentation is NOT the canonical source for the behaviour of the API; 
currently the code should be seen as the reference. We've run into issues 
before where people have tried to align code to fit the documentation and 
made backwards incompatible changes (although this is not one).

Perhaps we need a streamlined queue for very simple API changes, but I do think 
API changes should get more than the usual review because we have to live with 
them for so long (short of an emergency revert if we catch it in time).

[1] https://wiki.openstack.org/wiki/APIChangeGuidelines
[2] https://review.openstack.org/#/c/99443/

--

Thanks,

Matt Riedemann



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mid-Cycle Meetup

2014-06-13 Thread Clark, Robert Graham
Thanks Jamie,

Due to timing issues we've decided to host the OSSG mid-cycle separate from the 
Keystone/Barbican one - though it's worth noting that a lot of our interests 
overlap.

We are actually covering a bunch of interesting ground and anyone security 
oriented is welcome to attend.

https://etherpad.openstack.org/p/ossg-juno-meetup

Likewise I'm assured that OSSG folks can tag along at the Barbican meetup if 
they're available.

-Rob


> -Original Message-
> From: Jamie Lennox [mailto:jamielen...@redhat.com]
> Sent: 13 June 2014 03:55
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Openstack-security]
> [Barbican][OSSG][Keystone] Mid-Cycle Meetup
> 
> 
> 
> - Original Message -
> > From: "Jamie Lennox" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Sent: Friday, June 13, 2014 12:28:25 PM
> > Subject: Re: [openstack-dev] [Openstack-security]
> > [Barbican][OSSG][Keystone] Mid-Cycle Meetup
> >
> > On Thu, 2014-06-12 at 16:29 -0700, Valerie Anne Fenwick wrote:
> > > Hi FOlks
> > >
> > > I haven't seen anymore on this. Is this happening?  If so, are there
> > > more details? (location, agenda, etc).  Thanks!
> >
> > Details: http://dolphm.com/openstack-keystone-hackathon-for-juno/
> >
> > I think the agenda will be pushing forward and follow up from things
> > decided at the summit and generally getting things done (so nothing
> > formal).
> 
> Note: that this is the keystone part of it and my understanding is that
> barbican is happening in conjunction.
> Details for the OSSG one are here: https://etherpad.openstack.org/p/ossg-
> juno-meetup
> 
> >
> > > Valerie
> > >
> > > On 05/23/14 06:26 AM, Adam Young wrote:
> > > > The Keystone team  is going to be making sure the  code that needs
> > > > to be in by J2 is in.  That means the API changes.
> > > >
> > > > I'll be there.
> > > >
> > > >
> > > > On 05/23/2014 03:09 AM, Clark, Robert Graham wrote:
> > > >> I’d like to attend all the Barbican stuff and I’m sure there’ll
> > > >> be some interesting Keystone things too.
> > > >>
> > > >> I think it’s likely we’d do more parallel ‘OSSG’ stuff on the
> > > >> Keystone days though
> > > >>
> > > >> I’m free on these dates.
> > > >>
> > > >> From: Bryan Payne mailto:bdpa...@acm.org>>
> > > >> Date: Friday, 23 May 2014 02:14
> > > >> To: Jarret Raim
> > > >>
> mailto:jarret.r...@rackspace.com>>
> > > >> Cc:
> > > >> "openstack-secur...@lists.openstack.org secur...@lists.openstack.org>"
> > > >> mailto:openstack-security
> > > >> @lists.openstack.org>>,
> > > >> OpenStack List
> > > >> mailto:openstack-
> d...@lists.ope
> > > >> nstack.org>>
> > > >> Subject: Re: [Openstack-security] [Barbican][OSSG][Keystone]
> > > >> Mid-Cycle Meetup
> > > >>
> > > >> I plan on attending.
> > > >> -bryan
> > > >>
> > > >>
> > > >> On Thu, May 22, 2014 at 10:48 AM, Jarret Raim
> > > >> mailto:jarret.r...@rackspace.com>>
> wrote:
> > > >> All,
> > > >>
> > > >> There was some interest at the Summit in semi-combining the
> > > >> mid-cycle meet ups for Barbican, Keystone and the OSSG as there
> > > >> is some overlap in team members and interest areas. The current
> > > >> dates being considered are:
> > > >>
> > > >> Mon, July 7 - Barbican
> > > >> Tue, July 8 - Barbican
> > > >> Wed, July 9 - Barbican / Keystone Thu, July 10 - Keystone Fri,
> > > >> July 11 - Keystone
> > > >>
> > > >> Assuming these dates work for for everyone, we'll fit some OSSG
> > > >> work in during whatever days make the most sense. The current
> > > >> plan is to have the meet up in San Antonio at the new Geekdom
> > > >> location, which is downtown.
> > > >> This should make travel a bit easier for everyone as people won't
> > > >> need cars are there are plenty of hotels and restaurants within
> > > >> walking / short cab distance.
> > > >>
> > > >> I wanted to try to get a quick head count from the Barbican and
> > > >> OSSG folks (I think Dolph already has one for Keystone). I'd also
> > > >> like to know if you are a Barbican person interested in going to
> > > >> the Keystone sessions or vice versa.
> > > >>
> > > >> Once we get a rough head count estimate, Dolph and I can work on
> > > >> getting everything booked.
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >> Thanks,
> > > >>
> > > >> --
> > > >> Jarret Raim
> > > >> @jarretraim
> > > >>
> > > >>
> > > >>
> > > >> ___
> > > >> Openstack-security mailing list
> > > >> openstack-secur...@lists.openstack.org security@
> > > >> lists.openstack.org>
> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sec
> > > >> urity
> > > >>
> > > >>
> > > >>
> > > >> ___
> > > >> OpenStack-dev mailing list
> > > >> OpenStack-dev@lists.openstack.org
> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > > >
> > > > ___

Re: [openstack-dev] [Nova] Review guidelines for API patches

2014-06-13 Thread Day, Phil
I agree that we need to keep a tight focus on all API changes.

However, was the problem with the floating IP change just to do with the 
implementation in Nova, or with the frequency with which Ceilometer was calling 
it?  Whatever guidelines we follow on API changes themselves, it's pretty hard 
to protect against the impact of a system with admin creds putting a large load 
onto the system.

> -Original Message-
> From: Michael Still [mailto:mi...@stillhq.com]
> Sent: 12 June 2014 23:36
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Nova] Review guidelines for API patches
> 
> In light of the recent excitement around quota classes and the floating ip
> pollster, I think we should have a conversation about the review guidelines
> we'd like to see for API changes proposed against nova. My initial proposal 
> is:
> 
>  - API changes should have an associated spec
> 
>  - API changes should not be merged until there is a tempest change to test
> them queued for review in the tempest repo
> 
> Thoughts?
> 
> Michael
> 
> --
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Message level security plans. [barbican]

2014-06-13 Thread Clark, Robert Graham


> -Original Message-
> From: Jamie Lennox [mailto:jamielen...@redhat.com]
> Sent: 13 June 2014 03:25
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Message level security plans. [barbican]
> 
> On Thu, 2014-06-12 at 23:22 +, Tiwari, Arvind wrote:
> > Thanks Nathan for your response.
> >
> > Still not very convinced for two separate services.
> >
> > Keystone authentication is not a mandatory requirement for Barbican; as
> > per design it can work without Keystone authentication. The rest
> > (temporary session key generation, long-term keys, ...) are feature gaps
> > which can easily be developed in Barbican.
> >
> > If Barbican has to store long-term keys on behalf of Kite users then IMO
> > it is good to merge these two services. We can develop a separate plug-in
> > to achieve a KDC (or Kite plug-in). One can have two separate Barbican
> > deployments, one for KDC and another for KMS (or maybe only one with an
> > enhanced Barbican API).
> >
> > Thoughts?
> 
> I think you're looking at the wrong part of the stack here. Barbican
> is a user facing component, kite is looking at securing messages
> between services that are sent over a message bus. They will need to
> be deployed and configured in a completely different manner. The
> users in this case are host machines.
> 
> I don't think that session key generation is a 'gap' in barbican's
> features, I think it's very correctly outside of scope. Key
> generation such as this is done in a very specific format for a very
> specific use case, and includes a lot of metadata that has no
> application outside of kite.
> 
> Kite, or a KDC plugin, would still need to manage a whole lot of
> state around services and which services can talk to which other
> services. This is very much outside barbican's scope.
> 
> The only thing that is really common between the two projects is
> safely storing encrypted keys, and even then a fundamental purpose of
> barbican is being able to retrieve the keys, which kite should never
> do. So we would then need to start putting things into barbican to
> mark these keys as special...
> 
> It was discussed at the summit (you were in that room) that, when
> appropriate, we should look at having an option for kite to use
> barbican for its long-term key storage, and look to extract the
> secure key storage component from barbican (or kite) into a library
> that can be shared between the two components if feasible.
> 
> In summary, yes they both store keys, but the plugin you suggest will
> end up with the same API surface area as barbican itself, involve a
> lot of service management that is way outside of barbican's scope,
> will need to scale for a different reason, will need to be discovered
> by a different class of user and have different requirements on
> 'privacy' of keys.
> 
> Why would we want to deal with all that if you would still need
> another barbican deployment?
> 
> 
> Jamie
> 

I feel that we did a good job of discussing this at the summit and there
was a strong consensus that Kite is very distinct from Barbican: it
fulfils different use cases and operates at a different level of the
stack. To my mind, conceptually lumping Kite together with Barbican is as
confusing as throwing it in with Keystone because that handles user
credentials; different tools for different jobs.

-Rob


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute vfsguestfs

2014-06-13 Thread abhishek jain
Hi Rich

Can you help me regarding the possible cause for a VM getting stuck in the
spawning state on an Ubuntu PowerPC compute node in OpenStack using devstack?

Thanks


On Tue, Jun 10, 2014 at 10:29 PM, abhishek jain 
wrote:

> Hi Rich
>
> I was able to solve the problem regarding PAPR in libguestfs on my PowerPC
> Ubuntu. By default libguestfs was configuring a pseries machine, and I
> changed it to my original machine, i.e. ppce500. The changes are
> made in the ./src/guestfs-internal.h file.
>
> However, my VM is still getting stuck in the spawning state. The compute node
> is not able to generate the XML file required for running the instances, which
> I checked by comparing the nova-compute logs of the controller node as well as
> the compute node, since I am able to run a VM on the controller node.
>
> Thanks for the help,I'll let you know if any further issues.
>
>
>
> On Sat, Jun 7, 2014 at 5:06 PM, Richard W.M. Jones 
> wrote:
>
>> On Tue, May 27, 2014 at 03:25:10PM +0530, abhishek jain wrote:
>> > Hi Daniel
>> >
>> > Thanks for the help.
>> > The end result of my setup is that the VM is stucking at Spawning state
>> on
>> > my compute node whereas it is working fine on the controller node.
>> > Therefore I'm comparing nova-compute logs of both compute node as well
>> as
>> > controller node and trying to proceed step by step.
>> > I'm having all the above packages enabled
>> >
>> > Do you have any idea regarding reason for VM stucking at spawning state.
>>
>> The most common reason is that nested virt is broken.  libguestfs is the
>> canary
>> in the mine here, not the cause of the problem.
>>
>> Rich.
>>
>> >
>> >
>> > On Tue, May 27, 2014 at 2:38 PM, Daniel P. Berrange <
>> berra...@redhat.com>wrote:
>> >
>> > > On Tue, May 27, 2014 at 12:04:23PM +0530, abhishek jain wrote:
>> > > > Hi
>> > > > Below is the code to which I'm going to reffer to..
>> > > >
>> > > >  vim /opt/stack/nova/nova/virt/disk/vfs/api.py
>> > > >
>> > > >
>> #
>> > > >
>> > > > try:
>> > > >     LOG.debug(_("Trying to import guestfs"))
>> > > >     importutils.import_module("guestfs")
>> > > >     hasGuestfs = True
>> > > > except Exception:
>> > > >     pass
>> > > >
>> > > > if hasGuestfs:
>> > > >     LOG.debug(_("Using primary VFSGuestFS"))
>> > > >     return importutils.import_object(
>> > > >         "nova.virt.disk.vfs.guestfs.VFSGuestFS",
>> > > >         imgfile, imgfmt, partition)
>> > > > else:
>> > > >     LOG.debug(_("Falling back to VFSLocalFS"))
>> > > >     return importutils.import_object(
>> > > >         "nova.virt.disk.vfs.localfs.VFSLocalFS",
>> > > >         imgfile, imgfmt, partition)
>> > > >
>> > > > ###
>> > > >
>> > > > When I'm launching  VM from the controller node onto compute
>> node,the
>> > > > nova compute logs on the compute node displays...Falling back to
>> > > > VFSLocalFS and the result is that the VM is stuck in spawning state.
>> > > > However, when I'm trying to launch a VM onto the controller node from the
>> > > > controller node itself, the nova compute logs on the controller node
>> > > > display ...Using primary VFSGuestFS and I'm able to launch a VM on the
>> > > > controller node.
>> > > > Is there any module in the kernel or any package that i need to
>> > > > enable.Please help regarding this.
>> > >
>> > > VFSGuestFS requires the libguestfs python module & corresponding
>> native
>> > > package to be present, and only works with KVM/QEMU enabled hosts.
>> > >
>> > > VFSLocalFS requires loopback module, nbd module, qemu-nbd, kpartx and
>> > > a few other misc host tools
>> > >
>> > > Neither of these should cause a VM getting stuck in the spawning
>> > > state, even if stuff they need is missing.
>> > >
>> > > Regards,
>> > > Daniel
>> > > --
>> > > |: http://berrange.com  -o-
>> http://www.flickr.com/photos/dberrange/:|
>> > > |: http://libvirt.org  -o-
>> http://virt-manager.org:|
>> > > |: http://autobuild.org   -o-
>> http://search.cpan.org/~danberr/:|
>> > > |: http://entangle-photo.org   -o-
>> http://live.gnome.org/gtk-vnc:|
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>>
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>> Richard Jones, Virtualization Group, Red Hat
>> http://people.redhat.com/~rjones
>> Read my programming and virtualization blog: http://rwmj.wordpress.com
>> virt-builder quickly builds VMs from scratch
>> http://libguestfs.org/virt-builder.1.html
>>
>> ___

[openstack-dev] [NFV] Specific example NFV use case for a data plane app

2014-06-13 Thread Calum Loudon
Hello all

At Wednesday's meeting I promised to supply specific examples to help 
illustrate the NFV use cases and also show how they map to some of the
blueprints.  Here's my first example - info on our session border
controller, which is a data plane app.  Please let me know if this is 
the sort of example and detail the group are looking for, then I can
add it into the wiki and send out info on the second, a vIMS core. 

Use case example


Perimeta Session Border Controller, Metaswitch Networks.  Sits on the
edge of a service provider's network and polices SIP and RTP (i.e. VoIP)
control and media traffic passing over the access network between 
end-users and the core network or the trunk network between the core and
another SP.  

Characteristics relevant to NFV/OpenStack
-

Fast & guaranteed performance:
-   fast = performance of order of several million VoIP packets (~64-220
bytes depending on codec) per second per core (achievable on COTS hardware)
-   guaranteed via SLAs.

Fully HA, with no SPOFs and service continuity over software and hardware
failures.

Elastically scalable by adding/removing instances under the control of the
NFV orchestrator.

Ideally, ability to separate traffic from different customers via VLANs.

Requirements and mapping to blueprints
--

Fast & guaranteed performance - implications for network:

-   the packets per second target -> either SR-IOV or an accelerated
DPDK-like data plane
-   maps to the SR-IOV and accelerated vSwitch blueprints:
-   "SR-IOV Networking Support" 
(https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov)
-   "Open vSwitch to use patch ports" 
(https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use)
-   "userspace vhost in ovd vif bindings" 
(https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost)
-   "Snabb NFV driver" 
(https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver)
-   "VIF_SNABB" 
(https://blueprints.launchpad.net/nova/+spec/vif-snabb)

Fast & guaranteed performance - implications for compute:

-   to optimize data rate we need to keep all working data in L3 cache
-> need to be able to pin cores
-   "Virt driver pinning guest vCPUs to host pCPUs" 
(https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning)

-   similarly to optimize data rate need to bind to NIC on host CPU's bus
-   "I/O (PCIe) Based NUMA Scheduling" 
(https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling)

-   to offer guaranteed performance as opposed to 'best efforts' we need
to control placement of cores, minimise TLB misses and get accurate 
info about core topology (threads vs. hyperthreads etc.); maps to the
remaining blueprints on NUMA & vCPU topology:
-   "Virt driver guest vCPU topology configuration" 
(https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology)
-   "Virt driver guest NUMA node placement & topology" 
(https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement)
-   "Virt driver large page allocation for guest RAM" 
(https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages)

-   may need support to prevent 'noisy neighbours' stealing L3 cache -
unproven, and no blueprint we're aware of.

HA:
-   requires anti-affinity rules to prevent active/passive being
instantiated on the same host - already supported, so no gap (see the
example after this list).

Elastic scaling:
-   similarly readily achievable using existing features - no gap.

VLAN trunking:
-   maps straightforwardly to "VLAN trunking networks for NFV" 
(https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al).

Other:
-   being able to offer apparent traffic separation (e.g. service
traffic vs. application management) over a single network is also
useful in some cases
-   "Support two interfaces from one VM attached to the same 
network" (https://blueprints.launchpad.net/nova/+spec/2-if-1-net)

regards

Calum


Calum Loudon 
Director, Architecture
+44 (0)208 366 1177
 
METASWITCH NETWORKS 
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints

2014-06-13 Thread Carlos Gonçalves
Hi,

I like the idea of arranging a mid cycle for Neutron in Europe somewhere in 
July. I was also considering inviting folks from the OpenStack NFV team to meet 
up for a F2F kick-off.

I did not know about the sprint being hosted and organised by eNovance in Paris
until just now. I think it is a great initiative from eNovance, not least because
it’s not focused on a specific OpenStack project. So, I'm interested in
participating in this sprint to discuss Neutron and NFV. Two more people
from Instituto de Telecomunicacoes and Portugal Telecom have shown interest
too.

Neutron and NFV team members: who is interested in meeting in Paris, or, if not
available on the date set by eNovance, at another time and place?

Thanks,
Carlos Goncalves

On 13 Jun 2014, at 08:42, Sylvain Bauza  wrote:

> Le 12/06/2014 15:32, Gary Kotton a écrit :
>> Hi,
>> There is the mid cycle sprint in July for Nova and Neutron. Anyone 
>> interested in maybe getting one together in Europe/Middle East around the 
>> same dates? If people are willing to come to this part of the world I am 
>> sure that we can organize a venue for a few days. Anyone interested. If we 
>> can get a quorum then I will be happy to try and arrange things.
>> Thanks
>> Gary
>> 
> 
> 
> Hi Gary,
> 
> Wouldn't it be more interesting to have a mid-cycle sprint *before* the Nova 
> one (which is targeted after juno-2) so that we could discuss on some topics 
> and make a status to other folks so that it would allow a second run ?
> 
> There is already a proposal in Paris for hosting some OpenStack sprints, see 
> https://wiki.openstack.org/wiki/Sprints/ParisJuno2014
> 
> -Sylvain
> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-13 Thread Thomas Goirand
On 06/13/2014 06:53 AM, Morgan Fainberg wrote:
> Hi Thomas,
> 
> I felt a couple sentences here were reasonable to add (more than “don’t
> care” from before). 
> 
> I understand your concerns here, and I totally get what you’re driving
> at, but in the packaging world wouldn’t this make sense to call it
> "python-bash8"?

Yes, this is what will happen.

> Now the binary, I can agree (for reasons outlined)
> should probably not be named ‘bash8’, but the name of the “command”
> could be separate from the packaging / project name.

If upstream chooses /usr/bin/bash8, I'll have to follow. I don't want to
carry patches which I'd have to maintain.

> Beyond a relatively minor change to the resulting “binary” name [sure
> bash-tidy, or whatever we come up with], is there something more that
> really is awful (rather than just silly) about the naming?

Renaming python-bash8 into something else is not possible, because the
Debian standard is to use, as Debian name, what is used for the import.
So if we have "import xyz", then the package will be python-xyz.

> I just don’t
> see how if we don’t namespace collide on the executable side, how there
> can be any real confusion (python-bash8, sure pypi is a little
> different) over what is being installed.

The problem is that bash8 doesn't express anything but "bash version 8",
unless you know pep8.


On 06/13/2014 07:32 AM, Stefano Maffulli wrote:
> As a user, I hate to have to follow the abstruse reasoning of a
> random set of developers forcing a packager to pick a name for the
> package that is different than the executable. A unicorn dies every
> time `apt-get install sillypackage && sillypackage` results in "File
> not found". Dang! that was my favorite unicorn.

I agree. Names are important.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Day, Phil
Hi Folks,

I was looking at the resize code in libvirt, and it has checks which raise an 
exception if the target root or ephemeral disks are smaller than the current 
ones - which seems fair enough I guess (you can't drop arbitrary disk content on 
resize), except that, because the check is in the virt driver, the effect is 
to just ignore the request (the instance remains active rather than going to 
resize-verify).

It made me wonder if there were any hypervisors that actually allow this, and 
if not wouldn't it be better to move the check to the API layer so that the 
request can be failed rather than silently ignored ?

As far as I can see:

baremetal: Doesn't support resize

hyperv: Checks only for root disk 
(https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
  )

libvirt: fails for a reduction of either root or ephemeral  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923
 )

vmware:   doesn't seem to check at all ?

xen: Allows resize down for root but not for ephemeral 
(https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
 )


It feels kind of clumsy to have such a wide variation of behavior across the 
drivers, and to have the check performed only in the driver ?
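
To make the suggestion concrete, an API-layer guard might look something like
the sketch below - this is just an illustration of the idea, not what nova
currently does, and the helper name is made up:

    from webob import exc

    def validate_resize_disks(old_flavor, new_flavor):
        """Reject resize requests that would shrink a disk (hypothetical helper)."""
        for key in ("root_gb", "ephemeral_gb"):
            if new_flavor[key] < old_flavor[key]:
                raise exc.HTTPBadRequest(
                    explanation="Resizing to a smaller %s is not supported" % key)

    # Example: shrinking the root disk from 20G to 10G would then fail fast at
    # the API instead of being silently ignored by the virt driver.
    validate_resize_disks({"root_gb": 20, "ephemeral_gb": 0},
                          {"root_gb": 10, "ephemeral_gb": 0})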

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Aryeh Friedman
Theoretically impossible to reduce disk unless you have some really nasty
guest additions.


On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil  wrote:

>  Hi Folks,
>
>
>
> I was looking at the resize code in libvirt, and it has checks which raise
> an exception if the target root or ephemeral disks are smaller than the
> current ones – which seems fair enough I guess (you can’t drop arbitary
> disk content on resize), except that the  because the check is in the virt
> driver the effect is to just ignore the request (the instance remains
> active rather than going to resize-verify).
>
>
>
> It made me wonder if there were any hypervisors that actually allow this,
> and if not wouldn’t it be better to move the check to the API layer so that
> the request can be failed rather than silently ignored ?
>
>
>
> As far as I can see:
>
>
>
> baremetal: Doesn’t support resize
>
>
>
> hyperv: Checks only for root disk (
> https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
>  )
>
>
>
> libvirt: fails for a reduction of either root or ephemeral  (
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923
> )
>
>
>
> vmware:   doesn’t seem to check at all ?
>
>
>
> xen: Allows resize down for root but not for ephemeral (
> https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
> )
>
>
>
>
>
> It feels kind of clumsy to have such a wide variation of behavior across
> the drivers, and to have the check performed only in the driver ?
>
>
>
> Phil
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-13 Thread Carlos Gonçalves
Hi Sumit,

My concern was related to sharing common configuration information not
between GP drivers but between GP and ML2 (and any other
future plugins). When both are enabled, users need to configure driver
information (e.g., endpoint, username, password) twice, where applicable
(e.g., when using ODL for both ML2 and GP). A common configuration file here
could help, yes.
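
For illustration, one way to avoid the duplication would be for both drivers to
register (or share) a single configuration group. A sketch with oslo.config -
the group and option names here are only examples, not what either driver does
today:

    from oslo.config import cfg

    odl_opts = [
        cfg.StrOpt('url', help='HTTP URL of the OpenDaylight REST interface'),
        cfg.StrOpt('username', help='ODL username'),
        cfg.StrOpt('password', secret=True, help='ODL password'),
    ]

    cfg.CONF.register_opts(odl_opts, group='ml2_odl')

    # Both the ML2 mechanism driver and a GP policy driver could then read
    # cfg.CONF.ml2_odl.url / .username / .password from the same file instead
    # of each carrying its own copy of the credentials.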

Thanks,
Carlos Goncalves

On Thu, Jun 12, 2014 at 6:05 PM, Sumit Naiksatam 
wrote:

> Hi Carlos,
>
> I noticed that the point you raised here had not been followed up. So
> if I understand correctly, your concern is related to sharing common
> configuration information between GP drivers, and ML2 mechanism
> drivers (when used in the mapping)? If so, would a common
> configuration file  shared between the two drivers help to address
> this?
>
> Thanks,
> ~Sumit.
>
> On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves 
> wrote:
> > Hi,
> >
> > On 27 May 2014, at 15:55, Mohammad Banikazemi  wrote:
> >
> > GP like any other Neutron extension can have different implementations.
> Our
> > idea has been to have the GP code organized similar to how ML2 and
> mechanism
> > drivers are organized, with the possibility of having different drivers
> for
> > realizing the GP API. One such driver (analogous to an ML2 mechanism
> driver
> > I would say) is the mapping driver that was implemented for the PoC. I
> > certainly do not see it as the only implementation. The mapping driver is
> > just the driver we used for our PoC implementation in order to gain
> > experience in developing such a driver. Hope this clarifies things a bit.
> >
> >
> > The code organisation adopted to implement the PoC for the GP is indeed
> very
> > similar to the one ML2 is using. There is one aspect I think GP will hit
> > soon if it continues to follow with its current code base where multiple
> > (policy) drivers will be available, and as Mohammad putted it as being
> > analogous to an ML2 mech driver, but are independent from ML2’s. I’m
> > unaware, however, if the following problem has already been brought to
> > discussion or not.
> >
> > From here I see the GP effort going, besides from some code refactoring,
> I'd
> > say expanding the supported policy drivers is the next goal. With that
> ODL
> > support might next. Now, administrators enabling GP ODL support will
> have to
> > configure ODL data twice (host, user, password) in case they’re using
> ODL as
> > a ML2 mech driver too, because policy drivers share no information
> between
> > ML2 ones. This can become more troublesome if ML2 is configured to load
> > multiple mech drivers.
> >
> > With that said, if it makes any sense, a different implementation should
> be
> > considered. One that somehow allows mech drivers living in ML2 umbrella
> to
> > be extended; BP [1] [2] may be a first step towards that end, I’m
> guessing.
> >
> > Thanks,
> > Carlos Gonçalves
> >
> > [1]
> >
> https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
> > [2] https://review.openstack.org/#/c/89208/
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-13 Thread Sean Dague
On 06/13/2014 06:04 AM, Thomas Goirand wrote:
> On 06/13/2014 06:53 AM, Morgan Fainberg wrote:
>> Hi Thomas,
>>
>> I felt a couple sentences here were reasonable to add (more than “don’t
>> care” from before). 
>>
>> I understand your concerns here, and I totally get what you’re driving
>> at, but in the packaging world wouldn’t this make sense to call it
>> "python-bash8"?
> 
> Yes, this is what will happen.
> 
>> Now the binary, I can agree (for reasons outlined)
>> should probably not be named ‘bash8’, but the name of the “command”
>> could be separate from the packaging / project name.
> 
> If upstream chooses /usr/bin/bash8, I'll have to follow. I don't want to
> carry patches which I'd have to maintain.
> 
>> Beyond a relatively minor change to the resulting “binary” name [sure
>> bash-tidy, or whatever we come up with], is there something more that
>> really is awful (rather than just silly) about the naming?
> 
> Renaming python-bash8 into something else is not possible, because the
> Debian standard is to use, as Debian name, what is used for the import.
> So if we have "import xyz", then the package will be python-xyz.
> 
>> I just don’t
>> see how if we don’t namespace collide on the executable side, how there
>> can be any real confusion (python-bash8, sure pypi is a little
>> different) over what is being installed.
> 
> The problem is that bash8 doesn't express anything but "bash version 8",
> unless you know pep8.

Impinging on the bash namespace is something I'll almost buy, except
that bash never ships with a version number.

I'd be vaguely amenable to renaming the package/binary to bashate,
which is pronounced the same, but doesn't have the same namespacing
problem. Will talk with Matt Odden about it today.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenDaylight ML2 mechanism driver - Openstack

2014-06-13 Thread Sachi Gupta
Hi,

I have set up OpenStack Havana with the neutron ML2 plugin and the ODL controller 
following the link 
http://www.siliconloons.com/getting-started-with-opendaylight-and-openstack/.

The ODL mechanism driver file in 
/opt/stack/neutron/neutron/plugins/ml2/drivers/mechanism_odl.py is 
attached.


When I issued the network create command with no network name, the call 
was passed to ODL and returned a 400 error code in the sendjson method.
At line 187 this exception is caught but ignored, as a result of which the network 
is created in OpenStack but fails in ODL.

Please suggest the changes that need to be made to keep OpenStack 
synchronized with ODL.
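
One possible direction (a sketch only - the real method names and signatures in
the attached mechanism_odl.py may differ, and send_json below just stands in for
the driver's sendjson() helper) is to let the HTTP error propagate so that ML2
fails the operation instead of leaving Neutron and ODL out of sync:

    import logging
    import requests

    LOG = logging.getLogger(__name__)

    def push_network_to_odl(send_json, url, network):
        """Hypothetical wrapper around the driver's REST call."""
        try:
            send_json('post', url, {'network': network})
        except requests.exceptions.HTTPError:
            LOG.exception("ODL rejected network %s", network.get('id'))
            # Re-raise instead of ignoring: ML2 then aborts the operation,
            # so the network is not left created only on the OpenStack side.
            raise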

Thanks & Regards
Sachi Gupta
Systems Engineer
Tata Consultancy Services
Mailto: sachi.gu...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services
Business Solutions
Consulting

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




mechanism_odl.py
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-13 Thread Sean Dague
Right, because your 'site' service catalog is encoded in them, they are
big, of unpredictable length, and they are going to be differently
seeded for every installation out there.

Which is why rainbow tables didn't seem to be a valid threat to me.

-Sean
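
For reference, the masking being discussed amounts to something like the sketch
below (the '{SHA1}' prefix and the header handling are illustrative, based on
the swiftclient review mentioned earlier):

    import hashlib

    def mask_token(token):
        """Return a recognisable but non-replayable form of an auth token."""
        return "{SHA1}" + hashlib.sha1(token.encode('utf-8')).hexdigest()

    headers = {"X-Auth-Token": "0123456789abcdef"}   # example value only
    safe_headers = {k: (mask_token(v) if k == "X-Auth-Token" else v)
                    for k, v in headers.items()}
    print(safe_headers)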

On 06/13/2014 05:09 AM, Kevin Benton wrote:
> If these tokens are variable length up to 4k, it will make the search
> space much too large to construct any kind of useful table. They become
> infeasible for A-z0-9 variable-length password sets above 10 chars if
> you include every permutation. Assuming the tokens are generated in a
> very predictable manner that exclude a ton of possibilities, we
> shouldn't have to worry about rainbow tables.
> 
> --
> Kevin Benton
> 
> 
> On Fri, Jun 13, 2014 at 12:52 AM, Robert Collins
> mailto:robe...@robertcollins.net>> wrote:
> 
> On 12 June 2014 23:59, Sean Dague  > wrote:
> 
> > The only thing it makes harder is you have to generate your own
> token to
> > run the curl command. The rest is there. Because everyone is
> running our
> > servers at debug levels, it means the clients are going to be running
> > debug level as well (yay python logging!), so this is something I
> don't
> > think people realized was a huge issue.
> >
> >> Anyway I have sent a patch for swiftclient for this in :
> >>
> >> https://review.openstack.org/#/c/99632/1
> >>
> >> Personally I don't think I like much that SHA1 and i'd rather use the
> >> first 16 bytes of the token (like we did in swift server)
> >
> > Using a well known hash means you can verify it was the right thing if
> > you have access to the original data. Just taking the first 16 bytes
> > doesn't give you that, so I think the hash provides slightly more
> > debugability.
> 
> Would it be possible to salt it? e.g. make a 128bit salt and use that.
> The same token used twice will log with the same salt, but you won't
> have the rainbow table weakness.
> 
> The length of tokens isn't a particularly strong defense against
> rainbow tables AIUI: if folk realise we have tokens exposed, they will
> just use a botnet to build a table specifically targetting us.
> 
> -Rob
> 
> --
> Robert Collins mailto:rbtcoll...@hp.com>>
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Kevin Benton
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct access to review.openstack.org port 29418 required

2014-06-13 Thread Erlon Cruz
Hi Asselin,

Did you have problems with other ports? Is outbound access to any other ports
needed?

Erlon
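
For anyone wanting to check their own environment, the event-stream test from
the instructions quoted below boils down to roughly this (USERNAME is a
placeholder, and your Gerrit SSH key needs to be loaded):

    import subprocess

    # If this starts printing JSON events (or just blocks waiting for them),
    # outbound access to port 29418 is fine; a timeout or "connection refused"
    # means it is still being filtered.
    subprocess.call([
        "ssh", "-p", "29418", "USERNAME@review.openstack.org",
        "gerrit", "stream-events",
    ])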



On Mon, Jun 9, 2014 at 2:21 PM, Asselin, Ramy  wrote:

>  All,
>
>
>
> I’ve been working on setting up our Cinder 3rd party CI setup.
>
> I ran into an issue where Zuul requires direct access to
> review.openstack.org port 29418, which is currently blocked in my
> environment. It should be unblocked around the end of June.
>
>
>
> Since this will likely affect other vendors, I encourage you to take a few
> minutes and check if this affects you in order to allow sufficient time
> to resolve.
>
>
>
> Please follow the instructions in section “Reading the Event Stream” here:
> [1]
>
> Make sure you can get the event stream ~without~ any tunnels or proxies,
> etc. such as corkscrew [2].
>
> (Double-check that any such configurations are commented out in:
> ~/.ssh/config and /etc/ssh/ssh_config)
>
>
>
> Ramy (irc: asselin)
>
>
>
> [1] http://ci.openstack.org/third_party.html
>
> [2] http://en.wikipedia.org/wiki/Corkscrew_(program)
>
>
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-13 Thread Sean Dague
The password dumping is actually in oslo apiclient. So that too should
be scrubbed, but it has to happen in oslo first.

So mostly just because I found it here.

-Sean

On 06/12/2014 10:47 PM, Xuhan Peng wrote:
> Sorry to interrupt this discussion.
> 
> Sean, 
> 
> Since I'm working the neutron client code change, by looking at your
> code change to nova client, looks like only X-Auth-Token is taken care
> of in http_log_req. There is also password in header and token id in
> response. Any particular reason that they are not being taken care of?
> 
> Thanks, 
> Xu Han
> —
> Sent from Mailbox  for iPhone
> 
> 
> On Fri, Jun 13, 2014 at 8:47 AM, Gordon Chung  > wrote:
> 
> >I'm hoping we can just ACK this approach, and get folks to start moving
> > patches through the clients to clean this all up.
> 
> just an fyi, in pyCADF, we obfuscate tokens similar to how credit
> cards are handled: by capturing a percentage of leading and trailing
> characters and substituting the middle ie. "4724  8478".
> whatever we decide here, i'm all for having a consistent way of
> masking and minimising tokens in OpenStack.
> 
> cheers,
> gordon chung
> openstack, ibm software standards 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-13 Thread Sean Dague
On 06/12/2014 10:18 PM, Dan Prince wrote:
> On Thu, 2014-06-12 at 09:24 -0700, Joe Gordon wrote:
>>
>> On Jun 12, 2014 8:37 AM, "Sean Dague"  wrote:
>>>
>>> On 06/12/2014 10:38 AM, Mike Bayer wrote:

 On 6/12/14, 8:26 AM, Julien Danjou wrote:
> On Thu, Jun 12 2014, Sean Dague wrote:
>
>> That's not cacthable in unit or functional tests?
> Not in an accurate manner, no.
>
>> Keeping jobs alive based on the theory that they might one day
>> be useful
>> is something we just don't have the liberty to do any more.
>> We've not
>> seen an idle node in zuul in 2 days... and we're only at j-1.
>> j-3 will
>> be at least +50% of this load.
> Sure, I'm not saying we don't have a problem. I'm just saying
>> it's not a
> good solution to fix that problem IMHO.

 Just my 2c without having a full understanding of all of
>> OpenStack's CI
 environment, Postgresql is definitely different enough that MySQL
 "strict mode" could still allow issues to slip through quite
>> easily, and
 also as far as capacity issues, this might be longer term but I'm
>> hoping
 to get database-related tests to be lots faster if we can move to
>> a
 model that spends much less time creating databases and schemas.
>>>
>>> This is what I mean by functional testing. If we were directly
>> hitting a
>>> real database on a set of in tree project tests, I think you could
>>> discover issues like this. Neutron was headed down that path.
>>>
>>> But if we're talking about a devstack / tempest run, it's not really
>>> applicable.
>>>
>>> If someone can point me to a case where we've actually found this
>> kind
>>> of bug with tempest / devstack, that would be great. I've just
>> *never*
>>> seen it. I was the one that did most of the fixing for pg support in
>>> Nova, and have helped other projects as well, so I'm relatively
>> familiar
>>> with the kinds of fails we can discover. The ones that Julien
>> pointed
>>> really aren't likely to be exposed in our current system.
>>>
>>> Which is why I think we're mostly just burning cycles on the
>> existing
>>> approach for no gain.
>>
>> Given all the points made above, I think dropping PostgreSQL is the
>> right choice; if only we had infinite cloud that would be another
>> story.
>>
>> What about converting one of our existing jobs (grenade partial ncpu,
>> large ops, regular grenade, tempest with nova network etc.) Into a
>> PostgreSQL only job? We could get some level of PostgreSQL testing
>> without any additional jobs, although this is  tradeoff obviously.
> 
> I'd be fine with this tradeoff if it allows us to keep PostgreSQL in the
> mix.

The problem isn't just testing, it's people looking at the failures in
the different configurations.

I'm glad everyone loves having lots of configurations. :)

I'm less glad we've got a 24hr merge queue in the gate because very few
people are actually sifting through the failed results to figure out why
and fix them. :(

If we had more people looking through failures then it would just be a
machine capacity problem. But it's not, it's also a people capacity problem.

It's just not sustainable as a project. Pleading with people to help on
the failed side has not worked over the last year. So I really think
we're at a point where we need to start throwing out jobs until we reduce
the failure rate to one where we can actually make forward progress.

Because right now we can't typically land fixes for the race conditions
in any timely manner, because they get stomped by other races. I've got a
giant set of outstanding patches to make some of this stuff more clear,
which is all stuck.

So if we can't evolve the system back towards health, we need to just
cut a bunch of stuff off until we can.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Review guidelines for API patches

2014-06-13 Thread Christopher Yeoh
Hi Phil,

On Fri, 13 Jun 2014 09:28:30 +
"Day, Phil"  wrote:
> 
> >The documentation is NOT the canonical source for the behaviour of
> >the API, currently the code should be seen as the reference. We've
> >run into issues before where people have tried to align code to the
> >fit the documentation and made backwards incompatible changes
> >(although this is not one).
> 
> I’ve never seen this defined before – is this published as official
> Openstack  or Nova policy ?

It's not published, but not following this guideline has got us into
trouble before, because we end up making backwards incompatible changes
to force the code to match the docs rather than the other way around.

The documentation has historically been generated manually by people
looking at the code and writing up what they think it does. NOTE: this
is not a reflection on the docs team - they have done an EXCELLENT job
based on what we've been giving them (just the api samples and the
code). It's very easy to get it wrong, and it's most often someone who has
not written the code who is writing the documentation and is not familiar
with what was actually merged.

> Personally I think we should be putting as much effort into reviewing
> the API docs as we do API code so that we can say that the API docs
> are the canonical source for behavior.Not being able to fix bugs
> in say input validation that escape code reviews because they break
> backwards compatibility seems to be a weakness to me.

+1 for people to go back through the v2 api docs and fix the
documentation where it is incorrect.

So our medium term goal (and this is one of the reasons behind wanting the
new v2.1/v3 infrastructure with json schema etc.) is to be able to
properly automate the production of the documentation from the code. So
there is no contradiction between the two.

I agree we need to be able to fix bugs that result in backwards
incompatible changes. v2.1 microversions should allow us to do that as
cleanly as possible.
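
For anyone who hasn't looked at the v2.1/v3 work: request bodies are validated
against declarative JSON schemas, which is what should make generating accurate
documentation from the code feasible. A rough sketch - the schema below is
illustrative only, not the actual one nova ships:

    import jsonschema

    resize_schema = {
        "type": "object",
        "properties": {
            "resize": {
                "type": "object",
                "properties": {"flavorRef": {"type": "string"}},
                "required": ["flavorRef"],
                "additionalProperties": False,
            },
        },
        "required": ["resize"],
        "additionalProperties": False,
    }

    jsonschema.validate({"resize": {"flavorRef": "2"}}, resize_schema)  # passes
    jsonschema.validate({"resize": {}}, resize_schema)  # raises ValidationError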

Chris

> 
> 
> Phil
> 
> 
> 
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: 13 June 2014 04:00
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] Review guidelines for API patches
> 
> On Fri, Jun 13, 2014 at 11:28 AM, Matt Riedemann
> mailto:mrie...@linux.vnet.ibm.com>> wrote:
> 
> 
> On 6/12/2014 5:58 PM, Christopher Yeoh wrote:
> On Fri, Jun 13, 2014 at 8:06 AM, Michael Still
> mailto:mi...@stillhq.com>
> >> wrote:
> 
> In light of the recent excitement around quota classes and the
> floating ip pollster, I think we should have a conversation about
> the review guidelines we'd like to see for API changes proposed
> against nova. My initial proposal is:
> 
>   - API changes should have an associated spec
> 
> 
> +1
> 
>   - API changes should not be merged until there is a tempest
> change to test them queued for review in the tempest repo
> 
> 
> +1
> 
> Chris
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> We do have some API change guidelines here [1].  I don't want to go
> overboard on every change and require a spec if it's not necessary,
> i.e. if it falls into the 'generally ok' list in that wiki.  But if
> it's something that's not documented as a supported API (so it's
> completely new) and is pervasive (going into novaclient so it can be
> used in some other service), then I think that warrants some spec
> consideration so we don't miss something.
> 
> To compare, this [2] is an example of something that is updating an
> existing API but I don't think warrants a blueprint since I think it
> falls into the 'generally ok' section of the API change guidelines.
> 
> So really I see this a new feature, not a bug fix. Someone thought
> that detail was supported when writing the documentation but it never
> was. The documentation is NOT the canonical source for the behaviour
> of the API, currently the code should be seen as the reference. We've
> run into issues before where people have tried to align code to the
> fit the documentation and made backwards incompatible changes
> (although this is not one).
> 
> Perhaps we need a streamlined queue for very simple API changes, but
> I do think API changes should get more than the usual review because
> we have to live with them for so long (short of an emergency revert
> if we catch it in time).
> 
> [1] https://wiki.openstack.org/wiki/APIChangeGuidelines
> [2] https://review.openstack.org/#/c/99443/
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-13 Thread Sean Dague
On 06/13/2014 02:36 AM, Mark McLoughlin wrote:
> On Thu, 2014-06-12 at 22:10 -0400, Dan Prince wrote:
>> On Thu, 2014-06-12 at 08:06 -0400, Sean Dague wrote:
>>> We're definitely deep into capacity issues, so it's going to be time to
>>> start making tougher decisions about things we decide aren't different
>>> enough to bother testing on every commit.
>>
>> In order to save resources why not combine some of the jobs in different
>> ways. So for example instead of:
>>
>>  check-tempest-dsvm-full
>>  check-tempest-dsvm-postgres-full
>>
>> Couldn't we just drop the postgres-full job and run one of the Neutron
>> jobs w/ postgres instead? Or something similar, so long as at least one
>> of the jobs which runs most of Tempest is using PostgreSQL I think we'd
>> be mostly fine. Not shooting for 100% coverage for everything with our
>> limited resource pool is fine, lets just do the best we can.
>>
>> Ditto for gate jobs (not check).
> 
> I think that's what Clark was suggesting in:
> 
> https://etherpad.openstack.org/p/juno-test-maxtrices
> 
>>> Previously we've been testing Postgresql in the gate because it has a
>>> stricter interpretation of SQL than MySQL. And when we didn't test
>>> Postgresql it regressed. I know, I chased it for about 4 weeks in grizzly.
>>>
>>> However Monty brought up a good point at Summit, that MySQL has a strict
>>> mode. That should actually enforce the same strictness.
>>>
>>> My proposal is that we land this change to devstack -
>>> https://review.openstack.org/#/c/97442/ and backport it to past devstack
>>> branches.
>>>
>>> Then we drop the pg jobs, as the differences between the 2 configs
>>> should then be very minimal. All the *actual* failures we've seen
>>> between the 2 were completely about this strict SQL mode interpretation.
>>
>>
>> I suppose I would like to see us keep it in the mix. Running SmokeStack
>> for almost 3 years I found many an issue dealing w/ PostgreSQL. I ran it
>> concurrently with many of the other jobs and I too had limited resources
>> (much less that what we have in infra today).
>>
>> Would MySQL strict SQL mode catch stuff like this (old bugs, but still
>> valid for this topic I think):
>>
>>  https://bugs.launchpad.net/nova/+bug/948066
>>
>>  https://bugs.launchpad.net/nova/+bug/1003756
>>
>>
>> Having support for and testing against at least 2 databases helps keep
>> our SQL queries and migrations cleaner... and is generally a good
>> practice given we have abstractions which are meant to support this sort
>> of thing anyway (so by all means let us test them!).
>>
>> Also, Having compacted the Nova migrations 3 times now I found many
>> issues by testing on multiple databases (MySQL and PostgreSQL). I'm
>> quite certain our migrations would be worse off if we just tested
>> against the single database.
> 
> Certainly sounds like this testing is far beyond the "might one day be
> useful" level Sean talks about.

The migration compaction is a good point. And I'm happy to see there
were some bugs exposed as well.

Here is where I remain stuck

We are now at a failure rate in which it's 3 days (minimum) to land a
fix that decreases our failure rate at all.

The way we are currently solving this is by effectively building "manual
zuul" and taking smart humans in coordination to end run around our
system. We've merged 18 fixes so far -
https://etherpad.openstack.org/p/gatetriage-june2014 this way. Merging a
fix this way is at least an order of magnitude more expensive on people
time because of the analysis and coordination we need to go through to
make sure these things are the right things to jump the queue.

That effort, over 8 days, has gotten us down to *only* a 24hr merge
delay. And there are no more smoking guns. What's left is a ton of
subtle things. I've got ~ 30 patches outstanding right now (a bunch are
things to clarify what's going on in the build runs especially in the
fail scenarios). Every single one of them has been failed by Jenkins at
least once. Almost every one was failed by a different unique issue.

So I'd say at best we're 25% of the way towards solving this. That being
said, because of the deep queues, people are just recheck grinding (or
hitting the jackpot and landing something through that then fails a lot
after landing). That leads to bugs like this:

https://bugs.launchpad.net/heat/+bug/1306029

Which was seen early in the patch - https://review.openstack.org/#/c/97569/

Then kind of destroyed us completely for a day -
http://status.openstack.org/elastic-recheck/ (it's the top graph).

And, predictably, a week into a long gate queue everyone is now grumpy.
The sniping between projects, and within projects in assigning blame
starts to spike at about day 4 of these events. Everyone assumes someone
else is to blame for these things.

So there is real community impact when we get to these states.



So, I'm kind of burnt out trying to figure out how to get us out of
this. As I do take it personally when we as a proje

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-13 Thread Sean Dague
On 06/12/2014 10:10 PM, Dan Prince wrote:
> On Thu, 2014-06-12 at 08:06 -0400, Sean Dague wrote:
>> We're definitely deep into capacity issues, so it's going to be time to
>> start making tougher decisions about things we decide aren't different
>> enough to bother testing on every commit.
> 
> In order to save resources why not combine some of the jobs in different
> ways. So for example instead of:
> 
>  check-tempest-dsvm-full
>  check-tempest-dsvm-postgres-full
> 
> Couldn't we just drop the postgres-full job and run one of the Neutron
> jobs w/ postgres instead? Or something similar, so long as at least one
> of the jobs which runs most of Tempest is using PostgreSQL I think we'd
> be mostly fine. Not shooting for 100% coverage for everything with our
> limited resource pool is fine, lets just do the best we can.
> 
> Ditto for gate jobs (not check).
> 
> 
>>
>> Previously we've been testing Postgresql in the gate because it has a
>> stricter interpretation of SQL than MySQL. And when we didn't test
>> Postgresql it regressed. I know, I chased it for about 4 weeks in grizzly.
>>
>> However Monty brought up a good point at Summit, that MySQL has a strict
>> mode. That should actually enforce the same strictness.
>>
>> My proposal is that we land this change to devstack -
>> https://review.openstack.org/#/c/97442/ and backport it to past devstack
>> branches.
>>
>> Then we drop the pg jobs, as the differences between the 2 configs
>> should then be very minimal. All the *actual* failures we've seen
>> between the 2 were completely about this strict SQL mode interpretation.
> 
> 
> I suppose I would like to see us keep it in the mix. Running SmokeStack
> for almost 3 years I found many an issue dealing w/ PostgreSQL. I ran it
> concurrently with many of the other jobs and I too had limited resources
> (much less that what we have in infra today).
> 
> Would MySQL strict SQL mode catch stuff like this (old bugs, but still
> valid for this topic I think):
> 
>  https://bugs.launchpad.net/nova/+bug/948066
> 
>  https://bugs.launchpad.net/nova/+bug/1003756
> 
> 
> Having support for and testing against at least 2 databases helps keep
> our SQL queries and migrations cleaner... and is generally a good
> practice given we have abstractions which are meant to support this sort
> of thing anyway (so by all means let us test them!).
> 
> Also, Having compacted the Nova migrations 3 times now I found many
> issues by testing on multiple databases (MySQL and PostgreSQL). I'm
> quite certain our migrations would be worse off if we just tested
> against the single database.

Through Tempest? Or at a lower level?

Dropping this Tempest job doesn't mean we're going to remove the other
databases from unit tests, or stop keeping them available for functional
tests.

But my experience is that these sorts of things have not been found
through the API surface.
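
For what it's worth, the strict-mode behaviour the proposal relies on is easy to
exercise directly at the unit/functional level - a sketch with SQLAlchemy, where
the connection URL is a placeholder and TRADITIONAL is just one of the strict
sql_mode settings:

    from sqlalchemy import create_engine, event, text

    engine = create_engine("mysql://root:secret@localhost/test")  # placeholder

    @event.listens_for(engine, "connect")
    def set_sql_mode(dbapi_conn, connection_record):
        cursor = dbapi_conn.cursor()
        cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
        cursor.close()

    conn = engine.connect()
    conn.execute(text("CREATE TABLE IF NOT EXISTS t (name VARCHAR(4))"))
    # With a strict sql_mode this raises a DataError instead of silently
    # truncating 'toolong' - the class of difference the Postgres job caught.
    conn.execute(text("INSERT INTO t (name) VALUES ('toolong')"))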

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Day, Phil
>Theoretically impossible to reduce disk unless you have some really nasty 
>guest additions.

That’s what I thought – but many of the drivers seem to at least partially 
support it based on the code, hence the question on here to find out if that is 
really supported and works – or is just inconsistent error checking across 
drivers.

From: Aryeh Friedman [mailto:aryeh.fried...@gmail.com]
Sent: 13 June 2014 11:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?

Theoretically impossible to reduce disk unless you have some really nasty guest 
additions.

On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil 
mailto:philip@hp.com>> wrote:
Hi Folks,

I was looking at the resize code in libvirt, and it has checks which raise an 
exception if the target root or ephemeral disks are smaller than the current 
ones – which seems fair enough I guess (you can’t drop arbitary disk content on 
resize), except that the  because the check is in the virt driver the effect is 
to just ignore the request (the instance remains active rather than going to 
resize-verify).

It made me wonder if there were any hypervisors that actually allow this, and 
if not wouldn’t it be better to move the check to the API layer so that the 
request can be failed rather than silently ignored ?

As far as I can see:

baremetal: Doesn’t support resize

hyperv: Checks only for root disk 
(https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
  )

libvirt: fails for a reduction of either root or ephemeral  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923
 )

vmware:   doesn’t seem to check at all ?

xen: Allows resize down for root but not for ephemeral 
(https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
 )


It feels kind of clumsy to have such a wide variation of behavior across the 
drivers, and to have the check performed only in the driver ?

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-13 Thread Mark McLoughlin
On Fri, 2014-06-13 at 07:31 -0400, Sean Dague wrote:
> On 06/13/2014 02:36 AM, Mark McLoughlin wrote:
> > On Thu, 2014-06-12 at 22:10 -0400, Dan Prince wrote:
> >> On Thu, 2014-06-12 at 08:06 -0400, Sean Dague wrote:
> >>> We're definitely deep into capacity issues, so it's going to be time to
> >>> start making tougher decisions about things we decide aren't different
> >>> enough to bother testing on every commit.
> >>
> >> In order to save resources why not combine some of the jobs in different
> >> ways. So for example instead of:
> >>
> >>  check-tempest-dsvm-full
> >>  check-tempest-dsvm-postgres-full
> >>
> >> Couldn't we just drop the postgres-full job and run one of the Neutron
> >> jobs w/ postgres instead? Or something similar, so long as at least one
> >> of the jobs which runs most of Tempest is using PostgreSQL I think we'd
> >> be mostly fine. Not shooting for 100% coverage for everything with our
> >> limited resource pool is fine, lets just do the best we can.
> >>
> >> Ditto for gate jobs (not check).
> > 
> > I think that's what Clark was suggesting in:
> > 
> > https://etherpad.openstack.org/p/juno-test-maxtrices
> > 
> >>> Previously we've been testing Postgresql in the gate because it has a
> >>> stricter interpretation of SQL than MySQL. And when we didn't test
> >>> Postgresql it regressed. I know, I chased it for about 4 weeks in grizzly.
> >>>
> >>> However Monty brought up a good point at Summit, that MySQL has a strict
> >>> mode. That should actually enforce the same strictness.
> >>>
> >>> My proposal is that we land this change to devstack -
> >>> https://review.openstack.org/#/c/97442/ and backport it to past devstack
> >>> branches.
> >>>
> >>> Then we drop the pg jobs, as the differences between the 2 configs
> >>> should then be very minimal. All the *actual* failures we've seen
> >>> between the 2 were completely about this strict SQL mode interpretation.
> >>
> >>
> >> I suppose I would like to see us keep it in the mix. Running SmokeStack
> >> for almost 3 years I found many an issue dealing w/ PostgreSQL. I ran it
> >> concurrently with many of the other jobs and I too had limited resources
> >> (much less that what we have in infra today).
> >>
> >> Would MySQL strict SQL mode catch stuff like this (old bugs, but still
> >> valid for this topic I think):
> >>
> >>  https://bugs.launchpad.net/nova/+bug/948066
> >>
> >>  https://bugs.launchpad.net/nova/+bug/1003756
> >>
> >>
> >> Having support for and testing against at least 2 databases helps keep
> >> our SQL queries and migrations cleaner... and is generally a good
> >> practice given we have abstractions which are meant to support this sort
> >> of thing anyway (so by all means let us test them!).
> >>
> >> Also, Having compacted the Nova migrations 3 times now I found many
> >> issues by testing on multiple databases (MySQL and PostgreSQL). I'm
> >> quite certain our migrations would be worse off if we just tested
> >> against the single database.
> > 
> > Certainly sounds like this testing is far beyond the "might one day be
> > useful" level Sean talks about.
> 
> The migration compaction is a good point. And I'm happy to see there
> were some bugs exposed as well.
> 
> Here is where I remain stuck
> 
> We are now at a failure rate in which it's 3 days (minimum) to land a
> fix that decreases our failure rate at all.
> 
> The way we are currently solving this is by effectively building "manual
> zuul" and taking smart humans in coordination to end run around our
> system. We've merged 18 fixes so far -
> https://etherpad.openstack.org/p/gatetriage-june2014 this way. Merging a
> fix this way is at least an order of magnitude more expensive on people
> time because of the analysis and coordination we need to go through to
> make sure these things are the right things to jump the queue.
> 
> That effort, over 8 days, has gotten us down to *only* a 24hr merge
> delay. And there are no more smoking guns. What's left is a ton of
> subtle things. I've got ~ 30 patches outstanding right now (a bunch are
> things to clarify what's going on in the build runs especially in the
> fail scenarios). Every single one of them has been failed by Jenkins at
> least once. Almost every one was failed by a different unique issue.
> 
> So I'd say at best we're 25% of the way towards solving this. That being
> said, because of the deep queues, people are just recheck grinding (or
> hitting the jackpot and landing something through that then fails a lot
> after landing). That leads to bugs like this:
> 
> https://bugs.launchpad.net/heat/+bug/1306029
> 
> Which was seen early in the patch - https://review.openstack.org/#/c/97569/
> 
> Then kind of destroyed us completely for a day -
> http://status.openstack.org/elastic-recheck/ (it's the top graph).
> 
> And, predictably, a week into a long gate queue everyone is now grumpy.
> The sniping between projects, and within projects in assigning blame
> starts to spike at about day 4 of t

Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-13 Thread Sean Dague
On 06/13/2014 08:13 AM, Mark McLoughlin wrote:
> On Fri, 2014-06-13 at 07:31 -0400, Sean Dague wrote:
>> On 06/13/2014 02:36 AM, Mark McLoughlin wrote:
>>> On Thu, 2014-06-12 at 22:10 -0400, Dan Prince wrote:
 On Thu, 2014-06-12 at 08:06 -0400, Sean Dague wrote:
> We're definitely deep into capacity issues, so it's going to be time to
> start making tougher decisions about things we decide aren't different
> enough to bother testing on every commit.

 In order to save resources why not combine some of the jobs in different
 ways. So for example instead of:

  check-tempest-dsvm-full
  check-tempest-dsvm-postgres-full

 Couldn't we just drop the postgres-full job and run one of the Neutron
 jobs w/ postgres instead? Or something similar, so long as at least one
 of the jobs which runs most of Tempest is using PostgreSQL I think we'd
 be mostly fine. Not shooting for 100% coverage for everything with our
 limited resource pool is fine, lets just do the best we can.

 Ditto for gate jobs (not check).
>>>
>>> I think that's what Clark was suggesting in:
>>>
>>> https://etherpad.openstack.org/p/juno-test-maxtrices
>>>
> Previously we've been testing Postgresql in the gate because it has a
> stricter interpretation of SQL than MySQL. And when we didn't test
> Postgresql it regressed. I know, I chased it for about 4 weeks in grizzly.
>
> However Monty brought up a good point at Summit, that MySQL has a strict
> mode. That should actually enforce the same strictness.
>
> My proposal is that we land this change to devstack -
> https://review.openstack.org/#/c/97442/ and backport it to past devstack
> branches.
>
> Then we drop the pg jobs, as the differences between the 2 configs
> should then be very minimal. All the *actual* failures we've seen
> between the 2 were completely about this strict SQL mode interpretation.


 I suppose I would like to see us keep it in the mix. Running SmokeStack
 for almost 3 years I found many an issue dealing w/ PostgreSQL. I ran it
 concurrently with many of the other jobs and I too had limited resources
 (much less that what we have in infra today).

 Would MySQL strict SQL mode catch stuff like this (old bugs, but still
 valid for this topic I think):

  https://bugs.launchpad.net/nova/+bug/948066

  https://bugs.launchpad.net/nova/+bug/1003756


 Having support for and testing against at least 2 databases helps keep
 our SQL queries and migrations cleaner... and is generally a good
 practice given we have abstractions which are meant to support this sort
 of thing anyway (so by all means let us test them!).

 Also, Having compacted the Nova migrations 3 times now I found many
 issues by testing on multiple databases (MySQL and PostgreSQL). I'm
 quite certain our migrations would be worse off if we just tested
 against the single database.
>>>
>>> Certainly sounds like this testing is far beyond the "might one day be
>>> useful" level Sean talks about.
>>
>> The migration compaction is a good point. And I'm happy to see there
>> were some bugs exposed as well.
>>
>> Here is where I remain stuck
>>
>> We are now at a failure rate in which it's 3 days (minimum) to land a
>> fix that decreases our failure rate at all.
>>
>> The way we are currently solving this is by effectively building "manual
>> zuul" and taking smart humans in coordination to end run around our
>> system. We've merged 18 fixes so far -
>> https://etherpad.openstack.org/p/gatetriage-june2014 this way. Merging a
>> fix this way is at least an order of magnitude more expensive on people
>> time because of the analysis and coordination we need to go through to
>> make sure these things are the right things to jump the queue.
>>
>> That effort, over 8 days, has gotten us down to *only* a 24hr merge
>> delay. And there are no more smoking guns. What's left is a ton of
>> subtle things. I've got ~ 30 patches outstanding right now (a bunch are
>> things to clarify what's going on in the build runs especially in the
>> fail scenarios). Every single one of them has been failed by Jenkins at
>> least once. Almost every one was failed by a different unique issue.
>>
>> So I'd say at best we're 25% of the way towards solving this. That being
>> said, because of the deep queues, people are just recheck grinding (or
>> hitting the jackpot and landing something through that then fails a lot
>> after landing). That leads to bugs like this:
>>
>> https://bugs.launchpad.net/heat/+bug/1306029
>>
>> Which was seen early in the patch - https://review.openstack.org/#/c/97569/
>>
>> Then kind of destroyed us completely for a day -
>> http://status.openstack.org/elastic-recheck/ (it's the top graph).
>>
>> And, predictably, a week into a long gate queue everyone is now grumpy.
>> The sniping between projects, an

Re: [openstack-dev] [Nova] Review guidelines for API patches

2014-06-13 Thread Anne Gentle
On Fri, Jun 13, 2014 at 6:18 AM, Christopher Yeoh  wrote:

> Hi Phil,
>
> On Fri, 13 Jun 2014 09:28:30 +
> "Day, Phil"  wrote:
> >
> > >The documentation is NOT the canonical source for the behaviour of
> > >the API, currently the code should be seen as the reference. We've
> > >run into issues before where people have tried to align code to the
> > >fit the documentation and made backwards incompatible changes
> > >(although this is not one).
> >
> > I’ve never seen this defined before – is this published as official
> > Openstack  or Nova policy ?
>
> Its not published, but not following this guideline has got us into
> trouble before because we end up making backwards incompatible changes
> to force the code to match the docs rather than the other way around.
>
>
At last week's TC meeting we discussed publishing an API Style Guide to
supplement https://wiki.openstack.org/wiki/APIChangeGuidelines

Diane Fleming, Mark McClain, and I are starting that. Read the TC take on
it at
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-06-10-20.03.log.html


> The documentation historically has been generated manually by people
> looking at the code and writing up what they think it does. NOTE: this
> is not a reflection on the docs team - they have done an EXCELLENT job
> based on what we've been giving them (just the api samples and the
> code). Its very easy to get it wrong and its most often someone who has
> not written the code who is writing the documentation and not familiar
> with what was actually merged.
>


Yes, Diane does an amazing job with this task. I haven't identified
others dedicated mostly to API docs across all of OpenStack. That said, the
Nova team does a good job identifying pro-level API writers to help with
their API docs.

There are 65 Compute API doc bugs and my sense is this is an indicator and
accurately reflects what's wrong in both the v2 and v3 (2.1) docs.
http://bit.ly/compute-api-bugs

Assign yourself and ask anyone on the docs team if you need assistance
through IRC on #openstack-doc or the mailing list
openstack-d...@lists.openstack.org.

Before you complain about WADL, please let me know if you have another
solution for documenting extensible APIs and are interested in implementing
it across all the project's APIs. Not being snarky at all with that
request, just letting everyone know that our findings were "what isn't
broke do not fix" and "leave the docs to the doc pros" at the Summit. Would
love to find another solution but the one we have does the job.

Thanks,
Anne


>
> > Personally I think we should be putting as much effort into reviewing
> > the API docs as we do API code so that we can say that the API docs
> > are the canonical source for behavior. Not being able to fix bugs
> > in say input validation that escape code reviews because they break
> > backwards compatibility seems to be a weakness to me.
>
> +1 for people to go back through the v2 api docs and fix the
> documentation where it is incorrect.
>
> So our medium term goal (and this is one of the reasons behind wanting the
> new v2.1/v3 infrastructure with json schema etc) is to be able to
> properly automate the production of the documentation from the code. So
> there is no contradiction between the two.
>
> I agree we need to be able to fix bugs that result in backwards
> incompatible changes. v2.1 microversions should allow us to do that as
> cleanly as possible.
>
> Chris
>
> >
> >
> > Phil
> >
> >
> >
> > From: Christopher Yeoh [mailto:cbky...@gmail.com]
> > Sent: 13 June 2014 04:00
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Nova] Review guidelines for API patches
> >
> > On Fri, Jun 13, 2014 at 11:28 AM, Matt Riedemann
> > mailto:mrie...@linux.vnet.ibm.com>> wrote:
> >
> >
> > On 6/12/2014 5:58 PM, Christopher Yeoh wrote:
> > On Fri, Jun 13, 2014 at 8:06 AM, Michael Still
> > mailto:mi...@stillhq.com>
> > >> wrote:
> >
> > In light of the recent excitement around quota classes and the
> > floating ip pollster, I think we should have a conversation about
> > the review guidelines we'd like to see for API changes proposed
> > against nova. My initial proposal is:
> >
> >   - API changes should have an associated spec
> >
> >
> > +1
> >
> >   - API changes should not be merged until there is a tempest
> > change to test them queued for review in the tempest repo
> >
> >
> > +1
> >
> > Chris
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > We do have some API change guidelines here [1].  I don't want to go
> > overboard on every change and require a spec if it's not necessary,
> > i.e. if it falls into the 'generally ok' list in that wiki.  But if
> > it's something that's not documented as 

Re: [openstack-dev] [TripleO] pacemaker management tools

2014-06-13 Thread Jan Provaznik

On 06/12/2014 09:37 PM, Adam Gandelman wrote:

It's been a while since I've used these tools and I'm not 100% surprised
they've fragmented once again. :)  That said, does pcs support creating
the CIB configuration in bulk from a file? I know that crm shell would
let you dump the entire cluster config and restore from file.  Unless
the CIB format differs now, couldn't we just create the entire thing
first and use a single pcs or crm command to import it to the cluster,
rather than building each resource command-by-command?



That is an interesting idea. But I'm afraid that this can't be used in the 
TripleO use-case. We would have to keep the whole cluster definition as 
a static file which would be included when building the overcloud image. 
Keeping this static definition up-to-date sounds like a complex task. 
Also, this would make any customization based on the elements used 
impossible. For example, if there are 2 elements which use pacemaker - 
neutron-l3-agent and ceilometer-agent-central - then I couldn't use them 
separately.


Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-13 Thread Doug Hellmann
On Thu, Jun 12, 2014 at 8:07 PM, Stefano Maffulli  wrote:
> Hello folks,
>
> we're working on a quarterly report of activities in all our git and
> gerrit repositories to understand the dynamics of contributions across
> different dimensions. This report will be similar to what Bitergia
> produced at release time.
>
> I'd like to discuss more widely the format of how to meaningfully group
> the information presented. One basic need is to have a top level view
> and then drill down based on the release status of each project. A
> suggested classification would be based on the programs.yaml file:
>
> - OpenStack Software (top level overview):
>- integrated
>- incubated
>- clients
>- other:
>devstack
>deployment
>common libraries
> - OpenStack Quality Assurance
> - OpenStack Documentation
> - OpenStack Infrastructure
> - OpenStack Release management
>
> It seems easy but based on that grouping, integrated and incubated git
> repositories are easy to spot in programs.yaml (they have
> integrated-since attribute).
>
> Let's have the Sahara program as an example:
>
>   projects:
> - repo: openstack/sahara
>   incubated-since: icehouse
>   integrated-since: juno
> - repo: openstack/python-saharaclient
> - repo: openstack/sahara-dashboard
> - repo: openstack/sahara-extra
> - repo: openstack/sahara-image-elements
> - repo: openstack/sahara-specs
>
> So, for the OpenStack Software part:
> * openstack/sahara is integrated in juno and incubated since icehouse.
> * Then clients: python-saharaclient is easy to spot. Is it standard and
> accepted practice that all client projects are called
> python-$PROGRAM-NAMEclient?
> * And what about the rest of the sahara-* projects: where would they go?
> with openstack/sahara? or somewhere else, in others? devstack?
> common-libraries?

That section of the file used to be organized into separate lists for
incubated, integrated, and "other" repositories. We changed it when we
started tracking the incubation and integration dates. So it seems
like just listing them under sahara as "other" would make sense.
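
For what it's worth, the grouping could be derived mechanically along these
lines (a rough sketch only; it assumes programs.yaml is a mapping of program
name to a section containing the projects list shown above, and the
classification rules are just one possible reading):

    import yaml

    with open("programs.yaml") as f:
        programs = yaml.safe_load(f)

    groups = {"integrated": [], "incubated": [], "clients": [], "other": []}
    for name, program in programs.items():
        for project in program.get("projects", []):
            repo = project["repo"]
            short = repo.split("/")[-1]
            if "integrated-since" in project:
                groups["integrated"].append(repo)
            elif "incubated-since" in project:
                groups["incubated"].append(repo)
            elif short.startswith("python-") and short.endswith("client"):
                groups["clients"].append(repo)
            else:
                # e.g. sahara-dashboard, sahara-extra: counted with their
                # parent program but reported under "other"
                groups["other"].append(repo)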

>
> Other repositories for which I have no clear classification:
>
> - repo: openstack/swift-bench
> - repo: openstack/django_openstack_auth
> - repo: openstack/tuskar-ui
> - repo: openstack/heat-cfntools
> - repo: openstack/heat-specs
> - repo: openstack/heat-templates
> - repo: openstack-dev/heat-cfnclient
> - repo: openstack/trove-integration
> - repo: openstack/ironic-python-agent
> - repo: stackforge/kite
>
> Any suggestions on how you would like to see these classified: with
> together with the integrated/incubated 'parent' program (sahara with
> sahara-dashboard, sahara-extra etc  or separately under 'other'? Or
> they're all different and we need to look at them one by one?

I would think they would go with their parent program, no?

Doug

>
> Let me know what you think (tomorrow office hour, 11am PDT, is a good
> time to chat about this).
>
> Cheers,
> stef
>
> --
> Ask and answer questions on https://ask.openstack.org
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Andrew Laski


On 06/13/2014 08:03 AM, Day, Phil wrote:


>Theoretically impossible to reduce disk unless you have some really 
nasty guest additions.


That's what I thought -- but many of the drivers seem to at least 
partially support it based on the code, hence the question on here to 
find out if that is really supported and works -- or is just 
inconsistent error checking across drivers.




My grumpy dev answer is that what works is not resizing down.  I'm 
familiar with the xen driver resize operation and will say that it does 
work when the guest filesystem and partition sizes are accommodating, 
but there's no good way to know whether or not it will succeed without 
actually trying it.  So when it fails it's after someone was waiting on 
a resize that seemed like it was working and then suddenly didn't.


If we want to aim for what's going to work consistently across drivers, 
it's probably going to end up being not resizing disks down.



*From:*Aryeh Friedman [mailto:aryeh.fried...@gmail.com]
*Sent:* 13 June 2014 11:12
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova] Do any hyperviors allow disk 
reduction as part of resize ?


Theoretically impossible to reduce disk unless you have some really 
nasty guest additions.


On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil > wrote:


Hi Folks,

I was looking at the resize code in libvirt, and it has checks
which raise an exception if the target root or ephemeral disks are
smaller than the current ones -- which seems fair enough I guess
(you can't drop arbitrary disk content on resize), except that
because the check is in the virt driver the effect is to just
ignore the request (the instance remains active rather than going
to resize-verify).

It made me wonder if there were any hypervisors that actually
allow this, and if not wouldn't it be better to move the check to
the API layer so that the request can be failed rather than
silently ignored ?

As far as I can see:

baremetal: Doesn't support resize

hyperv: Checks only for root disk

(https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
 )

libvirt:   fails for a reduction of either root or ephemeral 
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923

)

vmware: doesn't seem to check at all ?

xen: Allows resize down for root but not for ephemeral

(https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
)

It feels kind of clumsy to have such a wide variation of behavior
across the drivers, and to have the check performed only in the
driver ?

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org 





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Further improvements and refactoring

2014-06-13 Thread Ihar Hrachyshka

On 10/06/14 15:40, Alexei Kornienko wrote:
> On 06/10/2014 03:59 PM, Gordon Sim wrote:
>> On 06/10/2014 12:03 PM, Dina Belova wrote:
>>> Hello, stackers!
>>> 
>>> 
>>> Oslo.messaging is future of how different OpenStack components 
>>> communicate with each other, and really I’d love to start
>>> discussion about how we can make this library even better then
>>> it’s now and how can we refactor it make more
>>> production-ready.
>>> 
>>> 
>>> As we all remember, oslo.messaging was initially inspired to be
>>> created as a logical continuation of nova.rpc - as a separated
>>> library, with lots of transports supported, etc. That’s why
>>> oslo.messaging inherited not only the advantages of how the
>>> nova.rpc code worked (and there were lots of them), but also some
>>> architectural decisions that currently sometimes lead to the
>>> performance issues (we met some of them while Ceilometer 
>>> performance testing [1] during the Icehouse).
>>> 
>>> 
>>> For instance, simple testing messaging server (with connection
>>> pool and eventlet) can process 700 messages per second. The
>>> same functionality implemented using plain kombu (without
>>> connection pool and eventlet) driver is processing ten times
>>> more - 7000-8000 messages per second.
>>> 
>>> 
>>> So we have the following suggestions about how we may make this
>>> process better and quicker (and really I’d love to collect your
>>> feedback, folks):
>>> 
>>> 
>>> 1) Currently we have main loop running in the Executor class,
>>> and I guess it’ll be much better to move it to the Server
>>> class, as it’ll make relationship between the classes easier
>>> and will leave Executor only one task - process the message and
>>> that’s it (in blocking or eventlet mode). Moreover, this will
>>> make further refactoring much easier.
>>> 
>>> 2) Some of the drivers implementations (such as impl_rabbit
>>> and impl_qpid, for instance) are full of useless separated
>>> classes that in reality might be included to other ones. There
>>> are already some changes making the whole structure easier [2],
>>> and after the 1st issue will be solved Dispatcher and Listener
>>> also will be able to be refactored.
>>> 
>>> 3) If we’ll separate RPC functionality and messaging
>>> functionality it’ll make code base clean and easily reused.
>>> 
>>> 4) Connection pool can be refactored to implement more
>>> efficient connection reusage.
>>> 
>>> 
>>> Folks, are you ok with such a plan? Alexey Kornienko already
>>> started some of this work [2], but really we want to be sure
>>> that we chose the correct vector of development here.
>> 
>> For the impl_qpid driver, I think there would need to be quite 
>> significant changes to make it efficient. At present there are
>> several synchronous roundtrips for every RPC call made[1].
>> Notifications are not treated any differently than RPCs (and
>> sending a call is no different to sending a cast).
>> 
>> I agree the connection pooling is not efficient. For qpid at
>> least it creates too many connections for no real benefit[2].
>> 
>> I think this may be a result of trying to fit the same
>> high-level design to two entirely different underlying APIs.
>> 
>> For me at least, this also makes it hard to spot issues by
>> reading the code. The qpid specific 'unit' tests for
>> oslo.messaging also fail for me everytime when an actual qpidd
>> broker is running (I haven't yet got to the bottom of that).
>> 
>> I'm personally not sure that the changes to impl_qpid you linked
>> to have much impact on either efficiency or readability, safety
>> of the code.
> Indeed it was only to remove some of the unnecessary complexity of
> the code. We'll see more improvement after we implement points
> 1 and 2 from the original email (because they will allow us to proceed to
> further improvement).
> 
>> I think there could be a lot of work required to significantly
>> improve that driver, and I wonder if that would be better spent
>> on e.g. the AMQP 1.0 driver which I believe will perform much
>> better and will offer more choice in deployment.
> I agree with you on this. However I'm not sure that we can make such
> a decision. If we focus on the AMQP driver only we should mention it
> explicitly and deprecate the qpid driver completely. There is no point
> in keeping a driver that is not really functional.

The driver is functional. It may be not that efficient as
alternatives, but that's not a valid reason to deprecate it.

>> 
>> --Gordon
>> 
>> [1] For both the request and the response, the sender is created
>> every time, which results in at least one roundtrip to the
>> broker. Again, for both the request and the response, the message
>> is then sent with a blocking send, meaning a further synchronous
>> round trip for each. So for an RPC call, instead of just one
>> roundtrip, there are at least four.
>> 
>> [2] In my view, what matters more than per-connection throughput
>> for olso.messaging, is the scalability of the 

Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-06-13 Thread Ihar Hrachyshka

On 29/05/14 17:33, Yuriy Taraday wrote:
> On Wed, May 28, 2014 at 3:54 AM, Joe Gordon 
> wrote:
> 
>> On Fri, May 23, 2014 at 1:13 PM, Nachi Ueno 
>> wrote:
>> 
>>> (2) Avoid duplication of works I have several experience of
>>> this.  Anyway, we should encourage people to check listed bug
>>> before writing patches.
>>> 
>> 
>> That's a very good point, but I don't think requiring a bug/bp
>> for every patch is a good way to address this. Perhaps there is
>> another way.
>> 
> 
> We can require developers to either link to a bp/bug or explicitly
> add a "Minor-fix" line to the commit message. I think that would
> force the commit author to at least think about whether the commit
> is worth submitting a bug/bp for or not.
> 

I like the idea. The best of both approaches (strict requirement and
place for maneuver in case of tiny fixes).
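
A gate-side check for that rule could be as small as the sketch below (purely
hypothetical -- no such hook exists today; the tag names follow the usual
Closes-Bug/Implements commit message conventions plus the proposed
"Minor-fix" marker):

    import re
    import sys

    TAGS = re.compile(
        r'^(Closes-Bug|Partial-Bug|Related-Bug|Implements|'
        r'Partially-Implements|Minor-fix)\b',
        re.IGNORECASE | re.MULTILINE)

    def commit_message_ok(message):
        return bool(TAGS.search(message))

    if __name__ == "__main__":
        if not commit_message_ok(sys.stdin.read()):
            sys.exit("commit message needs a bug/bp reference or 'Minor-fix'")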

> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints

2014-06-13 Thread Assaf Muller


- Original Message -
> Hi,
> There is the mid cycle sprint in July for Nova and Neutron. Anyone interested
> in maybe getting one together in Europe/Middle East around the same dates?
> If people are willing to come to this part of the world I am sure that we
> can organize a venue for a few days. Anyone interested. If we can get a
> quorum then I will be happy to try and arrange things.

+1 on an Israel sprint :)

> Thanks
> Gary
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Day, Phil
I guess the question I'm really asking here is:  "Since we know resize down 
won't work in all cases, and the failure if it does occur will be hard for the 
user to detect, should we just block it at the API layer and be consistent 
across all Hypervisors ?"
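
To make that concrete, the kind of API-layer guard being suggested would look
roughly like the sketch below (illustrative only -- the field names mimic
nova's flavor attributes, but this is not the actual nova code path):

    class CannotResizeDisk(Exception):
        pass

    def check_resize_down(current_flavor, new_flavor):
        """Fail a resize request up front if any disk would shrink."""
        if new_flavor["root_gb"] < current_flavor["root_gb"]:
            raise CannotResizeDisk("root disk cannot be reduced on resize")
        if new_flavor["ephemeral_gb"] < current_flavor["ephemeral_gb"]:
            raise CannotResizeDisk("ephemeral disk cannot be reduced on resize")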

From: Andrew Laski [mailto:andrew.la...@rackspace.com]
Sent: 13 June 2014 13:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?


On 06/13/2014 08:03 AM, Day, Phil wrote:
>Theoretically impossible to reduce disk unless you have some really nasty 
>guest additions.

That's what I thought - but many of the drivers seem to at least partially 
support it based on the code, hence the question on here to find out if that is 
really supported and works - or is just inconsistent error checking across 
drivers.

My grumpy dev answer is that what works is not resizing down.  I'm familiar 
with the xen driver resize operation and will say that it does work when the 
guest filesystem and partition sizes are accommodating, but there's no good way 
to know whether or not it will succeed without actually trying it.  So when it 
fails it's after someone was waiting on a resize that seemed like it was 
working and then suddenly didn't.

If we want to aim for what's going to work consistently across drivers, it's 
probably going to end up being not resizing disks down.



From: Aryeh Friedman [mailto:aryeh.fried...@gmail.com]
Sent: 13 June 2014 11:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?

Theoretically impossible to reduce disk unless you have some really nasty guest 
additions.

On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil 
<philip@hp.com> wrote:
Hi Folks,

I was looking at the resize code in libvirt, and it has checks which raise an 
exception if the target root or ephemeral disks are smaller than the current 
ones - which seems fair enough I guess (you can't drop arbitrary disk content on 
resize), except that because the check is in the virt driver the effect is 
to just ignore the request (the instance remains active rather than going to 
resize-verify).

It made me wonder if there were any hypervisors that actually allow this, and 
if not wouldn't it be better to move the check to the API layer so that the 
request can be failed rather than silently ignored ?

As far as I can see:

baremetal: Doesn't support resize

hyperv: Checks only for root disk 
(https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
  )

libvirt: fails for a reduction of either root or ephemeral  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923
 )

vmware:   doesn't seem to check at all ?

xen: Allows resize down for root but not for ephemeral 
(https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
 )


It feels kind of clumsy to have such a wide variation of behavior across the 
drivers, and to have the check performed only in the driver ?

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org




___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] what to do with unit test failures from ironic api contract

2014-06-13 Thread Day, Phil
Hi Folks,

A recent change introduced a unit test to "warn/notify developers" when they 
make a change which will break the out of tree Ironic virt driver:   
https://review.openstack.org/#/c/98201

Ok - so my change (https://review.openstack.org/#/c/68942) broke it as it adds 
some extra parameters to the virt drive power_off() method - and so I now feel 
suitable warned and notified - but am not really clear what I'm meant to do 
next.

So far I've:

-  Modified the unit test in my Nova patch so it now works

-  Submitted an Ironic patch to add the extra parameters 
(https://review.openstack.org/#/c/99932/)

As far as I can see there's no way to create a direct dependency from the 
Ironic change to my patch - so I guess it's down to the Ironic folks to wait and 
accept it in the correct sequence ?
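
For anyone hitting the same thing, the shape of the change is roughly as below
(the new parameter names here are only illustrative -- see the two reviews
above for the real ones). Defaulting the new arguments keeps existing callers
working, but any driver that overrides power_off(), including the out-of-tree
Ironic one, still has to grow the new signature before callers start passing
the extra arguments -- hence the warning test and the ordering question:

    class ComputeDriver(object):
        # before: def power_off(self, instance):
        def power_off(self, instance, timeout=0, retry_interval=0):
            """Power off the specified instance.

            :param timeout: seconds to wait for a clean (soft) shutdown
                            before forcing the power off
            :param retry_interval: seconds between soft shutdown retries
            """
            raise NotImplementedError()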

Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] FloatingIp pollster spamming n-api logs (bug 1328694)

2014-06-13 Thread Matt Riedemann



On 6/12/2014 10:31 AM, John Garbutt wrote:

On 11 June 2014 20:07, Joe Gordon  wrote:

On Wed, Jun 11, 2014 at 11:38 AM, Matt Riedemann
 wrote:

On 6/11/2014 10:01 AM, Eoghan Glynn wrote:

Thanks for bringing this to the list Matt, comments inline ...


tl;dr: some pervasive changes were made to nova to enable polling in
ceilometer which broke some things and in my opinion shouldn't have been
merged as a bug fix but rather should have been a blueprint.

===

The detailed version:

I opened bug 1328694 [1] yesterday and found that came back to some
changes made in ceilometer for bug 1262124 [2].

Upon further inspection, the original ceilometer bug 1262124 made some
changes to the nova os-floating-ips API extension and the database API
[3], and changes to python-novaclient [4] to enable ceilometer to use
the new API changes (basically pass --all-tenants when listing floating
IPs).

The original nova change introduced bug 1328694 which spams the nova-api
logs due to the ceilometer change [5] which does the polling, and right
now in the gate ceilometer is polling every 15 seconds.



IIUC that polling cadence in the gate is in the process of being reverted
to the out-of-the-box default of 600s.


I pushed a revert in ceilometer to fix the spam bug and a separate patch
was pushed to nova to fix the problem in the network API.



Thank you for that. The revert is just now approved on the ceilometer
side,
and is wending its merry way through the gate.


The bigger problem I see here is that these changes were all made under
the guise of a bug when I think this is actually a blueprint.  We have
changes to the nova API, changes to the nova database API, CLI changes,
potential performance impacts (ceilometer can be hitting the nova
database a lot when polling here), security impacts (ceilometer needs
admin access to the nova API to list floating IPs for all tenants),
documentation impacts (the API and CLI changes are not documented), etc.

So right now we're left with, in my mind, two questions:

1. Do we just fix the spam bug 1328694 and move on, or
2. Do we revert the nova API/CLI changes and require this goes through
the nova-spec blueprint review process, which should have happened in
the first place.



So just to repeat the points I made on the unlogged #os-nova IRC channel
earlier, for posterity here ...

Nova already exposed an all_tenants flag in multiple APIs (servers,
volumes,
security-groups etc.) and these would have:

(a) generally pre-existed ceilometer's usage of the corresponding APIs

and:

(b) been tracked and proposed at the time via straight-forward LP
bugs,
as  opposed to being considered blueprint material

So the manner of the addition of the all_tenants flag to the floating_ips
API looks like it just followed existing custom & practice.

Though that said, the blueprint process and in particular the nova-specs
aspect, has been tightened up since then.

My preference would be to fix the issue in the underlying API, but to use
this as "a teachable moment" ... i.e. to require more oversight (in the
form of a reviewed & approved BP spec) when such API changes are proposed
in the future.

Cheers,
Eoghan


Are there other concerns here?  If there are no major objections to the
code that's already merged, then #2 might be excessive but we'd still
need docs changes.

I've already put this on the nova meeting agenda for tomorrow.

[1] https://bugs.launchpad.net/ceilometer/+bug/1328694
[2] https://bugs.launchpad.net/nova/+bug/1262124
[3] https://review.openstack.org/#/c/81429/
[4] https://review.openstack.org/#/c/83660/
[5] https://review.openstack.org/#/c/83676/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



While there is precedent for --all-tenants with some of the other APIs,
I'm concerned about where this stops.  When ceilometer wants polling on some
other resources that the nova API exposes, will it need the same thing?
Doing all of this polling for resources in all tenants in nova puts an undue
burden on the nova API and the database.

Can we do something with notifications here instead?  That's where the
nova-spec process would have probably caught this.


++ to notifications and not polling.


Yeah, I think we need to revert this, and go through the specs
process. Its been released in Juno-1 now, so this revert feels bad,
but perhaps its the best of a bad situation?

Word of caution, we need to get notifications versioned correctly if
we want this as a more formal external "API". I think Heat have
similar issues in this area, efficiently knowing about something
happening in Nova. So we do need to "solve this".

Some kind of callback to ceil

Re: [openstack-dev] [all] gerrit-dash-creator - much easier process for creating client side dashboards

2014-06-13 Thread Jason Rist
On Fri 13 Jun 2014 02:38:11 AM MDT, Giulio Fidente wrote:
> On 05/31/2014 03:56 PM, Sean Dague wrote:
>> We're still working on a way to make it possible to review in server
>> side gerrit dashboards more easily to gerrit. In the mean time I've put
>> together a tool that makes it easy to convert gerrit dashboard
>> definitions into URLs that you can share around.
>
> a bit off topic, but useful to better use and share the 'shortened'
> url we get
>
> is anyone aware of a service allowing to 1) update an existing
> 'shortened' url 2) customize the 'shortened' url a bit 3) has an api?
>
> reason for the third would be that every time a change is merged into a
> .dash file we would update the urls automatically
>

I can't guarantee this but I think bit.ly falls into those categories.

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] gerrit-dash-creator - much easier process for creating client side dashboards

2014-06-13 Thread Sean Dague
On 06/13/2014 04:38 AM, Giulio Fidente wrote:
> On 05/31/2014 03:56 PM, Sean Dague wrote:
>> We're still working on a way to make it possible to review in server
>> side gerrit dashboards more easily to gerrit. In the mean time I've put
>> together a tool that makes it easy to convert gerrit dashboard
>> definitions into URLs that you can share around.
> 
> a bit off topic, but useful to better use and share the 'shortened' url
> we get
> 
> is anyone aware of a service allowing to 1) update an existing
> 'shortened' url 2) customize the 'shortened' url a bit 3) has an api?
> 
> reason for the third would be that every time a change is merged into a
> .dash file we would update the urls automatically

I think if we were doing that, we'd want to actually have a url
shortener in openstack-infra so it was a service that we could depend on
(and preferably set meaningful names).

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Russell Bryant
On 06/13/2014 09:22 AM, Day, Phil wrote:
> I guess the question I’m really asking here is:  “Since we know resize
> down won’t work in all cases, and the failure if it does occur will be
> hard for the user to detect, should we just block it at the API layer
> and be consistent across all Hypervisors ?”

+1 for consistency.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-13 Thread Jain, Vivek
+2. I totally agree with your comments Doug. It defeats the purpose if Barbican 
does not want to deal with consumers of its service.

Barbican can simply have a counter field on each container to signify how many 
consumers are using it. Every time a consumer uses a container, it increases 
the counter using the Barbican API.  If the counter is 0, the container is safe to delete.
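
A very rough sketch of the register/unregister idea (hypothetical names only --
this is not the Barbican API; a set of consumer references instead of a bare
counter also makes it possible to say who is still using the container):

    class Container(object):
        def __init__(self, container_id):
            self.container_id = container_id
            self.consumers = set()   # e.g. {"lbaas:listener/123"}

        def register_consumer(self, consumer_ref):
            self.consumers.add(consumer_ref)

        def unregister_consumer(self, consumer_ref):
            self.consumers.discard(consumer_ref)

        def delete(self):
            if self.consumers:
                raise RuntimeError("container %s still in use by: %s" %
                                   (self.container_id,
                                    ", ".join(sorted(self.consumers))))
            # ... actually remove the stored secrets here ...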

—vivek

From: Doug Wiegley mailto:do...@a10networks.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 10, 2014 at 2:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Of what use is a database that randomly delete rows?  That is, in effect, what 
you’re allowing.

The secrets are only useful when paired with a service.  And unless I’m 
mistaken, there’s no undo.  So you’re letting users shoot themselves in the 
foot, for what reason, exactly?  How do you expect openstack to rely on a data 
store that is fundamentally random at the whim of users?  Every single service 
that uses Barbican will now have to hack in a defense mechanism of some kind, 
because they can’t trust that the secret they rely on will still be there 
later.  Which defeats the purpose of this mission statement:  "Barbican is a 
ReST API designed for the secure storage, provisioning and management of 
secrets.”

(And I don’t think anyone is suggesting that blind refcounts are the answer.  
At least, I hope not.)

Anyway, I hear this has already been decided, so, so be it.  Sounds like we’ll 
hack around it.

Thanks,
doug


From: Douglas Mendizabal 
mailto:douglas.mendiza...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 10, 2014 at 3:26 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

I think that having Barbican decide whether the user is or isn’t allowed to 
delete a secret that they own based on a reference count that is not directly 
controlled by them is unacceptable.   This is indeed policy enforcement, and 
we’d rather not go down that path.

I’m opposed to the idea of reference counting altogether, but a couple of other 
Barbican-core members are open to it, as long as it does not affect the delete 
behaviors.

-Doug M.

From: Adam Harwell 
mailto:adam.harw...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 10, 2014 at 4:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Doug: Right, we actually have a blueprint draft for EXACTLY this, but the 
Barbican team gave us a flat "not happening, we reject this change" on causing 
a delete to fail. The shadow-copy solution I proposed only came about because 
the option you are proposing is not possible. :(

I also realized that really, this whole thing is an issue for the backend, not 
really for the API itself — the LBaaS API will be retrieving the key/cert from 
Barbican and passing it to the backend, and the backend is what's responsible 
for handling it from that point (F5, Stingray etc would never actually call 
back to Barbican). So, really, the Service-VM solution we're architecting is 
where the shadow-copy solution needs to live, at which point it no longer is 
really an issue we'd need to discuss on this mailing list, I think. Stephen, 
does that make sense to you?
--Adam

https://keybase.io/rm_you


From: Doug Wiegley mailto:do...@a10networks.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 10, 2014 4:10 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

A third option, that is neither shadow copying nor policy enforcement:

Ask the Barbican team to put in a small api that is effectively, “hey, I’m 
using this container” and “hey, I’m done with this container”, and the have 
their delete fail if someone is still using it.  This isn’t calling into other 
services, it’s simply getting informed of who’s using what, and not stomping 
it.  That seems pretty core to me, and the workarounds if we can’t trust the 
store to store things are pretty hacky.

Doug


From: Adam Harwell 
mailto:adam.harw...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.o

Re: [openstack-dev] [all] gerrit-dash-creator - much easier process for creating client side dashboards

2014-06-13 Thread Giulio Fidente

On 06/13/2014 03:37 PM, Jason Rist wrote:

On Fri 13 Jun 2014 02:38:11 AM MDT, Giulio Fidente wrote:

On 05/31/2014 03:56 PM, Sean Dague wrote:

We're still working on a way to make it possible to review in server
side gerrit dashboards more easily to gerrit. In the mean time I've put
together a tool that makes it easy to convert gerrit dashboard
definitions into URLs that you can share around.


a bit off topic, but useful to better use and share the 'shortened'
url we get

is anyone aware of a service allowing to 1) update an existing
'shortened' url 2) customize the 'shortened' url a bit 3) has an api?

reason for the third would be that every time a change is merged into a
.dash file we would update the urls automatically


I can't guarantee this but I think bit.ly falls into those categories.


yeah they have the API but don't seem to allow for editing of the 
destination URL after the shortened URL is created (not even for 
authenticated users, I mean)


maybe this is how they want it to work though...
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [neutron] [qa] now is not a great time to work on hacking updates

2014-06-13 Thread Sean Dague
I do realize that a new hacking was released, which pulls in a new
flake8. However, right now is really not a great time to be sending
through 10 patch series for style cleanups while we have a giant merge
queue backlog.

I'm calling out Neutron here - https://review.openstack.org/99512 and
kicked it out of the gate (the fact that it was recheck no bugged to get
into the gate makes me extra sad).

And calling out Tempest for it here -
https://review.openstack.org/#/c/98899/ (that series I -2ed so we won't
have the issue of it taking up gate space).

I get that people want to work on some easy code to get it merged, but
now is really not the time.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mysql/mysql-python license "contamination" into openstack?

2014-06-13 Thread Chris Friesen

On 06/12/2014 01:30 PM, Mike Bayer wrote:


the GPL is excepted in the case of MySQL and other MySQL products
released by Oracle (can you imagine such a sentence being
written.), see
http://www.mysql.com/about/legal/licensing/foss-exception/.


Okay, good start.  mysql itself is out of the picture.



If
MySQL-Python itself were an issue, OpenStack could switch to another
MySQL library, such as MySQL Connector/Python which is now MySQL's
official Python driver:
http://dev.mysql.com/doc/connector-python/en/index.html


It seems like mysql-python could be an issue given that it's licensed GPLv2.


Has anyone tested any of the mysql DBAPIs with more permissive licenses?


I just mentioned other MySQL drivers the other day; MySQL
Connector/Python, OurSQL and pymysql are well tested within SQLAlchemy
and these drivers generally pass all tests.   There's some concern over
compatibility with eventlet, however, I can't speak to that just yet.
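
As a point of reference, switching DBAPI drivers under SQLAlchemy is mostly a
matter of the dialect+driver prefix in the connection URL (credentials and
host below are placeholders); whether the rest of the stack -- eventlet,
oslo.db, the migrations -- behaves identically is exactly the part that would
still need OpenStack-level testing:

    from sqlalchemy import create_engine

    # MySQL-Python (GPLv2, the current default)
    e1 = create_engine("mysql+mysqldb://nova:secret@127.0.0.1/nova")
    # pymysql (MIT-licensed, pure Python)
    e2 = create_engine("mysql+pymysql://nova:secret@127.0.0.1/nova")
    # MySQL Connector/Python (Oracle's official driver)
    e3 = create_engine("mysql+mysqlconnector://nova:secret@127.0.0.1/nova")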


Okay, so they're not really tested with OpenStack then?

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-13 Thread Duncan Thomas
On 10 June 2014 17:53, Mark McLoughlin  wrote:
> On Tue, 2014-06-10 at 16:09 +0100, Duncan Thomas wrote:
>> On 10 June 2014 15:07, Mark McLoughlin  wrote:
>>
>> > Exposing which configurations are actively "tested" is a perfectly sane
>> > thing to do. I don't see why you think calling this "certification" is
>> > necessary to achieve your goals.
>>
>> What is certification except a formal way of saying 'we tested it'? At
>> least when you test it enough to have some degree of confidence in
>> your testing.
>>
>> That's *exactly* what certification means.
>
> I disagree. I think the word has substantially more connotations than
> simply "this has been tested".
>
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/036963.html

Ok, so I think you have a higher opinion of certification programs in
general than my experiences have led me to expect, but I'm starting to
see your point.



>> Since cinder,
>> and therefore cinder-core, is going to get the blame, I feel we should
>> try to maintain some degree of control over the claims.
>
> I'm starting to see where you're coming from, but I fear this
> "certification" thing will make it even worse.
>
> Right now you can easily shrug off any responsibility for the quality of
> a third party driver or an untested in-tree driver. Sure, some people
> may have unreasonable expectations about such things, but you can't stop
> people being idiots. You can better communicate expectations, though,
> and that's excellent.
>
> But as soon as you "certify" that driver cinder-core takes on a
> responsibility that I would think is unreasonable even if the driver was
> tested. "But you said it's certified!"
>
> Is cinder-core really ready to take on responsibility for every issue
> users see with "certified" drivers and downstream OpenStack products?

I think we de facto have a lot of that responsibility, whether we like
it or not. You might be right about the word certification making it
worse, I don't think it does, but at least I've managed to explain my
position clearly and I think it has been understood.

> If it's an out-of-tree driver then we say "talk to your vendor".
>
> If it's an in-tree driver, those actively maintaining the driver provide
> "best effort community support" like anything else.
>
> If it's an in-tree driver and isn't being actively maintained, and "best
> effort community support" isn't being provided, then we need a way to
> communicate that unmaintained status. The level of testing it receives
> is what we currently see as the most important aspect, but it's not the
> only aspect.

In cinder, I expect a driver removal patch to be the communication
method. I think that view (with varying degrees of communication,
carrot and stick before hand to get a better resolution if possible)
is pretty much agreed by most of cinder core.

> Mark.


Thanks for your time and repeated replies, I think we are now both
aware of what the other person is saying and why, which is the point I
wanted to get to. It seems like the weight of opinion is against me,
so I'll go quiet on the subject... it is in the end a subjective
matter. Thanks to Anita for opening the discussion.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Cinder] Review days? (open to ANYBODY and EVERYBODY)

2014-06-13 Thread Duncan Thomas
Same as Jay, for much the same reasons. Having a fixed calendar time
makes it easy for me to put up a 'do not disturb' sign.

On 13 June 2014 05:10, Jay Bryant  wrote:
> John,
>
> +2
>
> I am guilty of falling behind on reviews. Pulled in to a lot of other stuff
> since the summit ... and before.
>
> Having prescribed time on my calendar is a good idea.  Just put it on my
> calendar.
>
> Jay
>
> On Jun 12, 2014 10:49 PM, "John Griffith" 
> wrote:
>>
>> Hey Everyone,
>>
>> So I've been noticing some issues with regards to reviews in Cinder
>> lately, namely we're not keeping up very well.  Most of this is a math
>> problem (submitters >> reviewers).  We're up around 200+ patches in the
>> queue, and a large number of them have no negative feedback but have just
>> been waiting patiently (some > 2 months).
>>
>> Growth is good, new contributors are FANTASTIC... but stale submissions in
>> the queue are BAD, and I hate for people interested in contributing to
>> become discouraged and just go away (almost as much as I hate emails asking
>> me to review patches).
>>
>> I'd like to propose we consider one or two review days a week for a while
>> to try and work on our backlog.  I'd like to propose that on these days we
>> make an attempt to NOT propose new code (or at least limit it to bug-fixes
>> [real bugs, not features disguised as bugs]) and have an agreement from
>> folks to focus on actually doing reviews and using IRC to collaborate
>> together and knock some of these out.
>>
>> We did this sort of thing over a virtual meetup and it was really
>> effective, I'd like to see if we can't do something for a brief duration
>> over IRC.
>>
>> I'm thinking we give it a test run, set aside a few hours next Wed morning
>> to start (coinciding with our Cinder weekly meeting since many folks around
>> that morning across TZ's etc) where we all dedicate some time prior to the
>> meeting to focus exclusively on helping each other get some reviews knocked
>> out.  As a reminder Cinder weekly meeting is 16:00 UTC.
>>
>> Let me know what you all think, and keep in mind this is NOT limited to
>> just current regular "Block-Heads" but anybody in the OpenStack community
>> that's willing to help out and of course new reviewers are MORE than
>> welcome.
>>
>> Thanks,
>> John
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct access to review.openstack.org port 29418 required

2014-06-13 Thread Asselin, Ramy
As far as I know, that’s the only non-standard port that needs to be opened in 
order to do 3rd party ci.
Ramy
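
P.S. A quick way to sanity-check just the network path before running the full
"Reading the Event Stream" test (which also needs a registered SSH key) is a
plain TCP probe; a timeout or connection error here means 29418 is blocked:

    import socket

    # Gerrit's SSH endpoint replies with an SSH banner if the port is reachable.
    sock = socket.create_connection(("review.openstack.org", 29418), timeout=10)
    print(sock.recv(64))
    sock.close()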

From: Erlon Cruz [mailto:sombra...@gmail.com]
Sent: Friday, June 13, 2014 4:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct access to 
review.openstack.org port 29418 required

Hi Asselin,

Did you have problems with other ports? Is it necessary to have outbound access to 
other ports?

Erlon


On Mon, Jun 9, 2014 at 2:21 PM, Asselin, Ramy 
<ramy.asse...@hp.com> wrote:
All,

I’ve been working on setting up our Cinder 3rd party CI setup.
I ran into an issue where Zuul requires direct access to 
review.openstack.org port 29418, which is 
currently blocked in my environment. It should be unblocked around the end of 
June.

Since this will likely affect other vendors, I encourage you to take a few 
minutes and check if this affects you in order to allow sufficient time to 
resolve.

Please follow the instructions in section “Reading the Event Stream” here: [1]
Make sure you can get the event stream ~without~ any tunnels or proxies, etc. 
such as corkscrew [2].
(Double-check that any such configurations are commented out in: ~/.ssh/config 
and /etc/ssh/ssh_config)

Ramy (irc: asselin)

[1] http://ci.openstack.org/third_party.html
[2] http://en.wikipedia.org/wiki/Corkscrew_(program)





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Johannes Erdfelt
On Fri, Jun 13, 2014, Russell Bryant  wrote:
> On 06/13/2014 09:22 AM, Day, Phil wrote:
> > I guess the question I’m really asking here is:  “Since we know resize
> > down won’t work in all cases, and the failure if it does occur will be
> > hard for the user to detect, should we just block it at the API layer
> > and be consistent across all Hypervisors ?”
> 
> +1 for consistency.

+1 for having written the code for the xenapi driver and not wishing
that on anyone else :)

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct access to review.openstack.org port 29418 required

2014-06-13 Thread Erlon Cruz
Ok thanks.


On Fri, Jun 13, 2014 at 11:32 AM, Asselin, Ramy  wrote:

>  As far as I know, that’s the only non-standard port that needs to be
> opened in order to do 3rd party ci.
>
> Ramy
>
>
>
> *From:* Erlon Cruz [mailto:sombra...@gmail.com]
> *Sent:* Friday, June 13, 2014 4:03 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct
> access to review.openstack.org port 29418 required
>
>
>
> Hi Asselin,
>
>
>
> Did you have problems with other ports? Is it necessary to have outbound access
> to other ports?
>
>
>
> Erlon
>
>
>
>
>
> On Mon, Jun 9, 2014 at 2:21 PM, Asselin, Ramy  wrote:
>
> All,
>
>
>
> I’ve been working on setting up our Cinder 3rd party CI setup.
>
> I ran into an issue where Zuul requires direct access to
> review.openstack.org port 29418, which is currently blocked in my
> environment. It should be unblocked around the end of June.
>
>
>
> Since this will likely affect other vendors, I encourage you to take a few
> minutes and check if this affects you in order to allow sufficient time
> to resolve.
>
>
>
> Please follow the instructions in section “Reading the Event Stream” here:
> [1]
>
> Make sure you can get the event stream ~without~ any tunnels or proxies,
> etc. such as corkscrew [2].
>
> (Double-check that any such configurations are commented out in:
> ~/.ssh/config and /etc/ssh/ssh_config)
>
>
>
> Ramy (irc: asselin)
>
>
>
> [1] http://ci.openstack.org/third_party.html
>
> [2] http://en.wikipedia.org/wiki/Corkscrew_(program)
>
>
>
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Gary Kotton
I think that is a very good point. This should maybe be addressed in the API 
layer as you have suggested.
Thanks
Gary

From: Day, Phil <philip@hp.com>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, June 13, 2014 at 4:22 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?

I guess the question I'm really asking here is:  "Since we know resize down 
won't work in all cases, and the failure if it does occur will be hard for the 
user to detect, should we just block it at the API layer and be consistent 
across all Hypervisors ?"

From: Andrew Laski [mailto:andrew.la...@rackspace.com]
Sent: 13 June 2014 13:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?


On 06/13/2014 08:03 AM, Day, Phil wrote:
>Theoretically impossible to reduce disk unless you have some really nasty 
>guest additions.

That's what I thought - but many of the drivers seem to at least partially 
support it based on the code, hence the question on here to find out if that is 
really supported and works - or is just inconsistent error checking across 
drivers.

My grumpy dev answer is that what works is not resizing down.  I'm familiar 
with the xen driver resize operation and will say that it does work when the 
guest filesystem and partition sizes are accommodating, but there's no good way 
to know whether or not it will succeed without actually trying it.  So when it 
fails it's after someone was waiting on a resize that seemed like it was 
working and then suddenly didn't.

If we want to aim for what's going to work consistently across drivers, it's 
probably going to end up being not resizing disks down.



From: Aryeh Friedman [mailto:aryeh.fried...@gmail.com]
Sent: 13 June 2014 11:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as 
part of resize ?

Theoretically impossible to reduce disk unless you have some really nasty guest 
additions.

On Fri, Jun 13, 2014 at 6:02 AM, Day, Phil <philip@hp.com> wrote:
Hi Folks,

I was looking at the resize code in libvirt, and it has checks which raise an 
exception if the target root or ephemeral disks are smaller than the current 
ones - which seems fair enough I guess (you can't drop arbitrary disk content on 
resize), except that because the check is in the virt driver the effect is 
to just ignore the request (the instance remains active rather than going to 
resize-verify).

It made me wonder if there were any hypervisors that actually allow this, and 
if not wouldn't it be better to move the check to the API layer so that the 
request can be failed rather than silently ignored ?

As far as I can see:

baremetal: Doesn't support resize

hyperv: Checks only for root disk 
(https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L99-L108
  )

libvirt: fails for a reduction of either root or ephemeral  
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4918-L4923
 )

vmware:   doesn't seem to check at all ?

xen: Allows resize down for root but not for ephemeral 
(https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vmops.py#L1015-L1032
 )


It feels kind of clumsy to have such a wide variation of behavior across the 
drivers, and to have the check performed only in the driver ?

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Aryeh M. Friedman, Lead Developer, 
http://www.PetiteCloud.org

Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver

2014-06-13 Thread Mohammad Banikazemi
I had tried to address the comment on the review board where Carlos had
raised the same issue. Should have posted here as well.

https://review.openstack.org/#/c/96393/

Patch Set 3:
Carlos, The plan is not to have multiple drivers for enforcing policies. At
least not right now. With respect to using same config options by drivers,
we can have a given group policy driver and possibly an ML2 mechanism
driver use the same config namespace. Does this answer your questions?
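
As a purely illustrative sketch of that idea (the group and option names below
are invented for the example), both a group policy driver and an ML2 mechanism
driver could register and read one shared option group, so something like ODL
credentials only need to be configured once:

    # Illustration only: one shared config namespace consumed by two drivers.
    from oslo.config import cfg

    odl_opts = [
        cfg.StrOpt('url'),
        cfg.StrOpt('username'),
        cfg.StrOpt('password', secret=True),
    ]
    cfg.CONF.register_opts(odl_opts, group='ml2_odl')

    # both drivers would then read cfg.CONF.ml2_odl.url and friends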

Best,

Mohammad



From:   Sumit Naiksatam 
To: "OpenStack Development Mailing List (not for usage questions)"
,
Date:   06/12/2014 01:10 PM
Subject:Re: [openstack-dev] [neutron][group-based-policy] GP mapping
driver



Hi Carlos,

I noticed that the point you raised here had not been followed up. So
if I understand correctly, your concern is related to sharing common
configuration information between GP drivers, and ML2 mechanism
drivers (when used in the mapping)? If so, would a common
configuration file  shared between the two drivers help to address
this?

Thanks,
~Sumit.

On Tue, May 27, 2014 at 10:33 AM, Carlos Gonçalves 
wrote:
> Hi,
>
> On 27 May 2014, at 15:55, Mohammad Banikazemi  wrote:
>
> GP like any other Neutron extension can have different implementations.
Our
> idea has been to have the GP code organized similar to how ML2 and
mechanism
> drivers are organized, with the possibility of having different drivers
for
> realizing the GP API. One such driver (analogous to an ML2 mechanism
driver
> I would say) is the mapping driver that was implemented for the PoC. I
> certainly do not see it as the only implementation. The mapping driver is
> just the driver we used for our PoC implementation in order to gain
> experience in developing such a driver. Hope this clarifies things a bit.
>
>
> The code organisation adopted to implement the PoC for the GP is indeed
very
> similar to the one ML2 is using. There is one aspect I think GP will hit
> soon if it continues to follow with its current code base where multiple
> (policy) drivers will be available, and as Mohammad putted it as being
> analogous to an ML2 mech driver, but are independent from ML2’s. I’m
> unaware, however, if the following problem has already been brought to
> discussion or not.
>
> From here I see the GP effort going, besides from some code refactoring,
I'd
> say expanding the supported policy drivers is the next goal. With that
ODL
> support might next. Now, administrators enabling GP ODL support will have
to
> configure ODL data twice (host, user, password) in case they’re using ODL
as
> a ML2 mech driver too, because policy drivers share no information
between
> ML2 ones. This can become more troublesome if ML2 is configured to load
> multiple mech drivers.
>
> With that said, if it makes any sense, a different implementation should
be
> considered. One that somehow allows mech drivers living in ML2 umbrella
to
> be extended; BP [1] [2] may be a first step towards that end, I’m
guessing.
>
> Thanks,
> Carlos Gonçalves
>
> [1]
>
https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions

> [2] https://review.openstack.org/#/c/89208/
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [neutron] [qa] now is not a great time to work on hacking updates

2014-06-13 Thread Joe Gordon
On Fri, Jun 13, 2014 at 7:13 AM, Sean Dague  wrote:

> I do realize that a new hacking was released, which pulls in a new
> flake8. However, right now is really not a great time to be sending
> through 10 patch series for style cleanups while we have a giant merge
> queue backlog.
>
> ++, when I cut hacking 0.9.x series I didn't think the gate backlog would
last this long.


> I'm calling out Neutron here - https://review.openstack.org/99512 and
> kicked it out of the gate (the fact that it was recheck no bugged to get
> into the gate makes me extra sad).
>
> And calling out Tempest for it here -
> https://review.openstack.org/#/c/98899/ (that series I -2ed so we won't
> have the issue of it taking up gate space).
>
> I get that people want to work on some easy code to get it merged, but
> now is really not the time.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Andrew Laski


On 06/13/2014 10:53 AM, Johannes Erdfelt wrote:

On Fri, Jun 13, 2014, Russell Bryant  wrote:

On 06/13/2014 09:22 AM, Day, Phil wrote:

I guess the question I’m really asking here is:  “Since we know resize
down won’t work in all cases, and the failure if it does occur will be
hard for the user to detect, should we just block it at the API layer
and be consistent across all Hypervisors ?”

+1 for consistency.

+1 for having written the code for the xenapi driver and not wishing
that on anyone else :)

JE


I'm also +1.  But this is a feature that's offered by some cloud 
providers so removing it may cause some pain even with a deprecation cycle.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [neutron] [qa] now is not a great time to work on hacking updates

2014-06-13 Thread Kyle Mestery
On Fri, Jun 13, 2014 at 9:13 AM, Sean Dague  wrote:
> I do realize that a new hacking was released, which pulls in a new
> flake8. However, right now is really not a great time to be sending
> through 10 patch series for style cleanups while we have a giant merge
> queue backlog.
>
> I'm calling out Neutron here - https://review.openstack.org/99512 and
> kicked it out of the gate (the fact that it was recheck no bugged to get
> into the gate makes me extra sad).
>
Thanks for bringing this up Sean.

> And calling out Tempest for it here -
> https://review.openstack.org/#/c/98899/ (that series I -2ed so we won't
> have the issue of it taking up gate space).
>
I've added -2 to the patch above so it won't take up gate space as well.

> I get that people want to work on some easy code to get it merged, but
> now is really not the time.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Johannes Erdfelt
On Fri, Jun 13, 2014, Andrew Laski  wrote:
> 
> On 06/13/2014 10:53 AM, Johannes Erdfelt wrote:
> >On Fri, Jun 13, 2014, Russell Bryant  wrote:
> >>On 06/13/2014 09:22 AM, Day, Phil wrote:
> >>>I guess the question I’m really asking here is:  “Since we know resize
> >>>down won’t work in all cases, and the failure if it does occur will be
> >>>hard for the user to detect, should we just block it at the API layer
> >>>and be consistent across all Hypervisors ?”
> >>+1 for consistency.
> >+1 for having written the code for the xenapi driver and not wishing
> >that on anyone else :)
> 
> I'm also +1.  But this is a feature that's offered by some cloud
> providers so removing it may cause some pain even with a deprecation
> cycle.

Yeah, that's the hard part about this.

On the flip side, supporting it going forward will be a pain too.

The xenapi implementation only works on ext[234] filesystems. That rules
out *BSD, Windows and Linux distributions that don't use ext[234]. RHEL7
defaults to XFS for instance.

In some cases, we couldn't even support resize down (XFS doesn't support
it).

That is to go along with all of the other problems with resize down as
it currently stands.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Daniel P. Berrange
On Fri, Jun 13, 2014 at 08:24:04AM -0700, Johannes Erdfelt wrote:
> On Fri, Jun 13, 2014, Andrew Laski  wrote:
> > 
> > On 06/13/2014 10:53 AM, Johannes Erdfelt wrote:
> > >On Fri, Jun 13, 2014, Russell Bryant  wrote:
> > >>On 06/13/2014 09:22 AM, Day, Phil wrote:
> > >>>I guess the question I’m really asking here is:  “Since we know resize
> > >>>down won’t work in all cases, and the failure if it does occur will be
> > >>>hard for the user to detect, should we just block it at the API layer
> > >>>and be consistent across all Hypervisors ?”
> > >>+1 for consistency.
> > >+1 for having written the code for the xenapi driver and not wishing
> > >that on anyone else :)
> > 
> > I'm also +1.  But this is a feature that's offered by some cloud
> > providers so removing it may cause some pain even with a deprecation
> > cycle.
> 
> Yeah, that's the hard part about this.
> 
> On the flip side, supporting it going forward will be a pain too.
> 
> The xenapi implementation only works on ext[234] filesystems. That rules
> out *BSD, Windows and Linux distributions that don't use ext[234]. RHEL7
> defaults to XFS for instance.

Presumably it'll have a hard time if the guest uses LVM for its image
or does luks encryption, or anything else that's more complex than just
a plain FS in a partition.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Rethink how we manage projects? (was Gate proposal - drop Postgresql configurations in the gate)

2014-06-13 Thread David Kranz

On 06/13/2014 07:31 AM, Sean Dague wrote:

On 06/13/2014 02:36 AM, Mark McLoughlin wrote:

On Thu, 2014-06-12 at 22:10 -0400, Dan Prince wrote:

On Thu, 2014-06-12 at 08:06 -0400, Sean Dague wrote:

We're definitely deep into capacity issues, so it's going to be time to
start making tougher decisions about things we decide aren't different
enough to bother testing on every commit.

In order to save resources why not combine some of the jobs in different
ways. So for example instead of:

  check-tempest-dsvm-full
  check-tempest-dsvm-postgres-full

Couldn't we just drop the postgres-full job and run one of the Neutron
jobs w/ postgres instead? Or something similar, so long as at least one
of the jobs which runs most of Tempest is using PostgreSQL I think we'd
be mostly fine. Not shooting for 100% coverage for everything with our
limited resource pool is fine, lets just do the best we can.

Ditto for gate jobs (not check).

I think that's what Clark was suggesting in:

https://etherpad.openstack.org/p/juno-test-maxtrices


Previously we've been testing Postgresql in the gate because it has a
stricter interpretation of SQL than MySQL. And when we didn't test
Postgresql it regressed. I know, I chased it for about 4 weeks in grizzly.

However Monty brought up a good point at Summit, that MySQL has a strict
mode. That should actually enforce the same strictness.

My proposal is that we land this change to devstack -
https://review.openstack.org/#/c/97442/ and backport it to past devstack
branches.

Then we drop the pg jobs, as the differences between the 2 configs
should then be very minimal. All the *actual* failures we've seen
between the 2 were completely about this strict SQL mode interpretation.
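
For reference, strict mode is just a server-side setting; e.g. in my.cnf
(the exact mode chosen is whatever the devstack patch above sets, the value
below is only an example):

    [mysqld]
    sql_mode = STRICT_ALL_TABLES

or at runtime:

    SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';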


I suppose I would like to see us keep it in the mix. Running SmokeStack
for almost 3 years I found many an issue dealing w/ PostgreSQL. I ran it
concurrently with many of the other jobs and I too had limited resources
(much less than what we have in infra today).

Would MySQL strict SQL mode catch stuff like this (old bugs, but still
valid for this topic I think):

  https://bugs.launchpad.net/nova/+bug/948066

  https://bugs.launchpad.net/nova/+bug/1003756


Having support for and testing against at least 2 databases helps keep
our SQL queries and migrations cleaner... and is generally a good
practice given we have abstractions which are meant to support this sort
of thing anyway (so by all means let us test them!).

Also, Having compacted the Nova migrations 3 times now I found many
issues by testing on multiple databases (MySQL and PostgreSQL). I'm
quite certain our migrations would be worse off if we just tested
against the single database.

Certainly sounds like this testing is far beyond the "might one day be
useful" level Sean talks about.

The migration compaction is a good point. And I'm happy to see there
were some bugs exposed as well.

Here is where I remain stuck

We are now at a failure rate in which it's 3 days (minimum) to land a
fix that decreases our failure rate at all.

The way we are currently solving this is by effectively building "manual
zuul" and taking smart humans in coordination to end run around our
system. We've merged 18 fixes so far -
https://etherpad.openstack.org/p/gatetriage-june2014 this way. Merging a
fix this way is at least an order of magnitude more expensive on people
time because of the analysis and coordination we need to go through to
make sure these things are the right things to jump the queue.

That effort, over 8 days, has gotten us down to *only* a 24hr merge
delay. And there are no more smoking guns. What's left is a ton of
subtle things. I've got ~ 30 patches outstanding right now (a bunch are
things to clarify what's going on in the build runs especially in the
fail scenarios). Every single one of them has been failed by Jenkins at
least once. Almost every one was failed by a different unique issue.

So I'd say at best we're 25% of the way towards solving this. That being
said, because of the deep queues, people are just recheck grinding (or
hitting the jackpot and landing something through that then fails a lot
after landing). That leads to bugs like this:

https://bugs.launchpad.net/heat/+bug/1306029

Which was seen early in the patch - https://review.openstack.org/#/c/97569/

Then kind of destroyed us completely for a day -
http://status.openstack.org/elastic-recheck/ (it's the top graph).

And, predictably, a week into a long gate queue everyone is now grumpy.
The sniping between projects, and within projects in assigning blame
starts to spike at about day 4 of these events. Everyone assumes someone
else is to blame for these things.

So there is real community impact when we get to these states.



So, I'm kind of burnt out trying to figure out how to get us out of
this. As I do take it personally when we as a project can't merge code.
As that's a terrible state to be in.

Pleading to get more people to dive in, is mostly not helping.

So my only th

[openstack-dev] [vmware] Proposed new Vim/PBM api

2014-06-13 Thread Matthew Booth
I've proposed a new Vim/PBM api in this blueprint for oslo.vmware:
https://review.openstack.org/#/c/99952/

This is just the base change. However, it is a building block for adding
a bunch of interesting new features, including:

* Robust caching
* Better RetrievePropertiesEx
* WaitForUpdatesEx
* Non-polling task waiting

It also gives us explicit session transactions, which are a requirement
for locking should that ever come to pass.

Please read and discuss. There are a couple of points in there on which
I'm actively soliciting input.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][cinder] hope to get any feedback about delay delete volume

2014-06-13 Thread Ben Nemec
Please don't send review requests to the openstack-dev list.  The
correct procedure is outlined here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks.

-Ben

On 06/12/2014 10:20 PM, Yuzhou (C) wrote:
> Hi all,
> 
> I have submit a blueprint about deferred deletion for volumes in cinder: 
> https://review.openstack.org/#/c/97034/
> 
> The implements of deferred deletion for volumes will introduce some 
> complexity, to this point, there are different options in stackers. So we 
> would like to get some feedback from anyone, particularly cloud operators.
> 
>   Here, I introduce the importance of deferred deletion for volumes 
> again.  
> Currently in cinder, calling the API of deleting volume means the volume 
> will be deleted immediately. If the user specify a wrong volume by mistake, 
> the data in the volume may be lost forever. To avoid this, I hope to add a 
> deferred deletion mechanism for volumes. So for a certain amount of time, 
> volumes can be restored after the user find a misuse of deleting volume. 
> Moreover, there are deferred deletion implements for instance in nova and 
> image in glance, I think it is very common feature to protected important 
> resource.
> So I think deferred deletion for volumes is valuable,  it seems to me 
> that should be sufficient.
> 
> Welcome your feedback and suggestions!
> 
> Thanks.
> 
> Zhou Yu
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-13 Thread Thierry Carrez
Stefano Maffulli wrote:
> Hello folks,
> 
> we're working on a quarterly report of activities in all our git and
> gerrit repositories to understand the dynamics of contributions across
> different dimensions. This report will be similar to what Bitergia
> produced at release time.
> 
> I'd like to discuss more widely the format of how to meaningfully group
> the information presented. One basic need is to have a top level view
> and then drill down based on the release status of each project. A
> suggested classification would be based on the programs.yaml file:
> 
> - OpenStack Software (top level overview):
>- integrated
>- incubated
>- clients
>- other:
>devstack
>deployment
>common libraries
> - OpenStack Quality Assurance
> - OpenStack Documentation
> - OpenStack Infrastructure
> - OpenStack Release management

I would regroup devstack, QA, Infra and Release management into the same
"support" group. Not sure there is much value in presenting them distinctly.

> It seems easy but based on that grouping, integrated and incubated git
> repositories are easy to spot in programs.yaml (they have
> integrated-since attribute).
> 
> Let's have the Sahara program as an example:
> 
>   projects:
> - repo: openstack/sahara
>   incubated-since: icehouse
>   integrated-since: juno
> - repo: openstack/python-saharaclient
> - repo: openstack/sahara-dashboard
> - repo: openstack/sahara-extra
> - repo: openstack/sahara-image-elements
> - repo: openstack/sahara-specs
> 
> So, for the OpenStack Software part:
> * openstack/sahara is integrated in juno and incubated since icehouse.
> * Then clients: python-saharaclient is easy to spot. Is it standard and
> accepted practice that all client projects are called
> python-$PROGRAM-NAMEclient?
> * And what about the rest of the sahara-* projects: where would they go?
> with openstack/sahara? or somewhere else, in others? devstack?
> common-libraries?

Interesting to note that most of those extra projects should disappear
soon, as they get consumed by other projects (for example
sahara-dashboard going into horizon). In the mean time, I think they can
go to your "other" category. That's how they are classified in
programs.yaml.


> Other repositories for which I have no clear classification:
> 
> - repo: openstack/swift-bench
> - repo: openstack/django_openstack_auth
> - repo: openstack/tuskar-ui
> - repo: openstack/heat-cfntools
> - repo: openstack/heat-specs
> - repo: openstack/heat-templates
> - repo: openstack-dev/heat-cfnclient
> - repo: openstack/trove-integration
> - repo: openstack/ironic-python-agent
> - repo: stackforge/kite

All "other". To avoid creating so many subcategories in "other", I would
not have any subcategory there and just lump them all in the same
category. That's about as relevant as lumping all integrated projects
work into the same "integrated" category, anyway.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Darren J Moffat



On 06/13/14 16:37, Daniel P. Berrange wrote:

The xenapi implementation only works on ext[234] filesystems. That rules
>out *BSD, Windows and Linux distributions that don't use ext[234]. RHEL7
>defaults to XFS for instance.

Presumably it'll have a hard time if the guest uses LVM for its image
or does luks encryption, or anything else that's more complex than just
a plain FS in a partition.


For example ZFS, which doesn't currently support device removal (except 
for mirror detach) or device size shrink (but does support device grow). 
 ZFS does support file system resize but file systems are "just" 
logical things within a storage pool (made up of 1 or more devices) so 
that has nothing to do with the block device size.


--
Darren J Moffat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][cinder] hope to get any feedback about delay delete volume

2014-06-13 Thread John Griffith
On Fri, Jun 13, 2014 at 9:52 AM, Ben Nemec  wrote:

> Please don't send review requests to the openstack-dev list.  The
> correct procedure is outlined here:
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
>
> Thanks.
>
> -Ben
>
> On 06/12/2014 10:20 PM, Yuzhou (C) wrote:
> > Hi all,
> >
> > I have submit a blueprint about deferred deletion for volumes in
> cinder: https://review.openstack.org/#/c/97034/
> >
> > The implements of deferred deletion for volumes will introduce some
> complexity, to this point, there are different options in stackers. So we
> would like to get some feedback from anyone, particularly cloud operators.
> >
> >   Here, I introduce the importance of deferred deletion for volumes
> again.
> > Currently in cinder, calling the API of deleting volume means the
> volume will be deleted immediately. If the user specify a wrong volume by
> mistake, the data in the volume may be lost forever. To avoid this, I hope
> to add a deferred deletion mechanism for volumes. So for a certain amount
> of time, volumes can be restored after the user find a misuse of deleting
> volume.
> > Moreover, there are deferred deletion implements for instance in
> nova and image in glance, I think it is very common feature to protected
> important resource.
> > So I think deferred deletion for volumes is valuable,  it seems to
> me that should be sufficient.
> >
> > Welcome your feedback and suggestions!
> >
> > Thanks.
> >
> > Zhou Yu
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

​Ben,

I believe Zhou Yu is actually seeking input from the broader community on
the feature itself; not the review.

I wasn't overly enthusiastic about adding the feature and to me the
argument of "instances do it" isn't necessarily the best reason to do
something.  I'm torn on whether I think it's a good feature to implement or
not given that I think there are some complexities it introduces that may
not be worthwhile.

Anyway, I'm fairly open to hearing more input (maybe from some users in the
community?) to get some input on how valuable or needed this feature
actually is.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Meaning of 204 from DELETE apis

2014-06-13 Thread David Kranz

On 06/12/2014 08:27 PM, GHANSHYAM MANN wrote:




On Fri, Jun 13, 2014 at 8:42 AM, Christopher Yeoh wrote:


On Fri, Jun 13, 2014 at 7:05 AM, David Kranz <dkr...@redhat.com> wrote:

On 06/12/2014 05:27 PM, Jay Pipes wrote:

On 06/12/2014 05:17 PM, David Kranz wrote:

Tempest has a number of tests in various services for
deleting objects
that mostly return 204. Many, but not all, of these
tests go on to check
that the resource was actually deleted but do so in
different ways.
Sometimes they go into a timeout loop waiting for a
GET on the object to
fail. Sometimes they immediately call DELETE again or
GET and assert
that it fails. According to what I can see about the
HTTP "spec", 204
should mean that the object was deleted. So is waiting
for something to
disappear unnecessary? Is immediate assertion wrong?
Does this behavior
vary service to service? We should be as consistent
about this as
possible but I am not sure what the expected behavior
of all services
actually is.


The main problem I've seen is that while the resource is
deleted, it stays in a deleting state for some time, and
quotas don't get adjusted until the server is finally set
to a terminated status.

So you are talking about nova here. In tempest I think we need
to more clearly distinguish when delete is being called to
test the delete api vs. as part of some cleanup. There was an
irc discussion related to this recently.  The question is, if
I do a delete and get a 204, can I expect that immediately
doing another delete or get will fail? And that question needs
an answer for each api that has delete in order to have proper
tests for delete.


So if the deletion does not occur before the call returns the API
should be returning 202 rather than 204. The tasks API should help
clarify things here as a task handle will be returned for long
running things and you can query progress rather than polling by
listing objects etc.

Chris

I was also going through the testing of delete operations in tempest 
and there is not much consistency.
If we are testing *strictly*, we should not have any wait loop after a 204 
response. If an operation is still in progress but returns 204, then that is 
a false return, and tempest should be able to catch it since it can break 
the user's app as well. Tempest should report a failure so that the specific 
project can fix that operation or return code (exception in the case of 
backward compatibility).


Right. I think it makes sense for all the delete apis that return 204 to 
have tests that try to do another delete immediately and also do a get. 
But for reasons Jay pointed out we will have to leave the "cleanup" 
deletes doing a loop check.
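
Something along these lines, as a sketch only (the "widget" resource, client
methods and exceptions.NotFound are placeholders for whatever the service and
tempest client actually use):

    def test_delete_is_immediate(self):
        resp, _ = self.client.delete_widget(self.widget_id)
        self.assertEqual(204, resp.status)
        # a 204 means the resource is gone, so both of these should fail
        # right away, with no wait loop
        self.assertRaises(exceptions.NotFound,
                          self.client.delete_widget, self.widget_id)
        self.assertRaises(exceptions.NotFound,
                          self.client.get_widget, self.widget_id)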


 -David

Ghanhsyam Mann

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks & Regards
Ghanshyam Mann



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct access to review.openstack.org port 29418 required

2014-06-13 Thread Anita Kuno
On 06/13/2014 10:32 AM, Asselin, Ramy wrote:
> As far as I know, that’s the only non-standard port that needs to be opened 
> in order to do 3rd party ci.
> Ramy
> 
> From: Erlon Cruz [mailto:sombra...@gmail.com]
> Sent: Friday, June 13, 2014 4:03 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] Third-Party CI Issue: direct access to 
> review.openstack.org port 29418 required
> 
> Hi Asselin,
> 
> Do you had problems with other ports? Is it need to have outbound access to 
> other ports?
> 
> Erlon
Gerrit sshd documentation can be found here:
https://gerrit-review.googlesource.com/Documentation/config-gerrit.html#sshd

OpenStack Gerrit uses ports 29418 and 443. The document mentions the use
of ports 22 and 8080 which OpenStack Gerrit does not use.
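
A quick way to verify direct access is to open the event stream by hand
(USERNAME is your gerrit username):

    ssh -p 29418 USERNAME@review.openstack.org gerrit stream-events

If that hangs or is refused while a proxied connection works, look for a
stanza like the following in ~/.ssh/config or /etc/ssh/ssh_config and comment
it out for the CI account (the proxy host and port here are just placeholders):

    Host review.openstack.org
        ProxyCommand corkscrew proxy.example.com 8080 %h %p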

Thanks,
Anita.
> 
> 
> On Mon, Jun 9, 2014 at 2:21 PM, Asselin, Ramy <ramy.asse...@hp.com> wrote:
> All,
> 
> I’ve been working on setting up our Cinder 3rd party CI setup.
> I ran into an issue where Zuul requires direct access to 
> review.openstack.org port 29418, which is 
> currently blocked in my environment. It should be unblocked around the end of 
> June.
> 
> Since this will likely affect other vendors, I encourage you to take a few 
> minutes and check if this affects you in order to allow sufficient time to 
> resolve.
> 
> Please follow the instructions in section “Reading the Event Stream” here: [1]
> Make sure you can get the event stream ~without~ any tunnels or proxies, etc. 
> such as corkscrew [2].
> (Double-check that any such configurations are commented out in: 
> ~/.ssh/config and /etc/ssh/ssh_config)
> 
> Ramy (irc: asselin)
> 
> [1] http://ci.openstack.org/third_party.html
> [2] http://en.wikipedia.org/wiki/Corkscrew_(program)
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-13 Thread Stefano Maffulli
On 06/13/2014 05:57 AM, Doug Hellmann wrote:
> That section of the file used to be organized into separate lists 
> for incubated, integrated, and "other" repositories. We changed it 
> when we started tracking the incubation and integration dates. So it 
> seems like just listing them under sahara as "other" would make 
> sense.

If I understand you correctly you'd have for Juno something like:

- OpenStack Software (top level overview):
   - integrated
* nova
* neutron
* cinder
* ...
* sahara
   ** sahara-other
* ...
   - incubated
* ...
   - other
  - clients
* sahara-client
  [... etc]

Looks a bit complicated to me: I wasn't considering that as an option.

It's worth remembering that the objective of the report is to discover and
highlight trends in each program, monitor the efficacy of actions taken
to fuel our growth, and capture early warning signals of distress in the
community. The grouping should be done among repositories and bug
trackers that share common characteristics.

For example, grouping integrated projects is valuable to calculate the
median time for patches to merge across them all and compare them to
individual repositories. That's why I ask if repos openstack/sahara-*
should go with openstack/sahara and therefore have their numbers aggregated
with the other integrated projects' gerrit/git repos. Or, as you seem to
suggest, drop them into 'other' as these (ex.
openstack-dev/heat-cfnclient, openstack/ironic-python-agent,
openstack/tuskar-ui, openstack/sahara-image-elements etc) really have
more in common among each other than with their 'parent' project?
Does that make sense?
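
For what it's worth, the bucketing can be derived mechanically from
programs.yaml; a rough sketch (the file path and bucket names are just for
illustration, and it assumes the file parses to a mapping of program name to
metadata as in the snippet quoted earlier in the thread):

    import yaml

    with open('reference/programs.yaml') as f:
        programs = yaml.safe_load(f)

    buckets = {'integrated': [], 'incubated': [], 'clients': [], 'other': []}
    for program in programs.values():
        for project in program.get('projects', []):
            repo = project['repo']
            name = repo.split('/')[-1]
            if 'integrated-since' in project:
                buckets['integrated'].append(repo)
            elif 'incubated-since' in project:
                buckets['incubated'].append(repo)
            elif name.startswith('python-') and name.endswith('client'):
                buckets['clients'].append(repo)
            else:
                buckets['other'].append(repo)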

On 06/13/2014 08:52 AM, Thierry Carrez wrote:
> I would regroup devstack, QA, Infra and Release management into the 
> same "support" group. Not sure there is much value in presenting them
> distinctly.

Quite the other way around: it makes little sense to complicate the
analysis creating a group for these and it's easier to see them
separately. Also, I think there are different sets of people working on
them, it's more interesting to me to see them separately.

Cheers,
stef


PS in the past message I have *erroneously* included a couple of -specs
repositories: we're going to ignore those in our analysis, we're *not*
counting activity in those.

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ml2] Tracking the reviews for ML2 related specs

2014-06-13 Thread Mohammad Banikazemi

In order to make the review process a bit easier (without duplicating too
much data and without creating too much overhead), we have created a wiki
to keep track of the ML2 related specs for the Juno cycle [1]. The idea is
to organize the people who participate in the ML2 subgroup activities and
get the related specs reviewed as much as possible in the subgroup before
asking the broader community to review. (There is of course nothing that
prevents others from reviewing these specs as soon as they are available
for review.) If you have any ML2 related spec under review or being
planned, you may want to update the wiki [1] accordingly.

We will see if this will be useful or not. If you have any comments or
suggestions please post here or bring them to the IRC weekly meetings [2].

Best,

Mohammad

[1] https://wiki.openstack.org/wiki/Tracking_ML2_Subgroup_Reviews
[2] https://wiki.openstack.org/wiki/Meetings/ML2
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][cinder] hope to get any feedback about delay delete volume

2014-06-13 Thread Ben Nemec
On 06/13/2014 11:03 AM, John Griffith wrote:
> On Fri, Jun 13, 2014 at 9:52 AM, Ben Nemec  wrote:
> 
>> Please don't send review requests to the openstack-dev list.  The
>> correct procedure is outlined here:
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
>>
>> Thanks.
>>
>> -Ben
>>
>> On 06/12/2014 10:20 PM, Yuzhou (C) wrote:
>>> Hi all,
>>>
>>> I have submit a blueprint about deferred deletion for volumes in
>> cinder: https://review.openstack.org/#/c/97034/
>>>
>>> The implements of deferred deletion for volumes will introduce some
>> complexity, to this point, there are different options in stackers. So we
>> would like to get some feedback from anyone, particularly cloud operators.
>>>
>>>   Here, I introduce the importance of deferred deletion for volumes
>> again.
>>> Currently in cinder, calling the API of deleting volume means the
>> volume will be deleted immediately. If the user specify a wrong volume by
>> mistake, the data in the volume may be lost forever. To avoid this, I hope
>> to add a deferred deletion mechanism for volumes. So for a certain amount
>> of time, volumes can be restored after the user find a misuse of deleting
>> volume.
>>> Moreover, there are deferred deletion implements for instance in
>> nova and image in glance, I think it is very common feature to protected
>> important resource.
>>> So I think deferred deletion for volumes is valuable,  it seems to
>> me that should be sufficient.
>>>
>>> Welcome your feedback and suggestions!
>>>
>>> Thanks.
>>>
>>> Zhou Yu
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> ​Ben,
> 
> I believe Zhou Yu is actually seeking input from the broader community on
> the feature itself; not the review.
> 
> I wasn't overly enthusiastic about adding the feature and to me the
> argument of "instances do it" isn't necessarily the best reason to do
> something.  I'm torn on whether I think it's a good feature to implement or
> not given that I think there are some complexities it introduces that may
> not be worthwhile.
> 
> Anyway, I'm fairly open to hearing more input (maybe from some users in the
> community?) to get some input on how valuable or needed this feature
> actually is.
> 
> Thanks,
> John​
> 

Okay, sorry for the noise then.  It sounded like a review request, but
if the point was to bring the discussion to a wider audience then that
is obviously fine.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-13 Thread Kyle Mestery
I've spent some time doing some initial analysis of 3rd Party CI in
Neutron. The tl;dr is that it's a mess, and it needs fixing. And I'm
setting a deadline of Juno-2 for people to address their CI systems
and get them in shape or we will remove plugins and drivers in Juno-3
which do not meet the expectations set out below.

My initial analysis of Neutron 3rd Party CI is here [1]. This was
somewhat correlated with information from DriverLog [2], which was
helpful to put this together.

As you can see from the list, there are a lot of CI systems which are
broken right now. Some have just recently started working again.
Others are working great, and some are in the middle somewhere. The
overall state isn't that great. I'm sending this email to
openstack-dev and BCC'ing CI owners to raise awareness of this issue.
If I have incorrectly labeled your CI, please update the etherpad and
include links to the latest voting/comments your CI system has done
upstream and reply to this thread.

I have documented the 3rd Party CI requirements for Neutron here [3].
I expect people to be following these guidelines for their CI systems.
If there are questions on the guidelines or expectations, please reply
to this thread or reach out to myself in #openstack-neutron on
Freenode. There is also a third-party meeting [4] which is a great
place to ask questions and share your experience setting up a 3rd
party CI system. The infra team has done a great job sponsoring and
running this meeting (thanks Anita!), so please both take advantage of
it and also contribute to it so we can all share knowledge and help
each other.

Owners of plugins/drivers should ensure their CI is matching the
requirements set forth by both infra and Neutron when running tests
and posting results. Like I indicated earlier, we will look at
removing code for drivers which are not meeting these requirements as
set forth in the wiki pages.

The goal of this effort is to ensure consistency across testing
platforms, making it easier for developers to diagnose issues when
third party CI systems fail, and to ensure these drivers are tested
since they are part of the integrated releases we perform. We used to
require a core team member to sponsor a plugin/driver, but we moved to
the 3rd party CI system in Icehouse instead. Ensuring these systems
are running and properly working is the only way we can ensure code is
working when it's part of the integrated release.

Thanks,
Kyle

[1] https://etherpad.openstack.org/p/ZLp9Ow3tNq
[2] 
http://www.stackalytics.com/driverlog/?project_id=openstack%2Fneutron&vendor=&release_id=
[3] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[4] https://wiki.openstack.org/wiki/Meetings/ThirdParty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [3rd Party] Log retention with 3rd party testing

2014-06-13 Thread trinath.soman...@freescale.com
Hi Stackers-

I have a question about the timeline for log retention. From the IRC 
discussion, I learned that logs are retained for one month.

I have two scenarios to deal with.

[1] A change takes two months or more to get merged into the master branch.
 Here, if CIs delete their logs after one month, the owner/reviewer may 
not be able to check the old logs.

[2] A change is good and merged into the master branch.

 Here, if this change creates a bug in the future, a look back at the CI logs 
might help to resolve it.

Retaining all the logs forever would be a storage issue.

Kindly help me with your suggestions to resolve this.

Thanks in advance.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-13 Thread Aryeh Friedman
Also, ZFS needs to know what is on the guest. For example, bhyve (the only
working hypervisor for BSD currently [vbox kind of also works]) stores the
backing store (unless bare metal) as a single block file. It is impossible to
make that non-opaque to the outside world unless you can run commands on the
instance.


On Fri, Jun 13, 2014 at 11:53 AM, Darren J Moffat 
wrote:

>
>
> On 06/13/14 16:37, Daniel P. Berrange wrote:
>
>> The xenapi implementation only works on ext[234] filesystems. That rules
>>> >out *BSD, Windows and Linux distributions that don't use ext[234]. RHEL7
>>> >defaults to XFS for instance.
>>>
>> Presumably it'll have a hard time if the guest uses LVM for its image
>> or does luks encryption, or anything else that's more complex than just
>> a plain FS in a partition.
>>
>
> For example ZFS, which doesn't currently support device removal (except
> for mirror detach) or device size shrink (but does support device grow).
>  ZFS does support file system resize but file systems are "just" logical
> things within a storage pool (made up of 1 or more devices) so that has
> nothing to do with the block device size.
>
> --
> Darren J Moffat
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] RabbitMQ (AMQP 0.9) driver for Marconi

2014-06-13 Thread Janczuk, Tomasz
Thanks Flavio, inline.

On 6/13/14, 1:37 AM, "Flavio Percoco"  wrote:

>On 11/06/14 18:01 +, Janczuk, Tomasz wrote:
>>Thanks Flavio, some comments inline below.
>>
>>On 6/11/14, 5:15 AM, "Flavio Percoco"  wrote:
>>

  1.  Marconi exposes HTTP APIs that allow messages to be listed
without
consuming them. This API cannot be implemented on top of AMQP 0.9 which
implements a strict queueing semantics.
>>>
>>>I believe this is quite an important endpoint for Marconi. It's not
>>>about listing messages but getting batch of messages. Wether it is
>>>through claims or not doesn't really matter. What matters is giving
>>>the user the opportunity to get a set of messages, do some work and
>>>decide what to do with those messages afterwards.
>>
The sticky point here is that this Marconi's endpoint allows messages to
>>be obtained *without* consuming them in the traditional messaging system
>>sense: the messages remain visible to other consumers. It could be argued
>>that such semantics can be implemented on top of AMQP by first getting
>>the
>>messages and then immediately releasing them for consumption by others,
>>before the Marconi call returns. However, even that is only possible for
>>messages that are at the front of the queue - the "paging" mechanism
>>using
>>markers cannot be supported.
>
>What matters is whether the listing functionality is useful or not.
>Lets not think about it as "listing" or "paging" but about getting
>batches of messages that are still available for others to process in
>parallel. As mentioned in my previous email, AMQP has been a good way
>to analyze the extra set of features Marconi exposes in the API but I
>don't want to make the choice of usability based on whether
>traditional messaging systems support it and how it could be
>implemented there.

This functionality is very useful in a number of scenarios. It has
traditionally been the domain of database systems - flexible access to
data is what DBs excel in (select top 1000 * from X order by create_date).
The vast majority of existing messaging systems have a much more constrained
and prescriptive way of accessing data than a database. Why does this
functionality need to be part of Marconi? What are the benefits of listing
messages in Marconi that cannot be realized with a plain database?

In other words, if I need to access my data in that way, why would I use
Marconi rather than a DB?

>
  5.  Marconi message consumption API creates a "claim ID" for a set of
consumed messages, up to a limit. In the AMQP 0.9 model (as well as SQS
and Azure Queues), "claim ID" maps onto the concept of "delivery tag"
which has a 1-1 relationship with a message. Since there is no way to
represent the 1-N mapping between claimID and messages in the AMQP 0.9
model, it effectively restrict consumption of messages to one per
claimID. This in turn prevents batch consumption benefits.

  6.  Marconi message consumption acknowledgment requires both claimID
and messageID to be provided. MessageID concept is missing in AMQP 0.9.
In order to implement this API, assuming the artificial 1-1 restriction
of claim-message mapping from #5 above, this API could be implemented
by
requiring that messageID === claimID. This is really a workaround.

>>>
>>>These 2 points represent quite a change in the way Marconi works and a
>>>trade-off in terms of batch consumption (as you mentioned). I believe
>>>we can have support for both things. For example, claimID+suffix where
>>>suffix point to a specific claimed messages.
>>>
>>>I don't want to start an extended discussion about this here but lets
>>>keep in mind that we may be able to support both. I personally think
>>>Marconi's claim's are reasonable as they are, which means I currently
>>>like them better than SQS's.
>>
>>What are the advantages of the Marconi model for claims over the SQS and
>>Azure Queue model for acknowledgements?
>>
>>I think the SQS and Azure Queue model is both simpler and more flexible.
>>But the key advantage is that it has been around for a while, has been
>>proven to work, and people understand it.
>>
>>1. SQS and Azure require only one concept to acknowledge a message
>>(receipt handle/pop receipt) as opposed to Marconi's two concepts
>>(message
>>ID + claim ID). SQS/Azure model is simpler.
>
>TBH, I'm not exactly sure where you're going with this. I mean, the
>model may look simpler but it's not necessarily better nor easier to
>implement. Keeping both, messages and claims, separate in terms of IDs
>and management is flexible and powerful enough, IMHO. But I'm probably
>missing your point.
>
>I don't believe requiring the messageID+ClaimID to delete a specific,
>claimed, messages is hard.

It may not be hard. It is just more complex than it needs to be to
accomplish the same task.

>
>>
>>2. Similarly to Marconi, SQS and Azure allow individual claimed messages
>>to be deleted. This is a wash.
>
>Calling it a wash

Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward

2014-06-13 Thread Ivar Lazzaro
Hi Kyle,

Embrane's CI was blocked by some nasty bugs affecting the testing
environment. It resumed yesterday (6/12) [0].
Unfortunately it's still non voting (only commenting so far). Not sure if
this is a requirement or not, but it should be able to put +1/-1
immediately after the voting right is granted.
I'll keep an eye on it to make sure it is stable again.

Thanks for sorting out the CI situation :D!

[0] https://review.openstack.org/#/c/98813/


On Fri, Jun 13, 2014 at 10:07 AM, Kyle Mestery 
wrote:

> I've spent some time doing some initial analysis of 3rd Party CI in
> Neutron. The tl;dr is that it's a mess, and it needs fixing. And I'm
> setting a deadline of Juno-2 for people to address their CI systems
> and get them in shape or we will remove plugins and drivers in Juno-3
> which do not meet the expectations set out below.
>
> My initial analysis of Neutron 3rd Party CI is here [1]. This was
> somewhat correlated with information from DriverLog [2], which was
> helpful to put this together.
>
> As you can see from the list, there are a lot of CI systems which are
> broken right now. Some have just recently started working again.
> Others are working great, and some are in the middle somewhere. The
> overall state isn't that great. I'm sending this email to
> openstack-dev and BCC;ing CI owners to raise awareness of this issue.
> If I have incorrectly labeled your CI, please update the etherpad and
> include links to the latest voting/comments your CI system has done
> upstream and reply to this thread.
>
> I have documented the 3rd Party CI requirements for Neutron here [3].
> I expect people to be following these guidelines for their CI systems.
> If there are questions on the guidelines or expectations, please reply
> to this thread or reach out to myself in #openstack-neutron on
> Freenode. There is also a third-party meeting [4] which is a great
> place to ask questions and share your experience setting up a 3rd
> party CI system. The infra team has done a great job sponsoring and
> running this meeting (thanks Anita!), so please both take advantage of
> it and also contribute to it so we can all share knowledge and help
> each other.
>
> Owners of plugins/drivers should ensure their CI is matching the
> requirements set forth by both infra and Neutron when running tests
> and posting results. Like I indicated earlier, we will look at
> removing code for drivers which are not meeting these requirements as
> set forth in the wiki pages.
>
> The goal of this effort is to ensure consistency across testing
> platforms, making it easier for developers to diagnose issues when
> third party CI systems fail, and to ensure these drivers are tested
> since they are part of the integrated releases we perform. We used to
> require a core team member to sponsor a plugin/driver, but we moved to
> the 3rd party CI system in Icehouse instead. Ensuring these systems
> are running and properly working is the only way we can ensure code is
> working when it's part of the integrated release.
>
> Thanks,
> Kyle
>
> [1] https://etherpad.openstack.org/p/ZLp9Ow3tNq
> [2]
> http://www.stackalytics.com/driverlog/?project_id=openstack%2Fneutron&vendor=&release_id=
> [3] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
> [4] https://wiki.openstack.org/wiki/Meetings/ThirdParty
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] How to group activity in git/gerrit repositories

2014-06-13 Thread Stangel, Dan
Hi Stef,

On Thu, 2014-06-12 at 17:07 -0700, Stefano Maffulli wrote:
> we're working on a quarterly report of activities in all our git and
> gerrit repositories to understand the dynamics of contributions across
> different dimensions. This report will be similar to what Bitergia
> produced at release time.
> 
> I'd like to discuss more widely the format of how to meaningfully group
> the information presented. One basic need is to have a top level view
> and then drill down based on the release status of each project. A
> suggested classification would be based on the programs.yaml file:
> 
> - OpenStack Software (top level overview):
>- integrated
>- incubated
>- clients
>- other:
>devstack
>deployment
>common libraries
> - OpenStack Quality Assurance
> - OpenStack Documentation
> - OpenStack Infrastructure
> - OpenStack Release management
> 
> It seems easy but based on that grouping, integrated and incubated git
> repositories are easy to spot in programs.yaml (they have
> integrated-since attribute).
> 
> Let's have the Sahara program as an example:
> 
>   projects:
> - repo: openstack/sahara
>   incubated-since: icehouse
>   integrated-since: juno
> - repo: openstack/python-saharaclient
> - repo: openstack/sahara-dashboard
> - repo: openstack/sahara-extra
> - repo: openstack/sahara-image-elements
> - repo: openstack/sahara-specs
> 
> So, for the OpenStack Software part:
> * openstack/sahara is integrated in juno and incubated since icehouse.
> * Then clients: python-saharaclient is easy to spot. Is it standard and
> accepted practice that all client projects are called
> python-$PROGRAM-NAMEclient?
> * And what about the rest of the sahara-* projects: where would they go?
> with openstack/sahara? or somewhere else, in others? devstack?
> common-libraries?
> 
> Other repositories for which I have no clear classification:
> 
> - repo: openstack/swift-bench
> - repo: openstack/django_openstack_auth
> - repo: openstack/tuskar-ui
> - repo: openstack/heat-cfntools
> - repo: openstack/heat-specs
> - repo: openstack/heat-templates
> - repo: openstack-dev/heat-cfnclient
> - repo: openstack/trove-integration
> - repo: openstack/ironic-python-agent
> - repo: stackforge/kite
> 
> Any suggestions on how you would like to see these classified: together
> with the integrated/incubated 'parent' program (sahara with
> sahara-dashboard, sahara-extra, etc.), or separately under 'other'? Or
> are they all different, so we need to look at them one by one?
> 
> Let me know what you think (tomorrow office hour, 11am PDT, is a good
> time to chat about this).

You can also refer to the example of Stackalytics, who have created
their own hierarchy and groupings for metrics reporting:
https://github.com/stackforge/stackalytics/blob/master/etc/default_data.json

I have found this grouping to be pretty logical and useful.

For my own internal metrics, I also rely on the programs.yaml file
organization, and / or the "Programs" wiki page
https://wiki.openstack.org/wiki/Programs where they differ.

Further, I group all the "other" projects that do not clearly fit into a
separate "bucket" grouping, until they either disappear or become
incorporated into other groupings.
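For what it's worth, the first-pass bucketing Stefano describes can be
scripted straight from programs.yaml. A minimal sketch follows -- it assumes
the file layout matches the sahara snippet quoted above, and the
'clients'/'other' rules are my own guesses, not an agreed classification:

    #!/usr/bin/env python
    # Rough first-pass bucketing of repos from programs.yaml.  Assumes each
    # program entry has a "projects" list of dicts carrying "repo" and,
    # optionally, "incubated-since" / "integrated-since".
    import collections
    import yaml

    def classify(repo, project):
        if 'integrated-since' in project:
            return 'integrated'
        if 'incubated-since' in project:
            return 'incubated'
        name = repo.split('/')[-1]
        if name.startswith('python-') and name.endswith('client'):
            return 'clients'
        return 'other'   # sahara-extra, *-specs, etc. end up here for now

    def group_repos(path='programs.yaml'):
        with open(path) as f:
            programs = yaml.safe_load(f)
        buckets = collections.defaultdict(list)
        for program in programs.values():
            for project in program.get('projects', []):
                buckets[classify(project['repo'], project)].append(project['repo'])
        return buckets

    if __name__ == '__main__':
        for bucket, repos in sorted(group_repos().items()):
            print('%s (%d): %s' % (bucket, len(repos), ', '.join(sorted(repos))))

That still leaves the one-off repos you listed needing a manual decision,
but it keeps the integrated/incubated/clients split mechanical.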

Dan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Can tenants provide hints to router scheduling?

2014-06-13 Thread CARVER, PAUL
Suppose a tenant knows that some of their networks are particularly high 
bandwidth and others are relatively low bandwidth.

Is there any mechanism that a tenant can use to let Neutron know what sort of 
bandwidth is expected through a particular router?

I'm concerned about the physical NICs on some of our network nodes getting 
saturated if several virtual routers that end up on the same network node 
happen to be serving multi-Gbps networks.

I'm looking through 
https://github.com/openstack/neutron/blob/master/neutron/scheduler/l3_agent_scheduler.py
 and it appears the only choices are ChanceScheduler which just calls 
random.choice and LeastRoutersScheduler which appears to make its decision 
based on simple quantity of routers per L3 agent.

Are there any blueprints or WIP for taking bandwidth utilization into account 
when scheduling routers?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Can tenants provide hints to router scheduling?

2014-06-13 Thread Nachi Ueno
Hi Paul

I think this flavor bp is related.
https://review.openstack.org/#/c/90070/
By using flavors, you can specify a flavor for routers (high
bandwidth or low bandwidth), just as you can for VMs (vCPU, vMemory, etc).
I don't see any bp for flavor-based scheduling yet, but IMO it would be
great to have such a scheduler, similar to
Nova's filter scheduler.
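
For illustration only, the weigher such a bandwidth-aware scheduler would
need might look like the sketch below. Neither in-tree scheduler does this
today; the per-agent capacity bookkeeping and the way a tenant would express
the hint (flavor, extension attribute, ...) are assumptions, not existing
Neutron APIs:

    # Hypothetical weigher: pick the L3 agent with the most spare NIC
    # bandwidth, given a per-router bandwidth hint from the tenant.
    def pick_l3_agent(agents, router_bandwidth_mbps):
        """agents: list of dicts like
            {'host': 'net-node-1', 'nic_capacity_mbps': 10000,
             'allocated_mbps': 6500}
        router_bandwidth_mbps: the tenant's hint for the new router.
        """
        candidates = [a for a in agents
                      if a['nic_capacity_mbps'] - a['allocated_mbps']
                      >= router_bandwidth_mbps]
        if not candidates:
            return None  # nothing can honor the hint; fall back or fail
        # "Least loaded" by spare capacity rather than by router count.
        return max(candidates,
                   key=lambda a: a['nic_capacity_mbps'] - a['allocated_mbps'])

    agents = [
        {'host': 'net-node-1', 'nic_capacity_mbps': 10000, 'allocated_mbps': 9000},
        {'host': 'net-node-2', 'nic_capacity_mbps': 10000, 'allocated_mbps': 2000},
    ]
    print(pick_l3_agent(agents, 3000)['host'])   # -> net-node-2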

Best
Nachi





2014-06-13 11:24 GMT-07:00 CARVER, PAUL :
> Suppose a tenant knows that some of their networks are particularly high
> bandwidth and others are relatively low bandwidth.
>
>
>
> Is there any mechanism that a tenant can use to let Neutron know what sort
> of bandwidth is expected through a particular router?
>
>
>
> I’m concerned about the physical NICs on some of our network nodes getting
> saturated if several virtual routers that end up on the same network node
> happen to be serving multi –Gbps networks.
>
>
>
> I’m looking through
> https://github.com/openstack/neutron/blob/master/neutron/scheduler/l3_agent_scheduler.py
> and it appears the only choices are ChanceScheduler which just calls
> random.choice and LeastRoutersScheduler which appears to make its decision
> based on simple quantity of routers per L3 agent.
>
>
>
> Are there any blueprints or WIP for taking bandwidth utilization into
> account when scheduling routers?
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenDaylight ML2 mechanism driver - Openstack

2014-06-13 Thread Kyle Mestery
On Fri, Jun 13, 2014 at 5:53 AM, Sachi Gupta  wrote:
> Hi,
>
> I have set up OpenStack Havana with neutron ML2 plugin and ODL controller
> following link
> http://www.siliconloons.com/getting-started-with-opendaylight-and-openstack/.
>
The ODL MechanismDriver support landed in Icehouse; it's not
officially supported in Havana. I think I recall one other person
privately reaching out to me about running this with Havana, but they
hit some issues, maybe the same as you. Is there a way you could
upgrade to Icehouse where this is supported?

> The ODL mechanism driver file in
> /opt/stack/neutron/neutron/plugins/ml2/drivers/mechanism_odl.py is attached.
>
>
> When I issue the network create command with no network name, the call
> is passed to ODL and comes back with a 400 error code in the sendjson method.
> At line 187, this exception is caught but ignored, so the network
> is created in OpenStack but fails in ODL.
>
> Please suggest the changes that need to be made to keep OpenStack in sync
> with ODL.
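
For illustration, the shape of the change being asked about might look like
the sketch below. This is not the shipped mechanism_odl.py, and the helper
and exception names are made up; the point is only that a 4xx/5xx from the
controller should propagate (or trigger a re-sync) instead of being silently
ignored, so ML2 does not report success for a resource ODL never accepted:

    import json
    import requests

    class ODLSyncError(Exception):
        pass

    def sendjson(session, method, url, payload):
        resp = session.request(method, url, data=json.dumps(payload),
                               headers={'Content-Type': 'application/json'})
        try:
            resp.raise_for_status()  # the 400 from ODL raises here instead of being lost
        except requests.HTTPError as exc:
            # Surface the failure to the caller (e.g. the postcommit hook)
            # so ML2 can fail the operation or schedule a re-sync.
            raise ODLSyncError('OpenDaylight rejected %s %s: %s'
                               % (method, url, exc))
        return resp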
>
> Thanks & Regards
> Sachi Gupta
> Systems Engineer
> Tata Consultancy Services
> Mailto: sachi.gu...@tcs.com
> Website: http://www.tcs.com
> 
> Experience certainty.   IT Services | Business Solutions | Consulting
> 
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vmware] Proposed new Vim/PBM api

2014-06-13 Thread Davanum Srinivas
cc'ing Gary

On Fri, Jun 13, 2014 at 11:49 AM, Matthew Booth  wrote:
> I've proposed a new Vim/PBM api in this blueprint for oslo.vmware:
> https://review.openstack.org/#/c/99952/
>
> This is just the base change. However, it is a building block for adding
> a bunch of interesting new features, including:
>
> * Robust caching
> * Better RetrievePropertiesEx
> * WaitForUpdatesEx
> * Non-polling task waiting
>
> It also gives us explicit session transactions, which are a requirement
> for locking should that ever come to pass.
>
> Please read and discuss. There are a couple of points in there on which
> I'm actively soliciting input.
>
> Matt
> --
> Matthew Booth
> Red Hat Engineering, Virtualisation Team
>
> Phone: +442070094448 (UK)
> GPG ID:  D33C3490
> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-13 Thread Mathew R Odden

I am surprised this became a concern so quickly, but I do understand the
strangeness of installing a 'bash8' binary on the command line. I'm fine with
renaming to 'bashate' or 'bash_tidy', but renames can take some time to
work through all the references.

Apparently Sean and I both thought of the 'bashate' name independently
(from gpb => jeepyb) but I wasn't too keen on the idea since it isn't very
descriptive. 'bash-tidy' makes more sense but we can't use dashes in python
package names :(

My vote would be for 'bashate' still, since I think that would be the
easiest to transition to from the current name.

Mathew Odden, Software Developer
IBM STG OpenStack Development
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Eventletdev] Eventlet 0.15 pre-release testers needed

2014-06-13 Thread Chuck Thier
Just an FYI for those interested in the next eventlet version. It also
looks like they have a Python 3 branch ready to start testing with.

--
Chuck

-- Forwarded message --
From: Sergey Shepelev 
Date: Fri, Jun 13, 2014 at 1:18 PM
Subject: [Eventletdev] Eventlet 0.15 pre-release testers needed
To: eventletdev , Noah Glusenkamp <
n...@empowerengine.com>, Victor Sergeyev ,
ja...@stasiak.at


Hello, everyone.

TL;DR: please test these versions in Python2 and Python3:
pip install URL should work
(master)
https://github.com/eventlet/eventlet/archive/6c4823c80575899e98afcb12f84dcf4d54e277cd.zip
(py3-greenio branch, on top of master)
https://github.com/eventlet/eventlet/archive/9e666c78086a1eb0c05027ec6892143dfa5c32bd.zip

I am going to make the Eventlet 0.15 release in the coming week or two, and your
feedback would be greatly appreciated because it's the first release since
we started work on Python 3 compatibility. So please try to run your project
in Python 3 too, if you can.
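
If you just want a quick sanity check after pip-installing one of the zips
above, a trivial greenthread spot check that should behave the same on
Python 2 and Python 3 (only a smoke test, not a substitute for running your
project's own test suite):

    # Exercises GreenPool/greenthread scheduling only, nothing project-specific.
    import eventlet

    def work(n):
        eventlet.sleep(0.001)   # yield to the hub, forcing real green switching
        return n * n

    pool = eventlet.GreenPool(size=10)
    results = list(pool.imap(work, range(100)))
    assert results == [n * n for n in range(100)]
    print("eventlet %s greenthread smoke test passed" % eventlet.__version__)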


___
Click here to unsubscribe or manage your list subscription:
https://lists.secondlife.com/cgi-bin/mailman/listinfo/eventletdev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-13 Thread Kyle Mestery
On Fri, Jun 13, 2014 at 2:01 PM, Mathew R Odden  wrote:
> I am surprised this became a concern so quickly, but I do understand the
> strangeness of installing a 'bash8' binary on command line. I'm fine with
> renaming to 'bashate' or 'bash_tidy', but renames can take some time to work
> through all the references.
>
> Apparently Sean and I both thought of the 'bashate' name independently (from
> gpb => jeepyb) but I wasn't to keen on the idea since it isn't very
> descriptive. 'bash-tidy' makes more sense but we can't use dashes in python
> package names :(
>
> My vote would be for 'bashate' still, since  I think that would be the
> easiest to transition to from the current name.
>
When I first saw bashate, my eyes focused on "hate" in there for some
reason. Not sure if others noticed this, but my advice would be to
stay away from it for that reason. basheight has the same euphemism
without the harshness, IMHO. Though now that I type this, I do see
"height" in there, though it's less offensive than "hate".

Thanks,
Kyle

> Mathew Odden, Software Developer
> IBM STG OpenStack Development
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-13 Thread Clint Byrum
Excerpts from Thomas Goirand's message of 2014-06-13 03:04:07 -0700:
> On 06/13/2014 06:53 AM, Morgan Fainberg wrote:
> > Hi Thomas,
> > 
> > I felt a couple sentences here were reasonable to add (more than “don’t
> > care” from before). 
> > 
> > I understand your concerns here, and I totally get what you’re driving
> > at, but in the packaging world wouldn’t this make sense to call it
> > "python-bash8"?
> 
> Yes, this is what will happen.
> 
> > Now the binary, I can agree (for reasons outlined)
> > should probably not be named ‘bash8’, but the name of the “command”
> > could be separate from the packaging / project name.
> 
> If upstream chooses /usr/bin/bash8, I'll have to follow. I don't want to
> carry patches which I'd have to maintain.
> 
> > Beyond a relatively minor change to the resulting “binary” name [sure
> > bash-tidy, or whatever we come up with], is there something more that
> > really is awful (rather than just silly) about the naming?
> 
> Renaming python-bash8 into something else is not possible, because the
> Debian standard is to use, as Debian name, what is used for the import.
> So if we have "import xyz", then the package will be python-xyz.
> 

For python _libraries_ yes.

But for a utility which happens to import that library, naming the
package after what upstream calls it is a de facto standard.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] And the prize for proposing change #100000 goes to ...

2014-06-13 Thread Monty Taylor
Greg Haynes

Let the race for change 100 commence.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] And the prize for proposing change #100000 goes to ...

2014-06-13 Thread Gregory Haynes
Excerpts from Monty Taylor's message of 2014-06-13 19:32:39 +:
> Greg Haynes
> 
> Let the race for change 100 commence.
> 

W!

-- 
Gregory Haynes
g...@greghaynes.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] versioning and releases

2014-06-13 Thread Doug Hellmann
Since I think we have consensus on this, I have published a slightly
modified version (cleaning up verb tense and clarifying how Juno and
post-Juno will differ) to the wiki:
https://wiki.openstack.org/wiki/Oslo/VersioningPolicy

On Thu, Jun 12, 2014 at 3:33 PM, Doug Hellmann
 wrote:
> On Thu, Jun 12, 2014 at 12:03 PM, Thierry Carrez  
> wrote:
>> Mark McLoughlin wrote:
>>> On Thu, 2014-06-12 at 12:09 +0200, Thierry Carrez wrote:
 Doug Hellmann wrote:
> On Tue, Jun 10, 2014 at 5:19 PM, Mark McLoughlin  
> wrote:
>> On Tue, 2014-06-10 at 12:24 -0400, Doug Hellmann wrote:
>> [...]
>>> Background:
>>>
>>> We have two types of oslo libraries. Libraries like oslo.config and
>>> oslo.messaging were created by extracting incubated code, updating the
>>> public API, and packaging it. Libraries like cliff and taskflow were
>>> created as standalone packages from the beginning, and later adopted
>>> by the oslo team to manage their development and maintenance.
>>>
>>> Incubated libraries have been released at the end of a release cycle,
>>> as with the rest of the integrated packages. Adopted libraries have
>>> historically been released "as needed" during their development. We
>>> would like to synchronize these so that all oslo libraries are
>>> officially released with the rest of the software created by OpenStack
>>> developers.

 Could you outline the benefits of syncing with the integrated release ?
>>>
>>> Sure!
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2012-November/003345.html
>>>
>>> :)
>>
>> Heh :) I know why *you* prefer it synced. Was just curious to see if
>> Doug thought the same way :P
>
> At first I didn't want to bother with alpha releases, because they
> introduce a special case. I've since come around to the same line of
> thinking as Mark, especially now that releasing alphas works without
> having to point to tarballs in requirements files.
>
>>
 Personally I see a few drawbacks to this approach:

 We dump the new version on consumers usually around RC time, which is
 generally a bad time to push a new version of a  dependency and detect
 potential breakage. Consumers just seem to get the new version at the
 worst possible time.

 It also prevents from spreading the work all over the cycle. For example
 it may have been more successful to have the oslo.messaging new release
 by milestone-1 to make sure it's adopted by projects in milestone-2 or
 milestone-3... rather than have it ready by milestone-3 and expect all
 projects to use it by consuming alphas during the cycle.

 Now if *all* projects were continuously consuming alpha versions, most
 of those drawbacks would go away.
>>>
>>> Yes, that's the plan. Those issues are acknowledged and we're reasonably
>>> confident the alpha versions plan will address them.
>>
>> I agree that if we release alphas often and most projects consume them
>> instead of jump from stable release to stable release, we have all the
>> benefits without the drawbacks.
>>
>> --
>> Thierry Carrez (ttx)
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >