[openstack-dev] [neutron] L3 agent bug - metadata nat rule removal

2013-08-16 Thread Maru Newby
Hi Nachi,

The current neutron gate failure is due to the following nat rule being cleared 
from the router namespace when the l3 agent syncs the router:

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 
-j REDIRECT --to-ports 9697

The only place the metadata nat rule appears to be applied is when a router is 
detected as being added by the l3 agent.

I'm unclear on whether the failure is due to not having the metadata nat rule 
added on sync, or if the sync is supposed to retain it.  Do you have any 
insight on this?
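
If the answer is that the sync is supposed to re-add it, the fix presumably
amounts to something like this (a rough sketch only, not the agent's actual
code or API):

    import subprocess

    CHAIN = "neutron-l3-agent-PREROUTING"
    RULE = ("-d 169.254.169.254/32 -p tcp -m tcp --dport 80 "
            "-j REDIRECT --to-ports 9697").split()

    def ensure_metadata_rule(router_id):
        # qrouter-<uuid> is the l3 agent's namespace naming convention
        base = ["ip", "netns", "exec", "qrouter-%s" % router_id,
                "iptables", "-t", "nat"]
        # iptables -C exits non-zero when the rule is absent
        if subprocess.call(base + ["-C", CHAIN] + RULE) != 0:
            subprocess.check_call(base + ["-A", CHAIN] + RULE)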

See the comments on the bug for more info: 
https://bugs.launchpad.net/neutron/+bug/1211829


m.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-16 Thread Maru Newby

On Aug 15, 2013, at 12:50 PM, Joe Gordon  wrote:

> On Thu, Aug 15, 2013 at 12:22 PM, Sam Harwell  
> wrote:
> I like to take a different approach. If my commit message is going to take 
> more than a couple lines for people to understand the decisions I made, I go 
> and make an issue in the issue tracker before committing locally and then 
> reference that issue in the commit message. This helps in a few ways:
> 
>  
> 
> 1.   If I find a technical or grammatical error in the commit message, it 
> can be corrected.
> 
> 2.   Developers can provide feedback on the subject matter independently 
> of the implementation, as well as feedback on the implementation itself.
> 
> 3.   I like the ability to include formatting and hyperlinks in my 
> documentation of the commit.
> 
>  
> 
> 
> This pattern has one slight issue, which is:
>  
>   • Do not assume the reviewer has access to external web services/site.
> In 6 months time when someone is on a train/plane/coach/beach/pub 
> troubleshooting a problem & browsing GIT history, there is no guarantee they 
> will have access to the online bug tracker, or online blueprint documents. 
> The great step forward with distributed SCM is that you no longer need to be 
> "online" to have access to all information about the code repository. The 
> commit message should be totally self-contained, to maintain that benefit.

I'm not sure I agree with this.  It can't be true in all cases, so it can 
hardly be considered a rule.  A guideline, maybe - something to strive for.  
But not all artifacts of the development process are amenable to being stuffed 
into code or the commits associated with them.  A dvcs is great and all, but 
unless one is working in a silo, online resources are all but mandatory.


m.

> 
> 
> https://wiki.openstack.org/wiki/GitCommitMessages#Information_in_commit_messages
> 
> 
> 
>  
> 
> Sam
> 
>  
> 
> From: Christopher Yeoh [mailto:cbky...@gmail.com] 
> Sent: Thursday, August 15, 2013 7:12 AM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] Code review study
> 
>  
> 
>  
> 
> On Thu, Aug 15, 2013 at 11:42 AM, Robert Collins  
> wrote:
> 
> This may interest data-driven types here.
> 
> https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> 
> Note specifically the citation of 200-400 lines as the knee of the review 
> effectiveness curve: that's lower than I thought - I thought 200 was clearly 
> fine - but no.
> 
>  
> 
> Very interesting article. One other point which I think is pretty relevant is 
> point 4 about getting authors to annotate the code better (and for those who 
> haven't read it, they don't mean comments in the code but separately) because 
> it results in the authors picking up more bugs before they even submit the 
> code.
> 
> So I wonder if it's worth asking people to write more detailed commit logs 
> which include some reasoning about why some of the more complex changes were 
> done in a certain way, and not just what is implemented or fixed. As it is, 
> many of the commit messages are very succinct, so I think it would help on 
> the review efficiency side too.
> 
>  
> 
> Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why is "network" and "subnet" modeled separately?

2013-08-16 Thread P Balaji-B37839
Hi Zhidong,

Thanks for pointers.

IMHO, we need the flexibility to attach/select a subnet for networks, as 
well as support for L2/L3 mode selection.

This would certainly extend Neutron's use cases, e.g. to NFV deployments.

Any comments?

Regards,
Balaji.P

From: Zhidong Yu [mailto:zdyu2...@gmail.com]
Sent: Friday, August 16, 2013 6:43 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [neutron] Why is "network" and "subnet" modeled 
separately?

I asked a similar question before. Salvatore Orlando's answer makes sense 
to me. Please refer to https://lists.launchpad.net/openstack/msg21928.html

On Thu, Aug 15, 2013 at 2:58 PM, Stephen Gran 
 wrote:
Hi,


On 14/08/13 21:12, Lorin Hochstein wrote:
Here's a neutron implementation question: why does neutron model
"network" and "subnet" as separate entities?

Or, to ask another way, are there are any practical use cases where you
would *not* have a one-to-one relationship between neutron networks and
neutron subnets in an OpenStack deployment? (e.g. one neutron network
associated with multiple neutron subnets, or one neutron network
associated with zero neutron subnets)?

Different tenants might both use the same subnet range on different layer 2 
networks.
On one layer 2 network, you might run dual-stacked, i.e. ipv4 and ipv6.

Supporting these use cases necessitates modeling them separately.
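
Concretely, the dual-stack case looks like this (a sketch with 
python-neutronclient; credentials and CIDRs are made up):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://localhost:5000/v2.0/')

    # one L2 network...
    net = neutron.create_network(
        {'network': {'name': 'dual-stack'}})['network']

    # ...carrying both an IPv4 and an IPv6 subnet
    neutron.create_subnet({'subnet': {'network_id': net['id'],
                                      'ip_version': 4,
                                      'cidr': '10.0.0.0/24'}})
    neutron.create_subnet({'subnet': {'network_id': net['id'],
                                      'ip_version': 6,
                                      'cidr': 'fd00::/64'}})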

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-16 Thread Alex Xu

On 2013-08-16 14:34, Christopher Yeoh wrote:


On Fri, Aug 16, 2013 at 10:28 AM, Melanie Witt wrote:


On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:

> +1 from me as long as this wouldn't change anything for the EC2
API's security groups support, which I assume it won't.

Correct, it's unrelated to the ec2 api.

We discussed briefly in the nova meeting today and there was
consensus that removing the standalone associate/disassociate
actions should happen.

Now the question is whether to keep the server create piece and
not remove the extension entirely. The concern is about a delay in
the newly provisioned instance being associated with the desired
security groups. With the extension, the instance gets the desired
security groups before the instance is active (I think). Without
the extension, the client would receive the active instance and
then call neutron to associate it with the desired security groups.

Would such a delay in associating with security groups be a problem?


I think we should keep the capability to set the security group on 
instance creation, so those who care about this sort of race condition 
can avoid it if they want to.




I am working on the v3 network API. I plan to only support creating a new 
instance with a port id, and to drop support for passing a network id and 
fixed ip. That means the user needs to create the port in Neutron first, 
then pass the port id into the instance-creation request. If we think this 
is ok, the user can associate the desired security groups when creating 
the port, and we can remove the security_groups extension entirely.
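
That flow would look roughly like this (a sketch; 'neutron' and 'nova' are 
assumed client instances, and the ids are placeholders):

    # create the port with its security groups already attached...
    port = neutron.create_port({'port': {
        'network_id': net_id,
        'security_groups': [web_sg_id],
    }})['port']

    # ...then boot against that port, so the groups apply from the start
    server = nova.servers.create(name='web', image=image, flavor=flavor,
                                 nics=[{'port-id': port['id']}])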



+1 to removing the associate/disassociate actions though

Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-16 Thread Robert Collins
On 16 August 2013 20:15, Maru Newby  wrote:

>> This pattern has one slight issue, which is:
>>
>>   • Do not assume the reviewer has access to external web services/site.
>> In 6 months time when someone is on a train/plane/coach/beach/pub 
>> troubleshooting a problem & browsing GIT history, there is no guarantee they 
>> will have access to the online bug tracker, or online blueprint documents. 
>> The great step forward with distributed SCM is that you no longer need to be 
>> "online" to have access to all information about the code repository. The 
>> commit message should be totally self-contained, to maintain that benefit.
>
> I'm not sure I agree with this.  It can't be true in all cases, so it can 
> hardly be considered a rule.  A guideline, maybe - something to strive for.  
> But not all artifacts of the development process are amenable to being 
> stuffed into code or the commits associated with them.  A dvcs is great and 
> all, but unless one is working in a silo, online resources are all but 
> mandatory.

In a very strict sense you're right, but consider that for anyone
doing fast iterative development the need to go hit a website is a
huge slowdown, at least in most of the world :).

So - while I agree that it's something to strive for, I think we
should invert it and say 'not having everything in the repo is
something we should permit occasional exceptions to'.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Daniel P. Berrange
On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
> Hi Everyone,
> 
> I have been trying for some time to get the code for the live-snapshot 
> blueprint[1]
> in. Going through the review process for the rpc and interface code[2] was 
> easy. I
> suspect the api-extension code[3] will also be relatively trivial to get in. 
> The
> main concern is with the libvirt driver implementation[4]. I'd like to 
> discuss the
> concerns and see if we can make some progress.
> 
> Short Summary (tl;dr)
> =
> 
> I propose we merge live-cloning as an experimental feature for Havana and 
> have the
> api extension disabled by default.
> 
> Overview
> 
> 
> First of all, let me express the value of live snapshotting. The slowest 
> part of the vm provisioning process is generally booting of the OS. The 
> advantage of live-snapshotting is that it allows the possibility of 
> bringing up application servers while skipping the overhead of vm (and 
> application) startup.

For Linux at least I think bootup time is a problem that is being solved by 
the distros. It is possible to boot many modern Linux distros in a couple of 
seconds even on physical hardware - VMs can be even faster since they don't 
have such a stupid BIOS to worry about & have a restricted set of possible 
hardware. This is on a par with, or better than, the overheads imposed by 
Nova itself in the boot-up process.

Windows may be a different story, but I've not used it in years so don't know 
what
its boot performance is like.

> I recognize that this capability comes with some security concerns, so I 
> don't expect
> this feature to go in and be ready for use in production right away. 
> Similarly,
> containers have a lot of the same benefit, but have had their own security 
> issues
> which are gradually being resolved. My hope is that getting this feature in 
> would
> allow people to start experimenting with live-booting so that we could 
> uncover some
> of these security issues.
> 
> There are two specific concerns that have been raised regarding my patch. The 
> first
> concern is related to my use of libvirt. The second concern is related to the 
> security
> issues above. Let me address them separately.
> 
> 1. Libvirt Issues
> =
> 
> The only feature I require from the hypervisor is to load memory/processor 
> state for
> a vm from a file. Qemu supports this directly. The only way that libvirt 
> exposes this
> functionality is via its restore command which is specifically for restoring 
> the
> previous state of an existing vm. "Cloning", or restoring the memory state of 
> a
> cloned vm is considered unsafe (which I will address in the second point, 
> below).
> 
> The result of the limited api is that I must include some hacks to make the 
> restore
> command actually allow me to restore the state of the new vm. I recognize 
> that this
> is using an undocumented libvirt api and isn't the ideal solution, but it 
> seemed
> "better" then avoiding libvirt and talking directly to qemu.
> 
> This is obviously not ideal. It is my hope that this 0.1 version of the 
> feature will
> allow us to iteratively improve the live-snapshot/clone process and get the 
> security
> to a point where the libvirt maintainers would be willing to accept a patch 
> to directly
> expose an api to load memory from a file.

To characterize this as a libvirt issue is somewhat misleading. The reason 
why libvirt does not explicitly allow this is that, from discussions with 
the upstream QEMU/KVM developers, the recommendation/advice is that this is 
not a safe operation and should not be exposed to application developers.

The expectation is that the functionality in QEMU is only targeted at 
taking point-in-time snapshots & allowing rollback of a VM to those 
snapshots, not at creating clones of active VMs.

> 2. Security Concerns
> 
> 
> There are a number of security issues with loading state from another vm. 
> Here is a
> short list of things that need to be done just to make a cloned vm usable:
> 
> a) mac address needs to be recreated
> b) entropy pool needs to be reset
> c) host name must be reset
> d) host keys must be regenerated
> 
> There are others, and trying to clone a running application as well may 
> expose other
> sensitive data, especially if users are snapshotting vms and making them 
> public.
> 
> The only issue that I address on the driver side is the mac addresses. This 
> is the
> minimum that needs to be done just to be able to access the vm over the 
> network. This
> is implemented by unplugging all network devices before the snapshot and 
> plugging new
> network devices in on clone. This isn't the most friendly thing to guest 
> applications,
> but it seems like the safest option for the first version of this feature.

This is not really as safe as you portray. When restoring from the 
snapshot, the VM will initially be running with a virtual NIC that has a 
different MAC address.

Re: [openstack-dev] Code review study

2013-08-16 Thread Flavio Percoco

On 15/08/13 22:27 +0930, Christopher Yeoh wrote:

On Thu, Aug 15, 2013 at 9:54 PM, Daniel P. Berrange wrote:

   Commit message quality has improved somewhat since I first wrote &
   published that page, but there's definitely still scope to improve things
   further. What it really needs is for more reviewers to push back against
   badly written commit messages, to nudge authors into the habit of being
   more verbose in their commits.



Agreed. There is often "what" and sometimes "why", but not very often
"how" in commit messages.




Something I'd suggest to *reviewers* and *committers* is to never assume
everyone will understand the why just because you do. Not everything is
obvious to everyone, so if there's anything that's worth emphasizing, do
it (and request it if you're reviewing).
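
A purely invented example of the level of detail that helps (nothing below 
refers to a real change):

    Require explicit transactions in the image DB layer

    What: wrap each create/update in a single session.begin() block
    instead of relying on autocommit.
    Why: a failure partway through an update could previously leave
    orphaned rows, which the retry logic then duplicated.
    How: sessions are now created inside the DB API and never passed
    in by callers, so each public method owns exactly one transaction.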

FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [libvirt] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Richard W.M. Jones
On Fri, Aug 16, 2013 at 11:05:19AM +0100, Daniel P. Berrange wrote:
> On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
> > Hi Everyone,
> > 
> > I have been trying for some time to get the code for the live-snapshot 
> > blueprint[1]
> > in. Going through the review process for the rpc and interface code[2] was 
> > easy. I
> > suspect the api-extension code[3] will also be relatively trivial to get 
> > in. The
> > main concern is with the libvirt driver implementation[4]. I'd like to 
> > discuss the
> > concerns and see if we can make some progress.
> > 
> > Short Summary (tl;dr)
> > =
> > 
> > I propose we merge live-cloning as an experimental feature for Havana and 
> > have the
> > api extension disabled by default.
> > 
> > Overview
> > 
> >
> > First of all, let me express the value of live snapshotting. The
> > slowest part of the vm provisioning process is generally booting
> > of the OS.

Like Dan I'm dubious about this whole plan.  But this ^^ statement in
particular.  I would like to see hard data to back this up.

You should be able to boot an OS pretty quickly, and furthermore it's
(a) much safer for all the reasons Dan outlines, and (b) improvements
that you make to boot times help everyone.

[...]
> > 2. Security Concerns
> > 
> > 
> > There are a number of security issues with loading state from another vm. 
> > Here is a
> > short list of things that need to be done just to make a cloned vm usable:
> > 
> > a) mac address needs to be recreated
> > b) entropy pool needs to be reset
> > c) host name must be reset
> > d) host keys must be regenerated
> > 
> > There are others, and trying to clone a running application as well may 
> > expose other
> > sensitive data, especially if users are snapshotting vms and making them 
> > public.

Are we talking about cloning VMs that you already trust, or cloning
random VMs and allowing random other users to use them?  These would
lead to very different solutions.  In the first case, you only care
about correctness, not security.  In the second case, you care about
security as well as correctness.

I highly doubt the second case is possible because scrubbing the disk
is going to take far too long for any supposed time-saving to matter.

As Dan says, even the first case is dubious because it won't be correct.

> The libguestfs project provide tools to perform offline cloning of
> VM disk images.  Its virt-sysprep knows how to delete a lot (but by
> no means all possible) sensitive file data for common Linux &
> Windows OS. It still has to be combined with use of the
> virt-sparsify tool though, to ensure the deleted data is actually
> purged from the VM disk image as well as the filesystem, by
> releasing all unused VM disk sectors back to the host storage (and
> not all storage supports that).

Links to the tools that Dan mentions:

http://libguestfs.org/virt-sysprep.1.html
http://libguestfs.org/virt-sparsify.1.html

Note these tools can only be used on offline machines.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

2013-08-16 Thread Flavio Percoco

On 14/08/13 17:08 -0300, Sandy Walsh wrote:

At Eric's request in https://review.openstack.org/#/c/41979/ I'm
bringing this to the ML for feedback.

Currently, oslo-common rpc behaviour is to always ack() a message no
matter what.


Hey,

I don't think we should keep adding new features to Oslo's rpc; I'd
rather think about how this fits into oslo.messaging.


For billing purposes we can't afford to drop important notifications
(like *.exists). We only want to ack() if no errors are raised by the
consumer, otherwise we want to requeue the message.

Now, once we introduce this functionality, we will also need to support
.reject() semantics.

The use-case we've seen for this is:
1. grab notification
2. write to disk
3. do some processing on that notification, which raises an exception.
4. the event is requeued and steps 2-3 repeat very quickly. Lots of
duplicate records. In our case we've blown out our database.


Although I see some benefits from abstracting this, I'm not sure
whether we *really* need this in Oslo messaging. My main concern is
that acknowledgement is not supported by all back-ends, and this can
turn out to be a design flaw for apps depending on methods like ack()
/ reject().

Have you guys thought about re-sending the failed message on a
different topic / queue?

This is what Celery does to retry tasks on failures, for example.
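
Roughly, a kombu-based sketch of that pattern (the retry queue and handler 
names are illustrative, not an existing oslo API):

    from kombu import Connection, Exchange, Queue

    exchange = Exchange('nova', type='topic')
    retry_q = Queue('notifications.retry', exchange, 'notifications.retry')

    def handle(body):
        """Application-specific processing (stub)."""

    def on_notification(body, message):
        try:
            handle(body)
        except Exception:
            # park the failed event on a retry queue instead of requeueing
            # in place, so the main queue keeps draining and nothing is lost
            with Connection('amqp://') as conn:
                conn.Producer().publish(body,
                                        routing_key='notifications.retry',
                                        declare=[retry_q])
        message.ack()  # the original message is always acked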


FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Victor Sergeyev
Hello All.

Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
questions about the Oslo DB code - why it is so important to use it instead
of a custom implementation, and so on. As there were a lot of questions it
was really hard to answer them all in IRC, so we decided that the mailing
list is a better place for this discussion.

List of main questions:

1. What does the Oslo DB code include?
2. Why is it safe to replace a custom implementation with the Oslo DB code?
3. Why is the Oslo DB code better than a custom implementation?
4. Why won't the Oslo DB code slow down project development?
5. What are we actually going to do in Glance?
6. What is the current status?

Answers:

1. What does the Oslo DB code include?

Currently the Oslo code improves different aspects of DB handling:
-- Working with SQLAlchemy models, engine and session
-- Lots of tools for working with SQLAlchemy
-- Working with unique keys
-- A base test case for working with the database
-- Testing migrations against different backends
-- Syncing DB models with the actual schemas in the DB (adding a test that
they are equivalent)


2. Why is it safe to replace a custom implementation with the Oslo DB code?

The Oslo module, as a base OpenStack module, takes care of code quality.
Common code is usually more readable (most flake8 checks are enabled in
Oslo) and has better test coverage. It has also been exercised in different
use cases (including production) in other projects, so many bugs in the
Oslo code have already been fixed. So we can be confident that we are using
high-quality code.


3. Why is the Oslo DB code better than a custom implementation?

There are several arguments for the Oslo database code:

-- Common code collects useful features from different projects
Various database utilities, a common test class, a database migration
module and other features are already in the Oslo DB code. A patch that
automatically retries db.api queries when the DB connection is lost is on
review at the moment. If we use the Oslo DB code we don't need to worry
about porting these (and future) features to Glance - they will come to
all projects automatically once they land in Oslo.

-- Unified database handling across projects
As already said, it helps developers work with the database in the same
way in different projects. This is useful for developers who work with the
DB in several projects - they use the same base tools and get no surprises
from them.

-- It will reduce the time needed to run tests
Maybe a minor point, but it can also be important. We can remove some
tests for base DB classes (such as sessions, engines, etc.) and replace
direct DB access with mocked calls.


4. Why won't the Oslo DB code slow down project development?

The Oslo database code is already in projects such as Nova, Neutron,
Ceilometer and Ironic. AFAIK, development speed in these projects has not
decelerated (please correct me if I'm wrong). The database layer is
already improved and tested in the Oslo project, so we can concentrate on
project features. All features that land in the Oslo code will be
available in Glance, but if you want to add some specific feature to the
project *right now*, you can still do it in the project code.


5. What are we actually going to do in Glance?

-- Improve test coverage of DB API layer
We are going to increase the test coverage of the glance/db/sqlalchemy/api
module and fix any bugs found.

-- Run DB API tests on all backends
-- Use the Oslo migrations base test case to test migrations against
different backends
SQL backends differ in many ways; type casting is one example. SQLite lets
us store any value in a column of any type, MySQL will try to convert the
value to the required type, and PostgreSQL will raise an IntegrityError.
With this in place, we can be sure that all Glance DB migrations will run
correctly on all backends.
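
A tiny illustration of the casting difference (runnable as-is with the
Python stdlib):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE t (n INTEGER)')
    # SQLite happily stores a string in an INTEGER column...
    conn.execute("INSERT INTO t VALUES ('not a number')")
    print(conn.execute('SELECT n FROM t').fetchall())
    # [(u'not a number',)] - MySQL would coerce it, PostgreSQL would error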

-- Use Oslo code for SA models, engine and session
-- Use Oslo SA utils
Using common code for working with the database has already been discussed
and approved for all projects, so we are going to use the common code in
place of the Glance implementation.

-- Fix work with sessions and transactions
Our work items in Glance (see the sketch below):
- don't pass session instances to public DB methods
- use explicit transactions only when necessary
- fix incorrect usage of sessions throughout the DB-related code
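
A minimal sketch of the intended pattern (names are illustrative;
get_session comes from the common code, the underscore helpers are
hypothetical):

    def image_update(context, image_id, values):
        # the session is created and scoped here; callers never pass one in
        session = get_session()
        with session.begin():  # explicit transaction, only where needed
            image_ref = _image_get(context, image_id, session=session)
            image_ref.update(values)
            image_ref.save(session=session)
        return _image_to_dict(image_ref)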

-- Optimize methods
Once we have tests for all functions in the glance/db/sqlalchemy/api
module, it will be safe to refactor the API methods. That will make these
functions cleaner, more readable and faster.

The main ideas are:
- identify and remove unused methods
- consolidate duplicate methods when possible
- ensure SQLAlchemy objects are not leaking out of the API
- ensure related methods are grouped together and named consistently

-- Add missing unique constraints
We should add the missing unique constraints, based on the database
queries in the glance.db.sqlalchemy.api module. This will reduce data
duplication and is one more step towards normalizing the Glance database.

-- Sync model definitions with the actual DB schemas

Re: [openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

2013-08-16 Thread Sandy Walsh


On 08/16/2013 09:47 AM, Flavio Percoco wrote:
> On 14/08/13 17:08 -0300, Sandy Walsh wrote:
>> At Eric's request in https://review.openstack.org/#/c/41979/ I'm
>> bringing this to the ML for feedback.
>>
>> Currently, oslo-common rpc behaviour is to always ack() a message no
>> matter what.
>>
> Hey,
> 
> I don't think we should keep adding new features to Oslo's rpc, I'd
> rather think how this fits into oslo.messaging.

Read on ... I think we'll face the same issues in messaging.

There is an alternative, which was my first approach. In StackTach, we
wrote our own notification consumption layer, which dealt with the
ack()/requeue() stuff directly. But, understandably, this got pushback
when we attempted it in CM, as the opinion was that it belongs in oslo.
The argument makes sense ... code duplication, would only support amqp,
reinventing the wheel, etc. The motivation was the very discussion we're
having now :)

> 
>> For billing purposes we can't afford to drop important notifications
>> (like *.exists). We only want to ack() if no errors are raised by the
>> consumer, otherwise we want to requeue the message.
>>
>> Now, once we introduce this functionality, we will also need to support
>> .reject() semantics.
>>
>> The use-case we've seen for this is:
>> 1. grab notification
>> 2. write to disk
>> 3. do some processing on that notification, which raises an exception.
>> 4. the event is requeued and steps 2-3 repeat very quickly. Lots of
>> duplicate records. In our case we've blown out our database.
> 
> Although I see some benefits from abstracting this, I'm not sure
> whether we *really* need this in Oslo messaging. My main concern is
> that acknowledgement is not supported by all back-ends and this can
> turn out being a design flaw for apps depending on methods like ack()
> / reject().

From what I've been researching on zeromq, the consensus seems to be
"zeromq is very fast, but if you want it to be reliable you have to code
it all yourself."

We can't afford to drop billable events. That's the entire purpose of
having our notification system. So, I'm all ears for other suggestions.

> Have you guys thought about re-sending the failed message on a
> different topic / queue?

Pie/cake ... this is essentially requeue() :)

Like I mentioned above, it's understood that for reliability under
zeromq, impl_zeromq.py will need to handle ack/reject/requeue semantics
manually. When the time comes for CM to support ZMQ, I'm guessing we'll
have to be the ones to add that code.

Here's the salient point: For normal rpc, no one will ever see it or
have access to it. If people are calling join_consumer_pool(...,
ack_on_error=False) themselves, they have to assume all risk.

That's the only way for a developer to get this requeue()/reject()
behaviour.

-S

> This is what Celery does to retry tasks on failures, for example.
> 
> 
> FF
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Boris Pavlovic
Hi all,

We (OpenStack contributors) have done really great work around the DB code
in Grizzly and Havana: unifying it, putting all common parts into
oslo-incubator, fixing bugs, improving the handling of sqla exceptions,
providing unique keys, and using this code in different projects instead of
custom implementations. (Well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder, Ironic,
Ceilometer.

We have now finished this work for Glance:
https://review.openstack.org/#/c/36207/

We are now working on Heat and Keystone.

So almost all projects use this code (or are planning to use it).

It is probably the right time to start moving the oslo.db code out into a
separate lib.

We (Roman, Viktor and I) will be glad to help make the oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: https://review.openstack.org/#/c/42159/
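
For consuming projects it should end up looking roughly like this (a
sketch based on the draft layout above; the final module path may differ):

    # replaces the copy-pasted openstack/common/db/sqlalchemy modules
    from oslo.db.sqlalchemy import session as db_session

    get_engine = db_session.get_engine
    get_session = db_session.get_session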


Thoughts?


Best regards,
Boris Pavlovic
--
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Davanum Srinivas
Boris,

+1 to getting started on oslo.db

-- dims


On Fri, Aug 16, 2013 at 9:52 AM, Boris Pavlovic  wrote:

> Hi all,
>
> We (OpenStack contributors) done a really huge and great work around DB
> code in Grizzly and Havana to unify it, put all common parts into
> oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
> unique keys, and to use  this code in different projects instead of custom
> implementations. (well done!)
>
> oslo-incubator db code is already used by: Nova, Neutron, Cinder, Ironic,
> Ceilometer.
>
> In this moment we finished work around Glance:
> https://review.openstack.org/#/c/36207/
>
> And working around Heat and Keystone.
>
> So almost all projects use this code (or planing to use it)
>
> Probably it is the right time to start work around moving oslo.db code to
> separated lib.
>
> We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
>
> E.g. Here are two drafts:
> 1) oslo.db lib code: https://github.com/malor/oslo.db
> 2) And here is this lib in action: https://review.openstack.org/#/c/42159/
>
>
> Thoughts?
>
>
> Best regards,
> Boris Pavlovic
> --
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread David Ripton

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:


We (OpenStack contributors) done a really huge and great work around DB
code in Grizzly and Havana to unify it, put all common parts into
oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
unique keys, and to use  this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or planing to use it)

Probably it is the right time to start work around moving oslo.db code
to separated lib.

We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: https://review.openstack.org/#/c/42159/


Thoughts?


+1.  Having to manually paste code from oslo-incubator into other 
projects is error-prone.  Of course it's important to get the library 
versioning right and do releases, but that's a small cost imposed on 
just the oslo-db folks to make using this code easier for everyone else.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Michael Basnight
On Aug 16, 2013, at 6:52 AM, Boris Pavlovic  wrote:

> Hi all, 
> 
> We (OpenStack contributors) done a really huge and great work around DB code 
> in Grizzly and Havana to unify it, put all common parts into oslo-incubator, 
> fix bugs, improve handling of sqla exceptions, provide unique keys, and to 
> use  this code in different projects instead of custom implementations. (well 
> done!)
> 
> oslo-incubator db code is already used by: Nova, Neutron, Cinder, Ironic, 
> Ceilometer. 
> 
> In this moment we finished work around Glance: 
> https://review.openstack.org/#/c/36207/
> 
> And working around Heat and Keystone.
> 
> So almost all projects use this code (or planing to use it)
> 
> Probably it is the right time to start work around moving oslo.db code to 
> separated lib.
> 
> We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
> 
> E.g. Here are two drafts:
> 1) oslo.db lib code: https://github.com/malor/oslo.db
> 2) And here is this lib in action: https://review.openstack.org/#/c/42159/
> 
> 
> Thoughts? 
> 

Excellent. I'll file a blueprint for Trove today! We need to upgrade to this.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Shake Chen
+1

What about the keystone status in oslo?


On Fri, Aug 16, 2013 at 10:40 PM, David Ripton  wrote:

> On 08/16/2013 09:52 AM, Boris Pavlovic wrote:
>
>  We (OpenStack contributors) done a really huge and great work around DB
>> code in Grizzly and Havana to unify it, put all common parts into
>> oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
>> unique keys, and to use  this code in different projects instead of
>> custom implementations. (well done!)
>>
>> oslo-incubator db code is already used by: Nova, Neutron, Cinder,
>> Ironic, Ceilometer.
>>
>> In this moment we finished work around Glance:
>> https://review.openstack.org/#/c/36207/
>>
>> And working around Heat and Keystone.
>>
>> So almost all projects use this code (or planing to use it)
>>
>> Probably it is the right time to start work around moving oslo.db code
>> to separated lib.
>>
>> We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
>>
>> E.g. Here are two drafts:
>> 1) oslo.db lib code: https://github.com/malor/oslo.db
>> 2) And here is this lib in action: https://review.openstack.org/#/c/42159/
>>
>>
>> Thoughts?
>>
>
> +1.  Having to manually paste code from oslo-incubator into other projects
> is error-prone.  Of course it's important to get the library versioning
> right and do releases, but that's a small cost imposed on just the oslo-db
> folks to make using this code easier for everyone else.
>
> --
> David Ripton   Red Hat   drip...@redhat.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Shake Chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Lance D Bragstad

I believe there are reviews in Keystone for bringing this in:

https://review.openstack.org/#/c/38029/
https://review.openstack.org/#/c/38030/
https://blueprints.launchpad.net/keystone/+spec/use-common-oslo-db-code


Best Regards,

Lance Bragstad
Software Engineer - OpenStack
Cloud Solutions and OpenStack Development
T/L 553-5409, External 507-253-5409
ldbra...@us.ibm.com, Bld 015-2/C118



From:   Shake Chen 
To: OpenStack Development Mailing List
Date:   08/16/2013 09:54 AM
Subject: Re: [openstack-dev] Proposal oslo.db lib



+1

What about the keystone status in oslo?


On Fri, Aug 16, 2013 at 10:40 PM, David Ripton  wrote:
  On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

   We (OpenStack contributors) done a really huge and great work around DB
   code in Grizzly and Havana to unify it, put all common parts into
   oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
   unique keys, and to use  this code in different projects instead of
   custom implementations. (well done!)

   oslo-incubator db code is already used by: Nova, Neutron, Cinder,
   Ironic, Ceilometer.

   In this moment we finished work around Glance:
   https://review.openstack.org/#/c/36207/

   And working around Heat and Keystone.

   So almost all projects use this code (or planing to use it)

   Probably it is the right time to start work around moving oslo.db code
   to separated lib.

   We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

   E.g. Here are two drafts:
   1) oslo.db lib code: https://github.com/malor/oslo.db
   2) And here is this lib in action:
   https://review.openstack.org/#/c/42159/


   Thoughts?

  +1.  Having to manually paste code from oslo-incubator into other
  projects is error-prone.  Of course it's important to get the library
  versioning right and do releases, but that's a small cost imposed on just
  the oslo-db folks to make using this code easier for everyone else.

  --
  David Ripton   Red Hat   drip...@redhat.com


  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Shake Chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Doug Hellmann
I'd like to propose Alex Gaynor for core status on the requirements project.

Alex is a core Python and PyPy developer, has strong ties throughout the
wider Python community, and has been watching and reviewing requirements
changes for a little while now. I think it would be extremely helpful to
have him on the team.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Mark McClain
+1

mark

On Aug 16, 2013, at 11:04 AM, Doug Hellmann  wrote:

> I'd like to propose Alex Gaynor for core status on the requirements project.
> 
> Alex is a core Python and PyPy developer, has strong ties throughout the 
> wider Python community, and has been watching and reviewing requirements 
> changes for a little while now. I think it would be extremely helpful to have 
> him on the team.
> 
> Doug
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Savanna incubation intention

2013-08-16 Thread Sergey Lukjanov
Hi folks,

I’m glad to announce Savanna’s intention to apply for incubation during the 
Icehouse release. In this email I would like to provide an update on our 
current status and nearest plans, as well as start a conversation to solicit 
feedback on Savanna from the community.

Let’s start with the current state of the Savanna project. All our code and 
bugs/specs are hosted at OpenStack Gerrit and Launchpad respectively. Unit 
tests and all pep8/hacking checks run on OpenStack Jenkins, and we have 
integration tests running on our own Jenkins server for each patch set. We 
have great Sphinx-based docs published at readthedocs - 
http://savanna.rtfd.org - consisting of dev, admin and user guides and 
descriptions of the REST API, plugin SPI, etc. Savanna is integrated with 
Nova, Keystone, Glance, Cinder and Swift now, and we are already using 
diskimage-builder to create prebuilt images for Hadoop clusters.

We have an amazing team working on Savanna - about twenty engineers from 
Mirantis, Red Hat and Hortonworks (according to git author stats). We have 
been holding weekly IRC meetings for the last 6 months, discussing 
architectural questions there and on the openstack mailing lists as well. As 
for code reviews, we’ve established the same approach as other OpenStack 
projects: change requests cannot be merged without review from the main 
contributors for the corresponding component, and this ensures a high 
standard for all code that lands in master.

Currently we are actively working in two main directions - Elastic Data 
Processing (https://wiki.openstack.org/wiki/Savanna/EDP) and a scalable 
architecture. Our next major 0.3 release is planned for the October 
timeframe and will be based on the OpenStack Havana codebase. It will 
contain basic EDP functionality, the Savanna distributed design, Neutron 
support and, of course, an updated OpenStack Dashboard plugin with all the 
new features.

Let’s take a look at our future plans. We would like to integrate with other 
OpenStack components, such as Heat and Ceilometer, and to adjust our release 
cycle in Icehouse. Code hardening, a useful CLI implementation and enhanced 
EDP functionality are also things to be done and paid attention to.

So you are welcome to comment and leave your feedback on how to make Savanna 
better and become an integrated project.

Thank you!

P.S. Some links:
http://wiki.openstack.org/wiki/Savanna
http://wiki.openstack.org/wiki/Savanna/Roadmap
https://launchpad.net/savanna
https://savanna.readthedocs.org
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda
review stats: 
http://jenkins.savanna.mirantis.com/view/Infra/job/savanna-reviewstats/Savanna_Review_Stats/index.html

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Russell Bryant
On 08/16/2013 11:04 AM, Doug Hellmann wrote:
> I'd like to propose Alex Gaynor for core status on the requirements project.
> 
> Alex is a core Python and PyPy developer, has strong ties throughout the
> wider Python community, and has been watching and reviewing requirements
> changes for a little while now. I think it would be extremely helpful to
> have him on the team.

Sounds like a great addition to me.  +1 from me, fwiw

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Monty Taylor
+1

On 08/16/2013 11:04 AM, Doug Hellmann wrote:
> I'd like to propose Alex Gaynor for core status on the requirements project.
> 
> Alex is a core Python and PyPy developer, has strong ties throughout the
> wider Python community, and has been watching and reviewing requirements
> changes for a little while now. I think it would be extremely helpful to
> have him on the team.
> 
> Doug
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Help consuming trusts

2013-08-16 Thread Steven Hardy
Hi,

I'm looking for help, ideally some code or curl examples, figuring out why
I can't consume trusts in the manner specified in the documentation:

https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md

I've been working on getting Heat integrated with the trusts functionality,
and the first step was to add keystoneclient support:

https://review.openstack.org/#/c/39899/

All works fine in terms of the actual operations on the OS-TRUST path, I
can create, list, get, delete trusts with no issues.

However I'm struggling to actually *use* the trust, i.e. obtain a
trust-scoped token using the trust ID. I always seem to get the opaque
"Authorization failed. The request you have made requires authentication."
message, despite the authentication requests looking as per the API docs.

Are there any curl examples or test code I can refer to?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-16 Thread Christopher Armstrong
On Thu, Aug 15, 2013 at 6:39 PM, Randall Burt wrote:

>
> On Aug 15, 2013, at 6:20 PM, Angus Salkeld  wrote:
>
> > On 15/08/13 17:50 -0500, Christopher Armstrong wrote:
>
> >> 2. There should be a new custom-built API for doing exactly what the
> >> autoscaling service needs on an InstanceGroup, named something
> unashamedly
> >> specific -- like "instance-group-adjust".
> >>
> >> Pros: It'll do exactly what it needs to do for this use case; very
> little
> >> state management in autoscale API; it lets Heat do all the orchestration
> >> and only give very specific delegation to the external autoscale API.
> >>
> >> Cons: The API grows an additional method for a specific use case.
> >
> > I like this one above:
> > adjust(new_size, victim_list=['i1','i7'])
> >
> > So if you are reducing the new_size we look in the victim_list to
> > choose those first. This should cover Clint's use case as well.
> >
> > -Angus
>
> We could just support victim_list=[1, 7], since these groups are
> collections of identical
> resources. Simple indexing should be sufficient, I would think.
>
> Perhaps separating the stimulus from the actions to take would let us
> design/build toward different policy implementations. Initially, we could
> have a HeatScalingPolicy that works with the signals that a scaling group
> can handle. When/if AS becomes an API outside of Heat, we can implement a
> fairly simple NovaScalingPolicy that includes the args to pass to nova boot.
>
>

I don't agree with using indices. I'd rather use the actual resource IDs.
For one, indices can change out from under you. Also, figuring out the
index of the instance you want to kill is probably an additional step most
of the time you actually care about destroying specific instances.
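
In other words, something like this (a purely hypothetical client call;
the method name follows option 2 above):

    heat.instance_groups.adjust(group_id,
                                new_size=3,
                                victims=['<server-uuid-1>',
                                         '<server-uuid-7>'])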



> >> 3. the autoscaling API should update the "Size" Property of the
> >> InstanceGroup resource in the stack that it is placed in. This would
> >> require the ability to PATCH a specific piece of a template (an
> operation
> >> isomorphic to update-stack).
>
> I think a PATCH semantic for updates would be generally useful in terms of
> "quality of life" for API users. Not having to pass the complete state and
> param values for trivial updates would be quite nice regardless of its
> implications to AS.
>

Agreed.



-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Monty Taylor


On 08/16/2013 09:52 AM, Boris Pavlovic wrote:
> Hi all, 
> 
> We (OpenStack contributors) done a really huge and great work around DB
> code in Grizzly and Havana to unify it, put all common parts into
> oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
> unique keys, and to use  this code in different projects instead of
> custom implementations. (well done!)
> 
> oslo-incubator db code is already used by: Nova, Neutron, Cinder,
> Ironic, Ceilometer. 
> 
> In this moment we finished work around Glance: 
> https://review.openstack.org/#/c/36207/
> 
> And working around Heat and Keystone.
> 
> So almost all projects use this code (or planing to use it)
> 
> Probably it is the right time to start work around moving oslo.db code
> to separated lib.
> 
> We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
> 
> E.g. Here are two drafts:
> 1) oslo.db lib code: https://github.com/malor/oslo.db
> 2) And here is this lib in action:
> https://review.openstack.org/#/c/42159/
> 

+1

Great job Boris!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Monty Taylor


On 08/16/2013 09:31 AM, Victor Sergeyev wrote:
> Hello All.
> 
> Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
> questions about Oslo DB code, and why is it so important to use it
> instead of custom implementation and so on. As there were a lot of
> questions it was really hard to answer on all this questions in IRC. So
> we decided that mailing list is better place for such things.

There is another main point - which is that at the last summit, we talked
about various legit database things that need to be done to support CD
and rolling deploys. The list is not small, and it's a task that's
important. Needing to implement it in all of the projects separately is
kind of an issue, whereas if the projects are all using the database the
same way, then the database team can engineer the same mechanisms for
doing rolling schema changes, and then operators can have a consistent
expectation when they're running a cloud.

> List of main questions:
> 
> 1. What includes oslo DB code?  
> 2. Why is it safe to replace custom implementation by Oslo DB code? 
> 3. Why oslo DB code is better than custom implementation?
> 4. Why oslo DB code won’t slow up project development progress?
> 5. What we are going actually to do in Glance?
> 6. What is the current status?
> 
> Answers:
> 
> 1. What includes oslo DB code?
> 
> Currently Oslo code improves different aspects around DB:
> -- Work with SQLAlchemy models, engine and session
> -- Lot of tools for work with SQLAlchemy 
> -- Work with unique keys
> -- Base test case for work with database
> -- Test migrations against different backends
> -- Sync DB Models with actual schemas in DB (add test that they are
> equivalent)
> 
> 
> 2. Why is it safe to replace custom implementation by Oslo DB code? 
> 
> Oslo module, as base openstack module, takes care about code quality.
> Usually, common code more readable (most of flake8 checks enabled in
> Oslo) and have better test coverage.  Also it was tested in different
> use-cases (in production also) in an other projects so bugs in Oslo code
> were already fixed. So we can be sure, that we use high-quality code.
> 
> 
> 3. Why oslo DB code is better than custom implementation?
> 
> There are some arguments pro Oslo database code 
> 
> -- common code collects useful features from different projects
> Different utils, for work with database, common test class, module for
> database migration, and  other features are already in Oslo db code.
> Patch on automatic retry db.api query if db connection lost on review at
> the moment. If we use Oslo db code we should not care, how to port these
> (and others - in the future) features to Glance - it will came to all
> projects automaticly when it will came to Oslo. 
> 
> -- unified project work with database
> As it was already said,  It can help developers work with database in a
> same way in different projects. It’s useful if developer work with db in
> a few projects - he use same base things and got no surprises from them. 
> 
> -- it’s will reduce time for running tests.
> Maybe it’s minor feature, but it’s also can be important. We can removed
> some tests for base `DB` classes (such as session, engines, etc)  and
> replaced for work with DB to mock calls.
> 
> 
> 4. Why oslo DB code won’t slow up project development progress?
> 
> Oslo code for work with database already in such projects as Nova,
> Neutron, Celiometer and Ironic. AFAIK, these projects development speed
> doesn’t decelerated (please fix me, If I’m wrong). Work with database
> level already improved and tested in Oslo project, so we can concentrate
> on work with project features. All features, that already came to oslo
> code will be available in Glance, but if you want to add some specific
> feature to project *just now* you will be able to do it in project code.
> 
> 
> 5. What we are going actually to do in Glance?
> 
> -- Improve test coverage of DB API layer
> We are going to increase test coverage of glance/db/sqlalchemy/api
> module and fix bugs, if found. 
> 
> -- Run DB API tests on all backends
> -- Use Oslo migrations base test case for test migrations against
> different backends
> There are lot of different things in SQl backends. For example work with
> casting.
> In current SQLite we are able to store everything in column (with any
> type). Mysql will try to convert value to required type, and postgresql
> will raise IntegrityError. 
> If we will improve this feature, we will be sure, that all Glance DB
> migrations will run correctly on all backends.
> 
> -- Use Oslo code for SA models, engine and session
> -- Use Oslo SA utils
> Using common code for work with database was already discussed and
> approved for all projects. So we are going to implement common code for
> work with database instead of Glance implementation.
> 
> -- Fix work with session and transactions
> Our work items in Glance:
> - don't pass session instances to public DB methods
> - use explicit transactions only when necessary

Re: [openstack-dev] [keystone] Help consuming trusts

2013-08-16 Thread Steve Martinelli

Hi Steven,

You can look at the unit tests being run.
https://github.com/openstack/keystone/blob/master/keystone/tests/test_v3_auth.py#L1782

It looks like you need to provide the trustee username/password and the trust
id. Keep digging into 'build_authentication_request' to see how it's
structured; then it's just a call to /auth/tokens.
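
For reference, a minimal sketch of that token request (endpoint, names and
values are placeholders; the JSON shape follows the v3 OS-TRUST extension):

    import json
    import requests

    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "trustee-user",          # trustee username
                        "domain": {"id": "default"},
                        "password": "trustee-password",
                    },
                },
            },
            # Scoping to the trust is what produces a trust-scoped token.
            "scope": {"OS-TRUST:trust": {"id": "<trust-id>"}},
        },
    }

    resp = requests.post("http://localhost:5000/v3/auth/tokens",
                         data=json.dumps(body),
                         headers={"Content-Type": "application/json"})
    print(resp.status_code, resp.headers.get("X-Subject-Token"))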

Thanks,

_
Steve Martinelli | A4-317 @ IBM Toronto Software Lab
Software Developer - OpenStack
Phone: (905) 413-2851
E-Mail: steve...@ca.ibm.com



From:   Steven Hardy 
To: openstack-dev@lists.openstack.org,
Date:   08/16/2013 11:38 AM
Subject:[openstack-dev] [keystone] Help consuming trusts



Hi,

I'm looking for help, ideally some code or curl examples, figuring out why
I can't consume trusts in the manner specified in the documentation:

https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md


I've been working on getting Heat integrated with the trusts functionality,
and the first step was to add keystoneclient support:

https://review.openstack.org/#/c/39899/

All works fine in terms of the actual operations on the OS-TRUST path, I
can create, list, get, delete trusts with no issues.

However I'm struggling to actually *use* the trust, i.e. obtain a
trust-scoped token using the trust ID, I always seem to get the opaque
"Authorization failed. The request you have made requires authentication."
message, despite the requests on authentication looking as per the API
docs.

Are there any curl examples or test code I can refer to?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Julien Danjou
On Fri, Aug 16 2013, Doug Hellmann wrote:

> I'd like to propose Alex Gaynor for core status on the requirements project.
>
> Alex is a core Python and PyPy developer, has strong ties throughout the
> wider Python community, and has been watching and reviewing requirements
> changes for a little while now. I think it would be extremely helpful to
> have him on the team.

LGTM :)

-- 
Julien Danjou
/* Free Software hacker * freelance consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Julien Danjou
On Fri, Aug 16 2013, Boris Pavlovic wrote:

> Thoughts?

Way to go.

-- 
Julien Danjou
/* Free Software hacker * freelance consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Eric Windisch
On Fri, Aug 16, 2013 at 9:31 AM, Victor Sergeyev  wrote:
> Hello All.
>
> Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
> questions about the Oslo DB code: why it is so important to use it instead
> of a custom implementation, and so on. As there were a lot of questions it
> was really hard to answer them all in IRC, so we decided that the mailing
> list is a better place for such things.
>
> List of main questions:
>
> 1. What does the Oslo DB code include?
> 2. Why is it safe to replace a custom implementation with the Oslo DB code?

Just to head off these two really quick. The database code in Oslo as
initially submitted was actually based largely from that in Glance,
merging in some of the improvements made in Nova. There might have
been some divergence since then, but migrating over shouldn't be
terribly difficult. While it isn't necessary for Glance to switch
over, it would be somewhat ironic if it didn't.

The database code in Oslo primarily holds base models and various
things we can easily share, reuse, and improve across projects. I
suppose a big part of this is the session management, which has been
moved out of api.py and into its own session.py module. This session
management code is probably the main piece you'll need to evaluate:
decide whether it's worth bringing in, or whether Glance really has
such unique requirements that it needs to bother maintaining this
code on its own.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reminder: Oslo project meeting

2013-08-16 Thread Mark McLoughlin
On Tue, 2013-08-13 at 22:09 +0100, Mark McLoughlin wrote:
> Hi
> 
> We're having an IRC meeting on Friday to sync up again on the messaging
> work going on:
> 
>   https://wiki.openstack.org/wiki/Meetings/Oslo
>   https://etherpad.openstack.org/HavanaOsloMessaging
> 
> Feel free to add other topics to the wiki
> 
> See you on #openstack-meeting at 1400 UTC

Logs here:

http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-16 Thread Vishvananda Ishaya

On Aug 15, 2013, at 5:58 PM, Melanie Witt  wrote:

> On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:
> 
>> +1 from me as long as this wouldn't change anything for the EC2 API's 
>> security groups support, which I assume it won't.
> 
> Correct, it's unrelated to the ec2 api.
> 
> We discussed briefly in the nova meeting today and there was consensus that 
> removing the standalone associate/disassociate actions should happen.
> 
> Now the question is whether to keep the server create piece and not remove 
> the extension entirely. The concern is about a delay in the newly provisioned 
> instance being associated with the desired security groups. With the 
> extension, the instance gets the desired security groups before the instance 
> is active (I think). Without the extension, the client would receive the 
> active instance and then call neutron to associate it with the desired 
> security groups.
> 
> Would such a delay in associating with security groups be a problem?


It seems like getting around this would be as simple as:

a. Create the port in neutron.
b. Associate a security group with the port.
c. Boot the instance with the port.
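
A rough sketch of those steps with python-neutronclient and
python-novaclient (endpoint, credentials and IDs are placeholders):

    from neutronclient.v2_0 import client as neutron_client
    from novaclient.v1_1 import client as nova_client

    AUTH_URL = 'http://localhost:5000/v2.0'
    NETWORK_ID = '<net-uuid>'
    SECGROUP_ID = '<secgroup-uuid>'
    IMAGE_ID = '<image-uuid>'
    FLAVOR_ID = '1'

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo', auth_url=AUTH_URL)
    nova = nova_client.Client('demo', 'secret', 'demo', AUTH_URL)

    # a + b. Create the port with the security group attached up front.
    port = neutron.create_port(
        {'port': {'network_id': NETWORK_ID,
                  'security_groups': [SECGROUP_ID]}})

    # c. Boot the instance on the pre-built port, so the group applies
    # from the moment the instance has a network presence.
    server = nova.servers.create('demo-server', IMAGE_ID, FLAVOR_ID,
                                 nics=[{'port-id': port['port']['id']}])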

In general I'm a fan of doing all of the network creation and volume creation 
in neutron and cinder before booting the instance. Unfortunately I think this 
is pretty unfriendly to our users. One possibility is to move the smarts into 
the client side (i.e. have it talk to neutron and cinder), but I think that 
alienates all of the people using openstack who are not using python-novaclient 
or python-openstackclient.

Since we are still supporting v2 this is a possibility for the v3 api, but if 
you can't do basic operations in v3 without talking to multiple services on the 
client side I think it will prevent a lot of people from using it.

It's clear to me that autocreation needs to stick around for a while just to 
keep the apis usable. I can see the argument for pulling it from the v3 api, 
but it seems like at least the basics need to stick around for now.

Vish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Flavio Percoco

On 16/08/13 11:42 -0400, Monty Taylor wrote:



On 08/16/2013 09:31 AM, Victor Sergeyev wrote:

Hello All.

Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
questions about the Oslo DB code: why it is so important to use it
instead of a custom implementation, and so on. As there were a lot of
questions it was really hard to answer them all in IRC, so we decided
that the mailing list is a better place for such things.


There is another main point, which is that at the last summit we talked
about various legitimate database things that need to be done to support
CD and rolling deploys. The list is not small, and the task is important.
Needing to implement it in all of the projects separately is kind of an
issue, whereas if the projects are all using the database the same way,
then the database team can engineer the same mechanisms for doing rolling
schema changes, and operators can have a consistent expectation when
they're running a cloud.




Just to be clear, AFAIK, the concerns were around how / when to migrate
Glance and not about why we should share database code.



List of main questions:

1. What does the Oslo DB code include?
2. Why is it safe to replace a custom implementation with the Oslo DB code?
3. Why is Oslo DB code better than a custom implementation?
4. Why won't Oslo DB code slow down project development?
5. What are we actually going to do in Glance?
6. What is the current status?

Answers:

1. What does the Oslo DB code include?

Currently the Oslo code improves different aspects of database handling:
-- Working with SQLAlchemy models, engines and sessions
-- Lots of tools for working with SQLAlchemy
-- Working with unique keys
-- A base test case for working with the database
-- Testing migrations against different backends
-- Syncing DB models with the actual schemas in the DB (a test that they
are equivalent)


2. Why is it safe to replace a custom implementation with the Oslo DB code?

The Oslo module, as a base OpenStack module, takes care of code quality.
Common code is usually more readable (most flake8 checks are enabled in
Oslo) and has better test coverage. It has also been exercised in
different use cases (including production) in other projects, so bugs in
the Oslo code have already been fixed. So we can be confident that we are
using high-quality code.




This is the point I was most worried about - and I still am. The
migration to Oslo's DB code started a bit late in Glance and no code
has been merged yet. As for Glance, there still seems to be a lot of
work ahead on this matter.


That being said, thanks a lot for the email and for explaining all
those details.
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-16 Thread Zane Bitter

On 16/08/13 00:50, Christopher Armstrong wrote:

*Introduction and Requirements*

So there's kind of a perfect storm happening around autoscaling in Heat
right now. It's making it really hard to figure out how I should compose
this email. There are a lot of different requirements, a lot of
different cool ideas, and a lot of projects that want to take advantage
of autoscaling in one way or another: Trove, OpenShift, TripleO, just to
name a few...

I'll try to list the requirements from various people/projects that may
be relevant to autoscaling or scaling in general.

1. Some users want a service like Amazon's Auto Scaling or Rackspace's
Otter -- a simple API that doesn't really involve orchestration.
2. If such a API exists, it makes sense for Heat to take advantage of
its functionality instead of reimplementing it.


+1, obviously. But the other half of the story is that the API is likely
to be implemented using Heat on the back end, amongst other reasons because
that implementation already exists. (As you know, since you wrote it ;)


So, just as we will have an RDS resource in Heat that calls Trove, and 
Trove will use Heat for orchestration:


  user => [Heat =>] Trove => Heat => Nova

there will be a similar workflow for Autoscaling:

  user => [Heat =>] Autoscaling -> Heat => Nova

where the first, optional, Heat stack contains the RDS/Autoscaling 
resource and the backend Heat stack contains the actual Nova instance(s).


One difference might be that the Autoscaling -> Heat step need not 
happen via the public ReST API. Since both are part of the Heat project, 
I think it would also be OK to do this over RPC only.



3. If Heat integrates with that separate API, however, that API will
need two ways to do its work:


Wut?


1. native instance-launching functionality, for the "simple" use


This is just the simplest possible case of 3.2. Why would we maintain a 
completely different implementation?



2. a way to talk back to Heat to perform orchestration-aware scaling
operations.


[IRC discussions clarified this to mean scaling arbitrary resource 
types, rather than just Nova servers.]



4. There may be things that are different than AWS::EC2::Instance that
we would want to scale (I have personally been playing around with the
concept of a ResourceGroup, which would maintain a nested stack of
resources based on an arbitrary template snippet).
5. Some people would like to be able to perform manual operations on an
instance group -- such as Clint Byrum's recent example of "remove
instance 4 from resource group A".

Please chime in with your additional requirements if you have any! Trove
and TripleO people, I'm looking at you :-)


*TL;DR*

Point 3.2. above is the main point of this email: exactly how should the
autoscaling API talk back to Heat to tell it to add more instances? I
included the other points so that we keep them in mind while considering
a solution.

*Possible Solutions*

I have heard at least three possibilities so far:

1. the autoscaling API should maintain a full template of all the nodes
in the autoscaled nested stack, manipulate it locally when it wants to
add or remove instances, and post an update-stack to the nested-stack
associated with the InstanceGroup.


This is what I had been thinking.


Pros: It doesn't require any changes to Heat.

Cons: It puts a lot of burden of state management on the autoscale API,


All other APIs need to manage state too, I don't really have a problem 
with that. It already has to handle e.g. the cooldown state; your 
scaling strategy (uh, for the service) will be determined by that.



and it arguably spreads out the responsibility of "orchestration" to the
autoscale API.


Another line of argument would be that this is not true by definition ;)


Also arguable is that automated agents outside of Heat
shouldn't be managing an "internal" template, which are typically
developed by devops people and kept in version control.

2. There should be a new custom-built API for doing exactly what the
autoscaling service needs on an InstanceGroup, named something
unashamedly specific -- like "instance-group-adjust".


+1 to having a custom (RPC-only) API if it means forcing some state out 
of the autoscaling service.


-1 for it talking to an InstanceGroup - that just brings back all our 
old problems about having "resources" that don't have their own separate 
state and APIs, but just exist inside of Heat plugins. Those are the 
cause of all of the biggest design problems in Heat. They're the thing I 
want the Autoscaling API to get rid of. (Also, see below.)



Pros: It'll do exactly what it needs to do for this use case; very
little state management in autoscale API; it lets Heat do all the
orchestration and only give very specific delegation to the external
autoscale API.

Cons: The API grows an additional method for a specific use case.

3. the autoscaling API should update the "Size" Property of the
InstanceGroup resource in the stack that it i
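
[To ground option 1 above in code: a rough sketch of an autoscaling service
maintaining the nested stack's template itself and posting the result as a
stack update. The heatclient call and template layout are illustrative, not
an agreed design.]

    import copy

    def scale_up(heat, stack_id, template, instance_snippet, count):
        # Clone the instance definition, grow the template, and hand the
        # whole thing back to Heat as a single stack update.
        new_template = copy.deepcopy(template)
        resources = new_template['Resources']
        existing = [n for n in resources if n.startswith('inst')]
        for i in range(len(existing), len(existing) + count):
            resources['inst%d' % i] = copy.deepcopy(instance_snippet)
        heat.stacks.update(stack_id, template=new_template)
        return new_template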

Re: [openstack-dev] devstack exercise test failed at euca-register

2013-08-16 Thread XINYU ZHAO
bump.
any input is appreciated.


On Thu, Aug 15, 2013 at 5:04 PM, XINYU ZHAO  wrote:

> Updated every project to the latest, but each time I ran devstack the
> exercise test failed at the same place, bundle.sh.
> Any hints?
>
> In console.log
>
> Uploaded image as testbucket/bundle.img.manifest.xml
> ++ euca-register testbucket/bundle.img.manifest.xml
> ++ cut -f2
> + AMI='S3ResponseError: Unknown error occured.'
> + die_if_not_set 57 AMI 'Failure registering testbucket/bundle.img'
> + local exitcode=0
> ++ set +o
> ++ grep xtrace
> + FXTRACE='set -o xtrace'
> + set +o xtrace
> + timeout 15 sh -c 'while euca-describe-images | grep S3ResponseError: 
> Unknown error occured. | grep -q available; do sleep 1; done'
> grep: Unknown: No such file or directory
> grep: error: No such file or directory
> grep: occured.: No such file or directory
> close failed in file object destructor:
> sys.excepthook is missing
> lost sys.stderr
> + euca-deregister S3ResponseError: Unknown error occured.
> Only 1 argument (image_id) permitted
> + die 65 'Failure deregistering S3ResponseError: Unknown error occured.'
> + local exitcode=1
> + set +o xtrace
> [Call Trace]
> /opt/stack/new/devstack/exercises/bundle.sh:65:die
> [ERROR] /opt/stack/new/devstack/exercises/bundle.sh:65 Failure deregistering 
> S3ResponseError: Unknown error occured.
>
>
>
> Here is what recorded in n-api log.
>
> 2013-08-15 15:44:20.331 27003 DEBUG nova.utils [-] Reloading cached file 
> /etc/nova/policy.json read_cached_file /opt/stack/new/nova/nova/utils.py:814
> 2013-08-15 15:44:20.363 DEBUG nova.api.ec2 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] action: RegisterImage 
> __call__ /opt/stack/new/nova/nova/api/ec2/__init__.py:325
> 2013-08-15 15:44:20.364 DEBUG nova.api.ec2 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg: Architecture
>  val: i386 __call__ /opt/stack/new/nova/nova/api/ec2/__init__.py:328
> 2013-08-15 15:44:20.364 DEBUG nova.api.ec2 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg: ImageLocation   
>  val: testbucket/bundle.img.manifest.xml __call__ 
> /opt/stack/new/nova/nova/api/ec2/__init__.py:328
> 2013-08-15 15:44:20.370 CRITICAL nova.api.ec2 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Unexpected 
> S3ResponseError raised
> 2013-08-15 15:44:20.370 CRITICAL nova.api.ec2 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Environment: 
> {"CONTENT_TYPE": "application/x-www-form-urlencoded; charset=UTF-8", 
> "SCRIPT_NAME": "/services/Cloud", "REQUEST_METHOD": "POST", "HTTP_HOST": 
> "127.0.0.1:8773", "PATH_INFO": "/", "SERVER_PROTOCOL": "HTTP/1.0", 
> "HTTP_USER_AGENT": "Boto/2.10.0 (linux2)", "RAW_PATH_INFO": 
> "/services/Cloud/", "REMOTE_ADDR": "127.0.0.1", "REMOTE_PORT": "44294", 
> "wsgi.url_scheme": "http", "SERVER_NAME": "127.0.0.1", "SERVER_PORT": "8773", 
> "GATEWAY_INTERFACE": "CGI/1.1", "HTTP_ACCEPT_ENCODING": "identity"}
> 2013-08-15 15:44:20.371 DEBUG nova.api.ec2.faults 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] EC2 error response: 
> S3ResponseError: Unknown error occured. ec2_error_response 
> /opt/stack/new/nova/nova/api/ec2/faults.py:31
> 2013-08-15 15:44:20.371 INFO nova.api.ec2 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] 0.109800s 127.0.0.1 POST 
> /services/Cloud/ CloudController:RegisterImage 400 [Boto/2.10.0 (linux2)] 
> application/x-www-form-urlencoded text/xml
> 2013-08-15 15:44:20.379 INFO nova.ec2.wsgi.server 
> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] 127.0.0.1 "POST 
> /services/Cloud/ HTTP/1.1" status: 400 len: 317 time: 0.1177399
>
>
> execute manually on the machine:
>
> euca-register testbucket/bundle.img.manifest.xml --debug
> 2013-08-15 17:00:19,446 euca2ools [DEBUG]:Using access key provided by client.
> 2013-08-15 17:00:19,446 euca2ools [DEBUG]:Using secret key provided by client.
> 2013-08-15 17:00:19,446 euca2ools [DEBUG]:Method: POST
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:Path: /services/Cloud/
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:Data:
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:Headers: {}
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:Host: 127.0.0.1:8773
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:Params: {'Action': 'RegisterImage', 
> 'Version': '2009-11-30', 'Architecture': 'i386', 'ImageLocation': 
> 'testbucket/bundle.img.manifest.xml'}
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:establishing HTTP connection: 
> kwargs={'timeout': 70}
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:Token: None
> 2013-08-15 17:00:19,447 euca2ools [DEBUG]:using _calc_signature_2
> 2013-08-15 17:00:19,448 euca2ools [DEBUG]:query string: 
> AWSAccessKeyId=4b14f2d81b9045fdb3a0c989d283ebbe&Action=RegisterImage&Architecture=i386&ImageLocation=testbucket%2Fbundle.img.manifest.xml&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-08-16T00%3A00%3A19Z&Version=2009-11-30
> 2013-08-15 17:00:19,448 euca2ools [DEBUG]:string_to_sign: POST127.0.0.1:8773
> /services/Cloud/
> AWSAccessKeyId=4b14f2d8

Re: [openstack-dev] Code review study

2013-08-16 Thread Maru Newby

On Aug 16, 2013, at 2:12 AM, Robert Collins  wrote:

> On 16 August 2013 20:15, Maru Newby  wrote:
> 
>>> This pattern has one slight issue, which is:
>>> 
>>>  • Do not assume the reviewer has access to external web services/site.
>>> In 6 months time when someone is on a train/plane/coach/beach/pub 
>>> troubleshooting a problem & browsing GIT history, there is no guarantee 
>>> they will have access to the online bug tracker, or online blueprint 
>>> documents. The great step forward with distributed SCM is that you no 
>>> longer need to be "online" to have access to all information about the code 
>>> repository. The commit message should be totally self-contained, to 
>>> maintain that benefit.
>> 
>> I'm not sure I agree with this.  It can't be true in all cases, so it can 
>> hardly be considered a rule.  A guideline, maybe - something to strive for.  
>> But not all artifacts of the development process are amenable to being 
>> stuffed into code or the commits associated with them.  A dvcs is great and 
>> all, but unless one is working in a silo, online resources are all but 
>> mandatory.
> 
> In a very strict sense you're right, but consider that for anyone
> doing fast iterative development the need to go hit a website is a
> huge slowdown : at least in most of the world :).

You're suggesting that it's possible to do _fast_ iterative development on a 
distributed system of immense and largely undocumented complexity (like 
openstack)?  I'd like to be working on the code you're working on!  ;) 


m.

> 
> So - while I agree that it's something to strive for, I think we
> should invert it and say 'not having everything in the repo is
> something we should permit occasional exceptions to'.
> 
> -Rob
> 
> -- 
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Jay Pipes

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

Hi all,

We (OpenStack contributors) have done really huge and great work around
the DB code in Grizzly and Havana to unify it, put all the common parts
into oslo-incubator, fix bugs, improve handling of SQLA exceptions,
provide unique keys, and use this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or are planning to use it)

Probably it is the right time to start working on moving the oslo.db code
to a separate lib.

We (Roman, Viktor and I) will be glad to help make the oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: https://review.openstack.org/#/c/42159/


Thoughts?


++

Are you going to create a separate Launchpad project for the library and 
track bugs against it separately? Or are you going to use the oslo 
project in Launchpad for that?


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Vishvananda Ishaya
On Fri, Aug 16, 2013 at 3:05 AM, Daniel P. Berrange wrote:

> On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
> > Hi Everyone,
> >
> > I have been trying for some time to get the code for the live-snapshot
> blueprint[1]
> > in. Going through the review process for the rpc and interface code[2]
> was easy. I
> > suspect the api-extension code[3] will also be relatively trivial to get
> in. The
> > main concern is with the libvirt driver implementation[4]. I'd like to
> discuss the
> > concerns and see if we can make some progress.
> >
> > Short Summary (tl;dr)
> > =
> >
> > I propose we merge live-cloning as an experimental feature for Havana
> > and have the api extension disabled by default.
> >
> > Overview
> > 
> >
> > First of all, let me express the value of live snapshotting. The
> > slowest part of the vm provisioning process is generally booting of
> > the OS. The advantage of live-snapshotting is that it allows the
> > possibility of bringing up application servers while skipping the
> > overhead of vm (and application) startup.
>
> For Linux at least I think bootup time is a problem that is being solved
> by the
> distros. It is possible to boot up many modern Linux distros in a couple
> of seconds
> even in physical hardware - VMs can be even faster since they don't have
> such stupid
> BIOS to worry about & have a restricted set of possible hardware. This is
> on a par
> with, or better than, the overheads imposed by Nova itself in the boot up
> process.
>
> Windows may be a different story, but I've not used it in years so don't
> know what
> its boot performance is like.
>
> > I recognize that this capability comes with some security concerns, so I
> don't expect
> > this feature to go in and be ready to for use in production right away.
> Similarly,
> > containers have a lot of the same benefit, but have had their own
> security issues
> > which are gradually being resolved. My hope is that getting this feature
> in would
> > allow people to start experimenting with live-booting so that we could
> uncover some
> > of these security issues.
> >
> > There are two specific concerns that have been raised regarding my
> patch. The first
> > concern is related to my use of libvirt. The second concern is related
> to the security
> > issues above. Let me address them separately.
> >
> > 1. Libvirt Issues
> > =
> >
> > The only feature I require from the hypervisor is to load
> memory/processor state for
> > a vm from a file. Qemu supports this directly. The only way that libvirt
> exposes this
> > functionality is via its restore command which is specifically for
> restoring the
> > previous state of an existing vm. "Cloning", or restoring the memory
> state of a
> > cloned vm is considered unsafe (which I will address in the second
> point, below).
> >
> > The result of the limited api is that I must include some hacks to make
> the restore
> > command actually allow me to restore the state of the new vm. I
> recognize that this
> > is using an undocumented libvirt api and isn't the ideal solution, but
> it seemed
> > "better" then avoiding libvirt and talking directly to qemu.
> >
> > This is obviously not ideal. It is my hope that this 0.1 version of the
> feature will
> > allow us to iteratively improve the live-snapshot/clone proccess and get
> the security
> > to a point where the libvirt maintainers would be willing to accept a
> patch to directly
> > expose an api to load memory from a file.
>
> To characterize this as a libvirt issue is somewhat misleading. The reason
> why libvirt
> does not explicitly allow this, is that from discussions with the upstream
> QEMU/KVM
> developers, the recommendation/advise that this is not a safe operation
> and should not
> be exposed to application developers.
>
> The expectation is that the functionality in QEMU is only targetted for
> taking point in
> time snapshots & allowing rollback of a VM to those snapshots, not
> creating clones of
> active VMs.
>

Thanks for the clarification here. I wasn't aware that this requirement
came from qemu
upstream.


>
> > 2. Security Concerns
> > 
> >
> > There are a number of security issues with loading state from another
> vm. Here is a
> > short list of things that need to be done just to make a cloned vm
> usable:
> >
> > a) mac address needs to be recreated
> > b) entropy pool needs to be reset
> > c) host name must be reset
> > d) host keys must be regenerated
> >
> > There are others, and trying to clone a running application as well may
> expose other
> > sensitive data, especially if users are snaphsoting vms and making them
> public.
> >
> > The only issue that I address on the driver side is the mac addresses.
> This is the
> > minimum that needs to be done just to be able to access the vm over the
> network. This
> > is implemented by unplugging all network devices before the snapshot and
> plugging new
> > networ
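
[For reference, the libvirt save/restore pair under discussion looks like
this; the domain name and paths are placeholders, and this only shows the
shape of the calls.]

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # save() dumps the guest's memory/CPU state to a file and stops the
    # domain...
    dom.save('/var/lib/nova/instances/instance-00000001/state.save')

    # ...and restore() is defined to bring the *same* domain back from that
    # file. There is no supported call for attaching the saved state to a
    # different (cloned) domain, which is the gap the patch works around.
    conn.restore('/var/lib/nova/instances/instance-00000001/state.save')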

Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-16 Thread Ben Nemec

On 2013-08-14 16:10, Matthew Treinish wrote:

On Wed, Aug 14, 2013 at 11:05:35AM -0500, Ben Nemec wrote:

On 2013-08-13 16:39, Clark Boylan wrote:
>On Tue, Aug 13, 2013 at 1:25 PM, Matthew Treinish
> wrote:
>>
>>Hi everyone,
>>
>>So for the past month or so I've been working on getting tempest
>>to work stably
>>with testr in parallel. As part of this you may have noticed the
>>testr-full
>>jobs that get run on the zuul check queue. I was using that job
>>to debug some
>>of the more obvious race conditions and stability issues with
>>running tempest
>>in parallel. After a bunch of fixes to tempest and finding some
>>real bugs in
>>some of the projects things seem to have smoothed out.
>>
>>So I pushed the testr-full run to the gate queue earlier today.
>>I'll be keeping
>>track of the success rate of this job vs the serial job and use
>>this as the
>>determining factor before we push this live to be the default
>>for all tempest
>>runs. So assuming that the success rate matches up well enough
>>with serial job
>>on the gate queue then I will push out the change that will
>>migrate all the
>>voting jobs to run in parallel hopefully either Friday afternoon
>>or early next
>>week. Also, if anyone has any input on what threshold they feel
>>is good enough
>>for this I'd welcome any input on that. For example, do we want
>>to ensure
>>a >= 1:1 match for job success? Or would something like 90% as
>>stable as the
>>serial job be good enough considering the speed advantage. (The
>>parallel runs
>>take about half as much time as a full serial run, the parallel
>>job normally
>>finishes in ~25-30min) Since this affects almost every project I
>>don't want to
>>define this threshold without input from everyone.
>>
>>After there is some more data for the gate queue's parallel job
>>I'll have some
>>pretty graphite graphs that I can share comparing the success
>>trends between
>>the parallel and serial jobs.
>>
>>So at this point we're in the home stretch and I'm asking for
>>everyone's help
>>in getting this merged. So, if everyone who is reviewing and
>>pushing commits
>>could watch the results from these non-voting jobs and if things
>>fail on the
>>parallel job but not the serial job please investigate the
>>failure and open a
>>bug if necessary. If it turns out to be a bug in tempest please
>>link it against
>>this blueprint:
>>
>>https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest
>>
>>so that I'll give it the attention it deserves. I'd hate to get
>>this close to
>>getting this merged and have a bit of racy code get merged at
>>the last second
>>and block us for another week or two.
>>
>>I feel that we need to get this in before the H3 rush starts up
>>as it will help
>>everyone get through the extra review load faster.
>>
>Getting this in before the H3 rush would be very helpful. When we made
>the switch with Nova's unittests we fixed as many of the test bugs
>that we could find, merged the change to switch the test runner, then
>treated all failures as very high priority bugs that received
>immediate attention. Getting this in before H3 will give everyone a
>little more time to debug any potential new issues exposed by Jenkins
>or people running the tests locally.
>
>I think we should be bold here and merge this as soon as we have good
>numbers that indicate the trend is for these tests to pass. Graphite
>can give us the pass to fail ratios over time, as long as these trends
>are similar for both the old nosetest jobs and the new testr job I say
>we go for it. (Disclaimer: most of the projecst I work on are not
>affected by the tempest jobs; however, I am often called upon to help
>sort out issues in the gate).

I'm inclined to agree.  It's not as if we don't have transient
failures now, and if we're looking at a 50% speedup in
recheck/verify times then as long as the new version isn't
significantly less stable it should be a net improvement.

Of course, without hard numbers we're kind of discussing in a vacuum
here.



I also would like to get this in sooner rather than later and fix the
bugs as they come in. But, I'm wary of doing this because there isn't a
proven success history yet. No one likes gate resets, and I've only been
running it on the gate queue for a day now.

So here is the graphite graph that I'm using to watch parallel vs serial
in the gate queue:
https://tinyurl.com/pdfz93l


Okay, so what are the y-axis units on this?  Because just guessing I 
would say that it's percentage of failing runs, in which case it looks 
like we're already within the 95% as accurate range (it never dips below 
-.05).  Am I reading it right?




On that graph the blue and yellow show the number of jobs that
succeeded, grouped into per-hour buckets (yellow being parallel and blue
serial).

Then the red line shows failures: a horizontal bar means that there is
no difference in the number of failures between serial and parallel.
When it dips negative it is showing a failure in paral

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Ben Nemec

On 2013-08-16 11:58, Jay Pipes wrote:

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

Hi all,

We (OpenStack contributors) have done really huge and great work around
the DB code in Grizzly and Havana to unify it, put all the common parts
into oslo-incubator, fix bugs, improve handling of SQLA exceptions,
provide unique keys, and use this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or are planning to use it)

Probably it is the right time to start working on moving the oslo.db code
to a separate lib.

We (Roman, Viktor and I) will be glad to help make the oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: 
https://review.openstack.org/#/c/42159/



Thoughts?


++

Are you going to create a separate Launchpad project for the library
and track bugs against it separately? Or are you going to use the oslo
project in Launchpad for that?


At the moment all of the oslo.* projects are just grouped under the 
overall Oslo project in LP.  Unless there's a reason to do otherwise I 
would expect that to be true of oslo.db too.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] as-update-policy implementation details

2013-08-16 Thread Zane Bitter

On 15/08/13 19:14, Chan, Winson C wrote:

I updated the implementation section of
https://wiki.openstack.org/wiki/Heat/Blueprints/as-update-policy regarding
instance naming to support UpdatePolicy. In the case of a
LaunchConfiguration change, all the instances need to be replaced, and to
support MinInstancesInService, handle_update should create new instances
before deleting old ones, in batches of at most MaxBatchSize (e.g., a group
capacity of 2 with MaxBatchSize=2 and MinInstancesInService=2). Please
review, as I may not understand the original motivation for the existing
instance-naming scheme. Thanks.


Yeah, I don't think the naming is that important any more. Note that 
physical_resource_name() (i.e. the name used in Nova) now includes a 
randomised component on the end (stackname-resourcename-uniqueid).


So they'll probably look a bit like:

MyStack-MyASGroup--MyASGroup-1-

because the instances are now resources inside a nested stack (whose 
name is of the same form).


If we were still subclassing Instance in the autoscaling code to
override other stuff, I'd suggest overriding physical_resource_name() to
return something like:


MyStack-MyASGroup-

(i.e. forget about numbering instances at all), but we're not 
subclassing any more, so I'm not sure if it's worth it.
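
For the record, that override would look something like this (a sketch
only - the subclass and the parent_group attribute are invented, and as
noted we're not subclassing any more):

    import uuid

    from heat.engine.resources import instance

    class GroupedInstance(instance.Instance):
        # Hypothetical subclass; the current autoscaling code doesn't
        # do this.

        def physical_resource_name(self):
            # Drop per-instance numbering entirely: stack name, group
            # name, short random suffix. `self.parent_group` stands in
            # for however the group name would actually be obtained.
            return '%s-%s-%s' % (self.stack.name,
                                 self.parent_group,
                                 uuid.uuid4().hex[:12])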


cheers,
Zane.
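
[Winson's batching rules above - create before delete, at most MaxBatchSize
at a time, never dropping below MinInstancesInService - can be captured in
a small standalone sketch of the semantics; this is not Heat code:]

    def replacement_batches(capacity, max_batch_size, min_in_service):
        # Yield (action, count) steps that replace `capacity` instances.
        # Creating each batch before deleting its predecessors means the
        # number in service never drops below `capacity`, so any
        # min_in_service up to the group capacity is honoured.
        assert min_in_service <= capacity
        replaced = 0
        while replaced < capacity:
            batch = min(max_batch_size, capacity - replaced)
            yield ('create', batch)   # bring up the new instances first
            yield ('delete', batch)   # then retire the ones they replace
            replaced += batch

    # The example from the message: capacity 2, MaxBatchSize=2,
    # MinInstancesInService=2 -> [('create', 2), ('delete', 2)]
    print(list(replacement_batches(2, 2, 2)))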

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Andres Lagar-Cavilla
> On Fri, Aug 16, 2013 at 11:05:19AM +0100, Daniel P. Berrange wrote:
>> On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
>>> Hi Everyone,
>>> 
>>> I have been trying for some time to get the code for the live-snapshot 
>>> blueprint[1]
>>> in. Going through the review process for the rpc and interface code[2] was 
>>> easy. I
>>> suspect the api-extension code[3] will also be relatively trivial to get 
>>> in. The
>>> main concern is with the libvirt driver implementation[4]. I'd like to 
>>> discuss the
>>> concerns and see if we can make some progress.
>>> 
>>> Short Summary (tl;dr)
>>> =
>>> 
>>> I propose we merge live-cloning as an experimental feature for Havana and 
>>> have the
>>> api extension disabled by default.
>>> 
>>> Overview
>>> 
>>> 
>>> First of all, let me express the value of live snapshotting. The
>>> slowest part of the vm provisioning process is generally booting
>>> of the OS.
> 
> Like Dan I'm dubious about this whole plan.  But this ^^ statement in
> particular.  I would like to see hard data to back this up.

What we need to keep in mind is that "boot" is a small part of the picture, at 
least "boot" as commonly referred to in Linux.

Consider a WebSphere-like Java bundle of code. These things take a while to
load. JIT-ed methods provide a tremendous performance boost. Never mind if
the server constructs secondary indices to perform fast lookups of data.

That is just Linux. Windows is well known for pounding storage fabrics with 
thousands of small reads during boot storms. Certainly a Windows boot sequence 
has baked in a lot of service startup sequences that prime a lot of memory 
content for performance objectives.

Boot here means "ready to rock-n-roll", not "Cirros is up."

We have live deployments that are based on bypassing the entire *application 
startup* sequence and have a server ready to provide high-performance responses 
to queries once spawned from a live saved image.


> 
> You should be able to boot an OS pretty quickly, and furthermore it's
> (a) much safer for all the reasons Dan outlines, and (b) improvements
> that you make to boot times help everyone.
> 
> [...]
>>> 2. Security Concerns
>>> 
>>> 
>>> There are a number of security issues with loading state from another vm. 
>>> Here is a
>>> short list of things that need to be done just to make a cloned vm usable:
>>> 
>>> a) mac address needs to be recreated
>>> b) entropy pool needs to be reset
>>> c) host name must be reset
>>> d) host keys must be regenerated
>>> 
>>> There are others, and trying to clone a running application as well may 
>>> expose other
>>> sensitive data, especially if users are snaphsoting vms and making them 
>>> public.
> 
> Are we talking about cloning VMs that you already trust, or cloning
> random VMs and allowing random other users to use them?  These would
> lead to very different solutions.  In the first case, you only care
> about correctness, not security.  In the second case, you care about
> security as well as correctness.

Case number one.

The correctness issues are a hard problem, and a particularly hard one in 
Windows, but it is pragmatically solvable.

For a common scenario in Linux, renewing dhcp leases and leveling your entropy 
pool are what you need.

> 
> I highly doubt the second case is possible because scrubbing the disk
> is going to take far too long for any supposed time-saving to matter.

That would be very counter-productive, so yes, focusing on the first case.
> 
> As Dan says, even the first case is dubious because it won't be correct.
> 
>> The libguestfs project provide tools to perform offline cloning of
>> VM disk images.  Its virt-sysprep knows how to delete alot (but by
>> no means all possible) sensitive file data for common Linux &
>> Windows OS. It still has to be combined with use of the
>> virt-sparsify tool though, to ensure the deleted data is actually
>> purged from the VM disk image as well as the filesystem, by
>> releasing all unused VM disk sectors back to the host storage (and
>> not all storage supports that).
> 
> Links to the tools that Dan mentions:
> 
> http://libguestfs.org/virt-sysprep.1.html
> http://libguestfs.org/virt-sparsify.1.html

Virt-sparsify is not strictly relevant here. The disk side of live images is 
carried out with qcow2.

Virt-sysprep is great work and highly relevant.

But virt-sysprep allows us to see the argument in a different light. Have you 
noticed nova does not run virt-sysprep before booting an ephemeral instance 
from an image? (AFAIK, could be wrong, not even regenerating host ssh keys is 
part of the assured workflow). Furthermore, one can create arbitrary (cold, 
non-live) images at any time from live instances.

This isn't necessarily wrong. It underpins massive deployments, it 
pragmatically adds value. The fundamental semantics at play with live-instances 
are the same: know what you are doing, ephemeral instances, 

[openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Maru Newby
Neutron has been in and out of the gate for the better part of the past month, 
and it didn't slow the pace of development one bit.  Most Neutron developers 
kept on working as if nothing was wrong, blithely merging changes with no 
guarantees that they weren't introducing new breakage.  New bugs were indeed 
merged, greatly increasing the time and effort required to get Neutron back in 
the gate.  I don't think this is sustainable, and I'd like to make a suggestion 
for how to minimize the impact of gate breakage.

For the record, I don't think consistent gate breakage in one project should be 
allowed to hold up the development of other projects.  The current approach of 
skipping tests or otherwise making a given job non-voting for innocent projects 
should continue.  It is arguably worth taking the risk of relaxing gating for 
those innocent projects rather than halting development unnecessarily.

However, I don't think it is a good idea to relax a broken gate for the 
offending project.  So if a broken job/test is clearly Neutron related, it 
should continue to gate Neutron, effectively preventing merges until the 
problem is fixed.  This would both raise the visibility of breakage beyond the 
person responsible for fixing it, and prevent additional breakage from slipping 
past were the gating to be relaxed.

Thoughts?


m.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Alex Gaynor
I'd strongly agree with that, a project must always be gated by any tests
for it, even if they don't gate for other projects. I'd also argue that any
time there's a non-gating test (for any project) it needs a formal
explanation of why it's not gating yet, what the plan to get it to gating
is, and on what timeframe it's expected to be.

Alex


On Fri, Aug 16, 2013 at 11:25 AM, Maru Newby  wrote:

> Neutron has been in and out of the gate for the better part of the past
> month, and it didn't slow the pace of development one bit.  Most Neutron
> developers kept on working as if nothing was wrong, blithely merging
> changes with no guarantees that they weren't introducing new breakage.  New
> bugs were indeed merged, greatly increasing the time and effort required to
> get Neutron back in the gate.  I don't think this is sustainable, and I'd
> like to make a suggestion for how to minimize the impact of gate breakage.
>
> For the record, I don't think consistent gate breakage in one project
> should be allowed to hold up the development of other projects.  The
> current approach of skipping tests or otherwise making a given job
> non-voting for innocent projects should continue.  It is arguably worth
> taking the risk of relaxing gating for those innocent projects rather than
> halting development unnecessarily.
>
> However, I don't think it is a good idea to relax a broken gate for the
> offending project.  So if a broken job/test is clearly Neutron related, it
> should continue to gate Neutron, effectively preventing merges until the
> problem is fixed.  This would both raise the visibility of breakage beyond
> the person responsible for fixing it, and prevent additional breakage from
> slipping past were the gating to be relaxed.
>
> Thoughts?
>
>
> m.
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-16 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2013-08-16 09:36:23 -0700:
> On 16/08/13 00:50, Christopher Armstrong wrote:
> > *Introduction and Requirements*
> >
> > So there's kind of a perfect storm happening around autoscaling in Heat
> > right now. It's making it really hard to figure out how I should compose
> > this email. There are a lot of different requirements, a lot of
> > different cool ideas, and a lot of projects that want to take advantage
> > of autoscaling in one way or another: Trove, OpenShift, TripleO, just to
> > name a few...
> >
> > I'll try to list the requirements from various people/projects that may
> > be relevant to autoscaling or scaling in general.
> >
> > 1. Some users want a service like Amazon's Auto Scaling or Rackspace's
> > Otter -- a simple API that doesn't really involve orchestration.
> > 2. If such a API exists, it makes sense for Heat to take advantage of
> > its functionality instead of reimplementing it.
> 
> +1, obviously. But the other half of the story is that the API is likely 
> to be implemented using Heat on the back end, amongst other reasons because 
> that implementation already exists. (As you know, since you wrote it ;)
> 
> So, just as we will have an RDS resource in Heat that calls Trove, and 
> Trove will use Heat for orchestration:
> 
>user => [Heat =>] Trove => Heat => Nova
> 
> there will be a similar workflow for Autoscaling:
> 
>user => [Heat =>] Autoscaling -> Heat => Nova
> 

After a lot of consideration and an interesting IRC discussion, I think
the point above makes it clear for me. Autoscaling will have a simpler
implementation by making use of Heat's orchestration capabilities,
but the fact that Heat will also use autoscaling is orthogonal to that.

That does beg the question of why this belongs in Heat. Originally
we had taken the stance that there must be only one control system,
lest they have a policy-based battle royale. If we only ever let
autoscaled resources be controlled via Heat (via nested stack produced
by autoscaling), then there can be only one.. control service (Heat).

By enforcing that autoscaling always talks to "the world" via Heat though,
I think that reaffirms for me that autoscaling, while not really the same
project (seems like it could happily live in its own code tree), will
be best served by staying inside the "OpenStack Orchestration" program.

The question of private RPC or driving it via the API is not all that
interesting to me. I do prefer the SOA method and having things talk via
their respective public APIs as it keeps things loosely coupled and thus
easier to fit into one's brain and debug/change.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] -- updating python files

2013-08-16 Thread Snider, Tim
How does one upgrade / replace swift files in 
/usr/lib/python2.7/dist-packages/swift?
I've used apt-get to remove, purge, and reinstall but those files aren't 
touched:
root@swift21:/etc# vi /usr/lib/python2.7/dist-packages/swift
swift/ swift-1.4.8.egg-info/

Thanks,
Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group meeting on 8/20

2013-08-16 Thread Dugger, Donald D
Turns out I'll be traveling that day, so I won't be able to run the meeting.
If anyone wants to volunteer to lead the meeting, speak now; otherwise we can
just cancel next week.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Mark Washenberger
I would prefer to pick and choose which parts of oslo common db code to
reuse in glance. Most parts there look great and very useful. However, some
parts seem like they would conflict with several goals we have.

1) To improve code sanity, we need to break away from the idea of having
one giant db api interface
2) We need to improve our position with respect to new, non SQL drivers
- mostly, we need to focus first on removing business logic (especially
authz) from database driver code
- we also need to break away from the strict functional interface,
because it limits our ability to express query filters and tends to lump
all filter handling for a given function into a single code block (which
ends up being defect-rich and confusing as hell to reimplement - see the
sketch after this list)
3) It is unfortunate, but I must admit that Glance's code in general is
pretty heavily coupled to the database code and in particular the schema.
Basically the only tool we have to manage that problem until we can fix it
is to try to be as careful as possible about how we change the db code and
schema. By importing another project, we lose some of that control. Also,
even with the copy-paste model for oslo incubator, code in oslo does have
some of its own reasons to change, so we could potentially end up in a
conflict where glance db migrations (which are operationally costly) have
to happen for reasons that don't really matter to glance.
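
To illustrate the functional-interface point from item 2, here is a toy
sketch - this is neither Glance nor Oslo code, and the model and helpers
are invented:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Image(Base):
        __tablename__ = 'images'
        id = sa.Column(sa.Integer, primary_key=True)
        status = sa.Column(sa.String(30))
        size = sa.Column(sa.Integer)

    # Functional style: every supported filter is handled in this one block.
    def image_get_all(session, filters=None):
        query = session.query(Image)
        filters = dict(filters or {})
        if 'status' in filters:
            query = query.filter_by(status=filters.pop('status'))
        if 'size_min' in filters:
            query = query.filter(Image.size >= filters.pop('size_min'))
        # ...and so on for every filter the API grows over time...
        return query.all()

    # Composable style: callers chain only the filters they need.
    def images(session):
        return session.query(Image)

    def with_status(query, status):
        return query.filter_by(status=status)

    def larger_than(query, size_min):
        return query.filter(Image.size >= size_min)

    # usage: larger_than(with_status(images(session), 'active'), 1024).all()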

So rather than framing this as "glance needs to use oslo common db code", I
would appreciate framing it as "glance database code should have features
X, Y, and Z, some of which it can get by using oslo code." Indeed, I
believe in IRC we discussed the idea of writing up a wiki listing these
feature improvements, which would allow a finer granularity for evaluation.
I really prefer that format because it feels more like planning and less
like debate :-)

 I have a few responses inline below.

On Fri, Aug 16, 2013 at 6:31 AM, Victor Sergeyev wrote:

> Hello All.
>
> Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
> questions about the Oslo DB code: why it is so important to use it instead
> of a custom implementation, and so on. As there were a lot of questions it
> was really hard to answer them all in IRC, so we decided that the mailing
> list is a better place for such things.
>
> List of main questions:
>
> 1. What does the Oslo DB code include?
> 2. Why is it safe to replace a custom implementation with the Oslo DB code?
> 3. Why is Oslo DB code better than a custom implementation?
> 4. Why won't Oslo DB code slow down project development?
> 5. What are we actually going to do in Glance?
> 6. What is the current status?
>
> Answers:
>
> 1. What does the Oslo DB code include?
>
> Currently the Oslo code improves different aspects of database handling:
> -- Working with SQLAlchemy models, engines and sessions
> -- Lots of tools for working with SQLAlchemy
> -- Working with unique keys
> -- A base test case for working with the database
> -- Testing migrations against different backends
> -- Syncing DB models with the actual schemas in the DB (a test that they
> are equivalent)
>
>
> 2. Why is it safe to replace a custom implementation with the Oslo DB code?
>
> The Oslo module, as a base OpenStack module, takes care of code quality.
> Common code is usually more readable (most flake8 checks are enabled in
> Oslo) and has better test coverage. It has also been exercised in different
> use cases (including production) in other projects, so bugs in the Oslo
> code have already been fixed. So we can be confident that we are using
> high-quality code.
>

Alas, while testing and static style analysis are important, they are not
the only relevant aspects of code quality. Architectural choices are also
relevant. The best reusable code places few requirements on the code that
reuses it architecturally--in some cases it may make sense to refactor oslo
db code so that glance can reuse the correct parts.


>
>
> 3. Why is Oslo DB code better than a custom implementation?
>
> There are several arguments in favor of the Oslo database code:
>
> -- common code collects useful features from different projects
> Various database utilities, a common test class, a module for database
> migrations, and other features are already in the Oslo DB code. A patch
> that automatically retries db.api queries when the database connection is
> lost is on review at the moment. If we use the Oslo DB code we don't have
> to care about porting these (and future) features to Glance - they will
> come to all projects automatically once they land in Oslo.
>
> -- a unified way of working with the database across projects
> As already said, it helps developers work with the database in the same
> way in different projects. That's useful for a developer who works with
> the DB in several projects - the same base pieces are used everywhere,
> with no surprises.
>

I'm not very motivated by this argument. I rarely find novelty that
challenging to understand when working with a project, personally. Usually
I'm much more stumped when code is heavily coupled to other modules or too
many responsibilities are lum

Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Monty Taylor


On 08/16/2013 02:25 PM, Maru Newby wrote:
> Neutron has been in and out of the gate for the better part of the
> past month, and it didn't slow the pace of development one bit.  Most
> Neutron developers kept on working as if nothing was wrong, blithely
> merging changes with no guarantees that they weren't introducing new
> breakage.  New bugs were indeed merged, greatly increasing the time
> and effort required to get Neutron back in the gate.  I don't think
> this is sustainable, and I'd like to make a suggestion for how to
> minimize the impact of gate breakage.
> 
> For the record, I don't think consistent gate breakage in one project
> should be allowed to hold up the development of other projects.  The
> current approach of skipping tests or otherwise making a given job
> non-voting for innocent projects should continue.  It is arguably
> worth taking the risk of relaxing gating for those innocent projects
> rather than halting development unnecessarily.
> 
> However, I don't think it is a good idea to relax a broken gate for
> the offending project.  So if a broken job/test is clearly Neutron
> related, it should continue to gate Neutron, effectively preventing
> merges until the problem is fixed.  This would both raise the
> visibility of breakage beyond the person responsible for fixing it,
> and prevent additional breakage from slipping past were the gating to
> be relaxed.

I do not know the exact implementation that would work here, but I do
think it's worth discussing further. Essentially, a neutron bug killing
the gate for a nova dev isn't necessarily going to help - because the
nova dev doesn't necessarily have the background to fix it.

I want to be very careful that we don't wind up with an asymmetrical
gate though...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Clint Byrum
Excerpts from Maru Newby's message of 2013-08-16 11:25:07 -0700:
> Neutron has been in and out of the gate for the better part of the past 
> month, and it didn't slow the pace of development one bit.  Most Neutron 
> developers kept on working as if nothing was wrong, blithely merging changes 
> with no guarantees that they weren't introducing new breakage.  New bugs were 
> indeed merged, greatly increasing the time and effort required to get Neutron 
> back in the gate.  I don't think this is sustainable, and I'd like to make a 
> suggestion for how to minimize the impact of gate breakage.
> 
> For the record, I don't think consistent gate breakage in one project should 
> be allowed to hold up the development of other projects.  The current 
> approach of skipping tests or otherwise making a given job non-voting for 
> innocent projects should continue.  It is arguably worth taking the risk of 
> relaxing gating for those innocent projects rather than halting development 
> unnecessarily.
> 
> However, I don't think it is a good idea to relax a broken gate for the 
> offending project.  So if a broken job/test is clearly Neutron related, it 
> should continue to gate Neutron, effectively preventing merges until the 
> problem is fixed.  This would both raise the visibility of breakage beyond 
> the person responsible for fixing it, and prevent additional breakage from 
> slipping past were the gating to be relaxed.
> 
> Thoughts?
> 

I think this is a cultural problem related to the code review discussing
from earlier in the week.

We are not looking at finding a defect and reverting as a good thing where
high fives should be shared all around. Instead, "you broke the gate"
seems to mean "you are a bad developer". I have been a bad actor here too,
getting frustrated with the gate-breaker and saying the wrong thing.

The problem really is "you _broke_ the gate". It should be "the gate has
found a defect, hooray!". It doesn't matter what causes the gate to stop,
it is _always_ a defect. Now, it is possible the defect is in tempest,
or jenkins, or HP/Rackspace's clouds where the tests run. But it is
always a defect when something that worked before does not work now.

Defects are to be expected. None of us can write perfect code. We should
be happy to revert commits and go forward with an enabled gate while
the team responsible for the commit gathers information and works to
correct the issue.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Russell Bryant
On 08/16/2013 01:17 PM, Vishvananda Ishaya wrote:
> 
> 
> 
> On Fri, Aug 16, 2013 at 3:05 AM, Daniel P. Berrange wrote:

> I don't think it is a good idea to add a feature which is considered to
> be unsupportable by the developers of the virt platform.
> 
> 
> You make excellent points. I'm not totally convinced that this feature
> is the right
> long-term direction, but I still think it is interesting. To be fair,
> I'm not convinced that
> virtual machines as a whole are the right long-term direction. I'm still
> looking for a way
> for people experiment with this and see what use-cases that come out of it.
> 
> Over the past three years OpenStack has been a place where we can
> iterate quickly and
> try new things. Multihost nova-network was an experiment of mine that
> turned into the
> most common deployment strategy for a long time.
> 
> Maybe we've grown up to the point where we have to be more careful and
> not introduce
> these kind of features and the maintenance cost of introducing
> experimental features is
> too great. If that is the community consensus, then I'm happy keep the
> live snapshot stuff
> in a branch on github for people to experiment with.

My feeling after following this discussion is that it's probably best to
keep baking in another branch (github or whatever).  The biggest reason
is because of the last comment quoted from Daniel Berrange above.  I
feel that like that is a pretty big deal.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Jay Pipes

On 08/16/2013 02:41 PM, Mark Washenberger wrote:

I think the issue here for glance is whether or not oslo common code
makes it easier or harder to make other planned improvements. In
particular, using openstack.common.db.api will make it harder to
refactor away from a giant procedural interface for the database driver.


And towards what? A giant object-oriented interface for the database driver?
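
To make the contrast concrete (a throwaway sketch, not actual Glance or
oslo code):

    # Procedural: one flat module of functions, one per query.
    def image_get(context, image_id):
        pass

    def image_member_find(context, image_id, member):
        pass

    # Object-oriented: the same operations hung off a driver object,
    # which at least gives you something to subclass or swap out.
    class DBDriver(object):
        def image_get(self, context, image_id):
            pass

        def image_member_find(self, context, image_id, member):
            pass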

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Maru Newby

On Aug 16, 2013, at 11:44 AM, Monty Taylor  wrote:

> 
> 
> On 08/16/2013 02:25 PM, Maru Newby wrote:
>> Neutron has been in and out of the gate for the better part of the
>> past month, and it didn't slow the pace of development one bit.  Most
>> Neutron developers kept on working as if nothing was wrong, blithely
>> merging changes with no guarantees that they weren't introducing new
>> breakage.  New bugs were indeed merged, greatly increasing the time
>> and effort required to get Neutron back in the gate.  I don't think
>> this is sustainable, and I'd like to make a suggestion for how to
>> minimize the impact of gate breakage.
>> 
>> For the record, I don't think consistent gate breakage in one project
>> should be allowed to hold up the development of other projects.  The
>> current approach of skipping tests or otherwise making a given job
>> non-voting for innocent projects should continue.  It is arguably
>> worth taking the risk of relaxing gating for those innocent projects
>> rather than halting development unnecessarily.
>> 
>> However, I don't think it is a good idea to relax a broken gate for
>> the offending project.  So if a broken job/test is clearly Neutron
>> related, it should continue to gate Neutron, effectively preventing
>> merges until the problem is fixed.  This would both raise the
>> visibility of breakage beyond the person responsible for fixing it,
>> and prevent additional breakage from slipping past were the gating to
>> be relaxed.
> 
> I do not know the exact implementation that would work here, but I do
> think it's worth discussing further. Essentially, a neutron bug killing
> the gate for a nova dev isn't necessarily going to help - because the
> nova dev doesn't necessarily have the background to fix it.
> 
> I want to be very careful that we don't wind up with an assymetrical
> gate though…

What are your concerns regarding an 'asymmetrical gate'?  If neutron 
development is halted until neutron-caused breakage is fixed, there would 
presumably be sufficient motivation to ensure timely resolution.

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-16 Thread Matthew Treinish
On Fri, Aug 16, 2013 at 01:03:57PM -0500, Ben Nemec wrote:
> >>>Getting this in before the H3 rush would be very helpful. When we made
> >>>the switch with Nova's unittests we fixed as many of the test bugs
> >>>that we could find, merged the change to switch the test runner, then
> >>>treated all failures as very high priority bugs that received
> >>>immediate attention. Getting this in before H3 will give everyone a
> >>>little more time to debug any potential new issues exposed by Jenkins
> >>>or people running the tests locally.
> >>>
> >>>I think we should be bold here and merge this as soon as we have good
> >>>numbers that indicate the trend is for these tests to pass. Graphite
> >>>can give us the pass to fail ratios over time, as long as these trends
> >>>are similar for both the old nosetest jobs and the new testr job I say
> >>>we go for it. (Disclaimer: most of the projecst I work on are not
> >>>affected by the tempest jobs; however, I am often called upon to help
> >>>sort out issues in the gate).
> >>
> >>I'm inclined to agree.  It's not as if we don't have transient
> >>failures now, and if we're looking at a 50% speedup in
> >>recheck/verify times then as long as the new version isn't
> >>significantly less stable it should be a net improvement.
> >>
> >>Of course, without hard numbers we're kind of discussing in a vacuum
> >>here.
> >>
> >
> >I also would like to get this in sooner rather than later and fix
> >the bugs as
> >they come in. But, I'm wary of doing this because there isn't a
> >proven success
> >history yet. No one likes gate resets, and I've only been running
> >it on the
> >gate queue for a day now.
> >
> >So here is the graphite graph that I'm using to watch parallel vs
> >serial in the
> >gate queue:
> >https://tinyurl.com/pdfz93l
> 
> Okay, so what are the y-axis units on this?  Because just guessing I
> would say that it's percentage of failing runs, in which case it
> looks like we're already within the 95% as accurate range (it never
> dips below -.05).  Am I reading it right?

Yeah I'm not sure what scale it is using either. I'm not sure it's percent,
or if it is then it's not grouping things over a long period of time to
calculate the percentage. I just know, from manually correlating with what
I saw while watching zuul, that -0.02 was one failure and -0.03 should be 2
failures.

This graph might be easier to read:

http://tinyurl.com/n27lytl 

For this one I told graphite to do a total of events grouped at 1 hour
intervals. This time the y-axis is the number of runs. This plots the
differences between serial and parallel results. So as before, above 0 on the
y-axis means that many more jobs passed in that hour. I split out a line for 
success, failure, and aborted.

The aborted number is actually pretty important. I noticed that if there is a
gate reset (or a bunch of them) when the queue is pretty deep the testr runs are
often finished before the job at the head of the queue fails. So they get marked
as failures but the full jobs never finish and get marked as aborted. The good 
example of this is between late Aug 14 and early Aug 15 on the plot. That is
when there was an intermittent test failure with horizon, which was fixed by a
revert the next morning.

What this exercise has really shown me, though, is that graphing the results isn't
exactly straightforward or helpful unless everything we're measuring is gating.

So as things sit now we've found ~5 more races and/or flaky tests while
running tempest in parallel. 2 have fixes in progress:
https://review.openstack.org/#/c/42169/
https://review.openstack.org/#/c/42351/

Then I have open bugs for the remaining 3 here:
https://bugs.launchpad.net/tempest/+bug/1213212
https://bugs.launchpad.net/tempest/+bug/1213209
https://bugs.launchpad.net/tempest/+bug/1213215

I haven't seen any other repeating failures besides these 3, and no one else has
opened a bug regarding a parallel failure. (although I doubt anyone is paying
attention to the fails, I know I wouldn't :) ) So there may be more that are
happening more infrequently that are being hidden by these 3.

At this point I'm not sure it is ready yet, given the frequency with which I've
seen the testr run fail. But at the same time, the longer we wait, the more bugs
can be introduced. Maybe there is some middle ground like marking the parallel job
as voting on the check queue.

-Matt Treinish



> 
> >
> >On that graph the blue and yellow shows the number of jobs that
> >succeeded
> >grouped together in per hour buckets. (yellow being parallel and
> >blue serial)
> >
> >Then the red line is showing failures; a horizontal bar means that
> >there is no
> >difference in the number of failures between serial and parallel.
> >When it dips
> >negative it is showing a failure in parallel that wasn't on
> >a serial run
> >at the same time. When it goes positive it is showing a failure on
> >serial that
> >doesn't occur on parallel at the same time. But, because the
> >serial run

Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Doug Hellmann
On Tue, Aug 13, 2013 at 2:36 PM, Thomas Maddox
wrote:

>  Hello!
>
>  I was having some chats yesterday with both Julien and Doug regarding
> some thoughts that occurred to me while digging through CM, and Doug
> suggested that I bring them up on the dev list for everyone's benefit and
> discussion.
>
>  My bringing this up is intended to help myself and others get a better
> understanding of why it's this way, whether we're on the correct course,
> and, if not, how we get to it. I'm not expecting anything to change quickly
> or necessarily at all from this. Ultimately the question I'm asking is: are
> we addressing the correct use cases with the correct API calls; being able
> to expect certain behavior without having to know the internals? For
> context, this is mostly using the SQLAlchemy implementation for these
> questions, but the API questions apply overall.
>
>  My concerns:
>
>- Driving get_resources() with the Meter table instead of the Resource
>table. This is mainly because of the additional filtering available in the
>Meter table, which allows us to satisfy a use case like *getting a
>list of resources a user had during a period of time to get meters to
>compute billing with*. The semantics are tripping me up a bit; the
>question this boiled down to for me was: *why use a resource query to
>get meters to show usage by a tenant*? I was curious about why we
>needed the timestamp filtering when looking at Resources, and why we would
>use Resource as a way to get at metering data, rather than a Meter request
>itself? This was answered by resources being the current vector to get at
>metering data for a tenant in terms of resources, if I understood 
> correctly.
>
>
>
>- With this implementation, we have to do aggregation to get at the
>discrete Resources (via the Meter table) rather than just filtering the
>already distinct ones in the Resource table.
>
> Querying first for resources and then getting the statistics is an
artifact of the design of the V1 API, where both the resource id and meter
name were part of the statistics API URL. After the groupby feature lands
in the V2 statistics API, we won't have to make the separate query any more
to satisfy the billing requirement.
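
(Hypothetically, once groupby lands, the two-step query could collapse into
a single call along these lines -- the endpoint and parameter names here are
guesses until the spec is final:)

    import requests

    resp = requests.get(
        'http://ceilometer-host:8777/v2/meters/instance/statistics',
        params={'groupby': 'resource_id',   # hypothetical parameter
                'q.field': 'timestamp',
                'q.op': 'ge',
                'q.value': '2013-08-01T00:00:00'})
    stats_per_resource = resp.json()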

However, that's just one example use case. Sometimes people do want to know
something about the resources that have existed besides the aggregated
samples for billing. The challenge with querying for resources is that the
metadata for a given resource has the potential to change over time. The
resource table holds the most current metadata, but the meter table has all
of the samples and all of the versions of the metadata, so we have to look
there to filter on metadata that might change (especially if we're trying
to answer questions about what resources had specific characteristics
during a time range).
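
(Roughly, in sketch form -- simplified hypothetical models rather than the
real driver code:)

    import datetime

    from sqlalchemy import Column, DateTime, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Meter(Base):
        """Simplified stand-in for the samples table."""
        __tablename__ = 'meter'
        id = Column(String(36), primary_key=True)
        resource_id = Column(String(255))
        resource_metadata = Column(String(1000))  # per-sample snapshot
        timestamp = Column(DateTime)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # Resources that had samples in the window, regardless of what
    # their metadata looks like *now*:
    start = datetime.datetime(2013, 8, 1)
    end = datetime.datetime(2013, 9, 1)
    resource_ids = (session.query(Meter.resource_id)
                    .filter(Meter.timestamp >= start,
                            Meter.timestamp < end)
                    .distinct()
                    .all())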

>
>- This brought up some confusion with the API for me with the major
>use cases I can think of:
>   - As a new consumer of this API, I would think that
>   */resource/<resource_id>* would get me details for a resource, e.g.
>   current state, when it was created, last updated/used timestamp, who 
> owns
>   it; not the attributes from the first sample to come through about it
>
> It should be returning the attributes for the *last* sample to be seen, so
that the metadata and other settings are the most recent values.

>
>-
>   - I would think that
>   */meter/?q.field=resource_id&q.value=<resource_id>* ought to get me
>   a list of meter(s) details for a specific resource, e.g. name, unit, and
>   origin; but not a huge mixture of samples.
>
> The meters associated with a resource are provided as part of the response
to the resources query, so no separate call is needed.

>
>-
>   -
>  - Additionally */meter/?q.field=user_id&q.value=<user_id>* would
>  get me a list of all meters that are currently related to the user
>
> Yes, we're in the process of replacing the term "meter" with "sample." Bad
choice of name that will require a deprecation period.

>
>-
>   -
>   - The ultimate use case, for billing queries, I would think that 
> */meter/<meter name>/statistics?<date/time filters>&<groupby>()* would get me the measurements for
>   that meter to bill for.
>
>
>-
>
> If I understand correctly, one main intent driving this is wanting to
> avoid end users having to write a bunch of API requests themselves from the
> billing side and instead just drill down from payloads for each resource to
> get the billing information for their customers. It also looks like there's
> a BP to add grouping functionality to statistics queries to allow us this
> functionality easily (this one, I think:
> https://blueprints.launchpad.net/ceilometer/+spec/api-group-by).
>
>  I'm new to this project, so I'm trying to get a handle on how we got
> here and maybe offer some outside perspective, if it's needed or wanted. =]
>
>  Thank you all in advance for your time!

Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-16 Thread Doug Hellmann
The notification messages don't translate 1:1 to database records. Even if
the notification payload includes multiple resources, we will store those
as multiple individual records so we can query against them. So it seems
like sending individual notifications would let us distribute the load of
processing the notifications across several collector instances, and won't
have any effect on the data storage requirements.
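
(In sketch form, with made-up payload fields -- batch or not, the collector
ends up writing one row per image:)

    def record_image_notifications(payload, store_sample):
        images = payload if isinstance(payload, list) else [payload]
        for image in images:
            store_sample({'resource_id': image['id'],
                          'project_id': image['owner'],
                          'volume': image['size']})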

Doug


On Thu, Aug 15, 2013 at 11:58 AM, Alex Meade wrote:

> I don't know any actual numbers but I would have the concern that images
> tend to stick around longer than instances. For example, if someone takes
> daily snapshots of their server and keeps them around for a long time, the
> number of exists events would go up and up.
>
> Just a thought, could be a valid avenue of concern.
>
> -Alex
>
> -Original Message-
> From: "Doug Hellmann" 
> Sent: Thursday, August 15, 2013 11:17am
> To: "OpenStack Development Mailing List" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In
> Glance
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Nova generates a single exists event for each instance, and that doesn't
> cause a lot of trouble as far as I've been able to see.
>
> What is the relative number of images compared to instances in a "typical"
> cloud?
>
> Doug
>
>
> On Tue, Aug 13, 2013 at 7:20 PM, Neal, Phil  wrote:
>
> > I'm a little concerned that a batch payload won't align with "exists"
> > events generated from other services. To my recollection, Cinder, Trove
> and
> > Neutron all emit exists events on a per-instance basis... a consumer
> would
> > have to figure out a way to handle/unpack these separately if they
> needed a
> > granular feed. Not the end of the world, I suppose, but a bit
> inconsistent.
> >
> > And a minor quibble: batching would also make it a much bigger issue if a
> > consumer missed a notification... though I guess you could counteract
> that
> > by increasing the frequency (but wouldn't that defeat the purpose?)
> >
> > >
> > >
> > >
> > > On 08/13/2013 04:35 PM, Andrew Melton wrote:
> > > >> I'm just concerned with the type of notification you'd send. It has
> to
> > > >> be enough fine grained so we don't lose too much information.
> > > >
> > > > It's a tough situation, sending out an image.exists for each image
> with
> > > > the same payload as say image.upload would likely create TONS of
> > traffic.
> > > > Personally, I'm thinking about a batch payload, with a bare minimum
> of
> > the
> > > > following values:
> > > >
> > > > 'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at':
> > > > 'some_date', 'size': 1},
> > > >{'id': 'uuid2', 'owner': 'tenant2', 'created_at':
> > > > 'some_date', 'deleted_at': 'some_other_date', 'size': 2}]
> > > >
> > > > That way the audit job/task could be configured to emit in batches;
> > > > a deployer could tweak the settings so as not to emit too many
> > > > messages.
> > > > I definitely welcome other ideas as well.
> > >
> > > Would it be better to group by tenant vs. image?
> > >
> > > One .exists per tenant that contains all the images owned by that
> tenant?
> > >
> > > -S
> > >
> > >
> > > > Thanks,
> > > > Andrew Melton
> > > >
> > > >
> > > > On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou wrote:
> > > >
> > > > On Mon, Aug 12 2013, Andrew Melton wrote:
> > > >
> > > > > So, my question to the Ceilometer community is this, does this
> > > > sound like
> > > > > something Ceilometer would find value in and use? If so, would
> > this be
> > > > > something
> > > > > we would want most deployers turning on?
> > > >
> > > > Yes. I think we would definitely be happy to have the ability to
> > drop
> > > > our pollster at some time.
> > > > I'm just concerned with the type of notification you'd send. It
> > has to
> > > > be enough fine grained so we don't lose too much information.
> > > >
> > > > --
> > > > Julien Danjou
> > > > // Free Software hacker / freelance consultant
> > > > // http://julien.danjou.info
> > > >
> > > >
> > > >
> > > >
> > > > ___
> > > > OpenStack-dev mailing list
> > > > OpenStack-dev@lists.openstack.org
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> 

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2013-08-16 11:10:09 -0700:
> On 2013-08-16 11:58, Jay Pipes wrote:
> > On 08/16/2013 09:52 AM, Boris Pavlovic wrote:
> >> Hi all,
> >> 
> >> We (OpenStack contributors) have done really huge and great work around
> >> DB
> >> code in Grizzly and Havana to unify it, put all common parts into
> >> oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
> >> unique keys, and use this code in different projects instead of
> >> custom implementations. (well done!)
> >> 
> >> oslo-incubator db code is already used by: Nova, Neutron, Cinder,
> >> Ironic, Ceilometer.
> >> 
> >> In this moment we finished work around Glance:
> >> https://review.openstack.org/#/c/36207/
> >> 
> >> And working around Heat and Keystone.
> >> 
> >> So almost all projects use this code (or are planning to use it).
> >> 
> >> Probably it is the right time to start working on moving the oslo.db code
> >> into a separate lib.
> >> 
> >> We (Roman, Viktor and I) will be glad to help make the oslo.db lib:
> >> 
> >> E.g. Here are two drafts:
> >> 1) oslo.db lib code: https://github.com/malor/oslo.db
> >> 2) And here is this lib in action: 
> >> https://review.openstack.org/#/c/42159/
> >> 
> >> 
> >> Thoughts?
> > 
> > ++
> > 
> > Are you going to create a separate Launchpad project for the library
> > and track bugs against it separately? Or are you going to use the oslo
> > project in Launchpad for that?
> 
> At the moment all of the oslo.* projects are just grouped under the 
> overall Oslo project in LP.  Unless there's a reason to do otherwise I 
> would expect that to be true of oslo.db too.

Has that decision been re-evaluated recently?

I feel like bug trackers are more useful when they are more focused. But
perhaps there are other reasons behind using a shared bug tracker.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Jay Pipes

On 08/16/2013 04:00 PM, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2013-08-16 11:10:09 -0700:

On 2013-08-16 11:58, Jay Pipes wrote:

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

Hi all,

We (OpenStack contributors) have done really huge and great work around
DB
code in Grizzly and Havana to unify it, put all common parts into
oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
unique keys, and use this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or are planning to use it).

Probably it is the right time to start working on moving the oslo.db code
into a separate lib.

We (Roman, Viktor and I) will be glad to help make the oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action:
https://review.openstack.org/#/c/42159/


Thoughts?


++

Are you going to create a separate Launchpad project for the library
and track bugs against it separately? Or are you going to use the oslo
project in Launchpad for that?


At the moment all of the oslo.* projects are just grouped under the
overall Oslo project in LP.  Unless there's a reason to do otherwise I
would expect that to be true of oslo.db too.


Has that decision been re-evaluated recently?

I feel like bug trackers are more useful when they are more focused. But
perhaps there are other reasons behind using a shared bug tracker.


+1

The alternative (relying on users to tag bugs consistently) is error-prone.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Jay Pipes

On 08/16/2013 03:52 PM, Doug Hellmann wrote:

However, that's just one example use case. Sometimes people do want to
know something about the resources that have existed besides the
aggregated samples for billing. The challenge with querying for
resources is that the metadata for a given resource has the potential to
change over time. The resource table holds the most current metadata,
but the meter table has all of the samples and all of the versions of
the metadata, so we have to look there to filter on metadata that might
change (especially if we're trying to answer questions about what
resources had specific characteristics during a time range).


This is wasteful, IMO. We could change the strategy to say that a 
resource is immutable once it is received by Ceilometer. And if the 
"metadata" about that resource changes somehow (an example of this would 
be useful) in the future, then a new resource record with a unique ID 
would be generated and its ID shoved into the meter table instead of 
storing redundant denormalized data in the meter.resource_metadata 
field, which AFAICT, is a VARCHAR(1000) field.


Anything that can reduce storage space in the base fact table (meter) 
per row will lead to increased performance...
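
Sketching that idea with hypothetical models (not the current Ceilometer
schema):

    from sqlalchemy import Column, ForeignKey, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Resource(Base):
        """Immutable: a new row (new surrogate id) per metadata version."""
        __tablename__ = 'resource'
        id = Column(String(36), primary_key=True)
        resource_id = Column(String(255), index=True)  # e.g. instance uuid
        resource_metadata = Column(Text)

    class Meter(Base):
        __tablename__ = 'meter'
        id = Column(Integer, primary_key=True)
        # A narrow FK instead of a denormalized VARCHAR(1000) copy:
        resource_version_id = Column(String(36), ForeignKey('resource.id'))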


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Pipeline Retry Semantics ...

2013-08-16 Thread Doug Hellmann
I added a couple of comments in the wiki page. We should have at least one
summit session about this, I think, unless we work it out before then.
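
The crux, in a deliberately naive sketch (all names made up): at-least-once
delivery means step 4 of a 10 step pipeline may run twice unless every step
checkpoints somewhere durable.

    def run_pipeline(event, steps, completed):
        # 'completed' must persist across retries for this to help.
        for index, step in enumerate(steps):
            key = (event['message_id'], index)
            if key in completed:
                continue  # already done on a previous attempt
            step(event)
            completed.add(key)  # checkpoint before moving on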


On Thu, Aug 15, 2013 at 12:20 PM, Sandy Walsh wrote:

> Recently I've been focused on ensuring we don't drop notifications in
> CM. But problems still exist downstream, after we've captured the raw
> event.
>
> From the efforts going on with the Ceilometer sample pipeline, the new
> dispatcher model and the upcoming trigger pipeline, the discussion
> around retry semantics has being coming up a lot.
>
> In other words "What happens when step 4 of a 10 step pipeline fails?"
>
> As we get more into processing billing events, we really need to have a
> solid understanding of how we prevent double-counting or dropping events.
>
> I've started writing down some thoughts here:
> https://wiki.openstack.org/wiki/DuplicateWorkCeilometer
>
> It's a little scattered and I'd like some help tuning it.
>
> Hopefully it'll help grease the skids for the Icehouse Summit talks.
>
> Thanks!
> -S
>
> cc/ Josh, I think the State Management team can really help out here.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] endpoint registration

2013-08-16 Thread Doug Hellmann
If you're saying that you want to register URLs without version info
embedded in them, and let the client work that part out by talking to the
service in question (or getting a version number from the caller), then
"yes, please."


On Fri, Aug 16, 2013 at 1:47 AM, Robert Collins
wrote:

> We're just reworking our endpoint registration on cloud bring up to be
> driven by APIs, per the principled separation of concerns I outlined
> previously.
>
> One thing I note is that the keystone initialisation is basically full
> of magic constants like
> "http://$CONTROLLER_PUBLIC_ADDRESS:8004/v1/%(tenant_id)s"
>
> Now, I realise that when you have a frontend haproxy etc, the endpoint
> changes - but the suffix - v1/%(tenant_id)s in this case - is, AFAICT,
> internal neutron/cinder/ etc knowledge, as is the service type
> ('network' etc).
>
> Rather than copying those into everyones deploy scripts, I'm wondering
> if we could put that into neutronclient etc - either as a query
> function (neutron --endpoint-suffix -> 'v1/%(tenant_id)s) or perhaps
> something that will register with keystone when told to?
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Doug Hellmann
On Fri, Aug 16, 2013 at 4:15 PM, Jay Pipes  wrote:

> On 08/16/2013 03:52 PM, Doug Hellmann wrote:
>
>> However, that's just one example use case. Sometimes people do want to
>> know something about the resources that have existed besides the
>> aggregated samples for billing. The challenge with querying for
>> resources is that the metadata for a given resource has the potential to
>> change over time. The resource table holds the most current metadata,
>> but the meter table has all of the samples and all of the versions of
>> the metadata, so we have to look there to filter on metadata that might
>> change (especially if we're trying to answer questions about what
>> resources had specific characteristics during a time range).
>>
>
> This is wasteful, IMO. We could change the strategy to say that a resource
> is immutable once it is received by Ceilometer. And if the "metadata" about
> that resource changes somehow (an example of this would be useful) in the
> future, then a new resource record with a unique ID would be generated and
> its ID shoved into the meter table instead of storing a redundant
> denormalized data in the meter.resource_metadata field, which AFAICT, is a
> VARCHAR(1000) field.
>

To be clear, when I said "resource" I meant something like an instance, not
owned by ceilometer (rather than a row in the resource table).

As Julien pointed out, the existing SQL driver is based on the schema of
the Mongo driver where rather than doing a mapreduce operation every time
we want to find the most current resource data, it is stored separately.
It's quite likely that someone could improve the SQL driver to not require
the resource table at all, as you suggest.

Doug


>
> Anything that can reduce storage space in the base fact table (meter) per
> row will lead to increased performance...
>
> Best,
> -jay
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] endpoint registration

2013-08-16 Thread Robert Collins
On 17 August 2013 08:27, Doug Hellmann  wrote:
> If you're saying that you want to register URLs without version info
> embedded in them, and let the client work that part out by talking to the
> service in question (or getting a version number from the caller), then
> "yes, please."


That too. But primarily I don't want to be chasing devstack updates
forever because of copied code around this.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Jay Pipes

On 08/16/2013 04:37 PM, Doug Hellmann wrote:

On Fri, Aug 16, 2013 at 4:15 PM, Jay Pipes wrote:

On 08/16/2013 03:52 PM, Doug Hellmann wrote:

However, that's just one example use case. Sometimes people do
want to
know something about the resources that have existed besides the
aggregated samples for billing. The challenge with querying for
resources is that the metadata for a given resource has the
potential to
change over time. The resource table holds the most current
metadata,
but the meter table has all of the samples and all of the
versions of
the metadata, so we have to look there to filter on metadata
that might
change (especially if we're trying to answer questions about what
resources had specific characteristics during a time range).


This is wasteful, IMO. We could change the strategy to say that a
resource is immutable once it is received by Ceilometer. And if the
"metadata" about that resource changes somehow (an example of this
would be useful) in the future, then a new resource record with a
unique ID would be generated and its ID shoved into the meter table
instead of storing a redundant denormalized data in the
meter.resource_metadata field, which AFAICT, is a VARCHAR(1000) field.


To be clear, when I said "resource" I meant something like an instance,
not owned by ceilometer (rather than a row in the resource table).

As Julien pointed out, the existing SQL driver is based on the schema of
the Mongo driver where rather than doing a mapreduce operation every
time we want to find the most current resource data, it is stored
separately. It's quite likely that someone could improve the SQL driver
to not require the resource table at all, as you suggest.


Actually, that's the opposite of what I'm suggesting :) I'm suggesting 
getting rid of the resource_metadata column in the meter table and using 
the resource table in joins...


-jay


Anything that can reduce storage space in the base fact table
(meter) per row will lead to increased performance...

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Doug Hellmann
On Fri, Aug 16, 2013 at 4:43 PM, Jay Pipes  wrote:

> On 08/16/2013 04:37 PM, Doug Hellmann wrote:
>
>> On Fri, Aug 16, 2013 at 4:15 PM, Jay Pipes wrote:
>>
>> On 08/16/2013 03:52 PM, Doug Hellmann wrote:
>>
>> However, that's just one example use case. Sometimes people do
>> want to
>> know something about the resources that have existed besides the
>> aggregated samples for billing. The challenge with querying for
>> resources is that the metadata for a given resource has the
>> potential to
>> change over time. The resource table holds the most current
>> metadata,
>> but the meter table has all of the samples and all of the
>> versions of
>> the metadata, so we have to look there to filter on metadata
>> that might
>> change (especially if we're trying to answer questions about what
>> resources had specific characteristics during a time range).
>>
>>
>> This is wasteful, IMO. We could change the strategy to say that a
>> resource is immutable once it is received by Ceilometer. And if the
>> "metadata" about that resource changes somehow (an example of this
>> would be useful) in the future, then a new resource record with a
>> unique ID would be generated and its ID shoved into the meter table
>> instead of storing a redundant denormalized data in the
>> meter.resource_metadata field, which AFAICT, is a VARCHAR(1000) field.
>>
>>
>> To be clear, when I said "resource" I meant something like an instance,
>> not owned by ceilometer (rather than a row in the resource table).
>>
>> As Julien pointed out, the existing SQL driver is based on the schema of
>> the Mongo driver where rather than doing a mapreduce operation every
>> time we want to find the most current resource data, it is stored
>> separately. It's quite likely that someone could improve the SQL driver
>> to not require the resource table at all, as you suggest.
>>
>
> Actually, that's the opposite of what I'm suggesting :) I'm suggesting
> getting rid of the resource_metadata column in the meter table and using
> the resource table in joins...
>

Ah, I see. That would be another good approach.

Doug


>
> -jay
>
>  Anything that can reduce storage space in the base fact table
>> (meter) per row will lead to increased performance...
>>
>> Best,
>> -jay
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Thierry Carrez
Jay Pipes wrote:
 Are you going to create a separate Launchpad project for the library
 and track bugs against it separately? Or are you going to use the oslo
 project in Launchpad for that?
>>>
>>> At the moment all of the oslo.* projects are just grouped under the
>>> overall Oslo project in LP.  Unless there's a reason to do otherwise I
>>> would expect that to be true of oslo.db too.
>>
>> Has that decision been re-evaluated recently?
>>
>> I feel like bug trackers are more useful when they are more focused. But
>> perhaps there are other reasons behind using a shared bug tracker.
> 
> +1
> 
> The alternative (relying on users to tag bugs consistently) is error-prone.

The reason is that it's actually difficult to get a view of all "oslo"
bugs due to Launchpad shortcomings (a project can only be in one project
group). So keeping them in a single "project" simplifies the work of
people that look after all of "Oslo".

This should be fixed in the future with a task tracker that handles
project groups sanely, and then there is no reason at all to use the
same project for different repositories.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] General Question about CentOS

2013-08-16 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Is OpenStack supported on CentOS running Python 2.6?

Thanks,

Mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Clark Boylan
On Fri, Aug 16, 2013 at 2:51 PM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis)  wrote:
> Is OpenStack supported on CentOS running Python 2.6?
>
I can't speak to what features are supported and whether or not it is
practical for real deployments, but we do all upstream Python 2.6 unit
testing on CentOS6.4 slaves. At the very least I would expect
unittests to work properly on CentOS.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Bob Ball
I'm running the unit tests and can confirm they do work.

I'm currently developing support for xenserver-core on CentOS 6.4 and many of 
the tempest tests pass, and I'm working through the failures that exist.

I haven't encountered anything yet which is caused by CentOS so I imagine it 
will all work.

Bob

From: Clark Boylan [clark.boy...@gmail.com]
Sent: 16 August 2013 23:08
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] General Question about CentOS

On Fri, Aug 16, 2013 at 2:51 PM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis)  wrote:
> Is OpenStack supported on CentOS running Python 2.6?
>
I can't speak to what features are supported and whether or not it is
practical for real deployments, but we do all upstream Python 2.6 unit
testing on CentOS6.4 slaves. At the very least I would expect
unittests to work properly on CentOS.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Clark Boylan
On Fri, Aug 16, 2013 at 8:04 AM, Doug Hellmann
 wrote:
> I'd like to propose Alex Gaynor for core status on the requirements project.
>
> Alex is a core Python and PyPy developer, has strong ties throughout the
> wider Python community, and has been watching and reviewing requirements
> changes for a little while now. I think it would be extremely helpful to
> have him on the team.
>
> Doug
>
+1 from me.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] - python-neutronclient build failing for latest code reviews

2013-08-16 Thread Ronak Shah
Hi,
I can see at the following link that many of the latest code reviews are
reporting a build failure at the same point:

https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient,n,z

The backtrace looks like:


ft46.1: tests.unit.test_shell.ShellTest.test_auth_StringException:
Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-python-neutronclient-python26/tests/unit/test_shell.py",
line 71, in setUp
_shell = openstack_shell.NeutronShell('2.0')
  File 
"/home/jenkins/workspace/gate-python-neutronclient-python26/neutronclient/shell.py",
line 244, in __init__
command_manager=commandmanager.CommandManager('neutron.cli'), )
  File 
"/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py",
line 72, in __init__
self._set_streams(stdin, stdout, stderr)
  File 
"/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py",
line 89, in _set_streams
self.stdin = stdin or codecs.getreader(encoding)(sys.stdin)
  File 
"/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib64/python2.6/codecs.py",
line 984, in getreader
return lookup(encoding).streamreader
TypeError: lookup() argument 1 must be string, not None


Is anyone already looking into this?

Thanks,
Ronak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Maru Newby

On Aug 16, 2013, at 11:44 AM, Clint Byrum  wrote:

> Excerpts from Maru Newby's message of 2013-08-16 11:25:07 -0700:
>> Neutron has been in and out of the gate for the better part of the past 
>> month, and it didn't slow the pace of development one bit.  Most Neutron 
>> developers kept on working as if nothing was wrong, blithely merging changes 
>> with no guarantees that they weren't introducing new breakage.  New bugs 
>> were indeed merged, greatly increasing the time and effort required to get 
>> Neutron back in the gate.  I don't think this is sustainable, and I'd like 
>> to make a suggestion for how to minimize the impact of gate breakage.
>> 
>> For the record, I don't think consistent gate breakage in one project should 
>> be allowed to hold up the development of other projects.  The current 
>> approach of skipping tests or otherwise making a given job non-voting for 
>> innocent projects should continue.  It is arguably worth taking the risk of 
>> relaxing gating for those innocent projects rather than halting development 
>> unnecessarily.
>> 
>> However, I don't think it is a good idea to relax a broken gate for the 
>> offending project.  So if a broken job/test is clearly Neutron related, it 
>> should continue to gate Neutron, effectively preventing merges until the 
>> problem is fixed.  This would both raise the visibility of breakage beyond 
>> the person responsible for fixing it, and prevent additional breakage from 
>> slipping past were the gating to be relaxed.
>> 
>> Thoughts?
>> 
> 
> I think this is a cultural problem related to the code review discussing
> from earlier in the week.
> 
> We are not looking at finding a defect and reverting as a good thing where
> high fives should be shared all around. Instead, "you broke the gate"
> seems to mean "you are a bad developer". I have been a bad actor here too,
> getting frustrated with the gate-breaker and saying the wrong thing.
> 
> The problem really is "you _broke_ the gate". It should be "the gate has
> found a defect, hooray!". It doesn't matter what causes the gate to stop,
> it is _always_ a defect. Now, it is possible the defect is in tempest,
> or jenkins, or HP/Rackspace's clouds where the tests run. But it is
> always a defect that what worked before, does not work now.
> 
> Defects are to be expected. None of us can write perfect code. We should
> be happy to revert commits and go forward with an enabled gate while
> the team responsible for the commit gathers information and works to
> correct the issue.

You're preaching to the choir, and I suspect that anyone with an interest in 
software quality is likely to prefer problem solving to finger pointing.  
However, my intent with this thread was not to promote more constructive 
thinking about defect detection.  Rather, I was hoping to communicate a flaw in 
the existing process and seek consensus on how that process could best be 
modified to minimize the cost of resolving gate breakage.


> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Yufang Zhang
My team has deployed hundreds of compute nodes on CentOS-5.4 (with python26
installed and Xen as the hypervisor) based on Folsom. It does work on our
production system :)


2013/8/17 Miller, Mark M (EB SW Cloud - R&D - Corvallis) <
mark.m.mil...@hp.com>

>   Is OpenStack supported on CentOS running Python 2.6?
>
> Thanks,
>
> Mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Shake Chen
In CentOS 6.x the Python is 2.6.6, and OpenStack runs on it. You can
check RDO:

http://openstack.redhat.com/Quickstart


On Sat, Aug 17, 2013 at 8:05 AM, Yufang Zhang wrote:

> My team has deployed hundreds of compute nodes on CentOS-5.4(with python26
> installed and Xen as hypervisor ) based on Folsom. It does work on our
> production system :)
>
>
> 2013/8/17 Miller, Mark M (EB SW Cloud - R&D - Corvallis) <
> mark.m.mil...@hp.com>
>
>>   Is OpenStack supported on CentOS running Python 2.6?
>>
>> Thanks,
>>
>> Mark
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Shake Chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - python-neutronclient build failing for latest code reviews

2013-08-16 Thread Henry Gessau
I asked on #openstack-infra and clarkb immediately identified it as a
problem with cliff, and saw that the cliff folks have apparently already
fixed it in cliff 1.4.3, which is now on the openstack.org pypi mirror so
new gate jobs should start passing now.

On Fri, Aug 16, at 7:34 pm, Ronak Shah  wrote:

> Hi,
> I can see at the following link that many of the latest code reviews are
> reporting a build failure at the same point:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient,n,z
> 
> The backtrace looks like:
> 
> 
> ft46.1: tests.unit.test_shell.ShellTest.test_auth_StringException: Traceback 
> (most recent call last):
>   File 
> "/home/jenkins/workspace/gate-python-neutronclient-python26/tests/unit/test_shell.py",
>  line 71, in setUp
> _shell = openstack_shell.NeutronShell('2.0')
>   File 
> "/home/jenkins/workspace/gate-python-neutronclient-python26/neutronclient/shell.py",
>  line 244, in __init__
> command_manager=commandmanager.CommandManager('neutron.cli'), )
>   File 
> "/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py",
>  line 72, in __init__
> self._set_streams(stdin, stdout, stderr)
>   File 
> "/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py",
>  line 89, in _set_streams
> self.stdin = stdin or codecs.getreader(encoding)(sys.stdin)
>   File 
> "/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib64/python2.6/codecs.py",
>  line 984, in getreader
> return lookup(encoding).streamreader
> TypeError: lookup() argument 1 must be string, not None
> 
> 
> Is anyone already looking into this?
> 
> Thanks,
> Ronak
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack exercise test failed at euca-register

2013-08-16 Thread XINYU ZHAO
Without a proxy, the test case passes.  With a proxy set in localrc,
euca-register fails with a 400 code. It is weird because 127.0.0.1 is
already included in no_proxy, and it turned out that the API request never
went through the proxy anyway.
Here is a capture of both the with-proxy and without-proxy scenarios; a
comparison shows that they are basically the same, except that the former
received a 400 Bad Request code:


POST /services/Cloud/ HTTP/1.1

Host: 127.0.0.1:8773

Accept-Encoding: identity

Content-Length: 296

Content-Type: application/x-www-form-urlencoded; charset=UTF-8

User-Agent: Boto/2.10.0 (linux2)



AWSAccessKeyId=3cfbdaae44a94dc59959d0d88bfc4f9c&Action=RegisterImage&Architecture=i386&ImageLocation=testbucket%2Fbundle.img.manifest.xml&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-08-17T01%3A24%3A51Z&Version=2009-11-30&Signature=jk8G7EpYn2mcjxQFT%2B53Lgg4usdxviKwpvXfLnxYrHI%3D

HTTP/1.1 400 Bad Request

Content-Type: text/xml

Content-Length: 207

Date: Sat, 17 Aug 2013 01:24:51 GMT




<Response><Errors><Error><Code>S3ResponseError</Code><Message>Unknown error
occured.</Message></Error></Errors><RequestID>req-d2138d8f-6363-4b65-b793-a2bb2d12baee</RequestID></Response>





Without proxy:

POST /services/Cloud/ HTTP/1.1

Host: 127.0.0.1:8773

Accept-Encoding: identity

Content-Length: 296

Content-Type: application/x-www-form-urlencoded; charset=UTF-8

User-Agent: Boto/2.10.0 (linux2)



AWSAccessKeyId=b8a07080b7394dfea0954dcd13a95aca&Action=RegisterImage&Architecture=i386&ImageLocation=testbucket%2Fbundle.img.manifest.xml&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-08-17T01%3A47%3A42Z&Version=2009-11-30&Signature=IV4heXI0GGp2a7gg90ZratX%2F2RxPbmqK6al26g72azM%3D

HTTP/1.1 200 OK

Content-Type: text/xml

Content-Length: 198

Date: Sat, 17 Aug 2013 01:47:43 GMT



<RegisterImageResponse xmlns="http://ec2.amazonaws.com/doc/2009-11-30/">
  <requestId>req-6ea23353-5902-4ac3-b298-13bd841d9409</requestId>
  <imageId>ami-0001</imageId>
</RegisterImageResponse>




On Fri, Aug 16, 2013 at 9:38 AM, XINYU ZHAO  wrote:

> bump.
> any input is appreciated.
>
>
> On Thu, Aug 15, 2013 at 5:04 PM, XINYU ZHAO  wrote:
>
>> Updated every project to the latest, but each time I ran devstack, the
>> exercise test failed at the same place, bundle.sh.
>> Any hints?
>>
>> In console.log
>>
>> Uploaded image as testbucket/bundle.img.manifest.xml
>> ++ euca-register testbucket/bundle.img.manifest.xml
>> ++ cut -f2
>> + AMI='S3ResponseError: Unknown error occured.'
>> + die_if_not_set 57 AMI 'Failure registering testbucket/bundle.img'
>> + local exitcode=0
>> ++ set +o
>> ++ grep xtrace
>> + FXTRACE='set -o xtrace'
>> + set +o xtrace
>> + timeout 15 sh -c 'while euca-describe-images | grep S3ResponseError: 
>> Unknown error occured. | grep -q available; do sleep 1; done'
>> grep: Unknown: No such file or directory
>> grep: error: No such file or directory
>> grep: occured.: No such file or directory
>> close failed in file object destructor:
>> sys.excepthook is missing
>> lost sys.stderr
>> + euca-deregister S3ResponseError: Unknown error occured.
>> Only 1 argument (image_id) permitted
>> + die 65 'Failure deregistering S3ResponseError: Unknown error occured.'
>> + local exitcode=1
>> + set +o xtrace
>> [Call Trace]
>> /opt/stack/new/devstack/exercises/bundle.sh:65:die
>> [ERROR] /opt/stack/new/devstack/exercises/bundle.sh:65 Failure deregistering 
>> S3ResponseError: Unknown error occured.
>>
>>
>>
>> Here is what recorded in n-api log.
>>
>> 2013-08-15 15:44:20.331 27003 DEBUG nova.utils [-] Reloading cached file 
>> /etc/nova/policy.json read_cached_file /opt/stack/new/nova/nova/utils.py:814
>> 2013-08-15 15:44:20.363 DEBUG nova.api.ec2 
>> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] action: RegisterImage 
>> __call__ /opt/stack/new/nova/nova/api/ec2/__init__.py:325
>> 2013-08-15 15:44:20.364 DEBUG nova.api.ec2 
>> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg: Architecture   
>>  val: i386 __call__ /opt/stack/new/nova/nova/api/ec2/__init__.py:328
>> 2013-08-15 15:44:20.364 DEBUG nova.api.ec2 
>> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg: ImageLocation  
>>  val: testbucket/bundle.img.manifest.xml __call__ 
>> /opt/stack/new/nova/nova/api/ec2/__init__.py:328
>> 2013-08-15 15:44:20.370 CRITICAL nova.api.ec2 
>> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Unexpected 
>> S3ResponseError raised
>> 2013-08-15 15:44:20.370 CRITICAL nova.api.ec2 
>> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Environment: 
>> {"CONTENT_TYPE": "application/x-www-form-urlencoded; charset=UTF-8", 
>> "SCRIPT_NAME": "/services/Cloud", "REQUEST_METHOD": "POST", "HTTP_HOST": 
>> "127.0.0.1:8773", "PATH_INFO": "/", "SERVER_PROTOCOL": "HTTP/1.0", 
>> "HTTP_USER_AGENT": "Boto/2.10.0 (linux2)", "RAW_PATH_INFO": 
>> "/services/Cloud/", "REMOTE_ADDR": "127.0.0.1", "REMOTE_PORT": "44294", 
>> "wsgi.url_scheme": "http", "SERVER_NAME": "127.0.0.1", "SERVER_PORT": 
>> "8773", "GATEWAY_INTERFACE": "CGI/1.1", "HTTP_ACCEPT_ENCODING": "identity"}
>> 2013-08-15 15:44:20.371 DEBUG nova.api.ec2.faults 
>> [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] EC2 error response: 
>

Re: [openstack-dev] [Neutron] - python-neutronclient build failing for latest code reviews

2013-08-16 Thread Dean Troyer
On Fri, Aug 16, 2013 at 8:26 PM, Henry Gessau  wrote:

> I asked on #openstack-infra and clarkb immediately identified it as a
> problem with cliff, and saw that the cliff folks have apparently already
> fixed it in cliff 1.4.3, which is now on the openstack.org pypi mirror so
> new gate jobs should start passing now.
>

This occurs in our testing when OS_STDOUT_CAPTURE is not set.  I found it
in python-openstackclient because the default setting as used in the gate
was to not capture stdout.  As Doug found out, cliff doesn't get an
encoding in that state from the test runner.  I see OS_STDOUT_CAPTURE=1 in
your .testr.conf but it looks like that is never used in the test setup.

If you add something similar to
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/tests/utils.py#L26
the problem goes away when OS_STDOUT_CAPTURE=1.  As noted, cliff has been
fixed, but it may be that we should have this in the test setup.  Most
projects already do.
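
For reference, the pattern I mean looks roughly like this (modelled on the
cinderclient helper linked above; a sketch, not a drop-in):

import os

import fixtures
import testtools


class TestCase(testtools.TestCase):

    def setUp(self):
        super(TestCase, self).setUp()
        # Only redirect sys.stdout when the environment asks for capture,
        # which is what the gate does via OS_STDOUT_CAPTURE=1.
        if os.environ.get('OS_STDOUT_CAPTURE') in ('True', '1'):
            stdout = self.useFixture(fixtures.StringStream('stdout')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))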

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Clint Byrum
Excerpts from Maru Newby's message of 2013-08-16 16:42:23 -0700:
> 
> On Aug 16, 2013, at 11:44 AM, Clint Byrum  wrote:
> 
> > Excerpts from Maru Newby's message of 2013-08-16 11:25:07 -0700:
> >> Neutron has been in and out of the gate for the better part of the past 
> >> month, and it didn't slow the pace of development one bit.  Most Neutron 
> >> developers kept on working as if nothing was wrong, blithely merging 
> >> changes with no guarantees that they weren't introducing new breakage.  
> >> New bugs were indeed merged, greatly increasing the time and effort 
> >> required to get Neutron back in the gate.  I don't think this is 
> >> sustainable, and I'd like to make a suggestion for how to minimize the 
> >> impact of gate breakage.
> >> 
> >> For the record, I don't think consistent gate breakage in one project 
> >> should be allowed to hold up the development of other projects.  The 
> >> current approach of skipping tests or otherwise making a given job 
> >> non-voting for innocent projects should continue.  It is arguably worth 
> >> taking the risk of relaxing gating for those innocent projects rather than 
> >> halting development unnecessarily.
> >> 
> >> However, I don't think it is a good idea to relax a broken gate for the 
> >> offending project.  So if a broken job/test is clearly Neutron related, it 
> >> should continue to gate Neutron, effectively preventing merges until the 
> >> problem is fixed.  This would both raise the visibility of breakage beyond 
> >> the person responsible for fixing it, and prevent additional breakage from 
> >> slipping past were the gating to be relaxed.
> >> 
> >> Thoughts?
> >> 
> > 
> > I think this is a cultural problem related to the code review discussion
> > from earlier in the week.
> > 
> > We are not looking at finding a defect and reverting as a good thing where
> > high fives should be shared all around. Instead, "you broke the gate"
> > seems to mean "you are a bad developer". I have been a bad actor here too,
> > getting frustrated with the gate-breaker and saying the wrong thing.
> > 
> > The problem really is "you _broke_ the gate". It should be "the gate has
> > found a defect, hooray!". It doesn't matter what causes the gate to stop,
> > it is _always_ a defect. Now, it is possible the defect is in tempest,
> > or jenkins, or HP/Rackspace's clouds where the tests run. But it is
> > always a defect that what worked before, does not work now.
> > 
> > Defects are to be expected. None of us can write perfect code. We should
> > be happy to revert commits and go forward with an enabled gate while
> > the team responsible for the commit gathers information and works to
> > correct the issue.
> 
> You're preaching to the choir, and I suspect that anyone with an interest in 
> software quality is likely to prefer problem solving to finger pointing.  
> However, my intent with this thread was not to promote more constructive 
> thinking about defect detection.  Rather, I was hoping to communicate a flaw 
> in the existing process and seek consensus on how that process could best be 
> modified to minimize the cost of resolving gate breakage.
> 

I believe that the process is a symptom of the culture. If we were
more eager to revert/discover/fix/re-submit on failure, we wouldn't
be turning off the gate for things. Instead we cling to whatever has
had the requisite "+2/approval" as if passing the stringent review has
imbued our code with magical powers that will eventually morph into
a passing gate.

In a perfect world we could make our CI infrastructure bisect the failures
to try and isolate the commits that did them so at least anybody can see
the commit that did the damage and revert it quickly. Realistically, most
of the time we remove tests from the gate because the failures are intermittent
and take _forever_ to discover, so that may not even be possible.
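
To make "bisect the failures" concrete, the mechanics would be something
along the lines of driving git bisect run with the gate job as the test
command; a sketch, where run_gate_job.sh and last-known-good are
placeholder names, not real infra bits:

import subprocess

# Mark the current broken tip bad and a known-passing ref good.
subprocess.check_call(['git', 'bisect', 'start', 'HEAD', 'last-known-good'])

# git re-runs the job on each candidate commit; exit code 0 means good,
# 1 means bad, so bisect converges on the first failing commit.
subprocess.call(['git', 'bisect', 'run', './run_gate_job.sh'])

# Show what bisect decided, then clean up.
subprocess.check_call(['git', 'bisect', 'log'])
subprocess.check_call(['git', 'bisect', 'reset'])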

I am suggesting that we all change our perspective and embrace "revert
this immediately" as "thank you for finding that defect" not "you jerk
why did you revert my code". It may still be hard to find which commit
to revert, but at least one can spend that time with the idea that they
will be rewarded, rather than punished, for their efforts.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tempest test test_stack_crud_no_resources meets 400 bad request

2013-08-16 Thread XINYU ZHAO
Could anybody take a look at this failure? All projects are updated from trunk.



FAIL: 
tempest.api.orchestration.stacks.test_stacks.StacksTestJSON.test_stack_crud_no_resources[gate,smoke]
tempest.api.orchestration.stacks.test_stacks.StacksTestJSON.test_stack_crud_no_resources[gate,smoke]
----------------------------------------------------------------------
_StringException: Empty attachments:
  stderr
  stdout

Traceback (most recent call last):
  File "tempest/api/orchestration/stacks/test_stacks.py", line 50, in
test_stack_crud_no_resources
stack_name, self.empty_template)
  File "tempest/api/orchestration/base.py", line 70, in create_stack
parameters=parameters)
  File "tempest/services/orchestration/json/orchestration_client.py",
line 57, in create_stack
resp, body = self.post(uri, headers=headers, body=body)
  File "tempest/common/rest_client.py", line 259, in post
return self.request('POST', url, headers, body)
  File "tempest/common/rest_client.py", line 387, in request
resp, resp_body)
  File "tempest/common/rest_client.py", line 437, in _error_checker
raise exceptions.BadRequest(resp_body)
BadRequest: Bad request
Details: {u'title': u'Bad Request', u'explanation': u'The server could
not comply with the request since it is either malformed or otherwise
incorrect.', u'code': 400, u'error': {u'message': u"'module' object
has no attribute 'extract_args'", u'traceback': u'Traceback (most
recent call last):\n\n  File
"/opt/stack/new/heat/heat/openstack/common/rpc/amqp.py", line 435, in
_process_data\n**args)\n\n  File
"/opt/stack/new/heat/heat/openstack/common/rpc/dispatcher.py", line
172, in dispatch\nresult = getattr(proxyobj, method)(ctxt,
**kwargs)\n\n  File "/opt/stack/new/heat/heat/engine/service.py", line
55, in wrapped\nreturn func(self, ctx, *args, **kwargs)\n\n  File
"/opt/stack/new/heat/heat/engine/service.py", line 248, in
create_stack\ncommon_params =
api.extract_args(args)\n\nAttributeError: \'module\' object has no
attribute \'extract_args\'\n', u'type': u'AttributeError'}}
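
For what it's worth, that AttributeError is the classic symptom of a
long-running service holding a module object that was imported before the
function existed, e.g. a heat-engine that was not restarted after the
update. A minimal illustration in plain Python (not Heat code):

import types

# Stand-in for a stale heat.engine.api: the module object predates the
# addition of extract_args, so the attribute simply is not there.
api = types.ModuleType('api')

try:
    api.extract_args({})
except AttributeError as exc:
    print(exc)  # 'module' object has no attribute 'extract_args'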
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Launchpad bug tracker defects (was: Proposal oslo.db lib)

2013-08-16 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2013-08-16 13:55:46 -0700:
> Jay Pipes wrote:
>  Are you going to create a separate Launchpad project for the library
>  and track bugs against it separately? Or are you going to use the oslo
>  project in Launchpad for that?
> >>>
> >>> At the moment all of the oslo.* projects are just grouped under the
> >>> overall Oslo project in LP.  Unless there's a reason to do otherwise I
> >>> would expect that to be true of oslo.db too.
> >>
> >> Has that decision been re-evaluated recently?
> >>
> >> I feel like bug trackers are more useful when they are more focused. But
> >> perhaps there are other reasons behind using a shared bug tracker.
> > 
> > +1
> > 
> > The alternative (relying on users to tag bugs consistently) is error-prone.
> 
> The reason is that it's actually difficult to get a view of all "oslo"
> bugs due to Launchpad shortcomings (a project can only be in one project
> group). So keeping them in a single "project" simplifies the work of
> people that look after all of "Oslo".
> 
> This should be fixed in the future with a task tracker that handles
> project groups sanely, and then there is no reason at all to use the
> same project for different repositories.
> 

I know this sounds like a crazy idea, but have we looked at investing any
time in adding this feature to Launchpad?

TripleO has the same problem. We look at bugs for:

tripleo
diskimage-builder
os-apply-config
os-collect-config
os-refresh-config

Now, having all of those in one project is simply not an option, as they
are emphatically different things. Part of TripleO is allowing users to
swap pieces out for others, so having clear lines between components is
critical.

I remember similar problems working on juju, juju-jitsu, charm-tools,
and juju-core.

Seems like it would be worth a small investment in Launchpad vs. having
to switch to another tracker.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Launchpad bug tracker defects (was: Proposal oslo.db lib)

2013-08-16 Thread Cody A.W. Somerville
On Sat, Aug 17, 2013 at 12:57 AM, Clint Byrum  wrote:

> Excerpts from Thierry Carrez's message of 2013-08-16 13:55:46 -0700:
>
...

> > The reason is that it's actually difficult to get a view of all "oslo"
> > bugs due to Launchpad shortcomings (a project can only be in one project
> > group). So keeping them in a single "project" simplifies the work of
> > people that look after all of "Oslo".
> >
> > This should be fixed in the future with a task tracker that handles
> > project groups sanely, and then there is no reason at all to use the
> > same project for different repositories.
> >
>
> I know this sounds like a crazy idea, but have we looked at investing any
> time in adding this feature to Launchpad?
>
> TripleO has the same problem. We look at bugs for:
>
> tripleo
> diskimage-builder
> os-apply-config
> os-collect-config
> os-refresh-config
>
> Now, having all of those in one project is simply not an option, as they
> are emphatically different things. Part of TripleO is allowing users to
> swap pieces out for others, so having clear lines between components is
> critical.
>
> I remember similar problems working on juju, juju-jitsu, charm-tools,
> and juju-core.
>
> Seems like it would be worth a small investment in Launchpad vs. having
> to switch to another tracker.
>

Their issue is slightly more nuanced. They're already using project groups
to provide a unified view (for all of OpenStack, which might be of dubious
value), but the trouble is that a project can only belong to one project
group.

For TripleO, you might want to look at having a Launchpad admin create a
project group for you, if those projects aren't required to be part of the
openstack project group.

-- 
Cody A.W. Somerville
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev