Re: [openstack-dev] [Climate] Use Taskflow for leases start/end ?

2013-09-24 Thread Sylvain Bauza

Hi Joshua,

Thanks for replying :-)
I was a bit worried by [1]; is that still the case? Are your patterns 
still being rewritten?


[1] 
https://github.com/stackforge/taskflow/blob/master/taskflow/examples/fake_boot_vm.py#L8



-Sylvain

On 23/09/2013 18:57, Joshua Harlow wrote:

Howdy there!

Taskflow should currently be ready for usage; it's not a PyPI library 
yet, but I am hoping for a 0.1 release soon (maybe in two weeks or so).


But in the meantime it does have an `update.py` similar to 
oslo-incubator's, so you can use that to start integrating.


Jump in #openstack-state-management if you have any questions, here to help!

-Josh

From: Sylvain Bauza
Reply-To: OpenStack Development Mailing List

Date: Monday, September 23, 2013 12:20 AM
To: OpenStack Development Mailing List

Subject: Re: [openstack-dev] [Climate] Use Taskflow for leases start/end ?

I can't agree more. My point was not to use it for v1, but just to make 
sure everybody on the team is aware of that kind of transactional 
framework.


As a second benefit, it would make sense to conceptualize the transaction 
model and think about a move later, even if we're not using it yet :-)



Taskflow folks, do you have any kind of code freeze deadline which could 
give us an idea of when the first release will be ready for use?


Thanks,
-Sylvain

On 22/09/2013 09:39, Dina Belova wrote:

Hi all,

I think that TaskFlow is an interesting initiative that could provide 
us some benefits like good encapsulation of logical code blocks, 
better exception handling and management of actions taking place in 
the Climate core, and rollback and replay management.


It looks like we should first understand our workflows in 
Climate and then decide whether or not to use TaskFlow for them.
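
As a concrete illustration of the rollback/replay point above, here is a
minimal sketch of a TaskFlow flow for a lease-start step (API as of roughly
the 0.1 era, so helper names may have shifted since; the lease tasks
themselves are invented for illustration):

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class AllocateResources(task.Task):
        def execute(self, lease_id):
            print("allocating resources for %s" % lease_id)

        def revert(self, lease_id, **kwargs):
            # Called automatically if a later task fails: the rollback
            # half of the transaction.
            print("releasing resources for %s" % lease_id)


    class NotifyOwner(task.Task):
        def execute(self, lease_id):
            print("notifying owner of %s" % lease_id)


    flow = linear_flow.Flow("lease-start").add(AllocateResources(),
                                               NotifyOwner())
    taskflow.engines.run(flow, store={"lease_id": "lease-42"})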



On Thu, Sep 19, 2013 at 4:54 PM, Sylvain Bauza 
<sylvain.ba...@bull.net> wrote:


Hi Climate team,

I just went through https://wiki.openstack.org/wiki/TaskFlow

Do you think Taskflow could help us provide the resources
defined in a lease in an atomic way?

We could leave the implementation up to the plugins, but maybe
the Lease Manager could also benefit from it.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar] pecan models vs. db models

2013-09-24 Thread mar...@redhat.com
On 23/09/13 21:59, Doug Hellmann wrote:
> On Mon, Sep 23, 2013 at 12:31 PM, Petr Blaho  wrote:
> 
>> Hi,
>>
>> during my work on getting tests to pass for
>> https://review.openstack.org/#/c/46947/ I discovered that we are
>> misusing pecan models for HTTP representation of Resources.
>>
>> In controllers pecan/wsme calls actions with pecan model prepopulated
>> from HTTP request's params.
>>
>> For example, when creating new Rack, post
>> method in Racks controller is called with rack object
>> (
>> https://github.com/stackforge/tuskar/blob/master/tuskar/api/controllers/v1/rack.py#L26-L31).
>> This object is instance of Rack from
>>
>> https://github.com/stackforge/tuskar/blob/master/tuskar/api/controllers/v1/types/rack.py
>> .
>> Next this object is used in pecan.request.dbapi.create_rack(rack) call
>> (method defined here
>>
>> https://github.com/stackforge/tuskar/blob/master/tuskar/db/sqlalchemy/api.py#L385-L431
>> )
>>
>> This method just assumes that new_rack object (passed from controller)
>> has some attributes with getters defined (name, subnet, location, ...).
>>
>> This is fine if we have a 1-1 relationship between how a resource is stored
>> in the db and how it is represented via HTTP. In fact this assumes that both
>> versions have the same set of attributes.
>>
>> Marios wrote a patch (mentioned above) which needs some internal
>> attributes only, i.e. present in the table but not exposed via HTTP.
>>
>> In that moment I found that we use pecan models (
>>
>> https://github.com/stackforge/tuskar/tree/master/tuskar/api/controllers/v1/types)
>> to transfer HTTP params _directly_ to our db layer
>> (
>> https://github.com/stackforge/tuskar/blob/master/tuskar/db/sqlalchemy/api.py).
>> By _directly_ I mean that we assume 1-1 mapping between attributes in
>> pecan models and db models.
>>
>> This is not true anymore. We can solve it by using conditionals like
>> this
>> https://review.openstack.org/#/c/46947/3/tuskar/db/sqlalchemy/api.py
>> (lines 175 to 181), but I think this is not a good solution b/c of repetition
>> of code, and generally we are mixing up the HTTP repr. with the db repr.
>>
>> I propose to write a simple layer responsible for "translating" pecan
>> models into the db representation. This way we can keep all diffs in which
>> attributes are exposed via HTTP and which are not in one place - easy to
>> see, easy to change, easy to review. To scatter it around the dbapi code
>> is not a good way IMHO.

Hey Petr:


I believe you mentioned 'service models' yesterday :) on irc - well, we
also used these in Deltacloud for the CIMI frontend - the API calls out
to the service models, which handle talking to the database and
instantiating the separate object models. If I understood correctly, this
is also what Doug is proposing. I think it's a great idea.


My personal concern is getting the HK stories done without too much
disruption, and this would be a major change. Though I also understand
that precisely for this reason we should do it sooner rather than later.
Do you think this is something you can work on in parallel to current
dev, rebasing as frequently as possible from master, aiming to merge
immediately after HK? This is my vote anyway - thoughts? You could even
keep a gerrit review going and update it to encourage more people to look.


thanks, marios



> 
> 
>>
>> Another issue which comes with this coupling of pecan/db models is in
>> tests.
>>
>> In db tests we use utility helpers for creating resources in memory, i.e.
>> the create_test_resource_class method (
>>
>> https://github.com/stackforge/tuskar/blob/master/tuskar/tests/db/test_db_racks.py#L28
>> ). These kinds of methods come from "from tuskar.tests.db import utils"
>> and they use pecan models (
>>
>> https://github.com/stackforge/tuskar/blob/master/tuskar/tests/db/utils.py#L17-L23).
>> We are now on the same page as mentioned above. These db tests use
>> Relation and Link pecan models, which means that an easy solution like
>> importing db models instead of pecan models is not doable at the moment
>> b/c db models do not contain direct counterparts for Relation and Link.
>>
>>
>> I am pretty worried about this pecan-db model coupling b/c if (or when)
>> we need more different structures on the HTTP side than on the db side (no
>> 1-1 relationship), we will have to change our code and tests more.
>>
>> I hope that we will find a solution for this before the tuskar code
>> grows more complex.
>>
>>
>> I would like to see something like "service objects" in the controller part,
>> but a simple set of "translations" can be OK if we do not break 1-1 parity
>> too much.
>>
> 
> Yes, you definitely do not want to be using WSME models in the storage
> layer. In addition to the issues you've already discovered with the
> database schema not mapping directly to the API schema, it will force the
> storage code to depend on web front-end code.
> 
> In ceilometer, we solved this problem by creating a separate set of models
> for the storage layer:
> https://github.com/openstack
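
A minimal sketch of the kind of translation layer proposed above - one place
that maps the pecan/WSME model onto the SQLAlchemy model, so attribute
differences never leak into the dbapi code. The module path and db
constructor below are assumptions for illustration:

    from tuskar.db.sqlalchemy import models as db_models  # assumed module path


    def api_rack_to_db(api_rack):
        """Translate a pecan/WSME Rack into its db representation."""
        return db_models.Rack(  # assumed constructor
            name=api_rack.name,
            subnet=api_rack.subnet,
            location=api_rack.location,
            # internal-only columns stay on the db side and get their
            # defaults here, so dbapi code never has to test which
            # attributes the caller actually set
        )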

[openstack-dev] Reminder: Project & release status meeting - 21:00 UTC

2013-09-24 Thread Thierry Carrez
Today in the project/release status meeting, we'll close the last
standing feature freeze exceptions and review progress towards the
publication of the first release candidates. In particular, we'll review
the state of RC1 buglists and see if the current bugfixing velocity is
compatible with the production of the RC in a timely manner.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130924T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Gary Kotton
Hi,
Anyone know the root cause of:


2013-09-24 06:47:01.670 | Cleaning up...
2013-09-24 06:47:01.670 | No distributions matching the version for 
pyparsing>=2.0.1 (from cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r 
/home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
2013-09-24 06:47:01.670 | Traceback (most recent call last):
2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in <module>
2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1', 'console_scripts', 
'pip')()
2013-09-24 06:47:01.671 |   File 
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
 line 148, in main
2013-09-24 06:47:01.671 | return command.main(args[1:], options)
2013-09-24 06:47:01.672 |   File 
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
 line 169, in main
2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't decode byte 
0xe2 in position 56: ordinal not in range(128)
2013-09-24 06:47:01.673 |
2013-09-24 06:47:01.673 | ERROR: could not install deps 
[-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt, 
-r/home/jenkins/workspace/gate-nova-pep8/test-requi

Thanks

gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Token's project (tenant) not passed to API layer (v2.py) from auth_token.py

2013-09-24 Thread Julien Danjou
On Tue, Sep 24 2013, Pendergrass, Eric wrote:

> Hi, I'm struggling with a problem related to tokens.  I have one token for
> which the project ID gets passed to v2.MeterController.get_all() in the
> kwargs:
>
>  
>
> (Pdb) kwargs
>
> {'project': u'10032339952700', 'meter': u'network.outgoing.bytes'}

project_id you meant?

> As far as I can tell the token data is roughly the same, except for the
> details about the tenant.  I'm stepping through the auth_token code trying
> to find the cause.

You rather want to audit ceilometer.api.controllers.v2._query_to_kwargs,
which adds the project a user is scoped to if they are not an admin.
I guess that's the difference between your two users/tokens.
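
For reference, a rough sketch (not the actual ceilometer source) of the
behaviour described above - non-admin callers get their own project injected
into the query, which is why only one of the two tokens produces a 'project'
kwarg:

    def scope_query_kwargs(kwargs, headers):
        # Headers set by the auth_token middleware.
        roles = headers.get("X-Roles", "").split(",")
        if "admin" not in roles:
            # Force the query to the caller's own project.
            kwargs["project"] = headers["X-Project-Id"]
        return kwargs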

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [LBaaS][Horizon] Subnet for VIP?

2013-09-24 Thread Eugene Nikanorov
Hi Fank,

That looks like a Horizon limitation that is bound to the current reference
implementation of the lbaas service, where the VIP should be on the subnet
where the pool's members are.
So it's not a bug. Expect this to change in Icehouse.

Thanks,
Eugene.



On Tue, Sep 24, 2013 at 9:19 AM,  wrote:

> Hi,
>
> When adding a VIP for this pool, we are supposed to specify the subnet which
> the VIP will be on. However, the Horizon UI forces us to provide an IP
> address for the VIP from the subnet which we used to create the pool. That
> subnet for the pool is supposed to be the subnet for the pool's members, not
> the subnet for the VIP.
>
> This looks like a UI bug?
>
> Thanks,
> -Kaiwei
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [Jenkins] Gating broken

2013-09-24 Thread Gary Kotton
This just seems to affect Nova.
Thanks
Gary

From: Administrator <gkot...@vmware.com>
Reply-To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 24, 2013 11:15 AM
To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Jenkins] Gating broken

Hi,
Anyone know the root cause of:


2013-09-24 06:47:01.670 | Cleaning up...
2013-09-24 06:47:01.670 | No distributions matching the version for 
pyparsing>=2.0.1 (from cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r 
/home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
2013-09-24 06:47:01.670 | Traceback (most recent call last):
2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in <module>
2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1', 'console_scripts', 
'pip')()
2013-09-24 06:47:01.671 |   File 
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
 line 148, in main
2013-09-24 06:47:01.671 | return command.main(args[1:], options)
2013-09-24 06:47:01.672 |   File 
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
 line 169, in main
2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't decode byte 
0xe2 in position 56: ordinal not in range(128)
2013-09-24 06:47:01.673 |
2013-09-24 06:47:01.673 | ERROR: could not install deps 
[-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt, 
-r/home/jenkins/workspace/gate-nova-pep8/test-requi

Thanks

gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [Jenkins] Gating broken

2013-09-24 Thread Matthias Runge
On 24/09/13 11:10, Gary Kotton wrote:
> This just seems to affect Nova.
> Thanks
> Gary
> 
> From: Administrator <gkot...@vmware.com>
> Reply-To: OpenStack Development Mailing List

Sadly not. Horizon seems to be broken for the same reason.

Matthias


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-24 Thread Flavio Percoco

On 23/09/13 15:21 -0400, Doug Hellmann wrote:

The policy code seems very tightly associated with the keystone work. There's
no reason for Oslo to be the only program releasing reusable libraries. We
should consider having the Keystone team manage the policy library in a repo
they own. I'd love to have the Keystone middleware work the same way, instead
of being in the client repo, but one step at a time.

Of course, if the policy code is nearing the point where it is ready to
graduate from the incubator, then maybe that suggestion is moot and we should
just continue to push ahead on the path we're on now. We could have people
submitting policy code to oslo-incubator add "keystone-core" to reviews (adding
a group automatically adds its members), so they don't have to subscribe to
oslo notifications.

How close is the policy code to being ready to graduate?



After the last huge refactor, I think the policy code is mature
enough to live in its own repo. There are some other features I think
it should support - like other persistence backends - but I guess that
can be addressed when it's in a separate repo.

As a graduation requirement, I'd like to see other projects being
migrated to the latest code - cinder, glance, nova - as a proof that
no further changes to the API are needed. Once we get to that point,
we can pull it out of oslo-incubator and replace imports in projects
depending on it.
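
For readers following along, a hedged sketch of what using the refactored
incubator policy code looks like (the Enforcer API below is from memory of
the Havana-era incubator and the project namespace is hypothetical; treat it
as illustrative, not authoritative):

    from myproject.openstack.common import policy  # copied in via update.py

    enforcer = policy.Enforcer()  # loads rules from the configured policy file

    creds = {"roles": ["member"], "project_id": "10032339952700"}
    target = {"project_id": "10032339952700"}

    if enforcer.enforce("volumes:create", target, creds):
        print("allowed")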

I can take care of that as soon as Ith development starts, unless
there's another volunteer :D.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reminder: Project & release status meeting - 21:00 UTC

2013-09-24 Thread Robert Collins
On 24 September 2013 20:08, Thierry Carrez  wrote:
> All Technical Leads for integrated programs should be present (if you
> can't make it, please name a substitute on [1]). Other program leads and
> everyone else is very welcome to attend.

Should that be 'programs with integrated projects', since AFAIK
projects, not programs, are integrated into the release? E.g.
python-novaclient is *not* integrated, but 'nova' is.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Nikola Đipanov
On 24/09/13 10:15, Gary Kotton wrote:
> Hi,
> Anyone know the root cause of:
> 
> 2013-09-24 06:47:01.670 | Cleaning up...
> 2013-09-24 06:47:01.670 | No distributions matching the version for 
> pyparsing>=2.0.1 (from cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r 
> /home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
> 2013-09-24 06:47:01.670 | Traceback (most recent call last):
> 2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in <module>
> 2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1', 
> 'console_scripts', 'pip')()
> 2013-09-24 06:47:01.671 |   File 
> "/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
>  line 148, in main
> 2013-09-24 06:47:01.671 | return command.main(args[1:], options)
> 2013-09-24 06:47:01.672 |   File 
> "/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
>  line 169, in main
> 2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
> 2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't decode byte 
> 0xe2 in position 56: ordinal not in range(128)
> 2013-09-24 06:47:01.673 | 
> 2013-09-24 06:47:01.673 | ERROR: could not install deps 
> [-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt, 
> -r/home/jenkins/workspace/gate-nova-pep8/test-requi
> 

This seems to be an issue with pip not being able to find the right
version of pyparsing. This is strange as the command invocation log does
not suggest any alternative indexes are used - and it works for me.

Maybe someone from the infra team has some more insights?

PS - the unicode error seems to happen when trying to log the failure -
not the cause of it.

N.

> Thanks
> 
> gary
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Virtio-Serial support for Nova libvirt driver

2013-09-24 Thread P Balaji-B37839
Hi,

Virtio-serial interface support for the Nova libvirt driver is not available 
now. Some VMs that want to access the host may need it, e.g. to run 
qemu-guest-agent or any proprietary software that wants to use this mode of 
communication with the host.

Qemu-GA uses virtio-serial communication.
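
For reference, a sketch of the libvirt guest XML such a channel typically
requires (standard libvirt syntax for a qemu-ga virtio-serial channel; the
socket path is illustrative). Presumably the blueprint would have the libvirt
driver emit an element like this:

    CHANNEL_XML = """
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/instance-0001.agent.sock'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>
    """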

We want to propose a blueprint on this for the Icehouse release.

Anybody interested in this?

Regards,
Balaji.P


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reminder: Project & release status meeting - 21:00 UTC

2013-09-24 Thread Thierry Carrez
Robert Collins wrote:
> On 24 September 2013 20:08, Thierry Carrez  wrote:
>> All Technical Leads for integrated programs should be present (if you
>> can't make it, please name a substitute on [1]). Other program leads and
>> everyone else is very welcome to attend.
> 
> Should that be 'programs with integrated projects', since AFAIK
> projects, not programs are integrated into the release. e.g.
> python-novaclient is *not* integrated, but 'nova' is.

+1, will update boilerplate.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Thierry Carrez
Nikola Đipanov wrote:
> On 24/09/13 10:15, Gary Kotton wrote:
>> Hi,
>> Anyone know the root cause of:
>>
>> 2013-09-24 06:47:01.670 | Cleaning up...
>> 2013-09-24 06:47:01.670 | No distributions matching the version for 
>> pyparsing>=2.0.1 (from cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r 
>> /home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
>> 2013-09-24 06:47:01.670 | Traceback (most recent call last):
>> 2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in <module>
>> 2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1', 
>> 'console_scripts', 'pip')()
>> 2013-09-24 06:47:01.671 |   File 
>> "/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
>>  line 148, in main
>> 2013-09-24 06:47:01.671 | return command.main(args[1:], options)
>> 2013-09-24 06:47:01.672 |   File 
>> "/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
>>  line 169, in main
>> 2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
>> 2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't decode 
>> byte 0xe2 in position 56: ordinal not in range(128)
>> 2013-09-24 06:47:01.673 | 
>> 2013-09-24 06:47:01.673 | ERROR: could not install deps 
>> [-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt, 
>> -r/home/jenkins/workspace/gate-nova-pep8/test-requi
> 
> This seems to be an issue with pip not being able to find the right
> version of pyparsing. This is strange as the command invocation log does
> not suggest any alternative indexes are used - and it works for me.
> 
> Maybe someone from the infra team has some more insights?

Should be solved once https://review.openstack.org/48014 is accepted and
merged.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Sean Dague

On 09/24/2013 06:44 AM, Nikola Đipanov wrote:

On 24/09/13 10:15, Gary Kotton wrote:

Hi,
Anyone know the root cause of:

2013-09-24 06:47:01.670 | Cleaning up...
2013-09-24 06:47:01.670 | No distributions matching the version for pyparsing>=2.0.1 (from 
cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r 
/home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
2013-09-24 06:47:01.670 | Traceback (most recent call last):
2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in <module>
2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1', 'console_scripts', 
'pip')()
2013-09-24 06:47:01.671 |   File 
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
 line 148, in main
2013-09-24 06:47:01.671 | return command.main(args[1:], options)
2013-09-24 06:47:01.672 |   File 
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
 line 169, in main
2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't decode byte 
0xe2 in position 56: ordinal not in range(128)
2013-09-24 06:47:01.673 |
2013-09-24 06:47:01.673 | ERROR: could not install deps 
[-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt, 
-r/home/jenkins/workspace/gate-nova-pep8/test-requi



This seems to be an issue with pip not being able to find the right
version of pyparsing. This is strange as the command invocation log does
not suggest any alternative indexes are used - and it works for me.

Maybe someone from the infra team has some more insights?

PS - the unicode error seems to happen when trying to log the failure -
not the cause of it.

N.


Global requirements caps pyparsing at < 2. If software isn't in global 
requirements, it's not let into the mirrors for the gate (so that shadow 
dependencies don't sneak in requirements we don't support).


Which ... is exactly what happened.

cliff 1.4.5 was released, and in a patch-level release bumped a 
minimum requirement up by a major version (pyparsing >= 2.0.1 now required). 
This created a wedge situation.
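
An illustrative reconstruction of the wedge (the exact cap expression is an
assumption; the cliff-side requirement is taken from the log above):

    from pkg_resources import Requirement

    gate_cap = Requirement.parse("pyparsing<2.0")       # global-requirements cap
    cliff_need = Requirement.parse("pyparsing>=2.0.1")  # pulled in by cliff 1.4.5

    for version in ("1.5.7", "2.0.1"):
        print(version, version in gate_cap, version in cliff_need)
    # No version satisfies both constraints, so pip reports "No distributions
    # matching the version for pyparsing>=2.0.1".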


There are 2 options.

1) cap cliff
or
2) uncap pyparsing

I went for option 2, though it's not been tested with any of the stack, 
and realistically it's not something we probably want to do at this stage of 
the release. Maybe dhellmann would have other opinions once he gets online 
this morning.


https://review.openstack.org/#/c/48014/ is the review in process

And to members of our community who are also maintainers of python 
libraries, I'd ask that you please be really cognizant of when you move 
requirements up major versions, and please use that as criteria to bump 
your version numbers at least a minor level (i.e. second digit) and not 
just a patch level. Integrating 100 python sources into a coherent whole 
is a hard job, and anything that can make it easier would be appreciated.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Sean Dague

On 09/24/2013 07:27 AM, Sean Dague wrote:

On 09/24/2013 06:44 AM, Nikola Đipanov wrote:

On 24/09/13 10:15, Gary Kotton wrote:

Hi,
Anyone know the root cause of:

2013-09-24 06:47:01.670 | Cleaning up...
2013-09-24 06:47:01.670 | No distributions matching the version for
pyparsing>=2.0.1 (from
cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r
/home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
2013-09-24 06:47:01.670 | Traceback (most recent call last):
2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in <module>
2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1',
'console_scripts', 'pip')()
2013-09-24 06:47:01.671 |   File
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
line 148, in main
2013-09-24 06:47:01.671 | return command.main(args[1:], options)
2013-09-24 06:47:01.672 |   File
"/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
line 169, in main
2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't
decode byte 0xe2 in position 56: ordinal not in range(128)
2013-09-24 06:47:01.673 |
2013-09-24 06:47:01.673 | ERROR: could not install deps
[-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt,
-r/home/jenkins/workspace/gate-nova-pep8/test-requi



This seems to be an issue with pip not being able to find the right
version of pyparsing. This is strange as the command invocation log does
not suggest any alternative indexes are used - and it works for me.

Maybe someone from the infra team has some more insights?

PS - the unicode error seems to happen when trying to log the failure -
not the cause of it.

N.


Global requirements caps pyparsing at < 2. If software isn't in global
requirements, it's not let into the mirrors for the gate (so that shadow
dependencies don't sneak in requirements we don't support).

Which ... is exactly what happened.

cliff 1.4.5 was released, and in a patch-level release bumped a
minimum requirement up by a major version (pyparsing >= 2.0.1 now required).
This created a wedge situation.

There are 2 options.

1) cap cliff
or
2) uncap pyparsing

I went for point 2, though it's not been tested with any of the stack,
and realistically not something we probably want to do at this stage of
release. Maybe dhellman would have other opinions once he gets online
this morning.

https://review.openstack.org/#/c/48014/ is the review in process

And to members of our community who are also maintainers of python
libraries, I'd ask that you please be really cognizant of when you move
requirements up major versions, and please use that as criteria to bump
your version numbers at least a minor level (i.e. second digit) and not
just a patch level. Integrating 100 python sources into a coherent whole
is a hard job, and anything that can make it easier would be appreciated.


Hmmm... the issue being the infra pip mirrors take some time to update, 
so here is the cliff capping review which should get the system running 
again:


https://review.openstack.org/#/c/48019/

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Nikola Đipanov
On 24/09/13 13:27, Sean Dague wrote:
> 
> And to members of our community who are also maintainers of python
> libraries, I'd ask that you please be really cognizant of when you move
> requirements up major versions, and please use that as criteria to bump
> your version numbers at least a minor level (i.e. second digit) and not
> just a patch level. Integrating 100 python sources into a coherent whole
> is a hard job, and anything that can make it easier would be appreciated.
> 
> -Sean
> 

+100

Thanks for writing this!

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] Gating broken

2013-09-24 Thread Sean Dague

On 09/24/2013 07:36 AM, Sean Dague wrote:


Hmmm... the issue being the infra pip mirrors take some time to update,
so here is the cliff capping review which should get the system running
again:

https://review.openstack.org/#/c/48019/


Which still apparently doesn't address the whole issue... because cliff 
1.4.5 no longer appears to be compatible with the grizzly devstack 
environment, which is uncapped for cliff - this makes it impossible to 
start grizzly devstack, which means upgrade testing is failing 
everything, even after we cap cliff in havana. We have no such easy 
spigot in grizzly, as global requirements enforcement was a summer project.


http://logs.openstack.org/14/48014/1/check/gate-grenade-devstack-vm/386eb51/console.html

2013-09-24 11:21:28.151 | 2013-09-24 11:21:28 Best match: cliff 1.4.5
2013-09-24 11:21:28.151 | 2013-09-24 11:21:28 Downloading 
http://pypi.openstack.org/openstack/cliff/cliff-1.4.5.tar.gz

2013-09-24 11:21:28.152 | 2013-09-24 11:21:28 Processing cliff-1.4.5.tar.gz
2013-09-24 11:21:28.153 | 2013-09-24 11:21:28 Running 
cliff-1.4.5/setup.py -q bdist_egg --dist-dir 
/tmp/easy_install-RcNBdM/cliff-1.4.5/egg-dist-tmp-dv0xP9
2013-09-24 11:21:28.171 | 2013-09-24 11:21:28 
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
distribution option: 'entry_points'

2013-09-24 11:21:28.173 | 2013-09-24 11:21:28   warnings.warn(msg)
2013-09-24 11:21:28.174 | 2013-09-24 11:21:28 
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
distribution option: 'namespace_packages'

2013-09-24 11:21:28.175 | 2013-09-24 11:21:28   warnings.warn(msg)
2013-09-24 11:21:28.176 | 2013-09-24 11:21:28 
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
distribution option: 'zip_safe'

2013-09-24 11:21:28.177 | 2013-09-24 11:21:28   warnings.warn(msg)
2013-09-24 11:21:28.178 | 2013-09-24 11:21:28 
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
distribution option: 'include_package_data'

2013-09-24 11:21:28.179 | 2013-09-24 11:21:28   warnings.warn(msg)
2013-09-24 11:21:28.180 | 2013-09-24 11:21:28 
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown 
distribution option: 'install_requires'

2013-09-24 11:21:28.181 | 2013-09-24 11:21:28   warnings.warn(msg)
2013-09-24 11:21:28.183 | 2013-09-24 11:21:28 Traceback (most recent 
call last):
2013-09-24 11:21:28.184 | 2013-09-24 11:21:28   File "setup.py", line 
22, in <module>

2013-09-24 11:21:28.185 | 2013-09-24 11:21:28 pbr=True)
2013-09-24 11:21:28.185 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/distutils/core.py", line 152, in setup

2013-09-24 11:21:28.186 | 2013-09-24 11:21:28 dist.run_commands()
2013-09-24 11:21:28.187 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands

2013-09-24 11:21:28.211 | 2013-09-24 11:21:28 self.run_command(cmd)
2013-09-24 11:21:28.212 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/distutils/dist.py", line 972, in run_command

2013-09-24 11:21:28.213 | 2013-09-24 11:21:28 cmd_obj.run()
2013-09-24 11:21:28.214 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/setuptools/command/develop.py", line 
27, in run
2013-09-24 11:21:28.215 | 2013-09-24 11:21:28 
self.install_for_development()
2013-09-24 11:21:28.216 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/setuptools/command/develop.py", line 
105, in install_for_development
2013-09-24 11:21:28.217 | 2013-09-24 11:21:28 
self.process_distribution(None, self.dist, not self.no_deps)
2013-09-24 11:21:28.218 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", 
line 692, in process_distribution
2013-09-24 11:21:28.219 | 2013-09-24 11:21:28 [requirement], 
self.local_index, self.easy_install
2013-09-24 11:21:28.220 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 576, in resolve
2013-09-24 11:21:28.221 | 2013-09-24 11:21:28 dist = best[req.key] = 
env.best_match(req, self, installer)
2013-09-24 11:21:28.222 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 821, in best_match
2013-09-24 11:21:28.223 | 2013-09-24 11:21:28 return 
self.obtain(req, installer) # try and download/install
2013-09-24 11:21:28.224 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 833, in obtain
2013-09-24 11:21:28.225 | 2013-09-24 11:21:28 return 
installer(requirement)
2013-09-24 11:21:28.226 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", 
line 608, in easy_install
2013-09-24 11:21:28.248 | 2013-09-24 11:21:28 return 
self.install_item(spec, dist.location, tmpdir, deps)
2013-09-24 11:21:28.250 | 2013-09-24 11:21:28   File 
"/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", 
line 638, in install_item
2013-09-24 11:21:28.252 | 2013-09-24 11:21:28 dists = 
self.install_eggs(spec, download, tmpdir)
201

Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-24 Thread Julien Danjou
On Tue, Sep 24 2013, Ladislav Smola wrote:

> Well, for example, if we find metrics that can be used for measuring health
> (this is probably more undercloud talk, or hardware metrics in general),
> we could do something like "I want this alarm on all resources of this
> type".
> If there are e.g. 100s of resources of the same type, it would be pretty
> dull to connect an alarm to each of them, or to decide to change them.

Agreed, but I don't know/think that Ceilometer has such capabilities
right now.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] test

2013-09-24 Thread ????
test
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-24 Thread Ladislav Smola

On 09/24/2013 02:22 PM, Julien Danjou wrote:

On Tue, Sep 24 2013, Ladislav Smola wrote:


Well for example, if we find metrics, that can be used for measuring health
(this is probably more undercloud talking, or hardware metrics in general),
we could do something like "I want this alarm on all resources of this
type",
if there will be e.g. 100s of the resources of the same type, it would be
pretty
dull to connect alarm to each of them, or to decide to change them.

Agreed, but I don't know/think that Ceilometer has such capabilities
right now.

Yes, it would be good if something like this were supported -> a relation 
of an alarm to multiple entities that are the result of a sample-api query. 
Could it be worth creating a BP?

Though we could do a simple implementation using tags (a special 
description) and keeping track that every entity of some query has its 
alarm. Probably using a composite alarm as a wrapper around a group of 
alarms could also be good; we could use it to store the shared query.

So I guess there could be a way to implement this. A non-optimal way. So it 
might be better to support this in Ceilometer. Not sure. But definitely the 
upgrade from one way to the other would be problematic.
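
A rough sketch of the tag-based workaround described above: create one alarm
per resource matching a query, tagging each via the description so the group
can be found again. ceilometerclient's get_client and resources.list are real
calls of this era, but treat the alarm keyword arguments as assumptions,
since the alarm API was still evolving:

    from ceilometerclient import client

    cc = client.get_client(2, os_username="admin", os_password="secret",
                           os_tenant_name="admin",
                           os_auth_url="http://localhost:5000/v2.0")

    query = [{"field": "metadata.hw_type", "op": "eq", "value": "compute-node"}]
    for res in cc.resources.list(q=query):
        cc.alarms.create(
            name="health-%s" % res.resource_id,
            description="group:health-by-type",  # poor man's shared tag
            meter_name="hardware.cpu.load.1min",
            threshold=0.9,
            comparison_operator="gt",
            matching_metadata={"resource_id": res.resource_id},
        )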

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-24 Thread Julien Danjou
On Tue, Sep 24 2013, Ladislav Smola wrote:

> Yes, it would be good if something like this were supported -> a relation
> of an alarm to multiple entities that are the result of a sample-api query.
> Could it be worth creating a BP?

Probably indeed.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-24 Thread Thomas Maddox
I think Dragon's BP for notification triggers would solve this problem.

Instead of looking at it as applying a single alarm to several resources,
you could instead leverage the similarities of the resources:
https://blueprints.launchpad.net/ceilometer/+spec/notifications-triggers.

Compound that with configurable events:
https://blueprints.launchpad.net/ceilometer/+spec/configurable-event-definitions

-Thomas

On 9/24/13 7:46 AM, "Julien Danjou"  wrote:

>On Tue, Sep 24 2013, Ladislav Smola wrote:
>
>> Yes, it would be good if something like this were supported -> a relation
>> of an alarm to multiple entities that are the result of a sample-api query.
>> Could it be worth creating a BP?
>
>Probably indeed.
>
>-- 
>Julien Danjou
># Free Software hacker # independent consultant
># http://julien.danjou.info
>



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group meeting on 9/24

2013-09-24 Thread Gary Kotton
Hi,
Don is still away on his vacation (lucky guy).
Last week we did not have time to discuss the proposals by Mike Spreitzer 
(https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps/edit).
Today gives us an opportunity to go over them and additional scheduling 
developments, for example: https://review.openstack.org/#/c/45867/
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-24 Thread Zane Bitter

On 24/09/13 05:31, Mike Spreitzer wrote:

I was not trying to raise issues of geographic dispersion and other
higher level structures, I think the issues I am trying to raise are
relevant even without them.  This is not to deny the importance, or
relevance, of higher levels of structure.  But I would like to first
respond to the discussion that I think is relevant even without them.

I think it is valuable for OpenStack to have a place for holistic
infrastructure scheduling.  I am not the only one to argue for this, but
I will give some use cases.  Consider Hadoop, which stresses the path
between Compute and Block Storage.  In the usual way of deploying and
configuring Hadoop, you want each data node to be using directly
attached storage.  You could address this by scheduling one of those two
services first, and then the second with constraints from the first ---
but the decisions made by the first could paint the second into a
corner.  It is better to be able to schedule both jointly.  Also
consider another approach to Hadoop, in which the block storage is
provided by a bank of storage appliances that is equidistant (in
networking terms) from all the Compute.  In this case the Storage and
Compute scheduling decisions have no strong interaction --- but the
Compute scheduling can interact with the network (you do not want to
place Compute in a way that overloads part of the network).


Thanks for writing this up, it's very helpful for figuring out what you 
mean by a 'holistic' scheduler.


I don't yet see how this could be considered in-scope for the 
Orchestration program, which uses only the public APIs of other services.


To take the first example, wouldn't your holistic scheduler effectively 
have to reserve a compute instance and some directly attached block 
storage prior to actually creating them? Have you considered Climate 
rather than Heat as an integration point?



Once a holistic infrastructure scheduler has made its decisions, there
is then a need for infrastructure orchestration.  The infrastructure
orchestration function is logically downstream from holistic scheduling.


I agree that it's necessarily 'downstream' (in the sense of happening 
afterwards). I'd hesitate to use the word 'logically', since I think by 
its very nature a holistic scheduler introduces dependencies between 
services that were intended to be _logically_ independent.



  I do not favor creating a new and alternate way of doing
infrastructure orchestration in this position.  Rather I think it makes
sense to use essentially today's heat engine.

Today Heat is the only thing that takes a holistic view of
patterns/topologies/templates, and there are various pressures to expand
the mission of Heat.  A marquee expansion is to take on software
orchestration.  I think holistic infrastructure scheduling should be
downstream from the preparatory stage of software orchestration (the
other stage of software orchestration is the run-time action in and
supporting the resources themselves).  There are other pressures to
expand the mission of Heat too.  This leads to conflicting usages for
the word "heat": it can mean the infrastructure orchestration function
that is the main job of today's heat engine, or it can mean the full
expanded mission (whatever you think that should be).  I have been
mainly using "heat" in that latter sense, but I do not really want to
argue over naming of bits and assemblies of functionality.  Call them
whatever you want.  I am more interested in getting a useful arrangement
of functionality.  I have updated my picture at
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
-- do you agree that the arrangement of functionality makes sense?


Candidly, no.

As proposed, the software configs contain directives like 'hosted_on: 
server_name'. (I don't know that I'm a huge fan of this design, but I 
don't think the exact details are relevant in this context.) There's no 
non-trivial processing in the preparatory stage of software 
orchestration that would require it to be performed before scheduling 
could occur.
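
For concreteness, the shape of a software config carrying such a directive
(everything except 'hosted_on' is invented here; the proposal under
discussion may differ):

    software_config = {
        "name": "hadoop_datanode",
        "hosted_on": "data_server_1",  # ties the config to a server resource
        "config": "#!/bin/bash\n# ... install and start the datanode ...",
    }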


Let's make sure we distinguish between doing holistic scheduling, which 
requires a priori knowledge of the resources to be created, and 
automatic scheduling, which requires psychic knowledge of the user's 
mind. (Did the user want to optimise for performance or availability? 
How would you infer that from the template?) There's nothing that 
happens while preparing the software configurations that's necessary for 
the former nor sufficient for the latter.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] The last IRC meeting

2013-09-24 Thread Tomas Sedovic
I planned to cancel today's meeting, but I was reminded that we ought to 
finish the naming votes from last week, and it would be good to talk 
a bit more about Tuskar coming under TripleO.


The time:

Tuesday, 24th September, 2013 at 19:00 UTC

The agenda:

* Discuss merger with TripleO including meeting time moving to this slot.
* Finish last week's voting https://etherpad.openstack.org/tuskar-naming
* Open discussion

https://wiki.openstack.org/wiki/Meetings/Tuskar

This is most likely going to be the last Tuskar-specific weekly meeting, 
going forward we'll just be a part of the TripleO one.


T.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Basic questions about climate

2013-09-24 Thread Mike Spreitzer
Climate is about reserving resources.  Are those physical resources or 
virtual ones?  Where was I supposed to read the answer to basic questions 
like that?

If climate is about reserving virtual resources, how is that different 
from "scheduling" them?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Virtio-Serial support for Nova libvirt driver

2013-09-24 Thread Ravi Chunduru
There is an implementation for the qemu guest agent checked into the code.
https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


On Tue, Sep 24, 2013 at 3:40 AM, P Balaji-B37839 wrote:

> Hi,
>
> Virtio-serial interface support for the Nova libvirt driver is not available
> now. Some VMs that want to access the host may need it, e.g. to run
> qemu-guest-agent or any proprietary software that wants to use this mode of
> communication with the host.
>
> Qemu-GA uses virtio-serial communication.
>
> We want to propose a blueprint on this for the Icehouse release.
>
> Anybody interested in this?
>
> Regards,
> Balaji.P
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Virtio-Serial support for Nova libvirt driver

2013-09-24 Thread Haomai Wang

On Sep 24, 2013, at 6:40 PM, P Balaji-B37839  wrote:

> Hi,
> 
> Virtio-serial interface support for the Nova libvirt driver is not available 
> now. Some VMs that want to access the host may need it, e.g. to run 
> qemu-guest-agent or any proprietary software that wants to use this mode of 
> communication with the host.
> 
> Qemu-GA uses virtio-serial communication.
> 
> We want to propose a blueprint on this for the Icehouse release.
> 
> Anybody interested in this?

Great! We have a common interest and I hope we can promote it for Icehouse.

BTW, do you have an initial plan or description for it?

And I think this bp may be relevant:
https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support

> 
> Regards,
> Balaji.P
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Best regards,
Haomai Wang, UnitedStack Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo PTL election

2013-09-24 Thread Doug Hellmann
On Mon, Sep 23, 2013 at 8:27 PM, Adam Young  wrote:

> On 09/23/2013 02:37 AM, Mark McLoughlin wrote:
>
>> Hey
>>
>> I meant to send this as soon as nominations opened - I figure that
>> incumbent PTLs should make it clear if they don't intend to nominate
>> themselves for re-election.
>>
>> To that end - I'm not going to put myself forward for election as Oslo
>> PTL this time around. This is purely based on a gut instinct that doing
>> so will actually be better for Oslo. I still care a great deal about
>> Oslo's mission and will continue to work on Oslo in Icehouse, e.g. doing
>> reviews and getting the oslo.messaging work over the line.
>>
>> I think the legacy of cut-and-paste code across OpenStack is a serious
>> threat to OpenStack's future progress and tackling it effectively is
>> going to require the help of many more of the core developers from other
>> projects. I'm hoping that by not being PTL, there'll be more space for
>> others to jump in and help drive Oslo's direction with new ideas and new
>> approaches.
>>
>> Thanks,
>> Mark.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> Knowing your commitment to OpenStack, I don't feel any need to say "Sorry
> to see you go," as I know this means just an adjustment of focus for you.
>
> What if we said that all core developers on other projects were considered
> core developers on Oslo?  In other words, all of OpenStack has a vested
> interest in code review of the common components.  Would this encourage
> more reviews in Oslo?  Or, would it hurt the overall quality of the Oslo
> code base?  It would certainly broaden the pool of developers, but there
> would be a need to level set the coding standards.
>

Oslo core tries to take a cross-project perspective to everything. That's
not something I think we should assume every core reviewer for the other
projects is ready to do -- understandably, most of them are focused on
their project. That's not to say we wouldn't welcome help, of course, just
that we should build the Oslo team in the same way that we build the other
teams.


>
> Another approach would be that each of the major projects "adopts" a
> subset of Oslo functionality.  For example, Keystone could claim
> responsibility for reviewing Crypto and Policy changes.  I realize that Oslo
> already has a way of identifying developers that can review subsets of the
> code, but this would mean that core projects would collectively have more
> responsibility and ownership of the Oslo libraries.


This may work for some cases, and I would like to explore our options (see
my message in the other thread on this topic). One impediment is deciding
whether the shared code is ready to move into a library yet. A goal for the
incubator repository is to allow projects to adopt unstable APIs as they
are able, without preventing changes to the APIs in the mean time. Moving
something to a library prematurely makes it more difficult to have that
flexibility.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tools for making upstreaming / backporting easier in git

2013-09-24 Thread Adam Spiers
Hi all,

Back in April, I created some wrapper scripts around git-cherry(1) and
git-notes(1), which can help when you have more than a trivial number
of commits to upstream or backport from one branch to another.  Since
then I've improved these tools, and also written a higher-level CLI
which should make the whole process pretty easy.

Last week I finally finished a blog post with all the details:

  http://blog.adamspiers.org/2013/09/19/easier-upstreaming-with-git/

in which I demonstrate how to use the tools via an artificial example
involving backporting some commits from Nova's master branch to its
stable/grizzly release branch.

These tools worked pretty well for me and my team on code outside
OpenStack, but no doubt some people will have ideas how to improve
them, or have different techniques for tackling the problem.  Either
way, I hope this is of interest, and I'd love to hear what people
think!

Cheers,
Adam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Basic questions about climate

2013-09-24 Thread Dina Belova
Hello Mike, all.


We are planning to support both virtual and physical resources in Climate.
There are several documents that describe our view of how this service
should look:


[1] https://wiki.openstack.org/wiki/Resource-reservation-service

[2]
https://docs.google.com/document/d/1vsN9LLq9pEP4c5BJyfG8g2TecoPxWqIY4WMT1juNBPI/edit?usp=sharing

[3]
https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api

[4]
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013415.html


[1] and [2] are mostly about virtual reservations; [3] is about physical
ones. Now we have several BPs and Wiki pages, so we need to prepare one
document for all these thoughts, I suppose.

[4] is our (Mirantis + XLcloud guys) discussion about how Climate should
look.


Generally speaking, for virtual resources we should provide the
opportunity not only to schedule a VM, volume, stack or anything else in
the future, but also to assure the user that they will have the resources to
run their virtual resources at that future time. That means not only creating
a schedule to run/suspend virtual resources, but creating these resources
some time before the lease actually starts, in a special 'RESERVED' status.
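
Purely illustrative, one way such a virtual-resource lease request could
look; all field names here are invented, since Climate's API is still being
designed:

    lease_request = {
        "name": "nightly-hadoop-run",
        "start": "2013-10-01T00:00:00Z",
        "end": "2013-10-01T06:00:00Z",
        "reservations": [{
            "resource_type": "virtual:instance",
            "amount": 10,
            "flavor": "m1.large",
        }],
    }
    # Before "start" the instances would exist only in the 'RESERVED' status;
    # at lease start they would be activated, at lease end suspended/removed.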


You may take a look at the links I mentioned (but really, we need to create
a new document describing all the features and architectural points the
Climate team is now thinking about).


Thank you

Dina


On Tue, Sep 24, 2013 at 6:23 PM, Mike Spreitzer  wrote:

> Climate is about reserving resources.  Are those physical resources or
> virtual ones?  Where was I supposed to read the answer to basic questions
> like that?
>
> If climate is about reserving virtual resources, how is that different
> from "scheduling" them?
>
> Thanks,
> Mike
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-24 Thread Doug Hellmann
On Mon, Sep 23, 2013 at 9:56 PM, Adam Young  wrote:

>  On 09/23/2013 03:21 PM, Doug Hellmann wrote:
>
>
>
>
> On Mon, Sep 23, 2013 at 4:25 AM, Flavio Percoco  wrote:
>
>> On 20/09/13 15:20 -0700, Monty Taylor wrote:
>>
>>>  On 09/20/2013 02:55 PM, Ben Nemec wrote:
>>>
 Not from a Gerrit perspective, but the Oslo policy is that a maintainer
 +1 on the code they maintain is the equivalent of a +2, so only one core
 is needed to approve.

 See
 https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L28

>>>
>>> What if we rethought the organization just a little bit. Instead of
>>> having oslo-incubator from which we copy code, and then oslo.* that we
>>> consume as libraries, what if:
>>>
>>> - we split all oslo modules into their own repos from the start
>>>
>>
>> IIRC, we're planning to have a design session around these lines at
>> the summit. I think the only issue here is figuring out where some
>> modules belong. For example, where would we put strutils? Should we
>> have a single repo for it or perhaps have a more generic one, say
>> oslo.text, where we could group strutils, jsonutils and some other
>> modules?
>>
>> There are plenty of "single-file" packages out there but I'd
>> personally prefer grouping modules as much as possible.
>>
>
>  I agree.
>
>
>>
>> Another thing to consider is, what happens with Oslo modules depending
>> on other oslo modules? I guess we would make sure all the dependencies
>> are copied in the project as we do today but, when it comes to testing
>> the single module, I think this could be an issue. For example,
>> policy.py depends on fileutils, gettextutils and other oslo module
>> which wouldn't fit in the same package, oslo.policy. This will make
>> testing oslo.policy a real pain since we would have to "copy" its
>> dependencies in its own tree as well.
>
>
>  This is a great reason to keep everything together in a single incubator
> repository until a package is ready to stand on its own as a library.
> Libraries can easily declare dependencies to be installed for testing, but
> if we start copying bits of oslo around into separate git repositories then
> we'll all go mad trying to keep all of the repos up to date. :-) In the
> mean time, any review pain we have can be used as encouragement to bring
> the library to a point where it can be moved out of the incubator.
>
>  It sounds like the primary concern is having enough keystone folks
> looking at reviews of the policy code, without being overwhelmed by
> tracking all Oslo changes. There are a couple of ways to address that.
>
>  The policy code seems very tightly associated with the keystone work.
> There's no reason for Oslo to be the only program releasing reusable
> libraries. We should consider having the Keystone team manage the policy
> library in a repo they own. I'd love to have the Keystone middleware work
> the same way, instead of being in the client repo, but one step at a time.
>
>  Of course, if the policy code is nearing the point where it is ready to
> graduate from the incubator, then maybe that suggestion is moot and we
> should just continue to push ahead on the path we're on now. We could have
> people submitting policy code to oslo-incubator add "keystone-core" to
> reviews (adding a group automatically adds its members), so they don't have
> to subscribe to oslo notifications.
>
>  How close is the policy code to being ready to graduate?
>
>
> I would argue that it should graduate now.  Keystone is willing to take it
> on as a subproject, just like  the keystoneclient code is.  We discussed
> putting it in keystoneclient, since auth_token middleware is there
> already.   Thus, anything already using auth_token middleware already has
> the package.
>

I like that in general, although I'd rather see it in a separate repository
than piled into the client -- unless there's a connection between the
policy code and the client code that I just don't understand?

Doug


>
>
>
>
>
>  Doug
>
>
>>
>>
>> - we make update.py a utility that groks copying from a directory that
>>> contains a bunch of repos - so that a person wanting to use is might
>>> have:
>>>  ~/src
>>>  ~/src/oslo
>>>  ~/src/oslo/oslo.db
>>>  ~/src/oslo/oslo.policy
>>>  and then when they run update.py ~/src/oslo ~/src/nova and get the
>>> same results (the copying and name changing and whatnot)
>>>
>>
>>  If we split modules in its own repos, I'd rather use git submodules,
>> which would then work better.
>
>
>>
>>
>>> That way, we can add per-module additional core easily like we can for
>>> released oslo modules (like hacking and pbr have now)
>>>
>>
>>  +1
>>
>>
>>
>>> Also, that would mean that moving from copying to releasing is more a
>>> matter of just making a release than it is of doing the git magic to
>>> split the repo out into a separate one and then adding the new repo to
>>> gerrit.
>>>
>>>
>>  +1
>>
>>  Thoughts?
>>>
>>
>> I like the idea overall, I'm a bit worried about how th

Re: [openstack-dev] Client and Policy

2013-09-24 Thread Doug Hellmann
On Tue, Sep 24, 2013 at 5:00 AM, Flavio Percoco  wrote:

> On 23/09/13 15:21 -0400, Doug Hellmann wrote:
>
>> The policy code seems very tightly associated with the keystone work.
>> There's
>> no reason for Oslo to be the only program releasing reusable libraries. We
>> should consider having the Keystone team manage the policy library in a
>> repo
>> they own. I'd love to have the Keystone middleware work the same way,
>> instead
>> of being in the client repo, but one step at a time.
>>
>> Of course, if the policy code is nearing the point where it is ready to
>> graduate from the incubator, then maybe that suggestion is moot and we
>> should
>> just continue to push ahead on the path we're on now. We could have people
>> submitting policy code to oslo-incubator add "keystone-core" to reviews
>> (adding
>> a group automatically adds its members), so they don't have to subscribe
>> to
>> oslo notifications.
>>
>> How close is the policy code to being ready to graduate?
>>
>>
> After the last huge re-factor, I think the policy code is mature
> enough to live in its own repo. There are some other features I think
> it should support - like other persistence backends - but I guess that
> can be addressed when it's in a separate repo.
>
> As a graduation requirement, I'd like to see other projects being
> migrated to the latest code - cinder, glance, nova - as a proof that
> no further changes to the API are needed. Once we get to that point,
> we can pull it out of oslo-incubator and replace imports in projects
> depending on it.
>
> I can take care of that as soon as Ith development starts, unless
> there's another volunteer :D.


+1, that sounds like a good approach.

Doug


>
>
> Cheers,
> FF
>
> --
> @flaper87
> Flavio Percoco
>
> __**_
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.**org 
> http://lists.openstack.org/**cgi-bin/mailman/listinfo/**openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [LBaaS][Horizon] Subnet for VIP?

2013-09-24 Thread fank
I didn't know the current LBaaS plugin had such a limitation. Although I think 
the UI should not impose such a limitation, since the LBaaS API definition does 
allow this, I'm fine with it. 

Thanks, 
-Kaiwei 

- Original Message -

From: "Eugene Nikanorov"  
To: "OpenStack Development Mailing List"  
Sent: Tuesday, September 24, 2013 2:00:33 AM 
Subject: Re: [openstack-dev] [LBaaS][Horizon] Subnet for VIP? 

Hi Fank, 

That looks like a Horizon limitation that is bound to the current reference 
implementation of the LBaaS service, where the VIP should be on the same subnet 
as the pool's members. 
So it's not a bug. Expect this to change in Icehouse. 

Thanks, 
Eugene. 



On Tue, Sep 24, 2013 at 9:19 AM, < f...@vmware.com > wrote: 


Hi, 

When adding a VIP for this pool, we are supposed to specify the subnet which the 
VIP will be on. However, the Horizon UI forces us to provide an IP address for 
the VIP from the subnet which we used to create the pool. The subnet used for 
the pool is supposed to be the subnet for the pool's members, not the subnet 
for the VIP. 

This looks like a UI bug? 

Thanks, 
-Kaiwei 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 





___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting

2013-09-24 Thread Peter Pouliot
Hi All,

Quick agenda for today's Hyper-V Meeting.


* Puppet module changes.

* Summit Updates

Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [LBaaS][Horizon] Subnet for VIP?

2013-09-24 Thread Yongsheng Gong
The VIP can be on a different subnet than the members are on, but we need to
set up a router so that haproxy can work.
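
Something along these lines (illustrative commands; substitute real subnet IDs):

    neutron router-create lb-router
    neutron router-interface-add lb-router <vip-subnet-id>
    neutron router-interface-add lb-router <members-subnet-id>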


On Tue, Sep 24, 2013 at 11:40 PM,  wrote:

> I didn't know the current LBaaS plugin had such a limitation. Although I think
> the UI should not impose such a limitation, since the LBaaS API definition does
> allow this, I'm fine with it.
>
> Thanks,
> -Kaiwei
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] On the usage of pip vs. setup.py install

2013-09-24 Thread Thomas Goirand
Hi Monty,

On 09/24/2013 09:44 AM, Monty Taylor wrote:
> Instead of:
> 
> "python setup.py install"
> 
> Run:
> 
> "pip install ."

No way that this happens on the packaging side. Buildds have no network
access (on purpose), and we must not do any network access when building.

So I wonder what this post is trying to achieve.

On 09/24/2013 09:44 AM, Monty Taylor wrote:
> It is common practice in python to run:
>
> python setup.py install
> or
> python setup.py develop
>
> So much so that we spend a giant amount of effort to make sure that
> those always work.

Please continue to spend "a giant amount of effort" to make sure that
python setup.py install works (I won't care about develop), as this is
what is being used by debhelper by default.

On 09/24/2013 09:44 AM, Monty Taylor wrote:
> It should have the exact same result, but pip can succeed in some
> places where setup.py install directly can fail.

If "setup.py install" fails, this is a bug and it shall be fixed. :)

Cheers,

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] On the usage of pip vs. setup.py install

2013-09-24 Thread Anita Kuno

My plague is feeling much better now, thanks.

Keep using pip!

Anita.

On 09/23/2013 09:43 PM, Joshua Harlow wrote:

I wonder who got the plague if u got the food :-/

Sent from my really tiny device...

On Sep 23, 2013, at 8:07 PM, "Michael Basnight"  wrote:


But I got suddenly full. Interesting thing that is.

Sent from my digital shackles


On Sep 23, 2013, at 7:16 PM, Joshua Harlow  wrote:

I ran that but world peace didn't happen.

Where can I get my refund?

Sent from my really tiny device...


On Sep 23, 2013, at 6:47 PM, "Monty Taylor"  wrote:

tl;dr - easy_install sucks, so use pip

It is common practice in python to run:

python setup.py install
or
python setup.py develop

So much so that we spend a giant amount of effort to make sure that
those always work.

Fortunately for us, the underlying mechanism, setuptools, can often be a
pile of monkeys. pip, while also with its fair share of issues, _is_ a
bit better at navigating the shallow waters at times. SO - I'd like to
suggest:

Instead of:

"python setup.py install"

Run:

"pip install ."

It should have the exact same result, but pip can succeed in some places
where setup.py install directly can fail.

Also, if you'd like to run python setup.py develop, simply run:

"pip install -e ."

Which you may not have known will run setup.py develop behind the scenes.

Things this will help with:
- world peace
- requirements processing
- global hunger
- the plague

Enjoy.

Monty

PS. The other should work. It's just sometimes it doesn't, and when it
doesn't it's less my fault.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] On the usage of pip vs. setup.py install

2013-09-24 Thread Monty Taylor


On 09/24/2013 12:11 PM, Thomas Goirand wrote:
> Hi Monty,
> 
> On 09/24/2013 09:44 AM, Monty Taylor wrote:
>> Instead of:
>>
>> "python setup.py install"
>>
>> Run:
>>
>> "pip install ."
> 
> No way that this happens on the packaging side. Buildd have no network
> access (on purpose), and we must not do any network access when building.
>
> So I wonder what this post is trying to achieve.

This post has nothing to do with packaging. The OpenStack project does
not produce packages.

This is informational for people who are already running setup.py
install, suggesting that they use pip install . instead.

If, however, you are in a packaging situation and you are running
setup.py install as part of your packaging scripts such as in your
debian/rules file or your rpm spec file, continuing to use setup.py
install should have no negative effects, as all of the requirements
processing should be avoided due to the system packaging having taken
care of it already, so the evil that is easy_install will not be invoked.

Further, all debian/rules and rpm spec files that are packaging
openstack project should really add the SKIP_PIP_INSTALL env var. This
will turn off the additional pip operations that pbr does - which are
again pointless in a distro-packaging world.
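
For example, a debian/rules file might simply export it (a minimal sketch of a
dh-style rules file; the catch-all target shown is just the debhelper default
spelled out, adjust to taste):

    #!/usr/bin/make -f
    export SKIP_PIP_INSTALL=1

    %:
    	dh $@ --with python2   # (recipe line must be indented with a tab)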

> On 09/24/2013 09:44 AM, Monty Taylor wrote:
>> It is common practice in python to run:
>>
>> python setup.py install
>> or
>> python setup.py develop
>>
>> So much so that we spend a giant amount of effort to make sure that
>> those always work.
> 
> Please continue to spend "a giant amount of effort" to make sure that
> python setup.py install works (I won't care about develop), as this is
> what is being used by debhelper by default.

It will.

> On 09/24/2013 09:44 AM, Monty Taylor wrote:
>> It should have the exact same result, but pip can succeed in some
>> places where setup.py install directly can fail.
> 
> If "setup.py install" fails, this is a bug and it shall be fixed. :)

Not to beat a dead horse - but the things I am talking about here have
to do with dependency resolution, not with actual installation.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Scheduler] API extension for instance groups

2013-09-24 Thread Debojyoti Dutta
Hi Folks

Sorry for missing today's meeting. Was reading the logs
 
http://eavesdrop.openstack.org/meetings/scheduling/2013/scheduling.2013-09-24-15.03.log.html

Regarding the API: the instance groups API extension could be used as
a starting point, although it could be refined. Right now it's quite
simple and abstract. For example, one can pass arbitrary policy
objects.

Should we use the next meeting to discuss what we need in a placement
API? In my mind, we need the following:
* policy descriptions to encapsulate the rules - you can pass a bunch of
policy objects and be done
* a description of the resources to be placed, possibly a virtual
topology - for now the instance group API extension can be passed a
topology instead of a list of instances ... voila! (see the strawman
payload below)
* do we need patterns etc.? That should be done by a higher-level API like Heat
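
As a strawman, an extended version of the extension might accept a payload
shaped like this (purely illustrative -- the names and the topology field are
made up, not the current schema):

    POST /v2/{tenant_id}/os-instance-groups
    {
        "instance_group": {
            "name": "web-tier",
            "policies": ["anti-affinity"],
            "topology": {"instances": 4, "groups": [...]}
        }
    }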

My thesis: if we improve the instance group API extension a bit, we
should be able to cover the cases discussed in today's meeting. Happy
to do a deep dive next week!

-- 
-Debo~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Use Taskflow for leases start/end ?

2013-09-24 Thread Joshua Harlow
Never worry! That is being actively fixed right now (as we speak).

Good catch though :)

https://review.openstack.org/#/c/47562/

Jump in #openstack-state-management if u want to chat. :)

-Josh

From: Sylvain Bauza <sylvain.ba...@bull.net>
Date: Tuesday, September 24, 2013 12:02 AM
To: Joshua Harlow <harlo...@yahoo-inc.com>
Cc: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Climate] Use Taskflow for leases start/end ?

Hi Joshua,

Thanks for replying :-)
I was a bit worried by [1], is it still the case ? Are your patterns still in 
rewrite ?

[1] 
https://github.com/stackforge/taskflow/blob/master/taskflow/examples/fake_boot_vm.py#L8


-Sylvain



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] PTL nomination

2013-09-24 Thread Devananda van der Veen
Hi!

I would like to nominate myself for the OpenStack Bare Metal Provisioning
(Ironic) PTL position.

I have been working with OpenStack for over 18 months, and was a
scalability and performance consultant at Percona for four years prior.
Since '99, I have worked as a developer, team lead, database admin, and
linux systems architect for a variety of companies.

I am the current PTL of the Bare Metal Provisioning (Ironic) program, which
began incubation during Havana. In collaboration with many fine folks from
HP, NTT Docomo, USC/ISI, and VirtualTech, I worked extensively on the Nova
Baremetal driver during the Grizzly cycle. I also helped start the TripleO
program, which relies heavily on the baremetal driver to achieve its goals.
During the Folsom cycle, I led the effort to improve Nova's DB API layer
and added devstack support for the OpenVZ driver. Through that work, I
became a member of nova-core for a time, though my attention has shifted
away from Nova more recently.

Once I had seen nova-baremetal and TripleO running in our test environment
and began to assess our longer-term goals (eg, HA, scalability, integration
with other OpenStack services), I felt very strongly that bare metal
provisioning was a separate problem domain from Nova and would be best
served with a distinct API service and a different HA framework than what
is provided by Nova. I circulated this idea during the last summit, and
then proposed it to the TC shortly thereafter.

During this development cycle, I feel that Ironic has made significant
progress. Starting from the initial "git bisect" to retain the history of
the baremetal driver, I added an initial service and RPC framework,
implemented some architectural pieces, and left a lot of #TODO's. Today,
with commits from 10 companies during Havana (*) and integration already
underway with devstack, tempest, and diskimage-builder, I believe we will
have a functional release within the Icehouse time frame.

I feel that a large part of my role as PTL has been - and continues to be -
to gather ideas from a wide array of individuals and companies interested
in bare metal provisioning, then translate those ideas into a direction for
the program that fits within the OpenStack ecosystem. Additionally, I am
often guiding compromise between the long-term goals, such as firmware
management, and the short-term needs of getting the project to a
fully-functional state. To that end, here is a brief summary of my goals
for the project in the Icehouse cycle.

* API service and client library (likely finished before the summit)
* Nova driver (blocked, depends on ironic client library)
* Finish RPC bindings for power and deploy management
* Finish merging bm-deploy-helper with Ironic's PXE driver
* PXE boot integration with Neutron
* Integrate with TripleO / TOCI for automated testing
* Migration script for existing deployments to move off the nova-baremetal
driver
* Fault tolerance of the ironic-conductor nodes
* Translation support
* Docs, docs, docs!

Beyond this, there are many long-term goals which I would very much like to
facilitate, such as:

* hardware discovery
* better integration with SDN capable hardware
* pre-provisioning tools, eg. management of bios, firmware, and raid
config, hardware burn-in, etc.
* post-provisioning tools, eg. secure-erase
* boot from network volume
* secure boot (protect deployment against MITM attacks)
* validation of signed firmware (protect tenant against prior tenant)

Overall, I feel honored to be working with so many talented individuals
across the OpenStack community, and know that there is much more to learn
as a developer, and as a program lead.

(*)
http://www.stackalytics.com/?release=havana&metric=commits&project_type=All&module=ironic

http://russellbryant.net/openstack-stats/ironic-reviewers-30.txt
http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt

--
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance PTL candidacy

2013-09-24 Thread Mark Washenberger
Hi all,

I'd like to submit myself as a candidate to continue in the role of Glance
PTL for the Icehouse release cycle.

A bit of history about me: I joined Rackspace's Team Titan back in February
2011, where we were initially focused on filling out the OpenStack 1.1 API for
Nova. I've been working with Glance APIs and core functionality for about 4
cycles now. Since November 2012 I've been an employee of Nebula, working
alongside some of the original NASA OpenStack folks and the former Glance
PTL Brian Waldon.

As Icehouse PTL, I expect to proceed much in the same way as during Havana,
hopefully with lots of realized opportunities for incremental improvement.
That's about the most honest endorsement I can give of myself. I'm very
open to any advice, suggestions, or other contributors who want to take on
more responsibility in the near future. One thing I learned during Havana
was that I was in a better position before becoming PTL to work on the
particular items I care most about, so any sensible responsibility sharing
that would free me up to continue working on refining code organization and
test performance is very welcome.

Thank you,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins} Gating broken

2013-09-24 Thread Bhuvan Arumugam
On Tue, Sep 24, 2013 at 1:15 AM, Gary Kotton  wrote:

> Hi,
> Anyone know the root cause of:
>
> 2013-09-24 06:47:01.670 | Cleaning up...
> 2013-09-24 06:47:01.670 | No distributions matching the version for 
> pyparsing>=2.0.1 (from cliff>=1.4.3->python-neutronclient>=2.3.0,<3->-r 
> /home/jenkins/workspace/gate-nova-pep8/requirements.txt (line 25))
> 2013-09-24 06:47:01.670 | Traceback (most recent call last):
> 2013-09-24 06:47:01.671 |   File ".tox/pep8/bin/pip", line 9, in 
> 2013-09-24 06:47:01.671 | load_entry_point('pip==1.4.1', 
> 'console_scripts', 'pip')()
> 2013-09-24 06:47:01.671 |   File 
> "/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/__init__.py",
>  line 148, in main
> 2013-09-24 06:47:01.671 | return command.main(args[1:], options)
> 2013-09-24 06:47:01.672 |   File 
> "/home/jenkins/workspace/gate-nova-pep8/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py",
>  line 169, in main
> 2013-09-24 06:47:01.672 | text = '\n'.join(complete_log)
> 2013-09-24 06:47:01.672 | UnicodeDecodeError: 'ascii' codec can't decode byte 
> 0xe2 in position 56: ordinal not in range(128)
> 2013-09-24 06:47:01.673 |
> 2013-09-24 06:47:01.673 | ERROR: could not install deps 
> [-r/home/jenkins/workspace/gate-nova-pep8/requirements.txt, 
> -r/home/jenkins/workspace/gate-nova-pep8/test-requi
>
>
Likely due to a missing dependency in the PyPI mirror. pyparsing 2.0.1 was not
synced to the mirror at that point in time. pyparsing 2.0.1 is now in the
mirror, so it shouldn't be an issue anymore. The tests ran at 6:47, while the
package was uploaded/synced at 14:25.
  http://pypi.openstack.org/openstack/pyparsing/pyparsing-2.0.1.tar.gz

I see it as a timing issue between when the requirements change and when the
packages are synced to the mirror. In this case, a newer version was released
and cliff>=1.4.3 attempts to install that newer version, v1.4.5. The new
version of cliff, v1.4.5, released yesterday, requires pyparsing>=2.0.1.
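
As a quick sanity check, one can ask pip for the pinned version straight from
the mirror (assuming the mirror is consumed as a pip index, as the gate does):

    pip install --index-url http://pypi.openstack.org/openstack/ "pyparsing>=2.0.1"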

-- 
Regards,
Bhuvan Arumugam
www.livecipher.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-24 Thread Steven Gonzales
Ceilometer Team,

I am a developer on the Project Meniscus team.  I noticed the conversation on 
adding ElasticSearch and Kibana to Ceilometer and thought I would share some 
information regarding our project.  I would love to discuss a way our projects 
could work together on some of these common goals and possibly collaborate.

Project Meniscus is an open-source Python logging-as-a-service solution.  The 
multi-tenant service will allow the dispatch of log messages to sinks such as 
ElasticSearch, Swift, and HDFS.  Our initial implementation is defaulting to 
ElasticSearch.

The system was designed with the intention to scale and to be resilient to 
failure.  We have written a TCP server for receiving syslog messages from 
standard syslog servers/daemons such as RSYSLOG and SYSLOG-NG.  The server 
receives syslog messages over long-lived TCP connections and parses individual 
log messages into JSON documents.  The server uses the Tornado TCP server, and 
the parser itself is written in C and uses Cython bindings.
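
For a rough idea of the shape of it (a toy sketch, not Meniscus code -- the
real parsing is the C/Cython work mentioned above):

    import json
    from tornado.ioloop import IOLoop
    from tornado.tcpserver import TCPServer

    class SyslogServer(TCPServer):
        def handle_stream(self, stream, address):
            def on_line(line):
                # Wrap the raw message; real parsing would follow RFC 3164/5424.
                doc = {"host": address[0],
                       "message": line.decode("utf-8", "replace").strip()}
                print(json.dumps(doc))  # placeholder for the ElasticSearch sink
                stream.read_until(b"\n", on_line)  # keep the long-lived connection
            stream.read_until(b"\n", on_line)

    if __name__ == "__main__":
        SyslogServer().listen(5140)
        IOLoop.instance().start()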

We have implemented features such as normalization of log data by writing a 
Python library that binds to liblognorm, a C library for log processing.

In our very early alpha implementation we have been able to process about 30-40 
GB of syslog messages per day on a single worker node, with very little load on 
the server.  Our current worker nodes are 8 GB RAM virtual machines running on 
Nova.


Currently we are working on:
 1. load balancing for syslog messages after parsing(since syslog servers 
transmit using long lived tcp connections)
 2. Implementing keystone authentication into Kibana 3
 3. Building a proxy in front of ElasticSearch to limit queries by tenant.

Our project page is http://projectmeniscus.org/
Our repo is located at: https://github.com/ProjectMeniscus

The repo contains the main code base and all supporting projects, including our 
chef repository.

Again, we would love to find ways for our projects to collaborate on these 
common goals.  Would it be possible to set up a time for us to talk briefly?

Steven Gonzales
Software Developer
Rackspace Hosting
steven.gonza...@rackspace.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Incubator Application Q&A section

2013-09-24 Thread Sergey Lukjanov
Hello,

I've added a new section to the Savanna Incubator Application with recent 
questions and answers:

https://wiki.openstack.org/wiki/Savanna/Incubation#Raised_Questions_.2B_Answers

And here are details about integration with Heat:

https://wiki.openstack.org/wiki/Savanna/HeatIntegration

Thanks.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday September 24th at 19:00 UTC

2013-09-24 Thread Elizabeth Krumbach Joseph
On Mon, Sep 23, 2013 at 7:07 PM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday September 24th, at 19:00 UTC in
> #openstack-meeting

Meeting minutes and logs:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-24-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-24-19.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-24-19.00.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Current list of confirmed PTL Candidates

2013-09-24 Thread Anita Kuno
I currently have the following list of confirmed PTL candidates: 
https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013#Candidates


Confirmed candidates for Fall 2013 PTL Elections (alphabetically by last 
name):

 * Compute (Nova)
   o Russell Bryant
 * Object Storage (Swift)
   o
 * Image Service (Glance)
   o Mark Washenberger
 * Identity (Keystone)
   o Dolph Mathews
 * Dashboard (Horizon)
   o Gabriel Hurley
   o David Lyle
 * Networking (Neutron)
   o Mark McClain
 * Block Storage (Cinder)
   o John Griffith
 * Metering/Monitoring (Ceilometer)
   o Julien Danjou
 * Orchestration (Heat)
   o Steve Baker
 * Database Service (Trove)
   o Michael Basnight
 * Bare metal (Ironic)
   o Devananda van der Veen
 * Queue service (Marconi)
   o Kurt Griffiths
 * Common Libraries (Oslo)
   o Doug Hellmann
 * Infrastructure
   o James E. Blair
 * Documentation
   o Anne Gentle
 * Quality Assurance (QA)
   o Sean Dague
 * Deployment (TripleO)
   o Robert Collins
 * Devstack (DevStack)
   o Dean Troyer
 * Release cycle management
   o Thierry Carrez



If you have announced your candidacy and I have not listed your name 
yet, please get in contact with me via IRC or the email I used to post 
this email so that I can include your name.


Thank you,
Anita. (anteaya)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna-dashboard] version 0.3 - updated UI mockups for EDP workflow

2013-09-24 Thread Liz Blanchard
Chad & Nadya,

Sorry for the late reply on this, I've been meaning to send some thoughts and 
finally had some time today to pull it all together :)

First off, an additional place that you should feel free to post 
wireframes/design ideas and have a discussion is on our G+ OpenStack UX page:
https://plus.google.com/u/0/communities/100954512393463248122

We are actually hoping to move this to use Askbot in the future, but for now 
this community is still very active and you could get some reviewers there who 
might not have seen this e-mail.

As for these designs, I have a bit of feedback:
1) The addition of a high level tab for the main section of "Savanna" features 
might introduce some complications. I've been seeing a lot of developers adding 
tabs here, which works when it is the only tab in addition to 
"Project" and "Admin", but it doesn't scale well once there are more than three 
tabs. We are trying to address this in a navigation enhancement blueprint for 
Horizon:
https://blueprints.launchpad.net/horizon/+spec/navigation-enhancement

Hopefully in the Icehouse release, it will be much easier to scale out and add 
new sections at the top level, but I wonder if this would make more sense as a 
new "Panel" which would sit at the same level as "Manage Compute" under the 
current Project tab. Just an idea!

2) Currently in Horizon, there are a few "Create" modal windows where the modal 
is labeled with the action such as "Launch Instance" and the user is given one 
or more tabs of fields to fill out. The first tab is typically the "Details" 
section with the general fields that need to be filled out. There could be more 
tabs for more groups of fields as needed. If you take a look at the way the 
launch Instance modal works, I think the Job Launch/Creation modals that are 
being designed for Savanna could be more consistent with this design. This 
includes things like the "Add" button next to some of the fields. Here is a 
screenshot of the Launch Instance "Details" tab for reference:
http://people.redhat.com/~lsurette/OpenStack/Launch%20Instance.png
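
In code terms, that pattern maps onto Horizon's workflows module; here is a
bare-bones sketch of what a "Launch Job" modal with a "Details" tab might look
like (the class and field names are invented for illustration, not actual
Savanna code):

    from django.utils.translation import ugettext_lazy as _
    from horizon import forms, workflows

    class SetJobDetailsAction(workflows.Action):
        name = forms.CharField(label=_("Job Name"))

        class Meta:
            name = _("Details")  # tab label, like Launch Instance's first tab

    class SetJobDetails(workflows.Step):
        action_class = SetJobDetailsAction
        contributes = ("name",)

    class LaunchJob(workflows.Workflow):
        slug = "launch_job"
        name = _("Launch Job")
        finalize_button_name = _("Launch")
        default_steps = (SetJobDetails,)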

3) This is a small point, but I just want to be sure that in the table designs 
it is expected that there would be some overall table level buttons for 
"Launch" and "Terminate" that would allow the user to click the check box for 
multiple items and select this action. I see in one of the mockups that the 
checkbox is selected, but I didn't see any buttons on top of the table, so I 
figured I would mention it.

Hopefully this helps! I'm happy to chat more about these designs in detail and 
help move them forward too. Let me know if you have any questions on my 
thoughts here.

Thanks,
Liz

On Sep 10, 2013, at 12:46 PM, Nadya Privalova  wrote:

> Hi all,
> 
> I've created a temporary page for UIs mockups. Please take a look:
> https://wiki.openstack.org/wiki/Savanna/UIMockups/JobCreationProposal
> 
> Chad, it's just pictures demonstrate how we see dependencies in UI. It's not 
> a final decision.
> Guys, feel free to comment this. I think it's time to start discussions.
> 
> Regards,
> Nadya
> 
> 
> On Mon, Sep 9, 2013 at 10:19 PM, Chad Roberts  wrote:
> Updated UI mockups for savanna dashboard EDP.
> https://wiki.openstack.org/wiki/Savanna/UIMockups/JobCreation
> 
> Regards,
> Chad
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-24 Thread Michael Still
To confirm, we have a little bit over a day left for people to nominate, right?

On a personal note, I'm a little sad to see so many single candidate
elections. I guess it might indicate a strong consensus, but I worry
it encourages group think over time.

Michael




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-24 Thread Anita Kuno

On 09/24/2013 04:15 PM, Michael Still wrote:

To confirm, we have a little bit over a day left for people to nominate, right?

Confirmed.

"Nominations for OpenStack PTLs (Project Technical Leads) ... will 
remain open until 23:59 UTC September 26, 2013" 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015369.html


On a personal note, I'm a little sad to see so many single candidate
elections. I guess it might indicate a strong consensus, but I worry
it encourages group think over time.

Michael

I too had expected to see more candidate nominations than we have so far.

I think democracy is a healthy process and dynamic tension helps to weed 
out complacency and keep us strong as a community.


Anita.









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-24 Thread Joshua Harlow
+2

I think we, as a community, need to figure out why this is the case and
find ways to change it.

Is it education around what a PTL is? Is it lack of time? Is it something
else?

-Josh

On 9/24/13 1:15 PM, "Michael Still"  wrote:

>To confirm, we have a little bit over a day left for people to nominate,
>right?
>
>On a personal note, I'm a little sad to see so many single candidate
>elections. I guess it might indicate a strong consensus, but I worry
>it encourages group think over time.
>
>Michael
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

2013-09-24 Thread Jaromir Coufal

 Hey folks,

I want to introduce the direction of the Tuskar UI, currently described in 
POC wireframes. Keep in mind that the wireframes I am sending were 
made for the purpose of the proof of concept (which was built and released in 
August), and there have been various changes since then, which were already 
adopted. However, the basic concepts stay similar. Any updates to the 
wireframes and future direction will be sent here to the dev-list for 
feedback and reviews.


http://people.redhat.com/~jcoufal/openstack/tuskar/2013-07-11_tuskar_poc_wireframes.pdf

Just a quick description of what is happening there:
* 1st step implementation - Layouts (page 2)
- just showing that we are re-using all Horizon components and layouts
* Where we are heading - Layouts (page 8)
- possible smaller improvements to Horizon concepts
- mostly just smaller CSS changes within the POC timeframe scope
* Resource Management - Flavors (page 15) - ALREADY REMOVED
- these were templates for flavors, which were part of the selection in 
the resource class creation process
- currently the whole flavor definition has moved completely under the 
compute resource class (templates are no longer used)

* Resource Management - Resources (page 22)
- this is rack management
- the creation workflow was based on now-obsolete data (settings 
are going to change a bit)
- the rack upload needs an agreed standard CSV file format (can we 
specify one?)
- detail pages for rack and node, which are going through an enhancement 
process

* Resource Management - Classes (page 40)
- resource class management
- a few changes will happen here as well regarding the creation workflow
- the detail page is going through enhancements, as are the rack/node 
detail pages

* Graphic Design
- just showing a look and feel very similar to the OpenStack Dashboard

If you have any further questions, just follow this thread; I'll be very 
happy to answer as much as possible.


Cheers,
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting Reminder

2013-09-24 Thread Mark Washenberger
Hi folks,

Just to remind you, we'll be having a team meeting this Thursday, September
26th at 14:00 UTC. For your local time, please see
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20130926T14&ah=1
.

In particular, we'll want to make sure we've signed off on RC1 or any last
bugfixes during this week's meeting.

Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] PTL Candidacy

2013-09-24 Thread John Dickinson
I would like to nominate myself for PTL of Swift.

I've been involved in OpenStack Swift since it started, and I'd like
to share a few of the things in progress and where I want to see Swift
go.

Swift has always been a world-class storage system, proven at scale
and production-ready from day one. In the past few years Swift has
been deployed in public and private storage clouds all over the world,
and it is in use at the largest companies in the world.

My goal for Swift is that everyone will use Swift, every day, even if
they don't realize it. And taking a look at where Swift is being used
today, we're well on our way to that goal. We'll continue to move
towards Swift being everywhere as Swift grows to solve more real-world
use cases.

Right now, there is work being done in Swift that will give deployers
a very high degree of flexibility in how they can store data. We're
working on implementing storage policies in Swift. These storage
policies give deployers the ability to choose:

(a) what subset of hardware the data lives on
(b) how the data is stored across that hardware
(c) how communication with an actual storage volume happens.

Supporting (a) allows for storage tiers and isolated storage hardware.
Supporting (b) allows for different replication or non-replication
schemes. Supporting (c) allows for specific optimizations for
particular filesystems or storage hardware. Combined, it's even
feasible to have a Swift cluster take advantage of other storage
systems as a storage policy (imagine an S3 storage policy).
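
As a sketch of how a deployer might eventually express that choice
(hypothetical syntax -- the design is still in progress, so treat the file
name and options here as assumptions):

    # swift.conf (illustrative)
    [storage-policy:0]
    name = standard        # e.g. 3x replication on general-purpose drives
    [storage-policy:1]
    name = ssd-tier        # e.g. an isolated subset of SSD-only hardware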

As PTL, I want to help coordinate this work and see it to completion.
Many people from many different companies are working on it, in
addition to the normal day-to-day activity in Swift.

I'm excited by the future of Swift, and would be honored to continue
to serve as Swift PTL.

--John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-24 Thread Debojyoti Dutta
Joining the party late :)

I think there have been a lot of interesting ideas around holistic
scheduling over the last few summits. However, it seems there is no clear
agreement on 1) where it should be specified and implemented, 2) what
the specifications look like - VRTs, policies, templates, etc. - and 3)
how to scale such implementations given the complexity.

In order to tackle this seemingly complex problem, why don't we get
together during the summit (and do homework beforehand) and converge
on the above, independent of where we implement it? Maybe we implement
it in a separate scheduling layer which is independent of existing
services so that it touches fewer things.

Mike: agreed that we should be specifying a VRT, constraints, etc. and
matching them via constrained optimization ... all this is music to
my ears. However, I feel we will just make it harder to get this done if
we don't simplify the problem, without giving up room for where we want
to be. At the end of the day, in order to place resources efficiently
you need a demand vector/matrix etc., and you match it with the available
resource vector/matrix ... so why not have an abstract resource
placement layer that hides details except for the quantities that are
really needed? That way it can be used inside Heat or it can be used
standalone.

To that effect - if you look at the BP
https://blueprints.launchpad.net/nova/+spec/solver-scheduler and the
associated code (WIP), it's an attempt to do the same within the
current Nova framework. One thing we could do is to have a layer that
abstracts the VRT and constraints so that a simple optimization
framework could then make the decisions, instead of hand-crafted
algorithms that are harder to extend (e.g. the entire suite of
scheduler filters that exists today).
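
To illustrate the kind of formulation I mean, here is a toy sketch (using the
PuLP library; purely illustrative, not the blueprint's code): place each
instance exactly once, respect host capacity, and minimize the hosts used.

    import pulp

    demand = {"i1": 2, "i2": 3}      # vCPUs requested per instance
    capacity = {"h1": 4, "h2": 4}    # vCPUs available per host

    prob = pulp.LpProblem("placement", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (demand, capacity), cat="Binary")
    used = pulp.LpVariable.dicts("used", capacity, cat="Binary")

    # Objective: use as few hosts as possible.
    prob += pulp.lpSum(used[h] for h in capacity)
    # Each instance is placed on exactly one host.
    for i in demand:
        prob += pulp.lpSum(x[i][h] for h in capacity) == 1
    # A host's capacity binds only if the host is used.
    for h in capacity:
        prob += pulp.lpSum(demand[i] * x[i][h] for i in demand) <= capacity[h] * used[h]
    prob.solve()

Swapping the filter pipeline for a declarative model like this is what would
let us add new constraints without hand-crafting new algorithms.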

debo

On Tue, Sep 24, 2013 at 7:01 AM, Zane Bitter  wrote:
> On 24/09/13 05:31, Mike Spreitzer wrote:
>>
>> I was not trying to raise issues of geographic dispersion and other
>> higher level structures, I think the issues I am trying to raise are
>> relevant even without them.  This is not to deny the importance, or
>> relevance, of higher levels of structure.  But I would like to first
>> respond to the discussion that I think is relevant even without them.
>>
>> I think it is valuable for OpenStack to have a place for holistic
>> infrastructure scheduling.  I am not the only one to argue for this, but
>> I will give some use cases.  Consider Hadoop, which stresses the path
>> between Compute and Block Storage.  In the usual way of deploying and
>> configuring Hadoop, you want each data node to be using directly
>> attached storage.  You could address this by scheduling one of those two
>> services first, and then the second with constraints from the first ---
>> but the decisions made by the first could paint the second into a
>> corner.  It is better to be able to schedule both jointly.  Also
>> consider another approach to Hadoop, in which the block storage is
>> provided by a bank of storage appliances that is equidistant (in
>> networking terms) from all the Compute.  In this case the Storage and
>> Compute scheduling decisions have no strong interaction --- but the
>> Compute scheduling can interact with the network (you do not want to
>> place Compute in a way that overloads part of the network).
>
>
> Thanks for writing this up, it's very helpful for figuring out what you mean
> by a 'holistic' scheduler.
>
> I don't yet see how this could be considered in-scope for the Orchestration
> program, which uses only the public APIs of other services.
>
> To take the first example, wouldn't your holistic scheduler effectively have
> to reserve a compute instance and some directly attached block storage prior
> to actually creating them? Have you considered Climate rather than Heat as
> an integration point?
>
>
>> Once a holistic infrastructure scheduler has made its decisions, there
>> is then a need for infrastructure orchestration.  The infrastructure
>> orchestration function is logically downstream from holistic scheduling.
>
>
> I agree that it's necessarily 'downstream' (in the sense of happening
> afterwards). I'd hesitate to use the word 'logically', since I think by it's
> very nature a holistic scheduler introduces dependencies between services
> that were intended to be _logically_ independent.
>
>
>>   I do not favor creating a new and alternate way of doing
>> infrastructure orchestration in this position.  Rather I think it makes
>> sense to use essentially today's heat engine.
>>
>> Today Heat is the only thing that takes a holistic view of
>> patterns/topologies/templates, and there are various pressures to expand
>> the mission of Heat.  A marquee expansion is to take on software
>> orchestration.  I think holistic infrastructure scheduling should be
>> downstream from the preparatory stage of software orchestration (the
>> other stage of software orchestration is the run-time action in and
>> supporting the resources themselves).  The

Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-24 Thread Gabriel Hurley
> > 3. There is a thought about watching correlation of multiple alarm
> > histories in one Chart (either Alarm Histories, or the real statistics
> > the Alarm is defined by). Do you think it will be needed? Any real
> > life examples you have in mind?
> 
> I think the first use case is to debug combined alarms.
> There's also a lot of potential to debug an entire platform activity by
> superimposing several alarm graphs.

Yep, this is where it gets useful for admins. For a regular user a basic set of 
alarms is fine; you just want to react to certain conditions in your 
app/workload/whatever. But an admin who can correlate alarms to hosts and 
metrics and cross-project resource creation/deletion/etc. can start to 
understand the cloud as a whole. I think this is an end-game use case that's 
very valuable, and which many companies have built their entire businesses 
around (which is to say it's not an easy problem or a small problem, but the 
need is very real).

> > 4. There is a thought about tagging the alarms by user defined tag, so
> > user can easily group alarms together and then watch them together
> > based on their tag.
> 
> The alarm API don't provide that directly, but you can imagine some sort of
> filter based on description matching some texts.

I'd love to see this as an extension to the alarm API. I think tracking 
metadata about alarms (e.g. tags or arbitrary key-value pairs) would be 
tremendously useful.

> > 5. There is a thought about generating a default alarms, that could
> > observe the most important things (verifying good behaviour, showing bad
> behaviour).
> > Does anybody have an idea which alarms could be the most important and
> > usable for everybody?
> 
> I'm not sure you want to create alarm by default; alarm are resources, I don't
> think we should create resources without the user asking for it.

Seconded.

> Maybe you were talking about generating alarm template? You could start
> with things like CPU usage staying at >90% for more than 1 hour, and having
> an action that alerts the user via mail.
> Same for disk usage.

We do this kind of "template" for common user tasks with security group rules 
already. The same concept applies to alarms.
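
For instance, a "high CPU" template could pre-fill an alarm definition along
these lines (field names are approximate and the mail action in particular is
an assumption -- check the current alarm API before relying on any of them):

    {
        "name": "cpu-high",
        "meter_name": "cpu_util",
        "comparison_operator": "gt",
        "threshold": 90.0,
        "period": 3600,
        "evaluation_periods": 1,
        "alarm_actions": ["mailto:user@example.com"]
    }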

> > 6. There is a thought about making overview pages customizable by the
> > users, so they can really observe, what they need. (includes
> > Ceilometer statistics and alarms)
> 
> I think that could be as easy as picking the alarms you want in overviews with
> a very small and narrowed graph.

Conceptually easy pickings, non-trivial work. But agreed.

All the best,

- Gabriel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

2013-09-24 Thread Gabriel Hurley
Really digging a lot of that. Particularly the inter-rack/inter-node 
communication stuff around page 36ish or so.

I’m concerned about using the term “Class”. Maybe it’s just me as a developer, 
but I couldn’t think of a more generic, less inherently meaningful word there. 
I read through it and I still only vaguely understand what a “Class” is in this 
context. We either need better terminology or some serious documentation/user 
education on that one.

Also, I can’t quite say how, but I feel like the “Class” stuff ought to be 
meshed with the Resource side of things. The separation seems artificial and 
based more on the API structure (presumably?) than on the most productive user 
flow when interacting with that system. Maybe start with the question “if the 
system were empty, what would I need to do and how would I find it?”

Very cool though.


-  Gabriel

From: Jaromir Coufal [mailto:jcou...@redhat.com]
Sent: Tuesday, September 24, 2013 2:04 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

 Hey folks,

I want to introduce our direction for the Tuskar UI, currently described with POC 
wireframes. Keep in mind that the wireframes I am sending were made for the 
purpose of a proof of concept (which was built and released in August), and there 
have been various changes since then which have already been adopted. However, the 
basic concepts stay similar. Any updates to the wireframes and future direction 
will be sent here to the dev-list for feedback and reviews.

http://people.redhat.com/~jcoufal/openstack/tuskar/2013-07-11_tuskar_poc_wireframes.pdf

Just a quick description of what is happening there:
* 1st step implementation - Layouts (page 2)
- just showing that we are re-using all Horizon components and layouts
* Where we are heading - Layouts (page 8)
- possible smaller improvements to Horizon concepts
- the majority are just smaller CSS changes within the POC timeframe scope
* Resource Management - Flavors (page 15) - ALREADY REMOVED
- these were templates for flavors, which were part of the selection in 
the resource class creation process
- currently the whole flavor definition has moved completely under the 
compute resource class (templates are no longer used)
* Resource Management - Resources (page 22)
- this is rack management
- the creation workflow was based on currently obsolete data (settings 
are going to be changed a bit)
- rack upload needs to make sure that we agree on some standard CSV file 
format (can we specify one?)
- detail pages for rack and node, which are going through an enhancement process
* Resource Management - Classes (page 40)
- resource class management
- a few changes will happen here as well regarding the creation workflow
- the detail pages are going through enhancements, as are the rack/node 
detail pages
* Graphic Design
- just showing the very similar look and feel to the OpenStack Dashboard

If you have any further questions, just follow this thread and I'll be very 
happy to answer as much as possible.

Cheers,
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

2013-09-24 Thread Robert Collins
A few quick notes.

Flavors need their extra attributes to be editable (e.g. architecture,
raid config etc) - particularly in the undercloud, but it's also
relevant for overcloud : If we're creating flavors for deployers, we
need to expose the full capabilities of flavors.

Secondly, if we're creating flavors for deployers, the UI should
reflect that: is it directly editing flavors, or editing the inputs to
the algorithm that creates flavors.

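Purely illustrative (not code from any of our repos), but with
python-novaclient the kind of editability I mean looks roughly like the
following; the credentials and the extra_specs keys are made-up examples:

from novaclient.v1_1 import client

nova = client.Client('admin', 'password', 'admin',
                     'http://undercloud.example.com:5000/v2.0')

# Create a baremetal flavor, then attach the attributes deployers need
# to be able to see and edit (architecture, RAID config, ...).
flavor = nova.flavors.create(name='bm.compute', ram=32768, vcpus=8,
                             disk=500)
flavor.set_keys({'cpu_arch': 'x86_64', 'raid_level': '10'})
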
We seemed to have consensus @ the sprint in Seattle that Racks aren't
really Racks : that is that Rack as currently defined is more of a
logical construct (and some clouds may have just one), vs the actual
physical 'This is a Rack in a DC' aspect. If there is a physical thing
that the current logical Rack maps to, perhaps we should use that as
the top level UI construct?

The related thing is we need to expose the other things that also tend
to map failure domains - shared switches, power bars, A/C - but that
is future work I believe.

The 'add rack' thing taking a list of MACs seems odd : a MAC address
isn't enough to deploy even a hardware inventory image to (you need
power management details + CPU arch + one-or-more MACs to configure
TFTP and PXE enough to do a detailed automatic inventory). Long term
I'd like to integrate with the management network switch, so we can
drive the whole thing automatically, but in the short term, I think we
want to drive folk to use the API for mass enrollment. What do you
think?

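To illustrate (a sketch only - none of these field names come from an
actual Tuskar API), an enrollment record probably needs to look more
like this than like a bare MAC list:

node = {
    'macs': ['78:e7:d1:24:99:a5', '78:e7:d1:24:99:a6'],  # N interfaces
    'cpu_arch': 'x86_64',
    'power': {  # power management details needed to drive the machine
        'type': 'ipmi',
        'address': '10.1.0.42',
        'user': 'admin',
        'password': 'secret',
    },
}
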
Regardless, the node list will need to deal with nodes having N MAC
addresses and management credentials, not just a management IP.
Lastly, what's the node name for? Instances [may] have names, but I don't
see any reason for nodes to have a name.

Similarly, it's a little weird that racks would have names.

CSV uploads stand out to me as an odd thing: JSON is the standard
serialisation format we use, does using CSV really make sense? Tied
into that is the question above - does it make sense to put bulk
enrollment in the web UI at all, or would it be better to just have
prerolled API scripts for that? Having 'upload racks' etc as a UI
element is right in the users face, but will be used quite rarely.

I don't follow why racks would be bound to classes: class seems to be
an aspect of a specific machine + network capacity configuration, but
Rack is a logical not physical construct, so it's immediately
decoupled from that. Perhaps it's a keep-it-simple thing, which I can
get behind - but in that case, reducing the emphasis on Rack /
renaming Rack becomes more important IMO.

HTH,
Rob

-Rob




-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [TripleO] Tuskar merging with TripleO

2013-09-24 Thread Robert Collins
Hi, as previously mentioned here, we're going to merge the Tuskar
effort into TripleO: we're both focused on deploying OpenStack - and
that's a holistic thing, not just 'getting the code onto some servers';
the Tuskar contributors have had a few days to talk it through and no one
has objected, so it's now official.

To accommodate the larger set of European contributors, we're going to
move the TripleO meeting an hour earlier, but as that would conflict
with the Ironic meeting (which many deployment orientated folk are
interested in) we're going to move to the current Tuskar slot - 1900
UTC Tuesdays.

Merging wiki content and blueprints etc will take place over the next
wee while. We're going to put all blueprints on /tripleo just to make
finding them easy for people.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] On the usage of pip vs. setup.py install

2013-09-24 Thread Thomas Goirand
On 09/25/2013 12:20 AM, Monty Taylor wrote:
> 
> 
> On 09/24/2013 12:11 PM, Thomas Goirand wrote:
>> Hi Monty,
>>
>> On 09/24/2013 09:44 AM, Monty Taylor wrote:
>>> Instead of:
>>>
>>> "python setup.py install"
>>>
>>> Run:
>>>
>>> "pip install ."
>>
>> No way that this happens on the packaging side. Buildd have no network
>> access (on purpose), and we must not do any network access when building.
>>
>> So I wonder what this post is trying to achieve.
> 
> This post has nothing to do with packaging. The OpenStack project does
> not produce packages.
> 
> This is informational for people who are already running setup.py
> install, suggesting that they use pip install . instead.
> 
> If, however, you are in a packaging situation and you are running
> setup.py install as part of your packaging scripts such as in your
> debian/rules file or your rpm spec file, continuing to use setup.py
> install should have no negative effects, as all of the requirements
> processing should be avoided due to the system packaging having taken
> care of it already, so the evil that is easy_install will not be invoked.
> 
> Further, all debian/rules and rpm spec files that are packaging
> openstack projects should really add the SKIP_PIP_INSTALL env var. This
> will turn off the additional pip operations that pbr does - which are
> again pointless in a distro-packaging world.

Hi,

Thanks for this clarification. You got me scared!!! :)

BTW, as I wrote already, I'm not really a fan of adding
SKIP_PIP_INSTALL, because if I do, I might not see the errors due to
missing dependencies. If there's a missing dependency and pip tries to
install it, it will break the build process, which really is what I
want. Or is it that SKIP_PIP_INSTALL still does a check to see if
dependencies are there? If that's not the case, then I would find it
really nice if it was possible to have a mode in which there is a pip
check that does a *hard break* error, and just stops the build process
(no need to wait until dpkg-buildpackage sees that there's an egg-info
folder that shouldn't be there). Any thoughts on this?

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Common] context-is-admin causes project_id to not be passed to API layer

2013-09-24 Thread Pendergrass, Eric
While debugging a token auth problem I noticed that the enforcer searches the 
role list in a token for a role called 'admin' (any case).  If it's present, 
the enforcer returns true and the acl does not set the X-Project-Id header on 
the request.

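In other words, the behaviour I'm seeing paraphrases to something like
this (hypothetical names, not the actual middleware code):

def scope_request(request, token):
    roles = token.get('roles', [])
    # The enforcer looks for a role called 'admin' in any case...
    if any(r.lower() == 'admin' for r in roles):
        return  # ...and if found, X-Project-Id is deliberately not set.
    request.headers['X-Project-Id'] = token['project_id']
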
I was wondering what the reason for not setting project id is in this case.  I 
assume it is a mechanism for privilege scoping for a highly-privileged user.

Also, the name 'admin' seems like a sensible choice to denote an admin user.  
Is there any other meaning behind the role name than this?

Many thanks,
Eric
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-24 Thread Robert Collins
One of the major things Tuskar does is model a datacenter - which is
very useful for error correlation, capacity planning and scheduling.

Long term I'd like this to be held somewhere where it is accessible
for schedulers and ceilometer etc. E.g. network topology + switch
information might be held by neutron where schedulers can rely on it
being available, or possibly held by a unified topology db with
scheduler glued into that, but updated by neutron / nova / cinder.
Obviously this is a) non-trivial and b) not designed yet.

However, the design of Tuskar today needs to accommodate a few things:
 - multiple reference architectures for clouds (unless there really is
one true design)
 - the fact that today we don't have such an integrated vertical scheduler.

So the current Tuskar model has three constructs that tie together to
model the DC:
 - nodes
 - resource classes (grouping different types of nodes into service
offerings - e.g. nodes that offer swift, or those that offer nova).
 - 'racks'

AIUI the initial concept of Rack was to map to a physical rack, but
this rapidly got shifted to be 'Logical Rack' rather than physical
rack, but I think of Rack as really just a special case of a general
modelling problem...

From a deployment perspective, if you have two disconnected
infrastructures, that's two AZs, and two underclouds : so we know that
any one undercloud is fully connected (possibly multiple subnets, but
one infrastructure). When would we want to subdivide that?

One case is quick fault aggregation: if a physical rack loses power,
rather than having 16 NOC folk independently investigating the same 16
down hypervisors, one would prefer to identify that the power to the
rack has failed (for non-HA powered racks); likewise if a single
switch fails (for non-HA network topologies) you want to identify that
that switch is down rather than investigating all the cascaded errors
independently.

A second case is scheduling: you may want to put nova instances on the
same switch as the cinder service delivering their block devices, when
possible, or split VMs serving HA tasks apart. (We currently do this
with host aggregates, but being able to do it directly would be much
nicer).

Lastly, if doing physical operations like power maintenance or moving
racks around in a datacentre, being able to identify machines in the
same rack can be super useful for planning, downtime announcements, or
host evacuation, and being able to find a specific machine in a DC is
also important (e.g. what shelf in the rack, what cartridge in a
chassis).

Back to 'Logical Rack' - you can see then that having a single
construct to group machines together doesn't really support these use
cases in a systematic fashion:- Physical rack modelling supports only a
subset of the location/performance/failure use cases, and Logical rack
doesn't support them at all: we're missing all the rich data we need
to aggregate faults rapidly : power, network, air conditioning - and
these things apply at every scale from a single machine, through groups
of machines, up to racks and rows of racks (consider a networked PDU
with 10 hosts on it - that's a
fraction of a rack).

So, what I'm suggesting is that we model the failure and performance
domains directly, and include location (which is the incremental data
racks add once failure and performance domains are modelled) too. We
can separately noodle on exactly what failure domain and performance
domain modelling looks like - e.g. the scheduler focus group would be
a good place to have that discussion.

E.g. for any node I should be able to ask:
- what failure domains is this in? [e.g. power-45, switch-23, ac-15,
az-3, region-1]
- what locality-of-reference features does this have? [e.g. switch-23,
az-3, region-1]
- where is it [e.g. DC 2, pod 4, enclosure 2, row 5, rack 3, RU 30,
cartridge 40].

And then we should be able to slice and dice the DC easily by these aspects:
- location: what machines are in DC 2, or DC2 pod 4
- performance: what machines are all in region-1, or az-3, or switch-23.
- failure: what failure domains do machines X and Y have in common?
- failure: if we power off switch-23, what machines will be impacted?

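A minimal sketch of what that could look like (illustrative Python, not
an existing Tuskar/TripleO API):

class Node(object):
    def __init__(self, name, failure_domains, locality_domains, location):
        self.name = name
        self.failure_domains = set(failure_domains)    # e.g. power-45, switch-23
        self.locality_domains = set(locality_domains)  # e.g. switch-23, az-3
        self.location = location                       # e.g. 'DC2/pod4/rack3/RU30'

def common_failure_domains(x, y):
    # failure: what failure domains do machines X and Y have in common?
    return x.failure_domains & y.failure_domains

def impacted_by(nodes, domain):
    # failure: if we power off switch-23, what machines will be impacted?
    return [n for n in nodes if domain in n.failure_domains]

def in_location(nodes, prefix):
    # location: what machines are in DC 2, or DC2 pod 4?
    return [n for n in nodes if n.location.startswith(prefix)]
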
So, what do you think?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] PTL Candidacy

2013-09-24 Thread Gareth
Good plan!


On Wed, Sep 25, 2013 at 6:40 AM, John Dickinson  wrote:

> I would like to nominate myself for PTL of Swift.
>
> I've been involved in OpenStack Swift since it started, and I'd like
> to share a few of the things in progress and where I want to see Swift
> go.
>
> Swift has always been a world-class storage system, proven at scale
> and production-ready from day one. In the past few years Swift has
> been deployed in public and private storage clouds all over the world,
> and it is in use at the largest companies in the world.
>
> My goal for Swift is that everyone will use Swift, every day, even if
> they don't realize it. And taking a look at where Swift is being used
> today, we're well on our way to that goal. We'll continue to move
> towards Swift being everywhere as Swift grows to solve more real-world
> use cases.
>
> Right now, there is work being done in Swift that will give deployers
> a very high degree of flexibility in how they can store data. We're
> working on implementing storage policies in Swift. These storage
> policies give deployers the ability to choose:
>
> (a) what subset of hardware the data lives on
> (b) how the data is stored across that hardware
> (c) how communication with an actual storage volume happens.
>
> Supporting (a) allows for storage tiers and isolated storage hardware.
> Supporting (b) allows for different replication or non-replication
> schemes. Supporting (c) allows for specific optimizations for
> particular filesystems or storage hardware. Combined, it's even
> feasible to have a Swift cluster take advantage of other storage
> systems as a storage policy (imagine an S3 storage policy).
>
> As PTL, I want to help coordinate this work and see it to completion.
> Many people from many different companies are working on it, in
> addition to the normal day-to-day activity in Swift.
>
> I'm excited by the future of Swift, and would be honored to continue
> to serve as Swift PTL.
>
> --John
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Gareth

Cloud Computing, OpenStack, Fitness, Basketball
OpenStack contributor
Company: UnitedStack
My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me and I'll donate $1 or ¥1 to an open organization you specify.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-24 Thread Mike Spreitzer
Let me elaborate a little on my thoughts about software orchestration, and 
respond to the recent mails from Zane and Debo.  I have expanded my 
picture at 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and added a companion picture at 
https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g 
that shows an alternative.

One of the things I see going on is discussion about better techniques for 
software orchestration than are supported in plain CFN.  Plain CFN allows 
any script you want in userdata, and prescription of certain additional 
setup elsewhere in cfn metadata.  But it is all mixed together and very 
concrete.  I think many contributors would like to see something with more 
abstraction boundaries, not only within one template but also the ability 
to have modular sources.

I work closely with some colleagues who have a particular software 
orchestration technology they call Weaver.  It takes as input for one 
deployment not a single monolithic template but rather a collection of 
modules.  Like higher level constructs in programming languages, these 
have some independence and can be re-used in various combinations and 
ways.  Weaver has a compiler that weaves together the given modules to 
form a monolithic model.  In fact, the input is a modular Ruby program, 
and the Weaver compiler is essentially running that Ruby program; this 
program produces the monolithic model as a side effect.  Ruby is a pretty 
good language in which to embed a domain-specific language, and my 
colleagues have done this.  The modular Weaver input mostly looks 
declarative, but you can use Ruby to reduce the verboseness of, e.g., 
repetitive stuff --- as well as plain old modularity with abstraction.  We 
think the modular Weaver input is much more compact and better for human 
reading and writing than plain old CFN.  This might not be obvious when 
you are doing the "hello world" example, but when you get to realistic 
examples it becomes clear.

The Weaver input discusses infrastructure issues, in the rich way Debo and 
I have been advocating, as well as software.  For this reason I describe 
it as an integrated model (integrating software and infrastructure 
issues).  I hope for HOT to evolve to be similarly expressive to the 
monolithic integrated model produced by the Weaver compiler.

In Weaver, as well as in some of the other software orchestration 
technologies being discussed, there is a need for some preparatory work 
before the infrastructure (e.g., VMs) is created.  This preparatory stage 
begins the implementation of the software orchestration abstractions. This 
is where the translation from something more abstract into flat userdata and 
other cfn metadata happens.  For Weaver, this stage also involves some 
stack-specific setup in a distinct coordination service.  When the VMs 
finally run their userdata, the Weaver-generated scripts there use that 
pre-configured part of the coordination service to interact properly with 
each other.

I think that, to a first-order approximation, the software orchestration 
preparatory stage commutes with holistic infrastructure scheduling.  They 
address independent issues, and can be done in either order.  That is why 
I have added a companion picture; the two pictures show the two orders.

My claim of commutativity is limited, as I and colleagues have 
demonstrated only one of the two orderings; the other is just a matter of 
recent thought.  There could be gotchas lurking in there.

Between the two orderings, I have a preference for the one I first 
mentioned and have experience with actually running.  It has the virtue of 
keeping related things closer together: the software orchestration 
compiler is next to the software orchestration preparatory stage, and the 
holistic infrastructure scheduling is next to the infrastructure 
orchestration.

In response to Debo's remark about flexibility: I am happy to see an 
architecture that allows either ordering if it turns out that they are 
both viable and the community really wants that flexibility.  I am not so 
sure we can totally give up on architecting where things go, but this 
level of flexibility I can understand and get behind (provided it works).

Just as an LP solver is a general utility whose uses do not require 
architecting, I can imagine a higher level utility that solves abstract 
placement problems.  Actually, this is not a matter of imagination.  My 
group has been evolving such a thing for years.  It is now based, as Debo 
recommends, on a very flexible and general optimization algorithm.  But 
the plumbing between it and the rest of the system is significant; I would 
not expect many users to take on that magnitude of task.

I do not really want to get into dogmatic fights over what gets labelled 
"heat".  I will leave the questions about which piece goes where in the 
OpenStack programs and projects to those more informed and anointed.  What 
I am trying t

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-24 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
> Let me elaborate a little on my thoughts about software orchestration, and 
> respond to the recent mails from Zane and Debo.  I have expanded my 
> picture at 
> https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
>  
> and added a companion picture at 
> https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g
>  
> that shows an alternative.
> 
> One of the things I see going on is discussion about better techniques for 
> software orchestration than are supported in plain CFN.  Plain CFN allows 
> any script you want in userdata, and prescription of certain additional 
> setup elsewhere in cfn metadata.  But it is all mixed together and very 
> concrete.  I think many contributors would like to see something with more 
> abstraction boundaries, not only within one template but also the ability 
> to have modular sources.
> 

Yes please. Orchestrate things, don't configure them. That is what
configuration tools are for.

There is a third stealth-objective that CFN has caused to linger in
Heat. That is "packaging cloud applications". By allowing the 100%
concrete CFN template to stand alone, users can "ship" the template.

IMO this marrying of software assembly, config, and orchestration is a
concern unto itself, and best left outside of the core infrastructure
orchestration system.

> I work closely with some colleagues who have a particular software 
> orchestration technology they call Weaver.  It takes as input for one 
> deployment not a single monolithic template but rather a collection of 
> modules.  Like higher level constructs in programming languages, these 
> have some independence and can be re-used in various combinations and 
> ways.  Weaver has a compiler that weaves together the given modules to 
> form a monolithic model.  In fact, the input is a modular Ruby program, 
> and the Weaver compiler is essentially running that Ruby program; this 
> program produces the monolithic model as a side effect.  Ruby is a pretty 
> good language in which to embed a domain-specific language, and my 
> colleagues have done this.  The modular Weaver input mostly looks 
> declarative, but you can use Ruby to reduce the verboseness of, e.g., 
> repetitive stuff --- as well as plain old modularity with abstraction.  We 
> think the modular Weaver input is much more compact and better for human 
> reading and writing than plain old CFN.  This might not be obvious when 
> you are doing the "hello world" example, but when you get to realistic 
> examples it becomes clear.
> 
> The Weaver input discusses infrastructure issues, in the rich way Debo and 
> I have been advocating, as well as software.  For this reason I describe 
> it as an integrated model (integrating software and infrastructure 
> issues).  I hope for HOT to evolve to be similarly expressive to the 
> monolithic integrated model produced by the Weaver compiler.
> 

Indeed, we're dealing with this very problem in TripleO right now. We need
to be able to compose templates that vary slightly for various reasons.

A Ruby DSL is not something I think is ever going to happen in
OpenStack. But Python has its advantages for a DSL as well. I have been
trying to use clever tricks in yaml for a while, but perhaps we should
just move to a client-side python DSL that pushes the compiled yaml/json
templates into the engine.

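For what it's worth, a toy sketch of the sort of client-side DSL I mean
(hypothetical names throughout - it just compiles down to a CFN-style
JSON document that gets pushed to the engine):

import json

class Template(object):
    def __init__(self):
        self.resources = {}

    def server(self, name, image, flavor, **props):
        props.update(image=image, flavor=flavor)
        self.resources[name] = {'Type': 'OS::Nova::Server',
                                'Properties': props}

    def compile(self):
        return json.dumps({'Resources': self.resources}, indent=2)

t = Template()
# Modularity comes for free: plain Python functions and loops instead
# of copy-pasted template stanzas.
for i in range(3):
    t.server('worker%d' % i, image='fedora-19', flavor='m1.small')
print(t.compile())
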
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev