Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Stephen Gran

On 19/11/13 16:33, Clint Byrum wrote:

Excerpts from Vijay Venkatachalam's message of 2013-11-19 05:48:43 -0800:

Hi Sam, Eugene, & Avishay, et al.,

 Today I spent some time creating a write-up for SSL 
termination (not exactly a design doc). Please share your comments!

https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYTvMkMJ_inbo/edit

Would like comments/discussion especially on the following note:

SSL Termination requires certificate management. The ideal way is to handle 
this via an independent IAM service. This would take time to implement, so the 
thought was to add the certificate details to the VIP resource and send them 
directly to the device. Basically, don't store the certificate key in the DB, 
thereby avoiding the security concerns of maintaining certificates in the 
controller.


I don't see why it does.  Nothing in openstack needs to trust 
user-uploaded certs.  Just storing them as independent certificate 
objects that can be referenced by N VIPs makes sense to me.


If the backend is SSL, I would think you could do one of:
a) upload client certs
b) upload CA that has signed backend certs
c) opt to disable cert checking for backends

With the default being c).
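For concreteness, the three backend options map onto standard TLS client settings. A minimal Python sketch using the stdlib `ssl` module (the mode names and file paths are my own, purely illustrative):

```python
import ssl


def backend_context(mode, ca_file=None, client_cert=None, client_key=None):
    """Build a TLS client context for connections to SSL pool members.

    mode: 'client-cert' (option a), 'custom-ca' (option b),
    or 'no-verify' (option c, the proposed default).
    """
    ctx = ssl.create_default_context()
    if mode == "client-cert":
        # (a) present an uploaded client certificate to the backend
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    elif mode == "custom-ca":
        # (b) trust only the CA that signed the backend certs
        ctx = ssl.create_default_context(cafile=ca_file)
    elif mode == "no-verify":
        # (c) skip certificate checking for backends entirely
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The load balancer driver would pick the mode per pool; everything else about the connection stays the same.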

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com
Please consider the environment before printing this email.
--
Visit theguardian.com   

On your mobile, download the Guardian iPhone app theguardian.com/iphone and our iPad edition theguardian.com/iPad   
Save up to 33% by subscribing to the Guardian and Observer - choose the papers you want and get full digital access.

Visit subscribe.theguardian.com

This e-mail and all attachments are confidential and may also
be privileged. If you are not the named recipient, please notify
the sender and delete the e-mail and all attachments immediately.
Do not disclose the contents to another person. You may not use
the information for any purpose, or store, or copy, it in any way.

Guardian News & Media Limited is not liable for any computer
viruses or other material transmitted with or as part of this
e-mail. You should employ virus checking software.

Guardian News & Media Limited

A member of Guardian Media Group plc
Registered Office
PO Box 68164
Kings Place
90 York Way
London
N1P 2AP

Registered in England Number 908396

--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-20 Thread Julien Danjou
On Tue, Nov 19 2013, Devananda van der Veen wrote:

> If there is a fixed set of information (eg, temp, fan speed, etc) that
> ceilometer will want,

Sure, we want everything.

> let's make a list of that and add a driver interface
> within Ironic to abstract the collection of that information from physical
> nodes. Then, each driver will be able to implement it as necessary for that
> vendor. Eg., an iLO driver may poll its nodes differently than a generic
> IPMI driver, but the resulting data exported to Ceilometer should have the
> same structure.

I like the idea.

> An SNMP agent doesn't fit within the scope of Ironic, as far as I see, so
> this would need to be implemented by Ceilometer.

We're working on adding a pollster for that, indeed.

> As far as where the SNMP agent would need to run, it should be on the
> same host(s) as ironic-conductor so that it has access to the
> management network (the physically-separate network for hardware
> management, IPMI, etc). We should keep the number of applications with
> direct access to that network to a minimum, however, so a thin agent
> that collects and forwards the SNMP data to the central agent would be
> preferable, in my opinion.

We can keep things simple by having the agent only do that polling, I
think. Building a new agent sounds like it would complicate deployment
again.
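The driver interface suggested above might look roughly like the following sketch (the class and method names are illustrative only, not an actual Ironic API):

```python
import abc


class SensorDataInterface(abc.ABC):
    """Vendor-neutral sensor collection. Each driver (generic IPMI,
    iLO, ...) polls its hardware however it needs to, but all return
    the same structure for Ceilometer to consume."""

    @abc.abstractmethod
    def get_sensor_data(self, node):
        """Return a dict of categories, e.g.
        {'temperature': {...}, 'fan_speed': {...}, 'voltage': {...}}."""


class FakeIPMIDriver(SensorDataInterface):
    """Stand-in driver; a real one would poll the node's BMC here."""

    def get_sensor_data(self, node):
        return {"temperature": {"CPU": 42.0},
                "fan_speed": {"FAN1": 4600},
                "voltage": {"12V": 12.1}}
```

The point is that the exported structure is fixed even though each vendor's polling differs.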

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info




Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Samuel Bercovici
Hi,

Evgeny has outlined the wiki for the proposed change at: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL, which is in line with what 
was discussed during the summit.
The 
https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYTvMkMJ_inbo/edit
 document additionally discusses certificate chains.

What would be the benefit of having a certificate as a separate object that 
must be connected to a VIP vs. embedding it in the VIP?
When we get a system that can store certificates (e.g. Barbican), we will add 
support for it in the LBaaS model.

-Sam.



From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: Wednesday, November 20, 2013 8:06 AM
To: Eugene Nikanorov
Cc: Samuel Bercovici; Avishay Balderman; openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

Hi Eugene,

The proposal is simple: create a separate resource called 
certificate (which will also be handled by the Neutron LBaaS plugin) rather than 
keeping the details in the VIP. It maintains the original security intent: the 
certificate's confidential parameters will be transient, and the certificate will 
be sent directly to the device/driver rather than stored. This is done by 
having the VIP as a property of the certificate.
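A rough sketch of that idea: the certificate resource keeps the confidential material transient, so only non-sensitive metadata would ever reach the DB (all field names here are hypothetical, not a proposed schema):

```python
class Certificate(object):
    """Certificate resource tied to an existing VIP. The private key is
    held in memory only, passed straight through to the device/driver,
    and never persisted."""

    def __init__(self, cert_id, vip_id, cert_pem, private_key):
        self.cert_id = cert_id
        self.vip_id = vip_id              # identifies the driver/device
        self.cert_pem = cert_pem
        self._private_key = private_key   # transient, never written out

    def to_db_row(self):
        # Note: no private key here -- it goes to the device, not the DB.
        return {"id": self.cert_id, "vip_id": self.vip_id,
                "certificate": self.cert_pem}

    def to_device_payload(self):
        # Full material, sent once to the backend device.
        return {"certificate": self.cert_pem,
                "private_key": self._private_key}
```

The VIP uuid property is what lets the plugin route the payload to the right device.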

Thanks,
Vijay V.

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, November 20, 2013 1:59 AM
To: Vijay Venkatachalam
Cc: Samuel Bercovici; Avishay Balderman; 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

Hi Vijay,

Thanks for working on this. As was discussed at the summit, the immediate 
solution seems to be passing certificates via transient fields in the Vip object, 
which avoids the need for certificate management (incl. storing them).
If certificate management is a concern, then I agree that it needs to be a 
separate, specialized service.

> My thought was to have an independent certificate resource with the VIP uuid 
> as one of the properties. The VIP is already created and
> will help to identify the driver/device. The VIP property can be deprecated 
> in the long term.
I think it's a little bit forward-looking at this moment; also, I think that 
certificates are not specific to load balancing.
Transient fields could help here too: the client could pass a certificate ID, and 
the driver of the corresponding device would communicate with the external 
service to fetch the corresponding certificate.

Thanks,
Eugene.

On Tue, Nov 19, 2013 at 5:48 PM, Vijay Venkatachalam 
<vijay.venkatacha...@citrix.com> wrote:
Hi Sam, Eugene, & Avishay, et al.,

Today I spent some time creating a write-up for SSL 
termination (not exactly a design doc). Please share your comments!

https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYTvMkMJ_inbo/edit

Would like comments/discussion especially on the following note:

SSL Termination requires certificate management. The ideal way is to handle 
this via an independent IAM service. This would take time to implement, so the 
thought was to add the certificate details to the VIP resource and send them 
directly to the device. Basically, don't store the certificate key in the DB, 
thereby avoiding the security concerns of maintaining certificates in the 
controller.

I would expect the certificates to become an independent resource in the future, 
thereby causing backward-compatibility issues.

Any ideas how to achieve this?

My thought was to have an independent certificate resource with the VIP uuid as 
one of the properties. The VIP is already created and will help to identify the 
driver/device. The VIP property can be deprecated in the long term.
Thanks,
Vijay V.



Re: [openstack-dev] Propose "project story wiki" idea

2013-11-20 Thread Joshua Harlow
Agreed, I like the idea.

It reminds me of the blog the Solum team is setting up. I think I asked, when 
they announced that blog, whether there were plans to make it easy for other 
projects to also have their own supported blog.

http://lists.openstack.org/pipermail/openstack-dev/2013-October/017977.html

Maybe we should see how that could be done, since the update page is pretty 
similar to a blog (and a blog can include other things as well, like "tips and 
tricks" for using Rally, for example).

Thoughts?

Sent from my really tiny device...

On Nov 19, 2013, at 9:36 PM, "Boris Pavlovic" 
<bpavlo...@mirantis.com> wrote:

Hi stackers,


Currently what I see is a growing number of interesting projects that I would 
like to track. But reading all the mailing lists and reviewing all the patches 
in every interesting project, just to get a high-level understanding of what is 
happening in a project right now, is quite hard or even an impossible task (at 
least for me). Especially after a 2-week vacation =)


The idea of this proposal is that every OpenStack project should have a "story" 
wiki page: publish one short message every week containing the most interesting 
updates from the last week and a high-level road map for the coming week. 
Reading this for 10-15 minutes, you can see what has changed in the project and 
get a better understanding of its high-level road map.

E.g. we have started doing this in Rally: https://wiki.openstack.org/wiki/Rally/Updates


I think the best way to organize this is to have a person (or a few people) 
who will track all changes in the project and prepare such updates each week.



Best regards,
Boris Pavlovic
--
Mirantis Inc.


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
> Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
> > From: Steve Baker 
> > To: openstack-dev@lists.openstack.org,
> > Date: 19.11.2013 21:43
> > Subject: Re: [openstack-dev] [Heat] HOT software configuration
> > refined after design summit discussions
> >
> 
> > I think there needs to be a CM-tool-specific agent delivered to the server
> > which os-collect-config invokes. This agent will transform the config
> > data (input values, CM script, CM specific specialness) to a CM tool
> > invocation.
> >
> > How to define and deliver this agent is the challenge. Some options are:
> > 1) install it as part of the image customization/bootstrapping (golden
> > images or cloud-init)
> > 2) define a (mustache?) template in the SoftwareConfig which
> > os-collect-config transforms into the agent script, which
> > os-collect-config then executes
> > 3) a CM tool specific implementation of SoftwareApplier builds and
> > delivers a complete agent to os-collect-config which executes it
> >
> > I may be leaning towards 3) at the moment. Hopefully any agent can be
> > generated with a sufficiently sophisticated base SoftwareApplier type,
> > plus maybe some richer intrinsic functions.
> 
> This is a good summary of options; about the same as we had in mind. And we were
> also leaning towards 3. Probably the approach we would take is to get a
> SoftwareApplier running for one CM tool (e.g. Chef), then look at another
> tool (plain shell scripts), and then see what the generic parts are that can
> be factored into a base class.
> 
> > >> The POC I'm working on is actually backed by a REST API which does dumb
> > >> (but structured) storage of SoftwareConfig and SoftwareApplier entities.
> > >> This has some interesting implications for managing SoftwareConfig
> > >> resources outside the context of the stack which uses them, but lets not
> > >> worry too much about that *yet*.
> > > Sounds good. We are also defining some blueprints to break down the
> > > overall software config topic. We plan to share them later this week, and
> > > then we can consolidate with your plans and see how we can best join
> > > forces.
> > >
> > >
> > At this point it would be very helpful to spec out how specific CM tools
> > are invoked with given inputs, script, and CM tool specific options.
> 
> That's our plan; and we would probably start with scripts and chef.
> 
> >
> > Maybe if you start with shell scripts, cfn-init and chef then we can all
> > contribute other CM tools like os-config-applier, puppet, ansible,
> > saltstack.
> >
> > Hopefully by then my POC will at least be able to create resources, if
> > not deliver some data to servers.
> 
> We've been thinking about getting metadata to the in-instance parts on the
> server and whether the resources you are building can serve the purpose.
> I.e. pass an endpoint to the SoftwareConfig resources to the instance and
> let the instance query the metadata from the resource. Sounds like this is
> what you had in mind, so that would be a good point for integrating the
> work. In the meantime, we can think of some shortcuts.
> 

Note that os-collect-config is intended to be a lightweight, generic
in-instance agent to do exactly this: watch for Metadata changes and
feed them to an underlying tool through a predictable interface. I'd hope
that any of the appliers would mostly just configure os-collect-config
to run a wrapper that speaks os-collect-config's interface.

The interface is defined in the README:

https://pypi.python.org/pypi/os-collect-config

It is inevitable that we will extend os-collect-config to be able to
collect config data from whatever API these config applier resources
make available. I would suggest then that we not all go off and reinvent
os-collect-config for each applier, but rather enhance os-collect-config
as needed and write wrappers for the other config tools which implement
its interface.

os-apply-config already understands this interface for obvious reasons.

Bash scripts can use os-apply-config to extract individual values, as
you might see in some of the os-refresh-config scripts that are run as
part of tripleo. I don't think anything further is really needed there.

For chef, some kind of ohai plugin to read os-collect-config's collected
data would make sense.
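As a sketch of the wrapper idea, here is what a thin translator from collected config to a CM-tool invocation might look like. The JSON layout and the chef-solo flags shown are assumptions for illustration, not os-collect-config's documented schema:

```python
import json


def build_cm_invocation(config_path):
    """Translate a collected-config JSON file into a CM-tool command
    line. The wrapper is what os-collect-config would exec; the keys
    read here ('run_list') are illustrative, not a real schema."""
    with open(config_path) as f:
        collected = json.load(f)
    run_list = collected.get("run_list", [])
    return ["chef-solo",
            "--json-attributes", config_path,
            "--override-runlist", ",".join(run_list)]
```

A puppet or saltstack wrapper would read the same file and build its own command; os-collect-config itself never needs to know which tool is behind the wrapper.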



Re: [openstack-dev] [Cinder] TaskFlow 0.1 integration

2013-11-20 Thread haruka tanizawa
Hi there!

Thank you for your suggestion.
If there are any untouched or unfinished APIs, can I help you with them?

TaskFlow is important to me, particularly for the cancellation support it
enables. So, let me help you implement TaskFlow in the APIs.
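The appeal here is exactly that revert/cancel behaviour. A toy illustration of the pattern in plain Python (this is the general idea, not the real taskflow API):

```python
def run_flow(tasks):
    """Run (name, execute, revert) task triples in order; on failure,
    revert the already-completed tasks in reverse order. This is the
    behaviour a flow library gives operations like delete_volume for
    free, so a half-done operation can be unwound."""
    done = []
    try:
        for name, execute, revert in tasks:
            execute()
            done.append((name, revert))
    except Exception:
        for name, revert in reversed(done):
            revert()
        raise
```

The real library adds persistence and engines on top, but the execute/revert pairing is the core contract each task implements.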

Sincerely, Haruka Tanizawa


2013/11/20 Joshua Harlow 

> Sweet!
>
> Feel free to ask lots of questions and jump on both irc channels
> (#openstack-state-management and #openstack-cinder) if u need any help that
> can be better solved in real time chat.
>
> Thanks for helping getting this ball rolling :-)
>
> Sent from my really tiny device...
>
> > On Nov 19, 2013, at 8:46 AM, "Walter A. Boring IV" 
> wrote:
> >
> > Awesome guys,
> >  Thanks for picking this up.   I'm looking forward to the reviews :)
> >
> > Walt
> >>> On 19.11.2013 10:38, Kekane, Abhishek wrote:
> >>> Hi All,
> >>>
> >>> Greetings!!!
> >> Hi there!
> >>
> >> And thanks for your interest in cinder and taskflow!
> >>
> >>> We are in the process of implementing TaskFlow 0.1 in Cinder for "copy
> >>> volume to image" and "delete volume".
> >>>
> >>> I have added two blueprints for the same.
> >>> 1.
> https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow
> >>> 2.
> https://blueprints.launchpad.net/cinder/+spec/delete-volume-task-flow
> >>>
> >>> I would like to know if any other developers/teams are working or
> >>> planning to work on any cinder api apart from above two api's.
> >>>
> >>> Your help is appreciated.
> >> Anastasia Karpinska is working on updating the existing flows to use the
> >> released TaskFlow 0.1.1 instead of the internal copy:
> >>
> >> https://review.openstack.org/53922
> >>
> >> It was blocked because taskflow was not in openstack/requirements, but
> >> now we're there, and Anastasia promised to finish the work and submit an
> >> updated changeset for review in a couple of days.
> >>
> >> There are also two changesets that convert cinder APIs to use TaskFlow:
> >> - https://review.openstack.org/53480 for create_backup by Victor
> >>   Rodionov
> >> - https://review.openstack.org/55134 for create_snapshot by Stanislav
> >>   Kudriashev
> >>
> >> As far as I know, both Stanislav and Victor suspended their work until
> >> Anastasia's change lands.
> >
> >


Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-20 Thread Elena Ezhova
20.11.2013, 06:18, "John Griffith" :

On Mon, Nov 18, 2013 at 3:53 PM, Mark McLoughlin  wrote:

>  On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
>>  Random OSLO updates with no list of what changed, what got fixed etc
>>  are unlikely to get review attention - doing such a review is
>>  extremely difficult. I was -2ing them and asking for more info, but
>>  they keep popping up. I'm really not sure what the best way of
>>  updating from OSLO is, but this isn't it.
>  Best practice is to include a list of changes being synced, for example:
>
>https://review.openstack.org/54660
>
>  Every so often, we throw around ideas for automating the generation of
>  this changes list - e.g. cinder would have the oslo-incubator commit ID
>  for each module stored in a file in git to tell us when it was last
>  synced.
>
>  Mark.
>
> Been away on vacation so I'm afraid I'm a bit late on this... but;
>
> I think the point Duncan is bringing up here is that there are some
> VERY large and significant patches coming from OSLO pulls.  The DB
> patch in particular being over 1K lines of code to a critical portion
> of the code is a bit unnerving to try and do a review on.  I realize
> that there's a level of trust that goes with the work that's done in
> OSLO and synchronizing those changes across the projects, but I think
> a few key concerns here are:
>
> 1. Huge pulls from OSLO, like the DB patch here, are nearly
> impossible to thoroughly review and test.  Over time we learn a lot
> about real usage scenarios and the database and tweak things as we go,
> so seeing a patch set like this show up is always a bit unnerving and
> frankly nobody is overly excited to review it.
>
> 2. Given a certain level of *trust* for the work that folks do on the
> OSLO side in submitting these patches and new additions, I think some
> of the responsibility on the review of the code falls on the OSLO
> team.  That being said there is still the issue of how these changes
> will impact projects *other* than Nova which I think is sometimes
> neglected.  There have been a number of OSLO synchs pushed to Cinder
> that fail gating jobs, some get fixed, some get abandoned, but in
> either case it shows that there wasn't any testing done with projects
> other than Nova (PLEASE note, I'm not referring to this particular
> round of patches or calling any patch set out, just stating a
> historical fact).
>
> 3. We need better documentation in commit messages explaining why the
> changes are necessary and what they do for us.  I'm sorry, but in my
> opinion the answer "it's the latest in OSLO and Nova already has it"
> is not enough.  The patches mentioned in this thread met the minimum
> requirements because they at least reference the OSLO commit, which is
> great.  In addition, I'd like to see something that addresses any
> discovered issues or testing done with the specific projects these
> changes are being synced to.
>
> I'm in no way saying I don't want Cinder to play nice with the common
> code or to get in line with the way other projects do things but I am
> saying that I think we have a ways to go in terms of better
> communication here and in terms of OSLO code actually keeping in mind
> the entire OpenStack eco-system as opposed to just changes that were
> needed/updated in Nova.  Cinder in particular went through some pretty
> massive DB re-factoring and changes during Havana and there was a lot
> of really good work there but it didn't come without a cost and the
> benefits were examined and weighed pretty heavily.  I also think that
> sometimes the indirection introduced by adding some of the
> openstack.common code is unnecessary and in some cases makes things
> more difficult than they should be.
>
> I'm just not sure that we always do a very good ROI investigation or
> risk assessment on changes, and that opinion applies to ALL changes in
> OpenStack projects, not OSLO specific or anything else.
>
> All of that being said, a couple of those syncs on the list are
> outdated.  We should start by doing a fresh pull for these and, if
> possible, add some better documentation in the commit messages as to
> the justification for the patches.  We can take a closer look at the
> changes and the history behind them and try to make some review
> progress here.  Mark mentioned some good ideas regarding capturing
> commit IDs from synchronization pulls and I'd like to look into that a
> bit as well.
>
> Thanks,
> John
>



I see now that updating OSLO is a far more complex issue than it may seem

Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Samuel Bercovici
Hi Stephen,

When this was discussed in the past, customers were not happy about storing 
their SSL certificates in the OpenStack database as plain fields, as they felt 
that this was not secure enough.
Are you saying that you are OK with storing SSL certificates in the OpenStack 
database?

-Sam.


-Original Message-
From: Stephen Gran [mailto:stephen.g...@theguardian.com] 
Sent: Wednesday, November 20, 2013 10:15 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

On 19/11/13 16:33, Clint Byrum wrote:
> Excerpts from Vijay Venkatachalam's message of 2013-11-19 05:48:43 -0800:
>> Hi Sam, Eugene, & Avishay, et al.,
>>
>>  Today I spent some time creating a write-up for SSL 
>> termination (not exactly a design doc). Please share your comments!
>>
>> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYT
>> vMkMJ_inbo/edit
>>
>> Would like comments/discussion especially on the following note:
>>
>> SSL Termination requires certificate management. The ideal way is to handle 
>> this via an independent IAM service. This would take time to implement, so 
>> the thought was to add the certificate details to the VIP resource and send 
>> them directly to the device. Basically, don't store the certificate key in 
>> the DB, thereby avoiding the security concerns of maintaining certificates 
>> in the controller.

I don't see why it does.  Nothing in openstack needs to trust user-uploaded 
certs.  Just storing them as independent certificate objects that can be 
referenced by N VIPs makes sense to me.

If the backend is SSL, I would think you could do one of:
a) upload client certs
b) upload CA that has signed backend certs
c) opt to disable cert checking for backends

With the default being c).

Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Steve Baker
On 11/20/2013 09:29 PM, Clint Byrum wrote:
> Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
>> Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
>>> From: Steve Baker 
>>> To: openstack-dev@lists.openstack.org,
>>> Date: 19.11.2013 21:43
>>> Subject: Re: [openstack-dev] [Heat] HOT software configuration
>>> refined after design summit discussions
>>>
>> 
>>> I think there needs to be a CM-tool-specific agent delivered to the server
>>> which os-collect-config invokes. This agent will transform the config
>>> data (input values, CM script, CM specific specialness) to a CM tool
>>> invocation.
>>>
>>> How to define and deliver this agent is the challenge. Some options are:
>>> 1) install it as part of the image customization/bootstrapping (golden
>>> images or cloud-init)
>>> 2) define a (mustache?) template in the SoftwareConfig which
>>> os-collect-config transforms into the agent script, which
>>> os-collect-config then executes
>>> 3) a CM tool specific implementation of SoftwareApplier builds and
>>> delivers a complete agent to os-collect-config which executes it
>>>
>>> I may be leaning towards 3) at the moment. Hopefully any agent can be
>>> generated with a sufficiently sophisticated base SoftwareApplier type,
>>> plus maybe some richer intrinsic functions.
>> This is a good summary of options; about the same as we had in mind. And we were
>> also leaning towards 3. Probably the approach we would take is to get a
>> SoftwareApplier running for one CM tool (e.g. Chef), then look at another
>> tool (plain shell scripts), and then see what the generic parts are that can
>> be factored into a base class.
>>
>>>>> The POC I'm working on is actually backed by a REST API which does dumb
>>>>> (but structured) storage of SoftwareConfig and SoftwareApplier entities.
>>>>> This has some interesting implications for managing SoftwareConfig
>>>>> resources outside the context of the stack which uses them, but lets not
>>>>> worry too much about that *yet*.
>>>> Sounds good. We are also defining some blueprints to break down the
>>>> overall software config topic. We plan to share them later this week, and
>>>> then we can consolidate with your plans and see how we can best join
>>>> forces.
>>>>
>>>>
>>> At this point it would be very helpful to spec out how specific CM tools
>>> are invoked with given inputs, script, and CM tool specific options.
>> That's our plan; and we would probably start with scripts and chef.
>>
>>> Maybe if you start with shell scripts, cfn-init and chef then we can all
>>> contribute other CM tools like os-config-applier, puppet, ansible,
>>> saltstack.
>>>
>>> Hopefully by then my POC will at least be able to create resources, if
>>> not deliver some data to servers.
>> We've been thinking about getting metadata to the in-instance parts on the
>> server and whether the resources you are building can serve the purpose.
>> I.e. pass an endpoint to the SoftwareConfig resources to the instance and
>> let the instance query the metadata from the resource. Sounds like this is
>> what you had in mind, so that would be a good point for integrating the
>> work. In the meantime, we can think of some shortcuts.
>>
> Note that os-collect-config is intended to be a light-weight generic
> in-instance agent to do exactly this. Watch for Metadata changes, and
> feed them to an underlying tool in a predictable interface. I'd hope
> that any of the appliers would mostly just configure os-collect-config
> to run a wrapper that speaks os-collect-config's interface.
>
> The interface is defined in the README:
>
> https://pypi.python.org/pypi/os-collect-config
>
> It is inevitable that we will extend os-collect-config to be able to
> collect config data from whatever API these config applier resources
> make available. I would suggest then that we not all go off and reinvent
> os-collect-config for each applier, but rather enhance os-collect-config
> as needed and write wrappers for the other config tools which implement
> its interface.
>
> os-apply-config already understands this interface for obvious reasons.
>
> Bash scripts can use os-apply-config to extract individual values, as
> you might see in some of the os-refresh-config scripts that are run as
> part of tripleo. I don't think anything further is really needed there.
>
> For chef, some kind of ohai plugin to read os-collect-config's collected
> data would make sense.
>
I'd definitely start with occ as Clint outlines. It would be nice if occ
only had to be configured to poll metadata for the OS::Nova::Server to
fetch the aggregated data for the currently available SoftwareAppliers.
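A rough sketch of what that aggregation might look like on the Heat side, so the server polls a single metadata document rather than one endpoint per applier (the "deployments" structure here is a guess at a format, not anything agreed):

```python
def aggregate_appliers(appliers):
    """Merge each SoftwareApplier's config into the one metadata
    document that os-collect-config polls for the OS::Nova::Server."""
    metadata = {"deployments": []}
    for applier in appliers:
        metadata["deployments"].append({
            "name": applier["name"],
            "config": applier.get("config", {}),
            "inputs": applier.get("inputs", {}),
        })
    return metadata
```

Adding or removing an applier then just changes the aggregated document, and the in-instance agent picks it up on the next poll.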




Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Stephen Gran
Hi,

Yes, definitely yes.

It's just a bootstrap problem: you can't both have a reliable,
resilient load balancer that can be respawned and not store all the data
necessary to respawn it.

I agree there are privacy concerns, just as there are with any hoster.
But if you don't trust your hoster with your SSL certs, you probably
shouldn't host any content that matters with them.

Cheers,

On Wed, 2013-11-20 at 08:43 +, Samuel Bercovici wrote:
> Hi Stephen,
> 
> When this was discussed in the past, customers were not happy about storing 
> their SSL certificates in the OpenStack database as plain fields, as they felt 
> that this was not secure enough.
> Are you saying that you are OK with storing SSL certificates in the OpenStack 
> database?
> 
> -Sam.
> 
> 
> -Original Message-
> From: Stephen Gran [mailto:stephen.g...@theguardian.com] 
> Sent: Wednesday, November 20, 2013 10:15 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> On 19/11/13 16:33, Clint Byrum wrote:
> > Excerpts from Vijay Venkatachalam's message of 2013-11-19 05:48:43 -0800:
> >> Hi Sam, Eugene, & Avishay, et al.,
> >>
> >>  Today I spent some time creating a write-up for SSL 
> >> termination (not exactly a design doc). Please share your comments!
> >>
> >> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYT
> >> vMkMJ_inbo/edit
> >>
> >> Would like comments/discussion especially on the following note:
> >>
> >> SSL Termination requires certificate management. The ideal way is to 
> >> handle this via an independent IAM service. This would take time to 
> >> implement, so the thought was to add the certificate details to the VIP 
> >> resource and send them directly to the device. Basically, don't store the 
> >> certificate key in the DB, thereby avoiding the security concerns of 
> >> maintaining certificates in the controller.
> 
> I don't see why it does.  Nothing in openstack needs to trust user-uploaded 
> certs.  Just storing them as independent certificate objects that can be 
> referenced by N VIPs makes sense to me.
> 
> If the backend is SSL, I would think you could do one of:
> a) upload client certs
> b) upload CA that has signed backend certs
> c) opt to disable cert checking for backends
> 
> With the default being c).
> 
> Cheers,
> --
> Stephen Gran
> Senior Systems Integrator - theguardian.com Please consider the environment 
> before printing this email.
> --
> Visit theguardian.com   
> 
> On your mobile, download the Guardian iPhone app theguardian.com/iphone and 
> our iPad edition theguardian.com/iPad   
> Save up to 33% by subscribing to the Guardian and Observer - choose the 
> papers you want and get full digital access.
> Visit subscribe.theguardian.com
> 
> This e-mail and all attachments are confidential and may also be privileged. 
> If you are not the named recipient, please notify the sender and delete the 
> e-mail and all attachments immediately.
> Do not disclose the contents to another person. You may not use the 
> information for any purpose, or store, or copy, it in any way.
>  
> Guardian News & Media Limited is not liable for any computer viruses or other 
> material transmitted with or as part of this e-mail. You should employ virus 
> checking software.
>  
> Guardian News & Media Limited
>  
> A member of Guardian Media Group plc
> Registered Office
> PO Box 68164
> Kings Place
> 90 York Way
> London
> N1P 2AP
>  
> Registered in England Number 908396
> 
> --
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Stephen Gran
Senior Systems Integrator - The Guardian


Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Stephen Gran
Hi,

On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
> Hi,
> 
>  
> 
> Evgeny has outlined the wiki for the proposed change at:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line
> with what was discussed during the summit.
> 
> The
> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2nYTvMkMJ_inbo/edit
>  discuss in addition Certificate Chains.
> 
>  
> 
> What would be the benefit of having a certificate that must be
> connected to a VIP vs. embedding it in the VIP?

You could reuse the same certificate for multiple loadbalancer VIPs.
This is a fairly common pattern - we have a dev wildcard cert that is
self-signed, and is used for lots of VIPs.
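
The reuse pattern described here is easy to sketch as a data model: the
certificate is stored once as its own object, and any number of VIPs point at
it by id. The classes below are purely illustrative, not the actual Neutron
LBaaS resources; they only show the reference-vs-embed distinction.

```python
# Illustrative sketch only: these classes are hypothetical, not the real
# Neutron LBaaS model. A certificate stored once can be referenced by N
# VIPs, so rotating the cert means updating one object.
import uuid


class Certificate(object):
    def __init__(self, name, cert_pem, key_pem):
        self.id = str(uuid.uuid4())
        self.name = name
        self.cert_pem = cert_pem
        self.key_pem = key_pem  # the sensitive part people worry about storing


class Vip(object):
    def __init__(self, name, certificate_id=None):
        self.id = str(uuid.uuid4())
        self.name = name
        # a reference, not a copy: N VIPs can share one certificate
        self.certificate_id = certificate_id


# one self-signed dev wildcard cert, reused by several VIPs
wildcard = Certificate("dev-wildcard",
                       "-----BEGIN CERTIFICATE-----...",
                       "-----BEGIN PRIVATE KEY-----...")
vips = [Vip("app-%d" % i, certificate_id=wildcard.id) for i in range(3)]
```

With embedding, each VIP would carry its own copy of the key material; with a
reference, the key is stored exactly once.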

> When we get a system that can store certificates (ex: Barbican), we
> will add support to it in the LBaaS model.

It probably doesn't need anything that complicated, does it?

Cheers,
-- 
Stephen Gran
Senior Systems Integrator - The Guardian


--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][savanna][tripleo][trove] jobs for dib elements

2013-11-20 Thread Sergey Lukjanov
Hi folks,

it looks like all of us need some jobs to check, build and publish our guest 
images.
Please correct me if such a discussion/thread already exists.

There was a short discussion on it with the Trove guys and we have the same 
thoughts on it.
I'm not really sure about Heat/TripleO, so guys, please comment here about 
your needs.
In Savanna we have some elements to pre-build images with Hadoop, so we'd 
like to gate them.

The main idea of this email is to coordinate our efforts in this area and share 
thoughts/solutions earlier.

I’ve created an etherpad to collect some thoughts - 
https://etherpad.openstack.org/p/dib-elements-jobs

Thank you.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Horizon] poweroff/shutdown action in horizon

2013-11-20 Thread Jaromir Coufal

  
  
Hey Gareth,

I am for consistency, and it looks like what we want here is
'Shutting down'.

There is a discussion forum for similar topics, feel free to join:
http://ask-openstackux.rhcloud.com.
Also, for live discussions, you can join the #openstack-ux channel.

-- Jarda

On 2013/15/11 07:49, Gareth wrote:

> Hi all
>
> There is a confusing word in horizon when stopping an instance. Please
> take a look at the following snapshots:
>
> 1. In the action list of an instance, there is an action called "Shut Off"
> 2. If you press that, in the confirm window, we still call this action
>    "Shut Off"
> 3. When doing this action, it is called "Powering off"
> 4. "Shut Off" again after it finishes
>
> So should we use a unified name for it? For example we could use
> "Shutting down" instead. But I'm not sure about this, because horizon
> has a long history; maybe the horizon developers who chose the current
> verbs had a more rational reason.
>
> -- 
> Gareth
>
> Cloud Computing, OpenStack, Fitness, Basketball
> OpenStack contributor
> Company: UnitedStack
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me and I'll donate $1 or ¥1 to an open
> organization you specify.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Propose "project story wiki" idea

2013-11-20 Thread Flavio Percoco

On 20/11/13 09:33 +0400, Boris Pavlovic wrote:

Hi stackers,


Currently what I see is a growing number of interesting projects that, at
least, I would like to track. But reading all the mailing lists and reviewing
all the patches in every interesting project to get a high-level understanding
of what is currently happening in a project is quite a hard, or even
impossible, task (at least for me). Especially after a 2-week vacation =)

The idea of this proposal is that every OpenStack project should have a
"story" wiki page. It means publishing one short message every week that
contains the most interesting updates from the last week and a high-level
road map for the coming week. So by reading this for 10-15 minutes you can
see what changed in the project and get a better understanding of its
high-level road map.

E.g. we started doing this in Rally:
https://wiki.openstack.org/wiki/Rally/Updates

I think that the best way to organize this is to have a person (or a few
people) who will track all changes in the project and prepare such updates
each week.



I like the idea. We've been doing so for Marconi here[0] but I agree
that a more common place makes sense.

The thing I like about thoughtstreams is that members can have their
own stream and publish updates whenever they've something to say.
They'll all end up in the combined stream, which also has an RSS feed
that can be consumed.

Anyway, I'm pretty sure this is also possible with other tools.

Cheers,
FF

[0] https://thoughtstreams.io/combined/marconi-progress-and-updates/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Sylvain Bauza

Hi,

When reviewing https://review.openstack.org/#/c/54539/, it appeared to 
me that we need to reach consensus on how to know that a request has 
admin creds.
Currently, for implementing policy checks in Climate, I'm looking at the 
context.roles dict, which contains the unicode string 'admin' if so. I 
don't need an extra param. That said, having a method check_is_admin on 
the context module would be great, and I fully reckon it would be 
helpful.

What do you think, folks?

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Julien Danjou
On Wed, Nov 20 2013, Sylvain Bauza wrote:

> When reviewing https://review.openstack.org/#/c/54539/, it appeared to me
> that we need to make consensus on the way to know that a request is having
> admin creds.
> Currently, for implementing policies check in Climate, I'm looking at
> context.roles dict, which contains the unicode string 'admin' if so. I don't
> need to have an extra param. That said, having a method check_is_admin on
> the context module would be great, and I fully reckon it would be helpful.
>
> What do you think all folks ?

It depends on how fine grained you want your ACL to be, but anyway
setting a user as admin depends on the 'admin' role being present in the
context.roles dict or not, so I don't see the point of having an extra
param; a helper method, at best.
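
The helper method being discussed would be small. Here is a hedged sketch of
what a check_is_admin on the context could look like; the Context class below
is illustrative, not Climate's actual context module.

```python
# Hedged sketch of the helper discussed above; this Context class is
# illustrative, not Climate's actual context module.
class Context(object):
    def __init__(self, user_id, roles=None):
        self.user_id = user_id
        # roles as delivered by the auth middleware, e.g. ['Member', 'admin']
        self.roles = roles or []

    def check_is_admin(self):
        # the request has admin creds iff the 'admin' role is present
        return 'admin' in self.roles


member = Context('alice', roles=['Member'])
admin = Context('bob', roles=['Member', 'admin'])
```

Callers then ask the context instead of poking at roles directly, which keeps
the ACL decision in one place if it ever needs to become finer grained.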

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] When is a blueprint unnecessary?

2013-11-20 Thread Thierry Carrez
Russell Bryant wrote:
> One of the bits of feedback that came from the "Nova Project Structure
> and Process" session at the design summit was that it would be nice to
> skip having blueprints for smaller items.
> 
> In an effort to capture this, I updated the blueprint review criteria
> [1] with the following:
> 
>   Some blueprints are closed as unnecessary. Blueprints are used for
>   tracking significant development efforts. In general, small and/or
>   straight forward cleanups do not need blueprints. A blueprint should
>   be filed if:
> 
>- it impacts the release notes
>- it covers significant internal development or refactoring efforts
> [...]

While I agree we should not *require* blueprints for minor
features/efforts, should we actively prevent people from filing them (or
close them if they are filed ?)

Personally (I know I'm odd) I like to have my work (usually small stuff)
covered by a blueprint so that I can track and communicate its current
completion status -- helps me keep track of where I am.

So the question is... is there a cost associated with tolerating "small"
blueprints ? Once they are set to "Low" priority they mostly disappear
from release management tracking so it's not really a burden there.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-11-20 Thread Daniel P. Berrange
On Wed, Nov 20, 2013 at 11:18:24AM +0800, Wangpan wrote:
> Hi Daniel,
> 
> Thanks for your help in advance.
> I have read your wiki page and it explains this issue very clearly.
> But I have a question about the 'technical design', you give us a prototype 
> method as below:
> def get_guest_cpu_topology(self, inst_type, image, preferred_topology, 
> mandatory_topology):
> my question is: how/where can we get these two parameters, 
> 'preferred_topology' and 'mandatory_topology'?
> From the nova config file? Or from the hypervisor?

I expected that those would be determined dynamically by the virt
drivers using a variety of approaches. Probably best to start off
with it fairly simple. For example, preferred topology may simply
be used to express that the driver prefers use of sockets and cores
over threads, while mandatory topology would encode the maximum
it was prepared to accept.

So I might say libvirt would just go for a default of

  preferred = { max_sockets = 64, max_cores = 4, max_threads = 1 }
  mandatory = { max_sockets = 256, max_cores = 8, max_threads = 2 }

When libvirt gets some more intelligence  around NUMA placement, allowing
it to map guest NUMA topology to host NUMA topology, then it might get more
inventive tailoring the 'preferred' value to match how it wants to place
the guest.

For example, if libvirt is currently trying to fit guests entirely on to
one host socket by default then it might decide to encourage use of cores
by saying

   preferred = { max_sockets = 1, max_cores = 8, max_threads = 2 }

Conversely if it knows that the memory size of the flavour will exceed
one NUMA node, then it will want to force use of multiple sockets and
discourage cores/threads

   preferred = { max_sockets = 64, max_cores = 2, max_threads = 1 }

Again, I think we want to just keep it fairly dumb & simple to start
with.
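
The preferred/mandatory split above lends itself to a simple search: try the
preferred limits first and fall back to the mandatory ones, preferring sockets
over cores over threads. The function below is only a sketch of that idea
under those assumptions, not the proposed Nova implementation.

```python
def pick_topology(vcpus, limits):
    """Pick a (sockets, cores, threads) tuple for a vCPU count, preferring
    sockets over cores over threads, within the given maximums.
    Sketch only; not the actual Nova/libvirt code."""
    for threads in range(1, limits['max_threads'] + 1):
        for cores in range(1, limits['max_cores'] + 1):
            if vcpus % (cores * threads):
                continue  # must divide evenly into whole sockets
            sockets = vcpus // (cores * threads)
            if sockets <= limits['max_sockets']:
                return sockets, cores, threads
    return None  # the constraints cannot be satisfied


# the example defaults from the message above
preferred = {'max_sockets': 64, 'max_cores': 4, 'max_threads': 1}
mandatory = {'max_sockets': 256, 'max_cores': 8, 'max_threads': 2}

# try the preferred limits first, fall back to the mandatory ones
topology = pick_topology(8, preferred) or pick_topology(8, mandatory)
```

With an 8-vCPU flavour and the defaults above this yields 8 sockets, 1 core,
1 thread; tightening max_sockets would push the search toward cores and then
threads, which is how the NUMA-aware "preferred" values would steer placement.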

> From: "Daniel P. Berrange" 
> Sent: 2013-11-19 20:15
> Subject: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU 
> topology
> To: "openstack-dev"
> Cc:
> 
> For attention of maintainers of Nova virt drivers 
> 
> A while back there was a bug requesting the ability to set the CPU 
> topology (sockets/cores/threads) for guests explicitly 
> 
>https://bugs.launchpad.net/nova/+bug/1199019 
> 
> I countered that setting explicit topology doesn't play well with 
> booting images with a variety of flavours with differing vCPU counts. 
> 
> This led to the following change which used an image property to 
> express maximum constraints on CPU topology (max-sockets/max-cores/ 
> max-threads) which the libvirt driver will use to figure out the 
> actual topology (sockets/cores/threads) 
> 
>   https://review.openstack.org/#/c/56510/ 
> 
> I believe this is a prime example of something we must co-ordinate 
> across virt drivers to maximise happiness of our users. 
> 
> There's a blueprint but I find the description rather hard to 
> follow 
> 
>   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology 
> 
> So I've created a standalone wiki page which I hope describes the 
> idea more clearly 
> 
>   https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology 
> 
> Launchpad doesn't let me link the URL to the blueprint since I'm not 
> the blueprint creator :-( 
> 
> Anyway this mail is to solicit input on the proposed standard way to 
> express this which is hypervisor portable, and the addition of some 
> shared code for doing the calculations which virt driver impls can 
> just call into rather than re-inventing it. 
> 
> I'm looking for buy-in to the idea from the maintainers of each 
> virt driver that this conceptual approach works for them, before 
> we go merging anything with the specific impl for libvirt. 



Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Potential to increase min required libvirt version to 0.9.8 ?

2013-11-20 Thread Daniel P. Berrange
On Wed, Nov 20, 2013 at 02:50:03PM +1100, Tom Fifield wrote:
> On 20/11/13 14:33, Robert Collins wrote:
> >On 20 November 2013 08:02, Daniel P. Berrange  wrote:
> >>Currently the Nova libvirt driver is declaring that it wants a minimum
> >>of libvirt 0.9.6.
> >...
> >>If there are other distros I've missed which expect to support deployment
> >>of Icehouse please add them to this list. Hopefully there won't be any
> >>with libvirt software older than Ubuntu 12.04 LTS
> >>
> >>
> >>The reason I'm asking this now, is that we're working to make the libvirt
> >>python module a separate tar.gz that can build with multiple libvirt
> >>versions, and I need to decide how ancient a libvirt we should support
> >>for it.
> >
> >Fantastic!!!
> >
> >The Ubuntu cloud archive
> >https://wiki.ubuntu.com/ServerTeam/CloudArchive is how OpenStack is
> >delivered by Canonical for Ubuntu LTS users. So I think you can go
> >with e.g. 0.9.11 or even 0.9.12 depending on what the Suse folk say.
> 
> Just confirming that the documentation for Ubuntu sets users up with
> the Cloud Archive. In Havana, Libvirt is 1.1.1 and qemu is 1.5.0.
> I've added a row to the table.

Ah, so you're saying that the Cloud Archive repository actually
provides users with newer libvirt + qemu packages than they would
otherwise get in the regular LTS repository. That's news to me !

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-20 Thread Ladislav Smola
Ok, I'll try to summarize what will be done in the near future for 
Undercloud monitoring.

1. There will be a Central agent running on the same host (hosts, once the 
central agent horizontal scaling is finished) as Ironic.
2. It will have an SNMP pollster; the SNMP pollster will be able to get the 
list of hosts and their IPs from Nova (last time I checked it was in Nova) 
so it can poll them for stats. Hosts to poll can also be defined statically 
in a config file.
3. It will have an IPMI pollster that will poll the Ironic API, getting the 
list of hosts and a fixed set of stats (basically everything that we can 
get :-)).
4. Ironic will also emit messages (basically all events regarding the 
hardware) and send them directly to the Ceilometer collector.

Does that seem correct? I think that is the baseline we must have to 
have Undercloud monitored. We can then build on that.
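
Point 3 can be sketched roughly as follows. Everything here (the client
methods, the meter names, the sample layout) is an assumption for
illustration only, not the real Ceilometer pollster interface or Ironic API.

```python
# Rough, hypothetical shape of point 3 above: an IPMI pollster asking
# Ironic for hosts and a fixed set of stats. Not the real Ceilometer
# plugin API or Ironic client.
class IronicIPMIPollster(object):
    STATS = ('temperature', 'fan_speed', 'voltage')  # the fixed set polled

    def __init__(self, ironic_client):
        self.ironic = ironic_client

    def get_samples(self):
        # one sample per (node, stat) pair; the collector would store them
        for node in self.ironic.list_nodes():
            readings = self.ironic.get_ipmi_stats(node['uuid'])
            for stat in self.STATS:
                if stat in readings:
                    yield {'resource_id': node['uuid'],
                           'meter': 'hardware.ipmi.%s' % stat,
                           'volume': readings[stat]}
```

The vendor-specific part (iLO vs. generic IPMI) would live behind the Ironic
driver interface, so the pollster always sees the same structure.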


Kind regards,
Ladislav

On 11/20/2013 09:22 AM, Julien Danjou wrote:
> On Tue, Nov 19 2013, Devananda van der Veen wrote:
>
>> If there is a fixed set of information (eg, temp, fan speed, etc) that
>> ceilometer will want,
>
> Sure, we want everything.
>
>> let's make a list of that and add a driver interface
>> within Ironic to abstract the collection of that information from physical
>> nodes. Then, each driver will be able to implement it as necessary for that
>> vendor. Eg., an iLO driver may poll its nodes differently than a generic
>> IPMI driver, but the resulting data exported to Ceilometer should have the
>> same structure.
>
> I like the idea.
>
>> An SNMP agent doesn't fit within the scope of Ironic, as far as I see, so
>> this would need to be implemented by Ceilometer.
>
> We're working on adding a pollster for that indeed.
>
>> As far as where the SNMP agent would need to run, it should be on the
>> same host(s) as ironic-conductor so that it has access to the
>> management network (the physically-separate network for hardware
>> management, IPMI, etc). We should keep the number of applications with
>> direct access to that network to a minimum, however, so a thin agent
>> that collects and forwards the SNMP data to the central agent would be
>> preferable, in my opinion.
>
> We can keep things simple by having the agent only doing that polling I
> think. Building a new agent sounds like it will complicate deployment
> again.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Search Project - summit follow up

2013-11-20 Thread Thierry Carrez
Dmitri Zimin(e) | StackStorm wrote:
> Hi Stackers,
> 
> The project Search is a service providing fast full-text search for
> resources across OpenStack services.
> [...]

At first glance this looks slightly scary from a security / tenant
isolation perspective. Most search results would be extremely
user-specific (and leaking data from one user to another would be
catastrophic), so the benefits of indexing (vs. querying DB) would be
very limited ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-20 Thread Thomas Spatzier
Steve Baker  wrote on 20.11.2013 09:51:34:
> From: Steve Baker 
> To: openstack-dev@lists.openstack.org,
> Date: 20.11.2013 09:55
> Subject: Re: [openstack-dev] [Heat] HOT software configuration
> refined after design summit discussions
>
> On 11/20/2013 09:29 PM, Clint Byrum wrote:
> > Excerpts from Thomas Spatzier's message of 2013-11-19 23:35:40 -0800:
> >> Excerpts from Steve Baker's message on 19.11.2013 21:40:54:
> >>> From: Steve Baker 
> >>> To: openstack-dev@lists.openstack.org,
> >>> Date: 19.11.2013 21:43
> >>> Subject: Re: [openstack-dev] [Heat] HOT software configuration
> >>> refined after design summit discussions
> >>>
> >> 
> >>> I think there needs to a CM tool specific agent delivered to the
server
> >>> which os-collect-config invokes. This agent will transform the config
> >>> data (input values, CM script, CM specific specialness) to a CM tool
> >>> invocation.
> >>>
> >>> How to define and deliver this agent is the challenge. Some options
are:
> >>> 1) install it as part of the image customization/bootstrapping
(golden
> >>> images or cloud-init)
> >>> 2) define a (mustache?) template in the SoftwareConfig which
> >>> os-collect-config transforms into the agent script, which
> >>> os-collect-config then executes
> >>> 3) a CM tool specific implementation of SoftwareApplier builds and
> >>> delivers a complete agent to os-collect-config which executes it
> >>>
> >>> I may be leaning towards 3) at the moment. Hopefully any agent can be
> >>> generated with a sufficiently sophisticated base SoftwareApplier
type,
> >>> plus maybe some richer intrinsic functions.
> >> This is good summary of options; about the same we had in mind. And we
were
> >> also leaning towards 3. Probably the approach we would take is to get
a
> >> SoftwareApplier running for one CM tool (e.g. Chef), then look at
another
> >> tool (base shell scripts), and then see what the generic parts art
that can
> >> be factored into a base class.
> >>
> > The POC I'm working on is actually backed by a REST API which does
> >> dumb
> > (but structured) storage of SoftwareConfig and SoftwareApplier
> >> entities.
> > This has some interesting implications for managing SoftwareConfig
> > resources outside the context of the stack which uses them, but
lets
> >> not
> > worry too much about that *yet*.
>  Sounds good. We are also defining some blueprints to break down the
> >> overall
>  software config topic. We plan to share them later this week, and
then
> >> we
>  can consolidate with your plans and see how we can best join forces.
> 
> 
> >>> At this point it would be very helpful to spec out how specific CM
tools
> >>> are invoked with given inputs, script, and CM tool specific options.
> >> That's our plan; and we would probably start with scripts and chef.
> >>
> >>> Maybe if you start with shell scripts, cfn-init and chef then we can
all
> >>> contribute other CM tools like os-config-applier, puppet, ansible,
> >>> saltstack.
> >>>
> >>> Hopefully by then my POC will at least be able to create resources,
if
> >>> not deliver some data to servers.
> >> We've been thinking about getting metadata to the in-instance parts on
the
> >> server and whether the resources you are building can serve the
purpose.
> >> I.e. pass and endpoint to the SoftwareConfig resources to the instance
and
> >> let the instance query the metadata from the resource. Sounds like
this is
> >> what you had in mind, so that would be a good point for integrating
the
> >> work. In the meantime, we can think of some shortcuts.
> >>
> > Note that os-collect-config is intended to be a light-weight generic
> > in-instance agent to do exactly this. Watch for Metadata changes, and
> > feed them to an underlying tool in a predictable interface. I'd hope
> > that any of the appliers would mostly just configure os-collect-config
> > to run a wrapper that speaks os-collect-config's interface.
> >
> > The interface is defined in the README:
> >
> > https://pypi.python.org/pypi/os-collect-config
> >
> > It is inevitable that we will extend os-collect-config to be able to
> > collect config data from whatever API these config applier resources
> > make available. I would suggest then that we not all go off and
reinvent
> > os-collect-config for each applier, but rather enhance
os-collect-config
> > as needed and write wrappers for the other config tools which implement
> > its interface.
> >
> > os-apply-config already understands this interface for obvious reasons.
> >
> > Bash scripts can use os-apply-config to extract individual values, as
> > you might see in some of the os-refresh-config scripts that are run as
> > part of tripleo. I don't think anything further is really needed there.
> >
> > For chef, some kind of ohai plugin to read os-collect-config's
collected
> > data would make sense.

Thanks for all that information, Clint. Fully agree that we should leverage
what is already there instead of re-inventing the wheel.
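
As a concrete, simplified illustration of "a wrapper that speaks
os-collect-config's interface": assuming the collected metadata is exposed to
the executed command as a colon-separated list of JSON file paths in an
OS_CONFIG_FILES environment variable, with later files overriding earlier
ones, a CM-tool wrapper could start like this. The variable name and merge
order here are assumptions; the os-collect-config README is the authoritative
contract.

```python
# Simplified wrapper sketch. Assumption (see lead-in): the collected
# metadata arrives as a colon-separated list of JSON file paths in
# OS_CONFIG_FILES, with later files overriding earlier ones.
import json
import os


def load_collected_config(environ=None):
    environ = environ if environ is not None else os.environ
    merged = {}
    for path in environ.get('OS_CONFIG_FILES', '').split(':'):
        if path and os.path.exists(path):
            with open(path) as f:
                merged.update(json.load(f))  # later files win
    return merged


# A chef/puppet/salt wrapper would now hand the merged dict to its CM
# tool, e.g. by writing it out as node attributes, hiera data, or pillar.
```

This is the kind of thin shim Clint describes: the agent stays generic and
each CM tool only needs a small adapter at the end of the pipeline.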

Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Sylvain Bauza

On 20/11/2013 11:18, Julien Danjou wrote:
> It depends on how fine grained you want your ACL to be,

Then, that's a policy matter to let you know whether you can trust the user 
or not.
I'm digging into 
http://adam.younglogic.com/2013/11/policy-enforcement-in-openstack/, great 
value for learning how to manage fine-grained ACLs.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] When is a blueprint unnecessary?

2013-11-20 Thread John Garbutt
On 20 November 2013 10:21, Thierry Carrez  wrote:
> Russell Bryant wrote:
>> One of the bits of feedback that came from the "Nova Project Structure
>> and Process" session at the design summit was that it would be nice to
>> skip having blueprints for smaller items.
>>
>> In an effort to capture this, I updated the blueprint review criteria
>> [1] with the following:
>>
>>   Some blueprints are closed as unnecessary. Blueprints are used for
>>   tracking significant development efforts. In general, small and/or
>>   straight forward cleanups do not need blueprints. A blueprint should
>>   be filed if:
>>
>>- it impacts the release notes
>>- it covers significant internal development or refactoring efforts
>> [...]
>
> While I agree we should not *require* blueprints for minor
> features/efforts, should we actively prevent people from filing them (or
> close them if they are filed ?)
>
> Personally (I know I'm odd) I like to have my work (usually small stuff)
> covered by a blueprint so that I can track and communicate its current
> completion status -- helps me keep track of where I am.
>
> So the question is... is there a cost associated with tolerating "small"
> blueprints ? Once they are set to "Low" priority they mostly disappear
> from release management tracking so it's not really a burden there.

Approving a small number of smaller blueprints seems OK to me. But
maybe a bug could be a lower cost alternative.

But having too many blueprints is certainly a happier place to be than not
having detailed enough blueprints for the code that needs them.

Maybe we should soften the language to describe when we don't
_require_ a blueprint.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] When is a blueprint unnecessary?

2013-11-20 Thread Daniel P. Berrange
On Wed, Nov 20, 2013 at 11:21:14AM +0100, Thierry Carrez wrote:
> Russell Bryant wrote:
> > One of the bits of feedback that came from the "Nova Project Structure
> > and Process" session at the design summit was that it would be nice to
> > skip having blueprints for smaller items.
> > 
> > In an effort to capture this, I updated the blueprint review criteria
> > [1] with the following:
> > 
> >   Some blueprints are closed as unnecessary. Blueprints are used for
> >   tracking significant development efforts. In general, small and/or
> >   straight forward cleanups do not need blueprints. A blueprint should
> >   be filed if:
> > 
> >- it impacts the release notes
> >- it covers significant internal development or refactoring efforts
> > [...]
> 
> While I agree we should not *require* blueprints for minor
> features/efforts, should we actively prevent people from filing them (or
> close them if they are filed ?)
> 
> Personally (I know I'm odd) I like to have my work (usually small stuff)
> covered by a blueprint so that I can track and communicate its current
> completion status -- helps me keep track of where I am.
> 
> So the question is... is there a cost associated with tolerating "small"
> blueprints ? Once they are set to "Low" priority they mostly disappear
> from release management tracking so it's not really a burden there.

IIUC, Russell has a desire that unless a blueprint is approved, then the
corresponding patches would not be merged. So from that POV, each blueprint
has a burden of approval to consider, even if it is 'Low' priority. This
would be a reason to not require blueprints for 'trivial' changes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [UX] Automatically post new threads from AskBot to the list

2013-11-20 Thread Thierry Carrez
Julie Pichon wrote:
> Hi folks,
> 
> I've been thinking about the AskBot UX website [0] and its lack of
> visibility, particularly for new community members.
> 
> I think it would be valuable to have the first post from new
> conversation threads be posted to the -dev list with the appropriate
> [UX] tag, with a link to AskBot and a request to continue the
> conversation over there. This would help newcomers realise there is a
> space for UX conversations, and existing community members to pop by
> when a topic comes up around an area of interest. The traffic has been
> low so far, and I wouldn't expect it to become more overwhelming than
> any of the projects currently communicating via the developers
> list.
> 
> Would there be any concern or objections?

Frankly, automatically duplicating information (or cross-posting) should
never be the solution.

I'd rather have an edited, weekly summary of UX discussions posted to
-dev: that would be useful both to people who do not follow that site
*and* as a regular reminder of the site's existence for UX-minded folks.

Yes, it's more work, but instead of something I'd ignore (by setting up
yet another filter), it would be something I would actually enjoy reading.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-11-20 Thread John Garbutt
On 20 November 2013 10:19, Daniel P. Berrange  wrote:
> On Wed, Nov 20, 2013 at 11:18:24AM +0800, Wangpan wrote:
>> Hi Daniel,
>>
>> Thanks for your help in advance.
>> I have read your wiki page and it explains this issue very clearly.
>> But I have a question about the 'technical design', you give us a prototype 
>> method as below:
>> def get_guest_cpu_topology(self, inst_type, image, preferred_topology, 
>> mandatory_topology):
>> my question is that, how/where we can get these two parameters 
>> 'preferred_topology, mandatory_topology'?
>> from the nova config file? or get from the hypervisor?
>
> I expected that those would be determined dynamically by the virt
> drivers using a variety of approaches. Probably best to start off
> with it fairly simple. For example, preferred topology may simply
> be used to express that the driver prefers use of sockets and cores,
> over threads, while mandatory topology would encode the maximum
> it was prepared to accept.
>
> So I might say libvirt would just go for a default of
>
>   preferred = { max_sockets = 64, max_cores = 4, max_threads = 1 }
>   mandatory = { max_sockets = 256, max_cores = 8, max_threads = 2 }
>
> When libvirt gets some more intelligence  around NUMA placement, allowing
> it to map guest NUMA topology to host NUMA topology, then it might get more
> inventive tailoring the 'preferred' value to match how it wants to place
> the guest.
>
> For example, if libvirt is currently trying to fit guests entirely on to
> one host socket by default then it might decide to encourage use of cores
> by saying
>
>preferred = { max_sockets = 1, max_cores = 8, max_threads = 2 }
>
> Conversely if it knows that the memory size of the flavour will exceed
> one NUMA node, then it will want to force use of multiple sockets and
> discourage cores/threads
>
>preferred = { max_sockets = 64, max_cores = 2, max_threads = 1 }
>
> Again, I think we want to just keep it fairly dumb & simple to start
> with.
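To make the quoted scheme concrete, a driver's selection logic might look roughly like this (a sketch only — `pick_topology`, its signature, and the preference order are assumptions for illustration, not Nova's actual implementation):

```python
# Sketch: choose a (sockets, cores, threads) topology for a given vCPU
# count, trying the 'preferred' limits first and falling back to the
# 'mandatory' ones.  Preference order: sockets, then cores, then threads.

def pick_topology(vcpus, limits):
    """Return the best (sockets, cores, threads) fitting 'limits', or None."""
    candidates = []
    for threads in range(1, limits["max_threads"] + 1):
        for cores in range(1, limits["max_cores"] + 1):
            if vcpus % (cores * threads):
                continue  # the product must divide the vCPU count exactly
            sockets = vcpus // (cores * threads)
            if sockets <= limits["max_sockets"]:
                candidates.append((sockets, cores, threads))
    # Tuple comparison maximises sockets first, then prefers cores
    # over threads for a fixed socket count.
    return max(candidates) if candidates else None

def get_guest_cpu_topology(vcpus, preferred, mandatory):
    return pick_topology(vcpus, preferred) or pick_topology(vcpus, mandatory)

preferred = {"max_sockets": 64, "max_cores": 4, "max_threads": 1}
mandatory = {"max_sockets": 256, "max_cores": 8, "max_threads": 2}
print(get_guest_cpu_topology(8, preferred, mandatory))   # (8, 1, 1)
```

With the example defaults above, an 8-vCPU flavour comes out as 8 sockets of 1 core — i.e. sockets preferred over cores, cores over threads.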
>
>> From: "Daniel P. Berrange"
>> Sent: 2013-11-19 20:15
>> Subject: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU 
>> topology
>> To: "openstack-dev"
>> Cc:
>>
>> For attention of maintainers of Nova virt drivers
>>
>> A while back there was a bug requesting the ability to set the CPU
>> topology (sockets/cores/threads) for guests explicitly
>>
>>https://bugs.launchpad.net/nova/+bug/1199019
>>
>> I countered that setting explicit topology doesn't play well with
>> booting images with a variety of flavours with differing vCPU counts.
>>
>> This led to the following change which used an image property to
>> express maximum constraints on CPU topology (max-sockets/max-cores/
>> max-threads) which the libvirt driver will use to figure out the
>> actual topology (sockets/cores/threads)
>>
>>   https://review.openstack.org/#/c/56510/
>>
>> I believe this is a prime example of something we must co-ordinate
>> across virt drivers to maximise happiness of our users.
>>
>> There's a blueprint but I find the description rather hard to
>> follow
>>
>>   https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
>>
>> So I've created a standalone wiki page which I hope describes the
>> idea more clearly
>>
>>   https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
>>
>> Launchpad doesn't let me link the URL to the blueprint since I'm not
>> the blueprint creator :-(
>>
>> Anyway this mail is to solicit input on the proposed standard way to
>> express this which is hypervisor portable and the addition of some
>> shared code for doing the calculations which virt driver impls can
>> just call into rather than re-inventing
>>
>> I'm looking for buy-in to the idea from the maintainers of each
>> virt driver that this conceptual approach works for them, before
>> we go merging anything with the specific impl for libvirt.

This seems to work from a XenAPI perspective.

Right now I feel I would ignore threads and really just worry about a
setting for "cores-per-socket":
http://support.citrix.com/article/CTX126524

That would certainly be very useful for Windows guests.

John



Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
Looking at implementations in Keystone and Nova, I found only one use for
is_admin, but it is essential.

Whenever in code you need to run a piece of code with admin privileges, you
can create a new context with  is_admin=True keeping all other parameters
as is, run code requiring admin access and then revert context back.
My first thought was: "Hey, why don't they just add an 'admin' role then?". But
what if in the current deployment the admin role is named like
'TheVerySpecialAdmin'? What if the user has tweaked policy.json to better suit
their needs?

So my current understanding is (and I suggest to follow this logic):
- 'admin' role in context.roles can vary, it's up to cloud admin to set
necessary value in policy.json;
- 'is_admin' flag is used to elevate privileges from code and its name is
fixed;
- policy check should assume that user is admin if either special role is
present or is_admin flag is set.
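That pattern could be sketched like this (names modelled loosely on nova.context; an illustration of the idea, not the actual code):

```python
# Sketch: elevate privileges from code via is_admin, independently of
# whatever the deployment has named its admin role.

import copy

class Context(object):
    def __init__(self, user_id, roles=None, is_admin=False):
        self.user_id = user_id
        self.roles = roles or []
        self.is_admin = is_admin

    def elevated(self):
        """Return a copy of this context with admin privileges set."""
        ctx = copy.deepcopy(self)
        ctx.is_admin = True
        return ctx

def check_admin(ctx, admin_role="admin"):
    # Policy rule: admin if the (deployment-configurable) role is present
    # OR the code elevated the context explicitly via is_admin.
    return ctx.is_admin or admin_role in ctx.roles

ctx = Context("alice", roles=["member"])
print(check_admin(ctx))             # False
print(check_admin(ctx.elevated()))  # True - the original ctx is untouched
```

Working on a copy means the elevated context can be dropped after the admin-only call, so the caller's own privileges never change.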

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Vijay Venkatachalam
Hi,

Replies Inline!

> -Original Message-
> From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
> Sent: Wednesday, November 20, 2013 2:59 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> Hi,
> 
> Yes, definitely yes.
> 
> It's just a bootstrap problem - you can't both have a reliable, resilient
> loadbalancer that can be respawned, and not store all the data necessary to
> respawn it.
> 

Not necessarily. Devices can be in HA or clustering mode. Any configuration 
that is sent to one device will be synced with the other paired devices 
securely, and failover would happen at the right time.

> I agree there are privacy concerns, just as there are with any hoster.
> But if you don't trust your hoster with your SSL certs, you probably shouldn't
> host any content that matters with them.
> 

I am by no means an expert in this area, but I think it is not a question of 
trust; rather, it is a fear that a security loophole in the controller could 
be exploited to read the certificates.

I don't know of any specific concerns, though.

> Cheers,
> 
> On Wed, 2013-11-20 at 08:43 +, Samuel Bercovici wrote:
> > Hi Stephen,
> >
> > When this was discussed in the past, customer were not happy about
> storing their SSL certificates in the OpenStack database as plain fields as 
> they
> felt that this is not secured enough.
> > Do you say, that you are OK with storing SSL certificates in  the OpenStack
> database?
> >
> > -Sam.
> >
> >
> > -Original Message-
> > From: Stephen Gran [mailto:stephen.g...@theguardian.com]
> > Sent: Wednesday, November 20, 2013 10:15 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> >
> > On 19/11/13 16:33, Clint Byrum wrote:
> > > Excerpts from Vijay Venkatachalam's message of 2013-11-19 05:48:43 -
> 0800:
> > >> Hi Sam, Eugene,&  Avishay, etal,
> > >>
> > >>  Today I spent some time to create a write-up for SSL
> Termination not exactly design doc. Please share your comments!
> > >>
> > >>
> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
> > >> YT
> > >> vMkMJ_inbo/edit
> > >>
> > >> Would like comments/discussion especially on the following note:
> > >>
> > >> SSL Termination requires certificate management. The ideal way is to
> handle this via an independent IAM service. This would take time to
> implement so the thought was to add the certificate details in VIP resource
> and send them directly to device. Basically don't store the certificate key in
> the DB there by avoiding security concerns of maintaining certificates in
> controller.
> >
> > I don't see why it does.  Nothing in openstack needs to trust user-uploaded
> certs.  Just storing them as independent certificate objects that can be
> referenced by N VIPs makes sense to me.
> >
> > If the backend is SSL, I would think you could do one of:
> > a) upload client certs
> > b) upload CA that has signed backend certs
> > c) opt to disable cert checking for backends
> >
> > With the default being c).
> >
> > Cheers,
> > --
> > Stephen Gran
> > Senior Systems Integrator - theguardian.com Please consider the
> environment before printing this email.
> > --
> > Visit theguardian.com
> >
> > On your mobile, download the Guardian iPhone app
> theguardian.com/iphone and our iPad edition theguardian.com/iPad
> > Save up to 33% by subscribing to the Guardian and Observer - choose the
> papers you want and get full digital access.
> > Visit subscribe.theguardian.com
> >
> > This e-mail and all attachments are confidential and may also be privileged.
> If you are not the named recipient, please notify the sender and delete the
> e-mail and all attachments immediately.
> > Do not disclose the contents to another person. You may not use the
> information for any purpose, or store, or copy, it in any way.
> >
> > Guardian News & Media Limited is not liable for any computer viruses or
> other material transmitted with or as part of this e-mail. You should employ
> virus checking software.
> >
> > Guardian News & Media Limited
> >
> > A member of Guardian Media Group plc
> > Registered Office
> > PO Box 68164
> > Kings Place
> > 90 York Way
> > London
> > N1P 2AP
> >
> > Registered in England Number 908396
> >
> > --
> > 
> >
> >
> 
> --
> Stephen Gran
> Senior Systems Integrator - The Guardian
> 

Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Sylvain Bauza

Hi Yuriy,

Le 20/11/2013 11:56, Yuriy Taraday a écrit :
Looking at implementations in Keystone and Nova, I found only one use 
for is_admin, but it is essential.


Whenever in code you need to run a piece of code with admin 
privileges, you can create a new context with  is_admin=True keeping 
all other parameters as is, run code requiring admin access and then 
revert context back.
My first thought was: "Hey, why don't they just add an 'admin' role 
then?". But what if in the current deployment the admin role is named like 
'TheVerySpecialAdmin'? What if the user has tweaked policy.json to better 
suit their needs?


So my current understanding is (and I suggest to follow this logic):
- 'admin' role in context.roles can vary, it's up to cloud admin to 
set necessary value in policy.json;
- 'is_admin' flag is used to elevate privileges from code and its 
name is fixed;
- policy check should assume that user is admin if either special role 
is present or is_admin flag is set.



Yes indeed, that's what came to my mind as well. Looking at Nova, I 
found a "context_is_admin" policy in policy.json that lets you say 
which role is admin [1]; it is matched in policy.py [2], which 
is in turn called when creating a context [3].


I'm OK copying that, any objections to it ?


[1] https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L2
[2] https://github.com/openstack/nova/blob/master/nova/policy.py#L116
[3] https://github.com/openstack/nova/blob/master/nova/context.py#L102
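The approach boils down to something like the following (a deliberately tiny rule evaluator for illustration — real code would use the common policy engine, and the rule string is only an example):

```python
# Sketch: the "context_is_admin" rule lives in policy.json, so a deployer
# can rename the admin role without touching code.  Only "role:<name>"
# clauses joined by "or" are handled here.

POLICY = {"context_is_admin": "role:admin or role:TheVerySpecialAdmin"}

def context_is_admin(roles, policy=POLICY):
    """Evaluate the context_is_admin rule against a user's role list."""
    for clause in policy["context_is_admin"].split(" or "):
        kind, _, value = clause.partition(":")
        if kind == "role" and value in roles:
            return True
    return False

print(context_is_admin(["member"]))               # False
print(context_is_admin(["TheVerySpecialAdmin"]))  # True
```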


Re: [openstack-dev] Propose "project story wiki" idea

2013-11-20 Thread Thierry Carrez
Boris Pavlovic wrote:
> The idea of this proposal is that every OpenStack project should have
> "story" wiki page. It means to publish every week one short message that
> contains the most interesting updates for the last week, and a high-level
> road map for the coming week. So reading this for 10-15 minutes, you can see
> what changed in the project, and get a better understanding of its
> high-level road map.
> [...]

I like the idea, can be very short updates, I don't think it should be
automated (and it doesn't have to be every week if there is nothing to say).

Ideally we would have a single forum for all of those, rather than have
to fish for each appropriate wiki page. If everyone posted to planet.o.o
that would be a start... Some other aggregator or site (like the one
Flavio suggested) could also be used.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Vijay Venkatachalam


> -Original Message-
> From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
> Sent: Wednesday, November 20, 2013 3:01 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> Hi,
> 
> On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
> > Hi,
> >
> >
> >
> > Evgeny has outlined the wiki for the proposed change at:
> > https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line
> > with what was discussed during the summit.
> >
> > The
> >
> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
> YTvMkMJ_inbo/edit discuss in addition Certificate Chains.
> >
> >
> >
> > What would be the benefit of having a certificate that must be
> > connected to VIP vs. embedding it in the VIP?
> 
> You could reuse the same certificate for multiple loadbalancer VIPs.
> This is a fairly common pattern - we have a dev wildcard cert that is self-
> signed, and is used for lots of VIPs.
> 
If certificates can be totally independent and reusable, that would be 
awesome.
But even otherwise, a certificate connected to a VIP is better modeling and 
provides an easier migration path towards an independent certificate resource.
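The difference between the two models can be shown with a toy sketch (all names here are illustrative, not the proposed API): an independent certificate resource lets N VIPs share one certificate, which the embedded model cannot express.

```python
# Sketch: certificates as first-class objects referenced by id, so a
# single (e.g. wildcard) certificate can back many VIPs.

class Certificate(object):
    def __init__(self, cert_id, cert_pem, key_ref):
        self.cert_id = cert_id
        self.cert_pem = cert_pem
        # Reference to wherever the private key actually lives (device,
        # a future IAM/Barbican service, ...) rather than a key column
        # in the controller's DB.
        self.key_ref = key_ref

class Vip(object):
    def __init__(self, vip_id, certificate_id=None):
        self.vip_id = vip_id
        self.certificate_id = certificate_id  # shared across VIPs

wildcard = Certificate("cert-1", "-----BEGIN CERTIFICATE-----...",
                       "device:slot0")
vips = [Vip("vip-%d" % i, certificate_id=wildcard.cert_id)
        for i in range(3)]
print(all(v.certificate_id == "cert-1" for v in vips))  # True
```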

> > When we get a system that can store certificates (ex: Barbican), we
> > will add support to it in the LBaaS model.
> 
> It probably doesn't need anything that complicated, does it?
> 
> Cheers,
> --
> Stephen Gran
> Senior Systems Integrator - The Guardian
> 


[openstack-dev] [Swift] Server Side Encryption

2013-11-20 Thread David Hadas

Hi all,

We created a wiki page discussing the addition of server-side encryption
to Swift:
"The general scheme is to create a swift proxy middleware that will encrypt
and sign the object data during PUT and check the signature + decrypt it
during GET. The target is to create two domains - the user domain between
the client and the middleware where the data is decrypted and the system
domain between the middleware and the data at rest (on the device) where
the data is encrypted.
Design goals include: (1) Extend swift as necessary but without changing
existing swift behaviors and APIs; (2) Support encrypting data incoming
from unchanged clients"

See:  https://wiki.openstack.org/wiki/Swift/server-side-enc
We would like to invite feedback.
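As a toy illustration of the two-domain idea (not the proposed design — a real implementation would use a proper AEAD cipher with streaming, per-object keys, and signature verification), a proxy middleware could wrap PUT and GET like this:

```python
import io

class EncrypterMiddleware(object):
    """Toy WSGI sketch: encrypt object data on PUT, decrypt it on GET.
    The XOR 'cipher' is only a stand-in for real encryption (e.g. AES-GCM)."""

    def __init__(self, app, key):
        self.app = app
        self.key = key

    def _xor(self, data):
        return bytes(b ^ self.key for b in data)

    def __call__(self, environ, start_response):
        if environ["REQUEST_METHOD"] == "PUT":
            # User domain -> system domain: only ciphertext goes to disk.
            body = environ["wsgi.input"].read()
            environ["wsgi.input"] = io.BytesIO(self._xor(body))
        resp = self.app(environ, start_response)
        if environ["REQUEST_METHOD"] == "GET":
            # System domain -> user domain: decrypt on the way out.
            return [self._xor(chunk) for chunk in resp]
        return resp

# Minimal in-memory backend standing in for the rest of the proxy pipeline.
store = {}

def backend(environ, start_response):
    path = environ["PATH_INFO"]
    if environ["REQUEST_METHOD"] == "PUT":
        store[path] = environ["wsgi.input"].read()
        start_response("201 Created", [])
        return [b""]
    start_response("200 OK", [])
    return [store[path]]

proxy = EncrypterMiddleware(backend, key=0x5A)
proxy({"REQUEST_METHOD": "PUT", "PATH_INFO": "/v1/a/c/o",
       "wsgi.input": io.BytesIO(b"secret")}, lambda *a: None)
plain = b"".join(proxy({"REQUEST_METHOD": "GET", "PATH_INFO": "/v1/a/c/o"},
                       lambda *a: None))
print(plain)                            # b'secret'
print(store["/v1/a/c/o"] != b"secret")  # True - data at rest is encrypted
```

An unchanged client sees plaintext on both sides of the middleware, while everything behind it (the "data at rest" domain) only ever holds ciphertext.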

DH


Regards,
David Hadas,
Openstack Swift ATC, Architect, Master Inventor
IBM Research Labs, Haifa
Tel:Int+972-4-829-6104
Fax:   Int+972-4-829-6112




Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Dina Belova
I suppose it's ok - just rebase onto Swann's commit to have the is_admin
param to use.


On Wed, Nov 20, 2013 at 3:21 PM, Sylvain Bauza wrote:

>  Hi Yuriy,
>
> Le 20/11/2013 11:56, Yuriy Taraday a écrit :
>
>  Looking at implementations in Keystone and Nova, I found only one use
> for is_admin, but it is essential.
>
>  Whenever in code you need to run a piece of code with admin privileges,
> you can create a new context with  is_admin=True keeping all other
> parameters as is, run code requiring admin access and then revert context
> back.
> My first thought was: "Hey, why don't they just add an 'admin' role then?".
> But what if in the current deployment the admin role is named like
> 'TheVerySpecialAdmin'? What if the user has tweaked policy.json to better
> suit their needs?
>
>  So my current understanding is (and I suggest to follow this logic):
> - 'admin' role in context.roles can vary, it's up to cloud admin to set
> necessary value in policy.json;
> - 'is_admin' flag is used to elevate privileges from code and its name is
> fixed;
> - policy check should assume that user is admin if either special role is
> present or is_admin flag is set.
>
>
>
> Yes indeed, that's what came to my mind as well. Looking at Nova, I found
> a "context_is_admin" policy in policy.json that lets you say which role
> is admin [1]; it is matched in policy.py [2], which is in turn called
> when creating a context [3].
>
> I'm OK copying that, any objections to it ?
>
>
> [1] https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L2
> [2] https://github.com/openstack/nova/blob/master/nova/policy.py#L116
> [3] https://github.com/openstack/nova/blob/master/nova/context.py#L102
>
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-20 Thread Roman Bogorodskiy
I know it was brought up on the list a number of times, but...

If we're talking about storing commit ids for each module and writing
some shell scripts for that, isn't it a chance to reconsider using git
submodules?
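Whatever mechanism wins (submodules, or a commit-id file per module), the immediate pain point — producing the list of synced changes for a commit message — is scriptable. A rough sketch, assuming a local oslo-incubator checkout and a recorded last-synced commit id (the function and workflow are hypothetical):

```python
import subprocess

def synced_changes(oslo_repo, module_path, last_synced, head="HEAD"):
    """One-line summaries of oslo-incubator commits touching module_path
    since last_synced - paste-ready for a sync commit message."""
    out = subprocess.check_output(
        ["git", "-C", oslo_repo, "log", "--oneline",
         "%s..%s" % (last_synced, head), "--", module_path])
    return out.decode().splitlines()
```

Storing the last-synced commit id per module in each consuming project's tree, as suggested in the quoted thread, would let a small wrapper emit this list automatically at sync time.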




On Wed, Nov 20, 2013 at 12:37 PM, Elena Ezhova  wrote:

>
> 20.11.2013, 06:18, "John Griffith" :
>
>
> On Mon, Nov 18, 2013 at 3:53 PM, Mark McLoughlin 
> wrote:
>
> >  On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
> >>  Random OSLO updates with no list of what changed, what got fixed etc
> >>  are unlikely to get review attention - doing such a review is
> >>  extremely difficult. I was -2ing them and asking for more info, but
> >>  they keep popping up. I'm really not sure what the best way of
> >>  updating from OSLO is, but this isn't it.
> >  Best practice is to include a list of changes being synced, for example:
> >
> >https://review.openstack.org/54660
> >
> >  Every so often, we throw around ideas for automating the generation of
> >  this changes list - e.g. cinder would have the oslo-incubator commit ID
> >  for each module stored in a file in git to tell us when it was last
> >  synced.
> >
> >  Mark.
> >
>>
>> Been away on vacation so I'm afraid I'm a bit late on this... but;
>>
>> I think the point Duncan is bringing up here is that there are some
>> VERY large and significant patches coming from OSLO pulls.  The DB
>> patch in particular being over 1K lines of code to a critical portion
>> of the code is a bit unnerving to try and do a review on.  I realize
>> that there's a level of trust that goes with the work that's done in
>> OSLO and synchronizing those changes across the projects, but I think
>> a few key concerns here are:
>>
>> 1. Doing huge pulls from OSLO like the DB patch here are nearly
>> impossible to thoroughly review and test.  Over time we learn a lot
>> about real usage scenarios and the database and tweak things as we go,
>> so seeing a patch set like this show up is always a bit unnerving and
>> frankly nobody is overly excited to review it.
>>
>> 2. Given a certain level of *trust* for the work that folks do on the
>> OSLO side in submitting these patches and new additions, I think some
>> of the responsibility on the review of the code falls on the OSLO
>> team.  That being said there is still the issue of how these changes
>> will impact projects *other* than Nova which I think is sometimes
>> neglected.  There have been a number of OSLO synchs pushed to Cinder
>> that fail gating jobs, some get fixed, some get abandoned, but in
>> either case it shows that there wasn't any testing done with projects
>> other than Nova (PLEASE note, I'm not referring to this particular
>> round of patches or calling any patch set out, just stating a
>> historical fact).
>>
>> 3. We need better documentation in commit messages explaining why the
>> changes are necessary and what they do for us.  I'm sorry, but the
>> answer "it's the latest in OSLO and Nova already has it" is not
>> enough of an answer in my opinion.  The patches mentioned in this
>> thread met the minimum requirements because they at
>> least reference the OSLO commit which is great.  In addition I'd like
>> to see something to address any discovered issues or testing done with
>> the specific projects these changes are being synced to.
>>
>> I'm in no way saying I don't want Cinder to play nice with the common
>> code or to get in line with the way other projects do things but I am
>> saying that I think we have a ways to go in terms of better
>> communication here and in terms of OSLO code actually keeping in mind
>> the entire OpenStack eco-system as opposed to just changes that were
>> needed/updated in Nova.  Cinder in particular went through some pretty
>> massive DB re-factoring and changes during Havana and there was a lot
>> of really good work there but it didn't come without a cost and the
>> benefits were examined and weighed pretty heavily.  I also think that
>> some times the indirection introduced by adding some of the
>> openstack.common code is unnecessary and in some cases makes things
>> more difficult than they should be.
>>
>> I'm just not sure that we always do a very good ROI investigation or
>> risk assessment on changes, and that opinion applies to ALL changes in
>> OpenStack projects, not OSLO specific or anything else.
>>
>> All of that being said, a couple of those syncs on the list are
>> outdated.  We should start by doing a fresh pull for these and if
>> possible add some better documentation in the commit messages as to
>> the justification for the patches that would be great.  We can take a
>> closer look at the changes and the history behind them and try to get
>> some review progress made here.  Mark mentioned some good ideas

Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Samuel Bercovici
HI,

Besides a forward looking model do you see other differences?

-Sam.

-Original Message-
From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
Sent: Wednesday, November 20, 2013 1:22 PM
To: stephen.g...@guardian.co.uk; OpenStack Development Mailing List (not for 
usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up



> -Original Message-
> From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
> Sent: Wednesday, November 20, 2013 3:01 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> Hi,
> 
> On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
> > Hi,
> >
> >
> >
> > Evgeny has outlined the wiki for the proposed change at:
> > https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line 
> > with what was discussed during the summit.
> >
> > The
> >
> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
> YTvMkMJ_inbo/edit discuss in addition Certificate Chains.
> >
> >
> >
> > What would be the benefit of having a certificate that must be 
> > connected to VIP vs. embedding it in the VIP?
> 
> You could reuse the same certificate for multiple loadbalancer VIPs.
> This is a fairly common pattern - we have a dev wildcard cert that is 
> self- signed, and is used for lots of VIPs.
> 
If certificates can be totally independent and reusable, that would be 
awesome.
But even otherwise, a certificate connected to a VIP is better modeling and 
provides an easier migration path towards an independent certificate resource.

> > When we get a system that can store certificates (ex: Barbican), we 
> > will add support to it in the LBaaS model.
> 
> It probably doesn't need anything that complicated, does it?
> 
> Cheers,
> --
> Stephen Gran
> Senior Systems Integrator - The Guardian
> 
> 


Re: [openstack-dev] [Neutron] - Vendor specific erros

2013-11-20 Thread Avishay Balderman
Hi
We need to take into account that the tenant is well aware of the LBaaS 
provider (driver) he is working with.  After all, when the tenant creates a 
Pool he needs to select a provider.
Can you please change the bug status? 
https://bugs.launchpad.net/neutron/+bug/1248423
The current status is "Incomplete", which is wrong. A much better status would 
be "Opinion" or "Confirmed".

Thanks
Avishay
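The pattern under discussion is small; a sketch follows (the mixin name echoes the code being discussed, while the helper and the sample message are hypothetical):

```python
# Sketch: the shared mixin carries a generic status plus a free-form
# description; a driver keeps the state machine generic and portable,
# and puts its vendor-specific detail only in the description.

class HasStatusDescription(object):
    status = "PENDING_CREATE"
    status_description = None

class Vip(HasStatusDescription):
    def __init__(self, vip_id):
        self.vip_id = vip_id

def set_error(entity, vendor_detail):
    entity.status = "ERROR"                    # generic, portable state
    entity.status_description = vendor_detail  # vendor-specific text

vip = Vip("vip-1")
set_error(vip, "example vendor-specific failure detail")
print(vip.status)  # ERROR
```

Kept this way, API consumers can rely on the same small set of states across drivers, and only the human-readable description varies per vendor.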

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Tuesday, November 19, 2013 7:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] - Vendor specific erros

Thanks Avishay.

I think the status description error was introduced with this aim.
Whether vendor-specific error descriptions can make sense to a tenant, that's a 
good question.
Personally, I feel that as a tenant that information would not be very useful 
to me, as I would not be able to do any debugging or maintenance on the appliance 
where the error was generated; on the other hand, as a deployer I might find 
that message very useful, but I would probably look for it in the logs rather 
than in API responses; furthermore, as a deployer I might find it more convenient 
not to provide tenants any detail about the particular driver being used.

That said, this is just my personal opinion. I'm sure there are 
plenty of valid use cases for giving tenants vendor-specific error messages.

Salvatore

On 19 November 2013 13:00, Avishay Balderman wrote:
Hi Salvatore
I think you are mixing up the state machine (ACTIVE, PENDING_XYZ, etc.) and 
the status description.
All I want to do is to write a vendor specific error message when the state is 
ERROR.
I DO NOT want to touch the state machine.

See: https://bugs.launchpad.net/neutron/+bug/1248423

Thanks

Avishay

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Thursday, November 14, 2013 1:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] - Vendor specific errors

In general, an error state makes sense.
I think you might want to send more details about how this state plugs into the 
load balancer state machine, but I reckon it is a generally non-recoverable 
state which could be reached from any other state; in that case it would be a 
generic enough case that it might be supported by all drivers.

It is good to point out, however, that driver-specific state transitions are, 
in my opinion, to be avoided; applications using the Neutron API would become 
non-portable, or at least users of the Neutron API would need to be aware that 
an entity might have a different state machine from driver to driver, which I 
reckon would be bad enough for a developer to decide to switch over to 
CloudStack or AWS APIs!

Salvatore

PS: On the last point I am obviously joking, but not so much.


On 12 November 2013 08:00, Avishay Balderman 
mailto:avish...@radware.com>> wrote:

Hi
Some of the DB entities in the LBaaS domain inherit from 
HasStatusDescription.
With this we can set the entity status (ACTIVE, PENDING_CREATE, etc.) and a 
description for the status.
There are flows in the Radware LBaaS driver where the driver needs to set the 
entity status to ERROR, and it is able to set a description of the error; the 
description is Radware-specific.
My question is: does it make sense to do that?
After all, the tenant is aware of the fact that he works against a Radware 
load balancer; the tenant selected Radware as the LBaaS provider in the UI.
Any reason not to do that?

This is a generic issue/question and does not relate to a specific plugin or 
driver.
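For what it's worth, the pattern under discussion can be sketched as follows. This is an illustration only: plain dicts stand in for the LBaaS DB entities that mix in HasStatusDescription, and the helper name and error message are made up, not Radware driver code.

```python
# Illustrative sketch only: a plain dict stands in for an LBaaS DB
# entity that mixes in HasStatusDescription. The helper and the
# message are hypothetical, not Radware driver code.

def set_error(entity, vendor_message):
    """Record the generic ERROR state plus a vendor-specific detail."""
    entity['status'] = 'ERROR'                     # portable state machine value
    entity['status_description'] = vendor_message  # vendor-specific free text

vip = {'status': 'PENDING_CREATE', 'status_description': None}
set_error(vip, 'Radware: service provisioning failed on the ADC')
print(vip['status'], '-', vip['status_description'])
```

The point of the split is that the `status` field stays within the portable state machine, while only the free-text description carries vendor detail.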

Thanks

Avishay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum]: Etherpad notes for today's workshop

2013-11-20 Thread Roshan Agrawal
https://etherpad.openstack.org/p/SolumWorkshopTrack1Notes

I also took a stab at documenting the minimal set of features Solum should 
implement for the first milestone at the etherpad page above

Thanks & Regards,
Roshan Agrawal
Direct:    512.874.1278
Mobile:  512.354.5253
roshan.agra...@rackspace.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FOSDEM 2014 devroom CFP

2013-11-20 Thread Thierry Carrez
Only 10 days left -- and there aren't that many OpenStack talks
submitted yet.

If you plan to be around and you have a topic to discuss that may be
interesting for a developer audience, please consider submitting a talk!

Thierry Carrez wrote:
> There will be a two-day "Virtualisation and IaaS" devroom at FOSDEM 2014
> (Brussels, February 1-2). See below for the CFP.
> 
> Note: For this edition we'll avoid high-level, generic project
> presentations and give priority to deep dives and developer-oriented
> content, so please take that into account before submitting anything.
> 
> --
> Call for Participation
> --
> 
> The scope for this devroom is open source, openly-developed projects in
> the areas of virtualisation and IaaS type clouds, ranging from low level
> to data center, up to cloud management platforms and cloud resource
> orchestration.
> 
> Sessions should always target a developer audience. Bonus points for
> collaborative sessions that would be appealing to developers from
> multiple projects.
> 
> We are particularly interested in the following themes:
> * low level virtualisation aspects
> * new features in classic and container-based virtualisation technologies
> * new use cases for virtualisation, such as virtualisation in mobile,
> automotive and embedded in general
> * other resource virtualisation technologies: networking, storage, …
> * deep technical dives into specific IaaS or virtualisation management
> projects features
> * relationship between IaaS projects and specific dependencies (not just
> virtualisation)
> * integration and development leveraging solutions from multiple projects
> 
> 
> Important dates
> ---
> 
> Submission deadline: Sunday, December 1st, 2013
> Acceptance notifications: Sunday, December 15th, 2013
> Final schedule announcement: Friday January 10th, 2014
> Devroom @ FOSDEM'14: February 1st & 2nd, 2014
> 
> 
> Practical
> -
> 
> Submissions should be 40 minutes, and consist of a 30 minute
> presentation with 10 minutes of Q&A or 40 minutes of discussions (e.g.,
> requests for feedback, open discussions, etc.). Interactivity is
> encouraged, but optional. Talks are in English only.
> 
> We do not provide travel assistance or reimbursement of travel expenses
> for accepted speakers.
> 
> Submissions should be made via the FOSDEM submission page at
> https://penta.fosdem.org/submission/FOSDEM14 :
> 
> * If necessary, create a Pentabarf account and activate it
> * In the “Person” section, provide First name, Last name (in the
> “General” tab), Email (in the “Contact” tab) and Bio (“Abstract” field
> in the “Description” tab)
> * Submit a proposal by clicking on “Create event"
> * Important! Select the "Virtualisation and IaaS" track (on the
> “General” tab)
> * Provide the title of your talk (“Event title” in the “General” tab)
> * Provide a 250-word description of the subject of the talk and the
> intended audience (in the “Abstract” field of the “Description” tab)
> * Provide a rough outline of the talk or goals of the session (a short
> list of bullet points covering topics that will be discussed) in the
> “Full description” field in the “Description” tab
> 
> 
> Contact
> ---
> 
> For questions w.r.t. the Virtualisation and IaaS DevRoom at FOSDEM'14,
> please contact the organizers via
> fosdem14-virt-and-iaas-devr...@googlegroups.com (or via
> https://groups.google.com/forum/#!forum/fosdem14-virt-and-iaas-devroom).
> 
> 
> This CFP is also visible at:
> https://groups.google.com/forum/#!topic/fosdem14-virt-and-iaas-devroom/04y5YkyqzIo
> 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-20 Thread Isaku Yamahata
On Tue, Nov 19, 2013 at 08:59:38AM -0800,
Edgar Magana  wrote:

> Do you have in mind any implementation, any BP?
> We could actually work on this together, all plugins will get the benefits
> of a better implementation.

Yes, let's work together. Here is my blueprint (it's somewhat old, so it
needs to be updated):
https://blueprints.launchpad.net/neutron/+spec/fix-races-of-db-based-plugin
https://docs.google.com/file/d/0B4LNMvjOzyDuU2xNd0piS3JBMHM/edit

So far I've thought of status changes (adding more statuses) and a locking
protocol, but TaskFlow seems worth looking at before starting, and another
possible approach is decoupling the backend processing from the API call,
as Salvatore suggested the NVP plugin does.
Even with the TaskFlow or decoupling approach, some kind of enhanced status
change/locking protocol will be necessary for good performance when creating
many ports at once.
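A hedged sketch of the two-phase pattern under discussion (precommit work inside the DB transaction, postcommit work after it), with the extra pending status mentioned above. This is a toy stand-in, not the actual ML2 MechanismDriver code; the class and status names are illustrative.

```python
# Toy sketch of the pre/postcommit split discussed in this thread.
# precommit work happens inside the DB transaction (raising here rolls
# it back); postcommit talks to the backend after commit, and a failure
# there means the plugin must clean up the partially-created resource.

class SketchMechanismDriver:
    def create_network_precommit(self, network):
        # inside the transaction: validate/allocate, leave a pending marker
        network['status'] = 'PENDING_CREATE'

    def create_network_postcommit(self, network):
        # after commit: push to the controller, then flip to ACTIVE
        network['status'] = 'ACTIVE'

driver = SketchMechanismDriver()
net = {'status': None}
driver.create_network_precommit(net)   # ... DB transaction commits here ...
driver.create_network_postcommit(net)
print(net['status'])
```

The race window described below is exactly the gap between the two calls: another request can observe the committed `PENDING_CREATE` row before postcommit completes.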

thanks,

> 
> Thanks,
> 
> Edgar
> 
> On 11/19/13 3:57 AM, "Isaku Yamahata"  wrote:
> 
> >On Mon, Nov 18, 2013 at 03:55:49PM -0500,
> >Robert Kukura  wrote:
> >
> >> On 11/18/2013 03:25 PM, Edgar Magana wrote:
> >> > Developers,
> >> > 
> >> > This topic has been discussed before but I do not remember if we have
> >>a
> >> > good solution or not.
> >> 
> >> The ML2 plugin addresses this by calling each MechanismDriver twice. The
> >> create_network_precommit() method is called as part of the DB
> >> transaction, and the create_network_postcommit() method is called after
> >> the transaction has been committed. Interactions with devices or
> >> controllers are done in the postcommit methods. If the postcommit method
> >> raises an exception, the plugin deletes that partially-created resource
> >> and returns the exception to the client. You might consider a similar
> >> approach in your plugin.
> >
> >Splitting the work into two phases, pre/post, is a good approach.
> >But there still remains a race window.
> >Once the transaction is committed, the result is visible to the outside,
> >so concurrent requests to the same resource will be racy.
> >There is a window after pre_xxx_yyy() and before post_xxx_yyy() where
> >other requests can be handled.
> >
> >The state machine needs to be enhanced, I think. (plugins need
> >modification)
> >For example, adding more states like pending_{create, delete, update}.
> >Also we would like to consider serializing between operations on ports
> >and subnets, or between operations on subnets and networks, depending on
> >performance requirements.
> >(Or carefully audit complex status changes, i.e.
> >changing a port during subnet/network update/deletion.)
> >
> >I think it would be useful to establish reference locking policy
> >for ML2 plugin for SDN controllers.
> >Thoughts or comments? If this is considered useful and acceptable,
> >I'm willing to help.
> >
> >thanks,
> >Isaku Yamahata
> >
> >> -Bob
> >> 
> >> > Basically, if concurrent API calls are sent to Neutron, all of them are
> >> > sent to the plug-in level where two actions have to be made:
> >> > 
> >> > 1. DB transaction - not just for data persistence but also to collect
> >> > the information needed for the next action
> >> > 2. Plug-in back-end implementation - in our case a call to the python
> >> > library that consequently calls the PLUMgrid REST GW (soon SAL)
> >> > 
> >> > For instance:
> >> > 
> >> > def create_port(self, context, port):
> >> > with context.session.begin(subtransactions=True):
> >> > # Plugin DB - Port Create and Return port
> >> > port_db = super(NeutronPluginPLUMgridV2,
> >> > self).create_port(context,
> >> >   
> >> port)
> >> > device_id = port_db["device_id"]
> >> > if port_db["device_owner"] == "network:router_gateway":
> >> > router_db = self._get_router(context, device_id)
> >> > else:
> >> > router_db = None
> >> > try:
> >> > LOG.debug(_("PLUMgrid Library: create_port() called"))
> >> > # Back-end implementation
> >> > self._plumlib.create_port(port_db, router_db)
> >> > except Exception:
> >> > ...
> >> > 
> >> > The way we have implemented it at the plugin level in Havana (and even
> >> > in Grizzly) is that both actions are wrapped in the same "transaction",
> >> > which automatically rolls back any operation done to its original
> >> > state, mostly protecting the DB from being left in an inconsistent
> >> > state or with leftover data if the back-end part fails.
> >> > The problem we are experiencing is that when concurrent calls to the
> >> > same API are sent, the operations at the plug-in back-end take long
> >> > enough to make the next concurrent API call get stuck at the DB
> >> > transaction level, which creates a hung state for the Neutron server
> >> > to the point that all concurrent API calls will fail.
> >> > 
> >> > This can be fixed if we include some "locking" system such as calling:
> >> > 
> >> > from neutron.common import uti

Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Sylvain Bauza
Well, I'm guessing the best way is the contrary: Swann needs to rebase 
from the change I proposed about policies. The latter is still a draft; I'm 
committing myself to finish it by today.


-Sylvain

Le 20/11/2013 12:42, Dina Belova a écrit :
I suppose it's ok - just rebase from Swann's commit to have the is_admin 
param to use.



On Wed, Nov 20, 2013 at 3:21 PM, Sylvain Bauza > wrote:


Hi Yuriy,

Le 20/11/2013 11:56, Yuriy Taraday a écrit :

Looking at implementations in Keystone and Nova, I found only one
use for is_admin, but it is essential.

Whenever in code you need to run a piece of code with admin
privileges, you can create a new context with is_admin=True,
keeping all other parameters as is, run the code requiring admin
access, and then revert the context back.
My first thought was: "Hey, why don't they just add an 'admin' role
then?". But what if in the current deployment the admin role is named
something like 'TheVerySpecialAdmin'? What if the user has tweaked
policy.json to better suit one's needs?

So my current understanding is (and I suggest to follow this logic):
- the 'admin' role in context.roles can vary; it's up to the cloud admin
to set the necessary value in policy.json;
- the 'is_admin' flag is used to elevate privileges from code, and
its name is fixed;
- the policy check should assume that the user is admin if either the
special role is present or the is_admin flag is set.



Yes indeed, that's something that came to my mind. Looking at Nova,
I found a "context_is_admin" policy in policy.json allowing you to
say which role is admin or not [1]; it is matched in policy.py
[2], which itself is called when creating a context [3].

I'm OK with copying that; any objections?


[1]
https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L2
[2] https://github.com/openstack/nova/blob/master/nova/policy.py#L116
[3]
https://github.com/openstack/nova/blob/master/nova/context.py#L102
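The combined check being discussed can be sketched like this (the rule name follows Nova's policy.json convention; the function itself is illustrative, not Nova code):

```python
# Sketch of the "context_is_admin" logic discussed above: a request is
# treated as admin if either the deployer-configurable admin role is
# present or the code-level is_admin flag was set. Illustrative only.

ADMIN_ROLE = 'admin'  # a deployer could rename this via policy.json

def context_is_admin(roles, is_admin=False):
    return is_admin or ADMIN_ROLE in roles

print(context_is_admin(['member']))                 # regular user
print(context_is_admin(['member'], is_admin=True))  # elevated from code
print(context_is_admin(['admin']))                  # role-based admin
```

Keeping the role name in one configurable place is what lets a deployment with 'TheVerySpecialAdmin' keep working, while the fixed is_admin flag still allows elevation from code.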

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] VNC issue with multi compute node with openstack havana

2013-11-20 Thread Vikash Kumar
Hi,

  I used devstack Multi-Node + VLANs to install openstack-havana recently.
Installation was successful and I verified basic things like VM launch and
pings between VMs.

  I have two nodes: 1. Ctrl+Compute  2. Compute

  The VM which gets launched on the second compute node (2 above)
doesn't get a VNC console. I tried to access it from both Horizon and the
URL given by the nova CLI.

   The *n-novnc* screen on the first node, which is the controller (1 above),
gave this error log:

   Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/websockify/websocket.py",
line 711, in top_new_client
self.new_client()
  File "/opt/stack/nova/nova/console/websocketproxy.py", line 68, in
new_client
tsock = self.socket(host, port, connect=True)
  File "/usr/local/lib/python2.7/dist-packages/websockify/websocket.py",
line 188, in socket
sock.connect(addrs[0][4])
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio.py", line
192, in connect
socket_checkerr(fd)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio.py", line
46, in socket_checkerr
raise socket.error(err, errno.errorcode[err])
error: [Errno 111] ECONNREFUSED


  The VNC-related configuration in nova.conf on the Ctrl+Compute node:

   vncserver_proxyclient_address = 127.0.0.1
   vncserver_listen = 127.0.0.1
   vnc_enabled = true
   xvpvncproxy_base_url = http://192.168.2.151:6081/console
   novncproxy_base_url = http://192.168.2.151:6080/vnc_auto.html

   and on the second Compute node:
  /* I corrected the IP of the first two addresses; by default they are set to
127.0.0.1 */
   vncserver_proxyclient_address = 192.168.2.157
   vncserver_listen = 0.0.0.0
   vnc_enabled = true
   xvpvncproxy_base_url = http://192.168.2.151:6081/console
   novncproxy_base_url = http://192.168.2.151:6080/vnc_auto.html
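For comparison, a layout that typically works in a multi-node setup gives *every* node a proxyclient address the proxy host can actually reach; 127.0.0.1 only works while the proxy and the hypervisor are the same host. This is a hedged sketch using the IPs from the post, not a verified fix:

```ini
# Ctrl+Compute node (192.168.2.151):
vncserver_proxyclient_address = 192.168.2.151
vncserver_listen = 0.0.0.0

# Compute node (192.168.2.157):
vncserver_proxyclient_address = 192.168.2.157
vncserver_listen = 0.0.0.0
```

The ECONNREFUSED traceback above is consistent with the proxy dialing an address on which no VNC server is listening.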

I also added the hostname of the compute node to the hosts file of the
controller node. With this, error 111 was gone and a new error came:

connecting to: 192.168.2.157:-1
  7: handler exception: [Errno -8] Servname not supported for ai_socktype
  7: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/websockify/websocket.py",
line 711, in top_new_client
self.new_client()
  File "/opt/stack/nova/nova/console/websocketproxy.py", line 68, in
new_client
tsock = self.socket(host, port, connect=True)
  File "/usr/local/lib/python2.7/dist-packages/websockify/websocket.py",
line 180, in socket
socket.IPPROTO_TCP, flags)
  gaierror: [Errno -8] Servname not supported for ai_socktype


   What needs to be done to resolve this?

Thnx
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Diagnostic] Diagnostic API: summit follow-up

2013-11-20 Thread Oleg Gelbukh
Hi, fellow stackers,

There was a conversation during the 'Enhance debuggability' session at the
summit about a Diagnostic API which would allow the gate to get the 'state of
the world' of an OpenStack installation. The 'state of the world' includes
hardware- and operating-system-level configurations of the servers in a cluster.

This info would help to compare the expected effect of tests on a system
with its actual state, thus giving Tempest the ability to see into it
(whitebox tests) as one possible use case. Another use case is to provide
input for validation of OpenStack configuration files.

We're putting together an initial version of the data model of the API, with
example values, in the following etherpad:
https://etherpad.openstack.org/p/icehouse-diagnostic-api-spec

This version covers most hardware and system-level configurations managed
by OpenStack on a Linux system. What is missing from it? What information
would you like to see in such an API? Please feel free to share your thoughts
on the ML, or in the etherpad directly.
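To make the request for feedback concrete, a single server's entry in such a 'state of the world' document might look something like this. Every field name below is an assumption for illustration, not the etherpad's actual schema:

```json
{
  "hostname": "compute-01",
  "roles": ["compute"],
  "hardware": {
    "cpu": {"model": "Intel Xeon", "cores": 16},
    "memory_mb": 65536,
    "disks": [{"device": "/dev/sda", "size_gb": 500}]
  },
  "os": {
    "kernel": "3.11.0",
    "distribution": "Ubuntu 12.04",
    "sysctl": {"net.ipv4.ip_forward": 1}
  },
  "interfaces": [{"name": "eth0", "address": "10.0.0.11", "mtu": 1500}]
}
```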


--
Best regards,
Oleg Gelbukh
Mirantis Labs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-20 Thread john casey
Hi guys,

New to the discussion, so please correct me if I'm off base.

If I understand correctly, the controllers handle the state/race condition in
a controller-specific way. So what we are talking about is a new state flow
inside the OpenStack DB to describe the state of the network as known to
OpenStack. Is this correct?
Thanks,
john
-Original Message-
From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com] 
Sent: Wednesday, November 20, 2013 5:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Robert Kukura; Isaku Yamahata
Subject: Re: [openstack-dev] [Neutron] Race condition between DB layer and 
plugin back-end implementation

On Tue, Nov 19, 2013 at 08:59:38AM -0800, Edgar Magana  
wrote:

> Do you have in mind any implementation, any BP?
> We could actually work on this together, all plugins will get the 
> benefits of a better implementation.

Yes, let's work together. Here is my blueprint (it's somewhat old.
So needs to be updated.)
https://blueprints.launchpad.net/neutron/+spec/fix-races-of-db-based-plugin
https://docs.google.com/file/d/0B4LNMvjOzyDuU2xNd0piS3JBMHM/edit

Although I've thought of status change(adding more status) and locking protocol 
so far, TaskFlow seems something to look at before starting and another 
possible approach is decoupling backend process from api call as Salvatore 
suggested like NVP plugin.
Even with taskflow or decoupling approach, some kind of enhancing status 
change/locking protocol will be necessary for performance of creating many 
ports at once.

thanks,

> 
> Thanks,
> 
> Edgar
> 
> On 11/19/13 3:57 AM, "Isaku Yamahata"  wrote:
> 
> >On Mon, Nov 18, 2013 at 03:55:49PM -0500, Robert Kukura 
> > wrote:
> >
> >> On 11/18/2013 03:25 PM, Edgar Magana wrote:
> >> > Developers,
> >> > 
> >> > This topic has been discussed before but I do not remember if we 
> >> > have
> >>a
> >> > good solution or not.
> >> 
> >> The ML2 plugin addresses this by calling each MechanismDriver 
> >> twice. The
> >> create_network_precommit() method is called as part of the DB 
> >> transaction, and the create_network_postcommit() method is called 
> >> after the transaction has been committed. Interactions with devices 
> >> or controllers are done in the postcommit methods. If the 
> >> postcommit method raises an exception, the plugin deletes that 
> >> partially-created resource and returns the exception to the client. 
> >> You might consider a similar approach in your plugin.
> >
> >Splitting works into two phase, pre/post, is good approach.
> >But there still remains race window.
> >Once the transaction is committed, the result is visible to outside.
> >So the concurrent request to same resource will be racy.
> >There is a window after pre_xxx_yyy before post_xxx_yyy() where other 
> >requests can be handled.
> >
> >The state machine needs to be enhanced, I think. (plugins need
> >modification)
> >For example, adding more states like pending_{create, delete, update}.
> >Also we would like to consider serializing between operation of ports 
> >and subnets. or between operation of subnets and network depending on 
> >performance requirement.
> >(Or carefully audit complex status change. i.e.
> >changing port during subnet/network update/deletion.)
> >
> >I think it would be useful to establish reference locking policy for 
> >ML2 plugin for SDN controllers.
> >Thoughts or comments? If this is considered useful and acceptable, 
> >I'm willing to help.
> >
> >thanks,
> >Isaku Yamahata
> >
> >> -Bob
> >> 
> >> > Basically, if concurrent API calls are sent to Neutron, all of 
> >> > them
> >>are
> >> > sent to the plug-in level where two actions have to be made:
> >> > 
> >> > 1. DB transaction ? No just for data persistence but also to 
> >> > collect
> >>the
> >> > information needed for the next action 2. Plug-in back-end 
> >> > implementation ? In our case is a call to the
> >>python
> >> > library than consequentially calls PLUMgrid REST GW (soon SAL)
> >> > 
> >> > For instance:
> >> > 
> >> > def create_port(self, context, port):
> >> > with context.session.begin(subtransactions=True):
> >> > # Plugin DB - Port Create and Return port
> >> > port_db = super(NeutronPluginPLUMgridV2, 
> >> > self).create_port(context,
> >> >   
> >> port)
> >> > device_id = port_db["device_id"]
> >> > if port_db["device_owner"] == "network:router_gateway":
> >> > router_db = self._get_router(context, device_id)
> >> > else:
> >> > router_db = None
> >> > try:
> >> > LOG.debug(_("PLUMgrid Library: create_port() 
> >> > called")) # Back-end implementation
> >> > self._plumlib.create_port(port_db, router_db)
> >> > except Exception:
> >> > ...
> >> > 
> >> > The way we have implemented at the plugin-level in Havana (even 
> >> > in
> >> > Grizzly) is that both action are wrapped i

Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up

2013-11-20 Thread Vijay Venkatachalam
Yes. The following can be added:

1. Certificate chain, as you already observed
2. Backend certificates for trust, basically CA certs.
   These certificates will be used by the load balancer to validate the
certificates presented by the backend services.
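To illustrate what backend-certificate trust versus disabled checking means in practice, here is a small Python sketch using the stdlib ssl module. The CA file path is hypothetical, so the load call is left commented out; this is not LBaaS code, just the two TLS client postures a load balancer could take toward its backends.

```python
import ssl

# Posture 1: skip verification of backend certificates entirely
# (the permissive default discussed in this thread).
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# Posture 2: trust only backends whose certificates chain up to an
# operator-supplied CA bundle.
pinned = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
pinned.verify_mode = ssl.CERT_REQUIRED
# pinned.load_verify_locations(cafile='backend-ca.pem')  # hypothetical path

print(insecure.verify_mode, pinned.verify_mode)
```

Either context would then be handed to the backend connection; the difference is only whether the handshake fails on an untrusted backend cert.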

Thanks,
Vijay V.


> -Original Message-
> From: Samuel Bercovici [mailto:samu...@radware.com]
> Sent: Wednesday, November 20, 2013 5:40 PM
> To: OpenStack Development Mailing List (not for usage questions);
> stephen.g...@guardian.co.uk; Vijay Venkatachalam
> Subject: RE: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> HI,
> 
> Besides a forward looking model do you see other differences?
> 
> -Sam.
> 
> -Original Message-
> From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
> Sent: Wednesday, November 20, 2013 1:22 PM
> To: stephen.g...@guardian.co.uk; OpenStack Development Mailing List (not
> for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> 
> 
> 
> > -Original Message-
> > From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
> > Sent: Wednesday, November 20, 2013 3:01 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination write-up
> >
> > Hi,
> >
> > On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
> > > Hi,
> > >
> > >
> > >
> > > Evgeny has outlined the wiki for the proposed change at:
> > > https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line
> > > with what was discussed during the summit.
> > >
> > > The
> > >
> >
> https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
> > YTvMkMJ_inbo/edit discuss in addition Certificate Chains.
> > >
> > >
> > >
> > > What would be the benefit of having a certificate that must be
> > > connected to VIP vs. embedding it in the VIP?
> >
> > You could reuse the same certificate for multiple loadbalancer VIPs.
> > This is a fairly common pattern - we have a dev wildcard cert that is
> > self- signed, and is used for lots of VIPs.
> >
> If certificates can be totally independent and can be reused, that will be
> awesome.
> But even otherwise, a certificate connected to a VIP is better modeling and
> provides an easier migration path towards an independent certificate
> resource.
> 
> > > When we get a system that can store certificates (ex: Barbican), we
> > > will add support to it in the LBaaS model.
> >
> > It probably doesn't need anything that complicated, does it?
> >
> > Cheers,
> > --
> > Stephen Gran
> > Senior Systems Integrator - The Guardian
> >
> > Please consider the environment before printing this email.
> > --
> > Visit theguardian.com
> >
> > On your mobile, download the Guardian iPhone app
> > theguardian.com/iphone and our iPad edition theguardian.com/iPad Save
> > up to 33% by subscribing to the Guardian and Observer - choose the
> > papers you want and get full digital access.
> > Visit subscribe.theguardian.com
> >
> > This e-mail and all attachments are confidential and may also be
> > privileged. If you are not the named recipient, please notify the
> > sender and delete the e- mail and all attachments immediately.
> > Do not disclose the contents to another person. You may not use the
> > information for any purpose, or store, or copy, it in any way.
> >
> > Guardian News & Media Limited is not liable for any computer viruses
> > or other material transmitted with or as part of this e-mail. You
> > should employ virus checking software.
> >
> > Guardian News & Media Limited
> >
> > A member of Guardian Media Group plc
> > Registered Office
> > PO Box 68164
> > Kings Place
> > 90 York Way
> > London
> > N1P 2AP
> >
> > Registered in England Number 908396
> >
> > --
> > 
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Anita Kuno
Thanks for posting this, Joe. It really helps to create focus so we can
address these bugs.

We are chatting in #openstack-neutron about 1251784, 1249065, and 1251448.

We are looking for someone to work on 1251784 - I had mentioned it at
Monday's Neutron team meeting and am trying to shop it around in
-neutron now. We need someone other than Salvatore, Aaron or Maru to
work on this since they each have at least one very important bug they
are working on. Please join us in #openstack-neutron and lend a hand -
all of OpenStack needs your help.

Bug 1249065 is assigned to Aaron Rosen, who isn't in the channel at the
moment, so I don't have an update on his progress or any blockers he is
facing. Hopefully (if you are reading this, Aaron) he will join us in
channel soon and I can hear from him about his status.

Bug 1251448 is assigned to Maru Newby, who I am talking with now in
-neutron. He is addressing the bug. I will share what information I have
regarding this one when I have some.

We are all looking forward to a more stable gate and this information
really helps.

Thanks again, Joe,
Anita.

On 11/20/2013 01:09 AM, Joe Gordon wrote:
> Hi All,
> 
> As many of you have noticed the gate has been in very bad shape over the
> past few days.  Here is a list of some of the top open bugs (without
> pending patches, and many recent hits) that we are hitting.  Gate won't be
> stable, and it will be hard to get your code merged, until we fix these
> bugs.
> 
> 1) https://bugs.launchpad.net/bugs/1251920
>  nova
> 468 Hits
> 2) https://bugs.launchpad.net/bugs/1251784
>  neutron, Nova
>  328 Hits
> 3) https://bugs.launchpad.net/bugs/1249065
>  neutron
>   122 hits
> 4) https://bugs.launchpad.net/bugs/1251448
>  neutron
> 65 Hits
> 
> Raw Data:
> 
> 
> Note: If a bug has any hits for anything besides failure, it means the
> fingerprint isn't perfect.
> 
> Elastic recheck known issues
> 
> Bug: https://bugs.launchpad.net/bugs/1251920
>   Fingerprint: message:"assertionerror: console output was empty" AND
>     filename:"console.html"
>   Title: Tempest failures due to failure to return console logs from an
>     instance
>   Project status: nova: Confirmed
>   Hits: FAILURE: 468
> 
> Bug: https://bugs.launchpad.net/bugs/1251784
>   Fingerprint: message:"Connection to neutron failed: Maximum attempts
>     reached" AND filename:"logs/screen-n-cpu.txt"
>   Title: nova+neutron scheduling error: Connection to neutron failed:
>     Maximum attempts reached
>   Project status: neutron: New, nova: New
>   Hits: FAILURE: 328, UNSTABLE: 13, SUCCESS: 275
> 
> Bug: https://bugs.launchpad.net/bugs/1240256
>   Fingerprint: message:" 503" AND filename:"logs/syslog.txt" AND
>     syslog_program:"proxy-server"
>   Title: swift proxy-server returning 503 during tempest run
>   Project status: openstack-ci: Incomplete, swift: New, tempest: New
>   Hits: FAILURE: 136, SUCCESS: 83
>   Pending patch
> 
> Bug: https://bugs.launchpad.net/bugs/1249065
>   Fingerprint: message:"No nw_info cache associated with instance" AND
>     filename:"logs/screen-n-api.txt"
>   Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
>   Project status: neutron: New, nova: Confirmed
>   Hits: FAILURE: 122
> 
> Bug: https://bugs.launchpad.net/bugs/1252514
>   Fingerprint: message:"Got error from Swift: put_object" AND
>     filename:"logs/screen-g-api.txt"
>   Title: glance doesn't recover if Swift returns an error
>   Project status: devstack: New, glance: New, swift: New
>   Hits: FAILURE: 95
>   Pending patch
> 
> Bug: https://bugs.launchpad.net/bugs/1244255
>   Fingerprint: message:"NovaException: Unexpected vif_type=binding_failed"
>     AND filename:"logs/screen-n-cpu.txt"
>   Title: binding_failed because of l2 agent assumed down
>   Project status: neutron: Fix Committed
>   Hits: FAILURE: 92, SUCCESS: 29
> 
> Bug: https://bugs.launchpad.net/bugs/1251448
>   Fingerprint: message:" possible networks found, use a Network ID to be
>     more specific. (HTTP 400)" AND filename:"console.html"
>   Title: BadRequest: Multiple possible networks found, use a Network ID to
>     be more specific.
>   Project status: neutron: New
>   Hits: FAILURE: 65
> 
> Bug: https://bugs.launchpad.net/bugs/1239856
>   Fingerprint: message:"tempest/services" AND message:"/images_client.py"
>     AND message:"wait_for_image_status" AND filename:"console.html"
>   Title: "TimeoutException: Request timed out" on
>     tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML
>   Project status: glance: New
>   Hits: FAILURE: 62
> 
> Bug: https://bugs.launchpad.net/bugs/1235435
>   Fingerprint: message:"One or more ports have an IP allocation from this
>     subnet" AND message:" SubnetInUse: Unable to complete operation on
>     subnet" AND filename:"logs/screen-q-svc.txt"
>   Title: 'SubnetInUse: Unable to complete operation on subnet UUID. One or
>     more ports have an IP allocation from this subnet.'
>   Project status: neutron: Incomplete, nova: Fix Committed, tempest: New
>   Hits: FAILURE: 48
> 
> Bug: https://bugs.launchpad.net/bugs/1224001
>   Fingerprint: message:"tempest.scenario.test_network_basic_ops
>     AssertionError: Timed out waiting for" AND filename:"console.html"
>   Title: test_network_basic_ops fails waiting for network to become
>     available
>   Project: Status 

Re: [openstack-dev] [Nova] VNC issue with multi compute node with openstack havana

2013-11-20 Thread Matt Riedemann



On Wednesday, November 20, 2013 7:49:50 AM, Vikash Kumar wrote:

Hi,

  I used devstack Multi-Node + VLANs to install openstack-havana
recently. Installation was successful and I verified basics like
launching VMs and pinging between VMs.

  I have two nodes: 1. Ctrl+Compute  2. Compute

  The VM launched on the second compute node (2, above) doesn't get a
VNC console. I tried to access it from both Horizon and the URL given
by the nova CLI.

   The *n-novnc* screen on the first node, the controller (1), gave
this error log:

   Traceback (most recent call last):
  File
"/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line
711, in top_new_client
self.new_client()
  File "/opt/stack/nova/nova/console/websocketproxy.py", line 68, in
new_client
tsock = self.socket(host, port, connect=True)
  File
"/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line
188, in socket
sock.connect(addrs[0][4])
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio.py",
line 192, in connect
socket_checkerr(fd)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio.py",
line 46, in socket_checkerr
raise socket.error(err, errno.errorcode[err])
error: [Errno 111] ECONNREFUSED


  The vnc related configuration in nova.conf on Ctrl+Compute node:

   vncserver_proxyclient_address = 127.0.0.1
   vncserver_listen = 127.0.0.1
   vnc_enabled = true
   xvpvncproxy_base_url = http://192.168.2.151:6081/console
   novncproxy_base_url = http://192.168.2.151:6080/vnc_auto.html

   and on the second Compute node:
  /* I corrected the IP of the first two addresses; by default they are
set to 127.0.0.1 */
   vncserver_proxyclient_address = 192.168.2.157
   vncserver_listen = 0.0.0.0
   vnc_enabled = true
   xvpvncproxy_base_url = http://192.168.2.151:6081/console
   novncproxy_base_url = http://192.168.2.151:6080/vnc_auto.html

I also added the host name of the compute node to the hosts file of the
controller node. With this, error 111 was gone and a new error appeared.

connecting to: 192.168.2.157:-1
  7: handler exception: [Errno -8] Servname not supported for ai_socktype
  7: Traceback (most recent call last):
  File
"/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line
711, in top_new_client
self.new_client()
  File "/opt/stack/nova/nova/console/websocketproxy.py", line 68, in
new_client
tsock = self.socket(host, port, connect=True)
  File
"/usr/local/lib/python2.7/dist-packages/websockify/websocket.py", line
180, in socket
socket.IPPROTO_TCP, flags)
  gaierror: [Errno -8] Servname not supported for ai_socktype
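The key detail in this second traceback is the -1 port shown in "connecting to: 192.168.2.157:-1". websockify resolves its target with socket.getaddrinfo() (visible in the traceback), and "-1" is not a valid service/port, so resolution fails before any TCP connection is attempted. A minimal reproduction, using the address from the log:

```python
import socket

# "-1" cannot be resolved as a service name or numeric port, so
# getaddrinfo() raises socket.gaierror (typically EAI_SERVICE,
# "Servname not supported for ai_socktype" on glibc) before any
# connection attempt is made.
try:
    socket.getaddrinfo("192.168.2.157", "-1",
                       socket.AF_INET, socket.SOCK_STREAM,
                       socket.IPPROTO_TCP)
except socket.gaierror as e:
    print("resolution failed:", e)
```

In other words, the proxy never even tries to reach the compute node here; the console record it looked up carried an invalid port.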


   What needs to be done to resolve this?

Thnx






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


This mailing list is for development discussion only. For support, you 
should go to the general mailing list:


https://wiki.openstack.org/wiki/Mailing_Lists#General_List

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Diagnostic] Diagnostic API: summit follow-up

2013-11-20 Thread Matt Riedemann



On Wednesday, November 20, 2013 7:52:39 AM, Oleg Gelbukh wrote:

Hi, fellow stackers,

There was a conversation during the 'Enhance debugability' session at the
summit about a Diagnostic API which would allow the gate to get the 'state
of the world' of an OpenStack installation. 'State of the world' includes
hardware- and operating-system-level configurations of the servers in the
cluster.

This info would help to compare the expected effect of tests on a
system with its actual state, thus providing Tempest with the ability to
see into it (whitebox tests) as one possible use case. Another use
case is to provide input for validation of OpenStack configuration files.

We're putting together an initial version of the data model of the API,
with example values, in the following etherpad:
https://etherpad.openstack.org/p/icehouse-diagnostic-api-spec

This version covers most hardware and system-level configurations
managed by OpenStack on a Linux system. What is missing from there? What
information would you like to see in such an API? Please feel free to
share your thoughts on the ML, or in the etherpad directly.


--
Best regards,
Oleg Gelbukh
Mirantis Labs




Hi Oleg,

There has been some discussion of the nova virtapi's get_diagnostics 
method.  The background is in a thread from October [1].  The timing is 
pertinent since the VMware team is working on implementing that API for 
their nova virt driver [2].  The main issue is that there is no 
consistency between the nova virt drivers in how they implement the 
get_diagnostics API; each only returns information that is 
hypervisor-specific.  The API docs and the current Tempest test cover the 
libvirt driver's implementation, but wouldn't work for, say, the xen, 
vmware or powervm drivers.


I think the solution right now is to namespace the keys in the dict 
that is returned from the API so a caller could at least check for that 
and know how to handle processing the result, but it's not ideal.
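The namespacing idea might look roughly like this (illustrative names only, not the actual virt driver code): each driver prefixes its keys so a caller can detect which hypervisor produced the payload and process it accordingly.

```python
def namespace_diagnostics(driver, diagnostics):
    """Prefix each key with the driver name, e.g. 'libvirt:cpu0_time'."""
    return {"%s:%s" % (driver, key): value
            for key, value in diagnostics.items()}

# Hypothetical libvirt-style payload:
raw = {"cpu0_time": 17300000000, "memory": 524288}
namespaced = namespace_diagnostics("libvirt", raw)

# A caller can now branch on the namespace instead of guessing the format:
assert all(key.startswith("libvirt:") for key in namespaced)
print(namespaced)
```

The obvious downside, as noted above, is that every consumer still needs per-hypervisor handling; it just makes the divergence explicit.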


Does your solution take into account the nova virtapi's get_diagnostics 
method?


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html

[2] https://review.openstack.org/#/c/51404/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] RFC: Potential to increase min required libvirt version to 0.9.8 ?

2013-11-20 Thread Ralf Haferkamp
On Wed, Nov 20, 2013 at 04:33:22PM +1300, Robert Collins wrote:
> On 20 November 2013 08:02, Daniel P. Berrange  wrote:
> > Currently the Nova libvirt driver is declaring that it wants a minimum
> > of libvirt 0.9.6.
> ...
> > If there are other distros I've missed which expect to support deployment
> > of Icehouse please add them to this list. Hopefully there won't be any
> > with libvirt software older than Ubuntu 12.04 LTS
> >
> >
> > The reason I'm asking this now, is that we're working to make the libvirt
> > python module a separate tar.gz that can build with multiple libvirt
> > versions, and I need to decide how ancient a libvirt we should support
> > for it.
> 
> Fantastic!!!
> 
> The Ubuntu cloud archive
> https://wiki.ubuntu.com/ServerTeam/CloudArchive is how OpenStack is
> delivered by Canonical for Ubuntu LTS users. So I think you can go
> with e.g. 0.9.11 or even 0.9.12 depending on what the Suse folk say.
I think 0.9.11 is fine for us. I am not worried too much about 0.9.12 either
since openSUSE 12.2 (which has 0.9.11) will reach its EOL soon. I also added
SLES to the table on:
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
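For what it's worth, checking a distro's libvirt against a candidate minimum is just a dotted-version comparison; a quick illustrative sketch (not nova's actual check):

```python
def version_tuple(v):
    """Turn '0.9.11' into (0, 9, 11) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Candidate minimum discussed in this thread (an assumption, not a decision):
MIN_LIBVIRT = version_tuple("0.9.11")

for candidate in ("0.9.6", "0.9.8", "0.9.11", "0.9.12", "1.0.2"):
    ok = version_tuple(candidate) >= MIN_LIBVIRT
    print(candidate, "ok" if ok else "too old")
```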

-- 
regards,
Ralf



Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Christopher Armstrong
On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter  wrote:

> On 19/11/13 19:14, Christopher Armstrong wrote:
>
>>
>>
[snip]



>> It'd be interesting to see some examples, I think. I'll provide some
>> examples of my proposals, with the following caveats:
>>
>
> Excellent idea, thanks :)
>
>
>  - I'm assuming a separation of launch configuration from scaling group,
>> as you proposed -- I don't really have a problem with this.
>> - I'm also writing these examples with the plural "resources" parameter,
>> which there has been some bikeshedding around - I believe the structure
>> can be the same whether we go with singular, plural, or even
>> whole-template-as-a-string.
>>
>> # trivial example: scaling a single server
>>
>> POST /launch_configs
>>
>> {
>>  "name": "my-launch-config",
>>  "resources": {
>>  "my-server": {
>>  "type": "OS::Nova::Server",
>>  "properties": {
>>  "image": "my-image",
>>  "flavor": "my-flavor", # etc...
>>  }
>>  }
>>  }
>> }
>>
>
> This case would be simpler with my proposal, assuming we allow a default:
>
>
>  POST /launch_configs
>
>  {
>   "name": "my-launch-config",
>   "parameters": {
>
>   "image": "my-image",
>   "flavor": "my-flavor", # etc...
>   }
>  }
>
> If we don't allow a default it might be something more like:
>
>
>
>  POST /launch_configs
>
>  {
>   "name": "my-launch-config",
>   "parameters": {
>
>   "image": "my-image",
>   "flavor": "my-flavor", # etc...
>   },
>   "provider_template_uri": "http://heat.example.com/<tenant_id>/resources_types/OS::Nova::Server/template"
>
>  }
>
>
>  POST /groups
>>
>> {
>>  "name": "group-name",
>>  "launch_config": "my-launch-config",
>>  "min_size": 0,
>>  "max_size": 0,
>> }
>>
>
> This would be the same.
>
>
>
>> (and then, the user would continue on to create a policy that scales the
>> group, etc)
>>
>> # complex example: scaling a server with an attached volume
>>
>> POST /launch_configs
>>
>> {
>>  "name": "my-launch-config",
>>  "resources": {
>>  "my-volume": {
>>  "type": "OS::Cinder::Volume",
>>  "properties": {
>>  # volume properties...
>>  }
>>  },
>>  "my-server": {
>>  "type": "OS::Nova::Server",
>>  "properties": {
>>  "image": "my-image",
>>  "flavor": "my-flavor", # etc...
>>  }
>>  },
>>  "my-volume-attachment": {
>>  "type": "OS::Cinder::VolumeAttachment",
>>  "properties": {
>>  "volume_id": {"get_resource": "my-volume"},
>>  "instance_uuid": {"get_resource": "my-server"},
>>  "mountpoint": "/mnt/volume"
>>  }
>>  }
>>  }
>> }
>>
>
> This appears slightly more complex on the surface; I'll explain why in a
> second.
>
>
>  POST /launch_configs
>
>  {
>   "name": "my-launch-config",
>   "parameters": {
>
>   "image": "my-image",
>   "flavor": "my-flavor", # etc...
>   }
>   "provider_template": {
>   "hot_format_version": "some random date",
>   "parameters": {
>   "image_name": {
>   "type": "string"
>   },
>   "flavor": {
>   "type": "string"
>   } # &c. ...
>
>   },
>   "resources": {
>   "my-volume": {
>   "type": "OS::Cinder::Volume",
>   "properties": {
>   # volume properties...
>   }
>   },
>   "my-server": {
>   "type": "OS::Nova::Server",
>   "properties": {
>   "image": {"get_param": "image_name"},
>   "flavor": {"get_param": "flavor"}, # etc...
>
>  }
>   },
>   "my-volume-attachment": {
>   "type": "OS::Cinder::VolumeAttachment",
>   "properties": {
>   "volume_id": {"get_resource": "my-volume"},
>   "instance_uuid": {"get_resource": "my-server"},
>   "mountpoint": "/mnt/volume"
>   }
>   }
>   },
>   "outputs": {
>"public_ip_address": {
>"Value": {"get_attr": ["my-server",
> "public_ip_address"]} # &c. ...
>   }
>   }
>  }
>
> (BTW the template could just as easily be included in the group rather
> than the launch config. If we put it here we can validate the parameters
> though.)
>
> There are a number of advantages to including the whole template, rather
> than a resource snippet:
>  - Templates are versioned!
>  - Templates accept parameters
>  - Templates can provide outputs - we'll need these when we go to do
> notifications (e.g.

Re: [openstack-dev] [horizon] Enhance UX of Launch Instance Form

2013-11-20 Thread Cédric Soulas
Thanks for all the feedback on the "Enhance UX of launch instance form" subject 
and its prototype.

Try the latest version of the prototype:
http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance

This update was made after several discussions on these different channels:

- openstack ux google group
- launchpad horizon (and now launchpad openstack ux)
- mailing list and IRC
- the new ask bots for openstack UX

We tried to write back most of the discussions on the ask bot, and are now 
focusing on this tool.

Below is a "digest" of those discussions, with links to the ask bot (for each 
subject, there are links to related blueprints, google doc drafts, etc.)

= General topics =

- Modals and supporting different screen sizes [2]
  Current modal doesn't work well on the top 8 screen resolutions [2]
  => Responsive and full screen modal added on the prototype [1]

- Wizard mode for some modals [3]
  => try the wizard [1]

= Specific to "launch instance" =

- Improve "boot source" options [4]
  * first choose to boot from ephemeral or persistent disk
  * if no ephemeral flavors are available, hide the selector
  * group by "public", "project", "shared with me"
  * warning message added for "delete on terminate" option (when boot from 
persistent)

- Scaling the flavor list [5]
  * sort the columns of the table, in particular by name
  * grouping of the flavor list (for example: "performance", "standard"...)?

- Scaling the image list [5]
  * a scrollbar on the image list
  * limit the number of list items and add a "x more instance snapshots - See 
more" line
  * a search / filter feature would be great, like discussed at the "scaling 
horizon" design session

- Step 1 / Step 2 workflow: when the user clicks "select" on one boot 
source item, it goes directly to step 2.
  If the user goes back from step 2 to step 1:
  * the text "Please select a boot source" would be replaced with a "Next" 
button
  * the "select" button on the selected boot source item would be replaced 
with a check-mark (or equivalent)
  * the user would still have the possibility to select another boot source

- flavor depending on image requirements and available quotas: 
   * this is a very good point, with lots of things to discuss
   => we should open a separate thread on this
 
- Network: still a work in progress
  * if there is a single choice, make it the default choice

- Several wording updates ("cancel", "ephemeral boot source", ...)

[1] http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance
[2] 
http://ask-openstackux.rhcloud.com/question/11/modals-and-supporting-different-screen-sizes/
[3] http://ask-openstackux.rhcloud.com/question/81/wizard-ui-for-workflow
[4] 
http://ask-openstackux.rhcloud.com/question/13/improve-boot-source-ux-ephemeral-vs-persistent-disk/
[5] 
http://ask-openstackux.rhcloud.com/question/12/enhance-the-selection-of-a-flavor-and-an-image/

Best,

Cédric

> Oct 11 17:11:26 UTC 2013, Jesse Pretorius   
> wrote:
>
> +1
> 
> A few comments:
> 
> 1. Bear in mind that sometimes a user may not have access to any Ephemeral
> flavors, so the tabbing should ideally be adaptive. An alternative would be
> not to bother with the tabs and just show a flavor list. In our deployment
> we have no flavors with ephemeral disk space larger than 0.
> 2. Whenever there's a selection, but only one choice, make it a default
> choice. It's tedious to choose the only selection only because you have to.
> It's common for our users to have one network/subnet defined, but the
> current UI requires them to switch tabs and select the network which is
> rather tedious.
> 3. The selection of the flavor is divorced from the quota available and
> from the image requirements. Ideally those two items should somehow be
> incorporated. A user needs to know up-front whether the server will build
> based on both their quota and the image minimum requirements.
> 4. We'd like to see options for sorting on items like flavors. Currently
> the sort is by 'id' and we'd like to see an option to sort by name
> alphabetically.
> 
> 
> 
> On 11 October 2013 18:53, Cédric Soulas  
> wrote:
> 
> > Hi,
> >
> > I just started a draft with suggestions to enhance the UX of the "Launch
> > Instance" form:
> >
> > https://docs.google.com/document/d/1hUdmyxpVxbYwgGtPbzDsBUXsv0_rtKbfgCHYxOgFjlo
> >
> > Try the live prototype:
> > http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance
> >
> > Best,
> >
> > Cédric
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Thierry Carrez
Hi everyone,

How should we proceed to make sure UX (user experience) is properly
taken into account in OpenStack development? Historically it was hard
for UX sessions (especially the ones that affect multiple projects, like
CLI / API experience) to get session time at our design summits. This
visibility issue prompted the recent request by UX-minded folks to make
UX an official OpenStack program.

However, as was apparent in the Technical Committee meeting discussion
about it yesterday, most of us are not convinced that establishing and
blessing a separate team is the most efficient way to give UX the
attention it deserves. Ideally, UX-minded folks would get active
*within* existing project teams rather than form some sort of
counter-power as a separate team. In the same way we want scalability
and security mindset to be present in every project, we want UX to be
present in every project. It's more of an advocacy group than a
"program" imho.

So my recommendation would be to encourage UX folks to get involved
within projects and during project-specific weekly meetings to
efficiently drive better UX there, as a direct project contributor. If
all the UX-minded folks need a forum to coordinate, I think [UX] ML
threads and, maybe, a UX weekly meeting would be an interesting first step.

There would still be an issue with UX session space at the Design
Summit... but that's a well known issue that affects more than just UX:
the way our design summits were historically organized (around programs
only) made it difficult to discuss cross-project and cross-program
issues. To address that, the plan is to carve cross-project space into
the next design summit, even if that means a little less topical
sessions for everyone else.

Thoughts ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] When is a blueprint unnecessary?

2013-11-20 Thread Russell Bryant
On 11/20/2013 05:37 AM, Daniel P. Berrange wrote:
> On Wed, Nov 20, 2013 at 11:21:14AM +0100, Thierry Carrez wrote:
>> Russell Bryant wrote:
>>> One of the bits of feedback that came from the "Nova Project Structure
>>> and Process" session at the design summit was that it would be nice to
>>> skip having blueprints for smaller items.
>>>
>>> In an effort to capture this, I updated the blueprint review criteria
>>> [1] with the following:
>>>
>>>   Some blueprints are closed as unnecessary. Blueprints are used for
>>>   tracking significant development efforts. In general, small and/or
>>>   straight forward cleanups do not need blueprints. A blueprint should
>>>   be filed if:
>>>
>>>- it impacts the release notes
>>>- it covers significant internal development or refactoring efforts
>>> [...]
>>
>> While I agree we should not *require* blueprints for minor
>> features/efforts, should we actively prevent people from filing them (or
>> close them if they are filed ?)
>>
>> Personally (I know I'm odd) I like to have my work (usually small stuff)
>> covered by a blueprint so that I can track and communicate its current
>> completion status -- helps me keep track of where I am.
>>
>> So the question is... is there a cost associated with tolerating "small"
>> blueprints ? Once they are set to "Low" priority they mostly disappear
>> from release management tracking so it's not really a burden there.
> 
> IIUC, Russell has a desire that unless a blueprint is approved, then the
> corresponding patches would not be merged. So from that POV, each blueprint
> has a burden of approval to consider, even if it is 'Low' priority. This
> would be a reason to not require blueprints for 'trivial' changes.

Correct, but there aren't that many that fall into this category, and
they are pretty easy to review in comparison.

I closed a few like this, but after these comments, I think I agree that
it doesn't hurt to allow them and may actually be discouraging to the
submitter to have their blueprint closed out on these grounds.

I'll update the language as suggested by John, to reflect criteria
around requiring blueprints, not around when we close them.

-- 
Russell Bryant



Re: [openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-20 Thread Collins, Sean (Contractor)
I've put up a preliminary agenda for tomorrow's meeting. Please feel
free to add anything that comes to mind.

https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam#Agenda_for_Nov._21_2013
-- 
Sean M. Collins


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-20 Thread Isaku Yamahata
On Tue, Nov 19, 2013 at 11:22:46PM +0100,
Salvatore Orlando  wrote:

> For what is worth we have considered this aspect from the perspective of
> the Neutron plugin my team maintains (NVP) during the past release cycle.
> 
> The synchronous model that most plugins with a controller on the backend
> currently implement is simple and convenient, but has some flaws:
> 
> - reliability: the current approach where the plugin orchestrates the
> backend is not really optimal when it comes to ensuring your running
> configuration (backend/control plane) is in sync with your desired
> configuration (neutron/mgmt plane); moreover in some case, due to neutron
> internals, API calls to the backend are wrapped in a transaction too,
> leading to very long SQL transactions, which are quite dangerous indeed. It
> is not easy to recover from a failure due to an eventlet thread deadlocking
> with a mysql transaction, where by 'recover' I mean ensuring neutron and
> backend state are in sync.
> 
> - maintainability: since handling rollback in case of failures on the
> backend and/or the db is cumbersome, this often leads to spaghetti code
> which is very hard to maintain regardless of the effort (ok, I agree here
> that this also depends on how good the devs are - most of the guys in my
> team are very good, but unfortunately they have me too...).
> 
> - performance & scalability:
> -  roundtrips to the backend take a non-negligible toll on the duration
> of an API call, whereas most Neutron API calls should probably just
> terminate at the DB just like a nova boot call does not wait for the VM to
> be ACTIVE to return.
> - we need to keep some operation serialized in order to avoid the
> mentioned race issues
> 
> For this reason we're progressively moving toward a change in the NVP
> plugin with a series of patches under this umbrella-blueprint [1].

Interesting. A question out of curiosity:
a successful return from POST/PUT doesn't necessarily mean that the
creation/update has completed.
So client-side polling is needed to wait for its completion, right?
Or some kind of callback? The VIF creation case especially would matter.
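The polling model described here could be sketched as follows (a hypothetical client-side helper, not an actual Neutron API; names are illustrative):

```python
import time

def wait_for_status(show_resource, resource_id, target="ACTIVE",
                    timeout=60, interval=2):
    """Poll show_resource() until the resource reaches the target status."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = show_resource(resource_id)["status"]
        if status == target:
            return status
        if status == "ERROR":
            raise RuntimeError("resource %s went to ERROR" % resource_id)
        time.sleep(interval)
    raise RuntimeError("resource %s not %s after %ss"
                       % (resource_id, target, timeout))

# Example with a fake API that becomes ACTIVE on the third call:
calls = []
def fake_show(res_id):
    calls.append(res_id)
    return {"status": "ACTIVE" if len(calls) >= 3 else "BUILD"}

print(wait_for_status(fake_show, "port-1", interval=0))  # -> ACTIVE
```

A callback/notification mechanism would avoid the polling traffic, but needs the backend to push state changes up to the client.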


> For answering the issues mentioned by Isaku, we've been looking at a task
> management library with an efficient and reliable set of abstractions for
> ensuring operations are properly ordered thus avoiding those races (I agree
> on the observation on the pre/post commit solution).

This discussion started with the core plugin, but other resources like
services (lbaas, fw, vpn...) have similar race conditions, I think.

Thanks,
Isaku Yamahata

> We are currently looking at using celery [2] rather than taskflow; mostly
> because we've already have expertise on how to use it into our
> applications, and has very easy abstractions for workflow design, as well
> as for handling task failures.
> Said that, I think we're still open to switch to taskflow should we become
> aware of some very good reason for using it.
> 
> Regards,
> Salvatore
> 
> [1]
> https://blueprints.launchpad.net/neutron/+spec/nvp-async-backend-communication
> [2] http://docs.celeryproject.org/en/master/index.html
> 
> 
> 
> On 19 November 2013 19:42, Joshua Harlow  wrote:
> 
> > And also of course, nearly forgot a similar situation/review in heat.
> >
> > https://review.openstack.org/#/c/49440/
> >
> > Except theres was/is dealing with stack locking (a heat concept).
> >
> > On 11/19/13 10:33 AM, "Joshua Harlow"  wrote:
> >
> > >If you start adding these states you might really want to consider the
> > >following work that is going on in other projects.
> > >
> > >It surely appears that everyone is starting to hit the same problem (and
> > >joining efforts would produce a more beneficial result).
> > >
> > >Relevant icehouse etherpads:
> > >- https://etherpad.openstack.org/p/CinderTaskFlowFSM
> > >- https://etherpad.openstack.org/p/icehouse-oslo-service-synchronization
> > >
> > >And of course my obvious plug for taskflow (which is designed to be a
> > >useful library to help in all these usages).
> > >
> > >- https://wiki.openstack.org/wiki/TaskFlow
> > >
> > >The states u just mentioned start to line-up with
> > >https://wiki.openstack.org/wiki/TaskFlow/States_of_Task_and_Flow
> > >
> > >If this sounds like a useful way to go (joining efforts) then lets see how
> > >we can make it possible.
> > >
> > >IRC: #openstack-state-management is where I am usually at.
> > >
> > >On 11/19/13 3:57 AM, "Isaku Yamahata"  wrote:
> > >
> > >>On Mon, Nov 18, 2013 at 03:55:49PM -0500,
> > >>Robert Kukura  wrote:
> > >>
> > >>> On 11/18/2013 03:25 PM, Edgar Magana wrote:
> > >>> > Developers,
> > >>> >
> > >>> > This topic has been discussed before but I do not remember if we have
> > >>>a
> > >>> > good solution or not.
> > >>>
> > >>> The ML2 plugin addresses this by calling each MechanismDriver twice.
> > >>>The
> > >>> create_network_precommit() method is called as part of the DB
> > >>> transaction, and the create_network_postcommit() method
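For reference, the precommit/postcommit split quoted above can be sketched like this (a toy illustration with a fake transaction, not the actual ML2 code): precommit work runs inside the DB transaction, and the backend call in postcommit happens only after the commit, keeping long-running calls out of the SQL transaction.

```python
from contextlib import contextmanager

calls = []  # records the order of operations for illustration

class FakeMechanismDriver:
    def create_network_precommit(self, network):
        calls.append("precommit")    # validate/record inside the transaction

    def create_network_postcommit(self, network):
        calls.append("postcommit")   # push to the backend after commit

@contextmanager
def transaction():
    calls.append("begin")
    yield
    calls.append("commit")           # commit happens before any backend call

def create_network(driver, network):
    with transaction():
        # ... persist the network row here ...
        driver.create_network_precommit(network)
    driver.create_network_postcommit(network)

create_network(FakeMechanismDriver(), {"name": "net1"})
print(calls)  # -> ['begin', 'precommit', 'commit', 'postcommit']
```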

[openstack-dev] [Neutron][LBaaS] L7 Switching

2013-11-20 Thread Avishay Balderman
Hi
I have created this wiki page: (WIP)
https://wiki.openstack.org/wiki/Neutron/LBaaS/l7

Comments / Questions are welcomed.

Thanks

Avishay 



Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Dolph Mathews
On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez wrote:

> Hi everyone,
>
> How should we proceed to make sure UX (user experience) is properly
> taken into account in OpenStack development? Historically it was hard
> for UX sessions (especially the ones that affect multiple projects, like
> CLI / API experience) to get session time at our design summits. This
> visibility issue prompted the recent request by UX-minded folks to make
> UX an official OpenStack program.
>
> However, as was apparent in the Technical Committee meeting discussion
> about it yesterday, most of us are not convinced that establishing and
> blessing a separate team is the most efficient way to give UX the
> attention it deserves. Ideally, UX-minded folks would get active
> *within* existing project teams rather than form some sort of
> counter-power as a separate team. In the same way we want scalability
> and security mindset to be present in every project, we want UX to be
> present in every project. It's more of an advocacy group than a
> "program" imho.
>
> So my recommendation would be to encourage UX folks to get involved
> within projects and during project-specific weekly meetings to
> efficiently drive better UX there, as a direct project contributor. If
> all the UX-minded folks need a forum to coordinate, I think [UX] ML
> threads and, maybe, a UX weekly meeting would be an interesting first step.
>

++

UX is an issue at nearly every layer. OpenStack has a huge variety of
interfaces, all of which deserve consistent, top tier UX attention and
community-wide HIG's-- CLIs, client libraries / language bindings, HTTP
APIs, web UIs, messaging and even pluggable driver interfaces. Each type of
interface generally caters to a different audience, each with slightly
different expectations.


>
> There would still be an issue with UX session space at the Design
> Summit... but that's a well known issue that affects more than just UX:
> the way our design summits were historically organized (around programs
> only) made it difficult to discuss cross-project and cross-program
> issues. To address that, the plan is to carve cross-project space into
> the next design summit, even if that means a little less topical
> sessions for everyone else.
>

I'd be happy to "contribute" a design session to focus on improving UX
across the community, and I would certainly attend!


>
> Thoughts ?
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Anne Gentle
On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez wrote:

> Hi everyone,
>
> How should we proceed to make sure UX (user experience) is properly
> taken into account in OpenStack development? Historically it was hard
> for UX sessions (especially the ones that affect multiple projects, like
> CLI / API experience) to get session time at our design summits. This
> visibility issue prompted the recent request by UX-minded folks to make
> UX an official OpenStack program.
>
> However, as was apparent in the Technical Committee meeting discussion
> about it yesterday, most of us are not convinced that establishing and
> blessing a separate team is the most efficient way to give UX the
> attention it deserves. Ideally, UX-minded folks would get active
> *within* existing project teams rather than form some sort of
> counter-power as a separate team. In the same way we want scalability
> and security mindset to be present in every project, we want UX to be
> present in every project. It's more of an advocacy group than a
> "program" imho.
>
>
I'm not sure "most of us" is accurate. Mostly you and Robert Collins were
unconvinced. Here's my take.

It's nigh-impossible with the UX resources there now (four core) for them
to attend all the project meetings with an eye to UX. Docs are in a similar
situation. We also want docs to be present in every project. Docs as a
program makes sense, and to me, UX as a program makes sense as well. The UX
program can then prioritize what to focus on with the resources they have.

However, as pointed out in the meeting, the UX resources now are mostly
focused on Horizon. It'd be nice to have a group aiming to take the big
picture of the entire OpenStack experience. Maybe this group is the one,
maybe they're not. The big picture would be:
Dashboard experience
CLI experience
logging consistency
troubleshooting consistency
consistency across APIs like pagination behavior

Just like QA ends up focusing on tempest, UX might end up focusing on
Dashboard, CLI and API experience. That'd be fine with me and would give
measurable trackable points.

What's more interesting is how the user committee fits into this. There's
already an interesting discussion about how to get user concerns worked on
by developers: is that actually done through product managers? What would
an Experience program look like if it were about productization?


> So my recommendation would be to encourage UX folks to get involved
> within projects and during project-specific weekly meetings to
> efficiently drive better UX there, as a direct project contributor. If
> all the UX-minded folks need a forum to coordinate, I think [UX] ML
> threads and, maybe, a UX weekly meeting would be an interesting first step.
>
>
I think a weekly UX meeting and a mailing list (which is probably already
their Google Plus group) would be a good way to gather more people as
contributors. Then we get an idea of what contributions look like.

To summarize my take -- UX is a lot like docs in that it's tough to get
devs to care, and also the work should be done with an eye towards the big
picture and with resources from member companies.

Anne


> There would still be an issue with UX session space at the Design
> Summit... but that's a well known issue that affects more than just UX:
> the way our design summits were historically organized (around programs
> only) made it difficult to discuss cross-project and cross-program
> issues. To address that, the plan is to carve cross-project space into
> the next design summit, even if that means a little less topical
> sessions for everyone else.
>
> Thoughts ?
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Lorin Hochstein
On Wed, Nov 20, 2013 at 10:37 AM, Dolph Mathews wrote:

>
>
>
> On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez wrote:
>
>> Hi everyone,
>>
>> How should we proceed to make sure UX (user experience) is properly
>> taken into account into OpenStack development ? Historically it was hard
>> for UX sessions (especially the ones that affect multiple projects, like
>> CLI / API experience) to get session time at our design summits. This
>> visibility issue prompted the recent request by UX-minded folks to make
>> UX an official OpenStack program.
>>
>> However, as was apparent in the Technical Committee meeting discussion
>> about it yesterday, most of us are not convinced that establishing and
>> blessing a separate team is the most efficient way to give UX the
>> attention it deserves. Ideally, UX-minded folks would get active
>> *within* existing project teams rather than form some sort of
>> counter-power as a separate team. In the same way we want scalability
>> and security mindset to be present in every project, we want UX to be
>> present in every project. It's more of an advocacy group than a
>> "program" imho.
>>
>> So my recommendation would be to encourage UX folks to get involved
>> within projects and during project-specific weekly meetings to
>> efficiently drive better UX there, as a direct project contributor. If
>> all the UX-minded folks need a forum to coordinate, I think [UX] ML
>> threads and, maybe, a UX weekly meeting would be an interesting first
>> step.
>>
>
> ++
>
> UX is an issue at nearly every layer. OpenStack has a huge variety of
> interfaces, all of which deserve consistent, top tier UX attention and
> community-wide HIG's-- CLIs, client libraries / language bindings, HTTP
> APIs, web UIs, messaging and even pluggable driver interfaces. Each type of
> interface generally caters to a different audience, each with slightly
> different expectations.
>
>

+1

I would add things like configuration file syntax and log file message
formats into the "UX" category. These are a fundamental interface for
operators who are setting up and debugging an initial OpenStack deployment.
Would love for these interfaces to be treated as first-class entities from
a UX perspective.



>
>> There would still be an issue with UX session space at the Design
>> Summit... but that's a well known issue that affects more than just UX:
>> the way our design summits were historically organized (around programs
>> only) made it difficult to discuss cross-project and cross-program
>> issues. To address that, the plan is to carve cross-project space into
>> the next design summit, even if that means a little less topical
>> sessions for everyone else.
>>
>
> I'd be happy to "contribute" a design session to focus on improving UX
> across the community, and I would certainly attend!
>
>

Me too.

Lorin


-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com


Re: [openstack-dev] [Swift] Server Side Encryption

2013-11-20 Thread Gregory Holt
On Nov 20, 2013, at 5:26 AM, David Hadas  wrote:

> 
> Hi all,
> 
> We created a wiki page discussing the addition of software side encryption
> to Swift:
> "The general scheme is to create a swift proxy middleware that will encrypt
> and sign the object data during PUT and check the signature + decrypt it
> during GET. The target is to create two domains - the user domain between
> the client and the middleware where the data is decrypted and the system
> domain between the middleware and the data at rest (on the device) where
> the data is encrypted.
> Design goals include: (1) Extend swift as necessary but without changing
> existing swift behaviors and APIs; (2) Support encrypting data incoming
> from unchanged clients"
> 
> See:  https://wiki.openstack.org/wiki/Swift/server-side-enc
> We would like to invite feedback.

I'll bite, though I'm fairly sure I already know the response. Why all this 
complexity for what amounts to just encrypting data on disk in case the disk is 
stolen, lost, or reused? That's the only protection I see this providing and it 
would seem it could be achieved with a single cluster key stored only on the 
Swift proxy servers. All the rest seems like gyrations that provide no true 
additional benefit. If a client actually cares about having their data 
encrypted, they should encrypt it themselves and only they would keep the key.
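The "clients encrypt it themselves and only they keep the key" alternative is easy to sketch. The following toy Python illustration uses a SHA-256 counter-mode keystream purely for demonstration; it is not real cryptography, and a real client would use an audited AES implementation. Everything here is illustrative, not any Swift API:

```python
import hashlib
import os

def keystream(key, nonce, length):
    """Toy keystream: SHA-256 in counter mode. Illustration only, NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """Encrypt client-side before the PUT; only the client ever holds `key`."""
    nonce = os.urandom(16)
    stream = keystream(key, nonce, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    return nonce + ciphertext  # the nonce travels with the stored object

def decrypt(key, blob):
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

key = os.urandom(32)                # never leaves the client
obj = b"confidential object data"
stored = encrypt(key, obj)          # what the object store would persist
assert stored != obj                # data at rest is opaque
assert decrypt(key, stored) == obj  # round-trips for the key holder
```

The cluster (and its operators) only ever see `stored`; a stolen or reused disk leaks nothing useful without `key`.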

> 
> DH
> 
> 
> Regards,
> David Hadas,
> Openstack Swift ATC, Architect, Master Inventor
> IBM Research Labs, Haifa
> Tel:Int+972-4-829-6104
> Fax:   Int+972-4-829-6112
> 
> 




Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Derek Higgins
On 20/11/13 14:21, Anita Kuno wrote:
> Thanks for posting this, Joe. It really helps to create focus so we can
> address these bugs.
> 
> We are chatting in #openstack-neutron about 1251784, 1249065, and 1251448.
> 
> We are looking for someone to work on 1251784 - I had mentioned it at
> Monday's Neutron team meeting and am trying to shop it around in
> -neutron now. We need someone other than Salvatore, Aaron or Maru to
> work on this since they each have at least one very important bug they
> are working on. Please join us in #openstack-neutron and lend a hand -
> all of OpenStack needs your help.

I've been hitting this in tripleo intermittently for the last few days
(or at least it looks to be the same bug). This morning, while trying to
debug the problem, I noticed HTTP request/responses happening out of
order. I've added details to the bug.

https://bugs.launchpad.net/tripleo/+bug/1251784

> 
> Bug 1249065 is assigned to Aaron Rosen, who isn't in the channel at the
> moment, so I don't have an update on his progress or any blockers he is
> facing. Hopefully (if you are reading this Aaron) he will join us in
> channel soon so I can hear from him about his status.
> 
> Bug 1251448 is assigned to Maru Newby, who I am talking with now in
> -neutron. He is addressing the bug. I will share what information I have
> regarding this one when I have some.
> 
> We are all looking forward to a more stable gate and this information
> really helps.
> 
> Thanks again, Joe,
> Anita.
> 
> On 11/20/2013 01:09 AM, Joe Gordon wrote:
>> Hi All,
>>
>> As many of you have noticed the gate has been in very bad shape over the
>> past few days.  Here is a list of some of the top open bugs (without
>> pending patches, and many recent hits) that we are hitting.  Gate won't be
>> stable, and it will be hard to get your code merged, until we fix these
>> bugs.
>>
>> 1) https://bugs.launchpad.net/bugs/1251920
>>  nova
>> 468 Hits
>> 2) https://bugs.launchpad.net/bugs/1251784
>>  neutron, Nova
>>  328 Hits
>> 3) https://bugs.launchpad.net/bugs/1249065
>>  neutron
>>   122 hits
>> 4) https://bugs.launchpad.net/bugs/1251448
>>  neutron
>> 65 Hits
>>
>> Raw Data:
>>
>>
>> Note: If a bug has any hits for anything besides failure, it means the
>> fingerprint isn't perfect.
>>
>> Elastic recheck known issues:
>>
>> Bug: https://bugs.launchpad.net/bugs/1251920
>>   Fingerprint: message:"assertionerror: console output was empty" AND filename:"console.html"
>>   Title: Tempest failures due to failure to return console logs from an instance
>>   Project status: nova: Confirmed
>>   Hits: FAILURE: 468
>>
>> Bug: https://bugs.launchpad.net/bugs/1251784
>>   Fingerprint: message:"Connection to neutron failed: Maximum attempts reached" AND filename:"logs/screen-n-cpu.txt"
>>   Title: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached
>>   Project status: neutron: New, nova: New
>>   Hits: FAILURE: 328, UNSTABLE: 13, SUCCESS: 275
>>
>> Bug: https://bugs.launchpad.net/bugs/1240256
>>   Fingerprint: message:" 503" AND filename:"logs/syslog.txt" AND syslog_program:"proxy-server"
>>   Title: swift proxy-server returning 503 during tempest run
>>   Project status: openstack-ci: Incomplete, swift: New, tempest: New
>>   Hits: FAILURE: 136, SUCCESS: 83 (pending patch)
>>
>> Bug: https://bugs.launchpad.net/bugs/1249065
>>   Fingerprint: message:"No nw_info cache associated with instance" AND filename:"logs/screen-n-api.txt"
>>   Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
>>   Project status: neutron: New, nova: Confirmed
>>   Hits: FAILURE: 122
>>
>> Bug: https://bugs.launchpad.net/bugs/1252514
>>   Fingerprint: message:"Got error from Swift: put_object" AND filename:"logs/screen-g-api.txt"
>>   Title: glance doesn't recover if Swift returns an error
>>   Project status: devstack: New, glance: New, swift: New
>>   Hits: FAILURE: 95 (pending patch)
>>
>> Bug: https://bugs.launchpad.net/bugs/1244255
>>   Fingerprint: message:"NovaException: Unexpected vif_type=binding_failed" AND filename:"logs/screen-n-cpu.txt"
>>   Title: binding_failed because of l2 agent assumed down
>>   Project status: neutron: Fix Committed
>>   Hits: FAILURE: 92, SUCCESS: 29
>>
>> Bug: https://bugs.launchpad.net/bugs/1251448
>>   Fingerprint: message:" possible networks found, use a Network ID to be more specific. (HTTP 400)" AND filename:"console.html"
>>   Title: BadRequest: Multiple possible networks found, use a Network ID to be more specific.
>>   Project status: neutron: New
>>   Hits: FAILURE: 65
>>
>> Bug: https://bugs.launchpad.net/bugs/1239856
>>   Fingerprint: message:"tempest/services" AND message:"/images_client.py" AND message:"wait_for_image_status" AND filename:"console.html"
>>   Title: "TimeoutException: Request timed out" on tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML
>>   Project status: glance: New
>>   Hits: FAILURE: 62
>>
>> Bug: https://bugs.launchpad.net/bugs/1235435
>>   Fingerprint: message:"One or more ports have an IP allocation from this subnet" AND message:" SubnetInUse: Unable to complete operation on subnet" AND filename:"logs/screen-q-svc.txt"
>>   Title: 'SubnetInUse: Una

[openstack-dev] [horizon] Javascript development improvement

2013-11-20 Thread Maxime Vidori
Hi all, I know it is pretty annoying but I have to resurrect this subject.

With the integration of AngularJS into Horizon we will encounter a lot of
JavaScript issues, so I ask you to reconsider bringing back Node.js as a
development platform. I am not talking about production; we all agree that
Node is not ready for production, and we do not want it as a backend. But the
fact is that we need many of its features, which would improve both testing
and development. Currently we have no JavaScript code-quality checking: jslint
is a great tool and can easily be run under Node. AngularJS also provides
end-to-end testing based on Node.js; testing is important, especially if we
start to put more logic into JS. Selenium is currently used just to run qUnit
tests; we could move these tests into Node and have a clean, unified testing
platform. Tests would be easier to run.

Finally (do not punch me in the face), lessc, which is used for Bootstrap, is
completely integrated into Node. I am afraid that modern JavaScript
development cannot be performed without this tool.

Regards

Maxime Vidori




[openstack-dev] OpenStack Alarms

2013-11-20 Thread Waines, Greg
I have a general question about alarming in OpenStack.

Service 'ceilometer-alarm-singleton' generates 'Threshold' Alarms based on 
threshold crossings of the meters it is monitoring.

And service 'ceilometer-alarm-notifier' listens to the RPC bus for Alarm 
Notification events and executes the defined triggered action(s) for any alarms.

However, a system has more than just meter-based threshold alarms, e.g. a
compute node failure, a physical interface failure, etc.
Is there a "GENERAL" RPC bus for alarm notification events in OpenStack?
Is it the one used by Ceilometer?
Is the intent that Ceilometer also collects "ALL" alarms, stores them in the
DB, and supports 'publishers' for alarms (e.g. an SNMP trap publisher)?
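A generic alarm-notification dispatcher of the kind these questions imply could be sketched as follows. All names here (the event shape, the publisher, the dispatcher class) are hypothetical stand-ins, not Ceilometer's actual API:

```python
class SnmpTrapPublisher:
    """Hypothetical publisher: would emit an SNMP trap per alarm event."""
    def __init__(self):
        self.sent = []  # stand-in for traps actually sent on the wire

    def publish(self, event):
        self.sent.append("trap:%s:%s" % (event["alarm"], event["state"]))

class AlarmDispatcher:
    """Consumes alarm notifications (meter-threshold or otherwise) off an
    RPC bus, stores them, and fans them out to registered publishers."""
    def __init__(self):
        self.publishers = []
        self.history = []  # stand-in for the alarm DB

    def register(self, publisher):
        self.publishers.append(publisher)

    def on_event(self, event):
        self.history.append(event)
        for p in self.publishers:
            p.publish(event)

dispatcher = AlarmDispatcher()
snmp = SnmpTrapPublisher()
dispatcher.register(snmp)

# A meter-threshold alarm and a non-meter event share the same path.
dispatcher.on_event({"alarm": "cpu_high", "state": "alarm"})
dispatcher.on_event({"alarm": "compute_node_down", "state": "alarm"})
assert snmp.sent == ["trap:cpu_high:alarm", "trap:compute_node_down:alarm"]
```

The point of the sketch is only that nothing in such a bus ties it to meter-based alarms; whether Ceilometer's actual bus is intended to play this general role is exactly the open question above.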



Greg.



Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-20 Thread Soren Hansen
2013/11/18 Mike Spreitzer :
> There were some concerns expressed at the summit about scheduler
> scalability in Nova, and a little recollection of Boris' proposal to
> keep the needed state in memory.


> I also heard one guy say that he thinks Nova does not really need a
> general SQL database, that a NOSQL database with a bit of
> denormalization and/or client-maintained secondary indices could
> suffice.

I may have said something along those lines. Just to clarify -- since
you started this post by talking about scheduler scalability -- the main
motivation for using a non-SQL backend isn't scheduler scalability, it's
availability and resilience. I just don't accept the failure modes that
MySQL (and derivatives such as Galera) impose.

> Has that sort of thing been considered before?

It's been talked about on and off since... well, probably since we
started this project.

> What is the community's level of interest in exploring that?

The session on adding a backend using a non-SQL datastore was pretty
well attended.


-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/



Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-20 Thread Chris Friesen

On 11/20/2013 10:06 AM, Soren Hansen wrote:

2013/11/18 Mike Spreitzer :

There were some concerns expressed at the summit about scheduler
scalability in Nova, and a little recollection of Boris' proposal to
keep the needed state in memory.




I also heard one guy say that he thinks Nova does not really need a
general SQL database, that a NOSQL database with a bit of
denormalization and/or client-maintained secondary indices could
suffice.


I may have said something along those lines. Just to clarify -- since
you started this post by talking about scheduler scalability -- the main
motivation for using a non-SQL backend isn't scheduler scalability, it's
availability and resilience. I just don't accept the failure modes that
MySQL (and derivatives such as Galera) impose.


Has that sort of thing been considered before?


It's been talked about on and off since... well, probably since we
started this project.


What is the community's level of interest in exploring that?


The session on adding a backend using a non-SQL datastore was pretty
well attended.


What about a hybrid solution?

There is data that is only used by the scheduler; for performance
reasons it might make sense to store that information in RAM, as
described at


https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

For the rest of the data, perhaps it could be persisted using some 
alternate backend.
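A rough sketch of that hybrid idea, with purely illustrative names: scheduler-only host state lives in memory and is rebuilt from compute node reports, while everything else goes through a pluggable durable backend:

```python
class InMemoryHostState:
    """Scheduler-only state kept in RAM, rebuilt from compute node reports."""
    def __init__(self):
        self.hosts = {}  # hostname -> {"free_ram_mb": ..., "free_disk_gb": ...}

    def update(self, hostname, free_ram_mb, free_disk_gb):
        # Called whenever a compute node reports in; no DB write needed.
        self.hosts[hostname] = {"free_ram_mb": free_ram_mb,
                                "free_disk_gb": free_disk_gb}

    def filter(self, ram_mb, disk_gb):
        # Pure in-memory filtering: no DB round trip per scheduling decision.
        return [h for h, s in self.hosts.items()
                if s["free_ram_mb"] >= ram_mb and s["free_disk_gb"] >= disk_gb]

class PluggableBackend:
    """Stand-in for the durable store (SQL, NoSQL, ...) for non-scheduler data."""
    def __init__(self):
        self._store = {}

    def save_instance(self, uuid, record):
        self._store[uuid] = record

    def get_instance(self, uuid):
        return self._store[uuid]

state = InMemoryHostState()
state.update("node1", free_ram_mb=8192, free_disk_gb=100)
state.update("node2", free_ram_mb=1024, free_disk_gb=10)
candidates = state.filter(ram_mb=2048, disk_gb=20)
assert candidates == ["node1"]
```

The in-memory store can be lossy (it is reconstructed from the next round of reports), which is what makes it cheap; only the instance records need the durable backend.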


Chris




Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
On Wed, Nov 20, 2013 at 3:21 PM, Sylvain Bauza wrote:
>
> Yes indeed, that's something coming into my mind. Looking at Nova, I found
> a "context_is_admin" policy in policy.json allowing you to say which role
> is admin or not [1] and is matched in policy.py [2], which itself is called
> when creating a context [3].
>
> I'm OK copying that, any objections to it ?
>

I would suggest not to copy this stuff from Nova. There's a lot of legacy
there and it's based on old openstack.common.policy version. We should rely
on openstack.common.policy alone, no need to add more checks here.


> [1] https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L2
>

This rule is here just to support


> [2] https://github.com/openstack/nova/blob/master/nova/policy.py#L116
>

this, which is used only


> [3] https://github.com/openstack/nova/blob/master/nova/context.py#L102
>

here. This is not what I would call a consistent usage of policies.

>
If we need to check access rights to some method, we should use an
appropriate decorator or helper method and let it check appropriate policy
rule that would contain "rule:admin_required", just like in Keystone:
https://github.com/openstack/keystone/blob/master/etc/policy.json.

context.is_admin should not be checked directly from code, only through
policy rules. It should be set only if we need to elevate privileges from
code. That should be the meaning of it.
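A minimal, library-agnostic sketch of that pattern (all names here are illustrative, not the actual openstack.common.policy API): methods are protected by named rules, only the "admin_required" rule reads is_admin, and code sets it only via elevated():

```python
class Context:
    def __init__(self, roles, is_admin=False):
        self.roles = roles
        self.is_admin = is_admin

    def elevated(self):
        # The only place is_admin gets set from code.
        return Context(self.roles, is_admin=True)

# Policy rules; in a real service these come from policy.json.
RULES = {
    "admin_required": lambda ctx: ctx.is_admin or "admin" in ctx.roles,
    "lease:delete": lambda ctx: RULES["admin_required"](ctx),
}

class PolicyNotAuthorized(Exception):
    pass

def enforce(rule, ctx):
    if not RULES[rule](ctx):
        raise PolicyNotAuthorized(rule)

def protected(rule):
    """Decorator: check the named policy rule, never ctx.is_admin directly."""
    def wrapper(fn):
        def inner(ctx, *args, **kwargs):
            enforce(rule, ctx)
            return fn(ctx, *args, **kwargs)
        return inner
    return wrapper

@protected("lease:delete")
def delete_lease(ctx, lease_id):
    return "deleted %s" % lease_id

admin = Context(roles=["admin"])
user = Context(roles=["member"])
assert delete_lease(admin, "42") == "deleted 42"
assert delete_lease(user.elevated(), "43") == "deleted 43"  # privileged code path
try:
    delete_lease(user, "44")
    raise AssertionError("should have been denied")
except PolicyNotAuthorized:
    pass
```

The deployer can then redefine "admin_required" (or any per-operation rule) in configuration without the service code ever inspecting the flag itself.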

-- 

Kind regards, Yuriy.


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Thierry Carrez
Anne Gentle wrote:
> It's nigh-impossible with the UX resources there now (four core) for
> them to attend all the project meetings with an eye to UX. Docs are in a
> similar situation. We also want docs to be present in every project.
> Docs as a program makes sense, and to me, UX as a program makes sense as
> well. The UX program can then prioritize what to focus on with the
> resources they have. 

The key difference between docs and UX is that documentation is a
separate deliverable, and is reviewed by the docs core team. UX work
ends up in each project's code, and gets reviewed by the project's core
team, not the UX team.

Blessing a separate team with no connection with the project core team
is imho a recipe for disaster, potentially with tension between a team
that recommends work to be done and a team that needs to actually get
the work coded, reviewed and merged.

That's why I see UX more like security or scalability, than like
documentation. A design goal rather than a deliverable. And design goals
need to be baked in the team that ends up writing and reviewing the
code. Making it separate will just make it less efficient.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Dolph Mathews
On Wed, Nov 20, 2013 at 10:24 AM, Yuriy Taraday  wrote:

>
> On Wed, Nov 20, 2013 at 3:21 PM, Sylvain Bauza wrote:
>
>> Yes indeed, that's something coming into my mind. Looking at Nova, I
>> found a "context_is_admin" policy in policy.json allowing you to say which
>> role is admin or not [1] and is matched in policy.py [2], which itself is
>> called when creating a context [3].
>>
>> I'm OK copying that, any objections to it ?
>>
>
> I would suggest not to copy this stuff from Nova. There's a lot of legacy
> there and it's based on old openstack.common.policy version. We should rely
> on openstack.common.policy alone, no need to add more checks here.
>
>
>> [1] https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L2
>>
>
> This rule is here just to support
>
>
>> [2] https://github.com/openstack/nova/blob/master/nova/policy.py#L116
>>
>
> this, which is used only
>
>
>> [3] https://github.com/openstack/nova/blob/master/nova/context.py#L102
>>
>
> here. This is not what I would call a consistent usage of policies.
>
>>
> If we need to check access rights to some method, we should use an
> appropriate decorator or helper method and let it check appropriate policy
> rule that would contain "rule:admin_required", just like in Keystone:
> https://github.com/openstack/keystone/blob/master/etc/policy.json.
>

++ Define actual policy rules with a suggested policy.json file, but do NOT
hardcode a definition of "admin". Allow the deployer to define more
granular policy. oslo.policy makes this pretty easy. If you're looking at
keystone, be sure to look at how we protect v3 controller methods (which
ask the policy engine, "does the requestor have authorization to perform
this operation?"), NOT how we protect v2 controller methods (which ask the
policy engine, "does the requestor have a magical pre-defined role?"
regardless of what operation is actually being performed).


>
> context.is_admin should not be checked directly from code, only through
> policy rules. It should be set only if we need to elevate privileges from
> code. That should be the meaning of it.
>

is_admin is short-sighted and not at all granular; it needs to die, so
avoid imitating it.


>
>
> --
>
> Kind regards, Yuriy.
>
>
>


-- 

-Dolph


Re: [openstack-dev] [Cinder] TaskFlow 0.1 integration

2013-11-20 Thread Joshua Harlow
Howdy!

My guess is there is more than enough work to go around, and it might be useful
to jump on IRC to discuss where you want to help out.

I have some ideas and I'm sure the cinder folks do also. Did you just want to
work with Cinder? Glance, I think, could also be an interesting TaskFlow
integration point, or Nova (or...). So much to do :)

Sent from my really tiny device...

On Nov 20, 2013, at 12:36 AM, "haruka tanizawa" <harube...@gmail.com> wrote:

Hi there!

Thank you for your suggestion.
If you have any untouched or unfinished APIs, can I help you?

TaskFlow is important to me, particularly for supporting cancellation via
TaskFlow. So let me help you implement TaskFlow in the API.

Sincerely, Haruka Tanizawa


2013/11/20 Joshua Harlow <harlo...@yahoo-inc.com>
Sweet!

Feel free to ask lots of questions and jump on both irc channels 
(#openstack-state-management and #openstack-cinder) if u need any help that can 
be better solved in real time chat.

Thanks for helping getting this ball rolling :-)

Sent from my really tiny device...

> On Nov 19, 2013, at 8:46 AM, "Walter A. Boring IV" <walter.bor...@hp.com> wrote:
>
> Awesome guys,
>  Thanks for picking this up.   I'm looking forward to the reviews :)
>
> Walt
>>> On 19.11.2013 10:38, Kekane, Abhishek wrote:
>>> Hi All,
>>>
>>> Greetings!!!
>> Hi there!
>>
>> And thanks for your interest in cinder and taskflow!
>>
>>> We are in process of implementing the TaskFlow 0.1 in Cinder for "copy
>>> volume to image" and "delete volume".
>>>
>>> I have added two blueprints for the same.
>>> 1. 
>>> https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow
>>> 2. https://blueprints.launchpad.net/cinder/+spec/delete-volume-task-flow
>>>
>>> I would like to know if any other developers/teams are working or
>>> planning to work on any cinder api apart from above two api's.
>>>
>>> Your help is appreciated.
>> Anastasia Karpinska works on updating existing flows to use released
>> TaskFlow 0.1.1 instead of internal copy:
>>
>> https://review.openstack.org/53922
>>
>> It was blocked because taskflow was not in openstack/requirements, but
>> now we're there, and Anastasia promised to finish the work and submit
>> updated changeset for review in couple of days.
>>
>> There are also two changesets that convert cinder APIs to use TaskFlow:
>> - https://review.openstack.org/53480 for create_backup by Victor
>>   Rodionov
>> - https://review.openstack.org/55134 for create_snapshot by Stanislav
>>   Kudriashev
>>
>> As far as I know both Stanislav and Victor suspended their work until
>> Anastasia's change lands.
>
>



Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
Hello, Dolph.

On Wed, Nov 20, 2013 at 8:42 PM, Dolph Mathews wrote:

>
> On Wed, Nov 20, 2013 at 10:24 AM, Yuriy Taraday wrote:
>
>>
>> context.is_admin should not be checked directly from code, only through
>> policy rules. It should be set only if we need to elevate privileges from
>> code. That should be the meaning of it.
>>
>
> is_admin is a short sighted and not at all granular -- it needs to die, so
> avoid imitating it.
>

I suggest keeping it in case we need to elevate privileges from code. In
that case we can't rely on roles, so a single flag should work fine.
As I said before, we should avoid setting or reading is_admin directly from
code. It should be set only in context.elevated and read only by
"admin_required" policy rule.

Does this sound reasonable?

-- 

Kind regards, Yuriy.


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Jesse Noller

On Nov 20, 2013, at 10:27 AM, Thierry Carrez  wrote:

> Anne Gentle wrote:
>> It's nigh-impossible with the UX resources there now (four core) for
>> them to attend all the project meetings with an eye to UX. Docs are in a
>> similar situation. We also want docs to be present in every project.
>> Docs as a program makes sense, and to me, UX as a program makes sense as
>> well. The UX program can then prioritize what to focus on with the
>> resources they have. 
> 
> The key difference between docs and UX is that documentation is a
> separate deliverable, and is reviewed by the docs core team. UX work
> ends up in each project's code, and gets reviewed by the project's core
> team, not the UX team.
> 
> Blessing a separate team with no connection with the project core team
> is imho a recipe for disaster, potentially with tension between a team
> that recommends work to be done and a team that needs to actually get
> the work coded, reviewed and merged.
> 
> That's why I see UX more like security or scalability, than like
> documentation. A design goal rather than a deliverable. And design goals
> need to be baked in the team that ends up writing and reviewing the
> code. Making it separate will just make it less efficient.

What if “the UX” team was not focused on dictating standards; but rather 
outlining them and then directly working with the individual teams to actually 
do the work? This is something I’ve been actually thinking about heavily and 
planned on typing up a proposal on, primarily focused on SDK / API / CLI 
consistency and developer experience.

This is something near and dear to my heart - I’ve actually been talking to 
Doug Hellmann and others about this and trying to find a balance between the 
competing forces you describe: I don’t think a stand alone team that doesn’t 
write the actual code/backend in conjunction with the individual projects can 
work; but I do feel there needs to be a unified vision and common plan held 
together by a team focused on this subject.

So in *my* ideal world: the “ux” team for this would be a mixture of individual 
project members, people solely focused on this but writing code *and* 
specifications and generally working across teams.

Jesse


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-20 Thread Edgar Magana
Let me take a look and circle back to you in a bit. This is a very
sensitive part of the code, so we need to handle any change carefully.

Thanks,

Edgar
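The two-phase pre/post-commit split that Bob describes for ML2 in the quoted thread below can be sketched roughly as follows. All class names here are illustrative stand-ins, not the real ML2 code; the point is only the shape of the control flow, including the cleanup of a partially-created resource when the backend call fails after the DB commit:

```python
class FakeSession:
    """Stand-in for a DB transaction."""
    def __init__(self, db):
        self.db = db

    def commit_network(self, net):
        self.db[net["id"]] = net

class BackendDriver:
    """Talks to the external controller; may fail after the DB commit."""
    def __init__(self, fail=False):
        self.fail = fail

    def create_network_precommit(self, net):
        pass  # validate/record inside the transaction; must not do slow I/O

    def create_network_postcommit(self, net):
        if self.fail:
            raise RuntimeError("backend unreachable")

class Plugin:
    def __init__(self, driver):
        self.db = {}
        self.driver = driver

    def create_network(self, net):
        session = FakeSession(self.db)
        # Phase 1: inside the DB transaction.
        self.driver.create_network_precommit(net)
        session.commit_network(net)
        # Phase 2: after commit; on failure, roll back the partial resource.
        try:
            self.driver.create_network_postcommit(net)
        except Exception:
            del self.db[net["id"]]
            raise

ok = Plugin(BackendDriver())
ok.create_network({"id": "net-1"})
assert "net-1" in ok.db

bad = Plugin(BackendDriver(fail=True))
try:
    bad.create_network({"id": "net-2"})
except RuntimeError:
    pass
assert "net-2" not in bad.db  # partially-created resource was cleaned up
```

As Isaku notes in the thread, this still leaves a race window between the commit and the postcommit call, which is what the proposed pending_* states aim to close.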

On 11/20/13 5:46 AM, "Isaku Yamahata"  wrote:

>On Tue, Nov 19, 2013 at 08:59:38AM -0800,
>Edgar Magana  wrote:
>
>> Do you have in mind any implementation, any BP?
>> We could actually work on this together, all plugins will get the
>>benefits
>> of a better implementation.
>
>Yes, let's work together. Here is my blueprint (it's somewhat old.
>So needs to be updated.)
>https://blueprints.launchpad.net/neutron/+spec/fix-races-of-db-based-plugi
>n
>https://docs.google.com/file/d/0B4LNMvjOzyDuU2xNd0piS3JBMHM/edit
>
>Although I've thought of status change(adding more status) and locking
>protocol so far, TaskFlow seems something to look at before starting and
>another possible approach is decoupling backend process from api call
>as Salvatore suggested like NVP plugin.
>Even with taskflow or decoupling approach, some kind of enhancing status
>change/locking protocol will be necessary for performance of creating
>many ports at once.
>
>thanks,
>
>> 
>> Thanks,
>> 
>> Edgar
>> 
>> On 11/19/13 3:57 AM, "Isaku Yamahata"  wrote:
>> 
>> >On Mon, Nov 18, 2013 at 03:55:49PM -0500,
>> >Robert Kukura  wrote:
>> >
>> >> On 11/18/2013 03:25 PM, Edgar Magana wrote:
>> >> > Developers,
>> >> > 
>> >> > This topic has been discussed before but I do not remember if we
>>have
>> >>a
>> >> > good solution or not.
>> >> 
>> >> The ML2 plugin addresses this by calling each MechanismDriver twice.
>>The
>> >> create_network_precommit() method is called as part of the DB
>> >> transaction, and the create_network_postcommit() method is called
>>after
>> >> the transaction has been committed. Interactions with devices or
>> >> controllers are done in the postcommit methods. If the postcommit
>>method
>> >> raises an exception, the plugin deletes that partially-created
>>resource
>> >> and returns the exception to the client. You might consider a similar
>> >> approach in your plugin.
>> >
>> >Splitting the work into two phases, pre/post, is a good approach.
>> >But a race window still remains.
>> >Once the transaction is committed, the result is visible to the outside,
>> >so concurrent requests to the same resource will be racy.
>> >There is a window after pre_xxx_yyy() and before post_xxx_yyy() where
>> >other requests can be handled.
>> >
>> >The state machine needs to be enhanced, I think (plugins need
>> >modification). For example, adding more states like
>> >pending_{create, delete, update}.
>> >Also we would like to consider serializing between operations on ports
>> >and subnets, or between operations on subnets and networks, depending
>> >on performance requirements.
>> >(Or carefully audit complex status changes, i.e.
>> >changing a port during subnet/network update/deletion.)
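The pending-state guard suggested here can be sketched as a compare-and-swap on the resource status (toy in-memory code under invented names, not Neutron's actual DB layer):

```python
# Toy sketch of the pending-state idea: the status change acts as a
# compare-and-swap "inside the transaction", so a request racing into
# the pre/post window is rejected instead of corrupting state.

class Conflict(Exception):
    pass

ACTIVE = "ACTIVE"
PENDING_UPDATE = "PENDING_UPDATE"

class PortStore:
    def __init__(self):
        self.ports = {}   # stands in for the Neutron DB

    def create(self, port_id):
        self.ports[port_id] = ACTIVE

    def begin_update(self, port_id):
        # would run inside the DB transaction
        if self.ports[port_id] != ACTIVE:
            raise Conflict("port %s is busy" % port_id)
        self.ports[port_id] = PENDING_UPDATE

    def finish_update(self, port_id):
        # runs after the backend call succeeds
        self.ports[port_id] = ACTIVE

store = PortStore()
store.create("p1")
store.begin_update("p1")       # first caller enters the window

rejected = False
try:
    store.begin_update("p1")   # concurrent caller hits the guard
except Conflict:
    rejected = True

store.finish_update("p1")      # first caller completes
print(rejected, store.ports["p1"])  # True ACTIVE
```

A real implementation would do the compare-and-swap as a conditional SQL UPDATE so two API workers cannot both pass the guard.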
>> >
>> >I think it would be useful to establish reference locking policy
>> >for ML2 plugin for SDN controllers.
>> >Thoughts or comments? If this is considered useful and acceptable,
>> >I'm willing to help.
>> >
>> >thanks,
>> >Isaku Yamahata
>> >
>> >> -Bob
>> >> 
>> >> > Basically, if concurrent API calls are sent to Neutron, all of them
>> >> > are sent to the plug-in level, where two actions have to be made:
>> >> > 
>> >> > 1. DB transaction - not just for data persistence but also to
>> >> > collect the information needed for the next action
>> >> > 2. Plug-in back-end implementation - in our case a call to the
>> >> > python library that consequently calls the PLUMgrid REST GW (soon SAL)
>> >> > 
>> >> > For instance:
>> >> > 
>> >> > def create_port(self, context, port):
>> >> >     with context.session.begin(subtransactions=True):
>> >> >         # Plugin DB - Port Create and Return port
>> >> >         port_db = super(NeutronPluginPLUMgridV2,
>> >> >                         self).create_port(context, port)
>> >> >         device_id = port_db["device_id"]
>> >> >         if port_db["device_owner"] == "network:router_gateway":
>> >> >             router_db = self._get_router(context, device_id)
>> >> >         else:
>> >> >             router_db = None
>> >> >         try:
>> >> >             LOG.debug(_("PLUMgrid Library: create_port() called"))
>> >> >             # Back-end implementation
>> >> >             self._plumlib.create_port(port_db, router_db)
>> >> >         except Exception:
>> >> >             ...
>> >> > 
>> >> > The way we have implemented this at the plugin level in Havana (even
>> >> > in Grizzly) is that both actions are wrapped in the same
>> >> > "transaction", which automatically rolls back any operation done to
>> >> > its original state, mostly protecting the DB from ending up in an
>> >> > inconsistent state or with leftover data if the back-end part fails.
>> >> > The problem that we are experiencing is that when concurrent calls
>> >> > to the same API are sent, the number of operations at the plug-in
>> >> > back-end is long enough 

Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-20 Thread Dean Troyer
On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez wrote:

> However, as was apparent in the Technical Committee meeting discussion
> about it yesterday, most of us are not convinced that establishing and
> blessing a separate team is the most efficient way to give UX the
> attention it deserves. Ideally, UX-minded folks would get active
> *within* existing project teams rather than form some sort of
> counter-power as a separate team. In the same way we want scalability
> and security mindset to be present in every project, we want UX to be
> present in every project. It's more of an advocacy group than a
> "program" imho.
>

Having been working on a cross-project project for a while now I can
confirm that there is a startling lack of coordination between projects on
the same/similar tasks.  Oslo was a HUGE step toward reducing that and
providing a good reference for code and libraries.  There is nothing today
for the intangible parts that are both user and developer facing such as
common message (log) formats, common terms (tenant vs project) and so on.

I think the model of the OSSG is a good one.  After reading the log of
yesterday's meeting I think I would have thrown in that the need from my
perspective is for a coordination role as much as anything.

The deliverables in the UX Program proposal seem a bit fuzzy to me as far
as what might go into the repo.  If it is interface specs, those should be
in either the project code repos docs/ tree or in the docs project
directly.  Same for a Human Interface Guide (HIG) that both Horizon and OSC
have (well, I did steal a lot of OSC's guide from Horizon's).

The interaction with users seems very valuable to me.  Frankly, user polls
are not my favorite task, even if I always learn a lot about my project
watching users bend it in ways I never imagined.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Roman Podoliaka
Hey all,

I think I found a serious bug in our usage of eventlet thread local
storage. Please check out this snippet [1].

This is how we use eventlet TLS in Nova and common Oslo code [2]. This
could explain how [3] actually breaks TripleO devtest story and our
gates.

Am I right? Or am I missing something and should get some sleep? :)

Thanks,
Roman

[1] http://paste.openstack.org/show/53686/
[2] 
https://github.com/openstack/nova/blob/master/nova/openstack/common/local.py#L48
[3] 
https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
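For readers without the paste, the class-vs-instance pitfall in question can be reproduced with plain threading.local, which eventlet's corolocal.local mirrors (a standalone sketch, not the actual Nova code):

```python
import threading

# Storing state on the local-storage *class* shares it across every
# thread (or greenthread); only an *instance* gives per-thread storage.
# threading.local is used here to keep the sketch dependency-free.

class SharedByMistake(threading.local):
    pass

class PerThread(threading.local):
    pass

wrong = SharedByMistake    # oops: the class itself, not an instance
right = PerThread()        # an instance: real per-thread storage

wrong.client = "shared client"   # plain class attribute, visible everywhere
right.client = "main client"     # stored only for the main thread

results = {}

def worker():
    # a different thread, standing in for another greenthread/request
    results["wrong"] = getattr(wrong, "client", None)
    results["right"] = getattr(right, "client", None)

t = threading.Thread(target=worker)
t.start()
t.join()

print(results)  # {'wrong': 'shared client', 'right': None}
```

The "wrong" storage leaks the client into every thread, while the instance-based storage correctly comes up empty in the new thread.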

On Wed, Nov 20, 2013 at 5:55 PM, Derek Higgins  wrote:
> On 20/11/13 14:21, Anita Kuno wrote:
>> Thanks for posting this, Joe. It really helps to create focus so we can
>> address these bugs.
>>
>> We are chatting in #openstack-neutron about 1251784, 1249065, and 1251448.
>>
>> We are looking for someone to work on 1251784 - I had mentioned it at
>> Monday's Neutron team meeting and am trying to shop it around in
>> -neutron now. We need someone other than Salvatore, Aaron or Maru to
>> work on this since they each have at least one very important bug they
>> are working on. Please join us in #openstack-neutron and lend a hand -
>> all of OpenStack needs your help.
>
> I've been hitting this in tripleo intermittently for the last few days
> (or at least it looks to be the same bug). This morning, while trying to
> debug the problem, I noticed HTTP requests/responses happening out of
> order. I've added details to the bug.
>
> https://bugs.launchpad.net/tripleo/+bug/1251784
>
>>
>> Bug 1249065 is assigned to Aaron Rosen, who isn't in the channel at the
>> moment, so I don't have an update on his progress or any blockers he is
>> facing. Hopefully (if you are reading this, Aaron) he will join us in
>> channel soon and I can hear from him about his status.
>>
>> Bug 1251448 is assigned to Maru Newby, who I am talking with now in
>> -neutron. He is addressing the bug. I will share what information I have
>> regarding this one when I have some.
>>
>> We are all looking forward to a more stable gate and this information
>> really helps.
>>
>> Thanks again, Joe,
>> Anita.
>>
>> On 11/20/2013 01:09 AM, Joe Gordon wrote:
>>> Hi All,
>>>
>>> As many of you have noticed the gate has been in very bad shape over the
>>> past few days.  Here is a list of some of the top open bugs (without
>>> pending patches, and many recent hits) that we are hitting.  Gate won't be
>>> stable, and it will be hard to get your code merged, until we fix these
>>> bugs.
>>>
>>> 1) https://bugs.launchpad.net/bugs/1251920
>>>  nova
>>> 468 Hits
>>> 2) https://bugs.launchpad.net/bugs/1251784
>>>  neutron, Nova
>>>  328 Hits
>>> 3) https://bugs.launchpad.net/bugs/1249065
>>>  neutron
>>>   122 hits
>>> 4) https://bugs.launchpad.net/bugs/1251448
>>>  neutron
>>> 65 Hits
>>>
>>> Raw Data:
>>>
>>>
>>> Note: If a bug has any hits for anything besides failure, it means the
>>> fingerprint isn't perfect.
>>>
>>> Elastic recheck known issues
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1251920 => message:"assertionerror:
>>> console output was empty" AND filename:"console.html"
>>> Title: Tempest failures due to failure to return console logs from an
>>> instance
>>> Project: Status nova: Confirmed
>>> Hits FAILURE: 468
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1251784 => message:"Connection to
>>> neutron failed: Maximum attempts reached" AND
>>> filename:"logs/screen-n-cpu.txt"
>>> Title: nova+neutron scheduling error: Connection to neutron failed:
>>> Maximum attempts reached
>>> Project: Status neutron: New nova: New
>>> Hits FAILURE: 328 UNSTABLE: 13 SUCCESS: 275
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1240256 => message:" 503" AND
>>> filename:"logs/syslog.txt" AND syslog_program:"proxy-server"
>>> Title: swift proxy-server returning 503 during tempest run
>>> Project: Status openstack-ci: Incomplete swift: New tempest: New
>>> Hits FAILURE: 136 SUCCESS: 83
>>> Pending Patch
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1249065 => message:"No nw_info
>>> cache associated with instance" AND filename:"logs/screen-n-api.txt"
>>> Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
>>> Project: Status neutron: New nova: Confirmed
>>> Hits FAILURE: 122
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1252514 => message:"Got error from
>>> Swift: put_object" AND filename:"logs/screen-g-api.txt"
>>> Title: glance doesn't recover if Swift returns an error
>>> Project: Status devstack: New glance: New swift: New
>>> Hits FAILURE: 95
>>> Pending Patch
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1244255 =>
>>> message:"NovaException: Unexpected vif_type=binding_failed" AND
>>> filename:"logs/screen-n-cpu.txt"
>>> Title: binding_failed because of l2 agent assumed down
>>> Project: Status neutron: Fix Committed
>>> Hits FAILURE: 92 SUCCESS: 29
>>>
>>> Bug: https://bugs.launchpad.net/bugs/1251448 => message:" possible
>>> networks found, use a Network ID to be more specific. (HTTP 400)"
>>> AND filename:"console.html"
>>> Title: BadRequest: Multiple possi

Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Alex Gaynor
Nope, you're totally right, corolocal.local is a class, whose instances are
the actual coroutine local storage.

Alex


On Wed, Nov 20, 2013 at 9:11 AM, Roman Podoliaka wrote:

> Hey all,
>
> I think I found a serious bug in our usage of eventlet thread local
> storage. Please check out this snippet [1].
>
> This is how we use eventlet TLS in Nova and common Oslo code [2]. This
> could explain how [3] actually breaks TripleO devtest story and our
> gates.
>
> Am I right? Or I am missing something and should get some sleep? :)
>
> Thanks,
> Roman
>
> [1] http://paste.openstack.org/show/53686/
> [2]
> https://github.com/openstack/nova/blob/master/nova/openstack/common/local.py#L48
> [3]
> https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
>
> On Wed, Nov 20, 2013 at 5:55 PM, Derek Higgins  wrote:
> > On 20/11/13 14:21, Anita Kuno wrote:
> >> Thanks for posting this, Joe. It really helps to create focus so we can
> >> address these bugs.
> >>
> >> We are chatting in #openstack-neutron about 1251784, 1249065, and
> 1251448.
> >>
> >> We are looking for someone to work on 1251784 - I had mentioned it at
> >> Monday's Neutron team meeting and am trying to shop it around in
> >> -neutron now. We need someone other than Salvatore, Aaron or Maru to
> >> work on this since they each have at least one very important bug they
> >> are working on. Please join us in #openstack-neutron and lend a hand -
> >> all of OpenStack needs your help.
> >
> > I've been hitting this in tripleo intermittently for the last few days
> > (or it at least looks to be the same bug), this morning while trying to
> > debug the problem I noticed http request/responses happening out of
> > order. I've added details to the bug.
> >
> > https://bugs.launchpad.net/tripleo/+bug/1251784
> >
> >>
> >> Bug 1249065 is assigned to Aaron Rosen, who isn't in the channel at the
> >> moment, so I don't have an update on his progress or any blockers he is
> >> facing. Hopefully (if you are reading this Aaron) he will join us in
> >> channel soon and I had hear from him about his status.
> >>
> >> Bug 1251448 is assigned to Maru Newby, who I am talking with now in
> >> -neutron. He is addressing the bug. I will share what information I have
> >> regarding this one when I have some.
> >>
> >> We are all looking forward to a more stable gate and this information
> >> really helps.
> >>
> >> Thanks again, Joe,
> >> Anita.
> >>
> >> On 11/20/2013 01:09 AM, Joe Gordon wrote:
> >>> Hi All,
> >>>
> >>> As many of you have noticed the gate has been in very bad shape over
> the
> >>> past few days.  Here is a list of some of the top open bugs (without
> >>> pending patches, and many recent hits) that we are hitting.  Gate
> won't be
> >>> stable, and it will be hard to get your code merged, until we fix these
> >>> bugs.
> >>>
> >>> 1) https://bugs.launchpad.net/bugs/1251920
> >>>  nova
> >>> 468 Hits
> >>> 2) https://bugs.launchpad.net/bugs/1251784
> >>>  neutron, Nova
> >>>  328 Hits
> >>> 3) https://bugs.launchpad.net/bugs/1249065
> >>>  neutron
> >>>   122 hits
> >>> 4) https://bugs.launchpad.net/bugs/1251448
> >>>  neutron
> >>> 65 Hits
> >>>
> >>> Raw Data:
> >>>
> >>>
> >>> Note: If a bug has any hits for anything besides failure, it means the
> >>> fingerprint isn't perfect.
> >>>
> >>> Elastic recheck known issues
> >>>  Bug: https://bugs.launchpad.net/bugs/1251920 =>
> message:"assertionerror:
> >>> console output was empty" AND filename:"console.html" Title: Tempest
> >>> failures due to failure to return console logs from an instance
> Project:
> >>> Status nova: Confirmed Hits FAILURE: 468 Bug:
> >>> https://bugs.launchpad.net/bugs/1251784 => message:"Connection to
> neutron
> >>> failed: Maximum attempts reached" AND filename:"logs/screen-n-cpu.txt"
> >>> Title: nova+neutron scheduling error: Connection to neutron failed:
> Maximum
> >>> attempts reached Project: Status neutron: New nova: New Hits FAILURE:
> 328
> >>> UNSTABLE: 13 SUCCESS: 275 Bug: https://bugs.launchpad.net/bugs/1240256=>
> >>> message:" 503" AND filename:"logs/syslog.txt" AND
> >>> syslog_program:"proxy-server" Title: swift proxy-server returning 503
> >>> during tempest run Project: Status openstack-ci: Incomplete swift: New
> >>> tempest: New Hits FAILURE: 136 SUCCESS: 83
> >>> Pending Patch Bug: https://bugs.launchpad.net/bugs/1249065 =>
> message:"No
> >>> nw_info cache associated with instance" AND
> >>> filename:"logs/screen-n-api.txt" Title: Tempest failure:
> >>> tempest/scenario/test_snapshot_pattern.py Project: Status neutron: New
> >>> nova: Confirmed Hits FAILURE: 122 Bug:
> >>> https://bugs.launchpad.net/bugs/1252514 => message:"Got error from
> Swift:
> >>> put_object" AND filename:"logs/screen-g-api.txt" Title: glance doesn't
> >>> recover if Swift returns an error Project: Status devstack: New
> glance: New
> >>> swift: New Hits FAILURE: 95
> >>> Pending Patch Bug: https://bugs.launchpad.net/bugs/1244255 =>
> >>> message:"NovaException: 

Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Russell Bryant
On 11/20/2013 12:21 PM, Alex Gaynor wrote:
> Nope, you're totally right, corolocal.local is a class, whose instances
> are the actual coroutine local storage.

But I don't think his example is what is being used.

Here is an example using the openstack.common.local module, which is
what nova uses for this.  This produces the expected output.

http://paste.openstack.org/show/53687/

https://git.openstack.org/cgit/openstack/nova/tree/nova/openstack/common/local.py

For reference, original example from OP:
http://paste.openstack.org/show/53686/
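As context on why that module needs both a weak store and a strong_store: a weakly-held value can vanish once its owner drops the last strong reference. A standalone sketch with plain weakref (assuming WeakLocal wraps values in weakref.ref, which is the motivation here — not the module's actual code):

```python
import weakref

# Minimal illustration of weak storage: once the last strong reference
# to the stored object is dropped, the weakly-held value disappears.
# This is why values that must outlive the caller go in strong_store.

class Context:
    pass

weak_store = {}
ctx = Context()
weak_store["context"] = weakref.ref(ctx)

assert weak_store["context"]() is ctx   # alive while 'ctx' exists

del ctx                                 # caller drops its reference
print(weak_store["context"]())          # None on CPython: value is gone
```
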

-- 
Russell Bryant



Re: [openstack-dev] [Neutron][Tempest] Tempest API test for Neutron LBaaS

2013-11-20 Thread David Kranz

On 11/19/2013 09:40 PM, Sean Dague wrote:

On 11/17/2013 09:55 PM, Eugene Nikanorov wrote:

Hi folks,

I'm working on major change to Neutron LBaaS API, obviously it will
break existing tempest API tests for LBaaS.
What would be the right process to deal with this? I guess I can't just
push fixed tests to tempest because they will not pass against existing
neutron code, and vice versa.


The answer is, you don't. If it's a published API then it has to be
versioned. Changing an existing API in a non-compatible way is not
allowed.

Indeed. See these for more details:

https://wiki.openstack.org/wiki/Governance/Approved/APIStability
https://wiki.openstack.org/wiki/APIChangeGuidelines

 -David




So please explain the deprecation and versioning plan for LBaaS, then 
we can probably figure out the right path forward.


-Sean






Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Dolph Mathews
On Wed, Nov 20, 2013 at 10:52 AM, Yuriy Taraday  wrote:

> Hello, Dolph.
>
> On Wed, Nov 20, 2013 at 8:42 PM, Dolph Mathews wrote:
>
>>
>> On Wed, Nov 20, 2013 at 10:24 AM, Yuriy Taraday wrote:
>>
>>>
>>> context.is_admin should not be checked directly from code, only through
>>> policy rules. It should be set only if we need to elevate privileges from
>>> code. That should be the meaning of it.
>>>
>>
>> is_admin is short-sighted and not at all granular -- it needs to die,
>> so avoid imitating it.
>>
>
>  I suggest keeping it in case we need to elevate privileges from code.
>

Can you expand on this point? It sounds like you want to ignore the
deployer-specified authorization configuration...


> In this case we can't rely on roles so just one flag should work fine.
> As I said before, we should avoid setting or reading is_admin directly
> from code. It should be set only in context.elevated and read only by
> "admin_required" policy rule.
>
> Does this sound reasonable?
>
> --
>
> Kind regards, Yuriy.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

-Dolph


Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-20 Thread Devananda van der Veen
Responses inline.

On Wed, Nov 20, 2013 at 2:19 AM, Ladislav Smola  wrote:

> Ok, I'll try to summarize what will be done in the near future for
> Undercloud monitoring.
>
> 1. There will be a Central agent running on the same host (hosts, once
> the central agent horizontal scaling is finished) as Ironic
>

Ironic is meant to be run with >1 conductor service. By i-2 milestone we
should be able to do this, and running at least 2 conductors will be
recommended. When will Ceilometer be able to run with multiple agents?

On a side note, it is a bit confusing to call something a "central agent"
if it is meant to be horizontally scaled. The ironic-conductor service has
been designed to scale out in a similar way to nova-conductor; that is,
there may be many of them in an AZ. I'm not sure that there is a need for
Ceilometer's agent to scale in exactly a 1:1 relationship with
ironic-conductor?


> 2. It will have an SNMP pollster; the SNMP pollster will be able to get
> a list of hosts and their IPs from Nova (last time I checked it was
> in Nova) so it can poll them for stats. Hosts to poll can also be
> defined statically in a config file.
>

Assuming all the undercloud images have an SNMP daemon baked in, which they
should, then this is fine. And yes, Nova can give you the IP addresses for
instances provisioned via Ironic.


> 3. It will have an IPMI pollster that will poll the Ironic API, getting
> a list of hosts and a fixed set of stats (basically everything
> that we can get :-))
>

No -- I thought we just agreed that Ironic will not expose an API for IPMI
data. You can poll Nova to get a list of instances (that are on bare metal)
and you can poll Ironic to get a list of nodes (either nodes that have an
instance associated, or nodes that are unprovisioned) but this will only
give you basic information about the node (such as the MAC addresses of its
network ports, and whether it is on/off, etc).


> 4. Ironic will also emit messages (basically all events regarding the
> hardware) and send them directly to Ceilometer collector
>

Correct. I've updated the BP:

https://blueprints.launchpad.net/ironic/+spec/add-ceilometer-agent

Let me know if that looks like a good description.

-Devananda



> Does it seems to be correct? I think that is the basic we must have to
> have Undercloud monitored. We can then build on that.
>
> Kind regards,
> Ladislav
>
>

> On 11/20/2013 09:22 AM, Julien Danjou wrote:
>
>> On Tue, Nov 19 2013, Devananda van der Veen wrote:
>>
>>  If there is a fixed set of information (eg, temp, fan speed, etc) that
>>> ceilometer will want,
>>>
>> Sure, we want everything.
>>
>>  let's make a list of that and add a driver interface
>>> within Ironic to abstract the collection of that information from
>>> physical
>>> nodes. Then, each driver will be able to implement it as necessary for
>>> that
>>> vendor. Eg., an iLO driver may poll its nodes differently than a generic
>>> IPMI driver, but the resulting data exported to Ceilometer should have
>>> the
>>> same structure.
>>>
>> I like the idea.
>>
>>  An SNMP agent doesn't fit within the scope of Ironic, as far as I see, so
>>> this would need to be implemented by Ceilometer.
>>>
>> We're working on adding pollster for that indeed.
>>
>>  As far as where the SNMP agent would need to run, it should be on the
>>> same host(s) as ironic-conductor so that it has access to the
>>> management network (the physically-separate network for hardware
>>> management, IPMI, etc). We should keep the number of applications with
>>> direct access to that network to a minimum, however, so a thin agent
>>> that collects and forwards the SNMP data to the central agent would be
>>> preferable, in my opinion.
>>>
>> We can keep things simple by having the agent only doing that polling I
>> think. Building a new agent sounds like it will complicate deployment
>> again.
>>
>>
>


[openstack-dev] How to stage client major releases in Gerrit?

2013-11-20 Thread Mark Washenberger
Hi folks,

The project python-glanceclient is getting close to needing a major release
in order to finally remove some long-deprecated features, and to make some
minor adjustments that are technically backwards-incompatible.

Normally, our release process works great. When we cut a release (say
1.0.0), if we realize it doesn't contain a feature we need, we can just add
the feature and release a new minor version (say 1.1.0). However, when it
comes to cutting out the fat for a major release, if we find a feature that
we failed to remove before releasing 1.0.0, we're basically screwed. We
have to keep that feature around until we feel like releasing 2.0.0.

In order to mitigate that risk, I think it would make a lot of sense to
have a place to stage and carefully consider all the breaking changes we
want to make. I also would like to have that place be somewhere in Gerrit
so that it fits in with our current submission and review process. But if
that place is the 'master' branch and we take a long time, then we can't
really release any bug fixes to the v0 series in the meantime.

I can think of a few workarounds, but they all seem kinda bad. For example,
we could put all the breaking changes together in one commit, or we could
do all this prep in github.

My question is, is there a correct way to stage breaking changes in Gerrit?
Has some other team already dealt with this problem?
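One workaround, sketched below with invented branch names and under the assumption that a project admin creates the branch in Gerrit (reviews would then target it via refs/for/feature/v1), is a long-lived staging branch rather than a mega-commit or GitHub:

```shell
# Sketch: keep the default branch on the v0 series and stage breaking
# changes on a long-lived 'feature/v1' branch. Local demo only; in real
# use the branch lives in Gerrit and changes are pushed for review.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email reviewer@example.com
git config user.name reviewer
git commit -q --allow-empty -m "v0: current release line"
git checkout -q -b feature/v1                 # staging branch for 2.0.0
git commit -q --allow-empty -m "v1: remove long-deprecated features"
git checkout -q -                             # back to the default branch
git commit -q --allow-empty -m "v0: bug fix, releasable as 0.x"
git rev-list --all --count                    # commits across both lines
```

Bug fixes keep landing on the default branch for 0.x releases while breaking changes accumulate under review on feature/v1 until it is merged and tagged 2.0.0.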

DISCLAIMER:
For the purposes of this discussion, it will be utterly unproductive to
discuss the relative merits of backwards-breaking changes. Rather let's
assume that all breaking changes that would eventually land in the next
major release are necessary and have been properly communicated well in
advance. If a given breaking change is *not* proper, well that's the kind
of thing I want to catch in gerrit reviews in the staging area!

Respectfully,
markwash


Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Salvatore Orlando
I've noticed that
https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
stores the network client in local.strong_store, which is a reference to
corolocal.local (the class, not the instance).

In Russell's example instead the code accesses local.store which is an
instance of WeakLocal (inheriting from corolocal.local).

Perhaps then Roman's findings apply to the issue being observed on the gate.

Regards,
Salvatore


On 20 November 2013 18:32, Russell Bryant  wrote:

> On 11/20/2013 12:21 PM, Alex Gaynor wrote:
> > Nope, you're totally right, corolocal.local is a class, whose instances
> > are the actual coroutine local storage.
>
> But I don't think his example is what is being used.
>
> Here is an example using the openstack.common.local module, which is
> what nova uses for this.  This produces the expected output.
>
> http://paste.openstack.org/show/53687/
>
>
> https://git.openstack.org/cgit/openstack/nova/tree/nova/openstack/common/local.py
>
> For reference, original example from OP:
> http://paste.openstack.org/show/53686/
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Robert Collins
I'm putting up a patch now.

On 21 November 2013 07:19, Salvatore Orlando  wrote:
> I've noticed that
> https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
> stores the network client in local.strong_store which is a reference to
> corolocal.local (the class, not the instance).
>
> In Russell's example instead the code accesses local.store which is an
> instance of WeakLocal (inheriting from corolocal.local).
>
> Perhaps then Roman's findings apply to the issue being observed on the gate.
>
> Regards,
> Salvatore
>
>
> On 20 November 2013 18:32, Russell Bryant  wrote:
>>
>> On 11/20/2013 12:21 PM, Alex Gaynor wrote:
>> > Nope, you're totally right, corolocal.local is a class, whose instances
>> > are the actual coroutine local storage.
>>
>> But I don't think his example is what is being used.
>>
>> Here is an example using the openstack.common.local module, which is
>> what nova uses for this.  This produces the expected output.
>>
>> http://paste.openstack.org/show/53687/
>>
>>
>> https://git.openstack.org/cgit/openstack/nova/tree/nova/openstack/common/local.py
>>
>> For reference, original example from OP:
>> http://paste.openstack.org/show/53686/
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Robert Collins
Which of these bugs would be appropriate to use for the fix to
strong_store - it affects lockutils and rpc, both of which are going
to create havoc :)

-Rob

On 21 November 2013 07:19, Salvatore Orlando  wrote:
> I've noticed that
> https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
> stores the network client in local.strong_store which is a reference to
> corolocal.local (the class, not the instance).
>
> In Russell's example instead the code accesses local.store which is an
> instance of WeakLocal (inheriting from corolocal.local).
>
> Perhaps then Roman's findings apply to the issue being observed on the gate.
>
> Regards,
> Salvatore
>
>
> On 20 November 2013 18:32, Russell Bryant  wrote:
>>
>> On 11/20/2013 12:21 PM, Alex Gaynor wrote:
>> > Nope, you're totally right, corolocal.local is a class, whose instances
>> > are the actual coroutine local storage.
>>
>> But I don't think his example is what is being used.
>>
>> Here is an example using the openstack.common.local module, which is
>> what nova uses for this.  This produces the expected output.
>>
>> http://paste.openstack.org/show/53687/
>>
>>
>> https://git.openstack.org/cgit/openstack/nova/tree/nova/openstack/common/local.py
>>
>> For reference, original example from OP:
>> http://paste.openstack.org/show/53686/
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Diagnostic] Diagnostic API: summit follow-up

2013-11-20 Thread Oleg Gelbukh
Matt,

Thank you for bringing this up. I've been following this thread and the
idea is somewhat aligned with our approach, but we'd like to take one step
further.

In this Diagnostic API, we want to collect information about system state
from sources outside of OpenStack. We should probably extract this call
from the Nova API and use it in our implementation to get
hypervisor-specific information about the virtual machines which exist on
the node. But the idea is to get a view of the system state alternative to
that provided by the OpenStack APIs.

Maybe we should reconsider our naming to avoid confusion and call this an
Instrumentation API or something like that?

--
Best regards,
Oleg Gelbukh


On Wed, Nov 20, 2013 at 6:45 PM, Matt Riedemann
wrote:

>
>
> On Wednesday, November 20, 2013 7:52:39 AM, Oleg Gelbukh wrote:
>
>> Hi, fellow stackers,
>>
>> There was a conversation during 'Enhance debugability' session at the
>> summit about Diagnostic API which allows gate to get 'state of world'
>> of OpenStack installation. 'State of world' includes hardware- and
>> operating system-level configurations of servers in cluster.
>>
>> This info would help to compare the expected effect of tests on a
>> system with its actual state, thus providing Tempest with ability to
>> see into it (whitebox tests) as one of possible use cases. Another use
>> case is to provide input for validation of OpenStack configuration files.
>>
>> We're putting together an initial version of data model of API with
>> example values in the following etherpad:
>> https://etherpad.openstack.org/p/icehouse-diagnostic-api-spec
>>
>> This version covers most hardware and system-level configurations
>> managed by OpenStack in Linux system. What is missing from there? What
>> information you'd like to see in such an API? Please, feel free to
>> share your thoughts in ML, or in the etherpad directly.
>>
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>> Mirantis Labs
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Hi Oleg,
>
> There has been some discussion over the nova virtapi's get_diagnostics
> method.  The background is in a thread from October [1].  The timing is
> pertinent since the VMware team is working on implementing that API for
> their nova virt driver [2].  The main issue is that there is no consistency
> between the nova virt drivers in how they implement the
> get_diagnostics API; each returns only information that is
> hypervisor-specific.  The API docs and the current Tempest test cover the
> libvirt driver's implementation, but wouldn't work for, say, the xen, vmware
> or powervm drivers.
>
> I think the solution right now is to namespace the keys in the dict that
> is returned from the API so a caller could at least check for that and know
> how to handle processing the result, but it's not ideal.
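The namespacing idea can be sketched roughly like this (the key names and the helper are invented for illustration; this is not the actual nova driver code):

```python
# Rough illustration of namespacing get_diagnostics() keys per virt
# driver so callers can tell which driver produced the dict.  The key
# names below are made up for the example.

def namespace_diagnostics(driver_name, raw_diagnostics):
    """Prefix every diagnostics key with the producing driver's name."""
    return {"%s:%s" % (driver_name, key): value
            for key, value in raw_diagnostics.items()}

libvirt_diag = namespace_diagnostics("libvirt", {"cpu0_time": 17300000000})
vmware_diag = namespace_diagnostics("vmware", {"consumedMemory": 2048})

# A caller can dispatch on the prefix instead of guessing the schema:
assert "libvirt:cpu0_time" in libvirt_diag
assert "vmware:consumedMemory" in vmware_diag
```

A consumer would still need per-namespace knowledge of what the keys mean, which is why this is a workaround rather than a real cross-driver schema.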
>
> Does your solution take into account the nova virtapi's get_diagnostics
> method?
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2013-
> October/016385.html
> [2] https://review.openstack.org/#/c/51404/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Robert Collins
We settled on 1251920.

https://review.openstack.org/57509 is the fix for that bug.

Note that Oslo was fixed on Jun 28th; nova hasn't synced since then.
If we were using oslo as a library we would have had the fix as soon
as oslo did a release.

These are the references to strong_store - and thus broken in nova
trunk (and if any references exist in H, in H too):

./nova/network/neutronv2/__init__.py:58:    if not hasattr(local.strong_store, 'neutron_client'):
./nova/network/neutronv2/__init__.py:59:        local.strong_store.neutron_client = _get_client(token=None)
./nova/network/neutronv2/__init__.py:60:    return local.strong_store.neutron_client
./nova/openstack/common/rpc/__init__.py:102:    if ((hasattr(local.strong_store, 'locks_held')
./nova/openstack/common/rpc/__init__.py:103:         and local.strong_store.locks_held)):
./nova/openstack/common/rpc/__init__.py:108:    {'locks': local.strong_store.locks_held,
./nova/openstack/common/local.py:47:strong_store = threading.local()
./nova/openstack/common/lockutils.py:173:    if not hasattr(local.strong_store, 'locks_held'):
./nova/openstack/common/lockutils.py:174:        local.strong_store.locks_held = []
./nova/openstack/common/lockutils.py:175:    local.strong_store.locks_held.append(name)
./nova/openstack/common/lockutils.py:217:    local.strong_store.locks_held.remove(name)
./nova/tests/network/test_neutronv2.py:1837:    local.strong_store.neutron_client = None


So we can expect lockutils to be broken, and rpc to be broken. Clearly
they are being impacted more subtly than the neutron client usage.
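To see why referencing the `corolocal.local` class rather than an instance matters, here is a minimal self-contained sketch using plain `threading.local` (the coroutine-local variant behaves analogously):

```python
import threading

class Local(threading.local):
    """Per-thread storage when used through an *instance*."""

store = Local()          # the right way: attributes live per-thread
results_instance = {}
results_class = {}
barrier = threading.Barrier(3)

def worker(name):
    store.value = name   # instance attribute: private to this thread
    Local.value = name   # class attribute: one slot shared by everyone
    barrier.wait()       # make sure all three writes have happened
    results_instance[name] = store.value
    results_class[name] = Local.value

threads = [threading.Thread(target=worker, args=(n,)) for n in "abc"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Through the instance, each thread still sees its own value...
assert results_instance == {"a": "a", "b": "b", "c": "c"}
# ...but through the class, every thread sees whichever write landed last.
assert len(set(results_class.values())) == 1
```

Storing state on the class (as the nova commit effectively did) is therefore global mutable state shared across all threads and greenthreads, which matches the subtle breakage being discussed.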

-Rob


On 21 November 2013 07:44, Robert Collins  wrote:
> Which of these bugs would be appropriate to use for the fix to
> strong_store - it affects lockutils and rpc, both of which are going
> to create havoc :)
>
> -Rob
>
> On 21 November 2013 07:19, Salvatore Orlando  wrote:
>> I've noticed that
>> https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
>> stores the network client in local.strong_store which is a reference to
>> corolocal.local (the class, not the instance).
>>
>> In Russell's example instead the code accesses local.store which is an
>> instance of WeakLocal (inheriting from corolocal.local).
>>
>> Perhaps then Roman's findings apply to the issue being observed on the gate.
>>
>> Regards,
>> Salvatore
>>
>>
>> On 20 November 2013 18:32, Russell Bryant  wrote:
>>>
>>> On 11/20/2013 12:21 PM, Alex Gaynor wrote:
>>> > Nope, you're totally right, corolocal.local is a class, whose instances
>>> > are the actual coroutine local storage.
>>>
>>> But I don't think his example is what is being used.
>>>
>>> Here is an example using the openstack.common.local module, which is
>>> what nova uses for this.  This produces the expected output.
>>>
>>> http://paste.openstack.org/show/53687/
>>>
>>>
>>> https://git.openstack.org/cgit/openstack/nova/tree/nova/openstack/common/local.py
>>>
>>> For reference, original example from OP:
>>> http://paste.openstack.org/show/53686/
>>>
>>> --
>>> Russell Bryant
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Goals for Icehouse

2013-11-20 Thread John Dickinson
During the past month, Swift contributors have gathered in Austin,
Hong Kong, and online to discuss ongoing work. There are some
major efforts underway, and I hope to lay them out and tie them
together here, so that we all know what the goals for the next six
months are.

The biggest feature set is storage policies. Storage policies will
give deployers and users incredible flexibility in how to manage their
data in the storage cluster. There are three basic parts to storage
policies.

First, given the global set of hardware available in a single Swift
cluster, choose the subset of hardware on which to store data. This
can be done by geography (e.g. US-East vs EU vs APAC vs global) or by
hardware properties (e.g. SATA vs SSDs). And obviously, the combination
can give a lot of flexibility.

Second, given the subset of hardware being used to store the data,
choose how to encode the data across that set of hardware. For
example, perhaps you have 2-replica, 3-replica, or erasure code
policies. Combining this with the hardware possibilities, you get e.g.
US-East reduced redundancy, global triple replicas, and EU erasure
coded.
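Conceptually, a policy is just the pairing of those two axes. A toy illustration (these names and data structures are invented for the example, not Swift's actual configuration):

```python
# Illustration only -- not Swift's real policy machinery.  Each policy
# pairs a hardware subset (axis one) with an encoding scheme (axis two).
hardware_subsets = {
    "us-east": ["disk1", "disk2"],
    "eu": ["disk3", "disk4"],
    "global": ["disk1", "disk2", "disk3", "disk4"],
}
encodings = {"2-replica": 2, "3-replica": 3, "erasure": "ec"}

policies = {
    "us-east-reduced-redundancy": ("us-east", "2-replica"),
    "global-triple-replica": ("global", "3-replica"),
    "eu-erasure-coded": ("eu", "erasure"),
}

subset, encoding = policies["eu-erasure-coded"]
assert hardware_subsets[subset] == ["disk3", "disk4"]
assert encodings[encoding] == "ec"
```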

Third, given the subset of hardware and how to store the data across
that hardware, control how Swift talks to a particular storage volume.
This may be optimized local file systems. This may be Gluster volumes.
This may be non-POSIX volumes like Seagate's new Kinetic drives.

We're well on our way to getting this set of work done. In Hong Kong
there was a demo of the current state of multi-ring support (for parts
one and two). We've also got a great start in refactoring the
interface between Swift and on-disk files and databases (for part
three).

But there is still a ton of work to do.

* we need to finalize the multi-ring work for the WSGI processes
* we need to ensure that large objects work with storage policies
* replication needs to be multi-ring aware
* auditing needs to be multi-ring aware
* updaters need to be multi-ring aware
* we need to write a multi-ring reconciler
* we need to merge all of this multi-ring stuff into master
* we need to finalize the DiskFile refactoring
* we need to properly refactor the DBBrokers
* we need to make sure the daemons can be extended to support different 
DiskFiles

The top-level blueprint for this is at
https://blueprints.launchpad.net/swift/+spec/storage-policies

Our target for the storage policy work is to support erasure coded
data within Swift. After the general storage policy work above is
done, we need to work on refactoring Swift's proxy server to make sure
its interaction with the storage nodes allows for differing storage
schemes.

I'm tremendously excited about the potential for storage policies. I
think it's the most significant development in Swift since the entire
project was open-sourced. Storage policies allow Swift to grow from
being the engine powering the world's largest storage clouds to a
storage platform enabling broader use cases by offering the
flexibility to very specifically match many different deployment
patterns.

Oh and if that's not enough to keep us all busy, there is other work
going on in the community, too. Some has been merged into Swift, and
some will stay in the ecosystem of tools for Swift, but they are all
important to a storage system. We've got an improved replication
platform merged into Swift, and it needs to be thoroughly tested and
polished. Once it's stable, we'll be able to build on it to really
improve Swift around MTTD and MTTR metrics. We're in the process of
refactoring metadata so that it is strongly separated into stuff that a
user can see and change and stuff the user can't.

There is also some serious effort being put forth by a few companies
(including HP and IBM) to provide a way to add powerful metadata
searching into a Swift cluster. The ZeroVM team is interested in
extending Swift to better support large-scale data processing. The
Barbican team is looking into providing good ways to offer encryption
for data stored in Swift. Others are looking into how to grow
clusters (changing the partition power) and extend clusters
(transparently federating multiple Swift clusters).

Not all of this is going to be done by the OpenStack Icehouse release.
We cut stable releases of Swift fairly often, and these features will
roll out in those releases as they are done. My goal for Icehouse is
to see the storage policy work done and ready for production.

--John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Continue discussing multi-region orchestration

2013-11-20 Thread Mike Spreitzer
Zane Bitter  wrote on 11/15/2013 05:59:06 PM:
> On 15/11/13 22:17, Keith Bray wrote:
> > The way I view 2 vs. 4 is that 2 is more complicated and you don't 
gain
> > any benefit of availability.  If, in 2, your global heat endpoint is 
down,
> > you can't update the whole stack.  You have to work around it by 
talking
> > to Heat (or the individual service endpoints) in the region that is 
still
> > alive.
> 
> Not really, you still have the templates for all of the nested stacks 
> that are in the remaining region(s). You can manipulate those directly 
> and not risk getting things out of sync when the Heat master comes back.
> 
> If you really want a global stack, you can recreate it in a different 
> region and adopt the remaining parts.

If there were no dynamic data flowing between regions then the story would 
be this simple.  But we have not forbidden the output from one regional 
template from being used as input to another.  Maybe we should?  That would 
rule out some situations where a user can write something that looks valid 
but will simply not work (e.g., communicating cross-region via a wait 
condition).  That would only leave the problem of how the client keeps 
track of the outputs.

OTOH, the more we restrict what can be done, the less useful this really 
is.

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-20 Thread Rochelle.Grober
For external connectivity beyond the network gateway, rather than pinging 
google.com, configuring the VM with an external DNS server and pinging it by 
IP address would be a good initial test of external connectivity.

--Rocky

From: Tomoe Sugihara [mailto:to...@midokura.com]
Sent: Tuesday, November 19, 2013 7:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Rami Vaknin
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for 
external connectivity

Hi Salvatore, et al,

On Mon, Nov 18, 2013 at 9:19 PM, Salvatore Orlando 
mailto:sorla...@nicira.com>> wrote:
Hi Yair,

I had in mind of doing something similar. I also registered a tempest blueprint 
for it.
Basically I think we can assume test machines have access to the Internet, but 
the devstack deployment might not be able to route packets from VMs to the 
Internet.

Being able to ping the external network gateway, which by default is 
172.24.4.225 is a valuable connectivity test IMHO (and that's your #1 item)
For items #2 and #3 I'm not sure of your intentions; precisely so far I'm not 
sure if we're adding any coverage to Neutron. I assume you want to check 
servers such as www.google.com are reachable, but the 
routing from the external_gateway_ip to the final destination is beyond's 
Neutron control. DNS resolution might be interesting, but however I think 
resolution of external names is too beyond Neutron's control.

Two more things to consider on external network connectivity tests:
1) SNAT can be enabled or not. In this case we need a test that can tell us the 
SRC IP of the host connecting to the public external gateway, because I think 
that if SNAT kicks in, it should be an IP on the ext network, otherwise it 
should be an IP on the internal network. In this case we can use netcat to this 
aim, emulating a web server and use verbose output to print the source IP
2) When the connection happens from a port associated with a floating IP it is 
important that the SNAT happens with the floating IP address, and not with the 
default SNAT address. This is actually a test which would have avoided us a 
regression in the havana release cycle.
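A minimal stand-in for the netcat trick -- a listener that records the source address of whoever connects -- could look like the sketch below. The addresses are local and purely illustrative; in the real test the assertion would compare against the floating IP or the default SNAT IP.

```python
import socket
import threading

seen_sources = []

# Listener playing the role of the "web server" behind the external net.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # any free port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, addr = server.accept()
    seen_sources.append(addr[0])   # the source IP as the server saw it
    conn.close()

t = threading.Thread(target=accept_one)
t.start()
client = socket.create_connection(("127.0.0.1", port))
t.join()
client.close()
server.close()

# In the real test this would be the floating IP (or the default SNAT IP).
assert seen_sources == ["127.0.0.1"]
```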


As far as I know from the code (I'm new to tempest and might be missing 
something), test_network_basic_ops launches a single VM with a floating IP 
associated, and the test is performed by accessing the guest VM from the 
tempest host using the floating IP.

So, I have some questions:

- How can we test the internal network connectivity (when the tenant networks 
are not accessible from the host, which I believe is the case for most of the 
plugins)?

- For external connectivity, how can we test connectivity without floating ip?
  - should we have another VM and control it from the access VM, e.g. by ssh 
remote command? or
  - spawn specific VMs which send traffic upon boot (e.g. metadata server + 
user-data with a cloud-init-installed VM, etc.) to the public network, and 
assert on the traffic on the tempest host side?

Thanks,
Tomoe


Regards,
Salvatore

On 18 November 2013 13:13, Giulio Fidente 
mailto:gfide...@redhat.com>> wrote:
On 11/18/2013 11:41 AM, Yair Fried wrote:
I'm editing tempest/scenario/test_network_basic_ops.py for external
connectivity as the TODO listed in its docstring.

The test cases ping an external IP and URL to test
connectivity and DNS respectively.
Since the default deployment (devstack gate) doesn't have external
connectivity, I was thinking of one or all of the following

I think it's a nice thing to have!
 2. add fields in tempest.conf for
  * external connectivity = False/True
  * external ip to test against (ie 8.8.8.8)

I like this option.

One can easily disable it entirely OR pick a "more relevant" ip address if 
needed. Seems to me it would give the greatest flexibility.
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Search Project - summit follow up

2013-11-20 Thread Dmitri Zimin(e) | StackStorm
Thanks Terry for highlighting this:

Yes, tenant isolation is a must. It's not reflected in the prototype - it
queries Solr directly; but the proper implementation will go through the
query API service, where ACLs will be applied.

UX folks are welcome to comment on expected queries.

I think the key benefit of a cross-resource index over querying DBs is that
it saves the clients from implementing complex queries case by case,
leaving flexibility to the user.

-- Dmitri.




On Wed, Nov 20, 2013 at 2:27 AM, Thierry Carrez wrote:

> Dmitri Zimin(e) | StackStorm wrote:
> > Hi Stackers,
> >
> > The project Search is a service providing fast full-text search for
> > resources across OpenStack services.
> > [...]
>
> At first glance this looks slightly scary from a security / tenant
> isolation perspective. Most search results would be extremely
> user-specific (and leaking data from one user to another would be
> catastrophic), so the benefits of indexing (vs. querying DB) would be
> very limited ?
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-20 Thread Ben Nemec
 

I don't recall the full discussion from before, but I know one of the
big problems with doing that is it actually makes it more difficult to
review these syncs. Instead of having 1000 lines of copied changes to
review, you have a one-line commit hash to look at and you then have to
try to figure out what changed since the previous hash, whether that's
one line or 1000. 

I think there were some other issues too, but I don't remember what they
were so maybe someone else can chime in. 

-Ben 

On 2013-11-20 06:04, Roman Bogorodskiy wrote: 

> I know it was brought up on the list a number of times, but...
> 
> If we're talking about storing commit ids for each module and writing
> some shell scripts for that, isn't it a chance to reconsider using git 
> submodules?
> 
> On Wed, Nov 20, 2013 at 12:37 PM, Elena Ezhova  wrote:
> 
> 20.11.2013, 06:18, "John Griffith" : 
> 
> On Mon, Nov 18, 2013 at 3:53 PM, Mark McLoughlin  wrote:
> 
>> On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
>>> Random OSLO updates with no list of what changed, what got fixed etc
>>> are unlikely to get review attention - doing such a review is
>>> extremely difficult. I was -2ing them and asking for more info, but
>>> they keep popping up. I'm really not sure what the best way of
>>> updating from OSLO is, but this isn't it.
>> Best practice is to include a list of changes being synced, for example:
>>
>> https://review.openstack.org/54660 [1]
>>
>> Every so often, we throw around ideas for automating the generation of
>> this changes list - e.g. cinder would have the oslo-incubator commit ID
>> for each module stored in a file in git to tell us when it was last
>> synced.
>>
>> Mark.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2]
> 
> Been away on vacation so I'm afraid I'm a bit late on this... but;
> 
> I think the point Duncan is bringing up here is that there are some
> VERY large and significant patches coming from OSLO pulls. The DB
> patch in particular being over 1K lines of code to a critical portion
> of the code is a bit unnerving to try and do a review on. I realize
> that there's a level of trust that goes with the work that's done in
> OSLO and synchronizing those changes across the projects, but I think
> a few key concerns here are:
> 
> 1. Doing huge pulls from OSLO like the DB patch here are nearly
> impossible to thoroughly review and test. Over time we learn a lot
> about real usage scenarios and the database and tweak things as we go,
> so seeing a patch set like this show up is always a bit unnerving and
> frankly nobody is overly excited to review it.
> 
> 2. Given a certain level of *trust* for the work that folks do on the
> OSLO side in submitting these patches and new additions, I think some
> of the responsibility on the review of the code falls on the OSLO
> team. That being said there is still the issue of how these changes
> will impact projects *other* than Nova which I think is sometimes
> neglected. There have been a number of OSLO synchs pushed to Cinder
> that fail gating jobs, some get fixed, some get abandoned, but in
> either case it shows that there wasn't any testing done with projects
> other than Nova (PLEASE note, I'm not referring to this particular
> round of patches or calling any patch set out, just stating a
> historical fact).
> 
> 3. We need better documentation in commit messages explaining why the
> changes are necessary and what they do for us. I'm sorry, but in my
> opinion the answer "it's the latest in OSLO and Nova already has it"
> is not enough. The patches mentioned in
> this thread in my opinion met the minimum requirements because they at
> least reference the OSLO commit which is great. In addition I'd like
> to see something to address any discovered issues or testing done with
> the specific projects these changes are being synced to.
> 
> I'm in no way saying I don't want Cinder to play nice with the common
> code or to get in line with the way other projects do things but I am
> saying that I think we have a ways to go in terms of better
> communication here and in terms of OSLO code actually keeping in mind
> the entire OpenStack eco-system as opposed to just changes that were
> needed/updated in Nova. Cinder in particular went through some pretty
> massive DB re-factoring and changes during Havana and there was a lot
> of really good work there but it didn't come without a cost and the
> benefits were examined and weighed pretty heavily. I also think that
> some times the indirection introduced by adding some of the
> openstack.common code is unnecessary and in some cases makes things
> more difficult than they should be.
> 
> I'm just not sure that we always do a very good ROI investigation or
> risk assessment on changes, and that opinion applies to ALL changes in
> OpenStack project

Re: [openstack-dev] [Glance] Interested in a mid-Icehouse Glance mini-summit?

2013-11-20 Thread Mark Washenberger
On Thu, Nov 14, 2013 at 1:35 PM, Mark Washenberger <
mark.washenber...@markwash.net> wrote:

> Hi folks,
>
> There's been some talk about copying Nova for a mid-cycle meetup to
> reorganize around our development priorities in Glance.
>
> So far, the plan is still tentative, but we're focusing on late January
> and the Washington, DC area. If you're interested and think you may be able
> to attend, please fill out this survey.
>
>
> https://docs.google.com/forms/d/11DjkNAzVAdtMCPrsLiyjA7ck33jnexmKqlqaCl5olO8/viewform
>
> <3,
> markwash
>


As a reminder, please fill out the form above if you are interested in a
Glance mid-cycle meetup. Depending on interest, this meetup could serve a
lot of purposes. For one, it will be an opportunity for Glance developers
to meet face to face and hammer out the details for finishing the Icehouse
release. For another, it can be a good opportunity to discover and plan
longer term features for the product. Finally, we may also have the chance
for developers to spend time together hacking or even learn about a few new
techniques that may be relevant to future development. But it need not be
restricted to current Glance developers--indeed some representation from
other projects would be appreciated to help us improve how we serve the
overall suite of projects.

Thanks for your interest!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Propose "project story wiki" idea

2013-11-20 Thread Rochelle.Grober
+1

I'd also love to see a tag or keyword associated with items that (affects 
{project x,y,z}) or (possibly affects {project x,y,z}) to highlight areas in 
need of collaboration between teams.  There is so much going on cross-project 
these days that if the project team thinks a change has side effects or 
interaction changes beyond the internal project, they should raise a flag.

--Rocky

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Tuesday, November 19, 2013 9:33 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Propose "project story wiki" idea

Hi stackers,


Currently what I see is a growing number of interesting projects that at least I 
would like to track. But reading all the mailing lists and reviewing all the 
patches in every interesting project to get a high-level understanding of what is 
happening in a project now is quite hard, or even an impossible task (at least 
for me). Especially after a 2-week vacation =)


The idea of this proposal is that every OpenStack project should have a "story" 
wiki page. It means publishing one short message every week that contains the most 
interesting updates from the last week, and a high-level roadmap for the coming week. 
So after reading this for 10-15 minutes you can see what has changed in the project, 
and get a better understanding of the project's high-level roadmap.

E.g. we start doing this in Rally: https://wiki.openstack.org/wiki/Rally/Updates


I think that the best way to organize this is to have a person (or a few people) 
who will track all changes in the project and prepare such updates each week.



Best regards,
Boris Pavlovic
--
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-11-20 Thread Carl Baldwin
Hi, sorry for the delay in response.  I'm glad to look at it.

Can you be more specific about the error?  Maybe paste the error you're
seeing on paste.openstack.org?  I don't find any reference to "2006".
Maybe I'm missing something.

Also, is the patch that you applied the most recent?  With the final
version of the patch it was no longer necessary for me to set
pool_recycle or idle_interval.

Thanks,
Carl

On Tue, Nov 19, 2013 at 7:14 PM, Zhongyue Luo  wrote:
> Carl, Yingjun,
>
> I'm still getting the 2006 error even by configuring idle_interval to 1.
>
> I applied the patch to the RDO havana dist on centos 6.4.
>
> Are there any other options I should be considering such as min/max pool
> size or use_tpool?
>
> Thanks.
>
>
>
> On Sat, Sep 7, 2013 at 3:33 AM, Baldwin, Carl (HPCS Neutron)
>  wrote:
>>
>> This pool_recycle parameter is already configurable using the idle_timeout
>> configuration variable in neutron.conf.  I tested this with a value of 1
>> as suggested and it did get rid of the mysql server gone away messages.
>>
>> This is a great clue but I think I would like a long-term solution that
>> allows the end-user to still configure this like they were before.
>>
>> I'm currently thinking along the lines of calling something like
>> pool.dispose() in each child immediately after it is spawned.  I think
>> this should invalidate all of the existing connections so that when a
>> connection is checked out of the pool a new one will be created fresh.
>>
>> Thoughts?  I'll be testing.  Hopefully, I'll have a fixed patch up soon.
>>
>> Cheers,
>> Carl
>>
>> From:  Yingjun Li 
>> Reply-To:  OpenStack Development Mailing List
>> 
>> Date:  Thursday, September 5, 2013 8:28 PM
>> To:  OpenStack Development Mailing List
>> 
>> Subject:  Re: [openstack-dev] [Neutron] The three API server multi-worker
>> process patches.
>>
>>
>> +1 for Carl's patch, and i have abandoned my patch..
>>
>> About the `MySQL server gone away` problem, I fixed it by set
>> 'pool_recycle' to 1 in db/api.py.
>>
>> On Friday, September 6, 2013, Nachi Ueno wrote:
>>
>> Hi Folks
>>
>> We choose https://review.openstack.org/#/c/37131/ <-- This patch to go on.
>> We are also discussing in this patch.
>>
>> Best
>> Nachi
>>
>>
>>
>> 2013/9/5 Baldwin, Carl (HPCS Neutron) :
>> > Brian,
>> >
>> > As far as I know, no consensus was reached.
>> >
>> > A problem was discovered that happens when spawning multiple processes.
>> > The mysql connection seems to "go away" after between 10-60 seconds in
>> > my
>> > testing causing a seemingly random API call to fail.  After that, it is
>> > okay.  This must be due to some interaction between forking the process
>> > and the mysql connection pool.  This needs to be solved but I haven't
>> > had
>> > the time to look in to it this week.
>> >
>> > I'm not sure if the other proposal suffers from this problem.
>> >
>> > Carl
>> >
>> > On 9/4/13 3:34 PM, "Brian Cline"  wrote:
>> >
>> >>Was any consensus on this ever reached? It appears both reviews are
>> >> still
>>open. I'm partial to review 37131 as it attacks the problem more
>>concisely and, as mentioned, combines the efforts of the two more
>> >>effective patches. I would echo Carl's sentiments that it's an easy
>> >>review minus the few minor behaviors discussed on the review thread
>> >>today.
>> >>
>> >>We feel very strongly about these making it into Havana -- being
>> >> confined
>> >>to a single neutron-server instance per cluster or region is a huge
>> >>bottleneck--essentially the only controller process with massive CPU
>> >>churn in environments with constant instance churn, or sudden large
>> >>batches of new instance requests.
>> >>
>> >>In Grizzly, this behavior caused addresses not to be issued to some
>> >>instances during boot, due to quantum-server thinking the DHCP agents
>> >>timed out and were no longer available, when in reality they were just
>> >>backlogged (waiting on quantum-server, it seemed).
>> >>
>> >>Is it realistically looking like this patch will be cut for h3?
>> >>
>> >>--
>> >>Brian Cline
>> >>Software Engineer III, Product Innovation
>> >>
>> >>SoftLayer, an IBM Company
>> >>4849 Alpha Rd, Dallas, TX 75244
>> >>214.782.7876 direct  |  bcl...@softlayer.com
>> >>
>> >>
>> >>-Original Message-
>> >>From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
>> >>Sent: Wednesday, August 28, 2013 3:04 PM
>> >>To: Mark McClain
>> >>Cc: OpenStack Development Mailing List
>> >>Subject: [openstack-dev] [Neutron] The three API server multi-worker
>> >>process patches.
>> >>
>> >>All,
>> >>
>> >>We've known for a while now that some duplication of work happened with
>> >>respect to adding multiple worker processes to the neutron-server.
>> >> There
>> >>were a few mistakes made which led to three patches being done
>> >>independently of each other.
>> >>
>> >>Can we settle on one and accept it?
>> >>
>> >>I have changed my patch at the suggestion of one of the other 2 authors,
>> >>Peter Feiner, in attempt to find common 

Re: [openstack-dev] [Ironic] A question about getting IPMI data

2013-11-20 Thread Jarrod B Johnson

pyghmi is growing SDR, SEL, FRU, and other stuff.

I can add in-band so long as there is always Python in there; however,
out-of-band should not be too bad.  In the case of a baremetal guest,
for example, gathering such data in-band would require more cooperation from
the tenant image, whereas out-of-band should always be workable on
IPMI-equipped systems.

The biggest ding on out-of-band is that ipmitool is one-to-one and also has
to retrieve the SDR every time it wants to read sensors.  pyghmi will
support an evolution of xCAT's SDR caching scheme to avoid that hit (at
least if the sensor objects are reused).  We have had success completing a
full sensor read of 4,000 systems from a single system in about 15
seconds out-of-band in xCAT, and pyghmi optimizes away some overhead that
xCAT did not (though that overhead isn't as significant for something as
verbose as sensor reading).
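The SDR-caching win can be sketched like this; `FakeBMC` stands in for a real BMC client object (its method names are invented for the example, not pyghmi's actual API):

```python
class FakeBMC:
    """Stand-in for a BMC client; the method names are invented."""

    def __init__(self):
        self.sdr_fetches = 0

    def fetch_sdr(self):
        # In real IPMI this is the expensive Sensor Data Record walk.
        self.sdr_fetches += 1
        return ["CPU Temp", "Ambient Temp"]

    def read_raw(self, sensor_name):
        return 42  # pretend sensor reading

class CachingSensorReader:
    """Fetch the SDR once, then reuse it for every subsequent poll."""

    def __init__(self, bmc):
        self.bmc = bmc
        self._sdr = None

    def read_sensors(self):
        if self._sdr is None:          # pay the SDR cost only once
            self._sdr = self.bmc.fetch_sdr()
        return {name: self.bmc.read_raw(name) for name in self._sdr}

bmc = FakeBMC()
reader = CachingSensorReader(bmc)
reader.read_sensors()
reader.read_sensors()
assert bmc.sdr_fetches == 1  # the SDR walk happened only once
```

Keeping one long-lived reader object per BMC is what makes wide, frequent polling cheap; rebuilding it per poll (as a one-shot ipmitool invocation effectively does) pays the SDR cost every time.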



From:   "Gao, Fengqian" 
To: "openstack-dev@lists.openstack.org"

Cc: Devananda van der Veen , Jarrod B
Johnson/Raleigh/IBM@IBMUS, "Lu, Lianhao"
, "Wang, Shane" 
Date:   11/19/2013 01:07 AM
Subject:[Ironic] A question about getting IPMI data



Hi, all,
As   the   summit   session
https://etherpad.openstack.org/p/icehouse-summit-ceilometer-hardware-sensors
 said,  ironic  will  expose  IPMI  data to ceilometer, but I have a couple
questions here.
   1.What kind of IPMI data will be collected? I assume that power,
   temperature or other sensor data is needed, right?

   2.   How do we get all these data?

IIUC, Ironic does not do much with IPMI for now; it only uses ipmitool and
the pyghmi module.
Using ipmitool to execute commands and parse the output is easy to
understand, but it does not seem like a good approach to me. pyghmi only
supports out-of-band (net payload) access now, so I am wondering if we can
extend pyghmi to provide more interfaces.
Specifically, I think pyghmi could provide an in-band IPMI interface
(IPMB, the system interface, etc.) and allow IPMI data to be gathered from
within the OS.
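To illustrate why parsing ipmitool output is workable but fragile, here is a minimal sketch of a parser for `ipmitool sensor`-style output. The sample lines are invented, and real output varies by hardware, which is exactly the fragility being discussed:

```python
def parse_ipmitool_sensors(output):
    """Parse `ipmitool sensor`-style output into a list of dicts.

    Each sensor line looks roughly like (fields separated by '|'):
      Ambient Temp | 22.000 | degrees C | ok | na | ...
    Non-numeric readings (e.g. 'na') are skipped.
    """
    readings = []
    for line in output.splitlines():
        fields = [f.strip() for f in line.split('|')]
        if len(fields) < 4:
            continue  # not a sensor line
        name, value, unit, status = fields[:4]
        try:
            value = float(value)
        except ValueError:
            continue  # unreadable sensor ('na', 'disabled', ...)
        readings.append({'name': name, 'value': value,
                         'unit': unit, 'status': status})
    return readings


# Hypothetical sample output, for demonstration only.
SAMPLE = """\
Ambient Temp     | 22.000     | degrees C  | ok    | na | na | 38.000 | 41.000
Fan 1A Tach      | 3600.000   | RPM        | ok    | na | 600.000 | na | na
Planar 3.3V      | na         | Volts      | na    | na | na | na | na
"""

for r in parse_ipmitool_sensors(SAMPLE):
    print(r['name'], r['value'], r['unit'])
```

Every vendor quirk in column layout or units becomes a parser special case, which is the argument for a native interface like pyghmi instead.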

Does all this make sense to you?

Thanks for your response.

Best Regards,

-fengqian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Propose "project story wiki" idea

2013-11-20 Thread Jay Pipes

On 11/20/2013 12:33 AM, Boris Pavlovic wrote:

Hi stackers,


Currently I see a growing number of interesting projects that I would
like to track. But reading all the mailing lists and reviewing all the
patches in every interesting project, just to get a high-level
understanding of what is happening in a project right now, is quite hard
or even impossible (at least for me). Especially after a two-week
vacation =)


The idea of this proposal is that every OpenStack project should have a
"story" wiki page: publish one short message every week containing the
most interesting updates from the last week and a high-level road map for
the coming week. Reading it for 10-15 minutes, you can see what changed
in the project and get a better understanding of its high-level road map.

E.g. we start doing this in Rally:
https://wiki.openstack.org/wiki/Rally/Updates


I think the best way to organize this is to have a person (or a few
people) track all changes in the project and prepare such updates each
week.


+1 Great idea, Boris. I also like the follow-up ideas from Rocky about 
tagging for cross-project referencing.


Best,
-jay




Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-20 Thread Zane Bitter

On 20/11/13 16:07, Christopher Armstrong wrote:

On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter  wrote:

On 19/11/13 19:14, Christopher Armstrong wrote:



[snip]




There are a number of advantages to including the whole template,
rather than a resource snippet:
  - Templates are versioned!
  - Templates accept parameters
  - Templates can provide outputs - we'll need these when we go to
do notifications (e.g. to load balancers).

The obvious downside is there's a lot of fiddly stuff to include in
the template (hooking up the parameters and outputs), but this is
almost entirely mitigated by the fact that the user can get a
template, ready built with the server hooked up, from the API by
hitting /resource_types/OS::Nova::Server/template and just edit in
the Volume and VolumeAttachment. (For a different example, they
could of course begin with a different resource type - the launch
config accepts any keys for parameters.) To the extent that this
encourages people to write templates where the outputs are actually
supplied, it will help reduce the number of people complaining their
load balancers aren't forwarding any traffic because they didn't
surface the IP addresses.



My immediate reaction is to counter-propose just specifying an entire
template instead of parameters and template separately, but I think the


As an API, I think that would be fine, though inconsistent between the 
default (no template provided) and non-default cases. When it comes to 
implementing Heat resources to represent those, however, it would make 
the templates much less composable. If you wanted to reference anything 
from the surrounding template (including parameters), you would have to 
define the template inline and resolve references there. Whereas if you 
can pass parameters, then you only need to include the template from a 
separate file, or to reference a URL.



crux will be this point you mentioned:

  - Templates can provide outputs - we'll need these when we go to do
notifications (e.g. to load balancers).

Can you explain this in a bit more depth? It seems like whatever it is
may be the real deciding factor that means that your proposal can do
something that a "resources" or a "template" parameter can't do.  I


What I'm proposing _is_ a "template" parameter... I don't see any 
difference. A "resources" parameter couldn't do this though, because the 
resources section obviously doesn't contain outputs.


In any event, when we notify a Load Balancer, or _any_ type of thing 
that needs a notification, we need to pass it some data. At the moment, 
for load balancers, we pass the IDs of the servers (I originally thought 
we passed IP addresses directly, hence possibly misleading comments 
earlier). But our scaling unit is a template which may contain multiple 
servers, or no servers. And the thing that gets notified may not even be 
a load balancer. So there is no way to infer what the right data to 
send is; we will need the user to tell us. The outputs section of the 
template seems like a good mechanism for that.
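By way of illustration, a scaling-unit template carrying its own parameters and outputs might look like the following, expressed as a Python dict equivalent to HOT YAML. The resource layout and the `member_ip` output name are invented for this sketch, not taken from the proposal:

```python
# A hypothetical scaling-unit template. The scaling group would
# instantiate one copy per unit, passing in parameters and reading back
# the outputs to notify a load balancer (or any other service).
scaling_unit = {
    'heat_template_version': '2013-05-23',
    'parameters': {
        'flavor': {'type': 'string', 'default': 'm1.small'},
    },
    'resources': {
        'server': {
            'type': 'OS::Nova::Server',
            'properties': {'flavor': {'get_param': 'flavor'}},
        },
        'volume': {
            'type': 'OS::Cinder::Volume',
            'properties': {'size': 10},
        },
        'attachment': {
            'type': 'OS::Cinder::VolumeAttachment',
            'properties': {
                'instance_uuid': {'get_resource': 'server'},
                'volume_id': {'get_resource': 'volume'},
            },
        },
    },
    # The outputs are what a notified service would receive; since the
    # unit may contain several servers or none, only the user can say
    # what data matters, so the user surfaces it here.
    'outputs': {
        'member_ip': {
            'value': {'get_attr': ['server', 'first_address']},
        },
    },
}

# The notification payload for each unit is just its resolved outputs:
print(sorted(scaling_unit['outputs']))
```

Because the unit is a whole template rather than a resource snippet, the outputs section gives the group a uniform place to look regardless of what the template contains.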



thought we had a workable solution with the "LoadBalancerMember" idea,
which you would use in a way somewhat similar to CinderVolumeAttachment
in the above example, to hook servers up to load balancers.


I haven't seen this proposal at all. Do you have a link? How does it 
handle the problem of wanting to notify an arbitrary service (i.e. not 
necessarily a load balancer)?


cheers,
Zane.



Re: [openstack-dev] [infra] Meeting Tuesday November 19th at 19:00 UTC

2013-11-20 Thread Elizabeth Krumbach Joseph
On Mon, Nov 18, 2013 at 11:03 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday November 19th, at 19:00 UTC in
> #openstack-meeting

Meeting notes and minutes from our meeting yesterday available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-11-19-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-11-19-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-11-19-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-20 Thread Mike Wilson
I agree heartily with the availability and resiliency aspect.  For me, that
is the biggest reason to consider a NOSQL backend. The other potential
performance benefits are attractive to me also.
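For what it's worth, the "client-maintained secondary indices" idea from the quoted thread below can be sketched over any plain key-value store. This is a toy, dict-backed illustration with invented key names, not a proposal for Nova's actual schema:

```python
class KVStore:
    """Stand-in for a NoSQL key-value store (e.g. a document DB)."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)


def put_instance(kv, instance):
    """Write an instance record plus a client-maintained secondary
    index, so lookups by host don't require scanning every record."""
    kv.put('instance/%s' % instance['uuid'], instance)
    # Denormalized index: host -> list of instance uuids. The client,
    # not the store, keeps this consistent on every write.
    idx_key = 'index/host/%s' % instance['host']
    uuids = kv.get(idx_key, [])
    if instance['uuid'] not in uuids:
        uuids.append(instance['uuid'])
    kv.put(idx_key, uuids)


def instances_on_host(kv, host):
    """Query via the secondary index instead of a SQL WHERE clause."""
    return [kv.get('instance/%s' % u)
            for u in kv.get('index/host/%s' % host, [])]


kv = KVStore()
put_instance(kv, {'uuid': 'a1', 'host': 'node1', 'state': 'active'})
put_instance(kv, {'uuid': 'b2', 'host': 'node1', 'state': 'building'})
put_instance(kv, {'uuid': 'c3', 'host': 'node2', 'state': 'active'})
print([i['uuid'] for i in instances_on_host(kv, 'node1')])
```

The trade-off is visible even in the toy: reads become cheap and local, while the client takes on the consistency burden that SQL would otherwise carry.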

-Mike


On Wed, Nov 20, 2013 at 9:06 AM, Soren Hansen  wrote:

> 2013/11/18 Mike Spreitzer :
> > There were some concerns expressed at the summit about scheduler
> > scalability in Nova, and a little recollection of Boris' proposal to
> > keep the needed state in memory.
>
>
> > I also heard one guy say that he thinks Nova does not really need a
> > general SQL database, that a NOSQL database with a bit of
> > denormalization and/or client-maintained secondary indices could
> > suffice.
>
> I may have said something along those lines. Just to clarify -- since
> you started this post by talking about scheduler scalability -- the main
> motivation for using a non-SQL backend isn't scheduler scalability, it's
> availability and resilience. I just don't accept the failure modes that
> MySQL (and derivatives such as Galera) impose.
>
> > Has that sort of thing been considered before?
>
> It's been talked about on and off since... well, probably since we
> started this project.
>
> > What is the community's level of interest in exploring that?
>
> The session on adding a backend using a non-SQL datastore was pretty
> well attended.
>
>
> --
> Soren Hansen | http://linux2go.dk/
> Ubuntu Developer | http://www.ubuntu.com/
> OpenStack Developer  | http://www.openstack.org/
>


[openstack-dev] The recent gate performance and how it affects you

2013-11-20 Thread Clark Boylan
Joe Gordon has been doing great work tracking test failures and how
often they affect us. Post-Havana, the failure rate has increased
dramatically, negatively affecting the gate and forcing it into a near
worst-case scenario: changes are being tested in parallel, but the head
of the queue is more often than not running into a failed job, forcing
all changes behind it to be retested, and so on.
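To see why a high failure rate hurts superlinearly, here is a toy model of speculative gating. It is not Zuul's actual algorithm, and the function and numbers are invented; it only captures the "a failure restarts everything behind it" dynamic:

```python
import random

def gate_runs(depth, fail_prob, seed=42):
    """Toy model of a speculative gate: all queued changes are tested
    in parallel assuming everything ahead of them merges. When a change
    fails, changes ahead of it merge, the failing change leaves the
    queue, and everything behind it must be retested. Returns the total
    job runs consumed to drain `depth` changes (the minimum is `depth`).
    """
    rng = random.Random(seed)
    runs = 0
    remaining = depth
    while remaining:
        runs += remaining  # one speculative window over the whole queue
        fail_at = next((i for i in range(remaining)
                        if rng.random() < fail_prob), None)
        if fail_at is None:
            remaining = 0                # whole window merged
        else:
            remaining -= fail_at + 1     # retest everything behind it
    return runs


# With no failures a queue of 130 costs 130 runs; watch the cost climb
# as the per-change failure probability rises.
for p in (0.0, 0.1, 0.25):
    print(p, gate_runs(130, p))
```

Even in this crude model, the retest cost grows much faster than the failure rate, which is why fixing the underlying bugs beats re-approving changes into a struggling queue.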

This led to a gate queue 130 deep with the head of the queue 18 hours
behind its approval. We have identified fixes for some of the worst
current bugs and, in order to get them in, have restarted Zuul
(effectively cancelling the gate queue) and queued these changes at the
front of the queue. Once these changes are in and we are happy with the
bug-fixing results, we will requeue the changes that were in the queue
when it was cancelled.

How do we avoid this in the future? Step one is for reviewers who are
approving changes (or reverifying them) to keep an eye on the gate
queue. If it is struggling, adding more changes to that queue probably
won't help. Instead we should focus on identifying the bugs, submitting
changes to elastic-recheck to track them, and working towards fixing
them. Everyone is affected by persistent gate failures; we need to work
together to fix them.

Thank you for your patience,

Clark



[openstack-dev] What's Up Doc? Nov 20 2013

2013-11-20 Thread Anne Gentle
1. In review and merged this past week:

Wow, I see about 30 fixes backported to the Havana install guides in the
last week; great work, Andreas! I appreciate the focus on the install
guides since the Summit. Thank you.

2. High priority doc work:
I'm very excited to have Lana offer to do a proposal for config-reference
and cloud admin guide information architecture. I think that the
restructure and added guides show great promise from the Havana release and
I hope Icehouse is a polishing release.

3. Doc work going on that I know of:

We're going to hold a doc bug day by the end of the year; please choose
days that work well for you at http://doodle.com/vcycq4mknv6wdrnx. You
can also triage and fix bugs before the designated day, of course. There
are still quite a few install doc bugs, and some are duplicates. Do take
a moment even before the doc bug day to triage or confirm a few doc bugs.

We are moving forward with a large move of content during the Icehouse
release -- all -api repos will be removed after the content finds a new
home under /doc/source. The goal is to enable devs to write API docs
describing changes to the API. Then, in openstack/api-site, we will
house all the end-user API docs. Please help your neighborhood doc
patcher with reviews on those patches; any questions about this, please
do ask. You can also see the blueprint at
https://wiki.openstack.org/wiki/Blueprint-os-api-docs. Diane is working
on a document to describe precisely "what goes where" for the API docs
we all know and love.
There are a lot of incoming questions asking "where did the neutron
admin docs go?" along with references to the /trunk/ version of the
Networking Admin Guide. This week that version will be redirected and
removed; please patch the OpenStack Cloud Administrator Guide with
networking information.

4. New incoming doc requests:

Our team would like to finalize on Project Doc Leads as described at
https://wiki.openstack.org/wiki/Documentation/ProjectDocLeads. If you
haven't already put yourself forward, please update this page with your
interest.

For some reason I had to "fix" the Lulu link for the OpenStack Security
Guide at http://docs.openstack.org/sec/ -- thanks to Bryan Payne for
reporting. It's all fixed up now.

5. Doc tools updates:

David Cramer tells me he'll cut one more release of the clouddocs-tools
maven plugin, and then it will be put into the Gerrit workflow in the
OpenStack organization. Thanks to David and Zaro for the hard work here
over many months. I'd encourage Stackers to see how they can get
involved in collaborating on our doc build tool once it's in our
OpenStack systems.

Some reps from infra and I will meet with O'Reilly technical folks on
Monday to finalize the workflow for the operations-guide repository.
Once we have it fleshed out, I'll communicate it to the openstack-docs
list.

6. Other doc news:

I wrote up a post about "Who Wrote Havana Docs?" despite my aversion to
stats at
http://justwriteclick.com/2013/11/18/who-wrote-openstack-havana-docs/ which
definitely shows how far we have come. With some Project Doc Leads and
blueprints in place I'm certain we'll keep moving forward in Icehouse for
more doc contributors and doc improvements.

We're now holding meetings for both sides of the earth. Earth-dwellers,
please attend one of these Tuesday meetings as you prefer according to your
favorite sleep schedule.
1st Tuesday, 03:00:00 UTC
2nd Tuesday, 14:00:00 UTC
3rd Tuesday, 03:00:00 UTC
4th Tuesday, 14:00:00 UTC

We'll follow the agenda posted at
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting.

