Re: [openstack-dev] [Ceilometer][SUSE] SUSE OpenStack Havana distribution is different with upstream

2014-02-27 Thread Vincent Untz
Hi,

On Thursday, 27 February 2014 at 09:12 +0800, ZhiQiang Fan wrote:
> thanks, Vincent
> 
> I already noticed that the critical bug is caused by a wrong backport:
> in 0001-enable-sql-metadata-query.patch, line
> 124, session.flush() should be placed at line 110
> 
> Another question: how can I get the original community version?
> I know devstack is a good choice, but we want to install it via the
> package management system
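For anyone hitting the same class of bug, here is a minimal, stdlib-only sketch (not the actual patch code) of why flush/insert ordering matters under enforced foreign keys: the parent row has to exist before a child row references it.

```python
# Minimal sqlite3 sketch (standard library only) of the failure mode a
# misplaced session.flush() causes under MySQL foreign keys: a child row
# is written before the parent row it references exists.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # SQLite enforces FKs only if asked
conn.execute('CREATE TABLE resource (id INTEGER PRIMARY KEY)')
conn.execute('CREATE TABLE meta ('
             '  id INTEGER PRIMARY KEY,'
             '  resource_id INTEGER NOT NULL REFERENCES resource(id))')

# Wrong order: child first -> integrity error, like the collector hitting
# the FK constraint.
try:
    conn.execute('INSERT INTO meta (id, resource_id) VALUES (1, 1)')
except sqlite3.IntegrityError:
    pass  # "FOREIGN KEY constraint failed"

# Right order: parent written (flushed) first, then the child succeeds.
conn.execute('INSERT INTO resource (id) VALUES (1)')
conn.execute('INSERT INTO meta (id, resource_id) VALUES (1, 1)')
```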

All the packages are available in the Build Service:
http://en.opensuse.org/Build_Service

Anyone can branch a package there and get it built the way you
want. For instance, you could simply disable the patch and build
your own package there.

Again, I encourage you to join opensuse-cl...@opensuse.org for further
discussion on this (see http://en.opensuse.org/Portal:OpenStack for
details). We have several options from there: disable this patch, fix it
(if it was badly backported) or add another required backport.

Cheers,

Vincent

> Cheers
> 
> 
> On Wed, Feb 26, 2014 at 8:58 PM, Vincent Untz  wrote:
> 
> > Hi,
> >
> > On Wednesday, 26 February 2014 at 17:20 +0800, ZhiQiang Fan wrote:
> > > Hi, SUSE OpenStack developers,
> > >
> > > After installing OpenStack Havana on SLES 11 SP3 following the
> > > openstack-manuals guide, I find that
> > > ceilometer/storage/impl_sqlalchemy.py has implemented the metadata
> > > query functionality. However, the upstream
> > > github.com/openstack/ceilometer stable/havana branch doesn't implement
> > > that feature yet.
> > >
> > > the ceilometer package version is 2013.2.2.dev13
> > >
> > > Here are my questions:
> > >
> > > 1. is this intentional, or just a mistake during packaging?
> > > if it is intentional, where can I get the distribution release notes?
> >
> > You can see the package at
> >
> > https://build.opensuse.org/package/show/Cloud:OpenStack:Havana/openstack-ceilometer
> >
> > I think this is added by the 0001-enable-sql-metadata-query.patch patch
> > (you can find some notes about its history in the
> > openstack-ceilometer.changes file)
> >
> > > 2. is this the only part that differs from the community version, or
> > > are there other parts?
> > > if it is not the only part, where can I get the whole diff?
> >
> > See link above :-) Generally, the only "differences" are backports from
> > Icehouse.
> >
> > > 3. where and how can I get help if there is a bug in the code that
> > > differs?
> > > actually, there is one, caused by a MySQL foreign key, which blocks
> > > the entire ceilometer-collector service.
> >
> > Feel free to get in touch with opensuse-cl...@opensuse.org, since this
> > is where the packaging for OpenStack is discussed for openSUSE and SLES.
> >
> > (btw, with the build service, you can contribute any change you want to
> > the package)
> >
> > Cheers,
> >
> > Vincent
> >
> > > This problem is really important (and a bit urgent), please help me.
> > >
> > > Thanks
> >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > --
> > Les gens heureux ne sont pas pressés.
> >
> >
> 
> 
> 
> -- 
> blog: zqfan.github.com
> git: github.com/zqfan



-- 
Les gens heureux ne sont pas pressés.



[openstack-dev] [Mistral] a question about Mistral

2014-02-27 Thread Liuji (Jeremy)
Hi, Mistral members

I am very interested in the Mistral project.

Regarding the Mistral wiki, I have a question about the following use case
description.

"Live migration
A user specifies tasks for VM live migration triggered upon an event from 
Ceilometer (CPU consumption 100%)."

Does this mean Mistral plans to provide a feature like DRS?

I am a newbie to Mistral, and I apologize if I am missing something very
obvious.

Thanks,
Jeremy Liu




Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Youcef Laribi
Hi Eugene,

Thanks for the provided detail. See my comments below.

The point is to be able to share an IP address; it really means that two VIPs
(as we understand them in the current model) need to reside within the same
backend (technically, they need to share a neutron port).

Aren't we leaking some implementation detail here? Why is it that 2 VIPs using 
the same IP address have to be implemented on the same backend? Isn't this a 
driver/technology capability? If a certain driver *requires* that VIPs sharing 
the same IP address have to be on the same "backend" (whatever a "backend" 
means), it just needs to ensure that this is the case, but another driver might 
be able to support VIPs sharing the same IP to be on different backends. The 
user really shouldn't care. Did I miss some important detail? It feels like it, 
so please be patient with me :)

I'm sorry this all creates so much confusion.
In order to understand why we need an additional entity, you need to keep in
mind the following things:
 1) We have a notion of a root object. From the user's perspective it
represents a logical instance; from the implementation perspective it also
represents how that instance is mapped to a backend (agent, device), which
flavor/provider/driver it has, etc.
 2) We're trying to change the vip-pool relationship to m:n. If vip or pool
remains the root object, that creates an inconsistency, because a root object
could be connected to another root object with different parameters.

I'm not against introducing a wrapper entity that correlates the different
config objects that logically make up one LB config, but I don't think it is
needed from a logical object model point of view. Yes, it might make the
implementation of the object model easier for some drivers, and I'm OK with
having it if it helps. But strictly speaking it is not needed, because a
driver doesn't have to choose a backend when the pool is created or when a vip
is created, if it doesn't have enough info yet. It can wait until a vip/pool
are created and attached to each other, at which point it would have a clearer
idea of the backends eligible to host that whole LB configuration. Another
driver, though, might be able to perform the configuration on its "backend"
straight away on each API call, and still be able to comply with the object
model.

Youcef

On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi <youcef.lar...@citrix.com> wrote:
Hi Eugene,

1) In order to allow real multiple 'vips' per pool feature, we need the 
listener concept.
It's not just a different tcp port, but also a protocol, so session persistence 
and all ssl-related parameters should move to listener.

Why do we need a new 'listener' concept? Since as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov 
[mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow real multiple 'vips' per pool feature, we need the 
listener concept.
It's not just a different tcp port, but also a protocol, so session persistence 
and all ssl-related parameters should move to listener.

2) ProviderResourceAssociation - remains on the instance object (our instance 
object is VIP) as a relation attribute.
Though it is removed from public API, so it could not be specified on creation.
Remember provider is needed for REST call dispatching. The value of provider 
attribute (e.g. ProviderResourceAssociation) is result of scheduling.

3) As we discussed before, pool->vip relation will be removed, but pool reuse 
by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not a priority right now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP is not possible 
for other vip for the following reason:
vip is created (with new model) with flavor (or provider) and scheduled to a 
provider (and then to a particular backend), doing so for 2 vips makes address 
reuse impossible if we want to maintain logical API, or otherwise we would need 
to expose implementation details that will allow us to connect two vips to the 
same backend.

On the open discussion questions:
I think most of them are resolved by following existing API expectations about 
status fields, etc.
Main thing that allows to go with existing API expectations is the notion of 
'root object'.
Root object is the obje

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Samuel Bercovici
+1

From: Youcef Laribi [mailto:youcef.lar...@citrix.com]
Sent: Thursday, February 27, 2014 10:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Eugene,

Thanks for the provided detail. See my comments below.

The point is to be able to share an IP address; it really means that two VIPs
(as we understand them in the current model) need to reside within the same
backend (technically, they need to share a neutron port).

Aren't we leaking some implementation detail here? Why is it that 2 VIPs using 
the same IP address have to be implemented on the same backend? Isn't this a 
driver/technology capability? If a certain driver *requires* that VIPs sharing 
the same IP address have to be on the same "backend" (whatever a "backend" 
means), it just needs to ensure that this is the case, but another driver might 
be able to support VIPs sharing the same IP to be on different backends. The 
user really shouldn't care. Did I miss some important detail? It feels like it, 
so please be patient with me :)

I'm sorry this all creates so much confusion.
In order to understand why we need an additional entity, you need to keep in
mind the following things:
 1) We have a notion of a root object. From the user's perspective it
represents a logical instance; from the implementation perspective it also
represents how that instance is mapped to a backend (agent, device), which
flavor/provider/driver it has, etc.
 2) We're trying to change the vip-pool relationship to m:n. If vip or pool
remains the root object, that creates an inconsistency, because a root object
could be connected to another root object with different parameters.

I'm not against introducing a wrapper entity that correlates the different
config objects that logically make up one LB config, but I don't think it is
needed from a logical object model point of view. Yes, it might make the
implementation of the object model easier for some drivers, and I'm OK with
having it if it helps. But strictly speaking it is not needed, because a
driver doesn't have to choose a backend when the pool is created or when a vip
is created, if it doesn't have enough info yet. It can wait until a vip/pool
are created and attached to each other, at which point it would have a clearer
idea of the backends eligible to host that whole LB configuration. Another
driver, though, might be able to perform the configuration on its "backend"
straight away on each API call, and still be able to comply with the object
model.

Youcef

On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi <youcef.lar...@citrix.com> wrote:
Hi Eugene,

1) In order to allow real multiple 'vips' per pool feature, we need the 
listener concept.
It's not just a different tcp port, but also a protocol, so session persistence 
and all ssl-related parameters should move to listener.

Why do we need a new 'listener' concept? Since as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov 
[mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow real multiple 'vips' per pool feature, we need the 
listener concept.
It's not just a different tcp port, but also a protocol, so session persistence 
and all ssl-related parameters should move to listener.

2) ProviderResourceAssociation - remains on the instance object (our instance 
object is VIP) as a relation attribute.
Though it is removed from public API, so it could not be specified on creation.
Remember provider is needed for REST call dispatching. The value of provider 
attribute (e.g. ProviderResourceAssociation) is result of scheduling.

3) As we discussed before, pool->vip relation will be removed, but pool reuse 
by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not a priority right now.
I think it's a fair limitation that can be removed later.

On workflows:
WFs #2 and #3 are problematic. First off, sharing the same IP is not possible 
for other vip for the following reason:
vip is created (with new model) with flavor (or provider) and scheduled to a 
provider (and then to a particular backend), doing so for 2 vips makes address 
reuse impossible if we want to maintain logical API, or otherwise we would need 
to expose implementation details that will allow us to connect two vips to the 
same backend.

On the open d

[openstack-dev] does exception need localize or not?

2014-02-27 Thread yongli he

refer to :
https://wiki.openstack.org/wiki/Translations

Now some exceptions use _() and some do not. The wiki suggests not doing
that, but I'm not sure.


What's the correct way?


F.Y.I


   What To Translate

At present the convention is to translate *all* user-facing strings. This
means API messages, CLI responses, documentation, help text, etc.


There has been a lack of consensus about the translation of log 
messages; the current ruling is that while it is not against policy to 
mark log messages for translation if your project feels strongly about 
it, translating log messages is not actively encouraged.


Exception text should *not* be marked for translation, because if an
exception occurs there is no guarantee that the translation machinery
will be functional.
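To illustrate the convention the wiki describes, here is a sketch; `_` is aliased to a no-op stand-in, where a real project would import its actual gettext marker from its own i18n module.

```python
# Sketch of the wiki's convention. `_` is a stand-in for the project's
# real gettext marker; real code would import it from an i18n module.
_ = lambda msg: msg

# User-facing strings (API messages, CLI responses, help text) are
# marked for translation:
HELP_TEXT = _("Name of the volume to create.")

class VolumeNotFound(Exception):
    # Exception text is left unmarked, since the translation machinery
    # may not be functional when the exception is raised:
    message = "Volume %(volume_id)s could not be found."
```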




Regards
Yongli He



Re: [openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-02-27 Thread Dougal Matthews

On 26/02/14 18:38, Petr Blaho wrote:

I am wondering what is the OpenStack way of returning json from
apiclient.


Good question; I think it's important for us to be as consistent as
possible.


By looking at the API docs at http://api.openstack.org/ I can say that
projects use both ways, although what I would describe as a "nicer API"
uses the namespaced variant.


Can you explain why you prefer it? I'm curious as I would tend to vote
for no namespaces. They add an extra level of nesting but don't actually
provide any benefits that I can see.
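To make the two shapes concrete, a sketch with hypothetical field names (not the real Tuskar schema):

```python
# Hypothetical payloads illustrating the two styles discussed in this
# thread; field names are made up, not the real Tuskar schema.
import json

namespaced = json.loads('{"rack": {"id": 1, "name": "compute-1"}}')
flat = json.loads('{"id": 1, "name": "compute-1"}')

# The namespace costs one extra level of indirection on every access:
assert namespaced["rack"]["name"] == flat["name"]

# But it leaves the root free for siblings, e.g. aggregated data from
# another service:
aggregate = {
    "rack": namespaced["rack"],
    "metrics": {"cpu_util": 0.42},
}
```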



Re: [openstack-dev] [Neutron] Does l2-pop sync fdb on agent start ?

2014-02-27 Thread Eleouet Francois
Hi,

As the agent rebuilds br-tun at startup, the l2pop mechanism driver is
supposed to provide the agent with the whole list of fdb entries when it
boots.

You may be hitting this bug [1], which can prevent some flows from being
appended to OVS.

[1] https://bugs.launchpad.net/neutron/+bug/1263866


2014-02-27 4:53 GMT+01:00 Zang MingJie :
> Hi all,
>
> I found my ovs-agent has missed some tunnels on br-tun. I have l2-pop
> enabled. If some fdb entries are added while the agent is down, can they be
> added back once the agent is back?
>
>



Re: [openstack-dev] [Neutron] Does l2-pop sync fdb on agent start ?

2014-02-27 Thread Édouard Thuleau
Yes, the agent syncs fdb entries on startup, thanks to the flag
'agent_boot_time' (default 180 seconds).
The plugin compares it with the time the agent has been up (the diff between
the agent start time (agent.started.at) and its last heartbeat timestamp),
and if that is less, the plugin sends all fdb entries for the network.
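Paraphrased as code, a sketch of the check described above (not the actual neutron l2pop source):

```python
# Sketch (not the actual neutron source) of the agent_boot_time check:
# an agent whose uptime is below the threshold is treated as freshly
# booted and gets the full fdb sync; older agents get only increments.
from datetime import datetime, timedelta

AGENT_BOOT_TIME = 180  # seconds; the default mentioned above

def agent_restarted(started_at, heartbeat_at):
    """Return True if the agent looks freshly (re)started."""
    uptime = (heartbeat_at - started_at).total_seconds()
    return uptime < AGENT_BOOT_TIME

now = datetime.utcnow()
assert agent_restarted(now - timedelta(seconds=30), now)   # full sync
assert not agent_restarted(now - timedelta(hours=1), now)  # increments only
```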

Édouard.


On Thu, Feb 27, 2014 at 4:53 AM, Zang MingJie  wrote:

> Hi all,
>
> I found my ovs-agent has missed some tunnels on br-tun. I have l2-pop
> enabled. If some fdb entries are added while the agent is down, can they be
> added back once the agent is back?
>
>
>


Re: [openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-02-27 Thread Dougal Matthews

On 26/02/14 19:17, Jay Dobies wrote:

This is a new concept to me in JSON, I've never heard of a wrapper
element like that being called a namespace.


It's certainly not in the spec AFAIK, but I've seen this approach before
in various places.



My first impression is that is looks like cruft. If there's nothing else
at the root of the JSON document besides the namespace, all it means is
that every time I go to access relevant data I have an extra layer of
indirection.


+1



If we ever foresee an aggregate API, I can see some value in it. For
instance, a single call that aggregates a volume with some relevant
metrics from ceilometer. In that case, I could see leaving both distinct
data sets separate at the root with some form of namespace rather than
attempting to merge the data.


I can see this working well - return a set of raw API responses and
aggregate information under the tuskar namespace. However, I don't
think any other OpenStack services return multiple top level
namespaces. Do we have examples of this?




Re: [openstack-dev] [Mistral] a question about Mistral

2014-02-27 Thread Renat Akhmerov
Hey,

Can you please provide more details on what you're interested in? What do you
mean by DRS?

If you mean VMware Distributed Resource Scheduler, then yes and no. It's not
the major goal of Mistral, but Mistral is a more generic tool that could be
used to build something like this. The primary goal of Mistral is to provide a
workflow engine and an easy way to integrate Mistral with other systems, so
that we can trigger workflow execution upon external events like Ceilometer
alarms, timers, or anything else.

Feel free to ask any questions, thanks!

Renat Akhmerov
@ Mirantis Inc.

On 27 Feb 2014, at 15:08, Liuji (Jeremy)  wrote:

> Hi, Mistral members
> 
> I am very interested in the Mistral project.
> 
> Regarding the Mistral wiki, I have a question about the following use case
> description.
> 
> "Live migration
> A user specifies tasks for VM live migration triggered upon an event from 
> Ceilometer (CPU consumption 100%)."
> 
> Does this mean Mistral plans to provide a feature like DRS?
> 
> I am a newbie for Mistral and I apologize if I am missing something very 
> obvious.
> 
> Thanks,
> Jeremy Liu
> 
> 




Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Eugene Nikanorov
>
>
>
> The point is to be able to share an IP address; it really means that two
> VIPs (as we understand them in the current model) need to reside within
> the same backend (technically, they need to share a neutron port).
>
>
>
> Aren't we leaking some implementation detail here?
>
I see IP address sharing as user intent, not an implementation detail. And a
shared backend may not be the only obstacle here.
The backend is not exposed by the API in any way, by the way.
When you create a root object with a flavor, you really can't control which
driver it will be scheduled to.
So even if there were a driver that somehow (how?) allowed the same IP on
different backends, the user just would not be able to create 2 vips that
share an IP address.


>
> I'm not against introducing a wrapper entity that correlates the different
> config objects that logically make up one LB config, but I don't think it
> is needed from the logical object model pov IMO. Yes, it might make the
> implementation of the object model for some drivers easier, and I'm OK with
> having it, if it helps. But strictly speaking it is not needed, because a
> driver doesn't have to choose a backend when the pool is created or when a
> vip is created, if it doesn't have enough info yet.
>
That is just not so simple. If you create a vip and the pool, one way or
another it is a ready configuration that needs to be deployed, so the driver
chooses the backend. Then you need to add objects to this configuration, by,
say, adding a vip with the same IP on a different port.
Currently there is no way you can specify this through the API.
You can specify the same IP address and another tcp port, but that call will
just fail.
E.g. we'd have a subtle limitation in the API instead of consistency.

> It can wait until a vip/pool are created and attached to each other, then
> it would have a clearer idea of the backends eligible to host that whole LB
> configuration. Another driver though, might be able to perform the
> configuration on its "backend" straight-away on each API call, and still be
> able to comply with the object model.
>
The API will not let the user control drivers; that's one of the reasons why
it's not possible from a design standpoint.

Youcef, can we chat over IRC? I think we could clarify a lot more than over
the ML.

Thanks,
Eugene.


>
> Youcef
>
>
>
> On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi 
> wrote:
>
> Hi Eugene,
>
>
>
> 1) In order to allow real multiple 'vips' per pool feature, we need the
> listener concept.
>
> It's not just a different tcp port, but also a protocol, so session
> persistence and all ssl-related parameters should move to listener.
>
>
>
> Why do we need a new 'listener' concept? Since as Sam pointed out, we are
> removing the reference to a pool from the VIP in the current model, isn't
> this enough by itself to allow the model to support multiple VIPs per pool
> now?
>
>
>
> lb-pool-create   --> $POOL-1
>
> lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
>
> lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2
>
>
>
>
>
> Youcef
>
>
>
>
>
>
>
>
>
>
>
> *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
> *Sent:* Wednesday, February 26, 2014 1:26 PM
> *To:* Samuel Bercovici
> *Cc:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
> Hi Sam,
>
>
>
> I've looked over the document, couple of notes:
>
>
>
> 1) In order to allow real multiple 'vips' per pool feature, we need the
> listener concept.
>
> It's not just a different tcp port, but also a protocol, so session
> persistence and all ssl-related parameters should move to listener.
>
>
>
> 2) ProviderResourceAssociation - remains on the instance object (our
> instance object is VIP) as a relation attribute.
>
> Though it is removed from public API, so it could not be specified on
> creation.
>
> Remember provider is needed for REST call dispatching. The value of
> provider attribute (e.g. ProviderResourceAssociation) is result of
> scheduling.
>
>
>
> 3) As we discussed before, pool->vip relation will be removed, but pool
> reuse by different vips (e.g. different backends) will be forbidden for
> implementation simplicity, because this is definitely not a priority right
> now.
>
> I think it's a fair limitation that can be removed later.
>
>
>
> On workflows:
>
> WFs #2 and #3 are problematic. First off, sharing the same IP is not
> possible for other vip for the following reason:
>
> vip is created (with new model) with flavor (or provider) and scheduled to
> a provider (and then to a particular backend), doing so for 2 vips makes
> address reuse impossible if we want to maintain logical API, or otherwise
> we would need to expose implementation details that will allow us to
> connect two vips to the same backend.
>
>
>
> On the open discussion questions:
>
> I think most of them are resolved by following existing API expectations
> about status fields, etc.
>
> Main th

Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Dougal Matthews

On 26/02/14 19:43, Jay Dobies wrote:

* Even at this level, it exposes the underlying guts. There are calls to
nova baremetal listed in there, but eventually those will turn into
ironic calls. It doesn't give us a ton of flexibility in terms of
underlying technology if that knowledge bubbles up to the end user that
way.


This is a good point, and it perhaps highlights that when we don't
abstract away these concepts it becomes much harder for us to make
changes under the hood. Rather than handling that on our own we would
need to provide "migration" steps for people to follow.



* This is a good view into what third-party integrators are going to
face if they choose to skip our UIs and go directly to the REST APIs.


I like the notion of OpenStackClient. I'll talk ideals for a second. If
we had a standard framework and each project provided a command
abstraction that plugged in, we could pick and choose what we included
under the Tuskar umbrella. Advanced users with particular needs could go
directly to the project clients if needed.

I think this could go beyond usefulness for Tuskar as well. On a
previous project, I wrote a pluggable client framework, allowing the end
user to add their own commands that put a custom spin on what data was
returned or how it was rendered. That's a level between being locked
into what we decide the UX should be and having to go directly to the
REST APIs themselves.

That said, I know that's a huge undertaking to get OpenStack in general
to buy into. I'll leave it more that I think it is a lesser UX (not even
saying bad, just not great) to have so much for the end user to digest
to attempt to even play with it. I'm more of the mentality of a unified
TripleO CLI that would be catered towards handling TripleO stuffs. Short
of OpenStackClient, I realize I'm not exactly in the majority here, but
figured it didn't hurt to spell out my opinion  :)


There is a renewed effort to create a unified client for OpenStack. I'm
not sure if this exactly matches what you're thinking, but it may be worth
checking out. I've only seen it in passing. They seem to be in the early
stages, but there is quite a bit of motivation/backing AFAICT.

https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK



Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

2014-02-27 Thread Julio Carlos Barrera Juez
I'm following the change you pointed to a week ago. It seems to be working
now and should be approved soon. I will be happy when it is approved.

Anyway, I need more information about how to develop a service driver and a
device driver for the VPN plugin. Through reverse-engineering, I realized that
I need an RPC agent and an RPC channel between them to communicate, using a
kind of callback to answer. Where can I find documentation about this, and
some examples? Is there a best-practice guide for using this architecture?
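In the absence of documentation, the split can be sketched roughly as below; all class and method names here are illustrative guesses, not the real neutron VPNaaS interfaces.

```python
# Hypothetical skeleton of the service-driver / device-driver split; the
# names are illustrative, not the real neutron VPNaaS interfaces.

class FakeRpcClient(object):
    """Stand-in for the RPC layer between neutron-server and the agent."""
    def __init__(self):
        self.casts = []

    def cast(self, context, method, **kwargs):
        self.casts.append((method, kwargs))


class MyVpnServiceDriver(object):
    """Runs inside neutron-server; does no device work itself, it only
    notifies the agent on the right host over RPC."""
    def __init__(self, rpc_client):
        self.rpc = rpc_client

    def create_vpnservice(self, context, vpnservice):
        self.rpc.cast(context, 'vpnservice_updated',
                      host=vpnservice['host'])


class MyVpnDeviceDriver(object):
    """Runs inside the VPN agent; reacts to RPC notifications by
    programming the actual device/daemon and reporting status."""
    def vpnservice_updated(self, context, **kwargs):
        self.sync(context)

    def sync(self, context):
        pass  # device-specific: fetch desired state, apply, report


rpc = FakeRpcClient()
driver = MyVpnServiceDriver(rpc)
driver.create_vpnservice({}, {'host': 'net-node-1'})
```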

Thank you again!

[image: i2cat]
Julio C. Barrera Juez
Office phone: +34 93 357 99 27
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona, Spain
http://dana.i2cat.net


On 19 February 2014 09:13, Julio Carlos Barrera Juez <
juliocarlos.barr...@i2cat.net> wrote:

> Thank you very much Bo. I will try all your advices and check if it works!
>
> [image: i2cat]
> Julio C. Barrera Juez
> Office phone: +34 93 357 99 27
> Distributed Applications and Networks Area (DANA)
> i2CAT Foundation, Barcelona, Spain
> http://dana.i2cat.net
>
>
> On 18 February 2014 09:18, Bo Lin  wrote:
>
>> I wonder whether your neutron server code includes the "VPNaaS
>> integration with service type framework" change at
>> https://review.openstack.org/#/c/41827/21 ; if not, the service_provider
>> option is useless. You need to include that change before developing your
>> own driver.
>>
>> Q&A (in my opinion; something may be missing):
>> - What is the difference between service drivers and device drivers?
>> Service drivers are driven by the vpn service plugin and are responsible
>> for casting rpc requests (CRUD of vpnservices) to the vpn agent and for
>> handling callbacks from it.
>> Device drivers are driven by the vpn agent and are responsible for
>> implementing specific vpn operations and reporting vpn running status.
>>
>> - Could I implement only one of them?
>> A device driver must be implemented for your own device. Unless the
>> default ipsec service driver is definitely appropriate, I suggest you
>> implement both of them. After including "VPNaaS integration with service
>> type framework", the service driver work is simple.
>>
>> - Where do I need to put my Python implementation in my OpenStack instance?
>> Do you mean letting your instance run your new code? The default source
>> code dir is /opt/stack/neutron; you need to put your new changes into that
>> dir and restart the neutron server.
>>
>> - How could I configure my OpenStack instance to use this implementation?
>>1.  Add your new codes into source dir
>>2. Add appropriate vpnaas service_provider into neutron.conf and add
>> appropriate "vpn_device_driver" option into vpn_agent.ini
>>3. restart n-svc and q-vpn
>>
>> Hope this helps.
>>
>> --
>> *From: *"Julio Carlos Barrera Juez" 
>> *To: *"OpenStack Development Mailing List" <
>> openstack-dev@lists.openstack.org>
>> *Sent: *Monday, February 17, 2014 7:18:44 PM
>> *Subject: *[openstack-dev] How to implement and configure a new Neutron
>> vpnaasdriver from scratch?
>>
>>
>> Hi.
>>
>> I have asked in the Q&A website without success (
>> https://ask.openstack.org/en/question/12072/how-to-implement-and-configure-a-new-vpnaas-driver-from-scratch/
>> ).
>>
>> I want to develop a vpnaas implementation. It seems that since Havana,
>> there are plugin, service and device implementations. I like the plugin
>> and its current API, so I don't need to reimplement it. Now I want to
>> implement a vpnaas driver, and I see I have two main parts to take into
>> account: the service_drivers and the device_drivers. The IPsec/OpenSwan
>> implementation is the only sample I've found.
>>
>> I'm using devstack to test my experiments.
>>
>> I tried to implement a VpnDriver Python class extending the main API
>> methods like IPsecVPNDriver does. I placed basic implementation files at
>> the same level as the IPsec/OpenSwan ones and configured Neutron by adding
>> this line to the /etc/neutron/neutron.conf file:
>>
>> service_provider =
>> VPN:VPNaaS:neutron.services.vpn.service_drivers.our_python_filename.OurClassName:default
>>
>> I restarted Neutron related services in my devstack instance, but it
>> seemed it didn't work.
>>
>>
>>
>> - What is the difference between service drivers and device drivers?
>> - Could I implement only one of them?
>> - Where do I need to put my Python implementation in my OpenStack instance?
>> - How could I configure my OpenStack instance to use this implementation?
>>
>>
>>
>> I didn't find almost any documentation about these topics.
>>
>> Thank you very much.
>>
>> ___

Re: [openstack-dev] [nova] why doesn't _rollback_live_migration() always call rollback_live_migration_at_destination()?

2014-02-27 Thread John Garbutt
On 25 February 2014 15:44, Chris Friesen  wrote:
> On 02/25/2014 05:15 AM, John Garbutt wrote:
>>
>> On 24 February 2014 22:14, Chris Friesen 
>> wrote:
>
>
>
>>> What happens if we have a shared-storage instance that we try to migrate
>>> and
>>> fail and end up rolling back?  Are we going to end up with messed-up
>>> networking on the destination host because we never actually cleaned it
>>> up?
>>
>>
>> I had some WIP code up to clean that up as part of the move to
>> conductor; it's massively confusing right now.
>>
>> Looks like a bug to me.
>>
>> I suspect the real issue is that some parts of:
>> self.driver.rollback_live_migration_at_destination(context, instance,
>>  network_info, block_device_info)
>> need more information about whether shared storage is being used.
>
>
> What's the timeframe on the move to conductor?

Not before Juno now :(

It got cut just before Havana shipped, and so it needed a complete
re-write once Icehouse opened, which didn't get completed in time.
Sorry. I have a better approach now though, but needs coding up
properly.

> I'm looking at fixing up the resource tracking over a live migration
> (currently we just rely on the audit fixing things up whenever it gets
> around to running) but to make that work properly I need to unconditionally
> run rollback code on the destination.

Ok, ouch. That's worth doing.

I have added a new bug tag "live-migrate", so I would love to document
all these bugs people are finding under that tag. I want to spend some
time working through some of these bugs as I go into bug-fixing mode.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Samuel Bercovici


From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Thursday, February 27, 2014 11:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


The point is to be able to share an IP address; it really means that two
VIPs (as we understand them in the current model) need to reside within the
same backend (technically they need to share a neutron port).

Aren't we leaking some implementation detail here?
I see IP address sharing as user intent, not an implementation detail. The
same backend might not be the only obstacle here.
The backend is not exposed anyhow by the API, by the way.
When you create root object with flavor - you really can't control to which 
driver it will be scheduled.
So even if there is a driver that somehow (how?) allows the same IP on
different backends, the user just will not be able to create two VIPs that
share an IP address.

Eugene, is your point that the logical model addresses the capability for IP 
sharing but that it can't be scheduled correctly?


I'm not against introducing a wrapper entity that correlates the different 
config objects that logically make up one LB config, but I don't think it is 
needed from the logical object model pov IMO. Yes, it might make the 
implementation of the object model for some drivers easier, and I'm OK with 
having it, if it helps. But strictly speaking it is not needed, because a 
driver doesn't have to choose a backend when the pool is created or when a vip 
is created, if it doesn't have enough info yet.
That is just not so simple. If you create a VIP and a pool, one way or
another it is a ready configuration that needs to be deployed, so the driver
chooses the backend. Then you need to add objects to this configuration by,
say, adding a VIP with the same IP on a different port.
I don't understand the issue described here.

Currently there is no way you can specify this through the API.
You can specify the same IP address and another TCP port, but that call will
just fail.
Correct; as I have described, the current implementation allocates a
neutron port on the first VIP, hence the second VIP will fail.
This is an implementation detail, we can discuss how to address. In the logical 
model I have removed the reference to the neutron port and noted this for 
further discussion.

E.g. we'll have a subtle limitation in the API instead of consistency.

It can wait until a vip/pool are created and attached to each other, then it 
would have a clearer idea of the backends eligible to host that whole LB 
configuration. Another driver though, might be able to perform the 
configuration on its "backend" straight-away on each API call, and still be 
able to comply with the object model.
The API will not let the user control drivers; that's one of the reasons why
it's not possible from a design standpoint.
I do not see how this relates to controlling drivers. It is the driver 
implementation, the user should not need to control it.

Youcef, can we chat over IRC? I think we could clarify lot more than over ML.

Thanks,
Eugene.


Youcef

On Thu, Feb 27, 2014 at 5:20 AM, Youcef Laribi 
mailto:youcef.lar...@citrix.com>> wrote:
Hi Eugene,

1) In order to allow real multiple 'vips' per pool feature, we need the 
listener concept.
It's not just a different tcp port, but also a protocol, so session persistence 
and all ssl-related parameters should move to listener.

Why do we need a new 'listener' concept? Since as Sam pointed out, we are 
removing the reference to a pool from the VIP in the current model, isn't this 
enough by itself to allow the model to support multiple VIPs per pool now?

lb-pool-create   -->$POOL-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-1
lb-vip-create .$VIP_ADDRESS,$TCP_PORT, default_pool=$POOL-1... --> $VIP-2


Youcef





From: Eugene Nikanorov 
[mailto:enikano...@mirantis.com]
Sent: Wednesday, February 26, 2014 1:26 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi Sam,

I've looked over the document, couple of notes:

1) In order to allow real multiple 'vips' per pool feature, we need the 
listener concept.
It's not just a different tcp port, but also a protocol, so session persistence 
and all ssl-related parameters should move to listener.

2) ProviderResourceAssociation - remains on the instance object (our instance 
object is VIP) as a relation attribute.
Though it is removed from public API, so it could not be specified on creation.
Remember that the provider is needed for REST call dispatching. The value of
the provider attribute (e.g. ProviderResourceAssociation) is the result of
scheduling.

3) As we discussed before, pool->vip relation will be removed, but pool reuse 
by different vips (e.g. different backends) will be forbidden for 
implementation simplicity, because this is definitely not

Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

2014-02-27 Thread Bo Lin
Hi Julio, 
You can take https://review.openstack.org/#/c/74156/ and
https://review.openstack.org/#/c/74144/ as examples for writing your own vpnaas
driver. For more info about the service type framework, you can also refer to
the neutron/services/loadbalancer code.

- Original Message -

From: "Julio Carlos Barrera Juez"  
To: "OpenStack Development Mailing List (not for usage questions)" 
 
Sent: Thursday, February 27, 2014 5:26:32 PM 
Subject: Re: [openstack-dev] How to implement and configure a new Neutron 
vpnaas driver from scratch? 

I'm following the change you pointed to a week ago. It seems that it is working
now and will eventually be approved soon. I will be happy when it is approved.

Anyway, I need more information about how to develop a service driver and a
device driver for the VPN plugin. I realized by reverse engineering that I
need an RPC agent and RPC between them to communicate, using a kind of
callback to answer. Where can I find documentation about it and some examples?
Is there any best-practice guide for the use of this architecture?

Thank you again! 


Julio C. Barrera Juez 
Office phone: +34 93 357 99 27 
Distributed Applications and Networks Area (DANA) 
i2CAT Foundation, Barcelona, Spain 
http://dana.i2cat.net 


On 19 February 2014 09:13, Julio Carlos Barrera Juez < 
juliocarlos.barr...@i2cat.net > wrote: 



Thank you very much Bo. I will try all your advice and check if it works!


Julio C. Barrera Juez 
Office phone: +34 93 357 99 27 
Distributed Applications and Networks Area (DANA) 
i2CAT Foundation, Barcelona, Spain 
http://dana.i2cat.net 


On 18 February 2014 09:18, Bo Lin < l...@vmware.com > wrote: 



I wonder whether your neutron server code has added the "VPNaaS integration
with service type framework" change at
https://review.openstack.org/#/c/41827/21; if not, the service_provider option
is useless. You need to include the change before developing your own driver.

Q&A (in my opinion; something may be missing):
- What is the difference between service drivers and device drivers?
Service drivers are driven by the VPN service plugin and are responsible for
casting RPC requests (CRUD of vpnservices) to, and handling callbacks from,
the VPN agent.
Device drivers are driven by the VPN agent and are responsible for
implementing specific VPN operations and reporting VPN running status.

- Could I implement only one of them?
The device driver must be implemented for your own device. Unless the default
IPsec service driver is definitely appropriate, I suggest you implement both
of them. After including "VPNaaS integration with service type framework", the
service driver work is simple.

- Where do I need to put my Python implementation in my OpenStack instance?
Do you mean having your instance run your new code? The default source code
dir is /opt/stack/neutron; you need to put your new changes into that dir and
restart the neutron server.

- How could I configure my OpenStack instance to use this implementation?
1. Add your new code into the source dir
2. Add an appropriate vpnaas service_provider into neutron.conf and add an
appropriate "vpn_device_driver" option into vpn_agent.ini
3. Restart n-svc and q-vpn

Hope this helps.
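
Putting steps 2 and 3 together, the two config edits might look like this. The section names (`[service_providers]`, `[vpnagent]`) and the `my_driver` module paths are assumptions for illustration, not taken from the thread; substitute wherever your own classes actually live:

```ini
# neutron.conf -- register the service driver (illustrative path)
[service_providers]
service_provider = VPN:myvpn:neutron.services.vpn.service_drivers.my_driver.MyServiceDriver:default

# vpn_agent.ini -- register the device driver (illustrative path)
[vpnagent]
vpn_device_driver = neutron.services.vpn.device_drivers.my_driver.MyDeviceDriver
```

After editing both files, restart the neutron server (n-svc) and the VPN agent (q-vpn) as step 3 says.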


From: "Julio Carlos Barrera Juez" < juliocarlos.barr...@i2cat.net > 
To: "OpenStack Development Mailing List" < openstack-dev@lists.openstack.org > 
Sent: Monday, February 17, 2014 7:18:44 PM 
Subject: [openstack-dev] How to implement and configure a new Neutron vpnaas 
driver from scratch? 


Hi. 

I have asked in the Q&A website without success ( 
https://ask.openstack.org/en/question/12072/how-to-implement-and-configure-a-new-vpnaas-driver-from-scratch/
 ). 

I want to develop a vpnaas implementation. It seems that since Havana, there
are plugin, service and device implementations. I like the plugin and its
current API, so I don't need to reimplement it. Now I want to implement a
vpnaas driver, and I see I have two main parts to take into account: the
service_drivers and the device_drivers. The IPsec/OpenSwan implementation is
the only sample I've found.

I'm using devstack to test my experiments. 

I tried to implement a VpnDriver Python class extending the main API methods
like IPsecVPNDriver does. I placed basic implementation files at the same
level as the IPsec/OpenSwan ones and configured Neutron by adding this line to
the /etc/neutron/neutron.conf file:

service_provider = 
VPN:VPNaaS:neutron.services.vpn.service_drivers.our_python_filename.OurClassName:default
 

I restarted Neutron related services in my devstack instance, but it seemed it 
didn't work. 



- What is the difference between service drivers and device drivers? 
- Could I implement only one of them? 
- Where do I need to put my Python implementation in my OpenStack instance?
- How could I configure my OpenStack instance to use this implementation? 



I didn't find almost any documentation about these topics. 

Thank you very much. 

___

Re: [openstack-dev] How do I mark one option as deprecating another one ?

2014-02-27 Thread Day, Phil
Hi Denis,

Thanks for the pointer, but I looked at that, and my understanding is that it
only allows me to retrieve a value by an old name; it doesn't let me know that
the old name has been used. So if all I wanted to do was change the name/group
of the config value, it would be fine. But in my case I need to be able to
implement:
if new_value_defined:
    do_something
elif old_value_defined:
    warn_about_deprecation
    do_something_else

Specifically, I want to replace tenant_name-based authentication with
tenant_id - so I need to know which has been specified.
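
One way to implement that branching without special oslo.config support is to give both options no default and inspect which one was actually set. This is a sketch of the logic only: a plain dict stands in for a parsed config namespace, and the option names are taken from the paragraph above:

```python
# Sketch: decide behavior based on which of a new/deprecated option
# pair the operator actually set. A dict stands in for a parsed
# oslo.config-style namespace; names are illustrative.
import warnings


def resolve_tenant(conf):
    tenant_id = conf.get('tenant_id')      # new option
    tenant_name = conf.get('tenant_name')  # deprecated option
    if tenant_id is not None:
        return ('id', tenant_id)
    if tenant_name is not None:
        warnings.warn('tenant_name is deprecated; use tenant_id',
                      DeprecationWarning)
        return ('name', tenant_name)
    raise ValueError('one of tenant_id / tenant_name must be set')


print(resolve_tenant({'tenant_id': 'abc123'}))
print(resolve_tenant({'tenant_name': 'demo'}))
```

The key design point is that both options default to "unset" (None), so "which did the operator specify?" stays answerable, which `deprecated_opts` alone hides.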

Phil


From: Denis Makogon [mailto:dmako...@mirantis.com]
Sent: 26 February 2014 14:31
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] How do I mark one option as deprecating another 
one ?

Here what oslo.config documentation says.

Represents a Deprecated option. Here's how you can use it

oldopts = [cfg.DeprecatedOpt('oldfoo', group='oldgroup'),
           cfg.DeprecatedOpt('oldfoo2', group='oldgroup2')]
cfg.CONF.register_group(cfg.OptGroup('blaa'))
cfg.CONF.register_opt(cfg.StrOpt('foo', deprecated_opts=oldopts),
                      group='blaa')

Multi-value options will return all new and deprecated
options.  For single options, if the new option is present
("[blaa]/foo" above) it will override any deprecated options
present.  If the new option is not present and multiple
deprecated options are present, the option corresponding to
the first element of deprecated_opts will be chosen.
I hope that it'll help you.

Best regards,
Denis Makogon.

On Wed, Feb 26, 2014 at 4:17 PM, Day, Phil 
mailto:philip@hp.com>> wrote:
Hi Folks,

I could do with some pointers on config value deprecation.

All of the examples in the code and documentation seem to deal with the case
of "old_opt" being replaced by "new_opt" but still returning the same value.
Here, using deprecated_name and/or deprecated_opts in the definition of
"new_opt" lets me still get the value (and log a warning) if the config still
uses "old_opt".

However, my use case is different: while I want to deprecate old_opt, new_opt
doesn't take the same value, and I need to do different things depending on
which is specified. I.e., if old_opt is specified and new_opt isn't, I still
want to do some processing specific to old_opt and log a deprecation warning.

Clearly I can code this up as a special case at the point where I look for the 
options - but I was wondering if there is some clever magic in oslo.config that 
lets me declare this as part of the option definition ?



As a second point,  I thought that using a deprecated option automatically 
logged a warning, but in the latest Devstack wait_soft_reboot_seconds is 
defined as:

cfg.IntOpt('wait_soft_reboot_seconds',
           default=120,
           help='Number of seconds to wait for instance to shut down after'
                ' soft reboot request is made. We fall back to hard reboot'
                ' if instance does not shutdown within this window.',
           deprecated_name='libvirt_wait_soft_reboot_seconds',
           deprecated_group='DEFAULT'),



but if I include the following in nova.conf

libvirt_wait_soft_reboot_seconds = 20


I can see the new value of 20 being used, but there is no warning logged that
I'm using a deprecated name?

Thanks
Phil


___

___


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Dougal Matthews

On 26/02/14 13:34, Jiří Stránský wrote:

Hello,

i went through the CLI way of deploying overcloud, so if you're
interested what's the workflow, here it is:

https://gist.github.com/jistr/9228638


This is great, thanks for taking the time to put it together. Another
thing to add to my list of stuff to try :)



I'd say it's still an open question whether we'll want to give better UX
than that ^^ and at what cost (this is very much tied to the benefits
and drawbacks of various solutions we discussed in December [1]). All in
all it's not as bad as i expected it to be back then [1]. The fact that
we keep Tuskar API as a layer in front of Heat means that CLI user
doesn't care about calling merge.py and creating Heat stack manually,
which is great.


It does look pretty good, and also simpler than I expected. At the
moment it seems we only really need to interact with glance, nova and
tuskar. However, presumably this will only get more complicated over
time which could become a problem.



In general the CLI workflow is on the same conceptual level as Tuskar
UI, so that's fine, we just need to use more commands than "tuskar".


I wonder if this would become a documentation/support nightmare.


get Tuskar UI a bit closer back to the fact

that Undercloud is OpenStack too, and keep the name "Flavors" instead of
changing it to "Node Profiles". I wonder if that would be unwelcome to
the Tuskar UI UX, though.


I can imagine we would constantly explain it by saying it's the same as
flavors, because people will be familiar with this. So maybe being
consistent would help.



One other thing, I've looked at my own examples so far, so I didn't
really think about this but seeing it written down, I've realised the
way we specify the roles in the Tuskar CLI really bugs me.

--roles 1=1 \
--roles 2=1

I know what this means, but even reading it now I think: One equals
one? Two equals one? What? I think we should probably change the arg
name and also refer to roles by name.

--role-count compute=10

and a shorter option

-R compute=10
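
A sketch of how such NAME=COUNT arguments could be parsed with argparse, using the option names proposed above (the `tuskar` program name and `role_counts` attribute are assumptions for illustration):

```python
# Sketch: parse repeatable --role-count NAME=COUNT arguments.
import argparse


def parse_role_count(value):
    """Turn 'compute=10' into ('compute', 10), rejecting bad input."""
    name, sep, count = value.partition('=')
    if not sep or not name or not count.isdigit():
        raise argparse.ArgumentTypeError(
            'expected NAME=COUNT, e.g. compute=10, got %r' % value)
    return name, int(count)


parser = argparse.ArgumentParser(prog='tuskar')
parser.add_argument('-R', '--role-count', metavar='NAME=COUNT',
                    type=parse_role_count, action='append', default=[],
                    dest='role_counts',
                    help='number of nodes for a role; repeatable')

args = parser.parse_args(['--role-count', 'compute=10', '-R', 'control=3'])
print(dict(args.role_counts))  # → {'compute': 10, 'control': 3}
```

Referring to roles by name this way also removes the "1=1" ambiguity, since the role name appears verbatim in the command line.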


___


Re: [openstack-dev] [Tripleo] tripleo-cd-admins team update / contact info question

2014-02-27 Thread Derek Higgins
On 25/02/14 22:30, Robert Collins wrote:
> In the tripleo meeting today we re-affirmed that the tripleo-cd-admins
> team is aimed at delivering production-availability clouds - that's how
> we know the tripleo program is succeeding (or not !).
> 
> So if you're a member of that team, you're on the hook - effectively
> on call, where production issues will take precedence over development
> / bug fixing etc.
> 
> We have the following clouds today:
> cd-undercloud (baremetal, one per region)
> cd-overcloud (KVM in the HP region, not sure yet for the RH region) -
> multi region.
> ci-overcloud (same as cd-overcloud, and will go away when cd-overcloud
> is robust enough).
> 
> And we have two users:
>  - TripleO ATCs, all of whom are eligible for accounts on *-overcloud
>  - TripleO reviewers, indirectly via openstack-infra who provide 99%
> of the load on the clouds
> 
> Right now when there is a problem, there's no clearly defined 'get
> hold of someone' mechanism other than IRC in #tripleo.
> 
> And that's pretty good since most of the admins are on IRC most of the time.
> 
> But.
> 
> There are two holes - a) what if it's Sunday evening :) and b) what if
> someone (for instance Derek) has been troubleshooting a problem, but
> needs to go do personal stuff, or you know, sleep. There's no reliable
> defined handoff mechanism.
> 
> So - I think we need to define two things:
>   - a stock way for $randoms to ask for support w/ these clouds that
> will be fairly low latency and reliable.
>   - a way for us to escalate to each other *even if folk happen to be
> away from the keyboard at the time*.
> And possibly a third:
>   - a way for openstack-infra admins to escalate to us in the event of
> OMG things happening. Like, we send 1000 VMs all at once at their git
> mirrors or something.
> 
> And with that lets open the door for ideas!
> 
> -Rob
> 


I agree that this is something that needs to happen, at the very least to aid
handover between team members. I wonder if we should start simple, give
it a little time, and progress from there to improve. Here is what I
would suggest:

o Handling outages would be done here
https://etherpad.openstack.org/p/cloud-outage
o Once an outage occurs, we jump on there to store relevant debug info
o Everybody on the admin list gets spammed on IRC by a bot (we could put
instructions for the bot into the etherpad), e.g.
ircmessage: ci-overcloud down, component XXX not starting
o Communicate on #tripleo as we are working on the problem
o For handover, we should keep enough in the etherpad to allow anybody
to get up to speed on the problem and the history of what's been done so far
o We delete irrelevant garbage from the etherpad once it becomes clear it's
irrelevant

Once finished: file bugs, write up a summary, clear the etherpad, etc.

Infra (or any other user) could escalate to us by getting somebody on
#tripleo or adding something to the etherpad, e.g.
ircmessage: ci-overcloud down since 12:30 UTC
The IRC bot would take it from there.

If it becomes obvious that we are not reacting quickly enough (or just
don't have enough redundancy of people in enough timezones to keep
somebody working on a problem) I think expanding the team a little might
be in order.

The disadvantage to using an etherpad is that anybody could come along and
delete details without us noticing ...

thoughts?

___


Re: [openstack-dev] [nova] New Websockify Release

2014-02-27 Thread Thierry Carrez
Solly Ross wrote:
> We (the websockify/noVNC team) have released a new version of websockify 
> (0.6.0).  It contains several fixes and features relating to OpenStack (a 
> couple of bugs were fixed, and native support for the `logging` module was 
> added).  Unfortunately, to integrate it into OpenStack, a patch is needed to 
> the websocketproxy code in Nova 
> (https://gist.github.com/DirectXMan12/9233369) due to a refactoring of the 
> websockify API.  My concern is that the various distros most likely have not 
> had time to update the package in their package repositories.  What is the 
> appropriate timescale for updating Nova to work with the new version?

Thanks for reaching out !

I'll let the Nova devs speak, but in that specific case it might make
sense to patch the Nova code to support both API versions. That would
facilitate the migration to 0.6.0-style code.

At some point in the future (when 0.6.0 is everywhere) we could bump the
dep to >=0.6.0 and remove the compatibility code.
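
The dual-version approach could be sketched like this. Since the exact names in the refactored websockify API are not given in the thread, stand-in module objects are used to demonstrate the pattern, and `ProxyRequestHandler` is only an assumed marker attribute for the 0.6.0-style API:

```python
# Sketch of supporting two versions of a library's API at once.
# SimpleNamespace objects stand in for the old/new websockify modules;
# real attribute names in websockify 0.6.0 may differ.
from types import SimpleNamespace

old_websockify = SimpleNamespace(WebSocketProxy=object)
new_websockify = SimpleNamespace(WebSocketProxy=object,
                                 ProxyRequestHandler=object)


def uses_new_api(websockify):
    # Choose the 0.6.0-style code path if the assumed refactored
    # handler class exists; otherwise fall back to the old path.
    return hasattr(websockify, 'ProxyRequestHandler')


print(uses_new_api(old_websockify), uses_new_api(new_websockify))
```

The compatibility shim can then be deleted in one place once the minimum dependency is bumped to >=0.6.0, as suggested above.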

-- 
Thierry Carrez (ttx)

___


Re: [openstack-dev] [glance] Switching from sql_connection to [database] connection ?

2014-02-27 Thread Flavio Percoco

On 27/02/14 12:12 +0800, Tom Fifield wrote:

Hi,

As best I can tell, all other services now use this syntax for 
configuring database connections:


[database]
connection = sqlite:///etc,omg


whereas glance appears to still use

[DEFAULT]
...
sql_connection = sqlite:///etc,omg


Is there a plan to switch to the former during Icehouse development?

From a user standpoint it'd be great to finally have consistency 
amongst all the services :)


It already did. It looks like the config sample needs to be updated.

To be more precise, `sql_connection` is marked as deprecated.[0]

[0] 
https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/session.py#L329

Cheers,
Fla

--
@flaper87
Flavio Percoco


___


Re: [openstack-dev] [nova] New Websockify Release

2014-02-27 Thread Nikola Đipanov
On 02/27/2014 11:22 AM, Thierry Carrez wrote:
> Solly Ross wrote:
>> We (the websockify/noVNC team) have released a new version of websockify 
>> (0.6.0).  It contains several fixes and features relating to OpenStack (a 
>> couple of bugs were fixed, and native support for the `logging` module was 
>> added).  Unfortunately, to integrate it into OpenStack, a patch is needed to 
>> the websocketproxy code in Nova 
>> (https://gist.github.com/DirectXMan12/9233369) due to a refactoring of the 
>> websockify API.  My concern is that the various distros most likely have not 
>> had time to update the package in their package repositories.  What is the 
>> appropriate timescale for updating Nova to work with the new version?
> 
> Thanks for reaching out !
> 
> I'll let the Nova devs speak, but in that specific case it might make
> sense to patch the Nova code to support both API versions. That would
> facilitate the migration to 0.6.0-style code.
> 
> At some point in the future (when 0.6.0 is everywhere) we could bump the
> dep to >=0.6.0 and remove the compatibility code.
> 

Yes - fully agreed with Thierry.

I will try to put up a patch for this, but if someone gets there before
me - even better :).

Thanks,

ndipanov



___


Re: [openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-02-27 Thread Petr Blaho
On Wed, Feb 26, 2014 at 02:17:53PM -0500, Jay Dobies wrote:
> This is a new concept to me in JSON, I've never heard of a wrapper 
> element like that being called a namespace.

I named it "namespace" in my email. It is not any kind of formal or
standard naming. "Wrapper element" is a better name for this... wrapper
element :-)
> 
> My first impression is that is looks like cruft. If there's nothing else 
> at the root of the JSON document besides the namespace, all it means is 
> that every time I go to access relevant data I have an extra layer of 
> indirection. Something like:
> 
> volume_wrapper = get_volume(url)
> volume = volume_wrapper['volume']
> 
> or
> 
> volume = get_volume(url)
> name = volume['volume']['name']

I agree with you w/r/t the indirection when accessing data, but I like the
idea that when I look at a JSON response I see what type of resource it
is. That wrapper element describes it. And I do not need to know what
request (URL, service, GET or POST...) triggered that output.
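
A small helper makes the extra indirection concrete and shows how a client could strip a single-key wrapper generically. This is a sketch for the discussion, not code from any OpenStack client:

```python
# Sketch: unwrap a single-key "namespace" wrapper from a JSON response.
def unwrap(doc):
    """Return the inner object if doc is a {'name': {...}} wrapper."""
    if isinstance(doc, dict) and len(doc) == 1:
        (inner,) = doc.values()
        if isinstance(inner, dict):
            return inner
    return doc  # already un-namespaced


# Wrapped style (Block Storage example above, values shortened).
wrapped = {"volume": {"id": "5aa119a8", "name": "vol-002"}}
# Flat style (Telemetry example above, values shortened).
flat = {"alarm_id": None, "enabled": True}

print(unwrap(wrapped)["name"])   # → vol-002
print(unwrap(flat)["enabled"])   # → True
```

The catch, of course, is that a heuristic like this cannot distinguish a wrapper from a legitimate single-field resource, which is exactly why a project-wide convention matters.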

> 
> If we ever foresee an aggregate API, I can see some value in it. For 
> instance, a single call that aggregates a volume with some relevant 
> metrics from ceilometer. In that case, I could see leaving both distinct 
> data sets separate at the root with some form of namespace rather than 
> attempting to merge the data.
> 
> Even in that case, I think it'd be up to the aggregate API to introduce 
> that.
> 
> Looking at api.openstack.org, there doesn't appear to be any high level 
> resource get that would aggregate the different subcollections.
> 
> For instance, {tenant_id}/volumes stuffs everything inside of an element 
> called "volumes". {tenant_id}/types stuffs everything inside of an 
> element called volume_types. If a call to {tenant_id} aggregated both of 
> those, then I can see leaving the namespace in on the single ID look ups 
> for consistency (even if it's redundant). However, the API doesn't 
> appear to support that, so just looking at the examples given it looks 
> like an added layer of depth that carries no extra information and makes 
> using the returned result a bit awkward IMO.
> 

I did not think about aggregate APIs before... interesting idea...
> 
> On 02/26/2014 01:38 PM, Petr Blaho wrote:
> > Hi,
> >
> > I am wondering what is the OpenStack way of returning json from
> > apiclient.
> >
> > I have got 2 different JSON response examples from 
> > http://api.openstack.org/:
> >
> > json output with namespace:
> > {
> >"volume":
> >{
> >  "status":"available",
> >  "availability_zone":"nova",
> >  "id":"5aa119a8-d25b-45a7-8d1b-88e127885635",
> >  "name":"vol-002",
> >  "volume_type":"None",
> >  "metadata":{
> >"contents":"not junk"
> >  }
> >}
> > }
> > (example for GET 'v2/{tenant_id}/volumes/{volume_id}' of Block Storage API 
> > v2.0 taken from
> > http://api.openstack.org/api-ref-blockstorage.html [most values omitted])
> >
> > json output without namespace:
> > {
> >"alarm_actions": [
> >"http://site:8000/alarm";
> >  ],
> >  "alarm_id": null,
> >  "combination_rule": null,
> >  "description": "An alarm",
> >  "enabled": true,
> >  "type": "threshold",
> >  "user_id": "c96c887c216949acbdfbd8b494863567"
> > }
> > (example for GET 'v2/alarms/{alarm_id}' of Telemetry API v2.0 taken from
> > http://api.openstack.org/api-ref-telemetry.html [most values omitted])
> >
> > Tuskar API now uses "without namespace" variant.
> >
> > By looking at the API docs at http://api.openstack.org/ I can say that
> > projects use both ways, although what I would describe as a "nicer API"
> > uses the namespaced variant.
> >
> > So, returning to my question, does OpenStack have some rules about what
> > format of JSON (namespaced or not) APIs should return?
> >
> 
> ___

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jiří Stránský

On 26.2.2014 20:43, Jay Dobies wrote:

I'd say it's still an open question whether we'll want to give better UX
than that ^^ and at what cost (this is very much tied to the benefits
and drawbacks of various solutions we discussed in December [1]). All in
all it's not as bad as i expected it to be back then [1]. The fact that
we keep Tuskar API as a layer in front of Heat means that CLI user
doesn't care about calling merge.py and creating Heat stack manually,
which is great.


I agree that it's great that Heat is abstracted away. I also agree that
it's not as bad as I too expected it to be.

But generally speaking, I think it's not an ideal user experience. A few
things jump out at me:

* We currently have glance, nova, and tuskar represented. We'll likely
need something for ceilometer as well for gathering metrics and
configuring notifications (I assume the notifications will fall under
that, but come with me on it).

That's a lot for an end user to comprehend and remember, which concerns
me for both adoption and long term usage. Even in the interim when a
user remembers nova is related to node stuff, doing a --help on nova is
huge.


+1



That's going to put a lot of stress on our ability to document our
prescribed path. It will be tricky for us to keep track of the relevant
commands and still point to the other project client documentation so as
to not duplicate it all.


+1



* Even at this level, it exposes the underlying guts. There are calls to
nova baremetal listed in there, but eventually those will turn into
ironic calls. It doesn't give us a ton of flexibility in terms of
underlying technology if that knowledge bubbles up to the end user that way.

* This is a good view into what third-party integrators are going to
face if they choose to skip our UIs and go directly to the REST APIs.


I like the notion of OpenStackClient. I'll talk ideals for a second. If
we had a standard framework and each project provided a command
abstraction that plugged in, we could pick and choose what we included
under the Tuskar umbrella. Advanced users with particular needs could go
directly to the project clients if needed.

I think this could go beyond usefulness for Tuskar as well. On a
previous project, I wrote a pluggable client framework, allowing the end
user to add their own commands that put a custom spin on what data was
returned or how it was rendered. That's a level between being locked
into what we decide the UX should be and having to go directly to the
REST APIs themselves.
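The pluggable client idea described above can be sketched with a tiny command registry. Everything here (class and command names) is illustrative, not OSC's actual plugin API, which uses setuptools entry points instead:

```python
class CommandRegistry:
    """Tiny sketch of a pluggable CLI command framework. Projects
    register named commands; a distribution (say, a Tuskar umbrella
    CLI) chooses which ones to expose. All names are illustrative."""

    def __init__(self):
        self._commands = {}

    def register(self, name):
        # Used as a decorator: @registry.register("node list")
        def wrapper(func):
            self._commands[name] = func
            return func
        return wrapper

    def commands(self):
        return sorted(self._commands)

    def run(self, name, *args):
        if name not in self._commands:
            raise SystemExit("unknown command: %s" % name)
        return self._commands[name](*args)


registry = CommandRegistry()


@registry.register("node list")
def node_list():
    # Stand-in for a call to the undercloud Nova/Ironic API.
    return ["node-1", "node-2"]
```

OpenStackClient achieves the same effect with setuptools entry points, so a third party can add commands just by installing a package.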

That said, I know that's a huge undertaking to get OpenStack in general
to buy into. I'll leave it more that I think it is a lesser UX (not even
saying bad, just not great) to have so much for the end user to digest
to attempt to even play with it. I'm more of the mentality of a unified
TripleO CLI that would be catered towards handling TripleO stuff. Short
of OpenStackClient, I realize I'm not exactly in the majority here, but
figured it didn't hurt to spell out my opinion :)


Yeah, I think having a unified TripleO CLI would be a great boost for the
CLI user experience, and it would solve the problems we pointed out.
It's another thing that we'd have to commit to maintaining, but I hope CLI
UX is enough of a priority that it should be fine to spend the dev time there.



Thanks

Jirka



Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-27 Thread Eugene Nikanorov
>
> I see IP address sharing as user intent, not an implementation detail.
> Same backend could be not only the only obstacle here.
>
> The backend is not exposed anyhow by the API, by the way.
>
> When you create root object with flavor - you really can't control to
> which driver it will be scheduled.
>
> So even if there is driver that is somehow (how?) will allow same IP on
> different backends, user just will not be able to create 2 vips that share
> IP address.
>
>
>
> Eugene, is your point that the logical model addresses the capability for
> IP sharing but that it can't be scheduled correctly?
>
That's one of concerns, correct.

   That is just not so simple. If you create the vip and the pool, one
> way or another it is a ready configuration that needs to be deployed, so
> the driver chooses the backend. Then you need to add objects to this
> configuration, by, say, adding a vip with the same IP on a different port.
>
> I don't understand the issue described here.
>
Again, it's about working with the proper provider when creating/updating
the resource.
The user has no control over it, other than referencing the provider in an
indirect way, say by working with an object that is attached to the root
object.


>
> Currently there is no way you can specify this through the API.
>
> You can specify same IP address and another tcp port, but that call will
> just fail.
>
> Correct, as I have described, the current implementation allocates a
> neutron-port on the first VIP hence the second VIP will fail.
>
> This is an implementation detail, we can discuss how to address. In the
> logical model I have removed the reference to the neutron port and noted
> this for further discussion.
>
Well, it may be an implementation detail, or it may be a part of the
logical model. Port is a logical abstraction; I don't see why it should
necessarily be a detail here. Anyway, I'd like to see suggestions on how
to address that. So far, all the ways to address it would introduce these
'impl details' that we're trying to get rid of.

 The API will not let the user control drivers; that's one of the reasons
> why it's not possible from a design standpoint.
>
>  I do not see how this relates to controlling drivers. It is the driver
> implementation, the user should not need to control it.
>
That was about scheduling. The user will not have control over what
backend technology a newly created resource will use, neither the
provider/driver nor the particular physical backend.
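The scheduling concern above can be made concrete with a toy model (entirely illustrative, not Neutron code):

```python
import random


class ToyScheduler:
    """Toy model of the concern above: each root object is scheduled to
    a backend independently, so a second VIP reusing an IP address may
    land on a different backend, where it has to fail."""

    def __init__(self, backends):
        self.backends = backends
        self.ip_owner = {}  # IP -> backend holding its neutron port

    def create_vip(self, ip, port, backend=None):
        # Normally driver/flavor scheduling picks the backend; the
        # user cannot control it.
        if backend is None:
            backend = random.choice(self.backends)
        owner = self.ip_owner.setdefault(ip, backend)
        if owner != backend:
            raise RuntimeError("IP %s is already realized on %s; it "
                               "cannot be shared across backends"
                               % (ip, owner))
        return {"ip": ip, "port": port, "backend": backend}
```

With random scheduling, two VIPs that are meant to share an IP address only succeed if they happen to land on the same backend, which is exactly the problem being discussed.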


>
> Youcef, can we chat over IRC? I think we could clarify a lot more than
> over the ML.
>
>
>
> Thanks,
>
> Eugene.
>
>
>


Re: [openstack-dev] [nova] [glance] [heat] bug 1203680 - fix requires doc

2014-02-27 Thread Sean Dague
On 02/27/2014 12:52 AM, Mike Spreitzer wrote:
> Dean Troyer  wrote on 02/26/2014 03:28:04 PM:
> 
>> On Wed, Feb 26, 2014 at 1:09 PM, Mike Spreitzer 
> wrote:
>>> Thanks for the further updates.  I have just one question about
>>> those.  One way to do both unit testing and system (integration)
>>> testing is to: git clone your favorite project to make a working
>>> local repo somewhere on your testing machine, edit and commit there,
>>> then use DevStack to create a running system in /opt/stack using
>>> your modified project in place of the copy at git.openstack.org.
>>
>> Yes, that is similar to what Sean mentioned, the difference being
>> that his repos are actually in VirtualBox Shared Folders so the
>> Linux VM has access to them. He works on them natively on his laptop
>> so the VM doesn't need his desktop toolset installed.
> 
> Sounds like he does unit testing in the VM and editing in the host
> laptop.  In other words, the things we are documenting (testing) are all
> done in the VM.

No, actually I usually do unit testing on laptops / desktops. There is
no reason to take VM overhead for unit tests when I've got a perfectly
working local env that can run tox. :)

The point is that a ton of code is fixable / testable in the small. Then
expose it to a full environment once you've gotten it into solid shape.

>>>  I think the most direct approach would be to generalize the title of
>>> https://wiki.openstack.org/wiki/Gerrit_Workflow#Unit_Tests_Only and
>>> generalize the introductory remark to include a reference to
>>> https://wiki.openstack.org/wiki/Testing#Indirect_Approach.  Does
>>> this make sense to you? 
>>
>> Sure.  My edits were prompted by the loss of some information useful
>> to making a choice between the two and tweaking the phrasing and
> sections.
>>
>> There are a lot of ways to do this, enumerating them is beyond the
>> scope of that doc IMHO.  I think having the basic components/
>> requirements available should be enough, but then I also have my
>> workflow figured out so i probably am still missing something.
> 
> Sure, documenting all the possible tropes and tweaks (such as editing
> shared files on another machine) is not needed.  I think documenting the
> basic ideas is helpful to newbies.
> 
>>> As far as my experience goes, the fix to 1203680 would also cover
>>> 1203723: the only thing I find lacking for nova unit testing in
>>> Ubuntu is libmysqlclient-dev, which is among the testing
>>> requirements of glance (see DevStack's files/apts/glance). 
>>>
>>> As far as I can tell, Sean Dague is saying the fix to 1203680 is
>>> wrong because it binds unit testing to DevStack and he thinks unit
>>> testing should be independent of DevStack.
>>
>> Let me summarize: "Unit testing is not a default requirement for
>> DevStack.
> 
> But do we stipulate the converse: shall DevStack be a requirement for
> unit testing?  That is what I think Sean is objecting to.  But I am
> getting weary and wary of speaking so much on his behalf.  I wonder if
> anyone else wants to chime in on this.
> 
> If the only people who speak up think that DevStack should be a
> requirement for unit testing then this should be documented, and I am
> likely to do so.

Right, that's what I object to. Because it's not a requirement for unit
testing. A requirement for unit testing is just that you run an
operating system that has basic tools, and that you can install pip and
tox == 1.6 in that environment.

Individual unit tests have additional requirements, but they don't
require a big running devstack environment. That's *one* way you might
decide to build such a thing, but it's really not the best way.

I think what you probably actually want is some way to let/make tox
actually install the -dev packages that are needed to run the unit tests
in questions. That's something I think monty has thought about in the
past. And I agree, that would probably be handy.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jiří Stránský

On 27.2.2014 10:16, Dougal Matthews wrote:

On 26/02/14 13:34, Jiří Stránský wrote:

get Tuskar UI a bit closer back to the fact
that Undercloud is OpenStack too, and keep the name "Flavors" instead of
changing it to "Node Profiles". I wonder if that would be unwelcome to
the Tuskar UI UX, though.


I can imagine we would constantly explain it by saying its the same as
flavors because people will be familiar with this. So maybe being
consistent would help.


Yeah. This is a double-edged axe but I'm leaning towards naming flavors
consistently a bit more too. Here's an attempt at a +/- summary:



"node profile"

+ a bit more descriptive for a newcomer imho

- CLI renaming/reimplementing mentioned before

- inconsistency dangers lurking in the deep - e.g. if an error message
bubbles up from Nova all the way to the user, it might mention flavors,
and if we talk 99% of the time about node profiles, then the user will not
know what is meant in the error message. I'm a bit worried that we'll keep
hitting things like this in the long run.


- developers still often call them "flavors", because that's what Nova 
calls them



"flavor"

+ fits with the rest, does not cause communication or development problems

- not so descriptive (but I agree with you - OpenStack admins will
already be familiar with what "flavor" means in the overcloud, and I think
they'd be able to infer what it means in the undercloud)



I'm CCing Jarda as this affects his work quite a lot and I think he'll
have some insight+opinion (he's on PTO now so it might take some time
before he gets to this).






One other thing, I've looked at my own examples so far, so I didn't
really think about this but seeing it written down, I've realised the
way we specify the roles in the Tuskar CLI really bugs me.

  --roles 1=1 \
  --roles 2=1

I know what this means, but even reading it now I think: One equals
one? Two equals one? What? I think we should probably change the arg
name and also refer to roles by name.

  --role-count compute=10

and a shorter option

  -R compute=10


Yeah this is https://bugs.launchpad.net/tuskar/+bug/1281051

I agree with you on the solution (rename long option, support lookup by 
names, add a short option).
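The proposed option can be sketched with argparse; the option names come from the thread, everything else here (parser, helper) is hypothetical:

```python
import argparse


def role_count(value):
    # Parse "compute=10" into ("compute", 10); raise a usage error
    # otherwise. Hypothetical helper for the proposed --role-count.
    name, sep, count = value.partition("=")
    if not sep or not name or not count.isdigit():
        raise argparse.ArgumentTypeError(
            "expected ROLE=COUNT, got %r" % value)
    return name, int(count)


parser = argparse.ArgumentParser(prog="tuskar")
parser.add_argument("-R", "--role-count", dest="role_counts",
                    metavar="ROLE=COUNT", type=role_count,
                    action="append", default=[],
                    help="number of nodes for a role, e.g. compute=10")

args = parser.parse_args(["--role-count", "compute=10", "-R", "control=3"])
```

This gives both the long and short forms, refers to roles by name, and produces a clear error message for malformed values.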



Thanks

Jirka



Re: [openstack-dev] How do I mark one option as deprecating another one ?

2014-02-27 Thread Davanum Srinivas
Phil,

Correct. We don't have this functionality in oslo.config. Please
create a new feature/enhancement request against oslo

thanks,
dims
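Until oslo.config grows such a hook, the if/elif pattern from Phil's mail has to live in application code. A minimal sketch, with a plain dict standing in for the parsed config (names like `resolve_tenant` are illustrative):

```python
import warnings


def resolve_tenant(conf):
    """Prefer tenant_id; fall back to tenant_name with a deprecation
    warning. `conf` is a plain dict standing in for parsed options,
    since oslo.config can't report which option name was actually
    used."""
    if conf.get("tenant_id"):
        return "id", conf["tenant_id"]
    if conf.get("tenant_name"):
        warnings.warn("tenant_name is deprecated, use tenant_id instead",
                      DeprecationWarning)
        return "name", conf["tenant_name"]
    raise ValueError("no tenant configured")
```

The key point is that the two options carry different values, so the caller needs to know which branch was taken, which is exactly what `deprecated_opts` cannot express today.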

On Thu, Feb 27, 2014 at 4:47 AM, Day, Phil  wrote:
> Hi Denis,
>
>
>
> Thanks for the pointer, but I looked at that and my understanding is that
> it only allows me to retrieve a value by an old name, but doesn't let me
> know that the old name has been used.  So if all I wanted to do was change
> the name/group of the config value it would be fine.  But in my case I need
> to be able to implement:
>
> If new_value_defined:
>
>   do_something
>
> else if old_value_defined:
>
>   warn_about_deprecation
>
>   do_something_else
>
>
>
> Specifically I want to replace tenant_name based authentication with
> tenant_id - so I need to know which has been specified.
>
>
>
> Phil
>
>
>
>
>
> From: Denis Makogon [mailto:dmako...@mirantis.com]
> Sent: 26 February 2014 14:31
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] How do I mark one option as deprecating another
> one ?
>
>
>
> Here what oslo.config documentation says.
>
>
> Represents a Deprecated option. Here's how you can use it
>
> oldopts = [cfg.DeprecatedOpt('oldfoo', group='oldgroup'),
>cfg.DeprecatedOpt('oldfoo2', group='oldgroup2')]
> cfg.CONF.register_group(cfg.OptGroup('blaa'))
> cfg.CONF.register_opt(cfg.StrOpt('foo', deprecated_opts=oldopts),
>group='blaa')
>
> Multi-value options will return all new and deprecated
> options.  For single options, if the new option is present
> ("[blaa]/foo" above) it will override any deprecated options
> present.  If the new option is not present and multiple
> deprecated options are present, the option corresponding to
> the first element of deprecated_opts will be chosen.
>
> I hope that it'll help you.
>
>
> Best regards,
>
> Denis Makogon.
>
>
>
> On Wed, Feb 26, 2014 at 4:17 PM, Day, Phil  wrote:
>
> Hi Folks,
>
>
>
> I could do with some pointers on config value deprecation.
>
>
>
> All of the examples in the code and documentation seem to deal with  the
> case of "old_opt" being replaced by "new_opt" but still returning the same
> value
>
> Here using deprecated_name and  / or deprecated_opts in the definition of
> "new_opt" lets me still get the value (and log a warning) if the config
> still uses "old_opt"
>
>
>
> However my use case is different because while I want to deprecate old_opt,
> new_opt doesn't take the same value and I need to do different things
> depending on which is specified, i.e. if old_opt is specified and new_opt
> isn't I still want to do some processing specific to old_opt and log a
> deprecation warning.
>
>
>
> Clearly I can code this up as a special case at the point where I look for
> the options - but I was wondering if there is some clever magic in
> oslo.config that lets me declare this as part of the option definition ?
>
>
>
>
>
>
>
> As a second point,  I thought that using a deprecated option automatically
> logged a warning, but in the latest Devstack wait_soft_reboot_seconds is
> defined as:
>
>
>
> cfg.IntOpt('wait_soft_reboot_seconds',
>
>default=120,
>
>help='Number of seconds to wait for instance to shut down
> after'
>
> ' soft reboot request is made. We fall back to hard
> reboot'
>
> ' if instance does not shutdown within this window.',
>
>deprecated_name='libvirt_wait_soft_reboot_seconds',
>
>deprecated_group='DEFAULT'),
>
>
>
>
>
>
>
> but if I include the following in nova.conf
>
>
>
> libvirt_wait_soft_reboot_seconds = 20
>
>
>
>
>
> I can see the new value of 20 being used, but there is no warning logged
> that I'm using a deprecated name ?
>
>
>
> Thanks
>
> Phil
>
>
>
>
>
>
>
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com



Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jiří Stránský

On 26.2.2014 21:34, Dean Troyer wrote:

On Wed, Feb 26, 2014 at 1:43 PM, Jay Dobies  wrote:


I like the notion of OpenStackClient. I'll talk ideals for a second. If we
had a standard framework and each project provided a command abstraction
that plugged in, we could pick and choose what we included under the Tuskar
umbrella. Advanced users with particular needs could go directly to the
project clients if needed.



This is a thing.  https://github.com/dtroyer/python-oscplugin is an example
of a stand-alone OSC plugin that only needs to be installed to be
recognized.  FWIW, four of the built-in API command sets in OSC also are
loaded in this manner even though they are in the OSC repo so they
represent additional examples of writing plugins.


Thanks for bringing this up. It looks really interesting. Is it possible 
to not only add commands to the OpenStackClient, but also purposefully 
blacklist some from appearing? As Jay mentioned in his reply, we don't 
make use of many commands in the undercloud and having the others appear 
in --help is just confusing. E.g. there's a lot of commands in Nova CLI 
that we'll not use and we have no use for Cinder CLI at all.


Even if we decided to build a TripleO CLI separate from OpenStackClient, I
think being able to consume this plugin API would help us. We could plug
in the particular commands we want (and rename them if we want "node
profiles" instead of "flavors") and hopefully not reimplement everything.



Thanks

Jirka



[openstack-dev] [neutron]the discussion about traffic storm protection in network virtualization environment

2014-02-27 Thread Yuzhou (C)
Hi everyone:

A traffic storm occurs when broadcast, unknown unicast, or multicast 
(BUM) packets flood the LAN, creating excessive traffic and degrading network 
performance. 
So physical switches and routers offer traffic storm protection via these approaches:
1. Storm suppression, which limits the amount of monitored traffic passing
through an Ethernet interface by setting a traffic threshold. When the
traffic threshold is exceeded, the interface discards all exceeding
traffic.
2. Storm control, which shuts down Ethernet interfaces or blocks traffic
when monitored traffic exceeds the traffic threshold. It can also make an
interface send trap or log messages when monitored traffic reaches a
certain threshold, depending on the configuration.
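Storm suppression (approach 1) boils down to per-window packet counting. A toy per-interface sketch, with illustrative threshold and window size (real switches do this in hardware):

```python
class StormSuppressor:
    """Toy model of storm suppression: count monitored (BUM) packets in
    one-second windows and drop everything above the threshold. Only a
    sketch; not how any particular switch implements it."""

    def __init__(self, threshold_pps, window_start=0.0):
        self.threshold = threshold_pps
        self.window_start = window_start
        self.count = 0

    def allow(self, now):
        if now - self.window_start >= 1.0:
            # New one-second window: reset the packet counter.
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.threshold


s = StormSuppressor(threshold_pps=3)
decisions = [s.allow(now=0.1 * i) for i in range(5)]  # one window
```

Applying the same idea per virtual port (e.g. on the OVS/tap interface of each VM) is one way the blueprint's goal could be modeled.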
I want traffic storm protection in a network virtualization environment,
the same as in a physical network. So I registered a BP:
https://blueprints.launchpad.net/neutron/+spec/traffic-protection and
wrote a wiki page: https://wiki.openstack.org/wiki/Neutron/TrafficProtection

I would like your opinions on this subject. Specifically, how can we
avoid traffic storms and protect traffic in a network virtualization
environment? Are there other approaches?
You are welcome to share your experiences.

Thanks,

Zhou Yu





Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Doug Hellmann
On Thu, Feb 27, 2014 at 12:48 AM, Michael Davies wrote:

> Hi everyone,
>
> Over in "Ironic Land" we're looking at removing XML support from
> ironic-api (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
>
> I've been looking, but I can't seem to find an easy way to modify the
> accepted content_types.
>
> Are there any wsgi / WSME / Pecan experts out there who can point me in
> the right direction?
>

There's no support for turning off a protocol in WSME right now, but we
could add that if we really need it.

Why would we turn it off, though? The point of dropping XML support in some
of the other projects is that they use toolkits that require extra work to
support it (either coding or maintenance of code we've written elsewhere
for OpenStack). WSME supports both protocols without the API developer
having to do any extra work.
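For projects that do drop a protocol, what remains is plain content negotiation: pick a serializer from the Accept header and return 406 for anything else. A framework-free, JSON-only sketch of the idea (not WSME's actual internals):

```python
import json

# Only JSON is wired up, mirroring the plan to drop XML; a real
# framework (WSME/Pecan) performs this negotiation for you.
SERIALIZERS = {
    "application/json": lambda data: json.dumps(data, sort_keys=True),
}


def negotiate(accept_header, data):
    """Return (content_type, body) for the first acceptable media type,
    or raise to signal 406 Not Acceptable. Simplified: no q-values."""
    for offered in accept_header.split(","):
        mime = offered.split(";")[0].strip().lower()
        if mime == "*/*":
            mime = "application/json"
        if mime in SERIALIZERS:
            return mime, SERIALIZERS[mime](data)
    raise LookupError("406 Not Acceptable: %s" % accept_header)
```

Supporting XML would just mean adding another entry to the serializer table, which is also where the documentation and testing cost Sean mentions comes in.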

Doug



>
> Thanks in advance,
>
> Michael...
> --
> Michael Davies   mich...@the-davies.net
> Rackspace Australia
>
>
>
>


Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Sylvain Bauza

Le 27/02/2014 14:13, Doug Hellmann a écrit :
WSME supports both protocols without the API developer having to do 
any extra work.


Doug



Just one comment: in WSGI middlewares that we create for Pecan, we still
need to handle both XML and JSON by hand. I agree this is not WSME
related, but it's still something that needs to be considered if needed.


-Sylvain


Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Doug Hellmann
On Thu, Feb 27, 2014 at 2:00 AM, Noorul Islam K M  wrote:

> Michael Davies  writes:
>
> > Hi everyone,
> >
> > Over in "Ironic Land" we're looking at removing XML support from
> ironic-api
> > (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
> >
> > I've been looking, but I can't seem to find an easy way to modify the
> > accepted content_types.
> >
> > Are there any wsgi / WSME / Pecan experts out there who can point me in
> the
> > right direction?
> >
>
> Also in Solum we have a use case wherein we would like to have
> pecan+wsme accept content-type application/x-yaml.
>
> It will be great if this can be made configurable.
>

What do you want to have happen with the YAML?

Do you want to receive and return YAML to a bunch of API calls? We could
add YAML protocol support to WSME to make that happen.

Do you want a couple of controller methods to be given a YAML parse result,
rather than WSME objects? You could write an expose() decorator for Pecan
in Solum. This would be a more appropriate approach if you just have one or
two methods of the API that need YAML support.

Or do you want the YAML text passed in to you directly, maybe to be
uploaded in a single controller method? You should be able to have that by
skipping WSME and just using Pecan's expose() decorator with an appropriate
content type, then parsing the body yourself.

Doug



>
> Regards,
> Noorul
>
>


Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Sean Dague
On 02/27/2014 08:13 AM, Doug Hellmann wrote:
> 
> 
> 
> On Thu, Feb 27, 2014 at 12:48 AM, Michael Davies  > wrote:
> 
> Hi everyone,
> 
> Over in "Ironic Land" we're looking at removing XML support from
> ironic-api (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
> 
> I've been looking, but I can't seem to find an easy way to modify
> the accepted content_types.
> 
> Are there any wsgi / WSME / Pecan experts out there who can point me
> in the right direction?
> 
> 
> There's no support for turning off a protocol in WSME right now, but we
> could add that if we really need it.
> 
> Why would we turn it off, though? The point of dropping XML support in
> some of the other projects is that they use toolkits that require extra
> work to support it (either coding or maintenance of code we've written
> elsewhere for OpenStack). WSME supports both protocols without the API
> developer having to do any extra work.

Because if an interface is exported to the user, then it needs to be
both Documented and Tested. So that's double the cost on the validation
front, and the documentation front.

Exporting an API isn't set and forget. Especially with the semantic
differences between JSON and XML. And if someone doesn't feel the XML
created by WSME is semantically useful enough to expose to their users,
they shouldn't be forced to by the interface.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Dean Troyer
On Thu, Feb 27, 2014 at 6:36 AM, Jiří Stránský  wrote:

> Thanks for bringing this up. It looks really interesting. Is it possible
> to not only add commands to the OpenStackClient, but also purposefully
> blacklist some from appearing? As Jay mentioned in his reply, we don't make
> use of many commands in the undercloud and having the others appear in
> --help is just confusing. E.g. there's a lot of commands in Nova CLI that
> we'll not use and we have no use for Cinder CLI at all.
>

I didn't consider removing other APIs' commands as a use case; it might
work, but whether that is a Good Thing is TBD from our/the user's point of view.

If you are trying to hide the rest of the APIs then this is not the CLI
that you seek.  OSC doesn't assume that a user has access to only one or
one type of cloud, so disabling/removing commands simply based on package
installation is not a good user experience.  Everything after that requires
authentication to get server API versions and a service catalog to know
what a specific cloud offers.

Even if we decided to build TripleO CLI separate from OpenStackClient, i
> think being able to consume this plugin API would help us. We could plug in
> the particular commands we want (and rename them if we want "node profiles"
> instead of "flavors") and hopefully not reimplement everything.
>

The command names themselves are easy to change as they are just the key in
the entry point key=value pair.  Everything else about them, including the
help text would be code changes.

It sounds like you may be wanting something parallel to OSC, ie, a
Tuskar-specific rebranding of it with a overlapping-but-different command
set.

dt

-- 

Dean Troyer
dtro...@gmail.com


[openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-27 Thread Sean Dague
This patch is coming through the gate this morning -
https://review.openstack.org/#/c/71996/

The point being to actually make devstack stop when it hits an error,
instead of only stopping once errors compound to the point where there is
no moving forward and some service call fails. This should *dramatically*
improve the experience of figuring out a failure in the gate, because
where it fails should be the issue. (It also made us figure out some
wonkiness with stdout buffering that was making debugging difficult.)

This works on all the content that devstack gates against. However,
there are a ton of other paths in devstack, including vendor plugins,
which I'm sure aren't clean enough to run under -o errexit. So if all of
a sudden things start failing, this may be why. Fortunately you'll be
pointed at the exact point of the fail.

Ian got through 3 fixes for RHEL support prior to it that make it run
clean there, so we have some confidence that's covered. Other distros
may hit issues as well.

If you have a patch to fix fallout from this, please just get on
#openstack-qa, and we'll try to get those devstack patches worked
through the system quickly.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Doug Hellmann
On Thu, Feb 27, 2014 at 8:28 AM, Sean Dague  wrote:

> On 02/27/2014 08:13 AM, Doug Hellmann wrote:
> >
> >
> >
> > On Thu, Feb 27, 2014 at 12:48 AM, Michael Davies  > > wrote:
> >
> > Hi everyone,
> >
> > Over in "Ironic Land" we're looking at removing XML support from
> > ironic-api (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
> >
> > I've been looking, but I can't seem to find an easy way to modify
> > the accepted content_types.
> >
> > Are there any wsgi / WSME / Pecan experts out there who can point me
> > in the right direction?
> >
> >
> > There's no support for turning off a protocol in WSME right now, but we
> > could add that if we really need it.
> >
> > Why would we turn it off, though? The point of dropping XML support in
> > some of the other projects is that they use toolkits that require extra
> > work to support it (either coding or maintenance of code we've written
> > elsewhere for OpenStack). WSME supports both protocols without the API
> > developer having to do any extra work.
>
> Because if an interface is exported to the user, then it needs to be
> both Documented and Tested. So that's double the cost on the validation
> front, and the documentation front.
>
> Exporting an API isn't set and forget. Especially with the semantic
> differences between JSON and XML. And if someone doesn't feel the XML
> created by WSME is semantically useful enough to expose to their users,
> they shouldn't be forced to by the interface.
>

I guess I can see that.

Bugs and blueprints can be filed at https://launchpad.net/wsme. It's not
likely to happen in the next few weeks, but it shouldn't be difficult to
provide some sort of switch to turn off XML support.

Doug



>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
>
>


Re: [openstack-dev] [Neutron][IPv6] BP:Store both IPv6 LLA and GUA address on router interface port

2014-02-27 Thread Robert Li (baoli)
Hi Xuhan,

Thank you for your summary. see comments inline.

--Robert

On 2/27/14 12:49 AM, "Xuhan Peng"  wrote:

As a follow-up action of the IPv6 sub-team meeting [1], I created a new blueprint
[2] to store both the IPv6 LLA and GUA address on the router interface port.

Here is what it's about:

Based on the two modes (ipv6-ra-mode and ipv6-address-mode) design [3], RA
can be sent either from the openstack-controlled dnsmasq or from existing
devices.

RA from dnsmasq: the gateway IP that dnsmasq binds to should be a link-local
address (LLA) according to [4]. This means we need to pass the LLA of the
created router internal port (i.e. qr-) to the dnsmasq spawned by the
openstack dhcp agent. Meanwhile, we need to assign a GUA to the created
router port so that traffic from the external network can be routed back
using the GUA of the router port as the next hop into the internal subnet.
Therefore, we will need some change to the current logic to leverage both
the LLA and GUA on the router port.

[Robert]: in this case, a LLA address is automatically created based on the 
gateway port's MAC address (EUI64 format). If it's determined that the gateway 
port is enabled with IPv6 (due to the two modes), then an RA rule can be 
installed based on the gateway port's automatic LLA.
if a service VM is running on the same subnet that supports IPv6 (either by RA 
or DHCPv6), then the service VM is attached to a neutron port on the same 
subnet (the gateway port). In this case, the automatic LLA on that port can be 
used to install the RA Rule. This is actually the same as in the dnsmasq case: 
use the gateway port's automatic LLA.
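For illustration, the EUI-64 derivation Robert refers to (LLA computed from the port's MAC address) can be sketched in a few lines of Python. This is a hypothetical illustration of RFC 4291 Appendix A, not neutron's actual code; the fa:16:3e MAC below is just an example value:

```python
import ipaddress

def mac_to_lla(mac):
    """Derive an IPv6 link-local address from a MAC via EUI-64 (RFC 4291)."""
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit
    # Insert ff:fe between the OUI and the device identifier
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    # Prepend the fe80::/64 link-local prefix
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

print(mac_to_lla("fa:16:3e:00:11:22"))  # → fe80::f816:3eff:fe00:1122
```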


RA from an existing device on the same link which is not controlled by 
openstack: dnsmasq will not send RA in this case. The RA is sent from the 
subnet's gateway address, which should also be an LLA according to [4]. 
Allowing the subnet's gateway IP to be an LLA is enough in this case. Current 
code works when force_gateway_on_subnet = False.

[Robert]
If it's a provider network, the gateway already exists. I believe that the 
behavior of the --gateway option in the subnet API is to indicate the gateway's 
true IP address and install a default route. In the IPv6 case, however, due to 
the existence of RA, the gateway doesn't have to be provided. In this case, a 
neutron gateway port doesn't have to be created, either. Installing an RA rule 
to prevent RA from a malicious source should be done explicitly. A couple of 
methods may be considered. For example, an option such as --allow-ra can be 
introduced in the subnet API, or the security group rule can be enhanced to 
allow specification of the message type so that an RA rule can be incorporated.

In any case, I don't believe that the gateway behavior should be modified. In 
addition, I don't think that this functionality (IPv6 RA rule) has to be 
provided right now, but can be introduced when it's completely sorted out.

The above is just my two cents.

thanks.





RA from router gateway port (i.e. qg-): the LLA of the gateway port (qg-) 
should be set as the gateway of the tenant subnet to get the RA from it. This 
could potentially be calculated by [5], or by other methods in the future 
considering the privacy extension. However, this will make the tenant network 
gateway port qr- useless. Therefore, we also need a code change to the current 
router interface attach logic.

If you have any comments on this, please let me know.

[1] 
http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-02-25-14.02.html
[2] https://blueprints.launchpad.net/neutron/+spec/ipv6-lla-gua-router-interface
[3] https://blueprints.launchpad.net/neutron/+spec/ipv6-two-attributes
[4] http://tools.ietf.org/html/rfc4861
[5] https://review.openstack.org/#/c/56184/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-27 Thread Ziad Sawalha

On Feb 26, 2014, at 11:47 AM, Joe Gordon  wrote:

> On Wed, Feb 26, 2014 at 9:05 AM, David Ripton  wrote:
>> On 02/26/2014 11:40 AM, Joe Gordon wrote:
>> 
>>> This is missing the point about manually enforcing style. If you pass
>>> the 'pep8' job there is no need to change any style.
>> 
>> 
>> In a perfect world, yes.
> 
> While there are exceptions to this,  this just sounds like being extra
> nit-picky.  

The current reality is that reviewers do vote and comment based on the rules in
the hacking guide. In terms of style, whether my patch gets approved or not 
depends
on who reviews it.

I think that clarifying this in the hacking guide and including it in our test 
suites
would remove the personal bias from it. I don’t see folks complaining that 
Jenkins is
being nit-picky as a reviewer.

> The important aspect here is the mindset, if we don't gate
> on style rule x, then we shouldn't waste valuable human review time
> and patch revisions on trying to manually enforce it (And yes there
> are exceptions to this).
> 
>> 
>> In the real world, there are several things in PEP8 or our project
>> guidelines that the tools don't enforce perfectly.  I think it's fine for
>> human reviewers to point such things out.  (And then submit a patch to
>> hacking to avoid the need to do so in the future.)
> 
> To clarify, we don't rely on long term human enforcement for something
> that a computer can do.
> 

Precisely. I’m offering to include this as a patch to hacking or even a separate
test that we can add (without gating initially).

>> 
>> --
>> David Ripton   Red Hat   drip...@redhat.com
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jay Dobies

Yeah. This is a double bladed axe but i'm leaning towards naming flavors
consistently a bit more too. Here's an attempt at +/- summary:


"node profile"

+ a bit more descriptive for a newcomer imho

- CLI renaming/reimplementing mentioned before

- inconsistency dangers lurking in the deep - e.g. if an error message
bubbles up from Nova all the way to the user, it might mention flavors,
and if we talk 99% of time about node profiles, then user will not know
what is meant in the error message. I'm a bit worried that we'll keep
hitting things like this in the long run.


While I agree with all of your points, this is the one that resonates 
with me the most. We won't be able to be 100% consistent with a rename 
(exceptions are a great example). It's already irritating for the user to 
have to see an error; having to then see it in terms they aren't 
familiar with is an added headache.



- developers still often call them "flavors", because that's what Nova
calls them


"flavor"

+ fits with the rest, does not cause communication or development problems

- not so descriptive (but i agree with you - OpenStack admins will
already be familiar what "flavor" means in the overcloud, and i think
they'd be able to infer what it means in the undercloud)


I'm CCing Jarda as this affects his work quite a lot and i think he'll
have some insight+opinion (he's on PTO now so it might take some time
before he gets to this).





One other thing, I've looked at my own examples so far, so I didn't
really think about this but seeing it written down, I've realised the
way we specify the roles in the Tuskar CLI really bugs me.

  --roles 1=1 \
  --roles 2=1

I know what this means, but even reading it now I think: One equals
one? Two equals one? What? I think we should probably change the arg
name and also refer to roles by name.

  --role-count compute=10

and a shorter option

  -R compute=10


Yeah this is https://bugs.launchpad.net/tuskar/+bug/1281051

I agree with you on the solution (rename long option, support lookup by
names, add a short option).
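For what it's worth, parsing the proposed --role-count NAME=COUNT form is straightforward. A hypothetical sketch (this is not actual tuskarclient code; the function name is made up):

```python
def parse_role_counts(pairs):
    """Parse repeated --role-count NAME=COUNT options into a dict."""
    counts = {}
    for pair in pairs:
        name, sep, count = pair.partition("=")
        if not sep or not count.isdigit():
            raise ValueError("expected NAME=COUNT, got %r" % pair)
        counts[name] = int(count)
    return counts

# e.g. tuskar ... --role-count compute=10 --role-count controller=3
print(parse_role_counts(["compute=10", "controller=3"]))
```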


Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] How to change jenkins to run tempest to run tests against the gantt scheduler

2014-02-27 Thread Dugger, Donald D
For a significant transition period gantt (the separated out scheduler) will be 
an optional component that requires extra configuration options when setting up 
devstack to use it.  As such we need to create the Jenkins jobs to do this 
configuration when running the tempest tests.  Note that given that gantt will 
first be a nova scheduler replacement we need to run all of the nova tests so 
it would be good to not have to re-write all of those tests.

I think I know how to do this but I wanted to check to see if I'm doing the 
right thing first.  Turns out the `devstack-vm-gate.sh' script completely sets 
up the devstack environment so the obvious way is to change that script.  What 
I did was:

1) Modify `devstack-vm-gate.sh' to check the environment variable 
`SCHEDULER_TYPE' and, if and only if it is set to `gantt', have the script 
configure the devstack environment to utilize gantt.
2) Modify the tempest jobs in `devstack-gate.yaml' to set the environment 
variable `SCHEDULER_TYPE' to the job name, which will only be `gantt' when 
testing the gantt project.
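A sketch of the conditional configuration step 1 describes (the function and output here are illustrative, not the actual devstack-vm-gate.sh change):

```shell
# Reconfigure only when SCHEDULER_TYPE is set to exactly "gantt";
# otherwise keep the default nova scheduler setup.
configure_scheduler() {
    if [ "${SCHEDULER_TYPE:-}" = "gantt" ]; then
        echo "scheduler=gantt"
    else
        echo "scheduler=nova"
    fi
}

SCHEDULER_TYPE=gantt
configure_scheduler   # prints scheduler=gantt
```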

Patches are attached.

Does this seem like the right way to go or am I missing an easier way to do 
things?

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



config.patch
Description: config.patch


devstack.patch
Description: devstack.patch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] [smart-scenario-args]

2014-02-27 Thread Sergey Skripnick


Hello,


Problem: what about deployment-specific parts?
Template string in config? %imageid% or similar?
Image name regex, rather than image name? So it can work with multiple  
deployments, e.g. ^cirros$



so we have a few solutions for today: function, vars, and "special args".


FUNCTION

args: {"image_id": {"$func": "img_by_reg", "$args": ["ubuntu.*"]}}

Flexible but configuration looks complex.

VARS

vars : {
$image1 : {"$func": "img_by_reg", "$args": ["ubuntu.*"]},
$image2: {"$func": "img_by_reg", "$args": ["centos.*"]}
}
args: {
   image_id: $image1,
   alt_image_id: $image2
}

This may be an addition to the first solution, but personally to me it
looks like overkill.

SPECIAL ARGS

args: {"image_re": "ubuntu.*"}

Very simple configuration, but less flexible than the others. IMO all three may
be implemented.

I vote for "special args", and IMO functions may be implemented too.
Please feel free to propose other solutions.
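A sketch of how the "special args" option could resolve an image regex against a deployment's image list (all names here are hypothetical, not actual rally code):

```python
import re

def resolve_special_args(args, images):
    """Replace the special 'image_re' arg with a concrete image_id."""
    resolved = dict(args)
    pattern = resolved.pop("image_re", None)
    if pattern is not None:
        matches = [img["id"] for img in images if re.search(pattern, img["name"])]
        if not matches:
            raise ValueError("no image matches %r" % pattern)
        resolved["image_id"] = matches[0]  # pick the first match
    return resolved

images = [{"id": "1", "name": "cirros-0.3"}, {"id": "2", "name": "ubuntu-12.04"}]
print(resolve_special_args({"image_re": "ubuntu.*", "flavor_id": "42"}, images))
```

This keeps the scenario config portable across deployments, since only the name pattern is fixed in the config.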

--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Adrian Otto
Doug,

On Feb 27, 2014, at 7:23 AM, Doug Hellmann <doug.hellm...@dreamhost.com> wrote:

On Thu, Feb 27, 2014 at 2:00 AM, Noorul Islam K M <noo...@noorul.com> wrote:
Michael Davies <mich...@the-davies.net> writes:

> Hi everyone,
>
> Over in "Ironic Land" we're looking at removing XML support from ironic-api
> (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
>
> I've been looking, but I can't seem to find an easy way to modify the
> accepted content_types.
>
> Are there any wsgi / WSME / Pecan experts out there who can point me in the
> right direction?
>

Also in Solum we have a use case where in we would like to have
pecan+wsme accept content-type application/x-yaml.

It will be great if this can be made configurable.
What do you want to have happen with the YAML?

We want to parse it into a graph of objects. It will be similar in nature to 
processing a Heat template. We will treat the resulting data structure as a 
model for generation of Heat templates, after some logic is applied to each 
object in the graph, and the relations between them.

Do you want to receive and return YAML to a bunch of API calls? We could add 
YAML protocol support to WSME to make that happen.

We might have a total of 3 API calls that take application/x-yaml as inputs in 
addition to the same calls also accepting application/json.

Do you want a couple of controller methods to be given a YAML parse result, 
rather than WSME objects? You could write an expose() decorator for Pecan in 
Solum. This would be a more appropriate approach if you just have one or two 
methods of the API that need YAML support.

Interesting. Can you think of existing implementations we could reference that 
are using this approach? What would you consider the pro/con balance for this?

Or do you want the YAML text passed in to you directly, maybe to be uploaded in 
a single controller method? You should be able to have that by skipping WSME 
and just using Pecan's expose() decorator with an appropriate content type, 
then parsing the body yourself.

We probably don't need to do our own parsing of the YAML text, as we expect 
that to be regular YAML. We probably don't need to parse it incrementally as a 
stream because even our most complex YAML content is not likely to be more than 
a few hundred nodes, and should fit into memory. We do, however need to be able 
to inspect and walk through a resulting data structure that represents the YAML 
content.

Would it be sensible to have a translation shim of some sort that interprets 
the YAML into JSON and feeds that into the existing code, so that we would be 
confident there was no difference between parsing the YAML versus parsing the 
equivalent JSON?
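The shim being described could be as small as the following sketch (assuming PyYAML is available; the function name is made up for illustration):

```python
import json
import yaml  # PyYAML -- an assumption; any YAML parser would do

def yaml_body_to_json(body):
    """Parse a YAML request body and re-serialize it as the JSON text
    the existing JSON-only pipeline already understands."""
    return json.dumps(yaml.safe_load(body))

print(yaml_body_to_json("name: app1\ncomponents: [web, db]\n"))
```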

Thanks,

Adrian


Doug



Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Dan Smith
> So I think once we start returning different response codes, or
> completely different structures (such as the tasks change will be), it
> doesn't matter if we make the change in effect by invoking /v2 prefix
> or /v3 prefix or we look for a header. Its a major api revision. I
> don't think we should pretend otherwise.

I think you're missing my point. The current /v2 and /v3 versions are
about 10,000 revisions apart. I'm saying we have the client declare
support for a new version every time we need to add something new, not
to say that they support what we currently have as /v3. So it would be
something like:

 version=v2: Current thing
 version=v3: added simple task return for server create
 version=v4: added the event extension
 version=v5: added a new event for cinder to the event extension
 version=v6: changed 200 to 202 for three volumes calls
 ...etc

Obviously using a header instead of a url doesn't help if the gap
between v2 and v3 is what it is today.
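A minimal server-side sketch of what "the client declares support for a new version" could look like with a header. The header name and version bounds here are invented for illustration and are not Nova code:

```python
def negotiate_version(headers, server_min=2, server_max=6):
    """Return the API version a request should be served with.

    Clients declare the version they support in a header
    (X-Compute-API-Version is a made-up name); absent the header,
    they get the baseline behaviour.
    """
    requested = headers.get("X-Compute-API-Version")
    if requested is None:
        return server_min  # old clients keep the current /v2 semantics
    version = int(requested.lstrip("v"))
    if version < server_min or version > server_max:
        raise ValueError("unsupported version %d" % version)
    return version

print(negotiate_version({"X-Compute-API-Version": "v5"}))  # → 5
```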

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly meeting

2014-02-27 Thread Eugene Nikanorov
Thanks everyone for joining, meeting logs:

 Meeting ended Thu Feb 27 15:00:42 2014 UTC.  Information about
MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
 Minutes:
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/neutron_lbaas.2014-02-27-14.00.html
 Minutes (text):
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/neutron_lbaas.2014-02-27-14.00.txt
 Log:
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/neutron_lbaas.2014-02-27-14.00.log.html

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Ladislav Smola

Hello,

I think if we will use Openstack CLI, it has to be something like this 
https://github.com/dtroyer/python-oscplugin.

Otherwise we are not Openstack on Openstack.

Btw. abstracting it all into one big CLI will just be more confusing when 
people debug issues, so it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?
..

Call 'neutron net-create' and you just know.

Btw. who would actually hire a sysadmin who starts using a CLI with no idea 
what he is doing? They need to know what each service does, how to use 
each one correctly, and how to debug it when something is wrong.


For flavors, just use "flavors"; we call them flavors in the code too. They 
just have a nicer name in the UI.


Kind regards,
Ladislav


On 02/26/2014 02:34 PM, Jiří Stránský wrote:

Hello,

i went through the CLI way of deploying overcloud, so if you're 
interested what's the workflow, here it is:


https://gist.github.com/jistr/9228638


I'd say it's still an open question whether we'll want to give better 
UX than that ^^ and at what cost (this is very much tied to the 
benefits and drawbacks of various solutions we discussed in December 
[1]). All in all it's not as bad as i expected it to be back then [1]. 
The fact that we keep Tuskar API as a layer in front of Heat means 
that CLI user doesn't care about calling merge.py and creating Heat 
stack manually, which is great.


In general the CLI workflow is on the same conceptual level as Tuskar 
UI, so that's fine, we just need to use more commands than "tuskar".


There's one naming mismatch though -- Tuskar UI doesn't use Horizon's 
Flavor management, but implements its own and calls it Node Profiles. 
I'm a bit hesitant to do the same thing on CLI -- the most obvious 
option would be to make python-tuskarclient depend on 
python-novaclient and use a renamed Flavor management CLI. But that's 
wrong and high cost given that it's only about naming :)


The above issue is once again a manifestation of the fact that Tuskar 
UI, despite its name, is not a UI for Tuskar, it is UI for a bit more 
services. If this becomes a greater problem, or if we want a top-notch 
CLI experience despite reimplementing bits that can be already done 
(just not in a super-friendly way), we could start thinking about 
building something like OpenStackClient CLI [2], but directed 
specifically at Undercloud/Tuskar needs and using undercloud naming.


Another option would be to get Tuskar UI a bit closer back to the fact 
that Undercloud is OpenStack too, and keep the name "Flavors" instead 
of changing it to "Node Profiles". I wonder if that would be unwelcome 
to the Tuskar UI UX, though.



Jirka


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021919.html

[2] https://wiki.openstack.org/wiki/OpenStackClient

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-27 Thread Brian Haley
On 02/26/2014 05:18 PM, Carl Baldwin wrote:
> Brian,
> 
> In shell it is correct to return 0 for success and non-zero for failure.

But, at least in lib/neutron, there is a check like this:

if is_service_enabled neutron; then
...
fi

Which will fail with a 0 return code and miss some config.  It seems like these
functions should return TRUE (1) in this case based on their naming scheme.

There were recent changes to these *enabled() functions a few days ago which is
part of the reason I'm asking as well, don't know if something got overlooked.

-Brian


> On Feb 26, 2014 10:54 AM, "Brian Haley" wrote:
> 
> While trying to track down why Jenkins was handing out -1's in a Neutron 
> patch,
> I was seeing errors in the devstack tests it runs.  When I dug deeper it 
> looked
> like it wasn't properly determining that Neutron was enabled - 
> ENABLED_SERVICES
> had multiple "q-*" entries, but 'is_service_enabled neutron' was 
> returning 0.
> 
> I boiled it down to a simple reproducer based on the many is_*_enabled()
> functions:
> 
> #!/usr/bin/env bash
> set -x
> 
> function is_foo_enabled {
> [[ ,${ENABLED_SERVICES} =~ ,"f-" ]] && return 0
> return 1
> }
> 
> ENABLED_SERVICES=f-svc
> 
> is_foo_enabled
> 
> $ ./is_foo_enabled.sh
> + ENABLED_SERVICES=f-svc
> + is_foo_enabled
> + [[ ,f-svc =~ ,f- ]]
> + return 0
> 
> So either the return values need to be swapped, or && changed to ||.  I 
> haven't
> tested is_service_enabled() but all the is_*_enabled() functions are wrong
> at least.
> 
> Is anyone else seeing this besides me?  And/or is someone already working 
> on
> fixing it?  Couldn't find a bug for it.
> 
> Thanks,
> 
> -Brian
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-27 Thread Daniel P. Berrange
On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:
> This patch is coming through the gate this morning -
> https://review.openstack.org/#/c/71996/
> 
> The point being to actually make devstack stop when it hits an error,
> instead of only once these compound to the point where there is no
> moving forward and some service call fails. This should *dramatically*
> improve the experience of figuring out a failure in the gate, because
> where it fails should be the issue. (It also made us figure out some
> wonkiness with stdout buffering, that was making debug difficult).
> 
> This works on all the content that devstack gates against. However,
> there are a ton of other paths in devstack, including vendor plugins,
> which I'm sure aren't clean enough to run under -o errexit. So if all of
> a sudden things start failing, this may be why. Fortunately you'll be
> pointed at the exact point of the fail.

This is awesome! It just helped me solve the problem I've been having
for the past 2 days with devstack on one of my machines.

I was hitting a problem where I'd set DEST and DATA_DIR but not set
SERVICE_DIR in local.conf, resulting in mysterious failures later
on since $SERVICE_DIR had not been created properly.

  https://bugs.launchpad.net/devstack/+bug/1285720

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Chris Friesen

On 02/27/2014 08:43 AM, Dan Smith wrote:

So I think once we start returning different response codes, or
completely different structures (such as the tasks change will be), it
doesn't matter if we make the change in effect by invoking /v2 prefix
or /v3 prefix or we look for a header. Its a major api revision. I
don't think we should pretend otherwise.


I think you're missing my point. The current /v2 and /v3 versions are
about 10,000 revisions apart. I'm saying we have the client declare
support for a new version every time we need to add something new, not
to say that they support what we currently have as /v3. So it would be
something like:

  version=v2: Current thing
  version=v3: added simple task return for server create
  version=v4: added the event extension
  version=v5: added a new event for cinder to the event extension
  version=v6: changed 200 to 202 for three volumes calls
  ...etc


Sure, but that's still functionally equivalent to using the /v2 prefix. 
 So we could chuck the current /v3 code and do:


/v2: Current thing
/v3: invalid, not supported
/v4: added simple task return for server create
/v5: added the event extension
/v6: added a new event for cinder to the event extension

and it would be equivalent.

And arguably, anything that is a pure "add" could get away with either a 
minor version or not touching the version at all.  Only "remove" or 
"modify" should have the potential to break a properly-written application.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Tzu-Mainn Chen
> Hello,
> 
> I think if we will use Openstack CLI, it has to be something like this
> https://github.com/dtroyer/python-oscplugin.
> Otherwise we are not Openstack on Openstack.
> 
> Btw. abstracting it all to one big CLI will be just more confusing when
> people will debug issues. So it would
> have to be done very good.
> 
> E.g calling 'openstack-client net-create' fails.
> Where do you find error log?
> Are you using nova-networking or Neutron?
> ..
> 
> Calli 'neutron net-create' and you just know.
> 
> Btw. who would actually hire a sysadmin that will start to use CLI and
> have no
> idea what is he doing? They need to know what each service do, how to
> correctly
> use them and how do debug it when something is wrong.
> 
> 
> For flavors just use flavors, we call them flavors in code too. It has
> just nicer face in UI.

Actually, don't we called them node_profiles in the UI code?  Personally,
I'd much prefer that we call them flavors in the code.

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Devananda van der Veen
On Thu, Feb 27, 2014 at 5:28 AM, Sean Dague  wrote:

> On 02/27/2014 08:13 AM, Doug Hellmann wrote:
> >
> >
> >
> > On Thu, Feb 27, 2014 at 12:48 AM, Michael Davies wrote:
> > Hi everyone,
> >
> > Over in "Ironic Land" we're looking at removing XML support from
> > ironic-api (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
> >
> > I've been looking, but I can't seem to find an easy way to modify
> > the accepted content_types.
> >
> > Are there any wsgi / WSME / Pecan experts out there who can point me
> > in the right direction?
> >
> >
> > There's no support for turning off a protocol in WSME right now, but we
> > could add that if we really need it.
> >
> > Why would we turn it off, though? The point of dropping XML support in
> > some of the other projects is that they use toolkits that require extra
> > work to support it (either coding or maintenance of code we've written
> > elsewhere for OpenStack). WSME supports both protocols without the API
> > developer having to do any extra work.
>
> Because if an interface is exported to the user, then it needs to be
> both Documented and Tested. So that's double the cost on the validation
> front, and the documentation front.
>
> Exporting an API isn't set and forget. Especially with the semantic
> differences between JSON and XML. And if someone doesn't feel the XML
> created by WSME is semantically useful enough to expose to their users,
> they shouldn't be forced to by the interface.
>
>
Aside from our lack of doc and test coverage for XML support and the desire
to hide an untested and undocumented API (which I think are valid reasons
to disable it) there is an actual technical problem.

Ironic's API relies on HTTP PATCH to modify resources, which Pecan/WSME
does not abstract for us. We're using the python jsonpatch library to parse
these requests. I'm not aware of a similar python library for XML support.
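To make the PATCH semantics concrete: Ironic's API takes RFC 6902 JSON Patch documents and applies them with the jsonpatch library. The snippet below is a stdlib-only illustration of a small subset of that behavior (the `node` document is a made-up example, and real code should use jsonpatch rather than this sketch):

```python
import copy

def apply_json_patch(doc, patch):
    """Apply a tiny subset of RFC 6902 (add/replace/remove on dict paths)."""
    result = copy.deepcopy(doc)  # patches must not mutate the original
    for op in patch:
        parts = [p for p in op["path"].split("/") if p]
        target = result
        for key in parts[:-1]:
            target = target[key]
        last = parts[-1]
        if op["op"] in ("add", "replace"):
            target[last] = op["value"]
        elif op["op"] == "remove":
            del target[last]
    return result

node = {"driver": "fake", "properties": {"cpus": "4"}}
print(apply_json_patch(node, [
    {"op": "replace", "path": "/properties/cpus", "value": "8"},
]))
```

There is no comparably standard "XML Patch" format with a well-known Python implementation, which is the technical problem mentioned above.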

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help oslo-incubator team == Help your project - some advices

2014-02-27 Thread Boris Pavlovic
Hi Stackers,

Intro:
It goes without saying that we have the same code in different projects: RPC,
DB, python clients, and a lot of other parts. As OpenStack grew quickly,
instead of building libraries and starting new projects from scratch on top of
them, we built new projects as forks of existing ones, removing unnecessary
code. Over time, the "same" code in different projects evolved in different
directions, and now it is no longer the same. At this moment we have a lot of
code that does the same thing in different projects. To sum up, we are spending
too much time supporting and improving the "same" code in different projects.

The ideal way to resolve this problem would be to create a library for every
logical piece of code. But it is well known that every change in a library's
API produces a new version of it. The oslo-incubator initiative was started to
avoid a versioning armageddon:
1) put all code to oslo incubator
2) reuse it in all projects
3) make a libs
4) switch to libs
5) ...
6) PROFIT!!

To put it in a nutshell:
1) It is okay if you are not able to sync code from Oslo without any changes
in your project. (Being free to change the API is the main idea of
oslo-incubator.)
2) Avoid randomly syncing code from Oslo if you are not related to it (because
it probably has changes in its API or internal parts that could break
something, and that won't be caught even by the gates).
3) If you would like to sync and help the Oslo team, please contact the
maintainers (they will help you):
https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS
4) PTLs, could you please give a high priority to reviews of syncs from oslo
(this is the major bottleneck for the oslo team).
5) Do not revert patches just because they don't pass your gates, like here:
https://review.openstack.org/#/c/76718/

So Doug could you please merge our revert to revert:
https://review.openstack.org/#/c/76836/

And we will sync it to nova:
https://review.openstack.org/#/c/76880/


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Ana Krivokapic


On 02/27/2014 04:41 PM, Tzu-Mainn Chen wrote:

Hello,

I think if we will use Openstack CLI, it has to be something like this
https://github.com/dtroyer/python-oscplugin.
Otherwise we are not Openstack on Openstack.

Btw. abstracting it all to one big CLI will be just more confusing when
people will debug issues. So it would
have to be done very good.

E.g calling 'openstack-client net-create' fails.
Where do you find error log?
Are you using nova-networking or Neutron?
..

Calli 'neutron net-create' and you just know.

Btw. who would actually hire a sysadmin that will start to use CLI and
have no
idea what is he doing? They need to know what each service do, how to
correctly
use them and how do debug it when something is wrong.


For flavors just use flavors, we call them flavors in code too. It has
just nicer face in UI.

Actually, don't we called them node_profiles in the UI code?


We do: 
https://github.com/openstack/tuskar-ui/tree/master/tuskar_ui/infrastructure/node_profiles

  Personally,
I'd much prefer that we call them flavors in the code.
I agree, keeping the name "flavor" makes perfect sense here, IMO. The 
only benefit of using "node profile" seems to be that it is more 
descriptive. However, as already mentioned, admins are well used to the 
name "flavor". It seems to me that this change introduces more confusion 
than it serves to clear things up. In other words, it brings more harm 
than good.




Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Regards,

Ana Krivokapic
Associate Software Engineer
OpenStack team
Red Hat Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Dan Smith
> Sure, but that's still functionally equivalent to using the /v2 prefix.
>  So we could chuck the current /v3 code and do:
> 
> /v2: Current thing
> /v3: invalid, not supported
> /v4: added simple task return for server create
> /v5: added the event extension
> /v6: added a new event for cinder to the event extension
> 
> and it would be equivalent.

Yep, sure. This seems more likely to confuse people or clients to me,
but if that's how we decided to do it, then that's fine. The approach to
_what_ we version is my concern.

> And arguably, anything that is a pure "add" could get away with either a
> minor version or not touching the version at all.  Only "remove" or
> "modify" should have the potential to break a properly-written application.

Totally agree!
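A minimal sketch of what that convention implies for clients (illustrative only, not Nova's actual code): removes/modifies bump the major version, pure adds bump the minor, so a client and server are compatible when the majors match and the server's minor is at least the client's.

```python
def parse_version(version):
    """Split a "MAJOR.MINOR" version string into integers."""
    major, minor = version.split(".")
    return int(major), int(minor)


def compatible(client_version, server_version):
    # Removes/modifies bump the major (incompatible); pure adds bump
    # the minor, so an older client still works against a newer server.
    c_major, c_minor = parse_version(client_version)
    s_major, s_minor = parse_version(server_version)
    return c_major == s_major and c_minor <= s_minor
```

Under this scheme a properly-written 2.1 client keeps working against a 2.3 server, while a 3.x server is explicitly incompatible.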

--Dan



[openstack-dev] [oslo][productivity] Help oslo-incubator team == Help your project - some advice

2014-02-27 Thread Boris Pavlovic
Hi Stackers,

Intro:
It goes without saying that we have the same code in different
projects: RPC, DB, Python clients, and a lot of other parts. As OpenStack
grew quickly it was hard to evolve shared libraries, so instead of building
libs and starting new projects from a blank page on top of them, we built
new projects as forks of existing ones, removing the unnecessary code. Over
time the "same" code in different projects evolved in different directions,
and now it is no longer the same. At this point we have a lot of code that
does the same things in different projects, and we are spending too much
time supporting and improving that "same" code in every project.

The ideal way to resolve this problem is to create a lib for every logical
piece of code. But it's well known that every change in the API of a lib
produces a new version of it. The oslo-incubator initiative was started to
avoid versioning armageddon:
1) put all code to oslo incubator
2) reuse it in all projects
3) make a libs
4) switch to libs
5) ...
6) PROFIT!!

To put it in a nutshell:
1) It is okay if you are not able to sync code from Oslo without any
changes in your project. (That's the main idea of oslo-incubator: be free
to change the API.)
2) Avoid randomly syncing code from Oslo if you are not involved with it
(it probably has changes in the API or internal parts that could break
something that won't be caught even by the gates).
3) If you would like to sync and help the Oslo team, please contact the
maintainers (they will help you):
https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS
4) PTLs, please give high priority to reviews of syncs from Oslo (that is
the major bottleneck for the Oslo team).
5) Do not revert patches just because they don't pass your gates, like here:
https://review.openstack.org/#/c/76718/

So, Doug, could you please merge our revert of the revert:
https://review.openstack.org/#/c/76836/

And we will sync it to nova:
https://review.openstack.org/#/c/76880/


Best regards,
Boris Pavlovic


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Dougal Matthews

On 27/02/14 15:08, Ladislav Smola wrote:

Hello,

I think if we use an OpenStack CLI, it has to be something like this:
https://github.com/dtroyer/python-oscplugin.
Otherwise we are not OpenStack on OpenStack.

Btw. abstracting it all into one big CLI will just make it more confusing
when people debug issues, so it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?


I would at least expect the debug log of the tuskar client to show what
calls it's making on other clients, so following this trail wouldn't be
too hard.



Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Henry Nash
So a couple of things about this:

1) Today (and this is also true for Grizzly and Havana), the user can choose what LDAP 
attribute should be returned as the user or group ID.  So it is NOT a safe 
assumption today (ignoring any support for domain-specific LDAP support) that 
the format of a user or group ID is a 32 char UUID.  Quite often, I would 
think, an email address would be chosen by a cloud provider as the LDAP id 
field; by default we use the CN.  Since we really don't want to ever change the 
user or group ID we have given out from keystone for a particular entity, this 
means we need to update nova (or anything else) that has made a 32 char 
assumption.
2) In order to support the ability for service providers to have the 
identity part of keystone be satisfied by a customer LDAP (i.e. for a given 
domain, have a specific LDAP), then, as has been stated, we need to 
subsequently, when handed an API call with just a user or group ID, be able to 
"route" this call to the correct LDAP.  Trying to stay true to the openstack 
design principles, we had planned to encode a domain identifier into the user 
or group ID - i.e. distribute the data to where it is needed; in other words, 
the user and group ID provide all the info we need to route the call to the 
right place. Two implementations come to mind:
2a) Simply concatenate the user/group ID with the domain_id, plus some 
separator and make a composite public facing ID.  e.g. 
"user_entity_id@@UUID_of_domain".  This would have a technical maximum size of 
64+2+64 (i.e. 130), although in reality since we control domain_id and we know 
it is always 32 char UUID - in fact the max size would be 98.  This has the 
problem of increasing the size of the public facing field beyond the existing 
64.  This is what we had planned for IceHouse - and is currently in review.
2b) Use a similar concatenation idea as 2a), but limit the total size to the 
existing 64. Since we control domain_id, we could (internally and not visibly 
to the outside world) create a domain_index that is used in place of 
domain_id in the publicly visible field, to minimize the number of chars it 
requires.  So the public facing composite ID might be something like <54 chars 
of user entity_id>@@<8 chars of domain_index>.  There is a chance, of course, 
that the 54 char restriction might be problematic for LDAP users... but I 
doubt it.  We would make that a restriction and if it really became a problem, 
we could consider a field size increase at a later release.
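As a rough illustration of option 2b (hypothetical helper names; only the "@@" separator, the 8-char domain index, and the 64-char cap come from the description above):

```python
SEPARATOR = "@@"
MAX_PUBLIC_ID_LEN = 64
DOMAIN_INDEX_LEN = 8
# 64 minus len("@@") minus 8 leaves 54 chars for the local entity id.
MAX_ENTITY_ID_LEN = MAX_PUBLIC_ID_LEN - len(SEPARATOR) - DOMAIN_INDEX_LEN


def compose_public_id(entity_id, domain_index):
    if len(entity_id) > MAX_ENTITY_ID_LEN:
        raise ValueError("local entity id longer than %d chars"
                         % MAX_ENTITY_ID_LEN)
    return entity_id + SEPARATOR + domain_index


def split_public_id(public_id):
    # rpartition splits at the last separator, so an "@@" appearing
    # inside the local entity id itself does no harm.
    entity_id, _, domain_index = public_id.rpartition(SEPARATOR)
    return entity_id, domain_index
```

Option 2a is the same composition without the length check, which is how its public IDs can grow to 98 chars.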
3) The alternative to 2a and 2b is to have, as had been suggested, an internal 
mapping table that maps external facing entity_ids to a domain plus local 
entity ID.  The problem with this idea is that:
- This could become a very big table (you will essentially have an entry for 
every user in every corporate LDAP that has accessed a given openstack)
- Since most LDAPs are RO, we will never see deletes...so we won't know when 
(without some kind of garbage collection) to cull entries
- It obviously does not solve 1) - since existing LDAP support can break the 32 
char limit - and so it isn't true that this mapping table causes all public 
facing entity IDs to be simple 32 char UUIDs
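For comparison, the mapping-table alternative in 3) boils down to something like the sketch below (not Keystone's actual schema; a deterministic UUID is used here so that mapping the same LDAP entry twice is idempotent rather than growing the table):

```python
import uuid

# public 32-char id -> (domain_id, local entity id from the LDAP)
_ID_MAPPING = {}


def map_to_public_id(domain_id, local_id):
    # uuid5 is deterministic: the same (domain, local id) pair always
    # yields the same public id, so repeated lookups add no new rows.
    public_id = uuid.uuid5(uuid.NAMESPACE_OID,
                           domain_id + ":" + local_id).hex
    _ID_MAPPING[public_id] = (domain_id, local_id)
    return public_id


def resolve_public_id(public_id):
    return _ID_MAPPING[public_id]
```

The downsides listed above still apply: the table gains a row for every user ever seen, and a read-only LDAP produces no delete events to trigger cleanup.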

From a delivery into IceHouse point of view any of the above are possible, 
since the actual mapping used is relatively small part of the patch.  I 
personally favor 2b), since it is simple, has "less moving parts" and does not 
change any external facing requirements for storage of user and group IDs 
(above and beyond what is true today).

Henry
On 27 Feb 2014, at 03:46, Adam Young  wrote:

> On 02/26/2014 08:25 AM, Dolph Mathews wrote:
>> 
>> On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes  wrote:
>> On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
>> > For purposes of supporting multiple backends for Identity (multiple
>> > LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
>> > increase the maximum size of the USER_ID field from an upper limit of
>> > 64 to an upper limit of 255. This change would not impact any
>> > currently assigned USER_IDs (they would remain in the old simple UUID
>> > format), however, new USER_IDs would be increased to include the IDP
>> > identifier (e.g. USER_ID@@IDP_IDENTIFIER).
>> 
>> -1
>> 
>> I think a better solution would be to have a simple translation table
>> only in Keystone that would store this longer identifier (for folks
>> using federation and/or LDAP) along with the Keystone user UUID that is
>> used in foreign key relations and other mapping tables through Keystone
>> and other projects.
>> 
>> Morgan and I talked this suggestion through last night and agreed it's 
>> probably the best approach, and has the benefit of zero impact on other 
>> services, which is something we're obviously trying to avoid. I imagine it 
>> could be as simple as a user_id to domain_id lookup table. All we really 
>> care about is "given a globally unique user ID, which identity backend is 
>> the user from?"
>> 
>> On the 

Re: [openstack-dev] WSME / Pecan and only supporting JSON?

2014-02-27 Thread Doug Hellmann
On Thu, Feb 27, 2014 at 9:33 AM, Adrian Otto wrote:

>  Doug,
>
>  On Feb 27, 2014, at 7:23 AM, Doug Hellmann 
> wrote:
>
>   On Thu, Feb 27, 2014 at 2:00 AM, Noorul Islam K M wrote:
>
>> Michael Davies  writes:
>>
>> > Hi everyone,
>> >
>> > Over in "Ironic Land" we're looking at removing XML support from
>> ironic-api
>> > (i.e. https://bugs.launchpad.net/ironic/+bug/1271317)
>> >
>> > I've been looking, but I can't seem to find an easy way to modify the
>> > accepted content_types.
>> >
>> > Are there any wsgi / WSME / Pecan experts out there who can point me in
>> the
>> > right direction?
>> >
>>
>>  Also in Solum we have a use case wherein we would like to have
>> pecan+wsme accept content-type application/x-yaml.
>>
>> It will be great if this can be made configurable.
>>
>  What do you want to have happen with the YAML?
>
>
>  We want to parse it into a graph of objects. It will be similar in nature
> to processing a Heat template. We will treat the resulting data structure
> as a model for generation of Heat templates, after some logic is applied to
> each object in the graph, and the relations between them.
>
>Do you want to receive and return YAML to a bunch of API calls? We
> could add YAML protocol support to WSME to make that happen.
>
>
>  We might have a total of 3 API calls that take application/x-yaml as
> inputs in addition to the same calls also accepting application/json.
>
>Do you want a couple of controller methods to be given a YAML parse
> result, rather than WSME objects? You could write an expose() decorator for
> Pecan in Solum. This would be a more appropriate approach if you just have
> one or two methods of the API that need YAML support.
>
>
>  Interesting. Can you think of existing implementations we could reference
> that are using this approach? What would you consider the pro/con balance
> for this?
>

I'm not aware of any OpenStack APIs that take YAML input now, but that
doesn't mean they don't exist.



>
>Or do you want the YAML text passed in to you directly, maybe to be
> uploaded in a single controller method? You should be able to have that by
> skipping WSME and just using Pecan's expose() decorator with an appropriate
> content type, then parsing the body yourself.
>
>
>  We probably don't need to do our own parsing of the YAML text, as we
> expect that to be regular YAML. We probably don't need to parse it
> incrementally as a stream because even our most complex YAML content is not
> likely to be more than a few hundred nodes, and should fit into memory. We
> do, however need to be able to inspect and walk through a resulting data
> structure that represents the YAML content.
>
>  Would it be sensible to to have a translation shim of some sort that
> interprets the YAML into JSON, and feeds that into the existing code so
> that we would be confident there was no difference between parsing the YAML
> versus parsing equivalent JSON?
>

Is the same controller method taking both JSON and YAML at different times,
like we support now with JSON and XML?

I'm not sure I would convert YAML text to JSON text, but converting the
YAML document to the same set of data structures represented by the JSON
would make sense.

For reference:

This is the WSME decorator that handles converting incoming data to WSME
objects:
http://git.openstack.org/cgit/stackforge/wsme/tree/wsmeext/pecan.py#n46

If you want to add YAML support to WSME, you would need to add an
appropriate call to the top like we have there for JSON and XML, and then
implement a YAML protocol plugin under
http://git.openstack.org/cgit/stackforge/wsme/tree/wsme/rest like you see
there for JSON and XML.
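Stripped to its essentials, that protocol-plugin approach is a content-type-to-parser registry, with every branch producing the same in-memory structures. A rough sketch of the idea (not WSME's actual plugin API):

```python
import json

_BODY_PARSERS = {"application/json": json.loads}


def register_parser(content_type, parser):
    """Adding a new wire format is one registration, plugin-style."""
    _BODY_PARSERS[content_type] = parser


def parse_body(content_type, body):
    try:
        parser = _BODY_PARSERS[content_type]
    except KeyError:
        raise ValueError("unsupported content type: %s" % content_type)
    return parser(body)


# With PyYAML installed, YAML support would be one more registration:
# import yaml
# register_parser("application/x-yaml", yaml.safe_load)
```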

Doug


>
>  Thanks,
>
>  Adrian
>
>
>  Doug
>
>
>
>>
>> Regards,
>> Noorul
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [TROVE] Trove replication meeting

2014-02-27 Thread Denis Makogon
Good day, DBaaS community.


Yesterday at the meeting it was mentioned that someone
(maybe from the core team) is going to schedule a hangout meeting
related to the replication implementation.

It would be better if the person who's going to schedule and organize the
meeting sends a notification to openstack-dev, to let everyone interested
know the time and the link to the hangout chat.



Best regards,
Denis Makogon.


Re: [openstack-dev] [Mistral] std:repeat action

2014-02-27 Thread Renat Akhmerov
Thanks Manas!

This is one of the important things we need to get done within the next couple 
of weeks. Since it’s going to affect engine I think we need to wait for a 
couple of days with the implementation till we merge the changes that are being 
worked on and that also affect engine significantly.

Team, please look carefully through this etherpad and leave your comments. It's a 
pretty tricky thing and we need to figure out the best strategy for approaching 
this kind of problem. We're going to have more problems similar to this one.

Renat Akhmerov
@ Mirantis Inc.



On 25 Feb 2014, at 10:07, manas kelshikar  wrote:

> Hi everyone, 
> 
> I have put down my thoughts about the standard repeat action blueprint.
> 
> https://blueprints.launchpad.net/mistral/+spec/mistral-std-repeat-action
> 
> I have added a link to an etherpad document which explores a few alternatives to 
> the approach. I have explored details of how the std:repeat action should 
> behave as defined in the blueprint. Further there are some thoughts on how it 
> could be designed to remove ambiguity in the chaining.
> 
> Please take a look.
> 
> Thanks,
> Manas
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-02-27 Thread Matthew Treinish
On Tue, Feb 25, 2014 at 07:46:23PM -0600, Matt Riedemann wrote:
> 
> 
> On 2/12/2014 1:57 PM, Matthew Treinish wrote:
> >On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:
> >>
> >>
> >>On 1/17/2014 8:34 AM, Matthew Treinish wrote:
> >>>On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:
> On 01/16/2014 10:56 PM, Matthew Treinish wrote:
> >Hi everyone,
> >
> >With some recent changes made to Tempest compatibility with nosetests is 
> >going
> >away. We've started using newer features that nose just doesn't support. 
> >One
> >example of this is that we've started using testscenarios and we're 
> >planning to
> >do this in more places moving forward.
> >
> >So at Icehouse-3 I'm planning to push the patch out to remove nosetests 
> >from the
> >requirements list and all the workarounds and references to nose will be 
> >pulled
> >out of the tree. Tempest will also start raising an unsupported 
> >exception when
> >you try to run it with nose so that there isn't any confusion on this 
> >moving
> >forward. We talked about doing this at summit briefly and I've brought 
> >it up a
> >couple of times before, but I believe it is time to do this now. I feel 
> >for
> >tempest to move forward we need to do this now so that there isn't any 
> >ambiguity
> >as we add even more features and new types of testing.
> I'm with you up to here.
> >
> >Now, this will have implications for people running tempest with python 
> >2.6
> >since up until now we've set nosetests. There is a workaround for getting
> >tempest to run with python 2.6 and testr see:
> >
> >https://review.openstack.org/#/c/59007/1/README.rst
> >
> >but essentially this means that when nose is marked as unsupported on 
> >tempest
> >python 2.6 will also be unsupported by Tempest. (which honestly it 
> >basically has
> >been for while now just we've gone without making it official)
> The way we handle different runners/os can be categorized as "tested
> in gate", "unsupported" (should work, possibly some hacks needed),
> and "hostile". At present, both nose and py2.6 I would say are in
> the unsupported category. The title of this message and the content
> up to here says we are moving nose to the hostile category. With
> only 2 months to feature freeze I see no justification in moving
> py2.6 to the hostile category. I don't see what new testing features
> scheduled for the next two months will be enabled by saying that
> tempest cannot and will not run on 2.6. It has been agreed I think
> by all projects that py2.6 will be dropped in J. It is OK that py2.6
> will require some hacks to work and if in the next few months it
> needs a few more then that is ok. If I am missing another connection
> between the py2.6 and nose issues, please explain.
> 
> >>>
> >>>So honestly we're already at this point in tempest. Nose really just 
> >>>doesn't
> >>>work with tempest, and we're adding more features to tempest, your 
> >>>negative test
> >>>generator being one of them, that interfere further with nose. I've seen 
> >>>several
> >>
> >>I disagree here, my team is running Tempest API, CLI and scenario
> >>tests every day with nose on RHEL 6 with minimal issues.  I had to
> >>workaround the negative test discovery by simply sed'ing that out of
> >>the tests before running it, but that's acceptable to me until we
> >>can start testing on RHEL 7.  Otherwise I'm completely OK with
> >>saying py26 isn't really supported and isn't used in the gate, and
> >>it's a buyer beware situation to make it work, which includes
> >>pushing up trivial patches to make it work (which I did a few of
> >>last week, and they were small syntax changes or usages of
> >>testtools).
> >>
> >>I don't understand how the core projects can be running unit tests
> >>in the gate on py26 but our functional integration project is going
> >>to actively go out and make it harder to run Tempest with py26, that
> >>sucks.
> >>
> >>If we really want to move the test project away from py26, let's
> >>make the concerted effort to get the core projects to move with it.
> >
> >So as I said before the python 2.6 story for tempest remains the same after 
> >this
> >change. The only thing that we'll be doing is actively preventing nose from
> >working with tempest.
> >
> >>
> >>And FWIW, I tried the discover.py patch with unittest2 and
> >>testscenarios last week and either I botched it, it's not documented
> >>properly on how to apply it, or I screwed something up, but it
> >>didn't work for me, so I'm not convinced that's the workaround.
> >>
> >>What's the other option for running Tempest on py26 (keeping RHEL 6
> >>in mind)?  Using tox with testr and pip?  I'm doing this all
> >>single-node.
> >
> >Yes, that is what the discover patch is used to enable. By disabling nose 

Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-27 Thread Brian Haley
On 02/26/2014 04:23 PM, Brian Haley wrote:
> On 02/26/2014 01:36 PM, Dean Troyer wrote:
>> On Wed, Feb 26, 2014 at 11:51 AM, Brian Haley > > wrote:
>>
>> While trying to track down why Jenkins was handing out -1's in a
>> Neutron patch,
>> I was seeing errors in the devstack tests it runs.  When I dug
>> deeper it looked
>> like it wasn't properly determining that Neutron was enabled -
>> ENABLED_SERVICES
>> had multiple "q-*" entries, but 'is_service_enabled neutron' was
>> returning 0.
>>
>>
>> This is the correct return, 0 == success.

Ok, part of this is my kernel background, where true=1 like it should be :)
So there's a -EUSERERROR there.

Something is still wonky in the log below though...

> But, at least in lib/neutron, there is a check like this:
> 
> if is_service_enabled neutron; then
> ...
> fi
> 
> Which will fail with a 0 return code and miss some config.
> 
>> Can you point to a specific log example?
> 
> http://logs.openstack.org/89/70689/24/check/check-devstack-dsvm-neutron/8e137e0/console.html


> neutron.common.legacy [-] Skipping unknown group key: firewall_driver
> 2014-02-26 13:12:01.183 | ++ probe_id=ace800b2-5753-4609-b2c3-f9c87d0cc004
> 2014-02-26 13:12:01.184 | ++ echo ' ip netns exec
> qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004'
> 2014-02-26 13:12:01.184 | + probe_cmd=' ip netns exec
> qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004'
> 2014-02-26 13:12:01.185 | + [[ True = \T\r\u\e ]]
> 2014-02-26 13:12:01.185 | + check_command='while !  ip netns exec
> qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004 ping -w 1 -c 1 10.1.0.4; do sleep 
> 1;
> done'
> 2014-02-26 13:12:01.185 | + timeout 90 sh -c 'while !  ip netns exec
> qprobe-ace800b2-5753-4609-b2c3-f9c87d0cc004 ping -w 1 -c 1 10.1.0.4; do sleep 
> 1;
> done'
> 2014-02-26 13:12:01.194 | Cannot open network namespace: Permission denied
> 2014-02-26 13:12:02.201 | Cannot open network namespace: Permission denied
> 2014-02-26 13:12:03.207 | Cannot open network namespace: Permission denied
> 2014-02-26 13:12:04.213 | Cannot open network namespace: Permission denied

That call to 'ip netns exec...' should be:

sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec...

Perhaps there's something up with this test - exercises/floating_ips.sh - I'll
try running it by hand.

-Brian



Re: [openstack-dev] [Mistral] Defining term DSL

2014-02-27 Thread Manas Kelshikar
I looked at the review prior to looking at the discussion, and even I was
confused by names like DSL*. The way I see it, the DSL is largely syntactic
sugar, so it would be good to have a clear separation between the DSL and
the model. The fact that something was defined in the DSL is irrelevant once
it crosses the Mistral API border; in effect, within Mistral itself, DSLTask,
DSLAction, etc. are simply description objects, and how they were defined
does not matter to the Mistral implementation.

Each description object is a recipe for eventually executing a task. We
already see these two manifestations in the current code, i.e.
DSLTask (per Nikolay's change) and Task (
https://github.com/stackforge/mistral/blob/master/mistral/api/controllers/v1/task.py#L30
).

To me it seems like we only need to agree upon names. Here are my
suggestions -

i)
DSLTask -> Task
Task -> TaskInstance
(Similarly for workflow, action etc.)

OR

ii)
DSLTask -> TaskSpec
Task -> Task
(Similarly for workflow, action etc.)
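Suggestion ii) is the classic spec-vs-instance split; roughly (hypothetical classes, not Mistral's actual code):

```python
class TaskSpec(object):
    """Static description parsed from the DSL; no runtime state."""

    def __init__(self, name, action, parameters=None):
        self.name = name
        self.action = action
        self.parameters = parameters or {}


class Task(object):
    """A runtime instance of a spec; carries execution state."""

    def __init__(self, spec):
        self.spec = spec
        self.state = "IDLE"

    def start(self):
        self.state = "RUNNING"
```

One spec can then produce many Task instances, which matches the recipe/execution distinction described above.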



On Wed, Feb 26, 2014 at 9:31 PM, Renat Akhmerov wrote:

>
> On 26 Feb 2014, at 22:54, Dmitri Zimine  wrote:
>
> Based on the terminology from [1], it's not part of the model, but the
> language that describes the model in the file.
>
>
> Sorry, I'm having a hard time trying to understand this phrase :) What do
> you mean by "model" here? And why should DSL be a part of the model?
>
> And theoretically this may be not the only language to express the
> workflow.
>
>
> Sure, from that perspective, for example, JVM has many "DSLs": Java,
> Scala, Groovy etc.
>
> Once the file is parsed, we operate on model, not on the language.
>
>
> How does it influence using term DSL? DSL is, in fact, a user interface.
> Model is something we build inside a system to process DSL in a more
> convenient way.
>
>
> I am afraid we are breaking an abstraction when begin to call things
> DSLWorkbook or DSLWorkflow. What is the difference between Workbook and
> DSLWorkbook, and how DSL is relevant here?
>
>
> Prefix "DSL" tells that this exactly matches the structure of an object
> declared with using DSL. But, for example, a workbook in a database may
> have (and it has) a different structure better suitable for storing it in a
> relational model.
> So I'm not sure what you mean by saying "we are breaking an abstraction"
> here. What abstraction?
>
> [1] https://wiki.openstack.org/wiki/Mistral,
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-02-27 Thread Frittoli, Andrea (HP Cloud)
This is another example of achieving the same result (exclusion from a
list):
https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/tempest/tests2skip.py
https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/tempest/tests2skip.txt
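The mechanism behind such a skip list is essentially a regex blacklist applied to the collected test ids; roughly (the patterns below are hypothetical examples, not the actual tests2skip contents):

```python
import re

SKIP_PATTERNS = [
    r"tempest\.api\.network\..*",   # hypothetical entries
    r".*test_large_ops.*",
]


def filter_tests(test_ids, patterns=SKIP_PATTERNS):
    """Return only the test ids that match no skip pattern."""
    compiled = [re.compile(p) for p in patterns]
    return [test_id for test_id in test_ids
            if not any(c.match(test_id) for c in compiled)]
```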

andrea

-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org] 
Sent: 27 February 2014 15:49
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [QA] The future of nosetests with Tempest

On Tue, Feb 25, 2014 at 07:46:23PM -0600, Matt Riedemann wrote:
> 
> 
> On 2/12/2014 1:57 PM, Matthew Treinish wrote:
> >On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:
> >>
> >>
> >>On 1/17/2014 8:34 AM, Matthew Treinish wrote:
> >>>On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:
> On 01/16/2014 10:56 PM, Matthew Treinish wrote:
> >Hi everyone,
> >
> >With some recent changes made to Tempest compatibility with 
> >nosetests is going away. We've started using newer features that 
> >nose just doesn't support. One example of this is that we've 
> >started using testscenarios and we're planning to do this in more
places moving forward.
> >
> >So at Icehouse-3 I'm planning to push the patch out to remove 
> >nosetests from the requirements list and all the workarounds and 
> >references to nose will be pulled out of the tree. Tempest will 
> >also start raising an unsupported exception when you try to run 
> >it with nose so that there isn't any confusion on this moving 
> >forward. We talked about doing this at summit briefly and I've 
> >brought it up a couple of times before, but I believe it is time 
> >to do this now. I feel for tempest to move forward we need to do this
now so that there isn't any ambiguity as we add even more features and new
types of testing.
> I'm with you up to here.
> >
> >Now, this will have implications for people running tempest with 
> >python 2.6 since up until now we've set nosetests. There is a 
> >workaround for getting tempest to run with python 2.6 and testr see:
> >
> >https://review.openstack.org/#/c/59007/1/README.rst
> >
> >but essentially this means that when nose is marked as 
> >unsupported on tempest python 2.6 will also be unsupported by 
> >Tempest. (which honestly it basically has been for while now just 
> >we've gone without making it official)
> The way we handle different runners/os can be categorized as 
> "tested in gate", "unsupported" (should work, possibly some hacks 
> needed), and "hostile". At present, both nose and py2.6 I would 
> say are in the unsupported category. The title of this message and 
> the content up to here says we are moving nose to the hostile 
> category. With only 2 months to feature freeze I see no 
> justification in moving
> py2.6 to the hostile category. I don't see what new testing 
> features scheduled for the next two months will be enabled by 
> saying that tempest cannot and will not run on 2.6. It has been 
> agreed I think by all projects that py2.6 will be dropped in J. It 
> is OK that py2.6 will require some hacks to work and if in the 
> next few months it needs a few more then that is ok. If I am 
> missing another connection between the py2.6 and nose issues, please
explain.
> 
> >>>
> >>>So honestly we're already at this point in tempest. Nose really 
> >>>just doesn't work with tempest, and we're adding more features to 
> >>>tempest, your negative test generator being one of them, that 
> >>>interfere further with nose. I've seen several
> >>
> >>I disagree here, my team is running Tempest API, CLI and scenario 
> >>tests every day with nose on RHEL 6 with minimal issues.  I had to 
> >>workaround the negative test discovery by simply sed'ing that out of 
> >>the tests before running it, but that's acceptable to me until we 
> >>can start testing on RHEL 7.  Otherwise I'm completely OK with 
> >>saying py26 isn't really supported and isn't used in the gate, and 
> >>it's a buyer beware situation to make it work, which includes 
> >>pushing up trivial patches to make it work (which I did a few of 
> >>last week, and they were small syntax changes or usages of 
> >>testtools).
> >>
> >>I don't understand how the core projects can be running unit tests 
> >>in the gate on py26 but our functional integration project is going 
> >>to actively go out and make it harder to run Tempest with py26, that 
> >>sucks.
> >>
> >>If we really want to move the test project away from py26, let's 
> >>make the concerted effort to get the core projects to move with it.
> >
> >So as I said before the python 2.6 story for tempest remains the same 
> >after this change. The only thing that we'll be doing is actively 
> >preventing nose from working with tempest.
> >
> >>
> >>And FWIW, I tried the discover.py

Re: [openstack-dev] [Mistral] Renaming action types

2014-02-27 Thread Manas Kelshikar
How about ...

invoke_http & invoke_mistral to fit the verb_noun pattern.


On Wed, Feb 26, 2014 at 6:04 AM, Renat Akhmerov wrote:

> Ooh, I was wrong. Sorry. We use dash naming. We have "on-success",
> "on-error" and so forth.
>
> Please let us know if you see other inconsistencies.
>
> Thanks
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 26 Feb 2014, at 21:00, Renat Akhmerov  wrote:
>
> > Thanks Jay.
> >
> > Regarding underscore naming. If you meant using underscore naming for
> "createVM" and "novaURL" then yes, "createVM" is just a task name and it's
> a user preference. The same about "novaURL" which will be defined by users.
> As for keywords, seemingly we follow underscore naming.
> >
> > Renat Akhmerov
> > @ Mirantis Inc.
> >
> >
> >
> > On 26 Feb 2014, at 17:58, Jay Pipes  wrote:
> >
> >> On Wed, 2014-02-26 at 14:38 +0700, Renat Akhmerov wrote:
> >>> Folks,
> >>>
> >>> I'm proposing to rename these two action types REST_API and
> >>> MISTRAL_REST_API to HTTP and MISTRAL_HTTP. Words "REST" and "API"
> >>> don't look correct to me, if you look at
> >>>
> >>>
> >>> Services:
> >>> Nova:
> >>>   type: REST_API
> >>>   parameters:
> >>> baseUrl: {$.novaURL}
> >>>   actions:
> >>> createVM:
> >>>   parameters:
> >>> url: /servers/{$.vm_id}
> >>> method: POST
> >>>
> >>> There's no information about "REST" or "API" here. It's just a spec
> >>> how to form an HTTP request.
> >>
> >> +1 on HTTP and MISTRAL_HTTP.
> >>
> >> On an unrelated note, would it be possible to use under_score_naming
> >> instead of camelCase naming?
> >>
> >> Best,
> >> -jay
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Renaming "events" to "triggers" and move them out of "Workflow"

2014-02-27 Thread Manas Kelshikar
Agreed on both event -> trigger & moving triggers out of workflow. Let's get
the blueprint started.

/manas
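A sketch of what the snippet quoted below might look like after both changes, with "triggers" as a top-level section next to "Workflow" — the exact keyword spellings here are my assumption, not an agreed syntax:

```yaml
triggers:
  execute_backup:
    type: periodic
    tasks: execute_backup
    parameters:
      cron-pattern: "*/1 * * * *"

Workflow:
  ...
```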


On Wed, Feb 26, 2014 at 8:51 PM, Renat Akhmerov wrote:

> Hi team,
>
> When I tell people about Mistral I always have a hard time explaining why
> we use term "event" for declaring ways to start workflows. For example,
> take a look at the following snippet:
>
> Workflow:
>...
>events:
>  execute_backup:
> type: periodic
> tasks: execute_backup
> parameters:
> cron-pattern: "*/1 * * * *"
>
> Here we just tell Mistral "we should run backup workflow every minute".
>
> My suggestion is to rename "events" to "triggers" because actually events
> are going to be sent by Mistral when we implement notification mechanism
> (sending events over HTTP or AMQP about changes in workflows' and
> tasks' state).
>
> I would also suggest we move events out of "Workflow" section since it's
> not actually a part of workflow.
>
> Thoughts?
>
> If you agree I'll create a blueprint for this.
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-27 Thread Dean Troyer
On Thu, Feb 27, 2014 at 10:34 AM, Brian Haley  wrote:

> Ok, part of this is my kernel background, where true=1 like it should be :)
> So there's a -EUSERERROR there.
>

Right.  This is Bourne/POSIX shell, forget everything logical. ;)
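For anyone coming from C, the inversion is easy to demonstrate:

```shell
# Bourne/POSIX shell inverts the C convention: exit status 0 means success
# ("true") and any non-zero status means failure.
true
echo "exit status of true:  $?"

status=0
false || status=$?
echo "exit status of false: $status"

if true; then
    echo "a zero exit status takes the 'then' branch"
fi
```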


> That call to 'ip netns exec...' should be:
>
> sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns
> exec...
>
> Perhaps there's something up with this test - exercises/floating_ips.sh -
> I'll
> try running it by hand.


There is a problem in two of DevStack's exercises, floating_ips.sh and
volume.sh, where lib/neutron is not set up properly to handle the
ping_check() function calls.  That is what leads to what you see.
https://review.openstack.org/#/c/76867/ fixes the problem in the exercises.

As you know, the actual issue is that Q_RR_COMMAND is not set.  Which is
because the 'is_service_enabled neutron' check appears to be failing.  Note
that it is not the one you found in the logs, as the one in question isn't
actually logged.  I've just re-submitted the above review with that logging
turned on.

The changes to is_service_enabled() are (so far) backward-compatible in
that if is_neutron_enabled() is not declared (as is the case now) the old
check in is_service_enabled for 'neutron' should still catch it.  That
happens everywhere else and in the DevStack test scripts, so I'm baffled
and awaiting the current gate check on 76867.
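The backward-compatible shape being described might be sketched like this (DevStack-style names, but the bodies are simplified assumptions, not the real lib/ code):

```shell
# Prefer a per-service is_<name>_enabled hook when one is declared; otherwise
# fall back to scanning the comma-separated ENABLED_SERVICES list.
is_service_enabled() {
    service="$1"
    if type "is_${service}_enabled" >/dev/null 2>&1; then
        "is_${service}_enabled"
        return $?
    fi
    case ",${ENABLED_SERVICES}," in
        *,"${service}",*) return 0 ;;
        *) return 1 ;;
    esac
}

ENABLED_SERVICES="key,g-api,q-svc,neutron"
is_service_enabled neutron && echo "neutron is enabled"
```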

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Sean Dague
On 02/27/2014 11:05 AM, Dan Smith wrote:
>> Sure, but that's still functionally equivalent to using the /v2 prefix.
>>  So we could chuck the current /v3 code and do:
>>
>> /v2: Current thing
>> /v3: invalid, not supported
>> /v4: added simple task return for server create
>> /v5: added the event extension
>> /v6: added a new event for cinder to the event extension
>>
>> and it would be equivalent.
> 
> Yep, sure. This seems more likely to confuse people or clients to me,
> but if that's how we decided to do it, then that's fine. The approach to
> _what_ we version is my concern.
> 
>> And arguably, anything that is a pure "add" could get away with either a
>> minor version or not touching the version at all.  Only "remove" or
>> "modify" should have the potential to break a properly-written application.
> 
> Totally agree!

Basically agree, I just think we need to realize "properly-written
application" might not be the majority.

But I think as was said previously, the conversion to objects inside of
Nova means we're fundamentally changing the validation anyway, because
the content isn't going all the way down to the database any more.

I do think client headers instead of urls have some pragmatic approach
here that is very attractive. Will definitely need a good chunk of
plumbing to support that in a sane way in the tree that keeps the
overhead from a review perspective low.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-27 Thread Brian Haley
On 02/27/2014 11:55 AM, Dean Troyer wrote:
> On Thu, Feb 27, 2014 at 10:34 AM, Brian Haley  > wrote:
> 
> Ok, part of this is my kernel background, where true=1 like it should be 
> :)
> So there's a -EUSERERROR there.
> 
> 
> Right.  This is Bourne/POSIX shell, forget everything logical. ;)

Carl beat that into me as well.

> That call to 'ip netns exec...' should be:
> 
> sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns 
> exec...
> 
> Perhaps there's something up with this test - exercises/floating_ips.sh - 
> I'll
> try running it by hand.
> 
> 
> There is a problem in two of DevStack's exercises, floating_ips.sh and
> volume.sh, where lib/neutron is not set up properly to handle the ping_check()
> function calls.  That is what leads to what you see.
>  https://review.openstack.org/#/c/76867/ fixes the problem in the exercises.
> 
> As you know, the actual issue is that Q_RR_COMMAND is not set.  Which is 
> because
> 'is_service_enabled neutron' check appears to be failing.  Note that is not 
> the
> one you found in the logs as the one in question isn't actually logged.  I've
> just re-submitted the above review with that logging turned on.
> 
> The changes to is_service_enabled() are (so far) backward-compatible in that 
> if
> is_neutron_enabled() is not declared (as is the case now) the old check in
> is_service_enabled for 'neutron' should still catch it.  That happens 
> everywhere
> else and in the DevStack test scripts, so I'm baffled and awaiting the current
> gate check on 76867.

Thanks for finding that bug and saving my sanity - yes, it all has to do with
Q_RR_COMMAND not getting setup properly.

I'll look at the patch and see if I can get things to pass when I apply it,
still trying to get that test even running correctly, which might just need a
reclone.

Thanks,

-Brian



Re: [openstack-dev] packstack support for DRBD based installation

2014-02-27 Thread Ben Nemec
 

On 2014-02-26 16:52, Curious Human wrote: 

> Hi , 
> 
> We are using packstack based openstack installation. I am working on 
> providing HA support for Openstack services and dependent 3rd part services 
> like mysql etc . I will be using DRBD and pacemaker based HA functionality. 
> Is there any support for installing DRBD based installation using packstack , 
> or is there any document suggesting how to do it ? 
> 
> Thanks in advance. 
> Rohit

This is a development list, and your question sounds more suited to the
users list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 

Or possibly rdo-list since you're using packstack:
http://www.redhat.com/mailman/listinfo/rdo-list 

Thanks. 

-Ben 



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Dan Smith
> I do think client headers instead of urls have some pragmatic
> approach here that is very attractive. Will definitely need a good
> chunk of plumbing to support that in a sane way in the tree that
> keeps the overhead from a review perspective low.

Aside from some helper functions to make this consistent, and some
things to say "here's the code I want to return, but compat clients
need /this/ code", I think this actually gets us most of the way there:

http://paste.openstack.org/show/70310/

(super hacky not-working fast-as-I-could-type prototype, of course)

Some care to the protocol we use, versioning rules, etc is definitely
required. But plumbing-wise, I think it's actually not so bad. With
the above, I think we could stop doing new extensions for every tweak
right now.

Of course, cleaning up the internals of v2 to be more like v3 now is a
separate effort, which is non-trivial and important. However, v3
started as a copy of v2, so we should know exactly what that takes.

--Dan
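A hedged sketch (not the actual prototype behind the paste link) of the header-driven idea: the client asks for a version, and the server dispatches to the newest registered handler at or below it. All names here are invented for illustration.

```python
# Registry of (major, minor) version tuples to handler functions.
SUPPORTED = {}


def api_version(version):
    """Decorator registering a handler for a given (major, minor) version."""
    def wrap(fn):
        SUPPORTED[version] = fn
        return fn
    return wrap


@api_version((2, 0))
def create_server_v20():
    return "legacy response"


@api_version((2, 1))
def create_server_v21():
    return "response with task"


def dispatch(requested):
    """Call the newest handler not newer than the requested version."""
    candidates = [v for v in SUPPORTED if v <= requested]
    if not candidates:
        raise ValueError("no handler for version %s.%s" % requested)
    return SUPPORTED[max(candidates)]()


print(dispatch((2, 0)))   # legacy response
print(dispatch((2, 5)))   # response with task
```

The nice property is that a pure "add" only registers a new handler; older clients keep hitting the handler they always did.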



Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-27 Thread Ben Nemec

On 2014-02-27 09:23, Daniel P. Berrange wrote:

On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:

This patch is coming through the gate this morning -
https://review.openstack.org/#/c/71996/

The point being to actually make devstack stop when it hits an error,
instead of stopping only once the errors compound to the point where there
is no moving forward and some service call fails. This should *dramatically*
improve the experience of figuring out a failure in the gate, because
where it fails should be the issue. (It also made us figure out some
wonkiness with stdout buffering, that was making debug difficult).

This works on all the content that devstack gates against. However,
there are a ton of other paths in devstack, including vendor plugins,
which I'm sure aren't clean enough to run under -o errexit. So if all of
a sudden things start failing, this may be why. Fortunately you'll be
pointed at the exact point of the fail.


This is awesome!


+1!  Thanks Sean and everyone else who was involved with this.

This should help reduce the number of "devstack failed, what went 
wrong?" questions since the actual failure will be right there, as 
opposed to buried a thousand lines of output up. :-)
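The behaviour change is easy to see in a few lines of shell:

```shell
# With errexit, execution stops at the first failing command, so the "after"
# echo is never reached.  The demo runs in a child shell so that the failure
# does not kill this script itself.
out=$(sh -c 'set -o errexit; echo "before"; false; echo "after"' || true)
echo "$out"
```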


-Ben



Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-27 Thread Ben Nemec

On 2014-02-27 08:05, Ziad Sawalha wrote:

On Feb 26, 2014, at 11:47 AM, Joe Gordon  wrote:

On Wed, Feb 26, 2014 at 9:05 AM, David Ripton  
wrote:

On 02/26/2014 11:40 AM, Joe Gordon wrote:

This is missing the point about manually enforcing style. If you pass
the 'pep8' job there is no need to change any style.



In a perfect world, yes.


While there are exceptions to this,  this just sounds like being extra
nit-picky.


The current reality is that reviewers do vote and comment based on the
rules in the hacking guide. In terms of style, whether my patch gets
approved or not depends on who reviews it.

I think that clarifying this in the hacking guide and including it in
our test suites
would remove the personal bias from it. I don’t see folks complaining
that Jenkins is
being nit-picky as a reviewer.


The important aspect here is the mindset, if we don't gate
on style rule x, then we shouldn't waste valuable human review time
and patch revisions on trying to manually enforce it (And yes there
are exceptions to this).



In the real world, there are several things in PEP8 or our project
guidelines that the tools don't enforce perfectly.  I think it's fine for
human reviewers to point such things out.  (And then submit a patch to
hacking to avoid the need to do so in the future.)


To clarify, we don't rely on long term human enforcement for something
that a computer can do.



Precisely. I’m offering to include this as a patch to hacking or even a
separate test that we can add (without gating initially).


I'm fine with this personally, although it should probably be noted that 
we have some PEP 257-type style checks in hacking that get almost 
universally ignored because no project has been following those style 
guidelines up to this point and it's a significant amount of effort to 
fix that.  That said, a check for the trailing blank line would be 
easier to fix so it might get more traction.
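A rough sketch of what such a check could look like (this is an illustration, not the real hacking plugin API): flag multi-line docstrings that end with a blank line just before the closing quotes.

```python
import ast


def _ends_with_blank(doc):
    lines = doc.splitlines()
    # The final "line" is usually just the indentation before the closing
    # quotes; drop it before looking for a genuinely blank trailing line.
    if lines and not lines[-1].strip():
        lines.pop()
    return bool(lines) and not lines[-1].strip()


def docstrings_with_trailing_blank(source):
    """Yield (lineno, name) for offending function/class docstrings."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node, clean=False)
            if doc and "\n" in doc and _ends_with_blank(doc):
                yield node.lineno, node.name


flagged = '''
def f():
    """Summary.

    Details.

    """
'''

clean = '''
def g():
    """Summary.

    Details.
    """
'''

print(list(docstrings_with_trailing_blank(flagged)))   # [(2, 'f')]
print(list(docstrings_with_trailing_blank(clean)))     # []
```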


-Ben



Re: [openstack-dev] Devstack Error

2014-02-27 Thread Ben Nemec
 

The role add sounds like a problem where a variable didn't get
initialized correctly, but that's about all I can guess from this, and I
don't know anything about devstack-gate so I'm afraid I can't help with
that. 

Sean's devstack errexit change (which just merged) might help you track
this down since devstack will fail immediately instead of trying to
continue and failing later as it seems to be doing now. 

-Ben 

On 2014-02-26 02:53, trinath.soman...@freescale.com wrote: 

> Hi Ben- 
> 
> Can you give a guess of the problem. 
> 
> I feel that there might be some error with devstack-gate which brings down 
> the things from git and configure it in the system. 
> 
> The first error started at "openstack role add: error: argument -user: 
> expected one argument" 
> 
> Do I need to check or verify any other configuration for this issue. 
> 
> Kindly help me resolve the same. 
> 
> -- 
> 
> Trinath Somanchi - B39208 
> 
> trinath.soman...@freescale.com | extn: 4048 
> 
> FROM: Ben Nemec [mailto:openst...@nemebean.com] 
> SENT: Tuesday, February 25, 2014 9:23 PM
> TO: openstack-dev@lists.openstack.org
> SUBJECT: Re: [openstack-dev] Devstack Error 
> 
> On 2014-02-25 08:19, trinath.soman...@freescale.com wrote: 
> 
>> Hi Stackers- 
>> 
>> When I configured Jenkins to run the Sandbox tempest testing, While devstack 
>> is running, 
>> 
>> I have seen error 
>> 
>> "ERROR: Invalid Openstack Nova credentials" 
>> 
>> and another error 
>> 
>> "ERROR: HTTPConnection Pool(host='127.0.0.1', port=8774): Max retries 
>> exceeded with url: /v2/91dd….(caused by : [Errno 111]
>> Connection refused) 
>> 
>> I feel devstack automates the openstack environment. 
>> 
>> Kindly guide me resolve the issue. 
>> 
>> Thanks in advance. 
>> 
>> -- 
>> 
>> Trinath Somanchi - B39208 
>> 
>> trinath.soman...@freescale.com | extn: 4048
> 
> Those are both symptoms of an underlying problem. It sounds like a service 
> didn't start or wasn't configured correctly, but it's impossible to say for 
> sure what went wrong based on this information. 
> 
> -Ben



Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Jay Pipes
On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
> So a couple of things about this:
> 
> 
> 1) Today (and also true for Grizzly and Havana), the user can chose
> what LDAP attribute should be returned as the user or group ID.  So it
> is NOT a safe assumption today (ignoring any support for
> domain-specific LDAP support) that the format of a user or group ID is
> a 32 char UUID.  Quite often, I would think, that email address would
> be chosen by a cloud provider as the LDAP id field, by default we use
> the CN.  Since we really don't want to ever change the user or group
> ID we have given out from keystone for a particular entity, this means
> we need to update nova (or anything else) that has made a 32 char
> assumption.

I don't believe this is correct. Keystone is the service that deals with
authentication. As such, Keystone should be the one and only one service
that should have any need whatsoever to need to understand a non-UUID
value for a user ID. The only value that should ever be communicated
*from* Keystone should be the UUID value of the user.

If the Keystone service uses LDAP or federation for alternative
authentication schemes, then Keystone should have a mapping table that
translates those elongated and non-UUID identifiers values (email
addresses, LDAP CNs, etc) into the UUID value that is then communicated
to all other OpenStack services.

Best,
-jay
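The mapping Jay describes can be sketched in a few lines — a hedged illustration of the idea, not Keystone code; the namespace seed is invented, and a real implementation would also persist a table mapping the public ID back to the external identifier:

```python
import uuid

# Invented namespace seed for deterministic ID derivation.
KEYSTONE_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "keystone.example.org")


def public_id(external_id):
    """Derive a stable 32-char hex ID from any external identifier
    (LDAP CN, email address, ...)."""
    return uuid.uuid5(KEYSTONE_NS, external_id).hex


pid = public_id("cn=alice,ou=users,dc=example,dc=com")
print(pid, len(pid))
```

Because the derivation is deterministic, the same LDAP entry always maps to the same public ID; only the reverse lookup needs persistent state.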

> 2) In order to support the ability for service providers to be able to
> have the identity part of keystone be satisfied by a customer LDAP
> (i.e. for a given domain, have a specific LDAP), then, as has been
> stated, we need to subsequently, when handed an API call with just a
> user or group ID, be able to "route" this call to the correct LDAP.
>  Trying to keep true to the openstack design principles, we had
> planned to encode a domain identifier into the user or group ID - i.e.
> distribute the data to where it is needed, in other words, the user
> and group ID provide all the info we need to route the call to the
> right place. Two implementations come to mind:
> 2a) Simply concatenate the user/group ID with the domain_id, plus some
> separator and make a composite public facing ID.  e.g.
> "user_entity_id@@UUID_of_domain".  This would have a technical maximum
> size of 64+2+64 (i.e. 130), although in reality since we control
> domain_id and we know it is always 32 char UUID - in fact the max size
> would be 98.  This has the problem of increasing the size of the
> public facing field beyond the existing 64.  This is what we had
> planned for IceHouse - and is currently in review.
> 2b) Use a similar concatenation idea as 2a), but limit the total size
> to the existing 64. Since we control domain_id, we could (internally
> and not visibly to the outside world), create a domain_index, that was
> used in place of domain_id in the publicly visible field, to minimize
> the number of chars it requires.  So the public facing composite ID
> might be something like @@<8 chars of
> domain_index>.  There is a chance, of course, that  the 54 char
> restriction might be problematic for LDAP users...but I doubt it.  We
> would make that a restriction and if it really became a problem, we
> could consider a field size increase at a later release
> 3) The alternative to 2a and 2b is to have, as had been suggested, an
> internal mapping table that maps external facing entity_ids to a
> domain plus local entity ID.  The problem with this idea is that:
> - This could become a very big table (you will essentially have an
> entry for every user in every corporate LDAP that has accessed a given
> openstack)
> - Since most LDAPs are RO, we will never see deletes...so we won't
> know when (without some kind of garbage collection) to cull entries
> - It obviously does not solve 1) - since existing LDAP support can
> break the 32 char limit - and so it isn't true that this mapping table
> causes all public facing entity IDs to be simple 32 char UUIDs
> 
> 
> From a delivery into IceHouse point of view any of the above are
> possible, since the actual mapping used is relatively small part of
> the patch.  I personally favor 2b), since it is simple, has "less
> moving parts" and does not change any external facing requirements for
> storage of user and group IDs (above and beyond what is true today).
> 
> 
> Henry
> On 27 Feb 2014, at 03:46, Adam Young  wrote:
> 
> > On 02/26/2014 08:25 AM, Dolph Mathews wrote:
> > 
> > > 
> > > On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes 
> > > wrote:
> > > On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
> > > > For purposes of supporting multiple backends for
> > > Identity (multiple
> > > > LDAP, mix of LDAP and SQL, federation, etc) Keystone is
> > > planning to
> > > > increase the maximum size of the USER_ID field from an
> > > upper limit of
> > > > 64 to an upper limit of 255. This change would not
> > > impact any
> > > >

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Henry Nash

On 27 Feb 2014, at 17:52, Jay Pipes  wrote:

> On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
>> So a couple of things about this:
>> 
>> 
>> 1) Today (and also true for Grizzly and Havana), the user can chose
>> what LDAP attribute should be returned as the user or group ID.  So it
>> is NOT a safe assumption today (ignoring any support for
>> domain-specific LDAP support) that the format of a user or group ID is
>> a 32 char UUID.  Quite often, I would think, that email address would
>> be chosen by a cloud provider as the LDAP id field, by default we use
>> the CN.  Since we really don't want to ever change the user or group
>> ID we have given out from keystone for a particular entity, this means
>> we need to update nova (or anything else) that has made a 32 char
>> assumption.
> 
> I don't believe this is correct. Keystone is the service that deals with
> authentication. As such, Keystone should be the one and only one service
> that should have any need whatsoever to need to understand a non-UUID
> value for a user ID. The only value that should ever be communicated
> *from* Keystone should be the UUID value of the user.
> 
> If the Keystone service uses LDAP or federation for alternative
> authentication schemes, then Keystone should have a mapping table that
> translates those elongated and non-UUID identifiers values (email
> addresses, LDAP CNs, etc) into the UUID value that is then communicated
> to all other OpenStack services.
> 
> Best,
> -jay
> 
So I think that's a perfectly reasonable point of view...our challenge is
that this isn't what Keystone has done to date. E.g. anyone using a RO LDAP
today is probably exposing non-UUID identifiers out into nova and other
projects (and maybe outside of openstack altogether).  We can't (without
breaking them) just change the IDs for any existing LDAP entities.  So the
best we could do is to say something like: new entities (and perhaps only
those in domain-specific backends) would use such a mapping capability.

Henry
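Option 2a from the message quoted below can be sketched as a composite public ID of "local entity ID@@domain UUID", giving the 64 + 2 + 64 = 130 char technical maximum Henry mentions — names here are illustrative only:

```python
SEP = "@@"


def encode(local_id, domain_id):
    """Build the composite public-facing ID."""
    if len(local_id) > 64 or len(domain_id) > 64:
        raise ValueError("component exceeds the 64-char field limit")
    return local_id + SEP + domain_id   # at most 130 chars total


def decode(composite):
    """Split a composite ID back into (local_id, domain_id)."""
    # rsplit so a local ID that itself contains "@@" still parses correctly,
    # assuming domain IDs never contain the separator.
    local_id, domain_id = composite.rsplit(SEP, 1)
    return local_id, domain_id


cid = encode("alice@example.com", "1b2c3d4e5f60718293a4b5c6d7e8f901")
print(cid)
print(decode(cid))
```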
>> 2) In order to support the ability for service providers to be able to
>> have the identity part of keystone be satisfied by a customer LDAP
>> (i.e. for a given domain, have a specific LDAP), then, as has been
>> stated, we need to subsequently, when handed an API call with just a
>> user or group ID, be able to "route" this call to the correct LDAP.
>> Trying to keep true to the openstack design principles, we had
>> planned to encode a domain identifier into the user or group ID - i.e.
>> distribute the data to where it is needed, in other words, the user
>> and group ID provide all the info we need to route the call to the
>> right place. Two implementations come to mind:
>> 2a) Simply concatenate the user/group ID with the domain_id, plus some
>> separator and make a composite public facing ID.  e.g.
>> "user_entity_id@@UUID_of_domain".  This would have a technical maximum
>> size of 64+2+64 (i.e. 130), although in reality since we control
>> domain_id and we know it is always 32 char UUID - in fact the max size
>> would be 98.  This has the problem of increasing the size of the
>> public facing field beyond the existing 64.  This is what we had
>> planned for IceHouse - and is currently in review.
>> 2b) Use a similar concatenation idea as 2a), but limit the total size
>> to the existing 64. Since we control domain_id, we could (internally
>> and not visibly to the outside world), create a domain_index, that was
>> used in place of domain_id in the publicly visible field, to minimize
>> the number of chars it requires.  So the public facing composite ID
>> might be something like @@<8 chars of
>> domain_index>.  There is a chance, of course, that  the 54 char
>> restriction might be problematic for LDAP users...but I doubt it.  We
>> would make that a restriction and if it really became a problem, we
>> could consider a field size increase at a later release
>> 3) The alternative to 2a and 2b is to have, as had been suggested, an
>> internal mapping table that maps external facing entity_ids to a
>> domain plus local entity ID.  The problem with this idea is that:
>> - This could become a very big table (you will essentially have an
>> entry for every user in every corporate LDAP that has accessed a given
>> openstack)
>> - Since most LDAPs are RO, we will never see deletes...so we won't
>> know when (without some kind of garbage collection) to cull entries
>> - It obviously does not solve 1) - since existing LDAP support can
>> break the 32 char limit - and so it isn't true that this mapping table
>> causes all public facing entity IDs to be simple 32 char UUIDs
>> 
>> 
>> From a delivery into IceHouse point of view any of the above are
>> possible, since the actual mapping used is relatively small part of
>> the patch.  I personally favor 2b), since it is simple, has "less
>> moving parts" and does not change any external facing requirements for
>> storage of user and group IDs (above and beyond what is true t

Re: [openstack-dev] How do I mark one option as deprecating another one ?

2014-02-27 Thread Matt Riedemann



On 2/27/2014 6:32 AM, Davanum Srinivas wrote:

Phil,

Correct. We don't have this functionality in oslo.config. Please
create a new feature/enhancement request against oslo

thanks,
dims


Done: https://bugs.launchpad.net/oslo/+bug/1285768



On Thu, Feb 27, 2014 at 4:47 AM, Day, Phil  wrote:

Hi Denis,



Thanks for the pointer, but I looked at that and I my understanding is that
it only allows me to retrieve a value by an old name, but doesn't let me
know that the old name has been used.  So If all I wanted to do was change
the name/group of the config value it would be fine.  But in my case I need
to be able to implement:

If new_value_defined:
    do_something
else if old_value_defined:
    warn_about_deprecation
    do_something_else



Specifically I want to replace tenant_name based authentication with
tenant_id - so I need to know which has been specified.



Phil
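Pending oslo.config support for this, the special-casing Phil describes has to be done by hand. A minimal sketch with invented option and function names — not oslo.config API:

```python
import logging

LOG = logging.getLogger(__name__)


def pick_auth(tenant_id=None, tenant_name=None):
    """Prefer the new option; fall back to the old one with a warning.

    Returns ("id", value) or ("name", value) so the caller knows which
    style of authentication was configured.
    """
    if tenant_id is not None:
        return "id", tenant_id
    if tenant_name is not None:
        LOG.warning("tenant_name is deprecated; please switch to tenant_id")
        return "name", tenant_name
    raise ValueError("one of tenant_id / tenant_name must be configured")


print(pick_auth(tenant_id="abc123"))
print(pick_auth(tenant_name="demo"))
```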





From: Denis Makogon [mailto:dmako...@mirantis.com]
Sent: 26 February 2014 14:31
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] How do I mark one option as deprecating another
one ?



Here what oslo.config documentation says.


Represents a Deprecated option. Here's how you can use it

 oldopts = [cfg.DeprecatedOpt('oldfoo', group='oldgroup'),
            cfg.DeprecatedOpt('oldfoo2', group='oldgroup2')]
 cfg.CONF.register_group(cfg.OptGroup('blaa'))
 cfg.CONF.register_opt(cfg.StrOpt('foo', deprecated_opts=oldopts),
                       group='blaa')

 Multi-value options will return all new and deprecated
 options.  For single options, if the new option is present
 ("[blaa]/foo" above) it will override any deprecated options
 present.  If the new option is not present and multiple
 deprecated options are present, the option corresponding to
 the first element of deprecated_opts will be chosen.

I hope that it'll help you.


Best regards,

Denis Makogon.



On Wed, Feb 26, 2014 at 4:17 PM, Day, Phil  wrote:

Hi Folks,



I could do with some pointers on config value deprecation.



All of the examples in the code and documentation seem to deal with  the
case of "old_opt" being replaced by "new_opt" but still returning the same
value

Here using deprecated_name and  / or deprecated_opts in the definition of
"new_opt" lets me still get the value (and log a warning) if the config
still uses "old_opt"



However my use case is different because while I want to deprecate old_opt,
new_opt doesn't take the same value and I need to do different things
depending on which is specified, i.e. if old_opt is specified and new_opt
isn't I still want to do some processing specific to old_opt and log a
deprecation warning.



Clearly I can code this up as a special case at the point where I look for
the options - but I was wondering if there is some clever magic in
oslo.config that lets me declare this as part of the option definition ?







As a second point,  I thought that using a deprecated option automatically
logged a warning, but in the latest Devstack wait_soft_reboot_seconds is
defined as:



 cfg.IntOpt('wait_soft_reboot_seconds',
            default=120,
            help='Number of seconds to wait for instance to shut down after'
                 ' soft reboot request is made. We fall back to hard reboot'
                 ' if instance does not shutdown within this window.',
            deprecated_name='libvirt_wait_soft_reboot_seconds',
            deprecated_group='DEFAULT'),







but if I include the following in nova.conf

 libvirt_wait_soft_reboot_seconds = 20

I can see the new value of 20 being used, but there is no warning logged
that I'm using a deprecated name?



Thanks

Phil




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







--

Thanks,

Matt Riedemann




[openstack-dev] [Nova] why force_config_drive is a per compute node config

2014-02-27 Thread yunhong jiang
Greetings,
I have some questions on the force_config_drive configuration option
and hope to get some hints.
a) Why do we want this? Per my understanding, if the user wants to use
the config drive, they need to specify it in the nova boot call. Or is it
because the user possibly has no idea whether cloud-init is installed in
the image?

b) Even if we want to force the config drive, why is it a compute-node
config instead of a cloud-wide config? A compute-node config will cause
some migration issues, per my understanding.

Did I miss anything important?

Thanks
--jyh
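For reference, the knob in question is a per-node nova.conf setting along these lines (the 'always' value is my recollection of the accepted form in this era, so treat it as an assumption):

```ini
# nova.conf on a compute node; forces a config drive for every instance
# launched on this node, regardless of what the user asked for at boot.
[DEFAULT]
force_config_drive = always
```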




Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Dolph Mathews
On Thu, Feb 27, 2014 at 11:52 AM, Jay Pipes  wrote:

> On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
> > So a couple of things about this:
> >
> >
> > 1) Today (and also true for Grizzly and Havana), the user can chose
> > what LDAP attribute should be returned as the user or group ID.  So it
> > is NOT a safe assumption today (ignoring any support for
> > domain-specific LDAP support) that the format of a user or group ID is
> > a 32 char UUID.  Quite often, I would think, that email address would
> > be chosen by a cloud provider as the LDAP id field, by default we use
> > the CN.  Since we really don't want to ever change the user or group
> > ID we have given out from keystone for a particular entity, this means
> > we need to update nova (or anything else) that has made a 32 char
> > assumption.
>
> I don't believe this is correct. Keystone is the service that deals with
> authentication. As such, Keystone should be the one and only one service
> that should have any need whatsoever to need to understand a non-UUID
> value for a user ID. The only value that should ever be communicated
> *from* Keystone should be the UUID value of the user.
>

+++


>
> If the Keystone service uses LDAP or federation for alternative
> authentication schemes, then Keystone should have a mapping table that
> translates those elongated and non-UUID identifiers values (email
> addresses, LDAP CNs, etc) into the UUID value that is then communicated
> to all other OpenStack services.
>

I'd take it one step further and say that at some point, keystone should
stop leaking identity details such as user name / ID into OpenStack (they
shouldn't appear in tokens, and shouldn't be expected output of
auth_token). The use cases that "require" them are weak and would be better
served by pure multitenant RBAC, ABAC, etc. There are a lot of blockers to
making this happen (including a few in keystone's own API), but still food
for thought.


>
> Best,
> -jay
>
> > 2) In order to support the ability for service providers to be able to
> > have the identity part of keystone be satisfied by a customer LDAP
> > (i.e. for a given domain, have a specific LDAP), then, as has been
> > stated, we need to subsequently, when handed an API call with just a
> > user or group ID, be able to "route" this call to the correct LDAP.
> >  Trying to keep true to the openstack design principles, we had
> > planned to encode a domain identifier into the user or group ID - i.e.
> > distribute the data to where it is needed, in other words, the user
> > and group ID provide all the info we need to route the call to the
> > right place. Two implementations come to mind:
> > 2a) Simply concatenate the user/group ID with the domain_id, plus some
> > separator and make a composite public facing ID.  e.g.
> > "user_entity_id@@UUID_of_domain".  This would have a technical maximum
> > size of 64+2+64 (i.e. 130), although in reality since we control
> > domain_id and we know it is always 32 char UUID - in fact the max size
> > would be 98.  This has the problem of increasing the size of the
> > public facing field beyond the existing 64.  This is what we had
> > planned for IceHouse - and is currently in review.
> > 2b) Use a similar concatenation idea as 2a), but limit the total size
> > to the existing 64. Since we control domain_id, we could (internally
> > and not visibly to the outside world), create a domain_index, that was
> > used in place of domain_id in the publicly visible field, to minimize
> > the number of chars it requires.  So the public facing composite ID
> > might be something like <54 chars of entity_id>@@<8 chars of
> > domain_index>.  There is a chance, of course, that the 54 char
> > restriction might be problematic for LDAP users but I doubt it.  We
> > would make that a restriction and if it really became a problem, we
> > could consider a field size increase at a later release
> > 3) The alternative to 2a and 2b is to have, as had been suggested, an
> > internal mapping table that maps external facing entity_ids to a
> > domain plus local entity ID.  The problem with this idea is that:
> > - This could become a very big table (you will essentially have an
> > entry for every user in every corporate LDAP that has accessed a given
> > openstack)
> > - Since most LDAPs are RO, we will never see deletes...so we won't
> > know when (without some kind of garbage collection) to cull entries
> > - It obviously does not solve 1) - since existing LDAP support can
> > break the 32 char limit - and so it isn't true that this mapping table
> > causes all public facing entity IDs to be simple 32 char UUIDs
> >
> >
> > From a delivery into IceHouse point of view any of the above are
> > possible, since the actual mapping used is relatively small part of
> > the patch.  I personally favor 2b), since it is simple, has "less
> > moving parts" and does not change any external facing requirements for
> > storage of user and group IDs (above and beyond what is true today).
> 
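The concatenation schemes Henry sketches in 2a) and 2b) can be illustrated roughly as follows. This is a hedged sketch, not keystone code: the "@@" separator comes from the proposal, but the helper names and the rpartition-based routing are purely illustrative.

```python
import uuid

SEP = "@@"  # separator proposed in option 2a)

def compose_public_id(entity_id, domain_id):
    """Option 2a): concatenate the local entity ID with the domain UUID
    to form the public-facing ID (technically up to 64 + 2 + 32 chars)."""
    return entity_id + SEP + domain_id

def split_public_id(public_id):
    """Recover (entity_id, domain_id) so the call can be routed to the
    right LDAP. rpartition splits on the *last* '@@', which guards
    against the separator appearing inside the entity ID itself."""
    entity_id, _, domain_id = public_id.rpartition(SEP)
    return entity_id, domain_id

# Example with an email-address-style LDAP ID, as mentioned in the thread:
domain = uuid.uuid4().hex
public = compose_public_id("alice@example.com", domain)
assert split_public_id(public) == ("alice@example.com", domain)
```

Option 2b) would have the same shape, but with an 8-char domain index in place of the 32-char domain UUID and the entity part truncated so the whole ID fits in 64 chars.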

Re: [openstack-dev] [Neutron][IPv6] tox error

2014-02-27 Thread Collins, Sean
Shixiong Shang and I ran into this problem with Tox today while we were
pair programming, and I've also seen similar barfs on my DevStack lab
boxes - it's quite a mess. Frankly I've moved to using nosetests as a
workaround, and have added it to the developer docs. 

We really do need to figure out how to make Tox and Testr give more
useful failure output - it's so huge it makes my iTerm2 window lock up
when I try and do an incremental search on the output.

Help from a Testr / Tox guru would be appreciated.

-- 
Sean M. Collins


[openstack-dev] [Congress] Design doc for Data sources

2014-02-27 Thread Rajdeep Dua
Updated the Neutron tables based on the example.
It would be great if we had similar files for keystone and nova as well.

https://docs.google.com/document/d/1K9RkQuBSPN7Z2TmKfok7mw3E24otEGo8Pnsemxd5544/edit

Thanks
Rajdeep


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Morgan Fainberg
On Thursday, February 27, 2014, Dolph Mathews 
wrote:

>
> On Thu, Feb 27, 2014 at 11:52 AM, Jay Pipes 
> 
> > wrote:
>
>> On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
>> > So a couple of things about this:
>> >
>> >
>> > 1) Today (and also true for Grizzly and Havana), the user can choose
>> > what LDAP attribute should be returned as the user or group ID.  So it
>> > is NOT a safe assumption today (ignoring any support for
>> > domain-specific LDAP support) that the format of a user or group ID is
>> > a 32 char UUID.  Quite often, I would think, that email address would
>> > be chosen by a cloud provider as the LDAP id field, by default we use
>> > the CN.  Since we really don't want to ever change the user or group
>> > ID we have given out from keystone for a particular entity, this means
>> > we need to update nova (or anything else) that has made a 32 char
>> > assumption.
>>
>> I don't believe this is correct. Keystone is the service that deals with
>> authentication. As such, Keystone should be the one and only one service
>> that should have any need whatsoever to need to understand a non-UUID
>> value for a user ID. The only value that should ever be communicated
>> *from* Keystone should be the UUID value of the user.
>>
>
> +++
>
>
>>
>> If the Keystone service uses LDAP or federation for alternative
>> authentication schemes, then Keystone should have a mapping table that
>> translates those elongated and non-UUID identifiers values (email
>> addresses, LDAP CNs, etc) into the UUID value that is then communicated
>> to all other OpenStack services.
>>
>
> I'd take it one step further and say that at some point, keystone should
> stop leaking identity details such as user name / ID into OpenStack (they
> shouldn't appear in tokens, and shouldn't be expected output of
> auth_token). The use cases that "require" them are weak and would be better
> served by pure multitenant RBAC, ABAC, etc. There are a lot of blockers to
> making this happen (including a few in keystone's own API), but still food
> for thought.
>
>
++ this would be a great change!

I am on the same page as Dolph when it comes to approving of the UUID being
the only value communicated outside of keystone. There is just no good
reason to send out extra identity information (it isn't needed and can help
to reduce token bloat etc).

--Morgan

Sent via mobile
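The mapping table Jay and Dolph describe could take roughly this shape (a minimal in-memory sketch with hypothetical names; keystone's real table would live in SQL and would need the garbage collection Henry worries about):

```python
import uuid

class IdMapping:
    """Translate external identifiers (LDAP CNs, email addresses,
    federated IDs) to stable 32-char UUIDs, and back."""

    def __init__(self):
        self._to_public = {}
        self._to_external = {}

    def public_id(self, external_id):
        # Idempotent: the same external ID always yields the same UUID,
        # so the value keystone hands out for an entity never changes.
        if external_id not in self._to_public:
            public = uuid.uuid4().hex
            self._to_public[external_id] = public
            self._to_external[public] = external_id
        return self._to_public[external_id]

    def external_id(self, public_id):
        """Resolve a public UUID back to the external identifier."""
        return self._to_external[public_id]

mapping = IdMapping()
pid = mapping.public_id("cn=alice,dc=example,dc=com")
assert len(pid) == 32  # other services only ever see this UUID
assert mapping.external_id(pid) == "cn=alice,dc=example,dc=com"
```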


Re: [openstack-dev] [Neutron][IPv6] tox error

2014-02-27 Thread Clark Boylan
On Thu, Feb 27, 2014 at 11:39 AM, Collins, Sean
 wrote:
> Shixiong Shang and I ran into this problem with Tox today while we were
> pair programming, and I've also seen similar barfs on my DevStack lab
> boxes - it's quite a mess. Frankly I've moved to using nosetests as a
> workaround, and have added it to the developer docs.
>
> We really do need to figure out how to make Tox and Testr give more
> useful failure output - it's so huge it makes my iTerm2 window lock up
> when I try and do an incremental search on the output.
>
> Help from a Testr / Tox guru would be appreciated.
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

These failures are a result of testr or discover (depending on the
step in the test process, discovery happens first) running into python
import failures. In the example above it looks like
neutron.tests.unit.linuxbridge.test_lb_neutron_agent failed to import. You can spin up a
python interpreter and try importing that to debug (note that is what
you tried to do but I believe errors4 is part of the error message and
not the thing that couldn't be imported). Flake8 may also catch the
problem. Lifeless has laid the groundwork to fix this upstream in
testtools [0], but I don't think the corresponding testrepository
improvements have been released yet. You can however install
testrepository from source [1] and see if that solves your problem.

Without seeing the code in question it is really hard to debug any
further. If nosetests does work that would indicate a possible
intertest dependency that nose resolves by running tests in a
particular order which is different than the order used by testr.
Finally, when using the python executable from a virtualenv you don't
want to be in the virtualenv's bin dir. To do a proper import test you
want to be in the root dir of the repository `cd ~/github/neutron` then
either activate the virtualenv and run python or skip activation and
do `.tox/py27/bin/python` to run the virtualenv's python binary.

[0] 
https://github.com/testing-cabal/testtools/commit/6da4893939c6fd2d732bb20a4ac50db2fe639132
[1] https://launchpad.net/testrepository/
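The import check Clark suggests can be wrapped in a small helper so the real traceback is visible instead of testr's opaque discovery failure (the module path below is the one from this thread; substitute whatever fails to import in your tree):

```python
import importlib
import traceback

def try_import(module_path):
    """Import a module the way unittest discovery would, printing the
    underlying traceback on failure instead of a bare discovery error."""
    try:
        importlib.import_module(module_path)
        print("%s imported OK" % module_path)
        return True
    except Exception:
        traceback.print_exc()
        return False

if __name__ == "__main__":
    # Run with the virtualenv's python, from the repository root:
    #   .tox/py27/bin/python check_imports.py
    try_import("neutron.tests.unit.linuxbridge.test_lb_neutron_agent")
```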

Hope this helps,
Clark



Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-27 Thread Devananda van der Veen
On Thu, Feb 27, 2014 at 9:34 AM, Ben Nemec  wrote:

> On 2014-02-27 09:23, Daniel P. Berrange wrote:
>
>> On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:
>>
>>> This patch is coming through the gate this morning -
>>> https://review.openstack.org/#/c/71996/
>>>
>>> The point being to actually make devstack stop when it hits an error,
>>> instead of only once these compound to the point where there is no
>>> moving forward and some service call fails. This should *dramatically*
>>> improve the experience of figuring out a failure in the gate, because
>>> where it fails should be the issue. (It also made us figure out some
>>> wonkiness with stdout buffering, that was making debug difficult).
>>>
>>> This works on all the content that devstack gates against. However,
>>> there are a ton of other paths in devstack, including vendor plugins,
>>> which I'm sure aren't clean enough to run under -o errexit. So if all of
>>> a sudden things start failing, this may be why. Fortunately you'll be
>>> pointed at the exact point of the fail.
>>>
>>
>> This is awesome!
>>
>
> +1!  Thanks Sean and everyone else who was involved with this.
>

Another big +1 for this! I've wished for it every time I tried to add
something to devstack and struggled with debugging it.

-Deva


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
I agree about not needing extra identity information outside of the user's 
UUID, but what about the role/project/domain information stored in the PKI 
token? Does it remain or go away?

From: Morgan Fainberg [mailto:m...@metacloud.com]
Sent: Thursday, February 27, 2014 12:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum 
from 64 to 255



On Thursday, February 27, 2014, Dolph Mathews 
mailto:dolph.math...@gmail.com>> wrote:

On Thu, Feb 27, 2014 at 11:52 AM, Jay Pipes 
> wrote:
On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
> So a couple of things about this:
>
>
> 1) Today (and also true for Grizzly and Havana), the user can choose
> what LDAP attribute should be returned as the user or group ID.  So it
> is NOT a safe assumption today (ignoring any support for
> domain-specific LDAP support) that the format of a user or group ID is
> a 32 char UUID.  Quite often, I would think, that email address would
> be chosen by a cloud provider as the LDAP id field, by default we use
> the CN.  Since we really don't want to ever change the user or group
> ID we have given out from keystone for a particular entity, this means
> we need to update nova (or anything else) that has made a 32 char
> assumption.
I don't believe this is correct. Keystone is the service that deals with
authentication. As such, Keystone should be the one and only one service
that should have any need whatsoever to need to understand a non-UUID
value for a user ID. The only value that should ever be communicated
*from* Keystone should be the UUID value of the user.

+++


If the Keystone service uses LDAP or federation for alternative
authentication schemes, then Keystone should have a mapping table that
translates those elongated and non-UUID identifiers values (email
addresses, LDAP CNs, etc) into the UUID value that is then communicated
to all other OpenStack services.

I'd take it one step further and say that at some point, keystone should stop 
leaking identity details such as user name / ID into OpenStack (they shouldn't 
appear in tokens, and shouldn't be expected output of auth_token). The use 
cases that "require" them are weak and would be better served by pure 
multitenant RBAC, ABAC, etc. There are a lot of blockers to making this happen 
(including a few in keystone's own API), but still food for thought.

++ this would be a great change!

I am on the same page as Dolph when it comes to approving of the UUID being the 
only value communicated outside of keystone. There is just no good reason to 
send out extra identity information (it isn't needed and can help to reduce 
token bloat etc).

--Morgan

Sent via mobile


[openstack-dev] asymmetric gating and stable vs unstable tests

2014-02-27 Thread Devananda van der Veen
Hi all,

I'd like to point out how asymmetric gating is challenging for incubated
projects, and propose that there may be a way to make it less so.

For reference, incubated projects aren't allowed to have symmetric gating
with integrated projects. This is why our devstack and tempest tests are
"*-check-nv" in devstack and tempest, but "*-check" and "*-gate" in our
pipeline. So, these jobs are stable from Ironic's point of view because
we've been gating on them for the last month.

Cut forward to this morning. A devstack patch [1] was merged and broke
Ironic's gate because of a one-line issue in devstack/lib/ironic which I've
since proposed a fix for [2]. This issue was visible in the non-voting
check results before the patch was approved -- but those non-voting checks
got ignored because of an assumption of instability (they must be
non-voting for a reason, right?).

I'm not suggesting we gate integrated projects on incubated projects, but I
would like to point out that not all non-voting checks are non-voting
*because they're unstable*. It would be great if there were a way to
indicate that certain tests are voting for someone else and a failure
actually matters to them.

Thanks for listening,
-Deva


[1] https://review.openstack.org/#/c/71996/

[2] 
https://review.openstack.org/#/c/76943/
 -- It's been approved already, just waiting in the merge queue ...


Re: [openstack-dev] [Nova] why force_config_drive is a per compute node config

2014-02-27 Thread Michael Still
On Fri, Feb 28, 2014 at 6:34 AM, yunhong jiang
 wrote:
> Greeting,
> I have some questions on the force_config_drive configuration options
> and hope to get some hints.
> a) Why do we want this? Per my understanding, if the user wants to use
> the config drive, they need to specify it in the nova boot. Or is it
> because possibly the user has no idea of the cloud-init installation in the
> image?

It is possible for a cloud admin to have only provided images which
work with config drive. In that case the admin would want to force
config drive on, to ensure that instances always boot correctly.

> b) even if we want to force config drive, why is it a compute node
> config instead of a cloud-wide config? Compute-node config will have some
> migration issues per my understanding.

That's a fair point. It should probably have been a flag on the api
servers. I'd file a bug for that one.

Michael

-- 
Rackspace Australia
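A rough sketch of the decision being debated, combining the three possible sources: the boot request, a per-image property, and the deployment-wide flag. The names here are illustrative, not nova's actual code; in particular the "config_drive" image property key is hypothetical:

```python
def wants_config_drive(requested, image_properties, force_flag):
    """Decide whether an instance gets a config drive.

    requested        -- what the user asked for at 'nova boot'
    image_properties -- properties on the image (hypothetical
                        'config_drive' key used for illustration)
    force_flag       -- the force_config_drive configuration option
    """
    if force_flag:
        return True
    if image_properties.get("config_drive") == "mandatory":
        return True
    return bool(requested)

# The admin scenario Michael describes: all provided images need config
# drive, so the flag forces it on regardless of the boot request.
assert wants_config_drive(False, {}, True) is True
```

An image property, as suggested later in the thread, would let individual images opt in without forcing the behaviour cloud-wide.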



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Christopher Yeoh
On Thu, 27 Feb 2014 09:41:45 -0800
Dan Smith  wrote:
> 
> Aside from some helper functions to make this consistent, and some
> things to say "here's the code I want to return, but compat clients
> need /this/ code", I think this actually gets us most of the way
> there:
> 
> http://paste.openstack.org/show/70310/
> 
> (super hacky not-working fast-as-I-could-type prototype, of course)
> 
> Some care to the protocol we use, versioning rules, etc is definitely
> required. But plumbing-wise, I think it's actually not so bad. With
> the above, I think we could stop doing new extensions for every tweak
> right now.

So whilst we still have extensions (and that's a separate debate) we
need versioning on a per extension basis. Otherwise people are forced 
to upgrade their extensions in lockstep with each other. 

Using microversions for the core makes sense though and we'd talked
about doing that for V3.

Chris



Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Christopher Yeoh
On Thu, 27 Feb 2014 12:02:45 -0500
Sean Dague  wrote:
> 
> I do think client headers instead of urls have some pragmatic approach
> here that is very attractive. Will definitely need a good chunk of
> plumbing to support that in a sane way in the tree that keeps the
> overhead from a review perspective low.
> 

So as I mentioned before I think I like the idea of microversions
through headers for the core for V3. 

However, doing versioning differently for major versions just for Nova
compared to the rest of OpenStack has a cost too. And if we get version
discovery used and stop hard coding the version in our endpoints, then
people won't have the version number hard coded into their apps.

Chris



Re: [openstack-dev] [OpenStack-Infra] [third-party-ci] Proposing a regular workshop/meeting to help folks set up CI environments

2014-02-27 Thread Jay Pipes
On Wed, 2014-02-26 at 12:09 +, trinath.soman...@freescale.com wrote:
> Hi Jay-
> 
> Rather than March 3rd 
> 
> Can you kindly make it on Feb 28th if possible.

Unfortunately, Trinath, I can't make it tomorrow during that time. I can
try and be on IRC earlier in the day, though, to help you?

> I'm interested in attending the meeting. 
> 
> I have doubts to get clarified regarding setup, configuration and the CI system.

Please let me know what your issues are and I'll try to answer you asap.

Best,
-jay




Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Dan Smith
> So whilst we still have extensions (and that's a separate debate) we
> need versioning on a per extension basis. Otherwise people are forced 
> to upgrade their extensions in lockstep with each other. 

I think that some people would argue that requiring the extensions to go
together linearly is a good thing, from the point of view of a
consistent API. I'm not sure how I feel about that, actually, but I
think that's a different discussion.

However, what I think I really want, which I mentioned in IRC after I
sent this was: using something like servers:version=2. That could be
namespaced by extension, or we could define boxes of functionality that
go together, like core:version=1, volume-stuff:version=1, etc.

--Dan
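Dan's namespaced versions could travel in a single request header and be parsed along these lines (purely illustrative; no such header or parser exists today):

```python
def parse_api_versions(header_value):
    """Parse a hypothetical header value like
    'core:version=1, volume-stuff:version=2' into {'core': 1, ...}."""
    versions = {}
    for part in header_value.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, ver = part.partition(":version=")
        versions[name] = int(ver)
    return versions

assert parse_api_versions("core:version=1, volume-stuff:version=2") == {
    "core": 1, "volume-stuff": 2}
```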



Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-27 Thread Sergey Lukjanov
And a big +1 from me too. It's really useful.

On Fri, Feb 28, 2014 at 12:15 AM, Devananda van der Veen
 wrote:
> On Thu, Feb 27, 2014 at 9:34 AM, Ben Nemec  wrote:
>>
>> On 2014-02-27 09:23, Daniel P. Berrange wrote:
>>>
>>> On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:

 This patch is coming through the gate this morning -
 https://review.openstack.org/#/c/71996/

 The point being to actually make devstack stop when it hits an error,
 instead of only once these compound to the point where there is no
 moving forward and some service call fails. This should *dramatically*
 improve the experience of figuring out a failure in the gate, because
 where it fails should be the issue. (It also made us figure out some
 wonkiness with stdout buffering, that was making debug difficult).

 This works on all the content that devstack gates against. However,
 there are a ton of other paths in devstack, including vendor plugins,
 which I'm sure aren't clean enough to run under -o errexit. So if all of
 a sudden things start failing, this may be why. Fortunately you'll be
 pointed at the exact point of the fail.
>>>
>>>
>>> This is awesome!
>>
>>
>> +1!  Thanks Sean and everyone else who was involved with this.
>
>
> Another big +1 for this! I've wished for it every time I tried to add
> something to devstack and struggled with debugging it.
>
> -Deva
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-27 Thread Joe Gordon
On Thu, Feb 27, 2014 at 6:05 AM, Ziad Sawalha
 wrote:
>
> On Feb 26, 2014, at 11:47 AM, Joe Gordon  wrote:
>
>> On Wed, Feb 26, 2014 at 9:05 AM, David Ripton  wrote:
>>> On 02/26/2014 11:40 AM, Joe Gordon wrote:
>>>
 This is missing the point about manually enforcing style. If you pass
 the 'pep8' job there is no need to change any style.
>>>
>>>
>>> In a perfect world, yes.
>>
>> While there are exceptions to this,  this just sounds like being extra
>> nit-picky.
>
> The current reality is that reviewers do vote and comment based on the
> rules in the hacking guide. In terms of style, whether my patch gets
> approved or not depends on who reviews it.

Style guide enforcement is a two part problem:

* Train the reviewers to just rely on the automated enforcement
(except for special cases)
* Make sure the automatic style checks reflect the lazy consensus of
what we want.

If reviewers nit pick and read between the lines in style enforcement,
we will spend all our time writing new rules.

>
> I think that clarifying this in the hacking guide and including it in our
> test suites would remove the personal bias from it. I don't see folks
> complaining that Jenkins is being nit-picky as a reviewer.

I think as a first pass we can explicitly mention we don't care about
the trailing empty line (which is the status quo).

>
>> The important aspect here is the mindset, if we don't gate
>> on style rule x, then we shouldn't waste valuable human review time
>> and patch revisions on trying to manually enforce it (And yes there
>> are exceptions to this).
>>
>>>
>>> In the real world, there are several things in PEP8 or our project
>>> guidelines that the tools don't enforce perfectly.  I think it's fine for
>>> human reviewers to point such things out.  (And then submit a patch to
>>> hacking to avoid the need to do so in the future.)
>>
>> To clarify, we don't rely on long term human enforcement for something
>> that a computer can do.
>>
>
> Precisely. I'm offering to include this as a patch to hacking or even a
> separate test that we can add (without gating initially).
>
>>>
>>> --
>>> David Ripton   Red Hat   drip...@redhat.com
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-27 Thread Joe Gordon
On Thu, Feb 27, 2014 at 9:29 AM, Ben Nemec  wrote:
> On 2014-02-27 08:05, Ziad Sawalha wrote:
>>
>> On Feb 26, 2014, at 11:47 AM, Joe Gordon  wrote:
>>
>>> On Wed, Feb 26, 2014 at 9:05 AM, David Ripton  wrote:

 On 02/26/2014 11:40 AM, Joe Gordon wrote:

> This is missing the point about manually enforcing style. If you pass
> the 'pep8' job there is no need to change any style.



 In a perfect world, yes.
>>>
>>>
>>> While there are exceptions to this,  this just sounds like being extra
>>> nit-picky.
>>
>>
>> The current reality is that reviewers do vote and comment based on the
>> rules in the hacking guide. In terms of style, whether my patch gets
>> approved or not depends on who reviews it.
>>
>> I think that clarifying this in the hacking guide and including it in
>> our test suites would remove the personal bias from it. I don't see
>> folks complaining that Jenkins is being nit-picky as a reviewer.
>>
>>> The important aspect here is the mindset, if we don't gate
>>> on style rule x, then we shouldn't waste valuable human review time
>>> and patch revisions on trying to manually enforce it (And yes there
>>> are exceptions to this).
>>>

 In the real world, there are several things in PEP8 or our project
 guidelines that the tools don't enforce perfectly.  I think it's fine
 for
 human reviewers to point such things out.  (And then submit a patch to
 hacking to avoid the need to do so in the future.)
>>>
>>>
>>> To clarify, we don't rely on long term human enforcement for something
>>> that a computer can do.
>>>
>>
>> Precisely. I'm offering to include this as a patch to hacking or even a
>> separate test that we can add (without gating initially).
>
>
> I'm fine with this personally, although it should probably be noted that we
> have some PEP 257-type style checks in hacking that get almost universally
> ignored because no project has been following those style guidelines up to
> this point and it's a significant amount of effort to fix that.  That said,
> a check for the trailing blank line would be easier to fix so it might get
> more traction.

If we do want to add this to hacking, and I am not convinced either
way yet, we should pick the option that requires the least change to
gate on.

>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [savanna] team meeting minutes Feb 27

2014-02-27 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-27-18.01.html
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-27-18.01.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



Re: [openstack-dev] [Nova] why force_config_drive is a per compute node config

2014-02-27 Thread Jiang, Yunhong


> -Original Message-
> From: Michael Still [mailto:mi...@stillhq.com]
> Sent: Thursday, February 27, 2014 1:04 PM
> To: yunhong jiang
> Cc: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Nova] why force_config_drive is a per
> compute node config
> 
> On Fri, Feb 28, 2014 at 6:34 AM, yunhong jiang
>  wrote:
> > Greeting,
> > I have some questions on the force_config_drive configuration options
> > and hope to get some hints.
> > a) Why do we want this? Per my understanding, if the user wants to use
> > the config drive, they need to specify it in the nova boot. Or is it
> > because possibly the user has no idea of the cloud-init installation in the
> > image?
> 
> It is possible for a cloud admin to have only provided images which
> work with config drive. In that case the admin would want to force
> config drive on, to ensure that instances always boot correctly.

So would it make sense to keep it as image property, instead of compute node 
config?

> 
> > b) even if we want to force config drive, why is it a compute node
> > config instead of a cloud-wide config? Compute-node config will have some
> > migration issues per my understanding.
> 
> That's a fair point. It should probably have been a flag on the api
> servers. I'd file a bug for that one.

Thanks, and I can cook a patch for it. Still, I think it would be better if
we used an image property?

--jyh

> 
> Michael
> 
> --
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova] why force_config_drive is a per compute node config

2014-02-27 Thread Jiang, Yunhong
Hi Michael, I created a bug at
https://bugs.launchpad.net/nova/+bug/1285880 - please have a look.

Thanks
--jyh

> -Original Message-
> From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
> Sent: Thursday, February 27, 2014 1:35 PM
> To: OpenStack Development Mailing List (not for usage questions);
> yunhong jiang
> Subject: Re: [openstack-dev] [Nova] why force_config_drive is a per
> compute node config
> 
> 
> 
> > -Original Message-
> > From: Michael Still [mailto:mi...@stillhq.com]
> > Sent: Thursday, February 27, 2014 1:04 PM
> > To: yunhong jiang
> > Cc: OpenStack Development Mailing List
> > Subject: Re: [openstack-dev] [Nova] why force_config_drive is a per
> > compute node config
> >
> > On Fri, Feb 28, 2014 at 6:34 AM, yunhong jiang
> >  wrote:
> > > Greeting,
> > > I have some questions on the force_config_drive configuration
> > > options and hope to get some hints.
> > > a) Why do we want this? Per my understanding, if the user wants to use
> > > the config drive, they need to specify it in the nova boot. Or is it
> > > because possibly the user has no idea of the cloud-init installation in the
> > > image?
> >
> > It is possible for a cloud admin to have only provided images which
> > work with config drive. In that case the admin would want to force
> > config drive on, to ensure that instances always boot correctly.
> 
> So would it make sense to keep it as an image property, instead of a
> compute-node config?
> 
> >
> > > b) Even if we want to force the config drive, why is it a
> > > compute-node config instead of a cloud-wide config? A compute-node
> > > config will have some migration issues, per my understanding.
> >
> > That's a fair point. It should probably have been a flag on the api
> > servers. I'd file a bug for that one.
> 
> Thanks, and I can cook up a patch for it. Still, wouldn't it be better to
> use an image property?
> 
> --jyh
> 
> >
> > Michael
> >
> > --
> > Rackspace Australia
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Status of Docker CI

2014-02-27 Thread Eric Windisch
We have a Jenkins server and slave configuration that has been tested and
integrated into upstream OpenStack CI.  We do not yet trigger on rechecks
due to limitations of the Gerrit Jenkins trigger plugin.  However,  Arista
has published a patch for this that we may be able to test.  Reporting into
OpenStack Gerrit has been tested, but is currently disabled as we know that
tests are failing. Re-enabling the reporting is as simple as clicking a
checkbox in Jenkins, however.

The test itself where we bring Nova up with the Docker plugin and run
tempest against it is working fairly well. The process of building the VM
image and running it is fully automated and running smoothly. Nova is
installed, started, and tempest runs against it.

Tempest is working without failures on the majority of tests. To speed
development I've been concentrating on the "tempest.api.compute" tests. To
date, I've only disabled neutron, cinder, and the v3 api. I expect that
I'll need to disable the config_drive and migration extensions as we do not
support these features in our driver. I haven't yet identified any other
extensions that do not work.
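For anyone reproducing this run, the disabling described above amounts to a tempest.conf fragment along these lines. Option names shifted between Tempest releases, so treat this as a sketch of the idea rather than exact syntax for any particular version:

```ini
[service_available]
# Services the Docker driver environment does not exercise
neutron = false
cinder = false

[compute-feature-enabled]
# v3 API and per-feature toggles; exact option names differ by release
api_v3 = false
config_drive = false
resize = false
```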

Tuesday's pass/fail for Tempest was 32 failures to 937 tests. The total
number of tests is as low as 937 because this only includes the compute api
tests, knowing that we're passing or skipping all other test suites.

Since Tuesday, I've made a number of changes including bugfixes for the
Docker driver and disabling of the config_drive and migration extensions.
I'm still running tempest against these changes, but expect to see fewer
than 20 failing tests today.

Here is a list of the tests that failed as of Tuesday:
 http://paste.openstack.org/show/69566/

Related changes in review:
* Nova
  - https://review.openstack.org/#/c/76382/
  - https://review.openstack.org/#/c/76373/
* Tempest
  - https://review.openstack.org/#/c/75267/
  # following have -1's for me to review, may be rolled into a single patch
  - https://review.openstack.org/#/c/75249/
  - https://review.openstack.org/#/c/75254/
  - https://review.openstack.org/#/c/75274/

A fair number of the remaining failures are timeout errors creating tenants
in Keystone and uploading images into Glance. It isn't clear why I'm seeing
these errors, but I'm going to attempt increasing the timeout. There may be
some more subtle problem with my environment, or it may simply be a matter
of performance, but I doubt these issues are specific to the Docker
hypervisor.
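The timeout experiment mentioned above would be a one-line change in tempest.conf; the value shown is only a guess at a reasonable bump, not a recommendation:

```ini
[compute]
# Build/wait timeouts in this era defaulted to around 300 seconds;
# doubling them to rule out a pure performance problem.
build_timeout = 600
```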

Because we don't support Neutron and the v3 api doesn't work with
nova-network, I haven't yet concentrated effort into v3. Having done some
limited testing of the v3 API, however, I've seen relatively few failures
and most or all overlapped with the existing v2 failures. I'm not sure how
Russell or the community feels about skipping Tempest tests for v3, and I
would like to try making these pass, but I presently see it as lower
priority versus making v2 work and pass.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor Framework

2014-02-27 Thread Jay Pipes
On Thu, 2014-02-27 at 02:11 +0400, Eugene Nikanorov wrote:
> Hi neutron folks,
> 
> I know that there are patches on gerrit for VPN, FWaaS and L3 services
> that are leveraging Provider Framework. 
> Recently we've been discussing more comprehensive approach that will
> allow user to choose service capabilities rather than vendor or
> provider.
> 
> I've started creating design draft of Flavor Framework, please take a
> look: 
> https://wiki.openstack.org/wiki/Neutron/FlavorFramework
> 
> It also now looks clear to me that the code that introduces providers
> for vpn, fwaas, l3 is really necessary to move forward to Flavors with
> one exception: providers should not be exposed to public API.
> While provider attribute could be visible to administrator (like
> segmentation_id of network), it can't be specified on creation and
> it's not available to a regular user.

A few thoughts, Eugene, and thanks for putting together the wiki page
where we can work on this important topic.

1) I'm not entirely sure that a provider attribute is even necessary to
expose in any API. What is important is for a scheduler to know which
drivers are capable of servicing a set of attributes that are grouped
into a "flavor".

2) I would love to see the use of the term "flavor" banished from
OpenStack APIs. Nova has moved from flavors to "instance types", which
clearly describes what the thing is, without the odd connotations that
the word "flavor" has in different languages (not to mention the fact
that flavor is spelled flavour in non-American English).

How about using the term "load balancer type", "VPN type", and "firewall
type" instead?

3) I don't believe the FlavorType (public or internal) attribute of the
flavor is useful. We want to get away from having any vendor-specific
attributes or objects in the APIs (yes, even if they are "hidden" from
the normal user). See point #1 for more about this. A scheduler should
be able to match a driver to a request simply by matching the set of
required capabilities in the requested flavor (load balancer type) to
the set of capabilities advertised by the driver.

4) A minor point... I think it would be fine to group the various
"types" into a single database table behind the scenes (like you have in
the Object model section). However, I think it is useful to have the
public API expose a /$service-types resource endpoint for each service
itself, instead of a generic /types (or /flavors) endpoint. So, folks
looking to set up a load balancer would call GET /balancer-types, or
call neutron balancer-type-list, instead of calling
GET /types?service=load-balancer or neutron flavor-list
--service=load-balancer

5) In the section on Scheduling, you write "Scheduling is a process of
choosing provider and a backend for the resource". As mentioned above, I
think this could be changed to something like this: "Scheduling is a
process of matching the set of requested capabilities -- the flavor
(type) -- to the set of capabilities advertised by a driver for the
resource". That would put Neutron more in line with how Nova handles
this kind of thing.
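The matching process described in points #1, #3, and #5 fits in a few lines: a "type" (flavor) is just a set of required capabilities, and a driver is eligible when it advertises a superset of them. The driver names and capability strings below are illustrative assumptions, not Neutron code:

```python
# Illustrative sketch of capability-based scheduling (not Neutron code):
# a service type is a set of required capabilities; any driver whose
# advertised capabilities form a superset is a candidate backend.

DRIVER_CAPABILITIES = {
    'haproxy':  {'l7-switching', 'tcp', 'http'},
    'vendor-x': {'tcp', 'http', 'ssl-termination', 'l7-switching'},
}

def eligible_drivers(required_capabilities, drivers=DRIVER_CAPABILITIES):
    """Return the drivers able to service a 'type' (flavor) request."""
    required = set(required_capabilities)
    return sorted(name for name, caps in drivers.items()
                  if required <= caps)

# A scheduler would then choose one candidate, e.g. by weighing load;
# no provider attribute ever needs to appear in the public API.
```

Note that under this model the user only ever names capabilities; which vendor backs the request is an internal scheduling decision, which is exactly the argument against exposing a provider attribute.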

Anyway, just food for thought.

Best,
-jay


> Looking forward to get your feedback.
> 
> 
> Thanks,
> Eugene.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Status of Docker CI

2014-02-27 Thread Russell Bryant
Thanks for the update.

On 02/27/2014 05:18 PM, Eric Windisch wrote:
> We have a Jenkins server and slave configuration that has been tested
> and integrated into upstream OpenStack CI.  We do not yet trigger on
> rechecks due to limitations of the Gerrit Jenkins trigger plugin.
>  However,  Arista has published a patch for this that we may be able to
> test.  Reporting into OpenStack Gerrit has been tested, but is currently
> disabled as we know that tests are failing. Re-enabling the reporting is
> as simple as clicking a checkbox in Jenkins, however.
> 
> The test itself where we bring Nova up with the Docker plugin and run
> tempest against it is working fairly well. The process of building the
> VM image and running it is fully automated and running smoothly. Nova is
> installed, started, and tempest runs against it.
> 
> Tempest is working without failures on the majority of tests. To speed
> development I've been concentrating on the "tempest.api.compute" tests.
> To date, I've only disabled neutron, cinder, and the v3 api. I expect
> that I'll need to disable the config_drive and migration extensions as
> we do not support these features in our driver. I haven't yet identified
> any other extensions that do not work.

The number of things that don't work with this driver is a big issue, I
think.  However, we haven't really set rules on a baseline for what we
expect every driver to support.  This is something I'd like to tackle in
the Juno cycle, including setting another deadline.

> Tuesday's pass/fail for Tempest was 32 failures to 937 tests. The total
> number of tests is as low as 937 because this only includes the compute
> api tests, knowing that we're passing or skipping all other test suites.
> 
> Since Tuesday, I've made a number of changes including bugfixes for the
> Docker driver and disabling of the config_drive and migration
> extensions. I'm still running tempest against these changes, but expect
> to see fewer than 20 failing tests today.
> 
> Here is a list of the tests that failed as of Tuesday:
>  http://paste.openstack.org/show/69566/
> 
> Related changes in review:
> * Nova
>   - https://review.openstack.org/#/c/76382/
>   - https://review.openstack.org/#/c/76373/
> * Tempest
>   - https://review.openstack.org/#/c/75267/
>   # following have -1's for me to review, may be rolled into a single patch
>   - https://review.openstack.org/#/c/75249/
>   - https://review.openstack.org/#/c/75254/
>   - https://review.openstack.org/#/c/75274/
> 
> A fair number of the remaining failures are timeout errors creating
> tenants in Keystone and uploading images into Glance. It isn't clear why
> I'm seeing these errors, but I'm going to attempt increasing the
> timeout. There may be some more subtle problem with my environment, or
> it may simply be a matter of performance, but I doubt these issues are
> specific to the Docker hypervisor.

These have got to be specific to your environment given the many
thousands of times these tests are run in other environments every week.

> Because we don't support Neutron and the v3 api doesn't work with
> nova-network, I haven't yet concentrated effort into v3. Having done
> some limited testing of the v3 API, however, I've seen relatively few
> failures and most or all overlapped with the existing v2 failures. I'm
> not sure how Russell or the community feels about skipping Tempest tests
> for v3, and I would like to try making these pass, but I presently see
> it as lower priority versus making v2 work and pass.

It's certainly not ideal.  On the one hand, we haven't enforced a
baseline for drivers yet.  On the other hand, we did communicate that we
expect a "full tempest run" in CI, so that's up to some interpretation.
We also said "Note that hypervisors missing specific bits of feature
support may exclude those tests from their published Tempest
configuration, and the Nova team will validate the effectiveness of the
given config on a per-case basis to ensure reasonable coverage."

The deprecation schedule [1] says that the removal won't be committed
until just before the RC, so you still have a couple of weeks.  I would
sprint toward getting everything passing, even if it means applying
fixes to your env that haven't merged yet to demonstrate it working sooner.

[1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

