Re: [openstack-dev] [nova] Server Group API: add 'action' to authorizer?

2014-08-25 Thread Alex Xu

On 2014-08-23 18:29, Christopher Yeoh wrote:

On Sat, 23 Aug 2014 03:56:27 -0500
Joe Cropper  wrote:


Hi Folks,

Would anyone be opposed to adding the 'action' checking to the v2/v3
authorizers?  This would allow administrators more fine-grained
control over  who can read vs. create/update/delete server groups.

Thoughts?

If folks are supportive, I'd be happy to add this... but not sure if
we'd treat this as a 'bug' or whether there is a blueprint under which
this could be done?

Long term we want to have a separate authorizer for every method. Alex
had a nova-spec  proposed for this but it unfortunately did not make
Juno

https://review.openstack.org/#/c/92326/

Also since the feature proposal deadline has passed it'll have to wait
till Kilo.


Yes, that spec proposes adding a policy rule for each API to get more 
fine-grained control. But we have to wait until the Kilo release.
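To illustrate what per-action authorization could look like, here is a stdlib-only sketch. The rule names and the admin/owner handling are invented for illustration; real Nova wires this through its policy engine and policy.json, not a hard-coded dict:

```python
# Hypothetical per-action policy checks for server groups.  Rule names
# and the simplified admin_or_owner logic are made up for illustration;
# real Nova evaluates rules from policy.json via its policy engine.

POLICY_RULES = {
    "compute_extension:server_groups:index": "rule:admin_or_owner",
    "compute_extension:server_groups:show": "rule:admin_or_owner",
    "compute_extension:server_groups:create": "rule:admin_api",
    "compute_extension:server_groups:delete": "rule:admin_api",
}

def authorize(context_is_admin, action):
    """Return True if the caller may perform `action` on server groups."""
    rule = POLICY_RULES.get("compute_extension:server_groups:%s" % action)
    if rule is None:
        return False          # unknown action: deny by default
    if rule == "rule:admin_api":
        return context_is_admin
    return True               # admin_or_owner: owners always pass in this toy

# Fine-grained control: a non-admin can read, but not create or delete.
assert authorize(False, "index") is True
assert authorize(False, "create") is False
assert authorize(True, "create") is True
```

The point of the sketch is just that the policy key includes the action name, so operators can grant read access without granting create/update/delete.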




Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [nova] are we going to remove the novaclient v3 shell or what?

2014-09-18 Thread Alex Xu

On 2014-09-18 18:14, Day, Phil wrote:

-Original Message-
From: Kenichi Oomichi [mailto:oomi...@mxs.nes.nec.co.jp]
Sent: 18 September 2014 02:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] are we going to remove the novaclient
v3 shell or what?



-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
Sent: Wednesday, September 17, 2014 11:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] are we going to remove the novaclient v3

shell or what?

This has come up a couple of times in IRC now but the people that
probably know the answer aren't available.

There are python-novaclient patches that are adding new CLIs to the v2
(v1_1) and v3 shells, but now that we have the v2.1 API (v2 on v3) why
do we still have a v3 shell in the client?  Are there plans to remove that?

I don't really care either way, but need to know for code reviews.

One example: [1]

[1] https://review.openstack.org/#/c/108942/

Sorry for the slightly late response.
I think we don't need to add new v3 features to novaclient anymore.
For example, the v3 part of the above [1] was not necessary because the new
server-group quota feature is provided as v2 and v2.1, not v3.

That would be true if there were a version of the client that supported v2.1 
today, but the V2.1 API is still presented as V3 and doesn't include the 
tenant_id - making the V3 client the only simple way to test new V2.1 features 
in devstack, as far as I can see.


How about this as a plan:

1) We add support to the client for "--os-compute-api-version=v2.1"   which 
maps into the client with the URL set to include v2.1(this won't be usable until we 
do step 2)

+1


2) We change the Nova  to present the v2.1 API  as 
'http://X.X.X.X:8774/v2.1//
  - At this point we will have a working client for all of the stuff that's 
been moved back from V3 to V2.1, but will lose access to any V3 stuff not yet 
moved (which is the opposite of the current state where the v3 client can only 
be used for things that haven't been refactored to V2.1)


Actually, we can already access the v2.1 API as 
'http://X.X.X.X:8774/v2.1//..'

https://github.com/openstack/nova/blob/master/etc/nova/api-paste.ini#L64



3) We remove V3 from the client.


Until we get 1 & 2 done, to me it still makes sense to allow small changes to 
the v3 client, so that we keep it usable with the V2.1 API
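Step 1 of the plan is essentially a mapping from the requested API version to the endpoint path. A minimal sketch of what the client-side mapping could look like (the version table and function name are illustrative, not novaclient's actual code):

```python
# Illustrative sketch of mapping --os-compute-api-version to the
# versioned compute endpoint (step 1 of the plan above).  The version
# table and helper name are invented; novaclient's real plumbing differs.

def compute_endpoint(base, api_version, tenant_id):
    """Build the versioned compute endpoint URL."""
    paths = {"2": "v2", "2.1": "v2.1", "3": "v3"}
    try:
        prefix = paths[api_version]
    except KeyError:
        raise ValueError("unsupported API version: %s" % api_version)
    return "%s/%s/%s" % (base.rstrip("/"), prefix, tenant_id)

assert compute_endpoint("http://x.x.x.x:8774", "2.1", "demo") == \
    "http://x.x.x.x:8774/v2.1/demo"
```

Once step 2 lands and Nova serves the API under /v2.1, the same client code works unchanged for v2, v2.1, and (until removed) v3.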





Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-18 Thread Alex Xu

On 2014-09-18 18:57, Sean Dague wrote:

On 09/18/2014 06:38 AM, Christopher Yeoh wrote:

On Sat, 13 Sep 2014 06:48:19 -0400
Sean Dague  wrote:


On 09/13/2014 02:28 AM, Kenichi Oomichi wrote:

Hi Chris,

Thanks for bringing it up here.


-Original Message-
From: Chris St. Pierre [mailto:stpie...@metacloud.com]
Sent: Saturday, September 13, 2014 2:53 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] Expand resource name allowed
characters

We have proposed that the allowed characters for all resource
names in Nova (flavors, aggregates, etc.) be expanded to all
printable unicode characters and horizontal spaces:
https://review.openstack.org/#/c/119741

Currently, the only allowed characters in most resource names are
alphanumeric, space, and [.-_].


We have proposed this change for two principal reasons:

1. We have customers who have migrated data forward since Essex,
when no restrictions were in place, and thus have characters in
resource names that are disallowed in the current version of
OpenStack. This is only likely to be useful to people migrating
from Essex or earlier, since the current restrictions were added
in Folsom.

2. It's pretty much always a bad idea to add unnecessary
restrictions without a good reason. While we don't have an
immediate need to use, for example, the ever-useful
http://codepoints.net/U+1F4A9 in a flavor name, it's hard to come
up with a reason people *shouldn't* be allowed to use it.

That said, apparently people have had a need to not be allowed to
use some characters, but it's not clear why:
https://bugs.launchpad.net/nova/+bug/977187 So I guess if anyone
knows any reason why these printable characters should not be
joined in holy resource naming, speak now or forever hold your
peace.

I also could not find the reason for the current restriction in the bug
report, and I'd like to know the history.
In the v2 API (not v2.1), each resource name has the following
restrictions:

   Resource  | Length  | Pattern
  ---+-+--
   aggregate | 1-255   | nothing
   backup| nothing | nothing
   flavor| 1-255   | '^[a-zA-Z0-9. _-]*[a-zA-Z0-9_-]+
 | |   [a-zA-Z0-9. _-]*$'
   keypair   | 1-255   | '^[a-zA-Z0-9 _-]+$'
   server| 1-255   | nothing
   cell  | 1-255   | don't contain "." and "!"

In the v2.1 API, we have applied the same restriction rule[1] to all
resource names for API consistency, so maybe we need to consider
this topic for all names.

[1]:
https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L44

Honestly, I bet this had to do with how the database used to be set
up.


So it turns out that utf8 support in MySQL does not support UTF-8 4 byte
multibyte characters (only 1-3 bytes). For example if you do a create
image call with an image name to glance with a 4 byte multibyte
character in the name it will 500. I'd guess we do something
similar in places with the Nova API where we have inadequate input
validation. If you want 4 byte support you need to use utf8mb4 instead.
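For reference, the v2 flavor pattern from the table above and the 4-byte check can be expressed as follows (a sketch only; the helper name is invented):

```python
import re

# The v2 flavor-name pattern from the table above, plus a check for the
# 4-byte UTF-8 code points that MySQL's 3-byte "utf8" charset rejects
# (anything above U+FFFF needs the utf8mb4 charset instead).
FLAVOR_NAME = re.compile(
    r'^[a-zA-Z0-9. _-]*[a-zA-Z0-9_-]+[a-zA-Z0-9. _-]*$')

def needs_utf8mb4(name):
    """True if storing `name` requires utf8mb4 (code points above U+FFFF)."""
    return any(ord(ch) > 0xFFFF for ch in name)

assert FLAVOR_NAME.match("m1.small")
assert not FLAVOR_NAME.match("bad/flavor")
assert needs_utf8mb4("pile \U0001F4A9")   # U+1F4A9 is 4 bytes in UTF-8
assert not needs_utf8mb4("caf\u00e9")     # é fits in 3 bytes
```

This is the kind of test Chris suggests: verify that anything allowed past input validation can actually be stored by the configured database charset.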

Oh... fun. :(


I don't know if postgresql has any restrictions (I don't think it
does) or if db2 does too. But I don't think we can/should make it a
complete free for all. It should at most be what most databases support.

I think it's a big enough change that this late in the cycle we should
push it off to Kilo. It's always much easier to loosen input validation
than tighten it (or have to have an "oops" revert on an officially
released Nova). Perhaps we should add some tests to verify that everything we
allow past the input validation checks can actually be stored.

So, honestly, that seems like a pendulum swing in an odd way.

Havana "use anything you want!"
Icehouse ?
Juno "strict ascii!"
Kilo "utf8"

Can't we just catch the db exception correctly in glance and not have it
explode? And then allow it. Exploding with a 500 on a bad name seems the
wrong thing to do anyway.

That would also mean that if the user changed their database to support
utf8mb4 (which they might want to do if it was important to them) it
would all work.

I think some release notes would be fine to explain the current
situation and limitations.

Basically, lets skate towards the puck here, realizing some corner cases
exist, but that we're moving in the direction we want to be, not back
tracking.

When we can return the json-schema to the user in the future, can we say 
that whether the API accepts utf8 or utf8mb4 is discoverable? If it is 
discoverable, then we needn't limit anything in our Python code.
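A sketch of what that discoverability could look like: the server advertises its name schema, and the client validates against the advertised pattern instead of hard-coding it. The schema contents here are illustrative, and a real client would use a full jsonschema validator rather than this hand-rolled check:

```python
import re

# Illustrative only: a server-advertised jsonschema fragment for resource
# names.  If the API returns this, the allowed character set becomes
# discoverable and nothing needs to be hard-coded client-side.
NAME_SCHEMA = {
    "type": "string",
    "minLength": 1,
    "maxLength": 255,
    "pattern": "^[a-zA-Z0-9. _-]+$",
}

def client_side_valid(name, schema):
    """Minimal stand-in for a jsonschema validator, for illustration."""
    return (schema["minLength"] <= len(name) <= schema["maxLength"]
            and re.match(schema["pattern"], name) is not None)

assert client_side_valid("web-server-1", NAME_SCHEMA)
assert not client_side_valid("", NAME_SCHEMA)
```

A deployment on utf8mb4 could then simply advertise a looser pattern, and clients would adapt without a code change.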





[openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-18 Thread Alex Xu
Close to Kilo, it is time to think about what's next for the Nova API. In 
Kilo, we

will continue to develop the important micro-version feature.

The previous v2-on-v3 proposal included some implementations that could be
used for micro-versions.
(https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst)
But in the end, those implementations were considered too complex.

So I'm trying to find a simpler implementation and solution for 
micro-versions.


I wrote down some ideas in a blog post at:
http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/

I have also done some PoC work for those ideas; you can find it in the 
blog post.
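One direction such a PoC can take is dispatching a request to different handlers based on the requested micro-version. A toy sketch (the decorator and registry names are invented for illustration, and versions are floats only for brevity; a real implementation must compare (major, minor) tuples, since 2.10 > 2.9 does not hold for floats):

```python
# Toy micro-version dispatch.  Names are invented; versions are floats
# for brevity only -- real micro-versions need (major, minor) tuple
# comparison because 2.10 > 2.9 is false for floats.

VERSIONED_METHODS = {}

def api_version(min_ver, max_ver):
    """Register the decorated handler for versions in [min_ver, max_ver]."""
    def decorator(func):
        VERSIONED_METHODS.setdefault(func.__name__, []).append(
            (min_ver, max_ver, func))
        return func
    return decorator

def dispatch(name, requested):
    """Pick the handler whose version range covers the requested version."""
    for min_ver, max_ver, func in VERSIONED_METHODS[name]:
        if min_ver <= requested <= max_ver:
            return func()
    raise ValueError("no handler for version %s" % requested)

@api_version(2.1, 2.3)
def show():
    return {"server": {}}

@api_version(2.4, 9.99)   # same name, later version range
def show():
    return {"server": {"new_field": True}}

assert dispatch("show", 2.2) == {"server": {}}
assert dispatch("show", 2.5) == {"server": {"new_field": True}}
```

The appeal of this shape is that old and new behaviour live side by side in the same plugin, and the requested version picks one at call time.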


As discussed in the Nova API meeting, we want to bring this up on the 
mailing list for discussion.

Hope we can get more ideas and options from all developers.

We would appreciate any comments and suggestions!

Thanks
Alex




Re: [openstack-dev] [nova] Can we deprecate the server backup API please?

2018-11-18 Thread Alex Xu
Sounds sensible to me, and then we wouldn't need to fix this strange behaviour
either: https://review.openstack.org/#/c/409644/

On Sat, Nov 17, 2018 at 3:56 AM, Jay Pipes wrote:

> The server backup API was added 8 years ago. It has Nova basically
> implementing a poor-man's cron for some unknown reason (probably because
> the original RAX Cloud Servers API had some similar or identical
> functionality, who knows...).
>
> Can we deprecate this functionality please? It's confusing for end users
> to have an `openstack server image create` and `openstack server backup
> create` command where the latter does virtually the same thing as the
> former only sets up some whacky cron-like thing and deletes images after
> some number of rotations.
>
> If a cloud provider wants to offer some backup thing as a service, they
> could implement this functionality separately IMHO, store the user's
> requested cronjob state in their own system (or in glance which is kind
> of how the existing Nova createBackup functionality works), and run a
> simple cronjob executor that ran `openstack server image create` and
> `openstack image delete` as needed.
>
> This is a perfect example of an API that should never have been added to
> the Compute API, in my opinion, and removing it would be a step in the
> right direction if we're going to get serious about cleaning the Compute
> API up.
>
> Thoughts?
> -jay
>
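For what it's worth, the external implementation Jay describes could be as small as a cron-driven script shelling out to the existing CLI commands. A rough sketch (the snapshot naming scheme, rotation logic, and helper are invented for illustration; this is not an existing tool):

```python
import subprocess
from datetime import datetime

# Sketch of the external "poor-man's cron" replacement: snapshot a server
# with `openstack server image create`, then delete the oldest snapshots
# beyond the rotation count.  Naming scheme and helper are invented.

def backup_and_rotate(server, rotation=3, run=subprocess.check_output):
    stamp = datetime.utcnow().strftime("%Y%m%d%H%M%S")
    name = "%s-backup-%s" % (server, stamp)
    run(["openstack", "server", "image", "create", "--name", name, server])
    listing = run(["openstack", "image", "list", "-f", "value", "-c", "Name"])
    backups = sorted(n for n in listing.decode().splitlines()
                     if n.startswith("%s-backup-" % server))
    for old in backups[:-rotation]:   # keep only the newest `rotation`
        run(["openstack", "image", "delete", old])
```

Run it from an ordinary crontab entry; nothing about it needs to live inside the Compute API.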
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [Nova] Remove ratelimits from nova v3 api

2013-11-25 Thread Alex Xu

Hi, guys,

It looks like ratelimits are not really useful. The reason was already pointed 
out by this patch:

https://review.openstack.org/#/c/34821/ - thanks Joe for pointing it out.

So the v3 API is a chance to clean this up. If no one objects, I will send a 
patch to get rid of the ratelimits code for the v3 API.

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Splitting up V3 API admin-actions plugin

2013-12-01 Thread Alex Xu

On 2013-12-01 21:39, Christopher Yeoh wrote:

Hi,

At the summit we agreed to split lock/unlock, pause/unpause, and 
suspend/unsuspend
functionality out of the V3 version of admin actions into separate 
extensions, to make it easier for deployers to load only the 
functionality that they want.


Remaining in admin_actions we have:

migrate
live_migrate
reset_network
inject_network_info
create_backup
reset_state

I think it makes sense to separate out migrate and live_migrate into a 
migrate plugin as well.


What do people think about the others? There is no real overhead to 
having them in separate
plugins and totally removing admin_actions. Does anyone have any 
objections to this being done?


I have a question about reset_network and inject_network_info. Are they useful 
for the v3 API? The network info (IP address, gateway, ...) should be pushed

by the DHCP service provided by Neutron. And we don't like any injection.



Also in terms of grouping I don't think any of the others remaining 
above really belong together, but welcome any suggestions.


Chris





Re: [openstack-dev] [openstack-qa] [qa][nova] The document for the changes from Nova v2 api to v3

2013-12-10 Thread Alex Xu
Yeah, that's a good idea. That can keep the doc updated with the code. I will 
try to convert the wiki page to RST format, then submit it to Gerrit.


On 2013-12-07 00:12, Anne Gentle wrote:

Hi all,
Now that you've got a decent start, how about checking it in as a 
doc/source/ document in the nova repository? Seems better than keeping 
it in the wiki so that you can update with code changes.

Anne


On Wed, Nov 13, 2013 at 11:11 PM, Alex Xu <x...@linux.vnet.ibm.com> wrote:


On 2013-11-14 07:09, Christopher Yeoh wrote:

On Thu, Nov 14, 2013 at 7:52 AM, David Kranz <dkr...@redhat.com> wrote:

    On 11/13/2013 08:30 AM, Alex Xu wrote:

Hi, guys

This is the document for the changes from the Nova v2 API to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I would appreciate it if anyone can help review it.

Another problem comes up: how to keep the doc updated. Can
we ask people who change
something in API v3 to update the doc accordingly? I think
that's a way to resolve it.

Thanks
Alex



___
openstack-qa mailing list
openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa

Thanks, this is great. I fixed a bug in the os-services
section. BTW, the openstack...@lists.openstack.org list is obsolete.
openstack-dev with subject starting with [qa] is the current
"qa list". About updating, I think this will have to be
heavily socialized in the nova team. The initial review
should happen by those reviewing the tempest v3 api changes.
That is how I found the os-services bug.


Can we leverage off the DocImpact flag with the commit message
somehow - say anytime there is a changeset
with DocImpact and that changes a file under
nova/api/openstack/compute we generate a notification?

I think we're getting much better at enforcing the DocImpact flag
during reviews.

+1 for the DocImpact flag, that's a good idea.
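The check Chris describes is simple to express: notify whenever a changeset carries the DocImpact flag and touches the compute API tree. A sketch (the helper name is invented; a real hook would plug into Gerrit's event stream):

```python
# Sketch of the proposed notification rule: DocImpact in the commit
# message AND a changed file under nova/api/openstack/compute.
# Helper name invented; a real implementation would hook Gerrit events.

def needs_doc_notification(commit_message, changed_files):
    return ("DocImpact" in commit_message and
            any(path.startswith("nova/api/openstack/compute")
                for path in changed_files))

assert needs_doc_notification(
    "Add flavor filter\n\nDocImpact",
    ["nova/api/openstack/compute/servers.py"])
assert not needs_doc_notification(
    "Add flavor filter", ["nova/api/openstack/compute/servers.py"])
```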


Chris.











Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Alex Xu

On 2013-12-12 04:41, Ryan Petrello wrote:

Hello,

I’ve spent the past week experimenting with using Pecan for Nova’s API, and 
have opened an experimental review:

https://review.openstack.org/#/c/61303/6

…which implements the `versions` v3 endpoint using pecan (and paves the way for 
other extensions to use pecan).  This is a *potential* approach I've considered 
for gradually moving the V3 API, but I’m open to other suggestions (and 
feedback on this approach).  I’ve also got a few open questions/general 
observations:

1.  It looks like the Nova v3 API is composed *entirely* of extensions 
(including “core” API calls), and that extensions and their routes are 
discoverable and extensible via installed software that registers itself via 
stevedore.  This seems to lead to an API that’s composed of installed software, 
which in my opinion, makes it fairly hard to map out the API (as opposed to how 
routes are manually defined in other WSGI frameworks).  I assume at this time, 
this design decision has already been solidified for v3?

2.  The approach in my review would allow us to translate extensions to pecan 
piecemeal.  To me, this seems like a more desirable and manageable approach 
than moving everything to pecan at once, given the scale of Nova’s API.  Do 
others agree/disagree?  Until all v3 extensions are translated, this means the 
v3 API is composed of two separate WSGI apps.

+1 for this too.
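Running the v3 API as two WSGI apps during the piecemeal port could be done with a trivial dispatcher in front of them. A sketch (the app bodies and ported-path list are invented for illustration):

```python
# Minimal sketch of serving one API from two WSGI apps during a
# piecemeal port: routes already moved to the Pecan app are dispatched
# there; everything else falls through to the legacy app.  Paths and
# app bodies are invented for illustration.

def legacy_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"legacy"]

def pecan_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"pecan"]

PORTED_PREFIXES = ("/v3/versions",)   # grows as extensions are ported

def composite_app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    app = pecan_app if path.startswith(PORTED_PREFIXES) else legacy_app
    return app(environ, start_response)
```

As each extension is ported, its prefix moves into the ported set, until the legacy app is empty and can be dropped.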

3.  Can somebody explain the purpose of the wsgi.deserializer decorator?  It’s 
something I’ve not accounted for yet in my pecan implementation.  Is the goal 
to deserialize the request *body* from e.g., XML into a usable data structure?  
Is there an equivalent for JSON handling?  How does this relate to the schema 
validation that’s being done in v3?

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-18 Thread Alex Xu
I also replied with some comments on Gerrit. If we have a patch 
demonstrating a Pecan-style extension, that would be great.


Thanks
Alex
On 2013-12-18 05:08, Ryan Petrello wrote:

So any additional feedback on this patch?  I’d love to start working on porting 
some of the other extensions to pecan, but want to make sure I’ve got approval 
on this approach first.

https://review.openstack.org/#/c/61303/7

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Dec 14, 2013, at 10:45 AM, Doug Hellmann  wrote:




On Sat, Dec 14, 2013 at 7:55 AM, Christopher Yeoh  wrote:

On Sat, Dec 14, 2013 at 8:48 AM, Doug Hellmann  
wrote:
That covers routes. What about the properties of the inputs and outputs?


I think the best way for me to describe it is that as the V3 API core and all 
the extensions
are written, both the routes and input and output parameters are from a 
client's perspective fixed at application
startup time. It's not an inherent restriction of the framework (an extension 
could for example dynamically load another extension at runtime if it really 
wanted to), but we just don't do that.

OK, good.

  


Note that values of parameters returned can be changed by an extension though. 
For example os-hide-server-addresses
can based on a runtime policy check and the vm_state of the server, filter 
whether the values in the
addresses field are filtered out or not when returning information about a 
server. This isn't a new thing in the
V3 API though, it already existed in the V2 API.

OK, it seems like as long as the fields are still present that makes the API at 
least consistent for a given deployment's configuration.

Doug

  


Chris
  


On Fri, Dec 13, 2013 at 4:43 PM, Ryan Petrello  
wrote:
Unless there’s some other trickiness going on that I’m unaware of, the routes 
for the WSGI app are defined at application startup time (by methods called in 
the WSGI app’s __init__).

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Dec 13, 2013, at 12:56 PM, Doug Hellmann  wrote:




On Thu, Dec 12, 2013 at 9:22 PM, Christopher Yeoh  wrote:
On Fri, Dec 13, 2013 at 4:12 AM, Jay Pipes  wrote:
On 12/11/2013 11:47 PM, Mike Perez wrote:
On 10:06 Thu 12 Dec , Christopher Yeoh wrote:
On Thu, Dec 12, 2013 at 8:59 AM, Doug Hellmann <doug.hellm...@dreamhost.com> wrote:




On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello <ryan.petre...@dreamhost.com> wrote:

Hello,

I’ve spent the past week experimenting with using Pecan for Nova’s API
and have opened an experimental review:

https://review.openstack.org/#/c/61303/6

…which implements the `versions` v3 endpoint using pecan (and paves the
way for other extensions to use pecan).  This is a *potential* approach
I've considered for gradually moving the V3 API, but I’m open to other
suggestions (and feedback on this approach).  I’ve also got a few open
questions/general observations:

1.  It looks like the Nova v3 API is composed *entirely* of extensions
(including “core” API calls), and that extensions and their routes are
discoverable and extensible via installed software that registers itself
via stevedore.  This seems to lead to an API that’s composed of
installed software, which in my opinion, makes it fairly hard to map out
the API (as opposed to how routes are manually defined in other WSGI
frameworks).  I assume at this time, this design decision has already
been solidified for v3?


Yeah, I brought this up at the summit. I am still having some
trouble understanding how we are going to express a stable core
API for compatibility testing if the behavior of the API can be
varied so significantly by deployment decisions. Will we just
list each
"required"
extension, and forbid any extras for a compliant cloud?


Maybe the issue is caused by me misunderstanding the term
"extension," which (to me) implies an optional component but is
perhaps reflecting a technical implementation detail instead?


Yes and no :-) As Ryan mentions, all API code is a plugin in the V3
API. However, some must be loaded or the V3 API refuses to start
up. In nova/api/openstack/__init__.py we have
API_V3_CORE_EXTENSIONS which hard codes which extensions must be
loaded and there is no config option to override this (blacklisting
a core plugin will result in the V3 API not starting up).

So for compatibility testing I think what will probably happen is
that we'll be defining a minimum set (API_V3_CORE_EXTENSIONS) that
must be implemented and clients can rely on that always being
present
on a compliant cloud. But clients can also then query through
/extensions what other functionality (which is backwards compatible
with respect to core) may also be present on that specific cloud.

This really seems similar to the idea of having a router class, some
controllers and you map them. From my observation at the summit,
calling everything an extension creates confusion. An extension
"extends" something. For exa

Re: [openstack-dev] [nova] where to expose network quota

2014-01-06 Thread Alex Xu

On 2014-01-06 16:47, Yaguang Tang wrote:

Hi all,

Now Neutron has its own quota management API for network-related 
items (floating IPs, security groups, etc.) which are also managed by 
Nova. When using Nova with Neutron as the network service, the network-related 
quota items are stored in two different databases and managed 
by different APIs.

I'd like your suggestions on which of the following is best to fix the 
issue.


1. Let Nova proxy all network-related quota operations (update, 
list, delete) through the Neutron API.

For the v2 API, I think this is the right way; there is already a patch 
doing a similar thing: https://review.openstack.org/#/c/43822/

For the v3 API, it will filter out network-related quotas; there is also a patch 
for filtering security groups: https://review.openstack.org/#/c/58760/


2. Filter network-related quota info from Nova when using Neutron as the 
network service, and change novaclient to get quota info from both the 
Nova and Neutron quota APIs.
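Option 2 amounts to a key filter over the quota dict Nova returns. A minimal sketch (the key set and helper name are illustrative, not Nova's authoritative list):

```python
# Sketch of option 2: strip the network-related items from Nova's quota
# view when Neutron is the network service.  The key set is illustrative,
# not Nova's authoritative list.

NETWORK_QUOTAS = {"floating_ips", "fixed_ips", "security_groups",
                  "security_group_rules"}

def filter_network_quotas(quotas, using_neutron):
    if not using_neutron:
        return dict(quotas)
    return {k: v for k, v in quotas.items() if k not in NETWORK_QUOTAS}

quotas = {"instances": 10, "cores": 20, "floating_ips": 10,
          "security_groups": 10}
assert filter_network_quotas(quotas, True) == {"instances": 10, "cores": 20}
assert filter_network_quotas(quotas, False) == quotas
```

novaclient would then merge this filtered view with whatever the Neutron quota API reports.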


--
Tang Yaguang

Canonical Ltd. | www.ubuntu.com  | 
www.canonical.com 

Mobile:  +86 152 1094 6968
gpg key: 0x187F664F




Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-14 Thread Alex Xu
+1 for dropping XML. But if we can't drop it, can we think about using 
XMLDictSerializer instead of XmlTemplate? We spend a lot of time 
maintaining XmlTemplate, and it makes the XML
format inconsistent (some resource attributes are output as XML 
sub-elements, some as XML element attributes, with no rule 
for choosing). If we just use XMLDictSerializer for all APIs, can we 
test only XMLDictSerializer rather than every API?


On 2014-01-13 22:38, Sean Dague wrote:
I know we've been here before, but I want to raise this again while 
there is still time left in icehouse.


I would like to propose that the Nova v3 API removes the XML payload 
entirely. It adds complexity to the Nova code, and it requires 
duplicating all our test resources, because we need to do everything 
once for JSON and once for XML. Even worse, the dual payload strategy 
that nova employed leaked out to a lot of other projects, so they now 
think maintaining 2 payloads is a good thing (which I would argue it 
is not).


As we started talking about reducing tempest concurrency in the gate, 
I was starting to think a lot about what we could shed that would let 
us keep up a high level of testing, but bring our overall time back 
down. The fact that Nova provides an extremely wide testing surface 
makes this challenging.


I think it would be a much better situation if the Nova API is a 
single payload type. The work on the jsonschema validation is also 
something where I think we could get to a fully discoverable API, 
which would be huge.


If we never ship v3 API with XML as stable, we can deprecate it 
entirely, and let it die with v2 ( probably a year out ).


-Sean






Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-15 Thread Alex Xu
On 2014-01-14 20:04, Ken'ichi Ohmichi wrote:
> Hi,
>
> 2014/1/14 Alex Xu :
>> +1 for drop xml. But if we can't drop it, can we think about use
>> XMLDictSerializer instead of XmlTemplate? We spend a lot of time to maintain
>> XmlTemplate, and it make xml
>> format inconsistent(some of resouce's attribute is output as xml
>> sub-element, some of them is output as xml element's attribute, no rule for
>> them). If we just use XMLDictSerializer for all api, can we just test
>> XMLDictSerializer, not all the api?
> I like the idea that we use the same serializer for xml.
>
> Tempest contains many specific deserializers for each nova APIs, and that 
> makes
> tempest code complex now. If using the same serializer, I guess we can reduce
> Tempest code also.
>
> In addition, can we use the same deserializer for xml request body?
I think we can; as far as I know, Neutron only uses XMLDictSerializer/Deserializer.
> If doing it, we will be able to generate xml response documents from 
> jsonschema
> API definitions because of fixing the deserializing rule.
>
>
> Thanks
> Ken'ichi Ohmichi
>
> ---
>> On 2014-01-13 22:38, Sean Dague wrote:
>>> I know we've been here before, but I want to raise this again while there
>>> is still time left in icehouse.
>>>
>>> I would like to propose that the Nova v3 API removes the XML payload
>>> entirely. It adds complexity to the Nova code, and it requires duplicating
>>> all our test resources, because we need to do everything onces for JSON and
>>> once for XML. Even worse, the dual payload strategy that nova employed
>>> leaked out to a lot of other projects, so they now think maintaining 2
>>> payloads is a good thing (which I would argue it is not).
>>>
>>> As we started talking about reducing tempest concurrency in the gate, I
>>> was starting to think a lot about what we could shed that would let us keep
>>> up a high level of testing, but bring our overall time back down. The fact
>>> that Nova provides an extremely wide testing surface makes this challenging.
>>>
>>> I think it would be a much better situation if the Nova API is a single
>>> payload type. The work on the jsonschema validation is also something where
>>> I think we could get to a fully discoverable API, which would be huge.
>>>
>>> If we never ship v3 API with XML as stable, we can deprecate it entirely,
>>> and let it die with v2 ( probably a year out ).
>>>
>>> -Sean
>>>
>>




Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-21 Thread Alex Xu

On 2014-01-08 23:12, Jay Lau wrote:

Thanks Russell, OK, will file a bug for first issue.

For the second question, I want to share some of my comments here. I think 
that we should disable cold migration for an ACTIVE VM, as cold 
migration will first destroy the VM and then re-create it when using 
KVM; I did not see a use case where someone would want to do this.

Furthermore, this might confuse end users; it's really strange that 
both cold migration and live migration can migrate an ACTIVE VM. Cold 
migration should only target STOPPED VM instances.


I think cold-migrating an ACTIVE VM is OK. The difference between cold migration 
and live migration is that with live migration there is no downtime for the VM. 
Cold migration brings the VM

down first, then migrates it.



What do you think?

Thanks,

Jay



2014/1/8 Russell Bryant <rbry...@redhat.com>:

On 01/08/2014 04:52 AM, Jay Lau wrote:
> Greetings,
>
> I have a question related to cold migration.
>
> Now in OpenStack nova, we support live migration, cold migration
and resize.
>
> For live migration, we do not need to confirm after live
migration finished.
>
> For resize, we need to confirm, as we want to give end user an
> opportunity to rollback.
>
> The problem is cold migration, because cold migration and resize
share
> same code path, so once I submit a cold migration request and
after the
> cold migration finished, the VM will goes to verify_resize
state, and I
> need to confirm resize. I felt a bit confused by this, why do I
need to
> verify resize for a cold migration operation? Why not reset the
VM to
> original state directly after cold migration?

The confirm step definitely makes more sense for the resize case.  I'm
not sure if there was a strong reason why it was also needed for cold
migration.

If nobody comes up with a good reason to keep it, I'm fine with
removing
it.  It can't be changed in the v2 API, though.  This would be a
v3 only
change.

> Also, I think that probably we need split compute.api.resize()
to two
> apis: one is for resize and the other is for cold migrations.
>
> 1) The VM state can be either ACTIVE and STOPPED for a resize
operation
> 2) The VM state must be STOPPED for a cold migrate operation.

I'm not sure why would require different states here, though.  ACTIVE
and STOPPED are allowed now.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] Checking before delete flavor?

2014-01-27 Thread Alex Xu

On 2014-01-26 20:05, Rui Chen wrote:

Hi Stackers:

Some instance operations and flavors are closely connected, for
example, resize.
If I delete the flavor while resizing an instance, the instance will go
into an error state. Like this:


1. run instance with flavor A
2. resize instance from flavor A to flavor B
3. delete flavor A
4. resize-revert instance
5. instance goes into the error state


Hi Rui Chen, in the latest code, resize-revert will return status code 400
with a message that the flavor was not found in this case.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1101
Which OpenStack version are you running?

Thanks


Which of the following ways do we think is better? Or do you have another way?

1. List instances filtered by flavor A, verify that no instance is
associated with flavor A, then delete flavor A
2. Delete flavor A; if an instance goes into the error state, reset its
state to active


What is the general approach?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-04 Thread Alex Xu

On 2014-02-04 19:36, Christopher Yeoh wrote:

On Tue, 04 Feb 2014 11:37:29 +0100
Thierry Carrez  wrote:


Christopher Yeoh wrote:

On Tue, Feb 4, 2014 at 12:03 PM, Joe Gordon mailto:joe.gord...@gmail.com>> wrote:

John and I discussed a third possibility:

nova-network v3 should be an extension, so the idea was to: Make
nova-network API a subset of neturon (instead of them adopting our
API we adopt theirs). And we could release v3 without nova network
in Icehouse and add the nova-network extension in Juno.

This would actually be my preferred approach if we can get consensus
around this. It takes a lot of pressure off this late in the cycle
and there's less risk around having to live with a nova-network API
in V3 that still has some rough edges around it. I imagine it will
be quite a while before we can deprecate the V2 API so IMO going
one cycle without nova-network support is not a big thing.

So user story would be, in icehouse release (nothing deprecated yet):
v2 + nova-net: supported
v2 + neutron: supported
v3 + nova-net: n/a
v3 + neutron: supported

And for juno:
v2 + nova-net: works, v2 could be deprecated
v2 + neutron: works, v2 could be deprecated
v3 + nova-net: works through extension, nova-net could be deprecated

So to be clear the idea I think is that nova-net of "v3 + nova-net"
would look like the neutron api. Eg nova-net API from v2 would look
quite different to 'nova-net' API from v3. To minimise the transition
pain for users on V3 moving to a neutron based cloud. Though those
moving from v2 + nova-net to v3 + nova-net would have to cope with more changes.


I have another idea: move all nova-net APIs from v2 to v3 as
sub-resources of 'os-nova-network' (maybe another name: os-legacy-network?).
So the nova-net v3 API would look like:
'/v3/os-nova-network/os-floating-ips', '/v3/os-nova-network/os-networks'.
Apart from the 'os-nova-network' prefix, nova-net v3 is the same as nova-net v2,
so users needn't cope with a lot of changes,
and we needn't redesign an extension for nova-net v3. Also, if we can
move all those nova-net v3 extensions into a
sub-directory 'nova/api/openstack/compute/plugins/v3/os-nova-network/',
that would be great.





v3 + neutron: supported (encouraged future-proof combo)

That doesn't sound too bad to me. Lets us finalize v3 core in icehouse
and keeps a lot of simplification / deprecation options open for Juno,
depending on how the nova-net vs. neutron story pans out then.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








[openstack-dev] [Nova][Neutron] Neutron network in Nova V3 API

2014-02-12 Thread Alex Xu

Hi, guys,

I'm working on the neutron network stuff in the nova V3 API. We will only pass port
ids when creating a server, and
Nova won't proxy any neutron calls in the future. I plan to add a new v3
network extension that only
accepts port ids as parameters, and passes those port ids in the
old request_networks parameter.
(line 50 in
https://review.openstack.org/#/c/36615/13/nova/api/openstack/compute/plugins/v3/networks.py)
The rest of the code path is the same as before. In the next release, or when we
remove nova-network, we
can add a neutron-specific code path. But the v2 and v3 APIs' behavior is
different, so I need to change a few things
before adding the new network API. I want to hear your suggestions
first, to make sure I'm working in the right

direction.


1. Disable automatic port allocation when creating a server without any
port in the v3 API.
When a user creates a server without any port, the new server shouldn't be
created with any ports in V3.
But in v2, nova-compute will allocate ports from existing networks. I plan
to pass a parameter down to
nova-compute that tells it not to allocate ports for the new
server, and also to keep the old behavior

for the v2 API.

2. Disable deleting ports from neutron when removing a server in the v3 API.
In the v2 API, after removing a server, the ports attached to that server are
removed by nova-compute.
But in the v3 API, we shouldn't proxy any neutron calls. Because some
periodic tasks also delete
servers, just passing a parameter down to nova-compute from the API isn't
enough. So I plan to add a parameter to the instance's
metadata when creating the server. When removing the server, Nova will check the
metadata first; if the server is marked as

created by the v3 API, nova-compute won't remove the attached neutron ports.

3. Enable passing port ids for multiple-server creation.
Currently multiple_create doesn't support passing port ids, and we won't
allocate ports automatically in the v3 API.

So my plan is as below:

When requesting with max_count=2 and ports=[{'id': 'port_id1'}, {'id':
'port_id2'}, {'id': 'port_id3'}, {'id': 'port_id4'}]:
the first server is created with ports 'port_id1' and 'port_id2', and the second
server with ports 'port_id3' and 'port_id4'.


When requesting with max_count=2 and ports=[{'id': 'port_id1'}]:
the request returns a fault.
The request must satisfy len(ports) % max_count == 0
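The port-distribution rule above can be sketched in a few lines. This is a hypothetical helper for illustration only (the function name and shape are not nova code): it rejects requests where len(ports) is not a multiple of max_count and otherwise divides the port list evenly across the servers.

```python
def split_ports(ports, max_count):
    """Divide the requested ports evenly across max_count servers.

    Rejects the request unless len(ports) is a multiple of max_count,
    so every server gets the same number of ports.
    """
    if max_count <= 0:
        raise ValueError("max_count must be positive")
    if len(ports) % max_count != 0:
        raise ValueError("len(ports) must be a multiple of max_count")
    per_server = len(ports) // max_count
    # Slice consecutive chunks: server i gets ports[i*n : (i+1)*n]
    return [ports[i * per_server:(i + 1) * per_server]
            for i in range(max_count)]

ports = [{'id': 'port_id1'}, {'id': 'port_id2'},
         {'id': 'port_id3'}, {'id': 'port_id4'}]
# first server gets port_id1/port_id2, second gets port_id3/port_id4
print(split_ports(ports, 2))
```

A request like max_count=2 with a single port raises ValueError, matching the "request returns a fault" behavior described above.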

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Neutron network in Nova V3 API

2014-02-12 Thread Alex Xu

On 2014-02-12 17:15, Alex Xu wrote:

Hi, guys,

I'm working on the neutron network stuff in the nova V3 API. We will only pass
port ids when creating a server, and
Nova won't proxy any neutron calls in the future. I plan to add a new v3
network extension that only
accepts port ids as parameters, and passes those port ids in the
old request_networks parameter.
(line 50 in
https://review.openstack.org/#/c/36615/13/nova/api/openstack/compute/plugins/v3/networks.py)
The rest of the code path is the same as before. In the next release, or when we
remove nova-network, we
can add a neutron-specific code path. But the v2 and v3 APIs' behavior is
different, so I need to change a few things
before adding the new network API. I want to hear your suggestions
first, to make sure I'm working in the right

direction.


1. Disable automatic port allocation when creating a server without any
port in the v3 API.
When a user creates a server without any port, the new server shouldn't be
created with any ports in V3.
But in v2, nova-compute will allocate ports from existing networks. I
plan to pass a parameter down to
nova-compute that tells it not to allocate ports for the new
server, and also to keep the old behavior

for the v2 API.


reference to https://review.openstack.org/#/c/73000/



2. Disable deleting ports from neutron when removing a server in the v3 API.
In the v2 API, after removing a server, the ports attached to that server
are removed by nova-compute.
But in the v3 API, we shouldn't proxy any neutron calls. Because some
periodic tasks also delete
servers, just passing a parameter down to nova-compute from the API isn't
enough. So I plan to add a parameter to the instance's
metadata when creating the server. When removing the server, Nova will check the
metadata first; if the server is marked as

created by the v3 API, nova-compute won't remove the attached neutron ports.


reference to https://review.openstack.org/#/c/73001/



3. Enable passing port ids for multiple-server creation.
Currently multiple_create doesn't support passing port ids, and we won't
allocate ports automatically in the v3 API.

So my plan is as below:

When requesting with max_count=2 and ports=[{'id': 'port_id1'}, {'id':
'port_id2'}, {'id': 'port_id3'}, {'id': 'port_id4'}]:
the first server is created with ports 'port_id1' and 'port_id2', and the
second server with ports 'port_id3' and 'port_id4'.


When requesting with max_count=2 and ports=[{'id': 'port_id1'}]:
the request returns a fault.
The request must satisfy len(ports) % max_count == 0



reference to https://review.openstack.org/#/c/73002/

The V3 API layer work references:
https://review.openstack.org/#/c/36615/
https://review.openstack.org/#/c/42315/



Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Alex Xu

On 2014-02-20 10:44, Christopher Yeoh wrote:

On Wed, 19 Feb 2014 12:36:46 -0500
Russell Bryant  wrote:


Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following
question: "Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.


Although I'm very eager to get the V3 API released, I do agree with you.
As you have said we will be living with both the V2 and V3 APIs for a
very long time. And at this point there would be simply too many last
minute changes to the V3 API for us to be confident that we have it
right "enough" to release as a stable API.


+1


We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the
case, we haven't done our job.

+1


Let's all take some time to reflect on what has happened with v3 so
far and what it means for how we should move forward.  We can regroup
for Juno.

Finally, I would like to thank everyone who has helped with the effort
so far.  Many hours have been put in to code and reviews for this.  I
would like to specifically thank Christopher Yeoh for his work here.
Chris has done an *enormous* amount of work on this and deserves
credit for it.  He has taken on a task much bigger than anyone
anticipated. Thanks, Chris!

Thanks Russell, that's much appreciated. I'm also very thankful to
everyone who has worked on the V3 API either through patches and/or
reviews, especially Alex Xu and Ivan Zhu who have done a lot of work on
it in Havana and Icehouse.


Thank you, Chris. I hope we get a great v3 API.



Chris.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [nova] Future of the Nova API

2014-02-25 Thread Alex Xu

On 2014-02-25 21:17, Ken'ichi Ohmichi wrote:

2014-02-25 19:48 GMT+09:00 Thierry Carrez :

Sean Dague wrote:

So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project. So whatever
gets decided about v3, the thing that's important to me is a sane way to
be able to add backwards compatible changes (which we actually don't
have today, and I don't think any other service in OpenStack does
either), as well a mechanism for deprecating parts of the API. With some
future decision about whether removing them makes sense.

I agree with Sean. Whatever solution we pick, we need to make sure it's
solid enough that it can handle further evolutions of the Nova API
without repeating this dilemma tomorrow. V2 or V3, we would stick to it
for the foreseeable future.

Between the cleanup of the API, the drop of XML support, and including a
sane mechanism for supporting further changes without major bumps of the
API, we may have enough to technically justify v3 at this point. However
from a user standpoint, given the surface of the API, it can't be
deprecated fast -- so this ideal solution only works in a world with
infinite maintenance resources.

Keeping V2 forever is more like a trade-off, taking into account the
available maintenance resources and the reality of Nova's API huge
surface. It's less satisfying technically, especially if you're deeply
aware of the API incoherent bits, and the prospect of living with some
of this incoherence forever is not really appealing.

What is the maintenance cost for keeping both APIs?
I think Chris and his team have already paid most part of it, the
works for porting
the existing v2 APIs to v3 APIs is almost done.
So I'd like to clarify the maintenance cost we are discussing.

If the cost means that we should implement both API methods when creating a
new API, how about implementing internal proxy from v2 to v3 API?
When creating a new API, it is enough to implement API method for v3 API. and
when receiving a v2 request, Nova translates it to v3 API.
The request styles(url, body) of v2 and v3 are different and this idea makes new
v2 APIs v3 style. but now v2 API has already a lot of inconsistencies.
so it does not seem so big problem.

I want to ask this question too: what is the maintenance cost?
When we release the v3 API, we will freeze the v2 API, so we won't add any new
APIs into v2.

So does that mean the maintenance cost is much lower after the v2 API is frozen?
What I know is that we should keep compute-api backward-compatible with
the v2 API. What

else besides that?



>From the viewpoint of OpenStack interoperability also, I believe we
need a new API.
Many v2 API parameters are not validated. If implementing strict
validation for v2 API,
incompatibility issues happen. That is why we are implementing input
validation for
v3 API. If staying v2 API forever, we should have this kind of problem forever.
v2 API is fragile now. So the interoperability should depend on v2
API, that seems
sandbox.. (I know that it is a little overstatement, but we have found
a lot of this kind
of problem already..)


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [nova] Future of the Nova API

2014-02-26 Thread Alex Xu

On 2014-02-26 18:40, Thierry Carrez wrote:

Kenichi Oomichi wrote:

From: Christopher Yeoh [mailto:cbky...@gmail.com]
So the problem here is what we consider a "bug" becomes a feature from
a user of the API point of view. Eg they really shouldn't be passing
some data in a request, but its ignored and doesn't cause any issues
and the request ends up doing what they expect.

In addition, current v2 API behavior is not consistent when receiving
unexpected API parameters. Most v2 APIs ignore unexpected API parameters,
but some v2 APIs return a BadRequest response. For example, "update host"
API does it in this case by 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/hosts.py#L185

Through v3 API development, we are making all v3 APIs return a BadRequest
in this case. I think we cannot apply this kind of strict validation to
running v2 API.

We may need to differentiate between breaking the API and breaking
corner-case behavior. In one case you force everyone in the ecosystem to
adapt (the libraries, the end user code). In the other you only
(potentially) affect those that were not following the API correctly.

So there may be a middle ground between "sticking with dirty V2 forever"
and "Go to V3 and accept a long V2 deprecation":


Let us find the middle ground. How about this:

v3: This is the totally new API, with the CamelCase fixes, stronger
input validation, API policy checks, and the tasks API.


v2.1: This is based on v3; we transform a v2 request into a v3
request (so they share the same codebase). This API is without the
CamelCase fixes, but it gets the new things from v3: stronger input
validation, API policy checks, and the tasks API. We keep this API for
a long time. v2.1 doesn't break the API; it only affects those who
were not following the v2 API correctly.


v2: just goes away after a shorter deprecation period.

So v3 and v2.1 are based on the same code, which reduces the maintenance
cost. v2 is kept for a short time, giving people a chance to move
to v2.1 or v3.

And v2.1 doesn't break the API, and it's easier to maintain.
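The v2.1 idea above, transforming a v2 request into its v3 form so both share one codebase, can be sketched roughly as a request translator. The key names and the camel-to-snake rule here are invented for illustration and are not nova's actual mapping.

```python
import re

def camel_to_snake(name):
    """Convert a CamelCase/mixedCase key to snake_case."""
    # Insert '_' before every capital letter that is not at the start.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def v2_to_v3_request(body):
    """Rewrite a v2-style (CamelCase) request body into v3 style."""
    return {camel_to_snake(k): v for k, v in body.items()}

print(v2_to_v3_request({'SomeArguments': 1, 'maxCount': 2}))
# {'some_arguments': 1, 'max_count': 2}
```

With such a shim in front, a new API method only needs a v3 implementation; v2.1 requests are translated on the way in.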


We could make a V3 that doesn't break the API, only breaks behavior in
error cases due to its stronger input validation. A V3 that shouldn't
break code that was following the API, nor require heavy library
changes. It's still a major API bump because behavior may change and
some end users will be screwed in the process, but damage is more
limited, so V2 could go away after a shorter deprecation period.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-27 Thread Alex Xu

On 2014-02-28 00:05, Dan Smith wrote:

Sure, but that's still functionally equivalent to using the /v2 prefix.
  So we could chuck the current /v3 code and do:

/v2: Current thing
/v3: invalid, not supported
/v4: added simple task return for server create
/v5: added the event extension
/v6: added a new event for cinder to the event extension

and it would be equivalent.

Yep, sure. This seems more likely to confuse people or clients to me,
but if that's how we decided to do it, then that's fine. The approach to
_what_ we version is my concern.


Does that mean our code looks like the below?

if client_version > 2:
    ...
elif client_version > 3:
    ...
elif client_version > 4:
    ...
elif client_version > 5:
    ...
elif client_version > 6:
    ...

And we'd need to test each version... That looks bad...


And arguably, anything that is a pure "add" could get away with either a
minor version or not touching the version at all.  Only "remove" or
"modify" should have the potential to break a properly-written application.

Totally agree!

--Dan






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Alex Xu

On 2014-02-28 13:40, Chris Friesen wrote:

On 02/27/2014 06:00 PM, Alex Xu wrote:


Does that mean our code looks like the below?

if client_version > 2:
    ...
elif client_version > 3:
    ...
elif client_version > 4:
    ...
elif client_version > 5:
    ...
elif client_version > 6:
    ...

And we'd need to test each version... That looks bad...


I don't think the code would look like that

Each part of the API could look at the version separately.  And each 
part of the API only needs to check the client version if it has made 
a backwards-incompatible change.


So a part of the API that only made one backwards-incompatible change 
at version 3 would only need one check.


if client_version >= 3:
    do_newer_something()
else:
    do_something()



Maybe some other part of the API made a change at v6 (assuming global 
versioning).  That part of the API would also only need one check.



if client_version >= 6:
    do_newer_something()
else:
    do_something()



Yes, I know. But it still looks bad :(

In the API code, it will look like the below:

def do_something(self, body):
    if client_version == 2:
        args = body['SomeArguments']
    elif client_version == 3:
        args = body['some_arguments']

    try:
        ret = self.compute_api.do_something(args)
    except exception.SomeException:
        if client_version == 2:
            raise exception.HTTPBadRequest()
        elif client_version == 4:
            raise exception.HTTPConflictRequest()

    if client_version == 2:
        return {'SomeArguments': ret}
    elif client_version == 3:
        return {'some_arguments': ret}
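One way to keep such version checks out of every method body is to register a handler per minimum client version and dispatch to the newest one the client allows. This is only a sketch of the idea under discussion; the registry, decorator, and handler names are invented, not nova's implementation.

```python
# Maps handler name -> list of (min_version, function) pairs.
VERSION_HANDLERS = {}

def api_version(min_version):
    """Decorator: register the function as valid from min_version on."""
    def register(func):
        VERSION_HANDLERS.setdefault(func.__name__, []).append(
            (min_version, func))
        return func
    return register

def dispatch(name, client_version, *args, **kwargs):
    """Call the newest registered handler the client version allows."""
    handlers = sorted(VERSION_HANDLERS[name],
                      key=lambda pair: pair[0], reverse=True)
    for min_version, func in handlers:
        if client_version >= min_version:
            return func(*args, **kwargs)
    raise ValueError("unsupported version: %s" % client_version)

@api_version(2)
def do_something(body):
    return {'SomeArguments': body['SomeArguments']}

@api_version(3)
def do_something(body):  # noqa: F811 -- intentionally re-registered for v3
    return {'some_arguments': body['some_arguments']}

print(dispatch('do_something', 2, {'SomeArguments': 'x'}))
# {'SomeArguments': 'x'}
print(dispatch('do_something', 3, {'some_arguments': 'x'}))
# {'some_arguments': 'x'}
```

Each version's behavior lives in its own function, so a new version adds a handler instead of another elif branch.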



Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [nova] V3 API support

2014-12-07 Thread Alex Xu
I think Chris is on vacation. We moved the V3 API to V2.1. V2.1 has some
improvements compared to V2. You can find more detail at
http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/v2-on-v3-api.html

We need to support instance tags for V2.1. And in your patch, we don't need
json-schema for V2, just for V2.1.

Thanks
Alex

2014-12-04 20:50 GMT+08:00 Sergey Nikitin :

> Hi, Christopher,
>
> I am working on an API extension for instance tags (
> https://review.openstack.org/#/c/128940/). Recently one reviewer asked me
> to add  V3 API support. I talked with Jay Pipes about it and he told me
> that V3 API became useless. So I wanted to ask you and our community: "Do
> we need to support v3 API in future nova patches?"
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Using query string or request body to pass parameter

2014-12-07 Thread Alex Xu
Hi,

I have a question about using the query string or the request body to pass
parameters in a REST API.

I found this question when reviewing this spec:
https://review.openstack.org/#/c/131633/6..7/specs/kilo/approved/judge-service-state-when-deleting.rst

I think using the request body has more benefits:
1. The request body can be validated by json-schema
2. json-schema can document what the parameter accepts

Should we have a guideline for this?
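To illustrate benefit 1 and 2 above, here is a minimal hand-rolled stand-in for json-schema validation (real code would use the jsonschema library against a proper schema; the 'force' parameter is a made-up example). The schema both rejects bad bodies and documents what the body may contain.

```python
# Schema doubles as documentation: 'force' must be a boolean.
SCHEMA = {
    'force': bool,
}

def validate_body(body, schema):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for key in body:
        if key not in schema:
            errors.append("unexpected parameter: %s" % key)
    for key, expected in schema.items():
        if key not in body:
            errors.append("missing parameter: %s" % key)
        elif not isinstance(body[key], expected):
            errors.append("%s must be %s" % (key, expected.__name__))
    return errors

print(validate_body({'force': True}, SCHEMA))    # []
print(validate_body({'force': 'yes'}, SCHEMA))   # ['force must be bool']
```

Query-string parameters, by contrast, arrive as untyped strings and have no comparable schema mechanism in this setup, which is the benefit argued above.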
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Using query string or request body to pass parameter

2014-12-08 Thread Alex Xu
Not sure about all frameworks; nova is limited at
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79
That is under our control.

Maybe we should ask this question not just for DELETE, but for the other methods too.

2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell :

> On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> > I wonder if we can use a body in DELETE; currently, there isn't any
> > case used in the v2/v3 API.
>
> No, many frameworks raise an error if you try to include a body with a
> DELETE request.
> --
> Kevin L. Mitchell 
> Rackspace
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Using query string or request body to pass parameter

2014-12-08 Thread Alex Xu
Kevin, thanks for the info! I agree with you; the RFC is the authority.
Using a payload in a DELETE isn't a good way.

2014-12-09 7:58 GMT+08:00 Kevin L. Mitchell :

> On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote:
> > Not sure all, nova is limited
> > at
> https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79
> > That under our control.
>
> It is, but the client frameworks aren't, and some of them prohibit
> sending a body with a DELETE request.  Further, RFC7231 has this to say
> about DELETE request bodies:
>
> A payload within a DELETE request message has no defined semantics;
> sending a payload body on a DELETE request might cause some
> existing
> implementations to reject the request.
>
> (§4.3.5)
>
> I think we have to conclude that, if we need a request body, we cannot
> use the DELETE method.  We can modify the operation, such as setting a
> "force" flag, with a query parameter on the URI, but a request body
> should be considered out of bounds with respect to DELETE.
>
> > Maybe not just ask question for delete, also for other method.
> >
> > 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell <
> kevin.mitch...@rackspace.com>:
> > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote:
> > > I wonder if we can use body in delete, currently , there isn't
> any
> > > case used in v2/v3 api.
> >
> > No, many frameworks raise an error if you try to include a body
> with a
> > DELETE request.
> > --
> > Kevin L. Mitchell 
> > Rackspace
>
> --
> Kevin L. Mitchell 
> Rackspace
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-19 Thread Alex Xu
Hi,

There is a problem when evacuating an instance: if the instance is in a server
group with the affinity policy, the instance can't be evacuated off the failed
compute node.

I know the soft affinity policy is under development, but consider this: if an
instance in a server group with hard affinity has no way to get back
when the compute node fails, it's really confusing.

I guess some people are concerned that this would violate the affinity
policy. But the compute node is already down, and all the instances in that
server group are down as well, so I think we needn't care about the policy
anymore.

I wrote up a patch that can fix this problem:
https://review.openstack.org/#/c/135607/


We have had some discussion on gerrit (thanks Sylvain for discussing it with me),
but we are still not sure we are on the right track, so I'm bringing it up
here.

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-21 Thread Alex Xu
2014-12-22 9:01 GMT+08:00 Lingxian Kong :

> 2014-12-19 17:44 GMT+08:00 Alex Xu :
> > Hi,
> >
> > There is problem when evacuate instance. If the instance is in the server
> > group with affinity policy, the instance can't evacuate out the failed
> > compute node.
> >
> > I know there is soft affinity policy under development, but think of if
> the
> > instance in server group with hard affinity means no way to get it back
> when
> > compute node failed, it's really confuse.
> >
> > I guess there should be some people concern that will violate the
> affinity
> > policy. But I think the compute node already down, all the instance in
> that
> > server group are down also, so I think we needn't care about the policy
> > anymore.
>
> but what if the compute node is back to normal? There will be
> instances in the same server group with affinity policy, but located
> in different hosts.
>
>
If the operator decides to evacuate the instance from the failed host, we should
fence the failed host first, so the failed host shouldn't have a chance to
come back.



> >
> > I wrote up a patch can fix this problem:
> > https://review.openstack.org/#/c/135607/
> >
> >
> > We have some discussion on the gerrit (Thanks Sylvain for discuss with
> me),
> > but we still not sure we are on the right direction. So I bring this up
> at
> > here.
> >
> > Thanks
> > Alex
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Regards!
> ---
> Lingxian Kong
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Alex Xu
Joe, thanks, that's a useful feature. But I'm still not sure it's good for this
case. Think of it: the user's server group would be deleted by the administrator,
and a new server group created for the user by the administrator; that sounds
confusing for the user. I'm thinking of the HA case: if a host fails, the
infrastructure can evacuate instances off the failed host automatically, and
the user shouldn't be affected by that (the user will still know his instance is
down, and the instance comes back later; at least we should reduce the
impact).

I think the key is whether we consider evacuating an instance off a failed host
that is in an affinity group a violation or not. The host has already failed, so
we can ignore the failed host in the server group when we evacuate the first
instance to another host. After the first instance is evacuated, there is a new
live host in the server group, and the other instances will be evacuated to
that new live host to comply with the affinity policy.
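The rule argued above, ignore group members on down hosts when checking hard affinity during evacuation, can be sketched as a small predicate. This is only an illustration of the idea; the function and parameter names are invented and are not the actual nova scheduler filter code.

```python
def affinity_allows(candidate_host, group_hosts, up_hosts):
    """Return True if candidate_host satisfies hard affinity.

    group_hosts: hosts currently holding members of the server group.
    up_hosts: hosts known to be alive.
    """
    # Only consider group members whose host is still alive.
    live_group_hosts = {h for h in group_hosts if h in up_hosts}
    # No live member left (e.g. the only host failed): allow any host;
    # the group re-converges as later instances follow the first one.
    if not live_group_hosts:
        return True
    return candidate_host in live_group_hosts

# Host A failed; all instances of the group were on A:
# the first evacuation may land anywhere.
print(affinity_allows('B', {'A'}, up_hosts={'B', 'C'}))       # True
# After the first instance lands on B, the others must follow it.
print(affinity_allows('C', {'A', 'B'}, up_hosts={'B', 'C'}))  # False
print(affinity_allows('B', {'A', 'B'}, up_hosts={'B', 'C'}))  # True
```

This captures both halves of the argument: the first evacuated instance escapes the dead host, and subsequent ones still comply with affinity by following it.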

2014-12-22 11:29 GMT+08:00 Joe Cropper :

> This is another great example of a use case in which these blueprints [1,
> 2] would be handy.  They didn’t make the clip line for Kilo, but we’ll try
> again for L.  I personally don’t think the scheduler should have “special
> case” rules about when/when not to apply affinity policies, as that could
> be confusing for administrators.  It would be simple to just remove it from
> the group, thereby allowing the administrator to rebuild the VM anywhere
> s/he wants… and then re-add the VM to the group once the environment is
> operational once again.
>
> [1] https://review.openstack.org/#/c/136487/
> [2] https://review.openstack.org/#/c/139272/
>
> - Joe
>
> On Dec 21, 2014, at 8:36 PM, Lingxian Kong  wrote:
>
> > 2014-12-22 9:21 GMT+08:00 Alex Xu :
> >>
> >>
> >> 2014-12-22 9:01 GMT+08:00 Lingxian Kong :
> >>>
> >
> >>>
> >>> but what if the compute node is back to normal? There will be
> >>> instances in the same server group with affinity policy, but located
> >>> in different hosts.
> >>>
> >>
> >> If operator decide to evacuate the instance from the failed host, we
> should
> >> fence the failed host first.
> >
> > Yes, actually. I mean the recommendation or prerequisite should be
> > emphasized somewhere, e.g. the Operations Guide; otherwise it'll make
> > things more confusing. But the issue you are working around is indeed a
> > problem we should solve.
> >
> > --
> > Regards!
> > ---
> > Lingxian Kong
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Alex Xu
2014-12-22 10:36 GMT+08:00 Lingxian Kong :

> 2014-12-22 9:21 GMT+08:00 Alex Xu :
> >
> >
> > 2014-12-22 9:01 GMT+08:00 Lingxian Kong :
> >>
>
> >>
> >> but what if the compute node is back to normal? There will be
> >> instances in the same server group with affinity policy, but located
> >> in different hosts.
> >>
> >
> > If operator decide to evacuate the instance from the failed host, we
> should
> > fence the failed host first.
>
> Yes, actually. I mean the recommendation or prerequisite should be
> emphasized somewhere, e.g. the Operation Guide, otherwise it'll make
> things more confused. But the issue you are working around is indeed a
> problem we should solve.
>
>
Yea, you are right, we should doc it if we think this makes sense. Thanks!


> --
> Regards!
> ---
> Lingxian Kong
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-23 Thread Alex Xu
2014-12-22 21:50 GMT+08:00 Sylvain Bauza :

>
> On 22/12/2014 13:37, Alex Xu wrote:
>
>
>
> 2014-12-22 10:36 GMT+08:00 Lingxian Kong :
>
>> 2014-12-22 9:21 GMT+08:00 Alex Xu :
>> >
>> >
>> > 2014-12-22 9:01 GMT+08:00 Lingxian Kong :
>> >>
>>
>> >>
>> >> but what if the compute node is back to normal? There will be
>> >> instances in the same server group with affinity policy, but located
>> >> in different hosts.
>> >>
>> >
>> > If operator decide to evacuate the instance from the failed host, we
>> should
>> > fence the failed host first.
>>
>> Yes, actually. I mean the recommendation or prerequisite should be
>> emphasized somewhere, e.g. the Operation Guide, otherwise it'll make
>> things more confused. But the issue you are working around is indeed a
>> problem we should solve.
>>
>>
>  Yea, you are right, we should doc it if we think this makes sense. Thanks!
>
>
>
> As I said, I'm not in favor of adding more complexity in the instance
> group setup that is done in the conductor for basic race condition reasons.
>

Emm... is there any way we can resolve it for now?


>
> If I understand correctly, the problem is when there is only one host for
> all the instances belonging to a group with affinity filter and this host
> is down, then the filter will deny any other host and consequently the
> request will fail while it should succeed.
>
>
Yes, you understand correctly. Thanks for explaining that; it helps other
people understand what we are talking about.



> Is this really a problem ? I mean, it appears to me that's a normal
> behaviour because a filter is by definition an *hard* policy.
>

Yea, it isn't a problem for the normal case. But it is a problem for VM HA.
So I want to ask whether we should tell users that using the *hard* policy
means losing VM HA. If we make that choice, maybe we should document it
somewhere to notify users. But if users can have the *hard* policy and VM HA
at the same time without us breaking anything (except slightly more complex
code), that sounds good for users.


>
> So, provided you would like to implement *soft* policies, that sounds more
> likely a *weigher* that you would like to have : ie. make sure that hosts
> running existing instances in the group are weighted more than other ones
> so they'll be chosen every time, but in case they're down, allow the
> scheduler to pick other hosts.
>

Yes, a soft policy doesn't have this problem.
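Sylvain's *soft* policy suggestion, a weigher rather than a filter, could be
sketched roughly as follows. This is illustrative only (the function names
are made up, not nova's actual weigher classes): hosts already running group
members score highest, but no host is excluded, so evacuation can still land
somewhere when the preferred host is down:

```python
# Sketch of a "soft" affinity weigher: hosts already running members of the
# instance's server group score higher, but no host is excluded outright,
# so evacuation can still pick a fresh host if the preferred one is down.

def soft_affinity_weight(host_instances, group_members):
    """The more group members a host runs, the higher its weight."""
    return sum(1 for uuid in host_instances if uuid in group_members)

def pick_host(hosts, group_members):
    # hosts: dict mapping host name -> set of instance UUIDs on that host.
    # Prefer the host with the most group members; ties broken by name.
    return max(sorted(hosts),
               key=lambda h: soft_affinity_weight(hosts[h], group_members))

hosts = {
    "compute1": {"uuid-a", "uuid-b"},   # runs two group members
    "compute2": {"uuid-x"},             # runs none
}
group = {"uuid-a", "uuid-b"}
print(pick_host(hosts, group))  # compute1 wins, but compute2 stays eligible
```

If "compute1" disappears from the candidate list (host down), `pick_host`
simply returns another host instead of failing, which is exactly the HA
behaviour a hard filter cannot give.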


>
> HTH,
> -Sylvain
>
>
>
>
> --
>> Regards!
>> ---
>> Lingxian Kong
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Questions on pep8 F811 hacking check for microversion

2015-01-06 Thread Alex Xu
2015-01-06 20:31 GMT+08:00 Jay Pipes :

> On 01/06/2015 06:25 AM, Chen CH Ji wrote:
>
>> Based on nova-specs api-microversions.rst we support the following
>> function definition format, but it violates the pep8 F811 hacking rule
>> because of the duplicate function definition. We should use #noqa for
>> them, but considering that microversions may live for a long time,
>> adding #noqa everywhere may be a little bit ugly. Can anyone suggest a
>> good solution for it? Thanks
>>
>>  >   @api_version(min_version='2.1')
>>  >   def _version_specific_func(self, req, arg1):
>>  >  pass
>>  >
>>  >   @api_version(min_version='2.5')
>>  >   def _version_specific_func(self, req, arg1):
>>  >  pass
>>
>
> Hey Kevin,
>
> This was actually one of my reservations about the proposed
> microversioning implementation -- i.e. having functions that are named
> exactly the same, only decorated with the microversioning notation. It
> kinda reminds me of the hell of debugging C++ code that uses STL: how does
> one easily know which method one is in when inside a debugger?
>
> That said, the only other technique we could try to use would be to not
> use a decorator and instead have a top-level dispatch function that would
> inspect the API microversion (only when the API version makes a difference
> to the output or input of that function) and then dispatch the call to a
> helper method that had the version in its name.
>
> So, for instance, let's say you are calling the controller's GET
> /$tenant/os-hosts method, which happens to get routed to the
> nova.api.openstack.compute.contrib.hosts.HostController.index() method.
> If you wanted to modify the result of that method and the API microversion
> is at 2.5, you might do something like:
>
>  def index(self, req):
>  req_api_ver = utils.get_max_requested_api_version(req)
>  if req_api_ver == (2, 5):
>  return self.index_2_5(req)
>  return self.index_2_1(req)
>
>  def index_2_5(self, req):
>  results = self.index_2_1(req)
>  # Replaces 'host' with 'host_name'
>  for result in results:
>  result['host_name'] = result['host']
>  del result['host']
>  return results
>
>  def index_2_1(self, req):
>  # Would be a rename of the existing index() method on
>  # the controller
>
> Another option would be to use something like JSON-patch to determine the
> difference between two output schemas and automatically translate one to
> another... but that would be a huge effort.
>

JSON-patch alone can only resolve multiple versions of the request/response;
we need to handle semantic changes as well.

But I still think we need something like JSON-patch: it can avoid adding a
new method just for a small request/response change, like this patch
https://review.openstack.org/#/c/144995/3

I have proposed before that we can try mapping the request/response into a
nova object automatically via json-schema; the nova object will handle the
change. When the request/response changes, nothing changes in the REST API
code. I wrote a POC earlier:
https://github.com/soulxu/nova-v3-api-doc/commits/micro_ver_with_obj_auto_mapping
This is easier to maintain than JSON-patch (think about the case of a 3.3
patch based on a 3.2 patch based on a 3.1 patch).
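For comparison, a version registry that gives every handler a distinct name
(so F811 never triggers) and dispatches to the newest satisfied version can
be sketched in a few lines. This is a toy illustration, not nova's actual
microversion machinery; the decorator, registry, and handler names are all
hypothetical:

```python
# Minimal sketch of a version registry that avoids duplicate method names
# (and hence F811): each handler gets a unique name, and a dispatcher picks
# the newest handler whose min_version is satisfied by the request.

_HANDLERS = {}

def api_version(name, min_version):
    def decorator(func):
        _HANDLERS.setdefault(name, []).append((min_version, func))
        _HANDLERS[name].sort(key=lambda item: item[0], reverse=True)
        return func
    return decorator

def dispatch(name, req_version, *args):
    for min_version, func in _HANDLERS[name]:
        if req_version >= min_version:
            return func(*args)
    raise ValueError("no handler for %s at %s" % (name, req_version))

@api_version("index", (2, 1))
def index_v2_1(req):
    return [{"host": "node1"}]

@api_version("index", (2, 5))
def index_v2_5(req):
    # 2.5 renames 'host' to 'host_name'
    return [{"host_name": r["host"]} for r in index_v2_1(req)]

print(dispatch("index", (2, 5), None))  # [{'host_name': 'node1'}]
print(dispatch("index", (2, 3), None))  # [{'host': 'node1'}]
```

The trade-off is the same one Jay raises: unique names are easier to debug,
at the cost of a little explicit dispatch plumbing.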


> That's the only other way I can think of besides disabling F811, which I
> really would not recommend, since it's a valuable safeguard against
> duplicate function names (especially duplicated test methods).
>
> Best,
> -jay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]do we need to have a spec for all api related changes?

2015-01-06 Thread Alex Xu
2015-01-07 11:43 GMT+08:00 Eli Qiao :

>  hi all:
> I have a patch [1] that makes only slight changes to the API; do I need to
> write a spec (which kind of wastes time getting approved)?
> Since api-microversions [2] are almost done, can we just feel free to add
> changes as microversion API changes?
>

We definitely can't 'feel free' to change the API even though we have
microversions, especially for backwards-incompatible changes. The spec
process is there to prevent us from breaking the API accidentally.



> like bump version , write down changes in rest_api_version_history.rst
>
> [1] https://review.openstack.org/#/c/144914/
> [2]
> https://review.openstack.org/#/q/status:merged+project:openstack/nova+branch:master+topic:bp/api-microversions,n,z
>
> --
> Thanks,
> Eli (Li Yong) Qiao
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
Hi, All

There is bug when running nova with ironic
https://bugs.launchpad.net/nova/+bug/1402658

The case is simple: one baremetal node with 1024MB RAM, then boot two
instances with a 512MB RAM flavor.
Those two instances will be scheduled to the same baremetal node.

The problem is that on the scheduler side the IronicHostManager will consume
all the resources for that node regardless of how much the instance actually
uses. But on the compute node side, the ResourceTracker won't consume
resources like that; it consumes them as it would for a normal virtual
instance. The ResourceTracker will update the resource usage once the
instance's resources are claimed, so the scheduler will see free resources
on that node and try to schedule another new instance to it.

I took a look at this: there is NumInstancesFilter, which limits how many
instances can be scheduled to one host. So can we just use this filter to
achieve the goal? The maximum is configured by the option
'max_instances_per_host'; we could make the virt driver report how many
instances it supports. The ironic driver can just report
max_instances_per_host=1, and the libvirt driver can report
max_instances_per_host=-1, meaning no limit. Then we can just remove the
IronicHostManager and make the scheduler side simpler. Does that make sense,
or are there more traps?

Thanks in advance for any feedback and suggestion.

Thanks
Alex
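The proposal, a per-host instance limit reported by the virt driver instead
of one global config value, could be sketched like this. The field names are
hypothetical (the real NumInstancesFilter only reads the config option):

```python
# Sketch of the proposal: the filter reads a per-host instance limit that
# the virt driver reports (-1 meaning unlimited), instead of a single
# global 'max_instances_per_host' config value.

class HostState:
    def __init__(self, name, num_instances, max_instances_per_host):
        self.name = name
        self.num_instances = num_instances
        # Reported by the virt driver: ironic -> 1, libvirt -> -1 (no limit)
        self.max_instances_per_host = max_instances_per_host

def num_instances_filter(host):
    limit = host.max_instances_per_host
    if limit < 0:          # -1: driver imposes no limit
        return True
    return host.num_instances < limit

ironic_node = HostState("bm-node-1", 1, 1)     # already holds one instance
kvm_host = HostState("kvm-1", 40, -1)          # libvirt: unlimited

print(num_instances_filter(ironic_node))  # False: the node is full
print(num_instances_filter(kvm_host))     # True
```

With this in place the second 512MB instance in the bug scenario would be
filtered out of the baremetal node regardless of the apparent free RAM.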
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
2015-01-09 17:17 GMT+08:00 Sylvain Bauza :

>
> On 09/01/2015 09:01, Alex Xu wrote:
>
> Hi, All
>
>  There is bug when running nova with ironic
> https://bugs.launchpad.net/nova/+bug/1402658
>
>  The case is simple: one baremetal node with 1024MB ram, then boot two
> instances with 512MB ram flavor.
> Those two instances will be scheduling to same baremetal node.
>
>  The problem is at scheduler side the IronicHostManager will consume all
> the resources for that node whatever
> how much resource the instance used. But at compute node side, the
> ResourceTracker won't consume resources
> like that, just consume like normal virtual instance. And ResourceTracker
> will update the resource usage once the
> instance resource claimed, then scheduler will know there are some free
> resource on that node, then will try to
> schedule other new instance to that node.
>
>  I take look at that, there is NumInstanceFilter, it will limit how many
> instance can schedule to one host. So can
> we just use this filter to finish the goal? The max instance is configured
> by option 'max_instances_per_host', we
> can make the virt driver to report how many instances it supported. The
> ironic driver can just report max_instances_per_host=1.
> And libvirt driver can report max_instance_per_host=-1, that means no
> limit. And then we can just remove the
> IronicHostManager, then make the scheduler side is more simpler. Does make
> sense? or there are more trap?
>
>  Thanks in advance for any feedback and suggestion.
>
>
>
> Mmm, I think I disagree with your proposal. Let me explain by the best I
> can why :
>
> tl;dr: Any proposal unless claiming at the scheduler level tends to be
> wrong
>
> The ResourceTracker should be only a module for providing stats about
> compute nodes to the Scheduler.
> How the Scheduler is consuming these resources for making a decision
> should only be a Scheduler thing.
>

Agreed, but we can't implement this for now; the reason is what you
described below.


>
> Here, the problem is that the decision making is also shared with the
> ResourceTracker because of the claiming system managed by the context
> manager when booting an instance. It means that we have 2 distinct decision
> makers for validating a resource.
>
>
Totally agreed! This is the root cause.


> Let's stop to be realistic for a moment and discuss about what could mean
> a decision for something else than a compute node. Ok, let say a volume.
> Provided that *something* would report the volume statistics to the
> Scheduler, that would be the Scheduler which would manage if a volume
> manager could accept a volume request. There is no sense to validate the
> decision of the Scheduler on the volume manager, just maybe doing some
> error management.
>
> We know that the current model is kinda racy with Ironic because there is
> a 2-stage validation (see [1]). I'm not in favor of complexifying the
> model, but rather put all the claiming logic in the scheduler, which is a
> longer path to win, but a safier one.
>

Yea, I have thought about adding the same resource consumption on the
compute manager side, but it's ugly because we would implement ironic's
resource consumption method in two places. If we move the claiming into the
scheduler, things become easy: we can just provide an extension point for
different consumption methods (if I understood the IRC discussion
correctly). Since gantt will be a standalone service, validating a resource
shouldn't be spread across different services. So I agree with you.

But for now, as you said, that is a long-term plan. We can't provide
different resource consumption on the compute manager side now, and we also
can't move the claiming into the scheduler now. So the method I proposed is
easier for now; at least we won't have different resource consumption
behavior between the scheduler (IronicHostManager) and the compute node
(ResourceTracker) for ironic, and ironic can work fine.

The method I propose has a small problem: when all the nodes are allocated,
we would still see some free resources if the flavor's resources are less
than the baremetal node's resources. But that can be addressed by exposing
max_instances in the hypervisor API (running instances are already exposed);
then users will know why they can't allocate more instances. And if we can
configure max_instances for each node, that sounds useful for operators
too :)


>
> -Sylvain
>
> [1]  https://bugs.launchpad.net/nova/+bug/1341420
>
>  Thanks
> Alex
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
2015-01-09 22:07 GMT+08:00 Murray, Paul (HP Cloud) :

>   >There is bug when running nova with ironic
> https://bugs.launchpad.net/nova/+bug/1402658
>
>
>
> I filed this bug – it has been a problem for us.
>
>
>
> >The problem is at scheduler side the IronicHostManager will consume all
> the resources for that node whatever
>
> >how much resource the instance used. But at compute node side, the
> ResourceTracker won't consume resources
>
> >like that, just consume like normal virtual instance. And ResourceTracker
> will update the resource usage once the
>
> >instance resource claimed, then scheduler will know there are some free
> resource on that node, then will try to
>
> >schedule other new instance to that node
>
>
>
> You have summed up the problem nicely – i.e.: the resource availability is
> calculated incorrectly for ironic nodes.
>
>
>
> >I take look at that, there is NumInstanceFilter, it will limit how many
> instance can schedule to one host. So can
>
> >we just use this filter to finish the goal? The max instance is
> configured by option 'max_instances_per_host', we
>
> >can make the virt driver to report how many instances it supported. The
> ironic driver can just report max_instances_per_host=1.
>
> >And libvirt driver can report max_instance_per_host=-1, that means no
> limit. And then we can just remove the
>
> >IronicHostManager, then make the scheduler side is more simpler. Does
> make sense? or there are more trap?
>
>
>
>
>
> Makes sense, but solves the wrong problem. The problem is what you said
> above – i.e.: the resource availability is calculated incorrectly for
> ironic nodes.
>
> The right solution would be to fix the resource tracker. The ram resource
> on an ironic node has different allocation behavior to a regular node. The
> test to see if a new instance fits is the same, but instead of deducting
> the requested amount to get the remaining availability it should simply
> return 0. This should be dealt with in the new resource objects ([2] below)
> by either having different version of the resource object for ironic nodes
> (certainly doable and the most sensible option – resources should be
> presented according to the resources on the host). Alternatively the ram
> resource object should cater for the difference in its calculations.
>
 Dang it, I reviewed that spec... why didn't I notice that? :( Totally beat
me!
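Paul's point about the different allocation behaviour can be sketched as a
resource object whose consume step leaves nothing free once a single
instance lands. The class names here are illustrative, not the actual
resource-objects code from [2]:

```python
# Sketch of Paul's point: an Ironic node's RAM behaves all-or-nothing. The
# fit test is the usual comparison, but consuming any amount leaves zero
# free, so a second instance can never fit on the same node.

class RamResource:
    def __init__(self, total_mb):
        self.free_mb = total_mb

    def fits(self, requested_mb):
        return requested_mb <= self.free_mb

    def consume(self, requested_mb):
        self.free_mb -= requested_mb

class IronicRamResource(RamResource):
    def consume(self, requested_mb):
        self.free_mb = 0  # whole node is taken, whatever the flavor asked

node_ram = IronicRamResource(1024)
print(node_ram.fits(512))   # True
node_ram.consume(512)
print(node_ram.fits(512))   # False: nothing left on a baremetal node
```

With the plain RamResource behaviour the bug from the thread reproduces
(512 + 512 fits in 1024); the subclass is the one-line behavioural change.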

>  I have a local fix for this that I was too shy to propose upstream
> because it’s a bit hacky and will hopefully be obsolete soon. I could share
> it if you like.
>
> Paul
>
> [2] https://review.openstack.org/#/c/127609/
>
>
>
>
>
> From: *Sylvain Bauza* 
> Date: 9 January 2015 at 09:17
> Subject: Re: [openstack-dev] [Nova][Ironic] Question about scheduling two
> instances to same baremetal node
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
>
>
> On 09/01/2015 09:01, Alex Xu wrote:
>
>  Hi, All
>
>
>
> There is bug when running nova with ironic
> https://bugs.launchpad.net/nova/+bug/1402658
>
>
>
> The case is simple: one baremetal node with 1024MB ram, then boot two
> instances with 512MB ram flavor.
>
> Those two instances will be scheduling to same baremetal node.
>
>
>
> The problem is at scheduler side the IronicHostManager will consume all
> the resources for that node whatever
>
> how much resource the instance used. But at compute node side, the
> ResourceTracker won't consume resources
>
> like that, just consume like normal virtual instance. And ResourceTracker
> will update the resource usage once the
>
> instance resource claimed, then scheduler will know there are some free
> resource on that node, then will try to
>
> schedule other new instance to that node.
>
>
>
> I take look at that, there is NumInstanceFilter, it will limit how many
> instance can schedule to one host. So can
>
> we just use this filter to finish the goal? The max instance is configured
> by option 'max_instances_per_host', we
>
> can make the virt driver to report how many instances it supported. The
> ironic driver can just report max_instances_per_host=1.
>
> And libvirt driver can report max_instance_per_host=-1, that means no
> limit. And then we can just remove the
>
> IronicHostManager, then make the scheduler side is more simpler. Does make
> sense? or there are more trap?
>
>
>
> Thanks in advance for any feedback and suggestion.
>
>
>
>
>
> Mmm, I think I disagree with your proposal. Let me explai

Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
2015-01-09 22:22 GMT+08:00 Sylvain Bauza :

>
> On 09/01/2015 14:58, Alex Xu wrote:
>
>
>
> 2015-01-09 17:17 GMT+08:00 Sylvain Bauza :
>
>>
>> On 09/01/2015 09:01, Alex Xu wrote:
>>
>> Hi, All
>>
>>  There is bug when running nova with ironic
>> https://bugs.launchpad.net/nova/+bug/1402658
>>
>>  The case is simple: one baremetal node with 1024MB ram, then boot two
>> instances with 512MB ram flavor.
>> Those two instances will be scheduling to same baremetal node.
>>
>>  The problem is at scheduler side the IronicHostManager will consume all
>> the resources for that node whatever
>> how much resource the instance used. But at compute node side, the
>> ResourceTracker won't consume resources
>> like that, just consume like normal virtual instance. And ResourceTracker
>> will update the resource usage once the
>> instance resource claimed, then scheduler will know there are some free
>> resource on that node, then will try to
>> schedule other new instance to that node.
>>
>>  I take look at that, there is NumInstanceFilter, it will limit how many
>> instance can schedule to one host. So can
>> we just use this filter to finish the goal? The max instance is
>> configured by option 'max_instances_per_host', we
>> can make the virt driver to report how many instances it supported. The
>> ironic driver can just report max_instances_per_host=1.
>> And libvirt driver can report max_instance_per_host=-1, that means no
>> limit. And then we can just remove the
>> IronicHostManager, then make the scheduler side is more simpler. Does
>> make sense? or there are more trap?
>>
>>  Thanks in advance for any feedback and suggestion.
>>
>>
>>
>>  Mmm, I think I disagree with your proposal. Let me explain by the best
>> I can why :
>>
>> tl;dr: Any proposal unless claiming at the scheduler level tends to be
>> wrong
>>
>> The ResourceTracker should be only a module for providing stats about
>> compute nodes to the Scheduler.
>> How the Scheduler is consuming these resources for making a decision
>> should only be a Scheduler thing.
>>
>
>  agreed, but we can't implement this for now, the reason is you described
> as below.
>
>
>>
>> Here, the problem is that the decision making is also shared with the
>> ResourceTracker because of the claiming system managed by the context
>> manager when booting an instance. It means that we have 2 distinct decision
>> makers for validating a resource.
>>
>>
>  Totally agreed! This is the root cause.
>
>
>>  Let's stop to be realistic for a moment and discuss about what could
>> mean a decision for something else than a compute node. Ok, let say a
>> volume.
>> Provided that *something* would report the volume statistics to the
>> Scheduler, that would be the Scheduler which would manage if a volume
>> manager could accept a volume request. There is no sense to validate the
>> decision of the Scheduler on the volume manager, just maybe doing some
>> error management.
>>
>> We know that the current model is kinda racy with Ironic because there is
>> a 2-stage validation (see [1]). I'm not in favor of complexifying the
>> model, but rather put all the claiming logic in the scheduler, which is a
>> longer path to win, but a safier one.
>>
>
>  Yea, I have thought about add same resource consume at compute manager
> side, but it's ugly because we implement ironic's resource consuming method
> in two places. If we move the claiming in the scheduler the thing will
> become easy, we can just provide some extension for different consuming
> method (If I understand right the discussion in the IRC). As gantt will be
> standalone service, so validating a resource shouldn't spread into
> different service. So I agree with you.
>
>  But for now, as you said this is long term plan. We can't provide
> different resource consuming in compute manager side now, also can't move
> the claiming into scheduler now. So the method I proposed is more easy for
> now, at least we won't have different resource consuming way between
> scheduler(IonricHostManger) and compute(ResourceTracker) for ironic. And
> ironic can works fine.
>
>  The method I propose have a little problem. When all the node allocated,
> we still can saw there are some resource are free if the flavor's resource
> is less than baremetal's resource. But it can be done by expose
> max_instance to hypervisor api(running instances already exposed), then
> user 

Re: [openstack-dev] [nova] request spec freeze exception for Attach/Detach SR-IOV interface

2015-01-12 Thread Alex Xu
2015-01-13 13:57 GMT+08:00 少合冯 :

> Hello,
>
> I'd like to request an exception for Attach/Detach SR-IOV interface
> feature. [1]
> This is an important feature that aims to improve better performance than
> normal
> network interface in guests and not too hard to implement.
>
> Thanks,
> Shao He, Feng
>
> [1] https://review.openstack.org/#/c/139910/
>


Oops, after I clicked the link it forwarded to a wrong page, but I can open
it by copying the text https://review.openstack.org/#/c/139910/ into the
web browser directly. :)



>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Requesting exception for add separated policy rule for each v2.1 api

2015-01-12 Thread Alex Xu
https://review.openstack.org/#/c/127863/

This spec is part of the Nova REST API policy improvements, and those
improvements already have general agreement, as captured in this full-view
devref: https://review.openstack.org/#/c/138270/

This spec is just for the Nova REST API v2.1, so I really hope it can be
done before v2.1 is released; then we needn't think about the upgrade impact
for deployers. Let's finish this simple task while it's still simple.

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Enable policy improvement for both v2/v3 API or not

2014-05-30 Thread Alex Xu

Hi, guys

There are some BPs working on improving the usability of API policy.
Initially those BPs were just for the v2.1/v3 API; for the v2 API, we just
wanted to keep it the same as before.

But at the Juno design summit we got some complaints that policy is hard to
use (https://etherpad.openstack.org/p/juno-nova-devops).
I guess those complaints were about the v2 API, so I'm thinking about
whether we should enable those improvements for the v2 API too. I want to
hear your suggestions, and suggestions from CD people, to decide whether we
should enable them for the v2 API.

The main purpose of the policy improvement is:
Policy should be enforced at the REST API layer
https://review.openstack.org/92005

In this proposal we remove the compute-api layer policy checks for the v3
API, and move the policy checks into the API layer for the v2 API. So only
the v3 API gets the benefit.

The v2 API still has two policy checks for the same API.

For example:
At API layer: "compute_extension:admin_actions:pause": "rule:admin_or_owner"
At compute API layer: "compute:pause": ""
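A toy illustration of collapsing the two-layer check into one API-layer
enforcement point follows. The rule names mirror the example above but the
enforcement code is hypothetical, not oslo.policy's actual engine:

```python
# Toy illustration of a single API-layer policy check. With one rule per
# API action, checked once at the REST API layer, there is no second
# "compute:pause" check left at the compute-api layer.

RULES = {
    "compute_extension:admin_actions:pause": "admin_or_owner",
}

def enforce(rule, context):
    policy = RULES.get(rule, "")
    if policy == "":                      # empty rule: always allowed
        return True
    if policy == "admin_or_owner":
        return context["is_admin"] or context["is_owner"]
    return False

ctx = {"is_admin": False, "is_owner": True}
print(enforce("compute_extension:admin_actions:pause", ctx))  # True
# No second check at the compute-api layer ("compute:pause") is needed.
```

The empty-string rule reproduces the `"compute:pause": ""` behaviour above:
an empty policy allows everyone, which is why the compute-layer check adds
no real control today.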

There is pros/cons of enable for v2 API as below:

Pros:
* V2 API users can get the benefit of those improvements. We still have some
users on the V2 API before we release V2.1/V3.

* We don't need to keep backwards-compatibility code for the v2 API, which
makes the code look messy.

For example:
https://review.openstack.org/#/c/65071/5/nova/api/openstack/compute/contrib/shelve.py
There are two policy checks for one API: one is used for the extension
(line 84), and another one to keep compatibility (line 85).

(There is another method that won't make the code messy while still
supporting backwards compatibility: we don't remove the compute-api layer
policy code, and just skip that policy check for the v3 API. After the v2
API is deprecated, we clean up the compute-api layer policy code.)

Cons:
* Maybe V2 API users don't have too much pain with this. And we will have
the V2.1/V3 API and V2 will be deprecated. If we change this, it may become
an extra burden for some operators to upgrade their policy config file when
upgrading the nova code.

* The risk of touching existing v2 API code.

And there are other minor improvement proposals for API policy:
https://review.openstack.org/92325
https://review.openstack.org/92326

I think after we make a decision on the first proposal, those two proposals
can just follow that decision.

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread Alex Xu

+1

On 2014-06-14 06:40, Michael Still wrote:

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews
on API changes are excellent, and he's been part of the team that has
driven the new API work we've seen in recent cycles forward. Ken'ichi
has also been reviewing other parts of the code base, and I think his
reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

   
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

   https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

   http://www.stackalytics.com/?module=nova-group&user_id=oomichi

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] API weekly meeting

2014-03-07 Thread Alex Xu

Any time except 15-22 UTC

On 2014-03-07 08:45, Christopher Yeoh wrote:

Hi,

I'd like to start a weekly IRC meeting for those interested in
discussing Nova API issues. I think it would be a useful forum for:

- People to keep up with what work is going on the API and where its
   headed.
- Cloud providers, SDK maintainers and users of the REST API to provide
   feedback about the API and what they want out of it.
- Help coordinate the development work on the API (both v2 and v3)

If you're interested in attending please respond and include what time
zone you're in so we can work out the best time to meet.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-09 Thread Alex Xu
Hi Jeremy, the discussion is here:
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html


Thanks
Alex
On 2014-03-07 10:29, Liuji (Jeremy) wrote:

Hi, all

Current OpenStack does not seem to support snapshotting an instance with its
memory and device state.
I searched the blueprints and found two related ones, listed below, but
neither made it into the branch.

[1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
[2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms

In the blueprint[1], there is a comment,"
We discussed this pretty extensively on the mailing list and in a design summit 
session.
The consensus is that this is not a feature we would like to have in nova. 
--russellb "
But I can't find the discussion thread about it. I'd like to know why we
think so.
Without memory snapshots, we can't provide a way for users to revert an
instance to a checkpoint.

Can anyone who knows the history help me, or give me a hint on how to find
the discussion thread?

I am a newbie to OpenStack, and I apologize if I am missing something
obvious.


Thanks,
Jeremy Liu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread Alex Xu

On 2014-03-21 17:04, Christopher Yeoh wrote:

On Thu, 20 Mar 2014 15:45:11 -0700
Dan Smith  wrote:

I know that our primary delivery mechanism is releases right now, and
so if we decide to revert before this gets into a release, that's
cool. However, I think we need to be looking at CD as a very important
use-case and I don't want to leave those folks out in the cold.


I don't want to cause issues for the CD people, but perhaps it won't be
too disruptive for them (some direct feedback would be handy). The
initial backwards incompatible change did not result in any bug reports
coming back to us at all. If there were lots of users using it I think
we could have expected some complaints as they would have had to adapt
their programs to no longer manually add the flavor access (otherwise
that would fail). It is of course possible that new programs written in
the meantime would rely on the new behaviour.

I think (please correct me if I'm wrong) the public CD clouds don't
expose that part of API to their users so the fallout could be quite
limited. Some opinions from those who do CD for private clouds would be
very useful. I'll send an email to openstack-operators asking what
people there believe the impact would be but at the moment I'm thinking
that revert is the way we should go.


Could we consider a middle road? What if we made the extension
silently tolerate an add-myself operation to a flavor, (potentially
only) right after create? Yes, that's another change, but it means
that old clients (like horizon) will continue to work, and new
clients (which expect to automatically get access) will continue to
work. We can document in the release notes that we made the change to
match our docs, and that anyone that *depends* on the (admittedly
weird) behavior of the old broken extension, where a user doesn't
retain access to flavors they create, may need to tweak their client
to remove themselves after create.
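Dan's "silently tolerate" middle road would amount to making the add
idempotent, roughly like this sketch (the function name and data shapes
are invented for illustration; this is not Nova's actual flavor-access
code):

```python
def add_flavor_access(access_list, tenant_id):
    """Grant a tenant access to a flavor, tolerating duplicate adds.

    Instead of rejecting an add for a tenant already on the access
    list (the old behaviour), silently accept it, so both old clients
    (which add themselves after create) and new clients (which expect
    automatic access) keep working.
    """
    if tenant_id not in access_list:
        access_list.append(tenant_id)
    return access_list

acl = ['tenant-a']
add_flavor_access(acl, 'tenant-a')  # duplicate add: accepted, no error
add_flavor_access(acl, 'tenant-b')
print(acl)  # ['tenant-a', 'tenant-b']
```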

My concern is that we'd be digging ourselves an even deeper hole with
that approach. That for some reason we don't really understand at the
moment, people have programs which rely on adding flavor access to a
tenant which is already on the access list being rejected rather than
silently accepted. And I'm not sure its the behavior from flavor access
that we actually want.

But we certainly don't want to end up in the situation of trying to
work out how to rollback two backwards incompatible API changes.


I vote to revert also. If we promise API stability before a release, that
means we can't afford any mistakes in review. We should think carefully
about what we promise before a release.

If we really want to keep this, there is another road: add an extension
for this change, just like the extend_quotas extension, disabled by
default. If any deployment depends on that change, the admin can enable
it.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] How to add a property to the API extension?

2014-04-20 Thread Alex Xu

On 2014-04-17 05:25, Jiang, Yunhong wrote:

Hi, Christopher,
I have some question to the API changes related to 
https://review.openstack.org/#/c/80707/4/nova/api/openstack/compute/plugins/v3/hypervisors.py
 , which adds a property to the hypervisor information.


Hi Yunhong, Chris may be unavailable for a while, so let me answer your
questions.



a) I checked https://wiki.openstack.org/wiki/APIChangeGuidelines but I'm not sure whether
it's OK to go "Adding a property to a resource representation" as I did in the patch, or
whether I need another extension to add this property. Does "OK when conditionally added
as a new API extension" mean I need another extension?
You can add a property to the v3 API directly for now, because the v3 API
hasn't been released yet, so we needn't worry about any
backwards-compatibility problems. If you add a property to the v2 API,
you need another extension.


b) If we can simply add a property as the patch is doing, would it require
bumping the version number? If yes, what should the version number be? Would it
be like 1/2/3 etc, or something like 1.1/1.2/2.1 etc?


You needn't bump the version number, for the same reason: the v3 API
hasn't been released yet. After the v3 API is released, we should bump
the version; it would be like 1/2/3 etc.



Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-22 Thread Alex Xu

On 2014-09-23 04:27, Brant Knudson wrote:



On Fri, Sep 19, 2014 at 1:39 AM, Alex Xu wrote:


As Kilo approaches, it is time to think about what's next for the
Nova API. In Kilo, we will continue developing the important
micro-version feature.

The previous "v2 on v3" proposal included some implementations that
could be used for micro-versions.
(https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst)
But in the end, those implementations were considered too complex.

So I'm trying to find a simpler implementation and solution for
micro-versions.

I wrote down some ideas as a blog post at:
http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/

For those ideas I have also already done some POC work, which you
can find in the blog post.

As discussed in the Nova API meeting, we want to bring this up on
the mailing list for discussion. I hope we can get more ideas and
options from all developers.

We will appreciate any comments and suggestions!

Thanks
Alex



Did you consider JSON Home[1] for this? For Juno we've got JSON Home 
support in Keystone for Identity v3 (Zaqar was using it already). We 
weren't planning to use it for microversioning since we weren't 
planning on doing microversioning, but I think JSON Home could be used 
for this purpose.


Using JSON Home, you'd have relationships that include the version, 
then the client can check the JSON Home document to see if the server 
has support for the relationship the client wants to use.


[1] http://tools.ietf.org/html/draft-nottingham-json-home-03

- Brant

Brant, thanks for your comment!

In the micro-version spec discussion, JSON Home was discussed as well
(at line 158 of
https://review.openstack.org/#/c/101648/1/specs/juno/api-microversions-alt.rst).

People like it; JSON Home is a good choice for API discoverability.

That isn't described in the blog post, which mostly focuses on
implementing multiple-version support in the WSGI infrastructure. But the
proposed implementation is good for supporting API discoverability: it
proposes defining a JSON schema for each API version rather than using
translation. The JSON schema can then be exposed to users easily (I guess
as JSON Schema inside JSON Home), making the API discoverable.
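As a sketch of how a client could use a JSON Home document for version
discovery (the document shape follows draft-nottingham-json-home; the
relation URI and the min/max version hints below are invented for
illustration, not Nova's actual document):

```python
# A minimal JSON Home document as a Python dict; the relation URI and
# the "min_version"/"max_version" hints are hypothetical.
json_home = {
    "resources": {
        "http://docs.openstack.org/api/rel/server-groups": {
            "href": "/v3/os-server-groups",
            "hints": {
                "allow": ["GET", "POST"],
                "min_version": "3.1",
                "max_version": "3.4",
            },
        },
    },
}

def supports(doc, rel, version):
    """Check whether the server advertises `rel` at `version`."""
    res = doc["resources"].get(rel)
    if res is None:
        return False
    hints = res.get("hints", {})
    lo = tuple(map(int, hints.get("min_version", "0.0").split(".")))
    hi = tuple(map(int, hints.get("max_version", "999.0").split(".")))
    return lo <= tuple(map(int, version.split("."))) <= hi

rel = "http://docs.openstack.org/api/rel/server-groups"
print(supports(json_home, rel, "3.2"))  # True
print(supports(json_home, rel, "3.5"))  # False
```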


The micro-version spec also needs discussion in Kilo; it defines how we
bump the version and how we interact with the client. But in any case the
WSGI infrastructure needs to support multiple versions, so I have begun
to think about that.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [all] REST API style guide (Re: [Nova] Some ideas for micro-version implementation)

2014-09-22 Thread Alex Xu

On 2014-09-23 08:00, Ken'ichi Ohmichi wrote:

# I changed the title for getting opinions from many projects.

2014-09-22 23:47 GMT+09:00 Anne Gentle :

-Original Message-
From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
Sent: Friday, September 19, 2014 3:40 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Some ideas for micro-version
implementation

As Kilo approaches, it is time to think about what's next for the Nova
API. In Kilo, we
will continue developing the important micro-version feature.

The previous "v2 on v3" proposal included some implementations that
could be used for micro-versions.
(https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst)
But in the end, those implementations were considered too complex.

So I'm trying to find a simpler implementation and solution for
micro-versions.

I wrote down some ideas as a blog post at:
http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/

For those ideas I have also already done some POC work, which you can
find in the blog post.

As discussed in the Nova API meeting, we want to bring this up on the
mailing list for discussion. I hope we can get more ideas and options
from all developers.

We will appreciate any comments and suggestions!


I would greatly appreciate this style guide to be finalized for
documentation purposes as well. Thanks for starting this write-up. I'd be
happy to write it up on a wiki page while we get agreement, would that be
helpful?

The wiki page of REST API style guide would be great,
thanks for joining into this :-)


Yes, the REST API style guide is helpful, thanks too :)




Before discussing how to implement, I'd like to consider what we should
implement. IIUC, the purpose of v3 API is to make consistent API with the
backwards incompatible changes. Through huge discussion in Juno cycle, we
knew that backwards incompatible changes of REST API would be huge pain
against clients and we should avoid such changes as possible. If new APIs
which are consistent in Nova API only are inconsistent for whole OpenStack
projects, maybe we need to change them again for whole OpenStack
consistency.

For avoiding such situation, I think we need to define what is consistent
REST API across projects. According to Alex's blog, The topics might be

  - Input/Output attribute names
  - Resource names
  - Status code

The following are hints for making consistent APIs from Nova v3 API
experience,
I'd like to know whether they are the best for API consistency.

(1) Input/Output attribute names
(1.1) These names should be snake_case.
   eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId -> host_id
(1.2) These names should contain extension names if they are provided in
case of some extension loading.
   eg: security_groups -> os-security-groups:security_groups
   config_drive -> os-config-drive:config_drive


Do you mean that the os- prefix should be dropped? Or that it should be
maintained and added as needed?

The above samples contain two meanings:
  - extension names are added. ("os-security-groups:", "os-config-drive:")
  - their extensions are not cores. (v3 ones should contain "os-")

Their changes are Nova v3 API's ones, and now I have a question related to
your point.
Should we add extension names to each input/output attribute names?
How about naming them with snake_case only without extension names?
I can think of two purposes for adding extension names to attribute 
names: one is namespacing, the other is distinguishing between core and 
extensions. With the extension name it is also more readable for 
developers, who can tell which extension an attribute comes from without 
searching the API docs.




To be honest, I'm not yet sure of the value of extension names for each attribute name.
Additional extension names make attribute names long and complex.
In addition, if we define Pecan/WSME as standard web frameworks, we should name
attributes with snake_case only because of Pecan/WSME restriction[1]. So we can
not name them with hyphens and colons which are including in current extension
attribute names.
Yes, good point, that is a problem. Is the restriction only for 
resource names? "-"/":" work via getattr/setattr, but that is a 
workaround, so I'm not sure it's worth it.





(1.3) Extension names should consist of hyphens and low chars.
   eg: OS-EXT-AZ:availability_zone ->
os-extended-availability-zone:availability_zone
   OS-EXT-STS:task_state -> os-extended-status:task_state


Yes, I don't like the shoutyness of the ALL CAPS.


(1.4) Extension names should contain the prefix "os-" if the extension is
not core.
   eg: rxtx_factor -> os-flavor-rxtx:rxtx_factor
   os-flavor-access:is_public -> flavor-access:is_public (flavor-access
extension became core)

Do we have a list of "core" yet?

We have the list in the code:

nova/api/openstack/__init__.py
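Rule (1.1) above is mechanical enough to sketch in a few lines (the
regex approach is illustrative only; Nova's actual v3 renames were done
by hand, not by this function):

```python
import re

def camel_to_snake(name):
    """Convert a camelCase attribute name to snake_case.

    Illustrative sketch of rule (1.1): imageRef -> image_ref,
    flavorRef -> flavor_ref, hostId -> host_id.
    """
    # Insert an underscore before any upper-case letter that follows a
    # lower-case letter or digit, then lower-case everything.
    with_underscores = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', name)
    return with_underscores.lower()

for old in ('imageRef', 'flavorRef', 'hostId'):
    print(old, '->', camel_to_snake(old))
```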

Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-23 Thread Alex Xu

On 2014-09-23 17:12, Christopher Yeoh wrote:

On Mon, 22 Sep 2014 09:29:26 +
Kenichi Oomichi  wrote:

Before discussing how to implement, I'd like to consider what we
should implement. IIUC, the purpose of v3 API is to make consistent
API with the backwards incompatible changes. Through huge discussion
in Juno cycle, we knew that backwards incompatible changes of REST
API would be huge pain against clients and we should avoid such
changes as possible. If new APIs which are consistent in Nova API
only are inconsistent for whole OpenStack projects, maybe we need to
change them again for whole OpenStack consistency.

So I think there are three different aspects to microversions which we
can consider quite separately:

- The format of the client header and what the version number means.
   Eg is the version number of the format X.Y.Z, what do we increment
   when we make a bug fix, what do we increment when we make a backwards
   compatible change and what do we increment when we make backwards
   incompatible change.

   Also how does a client request experimental APIs (I believe we have
   consensus that we really need this to avoid backwards incompatible
   changes as much as possible as it allows more testing before
   guaranteeing backwards compatibility)

   I believe that we can consider this part separately from the next two
   issues.

- The implementation on the nova api side. Eg how do we cleanly handle
   supporting multiple versions of the api based on the client header
   (or lack of it which will indicate v2 compatibility. I'll respond
   directly on Alex's original post

- What we are going to use the microversions API feature to do. I think
   they fall under a few broad categories:

   - Backwards compatible changes. We desperately need a mechanism that
 allows us to make backwards compatible changes (eg add another
 parameter to a response) without having to add another dummy
 extension.

   - Significant backwards incompatible changes. The Tasks API and server
 diagnostics API are probably the best examples of this.

   - V3 like backwards incompatible changes (consistency fixes).

I think getting consensus over backwards compatible changes will be
straightforward. However given the previous v2/v3 discussions I don't
think we will be able to get consensus over doing all or most of the
consistency type fixes even using microversions in the short term.
Because with microversions you get all the changes applied before the
version that you choose. So from a client application point of view its
just as much work as V2 to V3 API transition.

I don't think that means we need to put all of these consistency
changes off forever though. We need to make backwards incompatible
changes in order to implement the Tasks API  and new server
diagnostics api the way we want to. The Tasks API will eventually cover
quite a few interfaces and while say breaking backwards compatibility
with the create server api, we can also fix consistency issues in that
api at the same time. Clients will need to make changes to their app
anyway if they want to take advantage of the new features (or they can
just continue to use the old non-tasks enabled API).

So as we slowly make backwards incompatible changes to the API for
other reasons we can also fix up other issues. Other consistency fixes
we can propose on a case by case basis and the user community can have
input as to whether the cost (app rework) is worth it without getting a
new feature at the same time.


Agreed, consistency fixes should depend on whether the cost is worth 
it or not; maybe we can't fix some inconsistency issues.
We definitely need micro-versions for adding the tasks API and the new 
server diagnostics. We also need micro-versions to fix some bugs,
like https://bugs.launchpad.net/nova/+bug/1320754 and 
https://bugs.launchpad.net/nova/+bug/1333494.




But I think it's clear that we *need* the microversions mechanism. So we
don't need to decide beforehand exactly what we're going to use it for
first. I think it's more important that we get a nova-spec
approved for the first two parts: what it looks like from the
client point of view, and how we're going to implement it.
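To make the "implementation on the nova api side" aspect concrete, here
is a minimal sketch of version-based dispatch (the X.Y version format
and every name below are assumptions for illustration, not the eventual
Nova design):

```python
def parse_version(header_value):
    """Parse a microversion string like '2.1' into a comparable tuple.

    The two-component X.Y format is an assumption for illustration;
    the thread above discusses whether X.Y.Z or something else should
    be used.
    """
    major, minor = header_value.split('.')
    return int(major), int(minor)

def pick_handler(requested, handlers):
    """Pick the newest handler whose minimum version is <= requested.

    `handlers` maps a minimum-version tuple to a callable implementing
    that version of the API method.
    """
    candidates = [v for v in handlers if v <= requested]
    if not candidates:
        raise ValueError('no handler supports version %r' % (requested,))
    return handlers[max(candidates)]

handlers = {(2, 1): lambda: 'v2.1 behaviour',
            (2, 4): lambda: 'v2.4 behaviour'}
print(pick_handler(parse_version('2.3'), handlers)())  # 'v2.1 behaviour'
```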

Regards,

Chris


For avoiding such situation, I think we need to define what is
consistent REST API across projects. According to Alex's blog, The
topics might be

  - Input/Output attribute names
  - Resource names
  - Status code

The following are hints for making consistent APIs from Nova v3 API
experience, I'd like to know whether they are the best for API
consistency.

(1) Input/Output attribute names
(1.1) These names should be snake_case.
   eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId ->
host_id (1.2) These names should contain extension names if they are
provided in case of some extension loading. eg: security_groups ->
os-security-groups:security_groups config_drive ->
os-config-drive:config_drive (1.3) Extension names should consist of
hyphe

Re: [openstack-dev] [Nova] [All] API standards working group

2014-09-24 Thread Alex Xu

I'm interested in the group too!

On 2014-09-24 18:01, Salvatore Orlando wrote:

Please keep me in the loop.

The importance of ensuring consistent style across Openstack APIs 
increases as the number of "integrated" project increases.
Unless we decide to merge all API endpoints as proposed in another 
thread! [1]


Regards,
Salvatore

[1] 
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36012.html


On 24 September 2014 11:15, Kenichi Oomichi wrote:


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com
]
> Sent: Wednesday, September 24, 2014 7:19 AM
> To: openstack-dev@lists.openstack.org

> Subject: Re: [openstack-dev] [Nova] [All] API standards working
group
>
> On 09/23/2014 05:03 PM, Rochelle.RochelleGrober wrote:
> > jaypi...@gmail.com 
> on
Tuesday, September 23,
> > 2014 9:09 AM wrote:
> >
> > _Snip
> >
> > I'd like to say finally that I think there should be an
OpenStack API
> > working group whose job it is to both pull together a set of
OpenStack
> > API practices as well as evaluate new REST APIs proposed in the
> > OpenStack ecosystem to provide guidance to new projects or new
> > subprojects wishing to add resources to an existing REST API.
> >
> > Best,
> >
> > -jay
> >
> > [Rocky Grober] ++
> >
> > Jay, are you volunteering to head up the working group? Or at least be
> > an active member? I'll certainly follow with interest, but I think I
> > have my hands full with the log rationalization working group.
>
> Yes, I'd be willing to head up the working group... or at least
> participate in it.

I also would like to join the group.


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Alex Xu

On 2014-10-14 12:52, Christopher Yeoh wrote:

On Mon, 13 Oct 2014 22:20:32 -0400
Jay Pipes  wrote:


On 10/13/2014 07:11 PM, Christopher Yeoh wrote:

On Mon, 13 Oct 2014 10:52:26 -0400
Jay Pipes  wrote:


On 10/10/2014 02:05 AM, Christopher Yeoh wrote:

I agree with what you've written on the wiki page. I think our
priority needs to be to flesh out
https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
so we have something to reference when reviewing specs. At the
moment I see that document as something anyone should be able to
document a project's API convention even if they conflict with
another project for the moment. Once we've got a fair amount of
content we can start as a group resolving
any conflicts.

Agreed that we should be fleshing out the above wiki page. How
would you like us to do that? Should we have an etherpad to discuss
individual topics? Having multiple people editing the wiki page
offering commentary seems a bit chaotic, and I think we would do
well to have the Gerrit review process in place to handle proposed
guidelines and rules for APIs. See below for specifics on this...

Honestly I don't think we have enough content yet to have much of a
discussion. I started the wiki page

https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines

in the hope that people from other projects would start adding
conventions that they use in their projects. I think its fine for
the moment if its contradictory, we just need to gather what
projects currently do (or want to do) in one place so we can start
discussing any contradictions.

Actually, I don't care all that much about what projects *currently*
do. I want this API working group to come up with concrete guidelines
and rules/examples of what APIs *should* look like.

What projects currently do gives us a baseline to work from. It also
should expose where we have currently have inconsistencies between
projects.

And whilst I don't have a problem with having some guidelines which
suggest a future standard for APIs, I don't think we should be
requiring any type of feature which has not yet been implemented in
at least one, preferably two openstack projects and released and tested
for a cycle. Eg standards should be lagging rather than leading.


There is one reason to think about what projects *currently* do: it 
informs which convention we choose.
For example, with CamelCase versus snake_case, if most projects use 
snake_case, then choosing the snake_case style
will be right.




So I'd again encourage anyone interested in APIs from the various
projects to just start dumping their project viewpoint in there.

I went ahead and just created a repository that contained all the
stuff that should be pretty much agreed-to, and a bunch of stub topic
documents that can be used to propose specific ideas (and get
feedback on) here:

http://github.com/jaypipes/openstack-api

Hopefully, you can give it a look and get a feel for why I think the
code review process will be better than the wiki for controlling the
deliverables produced by this team...

I think it will be better in git (but we also need it in gerrit) when
it comes to resolving conflicts and after we've established a decent
document (eg when we have more content). I'm just looking to make it
as easy as possible for anyone to add any guidelines now. Once we've
actually got something to discuss then we use git/gerrit with patches
proposed to resolve conflicts within the document.


I like the idea of a repo and using Gerrit for discussions to
resolve issues. I don't think it works so well when people are
wanting to dump lots of information in initially.  Unless we agree
to just merge anything vaguely reasonable and then resolve the
conflicts later when we have a reasonable amount of content.
Otherwise stuff will get lost in gerrit history comments and
people's updates to the document will overwrite each other.

I guess we could also start fleshing out in the repo how we'll work
in practice too (eg once the document is stable what process do we
have for making changes - two +2's is probably not adequate for
something like this).

We can make it work exactly like the openstack/governance repo, where
ttx has the only ability to +2/+W approve a patch for merging, and he
tallies a majority vote from the TC members, who vote -1 or +1 on a
proposed patch.

Instead of ttx, though, we can have an API working group lead
selected from the set of folks currently listed as committed to the
effort?

Yep, that sounds fine, though I don't think a simple majority is
sufficient for something like api standards. We either get consensus
or we don't include it in the final document.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [api] Forming the API Working Group

2014-10-14 Thread Alex Xu

On 2014-10-14 21:57, Jay Pipes wrote:

On 10/14/2014 05:04 AM, Alex Xu wrote:

There is one reason to think about what projects *currently* do: it
informs which convention we choose.
For example, with CamelCase versus snake_case, if most projects use
snake_case, then choosing the snake_case style
will be right.


I would posit that the reason we have such inconsistencies in our 
project's APIs is that we haven't taken a stand and said "this is the 
way it must be".


There's lots of examples of inconsistencies out in the OpenStack APIs. 
We can certainly use a wiki or etherpad page to document those 
inconsistencies. But, eventually, this working group should produce 
solid decisions that should be enforced across *future* OpenStack 
APIs. And that guidance should be forthcoming in the next month or so, 
not in one or two release cycles.


I personally think proposing patches to an openstack-api repository is 
the most effective way to make those proposals. Etherpads and wiki 
pages are fine for dumping content, but IMO, we don't need to dump 
content -- we already have plenty of it. We need to propose guidelines 
for *new* APIs to follow.



+1 to doing this in the next month; let's stop adding more inconsistency.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [nova] APIImpact flag for nova specs

2014-10-15 Thread Alex Xu

On 2014-10-15 14:20, Christopher Yeoh wrote:

Hi,

I was wondering what people thought of having a convention of adding
an APIImpact flag to proposed nova specs commit messages where the
Nova API will change? It would make it much easier to find proposed
specs which affect the API as its not always clear from the gerrit
summary listing.

+1, and is there any tool that can be used to search by the flag?
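As a data point, Gerrit's query language can match commit-message text
(e.g. a `message:"APIImpact"` query in the web UI), and the same
filtering is trivial client-side. A hedged sketch with made-up change
records (in practice they would come from the Gerrit REST API or
`git log`):

```python
# Hypothetical change summaries; real ones would come from Gerrit.
changes = [
    {'subject': 'Add tasks API', 'message': 'Add tasks API\n\nAPIImpact\n'},
    {'subject': 'Fix typo', 'message': 'Fix typo in comment\n'},
]

def api_impact_changes(change_list):
    """Return changes whose commit message carries the APIImpact flag."""
    return [c for c in change_list if 'APIImpact' in c['message']]

print([c['subject'] for c in api_impact_changes(changes)])  # ['Add tasks API']
```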



Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [api] API recommendation

2014-10-16 Thread Alex Xu
2014-10-16 17:57 GMT+08:00 Salvatore Orlando :

> In an analysis we recently did for managing lifecycle of neutron
> resources, it also emerged that a task (or operation) API is a very useful
> resource.
> Indeed several neutron resources introduced the (in)famous PENDING_XXX
> operational statuses to note the fact that an operation is in progress and
> its status is changing.
>
> This could have been easily avoided if a facility for querying active
> tasks through the API was available.
>
> From an API guideline viewpoint, I understand that
> https://review.openstack.org/#/c/86938/ proposes the introduction of a
> rather simple endpoint to query active tasks and filter them by resource
> uuid or state, for example.
> While this is hardly questionable, I wonder if it might be worth
> "typifying" the task, ie: adding a resource_type attribute, and/or allowing
> to retrieve active tasks as a child resource of an object, eg.: GET
> /servers//tasks?state=running or if just for running tasks GET
> /servers//active_tasks
>
> The proposed approach for the multiple server create case also makes sense
> to me. Other than "bulk" operations there are indeed cases where a single
> API operation needs to perform multiple tasks. For instance, in Neutron,
> creating a port implies L2 wiring, setting up DHCP info, and securing it on
> the compute node by enforcing anti-spoof rules and security groups. This
> means there will be 3/4 active tasks. For this reason I wonder if it might
> be the case of differentiating between the concept of "operation" and
> "tasks" where the former is the activity explicitly initiated by the API
> consumer, and the latter are the activities which need to complete to
> fulfil it. This is where we might leverage the already proposed request_id
> attribute of the task data structure.
>

This sounds like sub-tasks. The proposal from Andrew includes the sub-task
concept; it just isn't implemented in the first step.
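The `GET /servers/<id>/tasks?state=running` style of query suggested
above could be served by a filter like the sketch below (the record
shapes, IDs, and states are invented for illustration):

```python
# Hypothetical task records keyed by the resource they operate on.
tasks = [
    {'id': 'a1', 'resource_type': 'server', 'resource_id': 's-1',
     'state': 'running'},
    {'id': 'a2', 'resource_type': 'server', 'resource_id': 's-1',
     'state': 'completed'},
    {'id': 'a3', 'resource_type': 'port', 'resource_id': 'p-9',
     'state': 'running'},
]

def list_tasks(resource_type=None, resource_id=None, state=None):
    """Filter tasks the way GET /servers/<id>/tasks?state=running would."""
    result = tasks
    if resource_type is not None:
        result = [t for t in result if t['resource_type'] == resource_type]
    if resource_id is not None:
        result = [t for t in result if t['resource_id'] == resource_id]
    if state is not None:
        result = [t for t in result if t['state'] == state]
    return result

print([t['id'] for t in list_tasks('server', 's-1', 'running')])  # ['a1']
```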


>
> Finally, a note on persistency. How long a completed task, successfully or
> not should be stored for? Do we want to store them until the resource they
> operated on is deleted?
> I don't think it's a great idea to store them indefinitely in the DB.
> Tying their lifespan to resources is probably a decent idea, but time-based
> cleanup policies might also be considered (e.g.: destroy a task record 24
> hours after its completion)
>
>
This is a good point! Tasks could be removed after they finish, except for
failed ones. And maybe we can implement a plugin mechanism to add different
persistency backends?
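A time-based cleanup policy of the kind discussed above, keeping failed
tasks around for debugging, might look like this sketch (the in-memory
store and the 24-hour TTL are illustrative assumptions, not any project's
actual policy):

```python
import time

# Hypothetical in-memory task store; a real backend would be a database.
task_records = [
    {'id': 1, 'state': 'completed', 'finished_at': time.time() - 90000},
    {'id': 2, 'state': 'failed', 'finished_at': time.time() - 90000},
    {'id': 3, 'state': 'running', 'finished_at': None},
]

TTL = 24 * 3600  # destroy completed task records 24 hours after completion

def purge_expired(records, now=None):
    """Drop completed tasks older than TTL; keep failed and running ones.

    Keeping failed tasks preserves them for debugging; a real system
    might age them out on a longer TTL instead.
    """
    now = time.time() if now is None else now
    return [t for t in records
            if t['state'] != 'completed'
            or t['finished_at'] is None
            or now - t['finished_at'] < TTL]

remaining = purge_expired(task_records)
print([t['id'] for t in remaining])  # [2, 3]
```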



> Salvatore
>
>
> On 16 October 2014 08:38, Christopher Yeoh  wrote:
>
>> On Thu, Oct 16, 2014 at 7:19 AM, Kevin L. Mitchell <
>> kevin.mitch...@rackspace.com> wrote:
>>
>>> On Wed, 2014-10-15 at 12:39 -0400, Andrew Laski wrote:
>>> > On 10/15/2014 11:49 AM, Kevin L. Mitchell wrote:
>>> > > Now that we have an API working group forming, I'd like to kick off
>>> some
>>> > > discussion over one point I'd really like to see our APIs using (and
>>> > > I'll probably drop it in to the repo once that gets fully set up):
>>> the
>>> > > difference between synchronous and asynchronous operations.  Using
>>> nova
>>> > > as an example—right now, if you kick off a long-running operation,
>>> such
>>> > > as a server create or a reboot, you watch the resource itself to
>>> > > determine the status of the operation.  What I'd like to propose is
>>> that
>>> > > future APIs use a separate "operation" resource to track status
>>> > > information on the particular operation.  For instance, if we were to
>>> > > rebuild the nova API with this idea in mind, booting a new server
>>> would
>>> > > give you a server handle and an operation handle; querying the server
>>> > > resource would give you summary information about the state of the
>>> > > server (running, not running) and pending operations, while querying
>>> the
>>> > > operation would give you detailed information about the status of the
>>> > > operation.  As another example, issuing a reboot would give you the
>>> > > operation handle; you'd see the operation in a queue on the server
>>> > > resource, but the actual state of the operation itself would be
>>> listed
>>> > > on that operation.  As a side effect, this would allow us (not
>>> require,
>>> > > though) to queue up operations on a resource, and allow us to cancel
>>> an
>>> > > operation that has not yet been started.
>>> > >
>>> > > Thoughts?
>>> >
>>> > Something like https://review.openstack.org/#/c/86938/ ?
>>> >
>>> > I know that Jay has proposed a similar thing before as well.  I would
>>> > love to get some feedback from others on this as it's something I'm
>>> > going to propose for Nova in Kilo.
>>>
>>> Yep, something very much like that :)  But the idea behind my proposal
>>> is to make that a codified API guideline, rather than just an addition
>>> to Nova.
>>>
>>
>> Perhaps the best way to make this move faster is for developers not from
>> Nova
>> who are interested to help develop the t

[openstack-dev] [Nova] Add scheduler-hints when migration/rebuild/evacuate

2014-10-28 Thread Alex Xu

Hi,

Currently migration/rebuild/evacuate don't support passing
scheduler-hints, which means those operations can't use the
scheduler-hints that were passed when creating the instance.

Can we add scheduler-hints support to migration/rebuild/evacuate? That
would also let a user move an instance into or out of a server group.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Add scheduler-hints when migration/rebuild/evacuate

2014-10-28 Thread Alex Xu

On 2014年10月29日 12:37, Chen CH Ji wrote:


I think we already support specifying the host when doing evacuate and
migration?



Yes, we support specifying the host, but scheduler-hints are a different thing.



If we need to use the hints that were passed when creating the instance,
that means we need to persist the scheduler hints.

I remember we used to have a spec for storing them locally ...



I also remember we had a spec for persisting them before, but I don't know
why it didn't continue. And maybe we don't need to persist the
scheduler-hints at all, just allow passing new scheduler-hints when
migrating the instance. Nova just needs to provide the mechanism.
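If hints end up being persisted, a move operation that also accepts new hints needs a merge rule. A minimal sketch, assuming request-supplied hints override the ones persisted at boot time (the function name and hint keys are invented for illustration, not Nova API):

```python
def merge_scheduler_hints(persisted, override):
    """Combine hints saved at instance-create time with hints passed to a
    migration/rebuild/evacuate request.

    Hints supplied with the move request win over the persisted ones.
    """
    merged = dict(persisted or {})
    merged.update(override or {})
    return merged


# Hypothetical example: move an instance into a different server group.
boot_hints = {"group": "af1e9d74", "same_host": ["inst-1"]}
migrate_hints = {"group": "b2c3d4e5"}
print(merge_scheduler_hints(boot_hints, migrate_hints))
# -> {'group': 'b2c3d4e5', 'same_host': ['inst-1']}
```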



Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC




From: Alex Xu 
To: openstack-dev@lists.openstack.org
Date: 10/29/2014 12:19 PM
Subject: [openstack-dev] [Nova] Add scheduler-hints when 
migration/rebuild/evacuate






Hi,

Currently migration/rebuild/evacuate didn't support pass
scheduler-hints, that means any migration
can't use schedule-hints that passed when creating instance.

Can we add scheduler-hints support when migration/rebuild/evacuate? That
also can enable user
move in/out instance to/from an server group.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Add scheduler-hints when migration/rebuild/evacuate

2014-10-29 Thread Alex Xu
Hah, thanks :), Let me see if I can help on that.

2014-10-29 19:59 GMT+08:00 Jay Lau :

> Hi Alex,
>
> You can continue the work https://review.openstack.org/#/c/88983/ from
> here ;-)
>
> 2014-10-29 13:42 GMT+08:00 Chen CH Ji :
>
>> Yes, I remember that spec might talk about local storage (in local db?)
>> and it can be the root cause
>>
>> And I think we need persistent storage otherwise the scheduler hints
>> can't survive in some conditions such as system reboot or upgrade ?
>>
>> Best Regards!
>>
>> Kevin (Chen) Ji 纪 晨
>>
>> Engineer, zVM Development, CSTL
>> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
>> Phone: +86-10-82454158
>> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
>> Beijing 100193, PRC
>>
>>
>> From: Alex Xu 
>> To: openstack-dev@lists.openstack.org
>> Date: 10/29/2014 01:34 PM
>> Subject: Re: [openstack-dev] [Nova] Add scheduler-hints when
>> migration/rebuild/evacuate
>>
>> --
>>
>>
>>
>> On 2014年10月29日 12:37, Chen CH Ji wrote:
>>
>>
>>I think we already support to specify the host when doing evacuate
>>and migration ?
>>
>>
>> Yes, we support to specify the host, but schedule-hints is different
>> thing.
>>
>>
>>
>>if we need use hints that passed from creating instance, that means
>>we need to persistent schedule hints
>>I remember we used to have a spec for store it locally ...
>>
>>
>>
>> I also remember we have one spec for persistent before, but I don't know
>> why it didn't continue.
>> And I think maybe we needn't persistent schedule-hints, just add pass new
>> schedule-hints when
>> migration the instance. Nova just need provide the mechanism.
>>
>>
>>
>>Best Regards!
>>
>>Kevin (Chen) Ji 纪 晨
>>
>>Engineer, zVM Development, CSTL
>>Notes: Chen CH Ji/China/IBM@IBMCN   Internet: *jiche...@cn.ibm.com*
>>
>>Phone: +86-10-82454158
>>Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
>>District, Beijing 100193, PRC
>>
>>
>>From: Alex Xu ** 
>>To: *openstack-dev@lists.openstack.org*
>>
>>Date: 10/29/2014 12:19 PM
>>Subject: [openstack-dev] [Nova] Add scheduler-hints when
>>migration/rebuild/evacuate
>>
>>--
>>
>>
>>
>>Hi,
>>
>>Currently migration/rebuild/evacuate didn't support pass
>>scheduler-hints, that means any migration
>>can't use schedule-hints that passed when creating instance.
>>
>>Can we add scheduler-hints support when migration/rebuild/evacuate?
>>That
>>also can enable user
>>move in/out instance to/from an server group.
>>
>>Thanks
>>Alex
>>
>>
>>___
>>OpenStack-dev mailing list
>> *OpenStack-dev@lists.openstack.org* 
>> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
>><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>>
>>
>>
>>
>>___
>>OpenStack-dev mailing list
>>*OpenStack-dev@lists.openstack.org*
>>
>>*http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
>><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks,
>
> Jay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Add scheduler-hints when migration/rebuild/evacuate

2014-10-29 Thread Alex Xu
2014-10-29 13:42 GMT+08:00 Chen CH Ji :

> Yes, I remember that spec might talk about local storage (in local db?)
> and it can be the root cause
>
> And I think we need persistent storage otherwise the scheduler hints can't
> survive in some conditions such as system reboot or upgrade ?
>
>
Yeah, that's the problem. And I have talked with Jay Lau; it looks like
there is already agreement on persisting it.


> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
> Phone: +86-10-82454158
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
>
> From: Alex Xu 
> To: openstack-dev@lists.openstack.org
> Date: 10/29/2014 01:34 PM
> Subject: Re: [openstack-dev] [Nova] Add scheduler-hints when
> migration/rebuild/evacuate
>
> --
>
>
>
> On 2014年10月29日 12:37, Chen CH Ji wrote:
>
>
>I think we already support to specify the host when doing evacuate and
>migration ?
>
>
> Yes, we support to specify the host, but schedule-hints is different thing.
>
>
>
>if we need use hints that passed from creating instance, that means we
>need to persistent schedule hints
>I remember we used to have a spec for store it locally ...
>
>
>
> I also remember we have one spec for persistent before, but I don't know
> why it didn't continue.
> And I think maybe we needn't persistent schedule-hints, just add pass new
> schedule-hints when
> migration the instance. Nova just need provide the mechanism.
>
>
>
>Best Regards!
>
>Kevin (Chen) Ji 纪 晨
>
>Engineer, zVM Development, CSTL
>Notes: Chen CH Ji/China/IBM@IBMCN   Internet: *jiche...@cn.ibm.com*
>
>Phone: +86-10-82454158
>Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
>District, Beijing 100193, PRC
>
>
>From: Alex Xu ** 
>To: *openstack-dev@lists.openstack.org*
>
>Date: 10/29/2014 12:19 PM
>Subject: [openstack-dev] [Nova] Add scheduler-hints when
>migration/rebuild/evacuate
>
>--
>
>
>
>Hi,
>
>Currently migration/rebuild/evacuate didn't support pass
>scheduler-hints, that means any migration
>can't use schedule-hints that passed when creating instance.
>
>Can we add scheduler-hints support when migration/rebuild/evacuate?
>That
>also can enable user
>move in/out instance to/from an server group.
>
>Thanks
>Alex
>
>
>___
>OpenStack-dev mailing list
> *OpenStack-dev@lists.openstack.org* 
> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
>
>
>
>___
>OpenStack-dev mailing list
>*OpenStack-dev@lists.openstack.org* 
>*http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Add scheduler-hints when migration/rebuild/evacuate

2014-10-30 Thread Alex Xu
2014-10-31 2:14 GMT+08:00 Steve Gordon :

> - Original Message -
> > From: "Wuhongning" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> >
> > +1, the hint should be persistent as other server instance metadata.
>
> I don't think there is much disagreement that it makes sense to do this,
> but more how/where to do so. You can refer to the comments in the review
> Jay linekd for more background:
>
>
> https://review.openstack.org/#/c/88983/17/specs/juno/persist-scheduler-hints.rst


Yea, thanks! will check that.


>
>
> > 
> > From: Alex Xu [sou...@gmail.com]
> > Sent: Wednesday, October 29, 2014 9:11 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Nova] Add scheduler-hints when
> > migration/rebuild/evacuate
> >
> >
> >
> > 2014-10-29 13:42 GMT+08:00 Chen CH Ji
> > mailto:jiche...@cn.ibm.com>>:
> >
> > Yes, I remember that spec might talk about local storage (in local db?)
> and
> > it can be the root cause
> >
> > And I think we need persistent storage otherwise the scheduler hints
> can't
> > survive in some conditions such as system reboot or upgrade ?
> >
> >
> > Yeah, that's problem. And I have talk with Jay Lau, look like there
> already
> > got agreement on persistent it.
> >
> >
> > Best Regards!
> >
> > Kevin (Chen) Ji 纪 晨
> >
> > Engineer, zVM Development, CSTL
> > Notes: Chen CH Ji/China/IBM@IBMCN   Internet:
> > jiche...@cn.ibm.com<mailto:jiche...@cn.ibm.com>
> > Phone: +86-10-82454158
> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> > Beijing 100193, PRC
> >
> >
> > From: Alex Xu mailto:x...@linux.vnet.ibm.com>>
> > To:
> > openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org>
> > Date: 10/29/2014 01:34 PM
> > Subject: Re: [openstack-dev] [Nova] Add scheduler-hints when
> > migration/rebuild/evacuate
> >
> > 
> >
> >
> >
> > On 2014年10月29日 12:37, Chen CH Ji wrote:
> >
> > I think we already support to specify the host when doing evacuate and
> > migration ?
> >
> > Yes, we support to specify the host, but schedule-hints is different
> thing.
> >
> >
> > if we need use hints that passed from creating instance, that means we
> need
> > to persistent schedule hints
> > I remember we used to have a spec for store it locally ...
> >
> >
> > I also remember we have one spec for persistent before, but I don't know
> why
> > it didn't continue.
> > And I think maybe we needn't persistent schedule-hints, just add pass new
> > schedule-hints when
> > migration the instance. Nova just need provide the mechanism.
> >
> >
> > Best Regards!
> >
> > Kevin (Chen) Ji 纪 晨
> >
> > Engineer, zVM Development, CSTL
> > Notes: Chen CH Ji/China/IBM@IBMCN   Internet:
> > jiche...@cn.ibm.com<mailto:jiche...@cn.ibm.com>
> > Phone: +86-10-82454158
> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> > Beijing 100193, PRC
> >
> >
> > From: Alex Xu <mailto:x...@linux.vnet.ibm.com>
> > To:
> > openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org>
> > Date: 10/29/2014 12:19 PM
> > Subject: [openstack-dev] [Nova] Add scheduler-hints when
> > migration/rebuild/evacuate
> >
> > 
> >
> >
> >
> > Hi,
> >
> > Currently migration/rebuild/evacuate didn't support pass
> > scheduler-hints, that means any migration
> > can't use schedule-hints that passed when creating instance.
> >
> > Can we add scheduler-hints support when migration/rebuild/evacuate? That
> > also can enable user
> > move in/out instance to/from an server group.
> >
> > Thanks
> &

Re: [openstack-dev] [nova][compute] propose to use a table to deal with the vm_state when _init_instance in compute

2014-11-04 Thread Alex Xu
+1, good idea!

2014-11-04 15:15 GMT+08:00 Eli Qiao :

>  hello all:
> In the current _init_instance function in the compute manager,
> there's a flood of 'and'/'or' logic checking the vm_state and task_state
> when initializing an instance during service startup.
> This makes the code hard to read and hard to maintain, so I propose a new
> way to handle it.
>
> We can create a vm_state_table; by looking up the table we can find the
> action we need to take for the instance.
> From this table, you can clearly see which vm_state and task_state should
> take which action.
>
> For example:
> {vm_states list: {task_states list: action}},
>
> Each entry stands for an action, and we walk through the tuple,
> so the table should look like this:
>
> vm_state_table = (
>     {vm_states.SOFT_DELETE: {'ALL': ACTION_NONE}},
>     {vm_states.ERROR: {('NOT_IN', [task_states.RESIZE_MIGRATING,
>                                    task_states.DELETING]): ACTION_NONE}},
>     {vm_states.DELETED: {'ALL': _complete_partial_deletion}},
>     {vm_states.BUILDING: {'ALL': ACTION_ERROR}},
>     {'ALL': {('IN', [task_states.SCHEDULING,
>                      task_states.BLOCK_DEVICE_MAPPING,
>                      task_states.NETWORKING,
>                      task_states.SPAWNING]): ACTION_ERROR}},
>     {('IN', [vm_states.ACTIVE, vm_states.STOPPED]):
>         {('IN', [task_states.REBUILDING,
>                  task_states.REBUILD_BLOCK_DEVICE_MAPPING,
>                  task_states.REBUILD_SPAWNING]): ACTION_ERROR}},
>     {('NOT_IN', [vm_states.ERROR]):
>         {('IN', [task_states.IMAGE_SNAPSHOT_PENDING,
>                  task_states.IMAGE_PENDING_UPLOAD,
>                  task_states.IMAGE_UPLOADING,
>                  task_states.IMAGE_SNAPSHOT]): _post_interrupted_snapshot_cleanup}},
> )
>
> What do you think? Do we need a bp for this?
>
> --
> Thanks,
> Eli (Li Yong) Qiao
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] top gate bug is libvirt snapshot

2014-07-15 Thread Alex Xu
A question about swap volume: swap volume's implementation is very similar
to live snapshot's; both are implemented via blockRebase. But swap volume
doesn't check any libvirt or qemu version.
Should we add a version check for swap_volume now? That would mean
swap_volume gets disabled as well.
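A version gate for swap_volume analogous to the live-snapshot one would compare the reported libvirt/qemu versions against required minimums, packed the way libvirt reports them. A sketch, where the minimum version constants are placeholders rather than values taken from Nova:

```python
# Hypothetical minimums -- placeholders, not Nova's real constants.
MIN_LIBVIRT_BLOCK_REBASE_VERSION = (1, 1, 1)
MIN_QEMU_BLOCK_REBASE_VERSION = (1, 3, 0)


def version_to_int(ver):
    """Pack a (major, minor, micro) tuple into libvirt's integer form,
    i.e. major * 1000000 + minor * 1000 + micro."""
    major, minor, micro = ver
    return major * 1000000 + minor * 1000 + micro


def has_min_version(lib_ver, qemu_ver):
    """Return True if both reported versions meet the minimums."""
    return (lib_ver >= version_to_int(MIN_LIBVIRT_BLOCK_REBASE_VERSION)
            and qemu_ver >= version_to_int(MIN_QEMU_BLOCK_REBASE_VERSION))


print(has_min_version(1002005, 2000000))  # libvirt 1.2.5, qemu 2.0 -> True
print(has_min_version(9006, 2000000))     # libvirt 0.9.6 -> False
```

If the check fails, the driver would fall back or raise, exactly as the live-snapshot path is being disabled in the patch under discussion.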


On 2014年06月26日 19:00, Sean Dague wrote:

While the Trusty transition was mostly uneventful, it has exposed a
particular issue in libvirt, which is generating ~ 25% failure rate now
on most tempest jobs.

As can be seen here -
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L294-L297


... the libvirt live_snapshot code is something that our test pipeline
has never tested before, because it wasn't a new enough libvirt for us
to take that path.

Right now it's exploding, a lot -
https://bugs.launchpad.net/nova/+bug/1334398

Snapshotting gets used in Tempest to create images for testing, so image
setup tests are doing a decent number of snapshots. If I had to take a
completely *wild guess*, it's that libvirt can't do 2 live_snapshots at
the same time. It's probably something that most people haven't hit. The
wild guess is based on other libvirt issues we've hit that other people
haven't, and they are basically always a parallel ops triggered problem.

My 'stop the bleeding' suggested fix is this -
https://review.openstack.org/#/c/102643/ which just effectively disables
this code path for now. Then we can get some libvirt experts engaged to
help figure out the right long term fix.

I think there are a couple:

1) see if newer libvirt fixes this (1.2.5 just came out), and if so
mandate at some known working version. This would actually take a bunch
of work to be able to test a non packaged libvirt in our pipeline. We'd
need volunteers for that.

2) lock snapshot operations in nova-compute, so that we can only do 1 at
a time. Hopefully it's just 2 snapshot operations that is the issue, not
any other libvirt op during a snapshot, so serializing snapshot ops in
n-compute could put the kid gloves on libvirt and make it not break
here. This also needs some volunteers as we're going to be playing a
game of progressive serialization until we get to a point where it looks
like the failures go away.

3) Roll back to precise. I put this idea here for completeness, but I
think it's a terrible choice. This is one isolated, previously untested
(by us), code path. We can't stay on libvirt 0.9.6 forever, so actually
need to fix this for real (be it in nova's use of libvirt, or libvirt
itself).

There might be other options as well, ideas welcomed.

But for right now, we should stop the bleeding, so that nova/libvirt
isn't blocking everyone else from merging code.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Scheduler] Promote select_destination as a REST API

2014-07-23 Thread Alex Xu
Maybe we can achieve this goal another way, by adding a new API
'confirm_before_migration' similar to 'confirm_resize'. That could
also resolve Chris Friesen's concern.


On 2014年07月23日 00:13, Jay Pipes wrote:

On 07/21/2014 11:16 PM, Jay Lau wrote:

Hi Jay,

There are indeed some China customers want this feature because before
they do some operations, they want to check the action plan, such as
where the VM will be migrated or created, they want to use some
interactive mode do some operations to make sure no errors.


This isn't something that normal tenants should have access to, IMO. 
The scheduler is not like a database optimizer that should give you a 
query plan for a SQL statement. The information the scheduler is 
acting on (compute node usage records, aggregate records, deployment 
configuration, etc) are absolutely NOT something that should be 
exposed to end-users.


I would certainly support a specification that intended to add 
detailed log message output from the scheduler that recorded how it 
made its decisions, so that an operator could evaluate the data and 
decision, but I'm not in favour of exposing this information via a 
tenant-facing API.


Best,
-jay
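The operator-facing decision logging described above could look roughly like this; the function and logger names are invented for illustration, and real scheduler weighing is far more involved:

```python
import logging

LOG = logging.getLogger("scheduler")
logging.basicConfig(level=logging.DEBUG)


def select_destination(hosts, weigh):
    """Pick the highest-weighted host and log how the decision was made,
    so operators can audit it without a tenant-facing API."""
    scored = sorted(((weigh(h), h) for h in hosts), reverse=True)
    for score, host in scored:
        LOG.debug("candidate %s scored %.2f", host, score)
    best = scored[0][1]
    LOG.info("selected %s from %d candidates", best, len(scored))
    return best


# Toy weigher: prefer the host with the most free RAM (made-up numbers).
free_ram = {"node-1": 2048.0, "node-2": 8192.0}
print(select_destination(["node-1", "node-2"], free_ram.get))  # -> node-2
```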


2014-07-22 10:23 GMT+08:00 Jay Pipes mailto:jaypi...@gmail.com>>:

On 07/21/2014 07:45 PM, Jay Lau wrote:

There is one requirement that some customers want to get the
possible
host list when create/rebuild/migrate/__evacuate VM so as to
create a
resource plan for those operations, but currently
select_destination is
not a REST API, is it possible that we promote this API to be a
REST API?


Which "customers" want to get the possible host list?

/me imagines someone asking Amazon for a REST API that returned all
the possible servers that might be picked for placement... and what
answer Amazon might give to the request.

If by "customer", you are referring to something like IBM Smart
Cloud Orchestrator, then I don't really see the point of supporting
something like this. Such a customer would only need to "create a
resource plan for those operations" if it was wholly supplanting
large pieces of OpenStack infrastructure, including parts of Nova
and much of Heat.

Best,
-jay


_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






--
Thanks,

Jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] libvirtError: XML error: Missing CPU model name on 2nd level vm

2014-08-05 Thread Alex Xu
You can search for "nested kvm" on the web to get more info, or you can try
just using qemu instead of kvm at the 2nd level.


On 2014年08月06日 11:32, Chen CH Ji wrote:


Thanks a lot for your suggestions. I guess it might be a 2nd-level
configuration issue ... will take more time on it, thanks.



my 1st level host has following output when 'virsh capabilities'


<host>
  <uuid>c476d525-bba7-e211-98a9-9cd4d92d1300</uuid>
  <cpu>
    <arch>x86_64</arch>
    <model>SandyBridge</model>
    <vendor>Intel</vendor>
    <!-- the list of <feature/> entries was stripped by the archive -->
  </cpu>
</host>


while the output on 2nd level is


<host>
  <uuid>a1bb0066-80d2-f9db-aa4e-0520a0562875</uuid>
  <cpu>
    <arch>x86_64</arch>
  </cpu>
</host>





Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC




From: Rafael Folco 
To: "OpenStack Development Mailing List (not for usage questions)" 
,

Date: 08/06/2014 02:11 AM
Subject: Re: [openstack-dev] [nova] libvirtError: XML error: Missing 
CPU model name on 2nd level vm






cat /proc/cpuinfo
Make sure the cpu model and version is listed on 
/usr/share/libvirt/cpu_map.xml


Hope this helps.




On Tue, Aug 5, 2014 at 2:49 PM, Solly Ross <_sross@redhat.com_ 
> wrote:


Hi Kevin,
Running devstack in a VM is perfectly doable.  Many developers use
devstack inside a VM (I run mine inside a VM launched using libvirt
on KVM).  I can't comment on the issue that you're encountering,
but perhaps something wasn't configured correctly when you
launched the
VM?

Best Regards,
Solly Ross

- Original Message -
> From: "Chen CH Ji" <_jiche...@cn.ibm.com_
>
> To: _openstack-dev@lists.openstack.org_

> Sent: Friday, August 1, 2014 5:04:16 AM
> Subject: [openstack-dev] [nova] libvirtError: XML error: Missing
CPU model name on 2nd level vm
>
>
>
> Hi
> I don't have a real PC to so created a test env ,so I created a
2nd level env
> (create a kvm virtual machine on top of a physical host then run
devstack o
> the vm)
> I am not sure whether it's doable because I saw following error
when start
> nova-compute service , is it a bug or I need to update my
configuration
> instead? thanks
>
>
> 2014-08-01 17:04:51.532 DEBUG nova.virt.libvirt.config [-]
Generated XML
> ('\n x86_64\n  threads="1"/>\n\n',) from (pid=16956) to_xml
> /opt/stack/nova/nova/virt/libvirt/config.py:79
> Traceback (most recent call last):
> File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py",
line 346, in
> fire_timers
> timer()
> File "/usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py",
line 56, in
> __call__
> cb(*args, **kw)
> File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line
163, in
> _do_send
> waiter.switch(result)
> File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py",
line 194, in
> main
> result = function(*args, **kwargs)
> File "/opt/stack/nova/nova/openstack/common/service.py", line
490, in
> run_service
> service.start()
> File "/opt/stack/nova/nova/service.py", line 164, in start
> self.manager.init_host()
> File "/opt/stack/nova/nova/compute/manager.py", line 1055, in
init_host
> self.driver.init_host(host=self.host)
> File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 633, in
init_host
> self._do_quality_warnings()
> File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 616, in
> _do_quality_warnings
> caps = self._get_host_capabilities()
> File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2942, in
> _get_host_capabilities
> libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
> File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line
179, in doit
> result = proxy_call(self._autowrap, f, *args, **kwargs)
> File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line
139, in
> proxy_call
> rv = execute(f,*args,**kwargs)
> File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line
77, in
> tworker
> rv = meth(*args,**kwargs)
> File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3127,
in baselineCPU
> if ret is None: raise libvirtError ('virConnectBaselineCPU()
failed',
> conn=self)
> libvirtError: XML error: Missing CPU model name
>
> Best

[openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

When I start up nova-network, it gets stuck trying to acquire the lock for ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
.

Checking the code, I found that when utils.synchronized is invoked without
the lock_path parameter, the code will try to use a POSIX semaphore.

But a POSIX semaphore isn't released even if the process crashes. Should we
fix this? I see a lot of calls to synchronized without lock_path.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
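The practical difference here is that a file lock (what external=True plus a lock_path gives you, via fcntl) is dropped by the kernel when the holding process dies, whereas a POSIX semaphore survives the crash. A small Unix-only demonstration of the file-lock behaviour, using fork to simulate a crashed holder:

```python
import fcntl
import os
import tempfile

lock_file = os.path.join(tempfile.gettempdir(), "demo-ebtables.lock")

# Child takes the lock and dies without ever releasing it.
pid = os.fork()
if pid == 0:
    child_f = open(lock_file, "w")
    fcntl.flock(child_f, fcntl.LOCK_EX)
    os._exit(0)  # simulate a crash: no unlock, no clean shutdown
os.waitpid(pid, 0)

# The parent can still take the lock: the kernel dropped it on exit.
# With a POSIX semaphore this acquire would hang forever.
f = open(lock_file, "w")
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises if still held
print("lock acquired after holder exit")
```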


Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

On 2014年08月07日 17:13, Chen CH Ji wrote:


Just to clarify, I think your case would be: run nova-network, then ^C or
abnormally shut it down, and it might happen during the period of holding a
semaphore without releasing it, right?



Yes, you are right. Thanks for the clarification.

I guess all components other than nova have this problem? So maybe
removing this [nova] tag can get more input ...



yes




Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC




From: Alex Xu 
To: OpenStack Development Mailing List 
,

Date: 08/07/2014 04:54 PM
Subject: [openstack-dev] [nova] nova-network stuck at get semaphores 
lock when startup






When I startup nova-network, it stuck at trying get lock for ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
.

Checking the code found that invoke utils.synchronized without parameter
lock_path, the code will try to use
posix semaphore.

But posix semaphore won't release even the process crashed. Should we
fix it? I saw a lot of call for synchronized
without lock_path.

Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu



On 2014?08?07? 17:13, Chen CH Ji wrote:


Just to clarify , I think your case would be run nova-network ,then ^C 
or abnormally shutdown it
and it might be during  the period of holding a semaphore without 
releasing it, right?



yes, you are right. thanks for the clarify.

I guess all components other than nova have this problem? So maybe 
removing the [nova] tag can get more input ...



yes




Best Regards!

Kevin (Chen) Ji ? ?

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC




From: Alex Xu 
To: OpenStack Development Mailing List 
,

Date: 08/07/2014 04:54 PM
Subject: [openstack-dev] [nova] nova-network stuck at get semaphores 
lock when startup






When I start up nova-network, it gets stuck trying to acquire the lock for 
ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
.

Checking the code, I found that when utils.synchronized is invoked without 
the lock_path parameter, the code tries to use a POSIX semaphore.

But a POSIX semaphore is not released even if the process crashes. Should we
fix this? I see a lot of calls to synchronized without lock_path.

Thanks
Alex
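As background for why a file-based lock with lock_path avoids the hang: the kernel releases flock()-style file locks when the holding process exits, even on a crash, whereas an unreleased POSIX semaphore stays acquired. A minimal sketch of such an external lock (illustrative only; synchronized_external is a made-up name, not the oslo implementation):

```python
import fcntl
import functools
import os


def synchronized_external(name, lock_dir='/tmp'):
    """Illustrative file-based external lock (not the oslo code).

    flock() locks are dropped by the kernel when the holding process
    exits, even on a crash, while an unreleased POSIX semaphore stays
    acquired forever, which is what hangs nova-network on restart.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            path = os.path.join(lock_dir, 'nova-lock-%s' % name)
            with open(path, 'w') as handle:
                fcntl.flock(handle, fcntl.LOCK_EX)  # blocks until acquired
                try:
                    return func(*args, **kwargs)
                finally:
                    fcntl.flock(handle, fcntl.LOCK_UN)
        return wrapper
    return decorator


@synchronized_external('ebtables')
def ensure_ebtables_rules(rules, table='filter'):
    # stand-in body; the real function applies the ebtables rules
    return len(rules)
```

If the process dies between LOCK_EX and LOCK_UN, the next process simply acquires the lock; no stale state survives.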


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [nova] nova-network stuck at get semaphores lock when startup

2014-08-07 Thread Alex Xu

Oops, thanks

On 2014年08月07日 22:08, Ben Nemec wrote:

Unfortunately this is a known issue.  We're working on a fix:
https://bugs.launchpad.net/oslo/+bug/1327946

On 08/07/2014 03:57 AM, Alex Xu wrote:

When I start up nova-network, it gets stuck trying to acquire the lock for 
ebtables.

@utils.synchronized('ebtables', external=True)
def ensure_ebtables_rules(rules, table='filter'):
  .

Checking the code, I found that when utils.synchronized is invoked without 
the lock_path parameter, the code tries to use a POSIX semaphore.

But a POSIX semaphore is not released even if the process crashes. Should we
fix this? I see a lot of calls to synchronized without lock_path.

Thanks
Alex




[openstack-dev] [qa][nova] The document for the changes from Nova v2 api to v3

2013-11-13 Thread Alex Xu

Hi, guys

This is the document for the changes from Nova v2 api to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I would appreciate it if anyone could help review it.

Another problem comes up: how do we keep the doc updated? Can we ask 
people who change something in the v3 API to update the doc accordingly? 
I think that is one way to resolve it.


Thanks
Alex



Re: [openstack-dev] [openstack-qa] [qa][nova] The document for the changes from Nova v2 api to v3

2013-11-13 Thread Alex Xu

On 2013年11月14日 05:22, David Kranz wrote:

On 11/13/2013 08:30 AM, Alex Xu wrote:

Hi, guys

This is the document for the changes from Nova v2 api to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I would appreciate it if anyone could help review it.

Another problem comes up: how do we keep the doc updated? Can we ask 
people who change something in the v3 API to update the doc accordingly? 
I think that is one way to resolve it.


Thanks
Alex



___
openstack-qa mailing list
openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa
Thanks, this is great. I fixed a bug in the os-services section. BTW, 
openstack...@lists.openstack.org list is obsolete. openstack-dev with 
subject starting with [qa] is the current "qa list". About updating, I 
think this will have to be heavily socialized in the nova team. The 
initial review should happen by those reviewing the tempest v3 api 
changes. That is how I found the os-services bug.


 -David
Thanks for the bug fix and the reminder. Having the initial review happen 
through the tempest v3 API change reviews makes sense to me.


Re: [openstack-dev] [openstack-qa] [qa][nova] The document for the changes from Nova v2 api to v3

2013-11-13 Thread Alex Xu

On 2013年11月14日 07:09, Christopher Yeoh wrote:
On Thu, Nov 14, 2013 at 7:52 AM, David Kranz wrote:


On 11/13/2013 08:30 AM, Alex Xu wrote:

Hi, guys

This is the document for the changes from Nova v2 api to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I would appreciate it if anyone could help review it.

Another problem comes up: how do we keep the doc updated? Can we ask
people who change something in the v3 API to update the doc accordingly?
I think that is one way to resolve it.

Thanks
Alex



___
openstack-qa mailing list
openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa

Thanks, this is great. I fixed a bug in the os-services section.
BTW, openstack...@lists.openstack.org list is obsolete.
openstack-dev with subject starting with [qa] is the current "qa
list". About updating, I think this will have to be heavily
socialized in the nova team. The initial review should happen by
those reviewing the tempest v3 api changes. That is how I found
the os-services bug.


Can we leverage the DocImpact flag in the commit message somehow - say, any 
time there is a changeset with DocImpact that changes a file under 
nova/api/openstack/compute, we generate a notification?


I think we're getting much better at enforcing the DocImpact flag 
during reviews.

+1 for the DocImpact flag, that's a good idea.


Chris.
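Chris's suggestion could be sketched as a small gate-side check; the function name and the notification wiring are assumptions for illustration, not an existing CI hook:

```python
def needs_doc_notification(commit_msg, changed_files):
    """Hypothetical check: flag a changeset that both carries the
    DocImpact tag and touches the compute API tree."""
    has_flag = any(line.strip().lower().startswith('docimpact')
                   for line in commit_msg.splitlines())
    touches_api = any(path.startswith('nova/api/openstack/compute')
                      for path in changed_files)
    return has_flag and touches_api
```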




[openstack-dev] [Nova] Network stuff in Nova API v3

2013-08-07 Thread Alex Xu

Hi, guys,

Currently we have one core and two extensions that related network in 
Nova API v3.

They are ips, attach_interface and multinic. I have two questions for them.

The first question is about ips and attach_interface. The below was the 
index's response

of ips and attach_interface:
ips:
{
"addresses": {
"net1": [
{
"addr": "10.0.0.8",
"mac_addr": "fa:16:3e:c2:0f:aa",
"type": "fixed",
"version": 4
},
{
"addr": "30.0.0.5",
"mac_addr": "fa:16:3e:c2:0f:aa",
"type": "floating",
"version": 4
}
]
}
}

attach_interface:
{
"interface_attachments": [
{
"fixed_ips": [
{
"ip_address": "10.0.0.8",
"subnet_id": "f84f7d51-758c-4a02-a4c9-171ed988a884"
}
],
"mac_addr": "fa:16:3e:c2:0f:aa",
"net_id": "b6ba34f1-5504-4aca-825b-04511c104802",
"port_id": "3660380b-0075-4115-be96-f08b41ccdf5d",
"port_state": "ACTIVE"
}
]
}

The problem is that the responses are similar, just with different views, 
and all the information can be fetched from Neutron directly. I think we 
don't want to proxy Neutron through Nova. So how about we merge ips and 
attach_interface into a new extension? The new extension would include the 
things below:
1. Extend the server detail view to list the port UUIDs. Users can get more 
information from Neutron by port UUID.
2. Attach and detach interfaces, moved from the attach_interface extension.
3. Extend server creation to support networks (the patch is already here: 
https://review.openstack.org/#/c/36615/)


The second question is about multinic. Looking into the code, multinic just 
adds a fixed IP to the server's port. That can be done via the Neutron API 
directly too. But there are also inject_network_info and reset_network in 
the code. Only the xen and vmware drivers implement those functions. I'm 
not familiar with xen and vmware; I guess they use a guest agent to update 
the guest network. If I am right, I don't think we should encourage 
updating the guest network that way. There are APIs for inject_network_info 
and reset_network in the admin-actions extension as well, and I think we 
can keep those. But can we delete multinic for v3?


Thanks
Alex
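Point 1 of the proposal can be sketched against the attachment data quoted above; port_ids_for_server is a hypothetical helper, not existing Nova code:

```python
# Sample data taken from the attach_interface response above.
ATTACHMENTS = [{
    "fixed_ips": [{"ip_address": "10.0.0.8",
                   "subnet_id": "f84f7d51-758c-4a02-a4c9-171ed988a884"}],
    "mac_addr": "fa:16:3e:c2:0f:aa",
    "net_id": "b6ba34f1-5504-4aca-825b-04511c104802",
    "port_id": "3660380b-0075-4115-be96-f08b41ccdf5d",
    "port_state": "ACTIVE",
}]


def port_ids_for_server(attachments):
    """The merged extension would expose only the port UUIDs on the
    server detail and defer everything else to Neutron."""
    return [attachment["port_id"] for attachment in attachments]
```

Clients then query Neutron with each port UUID for addresses, MACs, and state, so Nova no longer duplicates that view.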



Re: [openstack-dev] [Nova] Network stuff in Nova API v3

2013-08-07 Thread Alex Xu

On 2013年08月07日 17:38, John Garbutt wrote:

multi-nic added an extra virtual interface on a seprate network, like
adding a port:
http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html
That just describes creating an instance with multiple NICs, which we will 
support. We still have a problem with the add_fixed_ip and remove_fixed_ip 
actions in the multinic extension. Those actions invoke 
inject_network_info and reset_network.


I think we need to keep a nova-network focused api extension, and a
separate neutron focused api extension, because we have not yet
removed neutron. It should probably proxy the neutron information
still, so people can more easily transition between nova-network and
neutron.

Sound good, thanks.

I agree we should probably slim down the neturon focused api extension.

Howerver, it should probably include network-ids and port-ids for each
port, if we still support both:
 nova boot --image  --flavor  --nic net-id=
--nic net-id= 
and this:
nova boot --image  --flavor  --nic port-id= 

Yes, we still support those. But why do we need network IDs?

Longer term, we still need the metadata service to provide networking
information, so there will be a nova-api that has to proxy info from
neutron, but I agree we should reduce where we can.

John

On 7 August 2013 10:08, Alex Xu  wrote:

Hi, guys,

Currently we have one core and two extensions that related network in Nova
API v3.
They are ips, attach_interface and multinic. I have two questions for them.

The first question is about ips and attach_interface. The below was the
index's response
of ips and attach_interface:
ips:
{
 "addresses": {
 "net1": [
 {
 "addr": "10.0.0.8",
 "mac_addr": "fa:16:3e:c2:0f:aa",
 "type": "fixed",
 "version": 4
 },
 {
 "addr": "30.0.0.5",
 "mac_addr": "fa:16:3e:c2:0f:aa",
 "type": "floating",
 "version": 4
 }
 ]
 }
}

attach_interface:
{
 "interface_attachments": [
 {
 "fixed_ips": [
 {
 "ip_address": "10.0.0.8",
 "subnet_id": "f84f7d51-758c-4a02-a4c9-171ed988a884"
 }
 ],
 "mac_addr": "fa:16:3e:c2:0f:aa",
 "net_id": "b6ba34f1-5504-4aca-825b-04511c104802",
 "port_id": "3660380b-0075-4115-be96-f08b41ccdf5d",
 "port_state": "ACTIVE"
 }
 ]
}

The problem is the responses are similar, but just with different view,  and
all the information can
get from Neutron directly. I think we didn't want to proxy Neutron through
Nova. So how about
we merge ips and attach_interface into an new extension. The new extension
will be include the
things as below:
1. Extend the detail of servers to list the uuid of port. User can get more
information from Neutron
by port uuid.
2. Attach and detach interface that move from extension attach_interface.
3. Extend the creation of servers to support network (The patch already here
https://review.openstack.org/#/c/36615/)

The second question is about multinic. Looking into the code, multinic just
add fixed_ip for server's port.
That can be done by Neutron API directly too. But there are
inject_network_info and reset_network
in the code. Only xen and vmware's driver implement that function. I'm not
familiar with xen and
vmware, I guess it use guest agent to update the guest network. If I am
right, I think we didn't
encourage using that way to update guest network.There are api for
inject_network_info and reset_network
in extension admin-actions also. I think we can keep them. But can we delete
multinic for V3?

Thanks
Alex




Re: [openstack-dev] [Nova] Network stuff in Nova API v3

2013-08-07 Thread Alex Xu

On 2013年08月07日 22:24, John Garbutt wrote:

Hey,

On 7 August 2013 14:42, Alex Xu  wrote:

On 2013年08月07日 17:38, John Garbutt wrote:

multi-nic added an extra virtual interface on a seprate network, like
adding a port:

http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html

That just describe create instance with multinic, that we will support.
Still have problem
with action add_fixed_ip and remove_fixed_ip in extension multinic. Those
action
invoke inject_network_info and reset_network.

Ah, sorry, my bad. That writes network data into xenstore (the new IP
address), then calls the agent inside the VM to read that data, and
apply the change in the VM, without the need to reboot.

Thanks for the info. Updating the network via a guest agent is not a good 
approach. I would prefer to remove the multinic extension, but we can keep 
inject_network_info and reset_network in the admin_actions extension.

I agree we should probably slim down the neturon focused api extension.

Howerver, it should probably include network-ids and port-ids for each
port, if we still support both:
  nova boot --image  --flavor  --nic net-id=
--nic net-id= 
and this:
 nova boot --image  --flavor  --nic port-id=


Yes, we still support those. But why we need network-ids?

I was just thinking about if a user creates their server with a network id:
 nova boot --image  --flavor  --nic net-id=
--nic net-id= 
Then nova list only shows port-ids, it seems a bit confusing.

Users can look up the network ID from the port ID, so I still prefer just the port ID.

John






Re: [openstack-dev] [Nova] Network stuff in Nova API v3

2013-08-12 Thread Alex Xu

On 2013年08月08日 13:49, Zhu Bo wrote:

On 2013年08月07日 21:42, Alex Xu wrote:

On 2013年08月07日 17:38, John Garbutt wrote:

multi-nic added an extra virtual interface on a seprate network, like
adding a port:
http://docs.openstack.org/trunk/openstack-compute/admin/content/using-multi-nics.html 

That just describe create instance with multinic, that we will 
support. Still have problem
with action add_fixed_ip and remove_fixed_ip in extension multinic. 
Those action

invoke inject_network_info and reset_network.


I think we need to keep a nova-network focused api extension, and a
separate neutron focused api extension, because we have not yet
removed neutron. It should probably proxy the neutron information
still, so people can more easily transition between nova-network and
neutron.

Sound good, thanks.
The Nova v2 API will be kept alongside v3 for some time, I think. Why not 
just keep the Neutron API extension in v3? I think people will have enough 
time to understand the difference between v2 and v3. If we keep an API for 
nova-network in v3, we will still face the same problem when the next API 
version appears or when nova-network is removed.
Makes sense. If we add a nova-network focused API extension, we will face 
the same problem when the next API version appears.

I agree we should probably slim down the neturon focused api extension.

Howerver, it should probably include network-ids and port-ids for each
port, if we still support both:
 nova boot --image  --flavor  --nic net-id=
--nic net-id= 
and this:
nova boot --image  --flavor  --nic 
port-id= 

Yes, we still support those. But why we need network-ids?

Longer term, we still need the metadata service to provide networking
information, so there will be a nova-api that has to proxy info from
neutron, but I agree we should reduce where we can.

agree with this. There will be a nova-api that has to proxy info from
neutron, but we should reduce where we can.


John

On 7 August 2013 10:08, Alex Xu  wrote:

Hi, guys,

Currently we have one core and two extensions that related network 
in Nova

API v3.
They are ips, attach_interface and multinic. I have two questions 
for them.


The first question is about ips and attach_interface. The below was 
the

index's response
of ips and attach_interface:
ips:
{
 "addresses": {
 "net1": [
 {
 "addr": "10.0.0.8",
 "mac_addr": "fa:16:3e:c2:0f:aa",
 "type": "fixed",
 "version": 4
 },
 {
 "addr": "30.0.0.5",
 "mac_addr": "fa:16:3e:c2:0f:aa",
 "type": "floating",
 "version": 4
 }
 ]
 }
}

attach_interface:
{
 "interface_attachments": [
 {
 "fixed_ips": [
 {
 "ip_address": "10.0.0.8",
 "subnet_id": 
"f84f7d51-758c-4a02-a4c9-171ed988a884"

 }
 ],
 "mac_addr": "fa:16:3e:c2:0f:aa",
 "net_id": "b6ba34f1-5504-4aca-825b-04511c104802",
 "port_id": "3660380b-0075-4115-be96-f08b41ccdf5d",
 "port_state": "ACTIVE"
 }
 ]
}

The problem is the responses are similar, but just with different 
view,  and

all the information can
get from Neutron directly. I think we didn't want to proxy Neutron 
through

Nova. So how about
we merge ips and attach_interface into an new extension. The new 
extension

will be include the
things as below:
1. Extend the detail of servers to list the uuid of port. User can 
get more

information from Neutron
by port uuid.
2. Attach and detach interface that move from extension 
attach_interface.
3. Extend the creation of servers to support network (The patch 
already here

https://review.openstack.org/#/c/36615/)

The second question is about multinic. Looking into the code, 
multinic just

add fixed_ip for server's port.
That can be done by Neutron API directly too. But there are
inject_network_info and reset_network
in the code. Only xen and vmware's driver implement that function. 
I'm not

familiar with xen and
vmware, I guess it use guest agent to update the guest network. If 
I am

right, I think we didn't
encourage using that way to update guest network.There are api for
inject_network_info and reset_network
in extension admin-actions also. I think we can keep them. But can 
we delete

multinic for V3?

Thanks
Alex



[openstack-dev] Remove 'absolute_limit' from limits's response for v3

2013-08-12 Thread Alex Xu

Hi, guys,

 While cleaning up the v3 API, I found that the limits extension returns 
absolute_limit. I think that is already provided by the quota_sets 
extension, and I can't see a reason why we keep it in limits. To make sure 
I'm not missing something, I'm bringing it up here. If we have no reason to 
keep it in limits, I would prefer to delete it.


https://review.openstack.org/#/c/39872/


Thanks
Alex
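To make the overlap concrete, a small sketch comparing the two responses; the field names and the mapping are illustrative, not the exact API schemas:

```python
# Illustrative response fragments: the data under limits['absolute']
# duplicates what os-quota-sets already reports.
LIMITS = {"limits": {"absolute": {"maxTotalInstances": 10,
                                  "maxTotalCores": 20}}}
QUOTA_SETS = {"quota_set": {"instances": 10, "cores": 20}}

# Assumed correspondence between the two views, for illustration.
FIELD_MAP = {"maxTotalInstances": "instances", "maxTotalCores": "cores"}


def absolute_limits_duplicated(limits, quotas):
    """True when every absolute limit already appears in quota_sets."""
    absolute = limits["limits"]["absolute"]
    quota = quotas["quota_set"]
    return all(absolute[src] == quota[dst] for src, dst in FIELD_MAP.items())
```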


Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-15 Thread Alex Xu

On 2013年08月16日 03:16, Melanie Witt wrote:

On Aug 13, 2013, at 3:35 PM, Melanie Witt wrote:


On Aug 13, 2013, at 2:11 AM, Day, Phil wrote:


If we really want to get clean separation between Nova and Neutron in the v3 
API, should we consider making the Nova v3 API only accept lists of port IDs 
in the server create command?

That way there would be no need to ever pass security group information into 
Nova.

Any cross project co-ordination (for example automatically creating ports) 
could be handled in the client layer, rather than inside Nova.

Server create is always (until there's a separate layer) going to go cross 
project calling other apis like neutron and cinder while an instance is being 
provisioned. For that reason, I tend to think it's ok to give some extra 
convenience of automatically creating ports if needed, and being able to 
specify security groups.

For the associate and disassociate, the only convenience is being able to use 
the instance display name and security group name, which is already handled at 
the client layer. It seems a clearer case of duplicating what neutron offers.

Thinking about this more, it seems like the security_groups extension should 
probably be removed in the v3 api. Originally, we considered not porting it to 
v3 because it's a network-related extension whose actions can be accomplished 
through neutron directly.

Then, it seemed associate/disassociate the with instance would be needed in 
nova, and those actions alone were ported. However, looking into the code more 
I found that's simply a neutron port update (append security group to port). 
Server create is similar.

It seems like the extension isn't really needed in v3. Does anyone have any 
objection to removing it?

+1


Melanie







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-16 Thread Alex Xu

On 2013年08月16日 14:34, Christopher Yeoh wrote:


On Fri, Aug 16, 2013 at 10:28 AM, Melanie Witt wrote:


On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:

> +1 from me as long as this wouldn't change anything for the EC2
API's security groups support, which I assume it won't.

Correct, it's unrelated to the ec2 api.

We discussed briefly in the nova meeting today and there was
consensus that removing the standalone associate/disassociate
actions should happen.

Now the question is whether to keep the server create piece and
not remove the extension entirely. The concern is about a delay in
the newly provisioned instance being associated with the desired
security groups. With the extension, the instance gets the desired
security groups before the instance is active (I think). Without
the extension, the client would receive the active instance and
then call neutron to associate it with the desired security groups.

Would such a delay in associating with security groups be a problem?


I think we should keep the capability to set the security group on 
instance creation, so those who care about this sort of race condition 
can avoid it if they want to.




I am working on v3 networking. I plan to support creating a new instance 
only with a port ID, and no longer with a network ID and fixed IP. That 
means the user needs to create a port in Neutron first, then pass the port 
ID into the create-instance request. If we think this is OK, users can 
associate the desired security groups when creating the port, and we can 
remove the security group extension entirely.



+1 to removing the associate/disassociate actions though

Chris





Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-19 Thread Alex Xu

On 2013年08月17日 00:14, Vishvananda Ishaya wrote:

On Aug 15, 2013, at 5:58 PM, Melanie Witt  wrote:


On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:


+1 from me as long as this wouldn't change anything for the EC2 API's security 
groups support, which I assume it won't.

Correct, it's unrelated to the ec2 api.

We discussed briefly in the nova meeting today and there was consensus that 
removing the standalone associate/disassociate actions should happen.

Now the question is whether to keep the server create piece and not remove the 
extension entirely. The concern is about a delay in the newly provisioned 
instance being associated with the desired security groups. With the extension, 
the instance gets the desired security groups before the instance is active (I 
think). Without the extension, the client would receive the active instance and 
then call neutron to associate it with the desired security groups.

Would such a delay in associating with security groups be a problem?


It seems like getting around this would be as simple as:

a. Create the port in neutron.
b. Associate a security group with the port.
c. Boot the instance with the port.
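Those three steps can be sketched as follows; the client functions here are hypothetical stand-ins, not the real python-neutronclient or python-novaclient calls:

```python
# Hypothetical stand-ins for the Neutron/Nova client calls; the real
# python-neutronclient and python-novaclient signatures differ.
def create_port(net_id):
    return {"id": "port-1", "network_id": net_id, "security_groups": []}


def add_security_group(port, sg_id):
    port["security_groups"].append(sg_id)
    return port


def boot_server(image, flavor, port_ids):
    return {"image": image, "flavor": flavor, "ports": port_ids}


def boot_with_security_group(net_id, sg_id, image, flavor):
    """Steps a-c above: the port carries its security group before the
    instance boots, so there is no window without the group applied."""
    port = add_security_group(create_port(net_id), sg_id)
    return boot_server(image, flavor, [port["id"]])
```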

In general I'm a fan of doing all of the network creation and volume creation 
in neutron and cinder before booting the instance. Unfortunately I think this 
is pretty unfriendly to our users. One possibility is to move the smarts into 
the client side (i.e. have it talk to neutron and cinder), but I think that 
alienates all of the people using openstack who are not using python-novaclient 
or python-openstack client.
API users are developers too; it shouldn't be too difficult for them. I 
prefer moving the smarts into the client side, but I'm open to either way. 
I will leave a comment in my patch asking reviewers to vote for the way 
they favor before reviewing it.


Since we are still supporting v2 this is a possibility for the v3 api, but if 
you can't do basic operations in v3 without talking to multiple services on the 
client side I think it will prevent a lot of people from using it.

It's clear to me that autocreation needs to stick around for a while just to 
keep the APIs usable. I can see the argument for pulling it from the v3 API, 
but it seems like at least the basics need to stick around for now.

Vish




Re: [openstack-dev] [nova] Revert Baremetal v3 API extension?

2013-09-05 Thread Alex Xu

+1
On 2013年09月05日 17:51, John Garbutt wrote:

+1 I meant to raise that myself when I saw some changes there the other day.

On 4 September 2013 15:52, Thierry Carrez  wrote:

Russell Bryant wrote:

On 09/04/2013 10:26 AM, Dan Smith wrote:

Hi all,

As someone who has felt about as much pain as possible from the
dual-maintenance of the v2 and v3 API extensions, I felt compelled to
bring up one that I think we can drop. The baremetal extension was
ported to v3 API before (I think) the decision was made to make v3
experimental for Havana. There are a couple of patches up for review
right now that make obligatory changes to one or both of the versions,
which is what made me think about this.

Since Ironic is on the horizon and was originally slated to deprecate
the in-nova-tree baremetal support for Havana, and since v3 is only
experimental in Havana, I think we can drop the baremetal extension for
the v3 API for now. If Nova's baremetal support isn't ready for
deprecation by the time we're ready to promote the v3 API, we can
re-introduce it at that time. Until then, I propose we avoid carrying
it for a soon-to-be-deprecated feature.

Thoughts?

Sounds reasonable to me.  Anyone else have a differing opinion about it?

+1

--
Thierry Carrez (ttx)



Re: [openstack-dev] [nova] FFE Request: v3 setting v3 API core

2013-09-08 Thread Alex Xu

On 2013年09月08日 21:34, Christopher Yeoh wrote:

Hi,

The following 3 changesets in the queue:

https://review.openstack.org/#/c/43274/
https://review.openstack.org/#/c/43278/
https://review.openstack.org/#/c/43280/

make keypairs, scheduler hints and console output part of the V3 core
api. This essentially changes just two things:

- the v3 API server will refuse to start up if they are not loaded (this
   is the definition of a feature being part of core, given that all of
   the API is an 'extension' now).
- the resource is accessed slightly differently - /v3/keypairs rather
   than /v3/os-keypairs as an example
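The first point amounts to a startup check, roughly like the sketch below; the extension names and function are illustrative assumptions, not the actual Nova loader code:

```python
# Illustrative core set, per the proposal above.
CORE_EXTENSIONS = {"keypairs", "scheduler-hints", "console-output"}


def check_core_loaded(loaded_extensions):
    """Sketch of the 'core' rule: the v3 API server refuses to start
    unless every core extension is loaded."""
    missing = CORE_EXTENSIONS - set(loaded_extensions)
    if missing:
        raise RuntimeError("v3 API refusing to start, missing core "
                           "extensions: %s" % ", ".join(sorted(missing)))
    return True
```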

In terms of risk the change is limited to the V3 API only. And although
the V3 API will be experimental in Havana anyway and subject to some
change I would like to include this because of the resource name
changes and minimise any hassle for people who do start using the V3 API
in Havana and then want to use it for Icehouse. It has similar
downstream effects on documentation.
For the same reason, I propose this patch 
https://review.openstack.org/#/c/41349/
for an FFE; it already got one +2. It also makes some resource name changes.


The patches are ready to go (recently its just mostly been rebase
updates with some very minor fixups).

Regards,

Chris



Re: [openstack-dev] [nova] python-novaclient 2.32.0 not working with rackspace

2015-10-20 Thread Alex Xu
It looks like we use API_MAX_VERSION and DEFAULT_OS_COMPUTE_API_VERSION in
the wrong way. This isn't broken only with rackspace; it also breaks
novaclient running against the nova legacy v2 API (the legacy v2 API exposes
no version).

API_MAX_VERSION is the maximum version the client supports; currently it
should be '2.5'.

DEFAULT_OS_COMPUTE_API_VERSION is the default value of the
'--os-compute-api-version' CLI option; it should be '2.latest', which means
the CLI negotiates with the server side and chooses the most recent mutually
supported version.

I'm not sure whether we need a revert, but at least a new fix should come
after that.

Thanks
Alex
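The intended negotiation could look roughly like the sketch below; this is an assumption-laden illustration, not the actual novaclient discover_version code:

```python
CLIENT_MAX = (2, 5)  # assumed client-side maximum, per the message above


def negotiate_version(requested, server_max):
    """Hedged sketch of the negotiation described above.

    requested: '2.latest' (the proposed default) or an explicit 'X.Y'.
    server_max: 'X.Y' reported by the server, or None when the version
    list API is not exposed (the legacy v2 / rackspace case).
    """
    if server_max is None:
        if requested == '2.latest':
            return '2.0'  # only the unversioned v2 contract is safe
        raise ValueError('server exposes no versions; cannot honor %s'
                         % requested)
    server = tuple(int(part) for part in server_max.split('.'))
    if requested == '2.latest':
        # tuple comparison picks the smaller of client and server maxima
        return '%d.%d' % min(CLIENT_MAX, server)
    wanted = tuple(int(part) for part in requested.split('.'))
    if wanted <= server:
        return requested
    raise ValueError('requested %s exceeds server maximum %s'
                     % (requested, server_max))
```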

2015-10-21 7:52 GMT+08:00 melanie witt :

> Hi everyone,
>
> We have an issue [1] in python-novaclient 2.32.0 where it's not working
> with rackspace cloud.
>
> It's caused by a commit [2] that changed the default requested compute API
> version from "latest" to "client supported latest", a specific version. We
> have some logic in the discover_version method that does comparisons
> between a user-specified version and the server version. For rackspace, we
> get a "null" server version because the version list API isn't exposed. The
> discover_version falls back on compute API 2.0 when requested version is
> "latest" and server version is "null" but raises an error when requested
> version is "user-specified" and server version is "null". So more work is
> needed there to handle cases where version API isn't exposed.
>
> Should we revert [2] for now? Any other thoughts?
>
> Thanks,
> -melanie (irc: melwitt)
>
> [1] https://bugs.launchpad.net/python-novaclient/+bug/1508244
> [2] https://review.openstack.org/#/c/230024/
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] python-novaclient 2.32.0 not working with rackspace

2015-10-20 Thread Alex Xu
I just worked out a quick patch to fix this:
https://review.openstack.org/237850

2015-10-21 10:08 GMT+08:00 Alex Xu :

> Looks like we are using API_MAX_VERSION and DEFAULT_OS_COMPUTE_API_VERSION
> the wrong way. This isn't broken with Rackspace only; it also breaks
> novaclient running against the nova legacy v2 API (the legacy v2 API does
> not expose any version information).
>
> API_MAX_VERSION is the maximum version the client supports; currently it
> should be '2.5'.
>
> DEFAULT_OS_COMPUTE_API_VERSION is the default value of the
> '--os-compute-api-version' CLI option; it should be '2.latest', which means
> the CLI negotiates with the server side and chooses the most recent version
> both sides support.
>
> Not sure whether we need to revert, but at least a new fix should come
> after that.
>
> Thanks
> Alex
>
> 2015-10-21 7:52 GMT+08:00 melanie witt :
>
>> Hi everyone,
>>
>> We have an issue [1] in python-novaclient 2.32.0 where it's not working
>> with rackspace cloud.
>>
>> It's caused by a commit [2] that changed the default requested compute
>> API version from "latest" to "client supported latest", a specific version.
>> We have some logic in the discover_version method that does comparisons
>> between a user-specified version and the server version. For rackspace, we
>> get a "null" server version because the version list API isn't exposed. The
>> discover_version falls back on compute API 2.0 when requested version is
>> "latest" and server version is "null" but raises an error when requested
>> version is "user-specified" and server version is "null". So more work is
>> needed there to handle cases where version API isn't exposed.
>>
>> Should we revert [2] for now? Any other thoughts?
>>
>> Thanks,
>> -melanie (irc: melwitt)
>>
>> [1] https://bugs.launchpad.net/python-novaclient/+bug/1508244
>> [2] https://review.openstack.org/#/c/230024/
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>


[openstack-dev] [nova] Nova API sub-team meeting

2015-11-02 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.


[openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-05 Thread Alex Xu
Hi, folks

Nova API sub-team is working on the swagger generation. And there is PoC
https://review.openstack.org/233446

But before we go to the next step, I really hope we can get agreement on
how to support Microversions and Actions. The PoC includes a demo of
Microversions: it generates the minimum-version operation as a standard
Swagger spec entry and names the other version operations as extended
attributes, like:

{
'/os-keypairs': {
"get": {
'x-start-version': '2.1',
'x-end-version': '2.1',
'description': '',
   
},
"x-get-2.2-2.9": {
'x-start-version': '2.2',
'x-end-version': '2.9',
'description': '',
.
}
}
}

x-start-version and x-end-version are the Microversion metadata, which the
UI code should parse.

This is just my initial thought; another idea is to generate a full set of
Swagger specs, one per Microversion. But I think how to present
Microversions and Actions also depends on how the doc UI parses them.

There is a docs project that turns Swagger into a UI:
https://github.com/russell/fairy-slipper  But it doesn't support
Microversions yet. So I hope the doc team can work with us and help us find
a format for Microversions and Actions that works well for both UI parsing
and Swagger generation.
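For illustration, UI code could resolve the operation entry matching a requested microversion from this metadata roughly like so (the helper names and the spec dict are made up for this sketch, following the example above):

```python
# Sketch: pick the operation whose x-start-version/x-end-version range
# covers the microversion the reader selected in the doc UI.
def parse_ver(v):
    return tuple(int(p) for p in v.split('.'))


def operation_for(path_item, requested):
    """Return the operation dict whose version range covers `requested`."""
    want = parse_ver(requested)
    for op in path_item.values():
        start = parse_ver(op['x-start-version'])
        end = parse_ver(op['x-end-version'])
        if start <= want <= end:
            return op
    return None


# Shape mirrors the '/os-keypairs' example above; descriptions invented.
keypairs_get = {
    'get': {'x-start-version': '2.1', 'x-end-version': '2.1',
            'description': 'initial version'},
    'x-get-2.2-2.9': {'x-start-version': '2.2', 'x-end-version': '2.9',
                      'description': 'adds keypair type'},
}
```

Comparing versions as integer tuples (not strings) matters here, since '2.10' must sort after '2.9'.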

Any thoughts folks?

Thanks
Alex


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Alex Xu
2015-11-06 20:59 GMT+08:00 Sean Dague :

> On 11/06/2015 07:28 AM, John Garbutt wrote:
> > On 6 November 2015 at 12:09, Sean Dague  wrote:
> >> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
> >>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>  Hello all,
>  I came across [1] which is notionally an ironic bug in that
> horizon presents
>  VM operations (like suspend) to users.  Clearly these options don't
> make sense
>  to ironic which can be confusing.
> 
>  There is a horizon fix that just disables migrate/suspened and other
> functaions
>  if the operator sets a flag say ironic is present.  Clealy this is
> sub optimal
>  for a mixed hv environment.
> 
>  The data needed (hpervisor type) is currently avilable only to
> admins, a quick
>  hack to remove this policy restriction is functional.
> 
>  There are a few ways to solve this.
> 
>   1. Change the default from "rule:admin_api" to "" (for
>  os_compute_api:os-extended-server-attributes and
>  os_compute_api:os-hypervisors), and set a list of values we're
>  comfortbale exposing the user (hypervisor_type and
>  hypervisor_hostname).  So a user can get the hypervisor_name as
> part of
>  the instance deatils and get the hypervisor_type from the
>  os-hypervisors.  This would work for horizon but increases the
> API load
>  on nova and kinda implies that horizon would have to cache the
> data and
>  open-code assumptions that hypervisor_type can/can't do action $x
> 
>   2. Include the hypervisor_type with the instance data.  This would
> place the
>  burdon on nova.  It makes the looking up instance details
> slightly more
>  complex but doesn't result in additional API queries, nor caching
>  overhead in horizon.  This has the same opencoding issues as
> Option 1.
> 
>   3. Define a service user and have horizon look up the hypervisors
> details via
>  that role.  Has all the drawbacks as option 1 and I'm struggling
> to
>  think of many benefits.
> 
>   4. Create a capabilitioes API of some description, that can be
> queried so that
>  consumers (horizon) can known
> 
>   5. Some other way for users to know what kind of hypervisor they're
> on, Perhaps
>  there is an established image property that would work here?
> 
>  If we're okay with exposing the hypervisor_type to users, then #2 is
> pretty
>  quick and easy, and could be done in Mitaka.  Option 4 is probably
> the best
>  long term solution but I think is best done in 'N' as it needs lots of
>  discussion.
> >>>
> >>> I think that exposing hypervisor_type is very much the *wrong* approach
> >>> to this problem. The set of allowed actions varies based on much more
> than
> >>> just the hypervisor_type. The hypervisor version may affect it, as may
> >>> the hypervisor architecture, and even the version of Nova. If horizon
> >>> restricted its actions based on hypevisor_type alone, then it is going
> >>> to inevitably prevent the user from performing otherwise valid actions
> >>> in a number of scenarios.
> >>>
> >>> IMHO, a capabilities based approach is the only viable solution to
> >>> this kind of problem.
> >>
> >> Right, we just had a super long conversation about this in #openstack-qa
> >> yesterday with mordred, jroll, and deva around what it's going to take
> >> to get upgrade tests passing with ironic.
> >>
> >> Capabilities is the right approach, because it means we're future
> >> proofing our interface by telling users what they can do, not some
> >> arbitrary string that they need to cary around a separate library to
> >> figure those things out.
> >>
> >> It seems like capabilities need to exist on flavor, and by proxy
> instance.
> >>
> >> GET /flavors/bm.large/capabilities
> >>
> >> {
> >>  "actions": {
> >>  'pause': False,
> >>  'unpause': False,
> >>  'rebuild': True
> >>  ..
> >>   }
> >>
>

Does this require an admin to set the capabilities? If yes, that looks
like a pain for admins, having to set capabilities for every flavor. These
should be the capabilities the instance requires, and hypervisors should
report their capabilities, which are then reflected on the instance.


> >> A starting point would definitely be the set of actions that you can
> >> send to the flavor/instance. There may be features beyond that we'd like
> >> to classify as capabilities, but actions would be a very concrete and
> >> attainable starting point. With microversions we don't have to solve
> >> this all at once, start with a concrete thing and move forward.
>

+1, Microversions give us a way to improve our API! And a capabilities API
is really important.


> >>
> >> Sending an action that was "False" for the instance/flavor would return
> >> a 400 BadRequest high up at the API level, much like input validation
> >> via jsonschema.
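A tiny sketch of the gated-action behaviour described in the quoted proposal (the flavor name and capability set mirror Sean's example; the BadRequest handling is illustrative, not nova's actual code):

```python
# Sketch: reject an action up front at the API layer when the
# flavor's capabilities say it is unsupported, instead of failing
# deep in the virt driver.
class BadRequest(Exception):
    pass


# Capability data copied from the example above; how it gets populated
# (admin-set vs. hypervisor-reported) is exactly the open question.
FLAVOR_CAPS = {
    'bm.large': {'actions': {'pause': False,
                             'unpause': False,
                             'rebuild': True}},
}


def run_action(flavor, action):
    caps = FLAVOR_CAPS[flavor]['actions']
    if not caps.get(action, False):
        # 400 high up at the API level, like jsonschema input validation.
        raise BadRequest('action %r not supported by flavor %r'
                         % (action, flavor))
    return 'accepted'
```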

Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Alex Xu
2015-11-06 22:22 GMT+08:00 Anne Gentle :

>
>
> On Thu, Nov 5, 2015 at 9:31 PM, Alex Xu  wrote:
>
>> Hi, folks
>>
>> Nova API sub-team is working on the swagger generation. And there is PoC
>> https://review.openstack.org/233446
>>
>> But before we are going to next step, I really hope we can get agreement
>> with how to support Microversions and Actions. The PoC have demo about
>> Microversions. It generates min version action as swagger spec standard,
>> for the other version actions, it named as extended attribute, like:
>>
>> {
>> '/os-keypairs': {
>> "get": {
>> 'x-start-version': '2.1',
>> 'x-end-version': '2.1',
>> 'description': '',
>>
>> },
>> "x-get-2.2-2.9": {
>> 'x-start-version': '2.2',
>> 'x-end-version': '2.9',
>> 'description': '',
>> .
>> }
>> }
>> }
>>
>> x-start-version and x-end-version are the metadata for Microversions,
>> which should be used by UI code to parse.
>>
>
> The swagger.io editor will not necessarily recognize extended attributes
> (x- are extended attributes), right? I don't think we intend for these
> files to be hand-edited once they are generated, though, so I consider it a
> non-issue that the editor can't edit microversioned source.
>
>

Yes, right. The editor can just ignore the extended attributes. My point
is that if we need something beyond the standard Swagger spec to support
Microversions and Actions, we should extend it in a way the Swagger spec
supports.


>
>> This is just based on my initial thought, and there is another thought is
>> generating a set full swagger specs for each Microversion. But I think how
>> to show Microversions and Actions should be depended how the doc UI to
>> parse that also.
>>
>> As there is doc project to turn swagger to UI:
>> https://github.com/russell/fairy-slipper  But it didn't support
>> Microversions. So hope doc team can work with us and help us to find out
>> format to support Microversions and Actions which good for UI parse and
>> swagger generation.
>>
>
> Last release was a proof of concept for being able to generate Swagger.
> Next we'll bring fairy-slipper into OpenStack and work with the API working
> group and the Nova API team to enhance it.
>
> This release we can further enhance with microversions. Nothing's
> preventing that to my knowledge, other than Russell needs more input to
> make the output what we want. This email is a good start.
>

Yeah, I'd really appreciate it if Russell could give some input, since he
works on fairy-slipper.


>
> I'm pretty sure microversions are hard to document no matter what we do so
> we just need to pick a way and move forward. Here's what is in the spec:
> For microversions, we'll need at least 2 copies of the previous reference
> info (enable a dropdown for the user to choose a prior version or one that
> matches theirs) Need to keep deprecated options.  An example of version
> comparisons https://libgit2.github.com/libgit2/#HEAD
>
> Let's discuss weekly at both the Nova API meeting and the API Working
> group meeting to refine the design. I'm back next week and plan to update
> the spec.
>

Yeah, let's talk more at the next meeting, thanks!


> Anne
>
>
>
>>
>> Any thoughts folks?
>>
>> Thanks
>> Alex
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Alex Xu
2015-11-06 20:46 GMT+08:00 John Garbutt :

> On 6 November 2015 at 03:31, Alex Xu  wrote:
> > Hi, folks
> >
> > Nova API sub-team is working on the swagger generation. And there is PoC
> > https://review.openstack.org/233446
> >
> > But before we are going to next step, I really hope we can get agreement
> > with how to support Microversions and Actions. The PoC have demo about
> > Microversions. It generates min version action as swagger spec standard,
> for
> > the other version actions, it named as extended attribute, like:
> >
> > {
> > '/os-keypairs': {
> > "get": {
> > 'x-start-version': '2.1',
> > 'x-end-version': '2.1',
> > 'description': '',
> >
> > },
> > "x-get-2.2-2.9": {
> > 'x-start-version': '2.2',
> > 'x-end-version': '2.9',
> > 'description': '',
> > .
> > }
> > }
> > }
> >
> > x-start-version and x-end-version are the metadata for Microversions,
> which
> > should be used by UI code to parse.
> >
> > This is just based on my initial thought, and there is another thought is
> > generating a set full swagger specs for each Microversion. But I think
> how
> > to show Microversions and Actions should be depended how the doc UI to
> parse
> > that also.
> >
> > As there is doc project to turn swagger to UI:
> > https://github.com/russell/fairy-slipper  But it didn't support
> > Microversions. So hope doc team can work with us and help us to find out
> > format to support Microversions and Actions which good for UI parse and
> > swagger generation.
> >
> > Any thoughts folks?
>
> I can't find the URL to the example, but I though the plan was each
> microversion generates a full doc tree.
>

Yeah, we said that in the Nova API meeting, and this is an example of what
we expect the UI to look like: https://libgit2.github.com/libgit2/#HEAD

I just want to confirm with the doc team and Russell that this works for
them in the fairy-slipper implementation.


>
> It also notes the changes between the versions, so you look at the
> latest version, you can tell between which versions the API was
> modified.
>
> I remember annegentle had a great example of this style, will try ping
> here about that next week.
>


Yeah, let's talk about it in the meeting.


>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [nova] Nova API sub-team meeting

2015-11-09 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.


Re: [openstack-dev] [nova] Nova API sub-team meeting

2015-11-09 Thread Alex Xu
2015-11-09 21:49 GMT+08:00 Ed Leafe :

> On Nov 9, 2015, at 7:11 AM, Alex Xu  wrote:
> >
> > We have weekly Nova API meeting this week. The meeting is being held
> Tuesday UTC1200.
> >
> > In other timezones the meeting is at:
> >
> > EST 08:00 (Tue)
>
> Just to clarify: that's EDT 07:00, since daylight savings ended in the US
> last week.
>

Thanks! Sorry, I always forget the daylight saving change.


>
> > Japan 21:00 (Tue)
> > China 20:00 (Tue)
> > United Kingdom 13:00 (Tue)
>
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [nova] Welcome to contribute on improvement the nova API documentation

2015-11-10 Thread Alex Xu
Hi,

At the Tokyo summit, we decided that the API documentation is a high
priority and a focus for the Mitaka release. We really need better API
documentation to help users use our API easily. The API doc work is also
well suited to new contributors; it is a good chance to get familiar with
nova.

So if you are interested in helping with the API docs, you can get info
from the etherpad https://etherpad.openstack.org/p/nova-v2.1-api-doc . The
etherpad includes notes about the expectations and workflow for the tasks,
so you can learn how to contribute there. Feel free to join the Nova API
sub-team meeting https://wiki.openstack.org/wiki/Meetings/NovaAPI to work
with the team, or contact me (IRC: alex_xu) if you have any questions about
this work.

Thanks
Alex


[openstack-dev] [nova] Propose Virtual Nova API Doc Sprint on Dec 8 and 9

2015-11-11 Thread Alex Xu
Hi,

At the Nova API subteam weekly meeting, we decided to hold a two-day
virtual doc sprint to help with the Nova API documentation. The initially
proposed dates are Dec 8 and 9 (let me know if they conflict with anything
else). The sprint runs in each participant's local time; people can work on
patches and also help with reviews.

We appreciate and welcome everyone joining this sprint to help with the API
docs.

If you are interested, please sign up for the sprint first at the top of
the etherpad https://etherpad.openstack.org/p/nova-v2.1-api-doc . The
sprint tasks are also in the etherpad; some contributors are already
working on those doc tasks, so feel free to join us now or during the
sprint.

Thanks
Alex


Re: [openstack-dev] [nova][docs][api] Propose Virtual Nova API Doc Sprint on Dec 8 and 9

2015-11-11 Thread Alex Xu
Sorry, adding the [docs] and [api] tags to the title!

2015-11-11 20:51 GMT+08:00 Alex Xu :

> Hi,
>
> At the Nova API subteam weekly meeting, we decided to hold a two-day
> virtual doc sprint to help with the Nova API documentation. The initially
> proposed dates are Dec 8 and 9 (let me know if they conflict with anything
> else). The sprint runs in each participant's local time; people can work on
> patches and also help with reviews.
>
> We appreciate and welcome everyone joining this sprint to help with the API
> docs.
>
> If you are interested, please sign up for the sprint first at the top of
> the etherpad https://etherpad.openstack.org/p/nova-v2.1-api-doc . The
> sprint tasks are also in the etherpad; some contributors are already
> working on those doc tasks, so feel free to join us now or during the
> sprint.
>
> Thanks
> Alex
>


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-13 Thread Alex Xu
2015-11-13 17:36 GMT+08:00 John Garbutt :

> On 6 November 2015 at 15:32, John Garbutt  wrote:
> > Hi,
> >
> > I propose we add Alex Xu[1] to nova-core.
> >
> > Over the last few cycles he has consistently been doing great work,
> > including some quality reviews, particularly around the API.
> >
> > Please respond with comments, +1s, or objections within one week.
>
> Big thank you to everyone who helped mentor Alex over the last year or
> so, and to all those who voted.
>


Yeah, I really appreciate everyone who mentored and encouraged me! I
really learned a lot from you!

Also, thanks for all the votes! I will keep trying my best to help on Nova,
and continue to learn from you.

Thanks
Alex


>
> Alex, welcome to nova-core!
>
> Thanks,
> johnthetubaguy
>
> > Many thanks,
> > John
> >
> > [1]http://stackalytics.com/?module=nova-group&user_id=xuhj&release=all
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [nova] Nova API sub-team meeting

2015-11-16 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
Alex


[openstack-dev] [nova] Nova API sub-team meeting

2015-11-23 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


Re: [openstack-dev] [nova] How were we going to remove soft delete again?

2015-11-30 Thread Alex Xu
2015-11-30 19:42 GMT+08:00 Sean Dague :

> On 11/24/2015 11:36 AM, Matt Riedemann wrote:
> > I know in Vancouver we talked about no more soft delete and in Mitaka
> > lxsli decoupled the nova models from the SoftDeleteMixin [1].
> >
> > From what I remember, the idea is to not add the deleted column to new
> > tables, to not expose soft deleted resources in the REST API in new
> > ways, and to eventually drop the deleted column from the models.
> >
> > I bring up the REST API because I was tinkering with the idea of
> > allowing non-admins to list/show their (soft) deleted instances [2].
> > Doing that, however, would expose more of the REST API to deleted
> > resources which makes it harder to remove from the data model.
> >
> > My question is, how were we thinking we were going to remove the deleted
> > column from the data model in a backward compatible way? A new
> > microversion in the REST API isn't going to magically work if we drop
> > the column in the data model, since anything before that microversion
> > should still work - like listing deleted instances for the admin.
> >
> > Am I forgetting something? There were a lot of ideas going around the
> > room during the session in Vancouver and I'd like to sort out the
> > eventual long-term plan so we can document it in the devref about
> > policies so that when ideas like [2] come up we can point to the policy
> > and say 'no we aren't going to do that and here's why'.
> >
> > [1]
> >
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/no-more-soft-delete.html
> >
> > [2]
> >
> https://blueprints.launchpad.net/nova/+spec/non-admin-list-deleted-instances
> >
> >
>
> On the API compat side. There is no contract with the user for how long
> deleted items will be shown, or how old the deleted history is.
>
> The contract semantics are:
>
> * by default - don't show me anything that's deleted
> * with a flag - ok, you can *also* show me deleted things
>
> The one places soft deleted removal is going to be semantically
> meaningful is on GET /servers/details?changes-since=
>
> Because in that case it's used by most people as a journal, and includes
> deleted instances. It's our janky event stream for servers (given that
> we have no other real event stream). That would need something better
> before we make that query not possible.
>
> I also agree, it's not clear to me why removing soft delete off of
> things like instances is hugely critical. I think getting rid of it as
> default for new things is good. But some places it's pretty intrinsic to
> the information people pull out.
>

+1, letting users check their history by querying deleted instances sounds
useful.

As for showing deleted instances to non-admin users, it looks hard for an
operator to promise users how long a deleted instance will remain
queryable. Currently the DB archive just archives all the entries, whether
they were deleted long ago or just seconds ago. We could probably add some
filter to the DB archive call, but that is a separate matter. I also want
to first figure out whether querying deleted instances is really useful.
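A minimal sketch of the changes-since journal semantics Sean describes above (in-memory rows standing in for the instances table; not nova's actual query code):

```python
# Sketch: a default listing hides soft-deleted rows, while the
# changes-since filter acts as a journal and *includes* them -- which is
# why dropping the deleted column breaks this use case.
from datetime import datetime

# Illustrative rows; field names follow nova's instances table loosely.
instances = [
    {'id': 'a', 'updated_at': datetime(2015, 11, 1), 'deleted': False},
    {'id': 'b', 'updated_at': datetime(2015, 11, 20), 'deleted': True},
    {'id': 'c', 'updated_at': datetime(2015, 11, 25), 'deleted': False},
]


def list_default(rows):
    # Contract: by default, never show anything that's deleted.
    return [r['id'] for r in rows if not r['deleted']]


def changes_since(rows, since):
    # Contract: everything changed since the timestamp, deleted included.
    return [r['id'] for r in rows if r['updated_at'] >= since]
```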


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [nova] Nova API sub-team meeting

2015-12-01 Thread Alex Xu
Hi,

We have weekly Nova API meeting this week. The meeting is being held
Tuesday UTC1200.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


Re: [openstack-dev] [nova] jsonschema for scheduler hints

2015-12-03 Thread Alex Xu
2015-12-02 23:12 GMT+08:00 Sylvain Bauza :

>
>
> On 02/12/2015 15:23, Sean Dague wrote:
>
>> We have previously agreed that scheduler hints in Nova are an open ended
>> thing. It's expected for sites to have additional scheduler filters
>> which expose new hints. The way we handle that with our strict
>> jsonschema is that we allow additional properties -
>>
>> https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/api/openstack/compute/schemas/scheduler_hints.py#L65
>>
>> This means that if you specify some garbage hint, you don't get feedback
>> that it was garbage in your environment. That lost a couple of days
>> building multinode tests in the gate. Having gotten used to the hints
>> that "you've given us bad stuff", this was a stark change back to the
>> old world.
>>
>> Would it be possible to make it so that the schema could be explicitly
>> extended (instead of implicitly extended). So that additional
>> properties=False, but a mechanism for a scheduler filter to register
>> it's jsonschema in?
>>
>
> I'm pretty +1 for that because we want to have in-tree filters clear for
> the UX they provide when asking for scheduler hints.
>

+1 also, and we should have a capabilities API for discovering which hints
are supported by the current deployment.


>
> For the moment, it's possible to have 2 different filters asking for the
> same hint without providing a way to explain the semantics so I would want
> to make sure that one in-tree filter could just have the same behaviour for
> *all the OpenStack deployments.*
>
> That said, I remember some discussion we had about that in the past, and
> the implementation details we discussed about having the Nova API knowing
> the list of filters and fitering by those.
> To be clear, I want to make sure that we could not leak the deployment by
> providing a 401 if a filter is not deployed, but rather just make sure that
> all our in-tree filters are like checked, even if they aren't deployed.
>

No other Nova API returns 401, so if we return 401 everybody will know it
is the only 401 in nova; I don't think that makes any difference. Once we
have a capabilities API, it's fine to let the user know what is supported
in the deployment.


>
> That leaves the out-of-tree discussion about custom filters and how we
> could have a consistent behaviour given that. Should we accept something in
> a specific deployment while another deployment could 401 against it ? Mmm,
> bad to me IMHO.
>

We can have code that checks that out-of-tree filters don't expose any of
the same hints as in-tree filters.
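The explicit-extension idea Sean proposes could look roughly like this (filter classes and hint names are made up; the manual check mirrors what additionalProperties=False would do in a merged jsonschema):

```python
# Sketch: each scheduler filter registers the hints it understands, and
# the API rejects any hint no enabled filter has registered -- the same
# effect as jsonschema's additionalProperties=False, done explicitly.
class GroupAntiAffinityFilter:
    hint_schema = {'group': {'type': 'string'}}


class SameHostFilter:
    hint_schema = {'same_host': {'type': 'array',
                                 'items': {'type': 'string'}}}


def validate_hints(hints, filters):
    allowed = {}
    for f in filters:
        # Merging fragments; duplicate hint names across filters would be
        # caught by a separate consistency check.
        allowed.update(f.hint_schema)
    unknown = set(hints) - set(allowed)
    if unknown:
        # Garbage hints get immediate feedback instead of being ignored.
        raise ValueError('unknown scheduler hints: %s' % sorted(unknown))
```

This gives the "you've given us bad stuff" feedback for misspelled hints while still letting each deployment's filter set extend the schema.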


>
>
> -Sylvain
>
> -Sean
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [nova][docs][api] Propose Virtual Nova API Doc Sprint on Dec 8 and 9

2015-12-04 Thread Alex Xu
Just a reminder since the date is getting close: the virtual doc sprint is
next week. Welcome to join us!

2015-11-11 20:51 GMT+08:00 Alex Xu :

> Hi,
>
> At the Nova API subteam weekly meeting, we decided to hold a two-day
> virtual doc sprint to help with the Nova API documentation. The initially
> proposed dates are Dec 8 and 9 (let me know if they conflict with anything
> else). The sprint runs in each participant's local time; people can work on
> patches and also help with reviews.
>
> We appreciate and welcome everyone joining this sprint to help with the API
> docs.
>
> If you are interested, please sign up for the sprint first at the top of
> the etherpad https://etherpad.openstack.org/p/nova-v2.1-api-doc . The
> sprint tasks are also in the etherpad; some contributors are already
> working on those doc tasks, so feel free to join us now or during the
> sprint.
>
> Thanks
> Alex
>


Re: [openstack-dev] [nova] jsonschema for scheduler hints

2015-12-06 Thread Alex Xu
2015-12-04 16:48 GMT+08:00 Sylvain Bauza :

>
>
> Le 04/12/2015 04:21, Alex Xu a écrit :
>
>
>
> 2015-12-02 23:12 GMT+08:00 Sylvain Bauza :
>
>>
>>
>> Le 02/12/2015 15:23, Sean Dague a écrit :
>>
>>> We have previously agreed that scheduler hints in Nova are an open ended
>>> thing. It's expected for sites to have additional scheduler filters
>>> which expose new hints. The way we handle that with our strict
>>> jsonschema is that we allow additional properties -
>>>
>>> https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/api/openstack/compute/schemas/scheduler_hints.py#L65
>>>
>>> This means that if you specify some garbage hint, you don't get feedback
>>> that it was garbage in your environment. That lost a couple of days
>>> building multinode tests in the gate. Having gotten used to the hints
>>> that "you've given us bad stuff", this was a stark change back to the
>>> old world.
>>>
>>> Would it be possible to make it so that the schema could be explicitly
>>> extended (instead of implicitly extended). So that additional
>>> properties=False, but a mechanism for a scheduler filter to register
>>> it's jsonschema in?
>>>
>>
>> I'm pretty +1 for that because we want to have in-tree filters clear for
>> the UX they provide when asking for scheduler hints.
>>
>
> +1 also, and we should have a capability API for discovering what hints are
> supported by the current deployment.
>
>
>>
>> For the moment, it's possible to have 2 different filters asking for the
>> same hint without providing a way to explain the semantics so I would want
>> to make sure that one in-tree filter could just have the same behaviour for
>> *all the OpenStack deployments.*
>>
>> That said, I remember some discussion we had about that in the past, and
>> the implementation details we discussed about having the Nova API knowing
>> the list of filters and fitering by those.
>> To be clear, I want to make sure that we could not leak the deployment by
>> providing a 401 if a filter is not deployed, but rather just make sure that
>> all our in-tree filters are like checked, even if they aren't deployed.
>>
>
> There isn't any other Nova API that returns 401. So if you return 401, then
> everybody will know that it is the only 401 in Nova, so I think there
> isn't any difference. As we have a capability API, it's fine to let the user
> know what is supported in the deployment.
>
>
>
> Sorry, I made a mistake by providing a wrong HTTP code for when the
> validation returns a ValidationError (due to the JSON schema not matched by
> the request).
> Here, my point is that if we enforce a per-enabled-filter basis for
> checking whether the hint should be enforced, it would mean that as a
> hacker, I could have some way to know what filters are enabled, or as a
> user, I could have different behaviours depending on the deployment.
>
> Let me give you an example: say that I'm not enabling the SameHostFilter
> which exposes the 'same_host' hint.
>
> For that specific cloud, if we deny a request that provides the
> 'same_host' hint (because the filter is not loaded by the
> 'scheduler_default_filters' option), it would make a difference with
> another cloud which enables SameHostFilter (because the request would pass).
>
> So, I'm maybe nitpicking, but I want to make clear that we shouldn't
> introspect the state of the filter, and just consider a static JSON schema
> (as we have today) which would reference all the hints, whether the
> corresponding filter is enabled or not.
>

Yes, I see your concern; that is why I think we should have a capabilities
API. Users should query the capabilities API to find out which filters are
enabled in the current cloud.
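A minimal sketch of the explicit-extension idea discussed in this thread (invented helper names, stdlib-only, not the actual Nova implementation): each enabled filter registers the hint keys it understands, the API merges them into one strict set so unknown hints are rejected up front, and the same merged set is what a capabilities API could report:

```python
# Sketch: build a strict scheduler-hints key set from fragments that
# enabled filters register, so a garbage hint fails validation instead
# of being silently ignored (the additionalProperties=False behaviour).

# Per-filter hint registrations (illustrative; real filters would
# declare these themselves).
FILTER_HINT_KEYS = {
    "SameHostFilter": {"same_host"},
    "DifferentHostFilter": {"different_host"},
    "JsonFilter": {"query"},
}


def allowed_hints(enabled_filters):
    """Union of the hint keys contributed by each enabled filter.

    This is also what a capabilities API could return to users.
    """
    keys = set()
    for name in enabled_filters:
        keys |= FILTER_HINT_KEYS.get(name, set())
    return keys


def validate_hints(hints, enabled_filters):
    unknown = set(hints) - allowed_hints(enabled_filters)
    if unknown:
        raise ValueError("unknown scheduler hints: %s" % sorted(unknown))


validate_hints({"same_host": ["uuid-1"]}, ["SameHostFilter"])  # accepted
try:
    validate_hints({"my_typoed_hint": "x"}, ["SameHostFilter"])
except ValueError as exc:
    print(exc)
```

Whether `allowed_hints` should reflect only the enabled filters or all in-tree filters is exactly the interoperability question debated above; the sketch shows the per-enabled-filter variant, which is why pairing it with a capabilities API matters.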


>
>
>
>
>
>> That leaves the out-of-tree discussion about custom filters and how we
>> could have a consistent behaviour given that. Should we accept something in
>> a specific deployment while another deployment could 401 against it ? Mmm,
>> bad to me IMHO.
>>
>
> We can have code to check that out-of-tree filters don't expose the same
> hints as in-tree filters.
>
>
>
> Sure, and thank you for that, that was missing in the past. That said,
> there are still some interoperability concerns, let me explain: as a cloud
> operator, I'm now providing a custom filter (say MyAwesomeFilter) which
> does the lookup for a hint called 'my_awesome_hint'.
>
