Jesse,
I agree that some implementations may want to have a single endpoint. I think
this is doable with a simple proxy that can pass requests back to each
service's API. This can also be accomplished by having configuration variables
in your bindings to talk to something that looks like the fol
I'm also confused, because the fact that nova (compute/block/network) is in one
repository doesn't mean it isn't three different services.
We've talked about moving the services inside nova away from reaching into each
other via RPC calls and toward making HTTP calls instead. But they are mostly
already designed in a w
This is great stuff. It sounds like there is a real distinction to be made
between the data central to the APIs and the user-defined properties. Also, as
time and compatibility allow, we should probably rename what we were calling
metadata to properties or some such.
"Jay Pipes" said:
Thanks, Devin, for the reiteration. I'm for EC2 API support; I just think that
OS owning our own API specs is key if we are to innovate and drive open,
standard per-service interfaces.
Erik
From: Devin Carlen <devin.car...@gmail.com>
Date: Mon, 28 Feb 2011 19:59:38 -0800
To: Erik Carlin m
On Mon, Feb 28, 2011 at 6:07 PM, Erik Carlin wrote:
> That all sounds good. My only question is around images. Is glance ready
> to be an independent service (and thus have a separate API) in Cactus?
Well, since this happened in Bexar...
-jay
On Mon, Feb 28, 2011 at 8:15 PM, Ewan Mellor wrote:
> If the “known_bugs” list isn’t being well received, how about this:
>
> # TODO(ewanm): Enable once bug #21212 is fixed
>
> if False:
>     assert(something)
>
> And then put a comment on bug #21212 saying “please also enable the
> follo
Erik,
Thanks for the clarification. I'd just like to reiterate that official support
for the EC2 API is something that needs to be handled in parallel, since we've
committed to supporting it in the past.
Best,
Devin
On Feb 28, 2011, at 7:53 PM, Erik Carlin wrote:
> Devin -
>
> In a dec
Devin -
In a decomposed service model, OS APIs are per service, so the routing is
straightforward. For services that need to consume other services (e.g. The
compute service needs an IP from the network service), the queueing and worker
model remains the same, it's just that the network worker
On Mon, Feb 28, 2011 at 10:45 PM, Jay Pipes wrote:
> for those of a like-minded curiosity about these things. From the
> wikipedia article on this same subject:
>
> "The term Metadata is an ambiguous term which is used for two
> fundamentally different concepts (Types). Although a trite expression
On Mon, Feb 28, 2011 at 6:49 PM, Brian Lamar wrote:
> Just because I can't help but asking, when does data specified during
> instance creation stop being data and start being metadata? While it seems
> like a silly question I'm wrestling with the idea of metadata actually
> *doing* something.
Your diagram is deceptively simple because it makes no distinction as to how
the block API would be handled in the EC2 API, where compute and block
operations are very closely coupled. In order for the diagram to convey the
requirements properly, it needs to show how compute/network/volume API requ
If the "known_bugs" list isn't being well received, how about this:
# TODO(ewanm): Enable once bug #21212 is fixed
if False:
    assert(something)
And then put a comment on bug #21212 saying "please also enable the following
unit tests when you fix this bug".
Ewan.
From: Justin Santa B
Thanks Thierry for summarizing the concerns.
I have a new version in https://github.com/rackspace/python-novatools
This does the following:
1. Renames the cmdline tool to nova. The package is still python-novatools
2. Ups the version # to 2.1
3. Changes the license to Apache for 2.1+. Prior versi
Hi all,
I cannot find information about how to set up an environment with Public IPs
and VlanManager enabled. I have tested FlatManager but it does not fully
cover our needs in terms of network isolation.
Basically what I'm trying to do is:
a) Set up a Controller Node with nova-api, nova-schedule
>
> Interesting, I guess I just don't see the point of introducing additional
> complexities for gain I don't yet see.
We can defer discussion until the patch lands, when you can see the gains
(or not!) :-)
> My example about 'image type' was meant to act as a deterrent against using
> metadata
Interesting, I guess I just don't see the point of introducing additional
complexities for gain I don't yet see. My example about 'image type' was meant
to act as a deterrent against using metadata for "OpenStack meaningful" values.
Instances, in my opinion, should be created explicitly with pr
It's an open question whether 'meaningful tags' are treated as metadata with
a system-reserved prefix (e.g. "openstack:"), or whether they end up in a
separate area of the API. The "aws:" prefix is already reserved by AWS in
their API, so we'll probably need to reserve it in ours as well or face
f
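If the reserved-prefix route were taken, a minimal sketch of the kind of check
involved might look like the following; the prefix list, helper name, and
exception type are illustrative assumptions, not existing nova code:

# Hypothetical validation helper; prefixes and error type are illustrative.
RESERVED_METADATA_PREFIXES = ('openstack:', 'aws:')


def validate_user_metadata(metadata):
    """Reject user-supplied keys that collide with a reserved prefix."""
    for key in metadata:
        if key.lower().startswith(RESERVED_METADATA_PREFIXES):
            raise ValueError("metadata key %r uses a reserved prefix" % key)


validate_user_metadata({'color': 'blue'})          # accepted
# validate_user_metadata({'openstack:near': 'x'})  # would raise ValueError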
That depends on what "near" means? This will no doubt have significant
network implications and I can envision at least two levels of near (for
Rackspace):
1. Same public subnet, whatever that translates to. This is what
Rackspace needs now and specifically for compute.
2. Same private network a
I'm really talking about tests that degrade to work around bugs; I agree
that we shouldn't blindly skip whole tests (although when a test is skipped
using @unittest.skip, I think it gives a nice indication that it was skipped
vs passed?)
For example, consider the OpenStack API authentication issue
Just because I can't help but asking, when does data specified during instance
creation stop being data and start being metadata? While it seems like a silly
question I'm wrestling with the idea of metadata actually *doing* something.
I was under the (perhaps false) impression that metadata coul
Yup, Sandy's zone stuff, Justin's metadata stuff, and this is all
pretty much the same (or at least very closely related). First off,
let's move away from the term zone and define "location" as an arbitrary
grouping of one or more resources, not the traditional "availability
zones". Thinking in term
Yes - the use case I'm working towards is to use metadata to specify
"openstack:near=volume-01" when creating a machine, and I will provide a
scheduler that will take that information and will assign you a machine e.g.
in the same rack as the volume storage. It's unclear right now whether this
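A rough sketch of how a scheduler might consume that metadata, assuming the
"openstack:near" key described above; the function, the 'rack' host attribute,
and the locate_resource callback are all hypothetical:

# Hypothetical scheduler helper; host attributes and callback are illustrative.
def filter_hosts_near(candidate_hosts, instance_metadata, locate_resource):
    """Keep only hosts in the same rack as the 'openstack:near' target.

    locate_resource is an assumed callback mapping a resource name
    (e.g. 'volume-01') to its rack identifier.
    """
    target = instance_metadata.get('openstack:near')
    if not target:
        return candidate_hosts
    target_rack = locate_resource(target)
    return [host for host in candidate_hosts if host.get('rack') == target_rack]


hosts = [{'name': 'host1', 'rack': 'r1'}, {'name': 'host2', 'rack': 'r2'}]
near = filter_hosts_near(hosts, {'openstack:near': 'volume-01'},
                         lambda name: 'r1')
# near == [{'name': 'host1', 'rack': 'r1'}]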
That all sounds good. My only question is around images. Is glance ready to
be an independent service (and thus have a separate API) in Cactus?
Erik
From: John Purrier <j...@openstack.org>
Date: Mon, 28 Feb 2011 16:53:53 -0600
To: Erik Carlin <erik.car...@rackspace.com>,
mailto:
We definitely will need to be able to create volumes at the very least without
using EC2. Justinsb has some prototype code available for this.
Vish
On Feb 28, 2011, at 2:53 PM, John Purrier wrote:
> Hi Erik, today we have compute, block/volume, and network all encompassed in
> nova. Along with
This seems to overlap heavily with Justin's metadata stuff. The idea was that
you could pass in metadata on instance launch saying near: other-object. I
think that is far more useful than an opaque affinity id.
Vish
On Feb 28, 2011, at 2:53 PM, Gabe Westmaas wrote:
> Hi Eric,
>
> I probably
Hi Erik, today we have compute, block/volume, and network all encompassed in
nova. Along with image and object storage, these make up the whole of OpenStack
today. The goal is to see where we are at wrt the OpenStack API
(compute/network/volume/image) and coverage of the underlying implementation
as we
Hi Eric,
I probably chose a poor word there; this is actually referring to something
smaller than the multicluster zones that Sandy has been working on. For
example, in case you wanted two servers with as few network hops as possible
for performance reasons. If that still lines up with w
Hi Gabe,
There has been a lot of discussion about this, along with zone naming,
structure, and so forth. I was proposing we not only make it part of
Nova, but suggest all projects use the same locality zone names/tags
to ensure cross-project locality.
So, "yes", and don't make it nova-specific. :)
The Skip plugin for nose offers similar functionality which can be used in
Python 2.6:
http://somethingaboutorange.com/mrl/projects/nose/0.11.1/plugins/skip.html
Using this you can write decorators that raise SkipTest if a certain criterion
isn't met.
Fr
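A minimal sketch of the kind of decorator described above, assuming nose's
SkipTest exception from that plugin; the helper name and reason string are
illustrative:

import functools

from nose.plugins.skip import SkipTest


def skip_if(condition, reason):
    """Illustrative decorator: raise SkipTest when condition is true."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if condition:
                raise SkipTest(reason)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@skip_if(True, "blocked on an open bug")
def test_something():
    assert True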
Hey All,
For various reasons, Rackspace has a need to allow customers to request
placement in the same zone as another server. I am trying to figure out if
this is generically useful, or something that should be outside of core. The
idea is that if you don't specify an affinity ID one will ge
I think it is commendable to identify bugs even if you can't fix them at the
time. I hope that we don't create incentives to ignore bugs you find during
development just to get your own merge through.
But I'm worried about staleness and usefulness with known bugs. If the known
bugs test cases a
Python 2.7 has @unittest.skip and @unittest.skipUnless decorators. Is this
what you want? You could write the failing unit test, and then mark it as
skipped until the bug is fixed. My only concern would be the Python 2.7
dependency - we're still using 2.6 ourselves, so I'd ask that you wrote
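For reference, a short sketch of the 2.7 decorators in use; the test names and
reason strings are made up for illustration:

import sys
import unittest


class KnownBugTests(unittest.TestCase):

    @unittest.skip("disabled until the underlying bug is fixed")
    def test_blocked_by_bug(self):
        self.assertTrue(False)  # would fail today; re-enable with the fix

    @unittest.skipUnless(sys.version_info >= (2, 7), "requires Python 2.7")
    def test_needs_py27(self):
        self.assertEqual(1 + 1, 2)


if __name__ == '__main__':
    unittest.main()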
John -
Are we just talking about compute aspects? IMO, we should NOT be exposing
block functionality in the OS compute API. In Diablo, we will break out block
into a separate service with its own OS block API. That means, for now, there
may be functionality in nova that isn't exposed (an art
Unittest2 lets you define a test case that is expected to fail:
http://docs.python.org/library/unittest.html#unittest.expectedFailure
new in 2.7, but it could be possible to backport - or do something similar...
May have issues with nose:
http://code.google.com/p/python-nose/issues/detail?id=325
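A small sketch of expectedFailure in use, falling back to the unittest2
backport on 2.6; the class and test body are illustrative:

try:
    import unittest2 as unittest  # backport for Python 2.6
except ImportError:
    import unittest  # Python 2.7+


class BugRegressionTests(unittest.TestCase):

    @unittest.expectedFailure
    def test_known_broken(self):
        # Fails today; once the bug is fixed this becomes an
        # "unexpected success", a reminder to drop the decorator.
        self.assertEqual(1, 2)


if __name__ == '__main__':
    unittest.main()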
I'm not a big fan of 'known bugs' in unit tests. Unit tests should always pass.
How practical is it that I'm going to invest the time to write a unit test for
a bug which I'm then not able to fix in the same merge? In many cases writing
the test cases is actually harder than writing the code t
Has anyone done a gap analysis of the proposed OpenStack Compute API against
a) the implemented code, and b) the EC2 API?
It looks like we have had a breakdown in process, as the community review
process of the proposed spec has not generated discussion of the missing
aspects of the propose
@brian: the problem with a JSON field is that searching would be really
expensive if we ever need to pull MAC addresses from the db to ensure
uniqueness.
@Ilya: If I make a table, I plan on putting MAC address, instance id,
network id, and if zones are about ready, some sort of zone information in
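A hypothetical sketch of such a table as a SQLAlchemy model; the table name,
columns, and types are assumptions for illustration, not the actual nova
schema:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MacAddress(Base):
    """Hypothetical mapping: one row per allocated MAC address."""
    __tablename__ = 'mac_addresses'

    id = Column(Integer, primary_key=True)
    address = Column(String(17), unique=True, nullable=False, index=True)
    instance_id = Column(Integer, nullable=False)
    network_id = Column(Integer, nullable=False)
    zone = Column(String(255), nullable=True)

The unique, indexed address column is what would make the uniqueness check
cheap compared to scanning a JSON blob.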
Jay and I have been having an interesting discussion about how to deal with
bugs that mean that unit tests _should_ fail. So, if I find a bug, I should
write a failing unit test first, and fix it (in one merge). However, if I
can't fix it, I can't get a failing unit test merged into the trunk (be
Hi Raphael,
On Mon, Feb 28, 2011 at 10:01:55AM +, Raphael Cohn wrote:
>AMQP Observations
>Your comments about AMQP seem to mostly be appropriate for one of the
>older versions, eg 0-8, and I don't think they particularly apply to later
>versions, eg 1-0. AMQP 0-8 did have some
2011/2/28 Thierry Carrez :
>> - qcow2 support was enabled utilizing libguestfs instead of missing NBD
> Though almost everyone knows I don't like the injection business, using
> libguestfs instead of NBD sounds like a patch that could be welcome in
> trunk, given that NBD can be a bit difficult (se
Ilya Alekseyev wrote:
> Thierry, we could propose the libguestfs patch to trunk, but have concerns
> with it. First, there is no libguestfs package for Ubuntu, and the libguestfs
> people are still looking for an Ubuntu maintainer
> (http://libguestfs.org/FAQ.html#binaries). We could create a PPA if it
> will be enoug
Thierry, we could propose the libguestfs patch to trunk, but have concerns with
it. First, there is no libguestfs package for Ubuntu, and the libguestfs people
are still looking for an Ubuntu maintainer (http://libguestfs.org/FAQ.html#binaries).
We could create a PPA if it will be enough for now, but for next openstac
Eric,
Thank you.
You raise lots of interesting points. In no particular order:-
AMQP Observations
Your comments about AMQP seem to mostly be appropriate for one of the older
versions, eg 0-8, and I don't think they particularly apply to later
versions, eg 1-0. AMQP 0-8 did have some issues that did
Jay Pipes wrote:
> * Set the merge proposal status to Work In Progress
+1
Since we should expect more back-and-forth between statuses as a result of
this policy, I'll fix http://wiki.openstack.org/releasestatus/ so that it no
longer reports as an oddity a blueprint in "Needs code review" status whic