On Tue, 2011-12-06 at 14:12 -0800, Duncan McGreggor wrote:
> On 06 Dec 2011 - 13:52, Duncan McGreggor wrote:
> > On 06 Dec 2011 - 21:14, Thierry Carrez wrote:
> > > Tim Bell wrote:
> > > > I'm not clear on who will be maintaining the stable/diablo branch.
> > > > The people such as EPEL for RedHat
On Tue, 2011-12-06 at 10:11 -0800, Duncan McGreggor wrote:
> On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
> > So the general consensus so far on this discussion seems to be:
> >
> > (0) The "2011.3 release" PPA bears false expectations and should be
> > removed now. In the future, we should not pr
On Tue, 2011-12-06 at 19:54 -0800, Vishvananda Ishaya wrote:
> Hello Everyone,
>
> The Nova subteams have now been active for a month and a half. Some
> things are going very well, and others could use a little improvement.
> To keep things moving forward, I'd like to make the following changes:
Vishvananda Ishaya wrote:
> 2) *Closing down the team mailing lists.* Some of the lists have been a
> bit active, but I think the approach that Soren has been using of
> sending messages to the regular list with a team [header] is a better
> approach. Examples:
> [db] Should we use zookeeper?
> [s
Hi all,
I am trying to install OpenStack with XenServer. I followed what this
page says, but with no luck.
(http://wiki.openstack.org/XenServerDevelopment) Is there anyone who has
successfully installed this?
Are there any guides available beyond the page above?
What's more, I tried this way:
https
Duncan McGreggor wrote:
>> Creating a packaging team that acknowledges their contribution to the
>> upstream project will show that the packagers' contributions are an
>> integral part of OpenStack development; it would motivate new
>> packagers to contribute their changes upstream instead of kee
> On 06 Dec 2011 - 13:52, Duncan McGreggor wrote:
> Yikes! I forgot an incredibly important one:
> * What is the migration path story (diablo to essex, essex to f, etc.)?
I think it was going to be the Upgrades Team?
For orchestration (and now the scheduler improvements) we need to know when an
operation fails ... and specifically, which resource was involved. In the
majority of the cases it's an instance_uuid we're looking for, but it could be
a security group id or a reservation id.
With most of the compu
Can you talk a little more about how you want to apply this failure
notification? That is, what is the case where you are going to use the
information that an operation failed? In my head I have an idea of getting code
simplicity dividends from an "everything succeeds" approach to some of our
o
Sure, the problem I'm immediately facing is reclaiming resources from the
Capacity table when something fails. (We claim them immediately in the
scheduler when the host is selected, to lessen the latency.)
The other situation is that Orchestration needs it for retries, rescheduling,
rollbacks and cro
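To make the claim-then-reclaim idea concrete, here is a minimal sketch of claiming capacity at host-selection time; the table layout, field names and host weighting are illustrative assumptions, not nova's actual scheduler code.

# Minimal sketch of claiming capacity at host-selection time. The table
# layout, field names and host weighting are illustrative assumptions only.

capacity_table = {}   # instance_uuid -> {"host": ..., "units": ...}

def schedule(instance_uuid, requested_units, hosts):
    """Pick a host and claim capacity for it immediately."""
    # Stand-in for real weighting: take the host with the most free units.
    host = max(hosts, key=lambda h: h["free_units"])
    if host["free_units"] < requested_units:
        raise RuntimeError("no host can fit %d units" % requested_units)
    # Claim right away so concurrent requests see the reduced capacity,
    # instead of waiting for the compute node to report back.
    host["free_units"] -= requested_units
    capacity_table[instance_uuid] = {"host": host["name"],
                                     "units": requested_units}
    return host["name"]

hosts = [{"name": "compute1", "free_units": 8},
         {"name": "compute2", "free_units": 3}]
print(schedule("f47ac10b-58cc", 4, hosts))   # claims 4 units on compute1

The catch is exactly what the rest of the thread is about: if the build later fails on the compute node, something has to notice and hand those units back.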
Hey all,
A quick reminder that the QA team has our weekly meeting on
#openstack-meeting in about 30 minutes.
12:00 EST
09:00 PST
17:00 UTC
See you there,
-jay
Gotcha.
So the way this might work is, for example, when a run_instance fails on a
compute node, it would publish a "run_instance for uuid= failed" event.
There would be a subscriber associated with the scheduler listening for such
events--when it receives one it would go check the capacity table
Exactly! ... or it could be handled in the notifier itself.
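A rough sketch of that publish/subscribe flow, using an in-memory list in place of the real AMQP notifier; the event names, payload fields and capacity table here are hypothetical, not nova's notification API.

# Rough sketch of the failure-notification flow discussed above. An in-memory
# list stands in for the notification topic; event names and payload fields
# are hypothetical.

failure_events = []   # stand-in for an AMQP notification topic

def notify_failure(operation, resource_type, resource_id):
    """Publish an error event naming the failed operation and the resource."""
    failure_events.append({
        "event_type": "compute.%s.error" % operation,
        "payload": {"resource_type": resource_type,
                    "resource_id": resource_id},
    })

def reclaim_capacity(capacity_table, event):
    """Scheduler-side subscriber: release whatever was claimed for the resource."""
    resource_id = event["payload"]["resource_id"]
    claim = capacity_table.pop(resource_id, None)
    if claim is not None:
        print("released %(units)s units on %(host)s" % claim)

# run_instance fails on the compute node ...
capacity_table = {"f47ac10b-58cc": {"host": "compute1", "units": 4}}
notify_failure("run_instance", "instance_uuid", "f47ac10b-58cc")
# ... and the subscriber (or the notifier itself) reclaims the capacity.
reclaim_capacity(capacity_table, failure_events.pop(0))

Whether the reclaiming lives in a dedicated subscriber or in the notifier itself is then just a question of where this handler gets wired in.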
Hi Sandy,
I'm wondering if it is possible to change the scheduler's rpc cast to
rpc call. This way the exceptions should be magically propagated back
to the scheduler, right? Naturally the scheduler can find another node
to retry or decide to give up and report failure. If we need to
provision man
True ... this idea has come up before (and is still being kicked around). My
biggest concern is: what happens if that scheduler dies? We need a mechanism
that can live outside of a single scheduler service.
The more of these long-running processes we leave in a service, the greater the
impact wh
*removing our Asynchronous nature.
(heh, such a key point to typo on)
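For contrast, a toy illustration of the cast-versus-call trade-off being weighed here; the stubs below stand in for the RPC layer and are not nova's real rpc module.

# Toy illustration of cast versus call. The stubs stand in for the RPC
# layer; they are not nova's real rpc module.

def compute_run_instance(instance_uuid):
    # Pretend the build fails on the compute node.
    raise RuntimeError("run_instance failed for %s" % instance_uuid)

def rpc_cast(method, **kwargs):
    """Fire-and-forget: the caller returns immediately and never sees the error."""
    try:
        method(**kwargs)
    except Exception:
        pass   # lost unless the remote side publishes a failure notification

def rpc_call(method, **kwargs):
    """Blocking call: the remote exception propagates back to the caller."""
    return method(**kwargs)

rpc_cast(compute_run_instance, instance_uuid="f47ac10b-58cc")    # failure is silent
try:
    rpc_call(compute_run_instance, instance_uuid="f47ac10b-58cc")
except RuntimeError as exc:
    print("scheduler could retry another host: %s" % exc)        # failure comes back

The call variant gets error propagation for free, but it keeps the scheduler blocked for the lifetime of the operation, which is exactly the single-point-of-failure and asynchrony concern raised above.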
Hi Jeff,
Can you be more specific about what doesn't work? There are lots of people
using OpenStack with XenServer, including Citrix and Rackspace, so I can
guarantee that it works! The docs are lacking though, that's for certain.
Where did you get stuck?
Thanks,
Ewan.
On Tue, 2011-12-06 at 23:56 +0100, Loic Dachary wrote:
> I think there is an opportunity to leverage the momentum that is
> growing in each distribution by creating an openstack team for them to
> meet. Maybe Stefano Maffulli has an idea about how to go in this
> direction. The IRC channel was a gr
On Wed, Dec 7, 2011 at 7:26 AM, Sandy Walsh wrote:
> For orchestration (and now the scheduler improvements) we need to know when
> an operation fails ... and specifically, which resource was involved. In the
> majority of the cases it's an instance_uuid we're looking for, but it could
> be a se
On 07 Dec 2011 - 08:22, Mark McLoughlin wrote:
> On Tue, 2011-12-06 at 10:11 -0800, Duncan McGreggor wrote:
> > On 06 Dec 2011 - 14:28, Thierry Carrez wrote:
> > > So the general consensus so far on this discussion seems to be:
> > >
> > > (0) The "2011.3 release" PPA bears false expectations and s
On 12/07/2011 10:32 PM, Stefano Maffulli wrote:
> On Tue, 2011-12-06 at 23:56 +0100, Loic Dachary wrote:
>> I think there is an opportunity to leverage the momentum that is
>> growing in each distribution by creating an openstack team for them to
>> meet. Maybe Stefano Maffulli has an idea about ho
On 07 Dec 2011 - 13:54, Thierry Carrez wrote:
> Duncan McGreggor wrote:
> >> Creating a packaging team that acknowledges their contribution to the
> >> upstream project will show that the packagers' contributions are an
> >> integral part of OpenStack development; it would motivate new
> >> packa
Hi folks
I want to make the Delete Server spec clear.
The API doc says,
"When a server is deleted, all images created from that server are also removed"
http://docs.openstack.org/api/openstack-compute/1.1/content/Delete_Server-d1e2883.html
IMO, "all images" means the VM images which are stored on the compute node i
I would interpret that to include the snapshots - but I'm not sure
that is what I'd expect as a user.
On Wed, Dec 7, 2011 at 5:05 PM, Nachi Ueno
wrote:
> Hi folks
>
> I want to make the Delete Server spec clear.
>
> The API doc says,
> "When a server is deleted, all images created from that server are
Hi Jesse
Thanks.
Hmm, there is no implementation of cleaning up snapshot images.
IMO, snapshot images should not be deleted, in case of an API request mistake.
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L540
2011/12/7 Jesse Andrews :
> I would interpret that to include the snaps
I would agree with that. If I delete a server instance, I don't want
to destroy snapshot images I took of that server...
-jay
On Wed, Dec 7, 2011 at 8:50 PM, Nachi Ueno
wrote:
> Hi Jesse
>
> Thanks.
> Hmm, there is no implementation of cleaning up snapshot images.
> IMO, snapshot images should not be d
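To make the two readings concrete, here is a small sketch in which plain dicts stand in for the compute node's instances and the Glance image store; none of this is nova's or glance's real code, and purge_snapshots is a hypothetical flag.

# Small sketch of the two readings of "Delete Server" discussed above. Plain
# dicts stand in for the instance list and the Glance image store; the
# purge_snapshots flag is hypothetical.

instances = {"f47ac10b-58cc": {"name": "web01"}}
images = {
    "img-1": {"name": "web01-snap",
              "properties": {"instance_uuid": "f47ac10b-58cc"}},
    "img-2": {"name": "ubuntu-11.10", "properties": {}},
}

def delete_server(instance_uuid, purge_snapshots=False):
    # Always reclaim the instance itself (local VM files, capacity, etc.).
    instances.pop(instance_uuid, None)
    if purge_snapshots:
        # Literal reading of the API doc: also remove images created from it.
        doomed = [image_id for image_id, image in images.items()
                  if image["properties"].get("instance_uuid") == instance_uuid]
        for image_id in doomed:
            del images[image_id]
    # Otherwise snapshots survive a mistaken DELETE, which is the behaviour
    # this thread argues users actually expect.

delete_server("f47ac10b-58cc")          # default: snapshots are kept
assert "img-1" in images and "f47ac10b-58cc" not in instances

Keeping the default on the conservative side matches the expectation voiced above: deleting a server should not silently destroy the snapshots taken from it.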
Excellent ideas. I especially like the standardized list of headers.
Just to be sure, Mondays at 2100 UTC? If so, no conflicts on my end.
On Wed, Dec 7, 2011 at 4:51 AM, Thierry Carrez wrote:
> Vishvananda Ishaya wrote:
>
> > 2) *Closing down the team mailing lists.* Some of the lists have been a