On 04/10/13 04:42, Chris Behrens wrote:
>
>
> I felt that the multi-DC or multi-continent scenario where you want your
> nova-api endpoints to see ALL instances (as opposed to multi-region with
> keystone) was a good use case for cells.
This is the NeCTAR use case:
http://www.openstack.or
On Oct 3, 2013, at 11:36 AM, Tim Bell wrote:
>
> Chris,
>
> Great to see further improvements are in the pipeline. The cinder support for
> cells in Havana is a very welcome development.
Oh yes, I forgot that landed during Havana. It had been done during the
Grizzly timeframe but didn't […]
On Oct 3, 2013, at 12:49 PM, Mike Wilson wrote:
[…]
> Now that I've said all this, cells does handle these three problems very
> nicely by partitioning them all off and coordinating the api. However, there
> are some missing features that I think are not trivial to implement. I'm also
> not a […]

[…] single cell would handle it.
>>
>> I’d be happy to hear experiences from others in this area.
>>
>> Belmiro will be giving a summit talk on the deep dive including our
>> experiences for those who are able to make it.
>>
> *From:* Joshua Harlow [mailto:harlo...@yahoo-inc.com]
> *Sent:* 03 October 2013 20:32
> *To:* Subbu Allamaraju; Tim Bell
> *Cc:* openstack@lists.openstack.org
>
> *Subject:* Re: [Openstack] Cells use cases
>
>
> Hi Tim,
>
>
> I'd also like to know what happens above 1000 hypervisors […]
Got it. By RPC I was referring to RabbitMQ in particular. That's also the
rationale that Rackspace presented at the Portland summit.
Subbu
On Oct 3, 2013, at 11:42 AM, Chris Behrens wrote:
>
> On Oct 3, 2013, at 10:23 AM, Subbu Allamaraju wrote:
>
>> Hi Tim,
>>
>> Can you comment on scalability more? Are you referring to just the RPC layer
>> in the control plane?
openstack@lists.openstack.org
Subject: Re: [Openstack] Cells use cases
On Oct 3, 2013, at 8:53 AM, Tim Bell <mailto:tim.b...@cern.ch> wrote:
At CERN, we're running cells for scalability. When you go over 1000 hypervisors
or so, the general recommendation is to be in a cells configuration.
On Oct 3, 2013, at 10:23 AM, Subbu Allamaraju wrote:
> Hi Tim,
>
> Can you comment on scalability more? Are you referring to just the RPC layer
> in the control plane?
Not just RPC, but RPC is a big one. Cells gives the ability to split up and
distribute work. If you divide hypervisors into […]

[…] experiences for those who are able to make it.
Tim
From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
Sent: 03 October 2013 20:32
To: Subbu Allamaraju; Tim Bell
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Cells use cases
Hi Tim,
I'd also like to know what happens above 1000 hypervisors […]
To: "openstack@lists.openstack.org" <mailto:openstack@lists.openstack.org>
Subject: Re: [Openstack] Cells use cases
Hi Tim,
Can you comment on scalability more? Are you referring to just the RPC layer in
the control plane?
Subbu
On Oct 3, 2013, at 8:53 AM, Tim Bell wrote:
>
> At CERN, we’re running cells for scalability. When you go over 1000
> hypervisors or so, the general recommendation is to be in a cells
> configuration.
>
> Cells are quite complex and the full functionality is not there yet so some
> parts will need to wait for Havana.
Hi Tim,
Can you comment on scalability more? Are you referring to just the RPC layer in
the control plane?
Subbu
> On Oct 3, 2013, at 8:53 AM, Tim Bell wrote:
>
>
> At CERN, we’re running cells for scalability. When you go over 1000
> hypervisors or so, the general recommendation is to be in a cells
> configuration.
At CERN, we're running cells for scalability. When you go over 1000 hypervisors
or so, the general recommendation is to be in a cells configuration.
Cells are quite complex and the full functionality is not there yet so some
parts will need to wait for Havana.
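[For readers following along: the split Tim describes is configured per cell in
nova.conf, with an API (top-level) cell routing requests down to compute
(child) cells, each with its own database and message queue. A minimal sketch,
assuming a Grizzly/Havana-era nova-cells deployment; the cell names and values
here are illustrative, not from the thread:]

```ini
# nova.conf on the API (top-level) cell
[DEFAULT]
# Route compute API calls through the cells layer
compute_api_class = nova.compute.cells_api.ComputeCellsAPI

[cells]
enable = True
name = api
cell_type = api

# nova.conf on a child (compute) cell
[cells]
enable = True
name = cell01
cell_type = compute
```

[Each cell is then registered with its neighbor, e.g. via `nova-manage cell
create`, supplying the other cell's RabbitMQ connection details, which is how
each cell keeps its own message queue while the API cell still sees all
instances.]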
Tim
From: Dmitry Ukov [mailto:du