On Fri, 2014-03-28 at 07:35 +0800, zhangleiqi...@gmail.com wrote:
> Hi, stackers:
>
> I am confused after reading the original BP ([1]) and the operations
> guide for nova-cells. My questions are as follows:
>
> One aim of nova-cells is to allow additional scaling and (geographic)
> distribution without complicated database or message queue clustering,
I'm not entirely sure I agree with the geographic distribution part, but
yes, the idea of Nova cells is to partition a large (>220-250 compute
nodes, not VMs) deployment of Nova into sub-deployments (cells) that have
their own database and message bus.

> and OpenStack Compute cells are designed to allow running the cloud
> in a distributed fashion without having to use more complicated
> technologies, or being invasive to existing nova installations.

Hmm, well, "distributed" is kind of a misnomer here. All of OpenStack is
distributed. The main idea of cells is to scale beyond the theoretical
limitations of a "typical" database and message queue cluster, and yes,
Nova cells enables you to grow a Nova deployment without buying expensive
scale-up hardware to manage increased demand on underlying database and
message queue resources.

> The typical use case for cells is a cloud with two independent sites
> with a single API endpoint.

IMO, that is actually not the typical use case for cells. The typical use
case (is there one?) is to grow a *single* site's number of compute nodes
beyond the (estimated) scaling capacity of a typical database and message
queue cluster. "Typical" and "estimated" are just that: one deployer's
notion of what a standard database and message queue setup is, and that
deployer's experience attempting to push that database and MQ setup to
its limits.

> However, taking Cinder into account, would this require Cinder to act
> as a shared service for the two sites?

I don't think there is anything in the Cinder or Nova code that prevents
a Cinder service from being shared between two Nova cells -- or even Nova
availability zones (which is, IMO, what you are describing here as
"sites"). This all would depend on what storage backend you are using in
Cinder, the amount of volume space that backend provides and can scale
to, as well as the contract you are giving your tenants.

This last point is important. If you allow a tenant to detach a volume
from a VM in one "site" and attach it to a VM in another site, then you
will either need to have the Cinder database and volume backend shared
between the sites, or you will need some replication technology that
enables that. Alternatively, you might decide that you are *not* allowing
a tenant to do that, but instead will allow the tenant to *snapshot* the
volume and launch a VM from that snapshot in a different site. This would
require you to either share the Glance registry database between those
sites or replicate that registry information from one site to another.

> For two independent sites with existing cloud installations, in order
> to *upgrade* to cells later, must we make sure the storage networks of
> these sites are connected to each other as early as when the two sites
> are built?

I don't know the answer to this question, unfortunately. I don't *think*
this is necessary to do from the beginning, but sure, it would be
helpful.

> In order to provide a shared service for two sites, must we use storage
> with geographic distribution? Among the current Cinder backend drivers,
> it seems there are only glusterfs and ceph?

Or provide some replication technology external to Nova, yes.

> Furthermore, I think Cinder needs to provide a "site-affinity" filter
> for this scenario, doesn't it?

Yes, you would enable that filter for Cinder, but I am not entirely sure
whether Cinder understands the concept of Nova's cells, or whether the
Nova cells code "fudges" the "site" to be the availability zone plus some
cell identifier...
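To make that last point a little more concrete, here is a rough sketch of
what such a site-affinity filter could look like. To be clear, this is
not existing Cinder code: it assumes the filter-scheduler interface
Cinder shares with Nova (a host_passes() method receiving a host state
and the filter properties), a hypothetical 'site' capability reported by
each backend, and a hypothetical 'site' scheduler hint supplied by the
caller. The import path is a guess as well.

# Illustrative sketch only -- not part of Cinder.
from cinder.openstack.common.scheduler import filters  # path is a guess


class SiteAffinityFilter(filters.BaseHostFilter):
    """Pass only backends whose reported 'site' matches the request."""

    def host_passes(self, host_state, filter_properties):
        # Hypothetical: each backend would have to report which "site"
        # (availability zone plus cell identifier, or whatever you pick)
        # it lives in through its capabilities.
        backend_site = (host_state.capabilities or {}).get('site')

        # Hypothetical: the caller (or the Nova cells code) passes the
        # desired site as a scheduler hint on the volume-create request.
        hints = filter_properties.get('scheduler_hints') or {}
        requested_site = hints.get('site')

        if requested_site is None:
            # No site requested, so don't filter anything out.
            return True
        return backend_site == requested_site

You would then add the filter to scheduler_default_filters in cinder.conf
alongside the stock filters, and decide where the 'site' hint actually
comes from -- which is exactly the open question above about whether
Cinder understands Nova's cells at all.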
Best,
-jay