Hi all,

We are facing a Nova operational issue with assigning a different Ceph RBD pool to each compute node within one availability zone. For instance:
(1) compute-node-1 in az1 is set with images_rbd_pool=pool1
(2) compute-node-2 in az1 is set with images_rbd_pool=pool2
This setup works fine in normal operation.
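For clarity, the per-node configuration looks roughly like this (a minimal sketch showing only the RBD-related [libvirt] options in nova.conf; the ceph.conf path is a placeholder):

    # /etc/nova/nova.conf on compute-node-1 (az1)
    [libvirt]
    images_type = rbd
    images_rbd_pool = pool1
    images_rbd_ceph_conf = /etc/ceph/ceph.conf

    # /etc/nova/nova.conf on compute-node-2 (az1)
    [libvirt]
    images_type = rbd
    images_rbd_pool = pool2
    images_rbd_ceph_conf = /etc/ceph/ceph.conf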
The problem shows up when resizing an instance. We resize instance-1, which originally runs on compute-node-1; Nova then goes through its scheduling procedure, and suppose the scheduler picks compute-node-2 as the destination. Nova then fails with the following error: http://paste.openstack.org/show/585540/. The exception is raised because, on compute-node-2, Nova cannot find the instance's disk in pool1. Is there a way for Nova to handle this? Cinder handles something similar: a volume's host attribute encodes the pool, e.g. host_name@backend_name#pool_name.

The reason we use this setup is that we want to avoid the impact of Ceph rebalancing when expanding storage capacity.

One solution I found is the AggregateInstanceExtraSpecsFilter, which works together with host aggregate metadata and flavor metadata. We would create host aggregates like:
- az1-pool1 with host compute-node-1 and metadata {ceph_pool: pool1}
- az1-pool2 with host compute-node-2 and metadata {ceph_pool: pool2}
and flavors like:
- flavor1-pool1 and flavor2-pool1 with metadata {ceph_pool: pool1}
- flavor1-pool2 and flavor2-pool2 with metadata {ceph_pool: pool2}
(a rough CLI sketch of this is included below).

But this introduces a new issue at instance creation time: which flavor should be used during nova boot? The business/application layer would apparently need its own flavor-selection logic on top of this.

So, finally, my question: is there a best practice for using multiple Ceph RBD pools within one availability zone?
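For reference, here is a rough sketch of how that workaround could be wired up with the openstack CLI. The flavor sizes are just placeholders, the scheduler option name depends on the release, and scoping the flavor key with the aggregate_instance_extra_specs: prefix avoids clashes with ComputeCapabilitiesFilter. No availability_zone metadata is set on these aggregates, so az1 membership stays as it is.

    # nova.conf on the scheduler host: append the filter to the existing list
    # (the option is scheduler_default_filters in older releases,
    #  [filter_scheduler]/enabled_filters in newer ones)
    scheduler_default_filters = ...,AggregateInstanceExtraSpecsFilter

    # host aggregates, one per pool
    openstack aggregate create az1-pool1
    openstack aggregate set --property ceph_pool=pool1 az1-pool1
    openstack aggregate add host az1-pool1 compute-node-1

    openstack aggregate create az1-pool2
    openstack aggregate set --property ceph_pool=pool2 az1-pool2
    openstack aggregate add host az1-pool2 compute-node-2

    # flavors pinned to a pool via the scoped extra spec
    openstack flavor create --vcpus 2 --ram 4096 --disk 40 flavor1-pool1
    openstack flavor set --property aggregate_instance_extra_specs:ceph_pool=pool1 flavor1-pool1

    openstack flavor create --vcpus 2 --ram 4096 --disk 40 flavor1-pool2
    openstack flavor set --property aggregate_instance_extra_specs:ceph_pool=pool2 flavor1-pool2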
Best regards,
LIU Yulong