Hi List,

We're running Mitaka with Ceph. Recently I enabled RBD snapshots by adding
write permissions to the images pool in Ceph. This works perfectly for some
instances but is falling back to standard snapshots for others with the
following error:

Performing standard snapshot because direct snapshot failed: Cannot
determine the parent storage pool for 7a7b5119-85da-429b-89b5-ad345cfb649e;
cannot determine where to store images

Looking at the code here:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py
it appears that it looks for the pool of the base image to determine where
to save the snapshot. I believe the problem I'm encountering is that for
some of our instances the base image no longer exists.
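To illustrate what I mean: as I read it, nova derives the pool from the base image's RBD location URL (rbd://<fsid>/<pool>/<image>/<snap>), so if the image was deleted there is nothing to parse and the direct snapshot is abandoned. A rough sketch of that lookup (the helper name and exact parsing are my own, not nova's actual code):

```python
from urllib.parse import urlparse

def parent_pool_from_location(location):
    """Return the pool name from a Glance RBD location URL of the form
    rbd://<fsid>/<pool>/<image>/<snap>.

    Hypothetical helper for illustration only; nova's real logic lives in
    nova/virt/libvirt/imagebackend.py.
    """
    if not location:
        # Base image deleted (no location to inspect): the pool cannot be
        # determined, which is when nova falls back to a standard snapshot.
        return None
    parsed = urlparse(location)
    # netloc is the cluster fsid; path is /<pool>/<image>/<snap>
    parts = parsed.path.lstrip('/').split('/')
    if parsed.scheme != 'rbd' or len(parts) != 3:
        return None
    return parts[0]
```

With a surviving base image this would resolve to the images pool; with the image gone it resolves to nothing, matching the error above.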

Am I understanding this correctly, and is there any way to explicitly set
the pool to be used for snapshots and bypass this logic?

Thank You,

John Petrini
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
