I think Josh may be the right man for this question ☺

To be more precise, let me add a few words about the current status:

1. We have configured “show_image_direct_url = True” in Glance, and the 
cinder-volume log confirms that we do get a direct_url, for example:
image_id: 6565d775-553b-41b6-9d5e-ddb825677706
image_location: rbd://6565d775-553b-41b6-9d5e-ddb825677706
2. The _is_cloneable function tries to “_parse_location” the direct_url 
(rbd://6565d775-553b-41b6-9d5e-ddb825677706) into 4 parts: 
fsid, pool, volume, snapshot. Since the direct_url passed from Glance carries 
only the image id and no fsid, pool, or snapshot info, the parse fails and 
_is_cloneable returns False, which ultimately sends the request down to 
RBDDriver::copy_image_to_volume.

3. In cinder/volume/driver.py, RBDDriver::copy_image_to_volume, we have seen 
this note:
 # TODO(jdurgin): replace with librbd  this is a temporary hack, since 
rewriting this driver to use librbd would take too long
In this function, the Cinder RBD driver downloads the whole image from Glance 
into a temp file on the local filesystem, then uses rbd import to import the 
temp file into an RBD volume.

This is definitely not what we want (zero copy and CoW), so we are digging 
into the _is_cloneable function.

It seems the straightforward way to solve 2) is to write a patch for Glance 
that adds more info to the direct_url, but I am not sure whether it is 
possible for Ceph to clone an RBD from pool A to pool B?
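As far as I know, rbd clone does work across pools, which is what would make the four-part URL approach feasible; a rough command-line sketch (the pool names "images" and "volumes" and the snapshot/volume names are just examples):

```shell
# create and protect a snapshot of the Glance image in the 'images' pool
# (note: cloning requires format 2 images)
rbd snap create images/6565d775-553b-41b6-9d5e-ddb825677706@snap
rbd snap protect images/6565d775-553b-41b6-9d5e-ddb825677706@snap

# clone it into the 'volumes' pool -- copy-on-write, no data copied up front
rbd clone images/6565d775-553b-41b6-9d5e-ddb825677706@snap volumes/volume-test
```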



From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Li, Chen
Sent: March 20, 2013 12:57
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] create volume from an image

I'm using Ceph RBD for both Cinder and Glance. Cinder and Glance are installed 
on two separate machines.
I have read in many places that when Cinder and Glance both use Ceph RBD, no 
real data transfer should happen, thanks to copy-on-write.
But the truth is that when I run the command:
cinder create --image-id 6565d775-553b-41b6-9d5e-ddb825677706 --display-name 
test 3
I can still see network data traffic between Cinder and Glance.
And when I check the Cinder code, image_location is None 
(cinder/volume/manager.py), which makes Cinder fail when running cloned = 
self.driver.clone_image(volume_ref, image_location).
Is this an OpenStack (Cinder or Glance) bug?
Or have I missed some configuration?

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
