>>Has anything changed from Hammer to Jewel that might be affecting 
>>qemu-img convert performance?

Maybe the object map / exclusive lock features? (I think it could be a little 
slower when objects are first created.)

You could test it: create the target rbd volume, disable exclusive-lock and 
object-map, and try qemu-img convert.
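Something along these lines, as an untested sketch (pool name, image name and 
size are placeholders; fast-diff is disabled too since it depends on object-map):

$ rbd create volumes/test-image --size 20480
$ rbd feature disable volumes/test-image fast-diff object-map exclusive-lock
$ time qemu-img convert -p -O raw -f raw /path/to/source.img rbd:volumes/test-image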



----- Original Message -----
From: "Mahesh Jambhulkar" <mahesh.jambhul...@trilio.io>
To: "aderumier" <aderum...@odiso.com>
Cc: "dillaman" <dilla...@redhat.com>, "ceph-users" <ceph-users@lists.ceph.com>
Sent: Friday, 21 July 2017 14:38:20
Subject: Re: [ceph-users] qemu-img convert vs rbd import performance

Thanks Alexandre! 
We were using Ceph Hammer before and never had these performance issues 
with qemu-img convert. 

Has anything changed from Hammer to Jewel that might be affecting 
qemu-img convert performance? 

On Fri, Jul 21, 2017 at 2:24 PM, Alexandre DERUMIER <aderum...@odiso.com> wrote: 


It's already in qemu 2.9 

http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d


" 
This patches introduces 2 new cmdline parameters. The -m parameter to specify 
the number of coroutines running in parallel (defaults to 8). And the -W 
parameter to 
allow qemu-img to write to the target out of order rather than sequential. This 
improves 
performance as the writes do not have to wait for each other to complete. 
" 

----- Original Message ----- 
From: "aderumier" <aderum...@odiso.com> 
To: "dillaman" <dilla...@redhat.com> 
Cc: "Mahesh Jambhulkar" <mahesh.jambhul...@trilio.io>, "ceph-users" <ceph-users@lists.ceph.com> 
Sent: Friday, 21 July 2017 10:51:21 
Subject: Re: [ceph-users] qemu-img convert vs rbd import performance 

Hi, 

There is an RFC here: 

"[RFC] qemu-img: make convert async" 
https://patchwork.kernel.org/patch/9552415/


Maybe it could help. 


----- Original Message ----- 
From: "Jason Dillaman" <jdill...@redhat.com> 
To: "Mahesh Jambhulkar" <mahesh.jambhul...@trilio.io> 
Cc: "ceph-users" <ceph-users@lists.ceph.com> 
Sent: Thursday, 20 July 2017 15:20:32 
Subject: Re: [ceph-users] qemu-img convert vs rbd import performance 

Running a similar 20G import test within a single OSD VM-based cluster, I see 
the following: 
$ time qemu-img convert -p -O raw -f raw ~/image rbd:rbd/image 
(100.00/100%) 

real 3m20.722s 
user 0m18.859s 
sys 0m20.628s 

$ time rbd import ~/image 
Importing image: 100% complete...done. 

real 2m11.907s 
user 0m12.236s 
sys 0m20.971s 

Examining the IO patterns from qemu-img, I can see that it is effectively using 
synchronous IO (i.e. only a single write is in-flight at a time), whereas "rbd 
import" will send up to 10 (by default) IO requests concurrently. Therefore, 
the higher the latencies to your cluster, the worse qemu-img will perform as 
compared to "rbd import". 
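That default of 10 appears to be governed by the rbd_concurrent_management_ops 
option, so a rough sketch of a higher-concurrency import for comparison 
(assuming the usual rbd CLI config-override syntax, destination image name 
illustrative) would be:

$ rbd import --rbd-concurrent-management-ops 20 ~/image rbd/image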



On Thu, Jul 20, 2017 at 5:07 AM, Mahesh Jambhulkar <mahesh.jambhul...@trilio.io> wrote: 



Adding rbd readahead disable after bytes = 0 did not help. 
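For reference, this is the form of setting that was tested (presumably in the 
[client] section of ceph.conf on the node running qemu-img):

[client]
rbd readahead disable after bytes = 0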

[root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9 
(100.00/100%) 

real 4858m13.822s 
user 73m39.656s 
sys 32m11.891s 
It took about 81 hours to complete. 

Also, it's not feasible to test with the huge 465GB file every time, so I 
tested qemu-img convert with a 20GB file instead. 

Parameters                                        Time taken 
-t writeback                                      38 mins 
-t none                                           38 mins 
-S 4k                                             38 mins 
With client options mentioned by Irek Fasikhov    40 mins 

The time taken is almost the same. 
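Each row corresponds to a run of roughly this form, varying one option at a 
time (source path and destination image name are placeholders):

$ time qemu-img convert -p -t none -O raw -f raw /path/to/20G-file rbd:volumes/test-image
$ time qemu-img convert -p -S 4k -O raw -f raw /path/to/20G-file rbd:volumes/test-image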

On Thu, Jul 13, 2017 at 6:40 PM, Jason Dillaman <jdill...@redhat.com> wrote: 


On Thu, Jul 13, 2017 at 8:57 AM, Irek Fasikhov <malm...@gmail.com> wrote: 
> rbd readahead disable after bytes = 0 


There isn't any reading from an RBD image in this example; besides, 
readahead disables itself automatically after the first 50 MB of IO 
(i.e. after the OS should have had enough time to start its own 
readahead logic). 

-- 
Jason 






-- 
Regards, 
mahesh j 






-- 
Jason 








-- 
Regards, 
mahesh j 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
