Running a similar 20G import test within a single OSD VM-based cluster, I
see the following:

$ time qemu-img convert -p -O raw -f raw ~/image rbd:rbd/image
    (100.00/100%)

real 3m20.722s
user 0m18.859s
sys 0m20.628s

$ time rbd import ~/image
Importing image: 100% complete...done.

real 2m11.907s
user 0m12.236s
sys 0m20.971s

Examining the IO patterns from qemu-img, I can see that it is effectively
using synchronous IO (i.e. only a single write is in-flight at a time),
whereas "rbd import" will send up to 10 (by default) IO requests
concurrently. Therefore, the higher the latencies to your cluster, the
worse qemu-img will perform as compared to "rbd import".
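
If you want more parallelism, there are a couple of knobs worth trying.
The exact options depend on your QEMU and Ceph versions, so treat the
commands below as a sketch and check "qemu-img convert --help" and
"rbd help import" before relying on them:

# newer QEMU releases accept -m (number of parallel coroutines) and -W
# (allow out-of-order writes to the destination)
$ qemu-img convert -p -m 16 -W -O raw -f raw ~/image rbd:rbd/image

# rbd import's default of 10 concurrent requests comes from the
# rbd_concurrent_management_ops option, which can be raised per command
$ rbd import --rbd-concurrent-management-ops 20 ~/image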



On Thu, Jul 20, 2017 at 5:07 AM, Mahesh Jambhulkar <mahesh.jambhul...@trilio.io> wrote:

> Adding *rbd readahead disable after bytes = 0* did not help.
>
> [root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9
>     (100.00/100%)
>
> real    4858m13.822s
> user    73m39.656s
> sys     32m11.891s
>
> It took roughly 81 hours to complete.
>
> Also, it's not feasible to test this with a huge 465GB file every time, so I
> tested *qemu-img convert* with a 20GB file.
>
> Parameters                                       Time taken
> -t writeback                                     38 mins
> -t none                                          38 mins
> -S 4k                                            38 mins
> With client options mentioned by Irek Fasikhov   40 mins
>
> The time taken is almost the same in each case.
>
> On Thu, Jul 13, 2017 at 6:40 PM, Jason Dillaman <jdill...@redhat.com> wrote:
>
>> On Thu, Jul 13, 2017 at 8:57 AM, Irek Fasikhov <malm...@gmail.com> wrote:
>> >      rbd readahead disable after bytes = 0
>>
>>
>> There isn't any reading from an RBD image in this example -- plus
>> readahead disables itself automatically after the first 50MB of IO
>> (i.e. after the OS should have had enough time to start its own
>> readahead logic).
>>
>> --
>> Jason
>>
>
>
>
> --
> Regards,
> mahesh j
>



-- 
Jason