>>We also need to support >1 librbd/librados-internal IO
>>thread for outbound/inbound paths.

That would be wonderful!
Multiple IOThreads per disk are coming for QEMU too. (I have seen Paolo Bonzini
sending a lot of patches this month.)
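As background for readers, this is roughly what pinning a disk to a dedicated IOThread looks like today with a single IOThread per virtio-blk device (the `-object iothread` and the virtio-blk `iothread` property have existed since QEMU 2.1; the multiple-IOThreads-per-disk work mentioned above was still in progress at the time of this thread, so its final syntax is not shown here):

```shell
# Sketch: attach an RBD-backed virtio-blk disk to its own IOThread,
# so its I/O is handled outside the main QEMU event loop.
qemu-system-x86_64 \
  -object iothread,id=iothread0 \
  -drive file=rbd:rbd/vm-disk,if=none,id=drive0,format=raw \
  -device virtio-blk-pci,drive=drive0,iothread=iothread0
```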



----- Original Message -----
From: "Jason Dillaman" <[email protected]>
To: "aderumier" <[email protected]>
Cc: "Phil Lacroute" <[email protected]>, "ceph-users"
<[email protected]>
Sent: Friday, February 17, 2017 15:16:39
Subject: Re: [ceph-users] KVM/QEMU rbd read latency

On Fri, Feb 17, 2017 at 2:14 AM, Alexandre DERUMIER <[email protected]> 
wrote: 
> and I have good hope than this new feature 
> "RBD: Add support readv,writev for rbd" 
> http://marc.info/?l=ceph-devel&m=148726026914033&w=2 

Definitely will eliminate 1 unnecessary data copy -- but sadly it 
still will make a single copy within librbd immediately since librados 
*might* touch the IO memory after it has ACKed the op. Once that issue 
is addressed, librbd can eliminate that copy if the librbd cache is 
disabled. We also need to support >1 librbd/librados-internal IO 
thread for outbound/inbound paths. 

-- 
Jason 

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
