Thanks Sam.
So, you want me to go with optracker/shadedopWq, right?

Regards
Somnath

-----Original Message-----
From: Samuel Just [mailto:sam.j...@inktank.com] 
Sent: Wednesday, September 10, 2014 2:36 PM
To: Somnath Roy
Cc: Sage Weil (sw...@redhat.com); ceph-de...@vger.kernel.org; 
ceph-users@lists.ceph.com
Subject: Re: OpTracker optimization

Responded with cosmetic nonsense.  Once you've got that and the other comments 
addressed, I can put it in wip-sam-testing.
-Sam

On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy <somnath....@sandisk.com> wrote:
> Thanks Sam, I responded back :-)
>
> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org 
> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
> Sent: Wednesday, September 10, 2014 11:17 AM
> To: Somnath Roy
> Cc: Sage Weil (sw...@redhat.com); ceph-de...@vger.kernel.org; 
> ceph-users@lists.ceph.com
> Subject: Re: OpTracker optimization
>
> Added a comment about the approach.
> -Sam
>
> On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy <somnath....@sandisk.com> wrote:
>> Hi Sam/Sage,
>>
>> As we discussed earlier, enabling the present OpTracker code
>> degrades performance severely. For example, in my setup a single OSD
>> node with 10 clients reaches ~103K read iops (with io served from
>> memory) while optracking is disabled, but enabling optracker reduces
>> that to ~39K iops. Running the OSD without OpTracker enabled is
>> probably not an option for many Ceph users.
>>
>> Now, by sharding OpTracker::ops_in_flight_lock (and thus the xlist
>> ops_in_flight) and removing some other bottlenecks, I am able to match
>> the performance of an OpTracking-enabled OSD with OpTracking disabled,
>> at the expense of ~1 extra CPU core.
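The lock-sharding idea described above can be sketched roughly as below. This is an illustrative standalone example only, not the actual pull-request code: the class name, shard count, std::list (in place of Ceph's intrusive xlist), and hashing by op sequence number are all assumptions. The point is simply that splitting one global ops_in_flight_lock into N per-shard locks lets concurrent register/unregister calls proceed without contending on a single mutex.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <list>
#include <mutex>

struct Op { uint64_t seq; };  // hypothetical minimal op record

class ShardedOpTracker {
  static constexpr std::size_t kShards = 32;  // shard count is an assumption

  // Each shard pairs its own mutex with its own in-flight list, so two ops
  // that hash to different shards never contend on the same lock.
  struct Shard {
    std::mutex lock;
    std::list<Op> ops_in_flight;
  };
  std::array<Shard, kShards> shards_;

  Shard& shard_for(uint64_t seq) { return shards_[seq % kShards]; }

 public:
  void register_op(const Op& op) {
    Shard& s = shard_for(op.seq);
    std::lock_guard<std::mutex> g(s.lock);  // only this shard is locked
    s.ops_in_flight.push_back(op);
  }

  void unregister_op(const Op& op) {
    Shard& s = shard_for(op.seq);
    std::lock_guard<std::mutex> g(s.lock);
    s.ops_in_flight.remove_if([&](const Op& o) { return o.seq == op.seq; });
  }

  // Dump/stats paths must visit every shard; that is the trade-off for
  // cheap per-op registration on the fast path.
  std::size_t in_flight() {
    std::size_t n = 0;
    for (auto& s : shards_) {
      std::lock_guard<std::mutex> g(s.lock);
      n += s.ops_in_flight.size();
    }
    return n;
  }
};
```

The extra CPU cost mentioned above plausibly comes from the bookkeeping still done per op plus the shard-walk needed whenever the full in-flight set is examined.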
>>
>> In this process I have also fixed the following tracker issue:
>>
>>
>>
>> http://tracker.ceph.com/issues/9384
>>
>>
>>
>> and probably http://tracker.ceph.com/issues/8885 too.
>>
>>
>>
>> I have created the following pull request for these changes. Please review it.
>>
>>
>>
>> https://github.com/ceph/ceph/pull/2440
>>
>>
>>
>> Thanks & Regards
>>
>> Somnath
>>
>>
>>
>>
>> ________________________________
>>
>> PLEASE NOTE: The information contained in this electronic mail 
>> message is intended only for the use of the designated recipient(s) 
>> named above. If the reader of this message is not the intended 
>> recipient, you are hereby notified that you have received this 
>> message in error and that any review, dissemination, distribution, or 
>> copying of this message is strictly prohibited. If you have received 
>> this communication in error, please notify the sender by telephone or 
>> e-mail (as shown above) immediately and destroy any and all copies of 
>> this message in your possession (whether hard copies or electronically 
>> stored copies).
>>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majord...@vger.kernel.org More majordomo 
> info at  http://vger.kernel.org/majordomo-info.html
>
>
