Yep, as I said below, I am considering adding automatic scale up/down for
worker threads together with connection load balancing. That should free
users from guessing how many threads they need. :-(

Actually a thread count as a config value is a pain throughout the ceph
OSD I/O stack...
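
To make the idea concrete, the balancing half could look roughly like the
sketch below (my own naming, not actual AsyncMessenger code; Worker and
pick_worker are hypothetical):

  #include <atomic>
  #include <cstddef>
  #include <memory>
  #include <vector>

  // Hypothetical worker: one event-loop thread serving many connections.
  struct Worker {
    std::atomic<int> num_connections{0};
    // ... epoll fd, event loop thread, etc.
  };

  class WorkerPool {
    std::vector<std::unique_ptr<Worker>> workers;
   public:
    explicit WorkerPool(std::size_t n) {
      for (std::size_t i = 0; i < n; ++i)
        workers.push_back(std::make_unique<Worker>());
    }
    // Hand a new connection to the least-loaded worker, so users don't
    // have to hand-tune the thread count for their connection load.
    Worker* pick_worker() {
      Worker* best = workers.front().get();
      for (auto& w : workers)
        if (w->num_connections.load() < best->num_connections.load())
          best = w.get();
      best->num_connections++;
      return best;
    }
  };

Auto scaling would then grow or shrink the pool based on those per-worker
counters instead of a fixed config value.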

On Tue, Oct 13, 2015 at 2:45 PM, Somnath Roy <somnath....@sandisk.com> wrote:
> Thanks Haomai..
> Since the async messenger always uses a constant number of threads, could
> there be a potential performance problem when scaling up the client
> connections while keeping the number of OSDs constant?
> Maybe it's a good tradeoff..
>
> Regards
> Somnath
>
>
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Monday, October 12, 2015 11:35 PM
> To: Somnath Roy
> Cc: Mark Nelson; ceph-devel; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger vs 
> AsyncMessenger results
>
> On Tue, Oct 13, 2015 at 12:18 PM, Somnath Roy <somnath....@sandisk.com> wrote:
>> Mark,
>>
>> Thanks for this data. This probably means the simple messenger (not the
>> OSD core) is not doing an optimal job of handling memory.
>>
>>
>>
>> Haomai,
>>
>> I am not that familiar with the async messenger code base; do you have
>> an explanation for the behavior Mark reported (like the good performance
>> with default tcmalloc)? Is it using a lot fewer threads overall than
>> simple?
>
> Originally the async messenger mainly aimed to solve the high thread
> count problem that limited ceph cluster size: the heavy context switching
> and CPU usage caused by the simple messenger in a large cluster.
>
> Recently we had the memory problem discussed on the ML, and I also spent
> some time thinking about the root cause. Currently I consider the simple
> messenger's memory usage pattern to be deviating from the design of
> tcmalloc. Tcmalloc aims to serve allocations from a per-thread local
> cache, but it also balances memory among all threads; if we have too many
> threads, it may keep tcmalloc busy with memory lock contention.
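>
> The pattern is easy to model with a toy program (my sketch, not ceph
> code): buffers allocated by one thread and freed by another miss the
> local cache and funnel through tcmalloc's lock-protected central free
> list, so contention grows with the thread count:
>
>   // Toy model of the simple messenger's allocation pattern: a "reader"
>   // thread allocates each buffer and a "dispatcher" thread frees it.
>   // Build with -ltcmalloc and raise kPairs to watch contention grow.
>   #include <atomic>
>   #include <cstdlib>
>   #include <thread>
>   #include <vector>
>
>   int main() {
>     const int kPairs = 256;  // simple messenger: ~2 threads per connection
>     std::vector<std::atomic<void*>> slots(kPairs);
>     std::vector<std::thread> ts;
>     for (int i = 0; i < kPairs; ++i) {
>       std::atomic<void*>* slot = &slots[i];
>       ts.emplace_back([slot] {                  // "reader": allocates
>         for (int j = 0; j < 100000; ++j) {
>           void* p = std::malloc(4096);
>           void* expected = nullptr;             // wait for an empty slot
>           while (!slot->compare_exchange_weak(expected, p))
>             expected = nullptr;
>         }
>       });
>       ts.emplace_back([slot] {                  // "dispatcher": frees
>         for (int j = 0; j < 100000; ++j) {
>           void* p;
>           while (!(p = slot->exchange(nullptr))) {}  // wait for a buffer
>           std::free(p);
>         }
>       });
>     }
>     for (auto& t : ts) t.join();
>     return 0;
>   }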
>
> The async messenger uses a thread pool to serve connections; it makes all
> the calls that block in the simple messenger asynchronous.
>
>>
>> Also, it seems the async messenger has some inefficiencies in the I/O
>> path, and that's why it is not performing as well as simple when the
>> memory allocation stuff is optimally handled.
>
> Yep, the simple messenger uses two threads (one for read, one for write)
> to serve each connection; the async messenger has at most one thread
> serving a connection, and multiple connections share the same thread.
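>
> The shape of one worker's loop is roughly the following (a sketch of the
> idea, not the real code; handle_event is a hypothetical per-connection
> callback):
>
>   #include <sys/epoll.h>
>
>   // One worker thread multiplexes all of its connections with epoll
>   // instead of dedicating a reader and a writer thread to each socket.
>   void worker_loop(int epfd) {
>     struct epoll_event events[128];
>     for (;;) {
>       // block until any of this worker's sockets is ready
>       int n = epoll_wait(epfd, events, 128, -1);
>       for (int i = 0; i < n; ++i) {
>         // events[i].data.ptr carries the connection state; all socket
>         // I/O done here is non-blocking.
>         // handle_event(events[i]);   // hypothetical callback
>       }
>     }
>   }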
>
> Next, I have several plans to improve performance:
> 1. add poll mode support, which I hope will help meet high-performance
>    storage needs (see the sketch after this list)
> 2. add load balancing among worker threads
> 3. move more work out of the messenger thread.
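>
> For plan 1, poll mode would be the same loop with a zero timeout, trading
> CPU for latency on fast storage (again only my sketch):
>
>   #include <sys/epoll.h>
>
>   // Busy-poll variant: epoll_wait with timeout 0 returns immediately,
>   // so the worker never sleeps in the kernel between events.
>   void worker_poll_loop(int epfd) {
>     struct epoll_event events[128];
>     for (;;) {
>       int n = epoll_wait(epfd, events, 128, /*timeout=*/0);
>       for (int i = 0; i < n; ++i) {
>         // handle_event(events[i]);   // hypothetical callback
>       }
>     }
>   }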
>
>>
>> Could you please send out any documentation around the async messenger?
>> I tried to google it, but not even a blueprint is popping up.
>
>>
>> Thanks & Regards
>>
>> Somnath
>>
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
>> Of Haomai Wang
>> Sent: Monday, October 12, 2015 7:57 PM
>> To: Mark Nelson
>> Cc: ceph-devel; ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger
>> vs AsyncMessenger results
>>
>> COOL
>>
>> Interesting that the async messenger consumes more memory than simple;
>> in my mind I always thought async should use less memory. I will take a
>> look at this.
>>
>> On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson <mnel...@redhat.com> wrote:
>>
>> Hi Guys,
>>
>> Given all of the recent data on how different memory allocator
>> configurations improve SimpleMessenger performance (and the effect of
>> memory allocators and transparent hugepages on RSS memory usage), I
>> thought I'd run some tests looking at how AsyncMessenger does in
>> comparison.  We spoke about these a bit at the last performance meeting,
>> but here's the full write-up.
>> The rough conclusions as of right now appear to be:
>>
>> 1) AsyncMessenger performance is not dependent on the memory allocator
>> like with SimpleMessenger.
>>
>> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB
>> (i.e. default) thread cache.
>>
>> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
>> random reads.
>>
>> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
>> allocator optimizations are used.
>>
>> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>>
>> Here's a link to the paper:
>>
>> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view
>>
>> Mark
>> --
>> Best Regards,
>> Wheat
>
> --
> Best Regards,
> Wheat
>



--
Best Regards,
Wheat
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
