But on ceph-users, Mark and some other users are actively discussing Supermicro 
chassis that can hold 24 spindles per 2U, or 36/48 spindles per 4U.

Even with 20 OSDs per node, the thread count will be more than 5000, and if the 
internal heartbeat/replication pipes are taken into account, it should be around 
10K threads. That is still too high for an 8-core or 16-core CPU (or CPUs) and 
will waste a lot of cycles on context switching.
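
As a rough sanity check of that estimate, here is a minimal sketch (the 
2-threads-per-pipe figure comes from the discussion below; the client-connection 
and peer-pipe counts are illustrative assumptions, not measured values):

#include <cstdio>

int main() {
    // Assumed workload, for illustration only.
    const int osds_per_node      = 20;   // OSD daemons on one host
    const int client_conns       = 120;  // e.g. 120 RBD clients touching every OSD
    const int threads_per_pipe   = 2;    // one reader + one writer per pipe (per this thread)
    const int peer_pipes_per_osd = 50;   // guess: heartbeat/replication pipes to other OSDs

    int per_osd  = (client_conns + peer_pipes_per_osd) * threads_per_pipe;
    int per_node = per_osd * osds_per_node;

    printf("threads per OSD : %d\n", per_osd);   // 340
    printf("threads per node: %d\n", per_node);  // 6800, i.e. thousands of threads
    return 0;
}

With these assumed numbers the result lands between the 5000 and 10K figures 
quoted above.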

Sent from my iPhone

On 2013-6-7, at 0:21, "Gregory Farnum" <g...@inktank.com> wrote:

> On Thu, Jun 6, 2013 at 12:25 AM, Chen, Xiaoxi <xiaoxi.c...@intel.com> wrote:
>> 
>> Hi,
>>         From the code, each pipe (which contains a TCP socket) spawns 2 
>> threads, a reader and a writer. We do observe 100+ threads per OSD 
>> daemon with 30 instances of rados bench as clients.
>>         But this number seems a bit crazy. If I have a node with 40 disks, I 
>> will have 40 OSDs, and we plan to have 6 such nodes to serve 120 VMs from 
>> OpenStack. Since an RBD image is distributed across all the OSDs, we can 
>> expect every single OSD daemon to have 120 TCP sockets, which means 240 
>> threads. Thus, with 40 OSDs per node, we will have 9600 threads per node. 
>> This thread count seems incredible.
>>         Is there any internal mechanism to track and manage the number of 
>> pipes? And another question: why do we need so many threads? Why not 
>> epoll?
> Yep, right now the OSD maintains two threads per connection. That
> hasn't been a problem so far (they're fairly cheap threads) and people
> run into other limits much earlier; 40 OSDs/node, for instance, would
> require a lot of compute power anyway.
> epoll is a good idea and is something we're aware of, but it hasn't
> been necessary yet and would involve mucking around with some fairly
> sensitive core code so it hasn't risen to the top of anybody's queue.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
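
For readers unfamiliar with the alternative Greg mentions, here is a minimal 
sketch of an epoll-based event loop (illustration only, not Ceph code; socket 
setup and error handling are omitted):

#include <sys/epoll.h>
#include <unistd.h>
#include <cstdio>
#include <cerrno>

// Single-threaded event loop over many already-connected sockets.
// 'fds' would be the accepted TCP sockets, one per peer or client.
void event_loop(const int *fds, int nfds) {
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return; }

    // Register every socket for read-readiness notifications.
    for (int i = 0; i < nfds; ++i) {
        struct epoll_event ev = {};
        ev.events  = EPOLLIN;
        ev.data.fd = fds[i];
        epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
    }

    struct epoll_event events[64];
    for (;;) {
        // One blocking call watches all sockets; no per-connection threads.
        int n = epoll_wait(epfd, events, 64, -1);
        if (n < 0 && errno != EINTR) break;
        for (int i = 0; i < n; ++i) {
            char buf[4096];
            ssize_t len = read(events[i].data.fd, buf, sizeof(buf));
            if (len <= 0)
                epoll_ctl(epfd, EPOLL_CTL_DEL, events[i].data.fd, nullptr);
            // else: hand 'buf' to message decoding / a worker pool
        }
    }
    close(epfd);
}

A single thread blocked in epoll_wait() can service thousands of sockets, which 
is what avoids the two-threads-per-connection cost discussed above; the 
trade-off is that message handling must then be dispatched to a bounded worker 
pool rather than running inline on a dedicated reader thread.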