Hope this will be helpful.


Total connections per OSD = (Target PGs per OSD) * (# of pool replicas) * 3 + (2 * # of clients) + (min_hb_peer)

where:

# of pool replicas = configurable, default is 3
3 = number of data communication messengers (cluster, hb_backend, hb_frontend)
min_hb_peer = default is 20, I believe

Total connections per node = (total connections per OSD) * (# of OSDs per node)
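
For a quick back-of-the-envelope estimate, here is a small Python sketch based on the formula above. The function names and the example inputs (PGs per OSD, client count, OSDs per node) are illustrative assumptions, not figures from this thread.

    # Rough estimate of TCP connections per OSD / per node, using the
    # formula quoted above. All example numbers are assumptions.

    def connections_per_osd(target_pgs_per_osd, pool_replicas=3,
                            clients=0, min_hb_peer=20):
        # 3 = data communication messengers (cluster, hb_backend, hb_frontend)
        messengers = 3
        return (target_pgs_per_osd * pool_replicas * messengers
                + 2 * clients
                + min_hb_peer)

    def connections_per_node(per_osd, osds_per_node):
        return per_osd * osds_per_node

    if __name__ == "__main__":
        per_osd = connections_per_osd(target_pgs_per_osd=100,
                                      pool_replicas=3, clients=10)
        print("per OSD:", per_osd)   # 100*3*3 + 2*10 + 20 = 940
        print("per node:", connections_per_node(per_osd, osds_per_node=12))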

Thanks & Regards
Somnath

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Rick 
Balsano
Sent: Wednesday, November 04, 2015 12:28 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Understanding the number of TCP connections between 
clients and OSDs

Just following up since this thread went silent after a few comments showing 
similar concerns, but no explanation of the behavior. Can anyone point to some 
code or documentation which explains how to estimate the expected number of TCP 
connections a client would open based on read/write volume, # of volumes, # of 
OSDs in the pool, etc?


On Tue, Oct 27, 2015 at 5:05 AM, Dan van der Ster <d...@vanderster.com> wrote:
On Mon, Oct 26, 2015 at 10:48 PM, Jan Schermer <j...@schermer.cz> wrote:
> If we're talking about RBD clients (qemu) then the number also grows with
> number of volumes attached to the client.

I never thought about that, but it might explain a problem we have
where multiple attached volumes crash an HV. I had assumed that
multiple volumes would reuse the same rados client instance, and thus
reuse the same connections to the OSDs.

-- dan



--
Rick Balsano
Senior Software Engineer
Opower <http://www.opower.com>

O +1 571 384 1210
We're Hiring! See jobs here <http://www.opower.com/careers>.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
