Thanks.
Our initial deployment will be 8 OSD nodes with 24 OSDs each (spinning
rust, not SSD). Each node will also contain two PCIe P3700 NVMe drives for
journals. I expect us to grow to a maximum of 15 OSD nodes.
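
Back-of-the-envelope, that's 192 OSDs and 12 filestore journals per P3700.
A quick Python sketch of the numbers (3x replication and the usual
~100 PGs/OSD target are assumptions on my part, not settled pool settings):

    import math

    osd_nodes = 8
    osds_per_node = 24
    nvme_per_node = 2
    replica_size = 3          # assumed; pool settings not finalized
    pgs_per_osd_target = 100  # common Ceph rule of thumb

    total_osds = osd_nodes * osds_per_node              # 192
    journals_per_nvme = osds_per_node // nvme_per_node  # 12 HDDs per P3700

    raw_pgs = total_osds * pgs_per_osd_target / replica_size  # ~6400
    pg_num = 2 ** round(math.log2(raw_pgs))                   # 8192

    print(total_osds, journals_per_nvme, pg_num)

The PG figure is just the stock OSDs * 100 / size guideline rounded to a
power of two, so treat it as a starting point rather than a decision.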

I'll just keep 40Gb on everything for the sake of consistency rather than
risk under-sizing my monitor nodes.
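
As a sanity check on that choice, here is a rough per-node bandwidth
estimate in Python (the ~150 MB/s per spinning disk is an assumed figure
for our drives, not a measurement):

    hdd_mb_s = 150        # assumed sequential throughput per HDD
    osds_per_node = 24

    node_disk_gbps = osds_per_node * hdd_mb_s * 8 / 1000
    print(f"~{node_disk_gbps:.0f} Gbps aggregate disk bandwidth per OSD node")

That lands around 29 Gbps per OSD node, so 40Gb there isn't padding. The
mons sit outside the data path, so for them it really is just consistency.
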
On May 2, 2016 6:17 PM, "Chris Jones" <[email protected]> wrote:

> Mons and RGWs only use the public network, but Mons can carry a good deal of
> traffic. I would not recommend 1Gb, but if you are looking for lower
> bandwidth, 10Gb would be good for most. It all depends on the overall size of
> the cluster. You mentioned 40Gb. If the nodes are high density then 40Gb, but
> if they are lower density then 20Gb would be fine.
>
> -CJ
>
> On Mon, May 2, 2016 at 12:09 PM, Brady Deetz <[email protected]> wrote:
>
>> I'm working on finalizing designs for my Ceph deployment. I'm currently
>> leaning toward 40Gbps Ethernet for interconnect between OSD nodes and to my
>> MDS servers. But I don't really want to run 40Gb to my mon servers
>> unless there is a reason. Would there be an issue with using 1Gb on my
>> monitor servers?
>>
>
>
> --
> Best Regards,
> Chris Jones
>
> [email protected]
> (p) 770.655.0770
>
>