On Sun, Oct 5, 2014 at 11:19 PM, Ariel Silooy <ar...@bisnis2030.com> wrote:

> Hello fellow ceph user, right now we are researching ceph for our storage.
>
> We have a cluster of 3 OSD nodes (and 5 MON) for our RBD disks, which for
> now we are using with an NFS proxy setup. On each OSD node we have 4x 1G
> Intel copper NICs (not sure about the model number, but I'll look it up in
> case anyone asks). Up until now we have been testing on one NIC, as we
> don't have (yet) a network switch with link-aggregation/teaming support.
>
> I suppose since it's Intel we should try to get jumbo frames working too,
> so I hope someone can recommend a good switch that is known to work with
> most Intel NICs.
>
> We are looking for recommendations on network switches, network layout,
> brand, model, whatever.. as we are (kind of) new to building our own
> storage and have no experience with Ceph.
>
> We are also looking at the feasibility of using fibre channel instead of
> copper, but we don't know if it would help much in terms of
> speed-improvement/$ ratio, since we already have 4 NICs on each OSD.
> Should we go for it?
>

I really would think about something faster than gigabit Ethernet. Merchant
silicon is changing the world; take a look at vendors like Quanta. I just
bought two T3048-LY2 switches with Cumulus software for under $6k each.
That gives you 48 10-gig ports and 4 40-gig ports to play with; to save on
optics, use SFP+ direct-attach copper cables. If you want to save even more
money, go with used 10-gig InfiniBand off eBay; you can do that for under
$100 a port.
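On the jumbo frames question: whatever switch you pick, enable jumbo frames on the switch ports as well as the hosts, then verify end to end. A rough sketch (interface name eth0 and the peer address 10.0.0.2 are placeholders; substitute your Ceph cluster-network interface and a peer OSD):

```shell
# Raise the MTU on the Ceph-facing interface (the switch ports and every
# peer must agree, or large frames get silently dropped).
ip link set dev eth0 mtu 9000

# Verify end to end with a non-fragmentable ping. The payload is
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes; -M do sets the
# don't-fragment bit so the ping fails if anything in the path can't
# carry a full jumbo frame.
ping -M do -s 8972 -c 3 10.0.0.2
```

If the ping reports "Frag needed" or times out, some hop (often the switch) is still at 1500.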

><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com