First, thank you for your reply.

TRILL ( http://en.wikipedia.org/wiki/TRILL_(computing) )-based switches
(we have some Brocade VDX ones) have the advantage that they can do LACP
across two switches.
That means you get full speed while both switches are running and still get
redundancy (at half speed) if one goes down.
They are probably too pricey for a 1Gb/s environment, but that's for
you to investigate and decide.

Otherwise you'd wind up with something like two normal switches and half
your possible speed, as one link is always just on standby.
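
To make the difference concrete, here is a minimal Linux bonding sketch
(Debian-style /etc/network/interfaces syntax; the interface names and
address are made-up examples, adjust to your own setup):

  # Both NICs in one bond; with MLAG/vLAG-capable switches you can run
  # LACP across the pair and use the full aggregate bandwidth.
  auto bond0
  iface bond0 inet static
      address 10.0.0.11
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode 802.3ad        # LACP; ~2x throughput while both links are up
      bond-miimon 100
      bond-xmit-hash-policy layer3+4

  # With two ordinary, independent switches you would instead have to use
  #     bond-mode active-backup
  # which leaves one link idle as pure standby.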

Segregating client and replication traffic (public/cluster network)
probably won't make much sense: any decent switch will be able to
handle the full bandwidth of all its ports, and with a combined network
(2 active links) you get the potential benefit of higher read speeds for
clients.
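
For reference, the combined setup is just a matter of not defining a
separate cluster network in ceph.conf; splitting the traffic would look
roughly like this (the subnets below are made-up examples):

  # ceph.conf sketch -- subnets are made-up examples
  [global]
      public network  = 192.168.10.0/24   ; clients <-> MONs/OSDs
      cluster network = 192.168.20.0/24   ; OSD replication/recovery traffic
  # Omitting "cluster network" keeps everything on the public network,
  # i.e. the combined setup described above.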


I suppose, should we go the copper route, it is then preferable to use stackable switches, with redundancy spread across different physical switches?

We are also looking into the feasibility of using fibre-channel instead of
copper, but we don't know if it would help much in terms of
speed-improvement/$ ratio, since we already have 4 NICs on each OSD node.
Should we go for it?

Why would you?
For starters I think you mean fiber optics, as Fibre Channel is something
else. ^o^
Fiber only makes sense when you're going longer distances than your cluster
size suggests.

If you're looking for something that is both faster and less expensive
than 10Gb/s Ethernet, investigate InfiniBand.

We looked into InfiniBand as per your suggestion, but frankly our impression from our research so far is that this "medium" is only supported by one vendor.

http://www.mellanoxstore.com/products/mellanox-mis5023q-1bfr-infiniscale-iv-qdr-infiniband-switch-18-qsfp-ports-1-power-supply-unmanaged-connector-side-airflow-exhaust-no-frus-with-rack-rails-short-depth-form-factor-rohs-6.html

The price sure looks interesting, but we're halfway around the world, and we initially looked into Ceph because we don't want to use hardware that isn't widely available. So we'll just wait until after our trial on a demo unit (an HP 2920-24G-PoE+ switch) before having a good, hard look at InfiniBand.

I'm sorry I forgot to ask this earlier, but what is the most common network medium used by Ceph clusters in production?