Hello fellow Ceph users, we are currently evaluating Ceph for our storage.
We have a cluster of 3 OSD nodes (and 5 MONs) serving our RBD disks, which
for now we expose through an NFS proxy setup. Each OSD node has 4x 1G
Intel copper NICs (not sure about the model number, but I'll look it up).
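In case it is useful to anyone, the proxy side is roughly the usual
map-and-export pattern; a minimal sketch, where the pool/image name, mount
point and client subnet are just placeholders, not our real names:

    # on the proxy node: map the RBD image and mount it
    rbd map rbd/nfs-disk01
    mkfs.xfs /dev/rbd0                  # first use only
    mkdir -p /export/nfs-disk01
    mount /dev/rbd0 /export/nfs-disk01

    # /etc/exports on the proxy node
    /export/nfs-disk01  192.168.0.0/24(rw,no_root_squash,sync)

    # reload the NFS export table
    exportfs -ra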
First, thank you for your reply.
TRILL-based switches ( http://en.wikipedia.org/wiki/TRILL_(computing) )
(we have some Brocade VDX ones) have the advantage that they can do LACP
across 2 switches. That means you get full speed while both switches are
running, and still keep redundancy (at half speed) if one of them fails.
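On the host side that is just an ordinary LACP bond with one leg going to
each switch; a minimal Debian-style sketch, assuming ifenslave and
illustrative interface names (the switch pair also has to present the two
ports as one vLAG/MLAG for this to work):

    # /etc/network/interfaces -- LACP bond across the two switches
    auto bond0
    iface bond0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        bond-slaves eth0 eth1        # eth0 -> switch A, eth1 -> switch B
        bond-mode 802.3ad            # LACP
        bond-miimon 100
        bond-xmit-hash-policy layer3+4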
Thank you for your reply,
We use a stacked pair of Dell PowerConnect 6248s with the 2x12 Gb/s
stacking interconnect. Each of the four OSD nodes has a single 10 GbE
link, with a 1 GbE failover link, using Linux bonding in active/backup mode.
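For reference, the bond on a node looks roughly like this (interface names
and the address are illustrative, not our actual config):

    # /etc/network/interfaces -- 10 GbE primary with 1 GbE failover
    auto bond0
    iface bond0 inet static
        address 10.0.0.21
        netmask 255.255.255.0
        bond-slaves eth2 eth0        # eth2 = 10 GbE, eth0 = 1 GbE
        bond-mode active-backup
        bond-primary eth2            # prefer the 10 GbE link
        bond-miimon 100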
I'm sorry, but I just have to ask: what kind of 10GbE NIC do you use?
Thank you for your reply,
I would really think about something faster than gigabit Ethernet.
Merchant silicon is changing the world; take a look at vendors like
Quanta. I just bought two T3048-LY2 switches with Cumulus software for
under $6k each. That gives you 48 10-gig ports and 4 40-gig ports to play
with.