Re: [ceph-users] Ceph in OSPF environment

2019-01-21 Thread Simon Leinen
Burkhard Linke writes:
> I'm curious. What is the advantage of OSPF in your setup over
> e.g. LACP bonding both links?

Good question! Some people (including myself) are uncomfortable with LACP (in particular "MLAG", i.e. port aggregation across multiple chassis), and with fancy L2 setups in gen
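As an illustration of the routed alternative being favored here, below is a minimal BIRD 1.x configuration sketch for a storage host announcing its loopback /32 over two uplinks via OSPF. This is not from the thread; the interface names and loopback address are borrowed from the topology shown later in the thread and are assumptions, not a verified working config.

```
# Hypothetical BIRD 1.x sketch: advertise the host loopback over OSPF
# on both uplinks, and install ECMP routes into the kernel.
protocol direct {
    interface "lo";          # pick up the loopback /32
}
protocol kernel {
    persist;
    merge paths on;          # combine equal-cost routes into one ECMP route
    export all;
}
protocol ospf {
    area 0 {
        interface "enp97s0f1" { cost 10; };
        interface "enp19s0f0" { cost 10; };
        stubnet 10.10.200.5/32;   # assumed host loopback
    };
}
```

With equal costs on both interfaces, peers learn two equal-cost paths to the loopback, which is what produces the ECMP routes shown in the original post.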

Re: [ceph-users] Ceph in OSPF environment

2019-01-21 Thread Serkan Çoban
If ToR switches are L3 then you cannot use LACP.

On Mon, Jan 21, 2019 at 4:02 PM Burkhard Linke wrote:
>
> Hi,
>
> I'm curious. What is the advantage of OSPF in your setup over e.g.
> LACP bonding both links?
>
> Regards,
>
> Burkhard
>
> ___

Re: [ceph-users] Ceph in OSPF environment

2019-01-21 Thread Burkhard Linke
Hi,

I'm curious. What is the advantage of OSPF in your setup over e.g. LACP bonding both links?

Regards,
Burkhard

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph in OSPF environment

2019-01-21 Thread Max Krasilnikov
Good day!

Mon, Jan 21, 2019 at 10:42:58AM +, pseudo wrote:
>
> On Sun, Jan 20, 2019 at 09:05:10PM +, Max Krasilnikov wrote:
> > > > Just checking, since it isn't mentioned here: Did you explicitly add
> > > > public_network+cluster_network as empty variables?
> > > >
> > > > Trace

Re: [ceph-users] Ceph in OSPF environment

2019-01-21 Thread Max Krasilnikov
Hello!

Sun, Jan 20, 2019 at 09:07:35PM +, robbat2 wrote:
> On Sun, Jan 20, 2019 at 09:05:10PM +, Max Krasilnikov wrote:
> > > Just checking, since it isn't mentioned here: Did you explicitly add
> > > public_network+cluster_network as empty variables?
> > >
> > > Trace the code in the
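The "empty variables" being asked about live in ceph.conf. Below is a hedged sketch of what that might look like on a routed, loopback-addressed host: the addresses are illustrative, and whether empty values are actually required depends on the Ceph version, as the thread itself notes.

```
# Hypothetical ceph.conf fragment (not from the thread).
# Empty public_network/cluster_network keeps Ceph from selecting a bind
# address by matching an interface subnet; public_addr/cluster_addr pin
# the daemon to the loopback address announced via OSPF instead.
[global]
public_network =
cluster_network =

[osd.5]
public_addr = 10.10.200.5
cluster_addr = 10.10.200.5
```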

Re: [ceph-users] Ceph in OSPF environment

2019-01-20 Thread Volodymyr Litovka
Hi,

to be more precise, netstat table looks as in the following snippet:

tcp        0      0 10.10.200.5:6815   10.10.25.4:43788   ESTABLISHED 51981/ceph-osd
tcp        0      0 10.10.15.2:41020   10.10.200.8:6813   ESTABLISHED 51981/ceph-osd
tcp        0      0 10.10.15.2:48724   10.10.20
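One way to make the asymmetry in a snippet like this visible is to group the sockets by their local address: the daemon listens on the loopback but opens outbound connections from a physical-interface address. A small Python sketch; the sample lines are taken from the message above, the parsing itself is illustrative.

```python
# Count ceph-osd sockets per local address to spot loopback-bound
# listeners mixed with interface-bound outbound connections.
from collections import Counter

netstat_lines = [
    "tcp 0 0 10.10.200.5:6815 10.10.25.4:43788 ESTABLISHED 51981/ceph-osd",
    "tcp 0 0 10.10.15.2:41020 10.10.200.8:6813 ESTABLISHED 51981/ceph-osd",
]

def local_addresses(lines):
    counts = Counter()
    for line in lines:
        local = line.split()[3]         # local "addr:port" column
        addr = local.rsplit(":", 1)[0]  # strip the port
        counts[addr] += 1
    return counts

print(local_addresses(netstat_lines))
# Counter({'10.10.200.5': 1, '10.10.15.2': 1})
```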

Re: [ceph-users] Ceph in OSPF environment

2019-01-20 Thread Robin H. Johnson
On Sun, Jan 20, 2019 at 09:05:10PM +, Max Krasilnikov wrote:
> > Just checking, since it isn't mentioned here: Did you explicitly add
> > public_network+cluster_network as empty variables?
> >
> > Trace the code in the sourcefile I mentioned, specific to your Ceph
> > version, as it has change

Re: [ceph-users] Ceph in OSPF environment

2019-01-20 Thread Max Krasilnikov
Hello!

Sun, Jan 20, 2019 at 09:00:22PM +, robbat2 wrote:
> > > > we build L3 topology for use with CEPH, which is based on OSPF routing
> > > > between Loopbacks, in order to get reliable and ECMPed topology, like
> > > > this:
> > > ...
> > > > CEPH configured in the way
> > > You have a

Re: [ceph-users] Ceph in OSPF environment

2019-01-20 Thread Robin H. Johnson
On Sun, Jan 20, 2019 at 08:54:57PM +, Max Krasilnikov wrote:
> Good day!
>
> Fri, Jan 18, 2019 at 11:02:51PM +, robbat2 wrote:
>
> > On Fri, Jan 18, 2019 at 12:21:07PM +, Max Krasilnikov wrote:
> > > Dear colleagues,
> > >
> > > we build L3 topology for use with CEPH, which is

Re: [ceph-users] Ceph in OSPF environment

2019-01-20 Thread Max Krasilnikov
Good day!

Fri, Jan 18, 2019 at 11:02:51PM +, robbat2 wrote:
> On Fri, Jan 18, 2019 at 12:21:07PM +, Max Krasilnikov wrote:
> > Dear colleagues,
> >
> > we build L3 topology for use with CEPH, which is based on OSPF routing
> > between Loopbacks, in order to get reliable and ECMPed

Re: [ceph-users] Ceph in OSPF environment

2019-01-18 Thread Robin H. Johnson
On Fri, Jan 18, 2019 at 12:21:07PM +, Max Krasilnikov wrote:
> Dear colleagues,
>
> we build L3 topology for use with CEPH, which is based on OSPF routing
> between Loopbacks, in order to get reliable and ECMPed topology, like this:
...
> CEPH configured in the way
You have a minor misconfigu

[ceph-users] Ceph in OSPF environment

2019-01-18 Thread Max Krasilnikov
Dear colleagues,

we build L3 topology for use with CEPH, which is based on OSPF routing between Loopbacks, in order to get reliable and ECMPed topology, like this:

10.10.200.6 proto bird metric 64
    nexthop via 10.10.15.3 dev enp97s0f1 weight 1
    nexthop via 10.10.25.3 dev enp19s0f0 weight
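The two equal-weight nexthops above give per-flow ECMP: the kernel hashes each flow and all packets of that flow take the same path, while different flows spread across both uplinks. A toy Python sketch of the idea; this is not the Linux kernel's actual hash, and the flow tuple and nexthop list are taken only for illustration from the route shown above.

```python
# Toy per-flow ECMP selection: hash the 5-tuple, index into the
# nexthop list. Illustrative only; real kernels use their own hash.
import hashlib

NEXTHOPS = ["10.10.15.3", "10.10.25.3"]  # from the route shown above

def pick_nexthop(src, sport, dst, dport, proto="tcp"):
    key = f"{src}:{sport}-{dst}:{dport}-{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return NEXTHOPS[digest[0] % len(NEXTHOPS)]

# The same flow always maps to the same nexthop:
a = pick_nexthop("10.10.200.5", 6815, "10.10.200.8", 6813)
b = pick_nexthop("10.10.200.5", 6815, "10.10.200.8", 6813)
assert a == b
```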