Hi Emmanuel, Dan,

Thanks for the responses.
It is working now with unicast.
The updated configurations are below. I still have to update the threads
parameter, though.
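Dan suggested setting threads to the number of CPUs on the system; a quick way to check that on a Linux node (a sketch, assuming coreutils is installed) is:

```shell
# Count the CPUs on this node; Dan's suggestion is to use this value
# for the "threads" option in the totem stanza instead of 0.
nproc
# Fallback if nproc is unavailable:
grep -c '^processor' /proc/cpuinfo
```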


Node A
IP: 10.10.0.201
Netmask: 255.255.0.0
corosync.conf
===========
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                member {
                        memberaddr: 10.10.0.201
                }
                member {
                        memberaddr: 192.168.0.197
                }
                ringnumber: 0
                bindnetaddr: 10.10.0.0
                mcastport: 3300
        }
        transport: udpu
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 0
}

Node B
IP: 192.168.0.197
Netmask: 255.255.255.0
corosync.conf
==========
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                member {
                        memberaddr: 192.168.0.197
                }
                member {
                        memberaddr: 10.10.0.201
                }
                ringnumber: 0
                bindnetaddr: 192.168.0.0
                mcastport: 3300
        }
        transport: udpu
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 0
}
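For reference, the bindnetaddr values in the two configs are just the network addresses of each node's interface; they can be derived from the IP and netmask listed above (a sketch using Python's stdlib ipaddress module):

```python
import ipaddress

# bindnetaddr is the network address corosync binds to on each node.
# Derive it from the IP/netmask pairs given for Node A and Node B above.
node_a = ipaddress.ip_interface("10.10.0.201/255.255.0.0")
node_b = ipaddress.ip_interface("192.168.0.197/255.255.255.0")

print(node_a.network.network_address)  # 10.10.0.0
print(node_b.network.network_address)  # 192.168.0.0
```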

Thanks,
Raffi

________________________________
From: Emmanuel Saint-Joanis [mailto:[email protected]]
Sent: Tuesday, September 25, 2012 1:15 PM
To: S, MOHAMED (MOHAMED)** CTR **
Subject: Re: [Linux-HA] corosync - nodes in different network

Hi,
Honestly, this kind of question I would test extensively to prove
feasibility or impossibility.
But in a way, I do have two distinct networks, as I have 3 machines connected by
crossed cables with a bridge.
When the center node fails, the two others stay "alone", not seeing each other,
and keep on working independently.
As soon as the central node comes back up, all 3 see each other and everything goes fine.


2012/9/21 S, MOHAMED (MOHAMED)** CTR ** <[email protected]>
Hi Dan, Emmanuel,

Thanks for the quick responses.

We have two variations.
1- High Availability - Both nodes will be in the same network, in the same
lab. In this case we have two physical interfaces: one through the network, and
another connected back to back.

2- Disaster Recovery - The nodes will be in different networks, located in
different places. In this case we have only one physical interface.

Both used to work with heartbeat + pacemaker.
We are upgrading to corosync + pacemaker.

The first setup worked. Now I am trying to configure the Disaster Recovery
setup with corosync + pacemaker.

With corosync, is it not possible to have two nodes in different networks in a
cluster?

Thanks,
Raffi


> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of
> Dan Frincu
> Sent: Friday, September 21, 2012 6:54 PM
> To: General Linux-HA mailing list
> Subject: Re: [Linux-HA] corosync - nodes in different network
>
> Hi,
>
> On Fri, Sep 21, 2012 at 4:17 PM, Emmanuel Saint-Joanis
> <[email protected]<mailto:[email protected]>> wrote:
> > right this kind for example :
>
> More like this.
>
>         interface {
>                 # The following values need to be set based on your
> environment
>                 ringnumber: 0
>                 bindnetaddr: 172.16.17.18
>                 mcastaddr: 226.94.1.1
>                 mcastport: 5405
>         }
>         interface {
>                 # The following values need to be set based on your
> environment
>                 ringnumber: 1
>                 bindnetaddr: 192.168.169.170
>                 mcastaddr: 226.94.1.2
>                 mcastport: 5405
>         }
>
> There are a couple of issues with your setup (conceptually speaking).
>
> >
> > totem {
> >         version: 2
> >         secauth: off
> >         threads: 0
> >         rrp_mode: passive
> >         interface {
> >                 ringnumber: 0
> >                 member {
> >                         memberaddr: 10.0.0.11
> >                 }
> >                 member {
> >                         memberaddr: 10.0.0.12
> >                 }
> >                 bindnetaddr: 10.0.0.0
> >                 mcastaddr: 226.94.1.1
>
> mcastaddr does not apply to udpu transport.
>
> >                 mcastport: 5405
> >                 ttl: 1
> >         }
> >         interface {
> >                 ringnumber: 1
> >                 member {
> >                         memberaddr: 192.168.0.11
> >                 }
> >                 member {
> >                         memberaddr: 192.168.0.12
> >                 }
> >                 bindnetaddr: 192.168.0.0
> >                 mcastaddr: 226.94.1.2
> >                 mcastport: 5407
> >                 ttl: 1
> >         }
>
> And you're missing transport: udpu before the closing of the totem stanza.
>
> > }
> >
> >
> > 2012/9/21 Dan Frincu <[email protected]>
> >>
> >> Hi,
> >>
> >> On Fri, Sep 21, 2012 at 2:51 PM, S, MOHAMED (MOHAMED)** CTR **
> >> <[email protected]<mailto:[email protected]>> wrote:
> >> > Hi,
> >> > If I set up the two nodes cluster in the same network (10.10.0.0,
> >> > 10.10.0.0), the nodes are joining in the cluster, I see the
> >> > "pcmk_peer_update" and "crm_update_peer" messages in
> >> > /var/log/cluster/corosync.log
> >> >
> >> > When I setup two nodes cluster with each node in different
> >> > network(10.10.0.0, 192.168.0.0), the nodes are not joining in the
> cluster.
> >> > Both the nodes has same /etc/corosync/authkey (confirmed through
> md5sum)
> >> > The nodes are not joining in the cluster; I do not see the
> >> > "pcmk_peer_update" messages in /var/log/cluster/corosync.log
> >> > I think I am not configuring the corosync.conf properly in this
> >> > scenario.
> >> >
> >> > The details of the two nodes are mentioned below.
> >> > Any help is really appreciated.
> >>
> >> First, both nodes for the same interface statement should be in the
> >> same network in order to work.
> >> Second, you need at least 2 physical interfaces for redundancy (and
> >> I'm not talking about bond here). You could set the 10.10.0.0/16
> >> network on ringnumber 0 and 192.168.0.0/24 on ringnumber 1 (again,
> >> you'd need 2 physical interfaces).
> >> Third, ttl=1 for different subnets, how do you expect to route the
> >> packets?
> >> Fourth, my personal favourite, secauth=on and threads=0. Set
> >> threads=#number_of_cpus_on_the_system
> >>
> >> HTH,
> >> Dan
> >>
> >> >
> >> > Node A
> >> > IP: 10.10.0.38
> >> > Netmask: 255.255.0.0
> >> > corosync.conf
> >> > =================
> >> > compatibility: whitetank
> >> >
> >> > totem {
> >> >         version: 2
> >> >         secauth: on
> >> >         threads: 0
> >> >         interface {
> >> >                 ringnumber: 0
> >> >                 bindnetaddr: 10.10.0.0
> >> >                 mcastaddr: 226.94.1.3
> >> >                 mcastport: 3300
> >> >                 ttl: 1
> >> >         }
> >> > }
> >> >
> >> > logging {
> >> >         fileline: off
> >> >         to_stderr: no
> >> >         to_logfile: yes
> >> >         to_syslog: yes
> >> >         logfile: /var/log/cluster/corosync.log
> >> >         debug: off
> >> >         timestamp: on
> >> >         logger_subsys {
> >> >                 subsys: AMF
> >> >                 debug: off
> >> >         }
> >> > }
> >> >
> >> > amf {
> >> >         mode: disabled
> >> > }
> >> >
> >> > service {
> >> >     # Load the Pacemaker Cluster Resource Manager
> >> >     name: pacemaker
> >> >     ver: 0
> >> > }
> >> >
> >> > Node B
> >> > IP: 192.168.0.199
> >> > Netmask: 255.255.255.0
> >> > corosync.conf
> >> > =================
> >> > compatibility: whitetank
> >> >
> >> > totem {
> >> >         version: 2
> >> >         secauth: on
> >> >         threads: 0
> >> >         interface {
> >> >                 ringnumber: 0
> >> >                 bindnetaddr: 192.168.0.0
> >> >                 mcastaddr: 226.94.1.3
> >> >                 mcastport: 3300
> >> >                 ttl: 1
> >> >         }
> >> > }
> >> >
> >> > logging {
> >> >         fileline: off
> >> >         to_stderr: no
> >> >         to_logfile: yes
> >> >         to_syslog: yes
> >> >         logfile: /var/log/cluster/corosync.log
> >> >         debug: off
> >> >         timestamp: on
> >> >         logger_subsys {
> >> >                 subsys: AMF
> >> >                 debug: off
> >> >         }
> >> > }
> >> >
> >> > amf {
> >> >         mode: disabled
> >> > }
> >> >
> >> > service {
> >> >     # Load the Pacemaker Cluster Resource Manager
> >> >     name: pacemaker
> >> >     ver: 0
> >> > }
> >> >
> >> > Thanks,
> >> > Raffi
> >> > _______________________________________________
> >> > Linux-HA mailing list
> >> > [email protected]<mailto:[email protected]>
> >> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> >> > See also: http://linux-ha.org/ReportingProblems
> >>
> >>
> >>
> >> --
> >> Dan Frincu
> >> CCNA, RHCE
> >
> >
>
>
>
> --
> Dan Frincu
> CCNA, RHCE
