Right, a config along these lines, for example:

totem {
        version: 2
        secauth: off
        threads: 0
        rrp_mode: passive
        interface {
                ringnumber: 0
                member {
                        memberaddr: 10.0.0.11
                }
                member {
                        memberaddr: 10.0.0.12
                }
                bindnetaddr: 10.0.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
        interface {
                ringnumber: 1
                member {
                        memberaddr: 192.168.0.11
                }
                member {
                        memberaddr: 192.168.0.12
                }
                bindnetaddr: 192.168.0.0
                mcastaddr: 226.94.1.2
                mcastport: 5407
                ttl: 1
        }
}
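One quick sanity check for a layout like this, before touching corosync itself, is to verify that every memberaddr on a ring actually falls inside that ring's bindnetaddr network, since that is what each interface statement binds to. A minimal sketch in Python using the addresses from the example above (the /24 prefix lengths are my assumption; use your real netmasks):

```python
import ipaddress

# Each ring: (bindnetaddr network, member addresses), taken from the
# example config above. The /24 prefixes are assumed, not from the config.
rings = {
    0: ("10.0.0.0/24", ["10.0.0.11", "10.0.0.12"]),
    1: ("192.168.0.0/24", ["192.168.0.11", "192.168.0.12"]),
}

def check_rings(rings):
    """Return (ring, member) pairs whose address is outside the ring's network."""
    bad = []
    for ring, (net, members) in rings.items():
        network = ipaddress.ip_network(net)
        for m in members:
            if ipaddress.ip_address(m) not in network:
                bad.append((ring, m))
    return bad

print(check_rings(rings))  # [] -> every member sits on its ring's network
# The mismatch from the original two-node setup below (10.10.0.38 vs 192.168.0.199):
print(check_rings({0: ("10.10.0.0/16", ["10.10.0.38", "192.168.0.199"])}))
# -> [(0, '192.168.0.199')]
```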


2012/9/21 Dan Frincu <[email protected]>

> Hi,
>
> On Fri, Sep 21, 2012 at 2:51 PM, S, MOHAMED (MOHAMED) CTR
> <[email protected]> wrote:
> > Hi,
> > If I set up the two-node cluster with both nodes in the same network
> > (10.10.0.0, 10.10.0.0), the nodes join the cluster and I see the
> > "pcmk_peer_update" and "crm_update_peer" messages in
> > /var/log/cluster/corosync.log.
> >
> > When I set up the two-node cluster with each node in a different
> > network (10.10.0.0, 192.168.0.0), the nodes do not join the cluster.
> > Both nodes have the same /etc/corosync/authkey (confirmed via md5sum),
> > but I do not see the "pcmk_peer_update" messages in
> > /var/log/cluster/corosync.log.
> > I think I am not configuring corosync.conf properly for this scenario.
> >
> > The details of the two nodes are mentioned below.
> > Any help is really appreciated.
>
> First, for a given interface statement, both nodes must be in the
> same network for it to work.
> Second, you need at least 2 physical interfaces for redundancy (and
> I'm not talking about bond here). You could set the 10.10.0.0/16
> network on ringnumber 0 and 192.168.0.0/24 on ringnumber 1 (again,
> you'd need 2 physical interfaces).
> Third, with ttl=1 across different subnets, how do you expect the packets to be routed?
> Fourth, my personal favourite, secauth=on and threads=0. Set
> threads=#number_of_cpus_on_the_system
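[A note on the ttl point: as I understand it, corosync's ttl option maps to the IP_MULTICAST_TTL socket option, and a packet sent with TTL 1 is dropped at the first router, so it can never reach a peer on another subnet. A minimal sketch of that socket option on a plain UDP socket (not corosync itself):]

```python
import socket

# Set the multicast TTL on a UDP socket the way a "ttl: 1" line in
# corosync.conf would; TTL-1 packets are never forwarded past a router,
# so such traffic stays on the local subnet.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
print(ttl)  # 1
sock.close()
```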
>
> HTH,
> Dan
>
> >
> > Node A
> > IP: 10.10.0.38
> > Netmask: 255.255.0.0
> > corosync.conf
> > =================
> > compatibility: whitetank
> >
> > totem {
> >         version: 2
> >         secauth: on
> >         threads: 0
> >         interface {
> >                 ringnumber: 0
> >                 bindnetaddr: 10.10.0.0
> >                 mcastaddr: 226.94.1.3
> >                 mcastport: 3300
> >                 ttl: 1
> >         }
> > }
> >
> > logging {
> >         fileline: off
> >         to_stderr: no
> >         to_logfile: yes
> >         to_syslog: yes
> >         logfile: /var/log/cluster/corosync.log
> >         debug: off
> >         timestamp: on
> >         logger_subsys {
> >                 subsys: AMF
> >                 debug: off
> >         }
> > }
> >
> > amf {
> >         mode: disabled
> > }
> >
> > service {
> >     # Load the Pacemaker Cluster Resource Manager
> >     name: pacemaker
> >     ver: 0
> > }
> >
> > Node B
> > IP: 192.168.0.199
> > Netmask: 255.255.255.0
> > corosync.conf
> > =================
> > compatibility: whitetank
> >
> > totem {
> >         version: 2
> >         secauth: on
> >         threads: 0
> >         interface {
> >                 ringnumber: 0
> >                 bindnetaddr: 192.168.0.0
> >                 mcastaddr: 226.94.1.3
> >                 mcastport: 3300
> >                 ttl: 1
> >         }
> > }
> >
> > logging {
> >         fileline: off
> >         to_stderr: no
> >         to_logfile: yes
> >         to_syslog: yes
> >         logfile: /var/log/cluster/corosync.log
> >         debug: off
> >         timestamp: on
> >         logger_subsys {
> >                 subsys: AMF
> >                 debug: off
> >         }
> > }
> >
> > amf {
> >         mode: disabled
> > }
> >
> > service {
> >     # Load the Pacemaker Cluster Resource Manager
> >     name: pacemaker
> >     ver: 0
> > }
> >
> > Thanks,
> > Raffi
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems
>
>
>
> --
> Dan Frincu
> CCNA, RHCE