Hello Andrew,

Thank you for your reply...

I applied this corosync config (now with explicit nodeids):

totem {
    version: 2
    secauth: off
    cluster_name: cluster
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        ttl: 1
    }
    transport: udpu
}

nodelist {
    node {
        ring0_addr: noeud1.xxxx.fr
        nodeid: 1
    }
    node {
        ring0_addr: noeud2.xxxx.fr
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_syslog: yes
    debug: off
}
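
(To double-check what corosync actually registered from this config, I believe the runtime nodelist can be dumped with corosync-cmapctl on corosync 2.x:

# show the node names/ids corosync loaded from corosync.conf
corosync-cmapctl | grep ^nodelist
)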

I started the cluster and everything looked OK (?). After stopping and restarting the cluster, I now have this:

On the FC18 node:
    <nodes>
      <node id="-1062731267" uname="noeud1.xxxx.fr" type="normal"/>
      <node id="-33445696" uname="noeud2.xxxx.fr" type="normal"/>
      <node id="3232236029" uname="noeud1.xxxx.fr"/>
      <node id="4261521600" uname="noeud2.xxxx.fr"/>
      <node id="1" uname="noeud1.xxxx.fr"/>
      <node id="2" uname="noeud2.xxxx.fr"/>
    </nodes>
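
(If I follow your %u/%d explanation below, the four old auto-generated entries are stale duplicates of the new nodeid 1/2 entries. I assume each could be removed from the nodes section with something like the following — untested, cibadmin syntax from memory, so please correct me:

cibadmin -D -o nodes -X '<node id="3232236029"/>'
)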

...
Aug 26 10:38:39 noeud1 cib[8505]: warning: cib_process_replace: Replacement 0.5.4 from noeud2.xxxx.fr not applied to 0.9.1: current epoch is greater than the replacement
...

On the FC17 node:
    <nodes>
      <node id="-1062731267" uname="noeud1.xxxx.fr" type="normal"/>
      <node id="-33445696" uname="noeud2.xxxx.fr" type="normal"/>
    </nodes>

...
Aug 26 10:32:55 noeud2 crmd[23373]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Aug 26 10:32:55 noeud2 crmd[23373]: info: do_dc_join_finalize: join-14301: Syncing the CIB from noeud1.apec.fr to the rest of the cluster
Aug 26 10:32:55 noeud2 cib[23368]: error: cib_perform_op: Discarding update with feature set '3.0.7' greater than our own '3.0.6'
Aug 26 10:32:55 noeud2 cib[23368]: warning: cib_diff_notify: Update (client: crmd, call:28613): -1.-1.-1 -> 0.9.1 (The action/feature is not supported)
Aug 26 10:32:55 noeud2 cib[23368]: error: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=noeud1.xxxx.fr/crmd/28613, version=0.5.4): The action/feature is not supported (rc=-29)
Aug 26 10:32:55 noeud2 cib[23368]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=noeud1.xxxx.fr/noeud1.xxxx.fr/28613, version=0.5.4): ok (rc=0)
Aug 26 10:32:55 noeud2 crmd[23373]: error: finalize_sync_callback: Sync from noeud1.xxxx.fr failed: The action/feature is not supported
Aug 26 10:32:55 noeud2 crmd[23373]: warning: do_log: FSA: Input I_ELECTION_DC from finalize_sync_callback() received in state S_FINALIZE_JOIN
Aug 26 10:32:55 noeud2 crmd[23373]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=finalize_sync_callback ]
Aug 26 10:32:55 noeud2 crmd[23373]: info: do_dc_join_offer_all: join-14302: Waiting on 2 outstanding join acks
Aug 26 10:32:55 noeud2 crmd[23373]: info: update_dc: Set DC to noeud2.xxxx.fr (3.0.6)
...


Best regards.

Francis

On 08/26/2013 01:42 AM, Andrew Beekhof wrote:

On 23/08/2013, at 7:18 PM, Francis SOUYRI <francis.sou...@apec.fr> wrote:

Hello,

For a long time I have used heartbeat/drbd for two-node clusters on Fedora, using heartbeat's internal CRM rather than pacemaker.

I planned to upgrade from FC17 to FC18, but on FC18 heartbeat is obsolete, so I have to switch to corosync/pacemaker.
For information, the heartbeat FC17 package works fine on FC18, and a cluster with one FC17 node and one FC18 node works perfectly (once the firewall that FC18 activates by default is disabled!). The final configuration will have both nodes on FC18.
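
(Rather than disabling the firewall entirely, I think it should be enough to open corosync's totem ports with firewalld on FC18 — untested sketch, assuming the default mcastport of 5405:

# open the default corosync totem ports (5405 plus 5404)
firewall-cmd --permanent --add-port=5404-5405/udp
firewall-cmd --reload
)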

But corosync/pacemaker does not work with one FC17 node and one FC18 node.

I have these packages installed:

On FC17:
drbd-pacemaker-8.4.2-1.fc17.i686
pacemaker-libs-1.1.7-2.fc17.i686
pacemaker-1.1.7-2.fc17.i686
corosync-2.3.0-1.fc17.i686
corosynclib-2.3.0-1.fc17.i686
pacemaker-cli-1.1.7-2.fc17.i686
pacemaker-cluster-libs-1.1.7-2.fc17.i686

On FC18:
pacemaker-libs-1.1.9-0.1.70ad9fa.git.fc18.i686
pacemaker-1.1.9-0.1.70ad9fa.git.fc18.i686
drbd-pacemaker-8.4.2-1.fc18.i686
pacemaker-cluster-libs-1.1.9-0.1.70ad9fa.git.fc18.i686
pacemaker-cli-1.1.9-0.1.70ad9fa.git.fc18.i686
corosynclib-2.3.1-1.fc18.i686
corosync-2.3.1-1.fc18.i686

The corosync config:

totem {
    version: 2
    secauth: off
    cluster_name: cluster
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        ttl: 1
    }
    transport: udpu
}

nodelist {
    node {
        ring0_addr: noeud1.xxxx.fr
    }
    node {
        ring0_addr: noeud2.xxxx.fr
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_syslog: yes
    debug: off
}

A short time after starting pacemaker I see this:

FC18 node:

Corosync Nodes:
noeud1.xxxx.fr noeud2.xxxx.fr
Pacemaker Nodes:
noeud1.xxxx.fr noeud1.xxxx.fr noeud2.xxxx.fr noeud2.xxxx.fr

<node id="-33445696" uname="noeud2.xxxx.fr" type="normal"/>
<node id="-1062731267" uname="noeud1.xxxx.fr" type="normal"/>
<node id="3232236029" uname="noeud1.xxxx.fr"/>
<node id="4261521600" uname="noeud2.xxxx.fr"/>

Why four nodes?! What are nodes 3232236029 and 4261521600?

They are the same as the other two, but stored as %u (unsigned int) instead of %d (signed int).
This was a bug in older versions; you can work around it by specifying a (small) nodeid in corosync.conf.
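
(To make the %u/%d point concrete, a minimal C check, assuming 32-bit node ids and using the values from the CIB above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* node ids copied from the duplicated <node> entries */
    uint32_t ids[] = { 3232236029u, 4261521600u };

    for (int i = 0; i < 2; i++) {
        /* the same 32-bit pattern printed unsigned (%u) and signed (%d):
         * 3232236029 <-> -1062731267 and 4261521600 <-> -33445696,
         * i.e. exactly the pairs shown in the CIB */
        printf("%%u: %u  %%d: %d\n", ids[i], (int32_t)ids[i]);
    }
    return 0;
}
)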


FC17 node:

Corosync Nodes:
noeud1.xxxx.fr noeud2.xxxx.fr
Pacemaker Nodes:
noeud1.xxxx.fr noeud2.xxxx.fr

<node id="-33445696" uname="noeud2.xxxx.fr" type="normal"/>
<node id="-1062731267" uname="noeud1.xxxx.fr" type="normal"/>

On the FC17 node I get messages like: "error: cib_perform_op: Discarding update with feature set '3.0.7' greater than our own '3.0.6'".
On the FC18 node: "warning: cib_process_replace: Replacement 0.5.4 from noeud2.xxxx.fr not applied to 0.9.0: current epoch is greater than the replacement".

Are Pacemaker 1.1.7 and 1.1.9 not compatible?

This should provide some more information:
    http://blog.clusterlabs.org/blog/2013/mixing-pacemaker-versions/



