On Fri, Mar 15, 2013 at 8:49 PM, emmanuel segura <[email protected]> wrote:
> Hello Fredrik
>
> Why do you have a clone of cl_exportfs_root when you are using an ext4
> filesystem? And I think this ordering is not correct:
>
> order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start
> order o_root_before_nfs inf: cl_exportfs_root g_nfs:start
>
> I think that way you end up trying to start g_nfs twice

No, multiple ordering constraints won't lead to that; they only sequence
the single start of g_nfs, they don't trigger additional starts.
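
If the group ever comes up on the node where DRBD is not master, it would
be worth checking that those orders are paired with colocation
constraints, along these lines (a sketch using the resource names from
the quoted config; the constraint names here are invented):

colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
colocation c_nfs_with_root inf: g_nfs cl_exportfs_root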

>
>
> 2013/3/14 Fredrik Hudner <[email protected]>
>
>> Hi all,
>> I have a problem after removing a node from my crm configuration with
>> the force command.
>> Originally I had a 2-node HA cluster (corosync 1.4.1-7.el6,
>> pacemaker 1.1.7-6.el6).
>>
>> Then I wanted to add a third node to act as a quorum node, but I was
>> not able to get it to work (probably because I don't understand how to
>> set it up). So I removed the third node, but had to use the force
>> command, as crm complained when I tried to remove it.
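>>
>> (For reference, the forced removal went roughly like this; I'm citing
>> crmsh syntax from memory, and "testclu03" is a stand-in for whatever I
>> named the quorum node:
>>
>> crm --force node delete testclu03
>> )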
>>
>> Now when I start up Pacemaker, the resources don't look like they come
>> up correctly:
>>
>> Online: [ testclu01 testclu02 ]
>>
>> Master/Slave Set: ms_drbd_nfs [p_drbd_nfs]
>>      Masters: [ testclu01 ]
>>      Slaves: [ testclu02 ]
>> Clone Set: cl_lsb_nfsserver [p_lsb_nfsserver]
>>      Started: [ testclu01 testclu02 ]
>> Resource Group: g_nfs
>>      p_lvm_nfs  (ocf::heartbeat:LVM):   Started testclu01
>>      p_fs_shared        (ocf::heartbeat:Filesystem):    Started testclu01
>>      p_fs_shared2       (ocf::heartbeat:Filesystem):    Started testclu01
>>      p_ip_nfs   (ocf::heartbeat:IPaddr2):       Started testclu01
>> Clone Set: cl_exportfs_root [p_exportfs_root]
>>      Started: [ testclu01 testclu02 ]
>>
>> Failed actions:
>>     p_exportfs_root:0_monitor_30000 (node=testclu01, call=12, rc=7,
>> status=complete): not running
>>     p_exportfs_root:1_monitor_30000 (node=testclu02, call=12, rc=7,
>> status=complete): not running
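>>
>> (I suppose I could clear the failed monitor entries with a cleanup,
>> along the lines of:
>>
>> crm resource cleanup cl_exportfs_root
>>
>> but I doubt that fixes whatever is wrong with exportfs itself.)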
>>
>> The filesystems mount correctly on the master at this stage and can be
>> written to.
>> When I stop the services on the master node so that they fail over, it
>> doesn't work, and cluster-IP connectivity is lost.
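>>
>> (I trigger the failover simply by stopping pacemaker on the master; if
>> it matters, I assume a more controlled test would be to put the node in
>> standby and back:
>>
>> crm node standby testclu01
>> crm node online testclu01
>> )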
>>
>> The corosync.log from the master after I stopped pacemaker on that
>> node: see the attached file.
>>
>> Additional files (attached):
>> - output of crm configure show
>> - corosync.conf
>> - global_common.conf
>>
>> I'm not sure how to proceed to get the cluster back into a sane state,
>> so if anyone could help me it would be much appreciated.
>>
>> Kind regards
>> /Fredrik Hudner
>>
>
> --
> this is my life and I live it as long as God wills
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
