I have a two-node test cluster running with the CMAN plugin. Fencing is not
configured. I see that vsanqa7 sends a message to vsanqa8 to shut down.
However, it is not clear why vsanqa7 makes this decision.
=== /var/log/messages ===
Node vsanqa7
Jul 15 08
Hi,
Where can I find information about which versions of pacemaker, cman, and
corosync are compatible with each other?
Regards,
Kiran
6.2 (Final)
[root@vsanqa8 ~]#
So do I need to upgrade to pacemaker 1.1.9 for the fix?
Regards,
Kiran
On Wed, Jul 17, 2013 at 4:31 AM, Andrew Beekhof wrote:
>
> On 16/07/2013, at 11:03 PM, K Mehta wrote:
>
> > I have a two node test cluster running with CMAN plugin.
node level. I hope it is OK to embed the
logic of selectively blocking resources in the fencing agent.
On Wed, Jul 17, 2013 at 9:15 AM, Digimer wrote:
> On 16/07/13 09:03, K Mehta wrote:
>
>> I have a two node test cluster running with CMAN plugin. Fencing is not
>> configured.
>
upgrade components which have
some known critical fixes in newer versions.
2. Are there versions of these components that are definitely not
expected to work together?
Regards,
kiran
On Wed, Jul 17, 2013 at 10:23 AM, Andrew Beekhof wrote:
>
> On 16/07/2013, at 11:59 PM, K Mehta
Hi,
I have a two-node cluster with a few resources configured on it. On vqa12,
crmd dies due to some internal error. It is not clear why crmd decides to
die on May 5 at 22:14:50 on system vqa12:
May 05 22:14:50 [3518] vqa12 crmd: info: do_exit: Performing
A_EXIT_0 -
Hi,
I am trying to use pcs commands instead of crmsh commands to manage
resources. How do I stop a resource using pcs? I didn't find any pcs
resource stop command.
Regards,
kiran
ed.
>
>
> A simple 'pcs resource disable MyResource' should work.
>
>
> Frank
>
> On 24.02.2014 12:39, K Mehta wrote:
>
>> Hi,
>>
>> I am trying to use pcs commands instead of crmsh commands to manage
>>
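A minimal sketch of that disable/enable approach, using the hypothetical
resource name from Frank's reply (pcs has no 'resource stop' subcommand;
disable/enable fill that role):

# pcs resource disable MyResource
# pcs status resources
# pcs resource enable MyResource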
ut=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd ]
Error: Unable to stop: vha-de5566b1-c2a3-4dc6-9712-c82bb43f19d8 before
deleting (re-run with --force to force deletion)
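A minimal sketch of the re-run suggested by that error message; whether
forcing the delete is safe depends on the state the resource is in:

# pcs resource delete vha-de5566b1-c2a3-4dc6-9712-c82bb43f19d8 --force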
On Wed, Feb 26, 2014 at 1:10 PM, Frank Brendel wrote:
> No errors in syslog?
>
>
> On 25.02.2014 15:45, K Mehta wrote:
> An overview of your setup/configuration would be nice.
> What was the cluster status before you tried to delete the resource?
> And did you try the --force option?
>
>
> On 26.02.2014 11:46, K Mehta wrote:
>
> Here is the log
>
> [root@sys11 ~]# pcs resource delete vha-de5566b1-c2a3-4dc6-9712-c82bb43f19d8
> On Wed, Feb 26, 2014 at 4:57 PM, Frank Brendel
> wrote:
>
>> An overview of your setup/configuration would be nice.
>> What was the cluster status before you tried to delete the resource?
>> And did you try the --force option?
>>
>>
>> On 26.02.2014 11:46, K Mehta wrote:
> > Feb 26 15:43:16 node1 cib[1820]: error: xml_log: Element cib failed to
> > validate content
> > Feb 26 15:43:16 node1 cib[1820]: warning: cib_perform_op: Updated CIB does
> > not validate against pacemaker-1.2 schema/dtd
> > Feb 26 15:43:16 node1 cib[1820]: warning: cib_d
Can anyone tell me why the --wait parameter always causes pcs resource disable
to return failure even though the resource actually stops within the timeout?
On Wed, Feb 26, 2014 at 10:45 PM, K Mehta wrote:
> Deleting master resource id does not work. I see the same issue.
> However, uncloning helps.
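A minimal sketch of that unclone-then-delete sequence; the ms- id here is an
assumption based on the naming pattern used elsewhere in the thread:

# pcs resource unclone ms-de5566b1-c2a3-4dc6-9712-c82bb43f19d8
# pcs resource delete vha-de5566b1-c2a3-4dc6-9712-c82bb43f19d8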
k
        if (expire_time < int(time.time())):
            break
        time.sleep(1)
    return False    # <<< False is returned
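To illustrate the symptom, a minimal check of the exit status, reusing the
multi-state resource id from another message in this digest:

# pcs resource disable ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 --wait=60
# echo $?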
On Fri, Feb 28, 2014 at 10:49 PM, David Vossel wrote:
>
>
>
>
> - Original Message -
> > From: "K Mehta"
>
Has no one ever faced this issue?
On Fri, Feb 28, 2014 at 11:51 PM, K Mehta wrote:
> Yes, the issue is seen only with multi-state resources. Non-multi-state
> resources work fine. It looks like the is_resource_started function in utils.py
> does not compare the resource name properly. Let
What is meant by the following error?
This node is within the non-primary component and will NOT provide any
services.
Are all resources expected to go into unmanaged state after this message is
seen?
Regards,
kiran
I created a multi-state resource ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2
(vha-2be6c088-a1fa-464a-b00d-f4bccb4f5af2).
Here is the configuration:
==
[root@vsanqa11 ~]# pcs config
Cluster Name: vsanqa11_12
Corosync Nodes:
Pacemaker Nodes:
vsanqa11 vsanqa12
Resources:
Maste
Hi,
When and why is this message printed?
What does it mean?
May 13 01:38:36 vsanqa27 cib[6956]: notice: cib_process_diff: Diff
0.9967.124 -> 0.9967.125 from vsanqa28 not applied to 0.9967.124: Failed
application of an update diff
Regards,
Kiran
I unclone the multi-state resource. Uncloning is done successfully; however,
deletion of the resource sometimes fails.
On Tue, May 20, 2014 at 10:20 AM, K Mehta wrote:
> Because I see this message every time deletion of a multi-state resource
> fails (I have posted information about is
:06 pm, K Mehta wrote:
>
> > Hi,
> > When and why is this message printed?
> > What does it mean?
> >
> >May 13 01:38:36 vsanqa27 cib[6956]: notice: cib_process_diff: Diff
> 0.9967.124 -> 0.9967.125 from vsanqa28 not applied to 0.9967.124: Failed
[root@vsan-test2 ~]# rpm -qa | grep pcs
pcs-0.9.26-10.el6.noarch
[root@vsan-test2 ~]# rpm -qa | grep ccs
ccs-0.16.2-63.el6.x86_64
[root@vsan-test2 ~]# rpm -qa | grep pace
pacemaker-libs-1.1.8-7.el6.x86_64
pacemaker-cli-1.1.8-7.el6.x86_64
pacemaker-1.1.8-7.el6.x86_64
pacemaker-cluster-libs-1.1.8-7.e
wrote:
>
> On 19 May 2014, at 5:43 pm, K Mehta wrote:
>
> > Please see my reply inline. Attached is the crm_report output.
> >
> >
> > On Thu, May 8, 2014 at 5:45 AM, Andrew Beekhof
> wrote:
> >
> > On 8 May 2014, at 12:38 am, K Mehta wrote:
Hi,
vsanqa27 is promoted to master and vsanqa28 is slave. Suddenly,
vsanqa27 is demoted and vsanqa28 is promoted.
[root@vsanqa28 vsh-mp-05]# rpm -qa | grep pcs; rpm -qa | grep ccs ; rpm -qa
| grep pacemaker ; rpm -qa | grep corosync
pcs-0.9.90-2.el6.centos.2.noarch
ccs-0.16.2-69.el6_5.1.x86_6
Attached is the file
On Thu, May 22, 2014 at 4:00 PM, Andrew Beekhof wrote:
>
> On 22 May 2014, at 4:33 pm, K Mehta wrote:
>
> > Hi,
> > vsanqa27 is promoted to master and vsanqa28 is slave. Suddenly,
> vsanqa27 is demoted and vsanqa28 is promoted.
> >
>
pcs versions 0.9.26 and 0.9.90
pacemaker versions 1.1.8 and 1.1.10
Which pcs versions are expected to work with which pacemaker versions?
Regards,
Kiran
REATE_LOG constraint location ms-${uuid} prefers
$node2
Any issue here?
Regards,
Kiran
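For context, a minimal sketch of such a location constraint with placeholder
resource and node names taken from elsewhere in the thread; pcs constraint
with no arguments lists what was created:

# pcs constraint location ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 prefers vsanqa12
# pcs constraint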
On Mon, May 26, 2014 at 8:54 AM, Andrew Beekhof wrote:
>
> On 22 May 2014, at 11:20 pm, K Mehta wrote:
>
> > > May 13 01:38:36 vsanqa28 pengine[4310]: notice: LogActions: Promote
14, at 5:15 pm, K Mehta wrote:
>
> > pcs versions 0.9.26 and 0.9.90
> > pacemaker versions 1.1.8 and 1.1.10
> >
> > Which pcs versions are expected to work with which pacemaker versions?
>
> I think for the most part, all versions will work together.
> There m
So is globally-unique=false correct in my case?
On Tue, May 27, 2014 at 5:30 AM, Andrew Beekhof wrote:
>
> On 26 May 2014, at 9:56 pm, K Mehta wrote:
>
> > What I understand from "globally-unique=false" is as follows:
> > The agent handling the resource does exact
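For illustration, a rough sketch of inspecting and setting that clone/master
meta attribute with pcs, reusing a master resource id from another message in
this digest; whether it needs to be set explicitly depends on the agent's
defaults:

# pcs resource meta ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2 globally-unique=false
# pcs resource show ms-2be6c088-a1fa-464a-b00d-f4bccb4f5af2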
configured
Online: [ vsanqa11 vsanqa12 ]
Full list of resources:
On Tue, May 27, 2014 at 11:01 AM, Andrew Beekhof wrote:
>
> On 27 May 2014, at 2:34 pm, K Mehta wrote:
>
> > I have seen that 0.9.26 works with 1.1.8 pacemaker and 0.9.90 works with
> 1.1.10 pacemaker.
> &
is ?
On Tue, May 27, 2014 at 11:01 AM, Andrew Beekhof wrote:
>
> On 27 May 2014, at 2:37 pm, K Mehta wrote:
>
> > So is globally-unique=false correct in my case ?
>
> yes
>
> >
> >
> > On Tue, May 27, 2014 at 5:30 AM, Andrew Beekhof
> wrote:
>
pacemaker-cluster-libs-1.1.10-14.el6_5.3.x86_64
Linux vsanqa11 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012
x86_64 x86_64 x86_64 GNU/Linux
CentOS release 6.3 (Final)
Regards,
Kiran
On Wed, May 28, 2014 at 2:47 AM, Chris Feist wrote:
> On 05/27/14 05:38, K Mehta wrote:
>
>&
In which pcs version is this issue fixed?
On Wednesday, May 28, 2014, K Mehta wrote:
> Chris,
> Here is the required information
> [root@vsanqa11 ~]# rpm -qa | grep pcs ; rpm -qa | grep pacemaker ; uname
-a ; cat /etc/redhat-release
> pcs-0.9.90-2.el6.centos.2.noarch
> pacemake
Any update?
On Thu, May 29, 2014 at 9:08 AM, K Mehta wrote:
> In which pcs version is this issue fixed ?
>
>
> On Wednesday, May 28, 2014, K Mehta wrote:
> > Chris,
> > Here is the required information
> > [root@vsanqa11 ~]# rpm -qa | grep pcs ; rpm -qa | gre
Mehta wrote:
> any update ?
>
>
> On Thu, May 29, 2014 at 9:08 AM, K Mehta wrote:
>
>> In which pcs version is this issue fixed ?
>>
>>
>> On Wednesday, May 28, 2014, K Mehta wrote:
>> > Chris,
>> > Here is the required information
>
Hi,
In case stonith is not set up and a two-node cluster automatically
recovers from split brain, I get an "Another DC detected" message in the log file.
Here are the cluster settings:
Cluster Properties:
cluster-infrastructure: cman
dc-version: 1.1.10-14.el6_5.3-368c726
no-quorum-policy: ignore
st
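For reference, a rough sketch of how those properties are typically set with
pcs on a two-node cluster without fencing; the stonith-enabled line is an
assumption, since the property listing above is truncated:

# pcs property set no-quorum-policy=ignore
# pcs property set stonith-enabled=false
# pcs property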
Didn't see any buffer size suggestion in syslog.
Changed the buffer size from 20k to 200k and rebooted both systems.
Did the following after reboot:
[root@vsanqa11 kiran]# cat /etc/sysconfig/pacemaker | grep PCMK_ipc
# PCMK_ipc_type=shared-mem|socket|posix|sysv
export PCMK_ipc_buffer=204800
[root@vsan
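A minimal sketch of making that change persistent, assuming the CentOS 6 init
scripts used elsewhere in this thread; the append is only needed if the export
line is not already present in /etc/sysconfig/pacemaker:

# echo 'export PCMK_ipc_buffer=204800' >> /etc/sysconfig/pacemaker
# service pacemaker restart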
Any update on this ?
On Sat, Jul 12, 2014 at 11:55 PM, K Mehta wrote:
> Andrew,
> Attached is the report.
>
> Regards,
> Kiran
>
>
> On Fri, Jul 11, 2014 at 4:26 AM, Andrew Beekhof
> wrote:
>
>> Can you run crm_report for the period covered by your
e client side and possibly before the pacemaker tools are invoked.
>
> On 9 Jul 2014, at 6:49 pm, K Mehta wrote:
>
> > [root@vsanqa11 ~]# pcs resource create
> vha-3de5ab16-9917-4b90-93d2-7b04fc71879c ocf:heartbeat:vgc-cm-agent.ocf
> cluster_uuid=3de5ab16-9917-4b90-93d2-7b04fc71879c
> You are simply hitting a limit on the number of characters you can use on
> one command because cib.xml is getting large. Use pcs --debug to dump the
> XML to screen, paste it into a file, and run cibadmin --xml-file directly
> as a workaround. It works great.
>
>
>
> Colin
>
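A rough sketch of that workaround, reusing the resource agent and uuid from
the earlier message; the file path is a placeholder, and the exact XML to copy
comes from the --debug output:

# pcs --debug resource create vha-3de5ab16-9917-4b90-93d2-7b04fc71879c \
      ocf:heartbeat:vgc-cm-agent.ocf cluster_uuid=3de5ab16-9917-4b90-93d2-7b04fc71879c
  (copy the generated resource XML from the --debug output into /tmp/resource.xml)
# cibadmin --create -o resources --xml-file /tmp/resource.xml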