> On 25 Oct 2014, at 9:08 am, Lax wrote:
>
> Andrew Beekhof writes:
>>> So on pacemaker restart, is there any way I can stop my LSB resource coming
>>> up in START mode when such resource is already running on a master?
>>
>> Tell init/systemd not to start it when the node boots
>
> Thanks f
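Following up on the suggestion above: disabling the LSB service at boot, so that only Pacemaker ever starts it, might look like this. A sketch only; the service name `mysql` is an assumption, substitute your actual LSB script name:

```shell
# Hypothetical service name "mysql" -- use your actual LSB script name.

# SysV init (e.g. CentOS/RHEL 6): remove the service from all runlevels
chkconfig mysql off

# systemd-based distributions
systemctl disable mysql

# Verify it is no longer set to start at boot
chkconfig --list mysql        # SysV
systemctl is-enabled mysql    # systemd
```

Pacemaker then remains the only thing that starts or stops the resource.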
I guess corosync and pacemaker are started as user hacluster
The start method of the init script managed by SMF:
…
start() {
    # Stop any stale instances before starting fresh.
    stop
    su "${CLUSTER_USER}" -c "${APPPATH}${COROSYNC}"
    sleep "$sleep0"
    su "${CLUSTER_USER}" -c "${APPPATH}${PACEMAKERD}" &
    return 0
}
….
root@zd
> On 25 Oct 2014, at 8:11 pm, Grüninger, Andreas (LGL Extern)
> wrote:
>
> I guess corosync and pacemaker are started as user hacluster
>
> The method start of the init script managed by SMF:
> …
> start() {
> stop
> su ${CLUSTER_USER} -c ${APPPATH}${COROSYNC}
> sleep
Hi all.
I use the Percona RA on a cluster (nothing mission-critical currently, just
Zabbix data); today, after restarting the MySQL resource (crm resource
restart p_mysql), I got a split-brain state: for some reason MySQL
started first on the ex-slave node, and the ex-master started later
(possibly I've set
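A hedged sketch of the usual guards against this kind of split brain: working fencing, plus resource stickiness so a promoted instance does not bounce between nodes on restart. The resource naming follows the thread; exact crmsh syntax varies by version:

```shell
# Keep resources where they are unless there is a reason to move them.
crm configure rsc_defaults resource-stickiness=100

# A split brain can only be resolved safely with fencing configured
# (stonith-enabled=true assumes a working STONITH device is defined):
crm configure property stonith-enabled=true
```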
On 25/10/14 03:32 PM, Andrew wrote:
Hi all.
I use Percona as RA on cluster (nothing mission-critical, currently -
just zabbix data); today after restarting MySQL resource (crm resource
restart p_mysql) I've got a split brain state - MySQL for some reason
started first at ex-slave node, ex-master
Hi all.
After upgrading CentOS to current (Pacemaker 1.1.8-7.el6 to
1.1.10-14.el6_5.3), Pacemaker produces tons of logs, nearly 20 GB per day.
What may cause this behavior?
Running config:
node node2.cluster \
attributes p_mysql_mysql_master_IP="192.168.253.4" \
attributes p_pgsql-data-sta
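One common cause is debug-level logging left enabled after the upgrade. As a sketch (option names and paths should be checked against your corosync version), the logging block in /etc/corosync/corosync.conf can be tightened like this:

```
logging {
    to_syslog: yes
    to_logfile: no
    debug: off
}
```

On the Pacemaker side, it may also be worth checking that PCMK_debug is not enabled in /etc/sysconfig/pacemaker.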
On 25.10.2014 22:34, Digimer wrote:
On 25/10/14 03:32 PM, Andrew wrote:
Hi all.
I use Percona as RA on cluster (nothing mission-critical, currently -
just zabbix data); today after restarting MySQL resource (crm resource
restart p_mysql) I've got a split brain state - MySQL for some reason
started
Hi,
currently I'm testing a 2-node setup using Ubuntu trusty.
# The scenario:
All communication links between the 2 nodes are cut off. This results
in a split brain situation and both nodes take their resources online.
When the communication links get back, I see following behaviour:
On drbd l
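For reference, manual DRBD split-brain recovery usually follows this pattern; the resource name `r0` is an assumption, and the operator chooses which node's changes are discarded:

```shell
# On the split-brain victim (its local changes will be thrown away):
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# On the survivor (only needed if it reports StandAlone):
drbdadm connect r0
```

Longer term, fencing is what prevents both nodes from taking resources online in the first place.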
On 25/10/14 05:09 PM, Vladimir wrote:
Hi,
currently I'm testing a 2 node setup using ubuntu trusty.
# The scenario:
All communication links between the 2 nodes are cut off. This results
in a split brain situation and both nodes take their resources online.
When the communication links get bac
On Sat, 25 Oct 2014 17:30:07 -0400
Digimer wrote:
> On 25/10/14 05:09 PM, Vladimir wrote:
> > Hi,
> >
> > currently I'm testing a 2 node setup using ubuntu trusty.
> >
> > # The scenario:
> >
> > All communication links between the 2 nodes are cut off. This
> > results in a split brain situation
On 25/10/14 06:35 PM, Vladimir wrote:
On Sat, 25 Oct 2014 17:30:07 -0400
Digimer wrote:
On 25/10/14 05:09 PM, Vladimir wrote:
Hi,
currently I'm testing a 2 node setup using ubuntu trusty.
# The scenario:
All communication links between the 2 nodes are cut off. This
results in a split brain
On Sat, 25 Oct 2014 23:34:54 +0300
Andrew wrote:
> On 25.10.2014 22:34, Digimer wrote:
> > On 25/10/14 03:32 PM, Andrew wrote:
> >> Hi all.
> >>
> >> I use Percona as RA on cluster (nothing mission-critical, currently -
> >> just zabbix data); today after restarting MySQL resource (crm resource
> >>