[Pacemaker] Speed up resource failover?

2011-01-11 Thread Patrick H.
As it is right now, Pacemaker seems to take a long time (in computer terms) to fail over resources from one node to the other. Right now, I have 477 IPaddr2 resources evenly distributed among 2 nodes. When I put one node in standby, it takes approximately 5 minutes to move half of those fro…
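
A common knob for this kind of slowdown (not mentioned in the truncated preview above; a sketch only) is the cluster-wide batch-limit property, which caps how many actions the transition engine dispatches in parallel; with hundreds of IPaddr2 resources the default can serialize the moves. Assuming the crm shell is available:

    # Sketch: raise the number of in-flight actions the transition
    # engine may run at once (the value 100 is an illustrative guess).
    crm configure property batch-limit=100

Whether this helps depends on where the time is actually spent (resource agent execution vs. the policy engine), so treat it as a starting point for measurement, not a fix.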

Re: [Pacemaker] pingd process dies for no reason

2011-01-11 Thread Lars Ellenberg
On Tue, Jan 11, 2011 at 03:53:29PM +0100, Andrew Beekhof wrote: > On Tue, Jan 11, 2011 at 2:45 PM, Lars Ellenberg > wrote: > > On Tue, Jan 11, 2011 at 11:24:35AM +0100, patrik.rappo...@knapp.com wrote: > >> we already made changes to the interval and timeout (<op >> id="pingd-op-monitor-30s" interval…

Re: [Pacemaker] pingd process dies for no reason

2011-01-11 Thread Andrew Beekhof
On Tue, Jan 11, 2011 at 2:45 PM, Lars Ellenberg wrote: > On Tue, Jan 11, 2011 at 11:24:35AM +0100, patrik.rappo...@knapp.com wrote: >> we already made changes to the interval and timeout (<op id="pingd-op-monitor-30s" interval="30s" name="monitor" timeout="10s"/>). >> >> how big should dampen be set…

Re: [Pacemaker] Best stonith method to avoid split brain on a drbd cluster

2011-01-11 Thread Dejan Muhamedagic
Hi, On Wed, Jan 05, 2011 at 05:18:36PM -0700, Devin Reade wrote: > Johannes Freygner wrote: > > > *) Yes, and I found the wrong setting: > > Excellent. > > > But if I pull the power cable without a regular shutdown, > the powerless node gets status "UNCLEAN (offline)" and the > > resou…
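
For the pulled-power-cable case, the node stays UNCLEAN until a fencing device confirms it is dead, so a power-based STONITH agent is the usual answer. A minimal sketch using the external/ipmi plugin; hostnames, addresses, and credentials are placeholders, not from the thread:

    # One STONITH resource per node, never allowed on the node it fences.
    primitive st-node1 stonith:external/ipmi \
        params hostname=node1 ipaddr=10.0.0.11 userid=admin passwd=secret interface=lan
    location l-st-node1 st-node1 -inf: node1
    property stonith-enabled=true

Note that IPMI BMCs usually lose power together with the host, so a pulled cable may still leave the fence unconfirmed; a PDU-based agent avoids that failure mode.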

[Pacemaker] pingd process dies for no reason

2011-01-11 Thread Patrik . Rapposch
Hi, thanks. I configured these values now. I hope that we won't face this problem again; otherwise, like I said, I turned on the debug mode of the ping RA, and if I get the next maintenance window, I'll turn on cluster debug mode, so we'd have more log info to find the reason for this problem. Thanks…

Re: [Pacemaker] pingd process dies for no reason

2011-01-11 Thread Lars Ellenberg
On Tue, Jan 11, 2011 at 11:24:35AM +0100, patrik.rappo...@knapp.com wrote: > we already made changes to the interval and timeout (<op id="pingd-op-monitor-30s" interval="30s" name="monitor" timeout="10s"/>). > > how big should dampen be set? > > please correct me, if i am wrong, as i calculate it as…

Re: [Pacemaker] Split-site cluster in two locations

2011-01-11 Thread Holger Teutsch
On Tue, 2011-01-11 at 10:21 +0100, Christoph Herrmann wrote: > -Original Message- > From: Andrew Beekhof > Sent: Tue 11.01.2011 09:01 > To: The Pacemaker cluster resource manager ; > CC: Michael Schwartzkopff ; > Subject: Re: [Pacemaker] Split-site cluster in two locations > >…

Re: [Pacemaker] Split-site cluster in two locations

2011-01-11 Thread Robert van Leeuwen
-Original message- To: The Pacemaker cluster resource manager ; From: Christoph Herrmann Sent: Tue 11-01-2011 10:24 Subject: Re: [Pacemaker] Split-site cluster in two locations > As long as you have only two computing centers it doesn't matter if you run a > corosync > o…

[Pacemaker] pingd process dies for no reason

2011-01-11 Thread Patrik . Rapposch
we already made changes to the interval and timeout (<op id="pingd-op-monitor-30s" interval="30s" name="monitor" timeout="10s"/>). How big should dampen be set? Please correct me if I am wrong, as I calculate it as follows: assuming the last check was OK and in the next second the failure takes place, then there would be 29s until the next check starts, and…
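
A common rule of thumb is dampen >= monitor interval + timeout, so that a connectivity blip seen by one node does not trigger a move before every node's attribute has settled. A sketch of a ping clone matching the 30s/10s values quoted above; the host_list address is a placeholder:

    primitive p-ping ocf:pacemaker:ping \
        params host_list="192.168.1.254" multiplier="1000" dampen="40s" \
        op monitor interval="30s" timeout="10s"
    clone c-ping p-ping

With interval=30s and timeout=10s, dampen="40s" covers one full failed monitor cycle before the attribute change is acted on.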

Re: [Pacemaker] Split-site cluster in two locations

2011-01-11 Thread Christoph Herrmann
-Original Message- From: Andrew Beekhof Sent: Tue 11.01.2011 09:01 To: The Pacemaker cluster resource manager ; CC: Michael Schwartzkopff ; Subject: Re: [Pacemaker] Split-site cluster in two locations > On Tue, Dec 28, 2010 at 10:21 PM, Anton Altaparmakov wrote: > > Hi, > >…

Re: [Pacemaker] Split-site cluster in two locations

2011-01-11 Thread Andrew Beekhof
On Tue, Dec 28, 2010 at 10:21 PM, Anton Altaparmakov wrote: > Hi, > > On 28 Dec 2010, at 20:32, Michael Schwartzkopff wrote: >> Hi, >> >> I have four nodes in a split-site scenario located in two computing centers. >> STONITH is enabled. >> >> Is there a best practice for how to deal with this setup…
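
With two sites and an even node count, neither side can retain quorum when the inter-site link fails, so the usual options at the time were a quorum node at a third location or relaxing the no-quorum-policy, which is only sane if fencing still works across sites. A sketch of the latter, offered as a judgment call rather than the thread's conclusion:

    # Risky without reliable cross-site STONITH: a site that loses
    # quorum will keep running its resources instead of stopping them.
    crm configure property no-quorum-policy=ignore

A third-site quorum node avoids this trade-off at the cost of extra hardware.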