[Pacemaker] displaying failcounts with crm_mon --failcounts

2010-01-27 Thread Schaefer, Diane E
Hi, I am using Pacemaker 1.0.6. I noticed that when I run crm_mon --failcounts while a resource is in the failed state, I get both the migration threshold and the fail-count: Migration summary: * Node qpr2: default_route: migration-threshold=5 fail-count=6 last-failure='Wed Jan 27 14:47:49 2010' *
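
For reference, a minimal sketch of how that failcount could be inspected and cleared with the crm shell, using the resource and node names from the message above (exact subcommand syntax may vary between Pacemaker releases):

  # show the current failcount for default_route on node qpr2
  crm resource failcount default_route show qpr2
  # reset the failcount and clean up the failed-operation history
  crm resource failcount default_route delete qpr2
  crm resource cleanup default_route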

Re: [Pacemaker] Debian packages of the clusterstack updated: Bugreport, No 2

2010-01-27 Thread Michael Schwartzkopff
On Wednesday, 27 January 2010 18:10:48, Rasto Levrinc wrote: > On Wed, January 27, 2010 5:41 pm, Michael Schwartzkopff wrote: > > On Wednesday, 27 January 2010 16:13:09, Martin Gerhard Loschwitz wrote: > >> Packages available for Lenny on amd64 and i386 from the usual source: > >> > >> > >> deb htt

Re: [Pacemaker] Debian packages of the clusterstack updated: Bugreport, No 2

2010-01-27 Thread Rasto Levrinc
On Wed, January 27, 2010 5:41 pm, Michael Schwartzkopff wrote: > On Wednesday, 27 January 2010 16:13:09, Martin Gerhard Loschwitz wrote: >> Packages available for Lenny on amd64 and i386 from the usual source: >> >> >> deb http://people.debian.org/~madkiss/ha lenny main deb-src >> http://people.d
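
A rough sketch of how the announced repository might be added and the packages pulled in on Lenny; the deb line is copied from the quoted mail, while the deb-src line is truncated there and the exact package names are assumptions:

  # /etc/apt/sources.list (append)
  deb http://people.debian.org/~madkiss/ha lenny main

  # then refresh and install the stack
  apt-get update
  apt-get install pacemaker heartbeat cluster-glue cluster-agents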

Re: [Pacemaker] Debian packages of the clusterstack updated: Bugreport, No 2

2010-01-27 Thread Michael Schwartzkopff
On Wednesday, 27 January 2010 17:56:38, Raoul Bhatia [IPAX] wrote: > On 01/27/2010 05:41 PM, Michael Schwartzkopff wrote: > > Where do I get your old packages? > > # apt-cache policy pacemaker > pacemaker: > Installed: (none) > Candidate: 1.0.7+hg20100127-0test1~bpo50+1 > Version table: >

Re: [Pacemaker] Debian packages of the clusterstack updated: Bugreport, No 2

2010-01-27 Thread Raoul Bhatia [IPAX]
On 01/27/2010 05:41 PM, Michael Schwartzkopff wrote: > Where do I get your old packages? # apt-cache policy pacemaker pacemaker: Installed: (none) Candidate: 1.0.7+hg20100127-0test1~bpo50+1 Version table: 1.0.7+hg20100127-0test1~bpo50+1 0 50 http://people.debian.org lenny/main
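
The priority of 50 shown in that output means APT will not pick the backported candidate on its own; a hedged sketch of an APT preferences entry that pins the test package so it becomes installable (version string copied from the output above, priority value is an assumption):

  # /etc/apt/preferences
  Package: pacemaker
  Pin: version 1.0.7+hg20100127-0test1~bpo50+1
  Pin-Priority: 1001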

[Pacemaker] Debian packages of the clusterstack updated: Bugreport, No 2

2010-01-27 Thread Michael Schwartzkopff
On Wednesday, 27 January 2010 16:13:09, Martin Gerhard Loschwitz wrote: > Ladies and Gentlemen, > > it's a great pleasure for me to announce the availability of up-to-date > packages of the standard cluster stack components once more, including all > the recently released release-candidates from He

Re: [Pacemaker] Debian packages of the clusterstack updated

2010-01-27 Thread Michael Schwartzkopff
On Wednesday, 27 January 2010 16:13:09, Martin Gerhard Loschwitz wrote: > Ladies and Gentlemen, > > it's a great pleasure for me to announce the availability of up-to-date > packages of the standard cluster stack components once more, including all > the recently released release-candidates from He

Re: [Pacemaker] Debian packages of the clusterstack updated

2010-01-27 Thread Andrew Beekhof
Nice work! On Wed, Jan 27, 2010 at 4:13 PM, Martin Gerhard Loschwitz wrote: > Ladies and Gentlemen, > > it's a great pleasure for me to announce the availability of up-to-date > packages > of the standard cluster stack components once more, including all the recently > released release-candidate

[Pacemaker] Debian packages of the clusterstack updated

2010-01-27 Thread Martin Gerhard Loschwitz
Ladies and Gentlemen, it's a great pleasure for me to announce the availability of up-to-date packages of the standard cluster stack components once more, including all the recently released release-candidates from Heartbeat, Cluster-Glue and Cluster-Agents as well as Pacemaker 1.0.7. Packages a

Re: [Pacemaker] crm: timeout for start warning (possible bug?)

2010-01-27 Thread Maros Timko
>> seems like it is a bit more complicated than I initially thought. Now I >> tried to set the timeout longer so that there will not be any warning. >> However, the second group is not created either: >> # crm >> crm(live)# configure >> crm(live)configure# group udom udom-drbd-udom0 udom-drbd-udom1
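
For context, a hedged sketch of giving a primitive explicit operation timeouts so the crm shell's warning about timeouts smaller than the advised value does not fire; the resource name is modelled on the quoted ones, and the agent and timeout values are assumptions:

  crm(live)configure# primitive udom-drbd-udom0 ocf:linbit:drbd \
          params drbd_resource="udom0" \
          op start timeout="240s" op stop timeout="100s" \
          op monitor interval="20s" timeout="20s"
  crm(live)configure# commit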

Re: [Pacemaker] SLES 11 ocf:heartbeat:drbd versus ocf:linbit:drbd

2010-01-27 Thread Raoul Bhatia [IPAX]
On 01/27/2010 11:26 AM, Oliver Ladner wrote: > Hello, > > Can someone please tell me the difference between the resource agents > ocf:heartbeat:drbd and ocf:linbit:drbd? > > I'm using the legacy heartbeat:drbd agent at the moment, as drbd on SLES 11 > is only at 8.2.7. Problem is that the two-n

Re: [Pacemaker] resource migrate has no effect - got no clue

2010-01-27 Thread Andrew Beekhof
On Tue, Jan 26, 2010 at 6:33 PM, Koch, Sebastian wrote: > Hi, > > > > I am kind of new to Pacemaker. I am trying to configure an active/passive > pacemaker/drbd/mysql cluster. Preferably all resources should run on node1 > if available; if not, they should migrate to node2.  I already read all the >
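
A minimal sketch of the usual way to express the "prefer node1" part with a location constraint, plus the cleanup step that crm resource migrate often needs; the group name and score are assumptions:

  # prefer node1 for the whole group, but still allow failover to node2
  crm configure location prefer-node1 grp_mysql 100: node1
  # 'crm resource migrate' leaves a cli-prefer constraint behind;
  # remove it with unmigrate, otherwise later moves appear to have no effect
  crm resource unmigrate grp_mysql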

[Pacemaker] SLES 11 ocf:heartbeat:drbd versus ocf:linbit:drbd

2010-01-27 Thread Oliver Ladner
Hello, Can someone please tell me the difference between the resource agents ocf:heartbeat:drbd and ocf:linbit:drbd? I'm using the legacy heartbeat:drbd agent at the moment, as drbd on SLES 11 is only at 8.2.7. Problem is that the two-node cluster only moves the drbd resource when the active n
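
For comparison, a hedged sketch of the master/slave layout the newer ocf:linbit:drbd agent (shipped with DRBD 8.3 and later) is typically used with; the resource names and monitor intervals here are assumptions:

  crm configure primitive res_drbd ocf:linbit:drbd \
          params drbd_resource="r0" \
          op monitor interval="15s" role="Master" \
          op monitor interval="30s" role="Slave"
  crm configure ms ms_drbd res_drbd \
          meta master-max="1" clone-max="2" notify="true"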

Re: [Pacemaker] Wiki Account

2010-01-27 Thread Andrew Beekhof
Done. Though perhaps you could amend http://www.clusterlabs.org/wiki/Debian_Lenny_HowTo if there were additional steps it didn't cover. On Tue, Jan 26, 2010 at 9:52 AM, Koch, Sebastian wrote: > Hi, > > > > I would like to contribute to your great project. I needed to figure out > some strange err

Re: [Pacemaker] crm: timeout for start warning (possible bug?)

2010-01-27 Thread Dejan Muhamedagic
Hi, On Tue, Jan 26, 2010 at 09:38:56PM +0100, Lars Ellenberg wrote: > On Tue, Jan 26, 2010 at 04:47:12PM +, Maros Timko wrote: > > OK, > > > > seems like it is a bit more complicated than I initially thought. Now I > > tried to set the timeout longer so that there will not be any warning. > >

Re: [Pacemaker] cloned ping ressource with multiple target hosts fails

2010-01-27 Thread Moritz Krinke
Hello, found the problem myself; in case anyone wonders what it was: with the timeout value for the ping resource set to 5 seconds and the number of pings per host left at the default of 4, the probe needs at least 4 seconds per node to complete, so it always got killed by the LRM :-) Thanks
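
A hedged sketch of a cloned ping resource with the monitor timeout sized so that the default of 4 pings per host can actually finish; the host list, multiplier and intervals are made up for illustration:

  crm configure primitive p_ping ocf:pacemaker:ping \
          params host_list="192.168.1.1 192.168.1.2" multiplier="100" dampen="5s" \
          op monitor interval="15s" timeout="60s"
  crm configure clone cl_ping p_ping meta globally-unique="false"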