----- Original Message -----
> From: "Andrew Beekhof" <and...@beekhof.net>
> To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
> Sent: Tuesday, May 15, 2012 7:59:47 PM
> Subject: Re: [Pacemaker] how to enable verbose logging for failed
>
> On Tue, May 15, 2012 at 9:27 PM, Igor Zinovik <zinovik.i...@gmail.com> wrote:
> > 2012/5/14 Andrew Beekhof <and...@beekhof.net>:
> >> On Sat, May 12, 2012 at 11:41 PM, Igor Zinovik <zinovik.i...@gmail.com> wrote:
> >>> Hello.
> >>>
A little late to the party but... where are you getting the slapd RA from? And/or what version of it? What OS?

I had a small bit of trouble when I first tried it - I've since had a small patch applied to the RA and it runs nicely in my cluster.

Jake

> >>> How can I increase the verbosity level for a resource that fails to start:
> >>> # egrep -ei '(warn|error)' /var/log/pacemaker.log
> >>> May 12 17:25:12 ldap2 lrmd: [6279]: WARN: Core dumps could be lost if
> >>> multiple dumps occur.
> >>> May 12 17:25:12 ldap2 lrmd: [6279]: WARN: Consider setting non-default
> >>> value in /proc/sys/kernel/core_pattern (or equivalent) for maximum
> >>> supportability
> >>> May 12 17:25:12 ldap2 lrmd: [6279]: WARN: Consider setting
> >>> /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum
> >>> supportability
> >>> May 12 17:25:13 ldap2 corosync[6270]: [pcmk ] ERROR:
> >>> pcmk_wait_dispatch: Child process mgmtd exited (pid=6283, rc=100)
> >>> May 12 17:28:02 ldap2 crmd: [6282]: info: process_lrm_event: LRM
> >>> operation slapd_mirrormode_monitor_10000 (call=5, rc=1, cib-update=14,
> >>> confirmed=false) unknown error
> >>> May 12 17:28:04 ldap2 crmd: [6282]: info: process_lrm_event: LRM
> >>> operation slapd_mirrormode_monitor_10000 (call=8, rc=1, cib-update=17,
> >>> confirmed=false) unknown error
> >>> May 12 17:28:05 ldap2 crmd: [6282]: info: process_lrm_event: LRM
> >>> operation slapd_mirrormode_monitor_10000 (call=11, rc=1, cib-update=20,
> >>> confirmed=false) unknown error
> >>>
> >>> In the slapd log I do not see any messages that show the cause of the
> >>> failure.
> >>
> >> Is it producing any? Did you grep for "slapd" too?
> >
> > Unfortunately nothing.
>
> The only additional thing Pacemaker can log is output from the RA.
> Does it produce any?
> Is slapd even being called? If so, you'll have to look at the slapd
> config to see how to increase its verbosity.
>
> > My config file is correct:
> > ldap2:~# /usr/lib/openldap/slapd -Tt
> > config file testing succeeded
> >
> > I can successfully start slapd by hand:
> > ldap2:~ # service ldap start
> > redirecting to systemctl
> > ldap2:~ # service ldap status
> > redirecting to systemctl
> > ldap.service - LSB: OpenLDAP Server (slapd)
> >           Loaded: loaded (/etc/init.d/ldap)
> >           Active: active (running) since Tue, 15 May 2012 15:19:51 +0400; 3s ago
> >          Process: 2278 ExecStop=/etc/init.d/ldap stop (code=exited, status=0/SUCCESS)
> >          Process: 2762 ExecStart=/etc/init.d/ldap start (code=exited, status=0/SUCCESS)
> >           CGroup: name=systemd:/system/ldap.service
> >                   2852 /usr/lib/openldap/slapd -h ldap:/// ldaps:/// ldapi:/// -f /etc/openldap/slapd.conf -u ldap -g lda...
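On the verbosity question above: Pacemaker can only record what the RA itself prints, so one rough way to see why the start action fails is to run the agent by hand, outside the cluster, with shell tracing enabled. A minimal sketch follows - the OCF_RESKEY_* values are guesses for this setup (the slapd binary path is copied from the service status output above, and the "config" parameter name may differ in your copy of the RA):

  # run the RA's start action with tracing so every command it executes
  # and the final exit code are visible on the terminal
  export OCF_ROOT=/usr/lib/ocf
  export OCF_RESKEY_slapd=/usr/lib/openldap/slapd       # guess
  export OCF_RESKEY_config=/etc/openldap/slapd.conf     # guess
  bash -x /usr/lib/ocf/resource.d/heartbeat/slapd start
  echo "exit code: $?"

For slapd's own verbosity, the usual knobs are the loglevel directive in slapd.conf (e.g. "loglevel stats") or running slapd in the foreground with -d for debugging; neither depends on the cluster.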
> >
> > When I create the resource and commit the configuration, I see the
> > following in the pacemaker log:
> > ldap2:~# egrep -e 'May 15 15:14' /var/log/pacemaker.log
> > May 15 15:14:47 ldap2 crmd: [13576]: info: update_dc: Unset DC ldap1
> > May 15 15:14:47 ldap2 crmd: [13576]: info: do_state_transition: State
> > transition S_NOT_DC -> S_PENDING [ input=I_PENDING
> > cause=C_FSA_INTERNAL origin=do_election_count_vote ]
> > May 15 15:14:47 ldap2 crmd: [13576]: info: update_dc: Set DC to ldap1 (3.0.5)
> > May 15 15:14:47 ldap2 crmd: [13576]: info: do_state_transition: State
> > transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE
> > origin=do_cl_join_finalize_respond ]
> >
> > ldap2:~# crm_mon -1 -o -r
> > ============
> > Last updated: Tue May 15 15:24:05 2012
> > Last change: Tue May 15 15:14:47 2012 by root via cibadmin on ldap2
> > Stack: openais
> > Current DC: ldap1 - partition with quorum
> > Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
> > 2 Nodes configured, 2 expected votes
> > 2 Resources configured.
> > ============
> >
> > Online: [ ldap2 ldap1 ]
> >
> > Full list of resources:
> >
> > ldap_virtual_ip   (ocf::heartbeat:IPaddr2):   Started ldap1
> > slapd_mirrormode  (ocf::heartbeat:slapd):     Stopped
> >
> > Operations:
> > * Node ldap1:
> >    ldap_virtual_ip: migration-threshold=1000000
> >     + (33) start: rc=0 (ok)
> >     + (34) monitor: interval=5000ms rc=0 (ok)
> >    slapd_mirrormode:0: migration-threshold=1000000 fail-count=1000000
> >     + (7) start: rc=1 (unknown error)
> >    slapd_mirrormode: migration-threshold=3 fail-count=3
> >     + (31) monitor: interval=10000ms rc=1 (unknown error)
> >     + (32) stop: rc=0 (ok)
> > * Node ldap2:
> >    slapd_mirrormode: migration-threshold=3 fail-count=3
> >     + (11) monitor: interval=10000ms rc=1 (unknown error)
> >     + (12) stop: rc=0 (ok)
> >
> > Failed actions:
> >     slapd_mirrormode:0_start_0 (node=ldap1, call=7, rc=1,
> > status=complete): unknown error
> >     slapd_mirrormode_monitor_10000 (node=ldap1, call=31, rc=1,
> > status=complete): unknown error
> >     slapd_mirrormode_monitor_10000 (node=ldap2, call=11, rc=1,
> > status=complete): unknown error
> >
> > I tried the following change to the slapd agent:
> > --- slapd.orig  2012-05-15 15:25:19.625554295 +0400
> > +++ slapd       2012-05-15 15:25:27.330815814 +0400
> > @@ -38,7 +38,7 @@
> >  # Initialization:
> >
> >  : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/resource.d/heartbeat}
> > -. ${OCF_FUNCTIONS_DIR}/.ocf-shellfuncs
> > +. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
> >
> >  : ${OCF_RESKEY_slapd="/usr/sbin/slapd"}
> >  : ${OCF_RESKEY_ldapsearch="ldapsearch"}
> >
> > because in the other agents that come with pacemaker there is no dot
> > before `ocf-shellfuncs', e.g.:
> > ldap2:~# grep ocf-shellfuncs /usr/lib/ocf/resource.d/heartbeat/{nginx,mysql,pgsql}
> > /usr/lib/ocf/resource.d/heartbeat/nginx:. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
> > /usr/lib/ocf/resource.d/heartbeat/mysql:. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
> > /usr/lib/ocf/resource.d/heartbeat/pgsql:. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
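On the .ocf-shellfuncs patch and the fail-count=1000000 above: the shell-functions file has moved around between resource-agents releases (a hidden .ocf-shellfuncs under resource.d/heartbeat in some, a plain ocf-shellfuncs or one under lib/heartbeat in others), so it is worth checking what this install actually ships before patching further, and the accumulated fail-counts need clearing before the cluster will retry the start. A rough sketch - the lib/heartbeat path is a guess, the resource name is taken from the crm_mon output above:

  # see which shell-functions file this resource-agents install provides
  ls -l /usr/lib/ocf/resource.d/heartbeat/.ocf-shellfuncs \
        /usr/lib/ocf/resource.d/heartbeat/ocf-shellfuncs \
        /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs 2>/dev/null

  # clear the fail-counts so pacemaker attempts the start again
  crm_resource --cleanup --resource slapd_mirrormode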
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org