-----Original Message-----
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Tuesday, 2 October 2012 12:18
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Every line produced with ocf_log is written twice
On Thu, Sep 27, 2012 at 2:19 AM, Grüninger, Andreas (LGL Extern) <andreas.gruenin...@lgl.bwl.de> wrote:
> Pacemaker 1.1.8 (Build: bdd3f2e): ncurses libqb-logging libqb-ipc
> lha-fencing upstart systemd heartbeat corosync-native snmp libesmtp
> resource-agents 3.9.3.113-a796f3-dirty
>
> In a resource script I use the logging facility ocf_log.
> Every line produced with ocf_log is written twice.

At a guess, it's because both $HA_LOGFILE and $HA_DEBUGLOG are defined.
From reading ha_log(), it seems this change is necessary:

diff --git a/mcp/corosync.c b/mcp/corosync.c
index 7f83d2d..20b6114 100644
--- a/mcp/corosync.c
+++ b/mcp/corosync.c
@@ -603,7 +603,6 @@ read_config(void)
         /* What a cluster fsck, eventually we need to mandate /one/ */
         set_daemon_option("debugfile", logging_logfile);
         set_daemon_option("DEBUGLOG", logging_logfile);
-        set_daemon_option("LOGFILE", logging_logfile);
         have_log = TRUE;
     } else {

> I checked with logging to_syslog enabled and disabled.
>
> From corosync.conf:
> ....
> logging {
>     fileline: on
>     function_name: on
>     to_stderr: on
>     to_logfile: on
>     to_syslog: off
>     syslog_facility: local6
>     logfile: /opt/ha/var/log/corosync.log
>     debug: off
>     logfile_priority: error
>     syslog_priority: error
>     tags: enter|leave|trace
>     timestamp: on
> }
> ....
>
> From syslog:
> ....
> Sep 26 08:39:20 [8670] crmd: info: te_rsc_command: Initiating action 5: stop zone_zd-sol-s25_stop_0 on zd-sol-s1
> Sep 26 08:39:26 [8670] crmd: info: te_rsc_command: Initiating action 6: start zone_zd-sol-s25_start_0 on zd-sol-s2 (local)
> zpool(zone_zd-sol-s25)[848]: 2012/09/26_08:39:26 INFO: zpool pool1 apparently exported
> zpool(zone_zd-sol-s25)[848]: 2012/09/26_08:39:26 INFO: zpool pool1 apparently exported
> zpool(zone_zd-sol-s25)[848]: 2012/09/26_08:39:36 INFO: zpool pool1 imported
> zpool(zone_zd-sol-s25)[848]: 2012/09/26_08:39:36 INFO: zpool pool1 imported
> zpool(zone_zd-sol-s25)[848]: 2012/09/26_08:39:47 INFO: zone zd-sol-s25 booted
> zpool(zone_zd-sol-s25)[848]: 2012/09/26_08:39:47 INFO: zone zd-sol-s25 booted
> Sep 26 08:39:48 [8670] crmd: info: services_os_action_execute: Managed zpool_meta-data_0 process 1928 exited with rc=0
> Sep 26 08:39:48 [8670] crmd: notice: process_lrm_event: LRM operation zone_zd-sol-s25_start_0 (call=855, rc=0, cib-update=617, confirmed=true) ok
> ....
>
> Excerpt from the script:
> .....
> zpool_start() {
>     zpool_monitor; rc=$?
>     if [ $rc = $OCF_SUCCESS ]; then
>         ocf_log err "zpool ${OCF_RESKEY_state} already running."
>         return $OCF_ERR_GENERIC
>     fi
>     if [ $rc = $OCF_NOT_RUNNING ]; then
>         ocf_log info "zpool ${OCF_RESKEY_zpoolname} apparently exported"
>         sudo zpool import ${OCF_RESKEY_zpoolname}; rc=$?
>         if [ $rc != $OCF_SUCCESS ]; then
>             ocf_log err "zpool import ${OCF_RESKEY_zpoolname} returns ${rc}"
>             return $OCF_ERR_GENERIC
>         fi
>         sudo touch ${OCF_RESKEY_state}
>         zpool_monitor; rc=$?
>         if [ $rc != $OCF_SUCCESS ]; then
>             ocf_log err "monitor returns ${rc}"
>             return $OCF_ERR_GENERIC
>         else
>             ocf_log info "zpool ${OCF_RESKEY_zpoolname} imported"
> .....
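The duplicated INFO lines above match the diagnosis earlier in the thread: when $HA_LOGFILE and $HA_DEBUGLOG both name the same file, a logger that writes to each destination emits every message twice. A minimal sketch of that behavior (demo_ha_log is a hypothetical stand-in, not the real ocf-shellfuncs implementation):

```shell
#!/bin/sh
# Hypothetical sketch: a log helper that appends to both the log file
# and the debug file. With both variables pointing at the same file,
# every message lands in it twice -- matching the duplicates above.
HA_LOGFILE=/tmp/demo-cluster.log
HA_DEBUGLOG=/tmp/demo-cluster.log   # same file as HA_LOGFILE -> duplicates

demo_ha_log() {
    [ -n "$HA_LOGFILE" ]  && echo "$*" >> "$HA_LOGFILE"
    [ -n "$HA_DEBUGLOG" ] && echo "$*" >> "$HA_DEBUGLOG"
}

: > "$HA_LOGFILE"                    # start with an empty log
demo_ha_log "INFO: zpool pool1 imported"
grep -c "zpool pool1 imported" "$HA_LOGFILE"   # prints 2: written twice
```

With the DEBUGLOG/LOGFILE duplication removed (as in the mcp/corosync.c diff above), each message is written only once.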
>
> Andreas
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
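For reference, the probe-act-reprobe pattern in the script excerpt above can be reduced to a self-contained sketch. All names here (demo_start, demo_monitor, STATE_FILE) are hypothetical; `touch` stands in for `zpool import`, and the already-running branch returns an error as the excerpt does:

```shell
#!/bin/sh
# Sketch of the OCF start pattern from the excerpt: run monitor first,
# act only when the resource is not running, then re-run monitor to
# confirm the action took effect. Return codes follow OCF conventions.
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

STATE_FILE=/tmp/demo-zpool.state     # hypothetical state marker

demo_monitor() {
    [ -f "$STATE_FILE" ] && return $OCF_SUCCESS
    return $OCF_NOT_RUNNING
}

demo_start() {
    demo_monitor; rc=$?
    if [ $rc -eq $OCF_SUCCESS ]; then
        echo "already running" >&2    # the excerpt logs an error here
        return $OCF_ERR_GENERIC
    fi
    if [ $rc -eq $OCF_NOT_RUNNING ]; then
        touch "$STATE_FILE"           # stands in for 'zpool import'
        demo_monitor; rc=$?
        [ $rc -ne $OCF_SUCCESS ] && return $OCF_ERR_GENERIC
        return $OCF_SUCCESS
    fi
    return $OCF_ERR_GENERIC           # unexpected monitor result
}

rm -f "$STATE_FILE"
demo_start
echo "start rc=$?"                    # prints: start rc=0
```

Re-running monitor after the import is what produces the second "imported" probe in the logs; the duplicate lines themselves come from the logging configuration, not from the script calling ocf_log twice.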