> What do you mean by "a VirtualDomain KVM machine which doesn't
> support shutdown" ?
The VM doesn't have acpid installed, so it can't react to an ACPI shutdown request.
> What version of the resource/cluster agents are you using?
3.9.2
thanks
___
Hi.
I have an installation with a VirtualDomain KVM machine which doesn't support
shutdown, so it has to be destroyed.
From what I can see, VirtualDomain should first issue a shutdown and, after the
timeout, destroy the VM, but I'm using Pacemaker 1.1.6 and I found this thread:
http://www.gossamer-thre
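For reference, here is a minimal sketch of the kind of resource definition
involved. The resource name and config path are made up, and force_stop is the
VirtualDomain parameter I'd expect to use to force a "destroy" on stop; please
check that your agents version actually has it (crm ra info
ocf:heartbeat:VirtualDomain) before relying on it:

primitive p_vm ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/myvm.xml" \
               hypervisor="qemu:///system" \
               force_stop="true" \
        op start timeout="120s" interval="0" \
        op stop timeout="180s" interval="0" \
        op monitor interval="30s" timeout="60s"

With force_stop set, the stop action should skip the graceful shutdown attempt
entirely, which avoids waiting out the timeout for a guest that has no acpid
anyway.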
> Firewall:slave cannot run on a different node. I assume you are
> trying to have the DRBD masters on the same node? That should be
> something like:
>
> colocation FirewallDiskWithWork inf: Firewall:Master Config:Master
That's it! Thanks.
___
Hi.
I have a simple config with Corosync 1.4.2 on Ubuntu 12.04 which is causing
me some trouble.
I'm handling two DRBD master/slave resources. The problem is that one of the
two is not activated on the slave node, causing it to remain out of sync!
The setup is very simple (follows), the Confi
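Since the pasted config got cut off, here is a hedged reconstruction of the
relevant part; the primitive names (drbd_firewall, drbd_config) and the meta
attributes are my assumptions, while the ms names and the colocation of the
two masters are the ones discussed in the reply above:

ms Firewall drbd_firewall \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"
ms Config drbd_config \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"
colocation FirewallDiskWithWork inf: Firewall:Master Config:Master

The colocation ties the two Master roles to the same node; a constraint that
instead pins Firewall:Slave somewhere else would keep one of the secondaries
from starting, which seems to match the out-of-sync symptom.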
> I don't mean in the cluster config, I mean actual processes.
No, definitely not. But for these resources I have problems because they're
detected as "too active". vsftpd is an upstart script, winbind an "old" init
script.
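For context, a hedged sketch of what such definitions might look like (names
and intervals are made up, and whether the upstart class is available depends
on how your Pacemaker was built). "Too active" usually means a probe or
monitor reported the service running on a node where the cluster did not
start it, which with old init scripts is often a status action that exits 0
even when the daemon is stopped:

primitive p_vsftpd upstart:vsftpd \
        op monitor interval="30s"
primitive p_winbind lsb:winbind \
        op monitor interval="30s"

If lsb:winbind misbehaves, it is worth checking that "/etc/init.d/winbind
status" returns a proper non-zero exit code when the service is stopped
before blaming the cluster.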
___
> So yes to adding them directly to a group?
> Is it possible they were already running on the other machine before
> you added them?
I can't remember exactly, but most likely I had set them to Stopped or
removed them from the crm entirely. So, no...
___
> Did you put it in the group or just add it to the configuration?
> Did you add any colocation constraints for them?
Actually no, since from what I read in the documentation all resources in a
group run on the same node.
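As far as I understand, a group is essentially shorthand for colocating its
members and ordering them one after another. A rough sketch with made-up
names:

group g_all p_fs p_ip p_service

behaves roughly like:

colocation ip-with-fs inf: p_ip p_fs
colocation service-with-ip inf: p_service p_ip
order fs-then-ip inf: p_fs p_ip
order ip-then-service inf: p_ip p_service

So adding the stray resources to the group should both place them on the same
node and start them in a defined order.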
___
> Some parts of the config:
More readable config here:
http://pastebin.ca/2159726
___
Hi.
I've created a 2-node active/passive cluster. The HA setup manages 17
resources, among them IPs and other services.
But I have a problem with resource placement: after I add one of the last
resources, it is started on node2 instead of on node1 with all the others!
I couldn't really find a reason for this
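From the documentation I'd expect that either adding the resource to the
existing group or colocating it explicitly would be enough to keep it with
the rest; a sketch of what I mean, with made-up names:

group g_services p_ip p_vsftpd p_winbind p_newservice

or, only for placement:

colocation NewWithOthers inf: p_newservice p_ip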