Hi,

For a simple demonstration I've set up a two-node cluster (both KVM
guests) and configured STONITH to interact with the KVM hypervisor. My
config:

[root@node2 ~]# crm configure show
node node1.testnet.lan
node node2.testnet.lan
primitive fence_node1 stonith:fence_virsh \
        params action="reboot" ipaddr="192.168.122.1" login="root" \
        passwd="123qwe" port="node1.testnet.lan" verbose="true" \
        meta target-role="Started"
primitive fence_node2 stonith:fence_virsh \
        params action="reboot" ipaddr="192.168.122.1" login="root" \
        passwd="123qwe" port="node2.testnet.lan" verbose="true" \
        meta target-role="Started"
location loc_fench_node1 fence_node1 -inf: node1.testnet.lan
location loc_fench_node2 fence_node2 -inf: node2.testnet.lan
property $id="cib-bootstrap-options" \
        dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        stonith-enabled="true" \
        last-lrm-refresh="1338974996"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

Now, according to the fence_virsh RA info, the 'port' parameter should
hold the name of the guest on the hypervisor. In my first attempt the
guest's name in virt-manager was 'pacemaker-1', and fencing didn't
work. It would only work once the guest's KVM name was the same as the
cluster node name (#uname).

I don't think this is supposed to happen, but perhaps I'm wrong.
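In case it matters: the way I understand it, the cluster node name can
be decoupled from the hypervisor guest name with the pcmk_host_map
stonith attribute, along these lines (an untested sketch only; the
guest name 'pacemaker-1' is the one from my first attempt):

        primitive fence_node1 stonith:fence_virsh \
                params action="reboot" ipaddr="192.168.122.1" login="root" \
                passwd="123qwe" port="pacemaker-1" verbose="true" \
                pcmk_host_map="node1.testnet.lan:pacemaker-1" \
                meta target-role="Started"

With that mapping in place, pacemaker should translate the uname to the
guest name before calling the agent, so the two would no longer have to
match.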

There is also a strange behavior that I can't explain: if I ssh-copy-id
the public keys of both pacemaker nodes to the hypervisor machine,
fencing no longer works, even if I specify the path to the key in the
identity_file parameter and/or leave out the password.
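For what it's worth, when key-based auth misbehaves it can help to run
the agent by hand, outside pacemaker (a hypothetical invocation reusing
the values from the config above; note that identity_file should point
at the private key, not the .pub file copied by ssh-copy-id):

        fence_virsh -a 192.168.122.1 -l root -k /root/.ssh/id_rsa \
                -n node1.testnet.lan -o status

If this prompts for a passphrase or fails, the same presumably happens
when pacemaker invokes the agent.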


Kind regards,

Léon

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems