Guest's firewall is off entirely, as is SELinux.

On 06/16/2013 02:25 AM, Vladislav Bogdanov wrote:
16.06.2013 06:19, Digimer wrote:
Tried allowing everything into the host from the bridge. No luck...

I meant guest's firewall.

Tried with;

iptables -I INPUT -i virbr0 -j ACCEPT

new rules;

====
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 23:17:15 2013
*nat
:PREROUTING ACCEPT [397:94492]
:INPUT ACCEPT [34:7520]
:OUTPUT ACCEPT [709:61932]
:POSTROUTING ACCEPT [681:59876]
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
COMMIT
# Completed on Sat Jun 15 23:17:15 2013
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 23:17:15 2013
*mangle
:PREROUTING ACCEPT [52594:46854362]
:INPUT ACCEPT [38456:45153011]
:FORWARD ACCEPT [13667:1595902]
:OUTPUT ACCEPT [28298:2665665]
:POSTROUTING ACCEPT [42095:4288350]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Sat Jun 15 23:17:15 2013
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 23:17:15 2013
*filter
:INPUT ACCEPT [69:8423]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [130:12757]
-A INPUT -i virbr0 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Sat Jun 15 23:17:15 2013
====

On 06/15/2013 06:04 PM, Vladislav Bogdanov wrote:
15.06.2013 20:26, Digimer wrote:
Ah, I think it's a problem with the firewall rules on the host. Not sure
how to fix it though...

You probably need to open port 1229/tcp in the filter INPUT chain on the
virtual cluster members. That is where fence_xvm listens for the TCP
connection back from fence_virtd after it sends its multicast request,
IIRC. It's a design deficiency: only one copy of fence_xvm can run on a
system at a time, because it listens for the TCP connection on a
predefined port.
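
If that is the cause, a rule along these lines on each guest should open it. 1229/tcp is the default fence_xvm port; adjust if you configured a different one:

```shell
# Run on each virtual cluster member (guest), not on the host.
# Allows fence_virtd's TCP connection back to the listening fence_xvm.
iptables -I INPUT -p tcp -m tcp --dport 1229 -j ACCEPT
```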


lemass:/home/digimer# iptables-save
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 13:26:33 2013
*nat
:PREROUTING ACCEPT [246583:89552160]
:INPUT ACCEPT [2335:362026]
:OUTPUT ACCEPT [11740:741351]
:POSTROUTING ACCEPT [11706:738225]
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
COMMIT
# Completed on Sat Jun 15 13:26:33 2013
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 13:26:33 2013
*mangle
:PREROUTING ACCEPT [3250861:2486027770]
:INPUT ACCEPT [2557761:1301267981]
:FORWARD ACCEPT [444644:1094901100]
:OUTPUT ACCEPT [1919457:2636518995]
:POSTROUTING ACCEPT [2364615:3731498365]
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Sat Jun 15 13:26:33 2013
# Generated by iptables-save v1.4.16.2 on Sat Jun 15 13:26:33 2013
*filter
:INPUT ACCEPT [2557761:1301267981]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1919457:2636518995]
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Sat Jun 15 13:26:33 2013

digimer

On 06/15/2013 01:09 PM, Digimer wrote:
Hi all,

     I'm trying to play with Pacemaker on Fedora 19 (pre-release) and I am
having trouble getting the guests to talk to the host.

   From the host, I can run;

lemass:/home/digimer# fence_xvm -o list
pcmk1                83f6abdc-bb48-d794-4aca-13f091f32c8b on
pcmk2                2d778455-de7d-a9fa-994c-69d7b079fda8 on

I can fence the guests from the host as well. However, I cannot get the
list (or fence) from the guests;

[root@pcmk1 ~]# fence_xvm -o list
Timed out waiting for response
Operation failed

I suspect a multicast issue, but so far as I can tell, multicast is
enabled on the bridge;

lemass:/home/digimer# ifconfig
virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
           inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
           ether 52:54:00:da:90:a1  txqueuelen 0  (Ethernet)
           RX packets 103858  bytes 8514464 (8.1 MiB)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 151988  bytes 177742562 (169.5 MiB)
           TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
           inet6 fe80::fc54:ff:feed:3701  prefixlen 64  scopeid 0x20<link>
           ether fe:54:00:ed:37:01  txqueuelen 500  (Ethernet)
           RX packets 212828  bytes 880551892 (839.7 MiB)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 225430  bytes 182955760 (174.4 MiB)
           TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
           inet6 fe80::fc54:ff:fe45:e9ae  prefixlen 64  scopeid 0x20<link>
           ether fe:54:00:45:e9:ae  txqueuelen 500  (Ethernet)
           RX packets 4840  bytes 587902 (574.1 KiB)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 7495  bytes 899578 (878.4 KiB)
           TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
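
One way to confirm whether the multicast request and the TCP reply actually
cross the bridge is to watch the traffic on the host while a guest runs the
list command (the group and port below are the defaults from fence_virt.conf):

```shell
# On the host, while a guest runs `fence_xvm -o list`.
# You should see UDP packets to the multicast group, then a TCP
# connection from the host back to the guest on port 1229.
tcpdump -i virbr0 -nn 'host 239.192.214.190 or tcp port 1229'
```

If the multicast packets show up but no TCP follows, the host never saw a
valid request; if the TCP SYNs go unanswered, the guest's firewall is the
likely culprit.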

I tried specifying the mcast address and port without success.

The host's config is:

lemass:/home/digimer# cat /etc/fence_virt.conf
backends {
       libvirt {
           uri = "qemu:///system";
       }

}

listeners {
       multicast {
           port = "1229";
           family = "ipv4";
           interface = "virbr0";
           address = "239.192.214.190";
           key_file = "/etc/cluster/fence_xvm.key";
       }

}

fence_virtd {
       module_path = "/usr/lib64/fence-virt";
       backend = "libvirt";
       listener = "multicast";
}
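
For reference, a couple of host-side sanity checks, assuming Fedora's
systemd service name for the daemon:

```shell
# Confirm the daemon is running and has joined the multicast group on
# virbr0; the group should appear in the bridge's membership list.
systemctl status fence_virtd.service
ip maddr show dev virbr0 | grep 239.192.214.190
```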

The cluster forms and corosync is using multicast, so I am not sure if
mcast really is the problem.

Any tips/help?

Thanks!





_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org






--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
