I mentioned in yesterday's OVN meeting that I was finally at the point where I could start testing Neutron security groups with OVN ACLs, but hadn't done so before the meeting. I worked on it yesterday afternoon, and here's what happened.
I'm using OVN from Justin's ovn-acl branch, and my security-groups branch for the Neutron plugin (networking-ovn).

This test is on Ubuntu 14.04. I do most development on Fedora 22, but the kernel modules didn't build there (kernel too new). I used Ubuntu 14.04 because I thought I remembered that being something Justin had tested with. I'd actually like to clarify that point, so let me know what you all are using successfully. I was going to switch to CentOS 7 next.

The test environment is a simple single-hypervisor environment. I created a Neutron port and booted a VM using this port. From there I can add/remove/update security groups on this port and see what happens to the traffic. With no security group, things work as usual: I can ping and ssh to the VM.

I have a security group named "default", which has the default rules plus two other rules that are supposed to allow me to ping and ssh to the VM:

> $ neutron security-group-list
> +--------------------------------------+---------+----------------------------------------------------------------------+
> | id                                   | name    | security_group_rules                                                 |
> +--------------------------------------+---------+----------------------------------------------------------------------+
> | 2bdabbcf-cde5-47fb-8a16-f5772545accc | default | egress, IPv4                                                         |
> |                                      |         | egress, IPv6                                                         |
> |                                      |         | ingress, IPv4, 22/tcp, remote_ip_prefix: 0.0.0.0/0                   |
> |                                      |         | ingress, IPv4, icmp, remote_ip_prefix: 0.0.0.0/0                     |
> |                                      |         | ingress, IPv4, remote_group_id: 2bdabbcf-cde5-47fb-8a16-f5772545accc |
> |                                      |         | ingress, IPv6, remote_group_id: 2bdabbcf-cde5-47fb-8a16-f5772545accc |
> +--------------------------------------+---------+----------------------------------------------------------------------+

The egress rules allow all outbound traffic from the VM, the icmp and 22/tcp ingress rules allow ping and ssh from any source, and the remote_group_id rules allow all traffic from other ports in the same security group.
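As a rough illustration of the translation my plugin branch is doing, here's a simplified Python sketch. This is NOT the actual networking-ovn code; the function name and the rule-dict layout are invented for illustration, but the output is meant to mirror the ovn-nbctl acl-list output shown below.

```python
# Illustrative sketch only: shows how one Neutron security-group rule
# seems to map to an OVN ACL (direction, priority, match, action).

def rule_to_acl(port_id, rule):
    """Translate one security-group rule dict into an OVN ACL tuple."""
    ingress = rule["direction"] == "ingress"
    # Ingress rules filter traffic delivered to the port (to-lport);
    # egress rules filter traffic sent from it (from-lport).
    direction = "to-lport" if ingress else "from-lport"
    port_field = "outport" if ingress else "inport"
    match = ['%s == "%s"' % (port_field, port_id)]
    match.append({"IPv4": "ip4", "IPv6": "ip6"}[rule["ethertype"]])
    if rule.get("remote_ip_prefix"):
        match.append("ip4.dst == %s" % rule["remote_ip_prefix"])
    if rule.get("protocol") == "icmp":
        match.append("icmp4")
    elif rule.get("protocol") == "tcp":
        match.append("tcp && tcp.dst >= %(port_min)d && tcp.dst <= %(port_max)d"
                     % rule)
    # Allowed traffic is committed to conntrack so replies get through;
    # a lower-priority (1001) drop-everything-else ACL is added separately.
    return (direction, 1002, " && ".join(match), "allow-related")

ssh_rule = {"direction": "ingress", "ethertype": "IPv4", "protocol": "tcp",
            "remote_ip_prefix": "0.0.0.0/0", "port_min": 22, "port_max": 22}
print(rule_to_acl("fbda6395-71fe-4eb5-abda-531bddf479ba", ssh_rule)[2])
```

The printed match string should line up with the 22/tcp to-lport ACL in the acl-list output below.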
With that configuration, here are the OVN ACLs, the logical flow table rows for the ACLs, and the OpenFlow flows for the egress pipeline ACLs:

> $ ovn-nbctl acl-list e7936efd-1cf3-4198-bff5-cd0abf1164c8
> from-lport 1002 (inport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip4) allow-related
> from-lport 1002 (inport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip6) allow-related
> from-lport 1001 (inport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip) drop
>   to-lport 1002 (outport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip4 && ip4.dst == 0.0.0.0/0 && icmp4) allow-related
>   to-lport 1002 (outport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip4 && ip4.dst == 0.0.0.0/0 && tcp && tcp.dst >= 22 && tcp.dst <= 22) allow-related
>   to-lport 1001 (outport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip) drop

> $ ovn-sbctl lflow-list
> Datapath: adec5d12-20a8-4a33-9de9-0fed21109118  Pipeline: ingress
>   table=1( pre_acl), priority=100, match=(ip), action=(ct_next;)
>   table=1( pre_acl), priority=  0, match=(1), action=(next;)
>   table=2(     acl), priority=65535, match=(!ct.est && ct.rel && !ct.new && !ct.inv), action=(next;)
>   table=2(     acl), priority=65535, match=(ct.est && !ct.rel && !ct.new && !ct.inv), action=(next;)
>   table=2(     acl), priority=65535, match=(ct.inv), action=(drop;)
>   table=2(     acl), priority=1002, match=(ct.new && inport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip4), action=(ct_commit; next;)
>   table=2(     acl), priority=1002, match=(ct.new && inport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip6), action=(ct_commit; next;)
>   table=2(     acl), priority=1001, match=(inport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip), action=(drop;)
>   table=2(     acl), priority=  0, match=(1), action=(next;)
>
> Datapath: adec5d12-20a8-4a33-9de9-0fed21109118  Pipeline: egress
>   table=0( pre_acl), priority=100, match=(ip), action=(ct_next;)
>   table=0( pre_acl), priority=  0, match=(1), action=(next;)
>   table=1(     acl), priority=65535, match=(!ct.est && ct.rel && !ct.new && !ct.inv), action=(next;)
>   table=1(     acl), priority=65535, match=(ct.est && !ct.rel && !ct.new && !ct.inv), action=(next;)
>   table=1(     acl), priority=65535, match=(ct.inv), action=(drop;)
>   table=1(     acl), priority=1002, match=(ct.new && outport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip4 && ip4.dst == 0.0.0.0/0 && icmp4), action=(ct_commit; next;)
>   table=1(     acl), priority=1002, match=(ct.new && outport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip4 && ip4.dst == 0.0.0.0/0 && tcp && tcp.dst >= 22 && tcp.dst <= 22), action=(ct_commit; next;)
>   table=1(     acl), priority=1001, match=(outport == "fbda6395-71fe-4eb5-abda-531bddf479ba" && ip), action=(drop;)
>   table=1(     acl), priority=  0, match=(1), action=(next;)

And the OpenFlow flows for the egress pipeline ACLs:

> cookie=0x0, duration=31.955s, table=48, n_packets=15, n_bytes=1650, priority=100,ipv6,metadata=0x1 actions=ct(recirc,next_table=49,zone_reg=NXM_NX_REG5[])
> cookie=0x0, duration=31.952s, table=48, n_packets=2, n_bytes=196, priority=100,ip,metadata=0x1 actions=ct(recirc,next_table=49,zone_reg=NXM_NX_REG5[])
> cookie=0x0, duration=31.955s, table=48, n_packets=2, n_bytes=84, priority=0,metadata=0x1 actions=resubmit(,49)
> cookie=0x0, duration=31.955s, table=49, n_packets=0, n_bytes=0, priority=65535,ct_state=-new+est-rel-inv+trk,metadata=0x1 actions=resubmit(,50)
> cookie=0x0, duration=31.952s, table=49, n_packets=0, n_bytes=0, priority=65535,ct_state=-new-est+rel-inv+trk,metadata=0x1 actions=resubmit(,50)
> cookie=0x0, duration=31.952s, table=49, n_packets=1, n_bytes=98, priority=65535,ct_state=+inv+trk,metadata=0x1 actions=drop
> cookie=0x0, duration=31.955s, table=49, n_packets=0, n_bytes=0, priority=1002,ct_state=+new+trk,tcp,reg7=0x4,metadata=0x1,tp_dst=22 actions=ct(commit,zone_reg=NXM_NX_REG5[]),resubmit(,50)
> cookie=0x0, duration=31.952s, table=49, n_packets=1, n_bytes=98, priority=1002,ct_state=+new+trk,icmp,reg7=0x4,metadata=0x1 actions=ct(commit,zone_reg=NXM_NX_REG5[]),resubmit(,50)
> cookie=0x0, duration=31.955s, table=49, n_packets=5, n_bytes=550, priority=1001,ipv6,reg7=0x4,metadata=0x1 actions=drop
> cookie=0x0, duration=31.952s, table=49, n_packets=0, n_bytes=0, priority=1001,ip,reg7=0x4,metadata=0x1 actions=drop
> cookie=0x0, duration=31.955s, table=49, n_packets=12, n_bytes=1184, priority=0,metadata=0x1 actions=resubmit(,50)

When I send a single ping from the hypervisor to the VM, I get no response. Looking at the flows, it seems the ping makes it to the VM, but the response is dropped by an ACL flow as invalid. After the ping, it shows up in conntrack like so:

> $ sudo conntrack -L
> ...
> icmp 1 21 src=172.24.4.1 dst=10.0.0.4 type=8 code=0 id=9730 src=10.0.0.4 dst=172.24.4.1 type=0 code=0 id=9730 mark=0 zone=3 use=1
> icmp 1 21 src=172.24.4.1 dst=10.0.0.4 type=8 code=0 id=9730 [UNREPLIED] src=10.0.0.4 dst=172.24.4.1 type=0 code=0 id=9730 mark=0 use=1

The flows that I believe show what happened are:

> cookie=0x0, duration=31.952s, table=48, n_packets=2, n_bytes=196, priority=100,ip,metadata=0x1 actions=ct(recirc,next_table=49,zone_reg=NXM_NX_REG5[])
>
> cookie=0x0, duration=31.952s, table=49, n_packets=1, n_bytes=98, priority=65535,ct_state=+inv+trk,metadata=0x1 actions=drop
>
> cookie=0x0, duration=31.952s, table=49, n_packets=1, n_bytes=98, priority=1002,ct_state=+new+trk,icmp,reg7=0x4,metadata=0x1 actions=ct(commit,zone_reg=NXM_NX_REG5[]),resubmit(,50)

I did a similar test using ssh. After the ssh attempt, conntrack shows:

> $ sudo conntrack -L
> tcp 6 27 SYN_RECV src=172.24.4.1 dst=10.0.0.4 sport=47784 dport=22 src=10.0.0.4 dst=172.24.4.1 sport=22 dport=47784 mark=0 zone=3 use=1
> tcp 6 87 SYN_SENT src=172.24.4.1 dst=10.0.0.4 sport=47784 dport=22 [UNREPLIED] src=10.0.0.4 dst=172.24.4.1 sport=22 dport=47784 mark=0 use=1

It looks like the SYN makes it to the VM, but the return traffic is dropped as invalid. Any thoughts on this?
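To make sure I'm reading the ACL stage correctly (the ct_state flows have the same structure in the ingress and egress pipelines), here's a toy model of the priority ordering. It's just a sketch with hand-built flag sets, not real conntrack, and it only models the icmp allow rule, but it shows why a packet that conntrack reports as invalid is dropped at priority 65535 and never reaches the priority-1002 allow-related flow:

```python
# Toy model of the "acl" logical flow table above.  `ct` is a set of
# conntrack state flags for the packet, as OVS would report them,
# e.g. {"new"}, {"est"}, or {"inv"}.  Not a real conntrack simulation.

def acl_stage(ct, proto="icmp"):
    """Return the action the acl table takes on a tracked packet."""
    flows = [
        # priority 65535: pass related/established, drop invalid
        (65535, lambda: "rel" in ct and not ({"est", "new", "inv"} & ct), "next"),
        (65535, lambda: "est" in ct and not ({"rel", "new", "inv"} & ct), "next"),
        (65535, lambda: "inv" in ct, "drop"),
        # priority 1002: commit new connections that an SG rule allows
        # (only the icmp allow rule is modeled here)
        (1002, lambda: "new" in ct and proto == "icmp", "ct_commit; next"),
        # priority 1001: drop everything else addressed to the port
        (1001, lambda: True, "drop"),
    ]
    for _prio, match, action in flows:  # already sorted by priority
        if match():
            return action
    return "next"  # priority-0 catch-all

print(acl_stage({"new"}))  # -> 'ct_commit; next' (first ping out is committed)
print(acl_stage({"est"}))  # -> 'next' (a properly tracked reply would pass)
print(acl_stage({"inv"}))  # -> 'drop' (what the reply is actually hitting)
```

In other words, the reply never even gets evaluated against the allow-related flows; the +inv drop wins first, which matches the n_packets=1 on the ct_state=+inv+trk flow.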
Is there anything obviously wrong with what I've done? Are there more details I should gather?

Thanks!

--
Russell Bryant

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev