On 7/2/16 10:17 AM, Xiao Liang wrote:
Hi Tom,
Thanks for your test and comments! Please see inline.
On Sat, Jul 2, 2016 at 1:34 AM, Thomas F Herbert
<thomasfherb...@gmail.com> wrote:
Xiao,
Thanks. This is nice work and it advances the ball.
It needs to be rebased to the latest master; I got some fuzz applying the patch.
It fails make check on a few tests: 403, 405, 1091, 1117, 1119, and 2092.
A primary problem is that I forgot to modify odp_flow_from_string,
which causes flow parsing to fail in the tests. I'm checking whether there
are other bugs and will try to fix them.
I tested the patch with my kernel module patch and it works. I am passing
double-tagged traffic through your patch now.
I like your approach of using an array to encode the vlans in the flow.
It seems to fail appctl trace; see http://pastebin.com/fn4AKL7q. I think that when
it parses the packet representation, it is confusing the dl_type with the
outer TPID.
I think the dl_type resembles eth_type in OpenFlow, which is defined as the
"Ethernet type of the OpenFlow packet payload, after VLAN tags". In the
implementation, it should be the innermost ethertype that the switch
can recognise. Suppose we support two VLAN headers: an IPv4 packet
with one or two VLAN headers will have dl_type 0x0800, and an IPv4
packet with more than two VLAN headers will have dl_type 0x8100 or
0x88a8. It seems we can't match outer TPIDs with OpenFlow.
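To illustrate, assuming FLOW_MAX_VLAN_HEADERS is 2 (these frame layouts are
just examples, not taken from the patch):

    dst/src | 0x0800 IPv4 ...                               -> dl_type 0x0800
    dst/src | 0x8100 tag | 0x0800 IPv4 ...                  -> dl_type 0x0800
    dst/src | 0x88a8 tag | 0x8100 tag | 0x0800 IPv4 ...     -> dl_type 0x0800
    dst/src | 0x88a8 tag | 0x8100 tag | 0x8100 tag | ...    -> dl_type 0x8100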
I believe that the OF spec says the match should always be on the outer
vlan. Currently OVS matches the inner vlan only because of shortcuts in
the code.
The flow probably should be:
in_port=2,vlan_tci=0x1064,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00
in_port=1,vlan_tci=0x13e7,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00
The dl_type doesn't matter in this case.
Ben Pfaff and I had a discussion at OVSCON 2015 about getting rid of the
implied vlan detection on the CFI bit and moving to always detecting
vlans by TPID. This would allow a vlan with a zero TCI. I am sure he will
weigh in on this.
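Roughly, the difference is between something like the following (a standalone
sketch with made-up function names, not current OVS code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    #define VLAN_CFI 0x1000

    /* Implied detection: a tag is "present" only when the CFI bit is set in
     * the TCI stored in the flow, so a tag with TCI == 0 cannot be
     * represented. */
    static bool
    vlan_present_by_cfi(uint16_t tci)           /* network byte order */
    {
        return (tci & htons(VLAN_CFI)) != 0;
    }

    /* TPID-based detection: a tag is present whenever the ethertype is a
     * VLAN TPID (0x8100 or 0x88a8), so a zero TCI is still a valid tag. */
    static bool
    vlan_present_by_tpid(uint16_t tpid)         /* network byte order */
    {
        return tpid == htons(0x8100) || tpid == htons(0x88a8);
    }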
I have a few other comments inline below.
It may also need some cleanup here and there, and there may be some
whitespace introduced by the patch.
I didn't fully understand the double-tagged tunnel type and didn't
immediately have a way to test that.
--Tom
Add new port VLAN mode "dot1q-tunnel":
- Example:
ovs-vsctl set Port p1 vlan_mode=dot1q-tunnel tag=100
Don't you mean a dot1ad tunnel? Below you show cvlans.
Here I followed the naming convention of the Cisco CLI "switchport mode
dot1q-tunnel". Q-in-Q means dot1Q-in-dot1Q, and it's simply two
802.1q headers in an 802.1ad frame (see
https://en.wikipedia.org/wiki/IEEE_802.1ad). I don't have a strong
preference for the name. It can be changed if it's too confusing.
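For what it's worth, a fuller setup might look like the following, assuming the
mode behaves like Cisco's dot1q-tunnel ports (the configured tag is pushed as
the outer tag on ingress and popped on egress, preserving any inner customer
tag); the port names and tag values here are arbitrary:

    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 p1 -- set Port p1 vlan_mode=dot1q-tunnel tag=100
    ovs-vsctl add-port br0 p2 -- set Port p2 vlan_mode=dot1q-tunnel tag=200
    # trunk0 faces the provider network and carries the double-tagged frames
    ovs-vsctl add-port br0 trunk0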
+    struct vlan_hdr vlan[FLOW_MAX_VLAN_HEADERS]; /* 802.1q VLAN headers */
Proper terminology would be just "VLAN headers". When both vlans are present,
that is 802.1ad, not 802.1q.
As explained above. I will remove the "802.1q" because it's confusing.
I think encoding the vlans as an array is a good idea.
It is interesting that you put the vlan headers before the dl_type, in the
order they appear in the packet. I am concerned that the inner dl_type is not
in the same 64-bit word as the vlan_tci in the case of a single vlan. I have
more commentary on this below.
Please see below.
--- a/lib/flow.c
+++ b/lib/flow.c
@@ -72,8 +72,6 @@ const uint8_t flow_segment_u64s[4] = {
/* miniflow_extract() assumes the following to be true to optimize the
* extraction process. */
-ASSERT_SEQUENTIAL_SAME_WORD(dl_type, vlan_tci);
-
This bothers me. There is legacy behavior that does many matches on
Ethertype+tci. I struggled with this as well in my version of the patch,
which encoded both TPIDs. I am concerned that this patch for double-tagged
vlans may substantially decrease performance for legacy single-tagged vlans.
In my latest patch, when I encoded a single vlan I put it in the inner
position in the flow, even though that contradicted the idea that the
flow fields should be in the order they were found in the packet. Even
though you use an array for encoding the vlans, both are always encoded
in the flow. We have to think about where the single vlan should be encoded.
This is my understanding (not very sure, please correct me if wrong):
the performance optimization is only relevant to miniflow_extract and
expand. This assertion is used by miniflow_extract so that it can push
dl_type and vlan_tci sequentially without any padding. There's no
problem with functionality as long as the flow and miniflow structures are
in sync (see the added miniflow_pad_to_64). There is a performance
impact because more data is pushed into the miniflow, but it shouldn't be
too much.
Well, we don't know about the performance impact. There could be a
significant impact on the DPDK netdev. When matches miss in the EMC, they go
to the miniflow. This could cause the layer 2 flow to drop out of cache.
It is hard to verify this without testing. The SAME_WORD test was done
to protect against this.
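To make the layout concern concrete, here is a toy standalone sketch (not the
actual struct flow; the field names, padding, and offsets are illustrative
only):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define FLOW_MAX_VLAN_HEADERS 2

    struct vlan_hdr {
        uint16_t tpid;
        uint16_t tci;
    };

    /* Toy stand-in for the L2 part of struct flow, just to show where the
     * fields fall relative to 64-bit word boundaries. */
    struct l2_sketch {
        uint8_t  dl_dst[6];
        uint8_t  dl_src[6];
        struct vlan_hdr vlan[FLOW_MAX_VLAN_HEADERS];
        uint16_t dl_type;
        uint16_t pad;              /* keep the struct a multiple of 64 bits */
    };

    int
    main(void)
    {
        /* With the VLAN array in between, dl_type no longer shares a 64-bit
         * word with vlan[0].tci, which is the property the removed
         * ASSERT_SEQUENTIAL_SAME_WORD(dl_type, vlan_tci) used to guarantee
         * for the old single vlan_tci field. */
        assert(offsetof(struct l2_sketch, vlan[0].tci) / 8 == 1); /* word 1 */
        assert(offsetof(struct l2_sketch, dl_type) / 8 == 2);     /* word 2 */
        return 0;
    }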
+ vlan_hdrs[encaps].tpid = qp->eth_type;
+ vlan_hdrs[encaps].tci = qp->tci | htons(VLAN_CFI);
+ eth_type = *datap;
Here a single-tagged vlan will go into the first vlan slot instead of the one
adjacent to the encapsulated dl_type. My supposition is untested, but
I think they should be encoded in the same 64-bit word for performance.
Please see above. And I think putting a single VLAN header at index 0 could
simplify the logic in many places (like VLAN matching and pop/push).
+    while (encaps < FLOW_MAX_VLAN_HEADERS && eth_type_vlan(flow->dl_type)) {
This might be a good idea because it allows support for legacy packets that
use the 0x8100 ethertype for both the inner and outer vlans.
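For reference, my reading of the extraction logic being discussed is roughly
the following standalone sketch (parse_vlans and is_vlan_tpid are made-up
names; this is not the actual miniflow_extract() code):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    #define FLOW_MAX_VLAN_HEADERS 2
    #define VLAN_CFI 0x1000

    struct vlan_hdr {
        uint16_t tpid;
        uint16_t tci;
    };

    static int
    is_vlan_tpid(uint16_t eth_type)              /* network byte order */
    {
        return eth_type == htons(0x8100) || eth_type == htons(0x88a8);
    }

    /* Walk up to FLOW_MAX_VLAN_HEADERS tags in packet order, storing a
     * single tag in vlan[0], and return the innermost ethertype reached
     * (the dl_type).  On entry *datap points at the first ethertype. */
    static uint16_t
    parse_vlans(const uint8_t **datap, size_t *sizep,
                struct vlan_hdr vlan[FLOW_MAX_VLAN_HEADERS])
    {
        uint16_t eth_type;
        int encaps = 0;

        memset(vlan, 0, FLOW_MAX_VLAN_HEADERS * sizeof *vlan);
        if (*sizep < sizeof eth_type) {
            return 0;
        }
        memcpy(&eth_type, *datap, sizeof eth_type);
        while (encaps < FLOW_MAX_VLAN_HEADERS && is_vlan_tpid(eth_type)
               && *sizep >= sizeof(struct vlan_hdr) + sizeof eth_type) {
            uint16_t tci;

            memcpy(&tci, *datap + sizeof eth_type, sizeof tci);
            vlan[encaps].tpid = eth_type;
            vlan[encaps].tci = tci | htons(VLAN_CFI);   /* mark tag present */
            encaps++;

            *datap += sizeof(struct vlan_hdr);
            *sizep -= sizeof(struct vlan_hdr);
            memcpy(&eth_type, *datap, sizeof eth_type);
        }
        return eth_type;
    }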
+ if (!eth_type_vlan(htons(ethertype))) {
/* XXX ETH_TYPE_VLAN_8021AD case isn't supported */
This comment needs to be removed.
Forgot to remove it, thanks.
Thanks,
Xiao