On 10/12/2017 10:24 AM, Roger B Melton wrote:
> When copying VLAN tags from the RX descriptor to the vlan_tci field
> in the mbuf header, igb_rxtx.c:eth_igb_recv_pkts() and
> eth_igb_recv_scattered_pkts() both assume that the VLAN tag is always
> little endian. While i350, i354 and i350vf VLAN non-loopback
> packets are stored little endian, VLAN tags in loopback packets for
> those devices are big endian.
>
> For i350, i354 and i350vf VLAN loopback packets, swap the tag when
> copying from the RX descriptor to the mbuf header. This will ensure
> that the mbuf vlan_tci is always little endian.
>
> Signed-off-by: Roger B Melton <rmel...@cisco.com>
<...>

> @@ -946,9 +954,16 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>
>  		rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
>  		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> -		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> -		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> -
> +		/*
> +		 * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> +		 * set in the pkt_flags field and must be in CPU byte order.
> +		 */
> +		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
> +		    (rxq->flags & IGB_RXQ_FLAG_LB_BSWAP_VLAN)) {

This is adding more condition checks into the Rx path. What is the
performance cost of this addition?

> +			rxm->vlan_tci = rte_be_to_cpu_16(rxd.wb.upper.vlan);
> +		} else {
> +			rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> +		}
>  		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(rxq, hlen_type_rss);
>  		pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
>  		pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);

<...>