On Sun, Jul 10, 2016 at 06:25:40PM +0300, Tariq Toukan wrote:
> 
> On 09/07/2016 10:58 PM, Saeed Mahameed wrote:
> >On Fri, Jul 8, 2016 at 5:15 AM, Brenden Blanco <bbla...@plumgrid.com> wrote:
> >>+               /* A bpf program gets first chance to drop the packet. It may
> >>+                * read bytes but not past the end of the frag.
> >>+                */
> >>+               if (prog) {
> >>+                       struct xdp_buff xdp;
> >>+                       dma_addr_t dma;
> >>+                       u32 act;
> >>+
> >>+                       dma = be64_to_cpu(rx_desc->data[0].addr);
> >>+                       dma_sync_single_for_cpu(priv->ddev, dma,
> >>+                                               priv->frag_info[0].frag_size,
> >>+                                               DMA_FROM_DEVICE);
> >In case of XDP_PASS we will dma_sync again in the normal path. This
> >can be improved by doing the dma_sync as early as possible, once and
> >for all, regardless of the path the packet is going to take
> >(XDP_DROP/mlx4_en_complete_rx_desc/mlx4_en_rx_skb).
> I agree with Saeed: dma_sync is a heavy operation, and it is now done
> twice for every packet that takes XDP_PASS.
> We should try our best to avoid performance degradation in the flow
> of unfiltered packets.
Makes sense. Do folks here see a way to do this cleanly?
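
Something along these lines is what I picture (untested sketch only: 'dma'
moves out of the if-block, 'frags', 'length' and the 'next' label come from
the surrounding mlx4_en_process_rx_cq() code that isn't in this hunk, and
mlx4_en_complete_rx_desc()/mlx4_en_rx_skb() would have to learn not to sync
frag 0 a second time):

		/* Sketch, not tested: sync the whole first fragment once,
		 * up front, so both the XDP program and the regular SKB
		 * path can read it without another dma_sync_single_for_cpu().
		 * The completion helpers would then skip their own sync for
		 * frag 0.
		 */
		dma = be64_to_cpu(rx_desc->data[0].addr);
		dma_sync_single_for_cpu(priv->ddev, dma,
					priv->frag_info[0].frag_size,
					DMA_FROM_DEVICE);

		/* A bpf program gets first chance to drop the packet. It may
		 * read bytes but not past the end of the frag.
		 */
		if (prog) {
			struct xdp_buff xdp;
			u32 act;

			xdp.data = page_address(frags[0].page) +
				   frags[0].page_offset;
			xdp.data_end = xdp.data + length;

			act = bpf_prog_run_xdp(prog, &xdp);
			if (act != XDP_PASS)
				goto next;	/* drop: recycle descriptor */
		}

That would keep a single sync per packet regardless of the XDP verdict,
which I think is what you are both suggesting, but I haven't measured it.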
> 
> Regards,
> Tariq
