Vladimir Oltean <olte...@gmail.com> writes:

> On Thu, Apr 01, 2021 at 01:26:02PM +0200, Toke Høiland-Jørgensen wrote:
>> > +int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
>> > +             struct xdp_frame **frames, u32 flags)
>> > +{
>> > +  struct enetc_tx_swbd xdp_redirect_arr[ENETC_MAX_SKB_FRAGS] = {0};
>> > +  struct enetc_ndev_priv *priv = netdev_priv(ndev);
>> > +  struct enetc_bdr *tx_ring;
>> > +  int xdp_tx_bd_cnt, i, k;
>> > +  int xdp_tx_frm_cnt = 0;
>> > +
>> > +  tx_ring = priv->tx_ring[smp_processor_id()];
>> 
>> What mechanism guarantees that this won't overflow the array? :)
>
> Which array, the array of TX rings?

Yes.

> You mean that it's possible to receive a TC_SETUP_QDISC_MQPRIO or
> TC_SETUP_QDISC_TAPRIO with num_tc == 1, and we have 2 CPUs?

Not just that, this ndo can be called on arbitrary CPUs after a
redirect. The code just calls through from the XDP receive path so which
CPU it ends up on depends on the RSS+IRQ config of the other device,
which may not even be the same driver; i.e., you have no control over
that... :)

> Well, yeah, I don't know what's the proper way to deal with that. Ideas?

Well, the obvious one is just:

tx_ring = priv->tx_ring[smp_processor_id() % num_ring_ids];

and then some kind of locking to deal with multiple CPUs accessing the
same TX ring...

-Toke