On Fri, Apr 11, 2025 at 05:57:42PM +0800, Wei Fang wrote:
>  static const struct enetc_pf_ops enetc4_pf_ops = {
>       .set_si_primary_mac = enetc4_pf_set_si_primary_mac,
>       .get_si_primary_mac = enetc4_pf_get_si_primary_mac,
> @@ -303,12 +489,55 @@ static void enetc4_pf_free(struct enetc_pf *pf)
>       enetc4_free_ntmp_user(pf->si);
>  }
>  
> +static void enetc4_psi_do_set_rx_mode(struct work_struct *work)
> +{
> +     struct enetc_si *si = container_of(work, struct enetc_si, rx_mode_task);
> +     struct enetc_pf *pf = enetc_si_priv(si);
> +     struct net_device *ndev = si->ndev;
> +     struct enetc_hw *hw = &si->hw;
> +     bool uc_promisc = false;
> +     bool mc_promisc = false;
> +     int type = 0;
> +
> +     if (ndev->flags & IFF_PROMISC) {
> +             uc_promisc = true;
> +             mc_promisc = true;
> +     } else if (ndev->flags & IFF_ALLMULTI) {

enetc4_psi_do_set_rx_mode() runs unlocked relative to changes made
to ndev->flags, and it reads the flags more than once, so could you at
least read them just once into a local variable, to avoid
inconsistencies between the IFF_PROMISC and IFF_ALLMULTI tests?
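
I.e. something along these lines (untested sketch; I'm guessing that
the IFF_ALLMULTI branch trimmed from the quote just sets mc_promisc):

	/* Snapshot ndev->flags once, so that both tests see the same
	 * value even if the flags change while the work item runs.
	 */
	unsigned int flags = READ_ONCE(ndev->flags);

	if (flags & IFF_PROMISC) {
		uc_promisc = true;
		mc_promisc = true;
	} else if (flags & IFF_ALLMULTI) {
		mc_promisc = true;
	}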

Speaking of running unlocked: if I'm not mistaken, this code design
might lose consecutive updates to ndev->flags, as well as to the address
lists, if queue_work() is executed while si->rx_mode_task is still
running. There is a difference between statically allocating and
continuously re-queuing the same work item, and allocating a fresh work
item for each ndo_set_rx_mode() call.
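
The latter would look roughly like below (untested sketch; the
enetc_rx_mode_work structure, the enetc4_pf_set_rx_mode() name and the
snapshotting of the address lists are made up for illustration):

	struct enetc_rx_mode_work {
		struct work_struct work;
		struct enetc_si *si;
		unsigned int flags; /* snapshot of ndev->flags at queue time */
		/* snapshots of the uc/mc address lists would go here too */
	};

	static void enetc4_psi_do_set_rx_mode(struct work_struct *work)
	{
		struct enetc_rx_mode_work *w =
			container_of(work, struct enetc_rx_mode_work, work);

		/* ... program the SI from w->flags and the snapshots ... */

		kfree(w);
	}

	static void enetc4_pf_set_rx_mode(struct net_device *ndev)
	{
		struct enetc_ndev_priv *priv = netdev_priv(ndev);
		struct enetc_rx_mode_work *w;

		/* ndo_set_rx_mode() is called under netif_addr_lock_bh(),
		 * so the allocation must not sleep
		 */
		w = kzalloc(sizeof(*w), GFP_ATOMIC);
		if (!w)
			return;

		INIT_WORK(&w->work, enetc4_psi_do_set_rx_mode);
		w->si = priv->si;
		w->flags = READ_ONCE(ndev->flags);
		schedule_work(&w->work);
	}

This way, each rx_mode event carries its own consistent snapshot, and a
second event queued while the first one is still in flight cannot be
lost.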

In practice it might be hard to trigger an actual issue, because the
call sites serialize under rtnl_lock(), which is so bulky that
si->rx_mode_task should have time to finish by the time ndo_set_rx_mode()
has a chance to be called again.

I can't tell you exactly how, but my gut feeling is that the combination
of these two things is going to be problematic.
