On Mon, Jul 18, 2016 at 01:39:02PM +0200, Tom Herbert wrote:
> On Mon, Jul 18, 2016 at 11:10 AM, Thomas Graf <tg...@suug.ch> wrote:
> > On 07/15/16 at 10:49am, Tom Herbert wrote:
[...]
> >> To me, an XDP program is just another attribute of an RX queue, it's
> >> really not special. We already have a very good infrastructure for
> >> managing multiqueue, and pretty much everything in the receive path
> >> operates at the queue level, not the device level -- we should follow
> >> that model.
> >
> > I agree with that, but I would like to keep the current per-net_device
> > atomic properties.
>
> I don't see that there are any synchronization guarantees when using
> xchg. For instance, if the pointer is set right after being read by a
> thread for one queue and right before being read by a thread for
> another queue, this could result in the old and new program running
> concurrently, or the old one running after the new. If we need to
> synchronize the operation across all queues, then the sequence
> ifdown, modify-config, ifup will work.

The case you mentioned is a valid criticism. The reason I wanted to keep
this fast xchg around is that the full stop/start operation on mlx4 means
a second or more of downtime. I think something like the following should
suffice to make a clean cut between programs without bringing the whole
port down, buffers and all:
{
	struct bpf_prog *old_prog;
	bool port_up;
	int i;

	mutex_lock(&mdev->state_lock);

	/* Quiesce the RX path: mark the port down and wait for any
	 * in-flight NAPI poll on each RX ring to finish.
	 */
	port_up = priv->port_up;
	priv->port_up = false;
	for (i = 0; i < priv->rx_ring_num; i++)
		napi_synchronize(&priv->rx_cq[i]->napi);

	/* No pollers are running now, so swap in the new program and
	 * drop the reference on the old one.
	 */
	old_prog = xchg(&priv->prog, prog);
	if (old_prog)
		bpf_prog_put(old_prog);

	priv->port_up = port_up;
	mutex_unlock(&mdev->state_lock);
}

Thoughts?

>
> Tom
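
[For reference, a minimal sketch of the RX poll side the snippet above
relies on (not the actual mlx4 code; napi_poll_sketch and more_rx_work()
are made-up names): the program pointer is loaded once per NAPI poll and
the poll bails out while port_up is false, so once napi_synchronize()
returns no poller can still be running the old program.]

/* Sketch only, under the assumptions above -- not the real
 * mlx4_en_process_rx_cq().  'prog' is loaded exactly once per poll;
 * a poll that saw the old pointer finishes with it, and
 * napi_synchronize() in the update path waits for those polls.
 */
static int napi_poll_sketch(struct mlx4_en_priv *priv,
			    struct mlx4_en_cq *cq, int budget)
{
	struct bpf_prog *prog = READ_ONCE(priv->prog);
	int done = 0;

	if (!priv->port_up)
		return 0;	/* quiesced by the update path above */

	while (done < budget && more_rx_work(cq)) {	/* hypothetical helper */
		/* run 'prog' (if set) on the frame, then build the skb
		 * and hand it up the stack as usual
		 */
		done++;
	}
	return done;
}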