On Mon, Jul 18, 2016 at 03:07:01PM +0200, Tom Herbert wrote:
> On Mon, Jul 18, 2016 at 2:48 PM, Thomas Graf <tg...@suug.ch> wrote:
> > On 07/18/16 at 01:39pm, Tom Herbert wrote:
> >> On Mon, Jul 18, 2016 at 11:10 AM, Thomas Graf <tg...@suug.ch> wrote:
> >> > I agree with that but I would like to keep the current per net_device
> >> > atomic properties.
> >>
> >> I don't see that there are any synchronization guarantees
> >> using xchg. For instance, if the pointer is set right after being read
> >> by a thread for one queue and right before being read by a thread for
> >> another queue, this could result in the old and new program running
> >> concurrently, or the old one running after the new. If we need to
> >> synchronize the operation across all queues then the sequence
> >> ifdown, modify-config, ifup will work.
> >
> > Right, there are no synchronization guarantees between threads and I
> > don't think that's needed. The guarantee that is provided is that if
> > I replace a BPF program, the replace either succeeds, in which case
> > all packets have been processed by either the old or the new program,
> > or the replace failed, in which case the old program was left intact
> > and all packets are still going through the old program.
> >
> > This is a nice atomic replacement principle which would be nice to
> > preserve.
>
> Sure, if the replace operation fails then the old program should remain
> in place. But xchg can't fail, so it seems like that part is just giving
> a false sense of security that program replacement is somehow
> synchronized across queues.
Good point. We do READ_ONCE at the beginning of napi, so we can keep
processing a bunch of packets on other cpus even after the xchg is all
done. Then I guess we can have prog pointers in the rings and it only
marginally increases the race. Why not, if it doesn't increase the patch
complexity...

btw, we definitely want to avoid drain/start/stop or any slow operation
during prog xchg. When XDP is used for DoS mitigation, the prog swap
needs to be fast.