On Thu, Dec 15, 2011 at 01:00:00PM -0800, Jesse Gross wrote:
> On Thu, Dec 15, 2011 at 12:34 PM, Ben Pfaff <b...@nicira.com> wrote:
> That's true; I don't think the whole worker thread concept is all
> that great overall anyway.
> 
> > A workaround would be to call synchronize_rcu() and send the genl
> > reply from some context that doesn't hold genl_lock, but that's hardly
> elegant.  Also it means that the real reply would come after the one
> > generated automatically by af_netlink.c when NLM_F_ACK is used, which
> > violates the normal rules.
> 
> Isn't that almost exactly the same as sending the message from the RCU
> callback (including the Netlink ordering issue)?

In the "send from RCU callback" case, I intended that the normal
Netlink reply would be sent synchronously just after deletion, just as
it is now.  The ordering would therefore stay just as it is now.  Only
the broadcast reply would be deferred.

The "use synchronize_rcu() on the side then send the reply" case would
change the message ordering.

So far I'm not seeing an option I like.

Could we just use the existing spinlock in sw_flow plus the 'dead'
variable to ensure that no packets go through a deleted flow after
it's deleted?  On the delete side, take the spinlock and set 'dead'.
On the fast path, take the spinlock and check 'dead' before executing
the actions, release the spinlock after executing the actions.  We
already have to take the spinlock for every packet anyway to update
the stats, so it's not an additional cost.  I doubt that there's any
parallel work for a given flow anyway (that would imply packet
reordering).  We would have to avoid recursive locking somehow.
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev