On Fri, 16 Mar 2007, Aniruddha Bohra wrote:

> Robert Watson wrote:

>> I can't speak to the details of the above, but I can speak generally about the link layer input path and locking. There is no assumption that the caller will provide serialization; fully concurrent input from multiple threads is supported. The reason drivers drop their locks is that the network stack frequently holds locks over calls to driver output routines. As a result, driver locks tend to follow network stack locks in the lock order -- at least for drivers that have a single lock covering both send and receive paths (quite common). The driver must therefore drop its lock before calling into the stack, or it risks a lock order reversal.
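As a concrete sketch of that pattern -- the driver name, softc fields, and FOO_LOCK macros below are hypothetical, but the drop-the-lock-around-if_input shape matches what most drivers with a single send/receive lock do:

```c
/*
 * Hypothetical driver receive path.  The driver lock is dropped around
 * the call up into the stack so that stack locks are never acquired
 * while the driver lock is held, which would invert the usual
 * stack-lock-before-driver-lock order.
 */
static void
foo_rxeof(struct foo_softc *sc)
{
        struct ifnet *ifp = sc->foo_ifp;
        struct mbuf *m;

        FOO_LOCK_ASSERT(sc);
        while ((m = foo_dequeue_rx(sc)) != NULL) {
                FOO_UNLOCK(sc);           /* avoid lock order reversal */
                (*ifp->if_input)(ifp, m); /* no driver locks held here */
                FOO_LOCK(sc);
        }
}
```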

> So, if I have a queue shared between ether_input() and another thread, I need to ensure mutual exclusion. In such scenarios, should spinlocks or default mutexes be used?

I'm not sure I completely understand the scenario you are suggesting -- could you be specific? Normally, from the perspective of a device driver author, you simply drop your driver-specific locks and call ifp->if_input() to hand the mbuf chain up to the link layer. No locks need to be held around the call to the input routine.

On the other hand, if you are modifying the link layer itself (i.e., hooking into it in one of a number of ways), this means that you must provide any synchronization necessary to make your code operate correctly in the presence of concurrency. Many instances of ether_input() may run at the same time on various CPUs -- typically one per input source, since they run in the ithread of the device driver. In general, you should use default mutexes in preference to spin mutexes unless you know your code will run in the fast interrupt path (rather than the ithread path). Universally, the network stack assumes it will not run in the fast interrupt path, so unless you're doing something quite low level and special involving a device driver, default mutexes (or rwlocks if you need shared locking) are the right way to go.
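For the queue scenario asked about above, a default mutex protecting the shared queue is enough; since ether_input() runs in ithread context, blocking briefly on an MTX_DEF mutex is fine. A minimal sketch (all names here -- the queue, its entry functions, and the hook itself -- are hypothetical):

```c
/*
 * A queue shared between a link-layer input hook and a consumer
 * thread, protected by a default (MTX_DEF) mutex.  Concurrent
 * hook_input() calls from multiple ithreads serialize on hookq_mtx.
 */
static struct mtx hookq_mtx;
static struct ifqueue hookq;

MTX_SYSINIT(hookq_mtx, &hookq_mtx, "hook input queue", MTX_DEF);

/* Runs in driver ithread context; may execute concurrently on CPUs. */
static void
hook_input(struct mbuf *m)
{
        mtx_lock(&hookq_mtx);           /* bounded sleep: OK in an ithread */
        _IF_ENQUEUE(&hookq, m);
        mtx_unlock(&hookq_mtx);
        /* wake the consumer thread here, e.g. via wakeup() or a cv */
}
```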

> The driver locks themselves are usually MTX_DEF, whereas netgraph, for example (the scenario above), uses a spinlock. Is there a rule of thumb -- for example, never use blocking locks in the network interrupt path?

I am not sure why Netgraph uses spin locks -- it probably shouldn't be doing so. In the kernel we have several different notions of "sleeping", and unfortunately the terminology is not entirely clear. The best way I've found to explain it is using the term "bounded". Sleeping associated with mutexes and rwlocks is bounded sleeping, whereas sleeping associated with condition variables, wait channels, and sx locks is unbounded sleeping. This distinction is important because you don't want, for example, an interrupt thread performing an unbounded sleep waiting on something that may not happen for a very long (unbounded) period of time, such as waiting for keyboard input or disk I/O to return. If you run with INVARIANTS and WITNESS, a debugging kernel should warn you if you try to acquire the wrong type of lock in the wrong context. A locking(9) man page talking about some of the selection choices was recently added to 7-CURRENT, FYI.
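The bounded/unbounded distinction can be illustrated with a producer in ithread context handing work to a kernel thread that is allowed to sleep. The names below are hypothetical; the rules are the ones described above:

```c
/*
 * Bounded vs. unbounded sleeps, sketched.  Acquiring an MTX_DEF mutex
 * is a bounded sleep and is permitted in an ithread; waiting on a
 * condition variable is an unbounded sleep and is not.
 */
static struct mtx work_mtx;     /* MTX_DEF: bounded, OK in ithreads */
static struct cv  work_cv;      /* waiting on it: unbounded sleep */
static STAILQ_HEAD(, work_item) work_list =
    STAILQ_HEAD_INITIALIZER(work_list);

/* ithread context: bounded sleeping only. */
static void
input_path(struct work_item *wi)
{
        mtx_lock(&work_mtx);
        STAILQ_INSERT_TAIL(&work_list, wi, wi_link);
        cv_signal(&work_cv);    /* signalling never sleeps */
        mtx_unlock(&work_mtx);
}

/* kthread context: unbounded sleeping is fine here. */
static void
worker_thread(void *arg)
{
        struct work_item *wi;

        mtx_lock(&work_mtx);
        for (;;) {
                while ((wi = STAILQ_FIRST(&work_list)) == NULL)
                        cv_wait(&work_cv, &work_mtx);  /* unbounded */
                STAILQ_REMOVE_HEAD(&work_list, wi_link);
                /* drop work_mtx while processing wi, then reacquire */
        }
}
```

With WITNESS enabled, doing the cv_wait() from the ithread side would be flagged, which is exactly the kind of misuse the debugging kernel catches.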

Robert N M Watson
Computer Laboratory
University of Cambridge
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net