Stefan Farfeleder wrote:
> Imagine this scenario where CPU 0 inserts a knote kn1 (the marker) in
> knote_scan and CPU 1 kn2 in kqueue_enqueue:
>
> CPU 0                           | CPU 1
> --------------------------------+-------------------------------
> kn1->kn_tqe.tqe_next = NULL;    |
> --------------------------------+-------------------------------
> kn1->kn_tqe.tqe_prev =          | kn2->kn_tqe.tqe_next = NULL;
>     kq_head.tqh_last;           |
> --------------------------------+-------------------------------
> *kq_head.tqh_last = kn1;        | kn2->kn_tqe.tqe_prev =
>                                 |     kq_head.tqh_last;
> --------------------------------+-------------------------------
> kq_head.tqh_last =              | *kq_head.tqh_last = kn2;
>     &kn1->kn_tqe.tqe_next;      |
> --------------------------------+-------------------------------
>                                 | kq_head.tqh_last =
>                                 |     &kn2->kn_tqe.tqe_next;
>
> The marker would never appear on the queue.
The macro is broken.  Here is a fix.

#define TAILQ_INSERT_TAIL(head, elm, field) do {		\
	void **savepp;						\
	(elm)->field.tqe_next = NULL;				\
	(elm)->field.tqe_prev = (head)->tqh_last;		\
	savepp = (head)->tqh_last;				\
	(head)->tqh_last = &(elm)->field.tqe_next;		\
	(elm)->field.tqe_prev = savepp;				\
	*savepp = (elm);					\
} while (0)

I'm not sure that it's possible to totally close the race; I think the
way you would have to do it is to traverse the tqh_last pointer until
tqh_last->field.tqe_next == NULL before referencing it (ugly).

The failure mode with the race is an inverted insertion order, which I
think, in this case, is not a problem.  The double setting of the prev
pointer guarantees it has a "harmless" value for a traversal during the
insertion; if two insertions occur simultaneously, the queue acts as if
neither had occurred until both sets of data are valid.

This is doable in Intel assembly, I think (CMPXCHGL).  Ugh.  On other
architectures (e.g. MIPS), there's no way to avoid a mutex.

I hate tailq's.  8-(.

-- Terry