On 9/25/12 1:07 PM, Ryan Stone wrote:
When ng_snd_item tries to send an item, it has to acquire a "lock" on the
node that it's sending to (I put "lock" in quotes because this isn't a
standard FreeBSD synchronization primitive like an rwlock;
netgraph has rolled its own synchronization primitive using atomic
ops).  Netgraph's node locks are non-blocking (presumably to
prevent deadlocks from cyclic graphs), so if
ng_acquire_read/ng_acquire_write fails to get a lock of the proper
type, it queues the item for processing by one of the ngthreads.
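To make the non-blocking behavior concrete, here is a toy sketch of that style of locking in C11 atomics. All names (node_flags, acquire_read, etc.) are hypothetical; the real implementation lives in ng_base.c and is more involved, but the key property is the same: a failed acquire returns immediately so the caller can queue the item instead of blocking.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model: one word holds a writer bit plus a reader count.
 * Writers are exclusive; readers share.  A failed acquire never
 * blocks -- the caller is expected to queue the item. */
#define WRITER_ACTIVE 0x1u
#define READER_ONE    0x2u      /* reader count lives above the writer bit */

static atomic_uint node_flags;

/* Try to become a reader: succeed only if no writer is active. */
static bool acquire_read(void)
{
    unsigned cur = atomic_load(&node_flags);
    for (;;) {
        if (cur & WRITER_ACTIVE)
            return false;       /* caller must queue the item */
        if (atomic_compare_exchange_weak(&node_flags, &cur,
            cur + READER_ONE))
            return true;
    }
}

/* Try to become the writer: succeed only if the node is idle. */
static bool acquire_write(void)
{
    unsigned expect = 0;
    return atomic_compare_exchange_strong(&node_flags, &expect,
        WRITER_ACTIVE);
}

static void release_read(void)  { atomic_fetch_sub(&node_flags, READER_ONE); }
static void release_write(void) { atomic_fetch_and(&node_flags, ~WRITER_ACTIVE); }
```

Because neither acquire ever spins waiting for the other side to finish, a cycle of nodes delivering to each other can't deadlock on these locks; the losing path just queues.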

When it hits this code path, item->depth is not updated.  This means
that after the message has been delivered by an ngthread calling
ng_apply_item, the following code does not pass the return value of
the rcvmsg/rcvdata method to the apply callback (note that depth must
be 1 for the error to be passed through):

         /* Apply callback. */
         if (apply != NULL) {
                 if (depth == 1 && error != 0)
                         apply->error = error;
                 if (refcount_release(&apply->refs))
                         (*apply->apply)(apply->context, apply->error);
         }
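The failure mode is easy to model in isolation. Below is a minimal sketch (hypothetical `apply_ctx` type and `finish_apply` name, not the real ng_base.c structures) showing that when the item was queued without depth being set, the `depth == 1` guard silently discards the rcvmsg error:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the apply-callback tail of ng_apply_item.  The real
 * check is: if (depth == 1 && error != 0) apply->error = error; */
struct apply_ctx {
    int refs;
    int error;   /* what the callback ultimately reports, e.g. to ngctl */
};

static void finish_apply(struct apply_ctx *apply, int depth, int error)
{
    if (apply == NULL)
        return;
    if (depth == 1 && error != 0)
        apply->error = error;
    /* (refcount release and callback invocation elided) */
}
```

With depth left at 0 by the queueing path, a nonzero error from the node's rcvmsg method never reaches `apply->error`; forcing depth to 1 at enqueue time (as in the patch below) makes it propagate.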

In the case that I saw, ngctl sent a message to a node and the item
got enqueued.  The node's rcvmsg method returned an error, but the
error didn't propagate back to ngctl because the depth wasn't set
right.  The following patch resolves this particular issue for me but
I don't entirely buy that it's the correct fix.  I don't really
understand why the depth should be forced to 1 when the item is
enqueued, but it does match what's already done in another code path
in ng_snd_item where an item is queued:
I believe (across much time) that
the depth is a 'nested call depth', designed to stop recursive packet paths within netgraph from using up infinite stack.  Imagine, if you will, a packet that routes from node a to node b to node c and then back to node a.....

When you queue the packet you are in effect resetting the depth, as you have returned from all the calling frames and the new handler of the packet is starting with a fresh stack.
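That reasoning can be sketched in a few lines. This is a toy illustration with hypothetical names (`deliver`, `MAX_DEPTH`), not the real netgraph code: direct delivery adds a stack frame so depth grows, while queueing hands the item to an ngthread on a fresh stack, so depth restarts at 1.

```c
#include <assert.h>

/* Hypothetical recursion guard: bound how deep direct (same-stack)
 * delivery may nest before the kernel stack is at risk. */
#define MAX_DEPTH 32

struct item { int depth; };

static int deliver(struct item *it, int queued)
{
    if (queued) {
        /* Dequeued by an ngthread: every frame that produced the old
         * depth has returned, so start counting again from 1. */
        it->depth = 1;
    } else {
        /* Direct recursive delivery: one more frame on the same stack. */
        if (++it->depth > MAX_DEPTH)
            return -1;   /* would risk exhausting the kernel stack */
    }
    return 0;
}
```

A cycle a -> b -> c -> a delivered directly keeps incrementing depth until the guard trips; the same cycle broken up by queueing never does, because each requeue resets the count.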

It's been a while since I was doing any netgraph work so I may be out of date with some details.

I will try to remember to read the code tonight to refresh my memory as to what is going on.
As I say, several people have rewritten bits of this since I was in it.



Index: ng_base.c
===================================================================
--- ng_base.c   (revision 240923)
+++ ng_base.c   (working copy)
@@ -2008,6 +2008,7 @@
                 NGI_SET_WRITER(item);
         else
                 NGI_SET_READER(item);
+       item->depth = 1;

         NG_QUEUE_LOCK(ngq);
         /* Set OP_PENDING flag and enqueue the item. */
@@ -2286,7 +2287,6 @@
         }

         if (queue) {
-               item->depth = 1;
                 /* Put it on the queue for that node*/
                 ng_queue_rw(node, item, rw);
                 return ((flags & NG_PROGRESS) ? EINPROGRESS : 0);


If any netgraph experts have thoughts on this please chime in as I'm
really not that confident in my understanding of the subtleties in
netgraph.
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


