On Sat, 2006-01-07 at 01:45 +0200, Thomas Graf wrote:
> * jamal <[EMAIL PROTECTED]> 2006-06-30 17:31
> > Better to explain the reason for ifb first:
> > ifb exists initially as a replacement for IMQ. 
> > 1) qdiscs/policies that are per device as opposed to system wide.
> > This now allows for sharing.
> > 
> > 2) Allows for queueing incoming traffic for shaping instead of
> > dropping.
> > 
> > In other words, the main use is for multiple devices to redirect to it.
> > The main desire is not for it to redirect to any other ifb device or
> > eth devices. I actually tried to get it to do that, but ran into
> > issues of complexity and decided to drop instead of killing the
> > machine.
> > Other than that, it can redirect to any other device - but that may
> > still not be meaningful.
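
To make the typical use concrete: each real device gets an ingress
filter with a mirred action redirecting to a single ifb, and the qdisc
attached to that ifb then shapes the combined traffic. Roughly, the
hand-off looks like the sketch below - this is just the idea, not the
actual act_mirred code, and the helper name is made up:

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/pkt_cls.h>

/* Sketch only: the essence of a mirred redirect into an ifb device.
 * A classifier on the real device matched, so we clone the skb, mark
 * in tc_verd where it was grabbed (ingress or egress), point it at
 * the ifb and re-inject it via dev_queue_xmit(). The qdisc attached
 * to the ifb shapes it before it goes back out.
 */
static int redirect_to_ifb_sketch(struct sk_buff *skb,
                                  struct net_device *ifb_dev,
                                  int at_ingress)
{
        struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);

        if (skb2 == NULL)
                return TC_ACT_SHOT;     /* out of memory: drop */

        /* record where the packet was grabbed; the ifb checks this */
        skb2->tc_verd = SET_TC_FROM(skb2->tc_verd,
                                    at_ingress ? AT_INGRESS : AT_EGRESS);
        skb2->dev = ifb_dev;

        dev_queue_xmit(skb2);           /* hand it to the ifb */
        return TC_ACT_STOLEN;
}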
> 
> Last time I'm asking.

Then please listen carefully - because if you read my message with
reasonable openness, the answer is there.

>  Why are packets dropped? 

When looping happens, the only sane thing to do is to drop packets.

I mentioned this was a very tricky thing to achieve in all cases since
there are many behaviors and netdevices that have to be considered. As
an example, I ended up not submitting the code for looping from egress
to ingress because I felt it needed more testing. And I am certain I
have missed some other device combinations.
Loops could happen within ingress or egress. They mostly happen because
of mirred or ifb, and some I only discovered via testing.

> You mentioned tc_verd is set to 0 leading to an invalid from verdict. 

Yes, for ifb.

> Fact is that the from verdict is set to a meaningful value again at 
> dev_queue_xmit() or ing_filter() so ifb_xmit() only sees valid values. 

OK, Thomas, this is one of those places where we end up colliding for
no good reason - your statement above is accusatory. And sometimes I
say three things and you pick one and use that as the whole context.
The ifb does clear the field, so if the packet gets redirected again,
from will be 0.

> tx locks of individual ifb devices are independent, why would it deadlock? 

Different instances of the same device do not deadlock against each
other. If, however, you have a chain of devices redirecting
A->B->C->D->A, then you have a possible deadlock. In the case of ifb I
disallowed that because the problem was obvious from testing.
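
To spell it out, here is the kind of chain I mean (hypothetical,
device names made up):

/*
 *   dev_queue_xmit(A)             takes A's tx lock
 *     action on A redirects to B
 *       dev_queue_xmit(B)         takes B's tx lock
 *         action on B redirects to C
 *           dev_queue_xmit(C)     takes C's tx lock
 *             action on C redirects back to A
 *               dev_queue_xmit(A) tries to take A's tx lock again,
 *                                 which this CPU already holds: deadlock
 */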

> Where is the packet exactly dropped?
> 

For the case above: in the ifb, when packets come in with from == 0.
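
Roughly, on the ifb side (simplified, not line-for-line from ifb.c):

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/pkt_cls.h>

/* Simplified sketch of the check in ifb's xmit path: if the skb does
 * not carry a valid "redirected from" mark in tc_verd - either it was
 * never redirected, or it already went through an ifb which cleared
 * the mark - we cannot sanely re-inject it, so we drop it rather than
 * risk bouncing it around forever.
 */
static int ifb_xmit_sketch(struct sk_buff *skb, struct net_device *dev)
{
        u32 from = G_TC_FROM(skb->tc_verd);

        if (from == 0) {
                dev_kfree_skb(skb);     /* no valid origin: drop it */
                return 0;
        }

        skb->tc_verd = 0;       /* clear it so a second redirect is caught */

        /* ... queue the skb; the ifb tasklet re-injects it later ... */
        return 0;
}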

> > I have been thinking of Herbert's qdisc_is_running change and this
> > may help actually.
> 
> Help on what?

It would help with the deadlocks. With that patch it becomes impossible
to contend for the tx lock twice, so a second redirect will end up just
enqueueing.
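
Roughly, the pattern I mean (from memory of Herbert's patch, so the
exact names are approximate):

#include <linux/netdevice.h>
#include <net/pkt_sched.h>

/* Sketch of the qdisc_is_running idea: one bit in dev->state guards
 * the dequeue loop, so a re-entrant caller - e.g. a second redirect
 * arriving while we are already transmitting on this device - simply
 * enqueues its skb and returns instead of contending for the tx lock
 * a second time.
 */
static inline void qdisc_run_sketch(struct net_device *dev)
{
        if (!netif_queue_stopped(dev) &&
            !test_and_set_bit(__LINK_STATE_QDISC_RUNNING, &dev->state))
                __qdisc_run(dev);       /* dequeue loop; bit cleared on exit */
}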

cheers,
jamal
