On Fri, 2006-23-06 at 13:35 +1000, Herbert Xu wrote:
> On Thu, Jun 22, 2006 at 08:52:17PM -0400, jamal wrote:
> >
> > It does feel like the qdisc_is_running though is now a replacement
> > for the need for dev->txlock which existed to protect multi-cpus from
> > entering the device transmit path.
On Thu, Jun 22, 2006 at 08:52:17PM -0400, jamal wrote:
>
> It does feel like the qdisc_is_running though is now a replacement
> for the need for dev->txlock which existed to protect multi-cpus from
> entering the device transmit path. Is that an unintended side effect?
> i.e. why would dev->txlock be
On Fri, 2006-23-06 at 08:43 +1000, Herbert Xu wrote:
> Sure. However, I still don't see the point of transmitting in parallel
> even there. The reason is that there is no work being done here by the
> CPU between dequeueing the packet and obtaining the TX lock.
You make a reasonable argument.
On Thu, Jun 22, 2006 at 03:31:22PM -0400, jamal wrote:
>
> Your gut feeling is for #1 and my worry is for #2 ;->
> I actually think your change is obviously valuable for scenarios where
> the bus is slower and therefore transmits take longer - my feeling is it
> may not be beneficial for fast buses
On Wed, 2006-21-06 at 09:52 +1000, Herbert Xu wrote:
> Well my gut feeling is that multiple qdisc_run's on the same dev can't
> be good for performance. The reason is that SMP is only good when the
> CPUs work on different tasks. If you get two or more CPUs to work on
> qdisc_run at the same time
On Tue, Jun 20, 2006 at 10:42:06AM -0400, jamal wrote:
>
> I apologize for hand-waving with % numbers above and using gut feeling
> instead of experimental facts - I don't have time to chase it. I have
> CCed Robert who may have time to see if this impacts forwarding
> performance for one. I will h
Herbert,
Thanks for your patience.
On Tue, 2006-20-06 at 08:33 +1000, Herbert Xu wrote:
> First of all you could receive an IRQ in between dropping xmit_lock
> and regaining the queue lock.
Indeed you could. Sorry, I overlooked that in my earlier email. This
issue has been there forever, though.
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Mon, 19 Jun 2006 22:15:19 +1000
> [NET]: Prevent multiple qdisc runs
I have no real objection to this semantically.
But this is yet another atomic operation on the transmit
path :-( This problem, however, is inevitable because of
how we do things and thus isn't the fault of your change.
On Mon, Jun 19, 2006 at 11:57:19PM -0700, David Miller wrote:
>
> But this is yet another atomic operation on the transmit
> path :-( This problem, however, is inevitable because of
> how we do things and thus isn't the fault of your change.
>
> I'm going to apply this patch to 2.6.18, however..
On Mon, Jun 19, 2006 at 10:36:50AM -0400, jamal wrote:
>
> Ok, but:
> The queue lock will ensure only one of the qdisc runs (assuming
> different CPUs) will be able to dequeue at any one iota in time, no?
> And if you assume that the cpu that manages to get the tx lock as well
> is going to be con
On Tue, 2006-20-06 at 00:29 +1000, Herbert Xu wrote:
> Correct. When qdisc_run happens we take an skb off the head of the
> queue. If it can't be transmitted right away, we try to put it back
> in the same spot.
>
> If you have two qdisc_run's happening at the same time then that spot
> could b
On Mon, Jun 19, 2006 at 10:23:29AM -0400, jamal wrote:
>
> Ok, I am trying to visualize but having a hard time:
> Re-queueing is done at the front of the queue to maintain ordering
> whereas queueing is done at the back (i.e. it is a FIFO), i.e.
> even if p2 comes in and gets queued while p1 is bei
Herbert,
On Mon, 2006-19-06 at 23:42 +1000, Herbert Xu wrote:
> Hi Jamal:
>
> On Mon, Jun 19, 2006 at 09:33:51AM -0400, jamal wrote:
[..]
>
> Actually I discovered the problem only because the generic segmentation
> offload stuff that I'm working on needs to deal with the situation where
> a sup
Hi Jamal:
On Mon, Jun 19, 2006 at 09:33:51AM -0400, jamal wrote:
>
> I take it you saw a lot of requeues happening that prompted this? What
> were the circumstances? The _only_ times i have seen it happen is when
> the (PCI) bus couldn't handle the incoming rate or there was a bug in the
> driver.
> I'm nearly done with the generic segmentation offload stuff (although
> only TCPv4 is implemented for now), and I encountered this problem.
>
> [NET]: Prevent multiple qdisc runs
>
Hi Dave:
I'm nearly done with the generic segmentation offload stuff (although
only TCPv4 is implemented for now), and I encountered this problem.
[NET]: Prevent multiple qdisc runs
Having two or more qdisc_run's contend against each other is bad because
it can induce packet reordering