Krishna Kumar2 wrote:
What about a race between trying to reacquire queue_lock and another
failed transmit?
That is not possible either. I hold the QDISC_RUNNING bit in dev->state and
am the only sender for this device, so there is no other failed transmit.
Also, on failure of dev_hard_start_xmit,
Right, but I am the sole dequeuer, and on failure I requeue those packets
to the beginning of the queue (just as would happen in the regular case of
a one-packet xmit/failure/requeue).
- KK
Krishna Kumar2 wrote:
I haven't seen packet reordering (I did once, when I had a bug in the
requeue code: some TCP messages on the receiver indicated packets out of
order). When a send fails, the packets are requeued in reverse (go to the
end of the failed skb list and traverse back to the failed skb ...
> If you have braindead slow hardware,
> there is nothing that says your start_xmit routine can't do its own
> coalescing. The cost of calling the transmit routine is the
> responsibility of the driver, not the core network code.
Yes, except you very likely run the risk of the driver introducing ...
David Miller wrote:
Right.
But I think it's critical to do two things:
1) Do this when netif_wake_queue() is triggered and thus the
   TX is locked already.
2) Have some way for the driver to say how many free TX slots
   there are, in order to minimize if not eliminate requeueing
   during this ...
David Miller wrote:
From: Rick Jones <[EMAIL PROTECTED]>
Date: Thu, 10 May 2007 13:49:44 -0700
I'd think one would only do this in those situations/places where a
natural "out of driver" queue develops in the first place wouldn't
one?
Indeed.
And one builds in qdisc because your device sink...
David Miller wrote:
If the qdisc is packed with packets and we would just loop sending
them to the device, yes it might make sense.
But if that isn't the case, which frankly is the usual case, you add a
non-trivial amount of latency by batching, and that's bad exactly for
the kind of application ...
For example:
my biggest challenge with the e1000 was just hacking up the DMA setup
path - I seem to get better numbers if I don't kick the DMA until I stash
all the packets on the ring first, etc. It seemed counter-intuitive.
That seems to make sense. The rings are(?) in system memory and you ca...
jamal wrote:
You would need to almost re-write the driver to make sure it does IO
which is taking advantage of the batching.
Really! It's just the transmit routine. How radical can that be?
--
Gagan
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message t...
small packets belonging to the same connection could be coalesced by
TCP, but this may help the case where multiple parallel connections are
sending small packets.
It's not just small packets. The cost of calling hard_start_xmit per byte
was rather high on your particular device. I've seen PCI rea...
I like the counting semaphore idea.
--
Gagan
Michael Chan wrote:
On Wed, 2007-04-04 at 13:34 -0700, Gagan Arneja wrote:
Can't this BUG_ON be hit very easily:

static void tg3_irq_quiesce(struct tg3 *tp)
{
        BUG_ON(tp->irq_sync);
        ...
}

tg3_reset_task could easily be racing with another thread that calls
tg3_full_lock(tp, 1), e.g. tg3_change_mtu. Maybe I'm missing something
obvious.
--
Gagan