David Woodhouse <dw...@infradead.org> writes:

> On Wed, 2019-04-10 at 15:42 +0200, Toke Høiland-Jørgensen wrote:
>> > > That doesn't seem to make much difference at all; it's still dropping a
>> > > lot of packets because ptr_ring_produce() is returning non-zero.
>> > 
>> > 
>> > I think you need to try to stop the queue just in this case? Ideally we
>> > may want to stop the queue when the queue is about to be full, but we
>> > don't have such a helper currently.
>
> I don't quite understand. If the ring isn't full after I've put a
> packet into it... how can it be full subsequently? We can't end up in
> tun_net_xmit() concurrently, right? I'm not (knowingly) using XDP.
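
For reference, the pattern I think Jason is getting at would look
roughly like this in tun_net_xmit() (a sketch only; the ptr_ring_full()
re-check, the txq variable from netdev_get_tx_queue(), and the matching
netif_tx_wake_queue() on the consumer side are my assumptions about how
it would slot in, not tested code):

	if (ptr_ring_produce(&tfile->tx_ring, skb)) {
		/* Ring was already full; drop and tell the stack to
		 * back off before it sends us anything more. */
		netif_tx_stop_queue(txq);
		goto drop;
	}

	/*
	 * Stop before the *next* packet arrives if this produce just
	 * filled the ring; the consumer (tun_do_read()) would then
	 * wake the queue with netif_tx_wake_queue() after it pops an
	 * entry.
	 */
	if (ptr_ring_full(&tfile->tx_ring))
		netif_tx_stop_queue(txq);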
>
>> Ideally we want to react when the queue starts building rather than when
>> it starts getting full; by pushing back on upper layers (or, if
>> forwarding, dropping packets to signal congestion).
>
> This is precisely what my first accidental if (!ptr_ring_empty())
> variant was doing, right? :)

Yeah, I guess. But maybe a bit too aggressively? How are you processing
packets on the dequeue side (for crypto)? One at a time, or is there
some kind of batching in play?
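
(Something like the below is what I imagine that accidental variant
looked like, i.e. stopping the queue whenever the ring is non-empty;
just my guess at your diff, since right after a produce the ring is
never empty, this stops the queue after every single packet:)

	if (!ptr_ring_empty(&tfile->tx_ring))
		netif_tx_stop_queue(txq);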

>> In practice, this means tuning the TX ring to the *minimum* size it can
>> be without starving (this is basically what BQL does for Ethernet), and
>> keeping packets queued in the qdisc layer instead, where it can be
>> managed...
>
> I was going to add BQL (as $SUBJECT may have caused you to infer) but
> trivially adding the netdev_sent_queue() in tun_net_xmit() and
> netdev_completed_queue() for xdp vs. skb in tun_do_read() was tripping
> the BUG in dql_completed(). I just ripped that part out and focused on
> the queue stop/start and haven't gone back to it yet.
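
For the record, the pairing BQL expects is roughly this (a sketch of
where the calls would go, with my assumptions about placement; note
that netdev_sent_queue()/netdev_completed_queue() operate on txq 0, so
a multi-queue tun would need the netdev_tx_*_queue() variants on the
right queue):

	/* In tun_net_xmit(), once the packet is committed to the ring: */
	netdev_sent_queue(dev, skb->len);

	/* In tun_do_read(), once an entry has actually been consumed;
	 * XDP frames and skbs carry their length differently: */
	if (tun_is_xdp_frame(ptr)) {
		struct xdp_frame *xdpf = tun_ptr_to_xdp(ptr);

		netdev_completed_queue(tun->dev, 1, xdpf->len);
	} else {
		struct sk_buff *skb = ptr;

		netdev_completed_queue(tun->dev, 1, skb->len);
	}

The BUG in dql_completed() typically fires when more bytes are
completed than were ever reported as sent (or the same packet is
completed twice), so that's the first thing I'd check in the xdp
vs. skb paths.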

Right, makes sense. What qdisc are you running on the tun device? Also,
I assume that netperf is running on the same host that has the tun
device on it, right?

-Toke
