On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast wrote:
>
>> Hi All,
>>
>> Sorry for the spam. I'm trying to support a meaningful TCP message latency
>> w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I […]
>> […,obl/obu=0/0) (6.396 ms/1635289683.794338)
>> [ 1] 8.00-9.00 sec 40.0 KBytes 328 Kbits/sec 10/0 0 14K/5329 us 8
>> [ 1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1 (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0)
> […] bits/sec 49.881 ms (5%) 10=10:0:0:0:0:0:0:0 0
> [ 1] 6.0002-6.0511 sec 40.0 KBytes 6.44 Mbits/sec 50.895 ms (5.1%) 10=10:0:0:0:0:0:0:0 0
> [ 1] 7.0002-7.0501 sec 40.0 KBytes 6.57 Mbits/sec 49.889 ms (5%) 10=10:0:0:0:0:0:0:0 0
> [ 1] 8.0002-8.0481 sec […]
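A minimal invocation sketch for reproducing reports like the above, assuming
iperf 2.0.14 or later (flag spellings are from memory and worth checking
against iperf --help):

iperf -s -e -i 1                        # server: enhanced (-e) interval reports
iperf -c <server> -e -i 1 --trip-times  # client: adds write-to-read latency output
# note: --trip-times assumes synchronized clocks, which the sender-side
# histograms discussed above avoid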
On Sun, 2012-08-26 at 14:36 -0700, Dave Taht wrote:
> From looking over the history of this idea, it does seem to be a good
> idea for small devices with potentially big queues.
>
> http://www.spinics.net/lists/netdev/msg176967.html
>
> That said, I do tend to agree with davem's summary in fixing […]
On Thu, 2012-08-30 at 15:59 -0700, Dave Taht wrote:
> I have finally found the source of the issues I was having with htb +
> fq_codel at low bandwidths, and it wasn't htb to the extent I thought
> it was.
>
> It was fq_codel's use of byte quantums, which was resulting in head of
> line blocking […]
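A sketch of the usual mitigation at low rates (my illustration; device and
rate are placeholders): shrink fq_codel's DRR quantum so one full-size packet
cannot hold up a whole round:

eth=eth0
tc qdisc del dev $eth root 2>/dev/null
tc qdisc add dev $eth root handle 1: htb default 1
tc class add dev $eth parent 1: classid 1:1 htb rate 1mbit
# quantum 300: a flow gets ~300 bytes per round instead of a full 1514-byte
# MTU, so small packets are not stuck behind a large packet in another flow
tc qdisc add dev $eth parent 1:1 fq_codel quantum 300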
On Thu, 2012-08-30 at 16:19 -0700, Dave Taht wrote:
> In that case it will deliver 3 acks in a row from
> stream A, and then 3 acks in stream B, in the linux 3.5 version, and
> push the 1500 byte packet from my example to the old flows queue -
Nope, the 1500 byte packet will be sent as normal.
On Wed, 2012-11-28 at 13:37 -0500, Michael Richardson wrote:
> > "Paul" == Paul E McKenney writes:
> Paul> You lost me on this one. It looks to me like net/sched/sch_fq_codel.c
> Paul> in fact does hash packets into flows, so FQ-CoDel is stochastic in the […]
>
On Wed, 2012-11-28 at 09:44 -0800, Paul E. McKenney wrote:
> You lost me on this one. It looks to me like net/sched/sch_fq_codel.c
> in fact does hash packets into flows, so FQ-CoDel is stochastic in
> the same sense that SFQ is. In particular, FQ-CoDel can hash a thin
> session into the same […]
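A hedged illustration of reducing that collision risk (numbers are mine, not
from the thread): fq_codel hashes into 1024 buckets by default, and the flows
parameter raises the bucket count so a thin flow is less likely to share one
with a thick flow:

# with n active flows and b buckets, a given flow collides with probability
# roughly 1 - (1 - 1/b)^(n-1), so quadrupling b cuts the risk about 4x
tc qdisc add dev eth0 root fq_codel flows 4096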
On Sun, 2012-12-02 at 22:37 +0100, Toke Høiland-Jørgensen wrote:
> Eric Dumazet writes:
>
> > This can help if you really want to avoid a thick flow sharing a thin
> > flow bucket, but given that all packets are going eventually into the
> > Internet (or equivalent […]
On Sun, 2012-12-02 at 23:15 +0100, Toke Høiland-Jørgensen wrote:
> Eric Dumazet writes:
>
> > If the next packet arrives while the bucket is still in old_flows,
> > we won't put the bucket in new_flows; the bucket has to wait its turn in
> > the RR list.
>
> Right […]
On Mon, 2012-12-03 at 06:58 -0800, Paul E. McKenney wrote:
> On Mon, Dec 03, 2012 at 01:54:35PM +0100, Toke Høiland-Jørgensen wrote:
> > Dave Taht writes:
> >
> > > you have no control. The tx queue rings are flooded before control is
> > > handed back to the fq_codel scheduler. You can get some […]
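For context, the usual remedy for tx rings swallowing the queue is BQL (byte
queue limits); a minimal sketch, assuming a BQL-capable driver (interface,
queue index, and byte cap are placeholders):

# cap driver tx ring 0 at roughly two full-size packets so queueing, and
# therefore scheduling, stays up in fq_codel rather than in the ring
echo 3000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max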
On Tue, 2012-12-11 at 22:18 +0530, Ketan Kulkarni wrote:
> Hi,
> I am testing tcp tfo behavior with httping client and polipo server.
>
> One observation from my TFO testing: if the server sends a cookie to the
> client for a connection, the client always does TFO for subsequent
> connections. This is OK.
>
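For anyone reproducing this, both ends need TFO enabled; a minimal sketch
(kernel 3.7+; the value is a bitmask):

# bit 0 (1) enables client TFO, bit 1 (2) enables server TFO
sysctl -w net.ipv4.tcp_fastopen=3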
Sorry, could you give us a copy of the panic stack trace?
Thanks
On Fri, Jan 4, 2013 at 9:04 AM, Dave Taht wrote:
> On Thu, Jan 3, 2013 at 8:54 AM, Ketan Kulkarni wrote:
> > Thanks Dave.
> > I upgraded my 3800 to 3.7.1-1. It is working for day to day Internet
> activity.
> >
> > However, I […]
> 0x0000: 4500 007b […] 4000 4011 […]             E..{..@.@.[…]
> 0x0010: 7f00 0001 0035 b8c8 0067 fe7a d864 8180  .....5...g.z.d..
> 0x0020: 0001 0001 0002 […] 0377 7777 066f 736e   […].www.osn
> 0x0030: 6577 7303 636f 6d00 0001 0001 c00c 0001  ews.com.........

A? www.osnews.com. (32)
> 0x0000: 4500 003c c3d2 4000 4011 78dc 7f00 0001  E..<..@.@.x.....
> 0x0010: 7f00 0001 b8c8 0035 0028 fe3b d864 0100  .......5.(.;.d..
> 0x0020: 0001 […] 0377 777[…]
VM_BUG_ON(!spin_is_locked(&khugepaged_mm_lock));
On Sun, Jan 13, 2013 at 1:39 PM, Felix Fietkau wrote:
> On 2013-01-13 7:03 PM, Eric Dumazet wrote:
> > I suspect a bug in the spin_is_locked() implementation on your arch, as
> > the socket lock should be held at this point.
> I don't think […]
q->qlen--;
On Sun, Jan 13, 2013 at 7:05 PM, Eric Dumazet wrote:
> Oh well yes, this doesn't quite work on !SMP.
>
> And this kind of bug is frequent
>
> See the following example:
>
> commit b9980cdcf2524c5fe15d8cbae9c97b3ed6385563
> Author: Hugh Dickins
> Date: […]
Some paths want to check that a spinlock is held, others want to check that
it's not held; it depends on the context.
So returning 1 on UP would break a bunch of code as well.
On Mon, Jan 14, 2013 at 12:18 AM, Jerry Chu wrote:
>
>
> On Sun, Jan 13, 2013 at 7:05 PM, Eric Dumazet wrote:
>
On Tue, 2013-05-07 at 14:56 -0500, Wes Felter wrote:
> Is it time for prio_fq_codel or wfq_codel? That's what comes to mind
> when seeing the BitTorrent vs. VoIP results.
Sure!
eth=eth0
tc qdisc del dev $eth root 2>/dev/null
tc -batch << EOF
qdisc add dev $eth root handle 1: prio bands 3
qdisc add dev $eth parent 1:1 fq_codel
qdisc add dev $eth parent 1:2 fq_codel
qdisc add dev $eth parent 1:3 fq_codel
EOF
On Wed, 2013-05-08 at 15:25 -0700, Dave Taht wrote:
> Heh. I am hoping you are providing this as a negative proof!? as the
> strict prioritization of this particular linux scheduler means that a
> single full rate TCP flow in class 1:1 will completely starve classes
> 1:2 and 1:3.
>
> Some level […]
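A sketch of one way to keep priority without total starvation (my variant,
not from the thread; rates are placeholders): give each band an htb class
with a guaranteed rate and hang fq_codel off each class:

eth=eth0
tc qdisc del dev $eth root 2>/dev/null
tc qdisc add dev $eth root handle 1: htb default 20
# every class keeps its guaranteed rate, so prio 0 cannot starve the others
tc class add dev $eth parent 1: classid 1:10 htb rate 30mbit ceil 95mbit prio 0
tc class add dev $eth parent 1: classid 1:20 htb rate 40mbit ceil 95mbit prio 1
tc class add dev $eth parent 1: classid 1:30 htb rate 25mbit ceil 95mbit prio 2
tc qdisc add dev $eth parent 1:10 fq_codel
tc qdisc add dev $eth parent 1:20 fq_codel
tc qdisc add dev $eth parent 1:30 fq_codel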
On Tue, 2013-05-14 at 03:24 -0700, Dave Taht wrote:
>
> As for dealing with incoming vs outgoing traffic, it might be possible
> to use connection tracking to successfully re-mark traffic on incoming
> to match the outgoing.
Indeed, we had a discussion about that during the Netfilter workshop in 2013.
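A minimal sketch of that idea with iptables CONNMARK (my illustration;
interface and chains are placeholders):

# save the mark set on outgoing packets into the conntrack entry
iptables -t mangle -A POSTROUTING -o eth0 -j CONNMARK --save-mark
# stamp incoming packets of the same connection with the saved mark
iptables -t mangle -A PREROUTING -i eth0 -j CONNMARK --restore-mark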
On Tue, 2013-07-09 at 09:57 +0200, Toke Høiland-Jørgensen wrote:
> Mikael Abrahamsson writes:
>
> > For me, it shows that FQ_CODEL indeed affects TCP performance
> > negatively for long links; however, it looks like the impact is only
> > about 20-30%.
>
> As far as I can tell, fq_codel's throughput […]
On Mon, 2013-07-08 at 23:32 -0700, Dave Taht wrote:
> and... unlike in the past where tcp was being optimized for
> supercomputer center to supercomputer center, the vast majority of tcp
> related work is now coming out of google, who are optimizing for short
> transfers over short rtts.
That's n[…]
On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:
>
> Doesn't netem have an option to simulate reordering?
It's really too basic for my needs.
It decides to put the new packet at the front of the transmit queue.
If you use netem to add a delay, then adding reordering is only a matter […]
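A minimal sketch of that recipe (device and numbers are placeholders): with a
base delay in place, netem's reorder option sends a share of packets
immediately so they overtake the delayed ones:

# delay everything 10ms, but send 25% of packets (50% correlated) right away
tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50%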
On Tue, 2013-07-09 at 15:13 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet writes:
>
> > What do you mean ? This makes little sense to me.
>
> The data from my previous post
> (http://archive.tohojo.dk/bufferbloat-data/long-rtt/throughput.txt)
> shows fq_codel ac[…]
On Tue, 2013-07-09 at 15:45 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet writes:
>
> > OK, that's a total of 200 ms RTT. It's a pretty high value :(
>
> Yeah, that was the point; Mikael requested such a test be run, and I
> happened to be near my lab setup yesterday, […]
On Tue, 2013-07-09 at 15:53 +0200, Toke Høiland-Jørgensen wrote:
> Eric Dumazet writes:
>
> > It would be nice if the rrul results could include an nstat snapshot:
> >
> > nstat >/dev/null ; rrul_tests ; nstat
>
> Sure, can do. Is that from the client machine or the […]
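Spelled out, the snapshot pattern looks like this (the rrul invocation is
illustrative; the thread's tests used netperf-wrapper, since renamed flent):

nstat > /dev/null                          # baseline: swallow counters so far
netperf-wrapper -H netserver.example.org rrul   # placeholder rrul run
nstat                                      # print deltas accumulated by the test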
On Sat, 2013-08-31 at 13:47 -0700, Dave Taht wrote:
>
>
> Eric Dumazet just posted a pure fq scheduler (using the highly
> optimized red/black trees in the kernel)
>
> http://marc.info/?l=linux-netdev&m=137740009008261&w=2
>
>
> which "scales to mil