On Tue, Feb 6, 2018 at 8:27 AM, Tal Gilboa wrote:
> On 2/6/2018 5:52 PM, Eric Dumazet wrote:
>>
>> On Tue, 2018-02-06 at 15:22 +0000, David Laight wrote:
>>>
>>> From: Eric Dumazet
>>>> Sent: 06 February 2018 14:20
>>>
>>> ...
>>>> Please give exact details.
>>>> Sending 64, 128, 256 or 512 bytes at a time on TCP_STREAM makes little sense. ...
On 2/6/2018 5:52 PM, Eric Dumazet wrote:
> On Tue, 2018-02-06 at 15:22 +0000, David Laight wrote:
>> From: Eric Dumazet
>>> Sent: 06 February 2018 14:20
>> ...
>>> Please give exact details.
>>> Sending 64, 128, 256 or 512 bytes at a time on TCP_STREAM makes little sense.
>>> We are not optimizing stack for pathological cases, sorry. ...
On Tue, 2018-02-06 at 15:22 +0000, David Laight wrote:
> From: Eric Dumazet
> > Sent: 06 February 2018 14:20
>
> ...
> > Please give exact details.
> > Sending 64, 128, 256 or 512 bytes at a time on TCP_STREAM makes little
> > sense.
> > We are not optimizing stack for pathological cases, sorry.
From: Eric Dumazet
> Sent: 06 February 2018 14:20
...
> Please give exact details.
> Sending 64, 128, 256 or 512 bytes at a time on TCP_STREAM makes little sense.
> We are not optimizing stack for pathological cases, sorry.

There are plenty of workloads which are not bulk data and where multiple s...
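Eric's remark about 64, 128, 256 or 512 byte sends refers to the number of bytes handed to each send() call in a TCP_STREAM test. A hypothetical netperf sweep of those sizes might look like the loop below; the test-specific -m option sets the per-send size, and <receiver> is a placeholder hostname, not one used in this thread:

for size in 64 128 256 512
do
        # sketch only: small per-send() sizes over a single TCP_STREAM
        ./netperf -H <receiver> -l 30 -t TCP_STREAM -- -m $size -o THROUGHPUT | tail -1
done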
On Tue, Feb 6, 2018 at 5:51 AM, Tal Gilboa wrote:
> On 1/24/2018 5:09 PM, Eric Dumazet wrote:
>>
>> On Wed, 2018-01-24 at 16:42 +0200, Tal Gilboa wrote:
>>>
>>> Hi Eric,
>>> My choice of words in my comment was misplaced, and I apologize. It
>>> completely missed the point. I understand, of course, the importance of
>>> optimizing real-life scenarios. ...
On 1/24/2018 5:09 PM, Eric Dumazet wrote:
> On Wed, 2018-01-24 at 16:42 +0200, Tal Gilboa wrote:
>> Hi Eric,
>> My choice of words in my comment was misplaced, and I apologize. It
>> completely missed the point. I understand, of course, the importance of
>> optimizing real-life scenarios.
>>
>> We are currently evaluating this patch and if/how it might affect our customers. ...
On Wed, 2018-01-24 at 16:42 +0200, Tal Gilboa wrote:
> Hi Eric,
> My choice of words in my comment was misplaced, and I apologize. It
> completely missed the point. I understand, of course, the importance of
> optimizing real-life scenarios.
>
> We are currently evaluating this patch and if/how it might affect our
> customers. ...
Hi Eric,
My choice of words in my comment was misplaced, and I apologize. It
completely missed the point. I understand, of course, the importance of
optimizing real-life scenarios.

We are currently evaluating this patch and if/how it might affect our
customers. We would also evaluate your sug...
On Sun, Jan 21, 2018 at 12:52 PM, Tal Gilboa wrote:
> Hi Eric,
> We have noticed a degradation on both of our drivers (mlx4 and mlx5) when
> running TCP. Exact scenario is single stream TCP with 1KB packets. The
> degradation is a steady 50% drop.
> We tracked the offending commit to be:
> 75c119a ("tcp: implement rb-tree based retransmit queue")
Hi Eric,
We have noticed a degradation on both of our drivers (mlx4 and mlx5)
when running TCP. Exact scenario is single stream TCP with 1KB packets.
The degradation is a steady 50% drop.
We tracked the offending commit to be:
75c119a ("tcp: implement rb-tree based retransmit queue")
Since mlx...
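The regression scenario Tal describes (a single TCP stream with 1KB sends) could plausibly be approximated with a netperf run along the following lines; the hostname is a placeholder and this is a sketch, not necessarily the exact command Mellanox used:

# sketch only: single TCP_STREAM, 1KB handed to each send() call
./netperf -H <receiver> -l 30 -t TCP_STREAM -- -m 1024 -o THROUGHPUT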
From: Eric Dumazet
Date: Thu, 5 Oct 2017 22:21:20 -0700
> This patch series implements an RB-tree based retransmit queue for TCP,
> to better match modern BDP.

Indeed, there was a lot of resistance to this due to the overhead
for small retransmit queue sizes, but with today's scale this is
long overdue. ...
This patch series implements an RB-tree based retransmit queue for TCP,
to better match modern BDP.

Tested:

On receiver:
  netem on ingress: delay 150ms 200us loss 1
  GRO disabled to force stress and SACK storms.

for f in `seq 1 10`
do
  ./netperf -H lpaa6 -l30 -- -K bbr -o THROUGHPUT | tail -1
done
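For context, "netem on ingress" is commonly arranged by redirecting received traffic through an ifb device so a netem qdisc can be attached to it. A rough sketch under that assumption (eth0/ifb0 are placeholder interface names, and this may not match Eric's exact setup):

# redirect ingress traffic on eth0 through ifb0 so netem can act on it
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all matchall \
        action mirred egress redirect dev ifb0
# 150ms delay with 200us jitter and 1% loss, as described above
tc qdisc add dev ifb0 root netem delay 150ms 200us loss 1%
# disable GRO on the receiver, per the test description
ethtool -K eth0 gro off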