From: Rick Jones
Date: Wed, 30 Nov 2016 09:42:40 -0800
> And indeed, based on a quick check, send() is what is being called,
> though it becomes, it seems, a sendto() system call - with the
> destination information NULL:
>
> write(1, "send\n", 5) = 5
> sendto(4, "netperf\0netpe
On 11/30/2016 02:43 AM, Jesper Dangaard Brouer wrote:
Notice the "fib_lookup" cost is still present, even when I use
option "-- -n -N" to create a connected socket. As Eric taught us,
this is because we should be using the "send" or "write" syscalls on a
connected socket.
In theory, once the data socke
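For reference, a minimal sketch of the connected-socket pattern under
discussion (not netperf's actual code; address, port and message size are
placeholders): connect() pins the destination so that plain send()/write()
can reuse the socket's cached route instead of paying a route lookup per
datagram.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        struct sockaddr_in dst = { 0 };
        char buf[1472];
        int i;

        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);                          /* placeholder */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);   /* placeholder */

        /* connect() fixes the destination and caches the route ... */
        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0)
                return 1;

        memset(buf, 'x', sizeof(buf));
        /* ... so send() passes no destination (this is the sendto() with
         * a NULL address seen in the strace above) and avoids a
         * per-packet route lookup. */
        for (i = 0; i < 100000; i++)
                send(fd, buf, sizeof(buf), 0);

        close(fd);
        return 0;
}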
On Mon, 28 Nov 2016 10:33:49 -0800 Rick Jones wrote:
> On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
> >> time to try IP_MTU_DISCOVER ;)
> >
> > To Rick, maybe you can find a good solution or option with Eric's hint,
> > to send appropriate sized UDP packets with Don't Fragment (DF).
On 11/28/2016 10:33 AM, Rick Jones wrote:
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of trunk has a
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of trunk has a change adding an omni, test-specific -f opt
On Mon, 2016-11-21 at 17:03 +0100, Jesper Dangaard Brouer wrote:
> On Thu, 17 Nov 2016 10:51:23 -0800
> Eric Dumazet wrote:
>
> > On Thu, 2016-11-17 at 19:30 +0100, Jesper Dangaard Brouer wrote:
> >
> > > The point is I can see a socket Send-Q forming, thus we do know the
> > > application has
On Thu, 17 Nov 2016 10:51:23 -0800
Eric Dumazet wrote:
> On Thu, 2016-11-17 at 19:30 +0100, Jesper Dangaard Brouer wrote:
>
> > The point is I can see a socket Send-Q forming, thus we do know the
> > application has something to send. Thus there is a possibility for
> > non-opportunistic bulking. Al
On Thu, 17 Nov 2016 13:44:02 -0800
Eric Dumazet wrote:
> On Thu, 2016-11-17 at 22:19 +0100, Jesper Dangaard Brouer wrote:
>
> >
> > Maybe you can share your udp flood "udpsnd" program source?
>
> Very ugly. This is based on what I wrote when tracking the UDP v6
> checksum bug (4f2e4ad56a65f3
On 11/17/2016 04:37 PM, Julian Anastasov wrote:
On Thu, 17 Nov 2016, Rick Jones wrote:
raj@tardy:~/netperf2_trunk$ strace -v -o /tmp/netperf.strace src/netperf -F
src/nettest_omni.c -t UDP_STREAM -l 1 -- -m 1472
...
socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
getsockopt(4, SOL_SOCKET, SO_SND
Hello,
On Thu, 17 Nov 2016, Rick Jones wrote:
> raj@tardy:~/netperf2_trunk$ strace -v -o /tmp/netperf.strace src/netperf -F
> src/nettest_omni.c -t UDP_STREAM -l 1 -- -m 1472
>
> ...
>
> socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
> getsockopt(4, SOL_SOCKET, SO_SNDBUF, [212992], [4])
On 11/17/2016 01:44 PM, Eric Dumazet wrote:
because netperf sends the same message
over and over...
Well, sort of, by default. That can be altered to a degree.
The global -F option should cause netperf to fill the buffers in its
send ring with data from the specified file. The number of buf
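A rough sketch of the idea behind that global -F option as described above:
pre-fill a ring of send buffers from the named file so successive sends do
not all carry identical bytes (the ring and buffer sizes are arbitrary
placeholders, not netperf's actual values).

#include <stdio.h>

#define RING_SLOTS 16           /* placeholder ring size */
#define BUF_SIZE   1472         /* placeholder buffer size */

/* Sketch: load RING_SLOTS buffers with data from 'path', wrapping
 * around to the start of the file whenever it runs out. */
static int fill_ring_from_file(const char *path,
                               char ring[RING_SLOTS][BUF_SIZE])
{
        FILE *f = fopen(path, "rb");
        int i;

        if (!f)
                return -1;

        for (i = 0; i < RING_SLOTS; i++) {
                size_t off = 0;

                while (off < BUF_SIZE) {
                        size_t got = fread(ring[i] + off, 1,
                                           BUF_SIZE - off, f);
                        if (got == 0) {          /* EOF: wrap around */
                                rewind(f);
                                got = fread(ring[i] + off, 1,
                                            BUF_SIZE - off, f);
                                if (got == 0) {  /* empty file: give up */
                                        fclose(f);
                                        return -1;
                                }
                        }
                        off += got;
                }
        }
        fclose(f);
        return 0;
}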
On Thu, Nov 17, 2016 at 9:34 AM, David Laight wrote:
> From: Jesper Dangaard Brouer
>> Sent: 17 November 2016 14:58
>> On Thu, 17 Nov 2016 06:17:38 -0800
>> Eric Dumazet wrote:
>>
>> > On Thu, 2016-11-17 at 14:42 +0100, Jesper Dangaard Brouer wrote:
>> >
>> > > I can see that qdisc layer does not
On Thu, 2016-11-17 at 22:19 +0100, Jesper Dangaard Brouer wrote:
>
> Maybe you can share your udp flood "udpsnd" program source?
Very ugly. This is based on what I wrote when tracking the UDP v6
checksum bug (4f2e4ad56a65f3b7d64c258e373cb71e8d2499f4 net: mangle zero
checksum in skb_checksum_help
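Eric's actual udpsnd source is not reproduced in this archive; purely as a
stand-in, a minimal UDP flood loop might look like the sketch below
(destination, port, payload size and duration are placeholders, and unlike
the connected-socket sketch earlier this one uses plain sendto() on an
unconnected socket).

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        struct sockaddr_in dst = { 0 };
        char payload[64];
        long sent = 0;
        time_t start = time(NULL);

        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);                          /* placeholder */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);   /* placeholder */
        memset(payload, 0, sizeof(payload));

        /* Blast datagrams for ~10 seconds and report packets/sec. */
        while (time(NULL) - start < 10) {
                sendto(fd, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
                sent++;
        }
        printf("%ld packets in 10s (~%ld pps)\n", sent, sent / 10);
        close(fd);
        return 0;
}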
On Thu, 17 Nov 2016 10:51:23 -0800
Eric Dumazet wrote:
> On Thu, 2016-11-17 at 19:30 +0100, Jesper Dangaard Brouer wrote:
>
> > The point is I can see a socket Send-Q forming, thus we do know the
> > application has something to send. Thus there is a possibility for
> > non-opportunistic bulking. All
On Thu, 2016-11-17 at 19:30 +0100, Jesper Dangaard Brouer wrote:
> The point is I can see a socket Send-Q forming, thus we do know the
> application has something to send. Thus there is a possibility for
> non-opportunistic bulking. Allowing/implementing bulk enqueue from the
> socket layer into the qdisc layer
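For anyone who wants to watch that Send-Q from inside the sending program
itself, a small sketch using the SIOCOUTQ ioctl (fd is assumed to be the
UDP socket in use):

#include <linux/sockios.h>      /* SIOCOUTQ */
#include <stdio.h>
#include <sys/ioctl.h>

/* Sketch: report how many bytes are still queued for transmit on the
 * socket; for UDP this is roughly the Send-Q column shown by ss/netstat. */
static void print_sendq(int fd)
{
        int unsent = 0;

        if (ioctl(fd, SIOCOUTQ, &unsent) == 0)
                printf("Send-Q: %d bytes\n", unsent);
}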
On Thu, 17 Nov 2016 08:21:19 -0800
Eric Dumazet wrote:
> On Thu, 2016-11-17 at 15:57 +0100, Jesper Dangaard Brouer wrote:
> > On Thu, 17 Nov 2016 06:17:38 -0800
> > Eric Dumazet wrote:
> >
> > > On Thu, 2016-11-17 at 14:42 +0100, Jesper Dangaard Brouer wrote:
> > >
> > > > I can see that q
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Well, I suppose adding another setsockopt() to the data socket creation
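As a rough illustration of the kind of setsockopt() Rick is talking about
adding (a sketch, not netperf's actual change; fd is assumed to be the
already-created SOCK_DGRAM data socket):

#include <netinet/in.h>
#include <sys/socket.h>

/* Sketch: request path-MTU discovery on a UDP socket so datagrams go
 * out with DF set, and oversized sends fail with EMSGSIZE instead of
 * being fragmented locally. */
static int set_df(int fd)
{
        int val = IP_PMTUDISC_DO;

        return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                          &val, sizeof(val));
}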
From: Jesper Dangaard Brouer
> Sent: 17 November 2016 14:58
> On Thu, 17 Nov 2016 06:17:38 -0800
> Eric Dumazet wrote:
>
> > On Thu, 2016-11-17 at 14:42 +0100, Jesper Dangaard Brouer wrote:
> >
> > > I can see that qdisc layer does not activate xmit_more in this case.
> > >
> >
> > Sure. Not enou
On Thu, 17 Nov 2016 06:17:38 -0800
Eric Dumazet wrote:
> On Thu, 2016-11-17 at 14:42 +0100, Jesper Dangaard Brouer wrote:
>
> > I can see that qdisc layer does not activate xmit_more in this case.
> >
>
> Sure. Not enough pressure from the sender(s).
>
> The bottleneck is not the NIC or qdi
On Thu, 17 Nov 2016 05:20:50 -0800
Eric Dumazet wrote:
> On Thu, 2016-11-17 at 09:16 +0100, Jesper Dangaard Brouer wrote:
>
> >
> > I noticed there is a Send-Q, and the perf-top2 is _raw_spin_lock, which
> > looks like it comes from __dev_queue_xmit(), but we know from
> > experience that this
On Thu, 2016-11-17 at 14:42 +0100, Jesper Dangaard Brouer wrote:
> I can see that qdisc layer does not activate xmit_more in this case.
>
Sure. Not enough pressure from the sender(s).
The bottleneck is not the NIC or qdisc in your case, meaning that BQL
limit is kept at a small value.
(BTW not
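For completeness, the BQL limit Eric mentions can be inspected per TX queue
through sysfs; a small sketch of reading it (device name and queue index
are placeholders):

#include <stdio.h>

/* Sketch: read the current BQL byte limit for one TX queue, e.g.
 * /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit */
static long read_bql_limit(const char *dev, int txq)
{
        char path[128];
        long limit = -1;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/class/net/%s/queues/tx-%d/byte_queue_limits/limit",
                 dev, txq);
        f = fopen(path, "r");
        if (f) {
                if (fscanf(f, "%ld", &limit) != 1)
                        limit = -1;
                fclose(f);
        }
        return limit;
}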
On Thu, 2016-11-17 at 09:16 +0100, Jesper Dangaard Brouer wrote:
>
> I noticed there is a Send-Q, and the perf-top2 is _raw_spin_lock, which
> looks like it comes from __dev_queue_xmit(), but we know from
> experience that this stall is actually caused by writing the
> tailptr/doorbell in the HW.
On Thu, 2016-11-17 at 15:57 +0100, Jesper Dangaard Brouer wrote:
> On Thu, 17 Nov 2016 06:17:38 -0800
> Eric Dumazet wrote:
>
> > On Thu, 2016-11-17 at 14:42 +0100, Jesper Dangaard Brouer wrote:
> >
> > > I can see that qdisc layer does not activate xmit_more in this case.
> > >
> >
> > Sure
On Wed, 16 Nov 2016 16:34:09 -0800
Eric Dumazet wrote:
> On Wed, 2016-11-16 at 23:40 +0100, Jesper Dangaard Brouer wrote:
>
> > Using -R 1 does not seem to help remove __ip_select_ident()
> >
> > Samples: 56K of event 'cycles', Event count (approx.): 78628132661
> >   Overhead  Command  S
On Wed, 2016-11-16 at 23:40 +0100, Jesper Dangaard Brouer wrote:
> Using -R 1 does not seem to help remove __ip_select_ident()
>
> Samples: 56K of event 'cycles', Event count (approx.): 78628132661
>   Overhead  Command  Shared Object  Symbol
>   +  9.11%  netperf  [kernel.vmlin
On 11/16/2016 02:40 PM, Jesper Dangaard Brouer wrote:
On Wed, 16 Nov 2016 09:46:37 -0800
Rick Jones wrote:
It is a wild guess, but does setting SO_DONTROUTE affect whether or not
a connect() would have the desired effect? That is there to protect
people from themselves (long story about people
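The SO_DONTROUTE option Rick is guessing about is just a boolean socket
option; setting it looks like the sketch below, though whether it interacts
with connect() the way he hopes is exactly the open question:

#include <sys/socket.h>

/* Sketch: SO_DONTROUTE bypasses the normal routing decision and only
 * allows sending to directly connected destinations. */
static int set_dontroute(int fd)
{
        int on = 1;

        return setsockopt(fd, SOL_SOCKET, SO_DONTROUTE, &on, sizeof(on));
}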
On Wed, 16 Nov 2016 09:46:37 -0800
Rick Jones wrote:
> On 11/16/2016 04:16 AM, Jesper Dangaard Brouer wrote:
> > [1] Subj: High perf top ip_idents_reserve doing netperf UDP_STREAM
> > - https://www.spinics.net/lists/netdev/msg294752.html
> >
> > Not fixed in version 2.7.0.
> > - ftp://ftp.netpe
On 11/16/2016 04:16 AM, Jesper Dangaard Brouer wrote:
[1] Subj: High perf top ip_idents_reserve doing netperf UDP_STREAM
- https://www.spinics.net/lists/netdev/msg294752.html
Not fixed in version 2.7.0.
- ftp://ftp.netperf.org/netperf/netperf-2.7.0.tar.gz
Used extra netperf configure compile
While optimizing the kernel RX path, I've run into an issue where I
cannot use netperf UDP_STREAM for testing, because the sender is
slower than the receiver. Thus, it cannot show my receiver improvements
(as the receiver has idle cycles).
Eric Dumazet previously told me[1] this was related to netperf