On Fri, May 17, 2019 at 8:57 AM David Laight wrote:
> From: Willem de Bruijn
> Sent: 17 May 2019 04:23

On Thu, May 16, 2019 at 8:27 PM Adam Urban wrote:
And replying to your earlier comment about TTL, yes I think a TTL on
arp_queues would be hugely helpful.

In any environment where you are streaming time-sensitive UDP traffic,
you really want the kernel to be tuned to immediately drop the
outgoing packet if the destination isn't yet known/in the ARP cache.
How can I see if there is an active arp queue?
Regarding the qdisc, I don't think we're bumping up against that (at
least not in my tiny bench setup):

tc -s qdisc show
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024
quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 925035443 bytes
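To the question above about spotting an active arp queue: a rough sketch, assuming Linux with iproute2. Neighbour entries in the INCOMPLETE state are the ones whose arp_queue can be holding packets, and /proc/net/stat/arp_cache exposes the unresolved_discards counter (fields are printed in hex, one row per CPU):

```shell
# Neighbour entries still being resolved; packets to these
# destinations sit on the per-neighbour arp_queue.
ip -s neigh show | grep INCOMPLETE || true

# Per-CPU neighbour cache stats; all fields are hex. The header
# line gives the column order (unresolved_discards among them).
head -2 /proc/net/stat/arp_cache

# Convert a hex field to decimal for reading:
printf '%d\n' 0x00000005
```

The `|| true` is only there so the pipeline doesn't fail when no entry is currently unresolved.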
On 5/16/19 9:32 AM, Adam Urban wrote:
Eric, thanks. Increasing wmem_default from 229376 to 2293760 indeed
makes the issue go away on my test bench. What's a good way to
determine the optimal value here? I assume this is in bytes and needs
to be large enough so that the SO_SNDBUF doesn't fill up before the
kernel drops the packets.
On 5/16/19 9:05 AM, Eric Dumazet wrote:
> We probably should add a ttl on arp queues.
>
> neigh_probe() could do that quite easily.
>
Adam, all you need to do is increase the UDP socket sndbuf,
either by increasing /proc/sys/net/core/wmem_default
or by using setsockopt( ... SO_SNDBUF ... ).
On 5/16/19 7:47 AM, Willem de Bruijn wrote:
/proc/net/stat/ndisc_cache appears to show unresolved_discards as 0:
entries,allocs,destroys,hash_grows,lookups,hits,res_failed,rcv_probes_mcast,rcv_probes_ucast,periodic_gc_runs,forced_gc_runs,unresolved_discards,table_fulls
0005,0005,,,,0
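The stats row above looks mangled by the archive; in the real file the kernel emits one whitespace-separated row of hex fields per CPU. A small sketch of decoding such a row — the sample values below are made up for illustration, not taken from this thread:

```python
def parse_neigh_stats(header_line, data_line):
    """Decode one data row of /proc/net/stat/{arp,ndisc}_cache.

    Assumes the kernel's format: whitespace-separated fields, every
    value printed in hex, column order given by the header line.
    """
    names = header_line.split()
    values = [int(v, 16) for v in data_line.split()]
    return dict(zip(names, values))

header = ("entries allocs destroys hash_grows lookups hits res_failed "
          "rcv_probes_mcast rcv_probes_ucast periodic_gc_runs "
          "forced_gc_runs unresolved_discards table_fulls")
# Hypothetical sample row for one CPU:
row = ("00000005 0000000a 00000000 00000000 000001f4 000001a0 00000000 "
       "00000000 00000000 00000010 00000000 00000002 00000000")

stats = parse_neigh_stats(header, row)
# A nonzero unresolved_discards means packets were dropped from an
# unresolved neighbour's queue.
print(stats["unresolved_discards"])  # → 2
```

Remember to sum the counter across all per-CPU rows when reading the real file.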
On Wed, May 15, 2019 at 3:57 PM Adam Urban wrote:
We have an application where we use sendmsg() to send (lots of)
UDP packets to multiple destinations over a single socket, repeatedly,
and at a pretty constant rate using IPv4.

In some cases, some of these destinations are no longer present on the
network, but we continue sending data to them.
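The setup described above can be sketched as follows; addresses and port are placeholders, and loopback stands in for the real peers:

```python
import socket

# Hypothetical destination list; in the real setup some of these
# hosts are no longer on the network.
destinations = [("127.0.0.1", 5000), ("127.0.0.1", 5001)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"\x00" * 1200  # constant-rate datagrams

for dst in destinations:
    # If dst's MAC is unresolved, the kernel parks the skb on that
    # neighbour's arp_queue; the bytes still count against this
    # socket's SO_SNDBUF until resolution succeeds or fails, which
    # is how one dead destination can starve sends to the others.
    sent = sock.sendto(payload, dst)
    assert sent == len(payload)
sock.close()
```

This is the crux of the thread: because all destinations share one socket, buffer space consumed by packets queued for an unreachable neighbour is unavailable for the reachable ones.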