I agree with others here that actually solving the DoS issue isn't trivial, but making it less absurdly trivial to have
30-second dropouts of your VPN connection would also be a nice change.
Matt
On 4/19/21 05:43, Eric Dumazet wrote:
On Sun, Apr 18, 2021 at 4:31 PM Matt Corallo wrote:
Should the default, though, be so low? If someone is still using an old modem they can crank up the sysctl; it does seem
like such things are pretty rare these days :). It's rather trivial to, without any kind of attack, hit 1Mbps of lost
fragments in today's networks, at which point all fragments
a very short time, it would be hard to launch the
attack (evicting the legit fragment before it's assembled requires a
large packet sending rate). And this seems better than the existing
solution (drop all incoming fragments when full).
Keyu
On Sat, Apr 17, 2021 at 6:30 PM Matt Corallo wrote:
See also "[PATCH] Reduce IP_FRAG_TIME fragment-reassembly timeout to 1s, from 30s" (and the two resends of it) - given
the size of the default cache (4MB) and the time that it takes before we flush the cache (30 seconds), you only need
about 1Mbps of fragments to hit this issue. While DoS attacks
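A quick sanity check on that figure, assuming the 4MB is the net.ipv4.ipfrag_high_thresh default:

    4 MB * 8 bit/byte = 32 Mbit;  32 Mbit / 30 s ~= 1.07 Mbit/s

i.e. a steady ~1Mbps of never-reassembled fragments is enough to keep the cache pinned at its limit for the whole
30-second window.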
The default IP reassembly timeout of 30 seconds predates git
history (and cursory web searches turn up nothing related to it).
The only relevant source cited in net/ipv4/ip_fragment.c is RFC
791 defining IPv4 in 1981. RFC 791 suggests allowing the timer to
increase on the receipt of each fragment
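As a rough illustration of that RFC 791 suggestion (a standalone C sketch with made-up names, not the kernel's actual
reassembly code): the timer is allowed to grow on each arriving fragment, using the fragment's TTL as a lower bound on
the remaining lifetime of the queue.

    /* Sketch only: RFC 791's reassembly example sets the timer to
     * max(current timer, TTL of the arriving fragment), so the timeout
     * can keep growing while fragments are still arriving. */
    struct reasm_queue {
            unsigned int timer_secs;    /* seconds left before the queue is discarded */
    };

    static void reasm_on_fragment(struct reasm_queue *q, unsigned int frag_ttl)
    {
            if (frag_ttl > q->timer_secs)
                    q->timer_secs = frag_ttl;
    }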
On 3/29/21 20:04, Matt Corallo wrote:
This issue largely goes away when setting net.ipv4.ipfrag_time to 0/1.
Quick correction - the issue is reduced when set to 1 (as you might expect, you don't see as much loss if you wipe the
fragment buffer every second) but if you set it to zero
IP_FRAG_TIME defaults to a full 30 seconds to wait for reassembly of fragments. In practice, with the default values,
if I send enough fragments over a line that there is material loss, it's not strange to see fragments be completely
dropped for the remainder of a 30-second time period before r
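For reference, the two knobs under discussion and the defaults mentioned above (30 seconds and 4MB); the change shown
is purely illustrative, not a recommendation:

    # current values (defaults: 30 seconds, 4194304 bytes)
    sysctl net.ipv4.ipfrag_time net.ipv4.ipfrag_high_thresh
    # illustrative change only:
    sysctl -w net.ipv4.ipfrag_time=1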
On 2/1/21 2:45 PM, Jakub Kicinski wrote:
Matt, would you mind clarifying if this is indeed a regression for i211?
Admittedly, I didn't go all the way back to test that it is, indeed, a regression. The Fixes commit that it was tagged
with on Tony's tree was something more recent than initia
Given this fixes a major (albeit ancient) performance regression, is it not a candidate for backport? It landed on
Tony's dev-queue branch with a Fixes tag but no stable CC.
Thanks,
Matt
On 12/21/20 5:25 PM, Nick Lowe wrote:
The Intel I211 Ethernet Controller supports 2 Receive Side Scaling (R
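(If useful: the RSS queue count actually in use can be checked and adjusted with ethtool; the interface name here is
illustrative.)

    ethtool -l eth0              # show supported and current channel counts
    ethtool -L eth0 combined 2   # request two combined queues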
Damn mail clients and their helpful corruption of patches...
Resent w/o the extra \n in the diff header.
On 06/29/16 07:58, David Miller wrote:
> From: Matt Corallo
> Date: Sat, 25 Jun 2016 19:35:03 +
>
>> At least on Meson GXBB, the CORE_IRQ_MTL_RX_OVERFLOW interrupt is thr
(resent due to over-helpful mail client corrupting the patch)
At least on Meson GXBB, the CORE_IRQ_MTL_RX_OVERFLOW interrupt is thrown
with the stmmac1000 driver, which does not support set_rx_tail_ptr. With
this patch and the clock fixes, 1G ethernet works on ODROID-C2.
Signed-off-by: Matt Corallo
---
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 2
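A rough sketch of the kind of two-line guard the diffstat suggests (generic C with hypothetical names, not necessarily
the actual change): treat the RX tail-pointer update as an optional operation and skip it on cores, like the GXBB's
stmmac1000, that don't provide it, rather than calling through a NULL pointer when CORE_IRQ_MTL_RX_OVERFLOW fires.

    /* Sketch only: guard the optional callback in the RX overflow path. */
    struct dma_ops {
            void (*set_rx_tail_ptr)(void *ioaddr, unsigned int tail, unsigned int chan);
    };

    static void handle_rx_overflow(struct dma_ops *ops, void *ioaddr,
                                   unsigned int tail, unsigned int chan)
    {
            if (ops->set_rx_tail_ptr)    /* not implemented on all cores */
                    ops->set_rx_tail_ptr(ioaddr, tail, chan);
    }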