When building with W=1, we got the following warning:
drivers/net/wireless/ath/ath6kl/wmi.c:3509:6: warning: variable ‘ret’
set but not used [-Wunused-but-set-variable]
At the end of ath6kl_wmi_set_pvb_cmd, 0 is returned regardless of the
return value of ath6kl_wmi_cmd_send.
This patch fixes ret
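A minimal userspace sketch of the pattern behind the warning and the fix; send_cmd() merely stands in for ath6kl_wmi_cmd_send(), this is not the driver code itself:

#include <stdio.h>

/* Stand-in for ath6kl_wmi_cmd_send(): returns 0 on success, negative on error. */
static int send_cmd(int ok)
{
        return ok ? 0 : -1;
}

/* Before: 'ret' is assigned but the function returns 0 unconditionally,
 * which is what -Wunused-but-set-variable complains about. */
static int set_pvb_before(int ok)
{
        int ret = send_cmd(ok);

        (void)ret;
        return 0;
}

/* After: propagate the status of the send helper. */
static int set_pvb_after(int ok)
{
        return send_cmd(ok);
}

int main(void)
{
        printf("before: %d, after: %d\n", set_pvb_before(0), set_pvb_after(0));
        return 0;
}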
On Thu, Sep 15, 2016 at 3:11 PM, Thadeu Lima de Souza Cascardo
wrote:
> Instead of using flow stats per NUMA node, use it per CPU. When using
> megaflows, the stats lock can be a bottleneck in scalability.
>
> On a E5-2690 12-core system, usual throughput went from ~4Mpps to
> ~15Mpps when forward
On Thu, Sep 15, 2016 at 3:11 PM, Thadeu Lima de Souza Cascardo
wrote:
> On a system with only node 1 as possible, all statistics are going to be
> accounted on node 0 as it will have a single writer.
>
> However, when getting and clearing the statistics, node 0 is not going
> to be considered, as i
On Sat, 2016-09-17 at 16:38 -0700, Florian Fainelli wrote:
> 2016-09-17 16:23 GMT-07:00 Joe Perches :
> > On Sat, 2016-09-17 at 16:17 -0700, Florian Fainelli wrote:
> > > The list does not accept public subscribers, so this is the correct
> > > entry to use.
> > Then M: is definitely _not_ the corr
On Sat, Sep 17, 2016 at 4:17 PM, Florian Fainelli wrote:
> 2016-09-17 15:51 GMT-07:00 Joe Perches :
>> On Sat, 2016-09-17 at 15:27 -0700, Florian Fainelli wrote:
>>> Gary has not been with Broadcom for some time now, replace his address
>>> with the internal mailing-list used for other entries.
>>
2016-09-17 16:23 GMT-07:00 Joe Perches :
> On Sat, 2016-09-17 at 16:17 -0700, Florian Fainelli wrote:
>> 2016-09-17 15:51 GMT-07:00 Joe Perches :
> []
>> > Without an actual maintainer, this should really be
>> > orphan and not supported.
>> I would like to hear from Michael before concluding tha
/dhowells/linux-fs.git
rxrpc-rewrite-20160917-2
David
---
David Howells (11):
rxrpc: Print the packet type name in the Rx packet trace
rxrpc: Add some additional call tracing
rxrpc: Add connection tracepoint and client conn state tracepoint
rxrpc: Add a tracepoint to
Add additional call tracepoint points for noting call-connected,
call-released and connection-failed events.
Also fix one tracepoint that was using an integer instead of the
corresponding enum value as the point type.
Signed-off-by: David Howells
---
net/rxrpc/ar-internal.h |3 +++
net/rxr
Add a tracepoint to follow the insertion of a packet into the transmit
buffer, its transmission and its rotation out of the buffer.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h | 26 ++
net/rxrpc/ar-internal.h | 12
net/rxrpc/input.
Add a configuration option to inject packet loss by discarding
approximately every 8th packet received and approximately every 8th DATA
packet transmitted.
Note that no locking is used, but it shouldn't really matter.
Signed-off-by: David Howells
---
net/rxrpc/Kconfig |7 +++
net/rxrp
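A userspace sketch of the "discard roughly every 8th packet" idea described above; the random draw and counter are illustrative only, not the rxrpc implementation:

#include <stdio.h>
#include <stdlib.h>

/* True for roughly 1 packet in 8 (12.5%). */
static int lose_packet(void)
{
        return (rand() & 7) == 0;
}

int main(void)
{
        int i, dropped = 0;

        srand(1);
        for (i = 0; i < 1000; i++)
                if (lose_packet())
                        dropped++;
        printf("dropped %d of 1000 packets\n", dropped);
        return 0;
}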
Add a tracepoint to log information from received ACK packets.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h | 26 ++
net/rxrpc/input.c|2 ++
2 files changed, 28 insertions(+)
diff --git a/include/trace/events/rxrpc.h b/include/trace/ev
Add a tracepoint to follow what recvmsg does within AF_RXRPC.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h | 34 ++
net/rxrpc/ar-internal.h | 17 +
net/rxrpc/misc.c | 14 ++
net/rxrpc/recvmsg.c
Improve sk_buff tracing within AF_RXRPC by the following means:
(1) Use an enum to note the event type rather than plain integers and use
an array of event names rather than a big multi ?: list.
(2) Distinguish Rx from Tx packets and account them separately. This
requires the call ph
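A sketch of the pattern in (1): an enum for the event type plus a name table indexed by it, instead of plain integers and a long ?: chain. The event names below are invented for illustration, not the rxrpc trace points:

#include <stdio.h>

enum skb_trace_event {
        TRACE_SKB_ALLOC,
        TRACE_SKB_RX,
        TRACE_SKB_TX,
        TRACE_SKB_FREE,
        TRACE_SKB__NR_EVENTS
};

static const char *const skb_trace_names[TRACE_SKB__NR_EVENTS] = {
        [TRACE_SKB_ALLOC]       = "ALC",
        [TRACE_SKB_RX]          = "RX ",
        [TRACE_SKB_TX]          = "TX ",
        [TRACE_SKB_FREE]        = "FRE",
};

int main(void)
{
        enum skb_trace_event ev = TRACE_SKB_TX;

        printf("event %d -> %s\n", ev, skb_trace_names[ev]);
        return 0;
}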
Remove _enter/_debug/_leave calls from rxrpc_recvmsg_data() of which one
uses an uninitialised variable.
Signed-off-by: David Howells
---
net/rxrpc/recvmsg.c |8
1 file changed, 8 deletions(-)
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index b62a08151895..79e65668bc58
Add a tracepoint to log information about ACK transmission.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h | 30 ++
net/rxrpc/conn_event.c |3 +++
net/rxrpc/output.c |7 ++-
3 files changed, 39 insertions(+), 1 deletion(-)
On Sat, 2016-09-17 at 16:17 -0700, Florian Fainelli wrote:
> 2016-09-17 15:51 GMT-07:00 Joe Perches :
[]
> > Without an actual maintainer, this should really be
> > orphan and not supported.
> I would like to hear from Michael before concluding that
No worries.
> > And the M: bcm-kernel-feedback-
Record calls that need to be accepted using sk_acceptq_added() otherwise
the backlog counter goes negative because sk_acceptq_removed() is called.
This causes the preallocator to malfunction.
Calls that are preaccepted by AFS within the kernel aren't affected by
this.
Signed-off-by: David Howells
Move the check of rx_pkt_offset from rxrpc_locate_data() to the caller,
rxrpc_recvmsg_data(), so that it's more clear what's going on there.
Signed-off-by: David Howells
---
net/rxrpc/recvmsg.c |9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/net/rxrpc/recvmsg.c b
Print a symbolic packet type name for each valid received packet in the
trace output, not just a number.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h |5 +++--
net/rxrpc/ar-internal.h |6 +++---
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/include
Add a pair of tracepoints, one to track rxrpc_connection struct ref
counting and the other to track the client connection cache state.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h | 60 +++
net/rxrpc/ar-internal.h | 76 ++
The soft-ACK parser doesn't increment the pointer into the soft-ACK list,
resulting in the first ACK/NACK value being applied to all the relevant
packets in the Tx queue. This has the potential to miss retransmissions
and cause excessive retransmissions.
Fix this by incrementing the pointer.
Sig
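A toy sketch of the bug class: if the cursor into the soft-ACK list is never advanced, the first ACK/NACK value gets applied to every packet. The buffer layout here is simplified, not the rxrpc wire format:

#include <stdio.h>

static void apply_soft_acks(const unsigned char *acks, int nr_acks)
{
        int i;

        for (i = 0; i < nr_acks; i++) {
                /* The buggy parser read *acks every time; indexing (or
                 * incrementing the pointer) applies each value to its
                 * own packet. */
                printf("packet %d: %s\n", i, acks[i] ? "ACK" : "NACK");
        }
}

int main(void)
{
        const unsigned char acks[] = { 1, 0, 1, 1 };

        apply_soft_acks(acks, 4);
        return 0;
}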
Add a tracepoint to follow the life of packets that get added to a call's
receive buffer.
Signed-off-by: David Howells
---
include/trace/events/rxrpc.h | 33 +
net/rxrpc/ar-internal.h | 12
net/rxrpc/call_accept.c |3 +++
net/rxrpc/
Make the retransmission algorithm use for-loops instead of do-loops and
move the counter increments into the for-statement increment slots.
Though the do-loops are slightly more efficient since there will be at least
one pass through each loop, the counter increments are harder to get
right as
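A small sketch of the conversion being described: the counter increment moves into the for-statement slot, where it runs on every pass and cannot be misplaced inside the body (the loop bodies are placeholders):

#include <stdio.h>

int main(void)
{
        unsigned int seq, top = 3;

        /* do-while form: the increment lives somewhere in the body */
        seq = 0;
        do {
                printf("do   seq=%u\n", seq);
                seq++;
        } while (seq < top);

        /* for form: the increment sits in the for-statement slot */
        for (seq = 0; seq < top; seq++)
                printf("for  seq=%u\n", seq);

        return 0;
}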
Call rxrpc_release_call() on getting an error in rxrpc_new_client_call()
rather than trying to do the cleanup ourselves. This isn't a problem,
provided we set RXRPC_CALL_HAS_USERID only if we actually add the call to
the calls tree as cleanup code fragments that would otherwise cause
problems are
Purge the queue of to_be_accepted calls on socket release. Note that
purging sock_calls doesn't release the ref owned by to_be_accepted.
Probably the sock_calls list is redundant given purges of the recvmsg_q,
the to_be_accepted queue and the calls tree.
Signed-off-by: David Howells
---
net
The code for determining the last packet in rxrpc_recvmsg_data() has been
using the RXRPC_CALL_RX_LAST flag to determine if the rx_top pointer points
to the last packet or not. This isn't a good idea, however, as the input
code may be running simultaneously on another CPU and that sets the flag
*b
Don't transmit an ACK if call->ackr_reason is unset. There's the
possibility of a race between recvmsg() sending an ACK and the background
processing thread trying to send the same one.
Signed-off-by: David Howells
---
net/rxrpc/output.c |5 +
1 file changed, 5 insertions(+)
diff --gi
2016-09-17 15:51 GMT-07:00 Joe Perches :
> On Sat, 2016-09-17 at 15:27 -0700, Florian Fainelli wrote:
>> Gary has not been with Broadcom for some time now, replace his address
>> with the internal mailing-list used for other entries.
>>
>> > Signed-off-by: Florian Fainelli
>> ---
>> Michael,
>>
>>
Fix the basic transmit DATA packet content size at 1412 bytes so that they
can be arbitrarily assembled into jumbo packets.
In the future, I'm thinking of moving to keeping a jumbo packet header at
the beginning of each packet in the Tx queue and creating the packet header
on the spot when kernel_
If the last call on a client connection is released after the connection has
had a bunch of calls allocated but before any DATA packets are sent (so
that it's not yet marked RXRPC_CONN_EXPOSED), an assertion will happen in
rxrpc_disconnect_client_call().
af_rxrpc: Assertion failed - 1(0x1)
rxrpc_send_call_packet() should use type in both its switch-statements
rather than using pkt->whdr.type. This might give the compiler an easier
job of uninitialised variable checking.
Signed-off-by: David Howells
---
net/rxrpc/output.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
In rxrpc_put_one_client_conn(), if a connection has RXRPC_CONN_COUNTED set
on it, then it's accounted for in rxrpc_nr_client_conns and may be on
various lists - and this is cleaned up correctly.
However, if the connection doesn't have RXRPC_CONN_COUNTED set on it, then
the put routine returns rath
write-20160917-1
David
---
David Howells (14):
rxrpc: Remove some whitespace.
rxrpc: Move the check of rx_pkt_offset from rxrpc_locate_data() to caller
rxrpc: Check the return value of rxrpc_locate_data()
rxrpc: Fix handling of the last packet in rxrpc_recvmsg_data()
Remove a tab that's on a line that should otherwise be blank.
Signed-off-by: David Howells
---
net/rxrpc/call_event.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 61432049869b..9367c3be31eb 100644
--- a/net/rxrpc/ca
Check the return value of rxrpc_locate_data() in rxrpc_recvmsg_data().
Signed-off-by: David Howells
---
net/rxrpc/recvmsg.c |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index 0d085f5cf1bf..1edf2cf62cc5 100644
--- a/net/rxrp
Doesn't change generated code.
Signed-off-by: Florian Westphal
---
net/sched/sch_pie.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
index a570b0b..d976d74 100644
--- a/net/sched/sch_pie.c
+++ b/net/sched/sch_pie.c
@@ -511,7 +511,7
During Netfilter Workshop 2016 Eric Dumazet pointed out that qdisc
schedulers use doubly-linked lists, even though single-linked list
would be enough.
The double-linked skb lists incur one extra write on enqueue/dequeue
operations (to change ->prev pointer of next list elem).
This series converts
After previous patch these functions are identical.
Replace __skb_dequeue in qdiscs with __qdisc_dequeue_head.
Next patch will then make __qdisc_dequeue_head handle
single-linked list instead of a struct sk_buff_head argument.
Doesn't change generated code.
Signed-off-by: Florian Westphal
---
ne
This change replaces the sk_buff_head struct in Qdiscs with a new qdisc_skb_head.
It's similar to the sk_buff_head API, but does not use skb->prev pointers.
Qdiscs will commonly enqueue at the tail of a list and dequeue at the head.
While sk_buff_head works fine for this, enqueue/dequeue needs to also
adj
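A minimal userspace sketch of the head/tail single-linked queue shape this series moves to; the struct and field names are illustrative, not the kernel's qdisc_skb_head:

#include <stdio.h>

struct pkt {
        struct pkt *next;
        int id;
};

struct pkt_head {               /* enqueue at tail, dequeue at head */
        struct pkt *head;
        struct pkt *tail;
        unsigned int qlen;
};

static void enqueue_tail(struct pkt_head *q, struct pkt *p)
{
        p->next = NULL;
        if (q->tail)
                q->tail->next = p;      /* one write; no ->prev to fix up */
        else
                q->head = p;
        q->tail = p;
        q->qlen++;
}

static struct pkt *dequeue_head(struct pkt_head *q)
{
        struct pkt *p = q->head;

        if (!p)
                return NULL;
        q->head = p->next;
        if (!q->head)
                q->tail = NULL;
        q->qlen--;
        return p;
}

int main(void)
{
        struct pkt_head q = { NULL, NULL, 0 };
        struct pkt a = { NULL, 1 }, b = { NULL, 2 };
        struct pkt *p;

        enqueue_tail(&q, &a);
        enqueue_tail(&q, &b);
        while ((p = dequeue_head(&q)))
                printf("dequeued %d, qlen now %u\n", p->id, q.qlen);
        return 0;
}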
Moves qdisc stat accounting to qdisc_dequeue_head.
The only direct caller of the __qdisc_dequeue_head version open-codes
this now.
This allows us to later use __qdisc_dequeue_head as a replacement
of __skb_dequeue() (which operates on sk_buff_head list).
Signed-off-by: Florian Westphal
---
incl
A followup change will replace the sk_buff_head in the qdisc
struct with a slightly different list.
Use of the sk_buff_head helpers will thus cause compiler
warnings.
Open-code these accesses in an extra change to ease review.
Signed-off-by: Florian Westphal
---
net/sched/sch_fifo.c| 4 ++-
On Sat, 2016-09-17 at 15:27 -0700, Florian Fainelli wrote:
> Gary has not been with Broadcom for some time now, replace his address
> with the internal mailing-list used for other entries.
>
> > Signed-off-by: Florian Fainelli
> ---
> Michael,
>
> Since this is an old driver, not sure who could
Gary has not been with Broadcom for some time now, replace his address
with the internal mailing-list used for other entries.
Signed-off-by: Florian Fainelli
---
Michael,
Since this is an old driver, not sure who could step up as a maintainer
for b44?
MAINTAINERS | 2 +-
1 file changed, 1 inse
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
Signed-off-by: Philippe Reynes
---
drivers/net/ethernet/broadcom/b44.c | 98 +++
1 files changed, 54 insertions(+), 44 deletions(-)
diff --git a/drivers/n
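A rough, self-contained sketch of the shape of such a conversion; the structures below are mocks, not the real <linux/ethtool.h> types, and only approximate the ksettings layout:

#include <stdio.h>

struct mock_link_ksettings {
        struct {
                unsigned int speed;     /* e.g. 100 for 100Mb/s */
                unsigned char duplex;   /* 0 = half, 1 = full */
        } base;
};

/* The old-style callbacks filled a flat command struct; the new-style ones
 * fill a ksettings object with a nested 'base' block like this mock. */
static int mock_get_link_ksettings(struct mock_link_ksettings *ks)
{
        ks->base.speed = 100;
        ks->base.duplex = 1;
        return 0;
}

int main(void)
{
        struct mock_link_ksettings ks;

        mock_get_link_ksettings(&ks);
        printf("speed %u, %s duplex\n", ks.base.speed,
               ks.base.duplex ? "full" : "half");
        return 0;
}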
The private structure contains a pointer to phydev, but the structure
net_device already contains such a pointer. So we can remove the phydev
pointer from the private structure, and update the driver to use the
one contained in struct net_device.
Signed-off-by: Philippe Reynes
---
drivers/net/ethernet
Hi all,
I have an odroid c2 board which shows this issue. No data is
transmitted or received after a moment of intense tx traffic. Copying a
1GB file via scp from the board triggers it repeatedly.
The board has a stmmac - user ID: 0x11, Synopsys ID: 0x37.
When switching the network to 100Mb/s
On Sat, Sep 17, 2016 at 12:04 PM, kbuild test robot wrote:
> Hi Yuchung,
>
> [auto build test ERROR on net-next/master]
>
> url:
> https://github.com/0day-ci/linux/commits/Neal-Cardwell/tcp-BBR-congestion-control-algorithm/20160918-014058
> config: x86_64-randconfig-s2-09180225 (attached as .c
Hi Yuchung,
[auto build test ERROR on net-next/master]
url:
https://github.com/0day-ci/linux/commits/Neal-Cardwell/tcp-BBR-congestion-control-algorithm/20160918-014058
config: x86_64-randconfig-s2-09180225 (attached as .config)
compiler: gcc-4.4 (Debian 4.4.7-8) 4.4.7
reproduce:
# sav
Add the tso_segs_goal() function in tcp_congestion_ops to allow the
congestion control module to specify the number of segments that
should be in a TSO skb sent by tcp_write_xmit() and
tcp_xmit_retransmit_queue(). The congestion control module can either
request a particular number of segments in T
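Conceptually, the hook is an optional callback in the congestion-control ops that the TSO sizing code consults. A mock sketch, not the actual tcp_congestion_ops layout:

#include <stdio.h>

struct mock_cc_ops {
        const char *name;
        /* optional: let the CC module override the default TSO segment count */
        unsigned int (*tso_segs_goal)(unsigned int default_segs);
};

static unsigned int demo_tso_segs_goal(unsigned int default_segs)
{
        return default_segs < 2 ? 2 : default_segs;     /* ask for >= 2 segs */
}

static unsigned int tso_segs_for(const struct mock_cc_ops *ops,
                                 unsigned int default_segs)
{
        return ops->tso_segs_goal ? ops->tso_segs_goal(default_segs)
                                  : default_segs;
}

int main(void)
{
        const struct mock_cc_ops ops = { "demo", demo_tso_segs_goal };

        printf("tso segs goal: %u\n", tso_segs_for(&ops, 1));
        return 0;
}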
From: Yuchung Cheng
This commit introduces an optional new "omnipotent" hook,
cong_control(), for congestion control modules. The cong_control()
function is called at the end of processing an ACK (i.e., after
updating sequence numbers, the SACK scoreboard, and loss
detection). At that moment we h
From: Eric Dumazet
Revert to the tcp_skb_cb size check that tcp_init() had before commit
b4772ef879a8 ("net: use common macro for assering skb->cb[] available
size in protocol families"). As related commit 744d5a3e9fe2 ("net:
move skb->dropcount to skb->cb[]") explains, the
sock_skb_cb_check_size
From: Yuchung Cheng
This patch generates data delivery rate (throughput) samples on a
per-ACK basis. These rate samples can be used by congestion control
modules, and specifically will be used by TCP BBR in later patches in
this series.
Key state:
tp->delivered: Tracks the total number of data
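At its core a delivery-rate sample is newly delivered data divided by the interval it took to deliver it. A toy calculation, not the kernel's tcp_rate_gen():

#include <stdio.h>

/* delivered_now/delivered_at_send count cumulatively delivered packets,
 * mirroring the tp->delivered idea above; interval_secs is the elapsed time. */
static double rate_sample(unsigned long long delivered_now,
                          unsigned long long delivered_at_send,
                          double interval_secs)
{
        if (interval_secs <= 0.0)
                return 0.0;
        return (double)(delivered_now - delivered_at_send) / interval_secs;
}

int main(void)
{
        /* 100 packets delivered over 10 ms -> 10000 packets/sec */
        printf("%.0f pkts/sec\n", rate_sample(1100, 1000, 0.010));
        return 0;
}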
From: Eric Dumazet
This commit adds to the fq module a low_rate_threshold parameter to
insert a delay after all packets if the socket requests a pacing rate
below the threshold.
This helps achieve more precise control of the sending rate with
low-rate paths, especially policers. The basic issue
Refactor the TCP min_rtt code to reuse the new win_minmax library in
lib/win_minmax.c to simplify the TCP code.
This is a pure refactor: the functionality is exactly the same. We
just moved the windowed min code to make TCP easier to read and
maintain, and to allow other parts of the kernel to use
The TCP CUBIC module already uses 64 bytes.
The upcoming TCP BBR module uses 88 bytes.
Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
---
include/net/inet
Export tcp_mss_to_mtu(), so that congestion control modules can use
this to help calculate a pacing rate.
Signed-off-by: Van Jacobson
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Nandita Dukkipati
Signed-off-by: Eric Dumazet
Signed-off-by: Soheil Hassas Yeganeh
--
This commit introduces a generic library to estimate either the min or
max value of a time-varying variable over a recent time window. This
is code originally from Kathleen Nichols. The current form of the code
is from Van Jacobson.
A single struct minmax_sample will track the estimated windowed-m
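A heavily simplified userspace sketch of the idea: a windowed minimum that accepts a better sample immediately and otherwise replaces the held sample once it ages out of the window. (The real library keeps the best three samples so the estimate degrades gracefully as the window rolls over.)

#include <stdio.h>

struct win_min {
        unsigned int t;         /* time the held minimum was recorded */
        unsigned int v;         /* the held windowed minimum */
};

static unsigned int win_min_update(struct win_min *m, unsigned int win,
                                   unsigned int now, unsigned int sample)
{
        if (sample <= m->v || now - m->t > win) {
                m->v = sample;
                m->t = now;
        }
        return m->v;
}

int main(void)
{
        struct win_min m = { 0, 100 };
        const unsigned int samples[] = { 80, 90, 120, 110 };
        unsigned int now;

        for (now = 1; now <= 4; now++)
                printf("t=%u min=%u\n", now,
                       win_min_update(&m, 2, now, samples[now - 1]));
        return 0;
}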
From: Yuchung Cheng
Currently the TCP send buffer expands to twice cwnd, in order to allow
limited transmits in the CA_Recovery state. This assumes that cwnd
does not increase in the CA_Recovery state.
For some congestion control algorithms, like the upcoming BBR module,
if the losses in recovery do n
From: Yuchung Cheng
This commit exports two new fields in struct tcp_info:
tcpi_delivery_rate: The most recent goodput, as measured by
tcp_rate_gen(). If the socket is limited by the sending
application (e.g., no data to send), it reports the highest
measurement instead of the most
Count the number of packets that a TCP connection marks lost.
Congestion control modules can use this loss rate information for more
intelligent decisions about how fast to send.
Specifically, this is used in TCP BBR policer detection. BBR uses a
high packet loss rate as one signal in its policer
From: Soheil Hassas Yeganeh
This commit adds code to track whether the delivery rate represented
by each rate_sample was limited by the application.
Upon each transmit, we store in the is_app_limited field in the skb a
boolean bit indicating whether there is a known "bubble in the pipe":
a point
To allow congestion control modules to use the default TSO auto-sizing
algorithm as one of the ingredients in their own decision about TSO sizing:
1) Export tcp_tso_autosize() so that CC modules can use it.
2) Change tcp_tso_autosize() to allow callers to specify a minimum
number of segments p
This commit implements a new TCP congestion control algorithm: BBR
(Bottleneck Bandwidth and RTT). A detailed description of BBR will be
published in ACM Queue, Vol. 14 No. 5, September-October 2016, as
"BBR: Congestion-Based Congestion Control".
BBR has significantly increased throughput and redu
From: Soheil Hassas Yeganeh
The upcoming change "lib/win_minmax: windowed min or max estimator"
introduces a struct called minmax, which is then included in
include/linux/tcp.h in the upcoming change "tcp: use windowed min
filter library for TCP min_rtt estimation". This would create a
compilatio
tcp: BBR congestion control algorithm
This patch series implements a new TCP congestion control algorithm:
BBR (Bottleneck Bandwidth and RTT). A paper with a detailed
description of BBR will be published in ACM Queue, September-October
2016, as "BBR: Congestion-Based Congestion Control". BBR is wi
On 09/14/2016 07:03 AM, LABBE Corentin wrote:
> On Mon, Sep 12, 2016 at 10:44:51PM +0200, Maxime Ripard wrote:
>>> +static int __maybe_unused sun8i_emac_resume(struct platform_device *pdev)
>>> +{
>>> + struct net_device *ndev = platform_get_drvdata(pdev);
>>> + struct sun8i_emac_priv *priv = n
The series adds the hardware large receive offload (LRO) functions and
the ethtool functions to configure RX flows of HW LRO.
changes since v3:
- Respin the patch against the newer driver
- Move the dts description of hwlro to optional properties
changes since v2:
- Add ndo_fix_features to prevent N
From: Wei Yongjun
Fix the return value check which tests the wrong variable
in cfg_queues_uld().
Fixes: 94cdb8bb993a ("cxgb4: Add support for dynamic allocation of
resources for ULD")
Signed-off-by: Wei Yongjun
---
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 2 +-
1 file changed, 1 inser
Add the dts property for the capability if the hardware supports LRO.
Signed-off-by: Nelson Chang
---
Documentation/devicetree/bindings/net/mediatek-net.txt | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/Documentation/devicetree/bindings/net/mediatek-net.txt
b/Documentati
The code adds ethtool functions to set RX flows for HW LRO. Because the
HW LRO hardware can only recognize the destination IP of TCP/IP RX flows,
the ethtool command to add HW LRO flow is as below:
ethtool -N [devname] flow-type tcp4 dst-ip [ip_addr] loc [0~1]
Otherwise, because the hardware can set
The code adds the hardware large receive offload (LRO) functions as below:
1) PDMA has four RX rings in total: one is the normal ring, and the others can
be configured as LRO rings.
2) Only TCP/IP RX flows can be offloaded. The hardware can set four IP
addresses at most, if the destination IP
Thanks David!
I'll respin the patch and submit the newer version.
-Original Message-
From: David Miller [mailto:da...@davemloft.net]
Sent: Saturday, September 17, 2016 9:46 PM
To: Nelson Chang (張家祥)
Cc: j...@phrozen.org; f.faine...@gmail.com; n...@openwrt.org;
netdev@vger.kernel.org; linu
The XDP_TX action can fail to transmit the frame in case the TX ring
is full or the port is down. In case of TX failure it should drop the
frame, and not, as now, call 'break', which is the same as XDP_PASS.
Fixes: 9ecc2d86171a ("net/mlx4_en: add xdp forwarding and data write support")
Signed-off-by: J
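A mock sketch of the control-flow change: on TX failure, fall through to the drop path instead of breaking out and treating the frame like XDP_PASS. This is a simplified switch, not the mlx4 receive loop:

#include <stdio.h>

enum verdict { VERDICT_PASS, VERDICT_TX, VERDICT_DROP };

static int transmit(int ring_full)
{
        return ring_full ? -1 : 0;      /* mock TX that can fail */
}

static const char *handle(enum verdict v, int ring_full)
{
        switch (v) {
        case VERDICT_TX:
                if (transmit(ring_full) == 0)
                        return "transmitted";
                /* TX failed: fall through and drop the frame */
        case VERDICT_DROP:
                return "dropped";
        case VERDICT_PASS:
        default:
                return "passed to stack";
        }
}

int main(void)
{
        printf("TX, ring ok:   %s\n", handle(VERDICT_TX, 0));
        printf("TX, ring full: %s\n", handle(VERDICT_TX, 1));
        return 0;
}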
On Fri, 16 Sep 2016 13:43:50 -0700
Brenden Blanco wrote:
> On Fri, Sep 16, 2016 at 10:36:12PM +0200, Jesper Dangaard Brouer wrote:
> > The XDP_TX action can fail transmitting the frame in case the TX ring
> > is full or port is down. In case of TX failure it should drop the
> > frame, and not as
From: Alexei Starovoitov
Date: Thu, 15 Sep 2016 13:00:28 -0700
> Similar to geneve, vxlan, gre tunnels implement 'collect metadata' mode
> in ipip, ipip6, ip6ip6 tunnels.
Series applied, thanks.
From: Julia Lawall
Date: Thu, 15 Sep 2016 22:23:23 +0200
> Constify net_device_ops structures.
All applied, thanks.
From: David Ahern
Date: Thu, 15 Sep 2016 10:13:47 -0700
> No longer used after d66f6c0a8f3c0 ("net: ipv4: Remove l3mdev_get_saddr")
>
> Signed-off-by: David Ahern
Applied.
From: David Ahern
Date: Thu, 15 Sep 2016 10:18:45 -0700
> No longer used after e0d56fdd73422 ("net: l3mdev: remove redundant calls")
>
> Signed-off-by: David Ahern
Applied.
From: Alan
Date: Thu, 15 Sep 2016 18:51:25 +0100
> (As asked by Dave in Februrary)
>
> Signed-off-by: Alan Cox
Applied.
From: Eric Dumazet
Date: Thu, 15 Sep 2016 09:33:02 -0700
> From: Eric Dumazet
>
> With large BDP TCP flows and lossy networks, it is very important
> to keep a low number of skbs in the write queue.
>
> RACK and SACK processing can perform a linear scan of it.
>
> We should avoid putting any
From: Marcelo Ricardo Leitner
Date: Thu, 15 Sep 2016 15:02:38 -0300
> This function actually operates on u32 yet its parameters were declared
> as u16, causing integer truncation upon calling.
>
> Note in patch context that ADDIP_SERIAL_SIGN_BIT is already 32 bits.
>
> Signed-off-by: Marcelo Ri
From: Phil Turnbull
Date: Thu, 15 Sep 2016 12:41:44 -0400
> skb is not freed if newsk is NULL. Rework the error path so free_skb is
> unconditionally called on function exit.
>
> Fixes: c3ea9fa27413 ("[IrDA] af_irda: IRDA_ASSERT cleanups")
> Signed-off-by: Phil Turnbull
Applied.
From: Eric Dumazet
Date: Thu, 15 Sep 2016 08:12:33 -0700
> From: Eric Dumazet
>
> If a TCP socket gets a large write queue, an overflow can happen
> in a test in __tcp_retransmit_skb() preventing all retransmits.
>
> The flow then stalls and resets after timeouts.
>
> Tested:
>
> sysctl -w n
From: Eric Dumazet
Date: Thu, 15 Sep 2016 08:48:46 -0700
> From: Eric Dumazet
>
> A malicious TCP receiver, sending SACK, can force the sender to split
> skbs in write queue and increase its memory usage.
>
> Then, when socket is closed and its write queue purged, we might
> overflow sk_forwar
From: Filipe Manco
Date: Thu, 15 Sep 2016 17:10:46 +0200
> In case of error during netback_probe() (e.g. an entry missing on the
> xenstore) netback_remove() is called on the new device, which will set
> the device backend state to XenbusStateClosed by calling
> set_backend_state(). However, the
From: Kalle Valo
Date: Thu, 15 Sep 2016 18:09:21 +0300
> here's the first pull request for 4.9. The ones I want to point out are
> the FIELD_PREP() and FIELD_GET() macros added to bitfield.h, which are
> reviewed by Linus, and make it possible to remove util.h from mt7601u.
>
> Also we have new
From: Tariq Toukan
Date: Thu, 15 Sep 2016 16:08:35 +0300
> In this series, we refactor our Striding RQ receive-flow to always use
> fragmented WQEs (Work Queue Elements) using order-0 pages, omitting the
> flow that allocates and splits high-order pages which would fragment
> and deplete high-ord
From: Nelson Chang
Date: Wed, 14 Sep 2016 13:58:56 +0800
> The series add the large receive offload (LRO) functions by hardware and
> the ethtool functions to configure RX flows of HW LRO.
>
> changes since v2:
> - Add ndo_fix_features to prevent NETIF_F_LRO off while RX flow is programmed
> - R
From: Jamal Hadi Salim
Date: Thu, 15 Sep 2016 06:49:54 -0400
> +static int __init ifetc_index_init_module(void)
> +{
> + pr_emerg("Loaded IFE tc_index\n");
...
> +static void __exit ifetc_index_cleanup_module(void)
> +{
> + pr_emerg("Unloaded IFE tc_index\n");
This looks like some lefto
On Wed, Sep 14, 2016 at 04:03:04PM +0200, LABBE Corentin wrote:
> > > +static int __maybe_unused sun8i_emac_suspend(struct platform_device
> > > *pdev, pm_message_t state)
> > > +{
> > > + struct net_device *ndev = platform_get_drvdata(pdev);
> > > + struct sun8i_emac_priv *priv = netdev_priv(ndev
On Fri, Sep 16, 2016 at 5:38 PM, kbuild test robot wrote:
> Hi Yuchung,
>
> [auto build test WARNING on net-next/master]
> All warnings (new ones prefixed by >>):
>
>In file included from net/ipv4/route.c:103:0:
>>> include/net/tcp.h:769:11: warning: 'packed' attribute ignored for field of
>>
> On Sep 16, 2016, at 4:05 PM, Jiri Pirko wrote:
>
> From: Nogah Frankel
>
> Add a nested attribute of offload stats to if_stats_msg
> named IFLA_STATS_LINK_OFFLOAD_XSTATS.
> Under it, add SW stats, meaning stats only per packets that went via
> slowpath to the cpu, named IFLA_OFFLOAD_XSTATS_C
> On Sep 16, 2016, at 4:05 PM, Jiri Pirko wrote:
>
> From: Nogah Frankel
>
> Add a new ndo to return statistics for offloaded operation.
> Since there can be many different offloaded operation with many
> stats types, the ndo gets an attribute id by which it knows which
> stats are wanted. The
On Thursday, September 09/15/16, 2016 at 07:27:24 -0700, John Fastabend wrote:
> On 16-09-13 04:42 AM, Rahul Lakkireddy wrote:
> > Parse information sent by u32 into internal filter specification.
> > Add support for parsing several fields in IPv4, IPv6, TCP, and UDP.
> >
> > Signed-off-by: Rahul
From: David Howells
Date: Sat, 17 Sep 2016 07:26:01 +0100
> Add CONFIG_AF_RXRPC_IPV6 and make the IPv6 support code conditional on it.
> This is then made conditional on CONFIG_IPV6.
>
> Without this, the following can be seen:
>
>net/built-in.o: In function `rxrpc_init_peer':
>>> peer_obje