Thanks Tobin. I will fold these changes in.
> On May 3, 2018, at 12:19 AM, Tobin C. Harding wrote:
>
> On Wed, May 02, 2018 at 04:20:30PM -0700, Song Liu wrote:
>> This new test captures stackmap with build_id with hardware event
>> PERF_COUNT_HW_CPU_CYCLES.
>>
>> Because we only support one i
This series contains some bugfixes, mostly minor, though one
is worthy of a stable backport I think - tagged with Fixes and Cc: stable.
Then there are improvements to walking, which have been discussed
to some degree already.
Finally a code simplification which I think is correct...
Thanks,
NeilBrown
print_ht in rhashtable_test calls rht_dereference() with neither
RCU protection nor the mutex held. This triggers an RCU warning.
So take the mutex to silence the warning.
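The shape of the fix, sketched (hedged: the real hunk is elided below, and the surrounding print_ht() body is assumed from lib/test_rhashtable.c):

	struct rhashtable *ht = &rhlt->ht;

	/* Holding ht->mutex makes rht_dereference() legal without an
	 * RCU read-side critical section. */
	mutex_lock(&ht->mutex);
	/* ... walk the buckets with rht_dereference() and print ... */
	mutex_unlock(&ht->mutex);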
Signed-off-by: NeilBrown
---
lib/test_rhashtable.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/test_rhashtable.
This "feature" is unused, undocumented, and untested and so
doesn't really belong. If a use for the nulls marker
is found, all this code would need to be reviewed to
ensure it works as required. It would be just as easy to
just add the code if/when it is needed instead.
This patch actually fixes
Rather than borrowing one of the bucket locks to
protect ->future_tbl updates, use cmpxchg().
This gives more freedom to change how bucket locking
is implemented.
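The cmpxchg pattern described would look roughly like this (a sketch, not the elided hunk):

	/* Publish new_tbl as the successor only if nobody else already
	 * installed one - no bucket lock needed. */
	if (cmpxchg(&old_tbl->future_tbl, NULL, new_tbl) != NULL)
		return -EEXIST;	/* a concurrent resize won the race */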
Signed-off-by: NeilBrown
---
lib/rhashtable.c | 17 ++---
1 file changed, 6 insertions(+), 11 deletions(-)
diff --git
If the sequence:
obj = rhashtable_walk_next(iter);
rhashtable_walk_stop(iter);
rhashtable_remove_fast(ht, &obj->head, params);
rhashtable_walk_start(iter);
races with another thread inserting or removing
an object on the same hash chain, a subsequent
rhashtable_walk_next() is not gu
If two threads run nested_table_alloc() at the same time
they could both allocate a new table.
Best case is that one of them will never be freed, leaking memory.
Worst case is that entries get stored there before it leaks,
and they are lost from the table.
So use cmpxchg to detect the race and free th
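The message is cut off, but the pattern it describes is the usual allocate-then-cmpxchg idiom; a hedged sketch with approximate names:

	ntbl = kzalloc(PAGE_SIZE, GFP_ATOMIC);
	if (!ntbl)
		return NULL;
	if (cmpxchg(prev, NULL, ntbl) == NULL)
		return ntbl;
	/* Raced: another table is already published. Ours was never
	 * visible, so it is safe to free; use the winner's instead. */
	kfree(ntbl);
	return rcu_dereference(*prev);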
rhashtable_walk_prev() returns the object returned by
the previous rhashtable_walk_next(), providing it is still in the
table (or was during this grace period).
This works even if rhashtable_walk_stop() and rhashtable_walk_start()
have been called since the last rhashtable_walk_next().
If there ha
rhashtable_try_insert() currently holds a lock on the bucket in
the first table, while also locking buckets in subsequent tables.
This is unnecessary and looks like a hold-over from some earlier
version of the implementation.
As insert and remove always lock a bucket in each table in turn, and
as i
This function has a somewhat confused behavior that is not properly
described by the documentation.
Sometimes it returns the previous object, sometimes it returns the
next one.
Sometimes it changes the iterator, sometimes it doesn't.
This function is not currently used and is not worth keeping, so
On 05/03/2018 05:33 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> This patch adds support for a software provided checksum and GSO_PARTIAL
> segmentation support. With this we can offload UDP segmentation on devices
> that only have partial support for tunnels.
>
> Since we are no lon
On 5/3/18 1:01 AM, Martin KaFai Lau wrote:
> On Wed, May 02, 2018 at 10:30:32PM +0300, Julian Anastasov wrote:
>>
>> Hello,
>>
>> On Wed, 2 May 2018, Martin KaFai Lau wrote:
>>
>>> On Wed, May 02, 2018 at 09:38:43AM +0300, Julian Anastasov wrote:
- initial traffic for port 21 does no
#syz fix: net/smc: simplify wait when closing listen socket
On 4/30/18 6:58 AM, Sukumar Gopalakrishnan wrote:
> VRF: ICMPV6 Echo Reply failed to egress if ingress pkt Src is IPV6
> Global and Dest is IPV6 Link Local.
...
> if (fl6->flowi6_oif == dev->ifindex) {
try adding ' && !rt6_need_strict(saddr)' to the above.
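In context the suggested test would look roughly like this (hedged: only the added condition is from the mail, the surrounding lines are assumed):

	/* Skip the oif override when the source address needs strict
	 * handling (link-local/multicast). */
	if (fl6->flowi6_oif == dev->ifindex && !rt6_need_strict(saddr)) {
		/* existing override logic continues here */
	}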
If it works, add a comment above the l
Commit d0266046ad54 ("x86: Remove FAST_FEATURE_TESTS")
removed X86_FAST_FEATURE_TESTS and made the macro static_cpu_has()
always use the __always_inline function _static_cpu_has().
static_cpu_has() relies on the gcc asm goto construct,
which is not supported by clang.
Issues
======
Currently,
Making sure the headers line up properly with the actual value output of the
command
`cat /proc/net/netlink`
Before the patch:
sk       Eth Pid    Groups Rmem Wmem Dump Locks Drops Inode
33203952 0 8970113 000 20
Add stubs to retrieve a handle to an IPv6 FIB table, fib6_get_table,
a stub to do a lookup in a specific table, fib6_table_lookup, and
a stub for a full route lookup.
The stubs are needed for core bpf code to handle the case when the
IPv6 module is not builtin.
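A hedged sketch of the stub pattern (the eafnosupport_ naming is assumed from net/ipv6/addrconf_core.c):

	/* With CONFIG_IPV6=m or =n, core bpf code reaches these through
	 * the ipv6 stubs; until the module loads they fail safely. */
	static struct fib6_table *eafnosupport_fib6_get_table(struct net *net,
							      u32 id)
	{
		return NULL;
	}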
Signed-off-by: David Ahern
Acked-b
Rename fib6_lookup to fib6_node_lookup to better reflect what it
returns. The fib6_lookup name will be used in a later patch for
an IPv6 equivalent to IPv4's fib_lookup.
Signed-off-by: David Ahern
---
include/net/ip6_fib.h | 6 +++---
net/ipv6/ip6_fib.c | 14 --
net/ipv6/route.c
Similar to IPv4, IPv6 should use the FIB lookup result in the
tracepoint.
Signed-off-by: David Ahern
Acked-by: David S. Miller
---
include/trace/events/fib6.h | 14 +++---
net/ipv6/route.c | 14 ++
2 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/i
ip6_pol_route is used for ingress and egress FIB lookups. Refactor it,
moving the table lookup into a separate fib6_table_lookup that can be
invoked on its own, and export the new function.
ip6_pol_route now calls fib6_table_lookup and uses the result to generate
a dst based rt6_info.
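The resulting split, sketched (signatures approximate; the exported prototype lives in the elided diff):

	/* Table lookup only - no dst allocation - now callable on its own. */
	struct fib6_info *fib6_table_lookup(struct net *net,
					    struct fib6_table *table,
					    int oif, struct flowi6 *fl6,
					    int strict);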
Signed-off-by
Provide a helper for doing a FIB and neighbor lookup in the kernel
tables from an XDP program. The helper provides a fastpath for forwarding
packets. If the packet is a local delivery or for any reason is not a
simple lookup and forward, the packet continues up the stack.
If it is to be forwarded,
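Though the excerpt is cut off, here is a minimal sketch of an XDP program using the helper (struct bpf_fib_lookup and bpf_fib_lookup() are the names this series introduces; the return-value handling below is illustrative only):

	SEC("xdp")
	int xdp_fwd(struct xdp_md *ctx)
	{
		struct bpf_fib_lookup params = {};
		int rc;

		/* ... fill params (family, addresses, ifindex) from the
		 * packet headers ... */
		params.ifindex = ctx->ingress_ifindex;

		rc = bpf_fib_lookup(ctx, &params, sizeof(params), 0);
		if (rc <= 0)
			return XDP_PASS; /* local/complex: up the stack */

		/* Forward: the lookup filled in next-hop MACs and the
		 * egress ifindex; rewrite headers, then redirect. */
		return bpf_redirect(params.ifindex, 0);
	}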
Add IPv6 equivalent to fib_lookup. Does a fib lookup, including rules,
but returns a FIB entry, fib6_info, rather than a dst based rt6_info.
fib6_lookup is anywhere from 140% (MULTIPLE_TABLES config disabled)
to 60% faster than any of the dst based lookup methods (without custom
rules) and 25% fas
Move source address lookup from fib6_rule_action to a helper. It will be
used in a later patch by a second variant for fib6_rule_action.
Signed-off-by: David Ahern
Acked-by: David S. Miller
---
net/ipv6/fib6_rules.c | 52 ++-
1 file changed, 31 in
Rename rt6_multipath_select to fib6_multipath_select and export it.
A later patch wants access to it similar to IPv4's fib_select_path.
Signed-off-by: David Ahern
Acked-by: David S. Miller
---
include/net/ip6_fib.h | 5 +
net/ipv6/route.c | 17 +
2 files changed, 14 in
Simple example of fast-path forwarding. It has a serious flaw:
it does not verify that the egress device index supports XDP forwarding.
If the egress device does not, packets are dropped.
Take this only as a simple example of fast-path forwarding.
Signed-off-by: David Ahern
Acked-by: David S. Miller
---
Provide a helper for doing a FIB and neighbor lookup in the kernel
tables from an XDP program. The helper provides a fastpath for forwarding
packets. If the packet is a local delivery or for any reason is not a
simple lookup and forward, the packet is expected to continue up the stack
for full proc
On Fri, May 04, 2018 at 02:13:57AM +0200, Daniel Borkmann wrote:
> Commit 9ef09e35e521 ("bpf: fix possible spectre-v1 in find_and_alloc_map()")
> converted find_and_alloc_map() over to use array_index_nospec() to sanitize
> map type that user space passes on map creation, and this patch does an
> a
From: Weilin Chang
Support setting the link speed of CN23XX-225 cards (which can do 25Gbps or
10Gbps) via ethtool_ops.set_link_ksettings.
Also fix the function assigned to ethtool_ops.get_link_ksettings to use the
new link_ksettings api completely (instead of partially via
ethtool_convert_legacy
On 05/03/2018 06:52 PM, David Miller wrote:
> From: Eric Dumazet
> Date: Thu, 3 May 2018 17:05:06 -0700
>
>>
>>
>> On 05/02/2018 07:18 AM, Tariq Toukan wrote:
>>>
>>>
>>> On 27/04/2018 1:56 AM, Saeed Mahameed wrote:
>>
LGTM,
Reviewed-by: Saeed Mahameed
>>>
>>> Acked-by: Tar
> -----Original Message-----
> From: David Miller [mailto:da...@davemloft.net]
> Sent: Thursday, May 03, 2018 15:22
> To: syzbot+df0257c92ffd4fcc5...@syzkaller.appspotmail.com
> Cc: Jon Maloy ; linux-ker...@vger.kernel.org;
> netdev@vger.kernel.org; syzkaller-b...@googlegroups.com; tipc-
> discus
On 05/03/2018 05:33 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> This patch makes it so that if a destructor is not present we avoid trying
> to update the skb socket or any reference counting that would be associated
> with the NULL socket and/or descriptor. By doing this we can supp
On 05/03/2018 05:33 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> This patch is meant to be a start at cleaning up some of the UDP GSO
> segmentation code. Specifically we were passing mss and a recomputed
> checksum when we really didn't need to. The function itself could derive
> tha
From: Eric Dumazet
Date: Thu, 3 May 2018 17:05:06 -0700
>
>
> On 05/02/2018 07:18 AM, Tariq Toukan wrote:
>>
>>
>> On 27/04/2018 1:56 AM, Saeed Mahameed wrote:
>
>>> LGTM,
>>>
>>> Reviewed-by: Saeed Mahameed
>>>
>>
>> Acked-by: Tariq Toukan
>>
>> Thanks Eric.
>
> Thanks guys.
>
> I se
On 05/03/2018 05:33 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> We need to record the number of segments that will be generated when this
> frame is segmented. The expectation is that if gso_size is set then
> gso_segs is set as well. Without this some drivers such as ixgbe get
> con
Instead of spelling [hex] BYTES everywhere use DATA as keyword
for generalized value. This will help us keep the messages
concise when longer commands are added in the future. It will
also be useful once BTF support comes. We will only have to
change the definition of DATA.
Signed-off-by: Jakub
Offloads may find host map pointers more useful than map fds.
Map pointers can be used to identify the map, while fds are
only valid within the context of the loading process.
Jump to skip_full_check on error in case verifier log overflow
has to be handled (replace_map_fd_with_map_ptr() prints to the
bpf_event_output() is useful for offloads to add events to BPF
event rings, so export it. Note that the export is placed near the stub
since tracing is optional and kernel/bpf/core.c is always going
to be built.
Signed-off-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Reviewed-by: Jiong Wang
---
ke
The kernel will now replace map fds with actual pointers before
calling the offload prepare. We can identify those pointers
and replace them with NFP table IDs instead of loading the
table ID in code generated for CALL instruction.
This allows us to support having the same CALL being used with
differe
BPF_MAP_TYPE_PERF_EVENT_ARRAY is special as far as offload goes.
The map only holds glue to perf ring, not actual data. Allow
non-offloaded perf event arrays to be used in offloaded programs.
Offload driver can extract the events from HW and put them in
the map for user space to retrieve.
Signed-
Users of BPF sooner or later discover perf_event_output() helpers
and BPF_MAP_TYPE_PERF_EVENT_ARRAY. Dumping this array type is
not possible; however, we can add simple reading of perf events.
Create a new event_pipe subcommand for maps, this sub command
will only work with BPF_MAP_TYPE_PERF_EVENT
Add support for the perf_event_output family of helpers.
The implementation on the NFP will not match the host code exactly.
The state of the host map and rings is unknown to the device, hence
device can't return errors when rings are not installed. The device
simply packs the data into a firmwar
Comments in the verifier refer to free_bpf_prog_info() which
seems to have never existed in tree. Replace it with
free_used_maps().
Signed-off-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
---
kernel/bpf/verifier.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel
For asynchronous events originating from the device, like perf event
output, we need to be able to make sure that objects being referred
to by the FW message are valid on the host. FW events can get queued
and reordered. Even if we had a FW message "barrier" we should still
protect ourselves from
Hi!
This series centres on NFP offload of bpf_event_output(). The
first patch allows perf event arrays to be used by offloaded
programs. Next patch makes the nfp driver keep track of such
arrays to be able to filter FW events referring to maps.
Perf event arrays are not device bound. Having dri
Move the get_possible_cpus() function to shared code. No functional
changes.
Signed-off-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Reviewed-by: Jiong Wang
---
tools/bpf/bpftool/common.c | 58 +-
tools/bpf/bpftool/main.h | 3 +-
tools/bpf/bpftool/map.
This reverts commit 7b4dc3600e48 ("[XFRM]: Do not add a state whose SPI
is zero to the SPI hash.").
Zero SPI is legal and defined for IPcomp.
We shouldn't omit adding the state to the SPI hash because it won't be
possible to delete it or look it up afterward:
__xfrm_state_insert() obviously doesn't a
On 5/2/18 2:56 PM, David Ahern wrote:
> On 5/2/18 2:48 PM, Thomas Winter wrote:
>> Should I look at reworking this? It would be great to have these ECMP routes
>> for other purposes.
>
> Looking at my IPv6 bug list this change is on it -- allowing ECMP routes
> to have a device only hop.
>
> Let
On Thu, 3 May 2018 13:45:30 +0100, Jose Abreu wrote:
> + case TC_SETUP_CLSU32:
> + if (!(priv->dev->hw_features & NETIF_F_HW_TC))
> + ret = -EOPNOTSUPP;
> + else
> + ret = stmmac_tc_setup_cls_u32(priv, priv, type_data);
> +
On 5/3/18 6:45 PM, Daniel Borkmann wrote:
>> +.ret_type = RET_INTEGER,
>> +.arg1_type = ARG_PTR_TO_CTX,
>> +.arg2_type = ARG_PTR_TO_MEM,
>> +.arg3_type = ARG_CONST_SIZE,
>> +.arg4_type = ARG_ANYTHING,
>> +};
>> +
>> +BPF_CALL_4(bpf_skb_fib_lookup, struc
On 05/03/2018 05:53 AM, David Ahern wrote:
[...]
> +
> +BPF_CALL_4(bpf_xdp_fib_lookup, struct xdp_buff *, ctx,
> +struct bpf_fib_lookup *, params, int, plen, u32, flags)
> +{
> + if (plen < sizeof(*params))
> + return -EINVAL;
> +
> + switch (params->family) {
> +#if IS_
From: Alexander Duyck
This patch adds support for UDP segmentation offload. Relatively few
changes were needed to add this support as it functions much like the TCP
segmentation offload.
Signed-off-by: Alexander Duyck
---
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 24 ++-
From: Alexander Duyck
This patch adds support for UDP segmentation offload. Relatively few
changes were needed to add this support as it functions much like the TCP
segmentation offload.
Signed-off-by: Alexander Duyck
---
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 25 +++
These patches are meant to be a follow-up to the following series:
https://patchwork.ozlabs.org/project/netdev/list/?series=42476&archive=both&state=*
These patches enable driver support for the new UDP segmentation offload
feature. For now I am pushing them as an RFC as they haven't been
official
From: Alexander Duyck
We need to record the number of segments that will be generated when this
frame is segmented. The expectation is that if gso_size is set then
gso_segs is set as well. Without this some drivers such as ixgbe get
confused if they attempt to offload this as they record 0 segmen
From: Alexander Duyck
Enable UDP offload as a generic software offload since we can now handle it
for multiple cases including if the checksum isn't present and via
gso_partial in the case of tunnels.
Signed-off-by: Alexander Duyck
---
include/linux/netdev_features.h |3 ++-
1 file changed
From: Alexander Duyck
This patch is meant to be a start at cleaning up some of the UDP GSO
segmentation code. Specifically we were passing mss and a recomputed
checksum when we really didn't need to. The function itself could derive
that information based on the already provided checksum, the len
From: Alexander Duyck
This patch adds support for a software provided checksum and GSO_PARTIAL
segmentation support. With this we can offload UDP segmentation on devices
that only have partial support for tunnels.
Since we are no longer needing the hardware checksum we can drop the checks
in the
From: Alexander Duyck
This patch makes it so that if a destructor is not present we avoid trying
to update the skb socket or any reference counting that would be associated
with the NULL socket and/or descriptor. By doing this we can support
traffic coming from another namespace without any issue
This patch set addresses a number of issues I found while sorting out
enabling UDP GSO Segmentation support for ixgbe/ixgbevf. Specifically there
were a number of issues related to the checksum and such that seemed to
cause either minor irregularities or kernel panics in the case of the
offload req
On 05/03/2018 06:04 PM, Mark Rutland wrote:
> It's possible for userspace to control attr->map_type. Sanitize it when
> using it as an array index to prevent an out-of-bounds value being used
> under speculation.
>
> Found by smatch.
>
> Signed-off-by: Mark Rutland
> Cc: Alexei Starovoitov
> Cc
Commit 9ef09e35e521 ("bpf: fix possible spectre-v1 in find_and_alloc_map()")
converted find_and_alloc_map() over to use array_index_nospec() to sanitize
map type that user space passes on map creation, and this patch does an
analogous conversion for progs in find_prog_type() as it's also passed fro
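The conversion is the same one-liner pattern as the earlier map-type fix; a sketch (the array name is assumed):

	if (type >= ARRAY_SIZE(bpf_prog_types))
		return -EINVAL;
	/* Clamp the index under speculation as well. */
	type = array_index_nospec(type, ARRAY_SIZE(bpf_prog_types));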
On 05/02/2018 07:18 AM, Tariq Toukan wrote:
>
>
> On 27/04/2018 1:56 AM, Saeed Mahameed wrote:
>> LGTM,
>>
>> Reviewed-by: Saeed Mahameed
>>
>
> Acked-by: Tariq Toukan
>
> Thanks Eric.
Thanks guys.
I see this patch ( http://patchwork.ozlabs.org/patch/901336/ ) in
a state I do not know
tg3_free_consistent() calls dma_free_coherent() to free tp->hw_stats
under spinlock and can trigger BUG_ON() in vunmap() because vunmap()
may sleep. Fix it by removing the spinlock and relying on the
TG3_FLAG_INIT_COMPLETE flag to prevent race conditions between
tg3_get_stats64() and tg3_free_cons
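The excerpt is truncated, but the problematic shape it describes is a sleeping free under a spinlock; a hedged before/after sketch:

	/* Before: dma_free_coherent() can reach vunmap(), which may
	 * sleep - illegal under a spinlock. */
	spin_lock_bh(&tp->lock);
	dma_free_coherent(&tp->pdev->dev, sizeof(struct tg3_hw_stats),
			  tp->hw_stats, tp->stats_mapping);
	spin_unlock_bh(&tp->lock);

	/* After: free with no lock held; TG3_FLAG_INIT_COMPLETE keeps
	 * tg3_get_stats64() away from hw_stats in the meantime. */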
On Fri, May 04, 2018 at 01:08:11AM +0200, Daniel Borkmann wrote:
> This set simplifies BPF JITs significantly by moving ld_abs/ld_ind
> to native BPF, for details see individual patches. Main rationale
> is in patch 'implement ld_abs/ld_ind in native bpf'. Thanks!
>
> v1 -> v2:
> - Added missing
On Fri, May 04, 2018 at 12:49:09AM +0200, Daniel Borkmann wrote:
> On 05/02/2018 01:01 PM, Björn Töpel wrote:
> > From: Björn Töpel
> >
> > This patch set introduces a new address family called AF_XDP that is
> > optimized for high performance packet processing and, in upcoming
> > patch sets, ze
This adds a small BPF helper similar to bpf_skb_load_bytes() that
is able to load relative to mac/net header offset from the skb's
linear data. Compared to bpf_skb_load_bytes(), it takes a fifth
argument namely start_header, which is either BPF_HDR_START_MAC
or BPF_HDR_START_NET. This allows for a
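A usage sketch (hedged; the argument order follows the description above):

	struct iphdr iph;

	/* Read the IP header relative to the network header, wherever
	 * the mac header happens to sit in the linear data. */
	if (bpf_skb_load_bytes_relative(skb, 0, &iph, sizeof(iph),
					BPF_HDR_START_NET) < 0)
		return TC_ACT_OK;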
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from arm64 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
arch/arm64/net/bpf_jit_comp
Remove all eBPF tests involving LD_ABS/LD_IND from test_bpf.ko. Reason
is that the eBPF tests from test_bpf module do not go via BPF verifier
and therefore any instruction rewrites from verifier cannot take place.
Therefore, move them into test_verifier which runs out of user space,
so that verfie
The main part of this work is to finally allow removal of LD_ABS
and LD_IND from the BPF core by reimplementing them through native
eBPF instead. Both LD_ABS/LD_IND were carried over from cBPF and
keeping them around in native eBPF caused way more trouble than
actually worth it. To just list some o
No change in functionality, just remove the '__' prefix and replace it
with a 'bpf_' prefix instead. We later on add a couple more helpers
for cBPF and keeping the scheme with '__' is suboptimal there.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
net/core/filter.c | 18 +++
Only sync the header from include/uapi/linux/bpf.h.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
tools/include/uapi/linux/bpf.h | 33 -
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from ppc64 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Naveen N. Rao
Acked-by: Alexei Starovoitov
Tested-
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from x64 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
arch/x86/net/Makefile |
This set simplifies BPF JITs significantly by moving ld_abs/ld_ind
to native BPF, for details see individual patches. Main rationale
is in patch 'implement ld_abs/ld_ind in native bpf'. Thanks!
v1 -> v2:
- Added missing seen_lds_abs in LDX_MSH and use X = A
initially due to being preserved o
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from s390x JIT.
Tested on s390x instance on LinuxONE.
Signed-off-by: Daniel Borkmann
Cc: Michael Holzheu
Ack
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from x32 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
arch/x86/net/bpf_jit_comp32.c
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from mips64 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
arch/mips/net/ebpf_jit.c |
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from sparc64 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
Acked-by: David S. Miller
---
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from arm32 JIT.
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
---
arch/arm/net/bpf_jit_32.c |
On 05/02/2018 01:01 PM, Björn Töpel wrote:
> From: Björn Töpel
>
> This patch set introduces a new address family called AF_XDP that is
> optimized for high performance packet processing and, in upcoming
> patch sets, zero-copy semantics. In this patch set, we have removed
> all zero-copy related
On 5/3/2018 5:11 PM, Or Gerlitz wrote:
> On Thu, May 3, 2018 at 9:37 PM, LR wrote:
>
>> MELLANOX MLX5 core VPI driver
>> M: Saeed Mahameed
>> -M: Matan Barak
>
> Goodbye Matan!
>
> You were a long time developer, maintainer, hacker and a very deeply thinking,
> pleasant, nice and ope
Currently, skb->len and skb->data_len are set to the page size, not the
packet size. This causes the frame check sequence to not be located at
the "end" of the packet, resulting in Ethernet frame check errors. The
driver does work currently, but stricter kernel-facing networking
solutions like O
On Thu, May 3, 2018 at 9:37 PM, LR wrote:
> MELLANOX MLX5 core VPI driver
> M: Saeed Mahameed
> -M: Matan Barak
Goodbye Matan!
You were a long time developer, maintainer, hacker and a very deeply thinking,
pleasant, nice and open person in our team, enjoy your new adventures and than
> I am using kernel 2.6.37, but I think it is not kernel issue, but more
> bad patches done on kernel.
> It is based on TI's kernel, but with some custom modifications on
> driver's switch, to make it work with TI's cpsw switch.
> Seems like someone made some bad patch, I'll continue investigating
On Thu, May 3, 2018 at 11:41 PM, Andrew Lunn wrote:
> On Thu, May 03, 2018 at 11:35:08PM +0300, Ran Shalit wrote:
>> On Wed, May 2, 2018 at 11:56 PM, Andrew Lunn wrote:
>> > On Wed, May 02, 2018 at 11:20:05PM +0300, Ran Shalit wrote:
>> >> Hello,
>> >>
>> >> Is it possible to use switch just like
From: r...@taglang.io
Date: Thu, 03 May 2018 16:38:04 -0400
> Ah, gotcha. Should I make a new thread?
Yes, please do.
Thank you.
On Thu, May 03, 2018 at 11:35:08PM +0300, Ran Shalit wrote:
> On Wed, May 2, 2018 at 11:56 PM, Andrew Lunn wrote:
> > On Wed, May 02, 2018 at 11:20:05PM +0300, Ran Shalit wrote:
> >> Hello,
> >>
> >> Is it possible to use switch just like external real switch,
> >> connecting all ports to the same
Ah, gotcha. Should I make a new thread?
Patch should be properly formatted below.
Thanks,
Rob
Signed-off-by: Rob Taglang
---
drivers/net/ethernet/sun/niu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
syzbot caught an infinite recursion in nsh_gso_segment().
Problem here is that we need to make sure the NSH header is of
reasonable length.
BUG: MAX_LOCK_DEPTH too low!
turning off the locking correctness validator.
depth: 48 max: 48!
48 locks held by syz-executor0/10189:
#0: (ptrval) (
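Beneath the (truncated) splat, the fix described above amounts to a sanity check on the NSH header length before trusting it; a hedged sketch using names from include/net/nsh.h:

	unsigned int nsh_len = nsh_hdr_len(nsh_hdr(skb));

	/* The length field is attacker controlled: reject runts and
	 * headers longer than the data we actually have. */
	if (nsh_len < NSH_BASE_HDR_LEN || !pskb_may_pull(skb, nsh_len))
		goto out;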
On Wed, May 2, 2018 at 11:56 PM, Andrew Lunn wrote:
> On Wed, May 02, 2018 at 11:20:05PM +0300, Ran Shalit wrote:
>> Hello,
>>
>> Is it possible to use switch just like external real switch,
>> connecting all ports to the same subnet ?
>
> Yes. Just bridge all ports/interfaces together and put you
From: Ursula Braun
Date: Thu, 3 May 2018 18:12:35 +0200
> From: Ursula Braun
>
> Dave,
>
> Stefan comes up with an smc implementation for splice(). The first
> three patches are preparational patches, the 4th patch implements
> splice().
Doesn't look too bad :)
Series applied, thanks.
> -----Original Message-----
> This does change the dmesg reporting of link speeds, and in the ixgbe case,
> it changes the reporting from KERN_WARN level to KERN_INFO. If that's an
> issue, let's talk about it. I'm hoping the reduced code size, improved
> functionality, and consistency across dri
Hi Christoph,
On Thu, May 3, 2018 at 8:51 PM, Christoph Hellwig wrote:
> On Thu, May 03, 2018 at 10:46:56AM +0200, Geert Uytterhoeven wrote:
>> Perhaps you can add a new helper (platform_device_register_simple_dma()?)
>> that takes the DMA mask, too?
>> With people setting the mask to kill the WA
1) Various sockmap fixes from John Fastabend (pinned map handling, blocking
in recvmsg, double page put, error handling during redirect failures, etc.)
2) Fix dead code handling in x86-64 JIT, from Gianluca Borello.
3) Missing device put in RDS IB code, from Dag Moxnes.
4) Don't process fast
From: Rob Taglang
Date: Thu, 03 May 2018 11:06:04 -0400
> Currently, skb->len and skb->data_len are set to the page size, not
> the packet size. This causes the frame check sequence to not be
> located at the "end" of the packet resulting in ethernet frame check
> errors. The driver does work cur
From: Bjorn Helgaas
In some cases pcie_get_minimum_link() returned misleading information
because it found the slowest link and the narrowest link without
considering the total bandwidth of the link.
For example, consider a path with these two links:
- 16.0 GT/s x1 link (16.0 * 10^9 * 128 /
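The example is cut off, but the arithmetic it starts is the standard effective-bandwidth computation (128b/130b encoding at 16 GT/s, 8b/10b at 2.5 GT/s); completing it under that assumption:

	16.0 GT/s x1 : 16.0e9 * 128/130 * 1  ~= 15.75 Gb/s ~= 1.97 GB/s
	 2.5 GT/s x16:  2.5e9 *   8/10 * 16  =  32.00 Gb/s  =  4.00 GB/s

Combining the slowest speed (2.5 GT/s) with the narrowest width (x1), as pcie_get_minimum_link() does, suggests only 0.25 GB/s, while the real bottleneck of the path is about 1.97 GB/s.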
This is based on Tal's recent work to unify the approach for reporting PCIe
link speed/width and whether the device is being limited by a slower
upstream link.
The new pcie_print_link_status() interface appeared in v4.17-rc1; see
9e506a7b5147 ("PCI: Add pcie_print_link_status() to log link speed a
From: Bjorn Helgaas
Previously the driver used pcie_get_minimum_link() to warn when the NIC
is in a slot that can't supply as much bandwidth as the NIC could use.
pcie_get_minimum_link() can be misleading because it finds the slowest link
and the narrowest link (which may be different links) wit
From: Bjorn Helgaas
Previously the driver used pcie_get_minimum_link() to warn when the NIC
is in a slot that can't supply as much bandwidth as the NIC could use.
pcie_get_minimum_link() can be misleading because it finds the slowest link
and the narrowest link (which may be different links) wit