On Thu, Sep 24, 2020 at 10:18 AM Andrii Nakryiko wrote:
>
> Fix regression in libbpf, introduced by XDP link change, which causes XDP
> programs to fail to be loaded into kernel due to specified BPF_XDP
> expected_attach_type. While kernel doesn't enforce expected_attach_type for
BPF_PROG_TYPE_XDP
On Thu, Sep 24, 2020 at 10:34 AM Alexei Starovoitov
wrote:
>
> On Thu, Sep 24, 2020 at 10:18 AM Andrii Nakryiko wrote:
> >
> > Fix regression in libbpf, introduced by XDP link change, which causes XDP
> > programs to fail to be loaded into kernel due to specified BPF_XDP
> > expected_attach_type.
Update the link mode tables to include the 100baseFX Full and Half duplex
modes.
Signed-off-by: Dan Murphy
---
ethtool.c | 6 ++
netlink/settings.c | 2 ++
2 files changed, 8 insertions(+)
diff --git a/ethtool.c b/ethtool.c
index ab9b4577cbce..2f71fa92bb09 100644
--- a/ethtool.c
+++ b
Update to kernel commit 55f13311785c
Signed-off-by: Dan Murphy
---
uapi/linux/ethtool.h | 2 ++
uapi/linux/ethtool_netlink.h | 19 ++-
2 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/uapi/linux/ethtool.h b/uapi/linux/ethtool.h
index 847ccd0b1fce..052689bcc
On Thu, 2020-09-24 at 09:03 -0700, Jakub Kicinski wrote:
> On Wed, 23 Sep 2020 22:49:37 -0700 Saeed Mahameed wrote:
> > 2) Another problematic scenario which i see is repeated in many
> > drivers:
> >
> > shutdown/suspend()
> > rtnl_lock()
> > netif_device_detach(); // Mark !present
> >
With its use in BPF the cookie generator can be called very frequently
in particular when used out of cgroup v2 hooks (e.g. connect / sendmsg)
and attached to the root cgroup, for example, when used in v1/v2 mixed
environments. In particular when there's a high churn on sockets in the
system there
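One way to remove a global atomic from this hot path is per-cpu batching.
A minimal sketch of that idea, with illustrative names and batch size
(not necessarily this series' actual code):

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/percpu.h>
#include <linux/types.h>

#define COOKIE_BATCH	4096

struct cookie_gen {
	u64 __percpu	*local_next;	/* next id on this CPU */
	u64 __percpu	*local_left;	/* ids left in this CPU's batch */
	atomic64_t	shared;		/* global batch allocator */
};

static u64 cookie_next(struct cookie_gen *gc)
{
	u64 *next = get_cpu_ptr(gc->local_next);
	u64 *left = this_cpu_ptr(gc->local_left);
	u64 val;

	if (unlikely(*left == 0)) {
		/* Refill: one global atomic per COOKIE_BATCH cookies. */
		*next = (u64)atomic64_add_return(COOKIE_BATCH, &gc->shared)
			- COOKIE_BATCH + 1;
		*left = COOKIE_BATCH;
	}
	val = (*next)++;
	(*left)--;
	put_cpu_ptr(gc->local_next);
	return val;
}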
Add a redirect_neigh() helper as redirect() drop-in replacement
for the xmit side. Main idea for the helper is to be very similar
in semantics to the latter just that the skb gets injected into
the neighboring subsystem in order to let the stack do the work
it knows best anyway to populate the L2 a
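A minimal tc BPF usage sketch, assuming the ifindex-plus-flags helper
signature proposed here (the target ifindex is a made-up constant):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define TGT_IFINDEX 2	/* hypothetical egress device */

SEC("tc")
int tc_redir(struct __sk_buff *skb)
{
	/* Inject the skb into the neigh subsystem so the stack
	 * resolves and fills in the L2 address, then transmits
	 * via TGT_IFINDEX. */
	return bpf_redirect_neigh(TGT_IFINDEX, 0);
}

char LICENSE[] SEC("license") = "GPL";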
Similarly to 5a52ae4e32a6 ("bpf: Allow to retrieve cgroup v1 classid
from v2 hooks"), add a helper to retrieve cgroup v1 classid solely
based on the skb->sk, so it can be used as key as part of BPF map
lookups out of tc from host ns, in particular given the skb->sk is
retained these days when cross
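A sketch of the map-lookup use case described above, assuming the
bpf_skb_cgroup_classid() helper name this series introduces:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);
	__type(value, __u64);
} bytes_per_class SEC(".maps");

SEC("tc")
int count_by_classid(struct __sk_buff *skb)
{
	/* cgroup v1 classid of the socket behind this skb */
	__u64 key = bpf_skb_cgroup_classid(skb);
	__u64 len = skb->len, *val;

	val = bpf_map_lookup_elem(&bytes_per_class, &key);
	if (val)
		*val += len;
	else
		bpf_map_update_elem(&bytes_per_class, &key, &len, BPF_ANY);
	return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";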
This series adds two BPF helpers, that is, one for retrieving the classid
of an skb and another one to redirect via the neigh subsystem, and improves
also the cookie helpers by removing the atomic counter. I've also added
the bpf_tail_call_static() helper to the libbpf API that we've been using
in
For those locations where we use an immediate tail call map index use the
newly added bpf_tail_call_static() helper.
Signed-off-by: Daniel Borkmann
---
tools/testing/selftests/bpf/progs/bpf_flow.c | 12
tools/testing/selftests/bpf/progs/tailcall1.c | 28 +--
tools/testi
Add a small test that exercises the new redirect_neigh() helper for the
IPv4 and IPv6 case.
Signed-off-by: Daniel Borkmann
---
.../selftests/bpf/progs/test_tc_neigh.c | 144 +++
tools/testing/selftests/bpf/test_tc_neigh.sh | 168 ++
2 files changed, 312 insert
Port of tail_call_static() helper function from Cilium's BPF code base [0]
to libbpf, so others can easily consume it as well. We've been using this
in production code for some time now. The main idea is that we guarantee
that the kernel's BPF infrastructure and JIT (here: x86_64) can patch the
JIT
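A usage sketch, assuming the helper lands in bpf_helpers.h as described
(the prog array slots are populated from user space):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 2);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("tc")
int entry(struct __sk_buff *skb)
{
	/* The slot must be a compile-time constant; the helper
	 * enforces this so the JIT can patch a direct jump instead
	 * of an indirect, retpoline-afflicted one. */
	bpf_tail_call_static(skb, &jmp_table, 0);
	/* Only reached if slot 0 is empty. */
	return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";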
If we AND two values together that are known in the 32bit subregs, but not
known in the 64bit registers, we rely on the tnum value to report the 32bit
subreg is known. And do not use mark_reg_known() directly from
scalar32_min_max_and()
Add an AND test to cover the case with known 32bit subreg, but
In BPF_AND and BPF_OR alu cases we have this pattern when the src and dst
tnum is a constant.
1 dst_reg->var_off = tnum_[op](dst_reg->var_off, src_reg.var_off)
2 scalar32_min_max_[op]
3 if (known) return
4 scalar_min_max_[op]
5 if (known)
6 __mark_reg_known(dst_reg,
On 9/24/20 8:21 PM, Daniel Borkmann wrote:
> With its use in BPF the cookie generator can be called very frequently
> in particular when used out of cgroup v2 hooks (e.g. connect / sendmsg)
> and attached to the root cgroup, for example, when used in v1/v2 mixed
> environments. In particular whe
On 9/23/2020 9:25 PM, Jakub Kicinski wrote:
On Fri, 18 Sep 2020 19:06:37 +0300 Moshe Shemesh wrote:
Add devlink reload action to allow the user to request a specific reload
action. The action parameter is optional, if not specified then devlink
driver re-init action is used (backward compatib
On Wed, 23 Sep 2020 11:57:41 +0200
Heiner Kallweit wrote:
> On 03.09.2020 10:41, Petr Tesarik wrote:
> > Hi Heiner,
> >
> > this issue was on the back-burner for some time, but I've got some
> > interesting news now.
> >
> > On Sat, 18 Jul 2020 14:07:50 +0200
> > Heiner Kallweit wrote:
> >
Fixes: f2c17e107900 ("netlink: add netlink handler for gfeatures (-k)")
Cc: Michal Kubecek
Signed-off-by: Ivan Vecera
---
netlink/features.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/netlink/features.c b/netlink/features.c
index 3f1240437350..b2cf57eea660 1006
Memory allocated for 'mask' is not freed when the allocation
for 'value' fails.
Fixes: 81a30f416ec7 ("netlink: add bitset command line parser handlers")
Cc: Michal Kubecek
Signed-off-by: Ivan Vecera
---
netlink/parser.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
On 9/23/2020 9:36 PM, Jakub Kicinski wrote:
On Fri, 18 Sep 2020 19:06:38 +0300 Moshe Shemesh wrote:
Add reload action limit level to demand restrictions on actions.
Reload action limit levels supported:
none (default): No constraints on actions. Driver implementation may
includ
On 9/23/2020 9:42 PM, Jakub Kicinski wrote:
On Fri, 18 Sep 2020 19:06:39 +0300 Moshe Shemesh wrote:
Add reload action stats to hold the history per reload action type and
limit level.
For example, the number of times fw_activate has b
On Thu, Sep 24, 2020 at 08:21:26PM +0200, Daniel Borkmann wrote:
> For those locations where we use an immediate tail call map index use the
> newly added bpf_tail_call_static() helper.
>
> Signed-off-by: Daniel Borkmann
> ---
> tools/testing/selftests/bpf/progs/bpf_flow.c | 12
> tool
On 9/23/2020 9:50 PM, Jakub Kicinski wrote:
On Fri, 18 Sep 2020 19:06:40 +0300 Moshe Shemesh wrote:
Expose devlink reload actions stats to the user through devlink dev
get command.
Examples:
$ devlink dev show
pci/0000:82:00.0:
stats:
reload_action_stats:
driver_reinit 2
On 24.09.2020 21:14, Petr Tesarik wrote:
> On Wed, 23 Sep 2020 11:57:41 +0200
> Heiner Kallweit wrote:
>
>> On 03.09.2020 10:41, Petr Tesarik wrote:
>>> Hi Heiner,
>>>
>>> this issue was on the back-burner for some time, but I've got some
>>> interesting news now.
>>>
>>> On Sat, 18 Jul 2020 14:0
On Wed, Sep 23, 2020 at 6:46 PM Song Liu wrote:
>
> Add .test_run for raw_tracepoint. Also, introduce a new feature that runs
> the target program on a specific CPU. This is achieved by a new flag in
> bpf_attr.test, BPF_F_TEST_RUN_ON_CPU. When this flag is set, the program
> is triggered on cpu w
On Fri, Sep 25, 2020 at 12:45:42AM +0800, Kai-Heng Feng wrote:
> We are seeing the following error after S3 resume:
> [ 704.746874] e1000e 0000:00:1f.6 eno1: Setting page 0x6020
> [ 704.844232] e1000e 0000:00:1f.6 eno1: MDI Write did not complete
> [ 704.902817] e1000e 0000:00:1f.6 eno1: Setting
The meaning of PTR_TO_BTF_ID_OR_NULL differs slightly from other types
denoted with the *_OR_NULL type. For example the types PTR_TO_SOCKET
and PTR_TO_SOCKET_OR_NULL can be used for branch analysis because the
type PTR_TO_SOCKET is guaranteed to _not_ have a null value.
In contrast PTR_TO_BTF_ID a
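For contrast, a sketch of the mandatory null check on such a pointer,
using bpf_skc_to_tcp_sock() (which returns a PTR_TO_BTF_ID_OR_NULL
value) as the example:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

__u32 last_send;	/* read back from user space */

SEC("cgroup_skb/egress")
int needs_null_check(struct __sk_buff *skb)
{
	struct bpf_sock *sk = skb->sk;
	struct tcp_sock *tp;

	if (!sk)
		return 1;
	tp = bpf_skc_to_tcp_sock(sk);	/* PTR_TO_BTF_ID_OR_NULL */
	if (!tp)
		return 1;
	/* Deref is only allowed on the branch proven non-NULL. */
	last_send = tp->lsndtime;
	return 1;
}

char LICENSE[] SEC("license") = "GPL";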
On Wed, Sep 23, 2020 at 6:45 PM Song Liu wrote:
>
> Add bpf_prog_test_run_opts() with support of new fields in bpf_attr.test,
> namely, flags and cpu. Also extend _opts operations to support outputs via
> opts.
>
> Signed-off-by: Song Liu
> ---
> tools/lib/bpf/bpf.c | 31
On Wed, Sep 23, 2020 at 6:55 PM Song Liu wrote:
>
> This test runs test_run for raw_tracepoint program. The test covers ctx
> input, retval output, and running on correct cpu.
>
> Signed-off-by: Song Liu
> ---
> .../bpf/prog_tests/raw_tp_test_run.c | 79 +++
> .../bpf/pr
On 24.09.2020 21:14, Petr Tesarik wrote:
> On Wed, 23 Sep 2020 11:57:41 +0200
> Heiner Kallweit wrote:
>
>> On 03.09.2020 10:41, Petr Tesarik wrote:
>>> Hi Heiner,
>>>
>>> this issue was on the back-burner for some time, but I've got some
>>> interesting news now.
>>>
>>> On Sat, 18 Jul 2020 14:0
On Wed, Sep 23, 2020 at 7:26 PM wrote:
>
> From: Bimmy Pujari
>
> Test xdping measures RTT from xdp using monotonic time helper.
> Extending xdping test to use real time helper function in order
> to verify this helper function.
>
> Signed-off-by: Bimmy Pujari
> ---
This is exactly the use of R
On Thu, Sep 24, 2020 at 8:21 AM John Fastabend wrote:
>
> Andrii Nakryiko wrote:
> > Refactor internals of struct btf to remove assumptions that BTF header, type
> > data, and string data are laid out contiguously in memory in a single
> > memory allocation. Now we have three separate pointers
On Thu, Sep 24, 2020 at 8:56 AM John Fastabend wrote:
>
> Andrii Nakryiko wrote:
> > Allow internal BTF representation to switch from default read-only mode, in
> > which raw BTF data is a single non-modifiable block of memory with BTF
> > header,
> > types, and strings laid out sequentially and
On Thu, 24 Sep 2020 22:01:42 +0300 Moshe Shemesh wrote:
> On 9/23/2020 9:25 PM, Jakub Kicinski wrote:
> >> Signed-off-by: Moshe Shemesh
> >> @@ -3971,15 +3972,19 @@ static int mlx4_devlink_reload_up(struct devlink
> >> *devlink,
> >>int err;
> >>
> >>err = mlx4_restart_one_up(pers
On Thu, Sep 24, 2020 at 7:36 AM Toke Høiland-Jørgensen wrote:
>
> Alexei Starovoitov writes:
>
> > On Tue, Sep 22, 2020 at 08:38:38PM +0200, Toke Høiland-Jørgensen
> > wrote:
> >> @@ -746,7 +748,9 @@ struct bpf_prog_aux {
> >> u32 max_rdonly_access;
> >> u32 max_rdwr_access;
> >>
On 9/24/2020 8:36 AM, Jakub Kicinski wrote:
> On Wed, 23 Sep 2020 17:10:30 -0700 Jacob Keller wrote:
>>> - printf("RX negotiated: %s\nTX negotiated: %s\n",
>>> - rx_status ? "on" : "off", tx_status ? "on" : "off");
>>> +
>>> + if (is_json_context()) {
>>> + open_json_objec
On Thu, 24 Sep 2020 22:29:55 +0300 Moshe Shemesh wrote:
> >> @@ -3964,6 +3965,7 @@ static int mlx4_devlink_reload_down(struct devlink
> >> *devlink, bool netns_change,
> >> }
> >>
> >> static int mlx4_devlink_reload_up(struct devlink *devlink, enum
> >> devlink_reload_action action,
> >> +
Possible subject:
PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
On Wed, Sep 23, 2020 at 02:11:26PM -0400, Nitesh Narayan Lal wrote:
> This patch limits the pci_alloc_irq_vectors() max_vecs argument that is
> passed on by the caller based on the online housekeeping CPUs (that are
> mean
On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
> Introduce a new API hk_num_online_cpus(), that can be used to
> retrieve the number of online housekeeping CPUs that are meant to handle
> managed IRQ jobs.
>
> This API is introduced for the drivers that were previously relying
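A sketch of what such a helper could look like, assuming the
housekeeping_cpumask() API with HK_FLAG_MANAGED_IRQ (the series'
actual implementation may differ):

#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

/* Count online CPUs that are allowed to handle managed IRQs. */
static inline unsigned int hk_num_online_cpus(void)
{
	const struct cpumask *hk_mask =
		housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
	unsigned int cpu, n = 0;

	for_each_online_cpu(cpu)
		if (cpumask_test_cpu(cpu, hk_mask))
			n++;
	return n;
}

A caller like pci_alloc_irq_vectors() could then clamp its max_vecs
argument with min_t(unsigned int, max_vecs, hk_num_online_cpus()).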
> On Sep 24, 2020, at 12:56 PM, Andrii Nakryiko
> wrote:
>
> On Wed, Sep 23, 2020 at 6:46 PM Song Liu wrote:
>>
>> Add .test_run for raw_tracepoint. Also, introduce a new feature that runs
>> the target program on a specific CPU. This is achieved by a new flag in
>> bpf_attr.test, BPF_F_TES
On Thu, Sep 24, 2020 at 11:22 AM Daniel Borkmann wrote:
>
> Port of tail_call_static() helper function from Cilium's BPF code base [0]
> to libbpf, so others can easily consume it as well. We've been using this
> in production code for some time now. The main idea is that we guarantee
> that the k
Andrii Nakryiko writes:
> On Thu, Sep 24, 2020 at 7:36 AM Toke Høiland-Jørgensen
> wrote:
>>
>> Alexei Starovoitov writes:
>>
>> > On Tue, Sep 22, 2020 at 08:38:38PM +0200, Toke Høiland-Jørgensen
>> > wrote:
>> >> @@ -746,7 +748,9 @@ struct bpf_prog_aux {
>> >> u32 max_rdonly_acces
On Thu, 24 Sep 2020 12:06:37 +0530 Rohit Maheshwari wrote:
> + if (chcr_setup_connection(sk, tx_info))
> + goto put_module;
> +
> + /* Wait for reply */
> + wait_for_completion_timeout(&tx_info->completion, 30 * HZ);
> + if (tx_info->open_pending)
> + goto pu
>> > I think I will just start marking patches as changes-requested when I see
>> > that
>> > they break tests without replying and without reviewing.
>> > Please respect reviewer's time.
>>
>> That is completely fine if the tests are working in the first place. And
>> even when they're not (lik
On Thu, 24 Sep 2020 12:28:45 +0530 Rohit Maheshwari wrote:
> BUG: kernel NULL pointer dereference, address: 00000000000000b8
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 8008b6fef067 P4D 8008b6fef067 PUD 8b6fe6067 PMD 0
> Oops: 0000 [#1
On 9/24/20 4:45 PM, Bjorn Helgaas wrote:
> Possible subject:
>
> PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
Will switch to this.
>
> On Wed, Sep 23, 2020 at 02:11:26PM -0400, Nitesh Narayan Lal wrote:
>> This patch limits the pci_alloc_irq_vectors, max_vecs argument that is
>> pas
On 9/24/20 10:26 AM, Brian J. Murrell wrote:
> On Thu, 2020-09-24 at 10:15 -0600, David Ahern wrote:
>>
>> check your routes for a prohibit entry:
>
> I don't have any prohibit entries
>
perf record -e fib6:* -g -- ip route get 2001:4860:4860::8844
perf script
It's a config problem somewhere.
On 9/24/20 4:47 PM, Bjorn Helgaas wrote:
> On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
>> Introduce a new API hk_num_online_cpus(), that can be used to
>> retrieve the number of online housekeeping CPUs that are meant to handle
>> managed IRQ jobs.
>>
>> This API is introdu
On Thu, 24 Sep 2020 13:20:25 +0530 Rohit Maheshwari wrote:
> At first when sendpage gets called, if there is more data, 'more' in
> tls_push_data() gets set which later sets pending_open_record_frags, but
> when there is no more data in file left, and last time tls_push_data()
> gets called, pendin
Alexei Starovoitov writes:
>> +struct mutex tgt_mutex; /* protects tgt_* pointers below, *after* prog
>> becomes visible */
>> +struct bpf_prog *tgt_prog;
>> +struct bpf_trampoline *tgt_trampoline;
>> bool verifier_zext; /* Zero extensions has been inserted by verifier. */
>>
On Thu, Sep 24, 2020 at 2:24 PM Toke Høiland-Jørgensen wrote:
>
> Andrii Nakryiko writes:
>
> > On Thu, Sep 24, 2020 at 7:36 AM Toke Høiland-Jørgensen
> > wrote:
> >>
> >> Alexei Starovoitov writes:
> >>
> >> > On Tue, Sep 22, 2020 at 08:38:38PM +0200, Toke Høiland-Jørgensen
> >> > wrot
On 9/24/20 8:58 PM, Eric Dumazet wrote:
On 9/24/20 8:21 PM, Daniel Borkmann wrote:
[...]
diff --git a/include/linux/cookie.h b/include/linux/cookie.h
new file mode 100644
index ..2488203dc004
--- /dev/null
+++ b/include/linux/cookie.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GP
On 9/24/20 9:25 PM, Maciej Fijalkowski wrote:
On Thu, Sep 24, 2020 at 08:21:26PM +0200, Daniel Borkmann wrote:
For those locations where we use an immediate tail call map index use the
newly added bpf_tail_call_static() helper.
Signed-off-by: Daniel Borkmann
---
tools/testing/selftests/bpf/p
From: Bimmy Pujari
The existing bpf helper functions for getting a timestamp return the
time elapsed since system boot. This timestamp is not particularly
useful where an epoch timestamp is required, or when more than one
server is involved and time synchronization is required. Instead, you
want to use CLOCK_REALTIME,
whic
On 9/24/20 12:21 PM, Daniel Borkmann wrote:
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 0f913755bcba..19caa2fc21e8 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -2160,6 +2160,205 @@ static int __bpf_redirect(struct sk_buff *skb, struct
> net_device *dev,
>
On 9/24/20 4:07 PM, bimmy.puj...@intel.com wrote:
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index a22812561064..198e69a6508d 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -3586,6 +3586,13 @@ union bpf_attr {
> * the data in *dst
On 9/24/20 10:53 PM, Andrii Nakryiko wrote:
On Thu, Sep 24, 2020 at 11:22 AM Daniel Borkmann wrote:
Port of tail_call_static() helper function from Cilium's BPF code base [0]
to libbpf, so others can easily consume it as well. We've been using this
in production code for some time now. The mai
On 9/25/20 12:12 AM, David Ahern wrote:
On 9/24/20 12:21 PM, Daniel Borkmann wrote:
diff --git a/net/core/filter.c b/net/core/filter.c
index 0f913755bcba..19caa2fc21e8 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2160,6 +2160,205 @@ static int __bpf_redirect(struct sk_buff *skb, st
Andrii Nakryiko writes:
> On Thu, Sep 24, 2020 at 2:24 PM Toke Høiland-Jørgensen
> wrote:
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Sep 24, 2020 at 7:36 AM Toke Høiland-Jørgensen
>> > wrote:
>> >>
>> >> Alexei Starovoitov writes:
>> >>
>> >> > On Tue, Sep 22, 2020 at 08:38:38PM +0200, T
From: Priyaranjan Jha
Currently, we use the length of the DSACKed range to compute the number
of delivered packets. If the sequence range in a DSACK is corrupted, we
can get a bogus dsacked/acked count, and a bogus cwnd.
This patch puts bounds on the DSACKed range to skip the update of data
delivery and spurious retransmission
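A sketch of the kind of sanity bound described, using tp->max_window
as an assumed upper limit (illustrative, not the patch's exact check):

#include <net/tcp.h>

/* Reject a DSACK block whose length is implausible before using it
 * to update delivery accounting. */
static bool dsack_block_sane(const struct tcp_sock *tp,
			     u32 start_seq, u32 end_seq)
{
	if (!after(end_seq, start_seq))
		return false;
	/* A range larger than the largest window we ever advertised
	 * cannot be a valid duplicate SACK. */
	return end_seq - start_seq <= tp->max_window;
}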
On Mon, 2020-09-21 at 14:44 -0700, Jakub Kicinski wrote:
> On Sat, 19 Sep 2020 07:23:58 + Brown, Aaron F wrote:
> > > From: Intel-wired-lan On
> > > Behalf Of Jakub
> > > Kicinski
> > > Sent: Tuesday, July 21, 2020 6:27 PM
> > > To: da...@davemloft.net
> > > Cc: netdev@vger.kernel.org; intel-w
On 9/25/20 12:15 AM, David Ahern wrote:
On 9/24/20 4:07 PM, bimmy.puj...@intel.com wrote:
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index a22812561064..198e69a6508d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3586,6 +3586,13 @@ union bpf_attr {
On Thu, 24 Sep 2020 22:25:46 + Nguyen, Anthony L wrote:
> On Mon, 2020-09-21 at 14:44 -0700, Jakub Kicinski wrote:
> > Ah, good catch, thanks! Please adjust in your tree or I can send a
> > follow up with other patches I still have queued.
>
> Hi Jakub,
>
> It'd be great if you could adjust
On Thu, Sep 24, 2020 at 3:20 PM Toke Høiland-Jørgensen wrote:
>
> Andrii Nakryiko writes:
>
> > On Thu, Sep 24, 2020 at 2:24 PM Toke Høiland-Jørgensen
> > wrote:
> >>
> >> Andrii Nakryiko writes:
> >>
> >> > On Thu, Sep 24, 2020 at 7:36 AM Toke Høiland-Jørgensen
> >> > wrote:
> >> >>
> >> >>
Jarod Wilson wrote:
>On Tue, Sep 22, 2020 at 8:01 PM Stephen Hemminger
> wrote:
>>
>> On Tue, 22 Sep 2020 16:47:07 -0700
>> Jay Vosburgh wrote:
>>
>> > Stephen Hemminger wrote:
>> >
>> > >On Tue, 22 Sep 2020 09:37:30 -0400
>> > >Jarod Wilson wrote:
>> > >
>> > >> By default, enable retaining a
On Wed, 23 Sep 2020 18:01:04 -0700 David Awogbemila wrote:
> + info->skb = skb;
double space
> + addr = dma_map_single(tx->dev, skb->data, len, DMA_TO_DEVICE);
> + if (unlikely(dma_mapping_error(tx->dev, addr))) {
> + priv->dma_mapping_error++;
> + goto drop;
On Thu, Sep 24, 2020 at 05:39:07PM -0400, Nitesh Narayan Lal wrote:
>
> On 9/24/20 4:45 PM, Bjorn Helgaas wrote:
> > Possible subject:
> >
> > PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
>
> Will switch to this.
>
> > On Wed, Sep 23, 2020 at 02:11:26PM -0400, Nitesh Narayan Lal wro
On Wed, 23 Sep 2020 18:01:03 -0700 David Awogbemila wrote:
> This patch lets the driver reuse buffers that have been freed by the
> networking stack.
>
> In the raw addressing case, this allows the driver to avoid allocating new
> buffers.
> In the qpl case, the driver can avoid copies.
>
> Signed-o
Add .test_run for raw_tracepoint. Also, introduce a new feature that runs
the target program on a specific CPU. This is achieved by a new flag in
bpf_attr.test, BPF_F_TEST_RUN_ON_CPU. When this flag is set, the program
is triggered on cpu with id bpf_attr.test.cpu. This feature is needed for
BPF pr
Add bpf_prog_test_run_opts() with support of new fields in bpf_attr.test,
namely, flags and cpu. Also extend _opts operations to support outputs via
opts.
Signed-off-by: Song Liu
---
tools/lib/bpf/bpf.c | 31 +++
tools/lib/bpf/bpf.h | 26 ++
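A usage sketch of the opts-based API together with the new flag (field
and flag names as introduced by this series):

#include <linux/bpf.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int run_on_cpu(int prog_fd, int cpu)
{
	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
		.flags = BPF_F_TEST_RUN_ON_CPU,
		.cpu = cpu,
	);
	int err;

	err = bpf_prog_test_run_opts(prog_fd, &opts);
	if (err)
		return err;
	/* opts.retval carries the program's return value. */
	return opts.retval;
}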
This set enables BPF_PROG_TEST_RUN for raw_tracepoint type programs. This
set also enables running the raw_tp program on a specific CPU. This feature
can be used by user space to trigger programs that access percpu resources,
e.g. perf_event, percpu variables.
Changes v4 => v5:
1. Fail test_run wit
This test runs test_run for raw_tracepoint program. The test covers ctx
input, retval output, and running on correct cpu.
Signed-off-by: Song Liu
---
.../bpf/prog_tests/raw_tp_test_run.c | 98 +++
.../bpf/progs/test_raw_tp_test_run.c | 24 +
2 files changed,
On Wed, 23 Sep 2020 18:01:01 -0700 David Awogbemila wrote:
> @@ -518,6 +521,49 @@ int gve_adminq_describe_device(struct gve_priv *priv)
> priv->rx_desc_cnt = priv->rx_pages_per_qpl;
> }
> priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
> + dev_o
Andrii Nakryiko writes:
>> [root@(none) bpf]# ./test_progs -t map_in_map
>> test_lookup_update:PASS:skel_open 0 nsec
>> test_lookup_update:PASS:skel_attach 0 nsec
>> test_lookup_update:PASS:inner1 0 nsec
>> test_lookup_update:PASS:inner2 0 nsec
>> test_lookup_update:PASS:inner1 0 nsec
>> test_loo
Andrii Nakryiko writes:
> On Wed, Sep 23, 2020 at 6:08 PM Alexei Starovoitov
> wrote:
>>
>> On Tue, Sep 22, 2020 at 08:38:45PM +0200, Toke Høiland-Jørgensen wrote:
>> > -const struct bench bench_trig_fmodret = {
>> > - .name = "trig-fmodret",
>> > - .validate = trigger_validate,
>> > -
Since commit cfde141ea3faa30e ("mptcp: move option parsing into
mptcp_incoming_options()"), the 3rd function argument is no longer used.
Signed-off-by: Florian Westphal
---
include/net/mptcp.h | 6 ++
net/ipv4/tcp_input.c | 4 ++--
net/mptcp/options.c | 3 +--
3 files changed, 5 insertions
Hi Horatiu,
On Thu, Apr 23, 2020 at 10:29:48AM +0200, Horatiu Vultur wrote:
> > > +static const struct vcap_props vcap_is2 = {
> > > + .name = "IS2",
> > > + .tg_width = 2,
> > > + .sw_count = 4,
> > > + .entry_count = VCAP_IS2_CNT,
> > > + .entry_words = BITS_TO_32BI
On 9/24/20 6:59 PM, Bjorn Helgaas wrote:
> On Thu, Sep 24, 2020 at 05:39:07PM -0400, Nitesh Narayan Lal wrote:
>> On 9/24/20 4:45 PM, Bjorn Helgaas wrote:
>>> Possible subject:
>>>
>>> PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
>> Will switch to this.
>>
>>> On Wed, Sep 23, 2020 at
On Wed, 23 Sep 2020 21:16:27 -0700 Florian Fainelli wrote:
> While we should always make sure that we specify a valid VLAN protocol
> to vlan_proto_idx(), killing the machine when an invalid value is
> specified is too harsh and not helpful for debugging. All callers are
> capable of dealing with a
On Thu, 24 Sep 2020 08:27:22 +0200 Wilken Gottwalt wrote:
> Reposted and added netdev as suggested by Jakub Kicinski.
Thanks!
> ---
If you want to add a comment like the above you need to place it under
the '---' which git generates. Git removes everything after those lines.
With the patch as po
On 9/24/2020 4:46 PM, Jakub Kicinski wrote:
On Wed, 23 Sep 2020 21:16:27 -0700 Florian Fainelli wrote:
While we should always make sure that we specify a valid VLAN protocol
to vlan_proto_idx(), killing the machine when an invalid value is
specified is too harsh and not helpful for debugging.
This set allows the networking prog types to directly read fields from
the in-kernel socket types, e.g. "struct tcp_sock".
Patch 2 has the details on the use case.
v3:
- Pass arg_btf_id instead of fn into check_reg_type() in Patch 1 (Lorenz)
- Move arg_btf_id from func_proto to struct bpf_reg_types in
check_reg_type() checks whether a reg can be used as an arg of a
func_proto. For PTR_TO_BTF_ID, the check is not actually
complete until reg->btf_id is verified to point to a
kernel struct that is acceptable to the func_proto.
Thus, this patch moves the btf_id check into check_reg_type().
"arg_
The previous patch allows the networking bpf prog to use the
bpf_skc_to_*() helpers to get a PTR_TO_BTF_ID socket pointer,
e.g. "struct tcp_sock *". It allows the bpf prog to read all the
fields of the tcp_sock.
This patch changes the bpf_sk_release() and bpf_sk_*cgroup_id()
to take ARG_PTR_TO_BT
There is a constant need to add more fields into the bpf_tcp_sock
for the bpf programs running at tc, sock_ops...etc.
A current workaround could be to use bpf_probe_read_kernel(). However,
besides requiring another helper call to read each field and losing
CO-RE, it is also not as intuitive
This patch changes the bpf_sk_storage_*() to take
ARG_PTR_TO_BTF_ID_SOCK_COMMON such that they will work with the pointer
returned by the bpf_skc_to_*() helpers also.
A micro benchmark has been done on a "cgroup_skb/egress" bpf program
which does a bpf_sk_storage_get(). It was driven by netperf d
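A sketch combining the direct field read with bpf_sk_storage_get() on
the BTF ID pointer (map and program names are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} sk_stg SEC(".maps");

SEC("cgroup_skb/egress")
int egress(struct __sk_buff *skb)
{
	struct bpf_sock *sk = skb->sk;
	struct tcp_sock *tp;
	__u64 *val;

	if (!sk)
		return 1;
	tp = bpf_skc_to_tcp_sock(sk);
	if (!tp)
		return 1;
	/* The tcp_sock BTF ID pointer now works with sk_storage. */
	val = bpf_sk_storage_get(&sk_stg, tp, 0,
				 BPF_SK_STORAGE_GET_F_CREATE);
	if (val)
		*val = tp->lsndtime;	/* direct tcp_sock field read */
	return 1;
}

char LICENSE[] SEC("license") = "GPL";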
The patch tests for:
1. bpf_sk_release() can be called on a tcp_sock btf_id ptr.
2. Ensure the tcp_sock btf_id pointer cannot be used
after bpf_sk_release().
Signed-off-by: Martin KaFai Lau
---
.../selftests/bpf/verifier/ref_tracking.c | 47 +++
1 file changed, 47 inserti
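A sketch of case 1, releasing through the tcp_sock BTF ID pointer
(tuple values are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("tc")
int release_btf_ptr(struct __sk_buff *skb)
{
	struct bpf_sock_tuple tuple = {
		.ipv4 = { .dport = bpf_htons(80) },
	};
	struct bpf_sock *sk;
	struct tcp_sock *tp;

	sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
			       BPF_F_CURRENT_NETNS, 0);
	if (!sk)
		return 0;
	tp = bpf_skc_to_tcp_sock(sk);
	if (!tp) {
		bpf_sk_release(sk);
		return 0;
	}
	/* Releasing via the BTF ID pointer is now accepted; any use
	 * of tp after this point is rejected by the verifier. */
	bpf_sk_release(tp);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";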
This test uses bpf_skc_to_tcp_sock() to get a kernel tcp_sock ptr "ktp".
It accesses ktp->lsndtime and also passes ktp to bpf_sk_storage_get().
It also exercises the bpf_sk_cgroup_id() and bpf_sk_ancestor_cgroup_id()
with the "ktp". To do that, a parent cgroup and a child cgroup are
created. The bp
This patch changes the bpf_tcp_*_syncookie() to take
ARG_PTR_TO_BTF_ID_SOCK_COMMON such that they will work with the pointer
returned by the bpf_skc_to_*() helpers also.
Acked-by: Lorenz Bauer
Signed-off-by: Martin KaFai Lau
---
include/uapi/linux/bpf.h | 4 ++--
net/core/filter.c
This patch uses start_server() and connect_to_fd() from network_helpers.h
to remove the network-testing boilerplate code. epoll is no longer
needed since the timeout has already been taken care of.
Signed-off-by: Martin KaFai Lau
---
.../selftests/bpf/prog_tests/sock_fields.c| 8
The enum tcp_ca_state is available in <linux/tcp.h>.
Remove it from bpf_tcp_helpers.h to avoid a conflict when the bpf prog
needs to include both <linux/tcp.h> and bpf_tcp_helpers.h.
Modify bpf_cubic.c and bpf_dctcp.c to use <linux/tcp.h> instead.
The <linux/stddef.h> is needed by <linux/tcp.h>.
Signed-off-by: Martin KaFai Lau
---
tools/testing/selft
This is a mechanical change to
1. move test_sock_fields.c to prog_tests/sock_fields.c
2. rename progs/test_sock_fields_kern.c to progs/test_sock_fields.c
Minimal change is made to the code itself. Next patch will make
changes to use new ways of writing test, e.g. use skel and global
variables.
S
This patch attaches a classifier prog to the ingress filter.
It exercises the following helpers with different socket pointer
types in different logical branches:
1. bpf_sk_release()
2. bpf_sk_assign()
3. bpf_skc_to_tcp_request_sock(), bpf_skc_to_tcp_sock()
4. bpf_tcp_gen_syncookie, bpf_tcp_check_s
skel is used.
Global variables are used to store the result from bpf prog.
addr_map, sock_result_map, and tcp_sock_result_map are gone.
Instead, global variables listen_tp, srv_sa6, cli_tp, srv_tp,
listen_sk, srv_sk, and cli_sk are added.
Because of that, bpf_addr_array_idx and bpf_result_array_i
This patch changes the bpf_sk_assign() to take
ARG_PTR_TO_BTF_ID_SOCK_COMMON such that they will work with the pointer
returned by the bpf_skc_to_*() helpers also.
The bpf_sk_lookup_assign() is taking ARG_PTR_TO_SOCKET_"OR_NULL". Meaning
it specifically takes a literal NULL. ARG_PTR_TO_BTF_ID_SO
On Thu, 24 Sep 2020, Geliang Tang wrote:
This patch renames addr_signal and the related functions with the explicit
word "add".
Suggested-by: Matthieu Baerts
Suggested-by: Paolo Abeni
Signed-off-by: Geliang Tang
---
net/mptcp/options.c | 14 +++---
net/mptcp/pm.c | 12 ++---
On Thu, 24 Sep 2020, Geliang Tang wrote:
This patch adds a new signal named rm_addr_signal in PM. On the outgoing
path, we call mptcp_pm_should_rm_signal to check if rm_addr_signal has
been set. If it has been, we send out the RM_ADDR option.
Suggested-by: Matthieu Baerts
Suggested-by: Paolo Ab
On Thu, 24 Sep 2020, Geliang Tang wrote:
This patch adds the RM_ADDR option parsing logic:
We parse the incoming options to find if the rm_addr option is received,
and call mptcp_pm_rm_addr_received to schedule PM work to a new status,
named MPTCP_PM_RM_ADDR_RECEIVED.
PM work gets this stat
On Thu, 24 Sep 2020, Geliang Tang wrote:
This patch implements the removal of announced addrs and subflows in PM
netlink.
When the PM netlink removes an address, we traverse all the existing msk
sockets to find the relevant sockets.
We add a new list named anno_list in mptcp_pm_data, to record
On Thu, 24 Sep 2020, Geliang Tang wrote:
This patch adds two new MIBs for RM_ADDR, named MPTCP_MIB_RMADDR and
MPTCP_MIB_RMSUBFLOW: when the RM_ADDR suboption is received, the first
MIB counter is increased; when the local subflow is removed, the second
MIB counter is increased.
Suggested-by: Matthieu