On 11/24/20 11:30 PM, Jakub Kicinski wrote:
> On Tue, 24 Nov 2020 23:25:11 +0100 Daniel Borkmann wrote:
> > On 11/24/20 11:18 PM, Lorenzo Bianconi wrote:
> > > On Fri, 20 Nov 2020 18:05:41 +0100 Lorenzo Bianconi wrote:
> > > > Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
> > > > skb_shared_info
On Tue, 24 Nov 2020 23:25:11 +0100 Daniel Borkmann wrote:
> On 11/24/20 11:18 PM, Lorenzo Bianconi wrote:
> > On Fri, 20 Nov 2020 18:05:41 +0100 Lorenzo Bianconi wrote:
> > > Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
> > > s
On 11/24/20 11:18 PM, Lorenzo Bianconi wrote:
> On Fri, 20 Nov 2020 18:05:41 +0100 Lorenzo Bianconi wrote:
> > Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
> > skb_shared_info area only on the last fragment.
> > Avoid unnecessary xdp_buff initialization in mvneta_rx_swbm
> On Fri, 20 Nov 2020 18:05:41 +0100 Lorenzo Bianconi wrote:
> > Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
> > skb_shared_info area only on the last fragment.
> > Avoid unnecessary xdp_buff initialization in mvneta_rx_swbm routine.
> > T
On Fri, 20 Nov 2020 18:05:41 +0100 Lorenzo Bianconi wrote:
> Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
> skb_shared_info area only on the last fragment.
> Avoid unnecessary xdp_buff initialization in mvneta_rx_swbm routine.
> This is a preliminary series
Lorenzo Bianconi wrote:
> Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
> skb_shared_info area only on the last fragment.
> Avoid unnecessary xdp_buff initialization in mvneta_rx_swbm routine.
> This is a preliminary series to complete xdp multi-buff in m
Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
skb_shared_info area only on the last fragment. The leftover cache miss in
mvneta_swbm_rx_frame will be addressed by introducing the mb bit in the
xdp_buff/xdp_frame struct.
Signed-off-by: Lorenzo Bianconi
---
drivers/net/ethernet/marvell
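The idea in the commit message above can be sketched in plain userspace C. All struct and function names below are illustrative stand-ins, not the actual mvneta/xdp definitions: fragment metadata accumulates in a stack-local structure, and the buffer's tail area is written exactly once, when the last descriptor of the frame is seen.

```c
#include <string.h>

#define MAX_FRAGS 16

struct frag { void *page; unsigned int len; };

/* stand-in for skb_shared_info; built on the caller's stack */
struct shared_info_stub {
	unsigned int nr_frags;
	struct frag frags[MAX_FRAGS];
};

struct rx_state {
	struct shared_info_stub sinfo;
	char *head;                /* first buffer of the frame */
	unsigned int tailroom_off; /* where the shared-info area lives */
};

/* middle fragments only touch the stack copy, never the buffer tail */
static void rx_add_frag(struct rx_state *rx, void *page, unsigned int len)
{
	struct frag *f = &rx->sinfo.frags[rx->sinfo.nr_frags++];

	f->page = page;
	f->len = len;
}

/* one sync of the whole metadata block on the last descriptor */
static void rx_last_frag(struct rx_state *rx)
{
	memcpy(rx->head + rx->tailroom_off, &rx->sinfo, sizeof(rx->sinfo));
}
```

The point of the shape is that per-fragment work never touches the (likely cold) tail cache line; only the final memcpy does.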
Pass the skb_shared_info pointer from the mvneta_xdp_put_buff caller. This is a
preliminary patch to reduce accesses to the skb_shared_info area and reduce
cache misses.
Remove the napi parameter from the mvneta_xdp_put_buff signature since it
always runs in NAPI context.
Signed-off-by: Lorenzo Bianconi
---
drivers
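A minimal sketch of this refactor (names hypothetical, not the real mvneta functions): the caller already holds the shared-info pointer, so passing it down spares the callee a re-read of the buffer tail, and the always-true napi flag disappears from the signature.

```c
/* toy stand-ins; not the real mvneta types */
struct sinfo { int nr_frags; };
struct buff  { char data[64]; struct sinfo tail_sinfo; };

static int frags_freed;

static void free_frag(void) { frags_freed++; }

static struct sinfo *buff_get_sinfo(struct buff *b)
{
	return &b->tail_sinfo;            /* touches the (cold) tail area */
}

/* before: re-derives sinfo itself and carries an always-true napi flag */
static void put_buff_old(struct buff *b, int napi)
{
	struct sinfo *s = buff_get_sinfo(b);
	int i;

	(void)napi;                       /* only ever called from NAPI */
	for (i = 0; i < s->nr_frags; i++)
		free_frag();
}

/* after: the caller passes the pointer it already holds; no napi arg */
static void put_buff_new(struct buff *b, struct sinfo *s)
{
	int i;

	(void)b;
	for (i = 0; i < s->nr_frags; i++)
		free_frag();
}
```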
Build skb_shared_info on mvneta_rx_swbm stack and sync it to xdp_buff
skb_shared_info area only on the last fragment.
Avoid unnecessary xdp_buff initialization in mvneta_rx_swbm routine.
This is a preliminary series to complete xdp multi-buff support in the mvneta driver.
Lorenzo Bianconi (3):
net
Introduce a skb_shared_info pointer in the bpf_test_finish signature in order
to copy paged data from an xdp multi-buff frame back to the userspace buffer.
Tested-by: Eelco Chaudron
Signed-off-by: Lorenzo Bianconi
---
net/bpf/test_run.c | 58 --
1 file changed
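The copy-back described above can be sketched like this (stand-in types, not the kernel's bpf_test_finish()): flatten the linear part plus each paged fragment into the caller's buffer, bounds-checking as we go.

```c
#include <string.h>

struct frag  { const char *data; unsigned int len; };
struct sinfo { unsigned int nr_frags; struct frag frags[8]; };

/* Flatten linear part + paged fragments into one user-supplied buffer.
 * Returns total bytes written, or 0 if the buffer is too small. */
static unsigned int copy_out(char *dst, unsigned int cap,
			     const char *lin, unsigned int lin_len,
			     const struct sinfo *s)
{
	unsigned int off, i;

	if (lin_len > cap)
		return 0;
	memcpy(dst, lin, lin_len);
	off = lin_len;

	for (i = 0; s && i < s->nr_frags; i++) {
		if (off + s->frags[i].len > cap)
			return 0;
		memcpy(dst + off, s->frags[i].data, s->frags[i].len);
		off += s->frags[i].len;
	}
	return off;
}
```

Passing a NULL sinfo reduces this to the single-buffer case, which mirrors why the pointer can simply be added to the existing signature.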
, however
this causes a problem: we have the skb_shared_info structure at the
end of the Ethernet frame, and that area can be overwritten by the
hardware. Right now we allocate a new sk_buff and copy from the offset
within the 4KB buffer. The CPU is fast enough and this warms up the data
cache
w BPF programs access skb_shared_info->gso_segs
field")
Signed-off-by: Eric Dumazet
Reported-by: syzbot
---
net/core/filter.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index 4e2a79b2fd77f36ba2
On 01/23/2019 06:22 PM, Eric Dumazet wrote:
> This adds the ability to read gso_segs from a BPF program.
>
> v3: Use BPF_REG_AX instead of BPF_REG_TMP for the temporary register,
> as suggested by Martin.
>
> v2: refined Eddie Hao patch to address Alexei feedback.
>
> Signed-off-by: Eric Dumazet
On Wed, Jan 23, 2019 at 09:22:27AM -0800, Eric Dumazet wrote:
> This adds the ability to read gso_segs from a BPF program.
Acked-by: Martin KaFai Lau
ct sk_buff, end),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, end));
+#endif
+	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct skb_shared_info, gso_segs),
+			      si-
On Wed, Jan 23, 2019 at 3:55 AM Daniel Borkmann wrote:
>
> On 01/18/2019 07:42 PM, Martin Lau wrote:
> > On Thu, Jan 17, 2019 at 03:31:57PM -0800, Eric Dumazet wrote:
> >> This adds the ability to read gso_segs from a BPF program.
> >>
> >> v2: refined Eddie Hao patch to address Alexei feedback.
>
BPF.
> Daniel, can BPF_REG_AX be used here as a tmp?
BPF_REG_AX would work in this case, yes. Neither of the above insns are used
in blinding nor would they collide with current verifier rewrites.
>> +*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, B
?
> + *insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_TMP);
> +#else
> + *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, end),
> + si->dst_reg, si->src_reg,
> + offsetof(st
t_reg, si->src_reg,
+		      offsetof(struct sk_buff, end));
+#endif
+	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct skb_shared_info, gso_segs),
+			      si->dst_reg, si->dst_reg,
+
From: Mat Martineau
Date: Fri, 10 Nov 2017 14:03:51 -0800
> ip6_frag_id was only used by UFO, which has been removed.
> ipv6_proxy_select_ident() only existed to set ip6_frag_id and has no
> in-tree callers.
>
> Signed-off-by: Mat Martineau
Applied to net-next, thanks.
---
3 files changed, 33 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 57d712671081..54fe91183a8e 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -500,7 +500,6 @@ struct skb_shared_info {
struct skb_shared_hwtstamps hwtstamps
John Crispin wrote:
> When the flow offloading engine forwards a packet to the DMA it will send
> additional info to the sw path. This includes
> * physical switch port
> * internal flow hash - this is required to populate the correct flow table
>   entry
> * ppe state - this indicates what state th
From: John Crispin
Date: Fri, 21 Jul 2017 19:01:57 +0200
> When the flow offloading engine forwards a packet to the DMA it will
> send additional info to the sw path. This includes
> * physical switch port
> * internal flow hash - this is required to populate the correct flow
>   table entry
> *
patch adds an extra element to
struct skb_shared_info allowing the Ethernet driver's RX NAPI code to store
the required information and make it persistent for the lifecycle of the
skb and its clones.
Signed-off-by: John Crispin
---
include/linux/skbuff.h | 1 +
1 file changed, 1 insertion(+)
diff
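Why the shared area (rather than skb->cb) gives the lifecycle the patch wants can be shown in miniature, with illustrative types only: cb is duplicated per clone, while the shared info is referenced, so a value stored there is visible to every clone.

```c
/* illustrative types only; not the kernel's sk_buff/skb_shared_info */
struct shared { unsigned int hw_flow_hash; int refcnt; };
struct skb    { char cb[48]; struct shared *shinfo; };

/* cb is copied per clone; the shared area is referenced, not copied */
static struct skb clone_skb(struct skb *orig)
{
	struct skb c = *orig;

	c.shinfo->refcnt++;
	return c;
}
```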
> struct skb_shared_info allowing the ethernet drivers RX napi code to store
> the required information and make it persistent for the lifecycle of the
> skb and its clones.
>
> Signed-off-by: John Crispin
> ---
> include/linux/skbuff.h | 1 +
> 1 file changed, 1 inser
In order to make HW flow offloading work in the latest MediaTek silicon we need
to propagate part of the RX DMA descriptor to the upper layers populating
the flow offload engine's HW tables. This patch adds an extra element to
struct skb_shared_info allowing the Ethernet driver's RX NAPI code to store
would like to use 8 out of those 40 bytes to extend
the size of skb->cb.
>
>> If not then may I increase skb_shared_info -- However that would have
>> to be by 64bytes.
>
>
> You will have a very hard time to convince us that this 8 byte field is
> needed on all skb
o do that without causing
> any adverse effects.
>
> Now that we have discovered that there are 40 bytes that can be used
> without any adverse effect, may I increase skb->cb by 8 bytes ?
>
skb->cb is already 48 bytes, not 40.
> If not then may I increase skb_shared_info -
without any adverse effect, may I increase skb->cb by 8 bytes ?
If not then may I increase skb_shared_info -- However that would have
to be by 64bytes.
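The sizing argument here can be made concrete for the common 64-byte cache line: a structure that is already a multiple of the line size pays a whole extra line for even one added byte, which is why any growth effectively comes in 64-byte steps.

```c
/* cache lines occupied by a structure of the given size (64-byte lines) */
static unsigned int cachelines(unsigned int sz)
{
	return (sz + 63) / 64;
}
```

For example, with a 320-byte structure (five lines), adding 8 bytes already costs a sixth line, the same as adding a full 64.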
On Tue, Apr 18, 2017 at 4:55 PM, Eric Dumazet wrote:
> Please do not top post on netdev
>
> On Tue, 2017-04-18 at 16:26 -0700, Code So
e most
> >> common cache lines. Since the alignment calculation will align the
> >> structure with the hw cache line, it seems like we might be wasting
> >> space ?
> >>
> >> skb_shared_info on the other hand is perfectly aligned with a size of 320
> >> bytes.
> >>
> >> Thanks,
> >>
> >
> > The alignment is there.
> > Look at skb_init() code, using SLAB_HWCACHE_ALIGN
e with the hw cache line, it seems like we might be wasting
>> space ?
>>
>> skb_shared_info on the other hand is perfectly aligned with a size of 320
>> bytes.
>>
>> Thanks,
>>
>
> The alignment is there.
> Look at skb_init() code, using SLAB_HWCACHE_ALIGN
--
CS1
bytes -- Why
> is that. I expected it to be a multiple of 32/64 as they are the most
> common cache lines. Since the alignment calculation will align the
> structure with the hw cache line, it seems like we might be wasting
> space ?
>
> skb_shared_info on the other hand is
common cache lines. Since the alignment calculation will align the
structure with the hw cache line, it seems like we might be wasting
space ?
skb_shared_info on the other hand is perfectly aligned with a size of 320 bytes.
Thanks,
--
CS1
From: Alexey Dobriyan
Date: Tue, 11 Apr 2017 12:41:08 +0300
> On Mon, Apr 10, 2017 at 5:43 PM, Eric Dumazet wrote:
>> On Mon, 2017-04-10 at 11:07 +0300, Alexey Dobriyan wrote:
>>> struct skb_shared_info {
>>> - unsigned short _unused;
>>> unsi
On Tue, 2017-04-11 at 12:41 +0300, Alexey Dobriyan wrote:
> On Mon, Apr 10, 2017 at 5:43 PM, Eric Dumazet wrote:
> > On Mon, 2017-04-10 at 11:07 +0300, Alexey Dobriyan wrote:
> >> struct skb_shared_info {
> >> - unsigned short _unused;
> >> unsig
On Mon, Apr 10, 2017 at 5:43 PM, Eric Dumazet wrote:
> On Mon, 2017-04-10 at 11:07 +0300, Alexey Dobriyan wrote:
>> struct skb_shared_info {
>> - unsigned short _unused;
>> unsigned char nr_frags;
>> __u8 tx_flags;
>> u
On Mon, 2017-04-10 at 11:07 +0300, Alexey Dobriyan wrote:
> commit 7f564528a480084e2318cd48caba7aef4a54a77f
> ("skbuff: Extend gso_type to unsigned int.") created padding as first
> field of struct skb_shared_info requiring [R64+imm8] addressing mode
> for all fields.
>
&
commit 7f564528a480084e2318cd48caba7aef4a54a77f
("skbuff: Extend gso_type to unsigned int.") created padding as first
field of struct skb_shared_info requiring [R64+imm8] addressing mode
for all fields.
Patch bubbles up padding, bringing code size down to original levels and
ev
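The layout point can be checked with offsetof. Field names below are illustrative, modeled on the quoted diff: a padding field placed first pushes every member to a non-zero offset, forcing base-plus-displacement addressing, while moving the padding to the end restores a zero offset for the hot first field.

```c
#include <stddef.h>

/* padding first: every member needs a non-zero displacement */
struct with_lead_pad {
	unsigned short _unused;
	unsigned char  nr_frags;
};

/* padding bubbled up (to the end): hot field back at offset 0 */
struct pad_bubbled_up {
	unsigned char nr_frags;
	unsigned char _pad;
};
```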
Hello!
> I still like existing way - it is much simpler (I hope :) to convince
> e1000 developers to fix driver's memory usage
e1000 is not a problem at all. It just has to use pages.
If it is going to use high order allocations, it will suck,
be it order 3 or 2.
> area (does MAX_TCP_HEADER eno
ood.
> The second half of that mail suggested three different solutions,
> all of them creepy. :-)
I still like existing way - it is much simpler (I hope :) to convince
e1000 developers to fix driver's memory usage with help of putting
skb_shared_info() or only pointer into skb.
With pointe
Hello!
> e1000 will setup head/data/tail pointers to point to the area in the
> first sg page.
Maybe.
But I still hope this is not necessary, the driver should be able to do
at least primitive header splitting, in that case the header could
be inlined to skb.
Alternatively, header can be copied
On Monday 14 August 2006 09:50, Herbert Xu wrote:
> On Mon, Aug 14, 2006 at 09:45:53AM +0200, Andi Kleen wrote:
> >
> > Even for 1.5k MTU? (which is still the most common case after all)
>
> Ideally they would stay in kmalloc memory. Could you explain the cache
> colouring problem for 1500-byte
>
> Let it use pages. Someone should start. :-)
>
> High order allocations are disaster in any case.
>
>
> > If we store raw kmalloc buffers, we cannot attach them to an arbitrary
> > skb because of skb_shared_info(). This is true even if we
> > purposefully al
On Mon, Aug 14, 2006 at 09:45:53AM +0200, Andi Kleen wrote:
>
> Even for 1.5k MTU? (which is still the most common case after all)
Ideally they would stay in kmalloc memory. Could you explain the cache
colouring problem for 1500-byte packets?
Cheers,
--
Visit Openswan at http://www.openswan.or
ame colors.
>
> If we went with a fully paged skbs this should be a non-issue, right?
Even for 1.5k MTU? (which is still the most common case after all)
> In a fully paged representation the head would be small which is the
> perfect place to place the skb_shared_info.
If the head is sma
right?
In a fully paged representation the head would be small which is the
perfect place to place the skb_shared_info.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http:
> The e1000 issue is just one example of this, another
> would be any attempt to consolidate the TCP retransmit
> queue data management.
Another reason to move it in the sk_buff would be better cache
coloring? Currently on large/small MTU packets it will be always on
the same colors.
-Andi
loc buffers, we cannot attach them to an arbitrary
> skb because of skb_shared_info(). This is true even if we
> purposefully allocate the necessary head room for these kmalloc based
> buffers.
I still do not see. For non-SG paths, you have to keep header and data
together, you just do
K packets, you
> must give it the whole next power of 2 buffer size for the MTU you
> wish to use.
>
> With skb_shared_info() overhead this becomes a 32K allocation
> in the simplest implementation.
I think is no longer an issue because we've all come to the conclusion
that E
izes, and next hop from 9K is 16K.
It is not possible to tell the chip to only accept 9K packets, you
must give it the whole next power of 2 buffer size for the MTU you
wish to use.
With skb_shared_info() overhead this becomes a 32K allocation
in the simplest implementation.
Whichever hardware pe
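The allocation arithmetic above as a sketch, assuming a 320-byte skb_shared_info purely for illustration: the chip demands the next power-of-two buffer for the MTU, and once the shared-info overhead pushes a 9K-MTU buffer past 16K, the allocation doubles to 32K.

```c
/* smallest power of two >= n (the buffer size the chip demands) */
static unsigned int next_pow2(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}
```

9K alone already rounds to 16K; 16K plus the shared-info tail rounds to 32K.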
ample of this, another
What is this issue?
What's about aggregated tcp queue, I can guess you did not find place
where to add protocol headers, but cannot figure out how adding non-pagecache
references could help.
You would rather want more then one skb_shared_info(): at least two,
one is im
On Tue, Aug 08, 2006 at 04:39:15PM -0700, David Miller ([EMAIL PROTECTED])
wrote:
>
> I'm beginning to think that where we store the
> skb_shared_info() is a weakness of the SKB design.
Food for thoughts - unix sockets can use PAGE_SIZEd chunks of memory
(and they do it almost
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 09:35:24 +0400
> We can separate kmalloced data from page by using internal page structures
> (lru pointers and PG_slab bit).
Yes, it is one idea.
On Tue, Aug 08, 2006 at 05:58:39PM -0700, David Miller ([EMAIL PROTECTED])
wrote:
> From: Herbert Xu <[EMAIL PROTECTED]>
> Date: Wed, 9 Aug 2006 10:36:16 +1000
>
> > I'm not sure whether the problem is where we store skb_shared_info,
> > or the fact that we
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 10:36:16 +1000
> I'm not sure whether the problem is where we store skb_shared_info,
> or the fact that we can't put kmalloc'ed memory into
> skb_shinfo->frags.
That's a good point.
I gue
On Tue, Aug 08, 2006 at 04:39:15PM -0700, David Miller wrote:
>
> I'm beginning to think that where we store the
> skb_shared_info() is a weakness of the SKB design.
I'm not sure whether the problem is where we store skb_shared_info,
or the fact that we can't pu
I'm beginning to think that where we store the
skb_shared_info() is a weakness of the SKB design.
It makes it more difficult to have local memory
management schemes and to just wrap SKB's around
arbitrary pieces of data.
The e1000 issue is just one example of this, another
would be a