Hi Matthew,
[...]
>
> And the contents of this page already came from that device ... if it
> wanted to write bad data, it could already have done so.
>
> > > > (3) The page_pool is optimized for refcnt==1 case, and AFAIK TCP-RX
> > > > zerocopy will bump the refcnt, which means the page_pool wi
On Mon, Apr 19, 2021 at 09:21:55AM -0700, Shakeel Butt wrote:
> On Mon, Apr 19, 2021 at 8:43 AM Ilias Apalodimas
> wrote:
> >
> [...]
> > > Pages mapped into the userspace have their refcnt elevated, so the
> > > page_ref_count() check by the drivers indica
Hi Shakeel,
On Mon, Apr 19, 2021 at 07:57:03AM -0700, Shakeel Butt wrote:
> On Sun, Apr 18, 2021 at 10:12 PM Ilias Apalodimas
> wrote:
> >
> > On Wed, Apr 14, 2021 at 01:09:47PM -0700, Shakeel Butt wrote:
> > > On Wed, Apr 14, 2021 at 12:42 PM Jesper D
Hi Christoph,
On Mon, Apr 19, 2021 at 08:34:41AM +0200, Christoph Hellwig wrote:
> On Fri, Apr 16, 2021 at 04:27:55PM +0100, Matthew Wilcox wrote:
> > On Thu, Apr 15, 2021 at 08:08:32PM +0200, Jesper Dangaard Brouer wrote:
> > > See below patch. Where I swap32 the dma address to satisfy
> > > pag
On Wed, Apr 14, 2021 at 01:09:47PM -0700, Shakeel Butt wrote:
> On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer
> wrote:
> >
> [...]
> > > >
> > > > Can this page_pool be used for TCP RX zerocopy? If yes then PageType
> > > > can not be used.
> > >
> > > Yes it can, since it's going to be
On Sat, Apr 17, 2021 at 09:22:40PM +0100, Matthew Wilcox wrote:
> On Sat, Apr 17, 2021 at 09:32:06PM +0300, Ilias Apalodimas wrote:
> > > +static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t
> > > addr)
> > > +{
> > > + page->dma_add
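The thread above (and Matthew's follow-up patch fixing the struct page layout on 32-bit systems) revolves around storing a 64-bit dma_addr_t across two 32-bit words of struct page. A minimal userspace model of that split, as a hedged sketch — `fake_page` and the `dma_addr[2]` field are illustrative, not the exact upstream layout:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the 32-bit page_pool problem discussed above:
 * a 64-bit DMA address split across two 32-bit words. */
struct fake_page {
	uint32_t dma_addr[2];	/* [0] = low word, [1] = high word */
};

static void set_dma_addr(struct fake_page *p, uint64_t addr)
{
	p->dma_addr[0] = (uint32_t)addr;		/* low 32 bits */
	p->dma_addr[1] = (uint32_t)(addr >> 32);	/* high 32 bits */
}

static uint64_t get_dma_addr(const struct fake_page *p)
{
	return ((uint64_t)p->dma_addr[1] << 32) | p->dma_addr[0];
}
```

The round trip is lossless for any 64-bit address, which is the property the real helpers need on 32-bit systems with a 64-bit dma_addr_t.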
Hi Matthew,
On Sat, Apr 17, 2021 at 03:45:22AM +0100, Matthew Wilcox wrote:
>
> Replacement patch to fix compiler warning.
>
> From: "Matthew Wilcox (Oracle)"
> Date: Fri, 16 Apr 2021 16:34:55 -0400
> Subject: [PATCH 1/2] mm: Fix struct page layout on 32-bit systems
> To: bro...@redhat.com
> Cc
On Wed, Apr 14, 2021 at 12:50:52PM +0100, Matthew Wilcox wrote:
> On Wed, Apr 14, 2021 at 10:10:44AM +0200, Jesper Dangaard Brouer wrote:
> > Yes, indeed! - And very frustrating. It's keeping me up at night.
> > I'm dreaming about 32 vs 64 bit data structures. My fitbit stats tell
> > me that I do
Hi Shakeel,
On Sat, Apr 10, 2021 at 10:42:30AM -0700, Shakeel Butt wrote:
> On Sat, Apr 10, 2021 at 9:16 AM Ilias Apalodimas
> wrote:
> >
> > Hi Matthew
> >
> > On Sat, Apr 10, 2021 at 04:48:24PM +0100, Matthew Wilcox wrote:
> > > On Sat, Apr 10, 2021
Hi Matthew
On Sat, Apr 10, 2021 at 04:48:24PM +0100, Matthew Wilcox wrote:
> On Sat, Apr 10, 2021 at 12:37:58AM +0200, Matteo Croce wrote:
> > This is needed by the page_pool to avoid recycling a page not allocated
> > via page_pool.
>
> Is the PageType mechanism more appropriate to your needs?
+CC Grygorii for the cpsw part as Ivan's email is not valid anymore
Thanks for catching this. Interesting indeed...
On Sat, 10 Apr 2021 at 09:22, Jesper Dangaard Brouer wrote:
>
> On Sat, 10 Apr 2021 03:43:13 +0100
> Matthew Wilcox wrote:
>
> > On Sat, Apr 10, 2021 at 06:45:35AM +0800, kernel t
Hi Matteo,
[...]
> +bool page_pool_return_skb_page(void *data);
> +
> struct page_pool *page_pool_create(const struct page_pool_params *params);
>
> #ifdef CONFIG_PAGE_POOL
> @@ -243,4 +247,13 @@ static inline void page_pool_ring_unlock(struct
> page_pool *pool)
> spin_unlock_b
On Fri, Apr 09, 2021 at 12:29:29PM -0700, Jakub Kicinski wrote:
> On Fri, 9 Apr 2021 22:01:51 +0300 Ilias Apalodimas wrote:
> > On Fri, Apr 09, 2021 at 11:56:48AM -0700, Jakub Kicinski wrote:
> > > On Fri, 2 Apr 2021 20:17:31 +0200 Matteo Croce wrote:
> > > > Co
On Fri, Apr 09, 2021 at 11:56:48AM -0700, Jakub Kicinski wrote:
> On Fri, 2 Apr 2021 20:17:31 +0200 Matteo Croce wrote:
> > Co-developed-by: Jesper Dangaard Brouer
> > Co-developed-by: Matteo Croce
> > Signed-off-by: Ilias Apalodimas
>
> Checkpatch says we need
On Wed, Mar 24, 2021 at 10:28:35AM +0100, Lorenzo Bianconi wrote:
> [...]
> > > diff --git a/drivers/net/ethernet/marvell/mvneta.c
> > > b/drivers/net/ethernet/marvell/mvneta.c
> > > index a635cf84608a..8b3250394703 100644
> > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > +++ b/drivers/net/e
Hi Alexander,
On Tue, Mar 23, 2021 at 08:03:46PM +, Alexander Lobakin wrote:
> From: Ilias Apalodimas
> Date: Tue, 23 Mar 2021 19:01:52 +0200
>
> > On Tue, Mar 23, 2021 at 04:55:31PM +, Alexander Lobakin wrote:
> > > > > > > >
> >
> > [
On Tue, Mar 23, 2021 at 04:55:31PM +, Alexander Lobakin wrote:
> > > > > >
[...]
> > > > >
> > > > > Thanks for the testing!
> > > > > Any chance you can get a perf measurement on this?
> > > >
> > > > I guess you mean perf-report (--stdio) output, right?
> > > >
> > >
> > > Yea,
> > > As hin
On Tue, Mar 23, 2021 at 05:04:47PM +0100, Jesper Dangaard Brouer wrote:
> On Tue, 23 Mar 2021 17:47:46 +0200
> Ilias Apalodimas wrote:
>
> > On Tue, Mar 23, 2021 at 03:41:23PM +, Alexander Lobakin wrote:
> > > From: Matteo Croce
> > > Date
skb_frag_unref
> > users, and 5,6 enable the recycling on two drivers.
> >
> > In the last two patches I reported the improvement I have with the series.
> >
> > The recycling as is can't be used with drivers like mlx5 which do page
> > split,
> > but this is
Hi David,
On Tue, Mar 23, 2021 at 08:57:57AM -0600, David Ahern wrote:
> On 3/22/21 11:02 AM, Matteo Croce wrote:
> > From: Matteo Croce
> >
> > This series enables recycling of the buffers allocated with the page_pool
> > API.
> > The first two patches are just prerequisite to save space in a
[...]
> 6. return last_page
>
> > + /* Remaining pages store in alloc.cache */
> > + list_for_each_entry_safe(page, next, &page_list, lru) {
> > + list_del(&page->lru);
> > + if ((pp_flags & PP_FLAG_DMA_MAP) &&
> > + unlikely(!page_pool_dma
}
>
> /* Track how many pages are held 'in-flight' */
> pool->pages_state_hold_cnt++;
> - trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
> + trace_page_pool_state_hold(pool, first_page,
> pool->pages_state_hold_cnt);
>
> /* When page just alloc'ed is should/must have refcnt 1. */
> - return page;
> + return first_page;
> }
>
> /* For using page_pool replace: alloc_pages() API calls, but provide
> --
> 2.26.2
>
Reviewed-by: Ilias Apalodimas
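The bulk-refill flow reviewed above hands the first page of a freshly allocated batch to the caller and parks the remainder in the pool's alloc cache. A hedged userspace model of that idea — the sizes and names (`fake_pool`, `BULK`, `CACHE_SIZE`) are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace sketch of page_pool bulk refill: allocate a batch,
 * return the first "page", cache the rest for later allocations. */
#define CACHE_SIZE 128
#define BULK 16

struct fake_pool {
	void *cache[CACHE_SIZE];
	int cache_count;
};

/* Returns one page; refills the cache as a side effect when empty. */
static void *pool_alloc(struct fake_pool *pool)
{
	void *first;
	int i;

	if (pool->cache_count)
		return pool->cache[--pool->cache_count];

	first = malloc(64);
	/* remaining pages of the batch go to the alloc cache */
	for (i = 1; i < BULK && pool->cache_count < CACHE_SIZE; i++)
		pool->cache[pool->cache_count++] = malloc(64);
	return first;
}
```

Subsequent allocations are then served from the cache without hitting the allocator, which is the point of batching the refill.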
_SYNC_DEV)
> - page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> -
> -skip_dma_map:
> /* Track how many pages are held 'in-flight' */
> pool->pages_state_hold_cnt++;
> -
> trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
>
> /* When page just alloc'ed is should/must have refcnt 1. */
> --
> 2.26.2
>
Otherwise
Reviewed-by: Ilias Apalodimas
Hi Lorenzo
for the netsec driver
Reviewed-by: Ilias Apalodimas
On Sat, Feb 27, 2021 at 12:04:13PM +0100, Lorenzo Bianconi wrote:
> We want to change the current ndo_xdp_xmit drop semantics because
> it will allow us to implement better queue overflow handling.
> This is working to
Hi Jesper,
On Wed, Feb 24, 2021 at 07:56:46PM +0100, Jesper Dangaard Brouer wrote:
> There are cases where the page_pool need to refill with pages from the
> page allocator. Some workloads cause the page_pool to release pages
> instead of recycling these pages.
>
> For these workload it can impr
On Wed, Feb 24, 2021 at 07:56:41PM +0100, Jesper Dangaard Brouer wrote:
> In preparation for next patch, move the dma mapping into its own
> function, as this will make it easier to follow the changes.
>
> Signed-off-by: Jesper Dangaard Brouer
> ---
> net/core/page_pool.c | 49
t; /* Read barrier done in page_ref_count / READ_ONCE */
>
> if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> --
> 2.30.0
>
>
Reviewed-by: Ilias Apalodimas
Hi Jakub,
On Fri, Nov 20, 2020 at 10:14:34AM -0800, Jakub Kicinski wrote:
> On Fri, 20 Nov 2020 20:07:13 +0200 Ilias Apalodimas wrote:
> > On Fri, Nov 20, 2020 at 10:00:07AM -0800, Jakub Kicinski wrote:
> > > On Tue, 17 Nov 2020 10:35:28 +0100 Lorenzo Bianconi wrote:
>
e consider the caller must not use data area after running
s/consider/note/
> + page_pool_put_page_bulk(), as this function overwrites it.
> +
> Coding examples
> ===
>
> --
> 2.28.0
>
Other than that
Acked-by: Ilias Apalodimas
Hi Jakub,
On Fri, Nov 20, 2020 at 10:00:07AM -0800, Jakub Kicinski wrote:
> On Tue, 17 Nov 2020 10:35:28 +0100 Lorenzo Bianconi wrote:
> > Convert netsec driver to xdp_return_frame_bulk APIs.
> > Rely on xdp_return_frame_rx_napi for XDP_TX in order to try to recycle
> > the page in the "in-irq" p
h
> +F: Documentation/networking/page_pool.rst
>
> PANASONIC LAPTOP ACPI EXTRAS DRIVER
> M: Harald Welte
>
>
Acked-by: Ilias Apalodimas
| 1 +
> 29 files changed, 47 insertions(+), 33 deletions(-)
>
For the socionext driver
Acked-by: Ilias Apalodimas
d330ebda893 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -393,16 +393,11 @@ EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
> void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
> {
> struct xdp_mem_allocator *xa = bq->xa;
> - int i;
>
> - if (unlikely(!xa))
> + if (unlikely(!xa || !bq->count))
> return;
>
> - for (i = 0; i < bq->count; i++) {
> - struct page *page = virt_to_head_page(bq->q[i]);
> -
> - page_pool_put_full_page(xa->page_pool, page, false);
> - }
> + page_pool_put_page_bulk(xa->page_pool, bq->q, bq->count);
> /* bq->xa is not cleared to save lookup, if mem.id same in next bulk */
> bq->count = 0;
> }
> --
> 2.26.2
>
Acked-by: Ilias Apalodimas

bq->count = 0;
> + bq->xa = xa;
> + }
> +
> + if (bq->count == XDP_BULK_QUEUE_SIZE)
> + xdp_flush_frame_bulk(bq);
> +
> + if (unlikely(mem->id != xa->mem.id)) {
> + xdp_flush_frame_bulk(bq);
> + bq->xa = rhashtable_lookup(mem_id_ht, &mem->id,
> mem_id_rht_params);
> + }
> +
> + bq->q[bq->count++] = xdpf->data;
> +}
> +EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
> +
> void xdp_return_buff(struct xdp_buff *xdp)
> {
> __xdp_return(xdp->data, &xdp->rxq->mem, true);
> --
> 2.26.2
>
Could you add the changes in the Documentation as well (which we can do later)?
Acked-by: Ilias Apalodimas
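The xdp_return_frame_bulk() snippet above queues frames per allocator and flushes them in one call when the queue fills or the allocator id changes. A hedged userspace model of that defer/flush mechanism — `frame_bulk`, `BULK_SIZE`, and the flush counter are illustrative, not the kernel's definitions:

```c
#include <assert.h>

/* Userspace sketch of the XDP bulk-return defer/flush idea. */
#define BULK_SIZE 16

struct frame_bulk {
	int mem_id;		/* allocator this batch belongs to */
	int count;
	void *q[BULK_SIZE];
};

static int flushes;	/* counts bulk releases, for the demo only */

static void bulk_flush(struct frame_bulk *bq)
{
	if (!bq->count)
		return;
	/* real code: page_pool_put_page_bulk(pool, bq->q, bq->count) */
	flushes++;
	bq->count = 0;
}

static void bulk_return(struct frame_bulk *bq, int mem_id, void *frame)
{
	/* flush when the queue is full or the allocator changes */
	if (bq->count == BULK_SIZE || bq->mem_id != mem_id) {
		bulk_flush(bq);
		bq->mem_id = mem_id;
	}
	bq->q[bq->count++] = frame;
}
```

Returning N frames costs roughly N/BULK_SIZE bulk releases instead of N individual ones, which is what improves I-cache and D-cache behaviour in the NAPI completion loop.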
Hi Lorenzo,
On Thu, Oct 29, 2020 at 08:28:44PM +0100, Lorenzo Bianconi wrote:
> XDP bulk APIs introduce a defer/flush mechanism to return
> pages belonging to the same xdp_mem_allocator object
> (identified via the mem.id field) in bulk to optimize
> I-cache and D-cache since xdp_return_frame is
On Thu, Oct 29, 2020 at 03:39:34PM +0100, Andrew Lunn wrote:
> > What about reverting the realtek PHY commit from stable?
> > As Ard said it doesn't really fix anything (usage wise) and causes a bunch
> > of
> > problems.
> >
> > If I understand correctly we have 3 options:
> > 1. 'Hack' the dri
Hi Andrew
On Sun, Oct 25, 2020 at 03:42:58PM +0100, Andrew Lunn wrote:
> On Sun, Oct 25, 2020 at 03:34:06PM +0100, Ard Biesheuvel wrote:
> > On Sun, 25 Oct 2020 at 15:29, Andrew Lunn wrote:
> > >
> > > On Sun, Oct 25, 2020 at 03:16:36PM +0100, Ard Biesheuvel wrote:
> > > > On Sun, 18 Oct 2020 at
On Thu, Oct 29, 2020 at 03:02:16PM +0100, Lorenzo Bianconi wrote:
> > On Tue, 27 Oct 2020 20:04:07 +0100
> > Lorenzo Bianconi wrote:
> >
> > > diff --git a/net/core/xdp.c b/net/core/xdp.c
> > > index 48aba933a5a8..93eabd789246 100644
> > > --- a/net/core/xdp.c
> > > +++ b/net/core/xdp.c
> > > @@
Hi Lorenzo,
On Tue, Oct 27, 2020 at 08:04:08PM +0100, Lorenzo Bianconi wrote:
> Introduce the capability to batch page_pool ptr_ring refill since it is
> usually run inside the driver NAPI tx completion loop.
>
> Suggested-by: Jesper Dangaard Brouer
> Signed-off-by: Lorenzo Bianconi
> ---
> i
On Tue, Oct 27, 2020 at 08:04:08PM +0100, Lorenzo Bianconi wrote:
> Introduce the capability to batch page_pool ptr_ring refill since it is
> usually run inside the driver NAPI tx completion loop.
>
> Suggested-by: Jesper Dangaard Brouer
> Signed-off-by: Lorenzo Bianconi
> ---
> include/net/pag
On Wed, Oct 28, 2020 at 11:23:04AM +0100, Lorenzo Bianconi wrote:
> > Hi Lorenzo,
>
> Hi Ilias,
>
> thx for the review.
>
> >
> > On Tue, Oct 27, 2020 at 08:04:07PM +0100, Lorenzo Bianconi wrote:
>
> [...]
>
> > > +void xdp_return_frame_bulk(struct xdp_frame *xdpf,
> > > +
Hi Lorenzo,
On Tue, Oct 27, 2020 at 08:04:07PM +0100, Lorenzo Bianconi wrote:
> Introduce bulking capability in xdp tx return path (XDP_REDIRECT).
> xdp_return_frame is usually run inside the driver NAPI tx completion
> loop so it is possible to batch it.
> Current implementation considers only page_
Hi Ard,
On Mon, Oct 19, 2020 at 08:30:45AM +0200, Ard Biesheuvel wrote:
> On Sun, 18 Oct 2020 at 22:32, Ilias Apalodimas
> wrote:
> >
> > On Sun, Oct 18, 2020 at 07:52:18PM +0200, Andrew Lunn wrote:
> > > > --- a/Documentation/devicetree/bindings/net/soci
On Sun, Oct 18, 2020 at 07:52:18PM +0200, Andrew Lunn wrote:
> > --- a/Documentation/devicetree/bindings/net/socionext-netsec.txt
> > +++ b/Documentation/devicetree/bindings/net/socionext-netsec.txt
> > @@ -30,7 +30,9 @@ Optional properties: (See ethernet.txt file in the same
> > directory)
> > -
Hi Ard,
[...]
> > > > You can also use '' as the phy-mode, which results in
> > > > PHY_INTERFACE_MODE_NA, which effectively means, don't touch the PHY
> > > > mode, something else has already set it up. This might actually be the
> > > > correct way to go for ACPI. In the DT world, we tend to ass
Hi Ard,
On Sat, Oct 17, 2020 at 05:18:16PM +0200, Ard Biesheuvel wrote:
> On Sat, 17 Oct 2020 at 17:11, Andrew Lunn wrote:
> >
> > On Sat, Oct 17, 2020 at 04:46:23PM +0200, Ard Biesheuvel wrote:
> > > On Sat, 17 Oct 2020 at 16:44, Andrew Lunn wrote:
> > > >
> > > > On Sat, Oct 17, 2020 at 04:20
oju
Reported-by: Jiri Olsa
Co-developed-by: Jean-Philippe Brucker
Signed-off-by: Jean-Philippe Brucker
Co-developed-by: Yauheni Kaliuta
Signed-off-by: Yauheni Kaliuta
Signed-off-by: Ilias Apalodimas
---
Changes since v1:
- Added Co-developed-by, Reported-by and Fixes tags correctly
- Descr
Hi Will,
On Tue, Sep 15, 2020 at 02:11:03PM +0100, Will Deacon wrote:
[...]
> > continue;
> > }
> > - if (ctx->image == NULL)
> > - ctx->offset[i] = ctx->idx;
> > if (ret)
> > return ret;
> > }
> > +
On Tue, Sep 15, 2020 at 02:11:03PM +0100, Will Deacon wrote:
> Hi Ilias,
>
> On Mon, Sep 14, 2020 at 07:03:55PM +0300, Ilias Apalodimas wrote:
> > Running the eBPF test_verifier leads to random errors looking like this:
> >
> > [ 6525.735488] Unexpected
Hi Will,
On Tue, Sep 15, 2020 at 03:17:08PM +0100, Will Deacon wrote:
> On Tue, Sep 15, 2020 at 04:53:44PM +0300, Ilias Apalodimas wrote:
> > On Tue, Sep 15, 2020 at 02:11:03PM +0100, Will Deacon wrote:
> > > Hi Ilias,
> > >
> > > On Mon, Sep 14, 2020 at 07:0
On Mon, Sep 14, 2020 at 11:52:16AM -0700, Xi Wang wrote:
> On Mon, Sep 14, 2020 at 11:28 AM Ilias Apalodimas
> wrote:
> > Even if that's true, is there any reason at all why we should skip the first
> > element
> > of the array, that's now needed since 7c2
Hi Luke,
On Mon, Sep 14, 2020 at 11:21:58AM -0700, Luke Nelson wrote:
> On Mon, Sep 14, 2020 at 11:08 AM Xi Wang wrote:
> > I don't think there's some consistent semantics of "offsets" across
> > the JITs of different architectures (maybe it's good to clean that
> > up). RV64 and RV32 JITs are
Hi Xi,
On Mon, Sep 14, 2020 at 11:08:13AM -0700, Xi Wang wrote:
> On Mon, Sep 14, 2020 at 10:55 AM Ilias Apalodimas
> wrote:
> > We've briefly discussed this approach with Yauheni while coming up with the
> > posted patch.
> > I think that constructing the array co
On Mon, Sep 14, 2020 at 10:47:33AM -0700, Xi Wang wrote:
> On Mon, Sep 14, 2020 at 10:03 AM Ilias Apalodimas
> wrote:
> > Naresh from Linaro reported it during his tests on 5.8-rc1 as well [1].
> > I've included both Jiri and him on the v2 as reporters.
> >
> >
On Mon, Sep 14, 2020 at 06:12:34PM +0200, Jesper Dangaard Brouer wrote:
>
> On Mon, 14 Sep 2020 15:01:15 +0100 Will Deacon wrote:
>
> > Hi Ilias,
> >
> > On Mon, Sep 14, 2020 at 04:23:50PM +0300, Ilias Apalodimas wrote:
> > > On Mon, Sep 14, 2020 at 03:3
Hi Will,
On Mon, Sep 14, 2020 at 03:01:15PM +0100, Will Deacon wrote:
> Hi Ilias,
>
[...]
> > > >
> > > > No Fixes: tag?
> > >
> > > I'll re-spin and apply one
> > >
> > Any suggestion on any Fixes I should apply? The original code was 'correct'
> > and
> > broke only when bounded loops an
-developed-by: Jean-Philippe Brucker
Signed-off-by: Jean-Philippe Brucker
Co-developed-by: Yauheni Kaliuta
Signed-off-by: Yauheni Kaliuta
Signed-off-by: Ilias Apalodimas
---
Changes since v1:
- Added Co-developed-by, Reported-by and Fixes tags correctly
- Describe the expected context o
Hi Will,
On Mon, Sep 14, 2020 at 03:35:04PM +0300, Ilias Apalodimas wrote:
> On Mon, Sep 14, 2020 at 01:20:43PM +0100, Will Deacon wrote:
> > On Mon, Sep 14, 2020 at 11:36:21AM +0300, Ilias Apalodimas wrote:
> > > Running the eBPF test_verifier leads to random errors
On Mon, Sep 14, 2020 at 01:20:43PM +0100, Will Deacon wrote:
> On Mon, Sep 14, 2020 at 11:36:21AM +0300, Ilias Apalodimas wrote:
> > Running the eBPF test_verifier leads to random errors looking like this:
> >
> > [ 6525.735488] Unexpected kernel BRK exception at EL1
> &
t;offset[] correctly in the first
place and account for the extra instruction while calculating the arm
instruction offsets.
Signed-off-by: Ilias Apalodimas
Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Yauheni Kaliuta
---
arch/arm64/net/bpf_jit_comp.c | 23 +++
1 file c
On Tue, Jun 30, 2020 at 08:09:29PM +0200, Matteo Croce wrote:
> From: Matteo Croce
>
> Add XDP native support.
> By now only XDP_DROP, XDP_PASS and XDP_REDIRECT
> verdicts are supported.
>
> Co-developed-by: Sven Auhagen
> Signed-off-by: Sven Auhagen
> Signed-off-by: Matteo Croce
> ---
[...]
Hi Matteo,
Thanks for working on this!
On Tue, Jun 30, 2020 at 08:09:28PM +0200, Matteo Croce wrote:
> From: Matteo Croce
>
> Use the page_pool API for memory management. This is a prerequisite for
> native XDP support.
>
> Tested-by: Sven Auhagen
> Signed-off-by: Matteo Croce
> ---
> driv
On Fri, May 01, 2020 at 01:12:17PM +0300, Denis Kirjanov wrote:
> The patch adds a basic XDP processing to xen-netfront driver.
>
> We ran an XDP program for an RX response received from netback
> driver. Also we request xen-netback to adjust data offset for
> bpf_xdp_adjust_head() header space fo
On Tue, Oct 22, 2019 at 04:44:24AM +, Saeed Mahameed wrote:
> From: Jonathan Lemon
>
> 1) Rename functions to reflect what they are actually doing.
>
> 2) Unify the condition to keep a page.
>
> 3) When page can't be kept in cache, fallback to releasing page to page
> allocator in one place
as it is
> done for skb counterpart
>
> Tested-by: Ilias Apalodimas
> Signed-off-by: Lorenzo Bianconi
> ---
> Changes since v1:
> - fix BQL accounting
> - target the patch to next-next
> ---
> drivers/net/ethernet/socionext/netsec.c | 5 +++--
> 1 file changed,
Hi Saeed,
On Wed, Oct 16, 2019 at 03:50:22PM -0700, Jonathan Lemon wrote:
> From: Saeed Mahameed
>
> Add page_pool_update_nid() to be called from drivers when they detect
> numa node changes.
>
> It will do:
> 1) Flush the pool's page cache and ptr_ring.
> 2) Update page pool nid value to start
Hi Jakub,
On Wed, Oct 16, 2019 at 05:14:01PM -0700, Jakub Kicinski wrote:
> On Wed, 16 Oct 2019 14:40:32 +0300, Ilias Apalodimas wrote:
> > bpf_xdp_adjust_head() can change the frame boundaries. Account for the
> > potential shift properly by calculating the new offset before
bpf_xdp_adjust_head() can change the frame boundaries. Account for the
potential shift properly by calculating the new offset before
syncing the buffer to the device for XDP_TX
Fixes: ba2b232108d3 ("net: netsec: add XDP support")
Signed-off-by: Ilias Apalodimas
---
drivers/ne
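The fix above recomputes the region synced to the device for XDP_TX, because bpf_xdp_adjust_head() may have moved the frame start after the buffer was mapped. A hedged userspace sketch of that offset arithmetic — the struct and names are illustrative, not the netsec driver's:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Model of an XDP buffer inside a DMA-mapped page. */
struct fake_xdp {
	uint8_t *hard_start;	/* start of the DMA-mapped buffer */
	uint8_t *data;		/* frame start, may be shifted by the program */
	uint8_t *data_end;	/* frame end */
};

/* Offset into the mapped buffer the device sync must start from. */
static size_t sync_offset(const struct fake_xdp *x)
{
	return (size_t)(x->data - x->hard_start);
}

/* Length of the frame that must be visible to the device. */
static size_t sync_len(const struct fake_xdp *x)
{
	return (size_t)(x->data_end - x->data);
}
```

Computing the offset from the adjusted `data` pointer rather than from the original headroom is what accounts for the shift the fix describes.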
On Wed, Oct 16, 2019 at 12:09:00PM +0200, Lorenzo Bianconi wrote:
> > On Mon, 14 Oct 2019 12:49:55 +0200, Lorenzo Bianconi wrote:
> > > Implement XDP_TX verdict and ndo_xdp_xmit net_device_ops function
> > > pointer
> > >
> > > Signed-off-by: Lorenzo Bianconi
> >
> > > @@ -1972,6 +1975,109 @@ in
Hi Jakub,
On Tue, Oct 15, 2019 at 05:03:53PM -0700, Jakub Kicinski wrote:
> On Mon, 14 Oct 2019 12:49:54 +0200, Lorenzo Bianconi wrote:
> > Allow tx buffer array to contain both skb and xdp buffers in order to
> > enable xdp frame recycling adding XDP_TX verdict support
> >
> > Signed-off-by: Lor
On Fri, Oct 11, 2019 at 05:15:03PM +0300, Ilias Apalodimas wrote:
> Hi Lorenzo,
>
> On Fri, Oct 11, 2019 at 03:45:38PM +0200, Lorenzo Bianconi wrote:
> > Increment netdev rx counters even for XDP_DROP verdict. Moreover report
> > even tx bytes for xdp buffers (
Hi Lorenzo,
On Fri, Oct 11, 2019 at 03:45:38PM +0200, Lorenzo Bianconi wrote:
> Increment netdev rx counters even for XDP_DROP verdict. Moreover report
> even tx bytes for xdp buffers (TYPE_NETSEC_XDP_TX or
> TYPE_NETSEC_XDP_NDO)
The RX counters work fine. The TX change is causing a panic though
Hi Lorenzo, Jesper,
On Thu, Oct 10, 2019 at 09:08:31AM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 10 Oct 2019 01:18:34 +0200
> Lorenzo Bianconi wrote:
>
> > mvneta driver can run on not cache coherent devices so it is
> > necessary to sync dma buffers before sending them to the device
> > in
ing
> the page_pool API.
> This patch fixes even an issue in the original driver where dma buffers
> are accessed before dma sync
>
> Signed-off-by: Ilias Apalodimas
> Signed-off-by: Jesper Dangaard Brouer
> Signed-off-by: Lorenzo Bianconi
> ---
> drivers
ing
> the page_pool API
>
> Tested-by: Ilias Apalodimas
> Signed-off-by: Ilias Apalodimas
> Signed-off-by: Jesper Dangaard Brouer
> Signed-off-by: Lorenzo Bianconi
> ---
> drivers/net/ethernet/marvell/mvneta.c | 198 ++
> 1 file chan
go into the network stack.
>
> The page_pool API offers buffer recycling capabilities for XDP but
> allocates one page per packet, unless the driver splits and manages
> the allocated page.
> This is a preliminary patch to add XDP support to mvneta driver
>
> Tested-by: Ilias A
On Tue, Oct 01, 2019 at 11:24:43AM +0200, Lorenzo Bianconi wrote:
> Add basic XDP support to mvneta driver for devices that rely on software
> buffer management. Currently supported verdicts are:
> - XDP_DROP
> - XDP_PASS
> - XDP_REDIRECT
>
> Signed-off-by: Lorenzo Bianconi
> ---
> drivers/net/e
oading a xdp program on a different device (e.g virtio-net) and
> xdp_do_redirect_map/xdp_do_redirect_slow can redirect to netsec even if
> we do not have a xdp program on it.
>
> Fixes: ba2b232108d3 ("net: netsec: add XDP support")
> Tested-by: Ilias Apalodim
up_tc)
> return -EOPNOTSUPP;
> - }
> +
> + return ds->ops->port_setup_tc(ds, dp->index, type, type_data);
> }
>
> static void dsa_slave_get_stats64(struct net_device *dev,
> --
> 2.17.1
>
Acked-by: Ilias Apalodimas
-EOPNOTSUPP;
> - }
> +
> + return ds->ops->port_setup_tc(ds, dp->index, type, type_data);
> }
>
> static void dsa_slave_get_stats64(struct net_device *dev,
> --
> 2.17.1
>
Acked-by: Ilias Apalodimas
> NULL page.
>
> Restructure the logic so eliminate both cases.
Acked-by: Ilias Apalodimas
>
> Signed-off-by: Jonathan Lemon
> ---
> net/core/page_pool.c | 39 ---
> 1 file changed, 16 insertions(+), 23 deletions(-)
>
> diff --git a/n
Add myself to maintainers since I provided the XDP and page_pool
implementation
Signed-off-by: Ilias Apalodimas
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 211ea3a199bd..64f659d8346c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14789,6
In commit ba2b232108d3 ("net: netsec: add XDP support") a static
declaration for netsec_set_tx_de() was added to make the diff easier
to read. Now that the patch is merged let's move the functions around
and get rid of that
Signed-off-by: Ilias Apalodimas
---
drivers/net/eth
While freeing tx buffers the memory has to be unmapped if the packet was
an skb or was used for .ndo_xdp_xmit using the same arguments. Get rid
of the unneeded extra 'else if' statement
Signed-off-by: Ilias Apalodimas
---
drivers/net/ethernet/socionext/netsec.c | 8
1 file
On Tue, Jul 09, 2019 at 12:31:54PM -0400, Andy Gospodarek wrote:
> On Tue, Jul 09, 2019 at 06:20:57PM +0300, Ilias Apalodimas wrote:
> > Hi,
> >
> > > > Add page_pool_destroy() in bnxt_free_rx_rings() during normal RX ring
> > > > cleanup, as Ilias has in
pool_destroy")
> >
> > The special error handling code to call page_pool_free() can now be
> > removed. bnxt_free_rx_rings() will always be called during normal
> > shutdown or any error paths.
> >
> > Fixes: 322b87ca55f2 ("bnxt_en: add page_pool suppo
)
Signed-off-by: Ilias Apalodimas
---
drivers/net/ethernet/socionext/netsec.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/socionext/netsec.c
b/drivers/net/ethernet/socionext/netsec.c
index d7307ab90d74..c3a4f86f56ee 100644
--- a/drive
Hi David,
On Mon, Jul 08, 2019 at 03:20:20PM -0700, David Miller wrote:
> From: Michael Chan
> Date: Mon, 8 Jul 2019 17:53:00 -0400
>
> > This patch series adds XDP_REDIRECT support by Andy Gospodarek.
>
> Series applied, thanks everyone.
We need a fix on this after merging Ivan's patch
commi
scales linearly with the number of cores performing
> redirect actions when using the page pools instead of the standard page
> allocator.
>
> v2: Fix up the error path from XDP registration, noted by Ilias Apalodimas.
>
> Signed-off-by: Andy Gospodarek
> Signed-off-by: Michael
Hi Andy,
> On Mon, Jul 08, 2019 at 11:28:03AM +0300, Ilias Apalodimas wrote:
> > Thanks Andy, Michael
> >
> > > + if (event & BNXT_REDIRECT_EVENT)
> > > + xdp_do_flush_map();
> > > +
> > > if (event & BNXT_TX_EVENT) {
>
Hi Vladimir,
> tc-taprio is a qdisc based on the enhancements for scheduled traffic
> specified in IEEE 802.1Qbv (later merged in 802.1Q). This qdisc has
> a software implementation and an optional offload through which
> compatible Ethernet ports may configure their egress 802.1Qbv
> schedulers.
++ b/MAINTAINERS
> @@ -11902,6 +11902,14 @@ F: kernel/padata.c
> F: include/linux/padata.h
> F: Documentation/padata.txt
>
> +PAGE POOL
> +M: Jesper Dangaard Brouer
> +M: Ilias Apalodimas
> +L: netdev@vger.kernel.org
> +S: Supported
> +F: net/co
Thanks Andy, Michael
> + if (event & BNXT_REDIRECT_EVENT)
> + xdp_do_flush_map();
> +
> if (event & BNXT_TX_EVENT) {
> struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
> u16 prod = txr->tx_prod;
> @@ -2254,9 +2257,23 @@ static void bnxt_free_tx_skbs
cd1973a9215a ("net: netsec: Sync dma for device on buffer allocation")
was merged at its v1 instead of the v3.
Merge the proper patch version
Signed-off-by: Ilias Apalodimas
---
drivers/net/ethernet/socionext/netsec.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-
e the buffer got freed), so you absolutely need to flush any
dirty cache lines on it first.
Since the coherency is configurable in this device make sure we cover
all configurations by explicitly syncing the allocated buffer for the
device before refilling its descriptors
Signed-off-by: Ilias
On Thu, Jul 04, 2019 at 08:52:50PM +0300, Ilias Apalodimas wrote:
> On Thu, Jul 04, 2019 at 07:39:44PM +0200, Jesper Dangaard Brouer wrote:
> > On Thu, 4 Jul 2019 17:46:09 +0300
> > Ilias Apalodimas wrote:
> >
> > > Quoting Arnd,
> > >
> > > W
On Thu, Jul 04, 2019 at 07:39:44PM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 4 Jul 2019 17:46:09 +0300
> Ilias Apalodimas wrote:
>
> > Quoting Arnd,
> >
> > We have to do a sync_single_for_device /somewhere/ before the
> > buffer is given to the device
e the buffer got freed), so you absolutely need to flush any
dirty cache lines on it first.
Since the coherency is configurable in this device make sure we cover
all configurations by explicitly syncing the allocated buffer for the
device before refilling its descriptors
Signed-off
ec_de *entry;
> int tail = dring->tail;
> int cnt = 0;
> @@ -642,7 +642,6 @@ static bool netsec_clean_tx_dring(struct netsec_priv
> *priv)
> if (dring->is_xdp)
> spin_lock(&dring->lock);
>
> - pkts = 0;
> bytes = 0;
> entry = dring->vaddr + DESC_SZ * tail;
>
>
>
Acked-by: Ilias Apalodimas