On Mon, Mar 20, 2017 at 5:59 AM, Tariq Toukan wrote:
>
> Hi Eric,
>
> While testing XDP scenarios, I noticed a small degradation.
> However, more importantly, I hit a kernel panic, see trace below.
>
> I'll need time to debug this.
> I will update about progress in debug and XDP testing.
>
> If y
On 15/03/2017 5:36 PM, Tariq Toukan wrote:
On 14/03/2017 5:11 PM, Eric Dumazet wrote:
When adding order-0 page allocations and page recycling in the receive path,
I introduced issues on PowerPC, or more generally on arches with large pages.
A GRO packet, aggregating 45 segments, ended up using 45 p
On Wed, 2017-03-15 at 22:39 -0700, Alexei Starovoitov wrote:
> when there is no room in the rx fifo the hw will increment the counter.
> That's the same as oom causing alloc fails and rx ring not being replenished.
> When there is nothing free in rx ring to dma the packet to, the hw will
> increme
On Wed, Mar 15, 2017 at 07:48:04PM -0700, Eric Dumazet wrote:
> On Wed, Mar 15, 2017 at 6:56 PM, Alexei Starovoitov
> wrote:
> > On Wed, Mar 15, 2017 at 06:07:16PM -0700, Eric Dumazet wrote:
> >> On Wed, 2017-03-15 at 16:06 -0700, Alexei Starovoitov wrote:
> >>
> >> > yes. and we have 'xdp_tx_full
On Wed, Mar 15, 2017 at 6:56 PM, Alexei Starovoitov
wrote:
> On Wed, Mar 15, 2017 at 06:07:16PM -0700, Eric Dumazet wrote:
>> On Wed, 2017-03-15 at 16:06 -0700, Alexei Starovoitov wrote:
>>
>> > yes. and we have 'xdp_tx_full' counter for it that we monitor.
>> > When tx ring and mtu are sized prop
On Wed, Mar 15, 2017 at 06:07:16PM -0700, Eric Dumazet wrote:
> On Wed, 2017-03-15 at 16:06 -0700, Alexei Starovoitov wrote:
>
> > yes. and we have 'xdp_tx_full' counter for it that we monitor.
> > When tx ring and mtu are sized properly, we don't expect to see it
> > incrementing at all. This is
On Wed, 2017-03-15 at 18:07 -0700, Eric Dumazet wrote:
> On Wed, 2017-03-15 at 16:06 -0700, Alexei Starovoitov wrote:
>
> > yes. and we have 'xdp_tx_full' counter for it that we monitor.
> > When tx ring and mtu are sized properly, we don't expect to see it
> > incrementing at all. This is somethi
On Wed, 2017-03-15 at 16:06 -0700, Alexei Starovoitov wrote:
> yes. and we have 'xdp_tx_full' counter for it that we monitor.
> When tx ring and mtu are sized properly, we don't expect to see it
> incrementing at all. This is something in our control. 'Our' means
> humans that setup the environmen
On Wed, Mar 15, 2017 at 04:34:51PM -0700, Eric Dumazet wrote:
> > > > > -/* We recover from out of memory by scheduling our napi poll
> > > > > - * function (mlx4_en_process_cq), which tries to allocate
> > > > > - * all missing RX buffers (call to mlx4_en_refill_rx_buffers).
> > > > > +/* Under me
On Wed, 2017-03-15 at 16:06 -0700, Alexei Starovoitov wrote:
> On Wed, Mar 15, 2017 at 06:21:29AM -0700, Eric Dumazet wrote:
> > On Tue, 2017-03-14 at 21:06 -0700, Alexei Starovoitov wrote:
> > > On Tue, Mar 14, 2017 at 08:11:43AM -0700, Eric Dumazet wrote:
> > > > +static struct page *mlx4_alloc_p
On Wed, Mar 15, 2017 at 06:21:29AM -0700, Eric Dumazet wrote:
> On Tue, 2017-03-14 at 21:06 -0700, Alexei Starovoitov wrote:
> > On Tue, Mar 14, 2017 at 08:11:43AM -0700, Eric Dumazet wrote:
> > > +static struct page *mlx4_alloc_page(struct mlx4_en_priv *priv,
> > > + st
On Wed, 2017-03-15 at 17:36 +0200, Tariq Toukan wrote:
>
> Hi Eric,
>
> Thanks for your patch.
>
> I will do the XDP tests and complete the review, by tomorrow.
>
Thanks a lot Tariq !
On 14/03/2017 5:11 PM, Eric Dumazet wrote:
When adding order-0 page allocations and page recycling in the receive path,
I introduced issues on PowerPC, or more generally on arches with large pages.
A GRO packet, aggregating 45 segments, ended up using 45 page frags
on 45 different pages. Before my cha
On Tue, 2017-03-14 at 21:06 -0700, Alexei Starovoitov wrote:
> On Tue, Mar 14, 2017 at 08:11:43AM -0700, Eric Dumazet wrote:
> > +static struct page *mlx4_alloc_page(struct mlx4_en_priv *priv,
> > + struct mlx4_en_rx_ring *ring,
> > + dma_
On Tue, Mar 14, 2017 at 08:11:43AM -0700, Eric Dumazet wrote:
> +static struct page *mlx4_alloc_page(struct mlx4_en_priv *priv,
> + struct mlx4_en_rx_ring *ring,
> + dma_addr_t *dma,
> + unsigned int nod
When adding order-0 page allocations and page recycling in the receive path,
I introduced issues on PowerPC, or more generally on arches with large pages.
A GRO packet, aggregating 45 segments, ended up using 45 page frags
on 45 different pages. Before my changes we were very likely packing
up to 42 Ether