> From: Michael S. Tsirkin
> Sent: Thursday, September 22, 2022 6:15 AM
>
> It's nitpicking to be frank. v6 arrived while I was traveling and I didn't
> notice it.
> I see Jason acked that so I guess I will just apply as is. Do you ack v6 too?
>
Yes, I reviewed it. Gavin added my Reviewed-by.
> From: Michael S. Tsirkin
> Sent: Thursday, September 22, 2022 5:27 AM
> > >
> > > And I'd like commit log to include results of perf testing
> > > - with indirect feature on
> > Which device do you suggest using for this test?
>
> AFAIK most devices support INDIRECT, e.g. don't nvidia cards
On Thu, Sep 01, 2022 at 05:10:38AM +0300, Gavin Li wrote:
> Currently add_recvbuf_big() allocates MAX_SKB_FRAGS segments for big
> packets even when GUEST_* offloads are not present on the device.
> However, if guest GSO is not supported, it would be sufficient to
> allocate segments to cover just
On 9/7/2022 12:51 PM, Parav Pandit wrote:
> > And I'd like commit log to include results of perf testing
> > - with indirect feature on
> Which device do you suggest using for this test?
You may use software vhost-net backend with and without fix to compare.
Since this driver fix effectively lowers d
> From: Michael S. Tsirkin
> Sent: Wednesday, September 7, 2022 3:36 PM
>
> (c) replace mtu = 0 with sensibly not calling the function when mtu is
> unknown.
Even when the MTU is zero, virtnet_set_big_packets() must still be called so
that it can act on the GSO bits; that case is currently handled inside
virtnet_set_big_packets().
> From: Michael S. Tsirkin
> Sent: Wednesday, September 7, 2022 3:38 PM
>
> and if possible a larger ring. 1k?
What do you expect to see here that would warrant adding a test report to the
commit log? What is special about 1k vs. 512, 128, and 2k? Is 1k the default
for some configuration?
>
> From: Michael S. Tsirkin
> Sent: Wednesday, September 7, 2022 3:12 PM
> > Because of shallow queue of 16 entries deep.
>
> but why is the queue just 16 entries?
I explained the calculation behind the 16 entries in [1].
[1]
https://lore.kernel.org/netdev/ph0pr12mb54812ec7f4711c1ea4caa119dc...@ph
> From: Michael S. Tsirkin
> Sent: Wednesday, September 7, 2022 10:30 AM
[..]
> > > actually how does this waste space? Is this because your device does
> > > not have INDIRECT?
> > VQ is 256 entries deep.
> > Driver posted total of 256 descriptors.
> > Each descriptor points to a page of 4K.
>