On 6/26/19 3:24 PM, Matan Azrad wrote:


-----Original Message-----
From: Maxime Coquelin <maxime.coque...@redhat.com>
Sent: Wednesday, June 26, 2019 3:06 PM
To: Matan Azrad <ma...@mellanox.com>; Noa Ezra <n...@mellanox.com>
Cc: dev@dpdk.org; Tiwei Bie <tiwei....@intel.com>
Subject: Re: [PATCH 2/2] net/vhost: support mrg-rxbuf disabling



On 6/26/19 1:18 PM, Matan Azrad wrote:


From: Maxime Coquelin
On 6/26/19 9:50 AM, Matan Azrad wrote:
Hi Maxime,

Any response here?

Besides that,

Regarding the TSO and this patch:
I think we shouldn't be so strict as to reject them for this version:
1. The late timing was a technical issue with the mailer - a mistake.
2. The patches don't change any defaults and make sense - they will not
hurt anyone.

So I think we can accept them beyond the letter of the law.

    From: Maxime Coquelin
    > Sent: Thursday, June 20, 2019 10:19 AM
    > To: Matan Azrad <ma...@mellanox.com>; Noa Ezra
    <n...@mellanox.com>
    > Cc: dev@dpdk.org
    > Subject: Re: [PATCH 2/2] net/vhost: support mrg-rxbuf disabling
    >
    >
    >
    > On 6/20/19 8:52 AM, Matan Azrad wrote:
    > > Hi all
    > >
    > >> -----Original Message-----
    > >> From: Noa Ezra
    > >> Sent: Thursday, June 20, 2019 8:58 AM
    > >> To: Maxime Coquelin <maxime.coque...@redhat.com>
    > >> Cc: Matan Azrad <ma...@mellanox.com>; dev@dpdk.org
    > >> Subject: RE: [PATCH 2/2] net/vhost: support mrg-rxbuf disabling
    > >>
    > >> Hi Maxime,
    > >> Thanks for your comment, please see below.
    > >>
    > >>> -----Original Message-----
    > >>> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
    > >>> Sent: Wednesday, June 19, 2019 12:10 PM
    > >>> To: Noa Ezra <n...@mellanox.com>
    > >>> Cc: Matan Azrad <ma...@mellanox.com>; dev@dpdk.org
    > >>> Subject: Re: [PATCH 2/2] net/vhost: support mrg-rxbuf disabling
    > >>>
    > >>> Hi Noa,
    > >>>
    > >>> On 6/19/19 8:13 AM, Noa Ezra wrote:
    > >>>> Rx mergeable buffers is a virtio feature that allows chaining of
    > >>>> multiple virtio descriptors to handle large packet sizes.
    > >>>> This behavior is supported and enabled by default; however, if
    > >>>> the user knows that Rx mergeable buffers are not needed, the
    > >>>> feature can be disabled.
    > >>>> The user should also set mrg_rxbuf=off in the virtual machine's XML.
    > >>>
    > >>> I'm not sure I understand why it is needed; as the vhost-user
    > >>> library supports the feature, it's better to let it be advertised.
    > >>>
    > >>> As you say, it is up to the user to disable it in the VM's XML.
    > >>> Done this way, the feature won't be negotiated.
    > >>>
    > >> I agree with you, I'll remove this patch from the series.
    > >
    > > Are you sure that no performance impact exists for a redundant
    > > mrg-rxbuf configuration here?
    >
    > I'm not sure I understand what you mean; could you please elaborate?
    >
    I guess that if this feature is enabled but not actually used (no
    packets are scattered or merged), it will hurt performance.

Well, the latest performance measurements do not show a big impact from
enabling the mergeable buffers feature unconditionally.

Did you test small packets / big packets?

64B packets, in non-vector mode on the Virtio PMD side.


    So if one of the sides doesn't want to use it because of performance,
    it may want to disable it.

And even if there is an impact, the way to disable it is through
libvirt/QEMU.

Not sure; as with TSO, the application may decide not to do it even
though it is configured in QEMU.

    > > What if the second side wants it and the current side doesn't?
    >
    > The feature won't be negotiated, assuming it has been disabled in the
    > QEMU cmdline (or via libvirt).
    > > It may be that the vhost PMD user may want to disable it for
    > > performance reasons, no?
    > >
    >
    > Then this user should disable it at QEMU level.
    >
    So the vhost PMD is not one of the sides to decide?
    If so, why do we need the APIs to configure the features?

Are you talking about the rte_vhost_driver_set_features() and related
APIs?

Yes

This is used, for example, by external backends that support features
specific to the backend type (e.g. crypto), and also by OVS-DPDK to
disable TSO. So these usages are for functional reasons, not tuning.

Exactly, applications (like OVS) may decide to disable features for many
reasons.

    It looks like even when QEMU is configured with the feature, the
    VM/host sides may decide in some cases to disable it.

For functional reasons, I agree. That's why I agree with your TSO patch,
as the application has to support it, but that's not the case for the
mergeable buffers feature.

Performance reasons are not good enough?

No, that's not what I mean.
I mean that the application should be able to disable a feature when it does
not meet the functional requirement.

For performance tuning, the QEMU way is available and sufficient.


I think this is the point we don't agree on.

I think an application may want to disable the feature in some cases for
performance reasons (maybe others too), and in other cases to work with
the feature.

So, it makes sense IMO to let the application decide what it wants
without any concern about the QEMU configuration.

Why not allow the PMD user to do it from the application (using probe
parameters)?

I think the Vhost PMD parameters should restrict the Virtio features as
little as possible, only to ensure compatibility with the application
(iommu, postcopy, tso, ...). One problem I see with providing the
possibility to change any Virtio feature at runtime is reconnection.

For example, you start your application with mergeable buffers enabled,
then stop it and restart it without the feature enabled by the
application. As the negotiation with the driver is not done again at
reconnect time, QEMU will fail.




Tiwei, what's your opinion on this?

    > Regards,
    > Maxime
