Vincent,
Perhaps you can help me understand why the performance or functionality of AVP 
vs. virtio is relevant to the decision to accept this driver.  There are many 
drivers in the DPDK, most of which provide the same functionality at comparable 
performance.  AVP is just another such driver.  The fact that it is virtual 
rather than physical should not, in my opinion, influence the decision to 
accept it.  On the other hand, code quality/complexity or the lack of a 
maintainer are reasonable grounds for rejection.  If our driver is accepted, we 
are committed to maintaining it and to testing changes required by any driver 
framework changes that impact all drivers.

Along the same lines, I do not understand why upstreaming AVP into the Linux 
kernel or qemu/kvm should be a prerequisite for inclusion in the DPDK.  
Continuing my analogy from above, the AVP device is a commercial offering tied 
to the Wind River Systems Titanium product line.  It enables virtualized DPDK 
applications and increases DPDK adoption.  Just as a driver from company XYZ is 
tied to a commercial NIC that must be purchased by a customer, our AVP device 
is available to operators that choose to leverage our Titanium product to 
implement their cloud solutions.  It is not our intention to upstream the 
qemu/kvm or host vswitch portion of the AVP device.  Our qemu/kvm extensions 
are GPL, so they are available to our customers if they wish to rebuild 
qemu/kvm with their own proprietary extensions.

Our AVP device was implemented in 2013 in response to the lower-than-required 
performance of qemu/virtio for both user-space and DPDK applications in the VM.  
Rather than making complex changes to qemu/virtio and continuously 
forward-porting them as we upgraded to newer versions of qemu, we decided to 
decouple ourselves from that code base.  We developed the AVP device as an 
evolution of KNI+ivshmem, enhancing both with features that would meet the 
needs of our customers: better performance, multi-queue support, live-migration 
support, and hot-plug support.  As I said in my earlier response, qemu/virtio 
has seen improved performance since 2013 with the introduction of vhost-user, 
but vhost-user has still not achieved performance levels equal to our AVP PMD.

I acknowledge that the AVP driver could exist as an out-of-tree driver loaded 
as a shared library at runtime.  In fact, two years ago we released our driver 
source on github for this very reason.  We provide instructions and support for 
building the AVP PMD as a shared library.  Some customers have adopted this 
method, while many insist on an in-tree driver for several reasons.
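To illustrate what the shared-library approach looks like in practice, here is a sketch of loading an out-of-tree PMD at runtime using the standard EAL -d option (the library path and application arguments below are illustrative assumptions, not the exact ones from our github instructions):

```shell
# Build the DPDK with shared-library support enabled
# (CONFIG_RTE_BUILD_SHARED_LIB=y in the build configuration),
# compile the AVP PMD against it, then load the resulting .so
# at application start-up with the EAL -d option:
testpmd -d /usr/local/lib/librte_pmd_avp.so -c 0x3 -n 4 -- -i
```

The -d option is the EAL's generic mechanism for loading driver plugins, so no application changes are needed; the cost customers object to is carrying the extra build and packaging step for the .so itself.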

Most importantly, they want to eliminate the burden of building and supporting 
an additional package in their product.  An in-tree driver would eliminate the 
need for a separate build/packaging process.  They also want the option to 
develop directly on the bleeding edge of DPDK rather than waiting for us to 
update our out-of-tree driver against stable releases.  In this regard, an 
in-tree driver would allow our customers to work directly on the latest DPDK.

An in-tree driver provides obvious benefits to our customers, but keep in mind 
that it also benefits the DPDK.  If a customer must develop against a stable 
release because they depend on an out-of-tree driver, they are less likely to 
contribute fixes/enhancements/testing upstream.  I know this firsthand because 
I work with software from different sources on a daily basis, and it is a 
significant burden to reproduce and test fixes on master when you build and 
ship on an older stable release.  Accepting this driver would increase the 
potential pool of developers available for contributions and reviews.

Again, we are committed to contributing to the DPDK community by supporting our 
driver and upstreaming other fixes/enhancements we develop along the way.  We 
feel that if the DPDK is limited to only a single virtual driver of any type, 
then choice and innovation are also limited.  In the end, if more variety and 
innovation increase DPDK adoption, then this is a win for the DPDK and everyone 
involved in the project.

Regards,
Allain

Allain Legacy, Software Developer
direct 613.270.2279  fax 613.492.7870 skype allain.legacy
 
> -----Original Message-----
> From: Vincent JARDIN [mailto:vincent.jar...@6wind.com]
> Sent: Friday, March 03, 2017 11:22 AM
> To: Legacy, Allain; YIGIT, FERRUH
> Cc: Jolliffe, Ian; jerin.ja...@caviumnetworks.com;
> step...@networkplumber.org; thomas.monja...@6wind.com;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 16/16] doc: adds information related to
> the AVP PMD
> 
> Le 02/03/2017 à 01:20, Allain Legacy a écrit :
> > +Since the initial implementation of AVP devices, vhost-user has become
> > +part of the qemu offering with a significant performance increase over
> > +the original virtio implementation.  However, vhost-user still does
> > +not achieve the level of performance that the AVP device can provide
> > +to our customers for DPDK based VM instances.
> 
> Allain,
> 
> please, can you be more explicit: why is virtio not fast enough?
> 
> Moreover, why should we get another PMD for Qemu/kvm which is not
> virtio? There is not argument into your doc about it.
> NEC, before vhost-user, made a memnic proposal too because
> virtio/vhost-user was not available.
> Now, we all agree that vhost-user is the right way to support VMs, it
> avoids duplication of maintenances.
> 
> Please add some arguments that explains why virtio should not be used,
> so others like memnic or avp should be.
> 
> Regarding,
> +    nova boot --flavor small --image my-image \
> +       --nic net-id=${NETWORK1_UUID} \
> +       --nic net-id=${NETWORK2_UUID},vif-model=avp \
> +       --nic net-id=${NETWORK3_UUID},vif-model=avp \
> +       --security-group default my-instance1
> 
> I do not see how to get it working with vanilla nova. Please, I think
> you should rather show with qemu or virsh.
> 
> Then, there is not such AVP netdevice into Linux kernel upstream. Before
> adding any AVP support, it should be added into legacy upstream so we
> can be sure that the APIs will be solid and won't need to be updated
> because of some kernel constraints.
> 
> Thank you,
>    Vincent
