On Mon, 18 Dec 2017 17:46:19 +0100 Adrien Mazarguil <adrien.mazarg...@6wind.com> wrote:
> Virtual machines hosted by Hyper-V/Azure platforms are fitted with
> simplified virtual network devices named NetVSC, used for fast VM-to-VM,
> VM-to-hypervisor, and outside communication.
>
> They appear as standard system netdevices to user-land applications; the
> main difference is that they are implemented on top of VMBUS [1] instead
> of emulated PCI devices.
>
> While this reads like a case for a standard DPDK PMD, there is more to it.
>
> To accelerate outside communication, NetVSC devices as they appear in a VM
> can be paired with physical SR-IOV virtual function (VF) devices owned by
> that same VM [2]. Both netdevices share the same MAC address in that case.
>
> When paired, egress and most of the ingress traffic flow through the VF
> device, while part of it (e.g. multicasts, hypervisor control data) still
> flows through NetVSC. Moreover, VF devices are not retained and disappear
> during VM migration; from a VM standpoint, they can be hot-plugged anytime,
> with NetVSC acting as a fallback.
>
> Running DPDK applications in such a context involves driving VF devices
> using their dedicated PMDs in a vendor-independent fashion (to benefit
> from maximum performance without writing dedicated code) while
> simultaneously listening to NetVSC and handling the related hot-plug
> events.
>
> This new virtual PMD (referred to as "hyperv" from this point on)
> automatically coordinates the Hyper-V/Azure-specific management part
> described above by relying on the vendor-specific, failsafe, and tap PMDs
> to expose a single consolidated Ethernet device usable directly by
> existing applications.
>
>        .------------------.
>        | DPDK application |
>        `--------+---------'
>                 |
>          .------+------.
>          | DPDK ethdev |
>          `------+------'     Control
>                 |               |
>    .------------+------------.  v  .------------.
>    |       failsafe PMD      +-----+ hyperv PMD |
>    `---+------------------+--'     `------------'
>        |                  |
>        |         .........|.........
>        |         :        |        :
>   .----+----.    :   .----+----.   :
>   | tap PMD |    :   | any PMD |   : <-- Hot-pluggable
>   `----+----'    :   `----+----'   :
>        |         :        |        :
>  .------+-------. :  .-----+-----.  :
>  | NetVSC-based | :  | SR-IOV VF |  :
>  | netdevice    | :  | device    |  :
>  `--------------' :  `-----------'  :
>                   :.................:
>
> Note this diagram differs from that of the original RFC [3]; hyperv no
> longer acts as a data plane layer.
>
> This initial version of the driver only works in whitelist mode. Users
> have to provide the --vdev net_hyperv EAL option at least once to
> trigger it.
>
> Subsequent work will add support for blacklist mode based on automatic
> detection of the host environment.
>
> [1] http://dpdk.org/ml/archives/dev/2017-January/054165.html
> [2] https://docs.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-hyper-v
> [3] http://dpdk.org/ml/archives/dev/2017-November/082339.html
>
> Adrien Mazarguil (3):
>   net/hyperv: introduce MS Hyper-V platform driver
>   net/hyperv: implement core functionality
>   net/hyperv: add "force" parameter
>
>  MAINTAINERS                                   |   6 +
>  config/common_base                            |   6 +
>  config/common_linuxapp                        |   1 +
>  doc/guides/nics/features/hyperv.ini           |  12 +
>  doc/guides/nics/hyperv.rst                    | 119 +++
>  doc/guides/nics/index.rst                     |   1 +
>  drivers/net/Makefile                          |   1 +
>  drivers/net/hyperv/Makefile                   |  58 ++
>  drivers/net/hyperv/hyperv.c                   | 799 +++++++++++++++++++++
>  drivers/net/hyperv/rte_pmd_hyperv_version.map |   4 +
>  mk/rte.app.mk                                 |   1 +
>  11 files changed, 1008 insertions(+)
>  create mode 100644 doc/guides/nics/features/hyperv.ini
>  create mode 100644 doc/guides/nics/hyperv.rst
>  create mode 100644 drivers/net/hyperv/Makefile
>  create mode 100644 drivers/net/hyperv/hyperv.c
>  create mode 100644 drivers/net/hyperv/rte_pmd_hyperv_version.map

Please don't call this drivers/net/hyperv/; that name conflicts with the
real netvsc PMD that I am working on. Maybe vdev-netvsc?
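
For context, the whitelist-mode activation mentioned in the cover letter
boils down to passing the vdev option on the EAL command line. A sketch of
such an invocation (the choice of testpmd and the remaining options are
illustrative, not taken from the patch set):

    testpmd --vdev=net_hyperv -- -i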