Hi John,

See answers inline below...

On Thu, Apr 28, 2022 at 12:51 PM John Martinez <john.marti...@gmail.com>
wrote:

> Hello everyone!
>
> This is my first post to the list!
>
> I was able to set up VPP in AWS after properly identifying the instance
> type that would allow VPP to grab the PCI addresses that I needed to
> specify in the dpdk stanza of /etc/vpp/startup.conf.
>
> I used an m5.2xlarge in AWS and it works well!
>
> [image: image.png]
>
> I am now looking to set up VPP in an Azure instance, but lspci is
> showing only a single Mellanox PCI device even though I added the
> additional interfaces to the VM.
>
> [image: image.png]
>

When an Azure VM has a Mellanox VF attached to its PCI bus, that indicates
that there is an interface present which has accelerated networking
enabled. Since you have multiple interfaces but only see one PCI device
associated with a Mellanox VF, that suggests that only one of the
interfaces attached to the VM has accelerated networking enabled. When you
created the additional NICs, did you enable accelerated networking on them?
I'm not positive, but I don't think you can enable accelerated networking
when you create a NIC via the Azure web portal; you might need to create it
using the Azure CLI or PowerShell (rough example below).
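
With the Azure CLI, something along these lines should do it. The resource
group, VNet, subnet, and NIC names are placeholders for your own, and it's
worth double-checking the flag name against 'az network nic create --help'
on your CLI version:

  # Create a NIC with accelerated networking enabled
  az network nic create \
    --resource-group my-rg \
    --name vpp-nic-2 \
    --vnet-name my-vnet \
    --subnet my-subnet \
    --accelerated-networking true

  # Verify it took effect
  az network nic show --resource-group my-rg --name vpp-nic-2 \
    --query enableAcceleratedNetworking

Note that Azure makes you stop (deallocate) the VM before you can attach or
detach NICs.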

The presence of PCI devices doesn't actually matter too much. Azure netvsc
interfaces are attached to the Hyper-V VMbus rather than the PCI bus. See
the first few paragraphs of
https://docs.microsoft.com/en-us/azure/virtual-network/accelerated-networking-how-it-works.
A PCI device only needs to be present for an interface if you intend to use
accelerated networking on it.
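
If you want to see what's actually attached to the VMbus, you can look in
sysfs, or use lsvmbus if you have the Hyper-V tools installed (the package
name varies by distro, IIRC):

  # Directory names are the VMbus device IDs (GUIDs)
  ls /sys/bus/vmbus/devices/

  # Friendlier listing, if available
  lsvmbus -v

IIRC those GUIDs are also the VMbus IDs you'd later put in startup.conf
(more on that below).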

> I tried DS3 and DS4 instance types in Azure.
>
> The only reference I could find for VPP in Azure was in this link but it
> is very outdated:
>
> https://fd.io/docs/vpp/v2101/usecases/vppinazure.html
>

Yes, that doc is way outdated. I advise against following the instructions
in it.


> Has anyone successfully installed and used VPP in Azure? If so, what
> instance type did you use?
>

Yes, I have used Standard_D3_v2 and Standard_DS3_v2. The instance type
shouldn't matter too much. The exception is that if you want to use
accelerated networking, you need to pick an instance type that supports it.
But accelerated networking has been around for a few years so a lot of
instance types support it.
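
If you want to check whether a given size supports accelerated networking,
something like this should work -- I'm going from memory on the capability
name, so treat that as an assumption and verify against the Azure docs:

  # Sizes in a region that advertise accelerated networking support
  az vm list-skus --location eastus2 --resource-type virtualMachines \
    --query "[?capabilities[?name=='AcceleratedNetworkingEnabled' && value=='True']].name" \
    --output table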


> Any other Azure hints and tips welcome!
>

If you load the uio_hv_generic kernel module and make sure that the kernel
interfaces you want VPP to attach to are down (e.g. 'ip link set dev eth2
down') at the time VPP starts, VPP's DPDK plugin should automatically bind
them to the uio_hv_generic module and set them up as VPP interfaces. I
think you need the versions of uio_hv_generic and hv_vmbus from at least
version 4.20 of the Linux kernel (IIRC the most recent versions of RHEL &
CentOS 8 Stream have the required patches backported to their 4.18-based
kernels).
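
In practice that's roughly the following; eth2 is just an example, use
whatever interface names your VM actually has, and make the module load
persistent across reboots however your distro prefers:

  # Load the Hyper-V UIO driver
  sudo modprobe uio_hv_generic

  # Take the interface(s) you want VPP to own out of the kernel's hands
  sudo ip link set dev eth2 down

  # (Re)start VPP so the DPDK plugin can bind them
  sudo systemctl restart vpp

  # Check what VPP picked up
  sudo vppctl show interface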

As explained in the article I linked above, the netvsc driver (DPDK's
NetVSC PMD) abstracts and manages both the Mellanox VF and the synthetic
VMbus path. So you should not whitelist the PCI address of the Mellanox VF
in startup.conf. If you try to have VPP manage the Mellanox VF directly,
you'll probably have trouble making it work. You want VPP to manage the
netvsc devices. These can be whitelisted similarly to PCI devices, by
adding a 'dev <VMbus ID>' entry to the dpdk stanza in startup.conf
(example below).
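
With the VMbus IDs you can pull from /sys/bus/vmbus/devices (or lsvmbus),
the dpdk stanza ends up looking roughly like this; the GUIDs below are
placeholders, substitute your own:

  dpdk {
    # netvsc devices, identified by VMbus ID rather than a PCI address
    dev 000d3a6e-0000-1000-2000-000d3a6e0001
    dev 000d3a6e-0000-1000-2000-000d3a6e0002
  }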

Good luck.

-Matt