Hey Stephen, it's been a while...

> On May 16, 2023, at 2:43 PM, Stephen Hemminger <step...@networkplumber.org> 
> wrote:
> 
> On Tue, 16 May 2023 13:18:40 -0600
> Philip Prindeville <philipp_s...@redfish-solutions.com> wrote:
> 
>> Hi,
>> 
>> I'm a packaging maintainer for some of the OpenWrt ancillary packages, and 
>> I'm looking at upstreaming DPDK support into OpenWrt.  Apologies in advance 
>> if this is a bit dense (a brain dump) or hops between too many topics.
>> 
>> I was looking at what's been done to date, and it seems there are a few 
>> shortcomings which I hope to address.
>> 
>> Amongst them are:
>> 
>> * getting DPDK support into OpenWrt's main repo for the kmod's and into the 
>> packages repo for the user-space support;
> 
> DPDK kernel modules are deprecated; creating more usage of them is 
> problematic.


Well, some kernel modules are still required, aren't they?  What's been 
deprecated?


> 
>> 
>> * making DPDK supported on all available architectures (i.e. agnostic, not 
>> just x86_64 specific);
>> 
>> * integrating into the OpenWrt "make menuconfig" system, so that editing 
>> packages directly isn't required;
> 
> Does openwrt build system support meson?
> 
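It does: the packages feed carries a meson.mk include (devel/meson) that sets up 
host meson/ninja and the cross environment, so a DPDK package can lean on that 
rather than hand-rolling the invocation.  A minimal sketch — version, driver 
list, and variable choices are placeholders, not a working recipe:

```make
# Hedged sketch of an OpenWrt package Makefile driving a meson build.
# The meson.mk include path follows the packages-feed convention.
include $(TOPDIR)/rules.mk

PKG_NAME:=dpdk
PKG_VERSION:=22.11.4

include $(INCLUDE_DIR)/package.mk
include ../../devel/meson/meson.mk

# Options below map straight to DPDK's meson options; the driver list
# is illustrative and would come from menuconfig selections.
MESON_ARGS += \
	-Dplatform=generic \
	-Denable_drivers=net/ixgbe,net/i40e,net/virtio \
	-Dexamples=
```
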
>> * supporting cross-building and avoiding the [flawed] assumption that the 
>> micro-architecture of the build server is in any way related to the 
>> processor type of the target machine(s);
>> 
>> * integration into the OpenWrt CI/CD tests;
>> 
>> * making the kernel support as secure/robust as possible (i.e. avoiding an 
>> ill-behaved application taking down the kernel, since this is a firewall 
>> after all);
> 
> Not a problem with vfio
> 
>> 
>> * avoiding conflict with other existing module or package functionality;
>> 
>> * avoiding, to the extent possible, introducing one-off toolchain 
>> dependencies that unnecessarily complicate the build ecosystem;
>> 
>> To this end, I'm asking the mailing list for guidance on the best packaging 
>> practices that satisfy these goals.  I'm willing to do whatever heavy 
>> lifting that's within my ability.
>> 
>> I have some related questions.
>> 
>> 1. OpenWrt already bakes "vfio.ko" and "vfio-pci.ko" here:
>> 
>> https://github.com/openwrt/openwrt/blob/master/package/kernel/linux/modules/virt.mk#L77-L117
>> 
>>   but does so with "CONFIG_VFIO_PCI_IGD=y", which seems to be specifically 
>> for VGA acceleration of guests in a virtualized environment.  That seems to 
>> be an odd corner case, and unlikely given that OpenWrt almost always runs on 
>> headless hardware.  Should this be reverted?
> 
> Yes.


Okay, thanks.


> 
>> 
>> 2. "vfio.ko" is built with "CONFIG_VFIO_NOIOMMU=y", which seems insecure.  
>> Can this be reverted?
> 
> No. Most of OpenWrt's systems do not have an IOMMU.


And most of OpenWrt isn't going to need DPDK, either.  We're thinking of Xeon, 
Xeon-D, and 64-bit Atom (C26xx) based Intel hardware, or high-end multicore 
ARM64 designs like the Traverse Technologies Ten64.

In other words, Enterprise-class firewalls and data center appliances.


> 
>> 3. Is "uio_pci_generic.ko" worth the potential insecurity/instability of a 
>> misbehaving application?  My understanding is that it's only needed on 
>> SR-IOV hardware where MSI/MSI-X interrupts aren't supported: is there even 
>> any current hardware that doesn't support MSI/MSI-X?  Or am I 
>> misunderstanding the use case?
> 
> Use VFIO noiommu, it is better supported and tested upstream.  With 
> uio_pci_generic, no interrupts work.


What is the risk/reward of using CONFIG_VFIO_NOIOMMU=n?  An IOMMU is a 
non-trivial bit of logic to include in a processor design: it must have had 
some scenario where it's useful, otherwise that's a lot of wasted gates...
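
That said, the no-IOMMU path at least forces an explicit, auditable opt-in.  A 
sketch of the binding sequence (the PCI address is an example; module names 
and the taint check are the stock upstream mechanisms):

```sh
# Hedged sketch: no-IOMMU mode must be enabled explicitly at module load.
modprobe vfio enable_unsafe_noiommu_mode=1
modprobe vfio-pci

# Bind the NIC with DPDK's own helper script (example PCI address).
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0

# The kernel taints itself in this mode, so the tradeoff stays visible:
cat /proc/sys/kernel/tainted
```
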


> 
>> 4. Can most functionality be achieved with VFIO + IOMMU support?
> 
> Yes.
> 
>> 5. I've seen packaging for the "iommu_v2.ko" module done here:
>> 
>> https://github.com/k13132/openwrt-dpdk/blob/main/packages/kmod-iommu_v2/Makefile#L22-L42
>> 
>>   Is this potentially useful?  What platforms/architectures use this driver?
>> 
>> 6. Hand editing a wrapper for dpdk.tar.gz is a non-starter.  I'd rather add 
>> Kconfig adornments to OpenWrt packaging for the wrapper so that options for 
>> "-Denable_drivers=" and "-Dcpu_instruction_set=" can be passed in once 
>> global build options for OpenWrt have been selected.  Defaulting the 
>> instruction set to the build host is going to be wrong at least some of the 
>> time, if not most of the time.  For x86_64, what is a decent compromise for 
>> a micro-architecture that will build and run on most AMD and Intel hardware? 
>>  What's a decent baseline for an ARM64 micro-architecture that will build 
>> and run on most ARM hardware?
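
To make question 6 concrete, DPDK's meson options can already express a 
portable baseline.  A sketch — the option names are DPDK's, but the chosen 
baselines are only my suggestion, not project policy:

```sh
# x86_64: cpu_instruction_set values other than native/generic are passed
# through as -march.  "corei7" (Nehalem-era, SSE4.2) should run on
# essentially any 64-bit Intel/AMD part DPDK gets deployed on;
# "generic" is the conservative floor.
meson setup build-x86 -Dplatform=generic -Dcpu_instruction_set=corei7

# arm64: plain ARMv8-A with no optional extensions is the portable floor;
# DPDK ships a cross file targeting exactly that.
meson setup build-arm64 --cross-file config/arm/arm64_armv8_linux_gcc \
    -Dplatform=generic
```
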
>> 
>> 7. Many embedded systems don't build with glibc because it's too bloated 
>> (and because critical fixes sometimes take too long to roll out), and 
>> instead use musl, eglibc, or uClibc (although the last one seems to be 
>> waning).  Only glibc supports <execinfo.h>, from what I can tell.  Can 
>> broader support for other C runtimes be added?  Can RTE_BACKTRACE be made 
>> a parameter, or conditionally defined based on the runtime headers?  
>> (autoconf and HAVE_EXECINFO_H would be really handy here, but I'm not sure 
>> how to make that play well with meson/ninja, and truth be told I'm an 
>> old-school Makefile + autotools knuckle-dragger.)
> 
> You could do without backtrace, but then when DPDK applications crash you 
> are flying blind.


Yeah, that occurred to me too, but it seems that libbacktrace couldn't be 
shimmed in because it requires heap integrity, if I remember correctly...


> 
>> 8. How do I validate that DPDK is properly being built with the cross-tools 
>> and not native tools?  Even when building x86_64 targets on an x86_64 build 
>> host, we end up using a custom toolchain and not the "native" compiler that 
>> comes with the distro.
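
One sanity check is interrogating the produced ELF objects; a sketch, with an 
illustrative OpenWrt build-dir path:

```sh
# Hedged sketch: confirm the built objects target the right machine.
# The path is illustrative of an OpenWrt build_dir layout for an
# aarch64 target; adjust to the actual tree.
readelf -h build_dir/target-aarch64_cortex-a72_musl/dpdk-22.11.4/build/lib/librte_eal.so \
    | grep -E 'Class|Machine'
# Expect "AArch64" here; "Advanced Micro Devices X86-64" would mean the
# host toolchain leaked in.
```
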
>> 
>> 9. What is the parity between AMD64 and ARM64?  Do both platforms offer 
>> equal functionality and security, if not performance?
> 
> Apples/Oranges.
> 
>> 10. Who else is using DPDK with OpenWrt that is open to collaboration?
>> 
>> 11. What is the user-space TCP/IP stack of choice (or reference) for use 
>> with DPDK?
> 
> No user-space TCP/IP stack is really robust or all that great.
> VPP has one, but it is likely to be specific to VPP, and I'm not sure you 
> want to go there.
> I don't think Fedora/SUSE/Debian/Ubuntu have packaged any userspace TCP 
> stack yet.


Not robust how?  In terms of performance, security, handling of error 
conditions, completeness of features... ?

-Philip
