On Thu, Apr 10, 2025 at 02:58:28PM +0200, Larysa Zaremba wrote:
> On Thu, Apr 10, 2025 at 02:23:49PM +0300, Leon Romanovsky wrote:
> > On Thu, Apr 10, 2025 at 12:44:33PM +0200, Larysa Zaremba wrote:
> > > On Thu, Apr 10, 2025 at 11:21:37AM +0300, Leon Romanovsky wrote:
> > > > On Tue, Apr 08, 2025 at 02:47:51PM +0200, Larysa Zaremba wrote:
> > > > > From: Phani R Burra <phani.r.bu...@intel.com>
> > > > >
> > > > > Libeth will now support control queue setup and configuration APIs.
> > > > > These are mainly used for mailbox communication between drivers and
> > > > > the control plane.
> > > > >
> > > > > Make use of the page pool support for managing controlq buffers.
> > > >
> > > > <...>
> > > >
> > > > >  libeth-y := rx.o
> > > > >
> > > > > +obj-$(CONFIG_LIBETH_CP) += libeth_cp.o
> > > > > +
> > > > > +libeth_cp-y := controlq.o
> > > >
> > > > So why did you create a separate module for it?
> > > > Now you have pci -> libeth -> libeth_cp -> ixd, with the potential
> > > > races between ixd and libeth, am I right?
> > >
> > > I am not sure what kind of races you mean, all libeth modules
> > > themselves are stateless and will stay this way [0], all used data is
> > > owned by drivers.
> >
> > Somehow such separation doesn't truly work. There are multiple syzkaller
> > reports per cycle where module A tries to access module C, which already
> > doesn't exist because it was proxied through module B.
>
> Are there similar reports for libeth and libie modules when iavf is enabled?

To get such a report, syzkaller would have to run on physical iavf, and it
looks like it doesn't. Did I miss it here?
https://syzkaller.appspot.com/upstream/s/net

> It is basically the same hierarchy. (iavf uses both libeth and libie,
> libie depends on libeth).
>
> I am just trying to understand, is this a regular situation or did I just
> mess something up?

My review comment was a general one. It is almost impossible to review this
newly proposed architecture split for correctness.

> > > As for the module separation, I think there is no harm in keeping it
> > > modular.
> >
> > Syzkaller reports disagree with you.
>
> Could you please share them?

It is not an easy question to answer, because all these reports complain
about some wrong locking order or NULL-pointer access. You will never know
if it is because of a programming or a design error.

As an approximate example, see commits a27c6f46dcec ("RDMA/bnxt_re: Fix an
issue in bnxt_re_async_notifier") and f0df225d12fc ("RDMA/bnxt_re: Add
sanity checks on rdev validity"). At first glance, they look unrelated to
our discussion, however they can serve as an example of races between the
deinit/disable paths of a parent module and its child.

> > > We intend to use basic libeth (libeth_rx) in drivers that for sure
> > > have no use for libeth_cp. libeth_pci and libeth_cp separation is more
> > > arbitrary, as we have no plans for now to use them separately.
> >
> > So let's not over-engineer it.
> >
> > > Module dependencies are as follows:
> > >
> > > libeth_rx and libeth_pci do not depend on other modules.
> > > libeth_cp depends on both libeth_rx and libeth_pci.
> > > idpf directly uses libeth_pci, libeth_rx and libeth_cp.
> > > ixd directly uses libeth_cp and libeth_pci.
>
> I need to amend this: libeth_cp does not depend on libeth_pci in terms of
> module namespace, it only uses the header to access the struct device that
> is stored in the libeth_pci-specific mmio_info.

So why did you add SELECT in Kconfig? (A rough Kconfig sketch of the layering
in question follows at the end of this mail.)

> > You can do whatever module architecture for netdev devices, but if you
> > plan to expose it to RDMA devices, I will vote against any deep layered
> > module architecture for the drivers.
> >
> > BTW, please add some Intel prefix to the module names, they shouldn't
> > be given generic names like libeth, etc.
>
> We did not think this would be a problem, Intel has a tradition of calling
> the modules pretty ambiguously.

I know, and it is worth changing.

> > Thanks
> >
> > > [0]
> > > https://lore.kernel.org/netdev/61bfa880-6a88-4eac-bab7-040bf72a1...@intel.com/
>
> Thanks
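
For reference, here is a minimal Kconfig sketch of the layering discussed
above, meant only to illustrate the select-vs-depends question. The symbol
names LIBETH, LIBETH_PCI, LIBETH_CP and IXD and the exact wiring are
assumptions inferred from this thread (only CONFIG_LIBETH_CP is visible in
the quoted Makefile hunk), not the actual patch contents.

  # Hypothetical Kconfig wiring for the proposed split.
  config LIBETH
          tristate
          select PAGE_POOL
          help
            Basic RX library (libeth_rx); standalone.

  config LIBETH_PCI
          tristate
          help
            PCI/MMIO helpers (libeth_pci); also standalone.

  config LIBETH_CP
          tristate
          select LIBETH
          # Per the thread, libeth_cp only needs a header from libeth_pci
          # to reach the struct device kept in mmio_info, so this
          # "select LIBETH_PCI" is exactly the point being questioned;
          # a pure header include needs no Kconfig dependency at all.
          select LIBETH_PCI
          help
            Control queue / mailbox library (libeth_cp).

  config IXD
          tristate "Intel(R) control-plane function driver"
          depends on PCI
          select LIBETH_CP
          select LIBETH_PCI
          help
            Example consumer: the driver selects the helper libraries it
            links against.

Note that "select" forces the selected symbol on without evaluating that
symbol's own dependencies, which is why "depends on" (or no dependency at
all, if only a header is used) is usually preferred unless the selected
symbol is a dependency-free library; that trade-off appears to be what the
SELECT question above is about.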