On Mon, Mar 04, 2019 at 06:11:07PM -0800, Jakub Kicinski wrote:
> > At least in RDMA we have drivers doing all combinations of this:
> > multiple ports per BDF, one port per BDF, and one composite RDMA
> > device formed by combining multiple BDFs worth of ports together.
>
> Right, last but not least we have the case where there is one port but
> multiple links (for NUMA, or just because 1 PCIe link can't really cope
> with 200Gbps). In that case which DBDF would the port go to? :(
> Do all internal info of the ASIC (health, regions, sbs) get registered
> twice?
This I don't know; at least for RDMA this configuration gets confusing
very fast, and devlink is the least of the worries.

Personally I would advocate for a master/slave kind of arrangement where
the master BDF has a different PCI DID from the slaves. devlink and the
other kernel objects hang off the master. The slave BDF is then used
only to carry selected NUMA-aware data path traffic and does not show
up in devlink.

Jason