On Fri, Feb 02, 2024 at 10:24:54AM +0100, Mattias Rönnblom wrote:
> On 2024-01-31 15:37, Bruce Richardson wrote:
> > On Wed, Jan 24, 2024 at 12:51:03PM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-23 10:43, Bruce Richardson wrote:
> > > > On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
> > > > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > > > Some small rewording changes to the doxygen comments on struct
> > > > > > rte_event_dev_info.
> > > > > >
> > > > > > Signed-off-by: Bruce Richardson <bruce.richard...@intel.com>
> > > > > > ---
> > > > > >  lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
> > > > > >  1 file changed, 25 insertions(+), 21 deletions(-)
> > > > > >
> > > > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > > > index 57a2791946..872f241df2 100644
> > > > > > --- a/lib/eventdev/rte_eventdev.h
> > > > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > > > @@ -482,54 +482,58 @@ struct rte_event_dev_info {
> > > > > >  	const char *driver_name;	/**< Event driver name */
> > > > > >  	struct rte_device *dev;	/**< Device information */
> > > > > >  	uint32_t min_dequeue_timeout_ns;
> > > > > > -	/**< Minimum supported global dequeue timeout(ns) by this device */
> > > > > > +	/**< Minimum global dequeue timeout(ns) supported by this device */
> > > > >
> > > > > Are we missing a bunch of "." here and in the other fields?
> > > > >
> > > > > >  	uint32_t max_dequeue_timeout_ns;
> > > > > > -	/**< Maximum supported global dequeue timeout(ns) by this device */
> > > > > > +	/**< Maximum global dequeue timeout(ns) supported by this device */
> > > > > >  	uint32_t dequeue_timeout_ns;
> > > > > >  	/**< Configured global dequeue timeout(ns) for this device */
> > > > > >  	uint8_t max_event_queues;
> > > > > > -	/**< Maximum event_queues supported by this device */
> > > > > > +	/**< Maximum event queues supported by this device */
> > > > > >  	uint32_t max_event_queue_flows;
> > > > > > -	/**< Maximum supported flows in an event queue by this device*/
> > > > > > +	/**< Maximum number of flows within an event queue supported by this device*/
> > > > > >  	uint8_t max_event_queue_priority_levels;
> > > > > >  	/**< Maximum number of event queue priority levels by this device.
> > > > > > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > > > > > +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > > > > > +	 * The priority levels are evenly distributed between
> > > > > > +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
> > > > >
> > > > > This is a change of the API, in the sense it's defining something
> > > > > previously left undefined?
> > > > >
> > > >
> > > > Well, undefined is pretty useless for app writers, no?
> > > > However, agreed that the range between HIGHEST and LOWEST is an
> > > > assumption on my part, chosen because it matches what happens to the
> > > > event priorities, which are documented in the event struct as "The
> > > > implementation shall normalize the requested priority to supported
> > > > priority value" - which, while better than nothing, does technically
> > > > leave the details of how normalization occurs up to the implementation.
> > > >
> > > > > If you need 6 different priority levels in an app, how do you go
> > > > > about making sure you find the correct (distinct) Eventdev levels on
> > > > > any event device supporting >= 6 levels?
> > > > >
> > > > > #define NUM_MY_LEVELS 6
> > > > >
> > > > > #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) * \
> > > > >     (RTE_EVENT_DEV_PRIORITY_HIGHEST - RTE_EVENT_DEV_PRIORITY_LOWEST)) / \
> > > > >     NUM_MY_LEVELS)
> > > > >
> > > > > This way? One would worry a bit exactly what "evenly" means, in terms
> > > > > of rounding errors. If you have an event device with 255 priority
> > > > > levels of (say) 256 levels available in the API, which two levels are
> > > > > the same priority?
> > > >
> > > > Yes, rounding etc. will be an issue in cases of non-powers-of-2.
> > > > However, I think we do need to clarify this behaviour, so I'm open to
> > > > alternative suggestions as to how to update this.
> > > >
> > >
> > > In retrospect, maybe it would have been better to just express the number
> > > of priority levels an event device supports, only allow [0, max_levels - 1]
> > > in the prio field, and leave it to the app to do the
> > > conversion/normalization.
> > >
> >
> > Yes, in many ways that would be better.
> >
> > > Maybe a new <rte_eventdev.h> helper macro would at least suggest to the
> > > PMD driver implementer and the application designer how this
> > > normalization should work. Something like the above, but where
> > > NUM_MY_LEVELS is an input parameter. It would result in an integer
> > > division though, so it shouldn't be used in the fast path.
> >
> > I think it's actually OK now, having the drivers do the work, since each
> > driver can choose the optimal method. For those having a power-of-2 number
> > of priorities, just a shift op works best.
> >
>
> I had an application-usable macro in mind, not a macro for PMDs. Showing how
> app-level priority levels should map to Eventdev API-level priority levels
> would, by implication, show how the event device should do the Eventdev API
> priority -> PMD-level priority compression.
>
> The event device has exactly zero freedom in choosing how to translate
> Eventdev API-level priorities to its internal priorities, or it risks not
> differentiating between app-level priority levels. If an event device
> supports 128 levels, are RTE_EVENT_DEV_PRIORITY_NORMAL and
> RTE_EVENT_DEV_PRIORITY_NORMAL-1 the same PMD-level priority or not? I would
> *guess* the same, but that's just an assumption, and not something that
> follows from "normalize", I think.
>
> Anyway, this is not a problem this patch set necessarily needs to solve.
>

Yep, a good point.

Would a public macro be enough, or would it be better for drivers to provide
a function to allow the app to query the internal priority level for an
eventdev one directly?
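For illustration, the kind of public macro being discussed might look
something like the below - an untested sketch with an invented name, assuming
app level 0 is the highest priority and that eventdev priorities run from
RTE_EVENT_DEV_PRIORITY_HIGHEST (0) to RTE_EVENT_DEV_PRIORITY_LOWEST (255):

    /*
     * Hypothetical helper, not an existing API. Maps an app-level priority
     * in [0, num_levels - 1] (0 = highest) onto the 8-bit eventdev priority
     * range, spreading the levels evenly so that a device supporting at
     * least num_levels internal priorities keeps them distinct.
     * Uses integer division, so not for fast-path use.
     */
    #define APP_LEVEL_TO_EVENTDEV_PRIO(level, num_levels) \
        ((uint8_t)(((level) * (RTE_EVENT_DEV_PRIORITY_LOWEST - \
                RTE_EVENT_DEV_PRIORITY_HIGHEST + 1)) / (num_levels)))

For example, with num_levels == 6 this gives 0, 42, 85, 128, 170 and 213,
which stay distinct even on a device that keeps only the top 3 priority bits.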
Other alternatives:

* Have an API break where we change the meaning of the priority field, so
  that priorities are given in the range 0 to max_prios - 1.

* Keep the same API, but explicitly state that devices must have a
  power-of-2 number of supported priorities, and hence that only the top N
  bits of the priority field will be valid (are there any devices with
  support for non-power-of-2 nb-priorities??). See the sketch in the P.S.
  below.
  - To simplify things, this could be followed by an API change where we
    report the number of valid priority bits instead of the number of
    priority levels.
  - If changing the API for this anyway, we could reduce the size of the
    event priority field - 256 event priority levels seems a lot! Cutting
    the field down to 4 bits, or even 3, might make sense. [It would also
    allow us to potentially expand the impl_opaque field up to 12 bits,
    allowing more than 256 outstanding events on a port if using it for
    sequence numbers, or more useful metadata possibilities for
    devices/drivers.]

Not something for this patchset though, as you say.

/Bruce
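P.S. For the second alternative above, the driver-side compression with a
power-of-2 number of levels really is just a shift. A rough sketch, with
invented names - "priority_bits" here is a hypothetical capability field,
not an existing one:

    /*
     * Sketch only: keep the top priority_bits bits of the 8-bit eventdev
     * priority. E.g. a device with 8 internal levels (priority_bits == 3)
     * maps eventdev priorities 0-31 to internal level 0, 32-63 to level 1,
     * and so on.
     */
    static inline uint8_t
    internal_prio_from_eventdev(uint8_t ev_prio, uint8_t priority_bits)
    {
            return ev_prio >> (8 - priority_bits);
    }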