Hi.

Consider an Eventdev app using atomic-type scheduling doing something like:

    struct rte_event events[3];
    uint16_t num;

    num = rte_event_dequeue_burst(dev_id, port_id, events, 3, 0);

    /* Assume three events were dequeued (num == 3), and the
     * application decides it's best to process events 0 and 2
     * consecutively. */

    process(&events[0]);
    process(&events[2]);

    events[0].queue_id++;
    events[0].op = RTE_EVENT_OP_FORWARD;
    events[2].queue_id++;
    events[2].op = RTE_EVENT_OP_FORWARD;

    rte_event_enqueue_burst(dev_id, port_id, &events[0], 1);
    rte_event_enqueue_burst(dev_id, port_id, &events[2], 1);

    process(&events[1]);
    events[1].queue_id++;
    events[1].op = RTE_EVENT_OP_FORWARD;

    rte_event_enqueue_burst(dev_id, port_id, &events[1], 1);

If one just read the Eventdev API spec, one might expect this to work (especially since impl_opaque hints at potentially being useful for identifying individual events).

However, on certain event devices, it doesn't (and maybe rightly so). If events 0 and 2 belong to the same flow (queue id + flow id pair), and event 1 belongs to some other flow, then that other flow would be "unlocked" already at the point of the second enqueue operation, before event 1 has actually been processed (and could thus be scheduled to some other core, in parallel). The first flow would still be needlessly "locked".

Such event devices require the order of the enqueued events to be the same as that of the dequeued events, with RTE_EVENT_OP_RELEASE type events used as "fillers" for dropped events.
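For illustration, here is a sketch of what such a device seems to expect: enqueue in the original dequeue order, substituting RTE_EVENT_OP_RELEASE for any event the application drops. The keep[] array is a hypothetical per-event flag I made up for the example; it is not part of the API.

    uint16_t i, num;

    num = rte_event_dequeue_burst(dev_id, port_id, events, 3, 0);

    for (i = 0; i < num; i++) {
        if (keep[i]) { /* hypothetical app-level "forward this event" flag */
            process(&events[i]);
            events[i].queue_id++;
            events[i].op = RTE_EVENT_OP_FORWARD;
        } else {
            /* Dropped event becomes a RELEASE "filler", keeping the
             * enqueue order aligned with the dequeue order. */
            events[i].op = RTE_EVENT_OP_RELEASE;
        }
    }

    /* A single burst enqueue of all slots preserves the dequeue order. */
    rte_event_enqueue_burst(dev_id, port_id, events, num);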

Am I missing something in the Eventdev API documentation?

Could an event device use the impl_opaque field to track the identity of an event (and thus relax ordering requirements) and still be compliant with the API?

What happens if an RTE_EVENT_OP_NEW event is inserted into the mix of OP_FORWARD and OP_RELEASE type events being enqueued? Again, I'm not clear on what the API says, if anything.

Regards,
        Mattias
