On Tue, Oct 04, 2016 at 09:49:52PM +0000, Vangati, Narender wrote:
> Hi Jerin,

Hi Narender,

Thanks for the comments. I agree with the proposed changes; I will address
these comments in v2.

/Jerin


> 
> 
> 
> Here are some comments on the libeventdev RFC.
> 
> These are collated thoughts after discussions with you & others to understand 
> the concepts and rationale for the current proposal.
> 
> 
> 
> 1. Concept of flow queues. This is better abstracted as flow ids and not as
> flow queues, which implies there is a queueing structure per flow. A s/w
> implementation can do atomic load balancing on multiple flow ids more
> efficiently than maintaining each event in a specific flow queue.
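
Agreed. The way I see it, the event itself would just carry the flow id and
the scheduler would load balance on it; roughly like this (struct and field
names are illustrative only, not a proposed API):

#include <stdint.h>

/* Illustrative only: the event carries a flow id instead of living on a
 * per-flow queue; an atomic scheduler serializes events sharing a flow id. */
struct event {
    uint32_t flow_id;    /* identifies the flow for atomic scheduling */
    uint8_t  queue_id;   /* target event queue */
    uint8_t  sched_type; /* atomic/ordered/parallel */
    void     *payload;   /* e.g. an mbuf */
};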
> 
> 
> 
> 2. Scheduling group. A scheduling group is more a stream of events, so an
> event queue might be a better abstraction.
> 
> 
> 
> 3. An event queue should support the concept of max active atomic flows 
> (maximum number of active flows this queue can track at any given time) and 
> max active ordered sequences (maximum number of outstanding events waiting to 
> be egress reordered by this queue). This allows a scheduler implementation to 
> dimension/partition its resources among event queues.
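
Agreed; the queue configuration could expose both limits, for example
(hypothetical names, illustration only):

#include <stdint.h>

/* Illustrative per-queue configuration; the scheduler can size its
 * internal flow tracking and reorder buffers from these two limits. */
struct event_queue_conf {
    uint32_t nb_atomic_flows;           /* max active atomic flows tracked */
    uint32_t nb_atomic_order_sequences; /* max events outstanding for
                                         * egress reordering */
};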
> 
> 
> 
> 4. An event queue should support the concept of a single consumer. In an
> application, a stream of events may need to be brought together to a single
> core for some stages of processing, e.g. for TX at the end of the pipeline
> to avoid NIC reordering of the packets. Having a 'single consumer' event
> queue for that stage allows the intensive scheduling logic to be
> short-circuited and can improve throughput for s/w implementations.
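
A single queue-level flag would be enough to express this, for instance
(hypothetical name):

/* Illustrative only: marks a queue as having exactly one consumer port,
 * e.g. the final TX stage, so the scheduler can bypass its atomic/ordered
 * bookkeeping for that stage. */
#define EVENT_QUEUE_CFG_SINGLE_CONSUMER (1U << 0)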
> 
> 
> 
> 5. Instead of tying eventdev access to an lcore, a higher-level abstraction
> called an event port is needed, which is the application's interface to the
> eventdev. An event port is connected to event queues and is the object the
> application uses to dequeue and enqueue events. There can be more than one
> event port per lcore, allowing multiple lightweight threads to have their
> own interface into the eventdev, if the implementation supports it. An event
> port abstraction also encapsulates dequeue depth and enqueue depth for
> scheduler implementations which can schedule multiple events at a time and
> buffer output events.
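
For illustration, the port abstraction could look roughly like this (all
names are hypothetical, not proposed API):

#include <stdint.h>

/* Illustrative port configuration: the port, not the lcore, is the handle
 * used to enqueue and dequeue, so several lightweight threads can each own
 * a port on the same lcore. */
struct event_port_conf {
    uint16_t dequeue_depth; /* max events returned by one dequeue */
    uint16_t enqueue_depth; /* max events buffered before flushing */
};

int event_port_setup(uint8_t dev_id, uint8_t port_id,
                     const struct event_port_conf *conf);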
> 
> 
> 
> 6. An event should support priority. Per-event priority is useful for
> segregating high-priority traffic (control messages) from low-priority
> traffic within the same flow. This needs to be part of the event definition
> for implementations which support it.
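
Agreed; the event definition would then carry a priority field next to the
flow id, roughly (illustrative only):

#include <stdint.h>

/* Illustrative: per-event priority lets a control message overtake data
 * traffic within the same flow on implementations that support it. */
struct event {
    uint32_t flow_id;
    uint8_t  priority; /* 0 = highest; range is implementation-defined */
    void     *payload;
};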
> 
> 
> 
> 7. Event port to event queue servicing priority. This allows two event ports 
> to connect to the same event queue with different priorities. For 
> implementations which support it, this allows a worker core to participate in 
> two different workflows with different priorities (workflow 1 needing 3.5 
> cores, workflow 2 needing 2.5 cores, and so on).
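
The port-to-queue link could take the servicing priority as an argument,
for example (hypothetical names):

#include <stdint.h>

/* Illustrative only: link a port to a queue at a given servicing priority,
 * so one worker port can serve two workflows with different weights. */
int event_port_link(uint8_t dev_id, uint8_t port_id,
                    uint8_t queue_id, uint8_t priority);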
> 
> 
> 
> 8. Define the workflow as schedule/dequeue/enqueue. An implementation is
> free to define schedule as a NOOP. A distributed s/w scheduler can use this
> to schedule events; a centralized s/w scheduler can likewise make it a NOOP
> on non-scheduler cores.
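
The worker loop would then look roughly as below; only the flow matters,
and all function names are hypothetical:

#include <stdint.h>

struct event { uint32_t flow_id; uint8_t queue_id; void *payload; };

void event_schedule(uint8_t dev_id); /* NOOP for h/w schedulers and on
                                      * worker cores of a centralized
                                      * s/w scheduler */
int  event_dequeue(uint8_t dev_id, uint8_t port_id, struct event *ev);
int  event_enqueue(uint8_t dev_id, uint8_t port_id, struct event *ev);
void process_stage(struct event *ev); /* application work */

static void worker_loop(uint8_t dev_id, uint8_t port_id)
{
    struct event ev;

    for (;;) {
        event_schedule(dev_id);              /* may be a NOOP */
        if (!event_dequeue(dev_id, port_id, &ev))
            continue;                        /* nothing available yet */
        process_stage(&ev);
        event_enqueue(dev_id, port_id, &ev); /* forward to next queue */
    }
}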
> 
> 
> 
> 9. The schedule_from_group API does not fit the workflow.
> 
> 
> 
> 10. The ctxt_update/ctxt_wait breaks the normal workflow. If the normal
> workflow is dequeue -> do work based on event type -> enqueue, a pin_event
> argument to enqueue (where the pinned event is returned through the normal
> dequeue) allows the application workflow to remain the same whether or not
> an implementation supports it.
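
So enqueue would take the pin hint directly, something like (illustrative
only):

#include <stdint.h>

struct event { uint32_t flow_id; void *payload; };

/* Illustrative only: instead of separate ctxt_update/ctxt_wait calls, the
 * enqueue carries a pin hint; a supporting implementation returns the
 * pinned event to the same port on a later dequeue, others ignore the
 * hint, and the application loop stays identical either way. */
int event_enqueue(uint8_t dev_id, uint8_t port_id,
                  struct event *ev, int pin_event);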
> 
> 
> 
> 11. Burst dequeue/enqueue is needed.
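
Agreed, e.g. (hypothetical names, mirroring the ethdev rx/tx burst style):

#include <stdint.h>

struct event { uint32_t flow_id; void *payload; };

/* Illustrative burst variants to amortize the per-call cost, in the same
 * spirit as rte_eth_rx_burst()/rte_eth_tx_burst(). */
uint16_t event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
                             struct event ev[], uint16_t nb_events);
uint16_t event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
                             const struct event ev[], uint16_t nb_events);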
> 
> 
> 
> 12. Definition of a closed/open system - where an open system is memory
> backed and a closed-system eventdev has limited capacity. In such systems,
> it is also useful to denote per event port how many packets can be active
> in the system. This can serve as a threshold for ethdev-like devices so
> they don't overwhelm core-to-core events.
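
A per-port threshold field could express this, for example (hypothetical
names):

#include <stdint.h>

/* Illustrative only: limit how many new events a port may inject into a
 * closed (fixed-capacity) system.  An ethdev-style producer port would get
 * a low threshold so RX cannot starve core-to-core events of scheduler
 * capacity. */
struct event_port_conf {
    uint16_t dequeue_depth;
    uint16_t enqueue_depth;
    uint32_t new_event_threshold; /* max in-flight events from this port */
};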
> 
> 
> 
> 13. There should be some sort of device capabilities definition to address
> the different implementations.
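
For illustration, capabilities could be exposed as flags in an info struct
queried by the application (names hypothetical):

#include <stdint.h>

/* Illustrative capability flags so an application can adapt to what a
 * particular implementation supports. */
#define EVENT_DEV_CAP_QUEUE_QOS         (1U << 0) /* per-queue priority */
#define EVENT_DEV_CAP_EVENT_QOS         (1U << 1) /* per-event priority */
#define EVENT_DEV_CAP_BURST_MODE        (1U << 2) /* burst enq/deq      */
#define EVENT_DEV_CAP_DISTRIBUTED_SCHED (1U << 3) /* no scheduler core  */

struct event_dev_info {
    uint32_t event_dev_cap; /* bitmask of EVENT_DEV_CAP_* */
    uint8_t  max_event_queues;
    uint8_t  max_event_ports;
};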
> 
> 
> 
> 
> vnr
> ---
> 
