The current flow rules insertion mechanism assumes that applications manage the flow rules lifecycle in the control path: flow rules creation/destruction is performed synchronously and under a lock. But for applications that do this work as part of the datapath, any blocking operation is undesirable because it delays packet processing.
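For reference, a minimal sketch of that synchronous control-path usage with the existing rte_flow_create() API (the drop rule and the install_drop_rule() helper are only illustrative, not part of this series):

	#include <stdint.h>
	#include <rte_flow.h>

	/* Today's model: rte_flow_create() is called in the control path and
	 * returns only once the PMD/HW has accepted (or rejected) the rule. */
	static struct rte_flow *
	install_drop_rule(uint16_t port_id, struct rte_flow_error *error)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_DROP },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		/* Blocking call: a core doing this in the datapath waits for
		 * the whole rule-insertion round trip before moving on. */
		return rte_flow_create(port_id, &attr, pattern, actions, error);
	}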
These patches introduce a datapath-focused flow rules management approach based on four main concepts:

1. Pre-configuration hints.
   To reduce the overhead of flow rules management, the application may
   provide hints at the initialization phase about the characteristics of
   the flow rules to be used. The configuration function pre-allocates all
   the needed resources inside the PMD/HW beforehand, and these resources
   are used at a later stage without costly allocations.

2. Flow grouping using templates.
   Unlike the current model, where each flow rule is treated as an
   independent entity, the new approach leverages application knowledge
   about common patterns shared by most flows. Similar flows are grouped
   together using templates to enable better resource management inside
   the PMD/HW.

3. Queue-based flow management.
   Flow rules creation/destruction is done via lockless flow queues. The
   application configures the number of queues during the initialization
   stage; create/destroy operations are then enqueued without taking any
   lock.

4. Asynchronous operations.
   The datapath is spared from waiting for flow rule creation/destruction
   to complete. With the asynchronous queue-based approach, packet
   processing can continue with the next packets while a flow rule is
   being inserted into or removed from the hardware. The application is
   expected to poll for results later to learn whether the flow rule was
   successfully created/destroyed.

Example of how to use this approach:

The init stage consists of resource preallocation, definition of the item
and action templates, and creation of the corresponding tables. All these
steps should be done before the device is started:

	rte_eth_dev_configure();
	rte_flow_configure(port_id, number_of_flow_queues, max_num_of_counters);
	rte_flow_item_template_create(port_id, items("eth/ipv4/udp"));
	rte_flow_action_template_create(port_id, actions("counter/set_tp_src"));
	rte_flow_table_create(port_id, item_template, action_template);
	rte_eth_dev_start();

Packet processing can start once all the resources are preallocated. Flow
rules creation/destruction jobs are enqueued as part of the packet handling
logic. These jobs are then flushed to the PMD/HW and their status is
requested via the dequeue API as a way to ensure flow rules are
successfully created/destroyed:

	rte_eth_rx_burst();

	for (every received packet in the burst) {
		if (a flow rule needs to be created) {
			rte_flow_q_flow_create(port_id, flow_queue_id, table_id,
				item_template_id, items("eth/ipv4 src is 1.1.1.1/udp"),
				action_template_id, actions("counter/set_tp_src is 5555"));
		} else if (a flow rule needs to be destroyed) {
			rte_flow_q_flow_destroy(port_id, flow_queue_id, flow_rule_id);
		}
		rte_flow_q_flush(port_id, flow_queue_id);
		rte_flow_q_dequeue(port_id, flow_queue_id, &result);
	}

(A fuller illustrative sketch of handling the dequeued results follows at
the end of this letter.)

Signed-off-by: Alexander Kozyrev <akozy...@nvidia.com>
Suggested-by: Ori Kam <or...@nvidia.com>

Alexander Kozyrev (3):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: add async queue-based flow rules operations

 lib/ethdev/rte_flow.h | 626 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 626 insertions(+)

-- 
2.18.2
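Purely as an illustration of the polling step in the example above (not
part of the proposed API): the result layout, the return value convention
and the user_data bookkeeping below are assumptions made for this sketch.

	/* Hypothetical result handling: assumes the dequeue reports how many
	 * results it returned and that each result carries a status plus the
	 * user context supplied at enqueue time. */
	while (rte_flow_q_dequeue(port_id, flow_queue_id, &result) > 0) {
		if (result.status == success) {
			/* mark the flow identified by result.user_data as offloaded */
		} else {
			/* retry the operation or keep handling the flow in software */
		}
	}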