On 02/12/2016 12:44 PM, Ananyev, Konstantin wrote:
>
>>
>>> -----Original Message-----
...
>>
>> In that case we don't need to make any changes at rte_ethdev.[h,c] to
>> alloc/free/maintain tx_buffer inside each queue...
>> It all will be the upper layer's responsibility.
>> So no need to modify existing rte_ethdev structures/code.
>> Again, no need for an error callback - the caller would check the return
>> value and decide what to do with unsent packets in the tx_buffer.
>>
>
> Just to summarise why I think it is better to have tx buffering managed at
> the app level:
>
> 1. Avoid any ABI change.
> 2. Avoid extra changes in rte_ethdev.c: tx_queue_setup/tx_queue_stop.
> 3. Provides much more flexibility to the user:
>    a) where to allocate space for tx_buffer (stack, heap, hugepages, etc).
>    b) user can mix and match plain tx_burst() and tx_buffer/tx_buffer_flush()
>       in any way he feels is appropriate.
>    c) user can change the size of tx_buffer without a stop/re-config/start
>       of the queue: just allocate a new larger (smaller) tx_buffer and copy
>       the contents to the new one.
>    d) user can preserve buffered packets through a device restart cycle:
>       i.e. if, let's say, a TX hang happened and the user has to do
>       dev_stop/dev_start, the contents of tx_buffer will stay unchanged and
>       could be (re-)transmitted after the device is up again, or through a
>       different port/queue if needed.
>
> As for the drawback mentioned - tx error handling becomes less transparent...
> But we can add an error handling routine and its user-provided parameter
> into struct rte_eth_dev_tx_buffer, something like this:
>
> +struct rte_eth_dev_tx_buffer {
> +    buffer_tx_error_fn cbfn;
> +    void *userdata;
> +    unsigned nb_pkts;
> +    uint64_t errors;
> +    /**< Total number of queued packets that failed to send and were dropped. */
> +    struct rte_mbuf *pkts[];
> +};
>
> Konstantin
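For illustration, here is a minimal sketch of how such an application-managed
buffer could look, built around the struct proposed above. The helper names
(tx_buffer_create, tx_buffer_resize, tx_buffer, tx_buffer_flush,
drop_unsent_cb) and the added size field are hypothetical, not part of any
agreed API - only rte_eth_tx_burst() and rte_pktmbuf_free() are actual DPDK
calls:

    /*
     * Minimal sketch of app-level tx buffering. Helper names and the
     * 'size' field are hypothetical; only rte_eth_tx_burst() and
     * rte_pktmbuf_free() are real DPDK API.
     */
    #include <stdlib.h>
    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    typedef void (*buffer_tx_error_fn)(struct rte_mbuf **unsent,
                                       uint16_t count, void *userdata);

    struct rte_eth_dev_tx_buffer {
        buffer_tx_error_fn cbfn;
        void *userdata;
        unsigned int size;     /* capacity of pkts[], added for this sketch */
        unsigned int nb_pkts;  /* packets currently buffered */
        uint64_t errors;       /* total packets dropped on TX error */
        struct rte_mbuf *pkts[];
    };

    /* Allocate the buffer wherever the application wants (heap here). */
    static struct rte_eth_dev_tx_buffer *
    tx_buffer_create(unsigned int size, buffer_tx_error_fn cbfn, void *userdata)
    {
        struct rte_eth_dev_tx_buffer *b;

        b = calloc(1, sizeof(*b) + size * sizeof(struct rte_mbuf *));
        if (b != NULL) {
            b->size = size;
            b->cbfn = cbfn;
            b->userdata = userdata;
        }
        return b;
    }

    /* Point (c) above: resize without touching the queue configuration. */
    static struct rte_eth_dev_tx_buffer *
    tx_buffer_resize(struct rte_eth_dev_tx_buffer *old, unsigned int new_size)
    {
        struct rte_eth_dev_tx_buffer *b;

        b = tx_buffer_create(new_size, old->cbfn, old->userdata);
        if (b != NULL) {
            b->nb_pkts = old->nb_pkts; /* assumes new_size >= old->nb_pkts */
            memcpy(b->pkts, old->pkts,
                   old->nb_pkts * sizeof(struct rte_mbuf *));
            b->errors = old->errors;
            free(old);
        }
        return b;
    }

    /* Send everything buffered; hand leftovers to the error callback. */
    static uint16_t
    tx_buffer_flush(uint16_t port, uint16_t queue,
                    struct rte_eth_dev_tx_buffer *b)
    {
        uint16_t sent, unsent;

        if (b->nb_pkts == 0)
            return 0;
        sent = rte_eth_tx_burst(port, queue, b->pkts, b->nb_pkts);
        unsent = b->nb_pkts - sent;
        if (unsent != 0) {
            b->errors += unsent;
            b->cbfn(&b->pkts[sent], unsent, b->userdata);
        }
        b->nb_pkts = 0;
        return sent;
    }

    /* Buffer one packet, flushing automatically when the buffer fills. */
    static uint16_t
    tx_buffer(uint16_t port, uint16_t queue,
              struct rte_eth_dev_tx_buffer *b, struct rte_mbuf *pkt)
    {
        b->pkts[b->nb_pkts++] = pkt;
        if (b->nb_pkts < b->size)
            return 0;
        return tx_buffer_flush(port, queue, b);
    }

    /* Example error callback: simply free the unsent packets. */
    static void
    drop_unsent_cb(struct rte_mbuf **unsent, uint16_t count, void *userdata)
    {
        uint16_t i;

        (void)userdata;
        for (i = 0; i < count; i++)
            rte_pktmbuf_free(unsent[i]);
    }

Because the buffer is plain application memory, points 3a-3d above follow
directly: it can live on the stack, heap, or in hugepages; it can be resized
by copying into a new allocation; and it survives a dev_stop/dev_start cycle
untouched, ready to be flushed to any port/queue once the device is up again.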
Just to reinforce Konstantin's comments. As a very basic - not to say
fundamental - rule, one should avoid adding to the PMD RX/TX API any extra
processing that can be handled at a higher level. The single, self-sufficient
reason is that we must avoid impacting performance on the critical path, in
particular for those - usually the majority of - applications that do not
need such extra operations, or that implement them better at the upper level.

Maybe in the not-so-distant future a proposal will come for forking a new
open source fast-dpdk project aiming at providing API simplicity, zero
overhead, modular design, and all those nice properties that everyone claims
to seek :-)

Ivan