RXQ interrupts under Linux are based on the epoll mechanism. The expected
order of operations is as follows (see the sketch after this list):

1. Call rte_eth_dev_rx_intr_enable() to arm the CQ for receiving events
   on data input.
2. Block on rte_epoll_wait() with an array of file descriptors
   representing the CQ events. Upon data arrival the kernel will signal
   an input event on the corresponding CQ fd.
3. Call rte_eth_dev_rx_intr_disable() after the event was received and
   continue in polling mode. The mlx4 implementation of
   rte_eth_dev_rx_intr_disable() is to get the CQ event and ack it.
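
For illustration only (not part of this patch), a minimal sketch of that
flow in application code, loosely modeled on the l3fwd-power example; the
port/queue ids, the 10 ms timeout and the helper name are assumptions of
the sketch:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Hypothetical helper: block once waiting for Rx traffic on
 * (port_id, queue_id), then return to polling mode.
 */
static void
rx_wait_for_traffic(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;

	/* Register the queue interrupt fd with the per-thread epoll set
	 * (normally done once at setup time).
	 */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);
	/* 1. Arm the CQ so that data input generates an event. */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	/* 2. Block; the kernel signals an input event on the CQ fd upon
	 *    data arrival, or the call returns 0 after the 10 ms timeout.
	 */
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 10);
	/* 3. Get/ack the CQ event (done by the PMD) and continue in
	 *    polling mode.
	 */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}
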
In practice, applications may wake up from rte_epoll_wait() due to a
timeout, with no event to ack, but still call
rte_eth_dev_rx_intr_disable() unconditionally. In such cases the call
should return EAGAIN (since the file descriptors are non-blocking), as
opposed to EINVAL, which indicates a real failure. In the EAGAIN case
the PMD should not warn with "unable to disable interrupt on rx queue".

Signed-off-by: Ophir Munk <ophi...@mellanox.com>
---
 drivers/net/mlx4/mlx4_intr.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index 020fc25..19af935 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -326,13 +326,20 @@ mlx4_rx_intr_disable(struct rte_eth_dev *dev, uint16_t idx)
 	} else {
 		ret = mlx4_glue->get_cq_event(rxq->cq->channel, &ev_cq,
 					      &ev_ctx);
-		if (ret || ev_cq != rxq->cq)
+		/** For a non-zero ret save the errno (it may be EAGAIN,
+		 * which means get_cq_event() was called before an event
+		 * was received).
+		 */
+		if (ret)
+			ret = errno;
+		else if (ev_cq != rxq->cq)
 			ret = EINVAL;
 	}
 	if (ret) {
 		rte_errno = ret;
-		WARN("unable to disable interrupt on rx queue %d",
-		     idx);
+		if (ret != EAGAIN)
+			WARN("unable to disable interrupt on rx queue %d",
+			     idx);
 	} else {
 		rxq->mcq.arm_sn++;
 		mlx4_glue->ack_cq_events(rxq->cq, 1);
-- 
2.8.4
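
For context (again illustrative and not part of the patch), the
application-side pattern that hits this path looks roughly as follows;
treating a zero return from rte_epoll_wait() as a timeout and reading
the failure reason from rte_errno are assumptions of this sketch:

#include <errno.h>

#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_log.h>

static void
rx_intr_wait_then_disarm(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;
	int n, ret;

	/* May return 0 on timeout, i.e. wake up with no CQ event pending. */
	n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 10);
	/* Disable unconditionally, whether or not an event arrived. */
	ret = rte_eth_dev_rx_intr_disable(port_id, queue_id);
	if (ret != 0 && rte_errno == EAGAIN && n == 0)
		/* Timeout wake-up: nothing to ack and not a real failure;
		 * with this patch the PMD no longer warns here.
		 */
		return;
	if (ret != 0)
		/* e.g. rte_errno == EINVAL: a genuine failure. */
		RTE_LOG(ERR, USER1,
			"cannot disable Rx interrupt on queue %d\n", queue_id);
}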