On Fri, 26 Jul 2013, jinho hwang wrote:
>> Thanks for the tips. I don't think I'm running out of mbufs, but I'll check
>> that again. I am using these values from one of the examples, which are
>> claimed to be correct for the 82599EB.
>>
>> /*
>>  * These default values are optimized for use with the Intel(R) 82599 10 GbE
>>  * Controller and the DPDK ixgbe PMD. Consider using other values for other
>>  * network controllers and/or network drivers.
>>  */
>> #define TX_PTHRESH 36 /**< Default values of TX prefetch threshold reg. */
>> #define TX_HTHRESH 0  /**< Default values of TX host threshold reg. */
>> #define TX_WTHRESH 0  /**< Default values of TX write-back threshold reg. */
>>
>> static const struct rte_eth_txconf tx_conf = {
>> 	.tx_thresh = {
>> 		.pthresh = TX_PTHRESH,
>> 		.hthresh = TX_HTHRESH,
>> 		.wthresh = TX_WTHRESH,
>> 	},
>> 	.tx_free_thresh = 0, /* Use PMD default values */
>> 	.tx_rs_thresh = 0,   /* Use PMD default values */
>> };
>>
>> /*
>>  * Configurable number of RX/TX ring descriptors
>>  */
>> #define RTE_TEST_TX_DESC_DEFAULT 512
>> static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
>>
> I am wondering whether you use multiple cores accessing the same
> receive queue. I had this problem before, but after I made the number
> of receive queues equal to the number of receiving cores, the problem
> disappeared. I did not dig further, since how many receive queues I
> ended up with did not matter to me.
Jinho,

Thanks. I have only one queue (should I be using more?), but as far as I
know, I'm only using one core to transmit as well.

Scott
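
For reference, a minimal sketch of the one-RX-queue-per-core setup jinho
describes, loosely modeled on the old l2fwd-style examples: RSS is enabled so
incoming flows are spread across the queues, and one RX queue is set up per
receiving core. The names nb_rx_cores, pktmbuf_pool, and setup_rx_queues are
illustrative only (not from the original code), and the ETH_* macro names and
port_id width may differ between DPDK releases.

#include <rte_ethdev.h>
#include <rte_mempool.h>

#define NB_RXD      128
#define RX_PTHRESH  8  /* RX prefetch threshold (example default) */
#define RX_HTHRESH  8  /* RX host threshold (example default) */
#define RX_WTHRESH  4  /* RX write-back threshold (example default) */

/* Enable RSS so incoming flows are hashed across all RX queues. */
static const struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
		},
	},
};

static const struct rte_eth_rxconf rx_conf = {
	.rx_thresh = {
		.pthresh = RX_PTHRESH,
		.hthresh = RX_HTHRESH,
		.wthresh = RX_WTHRESH,
	},
};

/* One RX queue per receiving core; keep a single TX queue as before.
 * nb_rx_cores and pktmbuf_pool are placeholders for the caller's values. */
static int
setup_rx_queues(uint8_t port_id, uint16_t nb_rx_cores,
                struct rte_mempool *pktmbuf_pool)
{
	uint16_t q;
	int ret;

	ret = rte_eth_dev_configure(port_id, nb_rx_cores, 1, &port_conf);
	if (ret < 0)
		return ret;

	for (q = 0; q < nb_rx_cores; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, NB_RXD,
		                             rte_eth_dev_socket_id(port_id),
		                             &rx_conf, pktmbuf_pool);
		if (ret < 0)
			return ret;
	}
	return 0;
}

Each receiving lcore would then poll only its own queue index with
rte_eth_rx_burst(port_id, queue_id, ...), so no two cores ever touch the same
receive queue.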