From: Tetsuya Mukawa <muk...@igel.co.jp>

Hi,
I want to measure throughput for cases like the following:
 - a path connected by ring PMDs
 - a path connected by librte_vhost and the virtio-net PMD
 - a path connected by MEMNIC PMDs
 - .....

Does anyone else want to do the same thing? Anyway, I guess these throughputs
may be too high for some devices like IXIA, and it is a bit of a pain to write
or fix applications just for measurement. I guess a PMD that behaves like
'/dev/null', used together with the testpmd application, will help.

This patch set is an RFC of such a '/dev/null'-like PMD. Please see the first
commit of this patch set.

Here is my plan for using this PMD:

 +-------------------------------+
 |            testpmd1           |
 +-------------+      +----------+
 | Target PMD1 |      | null PMD |
 +---++--------+      +----------+
     ||
     || Target path
     ||
 +---++--------+      +----------+
 | Target PMD2 |      | null PMD |
 +-------------+------+----------+
 |            testpmd2           |
 +-------------------------------+

testpmd1 or testpmd2 is started with "start tx_first", which causes a huge
amount of traffic. The result is not the throughput of PMD1 or PMD2 alone,
but the throughput between PMD1 and PMD2. I guess that is enough to get a
rough throughput figure. It is also nice for measurement that the same
environment can be reused for each target path.

Any suggestions or comments?

Thanks,
Tetsuya

--

Tetsuya Mukawa (1):
  librte_pmd_null: Add null PMD

 config/common_bsdapp               |   5 +
 config/common_linuxapp             |   5 +
 lib/Makefile                       |   1 +
 lib/librte_pmd_null/Makefile       |  58 +++++
 lib/librte_pmd_null/rte_eth_null.c | 474 +++++++++++++++++++++++++++++++++++++
 5 files changed, 543 insertions(+)
 create mode 100644 lib/librte_pmd_null/Makefile
 create mode 100644 lib/librte_pmd_null/rte_eth_null.c

--
1.9.1
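
P.S. In case it helps the discussion, below is a minimal sketch of what the
rx/tx burst handlers of such a null PMD could look like. This is only an
illustration, not the code in the patch: the names (null_queue, eth_null_rx,
eth_null_tx) and the fixed packet_size field are hypothetical, and the actual
implementation is in lib/librte_pmd_null/rte_eth_null.c.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hypothetical per-queue context. */
    struct null_queue {
            struct rte_mempool *mb_pool; /* pool for dummy packets */
            uint16_t packet_size;        /* length reported for rx packets */
    };

    /* RX: hand back freshly allocated dummy packets, acting as an
     * endless traffic source for the forwarding path. */
    static uint16_t
    eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
    {
            struct null_queue *nq = q;
            uint16_t i;

            for (i = 0; i < nb_bufs; i++) {
                    bufs[i] = rte_pktmbuf_alloc(nq->mb_pool);
                    if (bufs[i] == NULL)
                            break;
                    bufs[i]->data_len = nq->packet_size;
                    bufs[i]->pkt_len = nq->packet_size;
            }
            return i;
    }

    /* TX: free every packet, like writing to /dev/null. */
    static uint16_t
    eth_null_tx(void *q __rte_unused, struct rte_mbuf **bufs, uint16_t nb_bufs)
    {
            uint16_t i;

            for (i = 0; i < nb_bufs; i++)
                    rte_pktmbuf_free(bufs[i]);
            return i;
    }

The idea is that the null rx side never runs dry, so after "start tx_first"
testpmd keeps the target path saturated, while the null tx side drops
everything and therefore never back-pressures the path under test.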