[dpdk-dev] [PATCH] vmxnet3: fixed segfault when initializing vmxnet3 pmd on linux platform

2014-03-12 Thread Daniel Kan
The vmxnet3 PCI hardware resources were never memory-mapped when
RTE_EAL_UNBIND_PORTS is not defined.
Specifically, pci_dev->mem_resource is not mapped. The fix is to always set
drv_flags with RTE_PCI_DRV_NEED_IGB_UIO for vmxnet3, which ensures
pci_uio_map_resource() is called.
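
For context, the effect of the flag during probe can be sketched roughly as below.
This is a simplified, standalone stand-in, not the actual EAL probe code: the
struct and functions here are hypothetical, and only the flag name and the role
of pci_uio_map_resource() come from the patch description.

/* Simplified illustration of why RTE_PCI_DRV_NEED_IGB_UIO matters.
 * All types and functions below are hypothetical stand-ins; only the
 * flag name and the idea that it gates pci_uio_map_resource() are
 * taken from the patch description above. */
#include <stdio.h>

#define RTE_PCI_DRV_NEED_IGB_UIO 0x0001	/* value is illustrative */

struct pci_driver_sketch {
	const char *name;
	unsigned drv_flags;
};

/* stand-in for the real pci_uio_map_resource(), which maps
 * pci_dev->mem_resource so the PMD can reach device registers */
static int pci_uio_map_resource_sketch(void)
{
	printf("mapping PCI BARs via igb_uio\n");
	return 0;
}

static int probe_sketch(const struct pci_driver_sketch *dr)
{
	/* Without the flag the BARs are never mapped, and the PMD later
	 * dereferences an unmapped mem_resource[] -> segfault. */
	if (dr->drv_flags & RTE_PCI_DRV_NEED_IGB_UIO)
		return pci_uio_map_resource_sketch();
	return 0;
}

int main(void)
{
	struct pci_driver_sketch vmxnet3 = {
		.name = "rte_vmxnet3_pmd",
		.drv_flags = RTE_PCI_DRV_NEED_IGB_UIO,	/* now set unconditionally */
	};
	return probe_sketch(&vmxnet3);
}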

Signed-off-by: Daniel Kan 
---
 lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c |2 --
 1 file changed, 2 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c 
b/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
index 6757aa2..8259cfe 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
@@ -267,9 +267,7 @@ static struct eth_driver rte_vmxnet3_pmd = {
{
.name = "rte_vmxnet3_pmd",
.id_table = pci_id_vmxnet3_map,
-#ifdef RTE_EAL_UNBIND_PORTS
.drv_flags = RTE_PCI_DRV_NEED_IGB_UIO,
-#endif
},
.eth_dev_init = eth_vmxnet3_dev_init,
.dev_private_size = sizeof(struct vmxnet3_adapter),
-- 
1.7.9.5



[dpdk-dev] performance for 1500byte packet

2014-03-12 Thread Bin Zhang
Hi,

I am seeing very strange behavior for l2fwd in a VM; here are the performance
numbers (a rough line-rate comparison follows the numbers):

64 bytes   - 8.4 Mpps
256 bytes  - 7.7 Mpps
512 bytes  - 1.5 Mpps
1500 bytes - 500 Kpps
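
For reference, a quick sketch of theoretical 10GbE line rates per packet size,
assuming the sizes above are on-wire Ethernet frame sizes (including FCS) plus
the 8-byte preamble and 12-byte inter-frame gap; this is just arithmetic, not a
measurement:

/* Rough theoretical 10GbE packet rates for comparison with the
 * numbers above. Assumes "N bytes" is the frame size on the wire
 * and adds 20 bytes of preamble + inter-frame gap per packet. */
#include <stdio.h>

int main(void)
{
	const double link_bps = 10e9;		/* 10 Gbit/s */
	const int overhead = 8 + 12;		/* preamble + IFG, in bytes */
	const int sizes[] = {64, 256, 512, 1500};

	for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		double pps = link_bps / ((sizes[i] + overhead) * 8.0);
		printf("%4d bytes: %.2f Mpps line rate\n",
		       sizes[i], pps / 1e6);
	}
	return 0;
}
/* Prints roughly: 64 -> 14.88, 256 -> 4.53, 512 -> 2.35, 1500 -> 0.82 Mpps */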

My setup details are:
- Traffic comes in on one 10G interface and goes out on another 10G
interface
- Both 10G NICs are 82599
- Using a virtual function in the VM (VMware ESXi 5.1)
- Host OS: Ubuntu 12.04
- Using one CPU for these tests

So my question is:
Why does a packet size larger than 256 bytes cause such a large performance drop?

Thanks,
Bin