For tuning ESXi and the vSwitch for latency-sensitive workloads, I recommend the 
following paper published by VMware, which you can try out: 
https://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf

In this setup (VMware host with a DPDK VM using vmxnet3), most of the overall 
latency is spent in the vmware-native-driver/vmkernel/vmxnet3-backend/vmx-emulation 
threads in ESXi. So tuning ESXi (as explained in the white paper above) and/or 
making sure these important threads are not starved can improve latency, and in 
some cases throughput, for this setup.
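
If it helps, most of the relevant knobs are VM-level advanced settings. A rough 
sketch of the kind of .vmx entries involved (option names and values are from 
memory and should be verified against the white paper and your ESXi version):

    # .vmx advanced settings (illustrative values only - check the white paper)
    sched.cpu.latencySensitivity = "high"   # reduce scheduling jitter for the VM's vCPUs
    sched.cpu.min = "2000"                  # CPU reservation in MHz, sized to the VM

Full CPU and memory reservations are typically required for the high 
latency-sensitivity setting to take effect.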

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Matthew Hall
Sent: Thursday, June 25, 2015 8:19 AM
To: Vass, Sandor (Nokia - HU/Budapest)
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] VMXNET3 on vmware, ping delay

On Thu, Jun 25, 2015 at 09:14:53AM +0000, Vass, Sandor (Nokia - HU/Budapest) 
wrote:
> According to my understanding each packet should go through BR as fast 
> as possible, but it seems that rte_eth_rx_burst retrieves packets 
> only when there are at least 2 packets on the RX queue of the NIC. At 
> least most of the time; there are cases (rarely, according to my 
> console log) where it retrieves just 1 packet, and sometimes only 3 
> packets can be retrieved...

By default DPDK is optimized for throughput, not latency. Try a test with 
heavier traffic.
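
For what it's worth, rte_eth_rx_burst() itself does not wait for a minimum 
number of packets; it returns whatever descriptors are ready at that moment, up 
to the burst size you pass in (including 0 or 1). A minimal forwarding loop is 
sketched below (illustrative only; BURST_SIZE and the port/queue arguments are 
placeholders, and exact argument types vary a bit across DPDK versions):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Poll one RX queue and forward whatever arrived right away;
     * rte_eth_rx_burst() returns 0..BURST_SIZE packets per call. */
    static void
    forward_once(uint16_t rx_port, uint16_t tx_port, uint16_t queue)
    {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t i, nb_rx, nb_tx;

            nb_rx = rte_eth_rx_burst(rx_port, queue, bufs, BURST_SIZE);
            if (nb_rx == 0)
                    return;                 /* nothing ready on this poll */

            nb_tx = rte_eth_tx_burst(tx_port, queue, bufs, nb_rx);

            /* Drop anything the TX ring could not accept. */
            for (i = nb_tx; i < nb_rx; i++)
                    rte_pktmbuf_free(bufs[i]);
    }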

There is also some work going on now on an interrupt-driven mode for DPDK, which 
will behave more like traditional Ethernet drivers rather than pure polling-mode 
drivers.
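
The rough shape of that mode (following the pattern in the l3fwd-power example; 
names are from the rte_ethdev/EAL interrupt API and may differ slightly between 
DPDK releases) is to configure the port with intr_conf.rxq = 1 and then, whenever 
polling comes up empty, arm the RX interrupt and sleep until the NIC signals new 
packets:

    #include <rte_ethdev.h>
    #include <rte_interrupts.h>

    /* Sleep until the RX queue signals that packets have arrived.
     * Assumes the port was configured with intr_conf.rxq = 1 and the
     * queue's interrupt was added to the per-thread epoll set via
     * rte_eth_dev_rx_intr_ctl_q(..., RTE_INTR_EVENT_ADD, ...). */
    static void
    wait_for_rx(uint16_t port, uint16_t queue)
    {
            struct rte_epoll_event event;

            rte_eth_dev_rx_intr_enable(port, queue);
            rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 10 /* ms */);
            rte_eth_dev_rx_intr_disable(port, queue);
    }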

Though I'm not an expert on it, there are also a number of ways to optimize for 
latency, which hopefully others can discuss... or try searching the archives / 
web site / Intel tuning documentation.

Matthew.
