> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen at networkplumber.org]
> Sent: Tuesday, January 19, 2016 4:06 AM
> To: Wang, Zhihong <zhihong.wang at intel.com>
> Cc: dev at dpdk.org; Ananyev, Konstantin <konstantin.ananyev at intel.com>;
> Richardson, Bruce <bruce.richardson at intel.com>; Xie, Huawei
> <huawei.xie at intel.com>
> Subject: Re: [PATCH v2 0/5] Optimize memcpy for AVX512 platforms
>
> On Sun, 17 Jan 2016 22:05:09 -0500
> Zhihong Wang <zhihong.wang at intel.com> wrote:
>
> > This patch set optimizes DPDK memcpy for AVX512 platforms, to make full
> > utilization of hardware resources and deliver high performance.
> >
> > In current DPDK, memcpy holds a large proportion of execution time in
> > libs like Vhost, especially for large packets, and this patch can bring
> > considerable benefits.
> >
> > The implementation is based on the current DPDK memcpy framework, some
> > background introduction can be found in these threads:
> > http://dpdk.org/ml/archives/dev/2014-November/008158.html
> > http://dpdk.org/ml/archives/dev/2015-January/011800.html
> >
> > Code changes are:
> >
> > 1. Read CPUID to check if AVX512 is supported by CPU
> >
> > 2. Predefine AVX512 macro if AVX512 is enabled by compiler
> >
> > 3. Implement AVX512 memcpy and choose the right implementation based
> >    on predefined macros
> >
> > 4. Decide alignment unit for memcpy perf test based on predefined macros
>
> Cool, I like it. How much impact does this have on VHOST?
The impact is significant, especially for enqueue, because vhost actually spends a lot of time doing memcpy. (Detailed numbers might not be appropriate here due to policy :-) -- only how I test it.) Simply measure the 1024B RX/TX time cost and compare it with 64B's, and you'll get a sense of it, although not a precise one. My test cases include NIC2VM2NIC and VM2VM scenarios, which are the main use cases currently, and use both throughput and RX/TX cycles for evaluation.
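As a side note for anyone following the thread, the build-time/run-time selection described in the quoted steps (check CPUID for AVX512, guard with a compiler macro, then pick the copy loop) can be sketched roughly as below. This is only an illustrative sketch, not the actual rte_memcpy code from the patch; the function name `copy_dispatch` and the use of GCC's `__builtin_cpu_supports` are my own choices for brevity.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#if defined(__AVX512F__)
#include <immintrin.h>
#endif

/* Sketch of macro + CPUID dispatch: use 512-bit copies only when the
 * compiler was allowed to emit AVX512 (__AVX512F__) AND the CPU reports
 * support at runtime; otherwise fall back to plain memcpy. */
static void *
copy_dispatch(void *dst, const void *src, size_t n)
{
#if defined(__AVX512F__)
	if (__builtin_cpu_supports("avx512f")) {
		uint8_t *d = dst;
		const uint8_t *s = src;
		/* main loop: 64 bytes per iteration with unaligned
		 * 512-bit loads/stores */
		while (n >= 64) {
			__m512i v = _mm512_loadu_si512((const void *)s);
			_mm512_storeu_si512((void *)d, v);
			d += 64;
			s += 64;
			n -= 64;
		}
		memcpy(d, s, n); /* remaining tail */
		return dst;
	}
#endif
	return memcpy(dst, src, n);
}
```

The real implementation in the patch also handles alignment and small-size special cases, which this sketch omits.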