Hi Mairtin,

> -----Original Message-----
> From: O'loingsigh, Mairtin <mairtin.oloings...@intel.com>
> Sent: Tuesday, September 29, 2020 4:36 PM
> To: Singh, Jasvinder <jasvinder.si...@intel.com>; Richardson, Bruce
> <bruce.richard...@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.gua...@intel.com>
> Cc: dev@dpdk.org; Ryan, Brendan <brendan.r...@intel.com>; Coyle, David
> <david.co...@intel.com>; O'loingsigh, Mairtin <mairtin.oloings...@intel.com>
> Subject: [PATCH v3 2/2] net: add support for AVX512/VPCLMULQDQ based CRC
>
> This patch enables the optimized calculation of CRC32-Ethernet and
> CRC16-CCITT using the AVX512 and VPCLMULQDQ instruction sets. This CRC
> implementation is built if the compiler supports the required instruction
> sets. It is selected at run-time if the host CPU, again, supports the
> required instruction sets.
>
> Signed-off-by: Mairtin o Loingsigh <mairtin.oloings...@intel.com>
> Signed-off-by: David Coyle <david.co...@intel.com>
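As background for the run-time selection mentioned in the commit message, this
is roughly how such a dispatch can be expressed with the CPU-flag API. A
minimal sketch only, assuming rte_cpu_get_flag_enabled() and the
RTE_CPUFLAG_AVX512F/RTE_CPUFLAG_VPCLMULQDQ/RTE_CPUFLAG_PCLMULQDQ flags are
exposed on x86, and assuming an RTE_NET_CRC_AVX512 value is added to
rte_net_crc_set_alg() by this series; the function name below is illustrative,
not the patch's actual code:

	#include <rte_cpuflags.h>
	#include <rte_net_crc.h>

	/* Sketch: pick the strongest CRC implementation the host CPU supports.
	 * RTE_NET_CRC_AVX512 is assumed to be the algorithm value added by
	 * this series; the SSE4.2 and scalar values already exist.
	 */
	static void
	net_crc_select_alg(void)
	{
		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) &&
				rte_cpu_get_flag_enabled(RTE_CPUFLAG_VPCLMULQDQ))
			rte_net_crc_set_alg(RTE_NET_CRC_AVX512);
		else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_PCLMULQDQ))
			rte_net_crc_set_alg(RTE_NET_CRC_SSE42);
		else
			rte_net_crc_set_alg(RTE_NET_CRC_SCALAR);
	}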
...

> +static __rte_always_inline uint32_t
> +crc32_eth_calc_vpclmulqdq(const uint8_t *data, uint32_t data_len, uint32_t crc,
> +	const struct crc_vpclmulqdq_ctx *params)
> +{
> +	__m128i res, d;
> +	__m256i b;
> +	__m512i temp, k;
> +	__m512i qw0 = _mm512_set1_epi64(0), qw1, qw2, qw3;
> +	__m512i fold0, fold1, fold2, fold3;
> +	__mmask16 mask;
> +	uint32_t n = 0;
> +	int reduction = 0;
> +
> +	/* Get CRC init value */
> +	b = _mm256_insert_epi32(_mm256_setzero_si256(), crc, 0);
> +	temp = _mm512_inserti32x8(_mm512_setzero_si512(), b, 0);

You can replace this with the following, which produces fewer instructions
(b needs to be changed to __m128i):

	b = _mm_cvtsi32_si128(crc);
	temp = _mm512_castsi128_si512(b);

> +
> +	if (data_len > 255) {
> +		fold0 = _mm512_loadu_si512((const __m512i *)data);

...

> +	} else {
> +		if (data_len > 31) {
> +			res = _mm_insert_epi32(_mm_setzero_si128(), crc, 0);

Should work better with:

	res = _mm_cvtsi32_si128(crc);

> +			d = _mm_loadu_si128((const __m128i *)data);
> +			res = _mm_xor_si128(res, d);
> +			n += 16;
> +
> +			reduction = 240 - ((n+256)-data_len);
> +
> +			while (reduction > 0)
> +				reduction_loop(&res, &reduction, data, &n,
> +						params);
> +
> +			if (n != data_len)
> +				res = last_two_xmm(data, data_len, n, res,
> +						params);
> +		} else if (data_len > 16) {
> +			res = _mm_insert_epi32(_mm_setzero_si128(), crc, 0);

Same as above.

> +			d = _mm_loadu_si128((const __m128i *)data);
> +			res = _mm_xor_si128(res, d);
> +			n += 16;
> +
> +			if (n != data_len)
> +				res = last_two_xmm(data, data_len, n, res,
> +						params);
> +		} else if (data_len == 16) {
> +			res = _mm_insert_epi32(_mm_setzero_si128(), crc, 0);

Same as above.

> +			d = _mm_loadu_si128((const __m128i *)data);
> +			res = _mm_xor_si128(res, d);
> +		} else {
> +			res = _mm_insert_epi32(_mm_setzero_si128(), crc, 0);

Same as above.

> +			mask = byte_len_to_mask_table[data_len];
> +			d = _mm_maskz_loadu_epi8(mask, data);
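To back up the suggestion: _mm_cvtsi32_si128() zero-extends the 32-bit value
into the xmm register (typically a single vmovd), so it yields exactly the
same vector as inserting into a zeroed register. A standalone check, a sketch
only and not part of the patch; it needs SSE4.1 for _mm_insert_epi32, e.g.
build with -msse4.1:

	#include <stdio.h>
	#include <immintrin.h>

	int main(void)
	{
		int crc = 0x04c11db7;	/* any 32-bit CRC seed will do */

		/* form used in the patch: insert into a zeroed register */
		__m128i a = _mm_insert_epi32(_mm_setzero_si128(), crc, 0);
		/* suggested form: zero-extending 32-bit move */
		__m128i b = _mm_cvtsi32_si128(crc);

		/* compare all 16 bytes; 0xFFFF means the two vectors match */
		int equal = _mm_movemask_epi8(_mm_cmpeq_epi8(a, b)) == 0xFFFF;
		printf("vectors identical: %s\n", equal ? "yes" : "no");
		return equal ? 0 : 1;
	}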