Re: [dpdk-dev] [PATCH v4 2/5] vhost: use buffer vectors in dequeue path

2018-07-17 Thread Wang, Yinan


Hi Maxime,

vhost-user + virtio-net VM2VM TSO performance tests work well on DPDK 
v18.05. 
But during our performance testing with v18.08-rc1, we found a regression in the 
VM2VM test case: when using iperf or netperf, the server VM hangs/crashes. 
After bisecting, I found it is caused by your patch below.
Could you help take a look? 

Below are the steps to reproduce:

1. Bind the 82599 NIC port to igb_uio
2. Launch vhost-switch
./examples/vhost/build/vhost-switch -c 0x7000 -n 4 --socket-mem 2048,2048 
--legacy-mem -- -p 0x1 --mergeable 1 --vm2vm 1  --tso 1 --tx-csum 1  
--socket-file ./vhost-net --socket-file ./vhost-net1
3. Launch VM1 and VM2.
  taskset -c 31 \
  qemu-system-x86_64  -name vm0 -enable-kvm \
  -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
  -device virtio-serial -device 
virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -daemonize \
  -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net 
nic,vlan=0,macaddr=00:00:00:50:fb:f3,addr=1f -net 
user,vlan=0,hostfwd=tcp:127.0.0.1:6145-:22 \
  -chardev socket,id=char0,path=./vhost-net \
  -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
  -cpu host -smp 1 -m 4096 -object 
memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa 
node,memdev=mem -mem-prealloc \
  -drive file=/home/osimg/ubuntu16.img -vnc :4

 taskset -c 32 \
 qemu-system-x86_64  -name vm1 -enable-kvm \
 -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 \
 -device virtio-serial -device 
virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -daemonize \
 -monitor unix:/tmp/vm1_monitor.sock,server,nowait -net 
nic,vlan=0,macaddr=00:00:00:40:75:e7,addr=1f -net 
user,vlan=0,hostfwd=tcp:127.0.0.1:6134-:22 \
 -chardev socket,id=char0,path=./vhost-net1 \
 -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02 -cpu host -smp 
1 -m 4096 \
 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on 
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img -vnc 
:5

4. On VM1, set the virtio IP and run iperf
ifconfig ens4 1.1.1.2
arp -s 1.1.1.8 52:54:00:00:00:02
arp # to check the arp table is complete and correct. 

5. On VM2, set the virtio IP and run iperf
ifconfig ens4 1.1.1.8
arp -s 1.1.1.2 52:54:00:00:00:01
arp # to check the arp table is complete and correct. 
 
6. Ensure virtio1 can ping virtio2, then in VM1 run `iperf -s -i 1`; in VM2, 
run `iperf -c 1.1.1.2 -i 1 -t 60`.

7. Check the iperf performance for VM2VM case.

Best Wishes,
Yinan

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Maxime Coquelin
Sent: Friday, July 6, 2018 8:05 AM
To: Bie, Tiwei ; Wang, Zhihong ; 
dev@dpdk.org
Cc: Maxime Coquelin 
Subject: [dpdk-dev] [PATCH v4 2/5] vhost: use buffer vectors in dequeue path

To ease packed ring layout integration, this patch makes the dequeue path 
re-use the buffer vectors implemented for the enqueue path.

Doing this, copy_desc_to_mbuf() is now ring-layout agnostic.

Signed-off-by: Maxime Coquelin 
---
 lib/librte_vhost/vhost.h  |   1 +
 lib/librte_vhost/virtio_net.c | 451 --
 2 files changed, 167 insertions(+), 285 deletions(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h index 
3437b996b..79e3117d2 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -43,6 +43,7 @@
  * from vring to do scatter RX.
  */
 struct buf_vector {
+   uint64_t buf_iova;
uint64_t buf_addr;
uint32_t buf_len;
uint32_t desc_idx;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c 
index 741267345..6339296c7 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -225,12 +225,12 @@ static __rte_always_inline int  fill_vec_buf(struct 
virtio_net *dev, struct vhost_virtqueue *vq,
 uint32_t avail_idx, uint32_t *vec_idx,
 struct buf_vector *buf_vec, uint16_t *desc_chain_head,
-uint16_t *desc_chain_len)
+uint16_t *desc_chain_len, uint8_t perm)
 {
uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
uint32_t vec_id = *vec_idx;
uint32_t len= 0;
-   uint64_t dlen;
+   uint64_t dlen, desc_avail, desc_iova;
struct vring_desc *descs = vq->desc;
struct vring_desc *idesc = NULL;
 
@@ -261,16 +261,43 @@ fill_vec_buf(struct virtio_net *dev, struct 
vhost_virtqueue *vq,
}
 
while (1) {
-   if (unlikely(vec_id >= BUF_VECTOR_MAX || idx >= vq->size)) {
+   if (unlikely(idx >= vq->size)) {
free_ind_table(idesc);
return -1;
}
 
+

[dpdk-dev] [Bug 72] Unable to install dpdk on arm64

2018-07-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=72

Bug ID: 72
   Summary: Unable to install dpdk on arm64
   Product: DPDK
   Version: unspecified
  Hardware: ARM
OS: Linux
Status: CONFIRMED
  Severity: normal
  Priority: Normal
 Component: core
  Assignee: dev@dpdk.org
  Reporter: stanislav.chle...@gmail.com
  Target Milestone: ---

###
### C O M P I L I N G ##
###
stanislav@contivvpp:~/dpdk$ make install T=arm64_thunderx_linuxapp_gcc
make[3]: *** No rule to make target
'/home/stanislav/dpdk/config/defconfig_arm64_thunderx_linuxapp_gcc', needed by
'/home/stanislav/dpdk/arm64_thunderx_linuxapp_gcc/.config'.  Stop.
/home/stanislav/dpdk/mk/rte.sdkroot.mk:65: recipe for target 'config' failed
make[2]: *** [config] Error 2
/home/stanislav/dpdk/mk/rte.sdkinstall.mk:57: recipe for target 'pre_install'
failed
make[1]: *** [pre_install] Error 2
/home/stanislav/dpdk/mk/rte.sdkroot.mk:79: recipe for target 'install' failed
make: *** [install] Error 2
stanislav@contivvpp:~/dpdk$ 


stanislav@contivvpp:~/dpdk$ make install T=arm64_native_linuxapp_gcc
make[3]: *** No rule to make target
'/home/stanislav/dpdk/config/defconfig_arm64_native_linuxapp_gcc', needed by
'/home/stanislav/dpdk/arm64_native_linuxapp_gcc/.config'.  Stop.
/home/stanislav/dpdk/mk/rte.sdkroot.mk:65: recipe for target 'config' failed
make[2]: *** [config] Error 2
/home/stanislav/dpdk/mk/rte.sdkinstall.mk:57: recipe for target 'pre_install'
failed
make[1]: *** [pre_install] Error 2
/home/stanislav/dpdk/mk/rte.sdkroot.mk:79: recipe for target 'install' failed
make: *** [install] Error 2
stanislav@contivvpp:~/dpdk$ 

###
 R E P O S I T O R Y ##
###

stanislav@contivvpp:~/dpdk$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
stanislav@contivvpp:~/dpdk$ git log
commit c27dbc300eee78c2eb33e84181617fdd7cbaaae4
Author: Thomas Monjalon 
Date:   Mon Jul 16 01:17:18 2018 +0200

version: 18.08-rc1

Signed-off-by: Thomas Monjalon 

###

See more at:
https://gist.github.com/stanislav-chlebec/b622b12ec5b4a976a74e6de20e8a6fc1

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [dpdk-dev] [PATCH] examples/l2fwd-crypto: fix digest with AEAD algorithms

2018-07-17 Thread De Lara Guarch, Pablo



> -Original Message-
> From: De Lara Guarch, Pablo
> Sent: Tuesday, July 17, 2018 9:04 AM
> To: De Lara Guarch, Pablo 
> Subject: RE: [PATCH] examples/l2fwd-crypto: fix digest with AEAD algorithms
> 
> 
> 
> From: Dwivedi, Ankur [mailto:ankur.dwiv...@cavium.com]
> Sent: Tuesday, July 17, 2018 6:43 AM
> To: De Lara Guarch, Pablo ; Doherty, Declan
> 
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: Re: [PATCH] examples/l2fwd-crypto: fix digest with AEAD algorithms
> 
> Hi Pablo,
> 

Hi Ankur,

> This patch solves the bug.

Thanks, I will add in the commit that you have tested this patch.

Pablo

> 
> Thanks
> Ankur



Re: [dpdk-dev] [PATCH v2] test/compress: add scatter-gather tests

2018-07-17 Thread De Lara Guarch, Pablo



> -Original Message-
> From: De Lara Guarch, Pablo
> Sent: Monday, July 16, 2018 10:31 AM
> To: Daly, Lee ; Trahe, Fiona ;
> ashish.gu...@caviumnetworks.com; shally.ve...@caviumnetworks.com
> Cc: dev@dpdk.org; De Lara Guarch, Pablo 
> Subject: [PATCH v2] test/compress: add scatter-gather tests
> 
> Added a scatter-gather test, which splits the input data into multi-segment
> mbufs and compresses/decompresses the data into a multi-segment mbuf as well.
> 
> Signed-off-by: Pablo de Lara 
> Acked-by: Lee Daly 

Applied to dpdk-next-crypto.

Pablo


Re: [dpdk-dev] [PATCH] examples/l2fwd-crypto: fix digest with AEAD algorithms

2018-07-17 Thread De Lara Guarch, Pablo



> -Original Message-
> From: De Lara Guarch, Pablo
> Sent: Monday, July 16, 2018 9:26 AM
> To: ankur.dwiv...@cavium.com; Doherty, Declan 
> Cc: dev@dpdk.org; De Lara Guarch, Pablo ;
> sta...@dpdk.org
> Subject: [PATCH] examples/l2fwd-crypto: fix digest with AEAD algorithms
> 
> When performing authentication verification (both for AEAD algorithms, such as
> AES-GCM, and for authentication algorithms, such as SHA1-HMAC), the digest
> address is calculated based on the packet size and the algorithm used
> (subtracting the digest size and IP header from the packet size).
> 
> However, for AEAD algorithms, this was not calculated correctly, since the
> digest size was not being subtracted.
> Signed-off-by: Pablo de Lara 

Applied to dpdk-next-crypto.

Pablo


Re: [dpdk-dev] [PATCH v3] test: add sample functions for packet forwarding

2018-07-17 Thread Pattan, Reshma
Hi

> -Original Message-
> From: Parthasarathy, JananeeX M
> Sent: Monday, July 16, 2018 5:01 PM
> To: dev@dpdk.org
> Cc: Horton, Remy ; Pattan, Reshma
> ; Parthasarathy, JananeeX M
> ; Chaitanya Babu, TalluriX
> 
> Subject: [PATCH v3] test: add sample functions for packet forwarding
> 
> Add sample test functions for packet forwarding.
> These can be used for unit test cases for LatencyStats and BitrateStats
> libraries.
> 
> Signed-off-by: Chaitanya Babu Talluri 
> Reviewed-by: Reshma Pattan 
> ---
> + */
> +/* Sample test to forward packets using virtual portids */ int
> +test_packet_forward(void)
> +{
> + struct rte_mbuf *pbuf[NUM_PACKETS];
> +
> + mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
> + RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
> + if (mp == NULL)
> + return -1;

Instead of returning -1, please return TEST_FAILED; that seems to be the correct
way, as autotest expects TEST_SUCCESS or TEST_FAILED as the test result. 
The rest of the code looks OK; I am done with the review. In the next patch 
version, don't forget to add my ack. 

Acked-by: Reshma Pattan 



Re: [dpdk-dev] [PATCH v3] vfio: fix workaround of BAR0 mapping

2018-07-17 Thread Takeshi Yoshimura
2018-07-13 20:08 GMT+09:00 Burakov, Anatoly :
> On 13-Jul-18 12:00 PM, Burakov, Anatoly wrote:
>>
>> On 13-Jul-18 11:11 AM, Takeshi Yoshimura wrote:
>>>
>>> The workaround for BAR0 mapping gives up and immediately returns an
>>> error if it cannot map around the MSI-X table. However, recent versions
>>> of VFIO allow MSI-X mapping (*).
>>>
>>> I changed the code not to return immediately, but to try the mapping. On old
>>> Linux, mmap just fails and returns the same error as the code before my fix.
>>> On recent Linux, mmap succeeds, and this patch enables running DPDK in
>>> specific environments (e.g., ppc64le with HGST NVMe)
>>>
>>> (*): "vfio-pci: Allow mapping MSIX BAR",
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
>>> commit/id=a32295c612c57990d17fb0f41e7134394b2f35f6
>>>
>>> Fixes: 90a1633b2347 ("eal/linux: allow to map BARs with MSI-X tables")
>>>
>>> Signed-off-by: Takeshi Yoshimura 
>>> ---
>>>
>>> Thanks, Anatoly.
>>>
>>> I updated the patch not to affect behaviors of older Linux and
>>> other environments as well as possible. This patch adds another
>>> chance to mmap BAR0.
>>>
>>> I noticed that the check at line 350 already includes the check
>>> of page size, so this patch does not fix the check.
>>>
>>> Regards,
>>> Takeshi
>>
>>
>> Hi Takeshi,
>>
>> Please correct me if I'm wrong, but I'm not sure the old behavior is kept.
>>
>> Let's say we're running an old kernel, which doesn't allow mapping MSI-X
>> BARs, and the MSI-X table starts at the beginning of the BAR (floor-aligned
>> to page size) and ends at or beyond the end of the BAR (ceiling-aligned to
>> page size). In that situation, the old code just skipped the BAR and returned 0.
>>
>> We then exited the function, and there's a check of the return value right
>> after pci_vfio_mmap_bar() that stops us if we fail to map something.
>> With the old code, we would carry on and finish the rest of our
>> mappings. With your new code, you attempt to map the BAR, it fails,
>> and you return -1 on older kernels.
>>
>> I believe what we really need here is the following:
>>
>> 1) If this is a BAR containing MSI-X vector, first try mapping the entire
>> BAR. If it succeeds, great - that would be your new kernel behavior.
>> 2) If we failed on step 1), check to see if we can map around the BAR. If
>> we can, try to map around it like the current code does. If we cannot map
>> around it (i.e. if MSI-X vector, page aligned, occupies entire BAR), then we
>> simply return 0 and skip the BAR.
>>
>> That, i would think, would keep the old behavior and enable the new one.
>>
>> Does that make sense?
>>
>
> I envision this to look something like this:
>
> bool again = false;
> do {
> if (again) {
> // set up mmap-around
> if (cannot map around)
> return 0;
> }
> // try mapping
> if (map_failed && msix_table->bar_index == bar_index) {
> again = true;
> continue;
> }
> if (map_failed)
> return -1;
> break/return 0;
> } while (again);
>
> --
> Thanks,
> Anatoly

That makes sense. The return code was not the same as the old one in some paths.

I wrote code based on your idea. It works at least on my ppc64 and
x86 machines, but I am concerned that the error messages from
pci_map_resource() may confuse users on old Linux. I saw a message like
this (even though I could mmap):
EAL: pci_map_resource(): cannot mmap(15, 0x728ee3a3, 0x4000, 0x0):
Invalid argument (0x)

Anyway, I am sending it in the next email; please check whether there are
any other problems in the code.

Thanks,
Takeshi


[dpdk-dev] [PATCH v4] vfio: fix workaround of BAR0 mapping

2018-07-17 Thread Takeshi Yoshimura
The workaround for BAR0 mapping gives up and immediately returns an
error if it cannot map around the MSI-X table. However, recent versions
of VFIO allow MSI-X mapping (*).

I changed the code not to return immediately, but to try the mapping. On old
Linux, mmap just fails and returns the same error as the code before my fix.
On recent Linux, mmap succeeds, and this patch enables running DPDK in
specific environments (e.g., ppc64le with HGST NVMe)

(*): "vfio-pci: Allow mapping MSIX BAR",
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
commit/id=a32295c612c57990d17fb0f41e7134394b2f35f6

Fixes: 90a1633b2347 ("eal/linux: allow to map BARs with MSI-X tables")

Signed-off-by: Takeshi Yoshimura 
---
 drivers/bus/pci/linux/pci_vfio.c | 92 ++--
 1 file changed, 51 insertions(+), 41 deletions(-)

diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index aeeaa9ed8..afdf0f6d5 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -332,50 +332,58 @@ pci_vfio_mmap_bar(int vfio_dev_fd, struct 
mapped_pci_resource *vfio_res,
void *bar_addr;
struct pci_msix_table *msix_table = &vfio_res->msix_table;
struct pci_map *bar = &vfio_res->maps[bar_index];
+   bool again = false;
 
if (bar->size == 0)
/* Skip this BAR */
return 0;
 
-   if (msix_table->bar_index == bar_index) {
-   /*
-* VFIO will not let us map the MSI-X table,
-* but we can map around it.
-*/
-   uint32_t table_start = msix_table->offset;
-   uint32_t table_end = table_start + msix_table->size;
-   table_end = (table_end + ~PAGE_MASK) & PAGE_MASK;
-   table_start &= PAGE_MASK;
-
-   if (table_start == 0 && table_end >= bar->size) {
-   /* Cannot map this BAR */
-   RTE_LOG(DEBUG, EAL, "Skipping BAR%d\n", bar_index);
-   bar->size = 0;
-   bar->addr = 0;
-   return 0;
-   }
-
-   memreg[0].offset = bar->offset;
-   memreg[0].size = table_start;
-   memreg[1].offset = bar->offset + table_end;
-   memreg[1].size = bar->size - table_end;
-
-   RTE_LOG(DEBUG, EAL,
-   "Trying to map BAR%d that contains the MSI-X "
-   "table. Trying offsets: "
-   "0x%04lx:0x%04lx, 0x%04lx:0x%04lx\n", bar_index,
-   memreg[0].offset, memreg[0].size,
-   memreg[1].offset, memreg[1].size);
-   } else {
-   memreg[0].offset = bar->offset;
-   memreg[0].size = bar->size;
-   }
-
/* reserve the address using an inaccessible mapping */
bar_addr = mmap(bar->addr, bar->size, 0, MAP_PRIVATE |
MAP_ANONYMOUS | additional_flags, -1, 0);
-   if (bar_addr != MAP_FAILED) {
+   if (bar_addr == MAP_FAILED) {
+   RTE_LOG(ERR, EAL,
+   "Failed to create inaccessible mapping for BAR%d\n",
+   bar_index);
+   return -1;
+   }
+
+   memreg[0].offset = bar->offset;
+   memreg[0].size = bar->size;
+   do {
void *map_addr = NULL;
+   if (again) {
+   /*
+* VFIO did not let us map the MSI-X table,
+* but we can map around it.
+*/
+   uint32_t table_start = msix_table->offset;
+   uint32_t table_end = table_start + msix_table->size;
+   table_end = (table_end + ~PAGE_MASK) & PAGE_MASK;
+   table_start &= PAGE_MASK;
+
+   if (table_start == 0 && table_end >= bar->size) {
+   /* Cannot map this BAR */
+   RTE_LOG(DEBUG, EAL, "Skipping BAR%d\n",
+   bar_index);
+   bar->size = 0;
+   bar->addr = 0;
+   return 0;
+   }
+
+   memreg[0].offset = bar->offset;
+   memreg[0].size = table_start;
+   memreg[1].offset = bar->offset + table_end;
+   memreg[1].size = bar->size - table_end;
+
+   RTE_LOG(DEBUG, EAL,
+   "Trying to map BAR%d that contains the MSI-X "
+   "table. Trying offsets: "
+   "0x%04lx:0x%04lx, 0x%04lx:0x%04lx\n", bar_index,
+   memreg[0].offset, memreg[0].size,
+   memreg[1].offset, memreg[1].size);
+   }
+
if (memreg[0

Re: [dpdk-dev] [PATCH v3 8/9] autotest: update autotest test case list

2018-07-17 Thread Pattan, Reshma
Hi,

> -Original Message-
> From: Burakov, Anatoly
> Sent: Monday, July 16, 2018 4:16 PM
> To: Pattan, Reshma ; tho...@monjalon.net;
> dev@dpdk.org
> Cc: Parthasarathy, JananeeX M ;
> sta...@dpdk.org
> Subject: Re: [PATCH v3 8/9] autotest: update autotest test case list
> 
> 
> > +{
> > +"Name":"Set rxtx mode",
> > +"Command": "set_rxtx_mode",
> > +"Func":default_autotest,
> > +"Report":  None,
> > +},
> > +{
> > +"Name":"Set rxtx anchor",
> > +"Command": "set_rxtx_anchor",
> > +"Func":default_autotest,
> > +"Report":  None,
> > +},
> > +{
> > +"Name":"Set rxtx sc",
> > +"Command": "set_rxtx_sc",
> > +"Func":default_autotest,
> > +"Report":  None,
> > +},
> 
> The above three tests don't look like autotests to me. I have no idea what
> they are for, but either they need a special function, or they need to be 
> taken
> out.
> 

These commands need to be run manually from the test command prompt to set the 
various rxtx modes, rates and directions.
They can be used to verify the PMD perf test with various sets of the above values.

So this can be removed from autotest.

> > +"Name":"User delay",
> > +"Command": "user_delay_us",
> > +"Func":default_autotest,
> > +"Report":  None,
> > +},
> 
> This doesn't look like autotests to me. I have no idea what it is for, but 
> either
> it needs a special function, or it needs to be taken out.
> 
This is an autotest, but the name doesn't contain "autotest". So I will 
retain this.

Thanks,
Reshma


[dpdk-dev] [PATCH v6 1/2] librte_lpm: Improve performance of the delete and add functions

2018-07-17 Thread Alex Kiselev
librte_lpm: Improve lpm6 performance

Rework the lpm6 rule subsystem, replacing the
current O(n) rules algorithm with
hash tables, which allow dealing with
large (50k) rule sets.

Signed-off-by: Alex Kiselev 
---
 lib/Makefile   |   2 +-
 lib/librte_lpm/Makefile|   2 +-
 lib/librte_lpm/meson.build |   1 +
 lib/librte_lpm/rte_lpm6.c  | 368 +
 4 files changed, 210 insertions(+), 163 deletions(-)

diff --git a/lib/Makefile b/lib/Makefile
index d82462ba2..070104657 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -47,7 +47,7 @@ DEPDIRS-librte_hash := librte_eal librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_EFD) += librte_efd
 DEPDIRS-librte_efd := librte_eal librte_ring librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
-DEPDIRS-librte_lpm := librte_eal
+DEPDIRS-librte_lpm := librte_eal librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_ACL) += librte_acl
 DEPDIRS-librte_acl := librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_MEMBER) += librte_member
diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index 482bd72e9..a7946a1c5 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -8,7 +8,7 @@ LIB = librte_lpm.a
 
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
-LDLIBS += -lrte_eal
+LDLIBS += -lrte_eal -lrte_hash
 
 EXPORT_MAP := rte_lpm_version.map
 
diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build
index 067849427..a5176d8ae 100644
--- a/lib/librte_lpm/meson.build
+++ b/lib/librte_lpm/meson.build
@@ -7,3 +7,4 @@ headers = files('rte_lpm.h', 'rte_lpm6.h')
 # since header files have different names, we can install all vector headers
 # without worrying about which architecture we actually need
 headers += files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')
+deps += ['hash']
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 149677eb1..d86e878bc 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -21,6 +21,9 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
 
 #include "rte_lpm6.h"
 
@@ -37,6 +40,8 @@
 #define BYTE_SIZE 8
 #define BYTES2_SIZE  16
 
+#define RULE_HASH_TABLE_EXTRA_SPACE  64
+
 #define lpm6_tbl8_gindex next_hop
 
 /** Flags for setting an entry as valid/invalid. */
@@ -70,6 +75,12 @@ struct rte_lpm6_rule {
uint8_t depth; /**< Rule depth. */
 };
 
+/** Rules tbl entry key. */
+struct rte_lpm6_rule_key {
+   uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
+   uint8_t depth; /**< Rule depth. */
+};
+
 /** LPM6 structure. */
 struct rte_lpm6 {
/* LPM metadata. */
@@ -80,7 +91,7 @@ struct rte_lpm6 {
uint32_t next_tbl8;  /**< Next tbl8 to be used. */
 
/* LPM Tables. */
-   struct rte_lpm6_rule *rules_tbl; /**< LPM rules. */
+   struct rte_hash *rules_tbl; /**< LPM rules. */
struct rte_lpm6_tbl_entry tbl24[RTE_LPM6_TBL24_NUM_ENTRIES]
__rte_cache_aligned; /**< LPM tbl24 table. */
struct rte_lpm6_tbl_entry tbl8[0]
@@ -93,22 +104,69 @@ struct rte_lpm6 {
  * and set the rest to 0.
  */
 static inline void
-mask_ip(uint8_t *ip, uint8_t depth)
+ip6_mask_addr(uint8_t *ip, uint8_t depth)
 {
-int16_t part_depth, mask;
-int i;
+   int16_t part_depth, mask;
+   int i;
 
-   part_depth = depth;
+   part_depth = depth;
 
-   for (i = 0; i < RTE_LPM6_IPV6_ADDR_SIZE; i++) {
-   if (part_depth < BYTE_SIZE && part_depth >= 0) {
-   mask = (uint16_t)(~(UINT8_MAX >> part_depth));
-   ip[i] = (uint8_t)(ip[i] & mask);
-   } else if (part_depth < 0) {
-   ip[i] = 0;
-   }
-   part_depth -= BYTE_SIZE;
-   }
+   for (i = 0; i < RTE_LPM6_IPV6_ADDR_SIZE; i++) {
+   if (part_depth < BYTE_SIZE && part_depth >= 0) {
+   mask = (uint16_t)(~(UINT8_MAX >> part_depth));
+   ip[i] = (uint8_t)(ip[i] & mask);
+   } else if (part_depth < 0)
+   ip[i] = 0;
+
+   part_depth -= BYTE_SIZE;
+   }
+}
+
+/* copy ipv6 address */
+static inline void
+ip6_copy_addr(uint8_t *dst, const uint8_t *src)
+{
+   rte_memcpy(dst, src, RTE_LPM6_IPV6_ADDR_SIZE);
+}
+
+/*
+ * LPM6 rule hash function
+ *
+ * It's used as a hash function for the rte_hash
+ * containing rules
+ */
+static inline uint32_t
+rule_hash(const void *data, __rte_unused uint32_t data_len,
+ uint32_t init_val)
+{
+   return rte_jhash(data, sizeof(struct rte_lpm6_rule_key), init_val);
+}
+
+/*
+ * Init a rule key.
+ *   note that ip must be already masked
+ */
+static inline void
+rule_key_init(struct rte_lpm6_rule_key *key, uint8_t *ip, uint8_t depth)
+{
+   ip6_copy_addr(key->ip, ip);

[dpdk-dev] [PATCH v6 2/2] librte_lpm: Improve performance of the delete and add functions

2018-07-17 Thread Alex Kiselev
librte_lpm: Improve lpm6 performance

There is no need to rebuild the whole LPM tree
when a rule is deleted. This patch addresses that
by introducing a new delete operation.

Signed-off-by: Alex Kiselev 
---
 lib/librte_lpm/rte_lpm6.c | 712 ++
 1 file changed, 597 insertions(+), 115 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index d86e878bc..6f1f94e23 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -41,6 +41,7 @@
 #define BYTES2_SIZE  16
 
 #define RULE_HASH_TABLE_EXTRA_SPACE  64
#define TBL24_IND  UINT32_MAX
 
 #define lpm6_tbl8_gindex next_hop
 
@@ -81,6 +82,17 @@ struct rte_lpm6_rule_key {
uint8_t depth; /**< Rule depth. */
 };
 
+/* Header of tbl8 */
+struct rte_lpm_tbl8_hdr {
+   uint32_t owner_tbl_ind; /**< owner table: TBL24_IND if owner is tbl24,
+   * otherwise index of tbl8
+   */
+   uint32_t owner_entry_ind; /**< index of the owner table entry where
+   * pointer to the tbl8 is stored
+   */
+   uint32_t ref_cnt; /**< table reference counter */
+};
+
 /** LPM6 structure. */
 struct rte_lpm6 {
/* LPM metadata. */
@@ -88,12 +100,17 @@ struct rte_lpm6 {
uint32_t max_rules;  /**< Max number of rules. */
uint32_t used_rules; /**< Used rules so far. */
uint32_t number_tbl8s;   /**< Number of tbl8s to allocate. */
-   uint32_t next_tbl8;  /**< Next tbl8 to be used. */
 
/* LPM Tables. */
struct rte_hash *rules_tbl; /**< LPM rules. */
struct rte_lpm6_tbl_entry tbl24[RTE_LPM6_TBL24_NUM_ENTRIES]
__rte_cache_aligned; /**< LPM tbl24 table. */
+
+   uint32_t *tbl8_pool; /**< pool of indexes of free tbl8s */
+   uint32_t tbl8_pool_pos; /**< current position in the tbl8 pool */
+
+   struct rte_lpm_tbl8_hdr *tbl8_hdrs; /* array of tbl8 headers */
+
struct rte_lpm6_tbl_entry tbl8[0]
__rte_cache_aligned; /**< LPM tbl8 table. */
 };
@@ -142,6 +159,59 @@ rule_hash(const void *data, __rte_unused uint32_t data_len,
return rte_jhash(data, sizeof(struct rte_lpm6_rule_key), init_val);
 }
 
+/*
+ * Init pool of free tbl8 indexes
+ */
+static void
+tbl8_pool_init(struct rte_lpm6 *lpm)
+{
+   uint32_t i;
+
+   /* put entire range of indexes to the tbl8 pool */
+   for (i = 0; i < lpm->number_tbl8s; i++)
+   lpm->tbl8_pool[i] = i;
+
+   lpm->tbl8_pool_pos = 0;
+}
+
+/*
+ * Get an index of a free tbl8 from the pool
+ */
+static inline uint32_t
+tbl8_get(struct rte_lpm6 *lpm, uint32_t *tbl8_ind)
+{
+   if (lpm->tbl8_pool_pos == lpm->number_tbl8s)
+   /* no more free tbl8 */
+   return -ENOSPC;
+
+   /* next index */
+   *tbl8_ind = lpm->tbl8_pool[lpm->tbl8_pool_pos++];
+   return 0;
+}
+
+/*
+ * Put an index of a free tbl8 back to the pool
+ */
+static inline uint32_t
+tbl8_put(struct rte_lpm6 *lpm, uint32_t tbl8_ind)
+{
+   if (lpm->tbl8_pool_pos == 0)
+   /* pool is full */
+   return -ENOSPC;
+
+   lpm->tbl8_pool[--lpm->tbl8_pool_pos] = tbl8_ind;
+   return 0;
+}
+
+/*
+ * Returns number of tbl8s available in the pool
+ */
+static inline uint32_t
+tbl8_available(struct rte_lpm6 *lpm)
+{
+   return lpm->number_tbl8s - lpm->tbl8_pool_pos;
+}
+
 /*
  * Init a rule key.
  *   note that ip must be already masked
@@ -182,6 +252,8 @@ rte_lpm6_create(const char *name, int socket_id,
uint64_t mem_size;
struct rte_lpm6_list *lpm_list;
struct rte_hash *rules_tbl = NULL;
+   uint32_t *tbl8_pool = NULL;
+   struct rte_lpm_tbl8_hdr *tbl8_hdrs = NULL;
 
lpm_list = RTE_TAILQ_CAST(rte_lpm6_tailq.head, rte_lpm6_list);
 
@@ -216,6 +288,28 @@ rte_lpm6_create(const char *name, int socket_id,
goto fail_wo_unlock;
}
 
+   /* allocate tbl8 indexes pool */
+   tbl8_pool = rte_malloc(NULL,
+   sizeof(uint32_t) * config->number_tbl8s,
+   RTE_CACHE_LINE_SIZE);
+   if (tbl8_pool == NULL) {
+   RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)",
+ rte_strerror(rte_errno), rte_errno);
+   rte_errno = ENOMEM;
+   goto fail_wo_unlock;
+   }
+
+   /* allocate tbl8 headers */
+   tbl8_hdrs = rte_malloc(NULL,
+   sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s,
+   RTE_CACHE_LINE_SIZE);
+   if (tbl8_hdrs == NULL) {
+   RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)",
+ rte_strerror(rte_errno), rte_errno);
+   rte_err

Re: [dpdk-dev] [PATCH] memory: fix alignment in eal_get_virtual_area()

2018-07-17 Thread Xu, Qian Q


> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Burakov, Anatoly
> Sent: Monday, July 16, 2018 10:01 PM
> To: Stojaczyk, DariuszX ; dev@dpdk.org
> Cc: sta...@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] memory: fix alignment in 
> eal_get_virtual_area()
> 
> On 16-Jul-18 2:29 PM, Stojaczyk, DariuszX wrote:
> >
> >> -Original Message-
> >> From: Burakov, Anatoly
> >> Sent: Monday, July 16, 2018 2:58 PM
> >> To: Stojaczyk, DariuszX ; dev@dpdk.org
> >> Cc: sta...@dpdk.org
> >> Subject: Re: [PATCH] memory: fix alignment in eal_get_virtual_area()
> >>
> >> On 13-Jun-18 8:08 PM, Dariusz Stojaczyk wrote:
> >>> Although the alignment mechanism works as intended, the `no_align`
> >>> bool flag was set incorrectly. We were aligning buffers that didn't
> >>> need extra alignment, and weren't aligning ones that really needed
> >>> it.
> >>>
> >>> Fixes: b7cc54187ea4 ("mem: move virtual area function in common
> >>> directory")
> >>> Cc: anatoly.bura...@intel.com
> >>> Cc: sta...@dpdk.org
> >>>
> >>> Signed-off-by: Dariusz Stojaczyk 
> >>> ---
> >>>lib/librte_eal/common/eal_common_memory.c | 2 +-
> >>>1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/lib/librte_eal/common/eal_common_memory.c
> >> b/lib/librte_eal/common/eal_common_memory.c
> >>> index 4f0688f..a7c89f0 100644
> >>> --- a/lib/librte_eal/common/eal_common_memory.c
> >>> +++ b/lib/librte_eal/common/eal_common_memory.c
> >>> @@ -70,7 +70,7 @@ eal_get_virtual_area(void *requested_addr, size_t
> *size,
> >>>* system page size is the same as requested page size.
> >>>*/
> >>>   no_align = (requested_addr != NULL &&
> >>> - ((uintptr_t)requested_addr & (page_sz - 1)) == 0) ||
> >>> + ((uintptr_t)requested_addr & (page_sz - 1))) ||
> >>>   page_sz == system_page_sz;
> >>>
> >>>   do {
> >>>
> >>
> >> This patch is wrong - no alignment should be performed if address is
> >> already alighed, e.g. if requested_addr & (page_sz - 1) == 0. The
> >> original code was correct.
> >
> > If we provide an aligned address with the ADDR_IS_HINT flag and the OS decides
> > not to use it, we end up with a potentially unaligned address that needs to
> > be manually aligned, and that's what this patch does. If the requested
> > address wasn't aligned to the provided page_sz, why would we bother aligning
> > it manually?
> 
> no_align is a flag that indicates whether we should or shouldn't align the
> resulting end address - it is not meant to align requested address.
> 
> If requested_addr was NULL, no_align will be set to "false" (we don't know 
> what
> we get, so we must reserve additional space for alignment purposes).
> 
> However, it will be set to "true" if page size is equal to system size (the 
> OS will
> return pointer that is already aligned to system page size, so we don't need 
> to
> align the result and thus don't need to reserve additional space for 
> alignment).
> 
> If requested address wasn't null, again we don't need alignment if system page
> size is equal to requested page size, as any resulting address will be already
> page-aligned (hence no_align set to "true").
> 
> If requested address wasn't already page-aligned and page size is not equal to
> system page size, then we set "no_align" to false, because we will need to 
> align
> the resulting address.
> 
> The crucial part to understand is that the logic here is inverted - "if 
> requested
> address isn't NULL, and if the requested address is already aligned (i.e. 
> (addr &
> pgsz-1) == 0), then we *don't* need to align the address". So, if the 
> requested
> address is *not* aligned, "no_align" must be set to false - because we *will*
> need to align the address.
> 
> As an added bonus, we have regression testing identifying this patch as cause 
> for
> numerous regressions :)

Yes, we have hit many multi-process related issues (hangs, blocks) due to this 
patch, so RC1's quality is seriously impacted.
What is the current plan for the fix? It's a little urgent. Thanks. 

> 
> >
> > D.
> >
> >>
> >> Thomas, could you please revert this patch?
> >>
> >> --
> >> Thanks,
> >> Anatoly
> 
> 
> --
> Thanks,
> Anatoly


Re: [dpdk-dev] [PATCH v3 8/9] autotest: update autotest test case list

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 10:18 AM, Pattan, Reshma wrote:

Hi,


-Original Message-
From: Burakov, Anatoly
Sent: Monday, July 16, 2018 4:16 PM
To: Pattan, Reshma ; tho...@monjalon.net;
dev@dpdk.org
Cc: Parthasarathy, JananeeX M ;
sta...@dpdk.org
Subject: Re: [PATCH v3 8/9] autotest: update autotest test case list



+{
+"Name":"Set rxtx mode",
+"Command": "set_rxtx_mode",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Set rxtx anchor",
+"Command": "set_rxtx_anchor",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Set rxtx sc",
+"Command": "set_rxtx_sc",
+"Func":default_autotest,
+"Report":  None,
+},


The above three tests don't look like autotests to me. I have no idea what
they are for, but either they need a special function, or they need to be taken
out.



These commands need to be run manually from the test command prompt to set
various rxtx modes, rxtx rates and rxtx directions.
These can be used to verify the PMD perf test with various combinations of
the above values.

So this can be removed from autotest.


We do have PMD perf tests in the script - do they call these functions? 
If they are required for PMD autotests, maybe PMD autotests deserve a 
special test function calling these commands before running the tests?


(if they also work without these commands, then we can perhaps postpone 
this to 18.11)





+"Name":"User delay",
+"Command": "user_delay_us",
+"Func":default_autotest,
+"Report":  None,
+},


This doesn't look like an autotest to me. I have no idea what it is for, but
either it needs a special function, or it needs to be taken out.


This is an autotest, but the name doesn't have 'autotest' in it. So I will
retain this.


OK.

--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH] memory: fix alignment in eal_get_virtual_area()

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 10:22 AM, Xu, Qian Q wrote:




-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Burakov, Anatoly
Sent: Monday, July 16, 2018 10:01 PM
To: Stojaczyk, DariuszX ; dev@dpdk.org
Cc: sta...@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] memory: fix alignment in eal_get_virtual_area()

On 16-Jul-18 2:29 PM, Stojaczyk, DariuszX wrote:



-Original Message-
From: Burakov, Anatoly
Sent: Monday, July 16, 2018 2:58 PM
To: Stojaczyk, DariuszX ; dev@dpdk.org
Cc: sta...@dpdk.org
Subject: Re: [PATCH] memory: fix alignment in eal_get_virtual_area()

On 13-Jun-18 8:08 PM, Dariusz Stojaczyk wrote:

Although the alignment mechanism works as intended, the `no_align`
bool flag was set incorrectly. We were aligning buffers that didn't
need extra alignment, and weren't aligning ones that really needed
it.

Fixes: b7cc54187ea4 ("mem: move virtual area function in common
directory")
Cc: anatoly.bura...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Dariusz Stojaczyk 
---
lib/librte_eal/common/eal_common_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/eal_common_memory.c

b/lib/librte_eal/common/eal_common_memory.c

index 4f0688f..a7c89f0 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -70,7 +70,7 @@ eal_get_virtual_area(void *requested_addr, size_t

*size,

 * system page size is the same as requested page size.
 */
no_align = (requested_addr != NULL &&
-   ((uintptr_t)requested_addr & (page_sz - 1)) == 0) ||
+   ((uintptr_t)requested_addr & (page_sz - 1))) ||
page_sz == system_page_sz;

do {



This patch is wrong - no alignment should be performed if address is
already aligned, e.g. if requested_addr & (page_sz - 1) == 0. The
original code was correct.


If we provide an aligned address with ADDR_IS_HINT flag and OS decides not

to use it, we end up with potentially unaligned address that needs to be
manually aligned and that's what this patch does. If the requested address
wasn't aligned to the provided page_sz, why would we bother aligning it
manually?

no_align is a flag that indicates whether we should or shouldn't align the
resulting end address - it is not meant to align requested address.

If requested_addr was NULL, no_align will be set to "false" (we don't know what
we get, so we must reserve additional space for alignment purposes).

However, it will be set to "true" if page size is equal to the system page
size (the OS will return a pointer that is already aligned to the system
page size, so we don't need to align the result and thus don't need to
reserve additional space for alignment).

If requested address wasn't null, again we don't need alignment if system page
size is equal to requested page size, as any resulting address will be already
page-aligned (hence no_align set to "true").

If requested address wasn't already page-aligned and page size is not equal to
system page size, then we set "no_align" to false, because we will need to align
the resulting address.

The crucial part to understand is that the logic here is inverted - "if
requested address isn't NULL, and if the requested address is already
aligned (i.e. (addr & pgsz-1) == 0), then we *don't* need to align the
address". So, if the requested address is *not* aligned, "no_align" must
be set to false - because we *will* need to align the address.

As an added bonus, we have regression testing identifying this patch as
the cause of numerous regressions :)


Yes, we have met many multi-process related issues (hangs, blocking) due to
these patches, so RC1's quality is seriously impacted by this patch.
What is the current fix plan? It's a little urgent. Thanks.


Hi Qian,

I've sent a patch to fix this:

http://patches.dpdk.org/project/dpdk/list/?series=607

It was already tested by Lei, but you're welcome to pile on :)

--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH] test: fix incorrect return types

2018-07-17 Thread Burakov, Anatoly

On 16-Jul-18 5:58 PM, Reshma Pattan wrote:

UTs should return either TEST_SUCCESS or TEST_FAILED only.
They should not return 0, -1 and any other value.

Fixes: 9c9befea4f ("test: add flow classify unit tests")
CC: jasvinder.si...@intel.com
CC: bernard.iremon...@intel.com
CC: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
---


Perhaps it should be highlighted that along with making them return 
TEST_SUCCESS/FAILURE, you're also fixing the -ENOMEM return in one of 
the code paths.


Otherwise,

Reviewed-by: Anatoly Burakov 

--
Thanks,
Anatoly


Re: [dpdk-dev] [Bug 72] Unable to install dpdk on arm64

2018-07-17 Thread Shreyansh Jain

On Tuesday 17 July 2018 01:26 PM, bugzi...@dpdk.org wrote:

https://bugs.dpdk.org/show_bug.cgi?id=72

 Bug ID: 72
Summary: Unable to install dpdk on arm64
Product: DPDK
Version: unspecified
   Hardware: ARM
 OS: Linux
 Status: CONFIRMED
   Severity: normal
   Priority: Normal
  Component: core
   Assignee: dev@dpdk.org
   Reporter: stanislav.chle...@gmail.com
   Target Milestone: ---

###
### C O M P I L I N G ##
###
stanislav@contivvpp:~/dpdk$ make install T=arm64_thunderx_linuxapp_gcc
make[3]: *** No rule to make target
'/home/stanislav/dpdk/config/defconfig_arm64_thunderx_linuxapp_gcc', needed by
'/home/stanislav/dpdk/arm64_thunderx_linuxapp_gcc/.config'.  Stop.
/home/stanislav/dpdk/mk/rte.sdkroot.mk:65: recipe for target 'config' failed
make[2]: *** [config] Error 2
/home/stanislav/dpdk/mk/rte.sdkinstall.mk:57: recipe for target 'pre_install'
failed
make[1]: *** [pre_install] Error 2
/home/stanislav/dpdk/mk/rte.sdkroot.mk:79: recipe for target 'install' failed
make: *** [install] Error 2
stanislav@contivvpp:~/dpdk$


stanislav@contivvpp:~/dpdk$ make install T=arm64_native_linuxapp_gcc
make[3]: *** No rule to make target
'/home/stanislav/dpdk/config/defconfig_arm64_native_linuxapp_gcc', needed by
'/home/stanislav/dpdk/arm64_native_linuxapp_gcc/.config'.  Stop.
/home/stanislav/dpdk/mk/rte.sdkroot.mk:65: recipe for target 'config' failed
make[2]: *** [config] Error 2
/home/stanislav/dpdk/mk/rte.sdkinstall.mk:57: recipe for target 'pre_install'
failed
make[1]: *** [pre_install] Error 2
/home/stanislav/dpdk/mk/rte.sdkroot.mk:79: recipe for target 'install' failed
make: *** [install] Error 2
stanislav@contivvpp:~/dpdk$

###
 R E P O S I T O R Y ##
###

stanislav@contivvpp:~/dpdk$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
stanislav@contivvpp:~/dpdk$ git log
commit c27dbc300eee78c2eb33e84181617fdd7cbaaae4
Author: Thomas Monjalon 
Date:   Mon Jul 16 01:17:18 2018 +0200

 version: 18.08-rc1

 Signed-off-by: Thomas Monjalon 

###

See more at:
https://gist.github.com/stanislav-chlebec/b622b12ec5b4a976a74e6de20e8a6fc1




Though I commented directly in Bugzilla as well, I will repeat it here
because I noticed that emails to dev@ are automatically added to the bug
history (but the reverse is not true).


---
You have made a tiny mistake: '_' in place of the correct '-':

from:
 make install T=arm64_thunderx_linuxapp_gcc
to:
 make install T=arm64-thunderx-linuxapp-gcc

Please check, the files available in config are:
defconfig_arm64-armv8a-linuxapp-clang
defconfig_arm64-armv8a-linuxapp-gcc
defconfig_arm64-dpaa2-linuxapp-gcc
defconfig_arm64-dpaa-linuxapp-gcc
defconfig_arm64-stingray-linuxapp-gcc
defconfig_arm64-thunderx-linuxapp-gcc
defconfig_arm64-xgene1-linuxapp-gcc

And T=

This works fine for me:

$ make T=arm64-thunderx-linuxapp-gcc install
Configuration done using arm64-thunderx-linuxapp-gcc
== Build lib
== Build lib/librte_compat
...
---

Now, as per the bug process, who should set this to Resolved/Invalid?


Re: [dpdk-dev] [PATCH v3 8/9] autotest: update autotest test case list

2018-07-17 Thread Pattan, Reshma
Hi,

> -Original Message-
> From: Burakov, Anatoly
> Sent: Tuesday, July 17, 2018 10:23 AM
> To: Pattan, Reshma ; tho...@monjalon.net;
> dev@dpdk.org
> Cc: Parthasarathy, JananeeX M ;
> sta...@dpdk.org
> Subject: Re: [PATCH v3 8/9] autotest: update autotest test case list
> 
> On 17-Jul-18 10:18 AM, Pattan, Reshma wrote:
> > Hi,
> >
> >> -Original Message-
> >> From: Burakov, Anatoly
> >> Sent: Monday, July 16, 2018 4:16 PM
> >> To: Pattan, Reshma ; tho...@monjalon.net;
> >> dev@dpdk.org
> >> Cc: Parthasarathy, JananeeX M ;
> >> sta...@dpdk.org
> >> Subject: Re: [PATCH v3 8/9] autotest: update autotest test case list
> >>
> >>
> >>> +{
> >>> +"Name":"Set rxtx mode",
> >>> +"Command": "set_rxtx_mode",
> >>> +"Func":default_autotest,
> >>> +"Report":  None,
> >>> +},
> >>> +{
> >>> +"Name":"Set rxtx anchor",
> >>> +"Command": "set_rxtx_anchor",
> >>> +"Func":default_autotest,
> >>> +"Report":  None,
> >>> +},
> >>> +{
> >>> +"Name":"Set rxtx sc",
> >>> +"Command": "set_rxtx_sc",
> >>> +"Func":default_autotest,
> >>> +"Report":  None,
> >>> +},
> >>
> >> The above three tests don't look like autotests to me. I have no idea
> >> what they are for, but either they need a special function, or they
> >> need to be taken out.
> >>
> >
> > These commands need to be run manually from the test command prompt to
> > set various rxtx modes, rxtx rates and rxtx directions.
> > These can be used to verify the PMD perf test with various combinations
> > of the above values.
> >
> > So this can be removed from autotest.
> 
> We do have PMD perf tests in the script - do they call these functions?
> If they are required for PMD autotests, maybe PMD autotests deserve a
> special test function calling these commands before running the tests?
> 
> (if they also work without these commands, then we can perhaps postpone
> this to 18.11)
> 

I ran the PMD perf test manually and it passes without having to use the
above set_rxtx commands.

Thanks,
Reshma


Re: [dpdk-dev] [PATCH] mem: fix alignment of requested virtual areas

2018-07-17 Thread Stojaczyk, DariuszX



> -Original Message-
> From: Burakov, Anatoly
> Sent: Monday, July 16, 2018 4:57 PM
> To: dev@dpdk.org
> Cc: tho...@monjalon.net; Yao, Lei A ; Stojaczyk, DariuszX
> ; sta...@dpdk.org
> Subject: [PATCH] mem: fix alignment of requested virtual areas
> 
> The original code did not align any addresses that were requested as
> page-aligned, but were different because addr_is_hint was set.
> 
> Below fix by Dariusz has introduced an issue where all unaligned addresses
> were left as unaligned.
> 
> This patch is a partial revert of
> commit 7fa7216ed48d ("mem: fix alignment of requested virtual areas")
> 
> and implements a proper fix for this issue, by asking for alignment in all
> but the following two cases:
> 
> 1) page size is equal to system page size, or
> 2) we got an aligned requested address, and will not accept a different one
> 
> This ensures that alignment is performed in all cases, except for those we
> can guarantee that the address will not need alignment.
> 
> Fixes: b7cc54187ea4 ("mem: move virtual area function in common directory")
> Fixes: 7fa7216ed48d ("mem: fix alignment of requested virtual areas")
> Cc: dariuszx.stojac...@intel.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Anatoly Burakov 

Acked-by: Dariusz Stojaczyk 

> ---
>  lib/librte_eal/common/eal_common_memory.c | 15 +--
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/librte_eal/common/eal_common_memory.c
> b/lib/librte_eal/common/eal_common_memory.c
> index 659cc08f6..fbfb1b055 100644
> --- a/lib/librte_eal/common/eal_common_memory.c
> +++ b/lib/librte_eal/common/eal_common_memory.c
> @@ -66,14 +66,17 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
>   addr_is_hint = true;
>   }
> 
> - /* if requested address is not aligned by page size, or if requested
> -  * address is NULL, add page size to requested length as we may get an
> -  * address that's aligned by system page size, which can be smaller than
> -  * our requested page size. additionally, we shouldn't try to align if
> -  * system page size is the same as requested page size.
> + /* we don't need alignment of resulting pointer in the following cases:
> +  *
> +  * 1. page size is equal to system size
> +  * 2. we have a requested address, and it is page-aligned, and we will
> +  *be discarding the address if we get a different one.
> +  *
> +  * for all other cases, alignment is potentially necessary.
>*/
>   no_align = (requested_addr != NULL &&
> - ((uintptr_t)requested_addr & (page_sz - 1))) ||
> + requested_addr == RTE_PTR_ALIGN(requested_addr, page_sz) &&
> + !addr_is_hint) ||
>   page_sz == system_page_sz;
> 
>   do {
> --
> 2.17.1


Re: [dpdk-dev] [PATCH] memory: fix alignment in eal_get_virtual_area()

2018-07-17 Thread Stojaczyk, DariuszX


> -Original Message-
> From: Burakov, Anatoly
> Sent: Monday, July 16, 2018 4:17 PM
> To: Stojaczyk, DariuszX ; dev@dpdk.org
> Cc: sta...@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] memory: fix alignment in 
> eal_get_virtual_area()
> 
> On 16-Jul-18 3:01 PM, Burakov, Anatoly wrote:
> > On 16-Jul-18 2:29 PM, Stojaczyk, DariuszX wrote:
> >>
> >>> -Original Message-
> >>> From: Burakov, Anatoly
> >>> Sent: Monday, July 16, 2018 2:58 PM
> >>> To: Stojaczyk, DariuszX ; dev@dpdk.org
> >>> Cc: sta...@dpdk.org
> >>> Subject: Re: [PATCH] memory: fix alignment in eal_get_virtual_area()
> >>>
> >>> On 13-Jun-18 8:08 PM, Dariusz Stojaczyk wrote:
>  Although the alignment mechanism works as intended, the
>  `no_align` bool flag was set incorrectly. We were aligning
>  buffers that didn't need extra alignment, and weren't
>  aligning ones that really needed it.
> 
>  Fixes: b7cc54187ea4 ("mem: move virtual area function in common
>  directory")
>  Cc: anatoly.bura...@intel.com
>  Cc: sta...@dpdk.org
> 
>  Signed-off-by: Dariusz Stojaczyk 
>  ---
>     lib/librte_eal/common/eal_common_memory.c | 2 +-
>     1 file changed, 1 insertion(+), 1 deletion(-)
> 
>  diff --git a/lib/librte_eal/common/eal_common_memory.c
> >>> b/lib/librte_eal/common/eal_common_memory.c
>  index 4f0688f..a7c89f0 100644
>  --- a/lib/librte_eal/common/eal_common_memory.c
>  +++ b/lib/librte_eal/common/eal_common_memory.c
>  @@ -70,7 +70,7 @@ eal_get_virtual_area(void *requested_addr, size_t
>  *size,
>      * system page size is the same as requested page size.
>      */
>     no_align = (requested_addr != NULL &&
>  -    ((uintptr_t)requested_addr & (page_sz - 1)) == 0) ||
>  +    ((uintptr_t)requested_addr & (page_sz - 1))) ||
>     page_sz == system_page_sz;
> 
>     do {
> 
> >>>
> >>> This patch is wrong - no alignment should be performed if address is
> >>> already aligned, e.g. if requested_addr & (page_sz - 1) == 0. The
> >>> original code was correct.
> >>
> >> If we provide an aligned address with ADDR_IS_HINT flag and OS decides
> >> not to use it, we end up with potentially unaligned address that needs
> >> to be manually aligned and that's what this patch does. If the
> >> requested address wasn't aligned to the provided page_sz, why would we
> >> bother aligning it manually?
> >
> > no_align is a flag that indicates whether we should or shouldn't align
> > the resulting end address - it is not meant to align requested address.
> >
> > If requested_addr was NULL, no_align will be set to "false" (we don't
> > know what we get, so we must reserve additional space for alignment
> > purposes).
> >
> > However, it will be set to "true" if page size is equal to system size
> > (the OS will return pointer that is already aligned to system page size,
> > so we don't need to align the result and thus don't need to reserve
> > additional space for alignment).
> >
> > If requested address wasn't null, again we don't need alignment if
> > system page size is equal to requested page size, as any resulting
> > address will be already page-aligned (hence no_align set to "true").
> >
> > If requested address wasn't already page-aligned and page size is not
> > equal to system page size, then we set "no_align" to false, because we
> > will need to align the resulting address.

I haven't seen such a use case in the code and I deliberately didn't handle
it. I believe that was my problem.

> >
> > The crucial part to understand is that the logic here is inverted - "if
> > requested address isn't NULL, and if the requested address is already
> > aligned (i.e. (addr & pgsz-1) == 0), then we *don't* need to align the
> > address". So, if the requested address is *not* aligned, "no_align" must
> > be set to false - because we *will* need to align the address.
> >
> > As an added bonus, we have regression testing identifying this patch as
> > cause for numerous regressions :)
> 
> On reflection, I think i understand what you're getting at now, and that
> a different fix is required :)
> 
> The issue at hand isn't whether the requested address is or isn't
> aligned - it's that we need to make sure we always get aligned address
> as a result. You have highlighted a case where we might ask for a
> page-aligned address, but end up getting a different one, but since
> we've set no_align to "true", we won't align the resulting "wrong" address.

That's correct.

> 
> So it seems to me that the issue is, is there a possibility that we get
> an unaligned address? The answer lies in a different flag -
> addr_is_hint. That will tell us if we will discard the resulting address
> if we don't get what we want.
> 
> So really, the only cases we should *not* align the resulting address are:
> 
> 1) if page size is equal to that of system page size, or
> 2) if requested addr isn't NULL, *and* it's page aligned, 

Re: [dpdk-dev] [PATCH] examples/flow_filtering: add rte_fdir_conf initialization

2018-07-17 Thread Ferruh Yigit
On 7/17/2018 6:15 AM, Ori Kam wrote:
> Sorry for the late response,
> 
>> -Original Message-
>> From: Xu, Rosen [mailto:rosen...@intel.com]
>> Sent: Thursday, July 12, 2018 9:23 AM
>> To: Ori Kam ; dev@dpdk.org
>> Cc: Yigit, Ferruh ; sta...@dpdk.org; Gilmore, Walter
>> E 
>> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add rte_fdir_conf
>> initialization
>>
>> Hi Ori,
>>
>> Pls see my reply.
>>
>> Hi Walter and Ferruh,
>>
>> I need your voice :)
>>
>>> -Original Message-
>>> From: Ori Kam [mailto:or...@mellanox.com]
>>> Sent: Thursday, July 12, 2018 13:58
>>> To: Xu, Rosen ; dev@dpdk.org
>>> Cc: Yigit, Ferruh ; sta...@dpdk.org
>>> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
>> rte_fdir_conf
>>> initialization
>>>
>>> Hi,
>>>
>>> PSB
>>>
 -Original Message-
 From: Xu, Rosen [mailto:rosen...@intel.com]
 Sent: Thursday, July 12, 2018 8:27 AM
 To: Ori Kam ; dev@dpdk.org
 Cc: Yigit, Ferruh ; sta...@dpdk.org
 Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
 rte_fdir_conf initialization

 Hi Ori,

 examples/flow_filtering sample app fails on i40e [1] because i40e
 requires explicit FDIR configuration.

 But rte_flow in and hardware independent ways of describing
 flow-action, it shouldn't require specific config options for specific
>>> hardware.

>>>
>>> I don't understand why using rte flow require the use of fdir.
>>> it doesn't make sense to me, that  new API will need old one.
>>
>> It's a good question; I have the same question about the Mellanox NIC
>> driver mlx5_flow.c. In that file many flow functions call fdir. :)
> 
> The only functions that call fdir are the fdir functions, and you can
> see that inside the create function we convert the fdir into rte_flow.
> 
>>
 Is there any chance driver select the FDIR config automatically based
 on rte_flow rule, unless explicitly a FDIR config set by user?
>>>
>>> I don't know how the i40e driver is implemented but I know that Mellanox
>>> convert the other way around, if fdir is given it is converted to rte_flow.
>>
>> Firstly, rte_fdir_conf is part of rte_eth_conf definition.
>>  struct rte_eth_conf {
>>  ..
>>  struct rte_fdir_conf fdir_conf; /**< FDIR configuration. */
>>  ..
>>  };
>> Secondly, default value of rte_eth_conf.fdir_conf.mode is
>> RTE_FDIR_MODE_NONE, which means Disable FDIR support.
>> Thirdly, flow_filtering should align with test-pmd, in test-pmd all 
>> fdir_conf is
>> initialized.
>>
> 
> This sounds correct to me: we don't want to enable fdir.
> Why should the example app for rte_flow use fdir? And why align with
> testpmd, which supports everything in all modes?

In i40e, fdir is used to implement filters; that is why rte_flow rules
require/depend on some fdir configuration.

In the long term I agree it is better if the driver doesn't require any
fdir configuration for rte_flow programming, although I'm not sure this is
completely possible; cc'ed Qi for more comments.

For the short term I am for merging this patch so that the sample app can
run on i40e too, and the fdir configuration shouldn't affect others.
Perhaps it would be good to add a comment saying why that config option is
added and that it is a temporary workaround.
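For context, the explicit initialization under discussion looks roughly like the following (a sketch modeled on testpmd's defaults; field values are illustrative and not necessarily what the patch uses):

```c
#include <rte_ethdev.h>

/* Explicitly initialize fdir_conf so that i40e's fdir-backed rte_flow
 * path is configured; values mirror testpmd's defaults (illustrative). */
static const struct rte_eth_conf port_conf = {
	.fdir_conf = {
		.mode = RTE_FDIR_MODE_PERFECT,
		.pballoc = RTE_FDIR_PBALLOC_64K,
		.status = RTE_FDIR_REPORT_STATUS,
		.drop_queue = 127,
	},
};
```

The default-zeroed `mode` is `RTE_FDIR_MODE_NONE` (fdir disabled), which is what triggers the "Check the mode in fdir_conf" failure on i40e.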

> 
> 
>>>

 [1]
 Flow can't be created 1 message: Check the mode in fdir_conf.
 EAL: Error - exiting with code: 1

> -Original Message-
> From: Ori Kam [mailto:or...@mellanox.com]
> Sent: Thursday, July 12, 2018 13:17
> To: Xu, Rosen ; dev@dpdk.org
> Cc: Yigit, Ferruh ; sta...@dpdk.org; Ori Kam
> 
> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
 rte_fdir_conf
> initialization
>
> Hi Rosen,
>
> Why do the fdir_conf must be initialized?
>
> What is the issue you are seeing?
>
> Best,
> Ori
>
>> -Original Message-
>> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Rosen Xu
>> Sent: Thursday, July 12, 2018 5:10 AM
>> To: dev@dpdk.org
>> Cc: rosen...@intel.com; ferruh.yi...@intel.com; Ori Kam
>> ; sta...@dpdk.org
>> Subject: [dpdk-dev] [PATCH] examples/flow_filtering: add
>> rte_fdir_conf
>> initialization
>>
>> Rte_fdir_conf of rte_eth_conf should be initialized before port
>> initialization.
>>
>> Fixes: 4a3ef59a10c8 ("examples/flow_filtering: add simple demo of
>>> flow
>> API")
>> Cc: sta...@dpdk.org
>>
>> Signed-off-by: Rosen Xu 
>> ---
>>  examples/flow_filtering/main.c | 6 ++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/examples/flow_filtering/main.c
>> b/examples/flow_filtering/main.c index f595034..aa03e23 100644
>> --- a/examples/flow_filtering/main.c
>> +++ b/examples/flow_filtering/main.c
>> @@ -132,6 +132,12 @@
>>  DEV_TX_OFFLOAD_SCTP_CKSUM  |
>>   

[dpdk-dev] [PATCH v4] test: add sample functions for packet forwarding

2018-07-17 Thread Jananee Parthasarathy
Add sample test functions for packet forwarding.
These can be used for unit test cases for
LatencyStats and BitrateStats libraries.

Signed-off-by: Chaitanya Babu Talluri 
Reviewed-by: Reshma Pattan 
Acked-by: Reshma Pattan 
---
v4: Updated return value as TEST_FAILED instead of -1
v3: Used same port for tx,rx and removed extra line
v2: SOCKET0 is removed and NUM_QUEUES is used accordingly
---
 test/test/Makefile|  1 +
 test/test/sample_packet_forward.c | 71 +++
 test/test/sample_packet_forward.h | 21 
 3 files changed, 93 insertions(+)
 create mode 100644 test/test/sample_packet_forward.c
 create mode 100644 test/test/sample_packet_forward.h

diff --git a/test/test/Makefile b/test/test/Makefile
index e6967bab6..8032ce53b 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -135,6 +135,7 @@ SRCS-y += test_version.c
 SRCS-y += test_func_reentrancy.c
 
 SRCS-y += test_service_cores.c
+SRCS-y += sample_packet_forward.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline.c
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline_num.c
diff --git a/test/test/sample_packet_forward.c 
b/test/test/sample_packet_forward.c
new file mode 100644
index 0..4a13d5001
--- /dev/null
+++ b/test/test/sample_packet_forward.c
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+#include "sample_packet_forward.h"
+#include "test.h"
+#include 
+
+#define NB_MBUF 512
+
+static struct rte_mempool *mp;
+uint16_t portid;
+
+/* Sample test to create virtual rings and tx,rx portid from rings */
+int
+test_ring_setup(void)
+{
+   uint16_t socket_id = rte_socket_id();
+   struct rte_ring *rxtx[NUM_RINGS];
+   rxtx[0] = rte_ring_create("R0", RING_SIZE, socket_id,
+   RING_F_SP_ENQ|RING_F_SC_DEQ);
+   if (rxtx[0] == NULL) {
+   printf("%s() line %u: rte_ring_create R0 failed",
+   __func__, __LINE__);
+   return TEST_FAILED;
+   }
+   portid = rte_eth_from_rings("net_ringa", rxtx, NUM_QUEUES, rxtx,
+   NUM_QUEUES, socket_id);
+
+   return TEST_SUCCESS;
+}
+
+/* Sample test to forward packets using virtual portids */
+int
+test_packet_forward(void)
+{
+   struct rte_mbuf *pbuf[NUM_PACKETS];
+
+   mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
+   RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+   if (mp == NULL)
+   return TEST_FAILED;
+   if (rte_pktmbuf_alloc_bulk(mp, pbuf, NUM_PACKETS) != 0)
+   printf("%s() line %u: rte_pktmbuf_alloc_bulk failed"
+   , __func__, __LINE__);
+   /* send and receive packet and check for stats update */
+   if (rte_eth_tx_burst(portid, 0, pbuf, NUM_PACKETS) !=
+   NUM_PACKETS) {
+   printf("%s() line %u: Error sending packet to"
+   " port %d\n", __func__, __LINE__,
+   portid);
+   return TEST_FAILED;
+   }
+   if (rte_eth_rx_burst(portid, 0, pbuf, NUM_PACKETS) !=
+   NUM_PACKETS) {
+   printf("%s() line %u: Error receiving packet from"
+   " port %d\n", __func__, __LINE__,
+   portid);
+   return TEST_FAILED;
+   }
+   return TEST_SUCCESS;
+}
diff --git a/test/test/sample_packet_forward.h 
b/test/test/sample_packet_forward.h
new file mode 100644
index 0..a4880316f
--- /dev/null
+++ b/test/test/sample_packet_forward.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SAMPLE_PACKET_FORWARD_H_
+#define _SAMPLE_PACKET_FORWARD_H_
+
+/* MACROS to support virtual ring creation */
+#define RING_SIZE 256
+#define NUM_RINGS 1
+#define NUM_QUEUES 1
+
+#define NUM_PACKETS 10
+
+/* Sample test to create virtual rings and tx,rx portid from rings */
+int test_ring_setup(void);
+
+/* Sample test to forward packet using virtual port id */
+int test_packet_forward(void);
+
+#endif /* _SAMPLE_PACKET_FORWARD_H_ */
-- 
2.13.6



Re: [dpdk-dev] [PATCH v4] vfio: fix workaround of BAR0 mapping

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 9:22 AM, Takeshi Yoshimura wrote:

The workaround of BAR0 mapping gives up and immediately returns an
error if it cannot map around the MSI-X table. However, recent versions
of VFIO allow MSI-X mapping (*).

I changed the code not to return immediately but to try the mapping. On
old Linux kernels, mmap just fails and returns the same error as the code
before my fix. On recent kernels, mmap succeeds, and this patch enables
running DPDK in specific environments (e.g., ppc64le with HGST NVMe).


I don't think this applies to BAR0 only - it can be any BAR. Suggested 
rewording:


Currently, VFIO will try to map around the MSI-X table in the BARs. When
the (page-aligned) size of the MSI-X table is equal to the (page-aligned)
size of the BAR, VFIO will just skip the BAR.


Recent kernel versions will allow VFIO to map the entire BAR containing 
MSI-X tables (*), so instead of trying to map around the MSI-X vector or 
skipping the BAR entirely if it's not possible, we can now try mapping 
the entire BAR first. If mapping the entire BAR doesn't succeed, fall 
back to the old behavior of mapping around MSI-X table or skipping the BAR.




(*): "vfio-pci: Allow mapping MSIX BAR",
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
commit/id=a32295c612c57990d17fb0f41e7134394b2f35f6


I think your link is wrong here - at least I can't find the commit when
I copy-paste the link into my browser.




Fixes: 90a1633b2347 ("eal/linux: allow to map BARs with MSI-X tables")

Signed-off-by: Takeshi Yoshimura 
---





+   memreg[0].offset = bar->offset;
+   memreg[0].size = bar->size;
+   do {
void *map_addr = NULL;
+   if (again) {
+   /*
+* VFIO did not let us map the MSI-X table,
+* but we can map around it.
+*/
+   uint32_t table_start = msix_table->offset;
+   uint32_t table_end = table_start + msix_table->size;
+   table_end = (table_end + ~PAGE_MASK) & PAGE_MASK;
+   table_start &= PAGE_MASK;
+
+   if (table_start == 0 && table_end >= bar->size) {
+   /* Cannot map this BAR */
+   RTE_LOG(DEBUG, EAL, "Skipping BAR%d\n",
+   bar_index);
+   bar->size = 0;
+   bar->addr = 0;
+   return 0;


You have reserved space for the BAR earlier but do not unmap it on return.

Once that is fixed,

Reviewed-by: Anatoly Burakov 

--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH v3 8/9] autotest: update autotest test case list

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 10:45 AM, Pattan, Reshma wrote:

Hi,


-Original Message-
From: Burakov, Anatoly
Sent: Tuesday, July 17, 2018 10:23 AM
To: Pattan, Reshma ; tho...@monjalon.net;
dev@dpdk.org
Cc: Parthasarathy, JananeeX M ;
sta...@dpdk.org
Subject: Re: [PATCH v3 8/9] autotest: update autotest test case list

On 17-Jul-18 10:18 AM, Pattan, Reshma wrote:

Hi,


-Original Message-
From: Burakov, Anatoly
Sent: Monday, July 16, 2018 4:16 PM
To: Pattan, Reshma ; tho...@monjalon.net;
dev@dpdk.org
Cc: Parthasarathy, JananeeX M ;
sta...@dpdk.org
Subject: Re: [PATCH v3 8/9] autotest: update autotest test case list



+{
+"Name":"Set rxtx mode",
+"Command": "set_rxtx_mode",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Set rxtx anchor",
+"Command": "set_rxtx_anchor",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Set rxtx sc",
+"Command": "set_rxtx_sc",
+"Func":default_autotest,
+"Report":  None,
+},


The above three tests don't look like autotests to me. I have no idea
what they are for, but either they need a special function, or they
need to be taken out.



These commands need to be run manually from the test command prompt to set
various rxtx modes, rxtx rates and rxtx directions.

These can be used to verify the PMD perf test with various combinations of
the above values.

So this can be removed from autotest.


We do have PMD perf tests in the script - do they call these functions?
If they are required for PMD autotests, maybe PMD autotests deserve a
special test function calling these commands before running the tests?

(if they also work without these commands, then we can perhaps postpone
this to 18.11)



I ran the PMD perf test manually and it passes without having to use the above
set_rxtx commands.

Thanks,
Reshma


Great.

Reviewed-by: Anatoly Burakov 

--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH v4] test: add sample functions for packet forwarding

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 11:00 AM, Jananee Parthasarathy wrote:

Add sample test functions for packet forwarding.
These can be used for unit test cases for
LatencyStats and BitrateStats libraries.

Signed-off-by: Chaitanya Babu Talluri 
Reviewed-by: Reshma Pattan 
Acked-by: Reshma Pattan 
---
v4: Updated return value as TEST_FAILED instead of -1
v3: Used same port for tx,rx and removed extra line
v2: SOCKET0 is removed and NUM_QUEUES is used accordingly
---
  test/test/Makefile|  1 +
  test/test/sample_packet_forward.c | 71 +++
  test/test/sample_packet_forward.h | 21 
  3 files changed, 93 insertions(+)
  create mode 100644 test/test/sample_packet_forward.c
  create mode 100644 test/test/sample_packet_forward.h

diff --git a/test/test/Makefile b/test/test/Makefile
index e6967bab6..8032ce53b 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -135,6 +135,7 @@ SRCS-y += test_version.c
  SRCS-y += test_func_reentrancy.c
  
  SRCS-y += test_service_cores.c

+SRCS-y += sample_packet_forward.c
  
  SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline.c

  SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test_cmdline_num.c
diff --git a/test/test/sample_packet_forward.c b/test/test/sample_packet_forward.c
new file mode 100644
index 0..4a13d5001
--- /dev/null
+++ b/test/test/sample_packet_forward.c
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+#include "sample_packet_forward.h"
+#include "test.h"
+#include 
+
+#define NB_MBUF 512
+
+static struct rte_mempool *mp;
+uint16_t portid;
+
+/* Sample test to create virtual rings and tx,rx portid from rings */
+int
+test_ring_setup(void)
+{
+   uint16_t socket_id = rte_socket_id();
+   struct rte_ring *rxtx[NUM_RINGS];
+   rxtx[0] = rte_ring_create("R0", RING_SIZE, socket_id,
+   RING_F_SP_ENQ|RING_F_SC_DEQ);
+   if (rxtx[0] == NULL) {
+   printf("%s() line %u: rte_ring_create R0 failed",
+   __func__, __LINE__);
+   return TEST_FAILED;
+   }
+   portid = rte_eth_from_rings("net_ringa", rxtx, NUM_QUEUES, rxtx,
+   NUM_QUEUES, socket_id);
+
+   return TEST_SUCCESS;


I am probably missing something, but

1) Why are there 256 rings maximum, but only one is used?
2) You're creating these rings - where are you destroying them?
3) Some more comments on why this would be needed would be great as well.
Right now, it looks like it's ripped out of the middle of a patchset - I
don't see it being used anywhere.



+}
+
+/* Sample test to forward packets using virtual portids */
+int
+test_packet_forward(void)
+{
+   struct rte_mbuf *pbuf[NUM_PACKETS];
+
+   mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
+   RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+   if (mp == NULL)
+   return TEST_FAILED;
+   if (rte_pktmbuf_alloc_bulk(mp, pbuf, NUM_PACKETS) != 0)
+   printf("%s() line %u: rte_pktmbuf_alloc_bulk failed"
+   , __func__, __LINE__);
+   /* send and receive packet and check for stats update */
+   if (rte_eth_tx_burst(portid, 0, pbuf, NUM_PACKETS) !=
+   NUM_PACKETS) {
+   printf("%s() line %u: Error sending packet to"
+   " port %d\n", __func__, __LINE__,
+   portid);
+   return TEST_FAILED;
+   }
+   if (rte_eth_rx_burst(portid, 0, pbuf, NUM_PACKETS) !=
+   NUM_PACKETS) {
+   printf("%s() line %u: Error receiving packet from"
+   " port %d\n", __func__, __LINE__,
+   portid);
+   return TEST_FAILED;
+   }
+   return TEST_SUCCESS;


Same as above.


+}
diff --git a/test/test/sample_packet_forward.h b/test/test/sample_packet_forward.h
new file mode 100644
index 0..a4880316f
--- /dev/null
+++ b/test/test/sample_packet_forward.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SAMPLE_PACKET_FORWARD_H_
+#define _SAMPLE_PACKET_FORWARD_H_
+
+/* MACROS to support virtual ring creation */
+#define RING_SIZE 256
+#define NUM_RINGS 1
+#define NUM_QUEUES 1
+
+#define NUM_PACKETS 10
+
+/* Sample test to create virtual rings and tx,rx portid from rings */
+int test_ring_setup(void);
+
+/* Sample test to forward packet using virtual port id */
+int test_packet_forward(void);
+
+#endif /* _SAMPLE_PACKET_FORWARD_H_ */




--
Thanks,
Anatoly


[dpdk-dev] [PATCH] app/testpmd: fix logically dead code

2018-07-17 Thread Kevin Laatz
Remove logically dead code, tm_port_rate cannot be greater than
UINT32_MAX.
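A quick back-of-the-envelope check shows why the comparison can never be true (constant values below are assumed from the DPDK sources: ETH_SPEED_NUM_10G is 10000 Mbps, and softnicfwd defines BYTES_IN_MBPS as 1000 * 1000 / 8):

```python
# Hypothetical constants mirroring the assumed DPDK definitions.
ETH_SPEED_NUM_10G = 10000            # link speed in Mbps
BYTES_IN_MBPS = 1000 * 1000 // 8     # bytes/second per Mbps
UINT32_MAX = 2**32 - 1

# tm_port_rate as computed in the testpmd softnic forward code
tm_port_rate = ETH_SPEED_NUM_10G * BYTES_IN_MBPS
print(tm_port_rate)                  # 1250000000
print(tm_port_rate > UINT32_MAX)     # False: the clamp can never trigger
```

Since 1,250,000,000 is well below UINT32_MAX (4,294,967,295), the removed branch was indeed unreachable.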

Coverity issue: 302846
Fixes: 0ad778b398c6 ("app/testpmd: rework softnic forward mode")

Signed-off-by: Kevin Laatz 
---
 app/test-pmd/softnicfwd.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/app/test-pmd/softnicfwd.c b/app/test-pmd/softnicfwd.c
index 1f9eeaf..7ff6228 100644
--- a/app/test-pmd/softnicfwd.c
+++ b/app/test-pmd/softnicfwd.c
@@ -175,9 +175,6 @@ set_tm_hiearchy_nodes_shaper_rate(portid_t port_id,
rte_eth_link_get(port_id, &link_params);
tm_port_rate = (uint64_t)ETH_SPEED_NUM_10G * BYTES_IN_MBPS;
 
-   if (tm_port_rate > UINT32_MAX)
-   tm_port_rate = UINT32_MAX;
-
/* Set tm hierarchy shapers rate */
h->root_node_shaper_rate = tm_port_rate;
h->subport_node_shaper_rate =
-- 
2.9.5



[dpdk-dev] [PATCH v4 0/9] Make unit tests great again

2018-07-17 Thread Reshma Pattan
Previously, unit tests were running in groups. There were technical reasons why 
that was the case (mostly having to do with limiting memory), but it was hard 
to maintain and update the autotest script.

In 18.05, limiting of memory at DPDK startup was no longer necessary, as DPDK 
allocates memory at runtime as needed. This has the implication that the old 
test grouping can now be retired and replaced with a more sensible way of 
running unit tests (using multiprocessing pool of workers and a queue of 
tests). This patchset accomplishes exactly that.

This patchset merges changes done in [1], [2]

[1] http://dpdk.org/dev/patchwork/patch/40370/
[2] http://patches.dpdk.org/patch/40373/

v4: Removed non-autotest commands set_rxtx_mode, set_rxtx_anchor and
set_rxtx_sc from autotest_data.py

Reshma Pattan (9):
  autotest: fix printing
  autotest: fix invalid code on reports
  autotest: make autotest runner python 2/3 compliant
  autotest: visually separate different test categories
  autotest: improve filtering
  autotest: remove autotest grouping
  autotest: properly parallelize unit tests
  autotest: update autotest test case list
  mk: update make targets for classified testcases

 mk/rte.sdkroot.mk|4 +-
 mk/rte.sdktest.mk|   33 +-
 test/test/autotest.py|   13 +-
 test/test/autotest_data.py   | 1081 +-
 test/test/autotest_runner.py |  519 ++--
 5 files changed, 948 insertions(+), 702 deletions(-)

-- 
2.14.4



[dpdk-dev] [PATCH v4 1/9] autotest: fix printing

2018-07-17 Thread Reshma Pattan
Previously, printing was done using tuple syntax, which caused
output to appear as a tuple as opposed to being one string. Fix
this by using the addition operator instead.
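For illustration (the strings below are made up, not taken from the runner): on Python 2 without the print_function import, `print(result, timing)` is the print *statement* applied to a tuple, so a tuple repr ends up in the output. Concatenating into one string prints identically on both interpreters:

```python
result = "Timer autotest: Success".ljust(30)
timing = "[00m 03s]"

# what Python 2's print statement would emit for `print (result, timing)`:
# a tuple repr rather than two space-separated fields
print(repr((result, timing)))

# the fix: build a single string first, then print it
print(result + timing)
```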

Fixes: 54ca545dce4b ("make python scripts python2/3 compliant")
Cc: john.mcnam...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index a692f0697..b09b57876 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -247,7 +247,7 @@ def __process_results(self, results):
 
 # don't print out total time every line, it's the same anyway
 if i == len(results) - 1:
-print(result,
+print(result +
   "[%02dm %02ds]" % (total_time / 60, total_time % 60))
 else:
 print(result)
@@ -332,8 +332,8 @@ def run_all_tests(self):
 
 # create table header
 print("")
-print("Test name".ljust(30), "Test result".ljust(29),
-  "Test".center(9), "Total".center(9))
+print("Test name".ljust(30) + "Test result".ljust(29) +
+  "Test".center(9) + "Total".center(9))
 print("=" * 80)
 
 # make a note of tests start time
-- 
2.14.4



[dpdk-dev] [PATCH v4 2/9] autotest: fix invalid code on reports

2018-07-17 Thread Reshma Pattan
There are no reports defined for any test, so this codepath was
never triggered, but it's still wrong because it references
variables that aren't there. Fix it by passing the target into the
test function, and referencing the correct log variable.

Fixes: e2cc79b75d9f ("app: rework autotest.py")
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index b09b57876..bdc32da5d 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -41,7 +41,7 @@ def wait_prompt(child):
 # quite a bit of effort to make it work).
 
 
-def run_test_group(cmdline, test_group):
+def run_test_group(cmdline, target, test_group):
 results = []
 child = None
 start_time = time.time()
@@ -128,14 +128,15 @@ def run_test_group(cmdline, test_group):
 # make a note when the test was finished
 end_time = time.time()
 
+log = logfile.getvalue()
+
 # append test data to the result tuple
-result += (test["Name"], end_time - start_time,
-   logfile.getvalue())
+result += (test["Name"], end_time - start_time, log)
 
 # call report function, if any defined, and supply it with
 # target and complete log for test run
 if test["Report"]:
-report = test["Report"](self.target, log)
+report = test["Report"](target, log)
 
 # append report to results tuple
 result += (report,)
@@ -343,6 +344,7 @@ def run_all_tests(self):
 for test_group in self.parallel_test_groups:
 result = pool.apply_async(run_test_group,
   [self.__get_cmdline(test_group),
+   self.target,
test_group])
 results.append(result)
 
@@ -367,7 +369,7 @@ def run_all_tests(self):
 # run non_parallel tests. they are run one by one, synchronously
 for test_group in self.non_parallel_test_groups:
 group_result = run_test_group(
-self.__get_cmdline(test_group), test_group)
+self.__get_cmdline(test_group), self.target, test_group)
 
 self.__process_results(group_result)
 
-- 
2.14.4



[dpdk-dev] [PATCH v4 3/9] autotest: make autotest runner python 2/3 compliant

2018-07-17 Thread Reshma Pattan
Autotest runner was still using python 2-style print syntax. Fix
it by importing the print function from __future__, and fix the
calls to be python 3-style.

Fixes: 54ca545dce4b ("make python scripts python2/3 compliant")
Cc: john.mcnam...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index bdc32da5d..f6b669a2e 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -3,6 +3,7 @@
 
 # The main logic behind running autotests in parallel
 
+from __future__ import print_function
 import StringIO
 import csv
 import multiprocessing
@@ -52,8 +53,8 @@ def run_test_group(cmdline, target, test_group):
 # prepare logging of init
 startuplog = StringIO.StringIO()
 
-print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
-print >>startuplog, "\ncmdline=%s" % cmdline
+print("\n%s %s\n" % ("=" * 20, test_group["Prefix"]), file=startuplog)
+print("\ncmdline=%s" % cmdline, file=startuplog)
 
 child = pexpect.spawn(cmdline, logfile=startuplog)
 
@@ -117,7 +118,7 @@ def run_test_group(cmdline, target, test_group):
 
 try:
 # print test name to log buffer
-print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+print("\n%s %s\n" % ("-" * 20, test["Name"]), file=logfile)
 
 # run test function associated with the test
 if stripped or test["Command"] in avail_cmds:
-- 
2.14.4



[dpdk-dev] [PATCH v4 4/9] autotest: visually separate different test categories

2018-07-17 Thread Reshma Pattan
Help visually identify parallel vs. non-parallel autotests.

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index f6b669a2e..d9d5f7a97 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -341,6 +341,7 @@ def run_all_tests(self):
 # make a note of tests start time
 self.start = time.time()
 
+print("Parallel autotests:")
 # assign worker threads to run test groups
 for test_group in self.parallel_test_groups:
 result = pool.apply_async(run_test_group,
@@ -367,6 +368,7 @@ def run_all_tests(self):
 # remove result from results list once we're done with it
 results.remove(group_result)
 
+print("Non-parallel autotests:")
 # run non_parallel tests. they are run one by one, synchronously
 for test_group in self.non_parallel_test_groups:
 group_result = run_test_group(
-- 
2.14.4



[dpdk-dev] [PATCH v4 5/9] autotest: improve filtering

2018-07-17 Thread Reshma Pattan
Improve code for filtering test groups. Also, move reading binary
symbols into the filtering stage, so that tests that are meant to
be skipped are never executed in the first place. Before running
tests, print out any tests that were skipped because they weren't
compiled in.
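The symbol scan can be sketched as follows (the `nm` output is simulated here with made-up addresses; the real runner shells out to `file` and `nm` on the test binary):

```python
import re

# simulated `nm` output for a non-stripped test binary
symbols = """
0000000000412a10 T test_register_ring_autotest
0000000000412b20 T test_register_mempool_autotest
"""

# commands compiled into the binary
avail_cmds = re.findall(r'test_register_(\w+)', symbols)

def is_available(test_cmd):
    # a stripped binary has no symbol table, so the real runner
    # assumes everything is present in that case
    return test_cmd in avail_cmds

print(avail_cmds)                      # ['ring_autotest', 'mempool_autotest']
print(is_available('timer_autotest'))  # False -> mark as skipped
```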

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 118 ---
 1 file changed, 66 insertions(+), 52 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index d9d5f7a97..c98ec2a57 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -95,13 +95,6 @@ def run_test_group(cmdline, target, test_group):
 results.append((0, "Success", "Start %s" % test_group["Prefix"],
 time.time() - start_time, startuplog.getvalue(), None))
 
-# parse the binary for available test commands
-binary = cmdline.split()[0]
-stripped = 'not stripped' not in subprocess.check_output(['file', binary])
-if not stripped:
-symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
-avail_cmds = re.findall('test_register_(\w+)', symbols)
-
 # run all tests in test group
 for test in test_group["Tests"]:
 
@@ -121,10 +114,7 @@ def run_test_group(cmdline, target, test_group):
 print("\n%s %s\n" % ("-" * 20, test["Name"]), file=logfile)
 
 # run test function associated with the test
-if stripped or test["Command"] in avail_cmds:
-result = test["Func"](child, test["Command"])
-else:
-result = (0, "Skipped [Not Available]")
+result = test["Func"](child, test["Command"])
 
 # make a note when the test was finished
 end_time = time.time()
@@ -186,8 +176,10 @@ class AutotestRunner:
 def __init__(self, cmdline, target, blacklist, whitelist):
 self.cmdline = cmdline
 self.target = target
+self.binary = cmdline.split()[0]
 self.blacklist = blacklist
 self.whitelist = whitelist
+self.skipped = []
 
 # log file filename
 logfile = "%s.log" % target
@@ -276,53 +268,58 @@ def __process_results(self, results):
 if i != 0:
 self.csvwriter.writerow([test_name, test_result, result_str])
 
-# this function iterates over test groups and removes each
-# test that is not in whitelist/blacklist
-def __filter_groups(self, test_groups):
-groups_to_remove = []
-
-# filter out tests from parallel test groups
-for i, test_group in enumerate(test_groups):
-
-# iterate over a copy so that we could safely delete individual
-# tests
-for test in test_group["Tests"][:]:
-test_id = test["Command"]
-
-# dump tests are specified in full e.g. "Dump_mempool"
-if "_autotest" in test_id:
-test_id = test_id[:-len("_autotest")]
-
-# filter out blacklisted/whitelisted tests
-if self.blacklist and test_id in self.blacklist:
-test_group["Tests"].remove(test)
-continue
-if self.whitelist and test_id not in self.whitelist:
-test_group["Tests"].remove(test)
-continue
-
-# modify or remove original group
-if len(test_group["Tests"]) > 0:
-test_groups[i] = test_group
-else:
-# remember which groups should be deleted
-# put the numbers backwards so that we start
-# deleting from the end, not from the beginning
-groups_to_remove.insert(0, i)
+# this function checks individual test and decides if this test should be in
+# the group by comparing it against whitelist/blacklist. it also checks if
+# the test is compiled into the binary, and marks it as skipped if necessary
+def __filter_test(self, test):
+test_cmd = test["Command"]
+test_id = test_cmd
+
+# dump tests are specified in full e.g. "Dump_mempool"
+if "_autotest" in test_id:
+test_id = test_id[:-len("_autotest")]
+
+# filter out blacklisted/whitelisted tests
+if self.blacklist and test_id in self.blacklist:
+return False
+if self.whitelist and test_id not in self.whitelist:
+return False
+
+# if test wasn't compiled in, remove it as well
+
+# parse the binary for available test commands
+stripped = 'not stripped' not in \
+   subprocess.check_output(['file', self.binary])
+if not stripped:
+symbols = subprocess.check_output(['nm',
+   self.binary]).decode('utf-8')
+avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+if test_cmd

[dpdk-dev] [PATCH v4 9/9] mk: update make targets for classified testcases

2018-07-17 Thread Reshma Pattan
Makefiles are updated with new test case lists.
Test cases are classified as -
P1 - Main test cases,
P2 - Cryptodev/driver test cases,
P3 - Perf test cases which take longer than 10s,
P4 - Logging/Dump test cases.

Makefile is updated with different targets
for the above classified groups.
Test cases for different targets are listed accordingly.

Cc: sta...@dpdk.org

Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Reshma Pattan 
---
 mk/rte.sdkroot.mk |  4 ++--
 mk/rte.sdktest.mk | 33 +++--
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk
index f43cc7829..ea3473ebf 100644
--- a/mk/rte.sdkroot.mk
+++ b/mk/rte.sdkroot.mk
@@ -68,8 +68,8 @@ config defconfig showconfigs showversion showversionum:
 cscope gtags tags etags:
$(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
 
-.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage
-test test-basic test-fast test-ring test-mempool test-perf coverage:
+.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
+test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:
$(Q)$(MAKE) -f $(RTE_SDK)/mk/rte.sdktest.mk $@
 
 test: test-build
diff --git a/mk/rte.sdktest.mk b/mk/rte.sdktest.mk
index ee1fe0c7e..13d1efb6a 100644
--- a/mk/rte.sdktest.mk
+++ b/mk/rte.sdktest.mk
@@ -18,14 +18,35 @@ DIR := $(shell basename $(RTE_OUTPUT))
 #
 # test: launch auto-tests, very simple for now.
 #
-.PHONY: test test-basic test-fast test-perf coverage
+.PHONY: test test-basic test-fast test-perf test-drivers test-dump coverage
 
-PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf
-coverage: BLACKLIST=-$(PERFLIST)
-test-fast: BLACKLIST=-$(PERFLIST)
-test-perf: WHITELIST=$(PERFLIST)
+PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf,\
+ reciprocal_division,reciprocal_division_perf,lpm_perf,red_all,\
+ barrier,hash_multiwriter,timer_racecond,efd,hash_functions,\
+ eventdev_selftest_sw,member_perf,efd_perf,lpm6_perf,red_perf,\
+ distributor_perf,ring_pmd_perf,pmd_perf,ring_perf
+DRIVERSLIST=link_bonding,link_bonding_mode4,link_bonding_rssconf,\
+cryptodev_sw_mrvl,cryptodev_dpaa2_sec,cryptodev_dpaa_sec,\
+cryptodev_qat,cryptodev_aesni_mb,cryptodev_openssl,\
+cryptodev_scheduler,cryptodev_aesni_gcm,cryptodev_null,\
+cryptodev_sw_snow3g,cryptodev_sw_kasumi,cryptodev_sw_zuc
+DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
+ dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
+ devargs_autotest
 
-test test-basic test-fast test-perf:
+SPACESTR:=
+SPACESTR+=
+STRIPPED_PERFLIST=$(subst $(SPACESTR),,$(PERFLIST))
+STRIPPED_DRIVERSLIST=$(subst $(SPACESTR),,$(DRIVERSLIST))
+STRIPPED_DUMPLIST=$(subst $(SPACESTR),,$(DUMPLIST))
+
+coverage: BLACKLIST=-$(STRIPPED_PERFLIST)
+test-fast: BLACKLIST=-$(STRIPPED_PERFLIST),$(STRIPPED_DRIVERSLIST),$(STRIPPED_DUMPLIST)
+test-perf: WHITELIST=$(STRIPPED_PERFLIST)
+test-drivers: WHITELIST=$(STRIPPED_DRIVERSLIST)
+test-dump: WHITELIST=$(STRIPPED_DUMPLIST)
+
+test test-basic test-fast test-perf test-drivers test-dump:
@mkdir -p $(AUTOTEST_DIR) ; \
cd $(AUTOTEST_DIR) ; \
if [ -f $(RTE_OUTPUT)/app/test ]; then \
-- 
2.14.4



[dpdk-dev] [PATCH v4 8/9] autotest: update autotest test case list

2018-07-17 Thread Reshma Pattan
Autotest is enhanced with additional test cases
added to autotest_data.py.

Removed the non-existent PCI autotest.

Cc: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Anatoly Burakov 
---
 test/test/autotest_data.py | 350 +++--
 1 file changed, 342 insertions(+), 8 deletions(-)

diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index c24e7bc25..3f856ff57 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -134,12 +134,6 @@
 "Func":default_autotest,
 "Report":  None,
 },
-{
-"Name":"PCI autotest",
-"Command": "pci_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
 {
 "Name":"Malloc autotest",
 "Command": "malloc_autotest",
@@ -248,6 +242,291 @@
 "Func":default_autotest,
 "Report":  None,
 },
+{
+"Name":"Eventdev selftest octeontx",
+"Command": "eventdev_selftest_octeontx",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Event ring autotest",
+"Command": "event_ring_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Table autotest",
+"Command": "table_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Flow classify autotest",
+"Command": "flow_classify_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Event eth rx adapter autotest",
+"Command": "event_eth_rx_adapter_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"User delay",
+"Command": "user_delay_us",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Rawdev autotest",
+"Command": "rawdev_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Kvargs autotest",
+"Command": "kvargs_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Devargs autotest",
+"Command": "devargs_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding autotest",
+"Command": "link_bonding_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding mode4 autotest",
+"Command": "link_bonding_mode4_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding rssconf autotest",
+"Command": "link_bonding_rssconf_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Crc autotest",
+"Command": "crc_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Distributor autotest",
+"Command": "distributor_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Reorder autotest",
+"Command": "reorder_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Barrier autotest",
+"Command": "barrier_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Bitmap test",
+"Command": "bitmap_test",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash scaling autotest",
+"Command": "hash_scaling_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash multiwriter autotest",
+"Command": "hash_multiwriter_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Service autotest",
+"Command": "service_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Timer racecond autotest",
+"Command": "timer_racecond_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Member autotest",
+"Command": "member_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":   "Efd_autotest",
+"Command": "efd_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Thash autotest",
+"Command": "thash_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash function autotest",
+"Command": "hash_functions_autotest

[dpdk-dev] [PATCH v4 7/9] autotest: properly parallelize unit tests

2018-07-17 Thread Reshma Pattan
Now that everything else is in place, we can run unit tests in a
different fashion to what they were running as before. Previously,
we had all autotests as part of groups (largely obtained through
trial and error) to ensure parallel execution while still limiting
amounts of memory used by those tests.

This is no longer necessary, and as of the previous commit, all
tests are now in the same group (still broken into two categories).
They still run one-by-one though. Fix this by initializing child
processes in the multiprocessing Pool initializer, and putting all
tests on the queue, so that each test is executed by the first idle
worker. Tests are also affinitized to different NUMA nodes using
taskset in a round-robin fashion, to avoid exhausting memory on any
given NUMA node.

Non-parallel tests are executed in similar fashion, but on a
separate queue which will have only one pool worker, ensuring
non-parallel execution.

Support for FreeBSD is also added to ensure that on FreeBSD, all
tests are run sequentially even for the parallel section.
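The per-worker initialization described above can be sketched in isolation (the worker prefixes below are made up; the real runner also builds per-worker --file-prefix command lines and a result queue):

```python
from multiprocessing import Pool, Queue

def pool_init(queue):
    # initargs must be identical for every worker in the pool, so each
    # worker dequeues its own unique startup arguments exactly once
    global worker_prefix
    worker_prefix = queue.get()

def run_test(test_name):
    # executed by whichever worker is idle first
    return (worker_prefix, test_name)

if __name__ == "__main__":
    queue = Queue()
    for prefix in ("group_0", "group_1"):
        queue.put(prefix)

    pool = Pool(2, initializer=pool_init, initargs=(queue,))
    results = pool.map(run_test, ["timer_autotest", "ring_autotest"])
    pool.close()
    pool.join()

    print(sorted(name for _, name in results))
    # ['ring_autotest', 'timer_autotest']
```

Non-parallel tests then go to a second queue served by a single-worker pool, which preserves sequential execution.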

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest.py|   6 +-
 test/test/autotest_runner.py | 277 +++
 2 files changed, 183 insertions(+), 100 deletions(-)

diff --git a/test/test/autotest.py b/test/test/autotest.py
index ae27daef7..12997fdf0 100644
--- a/test/test/autotest.py
+++ b/test/test/autotest.py
@@ -36,8 +36,12 @@ def usage():
 
 print(cmdline)
 
+# how many workers to run tests with. FreeBSD doesn't support multiple primary
+# processes, so make it 1, otherwise make it 4. ignored for non-parallel tests
+n_processes = 1 if "bsdapp" in target else 4
+
 runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
-test_whitelist)
+test_whitelist, n_processes)
 
 runner.parallel_tests = autotest_data.parallel_test_list[:]
 runner.non_parallel_tests = autotest_data.non_parallel_test_list[:]
diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index d6ae57e76..36941a40a 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -6,16 +6,16 @@
 from __future__ import print_function
 import StringIO
 import csv
-import multiprocessing
+from multiprocessing import Pool, Queue
 import pexpect
 import re
 import subprocess
 import sys
 import time
+import glob
+import os
 
 # wait for prompt
-
-
 def wait_prompt(child):
 try:
 child.sendline()
@@ -28,22 +28,47 @@ def wait_prompt(child):
 else:
 return False
 
-# run a test group
-# each result tuple in results list consists of:
-#   result value (0 or -1)
-#   result string
-#   test name
-#   total test run time (double)
-#   raw test log
-#   test report (if not available, should be None)
-#
-# this function needs to be outside AutotestRunner class
-# because otherwise Pool won't work (or rather it will require
-# quite a bit of effort to make it work).
+
+# get all valid NUMA nodes
+def get_numa_nodes():
+return [
+int(
+re.match(r"node(\d+)", os.path.basename(node))
+.group(1)
+)
+for node in glob.glob("/sys/devices/system/node/node*")
+]
+
+
+# find first (or any, really) CPU on a particular node, will be used to spread
+# processes around NUMA nodes to avoid exhausting memory on particular node
+def first_cpu_on_node(node_nr):
+cpu_path = glob.glob("/sys/devices/system/node/node%d/cpu*" % node_nr)[0]
+cpu_name = os.path.basename(cpu_path)
+m = re.match(r"cpu(\d+)", cpu_name)
+return int(m.group(1))
+
+
+pool_child = None  # per-process child
 
 
-def run_test_group(cmdline, prefix, target, test):
+# we initialize each worker with a queue because we need per-pool unique
+# command-line arguments, but we cannot do different arguments in an initializer
+# because the API doesn't allow per-worker initializer arguments. so, instead,
+# we will initialize with a shared queue, and dequeue command-line arguments
+# from this queue
+def pool_init(queue, result_queue):
+global pool_child
+
+cmdline, prefix = queue.get()
 start_time = time.time()
+name = ("Start %s" % prefix) if prefix != "" else "Start"
+
+# use default prefix if no prefix was specified
+prefix_cmdline = "--file-prefix=%s" % prefix if prefix != "" else ""
+
+# append prefix to cmdline
+cmdline = "%s %s" % (cmdline, prefix_cmdline)
 
 # prepare logging of init
 startuplog = StringIO.StringIO()
@@ -54,24 +79,60 @@ def run_test_group(cmdline, prefix, target, test):
 print("\n%s %s\n" % ("=" * 20, prefix), file=startuplog)
 print("\ncmdline=%s" % cmdline, file=startuplog)
 
-child = pexpect.spawn(cmdline, logfile=startuplog)
+pool_child = pexpect.spawn(cmdline, logfile=startuplog)
 
 # wait for target to boot
-if not wait_prompt(child):
-child.close()
+if n

[dpdk-dev] [PATCH v4 6/9] autotest: remove autotest grouping

2018-07-17 Thread Reshma Pattan
Previously, all autotests were grouped into (seemingly arbitrary)
groups. The goal was to run all tests in parallel (so that autotest
finishes faster), but we couldn't just do it willy-nilly because
DPDK couldn't allocate and free hugepages on-demand, so we had to
find autotest groupings that could work memory-wise and still be
fast enough to not hold up shorter tests. The inflexibility of the
memory subsystem has now been fixed for 18.05, so grouping
autotests is no longer necessary.

Thus, this commit moves all autotests into two groups -
parallel(izable) autotests, and non-parallel(izable) autotests
(typically performance tests). Note that this particular commit
Re: [dpdk-dev] [PATCH] app/testpmd: fix logically dead code

2018-07-17 Thread Iremonger, Bernard
> -Original Message-
> From: Laatz, Kevin
> Sent: Tuesday, July 17, 2018 11:34 AM
> To: dev@dpdk.org
> Cc: Singh, Jasvinder ; Iremonger, Bernard
> ; Laatz, Kevin 
> Subject: [PATCH] app/testpmd: fix logically dead code
> 
> Remove logically dead code, tm_port_rate cannot be greater than
> UINT32_MAX.
> 
> Coverity issue: 302846
> Fixes: 0ad778b398c6 ("app/testpmd: rework softnic forward mode")
> 
> Signed-off-by: Kevin Laatz 

Acked-by: Bernard Iremonger 


[dpdk-dev] [PATCH v2] test: fix incorrect return types

2018-07-17 Thread Reshma Pattan
UTs should return either TEST_SUCCESS or TEST_FAILED only.
They should not return 0, -1 or any other value.

Also replace one instance of setting the ret value
from -ENOMEM to TEST_FAILED, in order to return
correct value to autotest.

Fixes: 9c9befea4f ("test: add flow classify unit tests")
CC: jasvinder.si...@intel.com
CC: bernard.iremon...@intel.com
CC: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
Reviewed-by: Anatoly Burakov 
---
v2: update commit message
---
 test/test/test_flow_classify.c | 110 -
 1 file changed, 55 insertions(+), 55 deletions(-)

diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c
index fc83b69ae..2ff1ca831 100644
--- a/test/test/test_flow_classify.c
+++ b/test/test/test_flow_classify.c
@@ -231,7 +231,7 @@ test_invalid_parameters(void)
printf("Line %i: rte_flow_classify_validate",
__LINE__);
printf(" with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
rule = rte_flow_classify_table_entry_add(NULL, NULL, NULL, NULL,
@@ -239,7 +239,7 @@ test_invalid_parameters(void)
if (rule) {
printf("Line %i: flow_classifier_table_entry_add", __LINE__);
printf(" with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
ret = rte_flow_classify_table_entry_delete(NULL, NULL);
@@ -247,14 +247,14 @@ test_invalid_parameters(void)
printf("Line %i: rte_flow_classify_table_entry_delete",
__LINE__);
printf(" with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
ret = rte_flow_classifier_query(NULL, NULL, 0, NULL, NULL);
if (!ret) {
printf("Line %i: flow_classifier_query", __LINE__);
printf(" with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
rule = rte_flow_classify_table_entry_add(NULL, NULL, NULL, NULL,
@@ -262,7 +262,7 @@ test_invalid_parameters(void)
if (rule) {
printf("Line %i: flow_classify_table_entry_add ", __LINE__);
printf("with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
ret = rte_flow_classify_table_entry_delete(NULL, NULL);
@@ -270,16 +270,16 @@ test_invalid_parameters(void)
printf("Line %i: rte_flow_classify_table_entry_delete",
__LINE__);
printf("with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
ret = rte_flow_classifier_query(NULL, NULL, 0, NULL, NULL);
if (!ret) {
printf("Line %i: flow_classifier_query", __LINE__);
printf(" with NULL param should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
-   return 0;
+   return TEST_SUCCESS;
 }
 
 static int
@@ -310,7 +310,7 @@ test_valid_parameters(void)
printf("Line %i: rte_flow_classify_validate",
__LINE__);
printf(" should not have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
rule = rte_flow_classify_table_entry_add(cls->cls, &attr, pattern,
actions, &key_found, &error);
@@ -318,7 +318,7 @@ test_valid_parameters(void)
if (!rule) {
printf("Line %i: flow_classify_table_entry_add", __LINE__);
printf(" should not have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
ret = rte_flow_classify_table_entry_delete(cls->cls, rule);
@@ -326,9 +326,9 @@ test_valid_parameters(void)
printf("Line %i: rte_flow_classify_table_entry_delete",
__LINE__);
printf(" should not have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
-   return 0;
+   return TEST_SUCCESS;
 }
 
 static int
@@ -361,7 +361,7 @@ test_invalid_patterns(void)
if (!ret) {
printf("Line %i: rte_flow_classify_validate", __LINE__);
printf(" should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
rule = rte_flow_classify_table_entry_add(cls->cls, &attr, pattern,
@@ -369,7 +369,7 @@ test_invalid_patterns(void)
if (rule) {
printf("Line %i: flow_classify_table_entry_add", __LINE__);
printf(" should have failed!\n");
-   return -1;
+   return TEST_FAILED;
}
 
ret = rte_flow_classify_table_entry_delete(cls->cls, rule);
@@ -377,7 +377,7 @@ test_invalid_patterns(void)
   

Re: [dpdk-dev] [PATCH v2 3/6] compress/octeontx: add xform and stream create support

2018-07-17 Thread Verma, Shally



>-Original Message-
>From: De Lara Guarch, Pablo [mailto:pablo.de.lara.gua...@intel.com]
>Sent: 14 July 2018 03:55
>To: Verma, Shally 
>Cc: dev@dpdk.org; Athreya, Narayana Prasad 
>; Challa, Mahipal
>; Gupta, Ashish ; Gupta, 
>Ashish ; Sahu,
>Sunila 
>Subject: RE: [PATCH v2 3/6] compress/octeontx: add xform and stream create 
>support
>
>External Email
>
>> -Original Message-
>> From: Shally Verma [mailto:shally.ve...@caviumnetworks.com]
>> Sent: Monday, July 2, 2018 5:55 PM
>> To: De Lara Guarch, Pablo 
>> Cc: dev@dpdk.org; pathr...@caviumnetworks.com;
>> mcha...@caviumnetworks.com; Ashish Gupta
>> ; Ashish Gupta
>> ; Sunila Sahu
>> 
>> Subject: [PATCH v2 3/6] compress/octeontx: add xform and stream create
>> support
>>
>> From: Ashish Gupta 
>>
>> implement private xform and stream create ops
>>
>> Signed-off-by: Ashish Gupta 
>> Signed-off-by: Shally Verma 
>> Signed-off-by: Sunila Sahu 
>> ---
>
>...
>
>>
>> +static int
>> +zip_pmd_stream_create(struct rte_compressdev *dev,
>> + const struct rte_comp_xform *xform, void **stream) {
>
>Do you support stateful ops? If you don't, this should not be implemented, I 
>think (or should return -ENOTSUP).
For us, a non-shareable priv_xform is equivalent to a stream, so priv_xform_create
falls back to stream_create. However, I see your point, so to reflect that stateful
operations are not supported, we will set the stream_create function pointer to
NULL in pmd_ops.

Thanks
Shally


Re: [dpdk-dev] [PATCH v2] test: fix incorrect return types

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 1:39 PM, Reshma Pattan wrote:

UTs should return either TEST_SUCCESS or TEST_FAILED only.
They should not return 0, -1 or any other value.

Also replace one instance of setting the ret value
from -ENOMEM to TEST_FAILED, in order to return
correct value to autotest.

Fixes: 9c9befea4f ("test: add flow classify unit tests")
CC: jasvinder.si...@intel.com
CC: bernard.iremon...@intel.com
CC: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
Reviewed-by: Anatoly Burakov 
---
v2: update commit message
---





@@ -871,32 +871,32 @@ test_flow_classify(void)
printf("Line %i: f_create has failed!\n", __LINE__);
rte_flow_classifier_free(cls->cls);
rte_free(cls);
-   return -1;
+   return TEST_FAILED;
}
printf("Created table_acl for for IPv4 five tuple packets\n");
  
  	ret = init_mbufpool();

if (ret) {
printf("Line %i: init_mbufpool has failed!\n", __LINE__);
-   return -1;
+   return TEST_FAILED;
}
  
  	if (test_invalid_parameters() < 0)

-   return -1;
+   return TEST_FAILED;
if (test_valid_parameters() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_invalid_patterns() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_invalid_actions() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_query_udp() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_query_tcp() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_query_sctp() < 0)
-   return -1;
+   return TEST_FAILED;
  
-	return 0;

+   return TEST_SUCCESS;
  }
  
  REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify);




I'm nitpicking now, but technically, we could've forgone a large part of 
this patch and just kept the part above. We don't actually care if 
individual test functions return -1 or TEST_FAILED - we just need the 
return from the test app to be that :)


--
Thanks,
Anatoly


Re: [dpdk-dev] [PATCH] examples/flow_filtering: add rte_fdir_conf initialization

2018-07-17 Thread Ori Kam
Hi,

PSB

Thanks,
Ori

> -Original Message-
> From: Ferruh Yigit [mailto:ferruh.yi...@intel.com]
> Sent: Tuesday, July 17, 2018 12:57 PM
> To: Ori Kam ; Xu, Rosen ;
> dev@dpdk.org
> Cc: sta...@dpdk.org; Gilmore, Walter E ; Qi
> Zhang 
> Subject: Re: [dpdk-dev] [PATCH] examples/flow_filtering: add rte_fdir_conf
> initialization
> 
> On 7/17/2018 6:15 AM, Ori Kam wrote:
> > Sorry for the late response,
> >
> >> -Original Message-
> >> From: Xu, Rosen [mailto:rosen...@intel.com]
> >> Sent: Thursday, July 12, 2018 9:23 AM
> >> To: Ori Kam ; dev@dpdk.org
> >> Cc: Yigit, Ferruh ; sta...@dpdk.org; Gilmore,
> Walter
> >> E 
> >> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
> rte_fdir_conf
> >> initialization
> >>
> >> Hi Ori,
> >>
> >> Pls see my reply.
> >>
> >> Hi Walter and Ferruh,
> >>
> >> I need your voice :)
> >>
> >>> -Original Message-
> >>> From: Ori Kam [mailto:or...@mellanox.com]
> >>> Sent: Thursday, July 12, 2018 13:58
> >>> To: Xu, Rosen ; dev@dpdk.org
> >>> Cc: Yigit, Ferruh ; sta...@dpdk.org
> >>> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
> >> rte_fdir_conf
> >>> initialization
> >>>
> >>> Hi,
> >>>
> >>> PSB
> >>>
>  -Original Message-
>  From: Xu, Rosen [mailto:rosen...@intel.com]
>  Sent: Thursday, July 12, 2018 8:27 AM
>  To: Ori Kam ; dev@dpdk.org
>  Cc: Yigit, Ferruh ; sta...@dpdk.org
>  Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
>  rte_fdir_conf initialization
> 
>  Hi Ori,
> 
>  examples/flow_filtering sample app fails on i40e [1] because i40e
>  requires explicit FDIR configuration.
> 
>  But rte_flow is a hardware-independent way of describing
>  flow-action; it shouldn't require specific config options for specific
> >>> hardware.
> 
> >>>
> >>> I don't understand why using rte flow require the use of fdir.
> >>> it doesn't make sense to me, that  new API will need old one.
> >>
> >> It's a good question, I also have this question about Mellanox NIC Driver
> >> mlx5_flow.c.
> >> In this file many flow functions call fdir. :)
> >
> > The only functions that are calling fdir are fdir function,
> > and you can see that inside of the create function we convert the fdir
> > Into rte flow.
> >
> >>
>  Is there any chance driver select the FDIR config automatically based
>  on rte_flow rule, unless explicitly a FDIR config set by user?
> >>>
> >>> I don't know how the i40e driver is implemented but I know that
> Mellanox
> >>> convert the other way around, if fdir is given it is converted to 
> >>> rte_flow.
> >>
> >> Firstly, rte_fdir_conf is part of rte_eth_conf definition.
> >>struct rte_eth_conf {
> >>..
> >>struct rte_fdir_conf fdir_conf; /**< FDIR configuration. */
> >>..
> >>};
> >> Secondly, default value of rte_eth_conf.fdir_conf.mode is
> >> RTE_FDIR_MODE_NONE, which means Disable FDIR support.
> >> Thirdly, flow_filtering should align with test-pmd, in test-pmd all 
> >> fdir_conf
> is
> >> initialized.
> >>
> >
> > This sounds to me correct we don't want to enable fdir.
> > Why should the example app for rte flow use fdir? And align to
> > testpmd, which supports everything in all modes?
> 
> In i40e fdir is used to implement filters, that is why rte_flow rules
> requires/depends some fdir configurations.
> 
> In long term I agree it is better if driver doesn't require any fdir
> configuration for rte_flow programing, although not sure if this is completely
> possible, cc'ed Qi for more comment.
> 
> For the short term I am for getting this patch in so that the sample app can
> run on i40e too, and the fdir configuration shouldn't affect others. Perhaps
> it would be good to add a comment saying why that config option is added and
> that it is a temporary workaround.
> 

Assuming that the fdir settings are fixed for all possible rte_flow rules,
I can agree to this workaround, but we must add a comment in the code
and also in the example documentation.

It will be a problem if other PMDs require different default settings.
In that case we must find a better solution.


> >
> >
> >>>
> 
>  [1]
>  Flow can't be created 1 message: Check the mode in fdir_conf.
>  EAL: Error - exiting with code: 1
> 
> > -Original Message-
> > From: Ori Kam [mailto:or...@mellanox.com]
> > Sent: Thursday, July 12, 2018 13:17
> > To: Xu, Rosen ; dev@dpdk.org
> > Cc: Yigit, Ferruh ; sta...@dpdk.org; Ori Kam
> > 
> > Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
>  rte_fdir_conf
> > initialization
> >
> > Hi Rosen,
> >
> > Why do the fdir_conf must be initialized?
> >
> > What is the issue you are seeing?
> >
> > Best,
> > Ori
> >
> >> -Original Message-
> >> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Rosen Xu
> >> Sent: Thursday, July 12, 2018 5

[dpdk-dev] [PATCH] vhost: fix buffer length calculation

2018-07-17 Thread Tiwei Bie
Fixes: fd68b4739d2c ("vhost: use buffer vectors in dequeue path")

Reported-by: Yinan Wang 
Signed-off-by: Tiwei Bie 
---
 lib/librte_vhost/virtio_net.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2b7ffcf92..07cc0c845 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -720,7 +720,8 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
uint16_t hdr_vec_idx = 0;
 
while (remain) {
-   len = remain;
+   len = RTE_MIN(remain,
+   buf_vec[hdr_vec_idx].buf_len);
dst = buf_vec[hdr_vec_idx].buf_addr;
rte_memcpy((void *)(uintptr_t)dst,
(void *)(uintptr_t)src,
@@ -747,7 +748,7 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
hdr_addr = 0;
}
 
-   cpy_len = RTE_MIN(buf_len, mbuf_avail);
+   cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
if (likely(cpy_len > MAX_BATCH_LEN ||
vq->batch_copy_nb_elems >= vq->size)) {
@@ -1112,7 +1113,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 * in a contiguous virtual area.
 */
while (remain) {
-   len = remain;
+   len = RTE_MIN(remain,
+   buf_vec[hdr_vec_idx].buf_len);
src = buf_vec[hdr_vec_idx].buf_addr;
rte_memcpy((void *)(uintptr_t)dst,
   (void *)(uintptr_t)src, len);
-- 
2.18.0


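The logic the fix targets — copying a virtio-net header that may be scattered across several descriptor buffers — can be sketched in Python (a simplified model, not the vhost API; segment sizes and names are illustrative):

```python
def copy_to_buf_vec(src, seg_lens):
    """Split src across a chain of buffer segments, clamping each chunk
    to the segment's capacity -- the RTE_MIN(remain, buf_len) in the fix.
    The bug was effectively `length = remain`, which can overrun a short
    segment."""
    segs = []
    off = 0
    remain = len(src)
    for buf_len in seg_lens:
        if remain == 0:
            break
        length = min(remain, buf_len)  # clamp to this segment's capacity
        segs.append(src[off:off + length])
        off += length
        remain -= length
    if remain:
        raise ValueError("buffer vector too small for header")
    return segs

hdr = bytes(range(12))                    # a 12-byte "header"
chunks = copy_to_buf_vec(hdr, [4, 4, 8])  # scattered over three segments
assert b"".join(chunks) == hdr
assert [len(c) for c in chunks] == [4, 4, 4]
```

Without the clamp, the second 4-byte segment would receive the full 8 remaining bytes and overflow, which matches the guest-side corruption seen in the VM2VM test.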

[dpdk-dev] [PATCH v4 0/9] Make unit tests great again

2018-07-17 Thread Reshma Pattan
Previously, unit tests were running in groups. There were technical reasons why 
that was the case (mostly having to do with limiting memory), but it was hard 
to maintain and update the autotest script.

In 18.05, limiting of memory at DPDK startup was no longer necessary, as DPDK 
allocates memory at runtime as needed. This has the implication that the old 
test grouping can now be retired and replaced with a more sensible way of 
running unit tests (using a multiprocessing pool of workers and a queue of 
tests). This patchset accomplishes exactly that.

This patchset merges changes done in [1], [2]

[1] http://dpdk.org/dev/patchwork/patch/40370/
[2] http://patches.dpdk.org/patch/40373/

v4: Removed non auto tests set_rxtx_mode, set_rxtx_anchor and set_rxtx_sc
from autotest_data.py

Reshma Pattan (9):
  autotest: fix printing
  autotest: fix invalid code on reports
  autotest: make autotest runner python 2/3 compliant
  autotest: visually separate different test categories
  autotest: improve filtering
  autotest: remove autotest grouping
  autotest: properly parallelize unit tests
  autotest: update autotest test case list
  mk: update make targets for classified testcases

 mk/rte.sdkroot.mk|4 +-
 mk/rte.sdktest.mk|   33 +-
 test/test/autotest.py|   13 +-
 test/test/autotest_data.py   | 1081 +-
 test/test/autotest_runner.py |  519 ++--
 5 files changed, 948 insertions(+), 702 deletions(-)

-- 
2.14.4



[dpdk-dev] [PATCH v4 3/9] autotest: make autotest runner python 2/3 compliant

2018-07-17 Thread Reshma Pattan
The autotest runner was still using Python 2-style print syntax. Fix
it by importing the print function from __future__, and fix the calls
to be Python 3 style.

Fixes: 54ca545dce4b ("make python scripts python2/3 compliant")
Cc: john.mcnam...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index bdc32da5d..f6b669a2e 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -3,6 +3,7 @@
 
 # The main logic behind running autotests in parallel
 
+from __future__ import print_function
 import StringIO
 import csv
 import multiprocessing
@@ -52,8 +53,8 @@ def run_test_group(cmdline, target, test_group):
 # prepare logging of init
 startuplog = StringIO.StringIO()
 
-print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
-print >>startuplog, "\ncmdline=%s" % cmdline
+print("\n%s %s\n" % ("=" * 20, test_group["Prefix"]), file=startuplog)
+print("\ncmdline=%s" % cmdline, file=startuplog)
 
 child = pexpect.spawn(cmdline, logfile=startuplog)
 
@@ -117,7 +118,7 @@ def run_test_group(cmdline, target, test_group):
 
 try:
 # print test name to log buffer
-print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+print("\n%s %s\n" % ("-" * 20, test["Name"]), file=logfile)
 
 # run test function associated with the test
 if stripped or test["Command"] in avail_cmds:
-- 
2.14.4



[dpdk-dev] [PATCH v4 4/9] autotest: visually separate different test categories

2018-07-17 Thread Reshma Pattan
Help visually identify parallel vs. non-parallel autotests.

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index f6b669a2e..d9d5f7a97 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -341,6 +341,7 @@ def run_all_tests(self):
 # make a note of tests start time
 self.start = time.time()
 
+print("Parallel autotests:")
 # assign worker threads to run test groups
 for test_group in self.parallel_test_groups:
 result = pool.apply_async(run_test_group,
@@ -367,6 +368,7 @@ def run_all_tests(self):
 # remove result from results list once we're done with it
 results.remove(group_result)
 
+print("Non-parallel autotests:")
 # run non_parallel tests. they are run one by one, synchronously
 for test_group in self.non_parallel_test_groups:
 group_result = run_test_group(
-- 
2.14.4



[dpdk-dev] [PATCH v4 2/9] autotest: fix invalid code on reports

2018-07-17 Thread Reshma Pattan
There are no reports defined for any test, so this codepath was
never triggered, but it's still wrong because it's referencing
variables that aren't there. Fix it by passing target into the
test function, and reference correct log variable.

Fixes: e2cc79b75d9f ("app: rework autotest.py")
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index b09b57876..bdc32da5d 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -41,7 +41,7 @@ def wait_prompt(child):
 # quite a bit of effort to make it work).
 
 
-def run_test_group(cmdline, test_group):
+def run_test_group(cmdline, target, test_group):
 results = []
 child = None
 start_time = time.time()
@@ -128,14 +128,15 @@ def run_test_group(cmdline, test_group):
 # make a note when the test was finished
 end_time = time.time()
 
+log = logfile.getvalue()
+
 # append test data to the result tuple
-result += (test["Name"], end_time - start_time,
-   logfile.getvalue())
+result += (test["Name"], end_time - start_time, log)
 
 # call report function, if any defined, and supply it with
 # target and complete log for test run
 if test["Report"]:
-report = test["Report"](self.target, log)
+report = test["Report"](target, log)
 
 # append report to results tuple
 result += (report,)
@@ -343,6 +344,7 @@ def run_all_tests(self):
 for test_group in self.parallel_test_groups:
 result = pool.apply_async(run_test_group,
   [self.__get_cmdline(test_group),
+   self.target,
test_group])
 results.append(result)
 
@@ -367,7 +369,7 @@ def run_all_tests(self):
 # run non_parallel tests. they are run one by one, synchronously
 for test_group in self.non_parallel_test_groups:
 group_result = run_test_group(
-self.__get_cmdline(test_group), test_group)
+self.__get_cmdline(test_group), self.target, test_group)
 
 self.__process_results(group_result)
 
-- 
2.14.4



[dpdk-dev] [PATCH v4 1/9] autotest: fix printing

2018-07-17 Thread Reshma Pattan
Previously, printing was done using tuple syntax, which caused
output to appear as a tuple as opposed to being one string. Fix
this by using the addition operator instead.
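The difference is easiest to see by comparing what each form produces (a sketch; the column widths are illustrative):

```python
# Under Python 2 without `from __future__ import print_function`,
# print(a, b) evaluates the *tuple* (a, b), so the output gains
# parentheses, quotes and commas. Concatenation builds one flat
# string and behaves identically on Python 2 and 3.
a = "Test name".ljust(30)
b = "Test result".ljust(29)

tuple_form = str((a, b))   # what Python 2's print(a, b) would show
concat_form = a + b        # the fixed form

assert tuple_form.startswith("('Test name")
assert concat_form.startswith("Test name")
assert "'" not in concat_form
```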

Fixes: 54ca545dce4b ("make python scripts python2/3 compliant")
Cc: john.mcnam...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index a692f0697..b09b57876 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -247,7 +247,7 @@ def __process_results(self, results):
 
 # don't print out total time every line, it's the same anyway
 if i == len(results) - 1:
-print(result,
+print(result +
   "[%02dm %02ds]" % (total_time / 60, total_time % 60))
 else:
 print(result)
@@ -332,8 +332,8 @@ def run_all_tests(self):
 
 # create table header
 print("")
-print("Test name".ljust(30), "Test result".ljust(29),
-  "Test".center(9), "Total".center(9))
+print("Test name".ljust(30) + "Test result".ljust(29) +
+  "Test".center(9) + "Total".center(9))
 print("=" * 80)
 
 # make a note of tests start time
-- 
2.14.4



[dpdk-dev] [PATCH v4 5/9] autotest: improve filtering

2018-07-17 Thread Reshma Pattan
Improve the code for filtering test groups. Also, move reading binary
symbols into the filtering stage, so that tests that are meant to be
skipped are never executed in the first place.
Before running tests, print out any tests that were skipped because
they weren't compiled in.
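The filtering rules described above can be sketched like this (the symbol text and helper name are illustrative, not the runner's exact code):

```python
import re

# symbols as `nm <binary>` would print them (illustrative)
symbols = """
0000000000001000 T test_register_cycles_autotest
0000000000002000 T test_register_timer_autotest
"""
avail_cmds = re.findall(r"test_register_(\w+)", symbols)

def filter_test(test_cmd, blacklist=(), whitelist=()):
    test_id = test_cmd
    # blacklist/whitelist entries use the short id without "_autotest"
    if test_id.endswith("_autotest"):
        test_id = test_id[: -len("_autotest")]
    if blacklist and test_id in blacklist:
        return False
    if whitelist and test_id not in whitelist:
        return False
    return test_cmd in avail_cmds  # not compiled in -> skip it

assert filter_test("cycles_autotest")
assert not filter_test("cycles_autotest", blacklist=("cycles",))
assert not filter_test("mempool_autotest")  # absent from the binary
```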

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 118 ---
 1 file changed, 66 insertions(+), 52 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index d9d5f7a97..c98ec2a57 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -95,13 +95,6 @@ def run_test_group(cmdline, target, test_group):
 results.append((0, "Success", "Start %s" % test_group["Prefix"],
 time.time() - start_time, startuplog.getvalue(), None))
 
-# parse the binary for available test commands
-binary = cmdline.split()[0]
-stripped = 'not stripped' not in subprocess.check_output(['file', binary])
-if not stripped:
-symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
-avail_cmds = re.findall('test_register_(\w+)', symbols)
-
 # run all tests in test group
 for test in test_group["Tests"]:
 
@@ -121,10 +114,7 @@ def run_test_group(cmdline, target, test_group):
 print("\n%s %s\n" % ("-" * 20, test["Name"]), file=logfile)
 
 # run test function associated with the test
-if stripped or test["Command"] in avail_cmds:
-result = test["Func"](child, test["Command"])
-else:
-result = (0, "Skipped [Not Available]")
+result = test["Func"](child, test["Command"])
 
 # make a note when the test was finished
 end_time = time.time()
@@ -186,8 +176,10 @@ class AutotestRunner:
 def __init__(self, cmdline, target, blacklist, whitelist):
 self.cmdline = cmdline
 self.target = target
+self.binary = cmdline.split()[0]
 self.blacklist = blacklist
 self.whitelist = whitelist
+self.skipped = []
 
 # log file filename
 logfile = "%s.log" % target
@@ -276,53 +268,58 @@ def __process_results(self, results):
 if i != 0:
 self.csvwriter.writerow([test_name, test_result, result_str])
 
-# this function iterates over test groups and removes each
-# test that is not in whitelist/blacklist
-def __filter_groups(self, test_groups):
-groups_to_remove = []
-
-# filter out tests from parallel test groups
-for i, test_group in enumerate(test_groups):
-
-# iterate over a copy so that we could safely delete individual
-# tests
-for test in test_group["Tests"][:]:
-test_id = test["Command"]
-
-# dump tests are specified in full e.g. "Dump_mempool"
-if "_autotest" in test_id:
-test_id = test_id[:-len("_autotest")]
-
-# filter out blacklisted/whitelisted tests
-if self.blacklist and test_id in self.blacklist:
-test_group["Tests"].remove(test)
-continue
-if self.whitelist and test_id not in self.whitelist:
-test_group["Tests"].remove(test)
-continue
-
-# modify or remove original group
-if len(test_group["Tests"]) > 0:
-test_groups[i] = test_group
-else:
-# remember which groups should be deleted
-# put the numbers backwards so that we start
-# deleting from the end, not from the beginning
-groups_to_remove.insert(0, i)
+# this function checks individual test and decides if this test should be in
+# the group by comparing it against  whitelist/blacklist. it also checks if
+# the test is compiled into the binary, and marks it as skipped if necessary
+def __filter_test(self, test):
+test_cmd = test["Command"]
+test_id = test_cmd
+
+# dump tests are specified in full e.g. "Dump_mempool"
+if "_autotest" in test_id:
+test_id = test_id[:-len("_autotest")]
+
+# filter out blacklisted/whitelisted tests
+if self.blacklist and test_id in self.blacklist:
+return False
+if self.whitelist and test_id not in self.whitelist:
+return False
+
+# if test wasn't compiled in, remove it as well
+
+# parse the binary for available test commands
+stripped = 'not stripped' not in \
+   subprocess.check_output(['file', self.binary])
+if not stripped:
+symbols = subprocess.check_output(['nm',
+   self.binary]).decode('utf-8')
+avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+if test_cmd

[dpdk-dev] [PATCH v4 6/9] autotest: remove autotest grouping

2018-07-17 Thread Reshma Pattan
Previously, all autotests were grouped into (seemingly arbitrary)
groups. The goal was to run all tests in parallel (so that autotest
finishes faster), but we couldn't just do it willy-nilly because
DPDK couldn't allocate and free hugepages on-demand, so we had to
find autotest groupings that could work memory-wise and still be
fast enough to not hold up shorter tests. The inflexibility of
the memory subsystem has now been fixed for 18.05, so grouping
autotests is no longer necessary.

Thus, this commit moves all autotests into two groups -
parallel(izable) autotests, and non-parallel(izable) autotests
(typically performance tests). Note that this particular commit
makes running autotests dog slow because while the tests are now
in a single group, the test function itself hasn't changed much,
so all autotests are now run one-by-one, starting and stopping
the DPDK test application.

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest.py|   7 +-
 test/test/autotest_data.py   | 749 +--
 test/test/autotest_runner.py | 271 ++--
 3 files changed, 408 insertions(+), 619 deletions(-)

diff --git a/test/test/autotest.py b/test/test/autotest.py
index 1cfd8cf22..ae27daef7 100644
--- a/test/test/autotest.py
+++ b/test/test/autotest.py
@@ -39,11 +39,8 @@ def usage():
 runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
 test_whitelist)
 
-for test_group in autotest_data.parallel_test_group_list:
-runner.add_parallel_test_group(test_group)
-
-for test_group in autotest_data.non_parallel_test_group_list:
-runner.add_non_parallel_test_group(test_group)
+runner.parallel_tests = autotest_data.parallel_test_list[:]
+runner.non_parallel_tests = autotest_data.non_parallel_test_list[:]
 
 num_fails = runner.run_all_tests()
 
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index aacfe0a66..c24e7bc25 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -3,465 +3,322 @@
 
 # Test data for autotests
 
-from glob import glob
 from autotest_test_funcs import *
 
-
-# quick and dirty function to find out number of sockets
-def num_sockets():
-result = len(glob("/sys/devices/system/node/node*"))
-if result == 0:
-return 1
-return result
-
-
-# Assign given number to each socket
-# e.g. 32 becomes 32,32 or 32,32,32,32
-def per_sockets(num):
-return ",".join([str(num)] * num_sockets())
-
 # groups of tests that can be run in parallel
 # the grouping has been found largely empirically
-parallel_test_group_list = [
-{
-"Prefix":"group_1",
-"Memory":per_sockets(8),
-"Tests":
-[
-{
-"Name":"Cycles autotest",
-"Command": "cycles_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Timer autotest",
-"Command": "timer_autotest",
-"Func":timer_autotest,
-"Report":   None,
-},
-{
-"Name":"Debug autotest",
-"Command": "debug_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Errno autotest",
-"Command": "errno_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Meter autotest",
-"Command": "meter_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Common autotest",
-"Command": "common_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Resource autotest",
-"Command": "resource_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-]
-},
-{
-"Prefix":"group_2",
-"Memory":"16",
-"Tests":
-[
-{
-"Name":"Memory autotest",
-"Command": "memory_autotest",
-"Func":memory_autotest,
-"Report":  None,
-},
-{
-"Name":"Read/write lock autotest",
-"Command": "rwlock_autotest",
-"Func":rwlock_autotest,
-"Report":  None,
-},
-{
-"Name":"Logs autotest",
-"Command": "logs_autotest",
-"Func":logs_autotest,
-"Report":  None,
-},
-{
-"Name":"CPU flags autotest",
-"Command": "cpuflags_autotes

[dpdk-dev] [PATCH v4 7/9] autotest: properly parallelize unit tests

2018-07-17 Thread Reshma Pattan
Now that everything else is in place, we can run unit tests in a
different fashion from how they were run before. Previously,
we had all autotests as part of groups (largely obtained through
trial and error) to ensure parallel execution while still limiting
amounts of memory used by those tests.

This is no longer necessary, and as of previous commit, all tests
are now in the same group (still broken into two categories). They
still run one-by-one though. Fix this by initializing child
processes in multiprocessing Pool initialization, and putting all
tests on the queue, so that tests are executed by the first idle
worker. Tests are also affinitized to different NUMA nodes using
taskset in a round-robin fashion, to prevent over-exhausting
memory on any given NUMA node.

Non-parallel tests are executed in a similar fashion, but on a
separate queue which will have only one pool worker, ensuring
non-parallel execution.

Support for FreeBSD is also added to ensure that on FreeBSD, all
tests are run sequentially even for the parallel section.
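The round-robin affinitization can be sketched like this (the node-to-CPU map and helper name are hypothetical; the real runner derives the topology from the system):

```python
import itertools

# hypothetical NUMA-node -> cpu-list map
node_cpus = {0: "0-3", 1: "4-7"}

def affinitize(cmdlines, node_cpus):
    """Prefix each test command with `taskset`, cycling round-robin
    over NUMA nodes so no single node's memory is over-exhausted."""
    rr = itertools.cycle(sorted(node_cpus))
    return ["taskset -c %s %s" % (node_cpus[next(rr)], cmd)
            for cmd in cmdlines]

cmds = affinitize(["./test -a", "./test -b", "./test -c"], node_cpus)
assert cmds[0].startswith("taskset -c 0-3")
assert cmds[1].startswith("taskset -c 4-7")
assert cmds[2].startswith("taskset -c 0-3")
```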

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest.py|   6 +-
 test/test/autotest_runner.py | 277 +++
 2 files changed, 183 insertions(+), 100 deletions(-)

diff --git a/test/test/autotest.py b/test/test/autotest.py
index ae27daef7..12997fdf0 100644
--- a/test/test/autotest.py
+++ b/test/test/autotest.py
@@ -36,8 +36,12 @@ def usage():
 
 print(cmdline)
 
+# how many workers to run tests with. FreeBSD doesn't support multiple primary
+# processes, so make it 1, otherwise make it 4. ignored for non-parallel tests
+n_processes = 1 if "bsdapp" in target else 4
+
 runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
-test_whitelist)
+test_whitelist, n_processes)
 
 runner.parallel_tests = autotest_data.parallel_test_list[:]
 runner.non_parallel_tests = autotest_data.non_parallel_test_list[:]
diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index d6ae57e76..36941a40a 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -6,16 +6,16 @@
 from __future__ import print_function
 import StringIO
 import csv
-import multiprocessing
+from multiprocessing import Pool, Queue
 import pexpect
 import re
 import subprocess
 import sys
 import time
+import glob
+import os
 
 # wait for prompt
-
-
 def wait_prompt(child):
 try:
 child.sendline()
@@ -28,22 +28,47 @@ def wait_prompt(child):
 else:
 return False
 
-# run a test group
-# each result tuple in results list consists of:
-#   result value (0 or -1)
-#   result string
-#   test name
-#   total test run time (double)
-#   raw test log
-#   test report (if not available, should be None)
-#
-# this function needs to be outside AutotestRunner class
-# because otherwise Pool won't work (or rather it will require
-# quite a bit of effort to make it work).
+
+# get all valid NUMA nodes
+def get_numa_nodes():
+return [
+int(
+re.match(r"node(\d+)", os.path.basename(node))
+.group(1)
+)
+for node in glob.glob("/sys/devices/system/node/node*")
+]
+
+
+# find first (or any, really) CPU on a particular node, will be used to spread
+# processes around NUMA nodes to avoid exhausting memory on particular node
+def first_cpu_on_node(node_nr):
+cpu_path = glob.glob("/sys/devices/system/node/node%d/cpu*" % node_nr)[0]
+cpu_name = os.path.basename(cpu_path)
+m = re.match(r"cpu(\d+)", cpu_name)
+return int(m.group(1))
+
+
+pool_child = None  # per-process child
 
 
-def run_test_group(cmdline, prefix, target, test):
+# we initialize each worker with a queue because we need per-pool unique
+# command-line arguments, but we cannot do different arguments in an initializer
+# because the API doesn't allow per-worker initializer arguments. so, instead,
+# we will initialize with a shared queue, and dequeue command-line arguments
+# from this queue
+def pool_init(queue, result_queue):
+global pool_child
+
+cmdline, prefix = queue.get()
 start_time = time.time()
+name = ("Start %s" % prefix) if prefix != "" else "Start"
+
+# use default prefix if no prefix was specified
+prefix_cmdline = "--file-prefix=%s" % prefix if prefix != "" else ""
+
+# append prefix to cmdline
+cmdline = "%s %s" % (cmdline, prefix_cmdline)
 
 # prepare logging of init
 startuplog = StringIO.StringIO()
@@ -54,24 +79,60 @@ def run_test_group(cmdline, prefix, target, test):
 print("\n%s %s\n" % ("=" * 20, prefix), file=startuplog)
 print("\ncmdline=%s" % cmdline, file=startuplog)
 
-child = pexpect.spawn(cmdline, logfile=startuplog)
+pool_child = pexpect.spawn(cmdline, logfile=startuplog)
 
 # wait for target to boot
-if not wait_prompt(child):
-child.close()
+if n

[dpdk-dev] [PATCH v4 8/9] autotest: update autotest test case list

2018-07-17 Thread Reshma Pattan
Autotest is enhanced with additional test cases
being added to autotest_data.py

Removed the non-existent PCI autotest.

Cc: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Anatoly Burakov 
---
 test/test/autotest_data.py | 350 +++--
 1 file changed, 342 insertions(+), 8 deletions(-)

diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index c24e7bc25..3f856ff57 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -134,12 +134,6 @@
 "Func":default_autotest,
 "Report":  None,
 },
-{
-"Name":"PCI autotest",
-"Command": "pci_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
 {
 "Name":"Malloc autotest",
 "Command": "malloc_autotest",
@@ -248,6 +242,291 @@
 "Func":default_autotest,
 "Report":  None,
 },
+{
+"Name":"Eventdev selftest octeontx",
+"Command": "eventdev_selftest_octeontx",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Event ring autotest",
+"Command": "event_ring_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Table autotest",
+"Command": "table_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Flow classify autotest",
+"Command": "flow_classify_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Event eth rx adapter autotest",
+"Command": "event_eth_rx_adapter_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"User delay",
+"Command": "user_delay_us",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Rawdev autotest",
+"Command": "rawdev_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Kvargs autotest",
+"Command": "kvargs_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Devargs autotest",
+"Command": "devargs_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding autotest",
+"Command": "link_bonding_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding mode4 autotest",
+"Command": "link_bonding_mode4_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding rssconf autotest",
+"Command": "link_bonding_rssconf_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Crc autotest",
+"Command": "crc_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Distributor autotest",
+"Command": "distributor_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Reorder autotest",
+"Command": "reorder_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Barrier autotest",
+"Command": "barrier_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Bitmap test",
+"Command": "bitmap_test",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash scaling autotest",
+"Command": "hash_scaling_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash multiwriter autotest",
+"Command": "hash_multiwriter_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Service autotest",
+"Command": "service_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Timer racecond autotest",
+"Command": "timer_racecond_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Member autotest",
+"Command": "member_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":   "Efd_autotest",
+"Command": "efd_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Thash autotest",
+"Command": "thash_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash function autotest",
+"Command": "hash_functions_autotest

[dpdk-dev] [PATCH v4 9/9] mk: update make targets for classified testcases

2018-07-17 Thread Reshma Pattan
Makefiles are updated with new test case lists.
Test cases are classified as -
P1 - Main test cases,
P2 - Cryptodev/driver test cases,
P3 - Perf test cases which take longer than 10s,
P4 - Logging/Dump test cases.

Makefile is updated with different targets
for the above classified groups.
Test cases for different targets are listed accordingly.

Cc: sta...@dpdk.org

Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Reshma Pattan 
---
 mk/rte.sdkroot.mk |  4 ++--
 mk/rte.sdktest.mk | 33 +++--
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk
index f43cc7829..ea3473ebf 100644
--- a/mk/rte.sdkroot.mk
+++ b/mk/rte.sdkroot.mk
@@ -68,8 +68,8 @@ config defconfig showconfigs showversion showversionum:
 cscope gtags tags etags:
$(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
 
-.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage
-test test-basic test-fast test-ring test-mempool test-perf coverage:
+.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
+test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:
$(Q)$(MAKE) -f $(RTE_SDK)/mk/rte.sdktest.mk $@
 
 test: test-build
diff --git a/mk/rte.sdktest.mk b/mk/rte.sdktest.mk
index ee1fe0c7e..13d1efb6a 100644
--- a/mk/rte.sdktest.mk
+++ b/mk/rte.sdktest.mk
@@ -18,14 +18,35 @@ DIR := $(shell basename $(RTE_OUTPUT))
 #
 # test: launch auto-tests, very simple for now.
 #
-.PHONY: test test-basic test-fast test-perf coverage
+.PHONY: test test-basic test-fast test-perf test-drivers test-dump coverage
 
-PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf
-coverage: BLACKLIST=-$(PERFLIST)
-test-fast: BLACKLIST=-$(PERFLIST)
-test-perf: WHITELIST=$(PERFLIST)
+PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf,\
+ reciprocal_division,reciprocal_division_perf,lpm_perf,red_all,\
+ barrier,hash_multiwriter,timer_racecond,efd,hash_functions,\
+ eventdev_selftest_sw,member_perf,efd_perf,lpm6_perf,red_perf,\
+ distributor_perf,ring_pmd_perf,pmd_perf,ring_perf
+DRIVERSLIST=link_bonding,link_bonding_mode4,link_bonding_rssconf,\
+cryptodev_sw_mrvl,cryptodev_dpaa2_sec,cryptodev_dpaa_sec,\
+cryptodev_qat,cryptodev_aesni_mb,cryptodev_openssl,\
+cryptodev_scheduler,cryptodev_aesni_gcm,cryptodev_null,\
+cryptodev_sw_snow3g,cryptodev_sw_kasumi,cryptodev_sw_zuc
+DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
+ dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
+ devargs_autotest
 
-test test-basic test-fast test-perf:
+SPACESTR:=
+SPACESTR+=
+STRIPPED_PERFLIST=$(subst $(SPACESTR),,$(PERFLIST))
+STRIPPED_DRIVERSLIST=$(subst $(SPACESTR),,$(DRIVERSLIST))
+STRIPPED_DUMPLIST=$(subst $(SPACESTR),,$(DUMPLIST))
+
+coverage: BLACKLIST=-$(STRIPPED_PERFLIST)
+test-fast: BLACKLIST=-$(STRIPPED_PERFLIST),$(STRIPPED_DRIVERSLIST),$(STRIPPED_DUMPLIST)
+test-perf: WHITELIST=$(STRIPPED_PERFLIST)
+test-drivers: WHITELIST=$(STRIPPED_DRIVERSLIST)
+test-dump: WHITELIST=$(STRIPPED_DUMPLIST)
+
+test test-basic test-fast test-perf test-drivers test-dump:
@mkdir -p $(AUTOTEST_DIR) ; \
cd $(AUTOTEST_DIR) ; \
if [ -f $(RTE_OUTPUT)/app/test ]; then \
-- 
2.14.4
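The WHITELIST/BLACKLIST variables set per target above narrow the test list handed to the test binary. A rough sketch of that filtering logic (illustrative only, not the autotest runner's actual code; all names are invented):

```python
# A blacklist drops the named tests; a whitelist keeps only the named tests.
def filter_tests(tests, whitelist=None, blacklist=None):
    if blacklist:
        return [t for t in tests if t not in blacklist]
    if whitelist:
        return [t for t in tests if t in whitelist]
    return tests


all_tests = ["ring_perf", "mempool_autotest", "cryptodev_qat", "dump_ring"]
perf = ["ring_perf"]

fast = filter_tests(all_tests, blacklist=perf)       # test-fast style
perf_only = filter_tests(all_tests, whitelist=perf)  # test-perf style
```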



[dpdk-dev] [Bug 73] In a multi-process setup, secondary processes fail to receive any packet

2018-07-17 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=73

Bug ID: 73
   Summary: In a multi-process setup, secondary processes fail to
receive any packet
   Product: DPDK
   Version: 18.02
  Hardware: x86
OS: Linux
Status: CONFIRMED
  Severity: normal
  Priority: Normal
 Component: other
  Assignee: dev@dpdk.org
  Reporter: guillaume.gir...@intel.com
  Target Milestone: ---

We have built an application on top of DPDK that listens to certain streams of
packets and reports statistics on them. The application works with a
specific Ethernet port specified on the command line.

The application is multi-process aware, so you can start a primary and several
secondaries, all listening to their own ports. This works perfectly fine on an
X550T-based setup with up to 8 ports (driver is ixgbe).

However, when moving to an X772T-based setup (driver is i40e), the secondaries
stopped receiving any packets. The primary works as usual, and sees all packets
for its own port. The secondaries get "packets" that consist of 60 to 300
bytes, all zeroes, at a rate of one per second or so.

I tried to run the same setup with multiple primaries separated by their file
prefix, but the same issue happened (which process gets the packets changes
though).

I finally refactored the application to run separate threads in the same
process instead, and that worked fine. All threads see all the packets they
expect.

I suspect that ixgbe and i40e differ in how they handle this, and that might
cause the bug, but obviously, there might be other differences between hosts
that might be causing this as well. Not being familiar with DPDK, I haven't
searched any further, but I'll be happy to do so if somebody can give me a hint
of where to start looking.

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [dpdk-dev] [PATCH v2] test: fix incorrect return types

2018-07-17 Thread Pattan, Reshma
Hi,

> -Original Message-
> From: Burakov, Anatoly
> Sent: Tuesday, July 17, 2018 1:58 PM
> To: Pattan, Reshma ; dev@dpdk.org
> Cc: Singh, Jasvinder ; Iremonger, Bernard
> ; sta...@dpdk.org
> Subject: Re: [PATCH v2] test: fix incorrect return types
> 
> 
> > @@ -871,32 +871,32 @@ test_flow_classify(void)
> > printf("Line %i: f_create has failed!\n", __LINE__);
> > rte_flow_classifier_free(cls->cls);
> > rte_free(cls);
> > -   return -1;
> > +   return TEST_FAILED;
> > }
> > printf("Created table_acl for for IPv4 five tuple packets\n");
> >
> > ret = init_mbufpool();
> > if (ret) {
> > printf("Line %i: init_mbufpool has failed!\n", __LINE__);
> > -   return -1;
> > +   return TEST_FAILED;
> > }
> >
> > if (test_invalid_parameters() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> > if (test_valid_parameters() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> > if (test_invalid_patterns() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> > if (test_invalid_actions() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> > if (test_query_udp() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> > if (test_query_tcp() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> > if (test_query_sctp() < 0)
> > -   return -1;
> > +   return TEST_FAILED;
> >
> > -   return 0;
> > +   return TEST_SUCCESS;
> >   }
> >
> >   REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify);
> >
> 
> I'm nitpicking now, but technically, we could've forgone a large part of this
> patch and just kept the part above. We don't actually care if individual test
> functions return -1 or TEST_FAILED - we just need the return from the test app
> to be that :)
> 

Makes sense. Will do the changes.



[dpdk-dev] [PATCH] maintainers: update for szedata2 PMD

2018-07-17 Thread Matej Vido
I will no longer be maintaining szedata2 PMD. Jan will take over this
role.

Signed-off-by: Matej Vido 
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 227e32c..43b0a3e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -617,7 +617,7 @@ F: doc/guides/nics/netvsc.rst
 F: doc/guides/nics/features/netvsc.ini
 
 Netcope szedata2
-M: Matej Vido 
+M: Jan Remes 
 F: drivers/net/szedata2/
 F: doc/guides/nics/szedata2.rst
 F: doc/guides/nics/features/szedata2.ini
-- 
2.7.4



Re: [dpdk-dev] [PATCH] ip_frag: extend rte_ipv6_frag_get_ipv6_fragment_header()

2018-07-17 Thread Ananyev, Konstantin
Hi Cody,

> 
> Hi,
> 
> > Just a generic thought - it might be worth moving functions that parse IPv6
> > header extensions and related structures into rte_net.
> > I am sure they might be reused by some other code.
> 
> Sorry, I am misunderstanding. Do you mean it might be better to move
> struct ipv6_opt_hdr and ipv6_ext_hdr() into rte_net since they are not
> fragmentation specific? That seems fine to me.

Yes, that was my thought.

> 
> > pktmbuf_read() is quite a heavyweight one.
> > Do we really need it here?
> > From my perspective - an assumption that the whole IPv6 header will be
> > inside one segment seems reasonable enough.
> 
> It is my understanding that rte_pktmbuf_read() will almost always just
> invoke the lightweight rte_pktmbuf_mtod_offset(). It only runs the
> heavyweight __rte_pktmbuf_read() in the case that the assumption you
> mentioned is broken.

Ah, yes, you are right.
Konstantin
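A toy model of the fast/slow path discussed above (illustrative only, not the actual rte_pktmbuf_read() implementation): the read returns a direct slice when the requested span fits in one segment, and only falls back to gathering bytes across segments otherwise.

```python
def pktbuf_read(segments, offset, length):
    """Return `length` bytes starting at `offset` from a segmented buffer."""
    # fast path: locate the segment containing `offset`; if the whole span
    # fits inside it, slice directly (analogous to rte_pktmbuf_mtod_offset)
    off = offset
    for seg in segments:
        if off < len(seg):
            if off + length <= len(seg):
                return seg[off:off + length]  # entirely within one segment
            break
        off -= len(seg)
    # slow path: span crosses a segment boundary, so gather and copy
    # (stands in for the heavier __rte_pktmbuf_read)
    flat = b"".join(segments)
    return flat[offset:offset + length]
```

For example, with segments b"abcdef" and b"ghij", reading 2 bytes at offset 4 stays in the first segment (fast path), while reading 4 bytes at offset 4 crosses the boundary (slow path).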



Re: [dpdk-dev] [PATCH] maintainers: update for szedata2 PMD

2018-07-17 Thread Jan Remeš
On Tue, Jul 17, 2018 at 3:52 PM, Matej Vido  wrote:
>
> I will no longer be maintaining szedata2 PMD. Jan will take over this
> role.
>
> Signed-off-by: Matej Vido 
> ---
>  MAINTAINERS | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 227e32c..43b0a3e 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -617,7 +617,7 @@ F: doc/guides/nics/netvsc.rst
>  F: doc/guides/nics/features/netvsc.ini
>
>  Netcope szedata2
> -M: Matej Vido 
> +M: Jan Remes 
>  F: drivers/net/szedata2/
>  F: doc/guides/nics/szedata2.rst
>  F: doc/guides/nics/features/szedata2.ini
> --
> 2.7.4
>

Acked-by: Jan Remes 


Re: [dpdk-dev] [PATCH v2 2/2] librte_ip_frag: add mbuf counter

2018-07-17 Thread Ananyev, Konstantin
Hi Alex,
Sorry for delay in reply.

> 
> >> There might be situations (a kind of attack where a lot of
> >> fragmented packets are sent to a dpdk application in order
> >> to flood the fragmentation table) when no additional mbufs
> >> must be added to the fragmentation table since it already
> >> contains too many of them. Currently there is no way to
> >> determine the number of mbufs held in the fragmentation
> >> table. This patch makes it possible to keep track of the
> >> number of mbufs held in the fragmentation table.
> 
> > I understand your intention, but still not sure it is worth it.
> > My thought was that you can estimate by upper limit (num_entries * 
> > entries_per_bucket) or so.
> No, I can't. The estimation error might be so big that there would be no 
> difference at all.

Not sure why? If you use the upper limit, then the worst that could happen is
you would start your table cleanup a bit earlier.

> 
> > Probably another way to account number of mbufs without changes in the lib -
> > apply something like that(assuming that your fragmets are not multisegs):
> 
> > uint32_t mbuf_in_frag_table = 0;
> > 
> 
> > n = dr->cnt;
> > mb = rte_ipv4_frag_reassemble_packet(...);
> > if (mb != NULL)
> > mbuf_in_frag_table += mb->nb_segs;
> > mbuf_in_frag_table += dr->cnt - n + 1;

Sorry, my bad, I think it should be 
mbuf_in_frag_table -= dr->cnt - n + 1;

> 
> > In theory that could be applied even if fragments might be multisegs, but 
> > for that,
> > we'll need to change rte_ip_frag_free_death_row() to return total number of 
> > freed segments.
> 
> That should be a little bit more complicated wrapper code:
> 
> uint32_t mbuf_in_frag_table = 0;
> 
> 
> n = dr->cnt;
> reassembled_mbuf = rte_ipv4_frag_reassemble_packet(..., fragmented_mbuf, ...);
> if (reassembled_mbuf == NULL)
> mbuf_in_frag_table += fragmented_mbuf->nb_segs;

We don't know for sure here.
fragmented_mbuf could be in death row by now. 

> else
> mbuf_in_frag_table -= reassembled_mbuf->nb_segs;
> mbuf_in_frag_table += dr->cnt - n;
> 
> 
> Also, in that case every rte_ip_frag_free_death_row() needs a wrapper code 
> too.
> 
> n = dr->cnt;
> rte_ip_frag_free_death_row(..)
> mbuf_in_frag_table += dr->cnt - n;

I don't think it is necessary.
After a packet is put on the death row, it is no longer in the table.

Konstantin

> 
> 
> I think my approach is simpler.
> 
> > Konstantin
> 
> 
> >> Signed-off-by: Alex Kiselev 
> >> ---
> >>  lib/librte_ip_frag/ip_frag_common.h| 16 +---
> >>  lib/librte_ip_frag/ip_frag_internal.c  | 16 +---
> >>  lib/librte_ip_frag/rte_ip_frag.h   | 18 +-
> >>  lib/librte_ip_frag/rte_ip_frag_common.c|  1 +
> >>  lib/librte_ip_frag/rte_ip_frag_version.map |  1 +
> >>  lib/librte_ip_frag/rte_ipv4_reassembly.c   |  2 +-
> >>  lib/librte_ip_frag/rte_ipv6_reassembly.c   |  2 +-
> >>  7 files changed, 39 insertions(+), 17 deletions(-)
> 
> >> diff --git a/lib/librte_ip_frag/ip_frag_common.h 
> >> b/lib/librte_ip_frag/ip_frag_common.h
> >> index 0fdcc7d0f..9fe5c0559 100644
> >> --- a/lib/librte_ip_frag/ip_frag_common.h
> >> +++ b/lib/librte_ip_frag/ip_frag_common.h
> >> @@ -32,15 +32,15 @@
> >>  #endif /* IP_FRAG_TBL_STAT */
> 
> >>  /* internal functions declarations */
> >> -struct rte_mbuf * ip_frag_process(struct ip_frag_pkt *fp,
> >> - struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb,
> >> - uint16_t ofs, uint16_t len, uint16_t more_frags);
> >> +struct rte_mbuf *ip_frag_process(struct rte_ip_frag_tbl *tbl,
> >> + struct ip_frag_pkt *fp, struct rte_ip_frag_death_row *dr,
> >> + struct rte_mbuf *mb, uint16_t ofs, uint16_t len, uint16_t 
> >> more_frags);
> 
> >> -struct ip_frag_pkt * ip_frag_find(struct rte_ip_frag_tbl *tbl,
> >> +struct ip_frag_pkt *ip_frag_find(struct rte_ip_frag_tbl *tbl,
> >>   struct rte_ip_frag_death_row *dr,
> >>   const struct ip_frag_key *key, uint64_t tms);
> 
> >> -struct ip_frag_pkt * ip_frag_lookup(struct rte_ip_frag_tbl *tbl,
> >> +struct ip_frag_pkt *ip_frag_lookup(struct rte_ip_frag_tbl *tbl,
> >>   const struct ip_frag_key *key, uint64_t tms,
> >>   struct ip_frag_pkt **free, struct ip_frag_pkt **stale);
> 
> >> @@ -91,7 +91,8 @@ ip_frag_key_cmp(const struct ip_frag_key * k1, const 
> >> struct ip_frag_key * k2)
> 
> >>  /* put fragment on death row */
> >>  static inline void
> >> -ip_frag_free(struct ip_frag_pkt *fp, struct rte_ip_frag_death_row *dr)
> >> +ip_frag_free(struct rte_ip_frag_tbl *tbl, struct ip_frag_pkt *fp,
> >> + struct rte_ip_frag_death_row *dr)
> >>  {
> >>   uint32_t i, k;
> 
> >> @@ -100,6 +101,7 @@ ip_frag_free(struct ip_frag_pkt *fp, struct 
> >> rte_ip_frag_death_row *dr)
> >>   if (fp->frags[i].mb != NULL) {
> >>   dr->row[k++] = fp->frags[i].mb;
> >>   fp->frags[i].mb = NULL;
> >> + tbl->nb_mbufs--;
> >>   }
> >>   }
> 
> >> @@
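The invariant discussed in this thread (a fragment moved to the death row immediately leaves the table's mbuf count, mirroring the tbl->nb_mbufs-- in ip_frag_free() above) can be illustrated with a toy model; all class and function names here are invented for illustration:

```python
class FragTable:
    """Toy model: the table owns fragments until they are reassembled or
    moved to the death row."""
    def __init__(self, nb_mbufs=0):
        self.nb_mbufs = nb_mbufs


class DeathRow:
    def __init__(self):
        self.cnt = 0
        self.row = []


def move_to_death_row(tbl, dr, frags):
    # dropped fragments leave the table's count as soon as they are
    # queued for freeing; they are accounted on the death row instead
    for mb in frags:
        dr.row.append(mb)
        dr.cnt += 1
        tbl.nb_mbufs -= 1


tbl, dr = FragTable(nb_mbufs=3), DeathRow()
move_to_death_row(tbl, dr, ["f0", "f1"])
# two fragments are now counted on the death row, one remains in the table
```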

Re: [dpdk-dev] [PATCH v4 9/9] mk: update make targets for classified testcases

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 2:15 PM, Reshma Pattan wrote:

Makefiles are updated with new test case lists.
Test cases are classified as -
P1 - Main test cases,
P2 - Cryptodev/driver test cases,
P3 - Perf test cases which takes longer than 10s,
P4 - Logging/Dump test cases.

Makefile is updated with different targets
for the above classified groups.
Test cases for different targets are listed accordingly.

Cc: sta...@dpdk.org

Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Reshma Pattan 
---
  mk/rte.sdkroot.mk |  4 ++--
  mk/rte.sdktest.mk | 33 +++--
  2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk
index f43cc7829..ea3473ebf 100644
--- a/mk/rte.sdkroot.mk
+++ b/mk/rte.sdkroot.mk
@@ -68,8 +68,8 @@ config defconfig showconfigs showversion showversionum:
  cscope gtags tags etags:
$(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
  
-.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage

-test test-basic test-fast test-ring test-mempool test-perf coverage:
+.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
+test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:


I'm probably missing something, but I can only see definitions for 
coverage, test-fast, test-perf, test-drivers and test-dump.


What is the difference between test and test-basic, and what is 
test-mempool? If they are unused, they should be removed.



$(Q)$(MAKE) -f $(RTE_SDK)/mk/rte.sdktest.mk $@
  
  test: test-build

diff --git a/mk/rte.sdktest.mk b/mk/rte.sdktest.mk
index ee1fe0c7e..13d1efb6a 100644
--- a/mk/rte.sdktest.mk
+++ b/mk/rte.sdktest.mk
@@ -18,14 +18,35 @@ DIR := $(shell basename $(RTE_OUTPUT))
  #
  # test: launch auto-tests, very simple for now.
  #
-.PHONY: test test-basic test-fast test-perf coverage
+.PHONY: test test-basic test-fast test-perf test-drivers test-dump coverage
  
-PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf

-coverage: BLACKLIST=-$(PERFLIST)
-test-fast: BLACKLIST=-$(PERFLIST)
-test-perf: WHITELIST=$(PERFLIST)
+PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf,\
+ reciprocal_division,reciprocal_division_perf,lpm_perf,red_all,\
+ barrier,hash_multiwriter,timer_racecond,efd,hash_functions,\
+ eventdev_selftest_sw,member_perf,efd_perf,lpm6_perf,red_perf,\
+ distributor_perf,ring_pmd_perf,pmd_perf,ring_perf
+DRIVERSLIST=link_bonding,link_bonding_mode4,link_bonding_rssconf,\
+cryptodev_sw_mrvl,cryptodev_dpaa2_sec,cryptodev_dpaa_sec,\
+cryptodev_qat,cryptodev_aesni_mb,cryptodev_openssl,\
+cryptodev_scheduler,cryptodev_aesni_gcm,cryptodev_null,\
+cryptodev_sw_snow3g,cryptodev_sw_kasumi,cryptodev_sw_zuc
+DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
+ dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
+ devargs_autotest
  
-test test-basic test-fast test-perf:

+SPACESTR:=
+SPACESTR+=
+STRIPPED_PERFLIST=$(subst $(SPACESTR),,$(PERFLIST))
+STRIPPED_DRIVERSLIST=$(subst $(SPACESTR),,$(DRIVERSLIST))
+STRIPPED_DUMPLIST=$(subst $(SPACESTR),,$(DUMPLIST))
+
+coverage: BLACKLIST=-$(STRIPPED_PERFLIST)
+test-fast: BLACKLIST=-$(STRIPPED_PERFLIST),$(STRIPPED_DRIVERSLIST),$(STRIPPED_DUMPLIST)
+test-perf: WHITELIST=$(STRIPPED_PERFLIST)
+test-drivers: WHITELIST=$(STRIPPED_DRIVERSLIST)
+test-dump: WHITELIST=$(STRIPPED_DUMPLIST)
+
+test test-basic test-fast test-perf test-drivers test-dump:
@mkdir -p $(AUTOTEST_DIR) ; \
cd $(AUTOTEST_DIR) ; \
if [ -f $(RTE_OUTPUT)/app/test ]; then \




--
Thanks,
Anatoly


[dpdk-dev] [PATCH] app/eventdev: use proper teardown sequence

2018-07-17 Thread Pavan Nikhilesh
Use the proper teardown sequence when SIGINT is caught to prevent the
eventdev from going into an undefined state.

Signed-off-by: Pavan Nikhilesh 
---
 app/test-eventdev/evt_main.c | 6 +-
 app/test-eventdev/test_pipeline_common.c | 1 -
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/app/test-eventdev/evt_main.c b/app/test-eventdev/evt_main.c
index 57bb94570..bc25fb386 100644
--- a/app/test-eventdev/evt_main.c
+++ b/app/test-eventdev/evt_main.c
@@ -25,8 +25,12 @@ signal_handler(int signum)
signum);
/* request all lcores to exit from the main loop */
*(int *)test->test_priv = true;
-   rte_wmb();
 
+   if (test->ops.ethdev_destroy)
+   test->ops.ethdev_destroy(test, &opt);
+
+   rte_event_dev_stop(opt.dev_id);
+   rte_wmb();
rte_eal_mp_wait_lcore();
 
if (test->ops.test_result)
diff --git a/app/test-eventdev/test_pipeline_common.c 
b/app/test-eventdev/test_pipeline_common.c
index 719518ff3..70fd04517 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -476,7 +476,6 @@ pipeline_eventdev_destroy(struct evt_test *test, struct 
evt_options *opt)
 {
RTE_SET_USED(test);
 
-   rte_event_dev_stop(opt->dev_id);
rte_event_dev_close(opt->dev_id);
 }
 
-- 
2.18.0



[dpdk-dev] [PATCH] event/octeontx: prefetch mbuf instead of wqe

2018-07-17 Thread Pavan Nikhilesh
Prefetch mbuf pointer instead of wqe when SSO receives pkt from PKI.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/event/octeontx/ssovf_worker.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/event/octeontx/ssovf_worker.h 
b/drivers/event/octeontx/ssovf_worker.h
index d55018a9c..7c7306b51 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -28,11 +28,11 @@ ssovf_octeontx_wqe_to_pkt(uint64_t work, uint16_t port_info)
 {
struct rte_mbuf *mbuf;
octtx_wqe_t *wqe = (octtx_wqe_t *)(uintptr_t)work;
-   rte_prefetch_non_temporal(wqe);
 
/* Get mbuf from wqe */
mbuf = (struct rte_mbuf *)((uintptr_t)wqe -
OCTTX_PACKET_WQE_SKIP);
+   rte_prefetch_non_temporal(mbuf);
mbuf->packet_type =
ptype_table[wqe->s.w2.lcty][wqe->s.w2.lety][wqe->s.w2.lfty];
mbuf->data_off = RTE_PTR_DIFF(wqe->s.w3.addr, mbuf->buf_addr);
-- 
2.18.0



Re: [dpdk-dev] [PATCH v4 9/9] mk: update make targets for classified testcases

2018-07-17 Thread Burakov, Anatoly

On 17-Jul-18 2:15 PM, Reshma Pattan wrote:

Makefiles are updated with new test case lists.
Test cases are classified as -
P1 - Main test cases,
P2 - Cryptodev/driver test cases,
P3 - Perf test cases which takes longer than 10s,
P4 - Logging/Dump test cases.

Makefile is updated with different targets
for the above classified groups.
Test cases for different targets are listed accordingly.

Cc: sta...@dpdk.org

Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Reshma Pattan 
---
  mk/rte.sdkroot.mk |  4 ++--
  mk/rte.sdktest.mk | 33 +++--
  2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk
index f43cc7829..ea3473ebf 100644
--- a/mk/rte.sdkroot.mk
+++ b/mk/rte.sdkroot.mk
@@ -68,8 +68,8 @@ config defconfig showconfigs showversion showversionum:
  cscope gtags tags etags:
$(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
  
-.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage

-test test-basic test-fast test-ring test-mempool test-perf coverage:
+.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
+test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:
$(Q)$(MAKE) -f $(RTE_SDK)/mk/rte.sdktest.mk $@
  
  test: test-build

diff --git a/mk/rte.sdktest.mk b/mk/rte.sdktest.mk
index ee1fe0c7e..13d1efb6a 100644
--- a/mk/rte.sdktest.mk
+++ b/mk/rte.sdktest.mk
@@ -18,14 +18,35 @@ DIR := $(shell basename $(RTE_OUTPUT))
  #
  # test: launch auto-tests, very simple for now.
  #
-.PHONY: test test-basic test-fast test-perf coverage
+.PHONY: test test-basic test-fast test-perf test-drivers test-dump coverage
  
-PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf

-coverage: BLACKLIST=-$(PERFLIST)
-test-fast: BLACKLIST=-$(PERFLIST)
-test-perf: WHITELIST=$(PERFLIST)
+PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf,\
+ reciprocal_division,reciprocal_division_perf,lpm_perf,red_all,\
+ barrier,hash_multiwriter,timer_racecond,efd,hash_functions,\
+ eventdev_selftest_sw,member_perf,efd_perf,lpm6_perf,red_perf,\
+ distributor_perf,ring_pmd_perf,pmd_perf,ring_perf
+DRIVERSLIST=link_bonding,link_bonding_mode4,link_bonding_rssconf,\
+cryptodev_sw_mrvl,cryptodev_dpaa2_sec,cryptodev_dpaa_sec,\
+cryptodev_qat,cryptodev_aesni_mb,cryptodev_openssl,\
+cryptodev_scheduler,cryptodev_aesni_gcm,cryptodev_null,\
+cryptodev_sw_snow3g,cryptodev_sw_kasumi,cryptodev_sw_zuc
+DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
+ dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
+ devargs_autotest


Also, why are "quit" and "devargs_autotest" in the dump list?

--
Thanks,
Anatoly


Re: [dpdk-dev] [dpdk-stable] [PATCH] net/mlx5: fix compilation for rdma-core v19

2018-07-17 Thread Christian Ehrhardt
On Thu, Jul 12, 2018 at 12:57 PM Shahaf Shuler  wrote:

> Thursday, July 12, 2018 1:54 PM, Ori Kam:
> > Subject: RE: [PATCH] net/mlx5: fix compilation for rdma-core v19
> > >
> > > The flow counter support introduced by commit 9a761de8ea14 ("net/mlx5:
> > > flow counter support") was intend to work only with MLNX_OFED_4.3 as
> > > the upstream rdma-core libraries were lack such support.
> > >
> > > On rdma-core v19 the support for the flow counters was added but with
> > > different user APIs, hence causing compilation issues on the PMD.
> > >
> > > This patch fix the compilation errors by forcing the flow counters to
> > > be enabled only with MLNX_OFED APIs.
> > > Once MLNX_OFED and rdma-core APIs will be aligned, a proper patch to
> > > support the new API will be submitted.
> > >
> > > Fixes: 9a761de8ea14 ("net/mlx5: flow counter support")
> > > Cc: sta...@dpdk.org


Regarding the stable submission of this patch, I wanted to mention that there
are issues with it.
It correctly fixes the detection with rdma-core v19 and no longer sets
HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT

Due to that the following code triggers:
#ifndef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
struct ibv_counter_set_init_attr {
   int dummy;
};
struct ibv_flow_spec_counter_action {
   int dummy;
};


But that makes compilation run into:

drivers/net/mlx5/mlx5_flow.c
/<>/drivers/net/mlx5/mlx5_flow.c:69:8: error: redefinition of
‘struct ibv_flow_spec_counter_action’
struct ibv_flow_spec_counter_action {
   ^~~~
In file included from /<>/drivers/net/mlx5/mlx5_flow.c:42:0:
/usr/include/infiniband/verbs.h:1563:8: note: originally defined here
struct ibv_flow_spec_counter_action {
   ^~~~

This is due to the series starting with b42c000e3 "net/mlx5: remove flow
support" not being in the current stable releases.

For now I appended something like this to the Makefile
+   HAVE_IBV_FLOW_SPEC_COUNTER_ACTION \
+   infiniband/verbs.h \
+   type 'struct ibv_flow_spec_counter_action' \
+   $(AUTOCONF_OUTPUT)

And this to the type definition:

--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -66,9 +66,15 @@
struct ibv_counter_set_init_attr {
   int dummy;
};
+/* rdma-core v19 has no ibv_counter_set_init_attr, but it does have
+ * ibv_flow_spec_counter_action, which would conflict.
+ * Newer DPDK doesn't have the issue, thanks to the series starting with
+ * "net/mlx5: remove flow support". */
+#ifndef HAVE_IBV_FLOW_SPEC_COUNTER_ACTION
struct ibv_flow_spec_counter_action {
   int dummy;
};
+#endif
struct ibv_counter_set {
   int dummy;
};

That makes it build, but I wanted to make you aware that this will run all
stable maintainers into issues: the fix applies cleanly, but then later breaks
compilation (only if mlx5 is enabled).
It would be great if you could submit a v2 to stable@ which solves it the
way you'd prefer it to be done.

Kind Regards,
Christian

> > Cc: or...@mellanox.com
> > >
> > > Reported-by: Stephen Hemminger 
> > > Reported-by: Ferruh Yigit 
> > > Signed-off-by: Shahaf Shuler 
> > > ---
> > >  drivers/net/mlx5/Makefile | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
> > > index 9e274964b4..d86c6bbab9 100644
> > > --- a/drivers/net/mlx5/Makefile
> > > +++ b/drivers/net/mlx5/Makefile
> > > @@ -150,7 +150,7 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-
> > > config-h.sh
> > > $Q sh -- '$<' '$@' \
> > > HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT \
> > > infiniband/verbs.h \
> > > -   enum IBV_FLOW_SPEC_ACTION_COUNT \
> > > +   type 'struct ibv_counter_set_init_attr' \
> > > $(AUTOCONF_OUTPUT)
> > > $Q sh -- '$<' '$@' \
> > > HAVE_RDMA_NL_NLDEV \
> > > --
> > > 2.12.0
> >
> >
> > Acked-by: Ori Kam 
>
> Applied to next-net-mlx, thanks.
>
>

-- 
Christian Ehrhardt
Software Engineer, Ubuntu Server
Canonical Ltd


Re: [dpdk-dev] [PATCH v4 9/9] mk: update make targets for classified testcases

2018-07-17 Thread Pattan, Reshma
Hi,

> -Original Message-
> From: Burakov, Anatoly
> Sent: Tuesday, July 17, 2018 3:40 PM
> To: Pattan, Reshma ; tho...@monjalon.net;
> dev@dpdk.org
> Cc: Parthasarathy, JananeeX M ;
> sta...@dpdk.org
> Subject: Re: [PATCH v4 9/9] mk: update make targets for classified testcases
> 
> > +DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
> > + dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
> > + devargs_autotest
> 
> Also, why are "quit" and "devargs_autotest" in the dump list?

I did not check this while merging the changes; I will remove them from the
dump list.


Re: [dpdk-dev] [PATCH v4 9/9] mk: update make targets for classified testcases

2018-07-17 Thread Pattan, Reshma
Hi,

> -Original Message-
> From: Burakov, Anatoly
> Sent: Tuesday, July 17, 2018 3:34 PM
> To: Pattan, Reshma ; tho...@monjalon.net;
> dev@dpdk.org
> Cc: Parthasarathy, JananeeX M ;
> sta...@dpdk.org
> Subject: Re: [PATCH v4 9/9] mk: update make targets for classified testcases
> 
> On 17-Jul-18 2:15 PM, Reshma Pattan wrote:
> > Makefiles are updated with new test case lists.
> > Test cases are classified as -
> > P1 - Main test cases,
> > P2 - Cryptodev/driver test cases,
> > P3 - Perf test cases which take longer than 10s,
> > P4 - Logging/Dump test cases.
> >
> > Makefile is updated with different targets for the above classified
> > groups.
> > Test cases for different targets are listed accordingly.
> >
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Jananee Parthasarathy
> > 
> > Reviewed-by: Reshma Pattan 
> > ---
> >   mk/rte.sdkroot.mk |  4 ++--
> >   mk/rte.sdktest.mk | 33 +++--
> >   2 files changed, 29 insertions(+), 8 deletions(-)
> >
> > diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk index
> > f43cc7829..ea3473ebf 100644
> > --- a/mk/rte.sdkroot.mk
> > +++ b/mk/rte.sdkroot.mk
> > @@ -68,8 +68,8 @@ config defconfig showconfigs showversion
> showversionum:
> >   cscope gtags tags etags:
> > $(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
> >
> > -.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage
> > -test test-basic test-fast test-ring test-mempool test-perf coverage:
> > +.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
> > +test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:
> 
> I'm probably missing something, but I can only see definitions for coverage,
> test-fast, test-perf, test-drivers and test-dump.
> 
> What is the difference between test and test-basic, and what is test-
> mempool? If they are unused, they should be removed.
> 

test-mempool was there from legacy; after looking at the makefile, it is not
useful, so I will remove it.

make test: runs all the UTs that are in autotest_data.py.
make test-basic: same as make test.

I will remove test-mempool and test-basic in a separate patch.

Thanks,
Reshma


[dpdk-dev] [PATCH] mem: fix static analysis warning

2018-07-17 Thread Anatoly Burakov
Technically, the single-file segments codepath will never be
triggered when using in-memory mode, because EAL prohibits
mixing these two options at initialization time. However,
code analyzers do not know that, and some will complain
about either using uninitialized variables, or trying to
do operations on an already closed descriptor.

Fix this by assuring the compiler or code analyzer that
in-memory mode code never gets triggered when using
single-file segments mode.

Coverity ID: 302847
Fixes: 72b49ff623c4 ("mem: support --in-memory mode")

Signed-off-by: Anatoly Burakov 
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 79443c56a..a59f229cd 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -481,7 +481,9 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
void *new_addr;
 
alloc_sz = hi->hugepage_sz;
-   if (internal_config.in_memory && anonymous_hugepages_supported) {
+   if (!internal_config.single_file_segments &&
+   internal_config.in_memory &&
+   anonymous_hugepages_supported) {
int log2, flags;
 
log2 = rte_log2_u32(alloc_sz);
-- 
2.17.1


Re: [dpdk-dev] [PATCH] examples/flow_filtering: add rte_fdir_conf initialization

2018-07-17 Thread Ferruh Yigit
On 7/17/2018 2:04 PM, Ori Kam wrote:
> Hi,
> 
> PSB
> 
> Thanks,
> Ori
> 
>> -Original Message-
>> From: Ferruh Yigit [mailto:ferruh.yi...@intel.com]
>> Sent: Tuesday, July 17, 2018 12:57 PM
>> To: Ori Kam ; Xu, Rosen ;
>> dev@dpdk.org
>> Cc: sta...@dpdk.org; Gilmore, Walter E ; Qi
>> Zhang 
>> Subject: Re: [dpdk-dev] [PATCH] examples/flow_filtering: add rte_fdir_conf
>> initialization
>>
>> On 7/17/2018 6:15 AM, Ori Kam wrote:
>>> Sorry for the late response,
>>>
 -Original Message-
 From: Xu, Rosen [mailto:rosen...@intel.com]
 Sent: Thursday, July 12, 2018 9:23 AM
 To: Ori Kam ; dev@dpdk.org
 Cc: Yigit, Ferruh ; sta...@dpdk.org; Gilmore,
>> Walter
 E 
 Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
>> rte_fdir_conf
 initialization

 Hi Ori,

 Pls see my reply.

 Hi Walter and Ferruh,

 I need your voice :)

> -Original Message-
> From: Ori Kam [mailto:or...@mellanox.com]
> Sent: Thursday, July 12, 2018 13:58
> To: Xu, Rosen ; dev@dpdk.org
> Cc: Yigit, Ferruh ; sta...@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
 rte_fdir_conf
> initialization
>
> Hi,
>
> PSB
>
>> -Original Message-
>> From: Xu, Rosen [mailto:rosen...@intel.com]
>> Sent: Thursday, July 12, 2018 8:27 AM
>> To: Ori Kam ; dev@dpdk.org
>> Cc: Yigit, Ferruh ; sta...@dpdk.org
>> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
>> rte_fdir_conf initialization
>>
>> Hi Ori,
>>
>> examples/flow_filtering sample app fails on i40e [1] because i40e
>> requires explicit FDIR configuration.
>>
>> But rte_flow is a hardware-independent way of describing
>> flow-actions; it shouldn't require specific config options for specific
>> hardware.
>>
>
> I don't understand why using rte flow require the use of fdir.
> it doesn't make sense to me, that  new API will need old one.

 It's a good question, I also have this question about Mellanox NIC Driver
 mlx5_flow.c.
 In this file many flow functions call fdir. :)
>>>
>>> The only functions that call fdir are the fdir functions themselves,
>>> and you can see that inside the create function we convert the fdir
>>> into rte_flow.
>>>

>> Is there any chance driver select the FDIR config automatically based
>> on rte_flow rule, unless explicitly a FDIR config set by user?
>
> I don't know how the i40e driver is implemented, but I know that Mellanox
> converts the other way around: if fdir is given, it is converted to
> rte_flow.

 Firstly, rte_fdir_conf is part of rte_eth_conf definition.
struct rte_eth_conf {
..
struct rte_fdir_conf fdir_conf; /**< FDIR configuration. */
..
};
 Secondly, default value of rte_eth_conf.fdir_conf.mode is
 RTE_FDIR_MODE_NONE, which means Disable FDIR support.
 Thirdly, flow_filtering should align with test-pmd, in test-pmd all 
 fdir_conf
>> is
 initialized.

>>>
>>> This sounds correct to me; we don't want to enable fdir.
>>> Why should the example app for rte_flow use fdir? And why align with
>>> testpmd, which supports everything in all modes?
>>
>> In i40e, fdir is used to implement filters; that is why rte_flow rules
>> require/depend on some fdir configuration.
>>
>> In the long term, I agree it is better if the driver doesn't require any
>> fdir configuration for rte_flow programming, although I am not sure if
>> this is completely possible; cc'ed Qi for more comment.
>>
>> For the short term, I am for taking this patch so that the sample app can
>> run on i40e too, and the fdir configuration shouldn't affect others.
>> Perhaps it would be good to add a comment saying why that config option is
>> added and that it is a temporary workaround.
>>
> 
> Assuming that the fdir settings are fixed for all possible rte_flow rules,
> I can agree to this workaround, but we must add a comment in the code
> and also in the example documentation.
> 
> It will be a problem if another PMD requires a different default setting.
> In that case we must find a better solution.

+1 for commenting the code. As far as I know, the fdir config is only used by
Intel PMDs; we need to confirm that all Intel PMDs are OK with the change.

> 
> 
>>>
>>>
>
>>
>> [1]
>> Flow can't be created 1 message: Check the mode in fdir_conf.
>> EAL: Error - exiting with code: 1
>>
>>> -Original Message-
>>> From: Ori Kam [mailto:or...@mellanox.com]
>>> Sent: Thursday, July 12, 2018 13:17
>>> To: Xu, Rosen ; dev@dpdk.org
>>> Cc: Yigit, Ferruh ; sta...@dpdk.org; Ori Kam
>>> 
>>> Subject: RE: [dpdk-dev] [PATCH] examples/flow_filtering: add
>> rte_fdir_conf
>>> initialization
>>>
>>> Hi Rosen,
>>>
>>> Why do the fdir_conf must be initia

[dpdk-dev] [PATCH v5 00/10] Make unit tests great again

2018-07-17 Thread Reshma Pattan
Previously, unit tests were running in groups. There were technical reasons why 
that was the case (mostly having to do with limiting memory), but it was hard 
to maintain and update the autotest script.

In 18.05, limiting of memory at DPDK startup was no longer necessary, as DPDK 
allocates memory at runtime as needed. This has the implication that the old 
test grouping can now be retired and replaced with a more sensible way of 
running unit tests (using multiprocessing pool of workers and a queue of 
tests). This patchset accomplishes exactly that.

This patchset merges changes done in [1], [2]

[1] http://dpdk.org/dev/patchwork/patch/40370/
[2] http://patches.dpdk.org/patch/40373/

v4: Removed unused and duplicate make rules for test-basic,
test-mempool, test-ring from make file system in patch 10/10.

Reshma Pattan (10):
  autotest: fix printing
  autotest: fix invalid code on reports
  autotest: make autotest runner python 2/3 compliant
  autotest: visually separate different test categories
  autotest: improve filtering
  autotest: remove autotest grouping
  autotest: properly parallelize unit tests
  autotest: update autotest test case list
  mk: update make targets for classified testcases
  mk: remove unnecessary make rules of test

 mk/rte.sdkroot.mk|4 +-
 mk/rte.sdktest.mk|   32 +-
 test/test/autotest.py|   13 +-
 test/test/autotest_data.py   | 1081 +-
 test/test/autotest_runner.py |  519 ++--
 5 files changed, 947 insertions(+), 702 deletions(-)

-- 
2.14.4



[dpdk-dev] [PATCH v5 01/10] autotest: fix printing

2018-07-17 Thread Reshma Pattan
Previously, printing was done using tuple syntax, which caused
output to appear as a tuple as opposed to being one string. Fix
this by using addition operator instead.
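
The difference can be reproduced in a couple of lines (a minimal sketch; the
variable names and padding format are illustrative, not taken from the runner):

```python
# Under Python 2, "print(result, suffix)" is the print *statement* applied
# to the tuple (result, suffix), so the log showed a tuple repr instead of
# one string. String concatenation behaves identically on both interpreters.
result = "Success"
suffix = "[00m 05s]"

py2_output = str((result, suffix))    # what Python 2 effectively printed
fixed_output = result + " " + suffix  # what the patched code produces

print(py2_output)    # ('Success', '[00m 05s]')
print(fixed_output)  # Success [00m 05s]
```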

Fixes: 54ca545dce4b ("make python scripts python2/3 compliant")
Cc: john.mcnam...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index a692f0697..b09b57876 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -247,7 +247,7 @@ def __process_results(self, results):
 
 # don't print out total time every line, it's the same anyway
 if i == len(results) - 1:
-print(result,
+print(result +
   "[%02dm %02ds]" % (total_time / 60, total_time % 60))
 else:
 print(result)
@@ -332,8 +332,8 @@ def run_all_tests(self):
 
 # create table header
 print("")
-print("Test name".ljust(30), "Test result".ljust(29),
-  "Test".center(9), "Total".center(9))
+print("Test name".ljust(30) + "Test result".ljust(29) +
+  "Test".center(9) + "Total".center(9))
 print("=" * 80)
 
 # make a note of tests start time
-- 
2.14.4



[dpdk-dev] [PATCH v5 04/10] autotest: visually separate different test categories

2018-07-17 Thread Reshma Pattan
Help visually identify parallel vs. non-parallel autotests.

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index f6b669a2e..d9d5f7a97 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -341,6 +341,7 @@ def run_all_tests(self):
 # make a note of tests start time
 self.start = time.time()
 
+print("Parallel autotests:")
 # assign worker threads to run test groups
 for test_group in self.parallel_test_groups:
 result = pool.apply_async(run_test_group,
@@ -367,6 +368,7 @@ def run_all_tests(self):
 # remove result from results list once we're done with it
 results.remove(group_result)
 
+print("Non-parallel autotests:")
 # run non_parallel tests. they are run one by one, synchronously
 for test_group in self.non_parallel_test_groups:
 group_result = run_test_group(
-- 
2.14.4



[dpdk-dev] [PATCH v5 02/10] autotest: fix invalid code on reports

2018-07-17 Thread Reshma Pattan
There are no reports defined for any test, so this codepath was
never triggered, but it's still wrong because it references
variables that aren't there. Fix it by passing the target into the
test function and referencing the correct log variable.
Fixes: e2cc79b75d9f ("app: rework autotest.py")
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index b09b57876..bdc32da5d 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -41,7 +41,7 @@ def wait_prompt(child):
 # quite a bit of effort to make it work).
 
 
-def run_test_group(cmdline, test_group):
+def run_test_group(cmdline, target, test_group):
 results = []
 child = None
 start_time = time.time()
@@ -128,14 +128,15 @@ def run_test_group(cmdline, test_group):
 # make a note when the test was finished
 end_time = time.time()
 
+log = logfile.getvalue()
+
 # append test data to the result tuple
-result += (test["Name"], end_time - start_time,
-   logfile.getvalue())
+result += (test["Name"], end_time - start_time, log)
 
 # call report function, if any defined, and supply it with
 # target and complete log for test run
 if test["Report"]:
-report = test["Report"](self.target, log)
+report = test["Report"](target, log)
 
 # append report to results tuple
 result += (report,)
@@ -343,6 +344,7 @@ def run_all_tests(self):
 for test_group in self.parallel_test_groups:
 result = pool.apply_async(run_test_group,
   [self.__get_cmdline(test_group),
+   self.target,
test_group])
 results.append(result)
 
@@ -367,7 +369,7 @@ def run_all_tests(self):
 # run non_parallel tests. they are run one by one, synchronously
 for test_group in self.non_parallel_test_groups:
 group_result = run_test_group(
-self.__get_cmdline(test_group), test_group)
+self.__get_cmdline(test_group), self.target, test_group)
 
 self.__process_results(group_result)
 
-- 
2.14.4



[dpdk-dev] [PATCH v5 05/10] autotest: improve filtering

2018-07-17 Thread Reshma Pattan
Improve code for filtering test groups. Also, move reading binary
symbols into filtering stage, so that tests that are meant to be
skipped are never attempted to be executed in the first place.
Before running tests, print out any tests that were skipped because
they weren't compiled.
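
The symbol-based check this patch moves into the filtering stage can be
sketched as follows (the `nm` output below is fabricated for illustration):

```python
import re

# Only tests whose test_register_<command> symbol is present in the test
# binary are kept; the rest are marked as skipped before execution,
# mirroring the patch's __filter_test() logic.
symbols = """
0000000000401000 T test_register_ring_autotest
0000000000402000 T test_register_mempool_autotest
"""
avail_cmds = re.findall(r'test_register_(\w+)', symbols)

requested = ["ring_autotest", "timer_autotest"]
runnable = [t for t in requested if t in avail_cmds]
skipped = [t for t in requested if t not in avail_cmds]
print(runnable)  # ['ring_autotest']
print(skipped)   # ['timer_autotest']
```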

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 118 ---
 1 file changed, 66 insertions(+), 52 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index d9d5f7a97..c98ec2a57 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -95,13 +95,6 @@ def run_test_group(cmdline, target, test_group):
 results.append((0, "Success", "Start %s" % test_group["Prefix"],
 time.time() - start_time, startuplog.getvalue(), None))
 
-# parse the binary for available test commands
-binary = cmdline.split()[0]
-stripped = 'not stripped' not in subprocess.check_output(['file', binary])
-if not stripped:
-symbols = subprocess.check_output(['nm', binary]).decode('utf-8')
-avail_cmds = re.findall('test_register_(\w+)', symbols)
-
 # run all tests in test group
 for test in test_group["Tests"]:
 
@@ -121,10 +114,7 @@ def run_test_group(cmdline, target, test_group):
 print("\n%s %s\n" % ("-" * 20, test["Name"]), file=logfile)
 
 # run test function associated with the test
-if stripped or test["Command"] in avail_cmds:
-result = test["Func"](child, test["Command"])
-else:
-result = (0, "Skipped [Not Available]")
+result = test["Func"](child, test["Command"])
 
 # make a note when the test was finished
 end_time = time.time()
@@ -186,8 +176,10 @@ class AutotestRunner:
 def __init__(self, cmdline, target, blacklist, whitelist):
 self.cmdline = cmdline
 self.target = target
+self.binary = cmdline.split()[0]
 self.blacklist = blacklist
 self.whitelist = whitelist
+self.skipped = []
 
 # log file filename
 logfile = "%s.log" % target
@@ -276,53 +268,58 @@ def __process_results(self, results):
 if i != 0:
 self.csvwriter.writerow([test_name, test_result, result_str])
 
-# this function iterates over test groups and removes each
-# test that is not in whitelist/blacklist
-def __filter_groups(self, test_groups):
-groups_to_remove = []
-
-# filter out tests from parallel test groups
-for i, test_group in enumerate(test_groups):
-
-# iterate over a copy so that we could safely delete individual
-# tests
-for test in test_group["Tests"][:]:
-test_id = test["Command"]
-
-# dump tests are specified in full e.g. "Dump_mempool"
-if "_autotest" in test_id:
-test_id = test_id[:-len("_autotest")]
-
-# filter out blacklisted/whitelisted tests
-if self.blacklist and test_id in self.blacklist:
-test_group["Tests"].remove(test)
-continue
-if self.whitelist and test_id not in self.whitelist:
-test_group["Tests"].remove(test)
-continue
-
-# modify or remove original group
-if len(test_group["Tests"]) > 0:
-test_groups[i] = test_group
-else:
-# remember which groups should be deleted
-# put the numbers backwards so that we start
-# deleting from the end, not from the beginning
-groups_to_remove.insert(0, i)
+# this function checks individual test and decides if this test should be in
+# the group by comparing it against whitelist/blacklist. it also checks if
+# the test is compiled into the binary, and marks it as skipped if necessary
+def __filter_test(self, test):
+test_cmd = test["Command"]
+test_id = test_cmd
+
+# dump tests are specified in full e.g. "Dump_mempool"
+if "_autotest" in test_id:
+test_id = test_id[:-len("_autotest")]
+
+# filter out blacklisted/whitelisted tests
+if self.blacklist and test_id in self.blacklist:
+return False
+if self.whitelist and test_id not in self.whitelist:
+return False
+
+# if test wasn't compiled in, remove it as well
+
+# parse the binary for available test commands
+stripped = 'not stripped' not in \
+   subprocess.check_output(['file', self.binary])
+if not stripped:
+symbols = subprocess.check_output(['nm',
+   self.binary]).decode('utf-8')
+avail_cmds = re.findall('test_register_(\w+)', symbols)
+
+if test_cmd

[dpdk-dev] [PATCH v5 03/10] autotest: make autotest runner python 2/3 compliant

2018-07-17 Thread Reshma Pattan
Autotest runner was still using python 2-style print syntax. Fix
it by importing print function from the future, and fix the calls
to be python-3 style.

Fixes: 54ca545dce4b ("make python scripts python2/3 compliant")
Cc: john.mcnam...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest_runner.py | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index bdc32da5d..f6b669a2e 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -3,6 +3,7 @@
 
 # The main logic behind running autotests in parallel
 
+from __future__ import print_function
 import StringIO
 import csv
 import multiprocessing
@@ -52,8 +53,8 @@ def run_test_group(cmdline, target, test_group):
 # prepare logging of init
 startuplog = StringIO.StringIO()
 
-print >>startuplog, "\n%s %s\n" % ("=" * 20, test_group["Prefix"])
-print >>startuplog, "\ncmdline=%s" % cmdline
+print("\n%s %s\n" % ("=" * 20, test_group["Prefix"]), file=startuplog)
+print("\ncmdline=%s" % cmdline, file=startuplog)
 
 child = pexpect.spawn(cmdline, logfile=startuplog)
 
@@ -117,7 +118,7 @@ def run_test_group(cmdline, target, test_group):
 
 try:
 # print test name to log buffer
-print >>logfile, "\n%s %s\n" % ("-" * 20, test["Name"])
+print("\n%s %s\n" % ("-" * 20, test["Name"]), file=logfile)
 
 # run test function associated with the test
 if stripped or test["Command"] in avail_cmds:
-- 
2.14.4



[dpdk-dev] [PATCH v5 06/10] autotest: remove autotest grouping

2018-07-17 Thread Reshma Pattan
Previously, all autotests were grouped into (seemingly arbitrary)
groups. The goal was to run all tests in parallel (so that autotest
finishes faster), but we couldn't just do it willy-nilly because
DPDK couldn't allocate and free hugepages on-demand, so we had to
find autotest groupings that could work memory-wise and still be
fast enough to not hold up shorter tests. The inflexibility of
the memory subsystem has been fixed in 18.05, so grouping
autotests is no longer necessary.

Thus, this commit moves all autotests into two groups -
parallel(izable) autotests, and non-parallel(izable) autotests
(typically performance tests). Note that this particular commit
makes running autotests dog slow because while the tests are now
in a single group, the test function itself hasn't changed much,
so all autotests are now run one-by-one, starting and stopping
the DPDK test application.

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest.py|   7 +-
 test/test/autotest_data.py   | 749 +--
 test/test/autotest_runner.py | 271 ++--
 3 files changed, 408 insertions(+), 619 deletions(-)

diff --git a/test/test/autotest.py b/test/test/autotest.py
index 1cfd8cf22..ae27daef7 100644
--- a/test/test/autotest.py
+++ b/test/test/autotest.py
@@ -39,11 +39,8 @@ def usage():
 runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
 test_whitelist)
 
-for test_group in autotest_data.parallel_test_group_list:
-runner.add_parallel_test_group(test_group)
-
-for test_group in autotest_data.non_parallel_test_group_list:
-runner.add_non_parallel_test_group(test_group)
+runner.parallel_tests = autotest_data.parallel_test_list[:]
+runner.non_parallel_tests = autotest_data.non_parallel_test_list[:]
 
 num_fails = runner.run_all_tests()
 
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index aacfe0a66..c24e7bc25 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -3,465 +3,322 @@
 
 # Test data for autotests
 
-from glob import glob
 from autotest_test_funcs import *
 
-
-# quick and dirty function to find out number of sockets
-def num_sockets():
-result = len(glob("/sys/devices/system/node/node*"))
-if result == 0:
-return 1
-return result
-
-
-# Assign given number to each socket
-# e.g. 32 becomes 32,32 or 32,32,32,32
-def per_sockets(num):
-return ",".join([str(num)] * num_sockets())
-
 # groups of tests that can be run in parallel
 # the grouping has been found largely empirically
-parallel_test_group_list = [
-{
-"Prefix":"group_1",
-"Memory":per_sockets(8),
-"Tests":
-[
-{
-"Name":"Cycles autotest",
-"Command": "cycles_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Timer autotest",
-"Command": "timer_autotest",
-"Func":timer_autotest,
-"Report":   None,
-},
-{
-"Name":"Debug autotest",
-"Command": "debug_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Errno autotest",
-"Command": "errno_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Meter autotest",
-"Command": "meter_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Common autotest",
-"Command": "common_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-{
-"Name":"Resource autotest",
-"Command": "resource_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
-]
-},
-{
-"Prefix":"group_2",
-"Memory":"16",
-"Tests":
-[
-{
-"Name":"Memory autotest",
-"Command": "memory_autotest",
-"Func":memory_autotest,
-"Report":  None,
-},
-{
-"Name":"Read/write lock autotest",
-"Command": "rwlock_autotest",
-"Func":rwlock_autotest,
-"Report":  None,
-},
-{
-"Name":"Logs autotest",
-"Command": "logs_autotest",
-"Func":logs_autotest,
-"Report":  None,
-},
-{
-"Name":"CPU flags autotest",
-"Command": "cpuflags_autotes

[dpdk-dev] [PATCH v5 07/10] autotest: properly parallelize unit tests

2018-07-17 Thread Reshma Pattan
Now that everything else is in place, we can run unit tests in a
different fashion from how they were run before. Previously,
we had all autotests as part of groups (largely obtained through
trial and error) to ensure parallel execution while still limiting
amounts of memory used by those tests.

This is no longer necessary, and as of previous commit, all tests
are now in the same group (still broken into two categories). They
still run one-by-one though. Fix this by initializing child
processes in multiprocessing Pool initialization, and putting all
tests on the queue, so that tests are executed by the first idle
worker. Tests are also affinitized to different NUMA nodes using
taskset in a round-robin fashion, to prevent over-exhausting
memory on any given NUMA node.

Non-parallel tests are executed in similar fashion, but on a
separate queue which will have only one pool worker, ensuring
non-parallel execution.

Support for FreeBSD is also added to ensure that on FreeBSD, all
tests are run sequentially even for the parallel section.
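
The queue-based initializer scheme can be sketched as follows (a minimal,
self-contained sketch with illustrative command lines and prefixes; the real
runner additionally affinitizes each worker to a NUMA node with taskset):

```python
from multiprocessing import Pool, Queue

# Pool's initializer cannot take per-worker arguments, so each worker
# dequeues its own unique (cmdline, prefix) pair from a shared queue,
# mirroring the patch's pool_init().
worker_cfg = None  # per-process state, set once in the initializer


def pool_init(cfg_queue):
    global worker_cfg
    worker_cfg = cfg_queue.get()  # unique config for this worker process


def run_test(test_name):
    cmdline, prefix = worker_cfg
    return "%s: %s --file-prefix=%s" % (test_name, cmdline, prefix)


if __name__ == "__main__":
    cfg_queue = Queue()
    for i in range(2):
        cfg_queue.put(("./test/test/test", "worker%d" % i))
    pool = Pool(processes=2, initializer=pool_init, initargs=(cfg_queue,))
    for line in sorted(pool.map(run_test, ["ring_autotest", "timer_autotest"])):
        print(line)
    pool.close()
    pool.join()
```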

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
---
 test/test/autotest.py|   6 +-
 test/test/autotest_runner.py | 277 +++
 2 files changed, 183 insertions(+), 100 deletions(-)

diff --git a/test/test/autotest.py b/test/test/autotest.py
index ae27daef7..12997fdf0 100644
--- a/test/test/autotest.py
+++ b/test/test/autotest.py
@@ -36,8 +36,12 @@ def usage():
 
 print(cmdline)
 
+# how many workers to run tests with. FreeBSD doesn't support multiple primary
+# processes, so make it 1, otherwise make it 4. ignored for non-parallel tests
+n_processes = 1 if "bsdapp" in target else 4
+
 runner = autotest_runner.AutotestRunner(cmdline, target, test_blacklist,
-test_whitelist)
+test_whitelist, n_processes)
 
 runner.parallel_tests = autotest_data.parallel_test_list[:]
 runner.non_parallel_tests = autotest_data.non_parallel_test_list[:]
diff --git a/test/test/autotest_runner.py b/test/test/autotest_runner.py
index d6ae57e76..36941a40a 100644
--- a/test/test/autotest_runner.py
+++ b/test/test/autotest_runner.py
@@ -6,16 +6,16 @@
 from __future__ import print_function
 import StringIO
 import csv
-import multiprocessing
+from multiprocessing import Pool, Queue
 import pexpect
 import re
 import subprocess
 import sys
 import time
+import glob
+import os
 
 # wait for prompt
-
-
 def wait_prompt(child):
 try:
 child.sendline()
@@ -28,22 +28,47 @@ def wait_prompt(child):
 else:
 return False
 
-# run a test group
-# each result tuple in results list consists of:
-#   result value (0 or -1)
-#   result string
-#   test name
-#   total test run time (double)
-#   raw test log
-#   test report (if not available, should be None)
-#
-# this function needs to be outside AutotestRunner class
-# because otherwise Pool won't work (or rather it will require
-# quite a bit of effort to make it work).
+
+# get all valid NUMA nodes
+def get_numa_nodes():
+return [
+int(
+re.match(r"node(\d+)", os.path.basename(node))
+.group(1)
+)
+for node in glob.glob("/sys/devices/system/node/node*")
+]
+
+
+# find first (or any, really) CPU on a particular node, will be used to spread
+# processes around NUMA nodes to avoid exhausting memory on particular node
+def first_cpu_on_node(node_nr):
+cpu_path = glob.glob("/sys/devices/system/node/node%d/cpu*" % node_nr)[0]
+cpu_name = os.path.basename(cpu_path)
+m = re.match(r"cpu(\d+)", cpu_name)
+return int(m.group(1))
+
+
+pool_child = None  # per-process child
 
 
-def run_test_group(cmdline, prefix, target, test):
+# we initialize each worker with a queue because we need per-pool unique
+# command-line arguments, but we cannot do different arguments in an initializer
+# because the API doesn't allow per-worker initializer arguments. so, instead,
+# we will initialize with a shared queue, and dequeue command-line arguments
+# from this queue
+def pool_init(queue, result_queue):
+    global pool_child
+
+    cmdline, prefix = queue.get()
     start_time = time.time()
+    name = ("Start %s" % prefix) if prefix != "" else "Start"
+
+    # use default prefix if no prefix was specified
+    prefix_cmdline = "--file-prefix=%s" % prefix if prefix != "" else ""
+
+    # append prefix to cmdline
+    cmdline = "%s %s" % (cmdline, prefix_cmdline)
 
     # prepare logging of init
     startuplog = StringIO.StringIO()
@@ -54,24 +79,60 @@ def run_test_group(cmdline, prefix, target, test):
     print("\n%s %s\n" % ("=" * 20, prefix), file=startuplog)
     print("\ncmdline=%s" % cmdline, file=startuplog)
 
-    child = pexpect.spawn(cmdline, logfile=startuplog)
+    pool_child = pexpect.spawn(cmdline, logfile=startuplog)
 
     # wait for target to boot
-    if not wait_prompt(child):
-        child.close()
+    if n
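The comment in the patch above describes a workaround: the Pool API accepts only one shared set of initializer arguments, so per-worker arguments are handed out through a shared queue that each worker dequeues from exactly once. Below is a minimal, self-contained Python sketch of that pattern — the `pool_init`/`pool_child` names mirror the patch, while the prefix strings and `run_task` are illustrative stand-ins for the pexpect/logging details:

```python
import multiprocessing

pool_child = None  # per-process state, set once in the initializer


def pool_init(queue):
    # each worker process dequeues its own unique argument set; the Pool
    # API only allows one shared initargs tuple, so a shared queue is
    # used to hand out per-worker values
    global pool_child
    pool_child = queue.get()


def run_task(task_nr):
    # every task sees the prefix its hosting worker was initialized with
    return (task_nr, pool_child)


def run_demo(n_workers=2, n_tasks=4):
    queue = multiprocessing.Queue()
    for i in range(n_workers):
        queue.put("prefix%d" % i)
    pool = multiprocessing.Pool(processes=n_workers,
                                initializer=pool_init, initargs=(queue,))
    results = pool.map(run_task, range(n_tasks))
    pool.close()
    pool.join()
    return results


if __name__ == "__main__":
    print(run_demo())
```

Each worker calls the initializer once at spawn time, so exactly one queued value is consumed per worker, regardless of how many tasks that worker later runs.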

[dpdk-dev] [PATCH v5 10/10] mk: remove unnecessary make rules of test

2018-07-17 Thread Reshma Pattan
The test-basic make rule is a duplicate of the test rule.
Removed the unused test-mempool and test-ring make rules.

Fixes: a3df7f8d9c ("mk: rename test related rules")
CC: sta...@dpdk.org
CC: ferruh.yi...@intel.com

Signed-off-by: Reshma Pattan 
---
 mk/rte.sdkroot.mk | 4 ++--
 mk/rte.sdktest.mk | 7 +++
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk
index ea3473ebf..18c88017e 100644
--- a/mk/rte.sdkroot.mk
+++ b/mk/rte.sdkroot.mk
@@ -68,8 +68,8 @@ config defconfig showconfigs showversion showversionum:
 cscope gtags tags etags:
$(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
 
-.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
-test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:
+.PHONY: test test-fast test-perf coverage test-drivers test-dump
+test test-fast test-perf coverage test-drivers test-dump:
$(Q)$(MAKE) -f $(RTE_SDK)/mk/rte.sdktest.mk $@
 
 test: test-build
diff --git a/mk/rte.sdktest.mk b/mk/rte.sdktest.mk
index 13d1efb6a..295592809 100644
--- a/mk/rte.sdktest.mk
+++ b/mk/rte.sdktest.mk
@@ -18,7 +18,7 @@ DIR := $(shell basename $(RTE_OUTPUT))
 #
 # test: launch auto-tests, very simple for now.
 #
-.PHONY: test test-basic test-fast test-perf test-drivers test-dump coverage
+.PHONY: test test-fast test-perf test-drivers test-dump coverage
 
 PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf,\
  reciprocal_division,reciprocal_division_perf,lpm_perf,red_all,\
@@ -31,8 +31,7 @@ DRIVERSLIST=link_bonding,link_bonding_mode4,link_bonding_rssconf,\
 cryptodev_scheduler,cryptodev_aesni_gcm,cryptodev_null,\
 cryptodev_sw_snow3g,cryptodev_sw_kasumi,cryptodev_sw_zuc
 DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
- dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
- devargs_autotest
+ dump_log_types,dump_ring,dump_physmem,dump_memzone
 
 SPACESTR:=
 SPACESTR+=
@@ -46,7 +45,7 @@ test-perf: WHITELIST=$(STRIPPED_PERFLIST)
 test-drivers: WHITELIST=$(STRIPPED_DRIVERSLIST)
 test-dump: WHITELIST=$(STRIPPED_DUMPLIST)
 
-test test-basic test-fast test-perf test-drivers test-dump:
+test test-fast test-perf test-drivers test-dump:
@mkdir -p $(AUTOTEST_DIR) ; \
cd $(AUTOTEST_DIR) ; \
if [ -f $(RTE_OUTPUT)/app/test ]; then \
-- 
2.14.4



[dpdk-dev] [PATCH v5 08/10] autotest: update autotest test case list

2018-07-17 Thread Reshma Pattan
Autotest is enhanced with additional test cases added to
autotest_data.py.

Removed the non-existent PCI autotest.

Cc: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Anatoly Burakov 
---
 test/test/autotest_data.py | 350 +++--
 1 file changed, 342 insertions(+), 8 deletions(-)

diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index c24e7bc25..3f856ff57 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -134,12 +134,6 @@
 "Func":default_autotest,
 "Report":  None,
 },
-{
-"Name":"PCI autotest",
-"Command": "pci_autotest",
-"Func":default_autotest,
-"Report":  None,
-},
 {
 "Name":"Malloc autotest",
 "Command": "malloc_autotest",
@@ -248,6 +242,291 @@
 "Func":default_autotest,
 "Report":  None,
 },
+{
+"Name":"Eventdev selftest octeontx",
+"Command": "eventdev_selftest_octeontx",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Event ring autotest",
+"Command": "event_ring_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Table autotest",
+"Command": "table_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Flow classify autotest",
+"Command": "flow_classify_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Event eth rx adapter autotest",
+"Command": "event_eth_rx_adapter_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"User delay",
+"Command": "user_delay_us",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Rawdev autotest",
+"Command": "rawdev_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Kvargs autotest",
+"Command": "kvargs_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Devargs autotest",
+"Command": "devargs_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding autotest",
+"Command": "link_bonding_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding mode4 autotest",
+"Command": "link_bonding_mode4_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Link bonding rssconf autotest",
+"Command": "link_bonding_rssconf_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Crc autotest",
+"Command": "crc_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Distributor autotest",
+"Command": "distributor_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Reorder autotest",
+"Command": "reorder_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Barrier autotest",
+"Command": "barrier_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Bitmap test",
+"Command": "bitmap_test",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash scaling autotest",
+"Command": "hash_scaling_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash multiwriter autotest",
+"Command": "hash_multiwriter_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Service autotest",
+"Command": "service_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Timer racecond autotest",
+"Command": "timer_racecond_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Member autotest",
+"Command": "member_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":   "Efd_autotest",
+"Command": "efd_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Thash autotest",
+"Command": "thash_autotest",
+"Func":default_autotest,
+"Report":  None,
+},
+{
+"Name":"Hash function autotest",
+"Command": "hash_functions_autotest

[dpdk-dev] [PATCH v5 09/10] mk: update make targets for classified testcases

2018-07-17 Thread Reshma Pattan
Makefiles are updated with new test case lists.
Test cases are classified as -
P1 - Main test cases,
P2 - Cryptodev/driver test cases,
P3 - Perf test cases which take longer than 10s,
P4 - Logging/Dump test cases.

Makefile is updated with different targets
for the above classified groups.
Test cases for different targets are listed accordingly.

Cc: sta...@dpdk.org

Signed-off-by: Jananee Parthasarathy 
Reviewed-by: Reshma Pattan 
---
 mk/rte.sdkroot.mk |  4 ++--
 mk/rte.sdktest.mk | 33 +++--
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/mk/rte.sdkroot.mk b/mk/rte.sdkroot.mk
index f43cc7829..ea3473ebf 100644
--- a/mk/rte.sdkroot.mk
+++ b/mk/rte.sdkroot.mk
@@ -68,8 +68,8 @@ config defconfig showconfigs showversion showversionum:
 cscope gtags tags etags:
$(Q)$(RTE_SDK)/devtools/build-tags.sh $@ $T
 
-.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage
-test test-basic test-fast test-ring test-mempool test-perf coverage:
+.PHONY: test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump
+test test-basic test-fast test-ring test-mempool test-perf coverage test-drivers test-dump:
$(Q)$(MAKE) -f $(RTE_SDK)/mk/rte.sdktest.mk $@
 
 test: test-build
diff --git a/mk/rte.sdktest.mk b/mk/rte.sdktest.mk
index ee1fe0c7e..13d1efb6a 100644
--- a/mk/rte.sdktest.mk
+++ b/mk/rte.sdktest.mk
@@ -18,14 +18,35 @@ DIR := $(shell basename $(RTE_OUTPUT))
 #
 # test: launch auto-tests, very simple for now.
 #
-.PHONY: test test-basic test-fast test-perf coverage
+.PHONY: test test-basic test-fast test-perf test-drivers test-dump coverage
 
-PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf
-coverage: BLACKLIST=-$(PERFLIST)
-test-fast: BLACKLIST=-$(PERFLIST)
-test-perf: WHITELIST=$(PERFLIST)
+PERFLIST=ring_perf,mempool_perf,memcpy_perf,hash_perf,timer_perf,\
+ reciprocal_division,reciprocal_division_perf,lpm_perf,red_all,\
+ barrier,hash_multiwriter,timer_racecond,efd,hash_functions,\
+ eventdev_selftest_sw,member_perf,efd_perf,lpm6_perf,red_perf,\
+ distributor_perf,ring_pmd_perf,pmd_perf,ring_perf
+DRIVERSLIST=link_bonding,link_bonding_mode4,link_bonding_rssconf,\
+cryptodev_sw_mrvl,cryptodev_dpaa2_sec,cryptodev_dpaa_sec,\
+cryptodev_qat,cryptodev_aesni_mb,cryptodev_openssl,\
+cryptodev_scheduler,cryptodev_aesni_gcm,cryptodev_null,\
+cryptodev_sw_snow3g,cryptodev_sw_kasumi,cryptodev_sw_zuc
+DUMPLIST=dump_struct_sizes,dump_mempool,dump_malloc_stats,dump_devargs,\
+ dump_log_types,dump_ring,quit,dump_physmem,dump_memzone,\
+ devargs_autotest
 
-test test-basic test-fast test-perf:
+SPACESTR:=
+SPACESTR+=
+STRIPPED_PERFLIST=$(subst $(SPACESTR),,$(PERFLIST))
+STRIPPED_DRIVERSLIST=$(subst $(SPACESTR),,$(DRIVERSLIST))
+STRIPPED_DUMPLIST=$(subst $(SPACESTR),,$(DUMPLIST))
+
+coverage: BLACKLIST=-$(STRIPPED_PERFLIST)
+test-fast: BLACKLIST=-$(STRIPPED_PERFLIST),$(STRIPPED_DRIVERSLIST),$(STRIPPED_DUMPLIST)
+test-perf: WHITELIST=$(STRIPPED_PERFLIST)
+test-drivers: WHITELIST=$(STRIPPED_DRIVERSLIST)
+test-dump: WHITELIST=$(STRIPPED_DUMPLIST)
+
+test test-basic test-fast test-perf test-drivers test-dump:
@mkdir -p $(AUTOTEST_DIR) ; \
cd $(AUTOTEST_DIR) ; \
if [ -f $(RTE_OUTPUT)/app/test ]; then \
-- 
2.14.4



[dpdk-dev] [PATCH] librte_ethdev: improve description for port name api

2018-07-17 Thread Jasvinder Singh
Improve the description of the APIs used to get the port name
from the port id, or vice versa.

Signed-off-by: Jasvinder Singh 
---
 lib/librte_ethdev/rte_ethdev.h | 19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f5f593b..874740b 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -3629,11 +3629,11 @@ rte_eth_dev_l2_tunnel_offload_set(uint16_t port_id,
  uint8_t en);
 
 /**
-* Get the port id from pci address or device name
-* Example:
-* - PCIe, 0000:2:00.0
-* - SoC, fsl-gmac0
-* - vdev, net_pcap0
+* Get the port id from device name. The device name should be specified
+* as below:
+* - PCIe address (Domain:Bus:Device.Function), for example- 0000:2:00.0
+* - SoC device name, for example- fsl-gmac0
+* - vdev dpdk name, for example- net_[pcap0|null0|tap0]
 *
 * @param name
 *  pci address or name of the device
@@ -3647,11 +3647,10 @@ int
 rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id);
 
 /**
-* Get the device name from port id
-* Example:
-* - PCIe Bus:Domain:Function, 0000:02:00.0
-* - SoC device name, fsl-gmac0
-* - vdev dpdk name, net_[pcap0|null0|tun0|tap0]
+* Get the device name from port id. The device name is specified as below;  
+* - PCIe address (Domain:Bus:Device.Function), for example- 0000:02:00.0
+* - SoC device name, for example- fsl-gmac0
+* - vdev dpdk name, for example- net_[pcap0|null0|tun0|tap0]
 *
 * @param port_id
 *   Port identifier of the device.
-- 
2.9.3



Re: [dpdk-dev] [PATCH] event/octeontx: prefetch mbuf instead of wqe

2018-07-17 Thread santosh


On Tuesday 17 July 2018 08:03 PM, Pavan Nikhilesh wrote:
> Prefetch mbuf pointer instead of wqe when SSO receives pkt from PKI.
>
> Signed-off-by: Pavan Nikhilesh 
> ---

Acked-by: Santosh Shukla 



[dpdk-dev] [PATCH v3] test: fix incorrect return types

2018-07-17 Thread Reshma Pattan
UTs should return either TEST_SUCCESS or TEST_FAILED only.
They should not return 0, -1, or any other value.

Fixes: 9c9befea4f ("test: add flow classify unit tests")
CC: jasvinder.si...@intel.com
CC: bernard.iremon...@intel.com
CC: sta...@dpdk.org

Signed-off-by: Reshma Pattan 
Reviewed-by: Anatoly Burakov 

---
v3: remove return of TEST_SUCCESS and TEST_FAILED from
unnecessary places.
---
 test/test/test_flow_classify.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c
index fc83b69ae..5f5b7 100644
--- a/test/test/test_flow_classify.c
+++ b/test/test/test_flow_classify.c
@@ -871,32 +871,32 @@ test_flow_classify(void)
printf("Line %i: f_create has failed!\n", __LINE__);
rte_flow_classifier_free(cls->cls);
rte_free(cls);
-   return -1;
+   return TEST_FAILED;
}
printf("Created table_acl for for IPv4 five tuple packets\n");
 
ret = init_mbufpool();
if (ret) {
printf("Line %i: init_mbufpool has failed!\n", __LINE__);
-   return -1;
+   return TEST_FAILED;
}
 
if (test_invalid_parameters() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_valid_parameters() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_invalid_patterns() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_invalid_actions() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_query_udp() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_query_tcp() < 0)
-   return -1;
+   return TEST_FAILED;
if (test_query_sctp() < 0)
-   return -1;
+   return TEST_FAILED;
 
-   return 0;
+   return TEST_SUCCESS;
 }
 
 REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify);
-- 
2.14.4



[dpdk-dev] [PATCH] eal: fix bitmap documentation

2018-07-17 Thread Jerin Jacob
n_bits comes as the first argument; align the doxygen comment
accordingly.

n_bits need not be a multiple of 512, as n_bits is rounded up to
RTE_BITMAP_CL_BIT_SIZE.

Fixes: 14456f59e9f7 ("doc: fix doxygen warnings in QoS API")
Fixes: de3cfa2c9823 ("sched: initial import")

Cc: sta...@dpdk.org

Signed-off-by: Jerin Jacob 
---
 lib/librte_eal/common/include/rte_bitmap.h | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_bitmap.h b/lib/librte_eal/common/include/rte_bitmap.h
index 7d4935fcc..d9facc642 100644
--- a/lib/librte_eal/common/include/rte_bitmap.h
+++ b/lib/librte_eal/common/include/rte_bitmap.h
@@ -198,12 +198,12 @@ rte_bitmap_get_memory_footprint(uint32_t n_bits) {
 /**
  * Bitmap initialization
  *
- * @param mem_size
- *   Minimum expected size of bitmap.
+ * @param n_bits
+ *   Number of pre-allocated bits in array2.
  * @param mem
  *   Base address of array1 and array2.
- * @param n_bits
- *   Number of pre-allocated bits in array2. Must be non-zero and multiple of 512.
+ * @param mem_size
+ *   Minimum expected size of bitmap.
  * @return
  *   Handle to bitmap instance.
  */
-- 
2.18.0
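The rounding the fixed comment alludes to can be sketched numerically. The figure below assumes a 64-byte cache line, making RTE_BITMAP_CL_BIT_SIZE 512 bits — that constant value is an assumption for illustration, not stated in the patch:

```python
# assumed constant: 64-byte cache line => 512 bits per cache line
RTE_BITMAP_CL_BIT_SIZE = 64 * 8


def round_up_n_bits(n_bits):
    # round n_bits up to a multiple of the cache-line bit size,
    # which is why callers need not pass a multiple of 512
    return ((n_bits + RTE_BITMAP_CL_BIT_SIZE - 1)
            // RTE_BITMAP_CL_BIT_SIZE) * RTE_BITMAP_CL_BIT_SIZE


if __name__ == "__main__":
    print(round_up_n_bits(100), round_up_n_bits(512), round_up_n_bits(513))
```

So a caller asking for, say, 100 bits gets a 512-bit bitmap, which is why the "must be a multiple of 512" requirement could be dropped from the comment.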



[dpdk-dev] [PATCH 2/2] compression/qat: add sgl feature

2018-07-17 Thread Fiona Trahe
This patch adds the SGL (scatter-gather list) feature to the QAT
compression PMD.
Signed-off-by: Tomasz Jozwiak 
Signed-off-by: Fiona Trahe 
---
 config/common_base   |  1 +
 config/rte_config.h  |  1 +
 doc/guides/compressdevs/features/qat.ini |  3 +++
 doc/guides/compressdevs/qat_comp.rst |  2 --
 drivers/compress/qat/qat_comp.c  | 41 
 drivers/compress/qat/qat_comp.h  |  9 +++
 drivers/compress/qat/qat_comp_pmd.c  | 25 ++-
 7 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/config/common_base b/config/common_base
index a061c21..6d82b91 100644
--- a/config/common_base
+++ b/config/common_base
@@ -499,6 +499,7 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 # Max. number of QuickAssist devices, which can be detected and attached
 #
 CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
+CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 
 #
 # Compile PMD for virtio crypto devices
diff --git a/config/rte_config.h b/config/rte_config.h
index 28f04b4..a8e4797 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -89,6 +89,7 @@
 /* QuickAssist device */
 /* Max. number of QuickAssist devices which can be attached */
 #define RTE_PMD_QAT_MAX_PCI_DEVICES 48
+#define RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS 16
 
 /* virtio crypto defines */
 #define RTE_MAX_VIRTIO_CRYPTO 32
diff --git a/doc/guides/compressdevs/features/qat.ini b/doc/guides/compressdevs/features/qat.ini
index 12bfb21..5cd4524 100644
--- a/doc/guides/compressdevs/features/qat.ini
+++ b/doc/guides/compressdevs/features/qat.ini
@@ -5,6 +5,9 @@
 ;
 [Features]
 HW Accelerated  = Y
+OOP SGL In SGL Out  = Y
+OOP SGL In LB  Out  = Y
+OOP LB  In SGL Out  = Y
 Deflate = Y
 Adler32 = Y
 Crc32   = Y
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 167f816..8b1270b 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -35,8 +35,6 @@ Checksum generation:
 Limitations
 ---
 
-* Chained mbufs are not yet supported, therefore max data size which can be passed to the PMD in a single mbuf is 64K - 1. If data is larger than this it will need to be split up and sent as multiple operations.
-
 * Compressdev level 0, no compression, is not supported.
 
 * Dynamic Huffman encoding is not yet supported.
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index e8019eb..cbf7614 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -21,10 +21,12 @@
 
 int
 qat_comp_build_request(void *in_op, uint8_t *out_msg,
-  void *op_cookie __rte_unused,
+  void *op_cookie,
   enum qat_device_gen qat_dev_gen __rte_unused)
 {
struct rte_comp_op *op = in_op;
+   struct qat_comp_op_cookie *cookie =
+   (struct qat_comp_op_cookie *)op_cookie;
struct qat_comp_xform *qat_xform = op->private_xform;
const uint8_t *tmpl = (uint8_t *)&qat_xform->qat_comp_req_tmpl;
struct icp_qat_fw_comp_req *comp_req =
@@ -44,12 +46,43 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
comp_req->comp_pars.comp_len = op->src.length;
comp_req->comp_pars.out_buffer_sz = rte_pktmbuf_pkt_len(op->m_dst);
 
-   /* sgl */
if (op->m_src->next != NULL || op->m_dst->next != NULL) {
-   QAT_DP_LOG(ERR, "QAT PMD doesn't support scatter gather");
-   return -EINVAL;
+   /* sgl */
+   int ret = 0;
+
+   ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
+   QAT_COMN_PTR_TYPE_SGL);
+   ret = qat_sgl_fill_array(op->m_src,
+   rte_pktmbuf_mtophys_offset(op->m_src,
+   op->src.offset),
+   &cookie->qat_sgl_src,
+   op->src.length,
+   RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+   if (ret) {
+   QAT_DP_LOG(ERR, "QAT PMD Cannot fill sgl array");
+   return ret;
+   }
+
+   ret = qat_sgl_fill_array(op->m_dst,
+   rte_pktmbuf_mtophys_offset(op->m_dst,
+   op->dst.offset),
+   &cookie->qat_sgl_dst,
+   comp_req->comp_pars.out_buffer_sz,
+   RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+   if (ret) {
+   QAT_DP_LOG(ERR, "QAT PMD Cannot fill sgl array");
+   return ret;
+   }
+
+   comp_req->comn_mid.src_data_addr =
+   cookie->qat_sgl_src_phys_addr;
+   comp_req->comn_mid.dest_data_addr =
+   coo

[dpdk-dev] [PATCH 1/2] common/qat: add sgl header

2018-07-17 Thread Fiona Trahe
This patch refactors the SGL struct so that it includes a flexible
array of flat buffers, since the sym and compress PMDs can have
different-sized SGLs.

Signed-off-by: Tomasz Jozwiak 
Signed-off-by: Fiona Trahe 
---
 drivers/common/qat/qat_common.c | 53 ++---
 drivers/common/qat/qat_common.h | 23 ++
 drivers/crypto/qat/qat_sym.c| 12 ++
 drivers/crypto/qat/qat_sym.h| 14 +--
 4 files changed, 71 insertions(+), 31 deletions(-)

diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index c206d3b..c25372d 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -8,40 +8,53 @@
 
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, uint64_t buf_start,
-   struct qat_sgl *list, uint32_t data_len)
+   void *list_in, uint32_t data_len,
+   const int32_t max_segs)
 {
int nr = 1;
-
-   uint32_t buf_len = rte_pktmbuf_iova(buf) -
-   buf_start + rte_pktmbuf_data_len(buf);
+   struct qat_sgl *list = (struct qat_sgl *)list_in;
+   /* buf_start allows the first buffer to start at an address before or
+* after the mbuf data start. It's used to either optimally align the
+* dma to 64 or to start dma from an offset.
+*/
+   uint32_t buf_len;
+   uint32_t first_buf_len = rte_pktmbuf_data_len(buf) +
+   (rte_pktmbuf_mtophys(buf) - buf_start);
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+   uint8_t *virt_addr[max_segs];
+   virt_addr[0] = rte_pktmbuf_mtod(buf, uint8_t*) +
+   (rte_pktmbuf_mtophys(buf) - buf_start);
+#endif
 
list->buffers[0].addr = buf_start;
list->buffers[0].resrvd = 0;
-   list->buffers[0].len = buf_len;
+   list->buffers[0].len = first_buf_len;
 
-   if (data_len <= buf_len) {
+   if (data_len <= first_buf_len) {
list->num_bufs = nr;
list->buffers[0].len = data_len;
-   return 0;
+   goto sgl_end;
}
 
buf = buf->next;
+   buf_len = first_buf_len;
while (buf) {
-   if (unlikely(nr == QAT_SGL_MAX_NUMBER)) {
-   QAT_LOG(ERR,
-   "QAT PMD exceeded size of QAT SGL entry(%u)",
-   QAT_SGL_MAX_NUMBER);
+   if (unlikely(nr == max_segs)) {
+   QAT_DP_LOG(ERR, "Exceeded max segments in QAT SGL (%u)",
+   max_segs);
return -EINVAL;
}
 
list->buffers[nr].len = rte_pktmbuf_data_len(buf);
list->buffers[nr].resrvd = 0;
-   list->buffers[nr].addr = rte_pktmbuf_iova(buf);
-
+   list->buffers[nr].addr = rte_pktmbuf_mtophys(buf);
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+   virt_addr[nr] = rte_pktmbuf_mtod(buf, uint8_t*);
+#endif
buf_len += list->buffers[nr].len;
buf = buf->next;
 
-   if (buf_len > data_len) {
+   if (buf_len >= data_len) {
list->buffers[nr].len -=
buf_len - data_len;
buf = NULL;
@@ -50,6 +63,18 @@ qat_sgl_fill_array(struct rte_mbuf *buf, uint64_t buf_start,
}
list->num_bufs = nr;
 
+sgl_end:
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+   QAT_DP_LOG(INFO, "SGL with %d buffers:", list->num_bufs);
+   for (uint8_t i = 0; i < list->num_bufs; i++) {
+   QAT_DP_LOG(INFO, "QAT SGL buf %d, len = %d, iova = 0x%012lx",
+   i, list->buffers[i].len,
+   list->buffers[i].addr);
+   QAT_DP_HEXDUMP_LOG(DEBUG, "qat SGL",
+   virt_addr[i], list->buffers[i].len);
+   }
+#endif
+
return 0;
 }
 
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index db85d54..e6da7fb 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -10,11 +10,6 @@
 
 /**< Intel(R) QAT device name for PCI registration */
 #define QAT_PCI_NAME   qat
-/*
- * Maximum number of SGL entries
- */
-#define QAT_SGL_MAX_NUMBER 16
-
 #define QAT_64_BTYE_ALIGN_MASK (~0x3f)
 
 /* Intel(R) QuickAssist Technology device generation is enumerated
@@ -31,6 +26,7 @@ enum qat_service_type {
QAT_SERVICE_COMPRESSION,
QAT_SERVICE_INVALID
 };
+
 #define QAT_MAX_SERVICES   (QAT_SERVICE_INVALID)
 
 /**< Common struct for scatter-gather list operations */
@@ -40,11 +36,17 @@ struct qat_flat_buf {
uint64_t addr;
 } __rte_packed;
 
+#define qat_sgl_hdr  struct { \
+   uint64_t resrvd; \
+   uint32_t num_bufs; \
+   uint32_t num_mapped_bufs; \
+}
+
+__extension__
 struct qat_sgl {
-   uint64_t resrvd;
-   uint32_t num_bufs;
-   uint32_t num_mapped_bufs;
-   

[dpdk-dev] [PATCH v2] librte_ethdev: improve description for port name api

2018-07-17 Thread Jasvinder Singh
Improve the description of the APIs used to get the port name
from the port id, or vice versa.

Signed-off-by: Jasvinder Singh 
---
v2
- fixed checkpatch warning

 lib/librte_ethdev/rte_ethdev.h | 19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f5f593b..e1d0df2 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -3629,11 +3629,11 @@ rte_eth_dev_l2_tunnel_offload_set(uint16_t port_id,
  uint8_t en);
 
 /**
-* Get the port id from pci address or device name
-* Example:
-* - PCIe, 0000:2:00.0
-* - SoC, fsl-gmac0
-* - vdev, net_pcap0
+* Get the port id from device name. The device name should be specified
+* as below:
+* - PCIe address (Domain:Bus:Device.Function), for example- 0000:2:00.0
+* - SoC device name, for example- fsl-gmac0
+* - vdev dpdk name, for example- net_[pcap0|null0|tap0]
 *
 * @param name
 *  pci address or name of the device
@@ -3647,11 +3647,10 @@ int
 rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id);
 
 /**
-* Get the device name from port id
-* Example:
-* - PCIe Bus:Domain:Function, 0000:02:00.0
-* - SoC device name, fsl-gmac0
-* - vdev dpdk name, net_[pcap0|null0|tun0|tap0]
+* Get the device name from port id. The device name is specified as below:
+* - PCIe address (Domain:Bus:Device.Function), for example- 0000:02:00.0
+* - SoC device name, for example- fsl-gmac0
+* - vdev dpdk name, for example- net_[pcap0|null0|tun0|tap0]
 *
 * @param port_id
 *   Port identifier of the device.
-- 
2.9.3



[dpdk-dev] [PATCH] mempool: check for invalid args on creation

2018-07-17 Thread Pablo de Lara
Currently, a mempool can be created even if the number of objects
or the size of those objects is zero. In these scenarios,
rte_mempool_create should return NULL, as the created mempool
is useless.

Signed-off-by: Pablo de Lara 
---
 lib/librte_mempool/rte_mempool.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 8c8b9f809..8c9573f1a 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -916,6 +916,18 @@ rte_mempool_create_empty(const char *name, unsigned n, 
unsigned elt_size,
 
mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
 
+   /* asked for zero items */
+   if (n == 0) {
+   rte_errno = EINVAL;
+   return NULL;
+   }
+
+   /* asked for zero-sized elements */
+   if (elt_size == 0) {
+   rte_errno = EINVAL;
+   return NULL;
+   }
+
/* asked cache too big */
if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
-- 
2.14.4
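The added checks amount to validating arguments before any allocation work begins. Below is a hypothetical Python model of that up-front validation, with an exception carrying the errno standing in for the `rte_errno = EINVAL; return NULL` pattern (the cache-size check mirrors the existing one shown in the diff context; `cache_max` is an assumed stand-in for RTE_MEMPOOL_CACHE_MAX_SIZE):

```python
import errno


class MempoolError(Exception):
    """Stands in for the rte_errno + NULL-return convention."""
    def __init__(self, code):
        self.code = code


def mempool_create_checked(n, elt_size, cache_size, cache_max=512):
    # asked for zero items or zero-sized elements: reject up front,
    # mirroring the new EINVAL paths in the patch
    if n == 0 or elt_size == 0:
        raise MempoolError(errno.EINVAL)
    # asked cache too big (existing check, kept for completeness)
    if cache_size > cache_max:
        raise MempoolError(errno.EINVAL)
    return {"n": n, "elt_size": elt_size, "cache_size": cache_size}
```

Rejecting degenerate arguments before any tailq or memory work keeps callers from receiving a pool they can never get objects from.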



[dpdk-dev] [PATCH] examples/l2fwd-crypto: fix session mempool size

2018-07-17 Thread Pablo de Lara
The session mempool size for this application depends on the number
of crypto devices that are capable of performing the operation
given by the application parameters.

However, previously this calculation was done before all devices
were checked, resulting in an incorrect number of required
sessions.

Now the calculation of the devices to be used is done first,
followed by the creation of the session pool, resulting
in a correct number of objects needed for the sessions
to be created.

Fixes: e3bcb99a5e13 ("examples/l2fwd-crypto: limit number of sessions")

Signed-off-by: Pablo de Lara 
---
 examples/l2fwd-crypto/main.c | 541 +++
 1 file changed, 341 insertions(+), 200 deletions(-)

diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 9ac06a697..93bce583c 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -1931,21 +1931,19 @@ check_supported_size(uint16_t length, uint16_t min, uint16_t max,
 static int
 check_iv_param(const struct rte_crypto_param_range *iv_range_size,
unsigned int iv_param, int iv_random_size,
-   uint16_t *iv_length)
+   uint16_t iv_length)
 {
/*
 * Check if length of provided IV is supported
 * by the algorithm chosen.
 */
if (iv_param) {
-   if (check_supported_size(*iv_length,
+   if (check_supported_size(iv_length,
iv_range_size->min,
iv_range_size->max,
iv_range_size->increment)
-   != 0) {
-   printf("Unsupported IV length\n");
+   != 0)
return -1;
-   }
/*
 * Check if length of IV to be randomly generated
 * is supported by the algorithm chosen.
@@ -1955,14 +1953,250 @@ check_iv_param(const struct rte_crypto_param_range *iv_range_size,
iv_range_size->min,
iv_range_size->max,
iv_range_size->increment)
-   != 0) {
-   printf("Unsupported IV length\n");
+   != 0)
+   return -1;
+   }
+
+   return 0;
+}
+
+static int
+check_capabilities(struct l2fwd_crypto_options *options, uint8_t cdev_id)
+{
+   struct rte_cryptodev_info dev_info;
+   const struct rte_cryptodev_capabilities *cap;
+
+   rte_cryptodev_info_get(cdev_id, &dev_info);
+
+   /* Set AEAD parameters */
+   if (options->xform_chain == L2FWD_CRYPTO_AEAD) {
+   /* Check if device supports AEAD algo */
+   cap = check_device_support_aead_algo(options, &dev_info,
+   cdev_id);
+   if (cap == NULL)
+   return -1;
+
+   if (check_iv_param(&cap->sym.aead.iv_size,
+   options->aead_iv_param,
+   options->aead_iv_random_size,
+   options->aead_iv.length) != 0) {
+   RTE_LOG(DEBUG, USER1,
+   "Device %u does not support IV length\n",
+   cdev_id);
return -1;
}
-   *iv_length = iv_random_size;
-   /* No size provided, use minimum size. */
-   } else
-   *iv_length = iv_range_size->min;
+
+   /*
+* Check if length of provided AEAD key is supported
+* by the algorithm chosen.
+*/
+   if (options->aead_key_param) {
+   if (check_supported_size(
+   options->aead_xform.aead.key.length,
+   cap->sym.aead.key_size.min,
+   cap->sym.aead.key_size.max,
+   cap->sym.aead.key_size.increment)
+   != 0) {
+   RTE_LOG(DEBUG, USER1,
+   "Device %u does not support "
+   "AEAD key length\n",
+   cdev_id);
+   return -1;
+   }
+   /*
+* Check if length of the aead key to be randomly generated
+* is supported by the algorithm chosen.
+*/
+   } else if (options->aead_key_random_size != -1) {
+   if (check_supported_size(options->aead_key_random_size,
+   cap->sym.aead.key_size.min,
+   cap->sym.aead.key_size.max,
+   
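The reordering described in the commit message reduces to: filter for the devices capable of the configured operation first, then size the session pool from the filtered count. A hedged sketch of that ordering (`supports` is an arbitrary capability predicate and `sessions_per_dev` an assumed figure, neither taken from the patch):

```python
def size_session_pool(devices, supports, sessions_per_dev=2):
    """Pick only the devices capable of the configured operation
    first, then size the session pool from that count. Doing the
    sizing before filtering (the old order) counts devices that
    will never create sessions."""
    capable = [d for d in devices if supports(d)]
    return capable, len(capable) * sessions_per_dev
```

With four devices of which only two pass the capability check, the pool is sized for two devices' worth of sessions rather than four.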

[dpdk-dev] [PATCH 1/2] test/hash: fix multiwriter with non consecutive cores

2018-07-17 Thread Pablo de Lara
When non-consecutive cores are passed into the test application,
the distribution of the keys that each thread needs to insert
is incorrect, since the code assumes that no cores are skipped
between the master core and the worker cores.

Fixes: be856325cba3 ("hash: add scalable multi-writer insertion with Intel TSX")
Cc: sta...@dpdk.org

Signed-off-by: Pablo de Lara 
---
 test/test/test_hash_multiwriter.c | 41 ++-
 1 file changed, 36 insertions(+), 5 deletions(-)

diff --git a/test/test/test_hash_multiwriter.c b/test/test/test_hash_multiwriter.c
index f182f4052..acd6a91ca 100644
--- a/test/test/test_hash_multiwriter.c
+++ b/test/test/test_hash_multiwriter.c
@@ -48,18 +48,29 @@ static rte_atomic64_t ginsertions;
 static int use_htm;
 
 static int
-test_hash_multiwriter_worker(__attribute__((unused)) void *arg)
+test_hash_multiwriter_worker(void *arg)
 {
uint64_t i, offset;
+   uint16_t pos_core;
uint32_t lcore_id = rte_lcore_id();
uint64_t begin, cycles;
+   uint16_t *enabled_core_ids = (uint16_t *)arg;
 
-   offset = (lcore_id - rte_get_master_lcore())
-   * tbl_multiwriter_test_params.nb_tsx_insertion;
+   for (pos_core = 0; pos_core < rte_lcore_count(); pos_core++) {
+   if (enabled_core_ids[pos_core] == lcore_id)
+   break;
+   }
+
+   /*
+* Calculate offset for entries based on the position of the
+* logical core, from the master core (not counting not enabled cores)
+*/
+   offset = pos_core * tbl_multiwriter_test_params.nb_tsx_insertion;
 
printf("Core #%d inserting %d: %'"PRId64" - %'"PRId64"\n",
   lcore_id, tbl_multiwriter_test_params.nb_tsx_insertion,
-  offset, offset + tbl_multiwriter_test_params.nb_tsx_insertion);
+  offset,
+  offset + tbl_multiwriter_test_params.nb_tsx_insertion - 1);
 
begin = rte_rdtsc_precise();
 
@@ -88,6 +99,8 @@ test_hash_multiwriter(void)
 {
unsigned int i, rounded_nb_total_tsx_insertion;
static unsigned calledCount = 1;
+   uint16_t enabled_core_ids[RTE_MAX_LCORE];
+   uint16_t core_id;
 
uint32_t *keys;
uint32_t *found;
@@ -159,9 +172,27 @@ test_hash_multiwriter(void)
rte_atomic64_init(&ginsertions);
rte_atomic64_clear(&ginsertions);
 
+   /* Get list of enabled cores */
+   i = 0;
+   for (core_id = 0; core_id < RTE_MAX_LCORE; core_id++) {
+   if (i == rte_lcore_count())
+   break;
+
+   if (rte_lcore_is_enabled(core_id)) {
+   enabled_core_ids[i] = core_id;
+   i++;
+   }
+   }
+
+   if (i != rte_lcore_count()) {
+   printf("Number of enabled cores in list is different from "
+   "number given by rte_lcore_count()\n");
+   goto err3;
+   }
+
/* Fire all threads. */
rte_eal_mp_remote_launch(test_hash_multiwriter_worker,
-NULL, CALL_MASTER);
+enabled_core_ids, CALL_MASTER);
rte_eal_mp_wait_lcore();
 
count = rte_hash_count(handle);
-- 
2.14.4
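The offset fix above can be modelled directly: the buggy formula assumed lcore ids are consecutive starting at the master core, while the fix uses the core's position within the list of enabled cores. A minimal sketch of the two computations:

```python
def offset_buggy(lcore_id, master, insertions_per_core):
    # wrong when enabled cores are non-consecutive: skipped core ids
    # inflate the offset
    return (lcore_id - master) * insertions_per_core


def offset_fixed(lcore_id, enabled_core_ids, insertions_per_core):
    # position of the lcore among the enabled cores, as in the patch's
    # pos_core loop over enabled_core_ids
    return enabled_core_ids.index(lcore_id) * insertions_per_core
```

With enabled cores [0, 2, 4] and 1000 insertions per core, core 4 should start at offset 2000 (position 2), but the old formula placed it at 4000, leaving gaps in the key range; for consecutive cores the two formulas agree.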



[dpdk-dev] [PATCH 2/2] test/hash: fix potential memory leak

2018-07-17 Thread Pablo de Lara
In the multiwriter test, if the "found" array allocation failed,
the previously allocated "keys" array could not be freed: at that
point tbl_multiwriter_test_params.keys, which is the pointer freed
when the test finishes or a failure happens, had not yet been set
to that array.

To solve this, tbl_multiwriter_test_params.keys is set to the "keys"
address just after the array is allocated and filled.

Fixes: be856325cba3 ("hash: add scalable multi-writer insertion with Intel TSX")
Cc: sta...@dpdk.org

Signed-off-by: Pablo de Lara 
---
 test/test/test_hash_multiwriter.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/test/test/test_hash_multiwriter.c 
b/test/test/test_hash_multiwriter.c
index acd6a91ca..6a3eb10bd 100644
--- a/test/test/test_hash_multiwriter.c
+++ b/test/test/test_hash_multiwriter.c
@@ -154,16 +154,17 @@ test_hash_multiwriter(void)
goto err1;
}
 
+   for (i = 0; i < nb_entries; i++)
+   keys[i] = i;
+
+   tbl_multiwriter_test_params.keys = keys;
+
found = rte_zmalloc(NULL, sizeof(uint32_t) * nb_entries, 0);
if (found == NULL) {
printf("RTE_ZMALLOC failed\n");
goto err2;
}
 
-   for (i = 0; i < nb_entries; i++)
-   keys[i] = i;
-
-   tbl_multiwriter_test_params.keys = keys;
tbl_multiwriter_test_params.found = found;
 
rte_atomic64_init(&gcycles);
-- 
2.14.4



[dpdk-dev] [PATCH v2] net/enic: pick the right Rx handler after changing MTU

2018-07-17 Thread John Daley
From: Hyong Youb Kim 

enic_set_mtu always reverts to the default Rx handler after changing
MTU. Try to use the simpler, non-scatter handler in this case as well.

Fixes: 35e2cb6a1795 ("net/enic: add simple Rx handler")

Signed-off-by: Hyong Youb Kim 
Reviewed-by: John Daley 
---
v2: remember to actually assign default handler

 drivers/net/enic/enic_main.c | 25 +
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index c8456c4b7..f04dc0878 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -514,6 +514,21 @@ static void enic_prep_wq_for_simple_tx(struct enic *enic, 
uint16_t queue_idx)
}
 }
 
+static void pick_rx_handler(struct enic *enic)
+{
+   struct rte_eth_dev *eth_dev;
+
+   /* Use the non-scatter, simplified RX handler if possible. */
+   eth_dev = enic->rte_dev;
+   if (enic->rq_count > 0 && enic->rq[0].data_queue_enable == 0) {
+   PMD_INIT_LOG(DEBUG, " use the non-scatter Rx handler");
+   eth_dev->rx_pkt_burst = &enic_noscatter_recv_pkts;
+   } else {
+   PMD_INIT_LOG(DEBUG, " use the normal Rx handler");
+   eth_dev->rx_pkt_burst = &enic_recv_pkts;
+   }
+}
+
 int enic_enable(struct enic *enic)
 {
unsigned int index;
@@ -571,13 +586,7 @@ int enic_enable(struct enic *enic)
eth_dev->tx_pkt_burst = &enic_xmit_pkts;
}
 
-   /* Use the non-scatter, simplified RX handler if possible. */
-   if (enic->rq_count > 0 && enic->rq[0].data_queue_enable == 0) {
-   PMD_INIT_LOG(DEBUG, " use the non-scatter Rx handler");
-   eth_dev->rx_pkt_burst = &enic_noscatter_recv_pkts;
-   } else {
-   PMD_INIT_LOG(DEBUG, " use the normal Rx handler");
-   }
+   pick_rx_handler(enic);
 
for (index = 0; index < enic->wq_count; index++)
enic_start_wq(enic, index);
@@ -1550,7 +1559,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
 
/* put back the real receive function */
rte_mb();
-   eth_dev->rx_pkt_burst = enic_recv_pkts;
+   pick_rx_handler(enic);
rte_mb();
 
/* restart Rx traffic */
-- 
2.16.2



[dpdk-dev] [PATCH] doc: update the enic guide and features

2018-07-17 Thread John Daley
From: Hyong Youb Kim 

Make a few updates in preparation for 18.08.
- Use SPDX
- Add 1400 series VIC adapters to supported models
- Describe the VXLAN port number
- Expand the description for ig-vlan-rewrite
- Add inner RSS and checksum to the features

Signed-off-by: Hyong Youb Kim 
Reviewed-by: John Daley 
---
 doc/guides/nics/enic.rst  | 76 +++
 doc/guides/nics/features/enic.ini |  3 ++
 2 files changed, 48 insertions(+), 31 deletions(-)

diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 7764c8648..438a83d5f 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -1,32 +1,7 @@
-..  BSD LICENSE
+..  SPDX-License-Identifier: BSD-3-Clause
 Copyright (c) 2017, Cisco Systems, Inc.
 All rights reserved.
 
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions
-are met:
-
-1. Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
-
-2. Redistributions in binary form must reproduce the above copyright
-notice, this list of conditions and the following disclaimer in
-the documentation and/or other materials provided with the
-distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGE.
-
 ENIC Poll Mode Driver
 =
 
@@ -336,6 +311,40 @@ it, set ``devargs`` parameter ``disable-overlay=1``. For 
example::
 
 -w 12:00.0,disable-overlay=1
 
+By default, the NIC uses 4789 as the VXLAN port. The user may change
+it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
+the current NIC has a single VXLAN port number, the user cannot
+configure multiple port numbers.
+
+Ingress VLAN Rewrite
+
+
+VIC adapters can tag, untag, or modify the VLAN headers of ingress
+packets. The ingress VLAN rewrite mode controls this behavior. By
+default, it is set to pass-through, where the NIC does not modify the
+VLAN header in any way so that the application can see the original
+header. This mode is sufficient for many applications, but may not be
+suitable for others. Such applications may change the mode by setting
+``devargs`` parameter ``ig-vlan-rewrite`` to one of the following.
+
+- ``pass``: Pass-through mode. The NIC does not modify the VLAN
+  header. This is the default mode.
+
+- ``priority``: Priority-tag default VLAN mode. If the ingress packet
+  is tagged with the default VLAN, the NIC replaces its VLAN header
+  with the priority tag (VLAN ID 0).
+
+- ``trunk``: Default trunk mode. The NIC tags untagged ingress packets
+  with the default VLAN. Tagged ingress packets are not modified. To
+  the application, every packet appears as tagged.
+
+- ``untag``: Untag default VLAN mode. If the ingress packet is tagged
+  with the default VLAN, the NIC removes or untags its VLAN header so
+  that the application sees an untagged packet. As a result, the
+  default VLAN becomes `untagged`. This mode can be useful for
+  applications such as OVS-DPDK performance benchmarks that utilize
+  only the default VLAN and want to see only untagged packets.
+
 .. _enic_limitations:
 
 Limitations
@@ -366,9 +375,9 @@ Another alternative is modify the adapter's ingress VLAN 
rewrite mode so that
 packets with the default VLAN tag are stripped by the adapter and presented to
 DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
 PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
-``devargs`` parameter ``ig-vlan-rewrite=1``. For example::
+``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
 
--w 12:00.0,ig-vlan-rewrite=1
+-w 12:00.0,ig-vlan-rewrite=untag
 
 - Limited flow director support on 1200 series and 1300 series Cisco VIC
   adapters with old firmware. Please see :ref:`enic-flow-director`.
@@ -405,10 +414,14 @@ PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This 
mode is enabled with the
 
   - ``rx_good_bytes`` (ibytes) always includes VLAN header (4B) and CRC bytes 
(4B).
 This behavior applies to 1300 and older series VIC adapters.
+1400 series VICs do not count CRC 

Re: [dpdk-dev] [PATCH] test: ensure EAL flags autotest works properly on BSD

2018-07-17 Thread Zhao, MeijuanX



-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Anatoly Burakov
Sent: Tuesday, July 17, 2018 12:34 AM
To: dev@dpdk.org
Cc: Liu, Yu Y ; Richardson, Bruce 
; Ananyev, Konstantin 
; sta...@dpdk.org
Subject: [dpdk-dev] [PATCH] test: ensure EAL flags autotest works properly on 
BSD

FreeBSD does not support running multiple primary processes concurrently, 
because all DPDK instances will allocate memory from the same place (memory 
provided by contigmem driver).
While it is technically possible to launch a DPDK process using the no-shconf 
switch, it will actually corrupt the main process' memory
for the above reason.

Fix EAL flags autotest to not run primary processes unless both no-shconf and 
no-huge are specified.

Cc: sta...@dpdk.org

Signed-off-by: Anatoly Burakov 
Tested-by: Wu, ChangqingX 


[dpdk-dev] Does lthread_cond_wait need a mutex?

2018-07-17 Thread wubenqing
Hi~
Reference: 
http://doc.dpdk.org/guides-18.05/sample_app_ug/performance_thread.html?highlight=lthread
The L-thread subsystem provides a set of functions that are logically 
equivalent to the corresponding functions offered by the POSIX pthread library.
I think there is a bug in the lthread implementation of pthread_cond_wait.
Look at this code; there are two lthreads:

lthread1:
pthread_mutex_lock(mutex); //a1
if (predicate == FALSE) {//a2
pthread_cond_wait(cond, mutex)//a3
}
pthread_mutex_unlock(mutex);//a4

int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
{
if (override) {
pthread_mutex_unlock(mutex); //a31
int rv = lthread_cond_wait(*(struct lthread_cond **)cond, 0); //a32

pthread_mutex_lock(mutex); //a33
return rv;
}
return _sys_pthread_funcs.f_pthread_cond_wait(cond, mutex);
}

lthread2:
pthread_mutex_lock(mutex);//b1
predicate = TRUE;//b2
pthread_mutex_unlock(mutex);//b3
pthread_cond_signal(cond);//b4


If the sequence is:
a1->a2->a31->b1->b2->b3->b4->a32
Will lthread1 sleep forever?


Wu Benqing (Fuzhou)