[vpp-dev] Enabling DPDK OCTEONTx mempool ops in VPP

2018-06-06 Thread Nitin Saxena

Hi Damjan,

I wanted a separate email thread for the OCTEONTx mempool ops.

> Regarding the octeon tx mempool, no idea what it is,
The OCTEONTx mempool ops use the hardware mempool allocator instead of the DPDK 
software mempool. This is not enabled by default in DPDK, but once 
enabled, all buffers are managed by the OCTEONTx hardware allocator.
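
(As a rough illustration only, not VPP code: a DPDK application can bind a pool to a hardware allocator either at build time, by pointing CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS at the hardware ops, or per pool via rte_mempool_set_ops_byname(). The ops name "octeontx_fpavf" below is my assumption about the OCTEONTx FPA mempool driver, so please verify it against your DPDK tree.)

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Sketch: create an empty pool and bind it to a hardware mempool
 * handler instead of the default software ring ("ring_mp_mc"). */
static struct rte_mempool *
create_hw_backed_pool (unsigned int n_elts, unsigned int elt_size)
{
  struct rte_mempool *mp;

  mp = rte_mempool_create_empty ("hw-pool", n_elts, elt_size,
                                 0 /* cache */, 0 /* priv */,
                                 rte_socket_id (), 0 /* flags */);
  if (mp == NULL)
    return NULL;

  /* "octeontx_fpavf" is assumed to be the registered ops name. */
  if (rte_mempool_set_ops_byname (mp, "octeontx_fpavf", NULL) != 0)
    {
      rte_mempool_free (mp);
      return NULL;
    }
  return mp;
}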


> VPP expects that buffer memory is allocated by VPP and then given to
> DPDK via rte_mempool_create_empty() and rte_mempool_populate_iova_tab().
I understood that VPP calls rte_mempool_populate_iova_tab(), but I 
want to know why VPP allocates buffers on its own instead of letting DPDK 
create its own buffers. Why can't VPP call rte_pktmbuf_pool_create()? 
Is there a limitation?
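
(For readers comparing the two approaches, here is a rough sketch, not VPP's actual code: element sizing is simplified, and a single contiguous chunk via rte_mempool_populate_iova() stands in for VPP's per-page rte_mempool_populate_iova_tab() call. The point of (b) is that the application owns the buffer memory and merely lends it to DPDK, which is what lets VPP share the same buffers with its non-DPDK code paths.)

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_memory.h>
#include <rte_mempool.h>

/* (a) DPDK owns the buffer memory: one call allocates and populates. */
static struct rte_mempool *
pool_dpdk_owned (void)
{
  return rte_pktmbuf_pool_create ("dpdk-owned", 16384, 256, 0,
                                  RTE_MBUF_DEFAULT_BUF_SIZE,
                                  rte_socket_id ());
}

/* (b) VPP-style: the caller allocated va/iova/len itself and hands the
 * memory to DPDK afterwards. */
static struct rte_mempool *
pool_app_owned (char *va, rte_iova_t iova, size_t len, unsigned int n_elts)
{
  struct rte_mempool *mp;
  unsigned int elt_size = sizeof (struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE;

  mp = rte_mempool_create_empty ("app-owned", n_elts, elt_size, 0,
                                 sizeof (struct rte_pktmbuf_pool_private),
                                 rte_socket_id (), 0);
  if (mp == NULL)
    return NULL;

  /* A pool handler must be chosen before populating. */
  rte_mempool_set_ops_byname (mp, "ring_mp_mc", NULL);
  rte_pktmbuf_pool_init (mp, NULL); /* set up mbuf pool private data */

  if (rte_mempool_populate_iova (mp, va, iova, len, NULL, NULL) < 0)
    {
      rte_mempool_free (mp);
      return NULL;
    }
  rte_mempool_obj_iter (mp, rte_pktmbuf_init, NULL); /* init each mbuf */
  return mp;
}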


Thanks,
Nitin

On Wednesday 06 June 2018 02:20 PM, Damjan Marion wrote:
> Regarding the octeon tx mempool, no idea what it is, but will not be
> surprised that it is not compatible with the way how we use buffer
> memory in vpp.
> VPP expects that buffer memory is allocated by VPP and then given to
> DPDK via rte_mempool_create_empty() and rte_mempool_populate_iova_tab().




Re: [vpp-dev] IKEv2 VPN tunnel working in one direction

2018-06-06 Thread Saurabh Jain via Lists.Fd.Io
Hi,

Any help here would be appreciated.
Please help with the configuration.

Thanks,
Saurabh Jain


VPP 18.01.2 Maintenance Release is complete [was: [vpp-dev] VPP 18.01.2 Release Artifacts have been published]

2018-06-06 Thread Dave Wallace

Folks,

The CSIT-VPP Test Report for the VPP 18.01.2 release is now available. I 
have updated the Documents section of the VPP main wiki page to provide 
links to the associated CSIT-VPP Test Reports for each VPP release: 
https://wiki.fd.io/view/VPP#Documents


Many thanks to the CSIT Team for defining the process of qualifying VPP 
Maintenance releases and producing the 18.01.2 CSIT-VPP Test Report.


VPP 18.01.2 Maintenance Release is now complete!

Cheers,
-daw-  "Your Friendly VPP 18.01 Release Manager"

On 5/18/18 6:22 PM, Dave Wallace wrote:

Folks,

The VPP 18.01.2 Maintenance Release artifacts are now available on 
nexus.fd.io and can be installed using the recipe at: 
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages



Thanks,
-daw-





Re: [vpp-dev] VAT

2018-06-06 Thread Dave Wallace

Xlangyun,

VAT (vpp api test) is currently used in CSIT to configure VPP during 
Functional and Performance testing.  There have been proposals to 
replace VAT in the CSIT test framework with the Python VAPI interface, 
but that has not been implemented yet.
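
(For anyone new to VAT, an interactive session looks roughly like the sketch below; the exact set of commands depends on the VPP build, and "show_version", "sw_interface_dump" and "exec" are the commands I am assuming here.)

$ sudo vpp_api_test
vat# show_version
vat# sw_interface_dump
vat# exec show interface
vat# quit

CSIT drives the same binary API, either through VAT scripts or, per the proposal above, directly through the Python VAPI bindings.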


You can search the CSIT Wiki and Documentation for references:

https://wiki.fd.io/view/CSIT
https://docs.fd.io/csit/master/doc

Thanks,
-daw-

On 6/6/18 12:00 AM, xulang wrote:

Hi all,
Are there any files that tell us when and how the vat module is used?

Regards,
Xlangyun







Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread Ravi Kerur
Steven, Damjan, Nitin,

Let me clarify so there is no confusion; since you are assisting me in
getting this working, I will make sure we are all on the same page. I believe
OcteonTx is related to Cavium/ARM, and I am not using it.

DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
2MB I had to use the '--single-file-segments' option.

There used to be a way in DPDK to tell the compiler to target a
certain architecture, e.g. 'nehalem'. I will try that option, but first I want
to make sure the steps I am executing are correct.

(1) I compile VPP (18.04) code on an x86_64 system with the following
CPU flags. My system has 'avx, avx2, sse3, sse4_2' for SIMD.

fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts

(2) I run VPP on the same system.

(3) VPP on host has following startup.conf
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
  no-pci

  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
  vdev net_vhost1,iface=/var/run/vpp/sock2.sock

  huge-dir /dev/hugepages_1G
  socket-mem 2,0
}

(4) VPP vhost-user config (on host)
create vhost socket /var/run/vpp/sock3.sock
set interface state VirtualEthernet0/0/0 up
set interface ip address VirtualEthernet0/0/0 10.1.1.1/24

(5) show dpdk version (Version is the same on host and container, EAL
params are different)
DPDK Version: DPDK 18.02.1
DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1G
--master-lcore 0 --socket-mem 2,0

(6) Container is instantiated as follows
docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages_1G:/dev/hugepages_1G dpdk-app-vpp:latest

(7) VPP startup.conf inside the container is as follows
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
  no-pci
  huge-dir /dev/hugepages_1G
  socket-mem 1,0
  vdev virtio_user0,path=/var/run/usvhost1
}

(8) VPP virtio-user config (on the container)
set interface state VirtioUser0/0/0  up
set interface ip address VirtioUser0/0/0 10.1.1.2/24

(9) Ping... VPP on the host crashes. I sent one backtrace yesterday. This
morning I tried again; no backtrace, only the following messages:

Program received signal SIGSEGV, Segmentation fault.
0x7fd6f2ba3070 in dpdk_input_avx2 () from
target:/usr/lib/vpp_plugins/dpdk_plugin.so
(gdb)
Continuing.

Program received signal SIGABRT, Aborted.
0x7fd734860428 in raise () from target:/lib/x86_64-linux-gnu/libc.so.6
(gdb)
Continuing.

Program terminated with signal SIGABRT, Aborted.
The program no longer exists.
(gdb) bt
No stack.
(gdb)
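
(Side note: when the live gdb session ends with "No stack." like this, a usable backtrace usually has to come from the core file instead; a sketch, assuming vpp is installed at /usr/bin/vpp and the core lands wherever kernel.core_pattern points:)

$ ulimit -c unlimited
$ sudo gdb /usr/bin/vpp /path/to/core
(gdb) bt full
(gdb) info threads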

Thanks.

On Wed, Jun 6, 2018 at 1:50 AM, Damjan Marion  wrote:
>
> Now i'm completely confused, is this on x86 or octeon tx?
>
> Regarding the octeon tx mempool, no idea what it is, but will not be
> surprised that it is not compatible with the way how we use buffer memory in
> vpp.
> VPP expects that buffer memory is allocated by VPP and then given to DPDK
> via rte_mempool_create_empty() and rte_mempool_populate_iova_tab().
>
> On 6 Jun 2018, at 06:51, Saxena, Nitin  wrote:
>
> Hi Ravi,
>
> Two things to get vhost-user running on OCTEONTx
>
> 1) use either 1 GB hugepages or 512 MB. This you did.
>
> 2) You need one dpdk patch that I merged in dpdk-18.05 related to OcteonTx
> MTU. You can get patch from dpdk git (search for nsaxena)
>
> Hi damjan,
>
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> I was about to send an email regarding the OCTEONTX mempool, as we enabled it and
> are running into issues. Any pointers will be helpful, as I haven't reached the
> root cause of the issue yet.
>
> Thanks,
> Nitin
>
> On 06-Jun-2018, at 01:40, Damjan Marion  wrote:
>
> Dear Ravi,
>
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> Regards,
>
> Damjan
>
> On 5 Jun 2018, at 21:46, Ravi Kerur  wrote:
>
> Steven,
>
> I managed to get Tx/Rx rings setup with 1GB hugepages. However, when I
> assign an IP address to both vhost-user/virtio interfaces and initiate
> a ping VPP crashes.
>
> Any other mechanism available to test Tx/Rx path between Vhost and
> Virtio? Details below.
>
>
> ***On host***
> vpp#show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
>  coalesce

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread Nitin Saxena
Hi Ravi,

Sorry for diluting your topic. From your stack trace and 'show interface' output 
I thought you were using OCTEONTx.

Regards,
Nitin

> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
> 
> Steven, Damjan, Nitin,
> 
> Let me clarify so there is no confusion, since you are assisting me to
> get this working I will make sure we are all on same page. I believe
> OcteonTx is related to Cavium/ARM and I am not using it.
> 
> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
> 2MB I had to use '--single-file-segments' option.
> 
> There used to be a way in DPDK to influence compiler to compile for
> certain architecture f.e. 'nehalem'. I will try that option but I want
> to make sure steps I am executing is fine first.
> 
> (1) I compile VPP (18.04) code on x86_64 system with following
> CPUFLAGS. My system has 'avx, avx2, sse3, see4_2' for SIMD.
> 
> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
> 
> (2) I run VPP on the same system.
> 
> (3) VPP on host has following startup.conf
> unix {
>  nodaemon
>  log /var/log/vpp/vpp.log
>  full-coredump
>  cli-listen /run/vpp/cli.sock
>  gid vpp
> }
> 
> api-trace {
>  on
> }
> 
> api-segment {
>  gid vpp
> }
> 
> dpdk {
>  no-pci
> 
>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
> 
>  huge-dir /dev/hugepages_1G
>  socket-mem 2,0
> }
> 
> (4) VPP vhost-user config (on host)
> create vhost socket /var/run/vpp/sock3.sock
> set interface state VirtualEthernet0/0/0 up
> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
> 
> (5) show dpdk version (Version is the same on host and container, EAL
> params are different)
> DPDK Version: DPDK 18.02.1
> DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
> net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
> net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1G
> --master-lcore 0 --socket-mem 2,0
> 
> (6) Container is instantiated as follows
> docker run -it --privileged -v
> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
> /dev/hugepages_1G:/dev/hugepages_1G dpdk-app-vpp:latest
> 
> (6) VPP startup.conf inside container is as follows
> unix {
>  nodaemon
>  log /var/log/vpp/vpp.log
>  full-coredump
>  cli-listen /run/vpp/cli.sock
>  gid vpp
> }
> 
> api-trace {
>  on
> }
> 
> api-segment {
>  gid vpp
> }
> 
> dpdk {
>  no-pci
>  huge-dir /dev/hugepages_1G
>  socket-mem 1,0
>  vdev virtio_user0,path=/var/run/usvhost1
> }
> 
> (7) VPP virtio-user config (on container)
> set interface state VirtioUser0/0/0  up
> set interface ip address VirtioUser0/0/0 10.1.1.2/24
> 
> (8) Ping... VP on host crashes. I sent one backtrace yesterday. Today
> morning tried again, no backtrace but following messages
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0x7fd6f2ba3070 in dpdk_input_avx2 () from
> target:/usr/lib/vpp_plugins/dpdk_plugin.so
> (gdb)
> Continuing.
> 
> Program received signal SIGABRT, Aborted.
> 0x7fd734860428 in raise () from target:/lib/x86_64-linux-gnu/libc.so.6
> (gdb)
> Continuing.
> 
> Program terminated with signal SIGABRT, Aborted.
> The program no longer exists.
> (gdb) bt
> No stack.
> (gdb)
> 
> Thanks.
> 
>> On Wed, Jun 6, 2018 at 1:50 AM, Damjan Marion  wrote:
>> 
>> Now i'm completely confused, is this on x86 or octeon tx?
>> 
>> Regarding the octeon tx mempool, no idea what it is, but will not be
>> surprised that it is not compatible with the way how we use buffer memory in
>> vpp.
>> VPP expects that buffer memory is allocated by VPP and then given to DPDK
>> via rte_mempool_create_empty() and rte_mempool_populate_iova_tab().
>> 
>> On 6 Jun 2018, at 06:51, Saxena, Nitin  wrote:
>> 
>> Hi Ravi,
>> 
>> Two things to get vhost-user running on OCTEONTx
>> 
>> 1) use either 1 GB hugepages or 512 MB. This you did.
>> 
>> 2) You need one dpdk patch that I merged in dpdk-18.05 related to OcteonTx
>> MTU. You can get patch from dpdk git (search for nsaxena)
>> 
>> Hi damjan,
>> 
>> Currently we don't support Octeon TX mempool. Are you intentionally using
>> it?
>> 
>> I was about to send email regarding OCTEONTX mempool, as we enabled it and
>> running into issuea. Any pointers will be helpful as I didn't reach to the
>> root cause of the issue
>> 
>> Thanks,
>> Nitin
>> 
>> On 06-Jun-2018, at 01:40, Damjan Marion  wrote:
>> 
>> Dear Ravi,
>> 
>> Currently we don't support Octeon TX mempool. Are you intentionally using
>> it?
>> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread Ravi Kerur
Damjan, Steven,

I will get back to the system on which VPP is crashing and get more
info on it later.

For now, I got hold of another system (same 16.04 x86_64) and tried
the same configuration:

VPP vhost-user on host
VPP virtio-user on a container

This time VPP didn't crash, but ping doesn't work. Both the vhost-user
and virtio interfaces are transmitting and receiving packets. What do I need to
enable so that ping works?

(1) on host:
show interface
              Name               Idx    State  Counter          Count
VhostEthernet0                     1     down
VhostEthernet1                     2     down
VirtualEthernet0/0/0               3       up  rx packets           5
                                               rx bytes           210
                                               tx packets           5
                                               tx bytes           210
                                               drops               10
local0                             0     down
vpp# show ip arp
vpp#


(2) On container
show interface
              Name               Idx    State  Counter          Count
VirtioUser0/0/0                    1       up  rx packets           5
                                               rx bytes           210
                                               tx packets           5
                                               tx bytes           210
                                               drops               10
local0                             0     down
vpp# show ip arp
vpp#

Thanks.

On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin  wrote:
> Hi Ravi,
>
> Sorry for diluting your topic. From your stack trace and show interface 
> output I thought you are using OCTEONTx.
>
> Regards,
> Nitin
>
>> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
>>
>> Steven, Damjan, Nitin,
>>
>> Let me clarify so there is no confusion, since you are assisting me to
>> get this working I will make sure we are all on same page. I believe
>> OcteonTx is related to Cavium/ARM and I am not using it.
>>
>> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
>> 2MB I had to use '--single-file-segments' option.
>>
>> There used to be a way in DPDK to influence compiler to compile for
>> certain architecture f.e. 'nehalem'. I will try that option but I want
>> to make sure steps I am executing is fine first.
>>
>> (1) I compile VPP (18.04) code on x86_64 system with following
>> CPUFLAGS. My system has 'avx, avx2, sse3, see4_2' for SIMD.
>>
>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
>> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
>> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
>> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
>> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
>> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
>> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
>> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
>> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
>>
>> (2) I run VPP on the same system.
>>
>> (3) VPP on host has following startup.conf
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>
>>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
>>
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 2,0
>> }
>>
>> (4) VPP vhost-user config (on host)
>> create vhost socket /var/run/vpp/sock3.sock
>> set interface state VirtualEthernet0/0/0 up
>> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>
>> (5) show dpdk version (Version is the same on host and container, EAL
>> params are different)
>> DPDK Version: DPDK 18.02.1
>> DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
>> net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
>> net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1G
>> --master-lcore 0 --socket-mem 2,0
>>
>> (6) Container is instantiated as follows
>> docker run -it --privileged -v
>> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
>> /dev/hugepages_1G:/dev/hugepages_1G dpdk-app-vpp:latest
>>
>> (6) VPP startup.conf inside container is as follows
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 1,0
>>  vdev virtio_user0,path=/var/run/usvhost1
>> }
>>
>> (7) VPP virtio-user config (on container)
>> set interface state VirtioUser0/0/0  up
>> set interface ip address VirtioUser0/0/0 10.1.1.2/24
>>
>> (8) Ping... VP on ho

Re: [vpp-dev] DPDK 18.05 is out

2018-06-06 Thread Damjan Marion

The patch is in gerrit[1], but it keeps 18.02 as the default. We will bump to 18.05 in 
a separate one.

This exercise resulted in one bug found in DPDK 18.05. A patch[2] has been sent upstream...

Due to this bug, if VPP is linked against non-patched DPDK 18.05, the XL710 VF 
driver will not work. VPP will skip the device and print a warning message.

[1] https://gerrit.fd.io/r/#/c/12924
[2] https://gerrit.fd.io/r/#/c/12924/3/dpdk/dpdk-18.05_patches/0001-i40evf-don-t-reset-device_info-data.patch


> On 31 May 2018, at 11:51, Damjan Marion  wrote:
> 
> Folks,
> 
> DPDK 18.05 is out and the required changes are not zero, so I will need to spend 
> some time on it.
> Several APIs we use are deprecated.
> 
> Apart from that, I'm planning to remove the dpdk HQOS code, or rather to move it 
> under extras/dpdk-hqos,
> unless anybody is willing to step up and volunteer to be the "maintainer" of 
> that code.
> Please let me know of any interested parties.
> 
> Thanks,
> 
> -- 
> Damjan
> 



Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread steven luong via Lists.Fd.Io
Ravi,

I suppose you already checked the obvious: that the vhost connection is 
established and that shared memory has at least 1 region in "show vhost". For the 
traffic issue, use "show error" to see why packets are dropping, and use "trace add 
vhost-user-input" plus "show trace" to see if vhost is getting the packet.
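
In CLI terms, that checklist looks roughly like the following (a sketch; the interface name is taken from the earlier output and the trace count is arbitrary):

vpp# show vhost-user VirtualEthernet0/0/0    <- expect at least 1 memory region
vpp# show error
vpp# trace add vhost-user-input 10
(generate a few pings from the container side)
vpp# show trace
vpp# clear trace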

Steven

On 6/6/18, 1:33 PM, "Ravi Kerur"  wrote:

Damjan, Steven,

I will get back to the system on which VPP is crashing and get more
info on it later.

For now, I got hold of another system (same 16.04 x86_64) and I tried
with the same configuration

VPP vhost-user on host
VPP virtio-user on a container

This time VPP didn't crash. Ping doesn't work though. Both vhost-user
and virtio are transmitting and receiving packets. What do I need to
enable so that ping works?

(1) on host:
show interface
  Name   Idx   State  Counter
Count
VhostEthernet01down
VhostEthernet12down
VirtualEthernet0/0/0  3 up   rx packets
 5
 rx bytes
   210
 tx packets
 5
 tx bytes
   210
 drops
10
local00down
vpp# show ip arp
vpp#


(2) On container
show interface
  Name   Idx   State  Counter
Count
VirtioUser0/0/0   1 up   rx packets
 5
 rx bytes
   210
 tx packets
 5
 tx bytes
   210
 drops
10
local00down
vpp# show ip arp
vpp#

Thanks.

On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin  
wrote:
> Hi Ravi,
>
> Sorry for diluting your topic. From your stack trace and show interface 
output I thought you are using OCTEONTx.
>
> Regards,
> Nitin
>
>> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
>>
>> Steven, Damjan, Nitin,
>>
>> Let me clarify so there is no confusion, since you are assisting me to
>> get this working I will make sure we are all on same page. I believe
>> OcteonTx is related to Cavium/ARM and I am not using it.
>>
>> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
>> 2MB I had to use '--single-file-segments' option.
>>
>> There used to be a way in DPDK to influence compiler to compile for
>> certain architecture f.e. 'nehalem'. I will try that option but I want
>> to make sure steps I am executing is fine first.
>>
>> (1) I compile VPP (18.04) code on x86_64 system with following
>> CPUFLAGS. My system has 'avx, avx2, sse3, see4_2' for SIMD.
>>
>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
>> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
>> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
>> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
>> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
>> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
>> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
>> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
>> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
>>
>> (2) I run VPP on the same system.
>>
>> (3) VPP on host has following startup.conf
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>
>>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
>>
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 2,0
>> }
>>
>> (4) VPP vhost-user config (on host)
>> create vhost socket /var/run/vpp/sock3.sock
>> set interface state VirtualEthernet0/0/0 up
>> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>
>> (5) show dpdk version (Version is the same on host and container, EAL
>> params are different)
>> DPDK Version: DPDK 18.02.1
>> DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
>> net_vhost0,iface

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread Ravi Kerur
Hi Steven,

Shared memory is set up correctly. I am seeing the following errors. The system
on which there is no crash doesn't support 1G hugepages, so I have
to use 2M hugepages with the following VPP config.



(1) host
vpp# show error
   Count                    Node                  Reason
vpp# show error
   Count                    Node                  Reason
       5        vhost-user-input                  mmap failure
       5          ethernet-input                  l3 mac mismatch
vpp#

(2) Host VPP config
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
  no-pci

  huge-dir /dev/hugepages
  socket-mem 16,0
}

(3) Container
vpp# show error
   Count                    Node                  Reason
       5               ip4-glean                  ARP requests sent
vpp#

(4) Container VPP config

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
  no-pci
  huge-dir /dev/hugepages
  socket-mem 8,0
  vdev virtio_user0,path=/var/run/usvhost1
}

Thanks.

On Wed, Jun 6, 2018 at 1:53 PM, Steven Luong (sluong)  wrote:
> Ravi,
>
> I supposed you already checked the obvious that the vhost connection is 
> established and shared memory has at least 1 region in show vhost. For 
> traffic issue, use show error to see why packets are dropping. trace add 
> vhost-user-input and show trace to see if vhost is getting the packet.
>
> Steven
>
> On 6/6/18, 1:33 PM, "Ravi Kerur"  wrote:
>
> Damjan, Steven,
>
> I will get back to the system on which VPP is crashing and get more
> info on it later.
>
> For now, I got hold of another system (same 16.04 x86_64) and I tried
> with the same configuration
>
> VPP vhost-user on host
> VPP virtio-user on a container
>
> This time VPP didn't crash. Ping doesn't work though. Both vhost-user
> and virtio are transmitting and receiving packets. What do I need to
> enable so that ping works?
>
> (1) on host:
> show interface
>   Name   Idx   State  Counter
> Count
> VhostEthernet01down
> VhostEthernet12down
> VirtualEthernet0/0/0  3 up   rx packets
>  5
>  rx bytes
>210
>  tx packets
>  5
>  tx bytes
>210
>  drops
> 10
> local00down
> vpp# show ip arp
> vpp#
>
>
> (2) On container
> show interface
>   Name   Idx   State  Counter
> Count
> VirtioUser0/0/0   1 up   rx packets
>  5
>  rx bytes
>210
>  tx packets
>  5
>  tx bytes
>210
>  drops
> 10
> local00down
> vpp# show ip arp
> vpp#
>
> Thanks.
>
> On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin  
> wrote:
> > Hi Ravi,
> >
> > Sorry for diluting your topic. From your stack trace and show interface 
> output I thought you are using OCTEONTx.
> >
> > Regards,
> > Nitin
> >
> >> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
> >>
> >> Steven, Damjan, Nitin,
> >>
> >> Let me clarify so there is no confusion, since you are assisting me to
> >> get this working I will make sure we are all on same page. I believe
> >> OcteonTx is related to Cavium/ARM and I am not using it.
> >>
> >> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
> >> 2MB I had to use '--single-file-segments' option.
> >>
> >> There used to be a way in DPDK to influence compiler to compile for
> >> certain architecture f.e. 'nehalem'. I will try that option but I want
> >> to make sure steps I am executing is fine first.
> >>
> >> (1) I compile VPP (18.04) code on x86_64 system with following
> >> CPUFLAGS. My system has 'avx, avx2, sse3, see4_2' for SIMD.
> >>
> >> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> >> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> >> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> >> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
> >> ds_c