Damjan, Steven,

I will get back to the system on which VPP is crashing and get more
info on it later.

For now, I got hold of another system (same 16.04 x86_64) and tried
the same configuration:

VPP vhost-user on host
VPP virtio-user on a container

This time VPP didn't crash, but ping doesn't work. Both vhost-user
and virtio are transmitting and receiving packets. What do I need to
enable so that ping works? (A diagnostic plan is sketched after the
outputs below.)

(1) on host:
show interface
              Name               Idx       State          Counter          Count
VhostEthernet0                    1        down
VhostEthernet1                    2        down
VirtualEthernet0/0/0              3         up       rx packets                5
                                                     rx bytes                210
                                                     tx packets                5
                                                     tx bytes                210
                                                     drops                    10
local0                            0        down
vpp# show ip arp
vpp#


(2) On container
show interface
              Name               Idx       State          Counter          Count
VirtioUser0/0/0                   1         up       rx packets                5
                                                     rx bytes                210
                                                     tx packets                5
                                                     tx bytes                210
                                                     drops                    10
local0                            0        down
vpp# show ip arp
vpp#
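
For reference, here is what I plan to run next on both sides to see
where the packets die (assuming the stock trace CLI, 'trace add <node>
<count>'; the input node is vhost-user-input on the host and
dpdk-input in the container, since the container side is a DPDK vdev):

on host:
vpp# trace add vhost-user-input 10
vpp# show trace
vpp# show errors

on container:
vpp# trace add dpdk-input 10
vpp# show trace
vpp# show errors

'show ip arp' is empty on both sides, so I suspect the ARP requests
themselves are among the drops.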

Thanks.

On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin <nitin.sax...@cavium.com> wrote:
> Hi Ravi,
>
> Sorry for diluting your topic. From your stack trace and show interface
> output I thought you were using OCTEONTx.
>
> Regards,
> Nitin
>
>> On 06-Jun-2018, at 22:10, Ravi Kerur <rke...@gmail.com> wrote:
>>
>> Steven, Damjan, Nitin,
>>
>> Let me clarify so there is no confusion; since you are assisting me in
>> getting this working, I will make sure we are all on the same page. I
>> believe OcteonTx is Cavium/ARM hardware, and I am not using it.
>>
>> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
>> 2MB I had to use the '--single-file-segments' option; the exact
>> invocation that worked is shown below.
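>>
>> For reference, the 2MB testpmd invocation that worked is the same
>> command as in my earlier test further down this thread:
>>
>> docker run -it --privileged -v
>> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>> 4 --log-level=9 -m 64 --no-pci --single-file-segments
>> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
>> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
>> -i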
>>
>> There used to be a way in DPDK to tell the compiler to target a
>> certain architecture, e.g. 'nehalem'. I will try that option (a sketch
>> follows), but first I want to make sure the steps I am executing are fine.
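>>
>> A sketch of what I mean, against the plain DPDK 18.02 make system (the
>> machine targets live under mk/machine/; I believe the Nehalem target is
>> 'nhm'. VPP's bundled DPDK build wraps this, so it may need the
>> equivalent change in its own makefiles):
>>
>> make config T=x86_64-native-linuxapp-gcc
>> sed -i 's/CONFIG_RTE_MACHINE="native"/CONFIG_RTE_MACHINE="nhm"/' build/.config
>> make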
>>
>> (1) I compile the VPP (18.04) code on an x86_64 system with the following
>> CPU flags. My system has 'avx, avx2, sse3, sse4_2' for SIMD.
>>
>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
>> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
>> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
>> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
>> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
>> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
>> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
>> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
>> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
>>
>> (2) I run VPP on the same system.
>>
>> (3) VPP on host has following startup.conf
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>
>>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
>>
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 2,0
>> }
>>
>> (4) VPP vhost-user config (on host)
>> create vhost socket /var/run/vpp/sock3.sock
>> set interface state VirtualEthernet0/0/0 up
>> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>
>> (5) show dpdk version (Version is the same on host and container, EAL
>> params are different)
>> DPDK Version:             DPDK 18.02.1
>> DPDK EAL init args:       -c 1 -n 4 --no-pci --vdev
>> net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
>> net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1G
>> --master-lcore 0 --socket-mem 2,0
>>
>> (6) Container is instantiated as follows
>> docker run -it --privileged -v
>> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
>> /dev/hugepages_1G:/dev/hugepages_1G dpdk-app-vpp:latest
>>
>> (7) VPP startup.conf inside the container is as follows
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 1,0
>>  vdev virtio_user0,path=/var/run/usvhost1
>> }
>>
>> (8) VPP virtio-user config (on container)
>> set interface state VirtioUser0/0/0  up
>> set interface ip address VirtioUser0/0/0 10.1.1.2/24
>>
>> (9) Ping... VPP on the host crashes. I sent one backtrace yesterday. This
>> morning I tried again: no backtrace, but the following messages
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x00007fd6f2ba3070 in dpdk_input_avx2 () from
>> target:/usr/lib/vpp_plugins/dpdk_plugin.so
>> (gdb)
>> Continuing.
>>
>> Program received signal SIGABRT, Aborted.
>> 0x00007fd734860428 in raise () from target:/lib/x86_64-linux-gnu/libc.so.6
>> (gdb)
>> Continuing.
>>
>> Program terminated with signal SIGABRT, Aborted.
>> The program no longer exists.
>> (gdb) bt
>> No stack.
>> (gdb)
>>
>> Thanks.
>>
>>> On Wed, Jun 6, 2018 at 1:50 AM, Damjan Marion <dmar...@me.com> wrote:
>>>
>>> Now I'm completely confused: is this on x86 or OCTEONTx?
>>>
>>> Regarding the octeon tx mempool, I have no idea what it is, but I would not
>>> be surprised if it is not compatible with the way we use buffer memory in
>>> vpp.
>>> VPP expects that buffer memory is allocated by VPP and then given to DPDK
>>> via rte_mempool_create_empty() and rte_mempool_populate_iova_tab().
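>>>
>>> Roughly, that pattern looks like this (a sketch against the DPDK
>>> 18.02-era API; rte_mempool_populate_iova_tab() was removed in later
>>> releases, and the names pool_vaddr/page_table here are placeholders,
>>> not VPP symbols):
>>>
>>> #include <rte_mempool.h>
>>>
>>> struct rte_mempool *
>>> make_external_pool (const char *name, unsigned n_bufs, unsigned elt_size,
>>>                     char *pool_vaddr, const rte_iova_t *page_table,
>>>                     uint32_t n_pages, uint32_t pg_shift)
>>> {
>>>   /* Create only the pool header; no buffer memory is allocated here. */
>>>   struct rte_mempool *mp = rte_mempool_create_empty
>>>     (name, n_bufs, elt_size, 512 /* cache */, 0, SOCKET_ID_ANY, 0);
>>>   if (!mp)
>>>     return 0;
>>>
>>>   /* Hand externally allocated buffer memory to DPDK, one IOVA per page. */
>>>   if (rte_mempool_populate_iova_tab (mp, pool_vaddr, page_table,
>>>                                      n_pages, pg_shift, NULL, NULL) < 0)
>>>     {
>>>       rte_mempool_free (mp);
>>>       return 0;
>>>     }
>>>   return mp;
>>> }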
>>>
>>> On 6 Jun 2018, at 06:51, Saxena, Nitin <nitin.sax...@cavium.com> wrote:
>>>
>>> Hi Ravi,
>>>
>>> Two things are needed to get vhost-user running on OCTEONTx:
>>>
>>> 1) Use either 1 GB or 512 MB hugepages. This you did.
>>>
>>> 2) You need one dpdk patch that I merged into dpdk-18.05, related to OcteonTx
>>> MTU. You can get the patch from the dpdk git tree (search for nsaxena).
>>>
>>> Hi Damjan,
>>>
>>> Currently we don't support Octeon TX mempool. Are you intentionally using
>>> it?
>>>
>>> I was about to send an email regarding the OCTEONTX mempool, as we enabled it
>>> and are running into issues. Any pointers will be helpful, as I haven't
>>> reached the root cause of the issue yet.
>>>
>>> Thanks,
>>> Nitin
>>>
>>> On 06-Jun-2018, at 01:40, Damjan Marion <dmar...@me.com> wrote:
>>>
>>> Dear Ravi,
>>>
>>> Currently we don't support Octeon TX mempool. Are you intentionally using
>>> it?
>>>
>>> Regards,
>>>
>>> Damjan
>>>
>>> On 5 Jun 2018, at 21:46, Ravi Kerur <rke...@gmail.com> wrote:
>>>
>>> Steven,
>>>
>>> I managed to get the Tx/Rx rings set up with 1GB hugepages. However, when I
>>> assign an IP address to both the vhost-user and virtio interfaces and initiate
>>> a ping, VPP crashes.
>>>
>>> Is any other mechanism available to test the Tx/Rx path between vhost and
>>> virtio? Details below.
>>>
>>>
>>> *******On host*******
>>> vpp#show vhost-user VirtualEthernet0/0/0
>>> Virtio vhost-user interfaces
>>> Global:
>>> coalesce frames 32 time 1e-3
>>> number of rx virtqueues in interrupt mode: 0
>>> Interface: VirtualEthernet0/0/0 (ifindex 3)
>>> virtio_net_hdr_sz 12
>>> features mask (0xffffffffffffffff):
>>> features (0x110008000):
>>>  VIRTIO_NET_F_MRG_RXBUF (15)
>>>  VIRTIO_F_INDIRECT_DESC (28)
>>>  VIRTIO_F_VERSION_1 (32)
>>> protocol features (0x0)
>>>
>>> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>>>
>>> rx placement:
>>>  thread 0 on vring 1, polling
>>> tx placement: lock-free
>>>  thread 0 on vring 0
>>>
>>> Memory regions (total 1)
>>> region fd    guest_phys_addr    memory_size        userspace_addr
>>> mmap_offset        mmap_addr
>>> ====== ===== ================== ================== ==================
>>> ================== ==================
>>> 0     26    0x00007f54c0000000 0x0000000040000000 0x00007f54c0000000
>>> 0x0000000000000000 0x00007faf00000000
>>>
>>> Virtqueue 0 (TX)
>>> qsz 256 last_avail_idx 0 last_used_idx 0
>>> avail.flags 1 avail.idx 256 used.flags 1 used.idx 0
>>> kickfd 27 callfd 24 errfd -1
>>>
>>> Virtqueue 1 (RX)
>>> qsz 256 last_avail_idx 0 last_used_idx 0
>>> avail.flags 1 avail.idx 0 used.flags 1 used.idx 0
>>> kickfd 28 callfd 25 errfd -1
>>>
>>>
>>> vpp#set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>>
>>> ************On container**********
>>> vpp# show interface VirtioUser0/0/0
>>>             Name               Idx       State          Counter          Count
>>> VirtioUser0/0/0                   1         up
>>> vpp#
>>> vpp# set interface ip address VirtioUser0/0/0 10.1.1.2/24
>>> vpp#
>>> vpp# ping 10.1.1.1
>>>
>>> Statistics: 5 sent, 0 received, 100% packet loss
>>> vpp#
>>>
>>>
>>> ************Host VPP crash with the following backtrace******************
>>> Continuing.
>>>
>>> Program received signal SIGSEGV, Segmentation fault.
>>> octeontx_fpa_bufpool_alloc (handle=0)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
>>> 57        return (void *)(uintptr_t)fpavf_read64((void *)(handle +
>>> (gdb) bt
>>> #0  octeontx_fpa_bufpool_alloc (handle=0)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
>>> #1  octeontx_fpavf_dequeue (mp=0x7fae7fc9ab40, obj_table=0x7fb04d868880,
>>> n=528)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:98
>>> #2  0x00007fb04b73bdef in rte_mempool_ops_dequeue_bulk (n=528,
>>> obj_table=<optimized out>,
>>>   mp=0x7fae7fc9ab40)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:492
>>> #3  __mempool_generic_get (cache=<optimized out>, n=<optimized out>,
>>> obj_table=<optimized out>,
>>>   mp=<optimized out>)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1271
>>> #4  rte_mempool_generic_get (cache=<optimized out>, n=<optimized out>,
>>>   obj_table=<optimized out>, mp=<optimized out>)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1306
>>> #5  rte_mempool_get_bulk (n=528, obj_table=<optimized out>,
>>> mp=0x7fae7fc9ab40)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1339
>>> #6  dpdk_buffer_fill_free_list_avx2 (vm=0x7fb08ec69480
>>> <vlib_global_main>, fl=0x7fb04cb2b100,
>>>   min_free_buffers=<optimized out>)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/plugins/dpdk/buffer.c:228
>>> #7  0x00007fb08e5046ea in vlib_buffer_alloc_from_free_list (index=0
>>> '\000', n_buffers=514,
>>>   buffers=0x7fb04cb8ec58, vm=0x7fb08ec69480 <vlib_global_main>)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/buffer_funcs.h:306
>>> #8  vhost_user_if_input (mode=<optimized out>, node=0x7fb04d0f5b80,
>>> qid=<optimized out>,
>>>   vui=0x7fb04d87523c, vum=0x7fb08e9b9560 <vhost_user_main>,
>>>   vm=0x7fb08ec69480 <vlib_global_main>)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1644
>>> #9  vhost_user_input (f=<optimized out>, node=<optimized out>,
>>> vm=<optimized out>)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1947
>>> #10 vhost_user_input_avx2 (vm=<optimized out>, node=<optimized out>,
>>> frame=<optimized out>)
>>>   at
>>> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1972
>>> #11 0x00007fb08ea166b3 in dispatch_node (last_time_stamp=<optimized
>>> out>, frame=0x0,
>>>   dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
>>> node=0x7fb04d0f5b80,
>>>   vm=0x7fb08ec69480 <vlib_global_main>)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:988
>>> #12 vlib_main_or_worker_loop (is_main=1, vm=0x7fb08ec69480
>>> <vlib_global_main>)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:1505
>>> #13 vlib_main_loop (vm=0x7fb08ec69480 <vlib_global_main>)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:1633
>>> #14 vlib_main (vm=vm@entry=0x7fb08ec69480 <vlib_global_main>,
>>> input=input@entry=0x7fb04d077fa0)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:1787
>>> #15 0x00007fb08ea4d683 in thread0 (arg=140396286350464)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/unix/main.c:568
>>> #16 0x00007fb08dbe15d8 in clib_calljmp ()
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vppinfra/longjmp.S:110
>>> #17 0x00007fff0726d370 in ?? ()
>>> ---Type <return> to continue, or q <return> to quit---
>>> #18 0x00007fb08ea4e3da in vlib_unix_main (argc=<optimized out>,
>>> argv=<optimized out>)
>>>   at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/unix/main.c:632
>>> #19 0x0000001900000000 in ?? ()
>>> #20 0x000000e700000000 in ?? ()
>>> #21 0x0000000000000831 in ?? ()
>>> #22 0x00007fb08e9aac00 in ?? () from /usr/lib/x86_64-linux-gnu/libvnet.so.0
>>>
>>> **************Vhost-user debugs on host**********
>>> Jun  5 19:23:35 [18916]: vhost_user_socksvr_accept_ready:1294: New
>>> client socket for vhost interface 3, fd 23
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:995: if 3 msg
>>> VHOST_USER_SET_OWNER
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:847: if 3 msg
>>> VHOST_USER_GET_FEATURES - reply 0x000000015c628000
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:995: if 3 msg
>>> VHOST_USER_SET_OWNER
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1004: if 3 msg
>>> VHOST_USER_SET_VRING_CALL 0
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1004: if 3 msg
>>> VHOST_USER_SET_VRING_CALL 1
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:852: if 3 msg
>>> VHOST_USER_SET_FEATURES features 0x0000000110008000
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:877: if 3 msg
>>> VHOST_USER_SET_MEM_TABLE nregions 1
>>> Jun  5 19:23:35[18916]: vhost_user_socket_read:916: map memory region
>>> 0 addr 0 len 0x40000000 fd 26 mapped 0x7faf00000000 page_sz 0x40000000
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:932: if 3 msg
>>> VHOST_USER_SET_VRING_NUM idx 0 num 256
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1096: if 3 msg
>>> VHOST_USER_SET_VRING_BASE idx 0 num 0
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:943: if 3 msg
>>> VHOST_USER_SET_VRING_ADDR idx 0
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1037: if 3 msg
>>> VHOST_USER_SET_VRING_KICK 0
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:932: if 3 msg
>>> VHOST_USER_SET_VRING_NUM idx 1 num 256
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1096: if 3 msg
>>> VHOST_USER_SET_VRING_BASE idx 1 num 0
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:943: if 3 msg
>>> VHOST_USER_SET_VRING_ADDR idx 1
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1037: if 3 msg
>>> VHOST_USER_SET_VRING_KICK 1
>>> Jun  5 19:23:35 [18916]: vhost_user_socket_read:1211: if 3
>>> VHOST_USER_SET_VRING_ENABLE: enable queue 0
>>> Jun  5 19:23:35[18916]: vhost_user_socket_read:1211: if 3
>>> VHOST_USER_SET_VRING_ENABLE: enable queue 1
>>>
>>> Thanks.
>>>
>>> On Tue, Jun 5, 2018 at 11:31 AM, Steven Luong (sluong) <slu...@cisco.com>
>>> wrote:
>>>
>>> Ravi,
>>>
>>> In order to use dpdk virtio_user, you need 1GB hugepages.
>>>
>>> Steven
>>>
>>> On 6/5/18, 11:17 AM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>   Hi Steven,
>>>
>>>   The connection is the problem. I don't see the memory regions set up
>>>   correctly. Below are some details. Currently I am using 2MB hugepages.
>>>
>>>   (1) Create vhost-user server
>>>   debug vhost-user on
>>>   vpp# create vhost socket /var/run/vpp/sock3.sock server
>>>   VirtualEthernet0/0/0
>>>   vpp# set interface state VirtualEthernet0/0/0 up
>>>   vpp#
>>>   vpp#
>>>
>>>   (2) Instantiate a container
>>>   docker run -it --privileged -v
>>>   /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
>>>   /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
>>>
>>>   (3) Inside the container, run EAL/DPDK virtio with the following startup conf:
>>>   unix {
>>>     nodaemon
>>>     log /var/log/vpp/vpp.log
>>>     full-coredump
>>>     cli-listen /run/vpp/cli.sock
>>>     gid vpp
>>>   }
>>>
>>>   api-trace {
>>>     on
>>>   }
>>>
>>>   api-segment {
>>>     gid vpp
>>>   }
>>>
>>>   dpdk {
>>>           no-pci
>>>           vdev virtio_user0,path=/var/run/usvhost1
>>>   }
>>>
>>>   The following errors are seen due to the 2MB hugepages; I think DPDK
>>>   requires the "--single-file-segments" option.
>>>
>>>   /usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
>>>   --vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
>>>   /run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
>>>   64,64
>>>   /usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
>>>   --vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
>>>   /run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
>>>   64,64
>>>   EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs
>>>   found for that size
>>>   EAL: VFIO support initialized
>>>   get_hugepage_file_info(): Exceed maximum of 8
>>>   prepare_vhost_memory_user(): Failed to prepare memory for vhost-user
>>>   DPDK physical memory layout:
>>>
>>>
>>>   Second test case:
>>>   (1) and (2) are the same as above. I run VPP inside a container with the
>>>   following startup config:
>>>
>>>   unix {
>>>     nodaemon
>>>     log /var/log/vpp/vpp.log
>>>     full-coredump
>>>     cli-listen /run/vpp/cli.sock
>>>     gid vpp
>>>   }
>>>
>>>   api-trace {
>>>     on
>>>   }
>>>
>>>   api-segment {
>>>     gid vpp
>>>   }
>>>
>>>   dpdk {
>>>           no-pci
>>>           single-file-segments
>>>           vdev virtio_user0,path=/var/run/usvhost1
>>>   }
>>>
>>>
>>>   VPP fails to start with
>>>   plugin.so
>>>   vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'
>>>   vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'
>>>
>>>   [1]+  Done                    /usr/bin/vpp -c /etc/vpp/startup.conf
>>>   root@867dc128b544:~/dpdk#
>>>
>>>
>>>   show version (on both host and container).
>>>   vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed
>>>   Apr 25 14:53:07 UTC 2018
>>>   vpp#
>>>
>>>   Thanks.
>>>
>>>   On Tue, Jun 5, 2018 at 9:23 AM, Steven Luong (sluong) <slu...@cisco.com>
>>> wrote:
>>>
>>> Ravi,
>>>
>>> Do this
>>>
>>> 1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user
>>> on".
>>> 2. Bring up the container with the vdev virtio_user commands that you have
>>> as before
>>> 3. show vhost-user in the host and verify that it has a shared memory
>>> region. If not, the connection has a problem. Collect the show vhost-user
>>> and debug vhost-user and send them to me and stop. If yes, proceed with step
>>> 4.
>>> 4. type "trace vhost-user-input 100" in the host
>>> 5. clear error, and clear interfaces in the host and the container.
>>> 6. do the ping from the container.
>>> 7. Collect show error, show trace, show interface, and show vhost-user in
>>> the host. Collect show error and show interface in the container. Put the
>>> output on GitHub and provide a link to view; there is no need to send a large
>>> file. The same sequence is collected as a CLI sketch below.
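>>>
>>> The same sequence as raw CLI (assuming the stock 'trace add <node> <count>'
>>> trace syntax):
>>>
>>> vpp# debug vhost-user on
>>> vpp# show vhost-user
>>> vpp# trace add vhost-user-input 100
>>> vpp# clear errors
>>> vpp# clear interfaces
>>> (ping from the container)
>>> vpp# show errors
>>> vpp# show trace
>>> vpp# show interface
>>> vpp# show vhost-user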
>>>
>>> Steven
>>>
>>> On 6/4/18, 5:50 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>   Hi Steven,
>>>
>>>   Thanks for your help. I am using a vhost-user client (VPP in the container)
>>>   and a vhost-user server (VPP on the host). I thought it should work.
>>>
>>>   create vhost socket /var/run/vpp/sock3.sock server (On host)
>>>
>>>   create vhost socket /var/run/usvhost1 (On container)
>>>
>>>   Can you please point me to a document that shows how to create VPP
>>>   virtio_user interfaces, or the static configuration needed in
>>>   /etc/vpp/startup.conf?
>>>
>>>   I have used following declarations in /etc/vpp/startup.conf
>>>
>>>   # vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
>>>   # vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02
>>>
>>>   but it doesn't work.
>>>
>>>   Thanks.
>>>
>>>   On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong) <slu...@cisco.com>
>>> wrote:
>>>
>>> Ravi,
>>>
>>> VPP only supports vhost-user in device mode. In your example, the host
>>> in device mode and the container also in device mode do not make a happy
>>> couple. You need one of them, either the host or the container, running in
>>> driver mode using the dpdk vdev virtio_user command in startup.conf. So you
>>> need something like this:
>>>
>>> (host) VPP native vhost-user ----- (container) VPP DPDK vdev virtio_user
>>>                         -- or --
>>> (host) VPP DPDK vdev virtio_user ---- (container) VPP native vhost-user
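>>>
>>> Concretely, the first pairing looks like this (the socket path is only an
>>> example, matching the one used elsewhere in this thread):
>>>
>>> host VPP CLI (native vhost-user, device mode):
>>>   create vhost socket /var/run/vpp/sock3.sock server
>>>   set interface state VirtualEthernet0/0/0 up
>>>
>>> container startup.conf (dpdk virtio_user, driver mode):
>>>   dpdk {
>>>     no-pci
>>>     vdev virtio_user0,path=/var/run/usvhost1
>>>   }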
>>>
>>> Steven
>>>
>>> On 6/4/18, 3:27 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>   Hi Steven
>>>
>>>   Though the crash is not happening anymore, there is still an issue with Rx
>>>   and Tx. To work out whether it is testpmd or vpp at fault, I decided to run:
>>>
>>>   (1) VPP vhost-user server on host-x
>>>   (2) VPP in a container on host-x, with a vhost-user client port
>>>   connecting to the vhost-user server.
>>>
>>>   Still doesn't work. Details below. Please let me know if something is
>>>   wrong in what I am doing.
>>>
>>>
>>>   (1) VPP vhost-user as a server
>>>   (2) VPP in a container virtio-user or vhost-user client
>>>
>>>   (1) Create vhost-user server socket on VPP running on host.
>>>
>>>   vpp#create vhost socket /var/run/vpp/sock3.sock server
>>>   vpp#set interface state VirtualEthernet0/0/0 up
>>>   show vhost-user VirtualEthernet0/0/0 descriptors
>>>   Virtio vhost-user interfaces
>>>   Global:
>>>   coalesce frames 32 time 1e-3
>>>   number of rx virtqueues in interrupt mode: 0
>>>   Interface: VirtualEthernet0/0/0 (ifindex 3)
>>>   virtio_net_hdr_sz 0
>>>   features mask (0xffffffffffffffff):
>>>   features (0x0):
>>>   protocol features (0x0)
>>>
>>>   socket filename /var/run/vpp/sock3.sock type server errno "Success"
>>>
>>>   rx placement:
>>>   tx placement: spin-lock
>>>   thread 0 on vring 0
>>>
>>>   Memory regions (total 0)
>>>
>>>   vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
>>>   vpp#
>>>
>>>   (2) Instantiate a docker container to run VPP connecting to the
>>>   sock3.sock server socket.
>>>
>>>   docker run -it --privileged -v
>>>   /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
>>>   /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
>>>   root@4b1bd06a3225:~/dpdk#
>>>   root@4b1bd06a3225:~/dpdk# ps -ef
>>>   UID PID PPID C STIME TTY TIME CMD
>>>   root 1 0 0 21:39 ? 00:00:00 /bin/bash
>>>   root 17 1 0 21:39 ? 00:00:00 ps -ef
>>>   root@4b1bd06a3225:~/dpdk#
>>>
>>>   root@8efda6701ace:~/dpdk# ps -ef | grep vpp
>>>   root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
>>>   root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
>>>   root@8efda6701ace:~/dpdk#
>>>
>>>   vpp#create vhost socket /var/run/usvhost1
>>>   vpp#set interface state VirtualEthernet0/0/0 up
>>>   vpp#show vhost-user VirtualEthernet0/0/0 descriptors
>>>   Virtio vhost-user interfaces
>>>   Global:
>>>   coalesce frames 32 time 1e-3
>>>   number of rx virtqueues in interrupt mode: 0
>>>   Interface: VirtualEthernet0/0/0 (ifindex 1)
>>>   virtio_net_hdr_sz 0
>>>   features mask (0xffffffffffffffff):
>>>   features (0x0):
>>>   protocol features (0x0)
>>>
>>>   socket filename /var/run/usvhost1 type client errno "Success"
>>>
>>>   rx placement:
>>>   tx placement: spin-lock
>>>   thread 0 on vring 0
>>>
>>>   Memory regions (total 0)
>>>
>>>   vpp#
>>>
>>>   vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.2/24
>>>   vpp#
>>>
>>>   vpp# ping 192.168.1.1
>>>
>>>   Statistics: 5 sent, 0 received, 100% packet loss
>>>   vpp#
>>>
>>>   On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong) <slu...@cisco.com>
>>> wrote:
>>>
>>> show interface and look for the counter and count columns for the
>>> corresponding interface.
>>>
>>> Steven
>>>
>>> On 5/31/18, 1:28 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>   Hi Steven,
>>>
>>>   You made my day, thank you. I didn't realize different dpdk versions
>>>   (vpp: 18.02.1, testpmd: from the latest git repo, probably 18.05)
>>>   could be the cause of the problem. I still don't understand why they
>>>   should be, as the virtio/vhost messages are meant to set up the tx/rx
>>>   rings correctly.
>>>
>>>   I downloaded the dpdk 18.02.1 stable release and at least vpp doesn't
>>>   crash now (for both vpp-native and dpdk vhost interfaces). I have one
>>>   question: is there a way to read the vhost-user statistics counters (Rx/Tx)
>>>   on vpp? I only know
>>>
>>>   'show vhost-user <intf>' and 'show vhost-user <intf> descriptors',
>>>   which don't show any counters.
>>>
>>>   Thanks.
>>>
>>>   On Thu, May 31, 2018 at 11:51 AM, Steven Luong (sluong)
>>>   <slu...@cisco.com> wrote:
>>>
>>> Ravi,
>>>
>>> For (1) which works, what dpdk version are you using in the host? Are you
>>> using the same dpdk version as VPP is using? Since you are using VPP latest,
>>> I think it is 18.02. Type "show dpdk version" at the VPP prompt to find out
>>> for sure.
>>>
>>> Steven
>>>
>>> On 5/31/18, 11:44 AM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>   Hi Steven,
>>>
>>>   I have tested the following scenarios, and it is not clear to me why
>>>   you think DPDK is the problem. Is it possible VPP and DPDK use
>>>   different virtio versions?
>>>
>>>   Following are the scenarios I have tested
>>>
>>>   (1) testpmd/DPDK vhost-user (running on host) and testpmd/DPDK
>>>   virtio-user (in a container) -- can send and receive packets
>>>   (2) VPP-native vhost-user (running on host) and testpmd/DPDK
>>>   virtio-user (in a container) -- VPP crashes, in VPP code
>>>   (3) VPP-DPDK vhost-user (running on host) and testpmd/DPDK virtio-user
>>>   (in a container) -- VPP crashes, in DPDK code
>>>
>>>   Thanks.
>>>
>>>   On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong)
>>>   <slu...@cisco.com> wrote:
>>>
>>> Ravi,
>>>
>>> I've proved my point -- there is a problem in the way that you invoke
>>> testpmd. The shared memory region that it passes to the device is not
>>> accessible from the device. I don't know what the correct options are that
>>> you need to use. This is really a question for dpdk.
>>>
>>> As a further exercise, you could remove VPP in the host and instead run
>>> testpmd in device mode using "--vdev
>>> net_vhost0,iface=/var/run/vpp/sock1.sock" option. I bet you testpmd in the
>>> host will crash in the same place. I hope you can find out the answer from
>>> dpdk and tell us about it.
>>>
>>> Steven
>>>
>>> On 5/31/18, 9:31 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur"
>>> <vpp-dev@lists.fd.io on behalf of rke...@gmail.com> wrote:
>>>
>>>   Hi Steven,
>>>
>>>   Thank you for your help. I removed sock1.sock and sock2.sock and
>>>   restarted vpp; at least the interfaces get created now. However, when I
>>>   start dpdk/testpmd inside the container, it crashes as well. Below are
>>>   some details. I am using vpp code from the latest repo.
>>>
>>>   (1) On host
>>>   show interface
>>>                 Name               Idx       State          Counter          Count
>>>   VhostEthernet2                    3        down
>>>   VhostEthernet3                    4        down
>>>   VirtualFunctionEthernet4/10/4     1        down
>>>   VirtualFunctionEthernet4/10/6     2        down
>>>   local0                            0        down
>>>   vpp#
>>>   vpp# set interface state VhostEthernet2 up
>>>   vpp# set interface state VhostEthernet3 up
>>>   vpp#
>>>   vpp# set interface l2 bridge VhostEthernet2 1
>>>   vpp# set interface l2 bridge VhostEthernet3 1
>>>   vpp#
>>>
>>>   (2) Run testpmd inside the container
>>>   docker run -it --privileged -v
>>>   /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>>>   /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>>>   /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -l 16-19
>>>   -n 4 --log-level=8 -m 64 --no-pci
>>>   --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
>>>   --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
>>>   -i
>>>   EAL: Detected 28 lcore(s)
>>>   EAL: Detected 2 NUMA nodes
>>>   EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>>>   EAL: 8192 hugepages of size 2097152 reserved, but no mounted hugetlbfs
>>>   found for that size
>>>   EAL: Probing VFIO support...
>>>   EAL: VFIO support initialized
>>>   EAL: Setting up physically contiguous memory...
>>>   EAL: locking hot plug lock memory...
>>>   EAL: primary init32...
>>>   Interactive-mode selected
>>>   Warning: NUMA should be configured manually by using
>>>   --port-numa-config and --ring-numa-config parameters along with
>>>   --numa.
>>>   testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
>>>   size=2176, socket=0
>>>   testpmd: preferred mempool ops selected: ring_mp_mc
>>>   testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
>>>   size=2176, socket=1
>>>   testpmd: preferred mempool ops selected: ring_mp_mc
>>>   Port 0 is now not stopped
>>>   Port 1 is now not stopped
>>>   Please stop the ports first
>>>   Done
>>>   testpmd>
>>>
>>>   (3) VPP crashes with the same issue, but inside dpdk code:
>>>
>>>   (gdb) cont
>>>   Continuing.
>>>
>>>   Program received signal SIGSEGV, Segmentation fault.
>>>   [Switching to Thread 0x7ffd0d08e700 (LWP 41257)]
>>>   rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized
>>>   out>, mbuf_pool=0x7fe17fc883c0,
>>>       pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
>>>       at
>>> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
>>>   1504        free_entries = *((volatile uint16_t *)&vq->avail->idx) -
>>>   (gdb) bt
>>>   #0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized
>>> out>,
>>>       mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0,
>>>   count=count@entry=32)
>>>       at
>>> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
>>>   #1  0x00007fffb4718e6f in eth_vhost_rx (q=0x7fe17fbbdd80,
>>> bufs=0x7fffb671ebc0,
>>>       nb_bufs=<optimized out>)
>>>       at
>>> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/net/vhost/rte_eth_vhost.c:410
>>>   #2  0x00007fffb441cb7c in rte_eth_rx_burst (nb_pkts=256,
>>>   rx_pkts=0x7fffb671ebc0, queue_id=0,
>>>       port_id=3) at
>>>
>>> /var/venom/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_ethdev.h:3635
>>>   #3  dpdk_device_input (queue_id=0, thread_index=<optimized out>,
>>>   node=0x7fffb732c700,
>>>       xd=0x7fffb7337240, dm=<optimized out>, vm=0x7fffb6703340)
>>>       at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:477
>>>   #4  dpdk_input_node_fn_avx2 (vm=<optimized out>, node=<optimized out>,
>>>   f=<optimized out>)
>>>       at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:658
>>>   #5  0x00007ffff7954d35 in dispatch_node
>>>   (last_time_stamp=12531752723928016, frame=0x0,
>>>       dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
>>>   node=0x7fffb732c700,
>>>       vm=0x7fffb6703340) at
>>> /var/venom/vpp/build-data/../src/vlib/main.c:988
>>>   #6  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb6703340)
>>>       at /var/venom/vpp/build-data/../src/vlib/main.c:1507
>>>   #7  vlib_worker_loop (vm=0x7fffb6703340) at
>>>   /var/venom/vpp/build-data/../src/vlib/main.c:1641
>>>   #8  0x00007ffff6ad25d8 in clib_calljmp ()
>>>       at /var/venom/vpp/build-data/../src/vppinfra/longjmp.S:110
>>>   #9  0x00007ffd0d08ddb0 in ?? ()
>>>   #10 0x00007fffb4436edd in eal_thread_loop (arg=<optimized out>)
>>>       at
>>> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_eal/linuxapp/eal/eal_thread.c:153
>>>   #11 0x0000000000000000 in ?? ()
>>>   (gdb) frame 0
>>>   #0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized
>>> out>,
>>>       mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0,
>>>   count=count@entry=32)
>>>       at
>>> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
>>>   1504        free_entries = *((volatile uint16_t *)&vq->avail->idx) -
>>>   (gdb) p vq
>>>   $1 = (struct vhost_virtqueue *) 0x7fc3ffc84b00
>>>   (gdb) p vq->avail
>>>   $2 = (struct vring_avail *) 0x7ffbfff98000
>>>   (gdb) p *$2
>>>   Cannot access memory at address 0x7ffbfff98000
>>>   (gdb)
>>>
>>>
>>>   Thanks.
>>>
>>>   On Thu, May 31, 2018 at 12:09 AM, Steven Luong (sluong)
>>>   <slu...@cisco.com> wrote:
>>>
>>> Sorry, I was expecting to see two VhostEthernet interfaces like this. Those
>>> VirtualFunctionEthernet are your physical interfaces.
>>>
>>> sh int
>>>             Name               Idx       State          Counter          Count
>>> VhostEthernet0                    1         up
>>> VhostEthernet1                    2         up
>>> local0                            0        down
>>> DBGvpp#
>>>
>>> You have to first manually remove /var/run/vpp/sock1.sock and
>>> /var/run/vpp/sock2.sock before you start vpp on the host; dpdk does not like
>>> it if they already exist (see the one-liner below). If you successfully create
>>> the VhostEthernet interfaces, try to send some traffic through them to see if
>>> it crashes or not.
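>>>
>>> For example (socket paths as in your startup.conf; assuming vpp runs as the
>>> vpp.service systemd unit, as your logs show):
>>>
>>> rm -f /var/run/vpp/sock1.sock /var/run/vpp/sock2.sock
>>> systemctl restart vpp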
>>>
>>> Steven
>>>
>>> On 5/30/18, 9:17 PM, "vpp-dev@lists.fd.io on behalf of Steven Luong
>>> (sluong)" <vpp-dev@lists.fd.io on behalf of slu...@cisco.com> wrote:
>>>
>>>   Ravi,
>>>
>>>   I don't think you can declare that (2) works fine yet. Please bring up the
>>> dpdk vhost-user interfaces and try to send some traffic between them to
>>> exercise the shared memory region from dpdk virtio-user, which may be
>>> "questionable".
>>>
>>>       VirtualFunctionEthernet4/10/4     1        down
>>>       VirtualFunctionEthernet4/10/6     2        down
>>>
>>>   Steven
>>>
>>>   On 5/30/18, 4:41 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>       Hi Steve,
>>>
>>>       Thank you for your inputs. I added the feature-mask to see if it helps in
>>>       setting up the queues correctly; it didn't, so I will remove it. I have
>>>       tried the following combinations:
>>>
>>>       (1) VPP->vhost-user (on host) and DPDK/testpmd->virtio-user (in a
>>>       container)  -- VPP crashes
>>>       (2) DPDK/testpmd->vhost-user (on host) and DPDK/testpmd->virtio-user
>>>       (in a container) -- works fine
>>>
>>>       To use DPDK vhost-user inside VPP, I defined the configuration in
>>>       startup.conf as mentioned by you; it looks as follows:
>>>
>>>       unix {
>>>         nodaemon
>>>         log /var/log/vpp/vpp.log
>>>         full-coredump
>>>         cli-listen /run/vpp/cli.sock
>>>         gid vpp
>>>       }
>>>
>>>       api-segment {
>>>         gid vpp
>>>       }
>>>
>>>       cpu {
>>>               main-core 1
>>>               corelist-workers 6-9
>>>       }
>>>
>>>       dpdk {
>>>               dev 0000:04:10.4
>>>               dev 0000:04:10.6
>>>               uio-driver vfio-pci
>>>               vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>>>               vdev net_vhost1,iface=/var/run/vpp/sock2.sock
>>>               huge-dir /dev/hugepages_1GB
>>>               socket-mem 2048,2048
>>>       }
>>>
>>>       From VPP logs
>>>       dpdk: EAL init args: -c 3c2 -n 4 --vdev
>>>       net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
>>>       net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir
>>> /dev/hugepages_1GB
>>>       -w 0000:04:10.4 -w 0000:04:10.6 --master-lcore 1 --socket-mem
>>>       2048,2048
>>>
>>>       However, VPP doesn't create the interfaces at all:
>>>
>>>       vpp# show interface
>>>                     Name               Idx       State          Counter          Count
>>>       VirtualFunctionEthernet4/10/4     1        down
>>>       VirtualFunctionEthernet4/10/6     2        down
>>>       local0                            0        down
>>>
>>>       since it is a static mapping, I am assuming they should be created,
>>> correct?
>>>
>>>       Thanks.
>>>
>>>       On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong)
>>> <slu...@cisco.com> wrote:
>>>
>>> Ravi,
>>>
>>> First and foremost, get rid of the feature-mask option. I don't know what
>>> 0x40400000 does for you. If that does not help, try testing it with dpdk
>>> based vhost-user instead of VPP native vhost-user to make sure that they can
>>> work well with each other first. To use dpdk vhost-user, add a vdev command
>>> in the startup.conf for each vhost-user device that you have.
>>>
>>> dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }
>>>
>>> dpdk based vhost-user interface is named VhostEthernet0, VhostEthernet1,
>>> etc. Make sure you use the right interface name to set the state to up.
>>>
>>> If dpdk based vhost-user does not work with testpmd either, it looks like
>>> some problem with the way that you invoke testpmd.
>>>
>>> If dpdk based vhost-user works well with the same testpmd device driver and
>>> not vpp native vhost-user, I can set up something similar to yours to look
>>> into it.
>>>
>>> The device driver, testpmd, is supposed to pass the shared memory region to
>>> VPP for TX/RX queues. It looks like VPP vhost-user might have run into a
>>> bump there with using the shared memory (txvq->avail).
>>>
>>> Steven
>>>
>>> PS. vhost-user is not an optimum interface for containers. You may want to
>>> look into using memif if you don't already know about it.
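>>>
>>> (The exact memif CLI varies by VPP release, so treat this as a rough
>>> sketch only: each side creates an interface against a shared socket
>>> file, one master and one slave, e.g.
>>>
>>> vpp# create interface memif id 0 master
>>> vpp# create interface memif id 0 slave
>>>
>>> one command per VPP instance, then set the interfaces up and address
>>> them as usual.)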
>>>
>>>
>>> On 5/30/18, 2:06 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
>>>
>>>   I am not sure whether something is wrong with the setup or it is a bug in
>>>   vpp; vpp crashes with vhost<-->virtio communication.
>>>
>>>   (1) Vhost interfaces are created and attached to the bridge domain as follows:
>>>
>>>   create vhost socket /var/run/vpp/sock1.sock server feature-mask
>>> 0x40400000
>>>   create vhost socket /var/run/vpp/sock2.sock server feature-mask
>>> 0x40400000
>>>   set interface state VirtualEthernet0/0/0 up
>>>   set interface state VirtualEthernet0/0/1 up
>>>
>>>   set interface l2 bridge VirtualEthernet0/0/0 1
>>>   set interface l2 bridge VirtualEthernet0/0/1 1
>>>
>>>
>>>   (2) DPDK/testpmd is started in a container to talk to vpp/vhost-user
>>>   interface as follows
>>>
>>>   docker run -it --privileged -v
>>>   /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>>>   /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>>>   /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>>>   4 --log-level=9 -m 64 --no-pci --single-file-segments
>>>   --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
>>>   --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
>>>   -i
>>>
>>>   (3) show vhost-user VirtualEthernet0/0/1
>>>   Virtio vhost-user interfaces
>>>   Global:
>>>     coalesce frames 32 time 1e-3
>>>     number of rx virtqueues in interrupt mode: 0
>>>   Interface: VirtualEthernet0/0/1 (ifindex 4)
>>>   virtio_net_hdr_sz 10
>>>    features mask (0x40400000):
>>>    features (0x0):
>>>     protocol features (0x0)
>>>
>>>    socket filename /var/run/vpp/sock2.sock type server errno "Success"
>>>
>>>    rx placement:
>>>    tx placement: spin-lock
>>>      thread 0 on vring 0
>>>      thread 1 on vring 0
>>>      thread 2 on vring 0
>>>      thread 3 on vring 0
>>>      thread 4 on vring 0
>>>
>>>    Memory regions (total 1)
>>>    region fd    guest_phys_addr    memory_size        userspace_addr
>>>   mmap_offset        mmap_addr
>>>    ====== ===== ================== ================== ==================
>>>   ================== ==================
>>>     0     55    0x00007ff7c0000000 0x0000000040000000 0x00007ff7c0000000
>>>   0x0000000000000000 0x00007ffbc0000000
>>>
>>>   vpp# show vhost-user VirtualEthernet0/0/0
>>>   Virtio vhost-user interfaces
>>>   Global:
>>>     coalesce frames 32 time 1e-3
>>>     number of rx virtqueues in interrupt mode: 0
>>>   Interface: VirtualEthernet0/0/0 (ifindex 3)
>>>   virtio_net_hdr_sz 10
>>>    features mask (0x40400000):
>>>    features (0x0):
>>>     protocol features (0x0)
>>>
>>>    socket filename /var/run/vpp/sock1.sock type server errno "Success"
>>>
>>>    rx placement:
>>>    tx placement: spin-lock
>>>      thread 0 on vring 0
>>>      thread 1 on vring 0
>>>      thread 2 on vring 0
>>>      thread 3 on vring 0
>>>      thread 4 on vring 0
>>>
>>>    Memory regions (total 1)
>>>    region fd    guest_phys_addr    memory_size        userspace_addr
>>>   mmap_offset        mmap_addr
>>>    ====== ===== ================== ================== ==================
>>>   ================== ==================
>>>     0     51    0x00007ff7c0000000 0x0000000040000000 0x00007ff7c0000000
>>>   0x0000000000000000 0x00007ffc00000000
>>>
>>>   (4) vpp stack trace
>>>   Program received signal SIGSEGV, Segmentation fault.
>>>   [Switching to Thread 0x7ffd0e090700 (LWP 46570)]
>>>   0x00007ffff7414642 in vhost_user_if_input
>>>   (mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
>>>       node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
>>>       vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
>>>       at
>>> /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
>>>   1596      if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
>>>   (gdb) bt
>>>   #0  0x00007ffff7414642 in vhost_user_if_input
>>>   (mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
>>>       node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
>>>       vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
>>>       at
>>> /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
>>>   #1  vhost_user_input (f=<optimized out>, node=<optimized out>,
>>>   vm=<optimized out>)
>>>       at
>>> /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1947
>>>   #2  vhost_user_input_avx2 (vm=<optimized out>, node=<optimized out>,
>>>   frame=<optimized out>)
>>>       at
>>> /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1972
>>>   #3  0x00007ffff7954d35 in dispatch_node
>>>   (last_time_stamp=12391212490024174, frame=0x0,
>>>       dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
>>>   node=0x7fffb76bab00,
>>>       vm=0x7fffb672a9c0) at
>>> /var/venom/vpp/build-data/../src/vlib/main.c:988
>>>   #4  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb672a9c0)
>>>       at /var/venom/vpp/build-data/../src/vlib/main.c:1507
>>>   #5  vlib_worker_loop (vm=0x7fffb672a9c0) at
>>>   /var/venom/vpp/build-data/../src/vlib/main.c:1641
>>>   #6  0x00007ffff6ad25d8 in clib_calljmp ()
>>>       at /var/venom/vpp/build-data/../src/vppinfra/longjmp.S:110
>>>   #7  0x00007ffd0e08fdb0 in ?? ()
>>>   #8  0x00007fffb4436edd in eal_thread_loop (arg=<optimized out>)
>>>       at
>>> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_eal/linuxapp/eal/eal_thread.c:153
>>>   #9  0x0000000000000000 in ?? ()
>>>   (gdb) frame 0
>>>   #0  0x00007ffff7414642 in vhost_user_if_input
>>>   (mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
>>>       node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
>>>       vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
>>>       at
>>> /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
>>>   1596      if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
>>>   (gdb) p txvq
>>>   $1 = (vhost_user_vring_t *) 0x7fffb6739ac0
>>>   (gdb) p *txvq
>>>   $2 = {cacheline0 = 0x7fffb6739ac0 "?", qsz_mask = 255, last_avail_idx
>>>   = 0, last_used_idx = 0,
>>>     n_since_last_int = 0, desc = 0x7ffbfff97000, avail = 0x7ffbfff98000,
>>>   used = 0x7ffbfff99000,
>>>     int_deadline = 0, started = 1 '\001', enabled = 0 '\000', log_used = 0
>>> '\000',
>>>     cacheline1 = 0x7fffb6739b00 "????\n", errfd = -1, callfd_idx = 10,
>>>   kickfd_idx = 14,
>>>     log_guest_addr = 0, mode = 1}
>>>   (gdb) p *(txvq->avail)
>>>   Cannot access memory at address 0x7ffbfff98000
>>>   (gdb)
>>>
>>>   On Tue, May 29, 2018 at 10:47 AM, Ravi Kerur <rke...@gmail.com> wrote:
>>>
>>> Steve,
>>>
>>> Thanks for the inputs on debugging and gdb. I am using gdb on my development
>>> system to debug the issue, but I would like to have reliable core
>>> generation on the system on which I don't have access to install gdb.
>>> I installed corekeeper and it still doesn't generate a core. I am
>>> running vpp inside a VM (VirtualBox/Vagrant); I am not sure if I need to
>>> set something inside the Vagrant config file.
>>>
>>> dpkg -l corekeeper
>>> Desired=Unknown/Install/Remove/Purge/Hold
>>> |
>>> Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
>>> |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
>>> ||/ Name                 Version         Architecture    Description
>>> +++-====================-===============-===============-==============================================
>>> ii  corekeeper           1.6             amd64           enable core
>>> files and report crashes to the system
>>>
>>> Thanks.
>>>
>>> On Tue, May 29, 2018 at 9:38 AM, Steven Luong (sluong) <slu...@cisco.com>
>>> wrote:
>>>
>>> Ravi,
>>>
>>> I installed corekeeper and the core file is kept in /var/crash. But why not
>>> use gdb to attach to the VPP process?
>>> To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP
>>> prompt.
>>>
>>> Steven
>>>
>>> On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur"
>>> <vpp-dev@lists.fd.io on behalf of rke...@gmail.com> wrote:
>>>
>>>   Hi Marco,
>>>
>>>
>>>   On Tue, May 29, 2018 at 6:30 AM, Marco Varlese <mvarl...@suse.de> wrote:
>>>
>>> Ravi,
>>>
>>> On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
>>>
>>> Hello,
>>>
>>> I have a VM (16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
>>> installed VPP successfully on it. Later I have created vhost-user
>>> interfaces via
>>>
>>> create vhost socket /var/run/vpp/sock1.sock server
>>> create vhost socket /var/run/vpp/sock2.sock server
>>> set interface state VirtualEthernet0/0/0 up
>>> set interface state VirtualEthernet0/0/1 up
>>>
>>> set interface l2 bridge VirtualEthernet0/0/0 1
>>> set interface l2 bridge VirtualEthernet0/0/1 1
>>>
>>> I then run 'DPDK/testpmd' inside a container which will use
>>> virtio-user interfaces using the following command
>>>
>>> docker run -it --privileged -v
>>> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>>> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>>> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>>> 4 --log-level=9 -m 64 --no-pci --single-file-segments
>>> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
>>> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
>>> -i
>>>
>>> VPP Vnet crashes with following message
>>>
>>> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
>>> 0x7fcca4620187, faulting address 0x7fcb317ac000
>>>
>>> Questions:
>>> I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
>>> unix {
>>> nodaemon
>>> log /var/log/vpp/vpp.log
>>> full-coredump
>>> cli-listen /run/vpp/cli.sock
>>> gid vpp
>>> }
>>>
>>> But I couldn't locate the core file.
>>>
>>> The location of the coredump file depends on your system configuration.
>>>
>>> Please check "cat /proc/sys/kernel/core_pattern"
>>>
>>> If you have systemd-coredump in the output of the above command, then likely
>>> the
>>> location of the coredump files is "/var/lib/systemd/coredump/"
>>>
>>> You can also change the location of where your system places the coredump
>>> files:
>>> echo '/PATH_TO_YOU_LOCATION/core_%e.%p' | sudo tee
>>> /proc/sys/kernel/core_pattern
>>>
>>> See if that helps...
>>>
>>>
>>>   Initially '/proc/sys/kernel/core_pattern' was set to 'core'. I changed
>>>   it to 'systemd-coredump'. Still no core was generated. VPP crashes:
>>>
>>>   May 29 08:54:34 localhost vnet[4107]: received signal SIGSEGV, PC
>>>   0x7f0167751187, faulting address 0x7efff43ac000
>>>   May 29 08:54:34 localhost systemd[1]: vpp.service: Main process
>>>   exited, code=killed, status=6/ABRT
>>>   May 29 08:54:34 localhost systemd[1]: vpp.service: Unit entered failed
>>> state.
>>>   May 29 08:54:34 localhost systemd[1]: vpp.service: Failed with result
>>> 'signal'.
>>>
>>>
>>>   cat /proc/sys/kernel/core_pattern
>>>   systemd-coredump
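>>>
>>>   (Note: this is likely why no core appears. For systemd-coredump to
>>>   receive cores, core_pattern must be a pipe to the helper binary, not
>>>   the literal string 'systemd-coredump'; a bare name is treated as a
>>>   plain output filename. The exact helper path and argument list vary
>>>   by distro, but it is something like:
>>>
>>>   echo '|/lib/systemd/systemd-coredump %P %u %g %s %t %c %e' > /proc/sys/kernel/core_pattern
>>>   )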
>>>
>>>
>>>   ulimit -a
>>>   core file size          (blocks, -c) unlimited
>>>   data seg size           (kbytes, -d) unlimited
>>>   scheduling priority             (-e) 0
>>>   file size               (blocks, -f) unlimited
>>>   pending signals                 (-i) 15657
>>>   max locked memory       (kbytes, -l) 64
>>>   max memory size         (kbytes, -m) unlimited
>>>   open files                      (-n) 1024
>>>   pipe size            (512 bytes, -p) 8
>>>   POSIX message queues     (bytes, -q) 819200
>>>   real-time priority              (-r) 0
>>>   stack size              (kbytes, -s) 8192
>>>   cpu time               (seconds, -t) unlimited
>>>   max user processes              (-u) 15657
>>>   virtual memory          (kbytes, -v) unlimited
>>>   file locks                      (-x) unlimited
>>>
>>>   cd /var/lib/systemd/coredump/
>>>   root@localhost:/var/lib/systemd/coredump# ls
>>>   root@localhost:/var/lib/systemd/coredump#
>>>
>>>
>>> (2) How do I enable debugs? I have used 'make build', but I get no additional
>>> logs other than those shown below.
>>>
>>>
>>> VPP logs from /var/log/syslog are shown below:
>>> cat /var/log/syslog
>>> May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361:
>>> plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: abf_plugin.so (ACL based Forwarding)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: acl_plugin.so (Access Control Lists)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device
>>> Plugin)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded
>>> plugin: cdp_plugin.so
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: flowprobe_plugin.so (Flow per Packet)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: gbp_plugin.so (Group Based Policy)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: gtpu_plugin.so (GTPv1-U)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: igmp_plugin.so (IGMP messaging)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: ioam_plugin.so (Inbound OAM)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:117: Plugin
>>> disabled (default): ixge_plugin.so
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: l2e_plugin.so (L2 Emulation)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: lacp_plugin.so (Link Aggregation Control Protocol)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: lb_plugin.so (Load Balancer)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: memif_plugin.so (Packet Memory Interface (experimetal))
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: nat_plugin.so (Network Address Translation)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: pppoe_plugin.so (PPPoE)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
>>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: srv6as_plugin.so (Static SRv6 proxy)
>>> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
>>> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: tlsmbedtls_plugin.so (mbedtls based TLS Engine)
>>> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
>>> plugin: tlsopenssl_plugin.so (openssl based TLS Engine)
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: dpdk_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: udp_ping_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: lb_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: ioam_export_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: ioam_trace_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin:
>>> vxlan_gpe_ioam_export_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: ioam_vxlan_gpe_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
>>> load_one_vat_plugin:67: Loaded plugin: ioam_pot_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: flowprobe_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: stn_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: nat_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: udp_ping_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: pppoe_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: lacp_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: acl_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: ioam_export_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: ioam_trace_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: vxlan_gpe_ioam_export_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: gtpu_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: cdp_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: ioam_vxlan_gpe_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: memif_test_plugin.so
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
>>> Loaded plugin: ioam_pot_test_plugin.so
>>> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: dpdk: EAL
>>> init args: -c 1 -n 4 --no-pci --huge-dir /dev/hugepages --master-lcore
>>> 0 --socket-mem 256,0
>>> May 27 11:40:29 localhost /usr/bin/vpp[6818]: dpdk: EAL init args: -c
>>> 1 -n 4 --no-pci --huge-dir /dev/hugepages --master-lcore 0
>>> --socket-mem 256,0
>>> May 27 11:40:29 localhost vnet[6818]: dpdk_ipsec_process:1019: not
>>> enough DPDK crypto resources, default to OpenSSL
>>> May 27 11:43:19 localhost vnet[6818]: show vhost-user: unknown input `detail
>>> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
>>> 0x7fcca4620187, faulting address 0x7fcb317ac000
>>> May 27 11:44:00 localhost systemd[1]: vpp.service: Main process
>>> exited, code=killed, status=6/ABRT
>>> May 27 11:44:00 localhost systemd[1]: vpp.service: Unit entered failed
>>> state.
>>> May 27 11:44:00 localhost systemd[1]: vpp.service: Failed with result
>>> 'signal'.
>>> May 27 11:44:00 localhost systemd[1]: vpp.service: Service hold-off
>>> time over, scheduling restart
>>>
>>>
>>>   Thanks,
>>>   Ravi
>>>
>>>
>>>
>>> Thanks.
>>>
>>> Cheers,
>>> Marco
>>>
>>>
>>>
>>>
>>> --
>>> Marco V
>>>
>>> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
>>> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
>>>
