Ravi,

I don't think you can declare that (2) works fine yet. Please bring up the
dpdk vhost-user interfaces and try to send some traffic between them to
exercise the shared memory region from dpdk virtio-user, which may be
"questionable". See the sketch below the interface listing.

    VirtualFunctionEthernet4/10/4     1        down
    VirtualFunctionEthernet4/10/6     2        down
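
Something like this should exercise the datapath (an untested sketch,
assuming the two dpdk vdevs come up as VhostEthernet0 and VhostEthernet1):

    set interface state VhostEthernet0 up
    set interface state VhostEthernet1 up
    set interface l2 xconnect VhostEthernet0 VhostEthernet1
    set interface l2 xconnect VhostEthernet1 VhostEthernet0

Then, from testpmd in the container:

    testpmd> start tx_first
    testpmd> show port stats all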

Steven

On 5/30/18, 4:41 PM, "Ravi Kerur" <rke...@gmail.com> wrote:

    Hi Steve,
    
    Thank you for your inputs. I added the feature-mask to see whether it
    would help set up the queues correctly; it didn't, so I will remove it.
    I have tried the following combinations:
    
    (1) VPP->vhost-user (on host) and DPDK/testpmd->virtio-user (in a
    container)  -- VPP crashes
    (2) DPDK/testpmd->vhost-user (on host) and DPDK/testpmd->virtio-user
    (in a container) -- works fine
    
    To use DPDK vhost-user inside VPP, I defined the configuration in
    startup.conf as you suggested; it looks as follows:
    
    unix {
      nodaemon
      log /var/log/vpp/vpp.log
      full-coredump
      cli-listen /run/vpp/cli.sock
      gid vpp
    }
    
    api-segment {
      gid vpp
    }
    
    cpu {
            main-core 1
            corelist-workers 6-9
    }
    
    dpdk {
            dev 0000:04:10.4
            dev 0000:04:10.6
            uio-driver vfio-pci
            vdev net_vhost0,iface=/var/run/vpp/sock1.sock
            vdev net_vhost1,iface=/var/run/vpp/sock2.sock
            huge-dir /dev/hugepages_1GB
            socket-mem 2048,2048
    }
    
    From the VPP logs:
    dpdk: EAL init args: -c 3c2 -n 4 --vdev
    net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
    net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1GB
    -w 0000:04:10.4 -w 0000:04:10.6 --master-lcore 1 --socket-mem
    2048,2048
    
    However, VPP doesn't create the vhost interfaces at all:
    
    vpp# show interface
                  Name               Idx       State          Counter          Count
    VirtualFunctionEthernet4/10/4     1        down
    VirtualFunctionEthernet4/10/6     2        down
    local0                            0        down
    
    Since it is a static mapping, I am assuming the interfaces should be
    created automatically, correct?
    
    Thanks.
    
    On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong) <slu...@cisco.com> wrote:
    > Ravi,
    >
    > First and foremost, get rid of the feature-mask option. I don't know
    > what 0x40400000 does for you. If that does not help, try testing it
    > with dpdk based vhost-user instead of VPP native vhost-user to make
    > sure that they can work well with each other first. To use dpdk
    > vhost-user, add a vdev command in the startup.conf for each vhost-user
    > device that you have.
    >
    > dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }
    >
    > dpdk based vhost-user interfaces are named VhostEthernet0,
    > VhostEthernet1, etc. Make sure you use the right interface name when
    > setting the state to up.
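    >
    > For example (assuming the first vdev comes up as VhostEthernet0):
    >
    >     set interface state VhostEthernet0 up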
    >
    > If dpdk based vhost-user does not work with testpmd either, the
    > problem is likely in the way you invoke testpmd.
    >
    > If dpdk based vhost-user works well with the same testpmd device
    > driver but VPP native vhost-user does not, I can set up something
    > similar to yours to look into it.
    >
    > The device driver, testpmd, is supposed to pass the shared memory
    > regions to VPP for the TX/RX queues. It looks like VPP vhost-user may
    > have hit a bump there when using the shared memory (txvq->avail).
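    >
    > Roughly what happens under the hood, as a sketch (not the exact vpp
    > code; the struct and function names here are illustrative): every
    > driver-side address must be translated through one of the regions
    > announced in VHOST_USER_SET_MEM_TABLE, and if no region covers the
    > address, the resulting pointer is unusable:
    >
    >     typedef unsigned long long u64;
    >
    >     typedef struct
    >     {
    >       u64 userspace_addr; /* driver-side virtual base of the region */
    >       u64 memory_size;    /* length of the region */
    >       u64 mmap_addr;      /* where the backend mmap'ed the region fd */
    >     } region_t;
    >
    >     /* Return a backend pointer for a driver address, or NULL when the
    >        address (e.g. txvq->avail) is not covered by the region. */
    >     static inline void *
    >     map_user_mem (region_t * r, u64 addr)
    >     {
    >       if (addr >= r->userspace_addr
    >           && addr < r->userspace_addr + r->memory_size)
    >         return (void *) (r->mmap_addr + (addr - r->userspace_addr));
    >       return (void *) 0;
    >     }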
    >
    > Steven
    >
    > PS. vhost-user is not an optimal interface for containers. You may
    > want to look into using memif if you don't already know about it.
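    >
    > A rough sketch of the memif equivalent on the VPP side (untested; the
    > CLI syntax varies by release, so check the "create interface memif"
    > help on your build -- older builds use "create memif ..." and name the
    > interface memif0 rather than memif0/0):
    >
    >     create interface memif id 0 master
    >     set interface state memif0/0 up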
    >
    >
    > On 5/30/18, 2:06 PM, "Ravi Kerur" <rke...@gmail.com> wrote:
    >
    >     I am not sure whether something is wrong with the setup or whether
    >     this is a bug in vpp; vpp crashes on vhost<-->virtio communication.
    >
    >     (1) Vhost-user interfaces are created and attached to a bridge
    >     domain as follows:
    >
    >     create vhost socket /var/run/vpp/sock1.sock server feature-mask 0x40400000
    >     create vhost socket /var/run/vpp/sock2.sock server feature-mask 0x40400000
    >     set interface state VirtualEthernet0/0/0 up
    >     set interface state VirtualEthernet0/0/1 up
    >
    >     set interface l2 bridge VirtualEthernet0/0/0 1
    >     set interface l2 bridge VirtualEthernet0/0/1 1
    >
    >
    >     (2) DPDK/testpmd is started in a container to talk to the vpp
    >     vhost-user interfaces as follows:
    >
    >     docker run -it --privileged -v
    >     /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
    >     /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
    >     /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
    >     4 --log-level=9 -m 64 --no-pci --single-file-segments
    >     --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
    >     --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
    >     -i
    >
    >     (3) show vhost-user VirtualEthernet0/0/1
    >     Virtio vhost-user interfaces
    >     Global:
    >       coalesce frames 32 time 1e-3
    >       number of rx virtqueues in interrupt mode: 0
    >     Interface: VirtualEthernet0/0/1 (ifindex 4)
    >     virtio_net_hdr_sz 10
    >      features mask (0x40400000):
    >      features (0x0):
    >       protocol features (0x0)
    >
    >      socket filename /var/run/vpp/sock2.sock type server errno "Success"
    >
    >      rx placement:
    >      tx placement: spin-lock
    >        thread 0 on vring 0
    >        thread 1 on vring 0
    >        thread 2 on vring 0
    >        thread 3 on vring 0
    >        thread 4 on vring 0
    >
    >      Memory regions (total 1)
    >      region fd    guest_phys_addr    memory_size        userspace_addr     mmap_offset        mmap_addr
    >      ====== ===== ================== ================== ================== ================== ==================
    >       0     55    0x00007ff7c0000000 0x0000000040000000 0x00007ff7c0000000 0x0000000000000000 0x00007ffbc0000000
    >
    >     vpp# show vhost-user VirtualEthernet0/0/0
    >     Virtio vhost-user interfaces
    >     Global:
    >       coalesce frames 32 time 1e-3
    >       number of rx virtqueues in interrupt mode: 0
    >     Interface: VirtualEthernet0/0/0 (ifindex 3)
    >     virtio_net_hdr_sz 10
    >      features mask (0x40400000):
    >      features (0x0):
    >       protocol features (0x0)
    >
    >      socket filename /var/run/vpp/sock1.sock type server errno "Success"
    >
    >      rx placement:
    >      tx placement: spin-lock
    >        thread 0 on vring 0
    >        thread 1 on vring 0
    >        thread 2 on vring 0
    >        thread 3 on vring 0
    >        thread 4 on vring 0
    >
    >      Memory regions (total 1)
    >      region fd    guest_phys_addr    memory_size        userspace_addr     mmap_offset        mmap_addr
    >      ====== ===== ================== ================== ================== ================== ==================
    >       0     51    0x00007ff7c0000000 0x0000000040000000 0x00007ff7c0000000 0x0000000000000000 0x00007ffc00000000
    >
    >     (4) vpp stack trace
    >     Program received signal SIGSEGV, Segmentation fault.
    >     [Switching to Thread 0x7ffd0e090700 (LWP 46570)]
    >     0x00007ffff7414642 in vhost_user_if_input
    >     (mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
    >         node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
    >         vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
    >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
    >     1596      if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
    >     (gdb) bt
    >     #0  0x00007ffff7414642 in vhost_user_if_input
    >     (mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
    >         node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
    >         vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
    >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
    >     #1  vhost_user_input (f=<optimized out>, node=<optimized out>,
    >     vm=<optimized out>)
    >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1947
    >     #2  vhost_user_input_avx2 (vm=<optimized out>, node=<optimized out>,
    >     frame=<optimized out>)
    >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1972
    >     #3  0x00007ffff7954d35 in dispatch_node
    >     (last_time_stamp=12391212490024174, frame=0x0,
    >         dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
    >     node=0x7fffb76bab00,
    >         vm=0x7fffb672a9c0) at /var/venom/vpp/build-data/../src/vlib/main.c:988
    >     #4  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb672a9c0)
    >         at /var/venom/vpp/build-data/../src/vlib/main.c:1507
    >     #5  vlib_worker_loop (vm=0x7fffb672a9c0) at
    >     /var/venom/vpp/build-data/../src/vlib/main.c:1641
    >     #6  0x00007ffff6ad25d8 in clib_calljmp ()
    >         at /var/venom/vpp/build-data/../src/vppinfra/longjmp.S:110
    >     #7  0x00007ffd0e08fdb0 in ?? ()
    >     #8  0x00007fffb4436edd in eal_thread_loop (arg=<optimized out>)
    >         at /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_eal/linuxapp/eal/eal_thread.c:153
    >     #9  0x0000000000000000 in ?? ()
    >     (gdb) frame 0
    >     #0  0x00007ffff7414642 in vhost_user_if_input
    >     (mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
    >         node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
    >         vum=0x7ffff78f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
    >         at /var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
    >     1596      if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
    >     (gdb) p txvq
    >     $1 = (vhost_user_vring_t *) 0x7fffb6739ac0
    >     (gdb) p *txvq
    >     $2 = {cacheline0 = 0x7fffb6739ac0 "?", qsz_mask = 255, last_avail_idx
    >     = 0, last_used_idx = 0,
    >       n_since_last_int = 0, desc = 0x7ffbfff97000, avail = 0x7ffbfff98000,
    >     used = 0x7ffbfff99000,
    >       int_deadline = 0, started = 1 '\001', enabled = 0 '\000', log_used = 0 '\000',
    >       cacheline1 = 0x7fffb6739b00 "????\n", errfd = -1, callfd_idx = 10,
    >     kickfd_idx = 14,
    >       log_guest_addr = 0, mode = 1}
    >     (gdb) p *(txvq->avail)
    >     Cannot access memory at address 0x7ffbfff98000
    >     (gdb)
    >
    >     On Tue, May 29, 2018 at 10:47 AM, Ravi Kerur <rke...@gmail.com> wrote:
    >     > Steve,
    >     >
    >     > Thanks for the inputs on debugging and gdb. I am using gdb on my
    >     > development system to debug the issue. I would like to have
    >     > reliable core generation on the system where I don't have access
    >     > to install gdb. I installed corekeeper and it still doesn't
    >     > generate a core. I am running vpp inside a VM (VirtualBox/vagrant);
    >     > I am not sure if I need to set something inside the vagrant config
    >     > file.
    >     >
    >     >  dpkg -l corekeeper
    >     > Desired=Unknown/Install/Remove/Purge/Hold
    >     > | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
    >     > |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
    >     > ||/ Name                 Version         Architecture    Description
    >     > +++-====================-===============-===============-==============================================
    >     > ii  corekeeper           1.6             amd64           enable core files and report crashes to the system
    >     >
    >     > Thanks.
    >     >
    >     > On Tue, May 29, 2018 at 9:38 AM, Steven Luong (sluong) <slu...@cisco.com> wrote:
    >     >> Ravi,
    >     >>
    >     >> I installed corekeeper and the core file is kept in /var/crash.
    >     >> But why not use gdb to attach to the VPP process?
    >     >> To turn on VPP vhost-user debug, type "debug vhost-user on" at
    >     >> the VPP prompt.
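    >     >>
    >     >> For example (a sketch; adjust the binary path and pid as needed):
    >     >>
    >     >>     sudo gdb /usr/bin/vpp -p $(pidof vpp)
    >     >>     (gdb) continue        # run until the SIGSEGV, then type "bt"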
    >     >>
    >     >> Steven
    >     >>
    >     >> On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" <rke...@gmail.com> wrote:
    >     >>
    >     >>     Hi Marco,
    >     >>
    >     >>
    >     >>     On Tue, May 29, 2018 at 6:30 AM, Marco Varlese <mvarl...@suse.de> wrote:
    >     >>     > Ravi,
    >     >>     >
    >     >>     > On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
    >     >>     >> Hello,
    >     >>     >>
    >     >>     >> I have a VM (Ubuntu 16.04.4 x86_64) with 2 cores and 4 GB RAM. I
    >     >>     >> have installed VPP successfully on it. I then created vhost-user
    >     >>     >> interfaces via
    >     >>     >>
    >     >>     >> create vhost socket /var/run/vpp/sock1.sock server
    >     >>     >> create vhost socket /var/run/vpp/sock2.sock server
    >     >>     >> set interface state VirtualEthernet0/0/0 up
    >     >>     >> set interface state VirtualEthernet0/0/1 up
    >     >>     >>
    >     >>     >> set interface l2 bridge VirtualEthernet0/0/0 1
    >     >>     >> set interface l2 bridge VirtualEthernet0/0/1 1
    >     >>     >>
    >     >>     >> I then run DPDK/testpmd inside a container, which uses
    >     >>     >> virtio-user interfaces, with the following command:
    >     >>     >>
    >     >>     >> docker run -it --privileged -v
    >     >>     >> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
    >     >>     >> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
    >     >>     >> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
    >     >>     >> 4 --log-level=9 -m 64 --no-pci --single-file-segments
    >     >>     >> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
    >     >>     >> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
    >     >>     >> -i
    >     >>     >>
    >     >>     >> VPP vnet crashes with the following message:
    >     >>     >>
    >     >>     >> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
    >     >>     >> 0x7fcca4620187, faulting address 0x7fcb317ac000
    >     >>     >>
    >     >>     >> Questions:
    >     >>     >> I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
    >     >>     >> unix {
    >     >>     >>   nodaemon
    >     >>     >>   log /var/log/vpp/vpp.log
    >     >>     >>   full-coredump
    >     >>     >>   cli-listen /run/vpp/cli.sock
    >     >>     >>   gid vpp
    >     >>     >> }
    >     >>     >>
    >     >>     >> But I couldn't locate the core file; where should it be?
    >     >>     > The location of the coredump file depends on your system configuration.
    >     >>     >
    >     >>     > Please, check "cat /proc/sys/kernel/core_pattern"
    >     >>     >
    >     >>     > If you have systemd-coredump in the output of the above command,
    >     >>     > then likely the location of the coredump files is "/var/lib/systemd/coredump/"
    >     >>     >
    >     >>     > You can also change the location where your system places the coredump files:
    >     >>     > echo '/PATH_TO_YOUR_LOCATION/core_%e.%p' | sudo tee /proc/sys/kernel/core_pattern
    >     >>     >
    >     >>     > See if that helps...
    >     >>     >
    >     >>
    >     >>     Initially '/proc/sys/kernel/core_pattern' was set to 'core'. I
    >     >>     changed it to 'systemd-coredump'. Still no core is generated, and
    >     >>     VPP crashes:
    >     >>
    >     >>     May 29 08:54:34 localhost vnet[4107]: received signal SIGSEGV, PC
    >     >>     0x7f0167751187, faulting address 0x7efff43ac000
    >     >>     May 29 08:54:34 localhost systemd[1]: vpp.service: Main process
    >     >>     exited, code=killed, status=6/ABRT
    >     >>     May 29 08:54:34 localhost systemd[1]: vpp.service: Unit entered failed state.
    >     >>     May 29 08:54:34 localhost systemd[1]: vpp.service: Failed with result 'signal'.
    >     >>
    >     >>
    >     >>     cat /proc/sys/kernel/core_pattern
    >     >>     systemd-coredump
    >     >>
    >     >>
    >     >>     ulimit -a
    >     >>     core file size          (blocks, -c) unlimited
    >     >>     data seg size           (kbytes, -d) unlimited
    >     >>     scheduling priority             (-e) 0
    >     >>     file size               (blocks, -f) unlimited
    >     >>     pending signals                 (-i) 15657
    >     >>     max locked memory       (kbytes, -l) 64
    >     >>     max memory size         (kbytes, -m) unlimited
    >     >>     open files                      (-n) 1024
    >     >>     pipe size            (512 bytes, -p) 8
    >     >>     POSIX message queues     (bytes, -q) 819200
    >     >>     real-time priority              (-r) 0
    >     >>     stack size              (kbytes, -s) 8192
    >     >>     cpu time               (seconds, -t) unlimited
    >     >>     max user processes              (-u) 15657
    >     >>     virtual memory          (kbytes, -v) unlimited
    >     >>     file locks                      (-x) unlimited
    >     >>
    >     >>     cd /var/lib/systemd/coredump/
    >     >>     root@localhost:/var/lib/systemd/coredump# ls
    >     >>     root@localhost:/var/lib/systemd/coredump#
    >     >>
    >     >>     >>
    >     >>     >> (2) How do I enable debugs? I have used 'make build' but see no
    >     >>     >> additional logs other than those shown below.
    >     >>     >>
    >     >>     >>
    >     >>     >> VPP logs from /var/log/syslog are shown below:
    >     >>     >> cat /var/log/syslog
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361: plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: abf_plugin.so (ACL based Forwarding)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device Plugin)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded plugin: cdp_plugin.so
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GTPv1-U)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: igmp_plugin.so (IGMP messaging)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound OAM)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: l2e_plugin.so (L2 Emulation)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control Protocol)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: pppoe_plugin.so (PPPoE)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
    >     >>     >> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: tlsmbedtls_plugin.so (mbedtls based TLS Engine)
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS Engine)
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: udp_ping_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_export_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_trace_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: vxlan_gpe_ioam_export_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_vxlan_gpe_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_pot_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: udp_ping_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_export_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_trace_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: vxlan_gpe_ioam_export_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_vxlan_gpe_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67: Loaded plugin: ioam_pot_test_plugin.so
    >     >>     >> May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]: dpdk: EAL init args: -c 1 -n 4 --no-pci --huge-dir /dev/hugepages --master-lcore 0 --socket-mem 256,0
    >     >>     >> May 27 11:40:29 localhost /usr/bin/vpp[6818]: dpdk: EAL init args: -c 1 -n 4 --no-pci --huge-dir /dev/hugepages --master-lcore 0 --socket-mem 256,0
    >     >>     >> May 27 11:40:29 localhost vnet[6818]: dpdk_ipsec_process:1019: not enough DPDK crypto resources, default to OpenSSL
    >     >>     >> May 27 11:43:19 localhost vnet[6818]: show vhost-user: unknown input `detail
    >     >>     >> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC 0x7fcca4620187, faulting address 0x7fcb317ac000
    >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Main process exited, code=killed, status=6/ABRT
    >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Unit entered failed state.
    >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Failed with result 'signal'.
    >     >>     >> May 27 11:44:00 localhost systemd[1]: vpp.service: Service hold-off time over, scheduling restart
    >     >>     >>
    >     >>
    >     >>     Thanks,
    >     >>     Ravi
    >     >>
    >     >>     >>
    >     >>     >>
    >     >>     >> Thanks.
    >     >>     > Cheers,
    >     >>     > Marco
    >     >>     >>
    >     >>     >>
    >     >>     >>
    >     >>     > --
    >     >>     > Marco V
    >     >>     >
    >     >>     > SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
    >     >>     > HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
    >     >>
    >     >>
    >     >>
    >     >>
    >     >>
    >     >
    >     > 
    >     >
    >
    >
    

