Eyle,

This argument in your qemu command line,

queues=16,

is over our current limit. We support up to 8. I can submit an improvement 
patch, but I think it will go into master only.
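
In the meantime, capping the queue count should keep you under the limit. A 
minimal sketch of just the relevant arguments, everything else unchanged 
(assuming the usual virtio-net convention of vectors = 2*queues + 2, so 18 
for 8 queues):

  -chardev socket,id=charnet0,path=/tmp/15873ca6-0488-4826-9f50-bab037271c93,server
  -netdev vhost-user,chardev=charnet0,queues=8,id=hostnet0
  -device virtio-net-pci,mq=on,vectors=18,rx_queue_size=1024,tx_queue_size=1024,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:e4:df,bus=pci.0,addr=0x3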

Steven

From: Eyle Brinkhuis <eyle.brinkh...@surf.nl>
Date: Wednesday, December 9, 2020 at 9:24 AM
To: "Steven Luong (sluong)" <slu...@cisco.com>
Cc: "Benoit Ganne (bganne)" <bga...@cisco.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi Steven,

This is the command line:

libvirt+ 1620511       1  0 17:19 ?        00:00:00 /usr/bin/qemu-system-x86_64 
-name guest=instance-000002be,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-96-instance-000002be/master-key.aes
 -machine pc-i440fx-4.0,accel=kvm,usb=off,dump-guest-core=off -cpu host -m 8192 
-overcommit mem-lock=off -smp 16,sockets=16,cores=1,threads=1 -object 
memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/96-instance-000002be,share=yes,size=8589934592,host-nodes=0,policy=bind
 -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -uuid 
e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86 -smbios type=1,manufacturer=OpenStack 
Foundation,product=OpenStack 
Nova,version=20.3.0,serial=e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86,uuid=e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86,family=Virtual
 Machine -no-user-config -nodefaults -chardev 
socket,id=charmonitor,fd=25,server,nowait -mon 
chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global 
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object 
secret,id=virtio-disk0-secret0,data=6heG0DJExrHzsPjvdMMDZEgCRzMTVhEQNM1q+t/PeVI=,keyid=masterKey0,iv=q1A9BiAx0eW1MsIpYrU56A==,format=base64
 -drive 
file=rbd:cinder-ceph/volume-22c67810-cd55-4cc2-a830-1433488003eb:id=cinder-ceph:auth_supported=cephx\;none:mon_host=10.0.91.205\:6789\;10.0.91.206\:6789\;10.0.91.207\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=22c67810-cd55-4cc2-a830-1433488003eb
 -chardev 
socket,id=charnet0,path=/tmp/15873ca6-0488-4826-9f50-bab037271c93,server 
-netdev vhost-user,chardev=charnet0,queues=16,id=hostnet0 -device 
virtio-net-pci,mq=on,vectors=34,rx_queue_size=1024,tx_queue_size=1024,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:e4:df,bus=pci.0,addr=0x3
 -add-fd set=1,fd=28 -chardev 
pty,id=charserial0,logfile=/dev/fdset/1,logappend=on -device 
isa-serial,chardev=charserial0,id=serial0 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.0.92.191:1 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg 
timestamp=on


It looks like it is only requesting 16 queues.


@Ben, I have put those in the same file share as well 
(https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb)


Regards,

Eyle


On 9 Dec 2020, at 18:00, Steven Luong (sluong) <slu...@cisco.com> wrote:

Eyle,

Can you also show me the qemu command line to bring up the VM? I think it is 
asking for more than 16 queues. VPP supports up to 16.
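
If it helps, once the socket is connected you can check what was actually 
negotiated per interface from the VPP debug CLI (just a pointer; the exact 
output layout varies by release):

  vppctl show vhost-user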

Steven

On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) 
via lists.fd.io" <vpp-dev@lists.fd.io on behalf of 
bganne=cisco....@lists.fd.io> wrote:

   Hi Eyle, could you share the associated .deb files you built (esp. vpp, 
vpp-dbg, libvppinfra, vpp-plugin-core and vpp-plugin-dpdk)?
   I cannot make use of the core without them, as you rebuilt vpp.

   Best
   ben


-----Original Message-----
From: Eyle Brinkhuis <eyle.brinkh...@surf.nl>
Sent: Wednesday, 9 December 2020 17:02
To: Benoit Ganne (bganne) <bga...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi Ben,

I have built a new 20.05.1 version with this fix cherry-picked. It gets a
lot further now: the VM actually spawns and I can see the interface being
created inside VPP. However, a little while later, VPP crashes once again.
I have created a new core dump and API post-mortem, which can be found
here:

https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb
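
For reference, the cherry-pick and rebuild went roughly like this (a sketch 
from memory; the gerrit patchset number and the make targets are assumptions 
on my side):

  git clone https://gerrit.fd.io/r/vpp && cd vpp
  git checkout v20.05.1
  # fetch change 30346 from gerrit (patchset 1 assumed) and apply it
  git fetch https://gerrit.fd.io/r/vpp refs/changes/46/30346/1
  git cherry-pick FETCH_HEAD
  # build the Debian packages
  make install-dep
  make pkg-deb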

BTW, I haven't tried this with 20.09 yet. Let me know if you want me to do
that first. Once again, thanks for your quick reply.

Regards,

Eyle


On 8 Dec 2020, at 19:14, Benoit Ganne (bganne) via lists.fd.io
<bganne=cisco....@lists.fd.io> wrote:

Hi Eyle,

Thanks for the core; I think I identified the issue.
Can you check whether https://gerrit.fd.io/r/c/vpp/+/30346 fixes the issue?
It should apply to 20.05 without conflicts.

Best
ben



-----Original Message-----
From: Eyle Brinkhuis <eyle.brinkh...@surf.nl>
Sent: Wednesday, 2 December 2020 17:13
To: Benoit Ganne (bganne) <bga...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: Vpp crashes with core dump vhost-user interface

Hi Ben, all,

I’m sorry, I forgot to add a backtrace. I have now posted it here:
https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb


I am not too familiar with the openstack integration, but now that 20.09 is
out, can't you move to 20.09? At least in your lab to check whether you
still see this issue.

The last “guaranteed to work” version is 20.05.1 against networking-vpp. I
can still try it in my testbed, but I’d like to stick to the known working
combinations as much as possible. I’ll let you know if anything comes up!

Thanks for the quick replies, both you and Steven.

Regards,

Eyle


On 2 Dec 2020, at 16:35, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

Hi Eyle,

I am not too familiar with the openstack integration, but now that 20.09 is
out, can't you move to 20.09? At least in your lab to check whether you
still see this issue.
Apart from that, we'd need to decipher the backtrace to be able to help.
The best would be to share a coredump as explained here:

https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html#core-files
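
In case it saves you a trip through the doc, the gist is to enable full
coredumps in startup.conf and make sure the kernel actually keeps them,
roughly like this (paths are only examples):

  # /etc/vpp/startup.conf
  unix {
    full-coredump
    coredump-size unlimited
  }

  # on the host: keep cores somewhere predictable
  mkdir -p /tmp/dumps
  sysctl -w kernel.core_pattern=/tmp/dumps/%e-%t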

Best
ben



-----Original Message-----
From: vpp-dev@lists.fd.io <vpp-d...@lists.fd.io> On Behalf Of Eyle Brinkhuis
Sent: Wednesday, 2 December 2020 14:59
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi all,

In our environment (vpp 20.05.1, Ubuntu 18.04.5, networking-vpp 20.05.1,
OpenStack Train) we are running into an issue. When we spawn a VM (regular
Ubuntu 18.04.4) with 16 CPU cores and 8G memory and a VPP-backed interface,
our VPP instance dies:

Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: received signal SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: received signal SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #0 0x00007fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #0 0x00007fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #1 0x00007fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #1 0x00007fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #2 0x00007fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #2 0x00007fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #3 0x00007fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #3 0x00007fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #4 0x00007fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #4 0x00007fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #5 0x00007fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #5 0x00007fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #6 0x00007fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #7 0x00007fdf80655076 0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #6 0x00007fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #8 0x00007fdf7fa3b3f4 0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #7 0x00007fdf80655076 0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #8 0x00007fdf7fa3b3f4 0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Main process exited, code=dumped, status=6/ABRT
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Failed with result 'core-dump'.


While we are able to run 8-core VMs, we’d like to be able to create beefier
ones. VPP restarts, but never gets as far as creating the vhost-user
interface. Has anyone run into the same issue?

Regards,

Eyle