Hi Steven
Shared memory is set up correctly. I am seeing the following errors. The system
on which there is no crash doesn't support 1G hugepages, so I have
to use 2M hugepages with the following config for VPP.
(1) host
vpp# show error
Count  Node  Reason
vpp# show e
Ravi,
I suppose you already checked the obvious that the vhost connection is
established and shared memory has at least 1 region in show vhost. For traffic
issue, use show error to see why packets are dropping. trace add
vhost-user-input and show trace to see if vhost is getting the packet.
S
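For reference, the trace workflow described above might look like this on the host (a sketch of a VPP CLI session; the packet count of 50 is an arbitrary choice):

```
vpp# clear error
vpp# trace add vhost-user-input 50
(send a few pings from the virtio side)
vpp# show trace
vpp# show error
```

If show trace stays empty while the interface counters climb, the packets are not reaching the vhost-user-input node at all.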
Damjan, Steven,
I will get back to the system on which VPP is crashing and get more
info on it later.
For now, I got hold of another system (same 16.04 x86_64) and I tried
with the same configuration
VPP vhost-user on host
VPP virtio-user on a container
This time VPP didn't crash. Ping doesn't
Hi Ravi,
Sorry for diluting your topic. From your stack trace and show interface output
I thought you are using OCTEONTx.
Regards,
Nitin
> On 06-Jun-2018, at 22:10, Ravi Kerur wrote:
Steven, Damjan, Nitin,
Let me clarify so there is no confusion; since you are assisting me to
get this working, I will make sure we are all on the same page. I believe
OcteonTx is related to Cavium/ARM and I am not using it.
DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
2MB I
Ravi,
I only have an SSE machine (Ivy Bridge), and DPDK is using the ring mempool as far
as I can tell from gdb. You are using AVX2, which I don't have a machine for, so I
can't try it to see whether the Octeontx mempool is the default mempool for AVX2.
What do you put in the dpdk section of the host startup.conf? What is the output f
Damjan, Steven,
Kindly let me know if I have messed anything up.
I have compiled VPP on x86_64 and done everything as suggested by Steven.
Thanks.
On Tue, Jun 5, 2018 at 1:39 PM, Ravi Kerur wrote:
Hi Damjan,
I am not intentionally using it. I am running VPP on an x86 Ubuntu server.
uname -a
4.9.77.2-rt61 #1 SMP PREEMPT RT Tue May 15 20:36:51 UTC 2018 x86_64
x86_64 x86_64 GNU/Linux
Thanks.
On Tue, Jun 5, 2018 at 1:10 PM, Damjan Marion wrote:
> Dear Ravi,
>
> Currently we don't support Octeo
Steven,
I managed to get the Tx/Rx rings set up with 1GB hugepages. However, when I
assign an IP address to both vhost-user/virtio interfaces and initiate
a ping, VPP crashes.
Is any other mechanism available to test the Tx/Rx path between Vhost and
Virtio? Details below.
***On host***
vpp#show vhos
Ravi,
In order to use dpdk virtio_user, you need 1GB huge page.
Steven
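1GB hugepages normally have to be reserved on the kernel command line at boot; a sketch of the usual Linux setup (the page count of 4 is an assumption, size it to your RAM):

```
# /etc/default/grub, then run update-grub and reboot
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=4"

# verify after reboot
grep Huge /proc/meminfo
```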
On 6/5/18, 11:17 AM, "Ravi Kerur" wrote:
Hi Steven,
Connection is the problem. I don't see memory regions setup correctly.
Below are some details. Currently I am using 2MB hugepages.
(1) Create vhost-user server
debug vhost-user on
vpp# create vhost socket /var/run/vpp/sock3.sock server
VirtualEthernet0/0/0
vpp# set interface state Virt
Ravi,
Do this
1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user on".
2. Bring up the container with the vdev virtio_user commands that you have as
before
3. show vhost-user in the host and verify that it has a shared memory region.
If not, the connection has a problem.
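End to end, the three steps above might look like this on the host (a sketch; the socket path and interface name follow the examples elsewhere in this thread):

```
vpp# debug vhost-user on
vpp# create vhost socket /var/run/vpp/sock3.sock server
vpp# set interface state VirtualEthernet0/0/0 up
vpp# show vhost-user
```

In the show vhost-user output, a memory-regions count of 0 means the driver side never completed the memory-table negotiation.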
Hi Steven,
Thanks for your help. I am using vhost-user client (VPP in container)
and vhost-user server (VPP in host). I thought it should work.
create vhost socket /var/run/vpp/sock3.sock server (On host)
create vhost socket /var/run/usvhost1 (On container)
Can you please point me to a document
Ravi,
VPP only supports vhost-user in the device mode. In your example, the host, in
device mode, and the container also in device mode do not make a happy couple.
You need one of them, either the host or container, running in driver mode
using the dpdk vdev virtio_user command in startup.conf.
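A container-side startup.conf fragment for driver mode might look like this (a sketch, assuming the dpdk virtio_user vdev syntax; the queues value is illustrative):

```
dpdk {
  no-pci
  vdev virtio_user0,path=/var/run/vpp/sock3.sock,queues=1
}
```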
Hi Steven
Though the crash is not happening anymore, there is still an issue with Rx
and Tx. To determine whether the problem is testpmd or vpp, I decided to run
(1) VPP vhost-user server on host-x
(2) Run VPP in a container on host-x and vhost-user client port
connecting to vhost-user server.
Still doesn't
show interface and look for the counter and count columns for the corresponding
interface.
Steven
On 5/31/18, 1:28 PM, "Ravi Kerur" wrote:
Hi Steven,
You made my day, thank you. I didn't realize different dpdk versions
(vpp -- 18.02.1 and testpmd -- from the latest git repo (probably 18.05))
could be the cause of the problem. I still don't understand why they
should be, as the virtio/vhost messages are meant to set up the tx/rx rings
correctly.
I downlo
Ravi,
For (1) which works, what dpdk version are you using in the host? Are you using
the same dpdk version as VPP is using? Since you are using VPP latest, I think
it is 18.02. Type "show dpdk version" at the VPP prompt to find out for sure.
Steven
On 5/31/18, 11:44 AM, "Ravi Kerur" wrote:
Hi Steven,
I have tested the following scenarios, and it is basically not clear why
you think DPDK is the problem. Is it possible VPP and DPDK use
different virtio versions?
Following are the scenarios I have tested
(1) testpmd/DPDK vhost-user (running on host) and testpmd/DPDK
virtio-user (in a cont
Ravi,
I've proved my point -- there is a problem in the way that you invoke testpmd.
The shared memory region that it passes to the device is not accessible from
the device. I don't know what the correct options are that you need to use.
This is really a question for dpdk.
As a further exercis
Hi Steven,
Thank you for your help. I removed sock1.sock and sock2.sock and
restarted vpp; at least the interfaces get created now. However, when I start
dpdk/testpmd inside the container it crashes as well. Below are some
details. I am using vpp code from the latest repo.
(1) On host
show interface
Sorry, I was expecting to see two VhostEthernet interfaces like this. Those
VirtualFunctionEthernet are your physical interfaces.
sh int
Name                Idx  State  Counter  Count
VhostEthernet0       1   up
VhostEthernet1
Ravi,
I don't think you can declare (2) works fine yet. Please bring up the dpdk
vhost-user interfaces and try to send some traffic between them to exercise the
shared memory region from dpdk virtio-user which may be "questionable".
VirtualFunctionEthernet4/10/4  1  down
Virtua
Hi Steven,
I am testing both memif and vhost-virtio; unfortunately memif is not
working either. I posted a question to the list, let me know if
something is wrong. Below is the link
https://lists.fd.io/g/vpp-dev/topic/q_on_memif_between_vpp/20371922?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,
Hi Steve,
Thank you for your inputs. I added the feature-mask to see if it helps in
setting up the queues correctly; it didn't, so I will remove it. I have
tried the following combinations
(1) VPP->vhost-user (on host) and DPDK/testpmd->virtio-user (in a
container) -- VPP crashes
(2) DPDK/testpmd->vhost-user
Ravi,
First and foremost, get rid of the feature-mask option. I don't know what
0x4040 does for you. If that does not help, try testing it with dpdk based
vhost-user instead of VPP native vhost-user to make sure that they can work
well with each other first. To use dpdk vhost-user, add a vd
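For the dpdk vhost-user side, the vdev also goes in the startup.conf dpdk section; a sketch, assuming DPDK's vhost PMD (the socket path is illustrative):

```
dpdk {
  vdev eth_vhost0,iface=/var/run/vpp/sock1.sock
}
```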
I am not sure whether something is wrong with the setup or it is a bug in
vpp; vpp crashes with vhost<-->virtio communication.
(1) Vhost-interfaces are created and attached to bridge-domain as follows
create vhost socket /var/run/vpp/sock1.sock server feature-mask 0x4040
create vhost socket /var/run/vpp/sock2.s
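The bridge-domain attachment would typically be done like this (a sketch; bridge-domain id 1 and the interface names are assumptions):

```
vpp# set interface l2 bridge VirtualEthernet0/0/0 1
vpp# set interface l2 bridge VirtualEthernet0/0/1 1
vpp# set interface state VirtualEthernet0/0/0 up
vpp# set interface state VirtualEthernet0/0/1 up
```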
Steve,
Thanks for the inputs on debugs and gdb. I am using gdb on my development
system to debug the issue. I would like to have reliable core
generation on the system on which I don't have access to install gdb.
I installed corekeeper and it still doesn't generate a core. I am
running vpp inside a VM (
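Independent of corekeeper, core generation also requires an unlimited core rlimit for the vpp process and a writable core pattern; a sketch of the usual knobs (the path is illustrative):

```
# allow cores (or set LimitCORE=infinity in the vpp systemd unit)
ulimit -c unlimited
# direct cores to /var/crash
sysctl -w kernel.core_pattern=/var/crash/core.%e.%p
```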
Ravi,
I installed corekeeper, and the core file is kept in /var/crash. But why not use
gdb to attach to the VPP process?
To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP prompt.
Steven
On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur"
wrote:
Hi Marco,
On Tue, May 29, 2018 at 6:30 AM, Marco Varlese wrote:
Ravi,
On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
> Hello,
>
> I have a VM(16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
> installed VPP successfully on it. Later I have created vhost-user
> interfaces via
>
> create vhost socket /var/run/vpp/sock1.sock server
> create vhost so