> Use trace add vhost-user-input and show trace to see if vhost is getting
> the packet.
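>
> A minimal sequence would be something like this (the packet count of 20
> is just an illustrative value):
>
> vpp# trace add vhost-user-input 20
> vpp# show trace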
>
> Steven
>
> On 6/6/18, 1:33 PM, "Ravi Kerur" wrote:
>
> Damjan, Steven,
>
> I will get back to the system on which VPP is crashing and get more
> info on it later.
> From the show interface
> output I thought you were using OCTEONTx.
>
> Regards,
> Nitin
>
>> On 06-Jun-2018, at 22:10, Ravi Kerur wrote:
>>
>> Steven, Damjan, Nitin,
>>
>> Let me clarify so there is no confusion, since you are assisting me.
>>
> Thanks,
> Nitin
>
> On 06-Jun-2018, at 01:40, Damjan Marion wrote:
>
> Dear Ravi,
>
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> Regards,
>
> Damjan
>
> On 5 Jun 2018, at 21:46, Ravi Kerur wrote:
>
> Steven,
Damjan, Steven,
Kindly let me know if there is anything I have messed up.
I have compiled VPP on x86_64 and done everything as suggested by Steven.
Thanks.
On Tue, Jun 5, 2018 at 1:39 PM, Ravi Kerur wrote:
> Hi Damjan,
>
> I am not intentionally using it. I am running VPP on an x86 Ubuntu
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> Regards,
>
> Damjan
>
> On 5 Jun 2018, at 21:46, Ravi Kerur wrote:
>
> Steven,
>
> I managed to get Tx/Rx rings set up with 1GB hugepages. However, when I
> assign an IP address to both vhost interfaces
On Tue, Jun 5, 2018 at 11:31 AM, Steven Luong (sluong) wrote:
> Ravi,
>
> In order to use dpdk virtio_user, you need 1GB huge pages.
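>
> As a sketch (the page count here is an assumption; adjust for your
> system), 1GB pages are typically reserved on the kernel command line:
>
> default_hugepagesz=1G hugepagesz=1G hugepages=4
>
> and can be verified afterwards with "grep Huge /proc/meminfo".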
>
> Steven
>
> On 6/5/18, 11:17 AM, "Ravi Kerur" wrote:
>
> Hi Steven,
>
> Connection is the problem. I don't see memory regions
> 5. assign IP addresses on your interfaces in the host and the container.
> 6. do the ping from the container.
> 7. Collect show error, show trace, show interface, and show vhost-user in the
> host. Collect show error and show interface in the container. Put output in
> github and provide a link to view. There is n
> (host) VPP DPDK vdev virtio_user (container) VPP native vhost-user
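>
> On the host side, the vdev would typically be declared in the dpdk
> section of startup.conf; a sketch, with the socket path assumed to match
> the one the container side listens on:
>
> dpdk {
>   vdev virtio_user0,path=/var/run/vpp/sock1.sock
> }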
>
> Steven
>
> On 6/4/18, 3:27 PM, "Ravi Kerur" wrote:
>
> Hi Steven
>
> Though the crash is not happening anymore, there is still an issue with Rx
> and Tx. To eliminate whether
vpp#
vpp# ping 192.168.1.1
Statistics: 5 sent, 0 received, 100% packet loss
vpp#
On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong) wrote:
> show interface and look for the counter and count columns for the
> corresponding interface.
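>
> It can help to zero the counters right before the test so the delta is
> unambiguous, e.g.:
>
> vpp# clear interfaces
> vpp# ping 192.168.1.1
> vpp# show interface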
>
> Steven
>
> On 5/31/18, 1:28
> I think it is 18.02. Type "show dpdk version" at the VPP prompt to find out
> for sure.
>
> Steven
>
> On 5/31/18, 11:44 AM, "Ravi Kerur" wrote:
>
> Hi Steven,
>
> I have tested the following scenarios and it is basically not clear why
>
will crash in the same place. I hope you can find out the answer from
> dpdk and tell us about it.
>
> Steven
>
> On 5/31/18, 9:31 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur"
> wrote:
>
> Hi Steven,
>
> Thank you for your help, I removed so
> region from dpdk virtio-user which may be
> "questionable".
>
> VirtualFunctionEthernet4/10/4    1    down
> VirtualFunctionEthernet4/10/6    2    down
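>
> Both are administratively down; they would need to be brought up before
> they can pass traffic, e.g.:
>
> vpp# set interface state VirtualFunctionEthernet4/10/4 up
> vpp# set interface state VirtualFunctionEthernet4/10/6 up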
>
> Steven
>
> On 5/30/18, 4:41 PM, "Ravi Kerur" wrote:
>
> Hi Steve,
>
Thanks.
On Wed, May 30, 2018 at 4:41 PM, Ravi Kerur wrote:
> Hi Steve,
>
> Thank you for your inputs. I added feature-mask to see if it helps in
> setting up queues correctly; it didn't, so I will remove it. I have
> tried the following combinations:
>
> (1) VPP->
> VPP for TX/RX queues. It looks like VPP vhost-user might have run into a bump
> there with using the shared memory (txvq->avail).
>
> Steven
>
> PS. vhost-user is not an optimal interface for containers. You may want to
> look into using memif if you don't already know about it.
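>
> As a rough sketch (the exact CLI varies by release), a memif pair is
> created with one instance as master and the other as slave:
>
> vpp1# create interface memif id 0 master
> vpp2# create interface memif id 0 slave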
= 0x7ffbfff99000,
int_deadline = 0, started = 1 '\001', enabled = 0 '\000', log_used = 0 '\000',
cacheline1 = 0x7fffb6739b00 "\n", errfd = -1, callfd_idx = 10,
kickfd_idx = 14,
log_guest_addr = 0, mode = 1}
(gdb) p *(txvq->avail)
Cannot access memory
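
Since txvq->avail lives in memory shared with the guest, a quick sanity
check is whether that region is still mapped into the VPP process, e.g.:

(gdb) info proc mappings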
Steven
>
> On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur"
> wrote:
>
> Hi Marco,
>
>
> On Tue, May 29, 2018 at 6:30 AM, Marco Varlese wrote:
> > Ravi,
> >
> > On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
Hi Marco,
On Tue, May 29, 2018 at 6:30 AM, Marco Varlese wrote:
> Ravi,
>
> On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
>> Hello,
>>
>> I have a VM (16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
>> installed VPP successfully on it. Later I have
Hi,
I have a VM (x86_64, Ubuntu 16.04) with 4GB RAM and 2 cores on which I
have vpp compiled, installed, and running fine. I have followed the
example from
https://docs.fd.io/vpp/17.10/libmemif_example_setup_doc.html
Unfortunately it is not working for me. A couple of questions:
(1) how to debug memif
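
A first step (assuming the memif plugin is loaded) would be the built-in
dump:

vpp# show memif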
Hello,
I have a VM (16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
installed VPP successfully on it. Later I have created vhost-user
interfaces via
create vhost socket /var/run/vpp/sock1.sock server
create vhost socket /var/run/vpp/sock2.sock server
set interface state VirtualEthernet0/0/0 up
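
To pass traffic the interfaces would also need addresses, e.g. (the
address below is illustrative):

set interface ip address VirtualEthernet0/0/0 192.168.1.1/24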
Hi,
I am seeing the following build errors with the latest vpp git repo. The
CPU on the system I am building on supports 'avx2'. Commenting out 'avx2'
in vnet.am builds successfully. However, I would like to use the 'avx2'
version of the library. I have followed the exact build instructions
given in the wiki.
/*** build errors ***/
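
To double-check the CPU flag before rebuilding, something like:

grep -m1 -o avx2 /proc/cpuinfo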