Hi Florin,
Thank you for the response. The reason I stick to 1609 is the router plugin
(vppsb) that works with 1609 but does not work with any other VPP version I
tried. So, I'm trying to get multiple VRFs working with 1609, since I also
need the router plugin for the project...
Thanks,
Michael.
On Wed
Michael,
I would recommend you switch to a newer release (17.07 or the soon-to-be-released
17.10), since the FIB code was completely reworked in 17.01.
Florin
> On Oct 4, 2017, at 7:13 PM, Michael Borokhovich wrote:
>
> Hi,
>
> I'm trying to configure the following setup.
>
> Gigabit
Hi,
I'm trying to configure the following setup.
GigabitEthernet0/4/0 - Table 1
GigabitEthernet0/5/0 - Table 2
GigabitEthernet0/6/0 - Table 0 (default)
If a packet with DST_IP in 10.5.1.0/24 is received at GigabitEthernet0/4/0 or
GigabitEthernet0/5/0, it needs to be sent out via GigabitEthernet0/6/0.
Th
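A hedged CLI sketch of one way this could be wired up, binding the ingress
interfaces to their tables and installing the route in both tables (the
10.5.6.1 next-hop address is illustrative, not taken from the original mail;
exact syntax varies slightly between releases):

  vpp# set interface ip table GigabitEthernet0/4/0 1
  vpp# set interface ip table GigabitEthernet0/5/0 2
  vpp# ip route add 10.5.1.0/24 table 1 via 10.5.6.1 GigabitEthernet0/6/0
  vpp# ip route add 10.5.1.0/24 table 2 via 10.5.6.1 GigabitEthernet0/6/0

On newer releases the non-default tables may need to be created first with
"ip table add 1" and "ip table add 2".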
Hi All,
I tried with both the uio_pci_generic driver and the igb_uio driver. Could you
please share your opinion on this?
Regards,
Balaji
On Tue, Oct 3, 2017 at 5:59 PM, Balaji Kn wrote:
> Hi All,
>
> I am working on VPP 17.07 and using Ubuntu 14.04. I have two VMs, say VM1
> and VM2. I am running VPP on
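For reference, a minimal sketch of how a uio driver is usually selected in VPP's
startup.conf dpdk stanza (the PCI address is illustrative; the corresponding
kernel module has to be loaded before VPP starts):

  dpdk {
    uio-driver igb_uio
    dev 0000:00:08.0
  }

With uio_pci_generic the uio-driver line changes accordingly, and the NIC may need
to be unbound from its kernel driver (e.g. with dpdk-devbind.py) if VPP does not
bind it automatically.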
Hi Justin,
Try api-segment {prefix }
Florin
> On Oct 4, 2017, at 9:23 AM, Justin Iurman wrote:
>
> Hi all,
>
> Is it still possible to run multiple instances of VPP, just like it was done
> with VPP-lite (see here:
> https://wiki.fd.io/view/VPP/Progressive_VPP_Tutorial) before merging it ?
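To illustrate the hint, a hedged sketch of two per-instance startup configs
(prefix names and socket paths are illustrative, in the spirit of the Progressive
VPP Tutorial linked above):

  /etc/vpp/vpp1.conf:
    unix { cli-listen /run/vpp/cli-vpp1.sock }
    api-segment { prefix vpp1 }

  /etc/vpp/vpp2.conf:
    unix { cli-listen /run/vpp/cli-vpp2.sock }
    api-segment { prefix vpp2 }

Each instance then gets its own shared-memory API segment, and the CLI of a given
instance can be reached with something like "vppctl -s /run/vpp/cli-vpp1.sock".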
Hi all,
Is it still possible to run multiple instances of VPP, just like it was done
with VPP-lite (see here: https://wiki.fd.io/view/VPP/Progressive_VPP_Tutorial)
before it was merged?
Actually, my problem is the following. I need to run several instances of VPP
(e.g. vpp1, vpp2, vpp3, etc.) to s
Yeah you completely got the point.
Thanks for the hint ;)
Alessio
On Oct 4, 2017 12:11, "Damjan Marion" wrote:
>
>
> On 2 Oct 2017, at 18:14, Alessio Silvestro
> wrote:
>
> Dear all,
>
> I am running VPP on a CPU with 2 sockets and 4 virtual cores. The startup
> configuration is the following
The "proper" CLI is "show node counters", which displays stats kept by various
graph nodes, including normal operation counts and possibly error counts. The
CLI "sho err" is historical and was kept for backward compatibility. The old
CLI is somewhat misleading but is quicker to type than the proper one.
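For completeness, both commands as typically typed at the debug CLI (output
layout varies by release):

  vpp# show node counters
  vpp# show errors

"sho err" is just the abbreviated form of the latter that the CLI parser accepts.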
Hi,
Hardware that is capable of 50Gbps and above (at 64-byte line rate)
places packets next to each other in large memory zones rather than in
individual memory buffers.
Handling packets without a copy would require vlib_buffer_t to allow
packet data to be NOT consecutive to it.
Are there plans or
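For context, a much-simplified C sketch of the assumption being questioned:
buffer metadata and packet bytes share one buffer, so the data pointer is an
offset into memory that directly follows the header (names and fields here are
illustrative, not the real VPP definitions):

  #include <stdint.h>

  /* Simplified sketch, not the actual vlib_buffer_t layout. */
  typedef struct
  {
    int16_t current_data;    /* offset of the first valid packet byte */
    uint16_t current_length; /* number of valid bytes at that offset */
    /* ... many metadata fields elided ... */
    uint8_t data[0];         /* packet bytes start right after the header */
  } sketch_buffer_t;

  /* Analogous in spirit to vlib_buffer_get_current(): data is assumed to be
   * contiguous with the header, which is what makes it hard to point a buffer
   * at packets living in a separate, large memory zone without copying. */
  static inline void *
  sketch_buffer_get_current (sketch_buffer_t *b)
  {
    return b->data + b->current_data;
  }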
Folks,
I see a number of patches on master that look like fixes, have the word "fix"
in the subject line, but no Jira ticket assigned, and have not been cherry-picked
to stable/1710. Could patch owners, once their patches have been merged, take
care of these last two steps *if needed* and thereby a
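For patch owners, a hedged sketch of the usual cherry-pick flow to the stable
branch (assumes the standard fd.io Gerrit remote and git-review; the commit SHA
is a placeholder):

  git fetch origin
  git checkout -b backport-fix origin/stable/1710
  git cherry-pick -x <merged-commit-sha>
  git review stable/1710

For simple cases, the "Cherry Pick" action in the Gerrit web UI on the merged
change achieves the same result.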
Thanks for the clarification Damjan
-Nitin
From: Damjan Marion
Sent: Wednesday, October 4, 2017 3:37 PM
To: Saxena, Nitin
Cc: vpp-dev@lists.fd.io; Steven Luong (sluong); Athreya, Narayana Prasad
Subject: Re: [vpp-dev] [vpp-v17.10] L2 forwarding errors in vhost-u
One suggestion would be to do a phased transition, with a
'topic' in the existing infra channels set to direct folks to lf-releng,
and a period (3 months? 6 months?) of keeping a presence on the existing
channels, to be used to politely request bouncing conversations that arise
in t
+ci-...@fd.io
This should probably also go to ci-man, which is roughly the equivalent
of releng in OPNFV.
--TFH
On 10/03/2017 02:12 PM, Thanh Ha wrote:
On Tue, Sep 26, 2017 at 6:38 PM, Thanh Ha <thanh...@linuxfoundation.org> wrote:
Hi Everyone,
We'd like to pitch an idea to
> On 2 Oct 2017, at 18:14, Alessio Silvestro wrote:
>
> Dear all,
>
> I am running VPP on a CPU with 2 sockets and 4 virtual cores. The startup
> configuration is the following:
>
> unix {
> interactive
> nodaemon
> }
>
> cpu {
> main-core 0
> corelist-workers 2-3
>
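For readers following along, a hedged example of how that cpu stanza usually
looks when complete (core numbers are illustrative and should match the host's
topology, e.g. keeping workers on the same socket as the NIC):

  cpu {
    main-core 0
    corelist-workers 2-3
  }

main-core pins the main thread and corelist-workers pins the worker threads.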
> On 3 Oct 2017, at 17:32, Saxena, Nitin wrote:
>
> Hi,
>
> While running ping between VM1 and VM2 on aarch64, configured using a vhost-user
> interface, I am seeing the following L2 errors in VPP v1710 for each ICMP packet
> sent
>
>
> DBGvpp# show err
>
Thanks Damjan.
On Tue, Oct 3, 2017 at 3:23 PM, Damjan Marion (damarion) wrote:
>
>
>
> On 3 Oct 2017, at 11:47, Avinash Dhar Dubey
> wrote:
>
> Hello,
>
> I am trying to compile VPP with the flag vpp_configure_args_vpp =
> --disable-japi by modifying the file datapath/vpp/build-data/platforms/vp
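A hedged sketch of that kind of change (the file name above is truncated; vpp.mk
under build-data/platforms/ is an assumption here), followed by the usual
top-level rebuild:

  # assumed file: build-data/platforms/vpp.mk
  vpp_configure_args_vpp = --disable-japi

  # then, from the top of the tree:
  make wipe
  make build

Whether --disable-japi is honored depends on the configure scripts of the VPP
version in question.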