Hello,
I am using VPP 17.07 release code (tag *v17.07*).
DBGvpp# show int address
TenGigabitEthernet1/0/0 (up):
172.27.28.5/24
TenGigabitEthernet1/0/1 (up):
172.27.29.5/24
My use case is to allow packets based on VLANs. I added an ACL rule in a
classify table as shown below.
classify table mask l2
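(A minimal sketch of how a VLAN-based input ACL is commonly wired up with the
classify CLI; the tag value, table index, and interface name below are
hypothetical and written from memory, so please verify against the CLI help
before use.)

classify table mask l2 tag1 buckets 16
classify session acl-hit-next permit table-index 0 match l2 tag1 100
set interface input acl intfc TenGigabitEthernet1/0/0 l2-table 0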
e the
> ip4-unicast feature listed as ip4-drop:
>
>ip4-unicast:
>
> ip4-drop
>
>
>
> Regards,
>
> John
>
>
>
> *From:* Balaji Kn [mailto:balaji.s...@gmail.com]
> *Sent:* Friday, August 04, 2017 7:28 AM
> *To:* John Lo (loj)
> *Cc:*
ate which
> forwarding path the ACL would be applied to, not which packet header the
> ACL will match. The match of the packet is specified with the table/session
> used in the ACL.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailt
Hello,
I am using *v17.07*. I am trying to configure huge page size as 1GB and
reserve 16 huge pages for VPP.
I went through the /etc/sysctl.d/80-vpp.conf file and found options only for
huge pages of size 2M.
*Output of the 80-vpp.conf file:*
# Number of 2MB hugepages desired
vm.nr_hugepages=1024
# Must b
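(Side note, hedged: vm.nr_hugepages in 80-vpp.conf only controls pages of the
kernel's default huge page size, normally 2 MB. 1 GB pages are usually reserved
outside that file, for example via the kernel command line; the lines below are
just a sketch for 16 x 1 GB pages.)

# On the kernel command line (e.g. in GRUB), reserved at boot:
default_hugepagesz=1G hugepagesz=1G hugepages=16

# Or at runtime, which may fail once memory is fragmented:
echo 16 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages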
Hello,
Can you help me with the below query related to 1G huge page usage in VPP?
Regards,
Balaji
On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn wrote:
> Hello,
>
> I am using *v17.07*. I am trying to configure huge page size as 1GB and
> reserve 16 huge pages for VPP.
> I went throug
llocating 1G will be a huge waste of
> memory….
>
> Thanks,
>
> Damjan
>
> On 5 Sep 2017, at 11:15, Balaji Kn wrote:
>
> Hello,
>
> Can you help me with the below query related to 1G huge page usage in VPP?
>
> Regards,
> Balaji
>
>
> On Thu, Aug 31,
on (damarion) wrote:
>
> On 6 Sep 2017, at 16:49, Balaji Kn wrote:
>
> Hi Damjan,
>
> I was trying to create 4k sub-interfaces on an interface and associate
> each sub-interface with a VRF, and observed a limitation in VPP 17.07 that
> supported only 874 VRFs and shared m
s…
>
> On 6 Sep 2017, at 17:21, Balaji Kn wrote:
>
> Hi Damjan,
>
> I am creating VRFs using "*set interface ip table ...*".
> The */dev/shm/vpe-api* shared memory is unlinked. I am able to see the
> following error message on the vppctl console.
>
> *exec error:
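(For context, a typical per-sub-interface VRF binding looks roughly like the
sketch below; the interface name, VLAN, and table id are hypothetical, and the
ip table is normally set before any IP address is assigned.)

vpp# create sub-interface TenGigabitEthernet1/0/0 100 dot1q 100 exact-match
vpp# set interface ip table TenGigabitEthernet1/0/0.100 100
vpp# set interface state TenGigabitEthernet1/0/0.100 up
vpp# set interface ip address TenGigabitEthernet1/0/0.100 10.0.100.1/24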
Hello All,
I am working on VPP 17.07 and using the DHCP proxy functionality. The CPU
configuration provided is one main thread and one worker thread.
cpu {
main-core 0
corelist-workers 1
}
A deadlock is observed while processing the DHCP offer packet in VPP. However,
the issue is not observed if I comment out the CPU
in master/17.10:
>
> https://gerrit.fd.io/r/#/c/8464/
>
>
>
> Can you try the latest image from master/17.10, or apply the patch into
> your 17.07 tree and rebuild?
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-b
Hello,
I am working on VPP 17.07.
I have created a sub-interface on a host-interface with a double tag (QinQ) and
exact match. The intention was to dedicate this sub-interface to processing
double-tagged packets received on the host interface with an exact match.
*create sub-interface GigabitEthernet0/9/0 20
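(A sketch of what a complete QinQ exact-match sub-interface command typically
looks like; the inner tag value below is hypothetical, only the outer id 20
comes from the snippet above.)

vpp# create sub-interface GigabitEthernet0/9/0 20 dot1q 20 inner-dot1q 200 exact-match
vpp# set interface state GigabitEthernet0/9/0.20 up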
Hi All,
I am working on VPP 17.07 and using Ubuntu 14.04. I have two VMs, say VM1
and VM2. I am running VPP on VM1, and the interface between VM1 and VM2 is a
DPDK type.
*Configuration*
vpp# set int state GigabitEthernet0/a/0 up
vpp#
vpp# create sub-interface GigabitEthernet0/a/0 500 dot1q 500 exact-ma
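(The full form of such a single-tag exact-match sub-interface, plus bringing it
up, is roughly the following sketch; the IP address is hypothetical.)

vpp# create sub-interface GigabitEthernet0/a/0 500 dot1q 500 exact-match
vpp# set interface state GigabitEthernet0/a/0.500 up
vpp# set interface ip address GigabitEthernet0/a/0.500 192.168.50.1/24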
.io/r/#/c/8464/
>
>
>
> Can you try the latest image from master/17.10, or apply the patch into
> your 17.07 tree and rebuild?
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of
Hi All,
I tried with both the uio_pci_generic driver and the igb_uio driver. Can you
please share your opinion on this?
Regards,
Balaji
On Tue, Oct 3, 2017 at 5:59 PM, Balaji Kn wrote:
> Hi All,
>
> I am working on VPP 17.07 and using Ubuntu 14.04. I have two VMs, say VM1
> and VM2. I am
Hello,
I am using VPP 17.07 and initialized heap memory as 3G in the startup
configuration.
My use case is to have 4k sub-interfaces differentiated by VLAN and to
associate each sub-interface with a unique VRF, eventually using 4k FIBs.
However, I am observing that VPP is crashing with a memory crunch while a
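(For reference, the startup.conf knobs usually involved in this kind of scaling
are the main heap and, if memory serves, the per-protocol FIB heaps; the values
below are only a sketch, not a recommendation.)

heapsize 3G
ip {
  heap-size 512M
}
ip6 {
  heap-size 512M
}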
Hello All,
I am using the *VPP 19.04* version. As per my analysis, the icmp_responder
application is receiving interrupts, but the memif_rx_burst API is always
returning the number of buffers as 0.
Below are logs collected on icmp_responder.
root@balaji:~# ./icmp_responder
INFO: tx qid: 0
LIBMEMIF EXAMPLE APP: ICMP_
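(For comparison, a minimal sketch of the usual libmemif receive path inside an
interrupt callback, assuming the classic libmemif API; this is not the actual
icmp_responder source. If memif_rx_burst keeps returning 0 buffers, the master
side may simply not be enqueueing anything on that queue id.)

#include <stdint.h>
#include <libmemif.h>

#define MAX_BUFS 256

static memif_buffer_t bufs[MAX_BUFS];

/* called by libmemif when the master signals packets on queue qid */
static int
on_interrupt (memif_conn_handle_t conn, void *private_ctx, uint16_t qid)
{
  uint16_t rx = 0;
  int err;

  (void) private_ctx;

  /* collect up to MAX_BUFS received buffers from the shared ring */
  err = memif_rx_burst (conn, qid, bufs, MAX_BUFS, &rx);
  if (err != MEMIF_ERR_SUCCESS && err != MEMIF_ERR_NOBUF)
    return err;

  /* ... process bufs[0 .. rx-1] here ... */

  /* hand the descriptors back so the master can reuse them */
  return memif_refill_queue (conn, qid, rx, 0);
}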