[vpp-dev] query on L2 ACL for VLANs

2017-08-02 Thread Balaji Kn
Hello,

I am using VPP 17.07 release code (tag *v17.07*).

DBGvpp# show int address
TenGigabitEthernet1/0/0 (up):
  172.27.28.5/24
TenGigabitEthernet1/0/1 (up):
  172.27.29.5/24

My use case is to allow packets based on VLANs. I added an ACL rule in a
classify table as below.

classify table mask l2 tag1
classify session acl-hit-next permit opaque-index 0 table-index 0 match l2
tag1 100
set int input acl intfc TenGigabitEthernet1/0/0 l2-table 0

Tagged packets were dropped in the ethernet-input node.

00:08:39:270674: dpdk-input
  TenGigabitEthernet1/0/0 rx queue 0
  buffer 0x4d67: current data 0, length 124, free-list 0, clone-count 0,
totlen-nifb 0, trace 0x1
  PKT MBUF: port 0, nb_segs 1, pkt_len 124
buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
0x6de35a00
packet_type 0x291
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a24
fragment id 0x001c
  UDP: 1024 -> 1024
length 86, checksum 0x
00:08:39:270679: ethernet-input
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
00:08:39:270685: error-drop
  ethernet-input: unknown vlan

DBGvpp#

Hence I created a sub-interface to allow tagged packets.
create sub-interfaces TenGigabitEthernet1/0/0  100
set interface state  TenGigabitEthernet1/0/0.100 up

Still, packets are not hitting the ACL node and are dropped, this time in the
ip4-input node.

00:07:42:330550: dpdk-input
  TenGigabitEthernet1/0/0 rx queue 0
  buffer 0x4d8e: current data 0, length 124, free-list 0, clone-count 0,
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 124
buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
0x6de363c0
packet_type 0x291
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a25
fragment id 0x001b
  UDP: 1024 -> 1024
length 86, checksum 0x
00:07:42:330560: ethernet-input
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
00:07:42:330572: ip4-input
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a25
fragment id 0x001b
  UDP: 1024 -> 1024
length 86, checksum 0x
00:07:42:330583: ip4-drop
UDP: 172.27.28.6 -> 172.27.29.6
  tos 0x00, ttl 255, length 106, checksum 0x2a25
  fragment id 0x001b
UDP: 1024 -> 1024
  length 86, checksum 0x
00:07:42:330586: error-drop
  ip4-input: ip4 adjacency drop

Can you help me figure out whether I am missing any configuration, so that my
packets hit the ACL node and then the ip4-input node?

Please let me know if you need any information on configurations/setup.

Regards,
Balaji
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on L2 ACL for VLANs

2017-08-07 Thread Balaji Kn
Hi John,

The ACL feature is working after setting an IP address on the sub-interface.
Thanks for the help.

Regards,
Balaji
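
For reference, a minimal sketch of the fix described above, combining commands
that appear in this thread (the sub-interface address is illustrative, and
ip4-table 0 refers to the classify table created earlier):

set int ip address TenGigabitEthernet1/0/0.100 192.168.100.1/24
set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0
show int feat TenGigabitEthernet1/0/0.100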

On Fri, Aug 4, 2017 at 10:24 PM, John Lo (loj)  wrote:

> Hi Balaji,
>
>
>
> I think the problem is that you did not configure an IP address on the
> sub-interface. Thus, IP4 forwarding is not enabled. You can check the state
> of various forwarding features on an interface or sub-interface using the
> command:
>
>   show int feat TenGigabitEthernet1/0/0.100
>
>
>
> If an interface does not have an IP4 address configured, you will see the
> ip4-unicast feature listed as ip4-drop:
>
>ip4-unicast:
>
> ip4-drop
>
>
>
> Regards,
>
> John
>
>
>
> *From:* Balaji Kn [mailto:balaji.s...@gmail.com]
> *Sent:* Friday, August 04, 2017 7:28 AM
> *To:* John Lo (loj) 
> *Cc:* vpp-dev@lists.fd.io; l.s.abhil...@gmail.com
> *Subject:* Re: [vpp-dev] query on L2 ACL for VLANs
>
>
>
> Hi John,
>
>
>
> Thanks for the quick response.
>
> I tried, as you suggested, associating the input ACL on the IP-forwarding
> path for tagged packets. Ingress packets are not hitting the ACL node and are
> dropped. However, ACLs matching on src/dst IP, MAC address, and UDP port
> numbers work fine.
>
>
>
> *The following are the configuration steps I followed.*
>
>
>
> set int ip address TenGigabitEthernet1/0/0 172.27.28.5/24
>
> set interface state  TenGigabitEthernet1/0/0 up
>
> set int ip address TenGigabitEthernet1/0/1 172.27.29.5/24
>
> set interface state  TenGigabitEthernet1/0/1 up
>
> create sub-interfaces TenGigabitEthernet1/0/0  100
>
> set interface state  TenGigabitEthernet1/0/0.100 up
>
>
>
> *ACL configuration*
>
> classify table mask l2 tag1
>
> classify session acl-hit-next deny opaque-index 0 table-index 0 match l2
> tag1 100
>
> set int input acl intfc TenGigabitEthernet1/0/0.100 *ip4-table* 0
>
>
>
> *Trace captured on VPP*
>
> 00:16:11:820587: dpdk-input
>
>   TenGigabitEthernet1/0/0 rx queue 0
>
>   buffer 0x4d40: current data 0, length 124, free-list 0, clone-count 0,
> totlen-nifb 0, trace 0x0
>
>   PKT MBUF: port 0, nb_segs 1, pkt_len 124
>
> buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
> 0x6de35040
>
> packet_type 0x291
>
> Packet Offload Flags
>
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>
> Packet Types
>
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>
>   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
> extension headers
>
>   RTE_PTYPE_L4_UDP (0x0200) UDP packet
>
>   IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
>
>   UDP: 172.27.28.6 -> 172.27.29.6
>
> tos 0x00, ttl 255, length 106, checksum 0x2a38
>
> fragment id 0x0008
>
>   UDP: 1024 -> 1024
>
> length 86, checksum 0x
>
> 00:16:11:820596: ethernet-input
>
>   IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
>
> 00:16:11:820616: ip4-input
>
>   UDP: 172.27.28.6 -> 172.27.29.6
>
> tos 0x00, ttl 255, length 106, checksum 0x2a38
>
> fragment id 0x0008
>
>   UDP: 1024 -> 1024
>
> length 86, checksum 0x
>
> 00:16:11:820624: ip4-drop
>
> UDP: 172.27.28.6 -> 172.27.29.6
>
>   tos 0x00, ttl 255, length 106, checksum 0x2a38
>
>   fragment id 0x0008
>
> UDP: 1024 -> 1024
>
>   length 86, checksum 0x
>
> 00:16:11:820627: error-drop
>
>   ip4-input: ip4 adjacency drop
>
>
>
> I checked the VPP code, and the packet is dropped while searching the feature
> arc (searching for the features enabled on the interface). I assumed that
> associating the sub-interface with the ACL would enable the feature.
>
>
>
> Let me know if I missed anything.
>
>
>
> Regards,
>
> Balaji
>
>
> On Wed, Aug 2, 2017 at 9:26 PM, John Lo (loj)  wrote:
>
> Hi Balaji,
>
>
>
> In order to make input ACL work on the IPv4 forwarding path, you need to
> set it as ip4-table on the interface or sub-interface. For your case for
> packets with VLAN tags, it needs to be set on sub-interface:
>
> set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0
>
>
>
> The names in the CLI [ip4-table|ip6-table|l2-table] indicate on which
> forwarding path the ACL will be applied, not which packet header the ACL will
> match. The match on the packet is specified with the table/session
> used in the ACL.
>
>
>
> Regards,
>
> John
>
>

Re: [vpp-dev] query on L2 ACL for VLANs

2017-08-07 Thread Balaji Kn
Hi John,

Thanks for the quick response.
I tried, as you suggested, associating the input ACL on the IP-forwarding path
for tagged packets. Ingress packets are not hitting the ACL node and are dropped.
However, ACLs matching on src/dst IP, MAC address, and UDP port numbers work fine.

*The following are the configuration steps I followed.*

set int ip address TenGigabitEthernet1/0/0 172.27.28.5/24
set interface state  TenGigabitEthernet1/0/0 up
set int ip address TenGigabitEthernet1/0/1 172.27.29.5/24
set interface state  TenGigabitEthernet1/0/1 up
create sub-interfaces TenGigabitEthernet1/0/0  100
set interface state  TenGigabitEthernet1/0/0.100 up

*ACL configuration*
classify table mask l2 tag1
classify session acl-hit-next deny opaque-index 0 table-index 0 match l2
tag1 100
set int input acl intfc TenGigabitEthernet1/0/0.100 *ip4-table* 0

*Trace captured on VPP*
00:16:11:820587: dpdk-input
  TenGigabitEthernet1/0/0 rx queue 0
  buffer 0x4d40: current data 0, length 124, free-list 0, clone-count 0,
totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 124
buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
0x6de35040
packet_type 0x291
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a38
fragment id 0x0008
  UDP: 1024 -> 1024
length 86, checksum 0x
00:16:11:820596: ethernet-input
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
00:16:11:820616: ip4-input
  UDP: 172.27.28.6 -> 172.27.29.6
tos 0x00, ttl 255, length 106, checksum 0x2a38
fragment id 0x0008
  UDP: 1024 -> 1024
length 86, checksum 0x
00:16:11:820624: ip4-drop
UDP: 172.27.28.6 -> 172.27.29.6
  tos 0x00, ttl 255, length 106, checksum 0x2a38
  fragment id 0x0008
UDP: 1024 -> 1024
  length 86, checksum 0x
00:16:11:820627: error-drop
  ip4-input: ip4 adjacency drop

I checked the VPP code, and the packet is dropped while searching the feature
arc (searching for the features enabled on the interface). I assumed that
associating the sub-interface with the ACL would enable the feature.

Let me know if I missed anything.

Regards,
Balaji

On Wed, Aug 2, 2017 at 9:26 PM, John Lo (loj)  wrote:

> Hi Balaji,
>
>
>
> In order to make input ACL work on the IPv4 forwarding path, you need to
> set it as ip4-table on the interface or sub-interface. For your case for
> packets with VLAN tags, it needs to be set on sub-interface:
>
> set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0
>
>
>
> The names in the CLI [ip4-table|ip6-table|l2-table] indicate on which
> forwarding path the ACL will be applied, not which packet header the ACL will
> match. The match on the packet is specified with the table/session
> used in the ACL.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Balaji Kn
> *Sent:* Wednesday, August 02, 2017 9:41 AM
> *To:* vpp-dev@lists.fd.io
> *Cc:* l.s.abhil...@gmail.com
> *Subject:* [vpp-dev] query on L2 ACL for VLANs
>
>
>
> Hello,
>
>
>
> I am using VPP 17.07 release code (tag *v17.07*).
>
>
>
> DBGvpp# show int address
>
> TenGigabitEthernet1/0/0 (up):
>
>   172.27.28.5/24
>
> TenGigabitEthernet1/0/1 (up):
>
>   172.27.29.5/24
>
>
>
> My use case is to allow packets based on VLANs. I added an ACL rule in
> classify table as below.
>
>
>
> classify table mask l2 tag1
>
> classify session acl-hit-next permit opaque-index 0 table-index 0 match l2
> tag1 100
>
> set int input acl intfc TenGigabitEthernet1/0/0 l2-table 0
>
>
>
> Tagged packets were dropped in ethernet node.
>
>
>
> 00:08:39:270674: dpdk-input
>
>   TenGigabitEthernet1/0/0 rx queue 0
>
>   buffer 0x4d67: current data 0, length 124, free-list 0, clone-count 0,
> totlen-nifb 0, trace 0x1
>
>   PKT MBUF: port 0, nb_segs 1, pkt_len 124
>
> buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr
> 0x6de35a00
>
> packet_type 0x291
>
> Packet Offload Flags
>
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>
> Packet Types
>
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>
>   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
> extension headers
>
>   RTE_PTYPE_L4_

[vpp-dev] query on hugepages usage in VPP

2017-08-31 Thread Balaji Kn
Hello,

I am using *v17.07*. I am trying to configure the huge page size as 1 GB and
reserve 16 huge pages for VPP.
I went through the /etc/sysctl.d/80-vpp.conf file and found options only for
huge pages of size 2 MB.

*Output of the 80-vpp.conf file:*
# Number of 2MB hugepages desired
vm.nr_hugepages=1024

# Must be greater than or equal to (2 * vm.nr_hugepages).
vm.max_map_count=3096

# All groups allowed to access hugepages
vm.hugetlb_shm_group=0

# Shared Memory Max must be greator or equal to the total size of hugepages.
# For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
# If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
# is greater than the calculated TotalHugepageSize then set this parameter
# to current shmmax value.
kernel.shmmax=2147483648

Please let me know what configuration I need so that VPP runs
with 1 GB huge pages.

The host OS supports 1 GB huge pages.
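
For reference, a hedged sketch of how 1 GB pages are typically provided: unlike
2 MB pages, 1 GB huge pages generally have to be reserved at boot time through
kernel parameters rather than through /etc/sysctl.d/80-vpp.conf, for example on
the GRUB command line (the page count is illustrative):

default_hugepagesz=1G hugepagesz=1G hugepages=16

The hugepage memory VPP/DPDK takes per socket can then be set in
/etc/vpp/startup.conf (the values shown are an example, not a recommendation):

dpdk {
  socket-mem 1024,1024
}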

Regards,
Balaji
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-05 Thread Balaji Kn
Hello,

Can you help me with the query below related to 1 GB huge page usage in VPP?

Regards,
Balaji


On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn  wrote:

> Hello,
>
> I am using *v17.07*. I am trying to configure huge page size as 1GB and
> reserve 16 huge pages for VPP.
> I went through /etc/sysctl.d/80-vpp.conf file and found options only for
> huge page of size 2M.
>
> *output of vpp-conf file.*
> .# Number of 2MB hugepages desired
> vm.nr_hugepages=1024
>
> # Must be greater than or equal to (2 * vm.nr_hugepages).
> vm.max_map_count=3096
>
> # All groups allowed to access hugepages
> vm.hugetlb_shm_group=0
>
> # Shared Memory Max must be greator or equal to the total size of
> hugepages.
> # For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
> # If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
> # is greater than the calculated TotalHugepageSize then set this parameter
> # to current shmmax value.
> kernel.shmmax=2147483648
>
> Please can you let me know configurations i need to do so that VPP runs
> with 1GB huge pages.
>
> Host OS is supporting 1GB huge pages.
>
> Regards,
> Balaji
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Balaji Kn
Hi Damjan,

I was trying to create 4k sub-interfaces for an interface and associate
each sub-interface with a VRF, and observed a limitation in VPP 17.07: it
supported only 874 VRFs, and the shared memory was unlinked for the 875th VRF.

I felt this might be because of a shortage of heap memory in VPP and
might be solved by increasing the huge page memory.

Regards,
Balaji
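
For reference, a minimal sketch of the per-VLAN pattern being described, using
only commands that appear in this thread (interface name and IDs are
illustrative; the same three lines are repeated per VLAN/VRF):

create sub-interfaces TenGigabitEthernet1/0/0 100
set interface ip table TenGigabitEthernet1/0/0.100 100
set interface state TenGigabitEthernet1/0/0.100 up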

On Wed, Sep 6, 2017 at 7:10 PM, Damjan Marion (damarion)  wrote:

>
> why do you need so much memory? Currently, for default number of buffers
> (16K per socket) VPP needs
> around 40MB of hugepage memory so allocating 1G will be huge waste of
> memory….
>
> Thanks,
>
> Damjan
>
> On 5 Sep 2017, at 11:15, Balaji Kn  wrote:
>
> Hello,
>
> Can you help me on below query related to 1G huge pages usage in VPP.
>
> Regards,
> Balaji
>
>
> On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn  wrote:
>
>> Hello,
>>
>> I am using *v17.07*. I am trying to configure huge page size as 1GB and
>> reserve 16 huge pages for VPP.
>> I went through /etc/sysctl.d/80-vpp.conf file and found options only for
>> huge page of size 2M.
>>
>> *output of vpp-conf file.*
>> .# Number of 2MB hugepages desired
>> vm.nr_hugepages=1024
>>
>> # Must be greater than or equal to (2 * vm.nr_hugepages).
>> vm.max_map_count=3096
>>
>> # All groups allowed to access hugepages
>> vm.hugetlb_shm_group=0
>>
>> # Shared Memory Max must be greator or equal to the total size of
>> hugepages.
>> # For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
>> # If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
>> # is greater than the calculated TotalHugepageSize then set this parameter
>> # to current shmmax value.
>> kernel.shmmax=2147483648
>>
>> Please can you let me know configurations i need to do so that VPP runs
>> with 1GB huge pages.
>>
>> Host OS is supporting 1GB huge pages.
>>
>> Regards,
>> Balaji
>>
>>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Balaji Kn
Hi Damjan,

I am creating VRFs using "*set interface ip table <interface> <table-id>*".
The */dev/shm/vpe-api* shared memory is unlinked. I see the following
error message on the vppctl console.

*exec error: Misc*

After this, if I execute "show int" on vppctl, all the VPP configuration I did
so far is lost, and VPP comes back up with the default configuration as per
/etc/vpp/startup.conf.

You mentioned that the VPP heap is not using huge pages. In that case, can I
increase the heap memory with the startup configuration "heapsize" parameter?

Regards,
Balaji


On Wed, Sep 6, 2017 at 8:27 PM, Damjan Marion (damarion)  wrote:

>
> On 6 Sep 2017, at 16:49, Balaji Kn  wrote:
>
> Hi Damjan,
>
> I was trying to create 4k sub-interfaces for an interface and associate
> each sub-interface with vrf and observed a limitation in VPP 17.07 that was
> supporting only 874 VRFs and shared memory was unlinked for 875th VRF.
>
>
> What do you mean by “shared memory was unlinked” ?
> Which shared memory?
>
>
> I felt this might be because of shortage of heap memory used in VPP and
> might be solved with  increase of huge page memory.
>
>
> VPP heap is not using hugepages.
>
>
> Regards,
> Balaji
>
> On Wed, Sep 6, 2017 at 7:10 PM, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>>
>> why do you need so much memory? Currently, for default number of buffers
>> (16K per socket) VPP needs
>> around 40MB of hugepage memory so allocating 1G will be huge waste of
>> memory….
>>
>> Thanks,
>>
>> Damjan
>>
>> On 5 Sep 2017, at 11:15, Balaji Kn  wrote:
>>
>> Hello,
>>
>> Can you help me on below query related to 1G huge pages usage in VPP.
>>
>> Regards,
>> Balaji
>>
>>
>> On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn  wrote:
>>
>>> Hello,
>>>
>>> I am using *v17.07*. I am trying to configure huge page size as 1GB and
>>> reserve 16 huge pages for VPP.
>>> I went through /etc/sysctl.d/80-vpp.conf file and found options only for
>>> huge page of size 2M.
>>>
>>> *output of vpp-conf file.*
>>> .# Number of 2MB hugepages desired
>>> vm.nr_hugepages=1024
>>>
>>> # Must be greater than or equal to (2 * vm.nr_hugepages).
>>> vm.max_map_count=3096
>>>
>>> # All groups allowed to access hugepages
>>> vm.hugetlb_shm_group=0
>>>
>>> # Shared Memory Max must be greator or equal to the total size of
>>> hugepages.
>>> # For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
>>> # If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
>>> # is greater than the calculated TotalHugepageSize then set this
>>> parameter
>>> # to current shmmax value.
>>> kernel.shmmax=2147483648
>>>
>>> Please can you let me know configurations i need to do so that VPP runs
>>> with 1GB huge pages.
>>>
>>> Host OS is supporting 1GB huge pages.
>>>
>>> Regards,
>>> Balaji
>>>
>>>
>> ___
>> vpp-dev mailing list
>> vpp-dev@lists.fd.io
>> https://lists.fd.io/mailman/listinfo/vpp-dev
>>
>>
>>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on hugepages usage in VPP

2017-09-06 Thread Balaji Kn
Hi Damjan,

I was able to create 4k VRFs after increasing the heap memory size to 4 GB.
Thanks for the help.

Regards,
Balaji
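
For reference, a minimal sketch of the change that resolved this, assuming the
heap is sized with the top-level heapsize parameter in /etc/vpp/startup.conf:

heapsize 4G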

On Wed, Sep 6, 2017 at 9:01 PM, Damjan Marion (damarion)  wrote:

> yes, you can also try to execute “show memory verbose” before the failing
> one to see the stats…
>
> On 6 Sep 2017, at 17:21, Balaji Kn  wrote:
>
> Hi Damjan,
>
> I am creating vrf's using "*set interface ip table 
> ".*
> */dev/shm/vpe-api* shared memory is unlinked. I am able to see following
> error message on vppctl console.
>
> *exec error: Misc*
>
> After this if i execute "show int" on vppctl, all VPP configuration i did
> so far was lost and started with default configuration as per
> /etc/vpp/startup.conf.
>
> You mentioned that VPP heap is not using huge pages. In that case can I
> increase heap memory with startup configuration "heapsize" parameter?
>
> Regards,
> Balaji
>
>
> On Wed, Sep 6, 2017 at 8:27 PM, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>>
>> On 6 Sep 2017, at 16:49, Balaji Kn  wrote:
>>
>> Hi Damjan,
>>
>> I was trying to create 4k sub-interfaces for an interface and associate
>> each sub-interface with vrf and observed a limitation in VPP 17.07 that was
>> supporting only 874 VRFs and shared memory was unlinked for 875th VRF.
>>
>>
>> What do you mean by “shared memory was unlinked” ?
>> Which shared memory?
>>
>>
>> I felt this might be because of shortage of heap memory used in VPP and
>> might be solved with  increase of huge page memory.
>>
>>
>> VPP heap is not using hugepages.
>>
>>
>> Regards,
>> Balaji
>>
>> On Wed, Sep 6, 2017 at 7:10 PM, Damjan Marion (damarion) <
>> damar...@cisco.com> wrote:
>>
>>>
>>> why do you need so much memory? Currently, for default number of buffers
>>> (16K per socket) VPP needs
>>> around 40MB of hugepage memory so allocating 1G will be huge waste of
>>> memory….
>>>
>>> Thanks,
>>>
>>> Damjan
>>>
>>> On 5 Sep 2017, at 11:15, Balaji Kn  wrote:
>>>
>>> Hello,
>>>
>>> Can you help me on below query related to 1G huge pages usage in VPP.
>>>
>>> Regards,
>>> Balaji
>>>
>>>
>>> On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn 
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> I am using *v17.07*. I am trying to configure huge page size as 1GB
>>>> and reserve 16 huge pages for VPP.
>>>> I went through /etc/sysctl.d/80-vpp.conf file and found options only
>>>> for huge page of size 2M.
>>>>
>>>> *output of vpp-conf file.*
>>>> .# Number of 2MB hugepages desired
>>>> vm.nr_hugepages=1024
>>>>
>>>> # Must be greater than or equal to (2 * vm.nr_hugepages).
>>>> vm.max_map_count=3096
>>>>
>>>> # All groups allowed to access hugepages
>>>> vm.hugetlb_shm_group=0
>>>>
>>>> # Shared Memory Max must be greator or equal to the total size of
>>>> hugepages.
>>>> # For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
>>>> # If the existing kernel.shmmax setting  (cat /sys/proc/kernel/shmmax)
>>>> # is greater than the calculated TotalHugepageSize then set this
>>>> parameter
>>>> # to current shmmax value.
>>>> kernel.shmmax=2147483648
>>>>
>>>> Please can you let me know configurations i need to do so that VPP runs
>>>> with 1GB huge pages.
>>>>
>>>> Host OS is supporting 1GB huge pages.
>>>>
>>>> Regards,
>>>> Balaji
>>>>
>>>>
>>> ___
>>> vpp-dev mailing list
>>> vpp-dev@lists.fd.io
>>> https://lists.fd.io/mailman/listinfo/vpp-dev
>>>
>>>
>>>
>>
>>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] deadlock issue in VPP during DHCP packet processing

2017-09-26 Thread Balaji Kn
Hello All,

I am working on VPP 17.07 and using the DHCP proxy functionality. The CPU
configuration is one main thread and one worker thread.

cpu {
  main-core 0
  corelist-workers 1
}

A deadlock is observed while processing the DHCP offer packet in VPP. However,
the issue is not observed if I comment out the CPU configuration in the
startup.conf file (i.e., when running single-threaded); then everything works
smoothly.

*The following message is displayed on the console.*
vlib_worker_thread_barrier_sync: worker thread deadlock

*Backtrace from the generated core file.*
[New LWP 12792]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7f721ab13028 in __GI_abort () at abort.c:89
#2  0x00407073 in os_panic () at
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7f721c0b5d5d in vlib_worker_thread_barrier_sync
(vm=0x7f721c2e12e0 )
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/threads.c:1192
#4  0x7f721c2e973a in vl_msg_api_handler_with_vm_node
(am=am@entry=0x7f721c5063a0
, the_msg=the_msg@entry=0x304bc6d4,
vm=vm@entry=0x7f721c2e12e0 , node=node@entry
=0x7f71da6a8000)
at
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibapi/api_shared.c:501
#5  0x7f721c2f34be in memclnt_process (vm=,
node=0x7f71da6a8000, f=)
at
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibmemory/memory_vlib.c:544
#6  0x7f721c08ec96 in vlib_process_bootstrap (_a=)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1259
#7  0x7f721b2ec858 in clib_calljmp () at
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vppinfra/longjmp.S:110
#8  0x7f71da9efe20 in ?? ()
#9  0x7f721c090041 in vlib_process_startup (f=0x0, p=0x7f71da6a8000,
vm=0x7f721c2e12e0 )
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1281
#10 dispatch_process (vm=0x7f721c2e12e0 ,
p=0x7f71da6a8000, last_time_stamp=58535483853222, f=0x0)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1324
#11 0x00d800d9 in ?? ()

Any pointers would be appreciated.

Regards,
Balaji
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] deadlock issue in VPP during DHCP packet processing

2017-09-27 Thread Balaji Kn
Hi John,

Applying the patch to the 17.07 tree did not solve the issue. I am observing
many compilation issues with the latest image of master and could not verify
there.

Am I missing anything?

Regards,
Balaji
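
For reference, one common way to pull a Gerrit change into a local tree is
sketched below; the patchset number is a placeholder that has to be taken from
the change page:

git fetch https://gerrit.fd.io/r/vpp refs/changes/64/8464/<patchset>
git cherry-pick FETCH_HEAD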

On Tue, Sep 26, 2017 at 7:15 PM, John Lo (loj)  wrote:

> There was a patch recently merged in master/17.10:
>
> https://gerrit.fd.io/r/#/c/8464/
>
>
>
> Can you try the latest image from master/17.10, or apply the patch into
> your 17.07 tree and rebuild?
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Balaji Kn
> *Sent:* Tuesday, September 26, 2017 8:37 AM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] deadlock issue in VPP during DHCP packet processing
>
>
>
> Hello All,
>
>
>
> I am working on VPP 17.07 and using DHCP proxy functionality. CPU
> configuration provided as one main thread and one worker thread.
>
>
>
> cpu {
>
>   main-core 0
>
>   corelist-workers 1
>
> }
>
>
>
> Deadlock is observed while processing DHCP offer packet in VPP. However
> issue is not observed if i comment CPU configuration in startup.conf file
> (if running in single thread) and everything works smoothly.
>
>
>
> *Following message is displayed on console.*
>
> vlib_worker_thread_barrier_sync: worker thread deadlock
>
>
>
> *Backtrace from core file generated.*
>
> [New LWP 12792]
>
> [Thread debugging using libthread_db enabled]
>
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>
> Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
>
> Program terminated with signal SIGABRT, Aborted.
>
> #0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>
> 56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or
> directory.
>
> (gdb) bt
>
> #0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>
> #1  0x7f721ab13028 in __GI_abort () at abort.c:89
>
> #2  0x00407073 in os_panic () at /root/vfe/fe-vfe/datapath/vpp/
> build-data/../src/vpp/vnet/main.c:263
>
> #3  0x7f721c0b5d5d in vlib_worker_thread_barrier_sync
> (vm=0x7f721c2e12e0 )
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/threads.c:1192
>
> #4  0x7f721c2e973a in vl_msg_api_handler_with_vm_node 
> (am=am@entry=0x7f721c5063a0
> , the_msg=the_msg@entry=0x304bc6d4,
>
> vm=vm@entry=0x7f721c2e12e0 , node=node@entry=
> 0x7f71da6a8000)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibapi/api_
> shared.c:501
>
> #5  0x7f721c2f34be in memclnt_process (vm=,
> node=0x7f71da6a8000, f=)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibmemory/
> memory_vlib.c:544
>
> #6  0x7f721c08ec96 in vlib_process_bootstrap (_a=)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1259
>
> #7  0x7f721b2ec858 in clib_calljmp () at /root/vfe/fe-vfe/datapath/vpp/
> build-data/../src/vppinfra/longjmp.S:110
>
> #8  0x7f71da9efe20 in ?? ()
>
> #9  0x7f721c090041 in vlib_process_startup (f=0x0, p=0x7f71da6a8000,
> vm=0x7f721c2e12e0 )
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1281
>
> #10 dispatch_process (vm=0x7f721c2e12e0 ,
> p=0x7f71da6a8000, last_time_stamp=58535483853222, f=0x0)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1324
>
> #11 0x00d800d9 in ?? ()
>
>
>
> Any pointers would be appreciated.
>
>
>
> Regards,
>
> Balaji
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] problem with double tagged packet on host interface (af-packet-input)

2017-09-27 Thread Balaji Kn
Hello,

I am working on VPP 17.07.

I have created a sub-interface on a host-interface with a double tag (QinQ) and
exact match. The intention was to dedicate this sub-interface to processing
double-tagged packets received on the host interface with an exact match.

*create sub-interface GigabitEthernet0/9/0 20 dot1ad 100 inner-dot1q 20
exact-match*

I have assigned IP addresses to the host-interface and to the sub-interface
created on it, and changed their state to up.

However, exact-match double-tagged packets (received via af-packet-input) are
received only on the base host-interface, not on the sub-interface created on
the host interface.

Can you please let me know whether sub-interfaces are supported on
host interfaces? I do not see any issues with dpdk-input.

Regards,
Balaji
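
For reference, a minimal sketch of the setup being described, assuming the
underlying kernel interface is named eth1 (name and address are illustrative):

create host-interface name eth1
set interface state host-eth1 up
create sub-interface host-eth1 20 dot1ad 100 inner-dot1q 20 exact-match
set int ip address host-eth1.20 10.0.0.1/24
set interface state host-eth1.20 up
trace add af-packet-input 10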
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Issue with tagged packets on DPDK-interface across VMs

2017-10-03 Thread Balaji Kn
Hi All,

I am working on VPP 17.07 on Ubuntu 14.04. I have two VMs, say VM1
and VM2. I am running VPP on VM1, and the interface between VM1 and VM2 is a
DPDK interface.

*Configuration*
vpp# set int state GigabitEthernet0/a/0 up

vpp#

vpp# create sub-interface GigabitEthernet0/a/0 500 dot1q 500 exact-match
GigabitEthernet0/a/0.500

vpp#
vpp# set int ip address GigabitEthernet0/a/0.500 5.5.5.1/24

vpp# set interface state GigabitEthernet0/a/0.500 up

vpp# show int addr
GigabitEthernet0/10/0 (up):
GigabitEthernet0/9/0 (up):
GigabitEthernet0/a/0 (up):
GigabitEthernet0/a/0.500 (up):
  5.5.5.1/24
local0 (dn):


Tagged packets received by VPP are dropped as "l3 mac mismatch". Enabling
trace shows all packets arriving with an incorrect (all-zero) hardware address.

*Trace*
00:13:14:020654: dpdk-input
  GigabitEthernet0/a/0 rx queue 0
  buffer 0xe487: current data 0, length 64, free-list 0, clone-count 0,
totlen-nifb 0, trace 0x4
  PKT MBUF: port 1, nb_segs 1, pkt_len 64
buf_len 2176, data_len 64, ol_flags 0x0, data_off 128, phys_addr
0x7514ac40
packet_type 0x0
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
00:13:14:020708: ethernet-input
  0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
00:13:14:020730: error-drop
  ethernet-input: l3 mac mismatch

If I ping from VPP, VM2 responds with an ARP reply. Is there any known
issue with tagged packets on DPDK interfaces across VMs? Any pointers
would be appreciated.

However, there are no issues with the base interface in the same setup. I am
not sure whether the tagged packets are getting corrupted at dpdk-input or not.

Regards,
Balaji
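
For reference, a few standard debug CLI commands that can help narrow down
where the frame gets mangled (a sketch only; output format varies by release):

show hardware-interfaces GigabitEthernet0/a/0
trace add dpdk-input 10
show trace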
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] deadlock issue in VPP during DHCP packet processing

2017-10-03 Thread Balaji Kn
Hi John,

Applying the patch to the 17.07 tree did not help. Are there any more
relevant fixes I need to apply to the 17.07 tree for this issue?

Regards,
Balaji

On Tue, Sep 26, 2017 at 7:15 PM, John Lo (loj)  wrote:

> There was a patch recently merged in master/17.10:
>
> https://gerrit.fd.io/r/#/c/8464/
>
>
>
> Can you try the latest image from master/17.10, or apply the patch into
> your 17.07 tree and rebuild?
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Balaji Kn
> *Sent:* Tuesday, September 26, 2017 8:37 AM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] deadlock issue in VPP during DHCP packet processing
>
>
>
> Hello All,
>
>
>
> I am working on VPP 17.07 and using DHCP proxy functionality. CPU
> configuration provided as one main thread and one worker thread.
>
>
>
> cpu {
>
>   main-core 0
>
>   corelist-workers 1
>
> }
>
>
>
> Deadlock is observed while processing DHCP offer packet in VPP. However
> issue is not observed if i comment CPU configuration in startup.conf file
> (if running in single thread) and everything works smoothly.
>
>
>
> *Following message is displayed on console.*
>
> vlib_worker_thread_barrier_sync: worker thread deadlock
>
>
>
> *Backtrace from core file generated.*
>
> [New LWP 12792]
>
> [Thread debugging using libthread_db enabled]
>
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>
> Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
>
> Program terminated with signal SIGABRT, Aborted.
>
> #0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>
> 56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or
> directory.
>
> (gdb) bt
>
> #0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>
> #1  0x7f721ab13028 in __GI_abort () at abort.c:89
>
> #2  0x00407073 in os_panic () at /root/vfe/fe-vfe/datapath/vpp/
> build-data/../src/vpp/vnet/main.c:263
>
> #3  0x7f721c0b5d5d in vlib_worker_thread_barrier_sync
> (vm=0x7f721c2e12e0 )
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/threads.c:1192
>
> #4  0x7f721c2e973a in vl_msg_api_handler_with_vm_node 
> (am=am@entry=0x7f721c5063a0
> , the_msg=the_msg@entry=0x304bc6d4,
>
> vm=vm@entry=0x7f721c2e12e0 , node=node@entry=
> 0x7f71da6a8000)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibapi/api_
> shared.c:501
>
> #5  0x7f721c2f34be in memclnt_process (vm=,
> node=0x7f71da6a8000, f=)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibmemory/
> memory_vlib.c:544
>
> #6  0x7f721c08ec96 in vlib_process_bootstrap (_a=)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1259
>
> #7  0x7f721b2ec858 in clib_calljmp () at /root/vfe/fe-vfe/datapath/vpp/
> build-data/../src/vppinfra/longjmp.S:110
>
> #8  0x7f71da9efe20 in ?? ()
>
> #9  0x7f721c090041 in vlib_process_startup (f=0x0, p=0x7f71da6a8000,
> vm=0x7f721c2e12e0 )
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1281
>
> #10 dispatch_process (vm=0x7f721c2e12e0 ,
> p=0x7f71da6a8000, last_time_stamp=58535483853222, f=0x0)
>
> at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1324
>
> #11 0x00d800d9 in ?? ()
>
>
>
> Any pointers would be appreciated.
>
>
>
> Regards,
>
> Balaji
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Issue with tagged packets on DPDK-interface across VMs

2017-10-04 Thread Balaji Kn
Hi All,

I tried with both the uio_pci_generic and igb_uio drivers. Can you please
share your opinion on this?

Regards,
Balaji

On Tue, Oct 3, 2017 at 5:59 PM, Balaji Kn  wrote:

> Hi All,
>
> I am working on VPP 17.07 and using ubuntu 14.04. I have two VMs say VM1
> and VM2. I am running VPP on VM1 and interface between VM1 and VM2 is DPDK
> type.
>
> *Configuration*
> vpp# set int state GigabitEthernet0/a/0 up
>
> vpp#
>
> vpp# create sub-interface GigabitEthernet0/a/0 500 dot1q 500 exact-match
> GigabitEthernet0/a/0.500
>
> vpp#
> vpp# set int ip address GigabitEthernet0/a/0.500 5.5.5.1/24
>
> vpp# set interface state GigabitEthernet0/a/0.500 up
>
> vpp# show int addr
> GigabitEthernet0/10/0 (up):
> GigabitEthernet0/9/0 (up):
> GigabitEthernet0/a/0 (up):
> GigabitEthernet0/a/0.500 (up):
>   5.5.5.1/24
> local0 (dn):
>
>
> Tagged packets  received on VPP are dropped as l3 mac mismatch. Enabling
> trace is showing all packets received with incorrect hardware address.
>
> *Trace*
> 00:13:14:020654: dpdk-input
>   GigabitEthernet0/a/0 rx queue 0
>   buffer 0xe487: current data 0, length 64, free-list 0, clone-count 0,
> totlen-nifb 0, trace 0x4
>   PKT MBUF: port 1, nb_segs 1, pkt_len 64
> buf_len 2176, data_len 64, ol_flags 0x0, data_off 128, phys_addr
> 0x7514ac40
> packet_type 0x0
>   0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
> 00:13:14:020708: ethernet-input
>   0x: 00:00:00:00:00:00 -> 00:00:00:00:00:00
> 00:13:14:020730: error-drop
>   ethernet-input: l3 mac mismatch
>
> If i ping from VPP,  VM2 is responding with ARP reply. Is there any known
> issue with tagged packets for DPDK interfaces across VMs? Any pointers
> would be appreciated.
>
> However there are no issues with base interface on same setup. I am not
> sure whether tagged packets are corrupted or not on dpdk-input.
>
> Regards,
> Balaji
>
>
>
>
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

2017-11-27 Thread Balaji Kn
Hello,

I am using VPP 17.07 and initialized the heap memory as 3 GB in the startup
configuration.
My use case is to have 4k sub-interfaces differentiated by VLAN, with each
sub-interface associated with a unique VRF, eventually using 4k FIBs.

However, I am observing that VPP crashes with a memory crunch while adding an
IP route.

Backtrace:
#0  0x7fae4c981cc9 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7fae4c9850d8 in __GI_abort () at abort.c:89
#2  0x004070b3 in os_panic ()
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7fae4d19007a in clib_mem_alloc_aligned_at_offset
(os_out_of_memory_on_failure=1,
align_offset=, align=64, size=1454172096)
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=v@entry=0x7fade2c44880,
length_increment=length_increment@entry=1,
data_bytes=, header_bytes=,
header_bytes@entry=24,
data_align=data_align@entry=64)
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.c:84
#5  0x7fae4db9210c in _vec_resize (data_align=,
header_bytes=,
data_bytes=, length_increment=,
v=)
at
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.h:142

I initially suspected the FIB was consuming most of the heap space, but I do
not see much memory consumed by the FIB tables either, and felt 3 GB of heap
should be sufficient:

vpp# show fib memory
FIB memory
 Name                    Size   in-use / allocated      totals
 Entry                     72    60010 /     60010      4320720/4320720
 Entry Source              32    68011 /     68011      2176352/2176352
 Entry Path-Extensions     60        0 /         0            0/0
 multicast-Entry          192     4006 /      4006       769152/769152
 Path-list                 48    60016 /     60016      2880768/2880768
 uRPF-list                 16    76014 /     76015      1216224/1216240
 Path                      80    60016 /     60016      4801280/4801280
 Node-list elements        20    76017 /     76019      1520340/1520380
 Node-list heads            8    68020 /     68020       544160/544160

Is there any way to identify the heap memory usage of other modules?
Any pointers would be helpful.

Regards,
Balaji
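
For reference, a hedged sketch of the knobs discussed in the earlier hugepage
thread of this archive: per-component heap usage can be inspected from the
debug CLI, and the main heap can be grown in /etc/vpp/startup.conf (4G is the
value reported to work earlier, not a general recommendation):

vpp# show memory verbose

heapsize 4G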
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] issue in memif sample application icmp_responder in VPP 19.04

2019-08-28 Thread balaji kn
Hello All,

I am using *VPP 19.04*. As per my analysis, the icmp_responder
application is receiving interrupts, but the memif_rx_burst API always
returns 0 buffers.

Below are logs collected from icmp_responder.

root@balaji:~# ./icmp_responder
INFO: tx qid: 0
LIBMEMIF EXAMPLE APP: ICMP_Responder
==
libmemif version: 2.1
memif version: 512
use CTRL+C to exit
MEMIF DETAILS
==
interface name: memif_connection
app name: ICMP_Responder
remote interface name:
remote app name:
id: 0
secret: (null)
role: slave
mode: ethernet
socket filename: /run/vpp/memif.sock
socket filename: /run/vpp/memif.sock
rx queues:
tx queues:
link: up
INFO: memif connected!
ICMP_Responder:on_interrupt:289: interrupted
ICMP_Responder:on_interrupt:298: *received 0 buffers*. 0/256 alloc/free
buffers
ICMP_Responder:icmpr_buffer_alloc:237: allocated 0/0 buffers, 256 free
buffers
ICMP_Responder:on_interrupt:320: freed 0 buffers. 0/256 alloc/free buffers
ICMP_Responder:icmpr_tx_burst:252: tx: 0/0

Do let me know if I am missing anything.

Regards,
Balaji
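
For reference, a hedged sketch of the VPP-side configuration this example
expects, assuming VPP owns /run/vpp/memif.sock and acts as master (interface
naming and the address are illustrative):

vpp# create interface memif id 0 master
vpp# set interface state memif0/0 up
vpp# set int ip address memif0/0 192.168.1.1/24
vpp# show memif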
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13854): https://lists.fd.io/g/vpp-dev/message/13854
Mute This Topic: https://lists.fd.io/mt/33056182/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-