Hi,

We recently faced a similar issue: we could not insert more than about 500k
routes into the ip6 FIB table.

https://wiki.fd.io/view/VPP/Command-line_Arguments#.22heapsize.22_parameter

We followed the link above and made the following change in the
/etc/vpp/startup.conf file:

ip6 {
  heap-size 4G
}

If you trace it back, this parameter maps to defines like the following

/*
 * The size of the hash table
 */
#define L2FIB_NUM_BUCKETS (64 * 1024)
#define L2FIB_MEMORY_SIZE (256<<20)

and sets the memory size.
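
For what it's worth, here is a rough sketch (from memory, so treat the header
and names as approximate rather than the literal VPP code) of how #defines
like these get consumed: the MAC table bihash is created once at init time
with a fixed bucket count and memory budget, along the lines of
l2fib_table_init() in l2_fib.c:

/* Sketch only: the MAC table is a bihash allocated once with a fixed
 * bucket count and memory budget; it does not grow at runtime, so once
 * the memory is exhausted further inserts/learns fail. */
#include <vppinfra/bihash_8_8.h>   /* from the VPP source tree */

#define L2FIB_NUM_BUCKETS (64 * 1024)
#define L2FIB_MEMORY_SIZE (256 << 20)

static void
l2fib_table_init_sketch (BVT (clib_bihash) * mac_table)
{
  BV (clib_bihash_init) (mac_table, "l2fib mac table",
                         L2FIB_NUM_BUCKETS, L2FIB_MEMORY_SIZE);
}

So when inserts start failing around a fixed count, the memory size given to
the table (or heap) at init time is usually the thing to look at.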



HTH





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Tue, Aug 15, 2017 at 8:05 AM, John Lo (loj) <l...@cisco.com> wrote:

> Hi Billy,
>
>
>
> The output of “show l2fib” shows how many MAC entries exist in the
> L2FIB and is not relevant to the size of the L2FIB table. The L2FIB table
> size is not configurable. It is a bi-hash table whose size is set by the
> following #def’s in l2_fib.h and has not changed for quite a while,
> definitely not between 1704, 1707 and current master:
>
> /*
>  * The size of the hash table
>  */
> #define L2FIB_NUM_BUCKETS (64 * 1024)
> #define L2FIB_MEMORY_SIZE (256<<20)
>
>
>
> It is interesting to note that at the end of the test run, there are
> different numbers of MAC entries in the L2FIB. I think this may have to do
> with a change in 1707 where an interface up/down would cause MACs learned
> on that interface to be flushed. So when the interface comes back up, the
> MACs need to be learned again.  With 1704, the stale learned MACs from an
> interface will remain in the L2FIB even if the interface is down or deleted,
> unless aging is enabled to remove them at the BD aging interval.
>
>
>
> Another improvement added in 1707 was a check in the l2-fwd node so that
> when a MAC entry is found in the L2FIB, its sequence number is checked to
> make sure it is not stale and subject to flushing (such as a MAC learned
> when this interface sw_if_index was up but went down, or if this
> sw_if_index was used, deleted and reused). If the MAC is stale, the packet
> will be flooded instead of making use of the stale MAC entry to forward it.
>
>
>
> I wonder if the performance test script creates/deletes interfaces or
> sets interfaces to admin up/down states, causing stale MACs to be flushed
> in 1707?  With 1704, it may be using stale MAC entries to forward packets
> rather than flooding to learn the MACs again. This could explain the
> l2-flood and l2-input count ratio difference between 1704 and 1707.
>
>
>
> When measuring l2-bridge forwarding performance, are you set up to measure
> the forwarding rate in the steady forwarding state?  If all the 10K or 1M
> flows are started at the same time for a particular test, there will be an
> initial low-PPS throughput period when all packets need to be flooded and
> MACs learned before it settles down to a higher steady-state PPS forwarding
> rate. If there is an interface flap or other event that causes a MAC flush,
> the MACs will need to be learned again. I wonder if the forwarding
> performance for 10K or 1M flows is measured at the steady forwarding state
> or not.
>
>
>
> Above are a few generic comments I can think of, without knowing many
> details about how the tests are set up and measured. I hope this helps
> explain the different behavior observed between 1704 and 1707.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Billy McFall
> *Sent:* Monday, August 14, 2017 6:40 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] VPP Performance drop from 17.04 to 17.07
>
>
>
> In the last VPP call, I reported that some internal Red Hat performance
> testing was showing a significant drop in performance between releases
> 17.04 and 17.07. This was with l2-bridge testing - PVP - 0.002% Drop Rate:
>
>    VPP-17.04: 256 Flow 7.8 MP/s 10k Flow 7.3 MP/s 1m Flow 5.2 MP/s
>
>    VPP-17.07: 256 Flow 7.7 MP/s 10k Flow 2.7 MP/s 1m Flow 1.8 MP/s
>
>
>
> The performance team re-ran some of the tests for me with some additional
> data collected. It looks like the size of the L2 FIB table was reduced in
> 17.07. Below are the numbers of entries in the MAC table after the tests
> were run:
>
>    17.04:
>      show l2fib
>      4000008 l2fib entries
>
>    17.07:
>      show l2fib
>      1067053 l2fib entries with 1048576 learned (or non-static) entries
>
>
>
> This caused more packets to be flooded (see output of 'show node counters'
> below). I looked but couldn't find anything. Is the size of the L2 FIB
> table configurable?
>
>
>
> Thanks,
>
> Billy McFall
>
>
>
>
>
> 17.04:
>
>
>
> show node counters
>    Count                    Node                  Reason
> :
>  313035313                l2-input                L2 input packets
>     555726                l2-flood                L2 flood packets
> :
>  310115490                l2-input                L2 input packets
>     824859                l2-flood                L2 flood packets
> :
>  313508376                l2-input                L2 input packets
>    1041961                l2-flood                L2 flood packets
> :
>  313691024                l2-input                L2 input packets
>     698968                l2-flood                L2 flood packets
>
>
>
> 17.07:
>
>
>
> show node counters
>    Count                    Node                  Reason
> :
>   97810569                l2-input                L2 input packets
>   72557612                l2-flood                L2 flood packets
> :
>   97830674                l2-input                L2 input packets
>   72478802                l2-flood                L2 flood packets
> :
>   97714888                l2-input                L2 input packets
>   71655987                l2-flood                L2 flood packets
> :
>   97710374                l2-input                L2 input packets
>   70058006                l2-flood                L2 flood packets
>
>
>
>
>
> --
>
> *Billy McFall*
> SDN Group
> Office of Technology
> *Red Hat*
>
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
