Hi,

We have encountered a performance issue when batch-adding 10000 GTPU tunnels and 10000 routes via the API, where each route uses one GTPU tunnel interface as its next hop.

The effect is equivalent to executing the following commands:

create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 1 encap-vrf-id 0 decap-next ip4
create gtpu tunnel src 18.1.0.41 dst 18.1.0.31 teid 2 encap-vrf-id 0 decap-next ip4
ip route add 1.1.1.1/32 table 2 via gtpu_tunnel0
ip route add 1.1.1.2/32 table 2 via gtpu_tunnel1

After debugging, we found that most of the time is spent initializing adj_nbr_tables[nh_proto][sw_if_index] during "ip route add", in the following function call:

BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                       "Adjacency Neighbour table",
                       ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS,
                       ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE);
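To illustrate why this hurts at our scale, here is a small standalone C sketch of the pattern as we understand it (the names and default sizes mirror the call above, but this is not the real adj_nbr.c code; the bound MAX_SW_IF_INDEX and the bucket layout are made up for the sketch): each sw_if_index used as a next hop gets its own neighbour table, and that table is fully initialized the first time a route resolves through the interface, so 10000 tunnel interfaces mean 10000 table initializations.

#include <stdio.h>
#include <stdlib.h>

#define N_PROTO             2             /* ip4 / ip6 */
#define MAX_SW_IF_INDEX     16384         /* arbitrary bound for the sketch */
#define DEFAULT_NUM_BUCKETS (64 * 64)     /* ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS */
#define DEFAULT_MEMORY_SIZE (32 << 20)    /* ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE */

typedef struct
{
  unsigned long long header;              /* placeholder per-bucket header */
} bucket_t;

typedef struct
{
  bucket_t *buckets;                      /* allocated and zeroed at init */
  size_t n_buckets;
  size_t memory_size;                     /* in the real bihash this sizes a heap */
} nbr_table_t;

static nbr_table_t *adj_nbr_tables_sketch[N_PROTO][MAX_SW_IF_INDEX];

/* Runs on the first route that resolves via this interface; the init cost
 * grows with both the bucket count and the configured memory size. */
static nbr_table_t *
nbr_table_get_or_init (int proto, unsigned sw_if_index,
                       size_t n_buckets, size_t memory_size)
{
  nbr_table_t *t = adj_nbr_tables_sketch[proto][sw_if_index];
  if (t)
    return t;

  t = calloc (1, sizeof (*t));
  t->n_buckets = n_buckets;
  t->memory_size = memory_size;
  t->buckets = calloc (n_buckets, sizeof (bucket_t));
  adj_nbr_tables_sketch[proto][sw_if_index] = t;
  return t;
}

int
main (void)
{
  /* 10000 routes, each via a different tunnel interface, pay the full
   * per-interface init 10000 times. */
  for (unsigned sw_if_index = 1; sw_if_index <= 10000; sw_if_index++)
    nbr_table_get_or_init (0, sw_if_index,
                           DEFAULT_NUM_BUCKETS, DEFAULT_MEMORY_SIZE);

  printf ("initialized 10000 per-interface neighbour tables\n");
  return 0;
}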

We changed the third parameter from ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS (64*64) to 64, and the fourth parameter from ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE (32 << 20) to 32 << 10. With this change the time cost dropped to about one ninth of the original.
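For reference, our patched call looks like this (a local experiment only, not an upstream recommendation; per interface it configures 64 buckets and a 32 KB table size instead of 4096 buckets and 32 MB, so across 10000 interfaces the configured table memory goes from roughly 320 GB down to roughly 320 MB):

BV (clib_bihash_init) (adj_nbr_tables[nh_proto][sw_if_index],
                       "Adjacency Neighbour table",
                       64,        /* was ADJ_NBR_DEFAULT_HASH_NUM_BUCKETS (64*64) */
                       32 << 10); /* was ADJ_NBR_DEFAULT_HASH_MEMORY_SIZE (32<<20) */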

The questions are: what is adj_nbr_tables used for, and why does each table need so many buckets and so much memory?

BR/Lollita Liu
