Hello,

For the past few days I have been trying to understand the VPP bihash memory requirements and
bucket sizing. It would be good if someone could point me to a reference.
I have a system capable of scaling up to 1 million connections.

I started experimenting with a few hundred connections. There will be thousands of
adds/deletes happening in the system, but capped at 100 connections at once.
max_connections = 100, workers = 4. The hash table is per worker.
I am using clib_bihash_16_8, initialized with nbuckets = max_connections /
BIHASH_KVP_PER_PAGE and memory_size = max_connections * 1024 (i.e. I assumed
1 KB of memory per connection).
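
For reference, here is a minimal sketch of how each per-worker table is set up in my
experiment (the table and function names below are placeholders I chose, not VPP code):

#include <vppinfra/bihash_16_8.h>   /* assumes bihash_template.c is compiled in one .c file, as usual in VPP */

#define MAX_CONNECTIONS 100

static clib_bihash_16_8_t conn_hash;    /* one instance per worker */

static void
conn_hash_init (void)
{
  /* BIHASH_KVP_PER_PAGE is 4 for the 16_8 variant, so nbuckets = 25 here */
  u32 nbuckets = MAX_CONNECTIONS / BIHASH_KVP_PER_PAGE;
  /* ~1 KB per connection => a 100 KB arena for the whole table */
  uword memory_size = (uword) MAX_CONNECTIONS * 1024;

  clib_bihash_init_16_8 (&conn_hash, "per-worker conn table",
                         nbuckets, memory_size);
}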
While experimenting, the system is crashing at different places across runs, hitting
out_of_memory within roughly 30 minutes:

* clib_bihash_add_del_16_8 (hash delete) -> make_working_copy_16_8 ->
clib_mem_alloc_aligned -> OOM

* clib_bihash_add_del_16_8 (hash add) ->
split_and_rehash_16_8 (old_log2_pages=old_log2_pages@entry=2,
new_log2_pages=new_log2_pages@entry=3) -> value_alloc_16_8 -> OOM
My queries:

* Assuming nbuckets is set correctly as per the bihash cookbook, how is the
memory_size for the hash deduced? I think I have kept it too low (with my values
above it is only 100 * 1024 = 100 KB in total). Is there a guideline for deducing
it, considering this needs to scale up to millions of connections?
* Can we minimize the hash collisions that cause a rehash, via nbuckets or
memory_size?

Thanks,
Shaligram