At the moment I don't see a leak, but I will keep an eye out.
I doubled hash_size for clib_bihash_init, and the same run went from sustaining
~30 min to ~9 hr before crashing at the same point in split_and_rehash.
The source code is integrated with other software components, so it will take
some time to extract the relevant piece to share.

It would be helpful if you could point me to a rough formula for deriving the
clib_bihash_init parameters needed to support N expected records.
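
For context, here is the back-of-envelope sizing I would like to confirm or
correct (just a sketch: BIHASH_KVP_PER_PAGE = 4 for the 16_8 variant, and the
4x headroom factor for split_and_rehash growth and working copies is my own
guess, not anything from the cookbook):

    /* Assumes <vppinfra/bihash_16_8.h> is included.
     * Rough guess at clib_bihash_16_8 sizing for N expected records;
     * the headroom factor of 4 is a guess, not a documented rule. */
    static inline uword
    bihash_16_8_memory_guess (u32 n_expected_records)
    {
      /* one bucket per BIHASH_KVP_PER_PAGE records, rounded up to a power of 2 */
      u32 nbuckets = 1 << max_log2 (n_expected_records / BIHASH_KVP_PER_PAGE);
      uword bucket_bytes = (uword) nbuckets * sizeof (clib_bihash_bucket_16_8_t);
      uword kvp_bytes = (uword) nbuckets * BIHASH_KVP_PER_PAGE
        * sizeof (clib_bihash_kv_16_8_t);
      return 4 * (bucket_bytes + kvp_bytes);
    }

Does something along these lines make sense, or is there a better rule of thumb?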

Thanks,
Shaligram

On Sat, 4 May 2019 at 01:13, Dave Barach (dbarach) <dbar...@cisco.com>
wrote:

> We routinely test to 200M (key,value) pairs, so scaling shouldn’t be an
> issue. Are you absolutely sure that you’re not leaking (key, value) pairs?
> It’s easy to imagine a test code bug which leaks memory [ask me how I know
> that 😉].
>
>
>
> Can you share your test code?
>
>
>
> HTH... Dave
>
>
>
> *From:* vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> *On Behalf Of *
> shaligram.prakash
> *Sent:* Friday, May 3, 2019 2:48 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] Bihash memory requirement #vpp_stability
>
>
>
> Hello,
>
> For the past few days I have been trying to understand the VPP bihash memory
> requirements and bucket sizing. It would be good if someone could point me to a
> reference.
> I have a system capable of scaling up to 1 million connections.
>
> I started experimenting with a few hundred connections. There will be
> thousands of adds/deletes happening in the system, but capped at 100 at any one
> time: max_connections = 100, workers = 4, and the hash table is per worker.
> I am using clib_bihash_16_8, initialized with nbuckets =
> max_connections/BIHASH_KVP_PER_PAGE and *memory_size* = max_connections*1024
> (1024 because I budgeted roughly 1 KB of memory per connection).
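>
> The init call is roughly the following (a sketch: the wrapper and table names
> are just for illustration, and as I understand it the library rounds nbuckets
> up to a power of two internally):
>
>     #include <vppinfra/bihash_16_8.h>
>     #include <vppinfra/bihash_template.c>   /* included in exactly one .c file */
>
>     /* one table per worker in the real code; a single one shown here */
>     static clib_bihash_16_8_t session_table;
>
>     static void
>     session_table_init (u32 max_connections)   /* 100 in this experiment */
>     {
>       u32 nbuckets = max_connections / BIHASH_KVP_PER_PAGE;
>       uword memory_size = (uword) max_connections * 1024;  /* ~1 KB per connection */
>       clib_bihash_init_16_8 (&session_table, "session table",
>                              nbuckets, memory_size);
>     }
>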
> While experimenting, the system crashes with out_of_memory at different places
> across runs, within a ~30 min run:
>
>    1. clib_bihash_add_del_16_8 (hash delete) -> make_working_copy_16_8
>       -> clib_mem_alloc_aligned -> OOM
>    2. clib_bihash_add_del_16_8 (hash add) -> split_and_rehash_16_8
>       (old_log2_pages=old_log2_pages@entry=2, new_log2_pages=new_log2_pages@entry=3)
>       -> value_alloc_16_8 -> OOM
>
> My queries:
>
>    - Assuming nbuckets is set correctly per the bihash cookbook, how should
>      the memory_size for the hash be deduced? I think I have kept it too low.
>      Is there a guideline for deriving it, given that this needs to scale up to
>      millions of records?
>    - Can we minimize the hash collisions that lead to a rehash by tuning
>      nbuckets or memory_size?
>
>
> Thanks,
> Shaligram
>