Hello, I was looking at the code that initializes rte_hash objects in examples/l3fwd. It uses approximately 1M to 4M hash 'entries' depending on 32-bit vs. 64-bit, but it sets 'bucket_entries' to just 4.
Normally I'm used to somewhat deeper hash buckets than that... it seems like having a zillion tiny hash buckets would cause more TLB pressure and memory overhead. Or does 4 get shifted / exponentiated into 2**4? The documentation at http://dpdk.org/doc/api/structrte__hash__parameters.html and http://dpdk.org/doc/api/rte__hash_8h.html isn't that clear... is there a better place to look for this?

In my case I'm looking to create a table of 4M or 8M entries, containing tables of security threat IPs / domains, to be detected in the traffic. So it would be good to understand how to avoid wasting a ton of memory on a table this huge without making it run super slow either. Does anybody have experience with how to get this right?

Another thing... the LPM table uses 16-bit hop IDs, but I would probably have more than 64K CIDR blocks of badness on the Internet available to me for analysis. How would I cope with this, besides just letting some attackers escape unnoticed? ;) Do we have some kind of structure that allows a greater number of CIDRs, even if it's not quite as fast?

Thanks,
Matthew.