On Mon, Aug 1, 2011 at 8:55 PM, Simon Horman <ho...@verge.net.au> wrote:
> * If the number of entries in a table exceeds
>  the number of buckets that it has then
>  an attempt will be made to resize the table.
>
> * There is a limit of TBL_MAX_BUCKETS placed on
>  the number of buckets of a table.
>
> * If this limit is exceeded, keep using the existing table.
>  This allows a table to hold more than TBL_MAX_BUCKETS
>  entries at the expense of increased hash collisions.
>
> Signed-off-by: Simon Horman <ho...@verge.net.au>
>
> ---
>
> It appears that on 64-bit systems TBL_MAX_BUCKETS
> is 131072 (128k), not 262144 (256k) as noted in
> the comment next to its definition.
>
> Without this change the number of flows that
> can be present in the datapath is limited to 128k.
> With this change I am able to achieve significantly
> higher flow counts.

I don't think it's true that TBL_MAX_BUCKETS is 128k.  I double-checked
the math and printed out the value of TBL_MAX_BUCKETS, and both times it
came out to 256k.  I don't necessarily object to this patch per se, but
it's based on the premise that the size is the limiting factor, and that
doesn't really seem to be the case, so I'd like to understand what is
going on a little better before applying it.  Could it be that your test
is composed of bidirectional flows (like TCP)?  That could explain the
discrepancy, since this counts flows in each direction.
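
To show the arithmetic concretely: the sketch below assumes the bucket
index is a two-level, page-sized array of pointers (one page of pointers
to pages of bucket pointers).  That layout is an assumption on my part,
so treat the numbers as illustrative rather than authoritative:

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;			/* assumed PAGE_SIZE */
	unsigned long ptr_size = sizeof(void *);	/* 8 bytes on 64-bit */
	unsigned long per_page = page_size / ptr_size;	/* 512 pointers/page */
	unsigned long max_buckets = per_page * per_page;/* 512 * 512 */

	printf("max buckets = %lu\n", max_buckets);
	return 0;
}

Under those assumptions a 64-bit build prints 262144, i.e. 256k, which is
where my figure comes from.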