On Wednesday 09 August 2006 02:11, David Miller wrote:
> From: Andi Kleen <[EMAIL PROTECTED]>
> Date: Wed, 9 Aug 2006 01:23:01 +0200
> 
> > The problem is to find out what a good boundary is.
> 
> The more I think about this the more I lean towards
> two conclusions:
> 
> 1) dynamic table growth is the only reasonable way to
>    handle this and not waste memory in all cases

Yes, but even with dynamic growth you still need some upper bound
(otherwise a DoS could eat all your memory), and that bound would
still need to be figured out.

BTW does dynamic shrink after a load spike make sense too?

> 2) for cases where we haven't implemented dynamic
>    table growth, specifying a proper limit argument
>    to the hash table allocation is a sufficient
>    solution for the time being

Agreed, but we don't know what the proper limits are.

I guess someone would need to run quite a lot of benchmarks.
Anyone volunteering? :)

Or do we pick some cheesy default, clearly document the options for
changing it, and wait for feedback from users on what works for them?

-Andi
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html