On Sun, 2006-03-19 at 11:47 -0500, jamal wrote:
> Your scheme of a least-squares fit for evaluating the hash results
> seems to be mostly fine for worst-case lookups (I haven't found a spot
> where it lied about this, at least). The only problem is it doesn't take
> into consideration the spread, and thus the memory-utilization
> aspect. So if you could find a way to also factor in the "spread", it
> would be more complete and wouldn't require plots to find out how good a
> hash was.
> As an example: when I computed the values for 256 buckets for, say,
> 0xfc, the result was that they were equal; upon inspection, this
> proved not to be so.
> Perhaps that may be a hard thing to do?

I sent my data analysis to you before reading this.
Let me know if you still want it.  If you do, I can 
do it easily enough, I think.
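For what it's worth, here is roughly how I would fold the spread in.
This is a minimal sketch, not tested against your data; the metric
names and the toy fill in main() are mine, not from anything you sent.
It reports the worst-case depth, the least-squares deviation, and the
occupied-bucket count side by side, so two hashes with equal sumsq
(your 0xfc case) would still differ on spread:

/*
 * Sketch: score a hash by worst-case depth AND spread.
 * Assumes the per-bucket occupancy counts are already known.
 */
#include <stdio.h>

#define NBUCKETS 256

static void score_hash(const unsigned int count[NBUCKETS],
                       unsigned int total)
{
        double mean = (double)total / NBUCKETS;
        double sumsq = 0.0;     /* least-squares deviation from ideal */
        unsigned int max = 0;   /* worst-case lookup depth */
        unsigned int used = 0;  /* buckets actually occupied */
        int i;

        for (i = 0; i < NBUCKETS; i++) {
                double d = count[i] - mean;

                sumsq += d * d;
                if (count[i] > max)
                        max = count[i];
                if (count[i])
                        used++;
        }

        /*
         * Report all three: sumsq alone can come out identical for
         * two hashes while the spread differs.
         */
        printf("worst case %u, sumsq %.2f, spread %u/%d buckets\n",
               max, sumsq, used, NBUCKETS);
}

int main(void)
{
        unsigned int count[NBUCKETS] = { 0 };
        unsigned int i, total = 1024;

        /* toy fill: everything lands in the low 16 buckets */
        for (i = 0; i < total; i++)
                count[i % 16]++;

        score_hash(count, total);
        return 0;
}

If you want a single figure of merit, something like sumsq divided by
the occupied-bucket count would penalize a hash that bunches entries
into few buckets; that particular weighting is a guess on my part,
though.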

Stephen: I am producing an awful lot of noise here
for no clear result.  Do you still want to be on
the cc: list?  Also, who runs the lartc list - do
you know?

