Russell,

Your least-squares-fit scheme for evaluating the hash results
seems to be mostly fine for worst-case lookups (I haven't found a spot
where it lied about this, at least). The only problem is that it doesn't
take the spread into consideration as well, and thus the memory-utilization
aspect. So if you could find a way to also factor in the "spread", it
would be more complete and wouldn't require plots to find out how good a
hash was.
As an example: when I computed the values for 256 buckets for, say,
0xfc, the result was that they were equal; upon inspection, this proved
not to be the case.
Perhaps that may be a hard thing to do?
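To make the point concrete, here is a minimal sketch (my own illustration, not your least-squares code; `bucket_stats`, the `hashfn` parameter, and the variance metric are all assumptions) of how two key sets can score identically on worst-case bucket depth while having very different spread:

```python
def bucket_stats(keys, hashfn, nbuckets=256):
    """Return (worst-case bucket depth, variance of bucket occupancy).

    Worst-case depth alone is what a least-squares fit against the
    deepest bucket captures; the variance term is one candidate way
    to also score the spread / memory utilization.
    """
    counts = [0] * nbuckets
    for k in keys:
        counts[hashfn(k) % nbuckets] += 1

    mean = len(keys) / nbuckets
    worst = max(counts)
    # Variance of occupancy: 0 for a perfectly even spread, and it
    # grows as keys pile into fewer buckets (wasting the empty ones).
    variance = sum((c - mean) ** 2 for c in counts) / nbuckets
    return worst, variance


# Two toy key sets with 8 buckets and an identity hash: both have a
# worst-case depth of 2, but the second wastes half its buckets.
worst_a, var_a = bucket_stats([0, 0, 1, 2, 3, 4, 5, 6], lambda k: k, 8)
worst_b, var_b = bucket_stats([0, 0, 1, 1, 2, 2, 3, 3], lambda k: k, 8)
```

A worst-case-only metric calls these two equal (as with your 256-bucket 0xfc case), while the variance term separates them without needing a plot.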

cheers,
jamal
