On 06/11/2014 09:38 AM, Theodore Ts'o wrote:
> On Mon, Jun 09, 2014 at 09:17:38AM -0400, George Spelvin wrote:
>> Here's an example of a smaller, faster, and better fast_mix() function.
>> The mix is invertible (thus preserving entropy), but causes each input
>> bit or pair of bits to avalanche to at least 43 bits after 2 rounds and
>> 120 bits after 3.
>
> I've been looking at your fast_mix2(), and it does look interesting.
>
>> For comparison, with the current linear fast_mix function, bits above
>> the 12th in the data words only affect 4 other bits after one repetition.
>>
>> With 3 iterations, it runs in 2/3 the time of the current fast_mix
>> and is half the size: 84 bytes of object code as opposed to 168.
>
> ... but how did you measure the "2/3 the time"?  I've done some
> measurements of my own.  One method was to time calling fast_mix() and
> fast_mix2() N times and divide by N (where N needs to be quite large).
> Using that metric, fast_mix2() takes seven times as long to run.
>
> If I only run the two mixing functions once, and use RDTSC to measure
> the time, fast_mix2() takes only three times as long.  (That's because
> the memory cache effects are much less, which favors fast_mix2.)
>
> But either way, fast_mix2() is slower than the current fast_mix(), and
> using the measurement that is as advantageous (and most realistic) as I
> could come up with, it's still three times slower.
>
> My measurements were done using an Intel 2.8 GHz quad-core i7-4900MQ CPU.
>
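[For anyone who wants to reproduce this kind of comparison, here is a
minimal user-space sketch of the two measurement methods described above:
averaging over N back-to-back calls (which keeps everything cache-hot)
versus a single call bracketed by RDTSC (closer to how fast_mix() is
actually invoked from the interrupt path).  toy_mix() is a hypothetical
stand-in, not the actual fast_mix() or fast_mix2() from the patch;
substitute the real functions to compare them.]

/* Sketch only: toy_mix() is a placeholder mixing function, not the
 * real fast_mix()/fast_mix2().  x86, GCC/Clang, user space.
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>		/* __rdtsc() */

struct toy_pool {
	uint32_t pool[4];
};

/* Hypothetical stand-in: add, rotate, xor across the pool words. */
static void toy_mix(struct toy_pool *f, uint32_t input)
{
	int i;

	for (i = 0; i < 4; i++) {
		f->pool[i] += input;
		f->pool[i] = (f->pool[i] << 7) | (f->pool[i] >> 25);
		f->pool[(i + 1) & 3] ^= f->pool[i];
	}
}

int main(void)
{
	struct toy_pool f = { { 1, 2, 3, 4 } };
	const long N = 10 * 1000 * 1000;
	uint64_t start, end;
	long i;

	/* Method 1: call N times and divide by N (cache-hot, amortized). */
	start = __rdtsc();
	for (i = 0; i < N; i++)
		toy_mix(&f, (uint32_t)i);
	end = __rdtsc();
	printf("amortized:   %.1f cycles/call\n", (double)(end - start) / N);

	/* Method 2: a single call bracketed by RDTSC. */
	start = __rdtsc();
	toy_mix(&f, 0x12345678);
	end = __rdtsc();
	printf("single shot: %llu cycles\n",
	       (unsigned long long)(end - start));

	/* Use the pool so the compiler cannot discard the work. */
	return (int)(f.pool[0] & 1);
}

[The single-shot number is noisy and includes the RDTSC overhead itself,
which is why the amortized and single-shot figures diverge so much; the
cache-cold, once-per-interrupt case is what the in-kernel usage actually
looks like.]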
While talking about performance: I did a quick prototype of the random
driver using Skein instead of SHA-1, and it was measurably faster, in
part because Skein produces more output per hash.

	-hpa