On 2009-Nov-24 14:07:06 -0600, Mike Gerdts <mger...@gmail.com> wrote:
>On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling
><richard.ell...@gmail.com> wrote:
>> Also, the performance of /dev/*random is not very good.  So prestaging
>> lots of random data will be particularly challenging.

This depends on the random number generation algorithm used in the
kernel.  I get >50MB/sec out of /dev/random on FreeBSD on a 3.2GHz P4
(which uses Yarrow).  In any case, you don't need crypto-grade random
numbers for this, just data that is different and incompressible;
there are plenty of relatively simple RNGs that can deliver that at
far greater speed.

>I was thinking that a bignum library such as libgmp could be handy to
>allow easy bit shifting of large amounts of data.  That is, fill a 128
>KB buffer with random data then do bitwise rotations for each
>successive use of the buffer.  Unless my math is wrong, it should
>allow 128 KB of random data to be used to write 128 GB of data with
>very little deduplication or compression.  A much larger data set
>could be generated with the use of a 128 KB linear feedback shift
>register...
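
For the record, the rotation doesn't need a bignum library at all;
rotating the buffer in place by one bit is only a few lines of C.
An untested sketch (the function name is mine, not anything from
Mike's mail):

#include <stddef.h>
#include <stdint.h>

/*
 * Rotate an arbitrary-length buffer left by one bit.  A 128 KB buffer
 * holds 2^20 bits, so 2^20 one-bit rotations yield 2^20 distinct
 * 128 KB blocks, i.e. 128 GB of data from one bufferful of entropy.
 */
static void
rotl1(uint8_t *buf, size_t len)
{
        uint8_t carry = buf[0] >> 7;    /* bit falling off the front */

        for (size_t i = 0; i < len - 1; i++)
                buf[i] = (uint8_t)(buf[i] << 1) | (buf[i + 1] >> 7);
        buf[len - 1] = (uint8_t)(buf[len - 1] << 1) | carry;
}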

This strikes me as much harder to use than just filling the buffer
with 8/32/64-bit random numbers from a linear congruential generator,
lagged Fibonacci generator, Mersenne Twister, or even random(3).

http://en.wikipedia.org/wiki/List_of_random_number_generators
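
For example (an untested sketch using Marsaglia's xorshift64; any of
the generators above would serve equally well):

#include <stddef.h>
#include <stdint.h>

static uint64_t rng_state = 88172645463325252ULL; /* any nonzero seed */

/* Marsaglia's xorshift64, shift triple 13/7/17; full period 2^64-1. */
static uint64_t
xorshift64(void)
{
        rng_state ^= rng_state << 13;
        rng_state ^= rng_state >> 7;
        rng_state ^= rng_state << 17;
        return (rng_state);
}

/* Fill a buffer with PRNG output; nwords counts 64-bit words. */
static void
fill_buf(uint64_t *buf, size_t nwords)
{
        for (size_t i = 0; i < nwords; i++)
                buf[i] = xorshift64();
}

The output is nowhere near crypto-grade, but every block written is
unique and effectively incompressible, which is all a dedup test
needs, and it runs at close to memory speed.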

-- 
Peter Jeremy
