> From: Jim Meyering
>
> Steven Schveighoffer wrote:
> >
> > Our company is looking at using GNU shred to wipe customer data from
> > RMA'd drives in our systems.
> >
> > One thing we have noticed is that shred runs about 90% slower if
> > /dev/urandom exists versus when it does not. Researching this, it
> > seems this is because gl/lib/randread.c will use an internal RNG when
> > null is passed into randread_new, and /dev/urandom cannot be opened.
>
> Someone asked the same question not long ago.
> here's the thread:
>
>   http://thread.gmane.org/gmane.comp.gnu.coreutils.bugs/15581
>
> Quick answer:
>
> How about
>
>   --random-source=FILE
Then I have to make a random-data file that is as big as my hard drive,
which means I would have to write it to the hard drive first. That is
not possible with a single drive, and with two drives it would be
redundant, since I would already have had to write a utility that
writes random data to a drive (or to stdout). At that point, why use
shred at all?

> where FILE contains a bunch of random data,
> like a chunk from the middle of a well-compressed tarball?
> Or even this:
>
>   --random-source=/dev/zero

Unlike the thread cited above, I want the random-data pass; I don't
want to disable it. So this solution doesn't work for me either.

The whole point of the patch is this: there is a perfectly usable
random source inside shred, but it is only accessible if I remove a
possibly critical component of my OS, /dev/urandom. The patch just
makes that fallback selectable without removing /dev/urandom. I'm not
trying to add any special behavior here; you can already get this
behavior today (by removing /dev/urandom). The patch just makes the
internal RNG practical to reach. (Toy sketches of what I mean follow
at the end of this message.)

-Steve
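P.S. To make the mechanism concrete, here is a toy C program that
mimics the selection logic as I read it in gl/lib/randread.c: prefer
/dev/urandom, and fall back to an internal PRNG only when the device
cannot be opened. This is a sketch, not the gnulib code; the real
randread.c uses the ISAAC generator, and the xorshift stand-in below
is for illustration only.

/* Toy sketch, not the gnulib code: prefer /dev/urandom, fall back
   to an internal PRNG when the device cannot be opened.  The real
   gl/lib/randread.c uses the ISAAC generator; xorshift64 here is a
   stand-in for illustration only.  */

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static uint64_t prng_state;

/* xorshift64: illustrative stand-in for the internal generator.  */
static uint64_t
prng_next (void)
{
  prng_state ^= prng_state << 13;
  prng_state ^= prng_state >> 7;
  prng_state ^= prng_state << 17;
  return prng_state;
}

/* Read N random bytes: from DEV if it was opened, otherwise from
   the internal generator.  */
static void
fill_random (FILE *dev, unsigned char *buf, size_t n)
{
  if (dev)
    {
      if (fread (buf, 1, n, dev) != n)
        abort ();
    }
  else
    {
      size_t i;
      for (i = 0; i < n; i++)
        buf[i] = prng_next () & 0xff;
    }
}

int
main (void)
{
  /* The slow path shred takes when the device exists.  */
  FILE *dev = fopen ("/dev/urandom", "rb");
  if (!dev)
    /* Device absent: seed the internal generator (always nonzero).  */
    prng_state = ((uint64_t) time (NULL) << 1) | 1;

  unsigned char buf[16];
  size_t i;
  fill_random (dev, buf, sizeof buf);
  for (i = 0; i < sizeof buf; i++)
    printf ("%02x", buf[i]);
  putchar ('\n');

  if (dev)
    fclose (dev);
  return 0;
}

The point is just that the fast internal path already exists; nothing
new has to be invented to reach it.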
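And here, equally toy, is one shape the change could take (an
illustration only, not the patch itself): an explicit switch that
skips the device probe, so the internal generator is selected without
touching /dev/urandom.

/* Hypothetical illustration only, not the actual patch: an explicit
   way to skip the device probe so the internal generator is chosen
   directly.  */

#include <stdbool.h>
#include <stdio.h>

static FILE *
open_random_source (bool force_internal)
{
  if (force_internal)
    return NULL;   /* NULL stream selects the internal PRNG path.  */
  return fopen ("/dev/urandom", "rb");
}

int
main (void)
{
  FILE *dev = open_random_source (true);  /* force the internal path */
  printf (dev ? "using /dev/urandom\n" : "using internal generator\n");
  if (dev)
    fclose (dev);
  return 0;
}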