Thank you @misc. Using dd with a large block size will likely be the course of action.
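For anyone else who wants to compare block sizes safely first, here is a rough sketch of the kind of test run you can do. It writes to a scratch file rather than a raw device, so nothing is at risk; swap the file for a raw device path (e.g. /dev/rsdXc on OpenBSD -- the device name is a placeholder) to measure actual disk behaviour:

```shell
# Rough block-size comparison: write 64 MB of random data per block size.
# Writes to a scratch file -- replace /tmp/bstest with a raw device
# (e.g. /dev/rsdXc) to measure a real disk.
total=67108864                       # 64 MB per run
for bs in 512 4096 1048576; do
    count=$(( total / bs ))
    echo "bs=$bs:"
    dd if=/dev/urandom of=/tmp/bstest bs="$bs" count="$count" 2>&1 | tail -1
done
rm -f /tmp/bstest
```

The last line of dd's output gives you the bytes/sec figure for each block size, which makes the comparison concrete.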
I really need to refresh my memory on this stuff. This is not something we do, or need to do, every day.

Paul, your example shows bs=1048576. How did you choose that number? Could you have gone even bigger? Obviously it is a multiple of 512. The disks in question are 4TB Western Digital Blues, which have 4096-byte sectors.

I used a 16G USB stick as a sacrificial lamb to experiment with dd. Interestingly, there was no difference in time between 1m, 1k, and 1g. How is that possible? Obviously this will not be an accurate comparison with the WD disks, but it was still a good practice exercise.

Also Paul, to clarify a point you made: did you mean forget the random data step, and just encrypt the disks with softraid CRYPTO? I think I like that idea, because writing random data is actually a traditional pre-encryption step. I don't agree with it, but I respect the decision.

For our purposes, encryption only helps if the disks are off the machine and someone is trying to access them, which implies they were stolen. The chances of disk theft around here are slim to none. We have no reason to worry about forensics either - we're not storing nuclear secrets.

Thanks for your time

On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd <we...@weirdnet.nl> wrote:

> On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
> | Hi Misc,
> |
> | Has anyone ever filled a 4TB disk with random data and/or zeros with
> | OpenBSD?
>
> I do this before disposing of old disks. Have written random data to
> several sizes of disk, not sure if I ever wiped a 4TB disk.
>
> | How long did it take? What did you use (dd, openssl)? Can you share the
> | command that you used?
>
> It takes quite some time, but OpenBSD (at least on modern hardware)
> can generate random numbers faster than you can write them to spinning
> disks (may be different with those fast nvme(4) disks).
>
> I simply used dd, with a large block size:
>
> dd if=/dev/random of=/dev/sdXc bs=1048576
>
> And then you wait.
> The time it takes really depends on two factors:
> the size of the disk and the speed at which you write (whatever the
> bottleneck). Once you start, you can send dd the 'INFO' signal
> (`pkill -INFO dd`, or press Ctrl-T if your shell is set up for it
> with `stty status ^T`). This will give you output a bit like:
>
> 30111+0 records in
> 30111+0 records out
> 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
>
> Now take the size of the disk in bytes, divide it by that last number
> and subtract the second number. This is a reasonable ball-park
> indication of time remaining.
>
> Note that if you're doing this because you want to prevent others from
> reading back even small parts of your data, you are better off never
> writing your data in plain text (e.g. using softraid(4)'s CRYPTO
> discipline), or (if it's too late for that) physically destroying the
> storage medium. Due to smart disks remapping your data in case of
> 'broken' sectors, some old data can never be properly overwritten.
>
> Cheers,
>
> Paul 'WEiRD' de Weerd
>
> --
> >++++++++[<++++++++++>-]<+++++++.>+++[<------>-]<.>+++[<+
> +++++++++++>-]<.>++[<------------>-]<+.--------------.[-]
> http://www.weirdnet.nl/
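P.S. To make the time-remaining arithmetic above concrete, here is a quick sketch using the example numbers from that INFO output. The 4TB byte count is an assumed nominal figure; check disklabel(8) on the actual disk for the real value:

```shell
# Ballpark time remaining, per the formula above:
#   disk_size_bytes / bytes_per_sec - elapsed_secs
disk_bytes=4000787030016   # assumed size of a 4TB disk; verify with disklabel(8)
rate=177074202             # bytes/sec from the example INFO output
elapsed=178                # whole seconds elapsed, from the same output
remaining=$(( disk_bytes / rate - elapsed ))
echo "~${remaining} seconds (~$(( remaining / 3600 )) hours) remaining"
# → ~22415 seconds (~6 hours) remaining
```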