On Fri 26 Jun 2020 at 19:50:21 (-0700), David Christensen wrote:
> On 2020-06-26 18:25, David Wright wrote:
> > On Fri 26 Jun 2020 at 15:06:31 (-0700), David Christensen wrote:
> > > On 2020-06-26 06:07, David Wright wrote:
> > > > > On this slow machine with an oldish PATA disk,
> > > > I can get about 75% speed from urandom, 15MB/s vs 20MB/s on a
> > > > 29GiB partition (no encryption). There's a noticeable slowdown
> > > > because, I presume, the machine runs a bit short of entropy
> > > > after a while.
> > >
> > > I think you are noticing a slowdown when the Linux write buffer
> > > fills.
> >
> > I'm not sure where these write buffers might be hiding: the
> > 2000-vintage PC has 512MB memory, and the same size swap partition,
> > though the latter is on a disk constructed one month earlier than
> > the target disk (Feb/Mar 2008). The target disk has 8MB of cache.
> > With a leisurely determination of dd's PID, my first USR1 poke
> > occurred no earlier than after 4GB of copying, over three minutes
> > in.
>
> I seem to recall that most of my EIDE interfaces and drives were 100
> MB/s. (A few were 133 MB/s.) So, bulk reads or writes can completely
> use an 8 MB cache in a fraction of a second.

This is IDE. The buses run at 100MHz, but I don't know where the
bottlenecks are. The idea was only to compare writing zeros and
random data. The machine, a 650MHz SE440BX-2 (Seattle 2), was
selected on the basis that it's presently housing a secondary drive
with two spare "root filesystem partitions" (a drive which was in the
Dell Optiplex that died last month). It was doing nothing but running
two ssh sessions, one for dd and one for kill.

> top(1) reports memory statistics on line 4. I believe "buff/cache"
> is the amount of memory being used for I/O write buffering and read
> caching. Line 5 has statistics for swap. I do not know if memory
> write buffer / read cache usage interacts with swap usage, but it
> would not surprise me. top(1) should be able to show you.
>
> Perhaps I misinterpreted your "slowdown" statement. I assumed you
> ran a command similar to:
>
> # dd if=/dev/urandom of=/dev/sdxn bs=1M status=progress

Close: I was running inside a script(1) session, so I just poked the
dd occasionally with kill -USR1 to record its progress in the
typescript file.
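
(For anyone following along, the recipe is roughly this -- the target
partition here is the first of the 29.3G partitions in the layout at
the end, the typescript file name is arbitrary, and pgrep is just one
way of finding dd's PID, not necessarily what I used:

  session 1:  # script wipe.typescript
              # dd if=/dev/urandom of=/dev/sdb4 bs=1M

  session 2:  # kill -USR1 $(pgrep -x dd)

GNU dd responds to USR1 by printing its running byte count and
transfer rate on stderr, which ends up in the typescript along with
everything else.)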
> dd(1) is copying PRN data from the CPU to the kernel write buffer
> (in memory) and the kernel input/output stack is copying from the
> write buffer to the HDD (likely via direct memory access, DMA). The
> 'status=progress' option will cause dd(1) to display the rate at
> which the write buffer is being filled. I am not sure how to monitor
> the rate at which the write buffer is being drained. Assuming the
> write buffer is initially empty, the filling process is "fast", and
> the draining process is "slow" when the above command is started,
> dd(1) should show fast throughput until the write buffer fills and
> then show slow throughput for the remainder of the transfer. And,
> without a 'sync' option to dd(1), dd(1) will exit and the shell will
> display the next prompt as the final write buffer contents are being
> written to the HDD (e.g. the HDD will be busy for a short while
> after dd(1) is finished).
>
> Another possibility -- magnetic disk drives have more sectors in
> outer tracks (lower sector number) than they have in inner tracks
> (higher sector number). When filling an entire drive, I have seen
> the transfer rate drop by 40~50% over the duration of the transfer.
> This is normal. Is this what you are referring to?

I tried to take account of these possibilities by using 29GB
partitions, much larger than the buffer sizes, and writing two
different partitions. But replicating the run a few times didn't give
me consistent enough timings to have confidence in any conclusions.

When I tried using sync to reduce the effect of buffering, things
slowed so much that I suspect there would be no shortage of entropy
anyway.

Regardless, the loss of speed is not serious enough for me to change
my strategy from: urandom before cryptsetup, zero before encrypting
swap, zero to erase the disk at end of life/possession. I *have*
given up running badblocks.

(The disk layout is:

Device         Start       End   Sectors   Size Type
/dev/sdb1       2048      8191      6144     3M BIOS boot
/dev/sdb2       8192   1023999   1015808   496M EFI System
/dev/sdb3    1024000   2047999   1024000   500M Linux swap
/dev/sdb4    2048000  63487999  61440000  29.3G Linux filesystem
/dev/sdb5   63488000 124927999  61440000  29.3G Linux filesystem
/dev/sdb6  124928000 976773119 851845120 406.2G Linux filesystem
)

Cheers,
David.
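
P.S. On the question above of how to watch the write buffer draining:
one way (not something I did in these runs) is to keep an eye on the
Dirty and Writeback lines of /proc/meminfo, or on vmstat's "bo"
(blocks written out) column, while dd runs:

  $ vmstat 1
  $ watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'

Dirty is data waiting to be written back; Writeback is data being
written out at that moment.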