On 2020-06-10 07:00, Michael Stone wrote:
> On Mon, Jun 08, 2020 at 08:22:39PM +0000, Matthew Campbell wrote:
>> dd if=/dev/zero of=/dev/sdb ibs=4096 count=976754646
>
> This command line gets data in 4k chunks from /dev/zero and then writes them to the disk in 512 byte chunks. That's pretty much the worst possible case for writing to a disk.

> # dd if=/dev/zero of=/dev/sdh ibs=4096 count=10000 conv=fdatasync
> 10000+0 records in
> 80000+0 records out
> 40960000 bytes (41 MB, 39 MiB) copied, 3.15622 s, 13.0 MB/s

Good catch.
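
For anyone following along: if I read the dd(1) defaults correctly, 'obs' stays at its default of 512 bytes when only 'ibs' is given, so the original command was effectively:

# dd if=/dev/zero of=/dev/sdb ibs=4096 obs=512 count=976754646

Spelling out the obs makes the 4 KiB read / 512 byte write mismatch easier to see.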


You want "bs", not "ibs". I'd suggest dd if=/dev/zero of=/dev/sdb bs=64k

+1


(I do not recall having a need for 'ibs' or 'obs'.)
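
As I read the manual, bs=64k just sets the read and write block sizes together, i.e. Michael's suggestion is roughly equivalent to:

# dd if=/dev/zero of=/dev/sdb ibs=64k obs=64k

so there is rarely a reason to set ibs or obs separately for a job like this.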


> and I wouldn't bother trying to calculate a count if you're trying to overwrite the entire disk

+1
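
If you want the exact device size anyway, 'blockdev --getsize64 /dev/sdb' will report it in bytes, but it is simpler to let dd run to the end of the device; the final "No space left on device" error is expected and harmless in that case. A sketch, assuming GNU dd (double-check the target device letter first):

# blockdev --getsize64 /dev/sdb
# dd if=/dev/zero of=/dev/sdb bs=64k conv=fdatasync status=progress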


> IME performance peaks at 16-64k. Beyond that things don't improve, and can potentially get worse or cause other issues.

I've run benchmarks over the years, usually on Linux. I forget exactly where the performance knees are, but I do recall that bs=1M has always landed in the flat region between them.
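
A quick way to find the knees on a given setup is to repeat the same fdatasync test at a few block sizes. A rough sketch, assuming /dev/sdX is a scratch disk you can safely overwrite (each pass writes the same 64 MiB total, just in different sized blocks):

for args in "bs=4k count=16384" "bs=64k count=1024" "bs=1M count=64"; do
    echo "$args"
    dd if=/dev/zero of=/dev/sdX $args conv=fdatasync 2>&1 | tail -n 1
done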


> I've been getting sustained USB2 disk writes in the low 40MB/s range for more than 15 years. I'd suggest either checking that you're using a reasonable block size or getting a better USB2 adapter. 25MB/s is definitely low.

# dd if=/dev/zero of=/dev/sdh bs=64k count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
655360000 bytes (655 MB, 625 MiB) copied, 15.1168 s, 43.4 MB/s

I have Intel desktop and server motherboards, and Dell laptops and one server. I believe they all have Intel USB chips.
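
One quick sanity check on the adapter side, assuming the usbutils package is installed:

# lsusb -t

The speed figure at the end of each line (480M for USB 2.0, 5000M for USB 3.0) shows what the port and device actually negotiated.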


Looking at a recent imaging-script run of dd(1) with bs=1M over USB 2.0, to a modern USB 3.0 external enclosure holding a vintage SATA I HDD, the numbers were better than I remembered:

13997441024 bytes (14 GB, 13 GiB) copied, 387 s, 36.2 MB/s
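
(The general shape of that kind of run is just dd between a raw device and an image file; a placeholder sketch, not the actual script, with the device and file names being mine:

# dd if=/dev/sdX of=/path/to/sdX.img bs=1M status=progress

Whichever direction the copy goes, the bs=1M reads and writes are what set the pace.)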


David
