On 2020-01-20 04:14, microsoft gaofei wrote:
Many people suggest using dd to create a bootable USB, https://www.archlinux.org/download/ . But cp and cat also write to the USB device, e.g., cp archlinux-2020.01.01-x86_64.iso /dev/sdb or cat archlinux-2020.01.01-x86_64.iso > /dev/sdb. Is it safe to use these commands instead of dd? If it's unsafe, I want to know the reason.
dd was required on ancient Unix systems for dealing with "raw" devices that had mandatory block sizes. For instance, if a raw device, such as a hard disk or tape drive, had a block size of 512, then writing to it required a sequence of correctly sized write system calls. If the program wrote more than 512 bytes, the device would truncate the write to 512. If the program wrote fewer than 512 bytes, then it wouldn't completely overwrite the block, yet the position would advance to the next block; garbage, or perhaps zeros, would be left in the rest of the partial block. Reads had a similar problem: a 256 byte read on a raw device with a 512 block size would result in a truncated read (very reminiscent of a truncated UDP datagram receive). The dd program's block size feature ensured that reads and writes involving raw devices were performed correctly. With dd you can read from a device in 256 byte blocks and write to another device in 1024 byte blocks, an operation called "re-blocking".

The block devices you're working with in a GNU/Linux system aren't raw. You can write to them in whatever request sizes you want; the aggregation into correct transfer units is done by the block driver software inside the kernel. There is a small advantage to writing in a multiple of the block size. For instance, suppose we write to a block device like /dev/sda1 one byte at a time. Each time we write a byte, an entire block is edited in memory to change that byte, and then the entire block is flushed out to the device, usually asynchronously. Writing byte by byte risks reduced performance: the same block of the device may be wastefully dirtied and flushed two or more times. However, the buffer sizes used by standard utilities like cp are almost certainly good multiples of the block size: block sizes are almost always powers of two, and so are the buffers in file copying utilities, which are also larger than typical block sizes.
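The re-blocking described above can be demonstrated without a raw device; this is just a sketch on an ordinary temporary file, using dd's ibs/obs options to read in 256 byte units and write in 1024 byte units:

```shell
# Create a 4 KiB test file.
dd if=/dev/zero of=testfile bs=4096 count=1 2>/dev/null

# Re-block: read 256-byte input blocks, write 1024-byte output blocks.
# On a modern kernel block device this makes no difference to correctness,
# but on an old raw device the output block size had to match the hardware.
dd if=testfile of=copy ibs=256 obs=1024 2>/dev/null

# The data is unchanged; only the sizes of the individual I/O requests differ.
cmp testfile copy && echo "contents identical"

rm -f testfile copy
```

strace on the second dd command would show read() calls of 256 bytes and write() calls of 1024 bytes, which is all that re-blocking means.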
dd has features that are not found in other utilities, such as seeking to arbitrary positions in the source and destination, and copying only a certain amount of data. dd can also work with devices that are infinite sources of bytes; with dd you can read exactly 1024 bytes from /dev/urandom, which can't be done with cat or cp. If you need to do any of these things, you need dd, or something like it.
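A small sketch of those dd-only features, using a throwaway file (src.txt) as the source; the count= option bounds a read from an endless device, and skip= seeks past part of the input:

```shell
# Take exactly 1024 bytes from an endless source -- cat would never stop.
dd if=/dev/urandom of=rand.bin bs=1024 count=1 2>/dev/null

# Copy 8 bytes starting at byte offset 16 of the source:
# skip=16 seeks past the first 16 input blocks (bs=1, so 16 bytes),
# count=8 copies only 8 blocks (8 bytes).
printf 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' > src.txt
dd if=src.txt of=dst.txt bs=1 skip=16 count=8 2>/dev/null
cat dst.txt   # QRSTUVWX

rm -f rand.bin src.txt dst.txt
```

(seek= does the corresponding thing on the output side, writing at an offset into the destination without rewriting what comes before it.)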