Hello,

with "dd" built from coreutils @ c5fb1c26de05 ("build: update gnulib
submodule to latest", 2026-03-09), with the gnulib submodule advanced to
d5f683434d1a ("doc: Fix documentation that was added today.",
2026-03-09):

  (
    set -e
    ulimit -S -f 1024
    trap '' XFSZ
    rm -f f
    src/dd if=/dev/urandom of=f bs=768K
  )

The above command produces the regular file "f" with 1024*1024 bytes in
it (as expected); however, "dd" prints the following to stderr:

  dd: error writing 'f': File too large
  2+0 records in
  1+0 records out
  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625178 s, 168 MB/s

The third line ("1+0 records out") is incorrect. It should be "1+1
records out", because the second (final) write outputs 256*1024 bytes.
According to POSIX
<https://pubs.opengroup.org/onlinepubs/9799919799/utilities/dd.html#tag_20_31_11>,
that counts as a partial output block:

  On completion, /dd/ shall write the number of input and output blocks
  to standard error. In the POSIX locale the following formats shall be
  used:

  [...]

  "%u+%u records out\n", </number of whole output blocks/>, </number of
      partial output blocks/>

  [...] A partial output block is one that was written with fewer bytes
  than specified by the output block size. [...]

In dd_copy(), we have

  if (ibuf == obuf)         /* If not C_TWOBUFS. */
    {
      size_t nwritten = iwrite (STDOUT_FILENO, obuf, n_bytes_read);
      w_bytes += nwritten;
      if (nwritten != n_bytes_read)
        {
          error (0, errno, _("error writing %s"), quoteaf (output_file));
          return EXIT_FAILURE;
        }
      else if (n_bytes_read == input_blocksize)
        w_full++;
      else
        w_partial++;
      continue;
    }

The first execution of this code outputs a full block ("nwritten" ==
768*1024 bytes). The second execution outputs "nwritten" == 256*1024
bytes, with "n_bytes_read" == 768*1024 bytes; yet "w_partial" is not
incremented.

The condition for reaching "w_partial++" is

  nwritten == n_bytes_read &&
  n_bytes_read != input_blocksize

which seems correct to me (it means we managed to output everything we
just read, but we couldn't read a full block -- and therefore we also
couldn't write an identically sized full block). However, a partial
write can also occur when the read itself was complete.

The write_output() function (which is not used in this reproducer)
counts partial output records differently:

  static void
  write_output (void)
  {
    size_t nwritten = iwrite (STDOUT_FILENO, obuf, output_blocksize);
    w_bytes += nwritten;
    if (nwritten != output_blocksize)
      {
        error (0, errno, _("writing to %s"), quoteaf (output_file));
        if (nwritten != 0)
          w_partial++;
        quit (EXIT_FAILURE);
      }
    else
      w_full++;
    oc = 0;
  }

If we detect a short -- but not entirely fruitless -- write in this
function, then we bump "w_partial" between error() and quit().


The symptom is also reproducible by populating a block device with "dd"
such that the last successful write has no room for a full (output)
block. (In that case, ENOSPC is reported, rather than EFBIG.) That's in
fact how I first encountered the problem; the "ignored SIGXFSZ + EFBIG
errno" method is just a more convenient reproducer.

Thanks,
Laszlo Ersek


