On Mon, Apr 10, 2023 at 12:54 AM Alexey Dokuchaev <da...@freebsd.org> wrote:
> On Sun, Jan 03, 2021 at 01:02:18AM +0000, Rick Macklem wrote:
> > commit c98a764c681f8b70812a9f13a6e61c96aa1a69d2
> >
> >   cp(1): fix performance issue for large non-sparse file copies
> >
> >   PR252358 reported a serious performance problem when
> >   copying a large non-sparse file on a UFS file system.
> >   This problem seems to have been caused by a large
> >   number of SEEK_HOLE operations, with one done
> >   for each copy_file_range(2) call.
> >
> >   This patch modifies cp(1) to use a large (SSIZE_MAX)
> >   len argument, reducing the number of system calls
> >   and resolving the performance issue.
> >
> >             char *p;
> >
> > @@ -236,7 +235,7 @@ copy_file(const FTSENT *entp, int dne)
> >              do {
> >                      if (use_copy_file_range) {
> >                              rcount = copy_file_range(from_fd, NULL,
> > -                                to_fd, NULL, bufsize, 0);
> > +                                to_fd, NULL, SSIZE_MAX, 0);
>
> Hi Rick,
>
> This change unfortunately breaks copying files in resource-limited
> environments (e.g. many port builders do that to prevent runaway
> processes):
>
>   ulimit -f 16384000
>   cp -p packages/13.0-i386-wip/All/perl5-5.32.1_3.tbz /tmp ; echo $?
>   Filesize limit exceeded
>   153
>
> Previously bufsize was 2097152 which was a lot saner than current
> 9223372036854775807.  Perhaps we should set it per getrlimit(2)
> value for RLIMIT_FSIZE?

I think zfs_copy_file_range() needs to use vn_rlimit_fsizex() the same
way that vn_generic_copy_file_range() does.
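For reference, the userland clamp danfe suggests would look roughly like
the sketch below.  This is only a hypothetical illustration (the
copy_len_limit() helper is made up for this example, not the committed
cp(1) code): it caps the len handed to copy_file_range(2) at the
RLIMIT_FSIZE soft limit, which is enough for cp(1) since it always
writes the destination starting at offset 0.

/*
 * Hypothetical sketch only, not the committed cp(1) code: clamp the
 * len passed to copy_file_range(2) to the RLIMIT_FSIZE soft limit,
 * per danfe's getrlimit(2) suggestion.
 */
#include <sys/types.h>
#include <sys/resource.h>

#include <limits.h>

static size_t
copy_len_limit(void)
{
	struct rlimit rl;
	size_t len;

	len = SSIZE_MAX;
	if (getrlimit(RLIMIT_FSIZE, &rl) == 0 &&
	    rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < (rlim_t)len)
		len = (size_t)rl.rlim_cur;
	return (len);
}

cp(1)'s copy loop would then pass copy_len_limit() instead of SSIZE_MAX.
That said, if the kernel clamps correctly (as vn_generic_copy_file_range()
already does via vn_rlimit_fsizex()), no userland workaround should be
needed.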
I have posted the attached patch to D39419.

danfe@, assuming you were using ZFS, could you test this patch?
(You will need an up-to-date main kernel and, hopefully, the block
cloning stuff has not trashed your zpool.)

rick

>
> ./danfe
>
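In case it helps with testing, here is a rough standalone reproducer
along the lines of danfe's cp(1) test; it is only a sketch, and the
input/output file names and the 1 GB limit are arbitrary placeholders.
It ignores SIGXFSZ so that hitting the limit shows up as EFBIG from
copy_file_range(2) rather than "Filesize limit exceeded", and it
requests SSIZE_MAX bytes per call the way cp(1) now does:

#include <sys/types.h>
#include <sys/resource.h>

#include <err.h>
#include <fcntl.h>
#include <limits.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct rlimit rl;
	ssize_t n;
	int in, out;

	/* Report EFBIG instead of being killed by SIGXFSZ. */
	signal(SIGXFSZ, SIG_IGN);

	/* Limit new files to 1 GB, like a port builder's ulimit -f. */
	rl.rlim_cur = rl.rlim_max = 1024UL * 1024 * 1024;
	if (setrlimit(RLIMIT_FSIZE, &rl) == -1)
		err(1, "setrlimit");

	in = open("input.bin", O_RDONLY);	/* placeholder names */
	if (in == -1)
		err(1, "open input.bin");
	out = open("output.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (out == -1)
		err(1, "open output.bin");

	/* Same pattern as cp(1): ask for SSIZE_MAX bytes per call. */
	do {
		n = copy_file_range(in, NULL, out, NULL, SSIZE_MAX, 0);
	} while (n > 0);
	if (n == -1)
		err(1, "copy_file_range");
	printf("copy completed\n");
	return (0);
}

Run it on ZFS with a source file comfortably smaller than the limit: on
an affected kernel it should fail the same way danfe's cp run did, and
with the patch applied the copy should complete.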
zfscopyrlimit.patch