On 09/28/2018 09:37 PM, Dave Ulrick wrote:
> Note: I don't think this is an issue with buffered I/O to stdout per se. I
> say this because I'm seeing pretty consistent run times if I write to stdout
> _as long as the output file didn't previously exist_. Rather, the issue seems
> to be with _overwriting_ a file that already exists.
On 9/28/18 2:28 PM, Dave Ulrick wrote:
> Thanks, that makes sense. Any idea of why unlink()ing the file seems
> to be faster than truncating it?
As best I understand it, one process is async and parallel:
unlink, clear block list, sync to disk
open, write, close, sync to disk
Where the other process is serial: the old block list has to be cleared
before the open, write, close, and sync can proceed.
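
To make the contrast concrete, here is a minimal POSIX C sketch of the two ways of reusing an output file name (the helper names are mine, not anything from the thread; the first is roughly what the shell's '>outfile' redirection does, the second is the unlink-and-recreate approach being described):

/* Reusing an output file name: truncate in place vs. unlink and recreate. */
#include <fcntl.h>
#include <unistd.h>

/* Roughly what "cmd >outfile" does: the existing file is truncated,
 * so its old data blocks have to go back to the free list before or
 * while the new data is written. */
int open_by_truncate(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
}

/* The alternative under discussion: remove the old name first and
 * create a brand-new file. The old inode's blocks can then be
 * reclaimed after the last reference goes away, so the new write
 * does not have to wait for them. */
int open_by_unlink(const char *path)
{
    unlink(path);  /* failure with ENOENT just means it did not exist */
    return open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
}

Whether the truncating open really pays for freeing the old blocks up front is filesystem-dependent, as the next message notes.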
On Fri, 28 Sep 2018, Greg Woods wrote:
> When an existing file is truncated, which the shell does when you use
> stdout redirection, all the blocks that were in it have to be moved to the
> file system's free block list. Exactly what happens there may depend on
> what kind of file system you are using.
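
To see how much the truncation step alone costs on a given filesystem, a rough sketch like the following can help (the path is a placeholder; depending on the filesystem, the block freeing may be deferred rather than paid inside the open call, so treat the number as indicative only):

/* Time just the truncating open of an existing file, nothing else. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "outfile";  /* placeholder name */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    int fd = open(path, O_WRONLY | O_TRUNC);   /* drop the old contents */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (fd < 0) { perror("open"); return 1; }
    close(fd);
    printf("truncating open took %.6f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}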
On Fri, Sep 28, 2018 at 2:37 PM Dave Ulrick wrote:
> Update:
>
> > $ time cat infile >outfile
> >
> > If 'infile' is on the order of 140 MB, 'time' might show something as
> > low as:
> >
> > real 0m0.146s
> > user 0m0.000s
> > sys 0m0.109s
> > CPU % 74.29
> >
> > or as high as:
> >
> > real 0
Update:
Sorry to reply to my own post, but...
After posting the following I realized that it can't be just a stdout
redirection issue, because I can recreate the same issue if I fopen(, "w")
a disk file from a C program. I've modified the subject accordingly.
Dave
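
A standalone reproducer along the lines Dave describes might look something like this (the file name, size, and buffer size are placeholders rather than anything from his actual program; like Bash's 'time', it measures only what the program sees, with no fsync):

/* Rewrite an output file via fopen(..., "w") and time the whole run. */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const char *path = "outfile";        /* placeholder name */
    const size_t total = 140UL << 20;    /* ~140 MB, as in the thread */
    static char buf[1 << 20];            /* 1 MB of dummy data */
    memset(buf, 'x', sizeof buf);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    FILE *f = fopen(path, "w");          /* truncates if the file exists */
    if (!f) { perror("fopen"); return 1; }
    for (size_t written = 0; written < total; written += sizeof buf)
        fwrite(buf, 1, sizeof buf, f);
    fclose(f);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("elapsed: %.3f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}

Running it twice, once after deleting 'outfile' and once with the previous output still in place, should show whether the variance follows the pre-existing-file case, which matches what Dave reports about new versus existing output files.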
While debugging a custom program that can write large output files to
stdout, I noticed that the run time as displayed by the Bash 'time' prefix
varied wildly from run to run. It turns out that the issue isn't just with
my program. I can recreate it with 'cat':
$ time cat infile >outfile
If 'infile' is on the order of 140 MB, 'time' might show something as
low as:

real 0m0.146s
user 0m0.000s
sys 0m0.109s
CPU % 74.29

or, on other runs, something considerably higher.