On Sat, Apr 18, 2020 at 5:14 AM Andres Freund <and...@anarazel.de> wrote:
>
> zstd -T0 < onegbofrandom > NUL
> zstd -T0 < onegbofrandom > /dev/null
> linux host: 0.361s
> windows guest: 0.602s
>
> zstd -T0 < onegbofrandom | dd bs=1M of=NUL
> zstd -T0 < onegbofrandom | dd bs=1M of=/dev/null
> linux host: 0.454s
> windows guest: 0.802s
>
> zstd -T0 < onegbofrandom | dd bs=64k | dd bs=64k | dd bs=64k | wc -c
> linux host: 0.521s
> windows guest: 1.376s
>
>
> This suggests that pipes do have a considerably higher overhead on
> windows, but that it's not all that terrible if one takes care to use
> large buffers in each pipe element.
>
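One more data point that might help separate the buffer-size effect from
the number of stages would be the same three-stage chain with bs=1M in
every dd, compared against the bs=64k numbers above. Just a sketch on my
side, I haven't run this variant:

zstd -T0 < onegbofrandom | dd bs=1M | dd bs=1M | dd bs=1M | wc -c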
I have also done some similar experiments on my Win-7 box and the
results are as follows:

zstd -T0 < 16396 > NUL
Execution time: 2.240 s

zstd -T0 < 16396 | dd bs=1M > NUL
Execution time: 4.240 s

zstd -T0 < 16396 | dd bs=64k | dd bs=64k | dd bs=64k | wc -c
Execution time: 5.959 s

In the above tests, 16396 is a 1GB file generated via pgbench. The
results indicate that each additional dd stage in the pipeline adds
significant overhead, but how can we tell how much of that overhead is
due to the pipe itself?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
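PS: To separate the pure pipe/process cost from the zstd cost, one idea
(an untested sketch) would be to push the same file through dd-only
chains of increasing length, so that the compression time drops out of
the comparison:

dd if=16396 bs=1M > NUL
dd if=16396 bs=1M | dd bs=1M > NUL
dd if=16396 bs=1M | dd bs=1M | dd bs=1M > NUL

The difference between consecutive runs should then roughly be the cost
of one extra pipe plus one extra dd process.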