desbma <dutch...@gmail.com> added the comment:
If you do a benchmark by reading from a file and then writing to /dev/null several times, without clearing caches, you are measuring *only* the syscall overhead:
* input data is read from the Linux page cache, not from the file on your SSD itself
* no data is actually written (obviously, because the output is /dev/null)

Your current command line also measures open/close timings; without those, I think the speed should increase linearly when doubling the buffer size, but of course this is misleading, because it's a synthetic benchmark.

Also, if you clear caches between tests and write the output file to the SSD itself, sendfile will be used, which should be even faster. So again I'm not sure this means much compared to real world usage.

----------
_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue36103>
_______________________________________
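
As a rough illustration of that last point, here is a minimal sketch of a cache-cold copy benchmark. It assumes Linux with root access (so the page cache can be dropped via /proc/sys/vm/drop_caches), and the paths src.bin/dst.bin on the SSD are hypothetical; on Python 3.8+ shutil.copyfile can go through sendfile() on Linux.

#!/usr/bin/env python3
# Sketch of a more representative copy benchmark: drop the page cache
# before each run and write the output to a real file on the SSD.
import shutil
import subprocess
import time

SRC = "/mnt/ssd/src.bin"   # hypothetical source file on the SSD
DST = "/mnt/ssd/dst.bin"   # hypothetical destination on the same SSD

def drop_caches():
    # Flush dirty pages, then drop the page cache so the next read
    # actually hits the SSD instead of RAM (requires root).
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

def bench(runs=5):
    timings = []
    for _ in range(runs):
        drop_caches()
        start = time.perf_counter()
        shutil.copyfile(SRC, DST)               # can use sendfile() on Linux (Python 3.8+)
        subprocess.run(["sync"], check=True)    # include writeback to the SSD in the timing
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings)

if __name__ == "__main__":
    best, avg = bench()
    print(f"best: {best:.3f}s  avg: {avg:.3f}s")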