Antoine Pitrou <pit...@free.fr> added the comment:

> > But if line buffering doesn't work, disabling buffering on
> > stdout/stderr does have a functional consequence: it allows process
> > output to appear as generated instead of coming in chunks when the
> > buffer is full
>
> Yes, sorry, I had it backwards. It's buffering on stdin which doesn't
> make any functional difference (whether it's buffered or not, you
> always get data as soon as it arrives).
Actually, I had it right (you confused me :-)). In the context of
subprocess, you write to the child's stdin pipe, and you read from the
child's stdout and stderr pipes. So whether or not you buffer the reads
from the stdout and stderr pipes makes no difference (except in
performance): as soon as the child outputs a single byte, it becomes
available to the parent. But if you buffer the writes to stdin, the
child process will see data arrive only when the buffer is flushed.

Here is the relevant code in subprocess.py:

    if p2cwrite != -1:
        self.stdin = io.open(p2cwrite, 'wb', bufsize)
        if self.universal_newlines:
            self.stdin = io.TextIOWrapper(self.stdin)
    if c2pread != -1:
        self.stdout = io.open(c2pread, 'rb', bufsize)
        if universal_newlines:
            self.stdout = io.TextIOWrapper(self.stdout)
    if errread != -1:
        self.stderr = io.open(errread, 'rb', bufsize)
        if universal_newlines:
            self.stderr = io.TextIOWrapper(self.stderr)

Only stdin is opened in write mode.
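A minimal sketch of the behaviour described above (not part of the
original message; the one-line child script and the variable names are
invented for illustration). With the default bufsize, bytes written to
p.stdin sit in the parent-side buffer until flush(), while anything the
child writes to its stdout can be read by the parent as soon as it
arrives:

    import subprocess
    import sys

    # Child: read one line from stdin and echo it back immediately.
    child_code = (
        "import sys\n"
        "line = sys.stdin.readline()\n"
        "sys.stdout.write('child got: ' + line)\n"
        "sys.stdout.flush()\n"
    )

    p = subprocess.Popen([sys.executable, '-c', child_code],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    # Buffered write: the line stays in the parent's BufferedWriter,
    # so the child has not seen it yet.
    p.stdin.write(b'hello\n')
    p.stdin.flush()              # only now does the line reach the child

    # Buffered read: readline() returns as soon as the child's reply
    # arrives; buffering on the read side adds no functional delay.
    print(p.stdout.readline())   # b'child got: hello\n'

    p.stdin.close()
    p.wait()

Passing bufsize=0 to Popen would make p.stdin unbuffered (so the
explicit flush() becomes unnecessary), but it changes nothing
functionally for p.stdout and p.stderr, which matches the point above.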