Nick Coghlan <ncogh...@gmail.com> added the comment:

I discovered this same problem recently when updating the subprocess docs, and 
also while working on the improved shell invocation support I am proposing for 
3.3 (#13238).

I initially posted an earlier variant of this suggestion as a new issue 
(#13442), but Victor redirected me here.

Firstly, I don't think it makes any sense to set encoding information globally 
for the Popen object. As a simple example, consider using Python to write a 
test suite for the iconv command line tool: there's only one Popen instance 
(for the iconv call), but different encodings for stdin and stdout.

Really, we want to be able to make full use of Python 3's layered I/O model, 
but we want the subprocess pipe instances to be slotted in at the lowest layer 
rather than creating them ourselves.
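
For comparison, getting text streams today means layering the wrapper by hand 
on top of the binary pipe Popen hands back (a minimal sketch, assuming the 
child process writes UTF-8 to stdout):

    import io
    import subprocess

    # Status quo: Popen gives us a binary pipe, and we stack the text
    # decoding layer on top of it ourselves
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    with io.TextIOWrapper(p.stdout, encoding='utf-8') as stdout:
        data = stdout.read()
    p.wait()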

The easiest way to do that is to have a separate class that specifies the 
additional options for pipe creation and does the wrapping:

    import io

    class TextPipe:
        # Marker object: ask Popen to wrap the created pipe in a TextIOWrapper
        def __init__(self, *args, **kwds):
            # Arguments passed straight through to io.TextIOWrapper
            # (encoding, errors, newline, etc.)
            self.args = args
            self.kwds = kwds
        def wrap_pipe(self, pipe):
            # Layer the text wrapper over the raw pipe created by Popen
            return io.TextIOWrapper(pipe, *self.args, **self.kwds)

The stream creation process would then include a new "wrap = 
getattr(stream_arg, 'wrap_pipe', None)" check, similar to the existing check 
for subprocess.PIPE, but one that invokes the method to wrap the pipe after 
creating it.

So to read UTF-8 encoded data from a subprocess, you could just do:

    data = check_stdout(cmd, stdout=TextPipe('utf-8'), stderr=STDOUT)
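
And the iconv test case mentioned earlier could then use a different encoding 
for each stream. A rough sketch of the hypothetical usage (text_data standing 
in for whatever the test feeds to iconv):

    # One Popen call, two different pipe encodings: latin-1 going in,
    # utf-8 coming back out of iconv
    p = Popen(['iconv', '-f', 'latin-1', '-t', 'utf-8'],
              stdin=TextPipe('latin-1'), stdout=TextPipe('utf-8'))
    p.stdin.write(text_data)     # accepts str, encoded as latin-1 for the child
    p.stdin.close()
    result = p.stdout.read()     # decoded from utf-8 back to str
    p.wait()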

----------
nosy: +ncoghlan

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6135>
_______________________________________