On Wednesday, August 28, 2013 1:13:36 PM UTC+2, Dave Angel wrote:
> On 28/8/2013 04:32, Kurt Mueller wrote:
> > For some text manipulation tasks I need a template to split lines
> > from stdin into a list of strings the way shlex.split() does it.
> > The encoding of the input can vary.
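
(For reference, a minimal sketch of such a per-line detect/decode/split
template; Python 3, names are illustrative, not the actual code:)

    import shlex
    import sys

    import chardet

    for raw in sys.stdin.buffer:        # raw byte lines from stdin
        # per-line guess, e.g. {'encoding': 'utf-8', 'confidence': 0.99}
        guess = chardet.detect(raw)
        line = raw.decode(guess['encoding'] or 'utf-8', errors='replace')
        print(shlex.split(line))        # shell-like splitting into a list of strings
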
> Does that mean it'll vary from one run of the program to the next, or
> it'll vary from one line to the next?  Your code below assumes the
> latter.  That can greatly increase the unreliability of the already
> dubious chardet algorithm.

The encoding only varies from one run to the next. The reason I process
the input line by line is memory usage.

One option for better chardet reliability: I could read all of the
input, keep the lines in a list for further processing, feed them to
chardet.universaldetector.UniversalDetector.feed()/close()/result(),
and only then decode and shlex-split the lines in the list. That way
the chardet oracle would be more reliable, but roughly twice as much
memory would be used.

> > import chardet
> Is this the one ?
> https://pypi.python.org/pypi/chardet

Yes.

> > $ cat <some-file> | template.py
> Why not have a separate filter that converts from a (guessed) encoding
> into utf-8, and have the later stage(s) assume utf-8 ?  That way, the
> filter could be fed clues by the user, or replaced entirely, without
> affecting the main code you're working on.

Working on UNIX-like systems (I am happy to work in a MSFZ), the
processing pipe would then be:

    cat <some-file> | recode2utf8 | splitlines.py
    memory usage: 2 * <some-file>  (plus chardet memory usage)

> Alternatively, just add a commandline argument with the encoding, and
> parse it into enco_type.

    cat <some-file> | splitlines.py -e latin9
    memory usage: 1 * <some-file>

or

    cat <some-file> | splitlines.py -e $( codingdetect <some-file> )
    memory usage: 1 * <some-file>

So, because memory usage is not the primary concern, I think I will go
with the whole-input option described above (a rough sketch is at the
end of this message).

--
Kurt Müller
--
http://mail.python.org/mailman/listinfo/python-list
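
(The sketch referenced above: the whole-input variant, with an optional
-e override as Dave suggested; Python 3, names are illustrative, and it
is untested:)

    import argparse
    import shlex
    import sys

    from chardet.universaldetector import UniversalDetector

    parser = argparse.ArgumentParser(description='shlex-split lines from stdin')
    parser.add_argument('-e', '--encoding',
                        help='input encoding (skips the chardet guess)')
    args = parser.parse_args()

    raw_lines = sys.stdin.buffer.readlines()    # whole input kept in memory

    enco_type = args.encoding
    if enco_type is None:                       # no -e given: guess over all input
        detector = UniversalDetector()
        for raw in raw_lines:
            detector.feed(raw)
            if detector.done:                   # detector is confident enough
                break
        detector.close()
        enco_type = detector.result['encoding'] or 'utf-8'

    for raw in raw_lines:
        words = shlex.split(raw.decode(enco_type, errors='replace'))
        print(words)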