ANN: Version 0.1.1 of sarge (a subprocess wrapper library) has been released.
Version 0.1.1 of Sarge, a cross-platform library which wraps the subprocess
module in the standard library, has been released.

What changed?
-------------

- Added the ability to scan for specific patterns in subprocess output
  streams.
- Added convenience methods to operate on wrapped subprocesses.
- Exceptions which occur while spawning subprocesses are now propagated.
- Fixed issues #2, #3 and #4.
- Improved shell_shlex resilience with Unicode on 2.x.
- Added get_stdout, get_stderr and get_both for when subprocess output is
  not expected to be voluminous.
- Added an internal lock to serialise access to shared data.
- Added tests to cover added functionality and reported issues.
- Added numerous documentation updates.

What does Sarge do?
-------------------

Sarge tries to make interfacing with external programs from your Python
applications easier than just using subprocess alone. Sarge offers the
following features:

* A simple way to run command lines which allows a rich subset of Bash-style
  shell command syntax, but parsed and run by sarge so that you can run on
  Windows without cygwin (subject to having those commands available):

  >>> from sarge import capture_stdout
  >>> p = capture_stdout('echo foo | cat; echo bar')
  >>> for line in p.stdout: print(repr(line))
  ...
  'foo\n'
  'bar\n'

* The ability to format shell commands with placeholders, such that
  variables are quoted to prevent shell injection attacks.

* The ability to capture output streams without requiring you to program
  your own threads. You just use a Capture object and then you can read
  from it as and when you want.

Advantages over subprocess
--------------------------

Sarge offers the following benefits compared to using subprocess:

* The API is very simple.

* It's easier to use command pipelines - using subprocess out of the box
  often leads to deadlocks because pipe buffers get filled up.

* It would be nice to use Bash-style pipe syntax on Windows, but Windows
  shells don't support some of the syntax which is useful, like &&, ||, |&
  and so on. Sarge gives you that functionality on Windows, without cygwin.

* Sometimes, subprocess.Popen.communicate() is not flexible enough for
  one's needs - for example, when one needs to process output a line at a
  time without buffering the entire output in memory.

* It's desirable to avoid shell injection problems by having the ability to
  quote command arguments safely.

* subprocess allows you to let stderr be the same as stdout, but not the
  other way around - and sometimes, you need to do that.

Python version and platform compatibility
-----------------------------------------

Sarge is intended to be used on any Python version >= 2.6 and is tested on
Python versions 2.6, 2.7, 3.1, 3.2 and 3.3 on Linux, Windows, and Mac OS X
(not all versions are tested on all platforms, but sarge is expected to
work correctly on all these versions on all these platforms).

Finding out more
----------------

You can read the documentation at

    http://sarge.readthedocs.org/

There's a lot more information, with examples, than I can put into this
post. You can install Sarge using "pip install sarge" to try it out.

The project is hosted on BitBucket at

    https://bitbucket.org/vinay.sajip/sarge/

and you can leave feedback on the issue tracker there.

I hope you find Sarge useful!

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
ANN: A new version (0.3.4) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is a minor enhancement and bug-fix release. See the project website
( http://code.google.com/p/python-gnupg/ ) for more information. Summary:

- An encoding bug which caused an exception when getting the GPG version
  has been fixed.
- Recipients can be passed in a set or frozenset as well as in a list or
  tuple.
- The keyring argument now accepts a list of public keyring filenames as
  well as a single filename.
- A secret_keyring argument has been added which accepts either a single
  filename or a list of filenames for secret keyrings.

The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 2.7,
3.1 and Jython 2.5.1), Mac OS X (Python 2.5) and Ubuntu (CPython 2.4, 2.5,
2.6, 2.7, 3.0, 3.1, 3.2). On Windows, GnuPG 1.4.11 has been used for the
tests.

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign
documents and verify digital signatures, manage (generate, list and delete)
encryption keys, using proven Public Key Infrastructure (PKI) encryption
technology based on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{ ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 { ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ - as
always, your feedback is most welcome (especially bug reports, patches and
suggestions for improvement).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

--
http://mail.python.org/mailman/listinfo/python-list
ANN: A new version (0.3.1) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is a minor enhancement and bug-fix release. See the project website
( http://code.google.com/p/python-gnupg/ ) for more information. Summary:

- Better support for status messages from GnuPG.
- Support for additional arguments to be passed to GnuPG.
- Bugs in tests which used Latin-1 encoded data have been fixed by
  specifying that encoding.
- On verification (including after decryption), the signer trust level is
  returned in integer and text formats.

The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 2.7,
3.1 and Jython 2.5.1), Mac OS X (Python 2.5) and Ubuntu (CPython 2.4, 2.5,
2.6, 2.7, 3.0, 3.1, 3.2). On Windows, GnuPG 1.4.11 has been used for the
tests.

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign
documents and verify digital signatures, manage (generate, list and delete)
encryption keys, using proven Public Key Infrastructure (PKI) encryption
technology based on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{ ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 { ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ - as
always, your feedback is most welcome (especially bug reports, patches and
suggestions for improvement).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

--
http://mail.python.org/mailman/listinfo/python-list
Re: Logging handler: No output
Florian Lindner xgm.de> writes:

> But neither the FileHandler nor the StreamHandler produce any actual
> output. The file is being created but stays empty. If I use a print
> output in the while loop it works, so output is catched and the
> applications stdout in working. But why the logger proclog catching
> nothing?

Paul Rubin's answer looks correct. In addition, you should note that every
call to start_process will add a new handler to the logger (I can't tell if
the logger could be the same on multiple calls, but it seems likely) and
that may produce multiple messages. The rule of thumb is: most code should
get loggers and log to them, but adding handlers, setting levels etc.
should only be done in one place, typically invoked from an
"if __name__ == '__main__'" clause; see the sketch below.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
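By way of illustration, a minimal sketch of that rule of thumb - the
function name start_process comes from the thread, everything else here is
illustrative:

    import logging

    logger = logging.getLogger(__name__)   # module level: just get a logger

    def start_process():
        # Log here, but don't add handlers or set levels here.
        logger.debug('starting process')

    if __name__ == '__main__':
        # Configure handlers and levels exactly once, at application startup.
        logging.basicConfig(
            level=logging.DEBUG,
            format='%(asctime)s %(name)s %(levelname)s %(message)s')
        start_process()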
Re: Python Logging: Specifying converter attribute of a log formatter in config file
On Thursday, August 30, 2012 11:38:27 AM UTC+1, Radha Krishna Srimanthula wrote:

> Now, how do I specify the converter attribute (time.gmtime) in the above
> section?

Sadly, there is no way of doing this using the configuration file, other
than having e.g. a

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and then using a UTCFormatter in the configuration.

--
http://mail.python.org/mailman/listinfo/python-list
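For example - and assuming the class above lives in an importable module,
here called mylog purely for illustration - the formatter section of the
config file can then reference it:

    # mylog.py
    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime   # render %(asctime)s in UTC

and in the configuration file:

    [formatter_utc]
    format = %(asctime)s %(levelname)s %(message)s
    class = mylog.UTCFormatter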
Re: Getting a TimedRotatingFileHandler not to put two dates in the same file?
David M Chess us.ibm.com> writes:

> But now the users have noticed that if the process isn't up at midnight,
> they can end up with lines from two (or I guess potentially more) dates
> in the same log file.
>
> Is there some way to fix this, either with cleverer arguments into the
> TimedRotatingFileHandler, or by some plausible subclassing of it or its
> superclass?

Well, of course you can subclass and override it to do what you want -
there's no magic there. The behaviour is as you would expect: the default
behaviour of a TimedRotatingFileHandler is to append, and roll over at
midnight. So if your program isn't running at midnight, it won't rotate:

Day 1. Run your program, stop it before midnight. The log file contains
dates from this day. No rotation occurred.

Day 2. Run your program again, stop it before midnight. The log file
contains dates from this day, and Day 1. No rotation occurred.

That's the symptom you're seeing, right?

You could do as Dave Angel suggested - just use a FileHandler with the name
derived from the date (see the sketch below). If you don't want to or can't
actually rotate files at midnight, you're using the wrong tool for the
job :-) If you sometimes want to rotate at midnight (the process is running
at that time) and at other times not (the process isn't running then), you
might have to code startup logic in your program to deal with the vagaries
of your environment, since only you would know what they are :-)

Work is afoot to make the actual rollover time configurable (i.e. not
forced to be literally midnight) - see

    http://bugs.python.org/issue9556

- but that's an enhancement request, not a bug, and so it'll see the light
of day in Python 3.4, if at all. An implementation is in my sandbox repo at

    http://hg.python.org/sandbox/vsajip

in branch fix9556. If all you need to do is roll over at a different time
daily (say 7 a.m.), you might be able to use this. Feel free to use that
code as inspiration for your subclass.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
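A minimal sketch of the date-in-the-filename approach - the filename
pattern and format string are illustrative, not taken from the thread:

    import datetime
    import logging

    # One log file per calendar day, named for the day the process starts.
    filename = 'app-%s.log' % datetime.date.today().strftime('%Y-%m-%d')
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logging.getLogger().addHandler(handler)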
Re: Specifying two log files with one configuration file
Peter Steele gmail.com> writes:

> I have been unable to get this to work. My current conf file looks like
> this:

Try with the following changes:

    [logger_test]
    level: DEBUG
    handlers: test
    propagate: 0
    qualname: test

The "qualname: test" is what identifies the logger as the logger named
'test', and "propagate: 0" prevents the test message from being passed up
to the root logger.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
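For context, a minimal complete configuration along these lines might look
as follows; the handler names, filenames and format string here are
assumptions, not taken from the original thread:

    [loggers]
    keys: root, test

    [handlers]
    keys: root_file, test_file

    [formatters]
    keys: simple

    [logger_root]
    level: DEBUG
    handlers: root_file

    [logger_test]
    level: DEBUG
    handlers: test_file
    propagate: 0
    qualname: test

    [handler_root_file]
    class: FileHandler
    level: DEBUG
    formatter: simple
    args: ('main.log',)

    [handler_test_file]
    class: FileHandler
    level: DEBUG
    formatter: simple
    args: ('test.log',)

    [formatter_simple]
    format: %(asctime)s %(name)s %(levelname)s %(message)s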
ANN: A new version (0.3.2) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is a minor enhancement and bug-fix release. See the project website
( http://code.google.com/p/python-gnupg/ ) for more information. Summary:

- Improved support for status messages from GnuPG.
- Fixed key generation to skip empty values.
- Fixed list_keys to handle escaped characters.
- Removed doctests which required interactive entry of passwords.

The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 3.1,
2.7 and Jython 2.5.1) and Ubuntu (CPython 2.4, 2.5, 2.6, 2.7, 3.0, 3.1,
3.2). On Windows, GnuPG 1.4.11 has been used for the tests. Tests also pass
under CPython 2.5 and CPython 2.6 on OS X.

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign
documents and verify digital signatures, manage (generate, list and delete)
encryption keys, using proven Public Key Infrastructure (PKI) encryption
technology based on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{ ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 { ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ - as
always, your feedback is most welcome (especially bug reports, patches and
suggestions for improvement).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

--
http://mail.python.org/mailman/listinfo/python-list
A new script which creates Python 3.3 venvs with Distribute and pip installed in them
Python 3.3 includes a script, pyvenv, which is used to create virtual
environments. However, Distribute and pip are not installed in such
environments - because, though they are popular, they are third-party
packages, not part of Python.

The Python 3.3 venv machinery allows customisation of virtual environments
fairly readily. To demonstrate how to do this, and to provide at the same
time a script which might be useful to people, I've created a script,
pyvenvex.py, at

    https://gist.github.com/4673395

which extends the pyvenv script to not only create virtual environments,
but to also install Distribute and pip into them.

The script needs Python 3.3, and one way to use it is:

1. Download the script to a directory on your path, and (on Posix
   platforms) make it executable.
2. Add a shebang line at the top of the script, pointing to your Python 3.3
   interpreter (Posix, and also Windows if you have the PEP 397 launcher
   which is part of Python 3.3 on Windows).
3. Run the pyvenvex script to create your virtual environments, in place of
   pyvenv, when you want Distribute and pip to be installed for you (this
   is how virtualenv sets up environments it creates).

You can run the script with -h to see the command line options available,
which are a superset of those of the pyvenv script.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: A new script which creates Python 3.3 venvs with Distribute and pip installed in them
Ian Kelly gmail.com> writes:

> I have a shell script for this:

Sure - there's a similar one at https://gist.github.com/4591655

The main purpose of the script was to illustrate how to subclass
venv.EnvBuilder, and I've added it as an example to the 3.3 and
in-development documentation:

    http://docs.python.org/3/library/venv.html#an-example-of-extending-envbuilder

Doing it in Python means that it runs cross-platform and offers a few
benefits, such as command line help or the option to install Distribute but
not pip. A bare-bones sketch of the subclassing approach follows below.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
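This is not the pyvenvex.py script itself, only the shape of the approach;
the class name, the placeholder path and the print call standing in for the
real installation step are assumptions:

    import venv

    class ExtendedEnvBuilder(venv.EnvBuilder):
        def post_setup(self, context):
            # Called by create() once the core environment exists.
            # context.env_dir is the environment directory and
            # context.env_exe is the interpreter inside it; the real
            # script runs the Distribute and pip installers with that
            # interpreter at this point.
            print('Would install Distribute/pip into', context.env_dir)

    builder = ExtendedEnvBuilder()
    builder.create('myenv')   # 'myenv' is a placeholder path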
Re: Python launcher (PEP 397) and emacs python-mode.el
Thomas Heller ctypes.org> writes:

> What I meant to write is this:
>
> when the shebang line in script.py contains this:
>    #!/usr/bin/python3.1-32
> then emacs SHOULD run
>    py.exe -3.1-32 script.py
> and the launcher runs
>    c:\Python31\python.exe script.py

IMO it would be better for emacs to just run

    py.exe script.py

and py.exe can read the shebang and do the right thing. This saves the
emacs code from having to duplicate the shebang line processing logic that
py.exe uses (which, as we know, is unusual - so for a cross-platform script
you can have a shebang line of #!/usr/bin/python3.2, and on Windows it will
still call the appropriate Python 3.2 even if it's not in /usr/bin, as
there's no /usr/bin :-)).

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: python logging filter limitation, looks intentional?
On Jan 28, 10:51 am, Chris Withers wrote:

> To be clear, I wasn't asking for a change to existing behaviour, I was
> asking for the addition of an option that would allow the logging
> framework to behave as most people would expect when it comes to
> filters ;-)

And the evidence for "most people" would be ... ? ;-) It hasn't been raised
before, despite filters working the way they do since Python 2.3 ...

And I wonder, would any of those people be willing to accept the
performance impact of the change? I'm not sure how big the impact would be,
but it does involve another hierarchy traversal and additional calls to the
ancestor filters.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: PythonWin debugger holds onto global logging objects too long
On Jan 24, 2:52 pm, Rob Richardson wrote:

> I use PythonWin to debug the Python scripts we write. Our scripts often
> use the log2py logging package. When running the scripts inside the
> debugger, we seem to get one logging object for every time we run the
> script. The result is that after running the script five times, the log
> file contains five copies of every message. The only way I know to clean
> this up and get only a single copy of each message is to close PythonWin
> and restart it.
>
> What do I have to do in my scripts to clean up the logging objects so
> that I never get more than one copy of each message in my log files?

I don't know what log2py is - Google didn't show up anything that looked
relevant. If you're talking about the logging package in the Python
standard library, I may be able to help: but a simple script that I ran in
PythonWin didn't show any problems, so you'll probably need to post a short
script which demonstrates the problem when run in PythonWin.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
ANN: Sarge, a library wrapping the subprocess module, has been released.
Sarge, a cross-platform library which wraps the subprocess module in the
standard library, has been released.

What does it do?
----------------

Sarge tries to make interfacing with external programs from your Python
applications easier than just using subprocess alone. Sarge offers the
following features:

* A simple way to run command lines which allows a rich subset of Bash-style
  shell command syntax, but parsed and run by sarge so that you can run on
  Windows without cygwin (subject to having those commands available):

  >>> from sarge import capture_stdout
  >>> p = capture_stdout('echo foo | cat; echo bar')
  >>> for line in p.stdout: print(repr(line))
  ...
  'foo\n'
  'bar\n'

* The ability to format shell commands with placeholders, such that
  variables are quoted to prevent shell injection attacks.

* The ability to capture output streams without requiring you to program
  your own threads. You just use a Capture object and then you can read
  from it as and when you want.

Advantages over subprocess
--------------------------

Sarge offers the following benefits compared to using subprocess:

* The API is very simple.

* It's easier to use command pipelines - using subprocess out of the box
  often leads to deadlocks because pipe buffers get filled up.

* It would be nice to use Bash-style pipe syntax on Windows, but Windows
  shells don't support some of the syntax which is useful, like &&, ||, |&
  and so on. Sarge gives you that functionality on Windows, without cygwin.

* Sometimes, subprocess.Popen.communicate() is not flexible enough for
  one's needs - for example, when one needs to process output a line at a
  time without buffering the entire output in memory.

* It's desirable to avoid shell injection problems by having the ability to
  quote command arguments safely.

* subprocess allows you to let stderr be the same as stdout, but not the
  other way around - and sometimes, you need to do that.

Python version and platform compatibility
-----------------------------------------

Sarge is intended to be used on any Python version >= 2.6 and is tested on
Python versions 2.6, 2.7, 3.1, 3.2 and 3.3 on Linux, Windows, and Mac OS X
(not all versions are tested on all platforms, but sarge is expected to
work correctly on all these versions on all these platforms).

Finding out more
----------------

You can read the documentation at

    http://sarge.readthedocs.org/

There's a lot more information, with examples, than I can put into this
post. You can install Sarge using "pip install sarge" to try it out.

The project is hosted on BitBucket at

    https://bitbucket.org/vinay.sajip/sarge/

and you can leave feedback on the issue tracker there.

I hope you find Sarge useful!

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Sarge, a library wrapping the subprocess module, has been released.
On Feb 12, 9:41 am, Anh Hai Trinh wrote:

> Having written something with similar purpose
> (https://github.com/aht/extproc), here are my comments:
>
> * Having command parsed from a string is complicated. Why not just have
> an OOP API to construct commands?

It's not hard for the user, and less work e.g. when migrating from an
existing Bash script. I may have put in the effort to use a recursive
descent parser under the hood, but why should the user of the library care?
It doesn't make their life harder. And it's not complicated, not even
particularly complex - such parsers are commonplace.

> * Using threads and fork()ing process does not play nice together unless
> extreme care is taken. Disasters await.

By that token, disasters await if you ever use threads, unless you know
what you're doing (and sometimes even then). Sarge doesn't force the use of
threads with forking - you can do everything synchronously if you want. The
test suite does cover the particular case of thread+fork. Do you have
specific caveats, or is it just a "there be dragons" sentiment?

Sarge is still in alpha status; no doubt bugs will surface, but unless a
real show-stopper occurs, there's not much to be gained by throwing up our
hands.

BTW extproc is nice, but I wanted to push the envelope a little :-)

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Sarge, a library wrapping the subprocess module, has been released.
On Feb 12, 3:35 pm, Anh Hai Trinh wrote:

> I think most users like to use Python, or they'd use Bash. I think people
> prefer not another language that is different from both, and having
> little benefits. My own opinion of course.

I have looked at pbs and clom: they Pythonify calls to external programs by
making spawning those look like function calls. There's nothing wrong with
that, it's just a matter of taste. I find that e.g.

    wc(ls("/etc", "-1"), "-l")

is not as readable as

    call("ls /etc -1 | wc -l")

and the attempt to Pythonify doesn't buy you much, IMO. Of course, it is a
matter of taste - I understand that there are people who will prefer the
pbs/clom way of doing things.

> Re. threads & fork():
> http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-befo...
>
> For a careful impl of fork-exec with threads, see
> http://golang.org/src/pkg/syscall/exec_unix.go

Thanks for the links. The first seems to me to be talking about the dangers
of locking and forking; if you don't use threads, you don't need locks, so
the discussion about locking only really applies in a threading+forking
scenario. I agree that locking+forking can be problematic because of the
semantics of what happens to the state of the locks and threads in the
child (for example, as mentioned in http://bugs.python.org/issue6721).
However, it's not clear that any problem occurs if the child just execs a
new program, overwriting the old - which is the case here. The link you
pointed to says that "It seems that calling execve(2) to start another
program is the only sane reason you would like to call fork(2) in a
multi-threaded program.", which is what we're doing in this case. Even
though it goes on to mention the dangers inherent in inherited file
handles, it also mentions that these problems have been overcome in recent
Linux kernels, and the subprocess module does contain code to handle at
least some of these conditions (e.g. the preexec_fn and close_fds keyword
arguments to subprocess.Popen). Hopefully, if there are race conditions
which emerge in the subprocess code (as has happened in the past), they
will be fixed (as has happened in the past).

> Hmm, if the extra "envelop" is the async code with threads that may
> deadlock, I would say "thanks but no thanks" :p

That is of course your privilege. I would hardly expect you to drop extproc
in favour of sarge. But there might be people who need to tread in these
dangerous waters, and hopefully sarge will make things easier for them. As
I said earlier, one doesn't *need* to use asynchronous calls.

I agree that I may have to review the design decisions I've made, based on
feedback from people actually trying the async functionality out. I don't
feel that shying away from difficult problems without even trying to solve
them is the best way of moving things forward. What are the outcomes?

* Maybe people won't even try the async functionality (in which case, they
  won't hit problems).
* They'll hit problems and just give up on the library (I hope not - if I
  ever have a problem with a library I want to use, I always try and engage
  with the developers to find a workaround or fix).
* They'll report problems which, on investigation, will turn out to be
  fixable bugs - well and good.
* The reported bugs will be unfixable for some reason, in which case I'll
  just have to deprecate that functionality.

Remember, this is version 0.1 of the library, not version 1.0. I expect to
do some API and functionality tweaks based on feedback and bugs which show
up.

> I do think that IO redirection is much nicer with extproc.

Again, a matter of taste. You feel that it's better to pass dicts around in
the public API, where integer file handles map to other handles or streams;
I feel that using a Capture instance is less fiddly for the user. Let a
thousand flowers bloom, and all that.

I do thank you for the time you've taken to make these comments, and I
found the reading you pointed me to interesting. I will update the sarge
docs to point to the link on the Linux Programming blog, to make sure
people are informed of potential pitfalls.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Sarge, a library wrapping the subprocess module, has been released.
On Feb 12, 4:19 pm, Anh Hai Trinh wrote:

> If you use threads and call fork(), you'll almost guaranteed to face
> with deadlocks. Perhaps not in a particular piece of code, but some
> others. Perhaps not on your laptop, but on the production machine with
> different kernels. Like most race conditions, they will eventually show
> up.

You can hit deadlocks in multi-threaded programs even without the fork(),
can't you? In that situation, you either pin it down to a bug in your code
(and even developers experienced in writing multi-threaded programs hit
these), or a bug in the underlying library (which can hopefully be fixed,
but that applies to any bug you might hit in any library you use, and is
something you have to consider whenever you use a library written by
someone else), or an unfixable problem (e.g. due to problems in the Python
or C runtime) which requires a different approach.

I understand your concerns, but you are just a little further along the
line from people who say "If you use threads, you will have deadlock
problems. Don't use threads." I'm not knocking that POV - people need to
use what they're comfortable with, and to avoid things that make them
uncomfortable. I'm not pushing the async feature as a major advantage of
the library - it's still useful without that, IMO.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Sarge, a library wrapping the subprocess module, has been released.
On Feb 13, 3:57 am, Anh Hai Trinh wrote:

> I don't disagree with it. But the solution is really easy, just call 'sh'
> and pass it a string!
>
> >>> from extproc import sh
> >>> n = int(sh("ls /etc -1 | wc -l"))
>
> No parser needed written!
>
> Yes there is a danger of argument parsing and globs and all that. But
> people are aware of it. With string parsing, ambiguity is always there.
> Even when you have a BNF grammar, people easily make mistakes.

You're missing a few points:

* The parser is *already* written, so there's no point worrying about
  saving any effort.

* Your solution is to pass shell=True, which as you point out, can lead to
  shell injection problems. To say "people are aware of it" is glossing
  over it a bit - how come you don't say that when it comes to
  locking+forking? ;-)

* I'm aiming to offer cross-platform functionality across Windows and
  Posix. Your approach will require a lower common denominator, since the
  Windows shell (cmd.exe) is not as flexible as Bash. For example - no
  "echo foo; echo bar"; no "a && b" or "a || b" - these are not supported
  by cmd.exe.

* Your comment about people making mistakes applies just as much if someone
  passes a string with a Bash syntax error, to Bash, via your sh()
  function. After all, Bash contains a parser, too. For instance:

  >>> from extproc import sh
  >>> sh('ls >>> abc')
  /bin/sh: Syntax error: redirection unexpected
  ''

If you're saying there might be bugs in the parser, that's something else -
I'll address those as and when they turn up.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Sarge, a library wrapping the subprocess module, has been released.
On Feb 13, 7:08 am, Anh Hai Trinh wrote:

> > Objection! Does the defense REALLY expect this court to believe that
> > he can testify as to how MOST members of the Python community would or
> > would not favor bash over Python? And IF they do in fact prefer bash,
> > is this display of haughty arrogance nothing more than a hastily
> > stuffed straw-man presented to protect his own ego?
>
> Double objection! Relevance. The point is that the OP created another
> language that is neither Python nor Bash.

Triple objection! I think Rick's point was only that he didn't think you
were expressing the views of "most" people, which sort of came across in
your post.

To say I've created "another language" is misleading - it's just a subset
of Bash syntax, so you can do things like "echo foo; echo bar", use "&&",
"||" etc. (I used the Bash man page as my guide when designing the parser.)

As an experiment on Windows, in a virtualenv, with GnuWin32 installed on
the path:

(venv) C:\temp>python
ActivePython 2.6.6.17 (ActiveState Software Inc.) based on
Python 2.6.6 (r266:84292, Nov 24 2010, 09:16:51) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from extproc import sh
>>> sh('echo foo; echo bar')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\temp\venv\lib\site-packages\extproc.py", line 412, in sh
    f = Sh(cmd, fd=fd, e=e, cd=cd).capture(1).stdout
  File "C:\temp\venv\lib\site-packages\extproc.py", line 202, in capture
    p = subprocess.Popen(self.cmd, cwd=self.cd, env=self.env,
        stdin=self.fd[0], stdout=self.fd[1], stderr=self.fd[2])
  File "C:\Python26\Lib\subprocess.py", line 623, in __init__
    errread, errwrite)
  File "C:\Python26\Lib\subprocess.py", line 833, in _execute_child
    startupinfo)
WindowsError: [Error 3] The system cannot find the path specified
>>> from sarge import capture_stdout
>>> capture_stdout('echo foo; echo bar').stdout.text
u'foo\r\nbar\r\n'
>>>

That's all from a single interactive session. So as you can see, my use
cases are a little different to yours, which in turn makes a different
approach reasonable.

> My respectful opinion is that the OP's approach is fundamentally flawed.
> There are many platform-specific issues when forking and threading are
> fused. My benign intent was to warn others about unsolved problems and
> scratching-your-head situations.
>
> Obviously, the OP can always choose to continue his direction at his own
> discretion.

I think you were right to bring up the forking+threading issue, but I have
addressed the points you made in this thread - please feel free to respond
to the points I made about the Linux Programming Blog article. I've updated
the sarge docs to point to that article, and I've added a section on API
stability to highlight the fact that the library is in alpha status and
that API changes may be needed based on feedback.

I'm not being blasé about the issue - it's just that I don't want to be too
timid, either. Python does not proscribe using subprocess and threads
together, and the issues you mention could easily occur even without the
use of sarge. You might say that sarge makes it more likely that the issues
will surface - but it'll only do that if you pass "a & b & c & d" to sarge,
and not otherwise. The other use of threads by sarge - to read output
streams from child processes - is no different from the stdlib usage of
threads in subprocess.Popen.communicate().

Possibly Rick was objecting to the tone of your comments, but I generally
disregard any tone that seems confrontational when the benefit of the doubt
can be given - on the Internet, you can never take for granted, and have to
make allowances for, the language style of your interlocutor ... I think
you meant well when you responded, and I have taken your posts in that
spirit.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: re module: Nothing to repeat, but no sre_constants.error: nothing to repeat ?
On Feb 14, 4:38 am, Devin Jeanpierre wrote:

> Hey Pythonistas,
>
> Consider the regular expression "$*". Compilation fails with the
> exception, "sre_constants.error: nothing to repeat".
>
> Consider the regular expression "(?=$)*". As far as I know it is
> equivalent. It does not fail to compile.
>
> Why the inconsistency? What's going on here?
>
> -- Devin

$ is a meta character for regular expressions. Use '\$*', which does
compile.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Sarge, a library wrapping the subprocess module, has been released.
On Feb 17, 1:49 pm, Jean-Michel Pichavant wrote: > I can't use it though, I'm still using a vintage 2.5 version :-/ That's a shame. I chose 2.6 as a baseline for this package, because I need it to work on Python 2.x and 3.x with the same code base and minimal work, and that meant supporting Unicode literals via "from __future__ import unicode_literals". I'm stuck on 2.5 with other projects, so I share your pain :-( Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: HTTP logging
On Feb 20, 5:07 pm, Jean-Michel Pichavant wrote:

> However, I looked into the code and find out an (undocumented ?)
> attribute of the logging module : raiseException which value is set to 1
> by default (python 2.5.2, logging.__version__ < '0.5.0.2').
>
> When set to 1, handlerError print the traceback.
>
> This has been probably fixed in recent version of the module since the
> handleError doc does not reference raiseException anymore.

Actually, I think it's a mistake in the docs - when they were reorganised a
few months ago, the text referring to raiseExceptions was moved to the
tutorial:

    http://docs.python.org/howto/logging.html#exceptions-raised-during-logging

I will reinstate it in the reference API docs, but the answer to Jason's
problem is to either subclass HTTPHandler and override handleError to
suppress the error (a sketch follows below), or set logging.raiseExceptions
to True (in which case all logging exceptions will be swallowed - not
necessarily what he wants).

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
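A minimal sketch of that subclassing approach - the class name here is
arbitrary:

    import logging.handlers

    class QuietHTTPHandler(logging.handlers.HTTPHandler):
        def handleError(self, record):
            # Swallow errors raised while emitting this record, without
            # changing the global logging.raiseExceptions setting.
            pass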
Re: HTTP logging
On Feb 20, 5:47 pm, Vinay Sajip wrote:

> I will reinstate it in the reference API docs, but the answer to
> Jason's problem is to either subclass HTTPHandler and override
> handleError to suppress the error, or set logging.raiseExceptions to
> True (in which case all logging exceptions will be swallowed - not

Um, that should be *False*, not True.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Should I acquire lock for logging.Handler.flush()?
On Feb 21, 7:23 am, Fayaz Yusuf Khan wrote:

> I'm writing a custom logging Handler that sends emails through AWS
> Simple Email Service using the boto library.
> As there's a quota cap on how many (200) emails I can send within 24hrs,
> I think I need to buffer my log messages from the emit() calls (Or is
> that a bad idea?).
> And I was reading the Handler documentation and was confused if I should
> call the acquire() and release() methods from within a flush() call.
> --
> Fayaz Yusuf Khan
> Cloud developer and architect
> Dexetra SS, Bangalore, India
> fayaz.yusuf.khan_AT_gmail_DOT_com
> fayaz_AT_dexetra_DOT_com
> +91-9746-830-823

If you are using SMTPHandler, calling flush() won't do anything. You'll
probably need to subclass a handler to implement rate limiting; a sketch of
one buffering approach follows below. In the stdlib, only StreamHandler and
its subclasses actually implement flush(), which flushes I/O buffers to
disk.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
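For illustration, a buffering handler along these lines might look as
follows. This is a hypothetical sketch, not code from the thread:
send_email stands for whatever callable actually delivers a message (e.g.
something wrapping boto's SES support), and the capacity value is
arbitrary. It takes the handler's lock in flush(), which anticipates the
locking question discussed later in this thread.

    import logging.handlers

    class BufferingEmailHandler(logging.handlers.BufferingHandler):
        def __init__(self, capacity, send_email):
            logging.handlers.BufferingHandler.__init__(self, capacity)
            self.send_email = send_email  # callable taking a message body

        def flush(self):
            self.acquire()
            try:
                if self.buffer:
                    body = '\n'.join(self.format(record)
                                     for record in self.buffer)
                    self.send_email(body)
                    self.buffer = []
            finally:
                self.release()

    # Usage (send_one_email is assumed to exist):
    # handler = BufferingEmailHandler(50, send_one_email)
    # logging.getLogger().addHandler(handler)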
Re: Should I acquire lock for logging.Handler.flush()?
On Feb 22, 4:44 am, Fayaz Yusuf Khan wrote:

> Anyway, I read the source and found many interesting things that ought
> to be mentioned in the docs. Such as flush() should be called from
> close() whenever it's implemented. (FileHandler.close() is doing it)

This is entirely handler-dependent - there's no absolute rule that you
*have* to call flush() before close(). Some underlying streams will do
flushing when you close them.

> And how come close()/flush() isn't being called from inside a lock?

Why does it need to be? Are you aware of any race conditions or other
concurrency problems which will occur with the code as it is now?

> (Handler.close() calls the module level _acquireLock() and
> _releaseLock()s but nothing about the instance level acquire() or
> release()) Or is it being locked from somewhere else?

The module level acquisitions are because module-level handler lists are
changed in Handler.close(). If locking is required in a particular handler
class for close or flush, that can be implemented by the developer of that
handler class. AFAIK there is no such need for the handler classes in the
stdlib - if you have reason to believe otherwise, please give some examples
of potential problems, with example code if possible.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Should I acquire lock for logging.Handler.flush()?
On Feb 23, 5:55 pm, Fayaz Yusuf Khan wrote:

> Well, as emit() is always being called from within a lock, I assumed
> that flush() should/would also be handled similarly. Afterall, they are
> handling the same underlying output stream or in case of the
> BufferingHandler share the same buffer. Shouldn't the access be
> synchronized?

Yes, you might well be right - though no problems have been reported, it's
probably best to be safe.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Should I acquire lock for logging.Handler.flush()?
On Feb 23, 5:55 pm, Fayaz Yusuf Khan wrote:

> buffer. Shouldn't the access be synchronized?

I've now updated the repos for 2.7, 3.2 and default to add locking for
flush/close operations. Thanks for the suggestion.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
PEP 414 has been accepted
PEP 414 has been accepted: http://mail.python.org/pipermail/python-dev/2012-February/116995.html This means that from Python 3.3 onwards, you can specify u'xxx' for Unicode as well as just 'xxx'. The u'xxx' form is not valid syntax in Python 3.2, 3.1 or 3.0. The idea is to make porting code from 2.x to 3.x easier than before. Get porting! Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: Error with co_filename when loading modules from zip file
On Mar 5, 8:36 pm, Bob wrote:

> The logging package gets the filename and line number of the calling
> function by looking at two variables, the filename of the frame in the
> stack trace and the variable logging._srcfile. The comparison is done in
> logging/__init__.py:findCaller.

The _srcfile is computed in logging/__init__.py - can you see which of the
paths it takes when computing _srcfile?

> I've tried putting only the pyc files, only the py files and both in the
> zip file.

I think the filename info might be stored in the .pyc from when you ran it
outside the .zip. If you delete all .pyc files and only have .py in the
.zip, what happens?

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Error with co_filename when loading modules from zip file
On Mar 6, 2:40 am, Bob Rossi wrote:

> Darn it, this was reported in 2007
> http://bugs.python.org/issue1180193
> and it was mentioned the logging package was effected.
>
> Yikes.

I will think about this, but don't expect any quick resolution :-( I think
the right fix would be not in the logging package, but in the module
loading machinery (as mentioned on that issue).

I wouldn't worry about the performance aspect - once the logging package is
loaded, there's no performance impact. That's a tiny one-off hit which you
will probably not notice at all.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Help me with weird logging problem
On Mar 6, 4:09 pm, J wrote:

> Any idea what I'm doing wrong?

Levels can be set on loggers as well as handlers, and you're only setting
levels on the handlers. The default level on the root logger is WARNING. A
logger checks its level first, and only if the event passes that test will
it be passed to the handlers (which will also perform level tests). So, a
logger.setLevel(logging.DEBUG) should be all you need to add before logging
anything; see the snippet below.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
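A small illustration of the two levels involved - the logger name and the
choice of StreamHandler here are arbitrary:

    import logging

    logger = logging.getLogger('myapp')
    logger.setLevel(logging.DEBUG)    # without this, WARNING is inherited from root

    handler = logging.StreamHandler()
    handler.setLevel(logging.DEBUG)   # handler-level filtering happens second
    logger.addHandler(handler)

    logger.debug('this now reaches the handler')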
ANN: A new version (0.2.9) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is a minor bug-fix release. See the project website
( http://code.google.com/p/python-gnupg/ ) for more information. Summary:

- Better support for status messages from GnuPG.
- A random data file used in testing is no longer shipped with the source
  distribution, but created by the test suite if needed.

The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 3.1,
2.7 and Jython 2.5.1) and Ubuntu (CPython 2.4, 2.5, 2.6, 2.7, 3.0, 3.1,
3.2). On Windows, GnuPG 1.4.11 has been used for the tests.

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign
documents and verify digital signatures, manage (generate, list and delete)
encryption keys, using proven Public Key Infrastructure (PKI) encryption
technology based on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{ ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 { ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ - as
always, your feedback is most welcome (especially bug reports, patches and
suggestions for improvement).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

--
http://mail.python.org/mailman/listinfo/python-list
Re: Reading Live Output from a Subprocess
On Apr 6, 7:57 am, buns...@gmail.com wrote:

> I've heard that the Pexpect module works wonders, but the problem is
> that relies on pty which is available in Unix only. Additionally,
> because I want this script to be usable by others, any solution should
> be in the standard library, which means I'd have to copy the Pexpect
> code into my script to use it.
>
> Is there any such solution in the Python 3 Standard Library, and if not,
> how much of a thorn is this?
>
> "There should be one-- and preferably only one --obvious way to do it."
> Unfortunately, this is one case where the above is true for Perl but not
> Python. Such an example in Perl is
>
> open(PROG, "command |") or die "Couldn't start prog!";
> while (<PROG>) {
>     print "$_";
> }
>
> (Note that I do not know Perl and do not have any intentions to learn
> it; the above comes from the script I was previously copying and
> extending, but I imagine (due to its simplicity) that it's a common Perl
> idiom. Note however, that the above does fail if the program re-prints
> output to the same line, as many long-running C programs do. Preferably
> this would also be caught in a Python solution.)
>
> If there is a general consensus that this is a problem for lots of
> people, I might consider writing a PEP.
>
> Of course, my highest priority is solving the blasted problem, which is
> holding up my script at the moment. (I can work around this by
> redirecting the program to a tmp file and reading that, but that would
> be such a perilous and ugly kludge that I would like to avoid it if at
> all possible.)

Try the sarge package [1], with documentation at [2] and source code at
[3]. It's intended for your use case, works with both Python 2.x and 3.x,
and is tested on Linux, OS X and Windows; a short illustration follows
below. Disclosure: I'm the maintainer.

Regards,

Vinay Sajip

[1] http://pypi.python.org/pypi/sarge/0.1
[2] http://sarge.readthedocs.org/en/latest/
[3] https://bitbucket.org/vinay.sajip/sarge/

--
http://mail.python.org/mailman/listinfo/python-list
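For reference, the line-by-line reading pattern from the sarge
announcements earlier in this collection looks like this; the command
string is just the example used there, and would be replaced with the real
command:

    from sarge import capture_stdout

    p = capture_stdout('echo foo | cat; echo bar')
    for line in p.stdout:
        print(repr(line))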
Possible change to logging.handlers.SysLogHandler
There is a problem with the way logging.handlers.SysLogHandler works when
presented with Unicode messages. According to RFC 5424, Unicode is supposed
to be sent encoded as UTF-8 and preceded by a BOM. However, the current
handler implementation puts the BOM at the start of the formatted message,
and this is wrong in scenarios where you want to put some additional
structured data in front of the unstructured message part; the BOM is
supposed to go after the structured part (which, therefore, has to be
ASCII) and before the unstructured part. In that scenario, the handler's
current behaviour does not strictly conform to RFC 5424.

The issue is described in [1]. The BOM was originally added / its position
changed in response to [2] and [3].

It is not possible to achieve conformance with the current implementation
of the handler, unless you subclass the handler and override the whole
emit() method. This is not ideal. For 3.3, I will refactor the
implementation to expose a method which creates the byte string which is
sent over the wire to the syslog daemon. This method can then be overridden
for specific use cases where needed.

However, for 2.7 and 3.2, removing the BOM insertion would bring the
implementation into conformance to the RFC, though the entire message would
have to be regarded as just a set of octets. A Unicode message would still
be encoded using UTF-8, but the BOM would be left out.

I am thinking of removing the BOM insertion in 2.7 and 3.2 - although it is
a change in behaviour, the current behaviour does seem broken with regard
to RFC 5424 conformance. However, as some might disagree with that
assessment and view it as a backwards-incompatible behaviour change, I
thought I should post this to get some opinions about whether this change
is viewed as objectionable.

Regards,

Vinay Sajip

[1] http://bugs.python.org/issue14452
[2] http://bugs.python.org/issue7077
[3] http://bugs.python.org/issue8795

--
http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing & Logging
Thibaut gmail.com> writes:

> Ok, I understand what happenned. In fact, configuring the logging before
> forking works fine. Subprocess inherits the configuration, as I thought.
>
> The problem was that I didn't passed any handler to the QueueListener
> constructor. The when the listener recieved an message, it wasn't
> handled.
>
> I'm not sure how the logging module works, but what handlers should I
> pass the QueueListener constructor ? I mean, maybe that I would like
> some messages (depending of the logger) to be logged to a file, while
> some others message would just be printed to stdout.
>
> This doesn't seem to be doable with a QueueListener. Maybe I should
> implement my own system, and pass a little more informations with the
> record sent in the queue : the logger name for example.
>
> Then, in the main process I would do a logging.getLogger(loggername) and
> log the record using this logger (by the way it was configured).
>
> What do you think ?

You probably need different logging configurations in different processes.
In your multiprocessing application, nominate one of the processes as a
logging listener. It should initialize a QueueListener subclass which you
write. All other processes should just configure a QueueHandler, which uses
the same queue as the QueueListener. All the processes with QueueHandlers
just send their records to the queue. The process with the QueueListener
picks these up and handles them by calling the QueueListener's handle()
method. The default implementation of QueueListener.handle() is:

    def handle(self, record):
        record = self.prepare(record)
        for handler in self.handlers:
            handler.handle(record)

where self.handlers is just the handlers you passed to the QueueListener
constructor. However, if you want a very flexible configuration where
different loggers have different handlers, this is easy to arrange. Just
configure logging in the listener process however you want, and then, in
your QueueListener subclass, do something like this:

    class MyQueueListener(logging.handlers.QueueListener):
        def handle(self, record):
            record = self.prepare(record)
            logger = logging.getLogger(record.name)
            logger.handle(record)

This will pass the events to whatever handlers are configured for a
particular logger.

I will try to update the Cookbook in the logging docs with this approach,
and a working script. Background information is available here: [1][2]

Regards,

Vinay Sajip

[1] http://plumberjack.blogspot.co.uk/2010/09/using-logging-with-multiprocessing.html
[2] http://plumberjack.blogspot.co.uk/2010/09/improved-queuehandler-queuelistener.html

--
http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing & Logging
Thibaut gmail.com> writes:

> This is exactly what I wanted, it seems perfect. However I still have a
> question, from what I understood, I have to configure logging AFTER
> creating the process, to avoid children process to inherits the logging
> config.
>
> Unless there is a way to "clean" logging configuration in children
> processes, so they only have one handler : the QueueHandler.
>
> I looked at the logging code and it doesn't seems to have an easy way to
> do this. The problem of configuring the logging after the process
> creation is that... I can't log during process creation. But if it's too
> complicated, I will just do this.

You may be able to have a "clean" configuration: for example, dictConfig()
allows the configuration dictionary to specify whether existing loggers are
disabled. So the details depend on the details of your desired
configuration.

One more point: I suggested that you subclass QueueListener, but you don't
actually need to do this. For example, you can do something like:

    class DelegatingHandler(object):
        def handle(self, record):
            logger = logging.getLogger(record.name)
            logger.handle(record)

and then instantiate the QueueListener with an instance of
DelegatingHandler. QueueListener doesn't need actual logging handlers, just
something with a handle method which takes a record.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing & Logging
Thibaut gmail.com> writes:

> This is exactly what I wanted, it seems perfect. However I still have a
> question, from what I understood, I have to configure logging AFTER
> creating the process, to avoid children process to inherits the logging
> config.
>
> Unless there is a way to "clean" logging configuration in children
> processes, so they only have one handler : the QueueHandler.
>
> I looked at the logging code and it doesn't seems to have an easy way to
> do this. The problem of configuring the logging after the process
> creation is that... I can't log during process creation. But if it's too
> complicated, I will just do this.

I've updated the 3.2 / 3.3 logging cookbook with an example of what I mean.
There is a gist of the example script at

    https://gist.github.com/2331314/

and the cookbook example should show once the docs get built on
docs.python.org.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Possible change to logging.handlers.SysLogHandler
Vinay Sajip yahoo.co.uk> writes:

> I am thinking of removing the BOM insertion in 2.7 and 3.2 - although
> it is a change in behaviour, the current behaviour does seem broken
> with regard to RFC 5424 conformance. However, as some might disagree
> with that assessment and view it as a backwards-incompatible behaviour
> change, I thought I should post this to get some opinions about
> whether this change is viewed as objectionable.

As there have been no objections, the BOM insertion code has been removed
from SysLogHandler; these changes will appear in Python 2.7.4, Python
3.2.4, Python 3.3 and later versions.

If you do need a BOM inserted into the message sent to your syslog daemon,
a cookbook recipe will shortly appear telling you how to do this.

Regards,

Vinay Sajip

--
http://mail.python.org/mailman/listinfo/python-list
Re: Wish: Allow all log Handlers to accept the level argument
Fayaz Yusuf Khan gmail.com> writes: > > ***TRIVIAL ISSUE***, but this has been irking me for a while now. > The main logging.Handler class' __init__ accepts a level argument while none > of its children do. The poor minions seem to be stuck with the setLevel > method which considerably lengthens the code. > > In short: > Let's do this: > root.addHandler(FileHandler('debug.log', level=DEBUG) > Instead of this: > debug_file_handler = FileHandler('debug.log') > debug_file_handler.setLevel(DEBUG) > root.addHandler(debug_file_handler) Levels on handlers are generally not needed (though of course they are sometimes needed) - level filtering should be applied at the logger first, and at the handler only when necessary. I don't especially want to encourage the pattern you suggest, because it isn't needed much of the time. The code above won't do any more or less than if you hadn't bothered to set the level on the handler. Don't forget, more complex configurations are effected even more simply using dictConfig(). Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
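To illustrate when a handler level does earn its keep (filenames here are illustrative): it is the case where different handlers attached to the same logger should see different amounts of detail.

    import logging

    logger = logging.getLogger('app')
    logger.setLevel(logging.DEBUG)            # filter at the logger first

    file_handler = logging.FileHandler('debug.log')
    logger.addHandler(file_handler)           # gets everything the logger passes

    console = logging.StreamHandler()
    console.setLevel(logging.WARNING)         # console shows only the serious stuff
    logger.addHandler(console)

    logger.debug('written to debug.log only')
    logger.warning('written to debug.log and shown on the console')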
Re: Permission denied and lock issue with multiprocess logging
On Jun 12, 8:49 am, david dani wrote: > When i am running the implementation of multiprocess logging through > queue handler, i get this error. It is the same with sockethandler as > well as with pipe handler if multiprocesses are involved. I am not > getting any hint to solve this problem. Please help to solve the > problem. There is an old bug on AIX which might be relevant, see http://bugs.python.org/issue1234 It depends on how your Python was built - see the detailed comments about how Python should be configured before being built on AIX systems. N.B. This error has nothing to do with logging - it's related to semaphore behaviour in the presence of fork(), which of course happens in multiprocessing scenarios. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: Improper creating of logger instances or a Memory Leak?
foobar gmail.com> writes: > I've run across a memory leak in a long running process which I can't > determine if its my issue or if its the logger. As Chris Torek said, it's not a good idea to create a logger for each thread. A logger name represents a place in your application; typically, a module, or perhaps some part of a module. If you want to include information in the log to see what different threads are doing, do that using the information provided here: http://docs.python.org/howto/logging-cookbook.html#adding-contextual-information-to-your-logging-output Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
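A small sketch of the contextual-information approach: a single module-level logger shared by all threads, with per-call context supplied via a LoggerAdapter (the call ids and format are illustrative):

    import logging
    import threading

    logging.basicConfig(level=logging.DEBUG,
                        format='%(threadName)s [call %(call_id)s] %(message)s')
    base_logger = logging.getLogger(__name__)   # one logger, not one per thread

    def handle_call(call_id):
        # The adapter attaches call_id to every record it emits.
        log = logging.LoggerAdapter(base_logger, {'call_id': call_id})
        log.info('call started')
        log.info('call finished')

    for i in range(3):
        threading.Thread(target=handle_call, args=(i,), name='call-%d' % i).start()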
Re: Improper creating of logger instances or a Memory Leak?
foobar gmail.com> writes: > > I've run across a memory leak in a long running process which I can't > determine if its my issue or if its the logger. > BTW did you also ask this question on Stack Overflow? I've answered there, too. http://stackoverflow.com/questions/6388514/ Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: Improper creating of logger instances or a Memory Leak?
On Jun 20, 3:50 pm, foobar wrote: > Regarding adding a new logger for each thread - each thread represents > a telephone call in a data collection system. I need to be able to > cleanly provide call-logging for debugging to my programmers as well > as data logging and verification; having a single log file is somewhat > impractical. To use the logging filtering then I would have to be > dynamically adding to the filtering hierarchy continuously, no? > You could, for example, have a different *handler* for each thread. There are a number of possibilities according to exactly what you want to do, but there's certainly no need to create one *logger* per thread. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
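One way to get a separate output file per thread while still sharing loggers is a per-thread *handler* guarded by a filter, along these lines (a sketch; file naming and handler lifecycle management are up to the application):

    import logging
    import threading

    class ThreadFilter(logging.Filter):
        # Pass only records emitted by the named thread.
        def __init__(self, thread_name):
            logging.Filter.__init__(self)
            self.thread_name = thread_name

        def filter(self, record):
            return record.threadName == self.thread_name

    def attach_call_log(filename):
        # Call this at the start of a call's thread; keep the returned handler
        # so it can be removed and closed when the call ends.
        handler = logging.FileHandler(filename)
        handler.addFilter(ThreadFilter(threading.current_thread().name))
        logging.getLogger().addHandler(handler)
        return handler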
Re: String concatenation vs. string formatting
Andrew Berg gmail.com> writes: > Other than the case where a variable isn't a string (format() converts > variables to strings, automatically, right?) and when a variable is used > a bunch of times, concatenation is fine, but somehow, it seems wrong. > Sorry if this seems a bit silly, but I'm a novice when it comes to > design. Plus, there's not really supposed to be "more than one way to do > it" in Python. In a logging context at least, using the form like logger.debug("formatting message with %s", "arguments") rather than logger.debug("formatting message with %s" % "arguments") means that the formatting is deferred by logging until it is actually needed. If the message never gets output because of the logging configuration in use, then the formatting is never done. This optimisation won't matter in most cases, but it will in some scenarios. By the way, logging primarily uses %-formatting instead of the newer {}-formatting, because it pre-dates {}-formatting. In more recent versions of Python, all of Python's three formatting styles are supported - see http://plumberjack.blogspot.com/2010/10/supporting-alternative-formatting.html Also by the way - Python doesn't say there shouldn't be more than one way to do things - just that there should be one *obvious* way (from the Zen of Python). Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
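The deferral is easy to see with an object whose string conversion is expensive (or, here, just noisy); this sketch relies on the default WARNING threshold so that DEBUG messages are dropped:

    import logging

    logging.basicConfig()                       # root level defaults to WARNING
    logger = logging.getLogger(__name__)

    class Expensive(object):
        def __str__(self):
            print('...expensive formatting ran...')
            return 'result'

    logger.debug('value: %s' % Expensive())     # formats immediately, then discarded
    logger.debug('value: %s', Expensive())      # never formatted: DEBUG is not enabled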
Re: String concatenation vs. string formatting
Andrew Berg gmail.com> writes: > How would I do that with the newer formatting? I've tried: There are examples in the blog post I linked to earlier: http://plumberjack.blogspot.com/2010/10/supporting-alternative-formatting.html Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: String concatenation vs. string formatting
Andrew Berg gmail.com> writes: > On 2011.07.10 02:23 AM, Vinay Sajip wrote: > > There are examples in the blog post I linked to earlier: > It seems that would require logutils. I'm trying to keep dependencies to > a minimum in my project, but I'll take a look at logutils and see if > there's anything else I could use. Thanks. You don't need logutils, just the BraceMessage class - which is shown in the blog post (around 10 lines). Feel free to use it with copy and paste :-) Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
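For reference, the class in question is essentially the following (paraphrased; see the linked post for the canonical version):

    import logging

    class BraceMessage(object):
        def __init__(self, fmt, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            # str() is only called if the message is actually emitted.
            return self.fmt.format(*self.args, **self.kwargs)

    logging.basicConfig(level=logging.DEBUG)
    __ = BraceMessage                            # a conventional short alias
    logging.getLogger(__name__).debug(__('Message with {0} and {1}', 1, 2))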
Re: Help me understand this logging config
On Aug 30, 1:39 pm, Roy Smith wrote: > Oh, my, it turns out that django includes: > > # This is a copy of the Python logging.config.dictconfig module, > # reproduced with permission. It is provided here for backwards > # compatibility for Python versions prior to 2.7. > > Comparing the django copy to lib/logging/config.py from Python 2.7.2, > they're not identical. It's likely they grabbed something earlier in > the 2.7 series. I'll check 2.7.0 and 2.7.1 to see. They're not identical, but should be functionally equivalent. I'm not able to reproduce your results: I copied the "loggers" part of your config into a Django 1.3 project, and from a manage.py shell session: Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> import logging >>> logger = logging.getLogger('djfront.auth.facebook') >>> logger.debug('Debug') >>> logger.info('Info') 2011-09-02 10:51:13,445 INFO djfront.auth.facebook Info >>> ... as expected. Since it's Python 2.6, it should be using the dictconfig which ships with Django 1.3. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
ANN: A new version (0.2.8) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released. What Changed? = This is a minor enhancement and bug-fix release. See the project website ( http://code.google.com/p/python-gnupg/ ) for more information. Summary: Better support for status messages from GnuPG. The fixing of some Unicode encoding problems. Quoted some command-line arguments to gpg for increased safety. The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 3.1, 2.7 and Jython 2.5.1) and Ubuntu (CPython 2.4, 2.5, 2.6, 2.7, 3.0, 3.1, 3.2). On Windows, GnuPG 1.4.11 has been used for the tests. What Does It Do? The gnupg module allows Python programs to make use of the functionality provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this module, Python programs can encrypt and decrypt data, digitally sign documents and verify digital signatures, manage (generate, list and delete) encryption keys, using proven Public Key Infrastructure (PKI) encryption technology based on OpenPGP. This module is expected to be used with Python versions >= 2.4, as it makes use of the subprocess module which appeared in that version of Python. This module is a newer version derived from earlier work by Andrew Kuchling, Richard Jones and Steve Traugott. A test suite using unittest is included with the source distribution. Simple usage: >>> import gnupg >>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory') >>> gpg.list_keys() [{ ... 'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2', 'keyid': '197D5DAC68F1AAB2', 'length': '1024', 'type': 'pub', 'uids': ['', 'Gary Gross (A test user) ']}, { ... 'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A', 'keyid': '0C5FEFA7A921FC4A', 'length': '1024', ... 'uids': ['', 'Danny Davis (A test user) ']}] >>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A']) >>> str(encrypted) '-BEGIN PGP MESSAGE-\nVersion: GnuPG v1.4.9 (GNU/Linux)\n \nhQIOA/6NHMDTXUwcEAf ... -END PGP MESSAGE-\n' >>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret') >>> str(decrypted) 'Hello, world!' >>> signed = gpg.sign("Goodbye, world!", passphrase='secret') >>> verified = gpg.verify(str(signed)) >>> print "Verified" if verified else "Not verified" 'Verified' For more information, visit http://code.google.com/p/python-gnupg/ - as always, your feedback is most welcome (especially bug reports, patches and suggestions for improvement). Enjoy! Cheers Vinay Sajip Red Dove Consultants Ltd. -- http://mail.python.org/mailman/listinfo/python-list
Re: Making `logging.basicConfig` log to *both* `sys.stderr` and `sys.stdout`?
On Aug 30, 9:53 am, Michel Albert wrote: > Unfortunately this setup makes `logging.basicConfig` pretty useless. > However, I believe that this is something that more people could > benefit from. I also believe, that it just "makes sense" to send > warnings (and above) to `stderr`, the rest to `stdout`. > > So I was thinking: "Why does `logging.basicConfig` not behave that > way". Because what seems entirely natural and obvious to you might not seem so for someone else. The API in the stdlib tries to provide baseline functionality which others can build on. For example, if you always have a particular pattern which you use, you can always write a utility function to set things up exactly how you like, and others who want to set things up differently (for whatever reason) can do the same thing, without having to come into conflict (if that's not too strong a word) with views different from their own. > Naturally, I was thinking of writing a patch against the python > codebase and submit it as a suggestion. But before doing so, I would > like to hear your thoughts on this. Does it make sense to you too or > am I on the wrong track? Are there any downsides I am missing? Python 2.x is closed to feature changes, and Python 2.7 and Python 3.2 already support flexible configuration using dictConfig() - see http://docs.python.org/library/logging.config.html#logging.config.dictConfig Also, Python 3.3 will support passing a list of handlers to basicConfig(): see http://plumberjack.blogspot.com/2011/04/added-functionality-for-basicconfig-in.html which will allow you to do what you want quite easily. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
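A sketch of such a utility function for the split the poster describes (warnings and above to stderr, everything else to stdout), following the "write your own setup helper" suggestion above:

    import logging
    import sys

    class BelowWarning(logging.Filter):
        def filter(self, record):
            return record.levelno < logging.WARNING

    def split_config(level=logging.DEBUG):
        root = logging.getLogger()
        root.setLevel(level)
        fmt = logging.Formatter('%(levelname)s:%(name)s:%(message)s')
        out = logging.StreamHandler(sys.stdout)
        out.addFilter(BelowWarning())            # DEBUG and INFO only
        out.setFormatter(fmt)
        err = logging.StreamHandler(sys.stderr)
        err.setLevel(logging.WARNING)            # WARNING and above only
        err.setFormatter(fmt)
        root.addHandler(out)
        root.addHandler(err)

    split_config()
    logging.info('this goes to stdout')
    logging.error('this goes to stderr')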
Re: Changing class name causes process to 'hang'
On Mar 13, 11:48 pm, Tim Johnson wrote: > :) I like my logging module, I believe it may have 'anticipated' > the 2.7 module. And I can't count on my client's servers to host > 2.7 for a while. Perhaps you already know this, but you don't have to use Python 2.7 to use the standard logging package - that's been available since Python 2.3. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: logging and PyQt4
On Mar 14, 7:40 am, Adrian Casey wrote: > I have a multi-threaded PyQt4 application which has both a GUI and command- > line interface. I am using Qt4's threading because from what I have read, > it is more efficient than the native python threading module. Also, given > most users will probably use the GUI, it seemed to make sense. > > I want a flexible, threadsafe logging facility for my application so I was > thinking of using python's logging module. I need a logger that can log to > the GUI or a terminal depending on how the application is invoked. > > So, my question is -: > > Is it wise to use python's logging module in conjunction with Qt4 threads? > If not, what are my options apart from writing my own logging module? > > If it is OK, then I would like to know how to subclass the logging class so > that instead of sending output to stdout (as in StreamHandler), it emits Qt4 > signals instead. > > Any help would be appreciated. > > Thank you. > Adrian Casey. Logging certainly works well with PyQt4 in multi-threaded applications, though of course it's based on Python's threading API rather than Qt's. To direct logging output to a GUI, it would be appropriate to develop a Qt/PyQt-aware handler class (derived from logging.Handler) to do the Qt interfacing. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
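A possible shape for such a handler (a sketch only; it assumes PyQt4 is installed, and uses a separate QObject to carry the signal so that its emit() does not collide with logging.Handler.emit()):

    import logging
    from PyQt4 import QtCore

    class LogEmitter(QtCore.QObject):
        message = QtCore.pyqtSignal(str)

    class QtHandler(logging.Handler):
        def __init__(self):
            logging.Handler.__init__(self)
            self.emitter = LogEmitter()

        def emit(self, record):
            # Emit a Qt signal instead of writing to a stream; the GUI thread
            # connects this to a slot which appends to e.g. a QTextEdit.
            self.emitter.message.emit(self.format(record))

    # GUI mode: handler.emitter.message.connect(some_text_widget.append)
    # CLI mode: use an ordinary StreamHandler instead.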
Re: logging module usage
On Mar 30, 3:49 pm, mennis wrote: > I am working on a library for controlling various appliances in which > I use the logging module. I'd like some input on the basic structure > of what I've done. Specifically the logging aspect but more general > comments are welcome. I'm convinced I mis-understand something but > I'm not sure what. I've posted a version of the library at github. > > g...@github.com:mennis/otto.git http://github.com/mennis/otto > Ian From a quick glance over your code, looking only at the logging perspective: It's fine to use the Django-like approach to provide better compatibility with Python versions < 2.5. Your use of an extra level is also OK, but other applications and tools won't know about your extra level, which could limit interoperability. If your application is completely self contained, however, that should be fine. You don't really need to hold loggers as instance attributes in objects - they are effectively singletons. The convention is to use logger = getLogger(__name__), that way you don't have to change your code to rename loggers if you move modules around in a package. Not sure why you are doing logging.disable() in code, this means that you can't change the verbosity using configuration files. You don't appear to be using the logger.exception() method in exception handlers, thereby not putting tracebacks in the log. You don't add a NullHandler to the root logger of your top-level package, which you should. I see you're using Python 2.x, but you may nevertheless find it useful to look at the logging docs for Python 3.2. These have been split into reference docs and HOWTOs, rather than the somewhat monolithic approach taken in the 2.x docs. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
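To make a couple of those points concrete, a sketch of the suggested conventions (the 'otto' package name is the poster's; the appliance object is illustrative):

    import logging

    # In otto/__init__.py: keep the library silent unless the application
    # configures logging. NullHandler is in the stdlib from Python 2.7/3.1;
    # for older versions, a trivial Handler subclass with a no-op emit() works.
    logging.getLogger('otto').addHandler(logging.NullHandler())

    # In each module of the package:
    logger = logging.getLogger(__name__)         # named after the module

    def switch_on(appliance):
        try:
            appliance.switch_on()
        except Exception:
            # exception() logs at ERROR level and appends the traceback.
            logger.exception('could not switch on %r', appliance)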
ANN: A new version (0.2.7) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released. What Changed? = This is a minor enhancement and bug-fix release. See the project website ( http://code.google.com/p/python-gnupg/ ) for more information. Summary: Better support for status messages from GnuPG. The ability to use symmetric encryption. The ability to receive keys from keyservers. The ability to use specific keyring files instead of the default keyring files. Internally, the code to handle Unicode and bytes has been tidied up. The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 3.1, 2.7 and Jython 2.5.1) and Ubuntu (CPython 2.4, 2.5, 2.6, 2.7, 3.0, 3.1, 3.2). On Windows, GnuPG 1.4.11 has been used for the tests. What Does It Do? The gnupg module allows Python programs to make use of the functionality provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this module, Python programs can encrypt and decrypt data, digitally sign documents and verify digital signatures, manage (generate, list and delete) encryption keys, using proven Public Key Infrastructure (PKI) encryption technology based on OpenPGP. This module is expected to be used with Python versions >= 2.4, as it makes use of the subprocess module which appeared in that version of Python. This module is a newer version derived from earlier work by Andrew Kuchling, Richard Jones and Steve Traugott. A test suite using unittest is included with the source distribution. Simple usage: >>> import gnupg >>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory') >>> gpg.list_keys() [{ ... 'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2', 'keyid': '197D5DAC68F1AAB2', 'length': '1024', 'type': 'pub', 'uids': ['', 'Gary Gross (A test user) ']}, { ... 'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A', 'keyid': '0C5FEFA7A921FC4A', 'length': '1024', ... 'uids': ['', 'Danny Davis (A test user) ']}] >>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A']) >>> str(encrypted) '-BEGIN PGP MESSAGE-\nVersion: GnuPG v1.4.9 (GNU/Linux)\n \nhQIOA/6NHMDTXUwcEAf ... -END PGP MESSAGE-\n' >>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret') >>> str(decrypted) 'Hello, world!' >>> signed = gpg.sign("Goodbye, world!", passphrase='secret') >>> verified = gpg.verify(str(signed)) >>> print "Verified" if verified else "Not verified" 'Verified' For more information, visit http://code.google.com/p/python-gnupg/ - as always, your feedback is most welcome (especially bug reports, patches and suggestions for improvement). Enjoy! Cheers Vinay Sajip Red Dove Consultants Ltd. -- http://mail.python.org/mailman/listinfo/python-list
Re: Two similar logging programs but different outputs
On Apr 18, 10:11 pm, Disc Magnet wrote: > Could you please help me understand this difference? Programs and > log.conf file follow: The first program prints two messages because loggers pass events to handlers attached to themselves and their ancestors. Hence, logger1's message is printed by logger1's handler, and logger2's message is printed by logger1's handler because logger1 is an ancestor of logger2. In the second case, logger foo.bar exists when fileConfig() is called, but it is not named explicitly in the configuration. Hence, it is disabled (as documented). Hence only logger1's message is printed. NullHandler is a handler which does nothing - there is no point in adding it to a system which configures logging, and only any point in adding it to top-level loggers of libraries which may be used when logging is not configured (also documented). Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: Two similar logging programs but different outputs
On Apr 18, 10:11 pm, Disc Magnet wrote: > Could you please help me understand this difference? Programs and > log.conf file follow: The first program prints two messages because loggers pass events to handlers attached to themselves and their ancestors. Hence, logger1's message is printed by logger1's handler, and logger2's message is printed by logger1's handler because logger1 is an ancestor of logger2. In the second case, logger foo.bar exists when fileConfig() is called, but it is not named explicitly in the configuration. Hence, it is disabled (as documented). Hence only logger1's message is printed. NullHandler is a handler which does nothing - there is no point in adding it to a system which configures logging, and only any point in adding it to top-level loggers of libraries which may be used when logging is not configured by the using application (this is also documented). Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
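The first point can be seen with a few lines (handler and format are illustrative); both messages below are printed once each, by the handler attached to the ancestor:

    import logging

    logger1 = logging.getLogger('foo')
    logger2 = logging.getLogger('foo.bar')       # logger1 is an ancestor of logger2

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(name)s: %(message)s'))
    logger1.addHandler(handler)                  # only the ancestor has a handler

    logger1.warning("handled by logger1's own handler")
    logger2.warning("also handled by logger1's handler, via propagation")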
Re: Two similar logging programs but different outputs
On Apr 19, 6:35 am, Disc Magnet wrote: > I couldn't find this mentioned in the documentation at: > > http://docs.python.org/library/logging.config.html#configuration-file... > > Could you please tell me where this is documented? It's documented here: http://docs.python.org/library/logging.config.html#dictionary-schema-details (look for 'disable_existing_loggers'), but having looked at it, it *is* poorly documented and hard to find. I'll update the fileConfig section to describe the behaviour more clearly. > In the following code, foo.bar is not explicitly mentioned in the file > configuration. As per what you said, foo.bar should be disabled. Actually I wasn't clear enough in my earlier response. The behaviour is that all loggers are disabled other than those explicitly named in the configuration *and their descendants*. I'm glad you brought these points up, they do highlight an area where the documentation could be clearer. I'll get on it. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
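The dictConfig() spelling of the behaviour being discussed looks like this (a sketch; the logger names are illustrative):

    import logging
    import logging.config

    existing_other = logging.getLogger('other.module')   # exists before configuration,
                                                          # and is not named below
    existing_child = logging.getLogger('foo.bar')         # also exists beforehand

    logging.config.dictConfig({
        'version': 1,
        # With the default True, 'other.module' would be disabled by this call;
        # 'foo.bar' would survive either way because it descends from 'foo'.
        'disable_existing_loggers': False,
        'handlers': {
            'console': {'class': 'logging.StreamHandler'},
        },
        'root': {'handlers': ['console'], 'level': 'DEBUG'},
        'loggers': {
            'foo': {'level': 'DEBUG'},
        },
    })

    existing_child.warning('never disabled: foo.bar descends from the named logger foo')
    existing_other.warning('comes through only because disable_existing_loggers is False')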
Re: unpickling derived LogRecord in python 2.7 from python2.6
On Apr 27, 5:41 pm, Peter Otten <__pete...@web.de> wrote: > The problem is that as of Python 2.7 logging.LogRecord has become a new-style > class which is pickled/unpickled differently. I don't know if there is an > official way to do the conversion, but here's what I've hacked up. > The script can read pickles written with 2.6 in 2.7, but not the other way > round. > [code snipped] I don't know about "official", but another way of doing this is to pickle just the LogRecord's __dict__ and send that over the wire. The logging package contains a function makeLogRecord(d) where d is a dict. This is the approach used by the examples in the library documentation which pickle events for sending across a network: http://docs.python.org/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network The built-in SocketHandler pickles the LogRecord's __dict__ rather than the LogRecord itself, precisely because of the improved interoperability over pickling the instance directly. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
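A sketch of that approach (the record contents are illustrative; real handlers such as SocketHandler also take care of attributes like exc_info which don't pickle cleanly):

    import logging
    import pickle

    logging.basicConfig(format='%(name)s %(levelname)s %(message)s')

    # Sending side: pickle the record's __dict__, not the record itself.
    record = logging.LogRecord('demo', logging.INFO, __file__, 42,
                               'created in one interpreter', None, None)
    payload = pickle.dumps(record.__dict__)

    # Receiving side (possibly a different Python version):
    rebuilt = logging.makeLogRecord(pickle.loads(payload))
    logging.getLogger(rebuilt.name).handle(rebuilt)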
Re: Have you read the Python docs lately?
On Apr 28, 12:40 am, Ben Finney wrote: > This one in particular was sorely needed, especially its early if-then > discussion of whether to use ‘logging’ at all. For that "when to use logging" part, you can thank Nick Coghlan :-) Thanks are also due to all those who commented on early drafts, which were put together initially for the 3.2 release. If anyone has any other improvements to suggest, keep 'em coming! Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: A suggestion for an easy logger
On May 8, 1:00 am, mcilrain wrote: > Aside from the fact that it's very Javay, what's wrong with the logging module? It's not especially Java-like. Since you can log using just import logging logging.basicConfig(level=logging.DEBUG) logging.debug("This is %sic, not %s-like - that's FUD", 'Python', 'Java') it doesn't seem especially Java-like: no factories, Interfaces, builders, using plain functions etc. The second line is optional and needed only if you want to log DEBUG or INFO messages (as the default threshold is WARNING). Despite other logging libraries claiming to be more Pythonic, they have pretty much the same concepts as stdlib logging - because those concepts are tied to logging, not to Java. Call it correlation vs. causation, or convergent evolution, or what you will. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: A suggestion for an easy logger
On May 8, 7:15 am, TheSaint wrote: > OK, my analysis led me to the print() function, which would suffice for > initial my purposes. The logging HOWTO tells you when to use logging, warnings and print(): http://docs.python.org/howto/logging.html > Meanwhile I reading the tutorials, but I couldn't get how to make a > formatter to suppress or keep the LF(CR) at the end of the statement. For Python 3.2 and later, it's the terminator attribute of the StreamHandler. See: http://plumberjack.blogspot.com/2010/10/streamhandlers-newline-terminator-now.html Unfortunately, for earlier Python versions, you'd need to subclass and override StreamHandler.emit() to get equivalent functionality :-( Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
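For completeness, the pre-3.2 workaround would look roughly like this (a simplified sketch; the real StreamHandler.emit() also has encoding fallbacks worth preserving):

    import logging

    class NoNewlineStreamHandler(logging.StreamHandler):
        # For Python < 3.2, which has no 'terminator' attribute: write the
        # formatted message without appending a newline.
        def emit(self, record):
            try:
                self.stream.write(self.format(record))
                self.flush()
            except Exception:
                self.handleError(record)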
Re: A suggestion for an easy logger
On May 8, 12:21 pm, TheSaint wrote: > First I didn't espect to see much more than my message. I agree that I'm > very new to the module You could do logging.basicConfig(level=logging.DEBUG, format='%(message)s') to get just the message. > Second the will terminator appear only to real stdout, or am I doing > something incorrect? The terminator is an attribute on the StreamHandler instance, so works with whatever stream the handler is using. You can't use basicConfig() directly, if you want to configure the terminator - instead, use something like sh = logging.StreamHandler(sys.stdout) sh.terminator = '' logging.getLogger().addHandler(sh) but be sure to execute this code one time only, or you will get multiple identical messages for a single logging call. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: A suggestion for an easy logger
On May 9, 3:53 pm, TheSaint wrote: > Vinay Sajip wrote: > >logging.basicConfig(level=logging.DEBUG, format='%(message)s') > > logging.basicConfig(format='%(message)s', level=logging.DEBUG) > > I formulated in the reverse order of arguments, may that cause an > unpredicted result? No, you can pass keyword arguments in any order - that's what makes them keyword, as opposed to positional, arguments. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: python logging
On May 18, 11:10 pm, Ian Kelly wrote: > It seems to work without any configuration just as well as the root logger: > > >>> import logging > >>>logging.getLogger('foo').warning('test') > > WARNING:foo:test > > Or am I misunderstanding you? In general for Python 2.x, the code import logging logging.getLogger('foo').warning('test') will produce No handlers could be found for logger "foo" unless loggers have been configured, e.g. by calling logging.warning() - that call implicitly adds a console handler to the root logger, if no other handlers have been configured for the root logger. In Python 3.2 and later, if no handlers have been configured, messages at level WARNING and greater will be printed to sys.stderr using a "handler of last resort" - see http://docs.python.org/py3k/howto/logging.html#what-happens-if-no-configuration-is-provided Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: python logging
On May 18, 11:42 pm, Ian Kelly wrote: > I was wrong, it's more complicated than that. > > >>>logging.getLogger('log').warning('test') > > No handlers could be found for logger "log">>>logging.warning('test') > WARNING:root:test > >>>logging.getLogger('log').warning('test') > > WARNING:log:test > > Apparently, getLogger() is unconfigured by default, but if you just > use the root logger once, then they magically get configured. The difference is that you called the module-level convenience function - logging.warning('test') The module-level convenience functions call basicConfig(), which configures a console handler on the root logger if no handlers are present there. This is documented at http://docs.python.org/library/logging.html#logging.log (see para starting "PLEASE NOTE:") and http://docs.python.org/howto/logging.html#advanced-logging-tutorial (search for "If you call the functions") This is not a behaviour change - it's been like this since logging appeared in Python, see http://hg.python.org/cpython/annotate/f72b1f8684a2/Lib/logging/__init__.py#l1145 Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: Class decorators might also be super too
On May 29, 7:33 am, Michele Simionato wrote: > He is basically showing that using mixins for implementing logging is not such > a good idea, I don't think he was particularly advocating implementing logging this way, but rather just using logging for illustrative purposes. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: How to prevent logging warning?
Thomas Heller wrote: > I get the behaviour that I want when I add a 'NULL' handler in the > library, but is this really how logging is intended to be used? > The reason for the one-off message is that without it, a misconfiguration or a failure to configure any handlers is notified to a user (who is possibly not used to the logging package). I'm not sure which is more annoying - a one-off message which occurs when no handlers are configured and yet events are logged, or complete silence from logging when something is misconfigured, and not giving any feedback on what's wrong? (It's a rhetorical question - the answer is of course quite subjective). Certainly, I could change things so that e.g. the error is suppressed when logging.raiseExceptions is set to 0 (typically for production use). Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: How to prevent logging warning?
s/without/with/ -- http://mail.python.org/mailman/listinfo/python-list
Re: How to prevent logging warning?
Thomas Heller wrote: > I do *not* think 'no handler' is a misconfiguration. Is it possible to > differentiate between a misconfiguration and 'no configuration'? It's a fair point. The line was more blurred when the logging package was newly released into the wild ;-) But "no configuration" could be caused e.g. by an unreadable config file, which might also be categorised as a "misconfiguration". > That would be fine. But there are also other ways - you could, for > example, print the warning only when __debug__ is False. And you could > use the warnings module instead of blindly printing to stderr, this way > it could also be filtered out. Compatibility with 1.5.2 precludes use of the warnings module. If using raiseExceptions meets your requirement, I'll use that. I'm not sure it's a good idea for the behaviour to change between running with and without -O. > BTW: Since I have your attention now, is the graphical utility to > configure the logging.conf file still available somewhere, and > compatible with the current logging package (with 'current' I mean > the one included with Python 2.3.5)? You can always get my attention via email :-) The graphical utility (logconf.py) is available from the download at http://www.red-dove.com/python_logging.html#download AFAIK it should work OK with 2.3.5, though I haven't tested it recently as there wasn't much interest in it. In fact you're the first person to ask! It generates a few extra entries in the config file which are used by the utility only, which are seemingly regarded as "cruft" by most people. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: multiple logger
Just replace logging.basicConfig(level=logging.DEBUG) with logging.getLogger().setLevel(logging.DEBUG) and you will no longer get messages written to the console. The basicConfig() method is meant for really basic use of logging - it allows one call to set level, and to add either a console handler or a simple (non-rotating) file handler to the root logger. See the documentation for more information. Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: Controlling exception handling of logging module
sj wrote: > Thanks, but my point wasn't fixing the bug. I'd like the logging > module to raise an exception on this occasion (rather than print and > consume the error) so that I can find the bug easily. If those two > lines were part of 10,000-line code, I'd have to check all logging > statements one-by-one. You'll need to subclass your handler and redefine handleError(), which is called when an exception is raised during a handler's emit() operation. See http://www.red-dove.com/logging/public/logging.Handler-class.html#handleError -- http://mail.python.org/mailman/listinfo/python-list
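For example, a handler which fails loudly instead of printing and carrying on might look like this (a sketch intended for development use):

    import logging

    class RaisingStreamHandler(logging.StreamHandler):
        def handleError(self, record):
            # handleError() is called from inside the except block in emit(),
            # so a bare raise re-raises the original formatting/IO error.
            raise

    logging.getLogger().addHandler(RaisingStreamHandler())
    logging.getLogger().warning('too many %s arguments', 1, 2)   # now raises a TypeError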
Re: How to prevent logging warning?
I have now checked a change into CVS whereby the one-off error message is not printed unless raiseExceptions is 1. The default behaviour is thus unchanged, but if you set raiseExceptions to 0 for production use and then don't configure any handlers, then the message is not printed. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: improvements for the logging package
[EMAIL PROTECTED] wrote: > Perhaps so, but the logging module seems like such an unpythonic beast to > me. How about cleaning it up (*) before we add more to it? Stuff like > colorizing seems like it belongs in its own module (presuming a reasonably > general markup scheme can be agreed upon) so it can be used outside the > logging package. How is it unpythonic, exactly? I agree that colorizing, etc. is probably best located in its own module. > (*) Stuff that seems very odd to me: > > - It's a package, but contrary to any other package I've ever seen, most > of its functionality is implemented in __init__.py. __init__.py is > roughly four times larger than the next largest (bsddb, which is a > beast because BerkDB has gotten so big over the years and the > module/package has strived to remain backwards-compatible). I agree that __init__.py is rather large, but I wasn't aware of any guidelines restricting its size. Having a smaller __init__.py and e.g. putting the bulk of the code in a subpackage such as logging.core was considered; I didn't see the point of doing that, though! And the module certainly received a reasonable amount of peer review on python-dev before going into the standard library. > - It's still too hard to use. The obvious 'hello world' example > > import logging > logging.info('hello world') > > ought to just work (implicitly add a stream handler connected to > stderr to the root logger). As Trent pointed out in another post, one extra line with a basicConfig() call would provide the behaviour that you want. This is surely not too much to have to add. The default level of WARNING was deliberately chosen to avoid excessive verbosity in the general case. > - Its functionality is partitioned in sometimes odd ways. For example, > it has a handlers module, but what I presume would be the most > commonly used handler (StreamHandler) is not defined there. It's in > (you have three guesses and the first two don't count) __init__.py > instead of in logging.handlers. Consequently, browsing in the obvious > way fails to find the StreamHandler class. It's partitioned that way so that the most commonly used handlers are in the core package, and the less commonly used ones are in the handlers package. This seems reasonable to me - you don't incur the footprint of the less common handlers just to do console and file based logging. > - It doesn't use PEP 8 style as far as naming is concerned, instead > doing some sort of Java or C++ or Perl camelCase thing. Eschewing PEP > 8 is fine for other stuff, but code in the Python core (especially new > code like the logging module) should strive to adhere to PEP 8, since > many people will use the core code as a pattern for their own code. I would not have been too unhappy to change the naming to unix_like rather than CamelCase. Nobody on python-dev asked for it to be done. The code was mostly written before the idea of putting it into Python came up - Trent had independently written PEP-282 and I had done a fair amount of work on the module before getting the idea that, by adhering to PEP 282, it could be put forward as an addition to the Python standard library. Regards, Vinay -- http://mail.python.org/mailman/listinfo/python-list
Re: improvements for the logging package
[EMAIL PROTECTED] wrote: > >> - It's a package, but contrary to any other package I've ever seen, > >> most of its functionality is implemented in __init__.py. > > Trent> I'm not defending the implementation, but does this cause any > Trent> particular problems? > > No, it just seems symptomatic of some potential organizational problems. Could you please elaborate on this point further? What sort of problems? > Maybe there's a bug then (or maybe the docs still need work). When I > executed (all of these examples were typed at an interactive prompt): > > import logging > logging.info('hello world') > > I get no output. Looking at the doc for the basicConfig() function, I see: > > The functions debug(), info(), warning(), error() and critical() will > call basicConfig() automatically if no handlers are defined for the root > logger. > > If I read that right, my "hello world" example ought to work. I tried: > > import logging > logging.getLogger("main") > logging.info("hello world") > > and > > import logging > logging.basicConfig() > logging.info("hello world") > > and > > import logging > logging.basicConfig() > log = logging.getLogger("main") > log.info("hello world") > > Shouldn't one of these have emitted a "hello world" to stderr? (Maybe not. > Maybe I need to explicitly add handlers to non-root loggers.) > > Trent> Having lazy configuration like this means that it can be a subtle > Trent> thing for top-level application code to setup the proper logging > Trent> configuration. > > Again, based on my reading of the basicConfig doc, it seems like the logging > package is supposed to already do that. > > Trent> I think the usability of the logging module could be much > Trent> improved with a nicer introduction to it (i.e. docs). It's not > Trent> really a "hello world" type of tool. Its usefulness only really > Trent> shows in larger use cases. > > I agree about the docs. Whatever the "hello world" example is (I clearly > haven't figured it out yet), it ought to be right at the top of the docs. OK, it's not right at the top of the docs, but the example at http://docs.python.org/lib/minimal-example.html has been there for a while, and if you think it can be made clearer, please suggest how. > If logging isn't trivial to use, then many simple apps won't use logging. I would contend that it *is* pretty trivial to use for simple use cases. > Consequently, when they grow, logging has to be retrofitted. > > It was probably the log4j roots that provided the non-PEP 8 naming. I > suspect the naming could be improved while providing backward compatibility > aliases and deprecating those names. Not directly - but I work all the time with Python, Java, C, C++, C#, JavaScript environments, among others. In some environments, such as Java, CamelCase is more or less mandated. So I tend to use camelCaseMethodNames because then I don't have a cognitive disconnection every time I switch languages; I don't have the freedom to use lower_case_with_underscores everywhere. I certainly didn't copy much beyond the ideas from log4j - if you compare the log4j implementation with stdlib logging, you will see how Pythonic it is in comparison (if you leave aside superficial things like the use of camelCaseMethodNames). In any event, the commonest use of logging does not require you to use much camelCasing - apart from basicConfig() which is only called once. Regards, Vinay -- http://mail.python.org/mailman/listinfo/python-list
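For anyone reading along, the missing piece in the quoted examples is the default WARNING threshold; the "hello world" appears once a level is passed to basicConfig():

    import logging
    logging.basicConfig(level=logging.INFO)
    logging.info('hello world')      # emits "INFO:root:hello world" to stderr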
Re: improvements for the logging package
Trent Mick wrote: > Yah. It was added before Guido more clearly stated that he thought > modules should have a successful life outside the core before being > accepted in the stdlib. Perhaps so, but Guido was also quite keen to get PEP-282 implemented for inclusion in 2.3, and pronounced on the code that I put forward for inclusion. The original code was all in one module - I partitioned it into a package with subpackages as part of the review process conducted on python-dev, in which Guido and several others took an active part. I'm not sure that there really is a problem here, other than naming convention. Since it's not practical to turn the clock back, my vote would be to just live with it. I'd rather focus my energies on functional improvements such as better configuration, etc. Not that I get as much time as I'd like to work on improvements to logging - but we've probably all been in an analogous position :-( Regards, Vinay -- http://mail.python.org/mailman/listinfo/python-list
Re: improvements for the logging package
[EMAIL PROTECTED] wrote: > Since the logging package currently uses mixedCase it would appear it > shouldn't revert to lower_case. I'm thinking it should have probably used > lower_case from the start though. I see no real reason to have maintained > compatibility with log4j. Similarly, I think PyUnit (aka unittest) should > probably have used lower_case method/function names. After all, someone > went to the trouble of PEP-8-ing the module name when PyUnit got sucked into > the core. Why not the internals as well? Well, it seems a little too late now, for unittest, threading, logging and probably a few more. > I realize I'm playing the devil's advocate here. If a module that's been > stable outside the core for awhile gets sucked into Python's inner orbit, > gratuitous breakage of the existing users' code should be frowned upon, > otherwise people will be hesitant to be early adopters. There's also the > matter of synchronizing multiple versions of the module (outside and inside > the core). Still, a dual naming scheme with the non-PEP-8 names deprecated > should be possible. The breakage in my own usage of the module, and that of some existing users of the logging module in its pre-stdlib days, seemed to me to be good enough reason to leave the naming alone. Certainly, I was aware that the stdlib at that time contained both naming styles. Certainly the package did not have a long and stable life before coming into stdlib, but neither was it written from scratch for inclusion in the core. What would you suggest for threading, unittest etc. in terms of binding more unix_like_names and deprecating existing ones? It seems a lot of work for not very much benefit, beyond consistency for its own sake. > In the case of the logging module I'm not sure that applies. If I remember > correctly, it was more-or-less written for inclusion in the core. In that > case it should probably have adhered to PEP 8 from the start. Maybe going > forward we should be more adamant about that when an external module becomes > a candidate for inclusion in the core. Not quite - as I said earlier, it was already pretty much written when PEP-282 came along, and Trent very kindly let me piggyback onto it. Of course, I changed a few things to fit in with PEP-282, and Trent let me become the co-author. Regards, Vinay -- http://mail.python.org/mailman/listinfo/python-list
Re: improvements for the logging package
Thomas Heller wrote: > Yes, it seems so. Although I would have expected the documentation to > inform me about incompatible changes in the api. It does, in the "in-development" version of the documentation. Sorry it was not in the 2.4 releases :-( http://www.python.org/dev/doc/devel/lib/minimal-example.html Regards, Vinay -- http://mail.python.org/mailman/listinfo/python-list
Re: logging into one file problem
Maksim Kasimov wrote: [Example snipped] Will the following do what you want? Don't add handlers in each module. Just add a handler to the root logger in the main script. Thus: module1.py: import logging logger = logging.getLogger('module1') #now use the logger in your code module2.py: import logging logger = logging.getLogger('module2') #now use the logger in your code script.py: import logging logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', filename='/tmp/script.log', filemode='w') # this only works with 2.4+ - for earlier versions, need to do as in your original post Then, the output from loggers 'module1' and 'module2' will end up in '/tmp/script.log' automatically. Regards, Vinay -- http://mail.python.org/mailman/listinfo/python-list
Re: NTEventLogHandler not logging `info'?
Jaime Wyant wrote: > This code doesn't seem to do what I think it should do: > > # python 2.3.2 > # not sure of my win32 extensions version > > import logging > from logging.handlers import NTEventLogHandler > logger = logging.getLogger("testlogger") > handler = NTEventLogHandler("testlogger") > logger.addHandler(handler) > logger.info("This is a test") > > > I expected to see an `information' message in my `Application' event > log. Any ideas? > By default, the logger's level is WARNING, because you haven't explicitly set a level and the level inherited from the parent logger is WARNING (this is the default value for the root logger level). So if you add a line before the logger.info() call: logger.setLevel(logging.INFO) # or you can use logging.DEBUG Then you should see an entry appear in the NT Event log. -- http://mail.python.org/mailman/listinfo/python-list
Re: NTEventLogHandler not logging `info'?
Jaime Wyant wrote: > I must be missing something. This is what I read from the documentation: > > When a logger is created, the level is set to NOTSET (which causes all > messages to be processed in the root logger, or delegation to the > parent in non-root loggers). > The documentation could be clearer, I agree. I will add the following clarifying sentence to the docs: The term "delegation to the parent" means that if a logger has a level of NOTSET, its ancestor loggers are examined until the root is reached, or an ancestor with a level other than NOTSET is found. In the latter case, that level is treated as the effective level of the logger where the ancestor search started, and is used to determine how a logging event is handled. If the root is reached, and it has a level of NOTSET, then all messages will be processed. Otherwise, the root's level will be used as the effective level. Please post a response on the list if you think the above is still not clear enough. -- http://mail.python.org/mailman/listinfo/python-list
Re: unicode encoding usablilty problem
> This will help in your code, but there is a big pile of modules in stdlib > that are not unicode-friendly. From my daily practice come shlex > (tokenizer works only with encoded strings) and logging (you can't > specify encoding for FileHandler). You can, of course, pass in a stream opened using codecs.open to StreamHandler. Not quite as friendly, I'll grant you. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: [OT] Re: SysLogHandler is drivin me nuts PEBCAC
Jan Dries <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>... > Slightly OT, but regarding the title, shouldn't it be PEBKAC, since it's > keyboard and not ceyboard? > PEBCAC: Problem Exists Between Chair And Computer -- http://mail.python.org/mailman/listinfo/python-list
Re: RotatingFileHandler
Robert Brewer wrote: Kamus of Kadizhar wrote: I'm having a problem with logging. I have an older app that used the RotatingFileHandler before it became part of the main distribution (I guess in 2.3). [snip] The offending snippet of code is: logFile = logging.handlers.RotatingFileHandler('/var/log/user/movies2.lo g','a',2000,4) logFile.emit(movieName) Making a quick run-through of the logging module, it looks like you need to have a Formatter object added to your Handler: filename = '/var/log/user/movies2.log' logFile = logging.handlers.RotatingFileHandler(filename,'a',2000,4) formatter = logging.Formatter() logFile.setFormatter(formatter) ...then you can call emit. Of course, you should not normally be calling emit() from user code. The correct approach is to log events to loggers, and not emit them to handlers directly. Best regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
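Putting that together for the original snippet, the usual shape is to attach the handler to a logger and log through the logger (paths and names here are illustrative):

    import logging
    import logging.handlers

    handler = logging.handlers.RotatingFileHandler('movies2.log', 'a', 2000, 4)
    handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))

    logger = logging.getLogger('movies')
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info('movie played: %s', 'Some Movie')   # not handler.emit(...)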
Re: File locking and logging
Kamus of Kadizhar <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>... > Thanks to Robert Brewer, I got enough insight into logging to make it work > > Now I have another issue: file locking. Sorry if this is a very basic > question, but I can't find a handy reference anywhere that mentions this. > > When a logger opens a log file for append, is it automatically locked so > other processes cannot write to it? And what happens if two or more > processes attempt to log an event at the same time? > > Here's my situation. I have two or three workstations that will log an > event (the playing of a movie). The log file is NFS mounted and all > workstations will use the same log file. How is file locking implemented? > Or is it? > No file locking is attempted by current logging handlers with respect to other processes - an ordinary open() call is used. Within a given Python process, concurrency support is is provided through threading locks. If you need bullet-proof operation in the scenario where multiple workstations are logging to the same file, you can do this through having all workstations log via a SocketHandler to a designated node, where you run a server process which locally logs to file events received across the network. There is a working example of this in the Python 2.4 docs. Best regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
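The workstation side of that arrangement is only a few lines (a sketch; the host name is illustrative, and the port is logging's default TCP logging port):

    import logging
    import logging.handlers

    sock = logging.handlers.SocketHandler('loghost.example.com',
                                          logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    logging.getLogger().addHandler(sock)
    logging.getLogger('player').warning('movie started on this workstation')
    # A receiving server on the designated node unpickles the records and appends
    # them to a single local file, so no NFS file locking is needed at all.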
Re: logging from severl classes
It works for me: #file3.py import file1 import file2 a = file1.A() b = file2.B() b.otherfunction() gives 2004-12-28 00:18:34,805 DEBUG file2 6 creating class B 2004-12-28 00:18:34,805 DEBUG file2 9 in otherfunction -- http://mail.python.org/mailman/listinfo/python-list
Re: RotatingFileHandler and logging config file
Rob Cranfill wrote: NID (No, It Doesn't) ;-) but thanks anyway. To reiterate, the question is how to make RotatingFileHandler do a doRotate() on startup from a *config file*. No mention of that in what you point to. I don't think that RotatingFileHandler *should* be configurable to do a doRollover() on startup. I would follow up the suggestion of using your own derived class. The config mechanism will allow you to instantiate custom handlers - you only have to take into account (as an earlier poster has indicated) how the evaluation of the handler class (and its constructor arguments) is performed. BTW - constructor is not just for Java, but C++ too (not to mention C#). Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: I'm just an idiot when it comes logging
verizon.net> writes: > > I'm trying to be a good boy and use the logging module but it's > behaving rather counter-intuitively. I've posted a simplified test > script below. > > To summarize, I've set the basic config to only log root level messages > with >= ERROR level. I've created another named logger that logs on > info level. I've set it up so it just shows the message text without > "INFO:: logger:" boilerplate. > > The way I see things, when I call otherLogger.info, it should propagate > the message to the root logger, but the root logger should discard it > since it is at ERROR level. > > Could someone explain what I'm not getting? > > -Grant The way it works is: when you log to a logger, the event level is checked against the logger. If the event should be logged (event level >= logger level) then the event is passed to the handlers configured for that logger, and (while the logger's propagate attribute is true, which is the default) to handlers configured for loggers higher up the hierarchy. In your case, this includes the root logger's handlers. Note that if you don't set a level on a logger, then the hierarchy is searched until a level is found. That becomes the effective level for the logger. Handlers normally process all events passed to them, but you can set a level on a handler to get it to drop events below a certain threshold. Since you haven't done this, you will see an INFO message appear even though the root logger's level is set to ERROR. (This would only affect logging calls to the root logger). Rule of thumb: Set levels on handlers only when you need them, not as common practice. If you don't want to see info messages from otherLogger, set its level to > INFO. Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: logging problems
Simon Dahlbacka gmail.com> writes: This is a known problem, and a patch was put into CVS. I would suggest that you either check out the version from CVS, or move the "import traceback" to the top of the module. The problem is caused by a threading deadlock which occurs when an importer tries to log a message. Regards, Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
Re: __del__ and logging
flupke nonexistingdomain.com> writes: > > Hi, > > i have a class and a class attribute log which is a logger object. In > the __del__() function i want to log a message but it fails even if i > use self.__class__.log. > > The error i get is this: > Traceback (most recent call last): >File "C:\Python24\lib\logging\__init__.py", line 712, in emit > self.stream.write(fs % msg) > ValueError: I/O operation on closed file > > So is there no way to use the logger object in a __del__ > I wanted to use the message to clearly indicate in the logger file that > the instance had closed ok. > It all depends. If your __del__ is being called via atexit() for application cleanup, for example, logging may not be available to you because it has been cleaned up beforehand. The logging module registers an atexit() handler to flush and close handlers before script termination. Vinay Sajip -- http://mail.python.org/mailman/listinfo/python-list
ANN: Version 0.1.2 of sarge (a subprocess wrapper library) has been released.
Version 0.1.2 of Sarge, a cross-platform library which wraps the subprocess module in the standard library, has been released. What changed? - - Fixed issue #12: Prevented a hang which occurred when a redirection failed. - Fixed issue #11: Added "+" to the characters allowed in parameters. - Fixed issue #10: Removed a spurious debugger breakpoint. - Fixed issue #9: Relative pathnames in redirections are now relative to the current working directory for the redirected process. - Added the ability to pass objects with "fileno()" methods as values to the "input" argument of "run()", and a "Feeder" class which facilitates passing data to child processes dynamically over time (rather than just an initial string, byte-string or file). - Added functionality under Windows to use PATH, PATHEXT and the registry to find appropriate commands. This can e.g. convert a command 'foo bar', if 'foo.py' is a Python script in the c:\Tools directory which is on the path, to the equivalent 'c:\Python26\Python.exe c:\Tools\foo.py bar'. This is done internally when a command is parsed, before it is passed to subprocess. - Fixed issue #7: Corrected handling of whitespace and redirections. - Fixed issue #8: Added a missing import. - Added Travis integration. - Added encoding parameter to the "Capture" initializer. - Fixed issue #6: addressed bugs in Capture logic so that iterating over captures is closer to subprocess behaviour. - Tests added to cover added functionality and reported issues. - Numerous documentation updates. What does Sarge do? --- Sarge tries to make interfacing with external programs from your Python applications easier than just using subprocess alone. Sarge offers the following features: * A simple way to run command lines which allows a rich subset of Bash- style shell command syntax, but parsed and run by sarge so that you can run on Windows without cygwin (subject to having those commands available): >>> from sarge import capture_stdout >>> p = capture_stdout('echo foo | cat; echo bar') >>> for line in p.stdout: print(repr(line)) ... 'foo\n' 'bar\n' * The ability to format shell commands with placeholders, such that variables are quoted to prevent shell injection attacks. * The ability to capture output streams without requiring you to program your own threads. You just use a Capture object and then you can read from it as and when you want. * The ability to look for patterns in captured output and to interact accordingly with the child process. Advantages over subprocess --- Sarge offers the following benefits compared to using subprocess: * The API is very simple. * It's easier to use command pipelines - using subprocess out of the box often leads to deadlocks because pipe buffers get filled up. * It would be nice to use Bash-style pipe syntax on Windows, but Windows shells don't support some of the syntax which is useful, like &&, ||, |& and so on. Sarge gives you that functionality on Windows, without cygwin. * Sometimes, subprocess.Popen.communicate() is not flexible enough for one's needs - for example, when one needs to process output a line at a time without buffering the entire output in memory. * It's desirable to avoid shell injection problems by having the ability to quote command arguments safely. * subprocess allows you to let stderr be the same as stdout, but not the other way around - and sometimes, you need to do that. 
Python version and platform compatibility
-----------------------------------------

Sarge is intended to be used on any Python version >= 2.6 and is tested on
Python versions 2.6, 2.7, 3.1, 3.2 and 3.3 on Linux, Windows, and Mac OS X
(not all versions are tested on all platforms, but sarge is expected to work
correctly on all these versions on all these platforms).

Finding out more
----------------

You can read the documentation at

http://sarge.readthedocs.org/

There's a lot more information, with examples, than I can put into this post.

You can install Sarge using "pip install sarge" to try it out. The project is
hosted on BitBucket at

https://bitbucket.org/vinay.sajip/sarge/

And you can leave feedback on the issue tracker there.

I hope you find Sarge useful!

Regards,

Vinay Sajip

--
https://mail.python.org/mailman/listinfo/python-list
ANN: A new version (0.3.6) of python-gnupg has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is an enhancement and bug-fix release, but the bug-fixes include some
security improvements, so all users are encouraged to upgrade. See the project
website ( http://code.google.com/p/python-gnupg/ ) for more information.

Summary:

Enabled fast random tests on gpg as well as gpg2.
Avoided deleting temporary file to preserve its permissions.
Avoided writing passphrase to log.
Added export-minimal and armor options when exporting keys.
Added verify_data() method to allow verification of signatures in memory
(see the sketch at the end of this announcement).
Regularised end-of-line characters in the source code.
Rectified problems with earlier fix for shell injection.

The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 2.7,
3.1 and Jython 2.5.1) and Ubuntu (CPython 2.4, 2.5, 2.6, 2.7, 3.0, 3.1, 3.2).
On Windows, GnuPG 1.4.11 has been used for the tests.

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign documents
and verify digital signatures, manage (generate, list and delete) encryption
keys, using proven Public Key Infrastructure (PKI) encryption technology based
on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{
  ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 {
  ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ - as
always, your feedback is most welcome (especially bug reports, patches and
suggestions for improvement).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

--
https://mail.python.org/mailman/listinfo/python-list
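A brief sketch of the verify_data() addition mentioned in the summary above.
The file names are hypothetical; the idea is that the signature is a detached
signature file on disk, while the signed data is already held in memory:

import gnupg

gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')

# Read the signed data into memory (it might equally have arrived over the
# network rather than from a file).
with open('message.bin', 'rb') as f:
    data = f.read()

# Verify the in-memory data against a detached signature file.
verified = gpg.verify_data('message.bin.asc', data)
print('Verified' if verified else 'Not verified')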
ANN: distlib 0.2.2 released on PyPI
I've just released version 0.2.2 of distlib on PyPI [1]. For newcomers,
distlib is a library of packaging functionality which is intended to be usable
as the basis for third-party packaging tools.

The main changes in this release are as follows:

* Fixed issue #81: Added support for detecting distributions installed by
  wheel versions >= 0.23 (which use metadata.json rather than pydist.json).
* Updated default PyPI URL to https://pypi.python.org/pypi
* Updated to use different formatting for description field for V1.1 metadata.
* Corrected “classifier” to “classifiers” in the mapping for V1.0 metadata.
* Improved support for Jython when quoting executables in output scripts (see
  the ScriptMaker sketch at the end of this announcement).
* Fixed issue #77: Made the internal URL used for extended metadata fetches
  configurable via a module attribute.
* Fixed issue #78: Improved entry point parsing to handle leading spaces in
  ini-format files.

A more detailed change log is available at [2].

Please try it out, and if you find any problems or have any suggestions for
improvements, please give some feedback using the issue tracker! [3]

Regards,

Vinay Sajip

[1] https://pypi.python.org/pypi/distlib/0.2.2
[2] https://goo.gl/M3kQzR
[3] https://bitbucket.org/pypa/distlib/issues/new

--
https://mail.python.org/mailman/listinfo/python-list
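For anyone new to distlib, the script-writing functionality touched by the
Jython-related change above lives in the ScriptMaker class. A minimal sketch -
the target directory and entry-point specification are made up - might look
something like this:

import os

from distlib.scripts import ScriptMaker

target = '/tmp/demo-scripts'   # hypothetical output directory
if not os.path.isdir(target):
    os.makedirs(target)

# ScriptMaker writes wrapper scripts for entry-point specifications of the
# form 'name = module:callable', adding a suitable shebang (and, on Windows,
# an executable launcher). source_dir is only needed when copying existing
# scripts, so None is fine here.
maker = ScriptMaker(None, target)
written = maker.make('demo = demo.cli:main')
print(written)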
ANN: A new version (0.3.7) of python-gnupg has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is an enhancement and bug-fix release, but the bug-fixes include some
security improvements, so all users are encouraged to upgrade. See the project
website [1] for more information. Brief summary:

* Added an 'output' keyword parameter to the 'sign' and 'sign_file' methods,
  to allow writing the signature to a file (see the sketch at the end of this
  announcement).
* Allowed specifying 'True' for the 'sign' keyword parameter, which allows use
  of the default key for signing and avoids having to specify a key id when
  it's desired to use the default.
* Used a uniform approach with subprocess on Windows and POSIX: shell=True is
  not used on either.
* When signing/verifying, the status is updated to reflect any expired or
  revoked keys or signatures.
* Handled 'NOTATION_NAME' and 'NOTATION_DATA' during verification.
* Fixed #1, #16, #18, #20: Quoting approach changed, since now shell=False.
* Fixed #14: Handled 'NEED_PASSPHRASE_PIN' message.
* Fixed #8: Added a scan_keys method to allow scanning of keys without the
  need to import into a keyring.
* Fixed #5: Added '0x' prefix when searching for keys.
* Fixed #4: Handled 'PROGRESS' message during encryption.
* Fixed #3: Changed default encoding to Latin-1.
* Fixed #2: Raised ValueError if no recipients were specified for an
  asymmetric encryption request.
* Handled 'UNEXPECTED' message during verification.
* Replaced old range(len(X)) idiom with enumerate().
* Refactored ``ListKeys`` / ``SearchKeys`` classes to maximise use of common
  functions.
* Fixed GC94: Added ``export-minimal`` and ``armor`` options when exporting
  keys. This addition was inadvertently left out of 0.3.6.

This release [2] has been signed with my code signing key:

Vinay Sajip (CODE SIGNING KEY)
Fingerprint: CA74 9061 914E AC13 8E66 EADB 9147 B477 339A 9B86

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign documents
and verify digital signatures, manage (generate, list and delete) encryption
keys, using proven Public Key Infrastructure (PKI) encryption technology based
on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{
  ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 {
  ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

As always, your feedback is most welcome (especially bug reports [3], patches
and suggestions for improvement, or any other points via the mailing
list/discussion group [4]).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

[1] https://bitbucket.org/vinay.sajip/python-gnupg
[2] https://pypi.python.org/pypi/python-gnupg/0.3.7
[3] https://bitbucket.org/vinay.sajip/python-gnupg/issues
[4] https://groups.google.com/forum/#!forum/python-gnupg

--
https://mail.python.org/mailman/listinfo/python-list
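A quick sketch of two of the additions listed above - scanning a key file
without importing it, and writing a signature straight to a file via the new
'output' parameter. Paths and the passphrase are illustrative:

import gnupg

gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')

# Inspect the keys in an exported key file without importing them into the
# keyring (the scan_keys addition).
for key in gpg.scan_keys('exported-keys.asc'):
    print(key['keyid'], key['uids'])

# Sign some data and write the signature directly to a file using the new
# 'output' keyword parameter.
gpg.sign('Goodbye, world!', passphrase='secret', output='goodbye.sig')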
[ANN]: distlib 0.2.0 released on PyPI
I've just released version 0.2.0 of distlib on PyPI [1]. For newcomers,
distlib is a library of packaging functionality which is intended to be usable
as the basis for third-party packaging tools.

The main changes in this release are as follows:

* Updated match_hostname to use the latest Python implementation.
* Updates to better support PEP 426 / PEP 440.
* You can now provide interpreter arguments in shebang lines written by
  distlib.
* Removed reference to __PYVENV_LAUNCHER__ (relevant to OS X only).

A more detailed change log is available at [2].

Please try it out, and if you find any problems or have any suggestions for
improvements, please give some feedback using the issue tracker! [3]

Regards,

Vinay Sajip

[1] https://pypi.python.org/pypi/distlib/0.2.0
[2] http://pythonhosted.org/distlib/overview.html#change-log-for-distlib
[3] https://bitbucket.org/pypa/distlib/issues/new

--
https://mail.python.org/mailman/listinfo/python-list
ANN: Version 0.1.4 of sarge (a subprocess wrapper library) has been released.
Version 0.1.4 of Sarge, a cross-platform library which wraps the subprocess
module in the standard library, has been released.

What changed?
-------------

- Fixed issue #20: Now runs a pipeline in a separate thread if async.
- Fixed issue #21: The command line isn't parsed if shell=True is specified.
- Added Coveralls to Travis configuration.
- Tests added to cover added functionality and reported issues.
- Numerous documentation updates.

What does Sarge do?
-------------------

Sarge tries to make interfacing with external programs from your Python
applications easier than just using subprocess alone. Sarge offers the
following features:

* A simple way to run command lines which allows a rich subset of Bash-style
  shell command syntax, but parsed and run by sarge so that you can run on
  Windows without cygwin (subject to having those commands available):

  >>> from sarge import capture_stdout
  >>> p = capture_stdout('echo foo | cat; echo bar')
  >>> for line in p.stdout: print(repr(line))
  ...
  'foo\n'
  'bar\n'

* The ability to format shell commands with placeholders, such that variables
  are quoted to prevent shell injection attacks.

* The ability to capture output streams without requiring you to program your
  own threads. You just use a Capture object and then you can read from it as
  and when you want.

* The ability to look for patterns in captured output and to interact
  accordingly with the child process (see the sketch at the end of this
  announcement).

Advantages over subprocess
--------------------------

Sarge offers the following benefits compared to using subprocess:

* The API is very simple.

* It's easier to use command pipelines - using subprocess out of the box often
  leads to deadlocks because pipe buffers get filled up.

* It would be nice to use Bash-style pipe syntax on Windows, but Windows
  shells don't support some of the syntax which is useful, like &&, ||, |& and
  so on. Sarge gives you that functionality on Windows, without cygwin.

* Sometimes, subprocess.Popen.communicate() is not flexible enough for one's
  needs - for example, when one needs to process output a line at a time
  without buffering the entire output in memory.

* It's desirable to avoid shell injection problems by having the ability to
  quote command arguments safely.

* subprocess allows you to let stderr be the same as stdout, but not the other
  way around - and sometimes, you need to do that.

Python version and platform compatibility
-----------------------------------------

Sarge is intended to be used on any Python version >= 2.6 and is tested on
Python versions 2.6, 2.7, 3.1, 3.2, 3.3 and 3.4 on Linux, Windows, and Mac OS
X (not all versions are tested on all platforms, but sarge is expected to work
correctly on all these versions on all these platforms).

Finding out more
----------------

You can read the documentation at

http://sarge.readthedocs.org/

There's a lot more information, with examples, than I can put into this post.

You can install Sarge using "pip install sarge" to try it out. The project is
hosted on BitBucket at

https://bitbucket.org/vinay.sajip/sarge/

And you can leave feedback on the issue tracker there.

I hope you find Sarge useful!

Regards,

Vinay Sajip

--
https://mail.python.org/mailman/listinfo/python-list
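To give a flavour of the pattern-scanning feature mentioned in the
announcement above, here is a sketch - the command and pattern are
illustrative, and with an asynchronous run, expect() can also be used to wait
for output from a still-running child:

from sarge import Capture, run

# Capture stdout and scan the captured data for a pattern.
cap = Capture()
run('echo line-one; echo ready; echo line-two', stdout=cap)
match = cap.expect('ready')
if match:
    print('found the marker at offset', match.start())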
[ANN]: distlib 0.1.8 released on PyPI
I've released version 0.1.8 of distlib on PyPI [1]. For newcomers, distlib is
a library of packaging functionality which is intended to be usable as the
basis for third-party packaging tools.

The main changes in this release are as follows:

* Fixed issue #45: Improved thread-safety in SimpleScrapingLocator.
* Fixed issue #42: Handling of pre-release legacy version numbers now mirrors
  setuptools logic.
* Added exists, verify, update, is_compatible and is_mountable methods to the
  Wheel class (the update method fixed issue #41) - see the sketch at the end
  of this announcement.
* Added a search method to the PackageIndex class.
* Fixed a bug in the Metadata.add_requirements method.
* Allowed versions with a single numeric component and a local version
  component (tracking changes to PEP 440).
* Corrected spelling of environment variable used for the stub launcher on
  OS X.
* Avoided using pydist.json in 1.0 wheels (bdist_wheel writes a non-conforming
  pydist.json).
* Improved computation of ABI tags on Python versions where SOABI is not
  available, and improved computation of compatibility tags on OS X to allow
  for multiple architectures and older OS X versions.

A more detailed change log is available at [2].

Please try it out, and if you find any problems or have any suggestions for
improvements, please give some feedback using the issue tracker! [3]

Regards,

Vinay Sajip

[1] https://pypi.python.org/pypi/distlib/0.1.8
[2] http://pythonhosted.org/distlib/overview.html#change-log-for-distlib
[3] https://bitbucket.org/pypa/distlib/issues/new

--
https://mail.python.org/mailman/listinfo/python-list
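To illustrate the new Wheel methods mentioned above, a small sketch (the wheel
filename is hypothetical):

from distlib.wheel import Wheel

# Wrap an existing wheel file and ask some questions about it using the
# methods added in this release.
w = Wheel('demo-1.0-py2.py3-none-any.whl')
print(w.name, w.version)
print('file exists:', w.exists())
print('compatible with this interpreter:', w.is_compatible())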
[ANN]: distlib 0.1.9 released on PyPI
I've just released version 0.1.9 of distlib on PyPI [1]. For newcomers,
distlib is a library of packaging functionality which is intended to be usable
as the basis for third-party packaging tools.

The main changes in this release are as follows:

* Fixed issue #47: Updated binary launchers to fix double-quoting bug where
  script executable paths have spaces.
* Added ``keystore`` keyword argument to signing and verification APIs.

A more detailed change log is available at [2].

Please try it out, and if you find any problems or have any suggestions for
improvements, please give some feedback using the issue tracker! [3]

Regards,

Vinay Sajip

[1] https://pypi.python.org/pypi/distlib/0.1.9
[2] http://pythonhosted.org/distlib/overview.html#change-log-for-distlib
[3] https://bitbucket.org/pypa/distlib/issues/new

--
https://mail.python.org/mailman/listinfo/python-list
ANN: A new version (0.3.5) of python-gnupg has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============

This is a minor enhancement and bug-fix release. See the project website
( http://code.google.com/p/python-gnupg/ ) for more information.

Summary:

Added improved shell quoting to guard against shell injection attacks.
Added search_keys() and send_keys() methods to interact with keyservers
(see the sketch at the end of this announcement).
A symmetric cipher algorithm can now be specified when encrypting.
UTF-8 encoding is used as a fall back when no other encoding can be
determined.
The key length now defaults to 2048 bits.
A default Name-Comment field is no longer provided during key generation.

What Does It Do?
================

The gnupg module allows Python programs to make use of the functionality
provided by the Gnu Privacy Guard (abbreviated GPG or GnuPG). Using this
module, Python programs can encrypt and decrypt data, digitally sign documents
and verify digital signatures, manage (generate, list and delete) encryption
keys, using proven Public Key Infrastructure (PKI) encryption technology based
on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes
use of the subprocess module which appeared in that version of Python. This
module is a newer version derived from earlier work by Andrew Kuchling,
Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
>>> gpg.list_keys()
[{
  ...
  'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2',
  'keyid': '197D5DAC68F1AAB2',
  'length': '1024',
  'type': 'pub',
  'uids': ['', 'Gary Gross (A test user) ']},
 {
  ...
  'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A',
  'keyid': '0C5FEFA7A921FC4A',
  'length': '1024',
  ...
  'uids': ['', 'Danny Davis (A test user) ']}]
>>> encrypted = gpg.encrypt("Hello, world!", ['0C5FEFA7A921FC4A'])
>>> str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
>>> decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
>>> str(decrypted)
'Hello, world!'
>>> signed = gpg.sign("Goodbye, world!", passphrase='secret')
>>> verified = gpg.verify(str(signed))
>>> print "Verified" if verified else "Not verified"
'Verified'

For more information, visit http://code.google.com/p/python-gnupg/ - as
always, your feedback is most welcome (especially bug reports, patches and
suggestions for improvement).

Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.

--
http://mail.python.org/mailman/listinfo/python-list
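A sketch of the keyserver and symmetric-cipher additions mentioned in the
summary above. The keyserver, query, cipher name and passphrase are
illustrative, and the exact handling of the recipients argument for a purely
symmetric operation may differ between versions:

import gnupg

gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')

# Search a keyserver for keys matching a query.
found = gpg.search_keys('gary.gross', 'keyserver.ubuntu.com')
for key in found:
    print(key['keyid'], key['uids'])

# Symmetric encryption, naming the cipher algorithm (new in this release).
# No recipients are needed for a purely symmetric operation.
encrypted = gpg.encrypt('Hello, world!', None, symmetric='AES256',
                        passphrase='secret')
print(str(encrypted)[:40])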
ANN: distlib 0.2.1 released on PyPI
I've just released version 0.2.1 of distlib on PyPI [1]. For newcomers,
distlib is a library of packaging functionality which is intended to be usable
as the basis for third-party packaging tools.

The main changes in this release are as follows:

* Fixed issue #58: Return a Distribution instance or None from locate() (see
  the sketch at the end of this announcement).
* Fixed issue #59: Skipped special keys when looking for versions.
* Improved behaviour of PyPIJSONLocator to be analogous to that of other
  locators.
* Added resource iterator functionality.
* Fixed issue #71: Updated launchers to decode shebangs using UTF-8. This
  allows non-ASCII pathnames to be correctly handled.
* Ensured that the executable written to shebangs is normcased.
* Changed ScriptMaker to work better under Jython.
* Changed the mode setting method to work better under Jython.
* Changed get_executable() to return a normcased value.
* Handled multiple-architecture wheel filenames correctly.

A more detailed change log is available at [2].

Please try it out, and if you find any problems or have any suggestions for
improvements, please give some feedback using the issue tracker! [3]

Regards,

Vinay Sajip

[1] https://pypi.python.org/pypi/distlib/0.2.1
[2] https://goo.gl/K5Spsp
[3] https://bitbucket.org/pypa/distlib/issues/new

--
https://mail.python.org/mailman/listinfo/python-list
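A small sketch of the locate() behaviour mentioned in the first change above.
The project name queried is just an example, and a network connection to PyPI
is assumed:

from distlib.locators import locate

# locate() queries the default locator (PyPI) and returns a Distribution
# instance for the best matching release, or None if nothing suitable is
# found - the behaviour clarified by issue #58.
dist = locate('sarge')
if dist is None:
    print('not found')
else:
    print(dist.name, dist.version)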