Re: [Python-Dev] proto-pep: plugin proposal (for unittest)
On 31/07/2010 01:51, David Cournapeau wrote:
> On Fri, Jul 30, 2010 at 10:23 PM, Michael Foord wrote:
>> For those of you who found this document perhaps just a little bit too long, I've written up a *much* shorter intro to the plugin system (including how to get the prototype) on my blog: http://www.voidspace.org.uk/python/weblog/arch_d7_2010_07_24.shtml#e1186
>
> This looks nice and simple, but I am a bit worried about the configuration file for registration. My experience is that end users don't like editing files much. I understand that may be considered as bikeshedding, but have you considered a system analogous to bzr's instead? A plugin is a directory somewhere, which means that disabling it is just removing a directory. In my experience, that is more reliable from a user POV than e.g. the hg way of doing things. The plugin system of bzr is one of the things that I still consider the best in its category, even though I stopped using bzr quite some time ago. The registration was incredibly robust and easy to use from both a user and a developer POV.

Definitely not bikeshedding - a useful suggestion, David. As Matthieu says in his reply, individual projects need to be able to enable (and configure) individual plugins that their tests depend on - potentially even shipping the plugin with the project. The other side of this is generally useful plugins that developers may want to have permanently active (like the debugger plugin), so that it is always available to them (via a command line switch). The proposed system allows this with a user configuration file plus a per-project configuration file.

I take your point about users not liking configuration files, though. I've looked a little at the bzr plugin system and I like the plugins subcommand. If PEP 376 goes ahead then we could keep the user plugin and use the PEP 376 metadata, in concert with a user config file, to discover all plugins *available*.
A plugins subcommand could then activate / deactivate individual plugins by editing (or creating) the config file for the user. This could be bolted *on top* of the config file solution once PEP 376 is in place. It *doesn't* handle the problem of configuring plugins. So long as metadata is available about what configuration options plugins have (through a plugins API), the plugins subcommand could also handle configuration.

Installation of plugins would still be done through the standard distutils(2) machinery. (Using PEP 376 would depend on distutils2. I would be fine with this.)

Another possibility would be to have a zero-config plugin installation solution *as well as* the config files. Create a plugins directory (in the user home directory?) and automatically activate plugins in this directory. This violates TOOWTDI though. As it happens, adding a plugin directory would be easy to implement as a plugin...

All the best,

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog

READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
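For concreteness, the kind of user config file being discussed might look something like the sketch below. The section and option names here are hypothetical, not the prototype's actual syntax; the point is just that a "plugins" subcommand could activate or deactivate entries by rewriting such a file.

```python
import configparser
import io

# Hypothetical config format -- NOT the actual prototype's syntax.
USER_CONFIG = """\
[unittest]
plugins =
    debugger
    coverage

[coverage]
always-on = off
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(USER_CONFIG))

# A "plugins" subcommand would toggle entries here, then write the
# file back out, so users never edit it by hand.
enabled = config.get("unittest", "plugins").split()
always_on = config.getboolean("coverage", "always-on")
print(enabled, always_on)
```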
Re: [Python-Dev] proto-pep: plugin proposal (for unittest)
On 31/07/2010 12:46, Michael Foord wrote:
[snip...]
> If PEP 376 goes ahead then we could keep the user plugin

I meant "keep the user config file".

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog
Re: [Python-Dev] unexpected import behaviour
On Sat, Jul 31, 2010 at 3:57 PM, Daniel Waterworth wrote:
> @Nick: I suppose the simplest way to detect re-importation in the
> general case is to store a set of hashes of files that have been
> imported. When a user tries to import a file whose hash is
> already in the set, a warning is generated. It's simpler than trying
> to figure out all the different ways that a file can be imported, and
> will also detect copied files. This is less infrastructure than you
> were suggesting, but it's not a perfect solution.

Hashing every file on import would definitely be more overhead than just checking __file__ values (since we already calculate the latter, and regardless of how a file is imported, it needs to end up in sys.modules eventually). Besides, importing the same code under different names happens in several places in our own test suite (we use it to check that code behaviour doesn't change just because we import it differently), so we can hardly disable that behaviour.

That said, I really don't think catching such a rare error is worth *any* runtime overhead. Just making "__main__" and the real module name refer to the same object in sys.modules is a different matter, but I'm not confident enough that I fully grasp the implications to do it without gathering feedback from a wider audience.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
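The failure mode under discussion is easy to reproduce: the same file imported under two different names produces two distinct module objects, each with its own copy of every class and global. A self-contained sketch (the throwaway module names are made up for illustration):

```python
import importlib
import importlib.util
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "mod_a.py"), "w") as f:
        f.write("class Widget:\n    pass\n")
    sys.path.insert(0, d)
    importlib.invalidate_caches()
    a = importlib.import_module("mod_a")

    # Import the very same file again under a second name.
    spec = importlib.util.spec_from_file_location(
        "mod_b", os.path.join(d, "mod_a.py"))
    b = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(b)

    # Same source file, but two module objects and two Widget classes.
    same_file = os.path.samefile(a.__file__, spec.origin)
    distinct_classes = a.Widget is not b.Widget
    sys.path.remove(d)

print(same_file, distinct_classes)  # True True
```

An isinstance() check that works against one copy silently fails against the other, which is why this bites people when __main__ is re-imported under its real name.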
Re: [Python-Dev] unexpected import behaviour
On 31/07/2010 16:07, Nick Coghlan wrote:
> Hashing every file on import would definitely be more overhead than just
> checking __file__ values (since we already calculate the latter, and
> regardless of how a file is imported, it needs to end up in sys.modules
> eventually). Besides, importing the same code under different names happens
> in several places in our own test suite (we use it to check that code
> behaviour doesn't change just because we import it differently), so we can
> hardly disable that behaviour.
>
> That said, I really don't think catching such a rare error is worth *any*
> runtime overhead. Just making "__main__" and the real module name refer to
> the same object in sys.modules is a different matter, but I'm not confident
> enough that I fully grasp the implications to do it without gathering
> feedback from a wider audience.

Some people work around the potential for bugs caused by __main__ reimporting itself by doing it *deliberately*. Glyf even recommends it as good practise. ;-)

http://glyf.livejournal.com/60326.html

So - the fix you suggest would *break* this code. Raising a warning wouldn't... (and would eventually make this workaround unnecessary).

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog
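The workaround in Glyph's post amounts to this: when the module is executed directly as a script, it re-imports itself under its canonical name and delegates to that copy, so only one module object ever holds state. A runnable sketch (the "myproject.gizmo" names come from this thread; the layout and path manipulation are improvised here):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

GIZMO = textwrap.dedent("""\
    def main():
        print("running", __name__)

    if __name__ == "__main__":
        # Re-import this file under its real name and delegate, so
        # only one copy of the module's classes and globals exists.
        import os, sys
        sys.path.insert(0, os.path.dirname(
            os.path.dirname(os.path.abspath(__file__))))
        from myproject.gizmo import main
        main()
""")

with tempfile.TemporaryDirectory() as d:
    pkg = os.path.join(d, "myproject")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "gizmo.py"), "w") as f:
        f.write(GIZMO)
    # Run the module directly as a script, the problematic case.
    out = subprocess.run([sys.executable, os.path.join(pkg, "gizmo.py")],
                         capture_output=True, text=True)

print(out.stdout.strip())  # running myproject.gizmo
```

Note that main() reports the canonical module name even though the file was invoked directly, which is the whole point of the trick.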
Re: [Python-Dev] unexpected import behaviour
On Sat, Jul 31, 2010 at 11:07 AM, Nick Coghlan wrote:
..
> That said, I really don't think catching such a rare error is worth
> *any* runtime overhead. Just making "__main__" and the real module
> name refer to the same object in sys.modules is a different matter,
> but I'm not confident enough that I fully grasp the implications to do
> it without gathering feedback from a wider audience.

If you make sys.modules['__main__'] and sys.modules['modname'] the same (let's call it mod), what will be the value of mod.__name__?
Re: [Python-Dev] unexpected import behaviour
On Sun, Aug 1, 2010 at 1:14 AM, Michael Foord wrote:
> Some people work around the potential for bugs caused by __main__
> reimporting itself by doing it *deliberately*. Glyf even recommends it as
> good practise. ;-)
>
> http://glyf.livejournal.com/60326.html
>
> So - the fix you suggest would *break* this code. Raising a warning
> wouldn't... (and would eventually make this workaround unnecessary.)

With my change, that code would work just fine. "from myproject.gizmo import main" and "from __main__ import main" would just return the same object, whereas currently they return something different.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] unexpected import behaviour
On 31/07/2010 16:30, Nick Coghlan wrote:
> With my change, that code would work just fine. "from myproject.gizmo
> import main" and "from __main__ import main" would just return the same
> object, whereas currently they return something different.

Have you looked at the code in that example? I don't think it would work...

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog
Re: [Python-Dev] unexpected import behaviour
On Sun, Aug 1, 2010 at 1:23 AM, Alexander Belopolsky wrote:
> If you make sys.modules['__main__'] and sys.modules['modname'] the same
> (let's call it mod), what will be the value of mod.__name__?

"__main__", so pickling would remain broken. Unpickling would at least work correctly under this regime, though. The only way to fix pickling is to avoid monkeying with __name__ at all (e.g. something along the lines of PEP 299, or a special "__is_main__" flag).

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
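The pickling breakage Nick refers to is that pickle stores the defining module's __name__ in the stream, and for a directly-executed script that is "__main__" rather than the real module name. A small demonstration (the script filename "gizmo.py" is arbitrary):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

SCRIPT = textwrap.dedent("""\
    import pickle

    class Widget:
        pass

    if __name__ == "__main__":
        data = pickle.dumps(Widget())
        # The stream records __main__.Widget, not gizmo.Widget, so
        # unpickling elsewhere requires the same class to be visible
        # under the name __main__.
        print(b"__main__" in data)
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "gizmo.py")
    with open(path, "w") as f:
        f.write(SCRIPT)
    out = subprocess.run([sys.executable, path],
                         capture_output=True, text=True)

print(out.stdout.strip())  # True
```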
Re: [Python-Dev] unexpected import behaviour
On Sun, Aug 1, 2010 at 1:36 AM, Michael Foord wrote:
> On 31/07/2010 16:30, Nick Coghlan wrote:
>> With my change, that code would work just fine. "from myproject.gizmo
>> import main" and "from __main__ import main" would just return the
>> same object, whereas currently they return something different.
>
> Have you looked at the code in that example? I don't think it would work...

Ah, I see what you mean - yes, there would need to be some additional work done to detect the case of direct execution from within a package directory in order to set __main__.__package__ accordingly (as if the command line had been "python -m myproject.gizmo" rather than "python myproject/gizmo.py"). Even then, the naming problem would remain.

Still, this kind of thing is the reason I'm reluctant to arbitrarily change the existing semantics - as irritating as they can be at times (with pickling/unpickling problems being the worst of it, as pickling in particular depends on the value in __name__ being correct), people have all sorts of workarounds kicking around that need to be accounted for if we're going to make any changes.

I kind of regret PEP 366 being accepted in the __package__ form now. At one point I considered proposing something like __module_name__ instead, but I didn't actually need that extra information to solve the relative import issue, and nobody else mentioned the pickling problem at the time.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] proto-pep: plugin proposal (for unittest)
On Sat, Jul 31, 2010 at 1:46 PM, Michael Foord wrote:
...
> Installation of plugins would still be done through the standard
> distutils(2) machinery. (Using PEP 376 would depend on distutils2. I would
> be fine with this.)

Note that the PEP 376 implementation is mainly done in pkgutil. A custom version lives in distutils2 but, when ready, will be pushed independently into pkgutil.

Regards

Tarek
Re: [Python-Dev] proto-pep: plugin proposal (for unittest)
On 31/07/2010 17:22, Tarek Ziadé wrote:
> Note that the PEP 376 implementation is mainly done in pkgutil. A custom
> version lives in distutils2 but, when ready, will be pushed independently
> into pkgutil.

Ok. It would be helpful for unittest2 (the backport) if it was *still* available in distutils2 even after the merge into pkgutil (for use by earlier versions of Python). I guess you will do this for the distutils2 backport itself anyway... (?)

All the best,

Michael Foord

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog
Re: [Python-Dev] Is it intentional that "sys.__debug__ = 1" is illegal in Python 2.7?
On Jul 31, 2010, at 08:32 AM, Steven D'Aprano wrote:
> On Sat, 31 Jul 2010 07:44:42 am Guido van Rossum wrote:
>> On Fri, Jul 30, 2010 at 1:53 PM, Barry Warsaw wrote:
>>> On Jul 30, 2010, at 01:42 PM, Guido van Rossum wrote:
>>>> Well it is a reserved name so those packages that were setting it
>>>> should have known that they were using undefined behavior that
>>>> could change at any time.
>>>
>>> Shouldn't it be described here then?
>>>
>>> http://docs.python.org/reference/lexical_analysis.html#identifiers
>>
>> No, since it is covered here:
>>
>> http://docs.python.org/reference/lexical_analysis.html#reserved-classes-of-identifiers
>
> I have a small concern about the wording of that, specifically this:
>
> "System-defined names. These names are defined by the interpreter and
> its implementation (including the standard library); applications
> SHOULD NOT EXPECT TO DEFINE additional names using this convention.
> The set of names of this class defined by Python may be extended in
> future versions." [emphasis added]
>
> This implies to me that at some time in the future, Python may make it
> illegal to assign to any __*__ name apart from those in a list
> of "approved" methods. Is that the intention? I have always understood
> that if you create your own __*__ names, you risk clashing with a
> special method, but otherwise it is allowed, if disapproved of. I
> would not like to see it become forbidden.

I'm with Steven on this one. I've always understood the rules on double-underscore names to mean that Python reserves the use of those names for its own purposes, and is free to break your code if you define your own. That's very different than saying it's forbidden to use double-underscore names for your own purposes or assign to them, which is I think what's going on with the sys.__debug__ example. If that's the rule, I'd want to make this section of the documentation much stronger about the prohibitions. I've just never considered Python's rule here to be that strong.

-Barry
Re: [Python-Dev] Is it intentional that "sys.__debug__ = 1" is illegal in Python 2.7?
On Jul 30, 2010, at 05:23 PM, Eric Snow wrote:
> First appeared in docs for 2.6 (October 02, 2008). Not sure if that
> is when it first became constrained this way.
>
> http://docs.python.org/library/constants.html?highlight=__debug__#__debug__

Thanks Eric, this is probably the right section of the docs to reference on the issue. I want to add two clarifications to this section:

* Be more explicit that assignments to None and __debug__ are illegal even when used as attributes. IOW it's not just assignment to the built-in names that is illegal.

* Add a "Changed in 2.7" note to __debug__ stating that assignment to __debug__ as an attribute became illegal.

From this, though, I think it's clear that Benjamin's change was intentional. I will also add this to the NEWS and What's New files for 2.7.

-Barry
Re: [Python-Dev] pdb mini-sprint report and questions
On Jul 31, 2010, at 12:45 AM, Georg Brandl wrote:
> to warm up for tomorrow's 3.2alpha1 release, I did a mini-sprint on
> pdb issues today. I'm pleased to report that 14 issues could be
> closed, and pdb got a range of small new features, such as commands on
> the command line, "until " or "longlist" showing all the code
> for the current function (the latter courtesy of Antonio Cuni's pdb++).

I haven't played with pdb++ (I might have to do something about that) but it's awesome that you're giving pdb some love.

> One issue that's not yet closed is #7245, which adds a (very nice IMO)
> feature: when you press Ctrl-C while the program being debugged runs,
> you will not get a traceback but execution is suspended, and you can
> debug from the current point of execution -- just like in gdb.

That *would* be nice.

> Another question is about a feature of pdb++ that I personally would
> like, but imagine would make others unhappy: one-letter abbreviations
> of commands such as c(ontinue) or l(ist) are also often-used variable
> names, so they are frequently typed without the required "!" or "print"
> that would distinguish them from the command, and the command is
> actually executed. The feature in question would default to printing
> the variable in cases where one exists -- handy enough or too
> inconsistent?

Not that important to me...

> Also, are there any other features you would like to see? One feature
> of pdb++ that is general enough and has no dependencies would be watch
> expressions...

...but watch expressions - and the equivalent of gdb's 'display' command - would be very cool. `interact` would also be useful and probably pretty easy to add.

-Barry
Re: [Python-Dev] proto-pep: plugin proposal (for unittest)
>> Note that the PEP 376 implementation is mainly done in pkgutil. A
>> custom version lives in distutils2 but
>> when ready, will be pushed independently in pkgutil
>
> Ok. It would be helpful for unittest2 (the backport) if it was *still*
> available in distutils2 even after the merge into pkgutil (for use by
> earlier versions of Python). I guess you will do this for the
> distutils2 backport itself anyway... (?)

Yes. Even if the goal is to have distutils2 in the stdlib for 3.2 or 3.3, there will still be a standalone release on PyPI for Python 2.4-3.1. You’ll just have to write compat code such as:

    try:
        from pkgutil import shiny_new_function
    except ImportError:
        from distutils2._backport.pkgutil import shiny_new_function

Regards
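The same fallback idiom can be illustrated with a real stdlib name: math.isqrt only exists on Python 3.8+, so older interpreters take the except branch. The fallback body below is my own minimal stand-in, not anything from distutils2.

```python
try:
    from math import isqrt              # stdlib version where available
except ImportError:
    def isqrt(n):                       # minimal stand-in, same contract
        if n < 0:
            raise ValueError("isqrt() argument must be nonnegative")
        x = int(n ** 0.5)
        while x * x > n:                # correct for float rounding
            x -= 1
        while (x + 1) * (x + 1) <= n:
            x += 1
        return x

print(isqrt(99), isqrt(100))  # 9 10
```

Callers then use one name regardless of which implementation was found, which is exactly what Tarek's pkgutil/distutils2 example enables.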
Re: [Python-Dev] Is it intentional that "sys.__debug__ = 1" is illegal in Python 2.7?
>> On Sat, 31 Jul 2010 07:44:42 am Guido van Rossum wrote:
>>> http://docs.python.org/reference/lexical_analysis.html#reserved-classes-of-identifiers

On Jul 31, 2010, at 08:32 AM, Steven D'Aprano wrote:
>> I have a small concern about the wording of that, specifically this:
>>
>> "System-defined names. These names are defined by the interpreter and
>> its implementation (including the standard library); applications
>> SHOULD NOT EXPECT TO DEFINE additional names using this convention.
>> The set of names of this class defined by Python may be extended in
>> future versions." [emphasis added]
>>
>> This implies to me that at some time in the future, Python may make it
>> illegal to assign to any __*__ name apart from those in a list
>> of "approved" methods. Is that the intention? I have always understood
>> that if you create your own __*__ names, you risk clashing with a
>> special method, but otherwise it is allowed, if disapproved of. I
>> would not like to see it become forbidden.

The key phrase is "system-defined names". Since this is in the section on lexical analysis, it does not limit the contexts in which such names are reserved for the system; they are potentially special *everywhere* (as variables, builtins, classes, functions, methods, attributes, any other use of names in the language). The phrase "define additional names" should not be taken to imply that using __*__ names that already have a defined meaning (like __debug__) in new contexts is fair game - to the contrary, I would think that since __debug__ is a system-defined name (and one with pretty deep implications), doing things not explicitly allowed, like setting sys.__debug__, is really like playing with fire.

On Sat, Jul 31, 2010 at 9:36 AM, Barry Warsaw wrote:
> I'm with Steven on this one. I've always understood the rules on
> double-underscore names to mean that Python reserves the use of those names
> for its own purposes, and is free to break your code if you define your own.

Or if you use the ones reserved by Python in undocumented ways.

> That's very different than saying it's forbidden to use double-underscore
> names for your own purposes or assign to them, which is I think what's going
> on with the sys.__debug__ example.

A blanket prohibition on assigning to or defining any __*__ names in any context (besides the documented ones in documented contexts) would clearly break a lot of code, but I don't think implementations are required or expected to avoid such occasional breakage at all cost. The occasional introduction of new __*__ names with new special meanings is clearly allowed, and if the language were to introduce a bunch of new keywords of this form (keywords meaning that they become syntactically illegal everywhere except where the syntax explicitly allows them), that would be totally within the rules.

> If that's the rule, I'd want to make this section of the documentation much
> stronger about the prohibitions. I've just never considered Python's rule
> here to be that strong.

I have. I have also occasionally ignored this rule, but I've always felt that I was taking a calculated risk and would not have a leg to stand on if my code were broken.

On Sat, Jul 31, 2010 at 9:41 AM, Barry Warsaw wrote:
> On Jul 30, 2010, at 05:23 PM, Eric Snow wrote:
>
>> First appeared in docs for 2.6 (October 02, 2008). Not sure if that
>> is when it first became constrained this way.
>>
>> http://docs.python.org/library/constants.html?highlight=__debug__#__debug__
>
> Thanks Eric, this is probably the right section of the docs to reference on
> the issue. I want to add two clarifications to this section:
>
> * Be more explicit that assignments to None and __debug__ are illegal even
> when used as attributes. IOW it's not just assignment to the built-in names
> that is illegal.

Well, None is a reserved word in Py3k (as are True and False). But yes, the docs should clarify that *any* use of __*__ names, in *any* context, that does not follow explicitly documented use is subject to breakage without warning.

> * Add a "Changed in 2.7" to __debug__ stating that assignments to __debug__
> as an attribute became illegal.
>
> From this though, I think it's clear that Benjamin's change was intentional.
> I will also add this to the NEWS and What's New files for 2.7.

Thanks!

--
--Guido van Rossum (python.org/~guido)
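The __debug__ case discussed above eventually became a compile-time rule: in Python 3 the compiler itself rejects an assignment to __debug__, before any bytecode runs. A quick check:

```python
# Assigning to __debug__ is rejected at compile time, not at runtime,
# so even compiling the statement (without executing it) fails.
try:
    compile("__debug__ = 1", "<example>", "exec")
    rejected = False
except SyntaxError:
    rejected = True

print(rejected)  # True
```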
Re: [Python-Dev] Is it intentional that "sys.__debug__ = 1" is illegal in Python 2.7?
Barry Warsaw wrote:
> I've always understood the rules on double-underscore names to mean that
> Python reserves the use of those names for its own purposes, and is free to
> break your code if you define your own. That's very different than saying
> it's forbidden to use double-underscore names for your own purposes or
> assign to them, which is I think what's going on with the sys.__debug__
> example.

I don't see that there's any difference. Once upon a time, __debug__ wasn't special, and someone decided to use it for their own purposes. Then Guido decided to make it special, and broke their code, which is within the rules as you just stated them. The rule doesn't say anything about what *kinds* of breakage are allowed, so anything goes, including making it impossible to assign to the name any more.

--
Greg
[Python-Dev] No response to posts
Hi all,

I have been wading through outstanding issues today and have noticed that there are several where there has been no response at all to the initial post. In other cases, the only response has been Terry Reedy back in May 2010, and then only to update the versions affected.

Would it be possible to get some code in place whereby, if there is no response to the initial post, this could be flagged up after (say) 24 hours? Surely any response back to the OP is better than a complete wall of silence?

Kindest regards.

Mark Lawrence.
[Python-Dev] Exception chaining and generator finalisation
While updating my yield-from impementation for Python
3.1.2, I came across a quirk in the way that the new
exception chaining feature interacts with generators.
If you close() a generator, and it raises an exception
inside a finally clause, you get a double-barrelled
traceback that first reports a GeneratorExit, then
"During handling of the above exception, another
exception occurred", followed by the traceback for
the exception raised by the generator.
To my mind, the fact that GeneratorExit is involved
is an implementation detail that shouldn't be leaking
through like this.
Does anyone think this ought to be fixed, and if so,
how? Should GeneratorExit be exempt from being
implicitly set as the context of another exception?
Should any other exceptions also be exempt?
Demonstration follows:
Python 3.1.2 (r312:79147, Jul 31 2010, 21:23:14)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def g():
... try:
... yield 1
... finally:
... raise ValueError("Spanish inquisition")
...
>>> gi = g()
>>> next(gi)
1
>>> gi.close()
Traceback (most recent call last):
File "<stdin>", line 3, in g
GeneratorExit
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in g
ValueError: Spanish inquisition
--
Greg
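[The chaining in the demonstration above is also visible programmatically: the ValueError that propagates out of close() carries the interrupted GeneratorExit as its implicit __context__ (PEP 3134). A small sketch, runnable on any Python 3.x:]

```python
def g():
    try:
        yield 1
    finally:
        raise ValueError("Spanish inquisition")

gi = g()
next(gi)
try:
    gi.close()
except ValueError as exc:
    # The GeneratorExit thrown in by close() is recorded implicitly
    print(type(exc.__context__))   # <class 'GeneratorExit'>
    print(exc.__cause__)           # None - no explicit "raise ... from"
```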
Re: [Python-Dev] No response to posts
Good call. Alternative idea: Have a new status “unread” to make searching easier for bug people. Or a predefined custom search for nosy_count == 1. Regards
Re: [Python-Dev] No response to posts
On Sat, Jul 31, 2010 at 19:48, Mark Lawrence wrote: > Hi all, > > I have been wading through outstanding issues today and have noticed that > there are several where there has been no response at all to the initial > post. Failing that, the only response has been Terry Reedy back in May > 2010, and that only updating the versions affected. > > Would it be possible to get some code in place whereby if there is no > response to the initial post, this could be flagged up after (say) 24 hours? > Surely any response back to the OP is better than a complete wall of > silence? > > Kindest regards. > > Mark Lawrence. > We could just add globally visible query which shows all issues with a message count of 1. That query currently shows 372 issues, most of which were entered within the last few months. 24 hours seems too soon for any kind of notification. Who would receive this notification? ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Exception chaining and generator finalisation
On Sun, Aug 1, 2010 at 11:01 AM, Greg Ewing wrote:
> While updating my yield-from implementation for Python
> 3.1.2, I came across a quirk in the way that the new
> exception chaining feature interacts with generators.
>
> If you close() a generator, and it raises an exception
> inside a finally clause, you get a double-barrelled
> traceback that first reports a GeneratorExit, then
> "During handling of the above exception, another
> exception occurred", followed by the traceback for
> the exception raised by the generator.
>
> To my mind, the fact that GeneratorExit is involved
> is an implementation detail that shouldn't be leaking
> through like this.
>
> Does anyone think this ought to be fixed, and if so,
> how? Should GeneratorExit be exempt from being
> implicitly set as the context of another exception?
> Should any other exceptions also be exempt?
I don't see it as an implementation detail - it's part of the spec of
generator finalisation in PEP 342 that GeneratorExit is thrown in to
the incomplete generator at the point of the most recent yield. Trying
to hide that doesn't benefit anybody.
SystemExit and KeyboardInterrupt behave the same way:
Python 3.2a0 (py3k:82729, Jul 9 2010, 20:26:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> try:
... sys.exit(1)
... finally:
... raise RuntimeError("Ooops")
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
SystemExit: 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
RuntimeError: Ooops
>>> try:
... input("Hit Ctrl-C now")
... finally:
... raise RuntimeError("Ooops")
...
Hit Ctrl-C nowTraceback (most recent call last):
File "<stdin>", line 2, in <module>
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
RuntimeError: Ooops
Cheers,
Nick.
--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] No response to posts
On Sun, Aug 1, 2010 at 11:00 AM, Brian Curtin wrote:
> We could just add globally visible query which shows all issues with a
> message count of 1. That query currently shows 372 issues, most of which
> were entered within the last few months.
> 24 hours seems too soon for any kind of notification. Who would receive
> this notification?

The query for unreviewed issues to help out the triage folks sounds like an excellent idea.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] Exception chaining and generator finalisation
Nick Coghlan wrote:
> I don't see it as an implementation detail - it's part of the spec of
> generator finalisation in PEP 342

It doesn't seem like something you need to know in this situation, though. All it tells you is that the finalisation is happening because the generator is being closed rather than completing on its own. I suppose it doesn't do any harm, but it seems untidy to clutter up the traceback with irrelevant and possibly confusing information.

> Hit Ctrl-C nowTraceback (most recent call last):
>   File "<stdin>", line 2, in <module>
> KeyboardInterrupt
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File "<stdin>", line 4, in <module>
> RuntimeError: Ooops

That's a bit different, because the fact that the program was terminated by Ctrl-C could be useful information.

--
Greg
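[Historical note: Python later gave generator authors a way to opt out of the chain themselves. PEP 409 (Python 3.3+) added `raise ... from None`, which suppresses the implicit context when the traceback is displayed. A sketch, not available in the 3.1/3.2 releases discussed in this thread:]

```python
def g():
    try:
        yield 1
    finally:
        # "from None" sets __suppress_context__, so the displayed
        # traceback omits the GeneratorExit (PEP 409, Python 3.3+)
        raise ValueError("Spanish inquisition") from None

gi = g()
next(gi)
try:
    gi.close()
except ValueError as exc:
    print(exc.__suppress_context__)   # True
```

With this, only the ValueError traceback is printed, which is the behaviour Greg is asking for, but on an opt-in, per-raise basis.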
Re: [Python-Dev] Exception chaining and generator finalisation
On Sun, 01 Aug 2010 13:01:32 +1200
Greg Ewing wrote:
> While updating my yield-from implementation for Python
> 3.1.2, I came across a quirk in the way that the new
> exception chaining feature interacts with generators.
>
> If you close() a generator, and it raises an exception
> inside a finally clause, you get a double-barrelled
> traceback that first reports a GeneratorExit, then
> "During handling of the above exception, another
> exception occurred", followed by the traceback for
> the exception raised by the generator.
It only happens if you call close() explicitly:
>>> def g():
... try: yield 1
... finally: 1/0
...
>>> gi = g()
>>> next(gi)
1
>>> del gi
Exception ZeroDivisionError: ZeroDivisionError('division by zero',) in <generator object g at 0x...> ignored
>>> gi = g()
>>> next(gi)
1
>>> next(gi)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in g
ZeroDivisionError: division by zero
>>>
Regards
Antoine.
Re: [Python-Dev] Exception chaining and generator finalisation
On Sun, Aug 1, 2010 at 1:25 PM, Greg Ewing wrote:
> Nick Coghlan wrote:
>
>> I don't see it as an implementation detail - it's part of the spec of
>> generator finalisation in PEP 342
>
> It doesn't seem like something you need to know in this
> situation, though. All it tells you is that the finalisation
> is happening because the generator is being closed rather
> than completing on its own.

That may be important though (e.g. if the generator hasn't been written to correctly take into account the possibility of exceptions being thrown in, then knowing the exception happened when GeneratorExit in particular was thrown in, rather than when next() was called or a different exception was thrown in, may matter for the debugging process).

Basically, I disagree with your assumption that knowing GeneratorExit was involved won't be significant in figuring out why the generator threw an exception at all, so I see this as providing useful exception context information rather than being untidy noise.

A toy example, that isn't obviously broken at first glance, but in fact fails when close() is called:

def toy_gen():
    try:
        yield 1
    except Exception as ex:
        exc = ex
    else:
        exc = None
    finally:
        if exc is not None:
            print(type(exc))

>>> g = toy_gen()
>>> next(g)
1
>>> g.throw(NameError)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>> g = toy_gen()
>>> next(g)
1
>>> g.close()
Traceback (most recent call last):
  File "<stdin>", line 3, in toy_gen
GeneratorExit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 9, in toy_gen
UnboundLocalError: local variable 'exc' referenced before assignment

Without knowing GeneratorExit was thrown, the UnboundLocalError would be rather confusing. Given GeneratorExit to work with though, it shouldn't be hard for a developer to realise that "exc" won't be set when a thrown exception inherits directly from BaseException rather than from Exception.

Cheers,
Nick.
--
Nick Coghlan | [email protected] | Brisbane, Australia
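[Following Nick's diagnosis, one possible repair for the toy generator is an illustrative sketch, not from the thread: initialise `exc` before the try block so the finally clause is always safe, and catch BaseException with a re-raise so GeneratorExit still terminates the generator as close() requires:]

```python
def toy_gen_fixed():
    exc = None  # initialised up front, so the finally clause never
                # hits an unbound local
    try:
        yield 1
    except BaseException as ex:  # also catches GeneratorExit
        exc = ex
        raise  # re-raise so close() can still finalise the generator
    finally:
        if exc is not None:
            print(type(exc))

g = toy_gen_fixed()
next(g)
g.close()  # prints <class 'GeneratorExit'>; no chained traceback escapes
```

Re-raising inside the except clause matters: a generator that swallows GeneratorExit makes close() raise RuntimeError ("generator ignored GeneratorExit").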
