Re: [Python-Dev] Extending tuple unpacking
Ron Adam wrote: > My concern is if it's used outside of functions, then on the left hand > side of assignments, it will be used to pack, but if used on the right > hand side it will be to unpack. I don't see why that should be any more confusing than the fact that commas denote tuple packing on the right and unpacking on the left. Greg ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Extending tuple unpacking
Guido van Rossum wrote: > BTW, what should > > [a, b, *rest] = (1, 2, 3, 4, 5) > > do? Should it set rest to (3, 4, 5) or to [3, 4, 5]? Whatever type is chosen, it should be the same type, always. The rhs could be any iterable, not just a tuple or a list. Making a special case of preserving one or two types doesn't seem worth it to me. > Suppose the latter. Then should we allow > > [*rest] = x > > as alternative syntax for > > rest = list(x) That would be a consequence of that choice, yes, but so what? There are already infinitely many ways of writing any expression. > ? And then perhaps > > *rest = x > > should mean > > rest = tuple(x) > > Or should that be disallowed Why bother? What harm would result from the ability to write that? > There certainly is a need for doing the same from the end: > > *rest, a, b = (1, 2, 3, 4, 5) I wouldn't mind at all if *rest were only allowed at the end. There's a pragmatic reason for that if nothing else: the rhs can be any iterable, and there's no easy way of getting "all but the last n" items from a general iterable. > Where does it stop? For me, it stops with *rest only allowed at the end, and always yielding a predictable type (which could be either tuple or list, I don't care). > BTW, and quite unrelated, I've always felt uncomfortable that you have to > write > > f(a, b, foo=1, bar=2, *args, **kwds) > > I've always wanted to write that as > > f(a, b, *args, foo=1, bar=2, **kwds) Yes, I'd like that too, with the additional meaning that foo and bar can only be specified by keyword, not by position. Greg
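The semantics Greg argues for (a trailing *rest only, always producing one fixed type) can be sketched with an ordinary helper function; the name unpack_rest is made up for illustration. The feature eventually landed in Python 3.0 as PEP 3132, which settled on a list and allows the starred name in any position.

```python
def unpack_rest(iterable, n):
    """Return the first n items individually, then the rest as a tuple.

    A sketch of "a, b, *rest = iterable" with *rest only at the end and
    a fixed result type, per the position taken in this message.
    """
    it = iter(iterable)                  # works for any iterable, not just sequences
    head = [next(it) for _ in range(n)]
    return tuple(head) + (tuple(it),)

a, b, rest = unpack_rest((1, 2, 3, 4, 5), 2)
print(a, b, rest)  # 1 2 (3, 4, 5)
```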
Re: [Python-Dev] Fwd: defaultproperty
Brett Cannon wrote: > On 10/10/05, Barry Warsaw <[EMAIL PROTECTED]> wrote: > >>On Mon, 2005-10-10 at 01:47, Calvin Spealman wrote: >> >> >>>Never created for a reason? lumping things together for having the >>>similar usage semantics, but unrelated purposes, might be something to >>>avoid and maybe that's why it hasn't happened yet for decorators. If >>>ever there was a makethreadsafe decorator, it should go in the thread >>>module, etc. I mean, come on, its like making a module just to store a >>>bunch of unrelated types just to lump them together because they're >>>types. Who wants that? >> >>Like itertools? >> >>+1 for a decorators module. > > > +1 from me as well. And placing defaultproperty in there makes sense > if it is meant to be used as a decorator and not viewed as some spiffy > descriptor. > > Should probably work in Michael's update_meta() function as well > (albeit maybe with a different name since I think I remember Guido > saying he didn't like the name). I thought mimic was a nice name: @mimic(func) def wrapper(*args, **kwds): return func(*args, **kwds) As a location for this, I would actually suggest a module called something like "metatools", rather than "decorators". What these things have in common is that they're about manipulating the way functions and the like interact with the Python language infrastructure - they're tools to make metaprogramming a bit easier. If "contextmanager" isn't made a builtin, this module would also be the place for it. Ditto for any standard context managers (such as closing()) which aren't made builtins. At the moment, the only location for such things is the builtin namespace (e.g. classmethod, staticmethod). Regardless, a short PEP is needed to: a. pick a name for the module b. decide precisely what will be in it for Python 2.5 Cheers, Nick. 
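The mimic decorator Nick sketches can be filled out in a few lines; the attribute list below (__name__, __doc__, __module__, __dict__) is an assumption about what "mimicking" should copy, not a spec. functools.wraps, added in Python 2.5, ended up doing essentially this.

```python
def mimic(func):
    """Return a decorator that makes a wrapper function look like func."""
    def decorator(wrapper):
        wrapper.__name__ = func.__name__
        wrapper.__doc__ = func.__doc__
        wrapper.__module__ = func.__module__
        wrapper.__dict__.update(func.__dict__)
        return wrapper
    return decorator

def func(a, b):
    "Add two things."
    return a + b

@mimic(func)
def wrapper(*args, **kwds):
    return func(*args, **kwds)

print(wrapper.__name__, "-", wrapper.__doc__)  # func - Add two things.
```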
-- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
[Python-Dev] Making Queue.Queue easier to use
The multi-processing discussion reminded me that I have a few problems I run into every time I try to use Queue objects. My first problem is finding it: Py> from threading import Queue # Nope Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: cannot import name Queue Py> from Queue import Queue # Ah, there it is What do people think of the idea of adding an alias to Queue into the threading module so that: a) the first line above works; and b) Queue can be documented with all of the other threading primitives, rather than being off somewhere else in its own top-level section. My second problem is with the current signatures of the put() and get() methods. Specifically, the following code blocks forever instead of raising an Empty exception after 500 milliseconds as one might expect: from Queue import Queue x = Queue() x.get(0.5) I assume the current signature is there for backward compatibility with the original version that didn't support timeouts (considering the difficulty of telling the difference between "x.get(1)" and "True = 1; x.get(True)" from inside the get() method) However, the need to write "x.get(True, 0.5)" seems seriously redundant, given that a single parameter can actually handle all the options (as is currently the case with Condition.wait()). The "put_nowait" and "get_nowait" functions are fine, because they serve a useful documentation purpose at the calling point (particularly given the current clumsy timeout signature). What do people think of the idea of adding "put_wait" and "get_wait" methods with the signatures: put_wait(item[, timeout=None]) get_wait([timeout=None]) Optionally, the existing "put" and "get" methods could be deprecated, with the goal of eventually changing their signature to match the put_wait and get_wait methods above. If people are amenable to these ideas, I should be able to work up a patch for them this week. Regards, Nick. 
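The trap Nick describes is easy to demonstrate (spelled here with the Python 3 module name queue; in 2.x it was Queue). The first positional argument of get() is block, not timeout, so a truthy 0.5 simply means "block forever"; only the keyword form behaves the way a newcomer would expect:

```python
import queue
import time

q = queue.Queue()

# q.get(0.5) would hang: 0.5 is taken as block=True, with no timeout.
# Passing the timeout by keyword gives the expected Empty after ~500 ms:
start = time.monotonic()
try:
    q.get(timeout=0.5)
except queue.Empty:
    print("Empty after %.2f s" % (time.monotonic() - start))
```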
-- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
Re: [Python-Dev] Pythonic concurrency
Donovan Baarda wrote: > On Fri, 2005-10-07 at 23:54, Nick Coghlan wrote: > [...] > >>The few times I have encountered anyone saying anything resembling "threading >>is easy", it was because the full sentence went something like "threading is >>easy if you use message passing and copy-on-send or release-reference-on-send >>to communicate between threads, and limit the shared data structures to those >>required to support the messaging infrastructure". And most of the time there >>was an implied "compared to using semaphores and locks directly, " at the >>start. > > > LOL! So threading is easy if you restrict inter-thread communication to > message passing... and what makes multi-processing hard is your only > inter-process communication mechanism is message passing :-) > > Sounds like yet another reason to avoid threading and use processes > instead... effort spent on threading based message passing > implementations could instead be spent on inter-process messaging. > Actually, I think it makes it worth building a decent message-passing paradigm (like, oh, PEP 342) that can then be scaled using backends with four different levels of complexity: - logical threading (generators) - physical threading (threading.Thread and Queue.Queue) - multiple processing - distributed processing Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
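The second of those four levels is the one that already worked out of the box in 2005. A minimal copy-on-send sketch with threading.Thread and a pair of queues (module names spelled the Python 3 way) might look like:

```python
import queue
import threading

def worker(inbox, outbox):
    # The thread touches only its own locals and the two queues --
    # all communication is by message passing.
    for item in iter(inbox.get, None):   # None is the shutdown sentinel
        outbox.put(item * item)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in range(5):
    inbox.put(n)
inbox.put(None)                          # ask the worker to finish
t.join()
print([outbox.get() for _ in range(5)])  # [0, 1, 4, 9, 16]
```

Swapping the backend for multiple processes would mean replacing the queues and the Thread class while leaving the worker untouched, which is the point of the paradigm.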
Re: [Python-Dev] problem with genexp
Neal Norwitz wrote: > There's a problem with genexp's that I think really needs to get > fixed. See http://python.org/sf/1167751 the details are below. This > code: > I agree with the bug report that the code should either raise a > SyntaxError or do the right thing. I agree it should be a SyntaxError - I believe the AST compiler actually raises one in this situation. Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
Re: [Python-Dev] C.E.R. Thoughts
jamesr wrote: > Congratulations heartily given. I missed the ternary op in c... Way to > go! clean and easy and now i can do: > > if ((sys.argv[1] =='debug') if len(sys.argv) > 1 else False): > pass > > and check variables IF AND ONLY if they exist, in a single line! > > but y'all knew that.. Yep, it was a conscious decision to add a construct with the *potential* to be abused for use in places where the existing "and" and "or" expressions *are* being abused and resulting in buggy code. The code in your example is lousy because it's unreadable (and there are far more readable alternatives like a simple short-circuiting usage of "and"), but at least it's semantically correct (whereas the same can't be said for the current abuse of "and" and "or"). If code using a conditional expression is unclear, blame the programmer for choosing to write the code, don't blame the existence of the conditional expression :) We're-all-adults-here-ly yours, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
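For the record, the short-circuiting spelling Nick alludes to, wrapped in a small function so the two forms can be compared (the function name is made up for illustration):

```python
import sys

def debug_requested(argv=None):
    """True only if a first argument exists and equals 'debug'."""
    if argv is None:
        argv = sys.argv
    # Conditional-expression form from the post:
    #   (argv[1] == 'debug') if len(argv) > 1 else False
    # Equivalent short-circuiting form:
    return len(argv) > 1 and argv[1] == 'debug'

print(debug_requested(['prog', 'debug']))  # True
```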
Re: [Python-Dev] PEP 3000 and exec
Guido van Rossum wrote: > My idea was to make the compiler smarter so that it would recognize > exec() even if it was just a function. > > Another idea might be to change the exec() spec so that you are > required to pass in a namespace (and you can't use locals() either!). > Then the whole point becomes moot. I vote for the latter option. Particularly if something like Namespace objects make their way into the standard lib before Py3k (a Namespace object is essentially designed to provide attribute style lookup into a string-keyed dictionary - you can fake it pretty well with an empty class, but there are a few quirks with doing it that way). Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
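A minimal sketch of such a Namespace object, including the "required namespace" usage with exec that Guido proposes (the class here is illustrative; argparse.Namespace later shipped something very similar):

```python
class Namespace(object):
    """Attribute-style access over a string-keyed dict."""
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

ns = Namespace(x=1)
exec("y = x + 1", ns.__dict__)  # an explicit namespace, never locals()
print(ns.y)  # 2
```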
Re: [Python-Dev] Pythonic concurrency
Bruce Eckel wrote: >>Yes, there's a troublesome meme in the world: "threads are hard". >>They aren't, really. You just have to know what you're doing. > > > I would say that the troublesome meme is that "threads are easy." I > posted an earlier, rather longish message about this. The gist of > which was: "when someone says that threads are easy, I have no idea > what they mean by it." > > Perhaps this means "threads in Python are easier than threads in other > languages." One key thing is that Python is so dynamic that the compiler can't get too fancy with the order in which it does things. However, Python threading has its own traps for the unwary (mainly related to badly-behaved C extensions, but they're still traps). Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
Re: [Python-Dev] Pythonic concurrency
Bruce Eckel wrote: [Bill Janssen] >>Yes, there's a troublesome meme in the world: "threads are hard". >>They aren't, really. You just have to know what you're doing. > But that begs the question, because there is a significant amount of evidence that when it comes to threads "knowing what you are doing" is hard to the point that people can *think* they do when they demonstrably don't! > > I would say that the troublesome meme is that "threads are easy." I > posted an earlier, rather longish message about this. The gist of > which was: "when someone says that threads are easy, I have no idea > what they mean by it." > I would suggest that the truth lies in the middle ground, and would say that "you can get yourself into a lot of trouble using threads without considering the subtleties". It's an area where anything but the most simplistic solutions are almost always wrong at some point. > Perhaps this means "threads in Python are easier than threads in other > languages." > > But I just finished a 150-page chapter on Concurrency in Java which > took many months to write, based on a large chapter on Concurrency in > C++ which probably took longer to write. I keep in reasonably good > touch with some of the threading experts. I can't get any of them to > say that it's easy, even though they really do understand the issues > and think about it all the time. *Because* of that, they say that it's > hard. > > So alright, I'll take the bait that you've laid down more than once, > now. Perhaps you can go beyond saying that "threads really aren't > hard" and explain the aspects of them that seem so easy to you. > Perhaps you can give a nice clear explanation of cache coherency and > memory barriers in multiprocessor machines? Or explain atomicity, > volatility and visibility? Or, even better, maybe you can come up with > a better concurrency model, which is what I think most of us are > looking for in this discussion. 
> The nice thing about Python threads (or rather threading.threads) is that since each thread is an instance it's *relatively* easy to ensure that a thread restricts itself to manipulating thread-local resources (i.e. instance members). This makes it possible to write algorithms parameterized for the number of "worker threads" where the workers are taking their tasks off a Queue with entries generated by a single producer thread. With care, multiple producers can be used. More complex inter-thread communications are problematic, and arbitrary access to foreign-thread state is a nightmare (although the position has been somewhat alleviated by the introduction of threading.local). Beyond the single-producer many-consumers model there is still plenty of room to shoot yourself in the foot. In the case of threads true sophistication is staying away from the difficult cases, an option which unfortunately isn't always available in the real world. regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC www.holdenweb.com PyCon TX 2006 www.python.org/pycon/
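Steve's single-producer/many-consumers pattern, sketched with the current module spelling (queue rather than Queue); the one-sentinel-per-worker shutdown convention is one common choice, not the only one:

```python
import queue
import threading

def worker(tasks, results):
    # Each worker manipulates only its own locals plus the two queues.
    while True:
        n = tasks.get()
        if n is None:            # sentinel: time to shut down
            break
        results.put(n * n)

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for t in workers:
    t.start()
for n in range(20):              # the single producer
    tasks.put(n)
for _ in workers:                # one sentinel per worker
    tasks.put(None)
for t in workers:
    t.join()
print(sorted(results.get() for _ in range(20)))  # squares of 0..19
```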
Re: [Python-Dev] problem with genexp
On 10/11/05, Nick Coghlan <[EMAIL PROTECTED]> wrote: > Neal Norwitz wrote: > > There's a problem with genexp's that I think really needs to get > > fixed. See http://python.org/sf/1167751 the details are below. This > > code: > > I agree with the bug report that the code should either raise a > > SyntaxError or do the right thing. > > I agree it should be a SyntaxError - I believe the AST compiler actually > raises one in this situation. Could someone add a test for this on the AST branch? BTW, it looks like doctest is the way to go for SyntaxError tests. There are older tests, like test_scope.py, that use separate files with bad syntax (and lots of extra kludges in the infrastructure to ignore the fact that those .py files can't be compiled). Jeremy
Re: [Python-Dev] problem with genexp
Nick Coghlan wrote:
> Neal Norwitz wrote:
>
>>There's a problem with genexp's that I think really needs to get
>>fixed. See http://python.org/sf/1167751 the details are below. This
>>code:
>>I agree with the bug report that the code should either raise a
>>SyntaxError or do the right thing.
>
>
> I agree it should be a SyntaxError - I believe the AST compiler actually
> raises one in this situation.
I was half right. Both the normal compiler and the AST compiler give a
SyntaxError if you write:
foo((a=i for i in range(10)))
The problem is definitely on the parser end though:
Py> compiler.parse("foo(x=i for i in range(10))")
Module(None, Stmt([Discard(CallFunc(Name('foo'), [Keyword('x', Name('i'))],
None, None))]))
It's getting to what looks like a valid keyword argument in "x=i" and throwing
the rest of it away, when it should be flagging a syntax error (the parser's
limited lookahead should still be enough to spot the erroneous 'for' keyword
and bail out). The error will be even more obscure if there is an "i" visible
from the location of the function call.
Whereas when it's parenthesised correctly, the parse tree looks more like this:
Py> compiler.parse("foo(x=(i for i in range(10)))")
Module(None, Stmt([Discard(CallFunc(Name('foo'), [Keyword('x',
GenExpr(GenExprInner(Name('i'), [GenExprFor(AssName('i', 'OP_ASSIGN'),
CallFunc(Name('range'), [Const(10)], None, None), [])])))], None, None))]))
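The parser end was indeed fixed eventually; in any modern CPython the unparenthesised form is rejected at compile time, which is easy to check directly:

```python
# The buggy form: a bare genexp after what looks like a keyword argument.
try:
    compile("foo(x=i for i in range(10))", "<test>", "exec")
    outcome = "accepted"
except SyntaxError as exc:
    outcome = "rejected: %s" % exc.msg
print(outcome)

# Properly parenthesised, it compiles fine:
compile("foo(x=(i for i in range(10)))", "<test>", "exec")
```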
Cheers,
Nick.
P.S. I added a comment showing the parser output to the SF bug report.
--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
---
http://boredomandlaziness.blogspot.com
Re: [Python-Dev] PythonCore\CurrentVersion
[Tim Peters] >>> never before this year -- maybe sys.path _used_ to contain the current >>> directory on Linux?). [Fred L. Drake, Jr.] >> It's been a long time since this was the case on Unix of any variety; I >> *think* this changed to the current state back before 2.0. [Martin v. Löwis] > Please check again: > > [GCC 4.0.2 20050821 (prerelease) (Debian 4.0.1-6)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import sys > >>> sys.path > ['', '/usr/lib/python23.zip', '/usr/lib/python2.3', > '/usr/lib/python2.3/plat-linux2', '/usr/lib/python2.3/lib-tk', > '/usr/lib/python2.3/lib-dynload', > '/usr/local/lib/python2.3/site-packages', > '/usr/lib/python2.3/site-packages', > '/usr/lib/python2.3/site-packages/Numeric', > '/usr/lib/python2.3/site-packages/gtk-2.0', '/usr/lib/site-python'] > > We still have the empty string in sys.path, and it still > denotes the current directory. Well, that's in interactive mode, and I see sys.path[0] == "" on both Windows and Linux then. I don't see "" in sys.path on either box in batch mode, although I do see the absolutized path to the current directory in sys.path in batch mode on Windows but not on Linux -- but Mark Hammond says he doesn't see (any form of) the current directory in sys.path in batch mode on Windows. It's a bit confusing ;-)
Re: [Python-Dev] Pythonic concurrency
Steve Holden wrote: > The nice thing about Python threads (or rather threading.threads) is > that since each thread is an instance it's *relatively* easy to ensure > that a thread restricts itself to manipulating thread-local resources > (i.e. instance members). > > This makes it possible to write algorithms parameterized for the number > of "worker threads" where the workers are taking their tasks off a Queue > with entries generated by a single producer thread. With care, multiple > producers can be used. More complex inter-thread communications are > problematic, and arbitrary access to foreign-thread state is a nightmare > (although the position has been somewhat alleviated by the introduction > of threading.local). "Somewhat alleviated" and somewhat worsened. I've had half a dozen conversations in the last year about sharing data between threads; in every case, I've had to work quite hard to convince the other person that threading.local is *not* magic pixie thread dust. Each time, they had come to the conclusion that if they had a global variable, they could just stick a reference to it into a threading.local object and instantly have safe, concurrent access to it. Robert Brewer System Architect Amor Ministries [EMAIL PROTECTED]
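Robert's point is easy to demonstrate: a threading.local is just a per-thread attribute namespace. An attribute set in one thread is invisible in another, and any object you do store there is still the same unprotected shared object:

```python
import threading

local = threading.local()
shared = {"hits": 0}
local.cache = shared        # stores a *reference*, in this thread only

def check(results):
    # The attribute namespace is per-thread, so the new thread sees no
    # 'cache' at all -- and had it stored the same dict, that dict would
    # still be one shared, unlocked object.
    results.append(hasattr(local, "cache"))

results = []
t = threading.Thread(target=check, args=(results,))
t.start()
t.join()
print(results)  # [False] -- no magic pixie thread dust
```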
Re: [Python-Dev] Extending tuple unpacking
Greg Ewing wrote: > Guido van Rossum wrote: > > >>BTW, what should >> >>[a, b, *rest] = (1, 2, 3, 4, 5) >> >>do? Should it set rest to (3, 4, 5) or to [3, 4, 5]? > > > Whatever type is chosen, it should be the same type, always. > The rhs could be any iterable, not just a tuple or a list. > Making a special case of preserving one or two types doesn't > seem worth it to me. And, for consistency with functions, the type chosen should be a tuple. I'm also trying to figure out why you would ever write: [a, b, c, d] = seq instead of: a, b, c, d = seq or: (a, b, c, d) = seq It's not like the square brackets generate different code: Py> def foo(): ... x, y = 1, 2 ... (x, y) = 1, 2 ... [x, y] = 1, 2 ... Py> dis.dis(foo) 2 0 LOAD_CONST 3 ((1, 2)) 3 UNPACK_SEQUENCE 2 6 STORE_FAST 1 (x) 9 STORE_FAST 0 (y) 3 12 LOAD_CONST 4 ((1, 2)) 15 UNPACK_SEQUENCE 2 18 STORE_FAST 1 (x) 21 STORE_FAST 0 (y) 4 24 LOAD_CONST 5 ((1, 2)) 27 UNPACK_SEQUENCE 2 30 STORE_FAST 1 (x) 33 STORE_FAST 0 (y) 36 LOAD_CONST 0 (None) 39 RETURN_VALUE So my vote would actually go for deprecating the use of square brackets to surround an assignment target list - it makes it look like an actual list object should be involved somewhere, but there isn't one. >>? And then perhaps >> >>*rest = x >> >>should mean >> >>rest = tuple(x) >> >>Or should that be disallowed > > Why bother? What harm would result from the ability to write that? Given that: def foo(*args): print args is legal, I would have no problem with "*rest = x" being legal. >>There certainly is a need for doing the same from the end: >> >>*rest, a, b = (1, 2, 3, 4, 5) > > > I wouldn't mind at all if *rest were only allowed at the end. > There's a pragmatic reason for that if nothing else: the rhs > can be any iterable, and there's no easy way of getting "all > but the last n" items from a general iterable. Agreed. 
The goal here is to make the name binding rules consistent between for loops, tuple assignment and function entry, not to create different rules. >>Where does it stop? > For me, it stops with *rest only allowed at the end, and > always yielding a predictable type (which could be either tuple > or list, I don't care). For me, it stops when the rules for positional name binding are more consistent across operations that bind names (although complete consistency isn't possible, given that function calls don't unpack sequences automatically). Firstly, let's list the operations that permit name binding to a list of identifiers: - binding of function parameters to function arguments - binding of assignment target list to assigned sequence - binding of iteration variables to iteration values However, that function argument case needs to be recognised as a two step operation, whereby the arguments are *always* packed into a tuple before being bound to the parameters. That is something very vaguely like: if numargs > 0: if numargs == 1: argtuple = args, # One argument gives singleton tuple else: argtuple = args # More arguments gives appropriate tuple argtuple += tuple(starargs) # Extended arguments are added to the tuple param1, param2, *rest = argtuple # Tuple is unpacked to parameters This means that the current behaviour of function parameters is actually the same as assignment target lists and iteration variables, in that the argument tuple is *always* unpacked into the parameter list - the only difference is that a single argument is always considered a singleton tuple. You can get the same behaviour with target lists and iteration variables by only using tuples of identifiers as targets (i.e., use "x," rather than just "x"). 
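The correspondence Nick describes can be seen directly by comparing function entry with the proposed assignment form (the star-assignment line is shown only in a comment, since it was not yet legal syntax at the time this was written):

```python
def f(param1, param2, *rest):
    # Function entry: the arguments are packed into a tuple, then
    # unpacked into the parameter list, with *rest taking the tail.
    return param1, param2, rest

print(f(1, 2, 3, 4, 5))  # (1, 2, (3, 4, 5))

# The proposal would make the assignment spelling behave the same way:
#     param1, param2, *rest = (1, 2, 3, 4, 5)
```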
So the proposal at this stage is simply to mimic the unpacking of the argument tuple into the formal parameter list in the other two name list binding cases, such that the pseudocode above would actually do the same thing as building an argument list and binding it to its formal parameters does. Now, when it came to tuple *packing* syntax (i.e., extended call syntax) The appropriate behaviour would be for: 1, 2, 3, *range(10) to translate (roughly) to: (1, 2, 3) + tuple(range(10)) However, given that the equivalent code works just fine anywhere it really matters (assignment value, return value, yield value), and is clearer about what is going on, this option is probably worth avoiding. >>BTW, and quite unrelated, I've always felt uncomfortable that you have to >>write >> >>f(a, b, foo=1, bar=2, *args, **kwds) >> >>I've always wanted to write that as >> >>f
Re: [Python-Dev] Extending tuple unpacking
Nick Coghlan wrote: > For me, it stops when the rules for positional name binding are more > consistent across operations that bind names (although complete consistency > isn't possible, given that function calls don't unpack sequences > automatically). Oops - forgot to delete this bit once I realised that functions actually *do* unpack the argument tuple automatically. It's just that an argument which is a single sequence gets put into a singleton tuple before being unpacked. Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
Re: [Python-Dev] Pythonic concurrency
Robert Brewer wrote: > "Somewhat alleviated" and somewhat worsened. I've had half a dozen > conversations in the last year about sharing data between threads; in > every case, I've had to work quite hard to convince the other person > that threading.local is *not* magic pixie thread dust. Each time, they > had come to the conclusion that if they had a global variable, they > could just stick a reference to it into a threading.local object and > instantly have safe, concurrent access to it. Ouch. Copy, yes, reference, no. . . Cheers, Nick. -- Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia --- http://boredomandlaziness.blogspot.com
Re: [Python-Dev] Making Queue.Queue easier to use
On 10/11/05, Nick Coghlan <[EMAIL PROTECTED]> wrote: > The multi-processing discussion reminded me that I have a few problems I run > into every time I try to use Queue objects. > > My first problem is finding it: > > Py> from threading import Queue # Nope > Traceback (most recent call last): >File "<stdin>", line 1, in ? > ImportError: cannot import name Queue > Py> from Queue import Queue # Ah, there it is I don't think that's a reason to move it. >>> from sys import Queue ImportError: cannot import name Queue >>> from os import Queue ImportError: cannot import name Queue >>> # Well where the heck is it?! > What do people think of the idea of adding an alias to Queue into the > threading module so that: > a) the first line above works; and I see no need. Code that *doesn't* need Queue but does use threading shouldn't have to pay for loading Queue.py. > b) Queue can be documented with all of the other threading primitives, >rather than being off somewhere else in its own top-level section. Do top-level sections have to limit themselves to a single module? Even if they do, I think it's fine to plant a prominent link to the Queue module. You can't really expect people to learn how to use threads wisely from reading the library reference anyway. > My second problem is with the current signatures of the put() and get() > methods. Specifically, the following code blocks forever instead of raising an > Empty exception after 500 milliseconds as one might expect: >from Queue import Queue >x = Queue() >x.get(0.5) I'm not sure if I have much sympathy with a bug due to refusing to read the docs... :) > I assume the current signature is there for backward compatibility with the > original version that didn't support timeouts (considering the difficulty of > telling the difference between "x.get(1)" and "True = 1; x.get(True)" from > inside the get() method) Huh? What a bizarre idea. Why would you do that? I guess I don't understand where you're coming from. 
> However, the need to write "x.get(True, 0.5)" seems seriously redundant, given > that a single parameter can actually handle all the options (as is currently > the case with Condition.wait()). So write x.get(timeout=0.5). That's clear and unambiguous. > The "put_nowait" and "get_nowait" functions are fine, because they serve a > useful documentation purpose at the calling point (particularly given the > current clumsy timeout signature). > > What do people think of the idea of adding "put_wait" and "get_wait" methods > with the signatures: >put_wait(item[, timeout=None]) >get_wait([timeout=None]) -1. I'd rather not tweak the current Queue module at all until Python 3000. Then we could force people to use keyword args. > Optionally, the existing "put" and "get" methods could be deprecated, with the > goal of eventually changing their signature to match the put_wait and get_wait > methods above. Apart from trying to guess the API without reading the docs (:-), what are the use cases for using put/get with a timeout? I have a feeling it's not that common. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] PythonCore\CurrentVersion
On 10/11/05, Tim Peters <[EMAIL PROTECTED]> wrote: > Well, that's in interactive mode, and I see sys.path[0] == "" on both > Windows and Linux then. I don't see "" in sys.path on either box in > batch mode, although I do see the absolutized path to the current > directory in sys.path in batch mode on Windows but not on Linux -- but > Mark Hammond says he doesn't see (any form of) the current directory > in sys.path in batch mode on Windows. > > It's a bit confusing ;-) How did you test batch mode? All: sys.path[0] is *not* defined to be the current directory. It is defined to be the directory of the script that was used to invoke python (sys.argv[0], typically). If there is no script, or it is being read from stdin, the default is ''. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] PythonCore\CurrentVersion
[Tim] >> Well, that's in interactive mode, and I see sys.path[0] == "" on both >> Windows and Linux then. I don't see "" in sys.path on either box in >> batch mode, although I do see the absolutized path to the current >> directory in sys.path in batch mode on Windows but not on Linux -- but >> Mark Hammond says he doesn't see (any form of) the current directory >> in sys.path in batch mode on Windows. >> >> It's a bit confusing ;-) [Guido] > How did you test batch mode? I gave full code (it's brief) and screen-scrapes from Windows and Linux yesterday: http://mail.python.org/pipermail/python-dev/2005-October/057162.html By batch mode, I meant invoking path_to_python path_to_python_script.py from a shell prompt. > All: > > sys.path[0] is *not* defined to be the current directory. > > It is defined to be the directory of the script that was used to > invoke python (sys.argv[0], typically). In my runs, sys.argv[0] was the path to the Python executable, not to the script being run. The directory of the script being run was nevertheless in sys.path[0] on both Windows and Linux. On Windows, but not on Linux, the _current_ directory (the directory I happened to be in at the time I invoked Python) was also on sys.path; Mark Hammond said it was not when he tried, but he didn't show exactly what he did so I'm not sure what he saw. > If there is no script, or it is being read from stdin, the default is ''. I believe everyone sees that. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PythonCore\CurrentVersion
On 10/11/05, Tim Peters <[EMAIL PROTECTED]> wrote: > [Tim] > >> Well, that's in interactive mode, and I see sys.path[0] == "" on both > >> Windows and Linux then. I don't see "" in sys.path on either box in > >> batch mode, although I do see the absolutized path to the current > >> directory in sys.path in batch mode on Windows but not on Linux -- but > >> Mark Hammond says he doesn't see (any form of) the current directory > >> in sys.path in batch mode on Windows. > >> > >> It's a bit confusing ;-) > > [Guido] > > How did you test batch mode? > > I gave full code (it's brief) and screen-scrapes from Windows and > Linux yesterday: > > http://mail.python.org/pipermail/python-dev/2005-October/057162.html > > By batch mode, I meant invoking > > path_to_python path_to_python_script.py > > from a shell prompt. > > > All: > > > > sys.path[0] is *not* defined to be the current directory. > > > > It is defined to be the directory of the script that was used to > > invoke python (sys.argv[0], typically). > > In my runs, sys.argv[0] was the path to the Python executable, not to > the script being run. I tried your experiment but added 'print sys.argv[0]' and didn't see that. sys.argv[0] is the path to the script. > The directory of the script being run was > nevertheless in sys.path[0] on both Windows and Linux. On Windows, > but not on Linux, the _current_ directory (the directory I happened to > be in at the time I invoked Python) was also on sys.path; Mark Hammond > said it was not when he tried, but he didn't show exactly what he did > so I'm not sure what he saw. I see what you see. The first entry is the script's directory, the 2nd is a nonexistent zip file, the 3rd is the current directory, then the rest is standard library stuff. I suppose PC/getpathp.c puts it there, per your post quoted above? 
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Extending tuple unpacking
Nick Coghlan wrote:
> Greg Ewing wrote:
>
>>Guido van Rossum wrote:
>>
>>
>>
>>>BTW, what should
>>>
>>> [a, b, *rest] = (1, 2, 3, 4, 5)
>>>
>>>do? Should it set rest to (3, 4, 5) or to [3, 4, 5]?
>>
>>
>>Whatever type is chosen, it should be the same type, always.
>>The rhs could be any iterable, not just a tuple or a list.
>>Making a special case of preserving one or two types doesn't
>>seem worth it to me.
>
>
> And, for consistency with functions, the type chosen should be a tuple.
>
> I'm also trying to figure out why you would ever write:
>    [a, b, c, d] = seq
>
> instead of:
>    a, b, c, d = seq
>
> or:
>    (a, b, c, d) = seq
>
[...]
> So my vote would actually go for deprecating the use of square brackets to
> surround an assignment target list - it makes it look like an actual list
> object should be involved somewhere, but there isn't one.
>
But don't forget that at present unpacking can be used at several levels:
>>> ((a, b), c) = ((1, 2), 3)
>>> a, b, c
(1, 2, 3)
>>>
So presumably you'd need to be able to say
((a, *b), c, *d) = ((1, 2, 3), 4, 5, 6)
and see
a, b, c, d == 1, (2, 3), 4, (5, 6)
if we are to retain today's multi-level consistency. And are you also
proposing to allow
a, *b = [1]
to put the empty list into b, or is that an unpacking error?
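For context on where these questions eventually led: Python 3.0 adopted extended unpacking (PEP 3132) with the starred name always bound to a list, allowed in any position of the target list, and permitted to soak up nothing — a sketch:

```python
# Semantics as adopted in Python 3 (PEP 3132): the starred target
# always gets a list, regardless of the iterable on the right.
((a, *b), c, *d) = ((1, 2, 3), 4, 5, 6)
assert (a, b, c, d) == (1, [2, 3], 4, [5, 6])

# A starred name may also soak up nothing (not an unpacking error):
x, *rest = [1]
assert (x, rest) == (1, [])

# And, unlike the "end only" proposal, it may appear first:
*init, last = (1, 2, 3, 4, 5)
assert (init, last) == ([1, 2, 3, 4], 5)
```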
>
>>>? And then perhaps
>>>
>>> *rest = x
>>>
>>>should mean
>>>
>>> rest = tuple(x)
>>>
>>>Or should that be disallowed
>>
>>Why bother? What harm would result from the ability to write that?
>
>
> Given that:
>    def foo(*args):
>        print args
>
> is legal, I would have no problem with "*rest = x" being legal.
>
Though presumably we'd still be raising TypeError if x weren't a sequence.
>
>>>There certainly is a need for doing the same from the end:
>>>
>>> *rest, a, b = (1, 2, 3, 4, 5)
>>
>>
>>I wouldn't mind at all if *rest were only allowed at the end.
>>There's a pragmatic reason for that if nothing else: the rhs
>>can be any iterable, and there's no easy way of getting "all
>>but the last n" items from a general iterable.
>
>
> Agreed. The goal here is to make the name binding rules consistent between
> for
> loops, tuple assigment and function entry, not to create different rules.
>
>
>>>Where does it stop?
>>
>>For me, it stops with *rest only allowed at the end, and
>>always yielding a predictable type (which could be either tuple
>>or list, I don't care).
>
>
> For me, it stops when the rules for positional name binding are more
> consistent across operations that bind names (although complete consistency
> isn't possible, given that function calls don't unpack sequences
> automatically).
>
Hmm. Given that today we can write
>>> def foo((a, b), c):
... print a, b, c
...
>>> foo((1, 2, 3))
Traceback (most recent call last):
File "", line 1, in ?
TypeError: foo() takes exactly 2 arguments (1 given)
>>> foo((1, 2), 3)
1 2 3
>>>
does this mean that you'd also like to be able to write
def foo((a, *b), *c):
print a, b, c
and then call it like
foo((1, 2, 3, 4), 5, 6)
to see
1, (2, 3, 4), (5, 6)
[...]
>
>>>BTW, and quite unrelated, I've always felt uncomfortable that you have to
>>>write
>>>
>>> f(a, b, foo=1, bar=2, *args, **kwds)
>>>
>>>I've always wanted to write that as
>>>
>>> f(a, b, *args, foo=1, bar=2, **kwds)
>>
>>
>>Yes, I'd like that too, with the additional meaning that
>>foo and bar can only be specified by keyword, not by
>>position.
>
>
> Indeed. It's a (minor) pain that optional flag variables and variable length
> argument lists are currently mutually exclusive. Although, if you had that
> rule, I'd want to be able to write:
>
>    def f(a, b, *, foo=1, bar=2): pass
>
> to get a function which required exactly two positional arguments, but had a
> couple of optional keyword arguments, rather than having to do:
>
>    def f(a, b, *args, foo=1, bar=2):
>        if args:
>            raise TypeError("f() takes exactly 2 positional arguments (%d given)"
>                            % (2 + len(args)))
>
I do feel that for Python 3 it might be better to make a clean
separation between keywords and positionals: in other words, if the
function definition specifies a keyword argument then a keyword must be
used to present it.
This would allow users to provide an arbitrary number of positionals
rather than having them become keyword arguments. At present it's
difficult to specify that.
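Nick's bare-star spelling is, in fact, what Python 3 later adopted as keyword-only arguments (PEP 3102); a quick sketch of the behaviour under discussion:

```python
# Keyword-only parameters as adopted in Python 3 (PEP 3102):
# a bare * ends the positional parameter list.
def f(a, b, *, foo=1, bar=2):
    return a, b, foo, bar

assert f(1, 2) == (1, 2, 1, 2)
assert f(1, 2, bar=5) == (1, 2, 1, 5)

# Exactly two positional arguments are accepted; no hand-written
# TypeError-raising boilerplate is needed:
try:
    f(1, 2, 3)
except TypeError:
    pass
else:
    raise AssertionError("extra positional argument was accepted")
```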
regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/
Re: [Python-Dev] Extending tuple unpacking
(my own 2 eurocents) > I do feel that for Python 3 it might be better to make a clean > separation between keywords and positionals: in other words, if the > function definition specifies a keyword argument then a keyword must be > used to present it. Do you mean it would also be forbidden to invoke a "positional" argument using its keyword? It would be a huge step back in usability IMO. Some people like invoking by position (because it's shorter) and some others prefer invoking by keyword (because it's more explicit). Why should the implementer of the API have to make a choice for the user of the API?
Re: [Python-Dev] Pythonic concurrency
> Java's condition variables don't (didn't? has this been fixed?) quite > work. The emphasis on portability and the resulting notions of > red/green threading packages at the beginning didn't help either. > Read Allen Holub's book. And Doug Lea's book. I understand much of > this has been addressed with a new package in Java 1.5. Not only are there significant new library components in java.util.concurrent in J2SE5, but perhaps more important is the new memory model that deals with issues that are (especially) revealed in multiprocessor environments. The new memory model represents new work in the computer science field; apparently the original paper is written by Ph.D.s and is a bit too theoretical for the normal person to follow. But the smart threading guys studied this and came up with the new Java memory model -- so that volatile, for example, which didn't work quite right before, does now. This is part of J2SE5, and this work is being incorporated into the upcoming C++0x. Java concurrency is certainly one of the bad examples of language design. Apparently, they grabbed stuff from C++ (mostly the volatile keyword) and combined it with what they knew about pthreads, and decided that being able to declare a method as synchronized made the whole thing object-oriented. But you can see how ill-thought-out the design was because in later versions of Java some fundamental methods: stop(), suspend(), resume() and destroy(), were deprecated because ... oops, we didn't really think those out very well. And then finally, with J2SE5, it *appears* that all the kinks have been fixed, but only with some really smart folks like Doug Lea, Brian Goetz, and that gang, working long and hard on all these issues and (we hope) figuring them all out. I think threading *can* be much simpler, and I *want* it to be that way in Python. But that can only happen if the right model is chosen, and that model is not pthreads.
People migrate to pthreads if they already understand it and so it might seem "simple" to them because of that. But I think we need something that supports an object-oriented approach to concurrency that doesn't prevent beginners from using it safely. Bruce Eckel
Re: [Python-Dev] Making Queue.Queue easier to use
Guido van Rossum <[EMAIL PROTECTED]> wrote: > > Optionally, the existing "put" and "get" methods could be deprecated, with > > the > > goal of eventually changing their signature to match the put_wait and > > get_wait > > methods above. > > Apart from trying to guess the API without reading the docs (:-), what > are the use cases for using put/get with a timeout? I have a feeling > it's not that common. With timeout=0, a shared connection/resource pool (perhaps DB, etc., I use one in the tuple space implementation I have for connections to the tuple space). Note that technically speaking, Queue.Queue from Pythons prior to 2.4 is broken: get_nowait() may not get an object even if the Queue is full, this is caused by "elif not self.esema.acquire(0):" being called for non-blocking requests. Tim did more than simplify the structure by rewriting it, he fixed this bug. With block=True, timeout=None, worker threads pulling from a work-to-do queue, and even a thread which handles the output of those threads via a result queue. - Josiah ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Extending tuple unpacking
Nick Coghlan wrote: > So my vote would actually go for deprecating the use of square brackets to > surround an assignment target list - it makes it look like an actual list > object should be involved somewhere, but there isn't one. I've found myself using square brackets a few times for more complicated unpacking, e.g.: try: x, y = args except ValueError: [x], y = args, None where I thought that (x,), y = args, None would have been more confusing. OTOH, I usually end up rewriting this to x, = args y = None because even the bracketed form is a bit confusing. So I wouldn't really be upset if the brackets went away. STeVe -- You can wordify anything if you just verb it. --- Bucky Katt, Get Fuzzy ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Python-Dev Digest, Vol 27, Issue 44
> Date: Tue, 11 Oct 2005 09:51:06 -0400 > From: Tim Peters <[EMAIL PROTECTED]> > Subject: Re: [Python-Dev] PythonCore\CurrentVersion > To: Martin v. Löwis <[EMAIL PROTECTED]> > Cc: [email protected] > Message-ID: <[EMAIL PROTECTED]> > Content-Type: text/plain; charset=ISO-8859-1 > > [Tim Peters] > >>> never before this year -- maybe sys.path _used_ to contain the current > >>> directory on Linux?). > > [Fred L. Drake, Jr.] > >> It's been a long time since this was the case on Unix of any variety; I > >> *think* this changed to the current state back before 2.0. > > [Martin v. Löwis] > > Please check again: > > > > [GCC 4.0.2 20050821 (prerelease) (Debian 4.0.1-6)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import sys > > >>> sys.path > > ['', '/usr/lib/python23.zip', '/usr/lib/python2.3', > > '/usr/lib/python2.3/plat-linux2', '/usr/lib/python2.3/lib-tk', > > '/usr/lib/python2.3/lib-dynload', > > '/usr/local/lib/python2.3/site-packages', > > '/usr/lib/python2.3/site-packages', > > '/usr/lib/python2.3/site-packages/Numeric', > > '/usr/lib/python2.3/site-packages/gtk-2.0', '/usr/lib/site-python'] > > > > We still have the empty string in sys.path, and it still > > denotes the current directory. > > Well, that's in interactive mode, and I see sys.path[0] == "" on both > Windows and Linux then. I don't see "" in sys.path on either box in > batch mode, although I do see the absolutized path to the current > directory in sys.path in batch mode on Windows but not on Linux -- but > Mark Hammond says he doesn't see (any form of) the current directory > in sys.path in batch mode on Windows. > > It's a bit confusing ;-) > Been bit by this in the past. On windows, it's a relative path in interactive mode and absolute path in non-interactive mode.
Re: [Python-Dev] Pythonic concurrency
"Robert Brewer" <[EMAIL PROTECTED]> wrote: > "Somewhat alleviated" and somewhat worsened. I've had half a dozen > conversations in the last year about sharing data between threads; in > every case, I've had to work quite hard to convince the other person > that threading.local is *not* magic pixie thread dust. Each time, they > had come to the conclusion that if they had a global variable, they > could just stick a reference to it into a threading.local object and > instantly have safe, concurrent access to it. *boggles* Perhaps there should be an entry in the documentation about this. Here is a proposed modification. Despite desires and assumptions to the contrary, threading.local is not magic. Placing references to global shared objects into threading.local will not make them magically threadsafe. Only by using threadsafe shared objects (by design with Queue.Queue, or by desire with lock.acquire()/release() placed around object accesses) will you have the potential for doing safe things. - Josiah ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
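A short sketch of the distinction (in today's spelling): `threading.local` gives each thread its own private storage slot, while a genuinely shared object still needs a lock (or a Queue) around every access:

```python
import threading

# threading.local gives each thread its OWN slot; it does not make a
# shared object safe. The shared list below still needs an explicit lock.
local = threading.local()
shared = []
lock = threading.Lock()

def worker(n):
    local.value = n        # private to this thread; invisible to others
    with lock:             # the shared list must be guarded explicitly
        shared.append(local.value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(shared) == [0, 1, 2, 3]
# The main thread never set local.value, so it has no such attribute:
assert not hasattr(local, "value")
```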
Re: [Python-Dev] Making Queue.Queue easier to use
[Guido]
>> Apart from trying to guess the API without reading the docs (:-), what
>> are the use cases for using put/get with a timeout? I have a feeling
>> it's not that common.
[Josiah Carlson]
> With timeout=0, a shared connection/resource pool (perhaps DB, etc., I
> use one in the tuple space implementation I have for connections to the
> tuple space).
Passing timeout=0 is goofy: use {get,put}_nowait() instead. There's
no difference in semantics.
> Note that technically speaking, Queue.Queue from Pythons
> prior to 2.4 is broken: get_nowait() may not get an object even if the
> Queue is full, this is caused by "elif not self.esema.acquire(0):" being
> called for non-blocking requests. Tim did more than simplify the
> structure by rewriting it, he fixed this bug.
I don't agree it was a bug, but I did get fatally weary of arguing
with people who insisted it was ;-) It's certainly easier to explain
(and the code is easier to read) now.
> With block=True, timeout=None, worker threads pulling from a work-to-do
> queue, and even a thread which handles the output of those threads via
> a result queue.
Guido understands use cases for blocking and non-blocking put/get, and
Queue always supported those possibilities. The timeout argument got
added later, and it's not really clear _why_ it was added. timeout=0
isn't a sane use case (because the same effect can be gotten with
non-blocking put/get).
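The equivalence Tim describes can be seen directly in today's `queue` module (the Python 3 spelling of `Queue`), where `timeout=0` is accepted and behaves like the non-blocking calls:

```python
import queue  # the Python 3 name for the Queue module

q = queue.Queue()
q.put("job")
assert q.get_nowait() == "job"   # get_nowait() is just get(block=False)

# On an empty queue, both non-blocking spellings and timeout=0 raise
# Empty instead of waiting:
for attempt in (q.get_nowait,
                lambda: q.get(block=False),
                lambda: q.get(timeout=0)):
    try:
        attempt()
    except queue.Empty:
        pass
    else:
        raise AssertionError("expected queue.Empty")
```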
Re: [Python-Dev] Making Queue.Queue easier to use
On 10/11/05, Tim Peters <[EMAIL PROTECTED]> wrote: > Guido understands use cases for blocking and non-blocking put/get, and > Queue always supported those possibilities. The timeout argument got > added later, and it's not really clear _why_ it was added. timeout=0 > isn't a sane use case (because the same effect can be gotten with > non-blocking put/get). In the socket world, a similar bifurcation of the API has happened (also under my supervision, even though the idea and prototype code were contributed by others). The API there is very different because the blocking or timeout is an attribute of the socket, not passed in to every call. But one lesson we can learn from sockets (or perhaps the reason why people kept asking for timeout=0 to be "fixed" :) is that timeout=0 is just a different way to spell blocking=False. The socket module makes sure that the socket ends up in exactly the same state no matter which API is used; and in fact the setblocking() API is redundant. -- --Guido van Rossum (home page: http://www.python.org/~guido/) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Extending tuple unpacking
Greg Ewing wrote: > Guido van Rossum wrote: > >> BTW, what should >> >> [a, b, *rest] = (1, 2, 3, 4, 5) >> >> do? Should it set rest to (3, 4, 5) or to [3, 4, 5]? > > Whatever type is chosen, it should be the same type, always. > The rhs could be any iterable, not just a tuple or a list. > Making a special case of preserving one or two types doesn't > seem worth it to me. I don't think that [a, b, c] = iterable is good style right now, so I'd say that [a, b, *rest] = iterable should be disallowed or be the same as with parentheses. It's not intuitive that rest could be a list here. >> ? And then perhaps >> >> *rest = x >> >> should mean >> >> rest = tuple(x) >> >> Or should that be disallowed > > Why bother? What harm would result from the ability to write that? > >> There certainly is a need for doing the same from the end: >> >> *rest, a, b = (1, 2, 3, 4, 5) > > I wouldn't mind at all if *rest were only allowed at the end. > There's a pragmatic reason for that if nothing else: the rhs > can be any iterable, and there's no easy way of getting "all > but the last n" items from a general iterable. > >> Where does it stop? > > For me, it stops with *rest only allowed at the end, and > always yielding a predictable type (which could be either tuple > or list, I don't care). +1. Tuple is more consistent. >> BTW, and quite unrelated, I've always felt uncomfortable that you have to >> write >> >> f(a, b, foo=1, bar=2, *args, **kwds) >> >> I've always wanted to write that as >> >> f(a, b, *args, foo=1, bar=2, **kwds) > > Yes, I'd like that too, with the additional meaning that > foo and bar can only be specified by keyword, not by > position. That would be a logical consequence. But one should also be able to give default values for positional parameters. So: foo(a, b, c=1, *args, d=2, e=5, **kwargs) (here c may be given positionally or by keyword, while d and e may only be given by keyword) Reinhold -- Mail address is perfectly valid!
Re: [Python-Dev] PythonCore\CurrentVersion
[Guido] > I tried your experiment but added 'print sys.argv[0]' and didn't see > that. sys.argv[0] is the path to the script. My mistake! You're right, sys.argv[0] is the path to the script for me too. [Tim] >> The directory of the script being run was >> nevertheless in sys.path[0] on both Windows and Linux. On Windows, >> but not on Linux, the _current_ directory (the directory I happened to >> be in at the time I invoked Python) was also on sys.path; Mark Hammond >> said it was not when he tried, but he didn't show exactly what he did >> so I'm not sure what he saw. [Guido] > I see what you see. The first entry is the script's directory, the > 2nd is a nonexistent zip file, the 3rd is the current directory, then > the rest is standard library stuff. So why doesn't Mark see that? I'll ask him ;-) > I suppose PC/getpathp.c puts it there, per your post quoted above? I don't think it does (although I understand why it's sane to believe that it must). Curiously, I do _not_ see the current directory on sys.path on Windows if I run from current CVS HEAD. I do see it running Pythons 2.2.3, 2.3.5 and 2.4.2. PC/getpathp.c doesn't appear to have changed in a relevant way. 
blor.py:

"""
import sys
from pprint import pprint
print sys.version_info
pprint(sys.path)
"""

C:\>\code\python\PCbuild\python.exe code\blor.py    # C:\ not in sys.path
(2, 5, 0, 'alpha', 0)
['C:\\code', 'C:\\code\\python\\PCbuild\\python25.zip', 'C:\\code\\python\\DLLs', 'C:\\code\\python\\lib', 'C:\\code\\python\\lib\\plat-win', 'C:\\code\\python\\lib\\lib-tk', 'C:\\code\\python\\PCbuild', 'C:\\code\\python', 'C:\\code\\python\\lib\\site-packages']

C:\>\python24\python.exe code\blor.py    # C:\ in sys.path
(2, 4, 2, 'final', 0)
['C:\\code', 'C:\\python24\\python24.zip', 'C:\\', 'C:\\python24\\DLLs', 'C:\\python24\\lib', 'C:\\python24\\lib\\plat-win', 'C:\\python24\\lib\\lib-tk', 'C:\\python24', 'C:\\python24\\lib\\site-packages', 'C:\\python24\\lib\\site-packages\\PIL', 'C:\\python24\\lib\\site-packages\\win32', 'C:\\python24\\lib\\site-packages\\win32\\lib', 'C:\\python24\\lib\\site-packages\\Pythonwin']
Re: [Python-Dev] Proposed changes to PEP 343
On 10/7/05, Fredrik Lundh <[EMAIL PROTECTED]> wrote:
> the whole concept might be perfectly fine on the "this construct corre-
> sponds to this code" level, but if you immediately end up with things that
> are not what they seem, and names that don't mean what the say, either
> the design or the description of it needs work.
>
> ("yes, I know you can use this class to manage the context, but it's not
> really a context manager, because it's that method that's a manager, not
> the class itself. yes, all the information that belongs to the context are
> managed by the class, but that doesn't make... oh, shut up and read the
> PEP")
Good points... Maybe it is the description that needs work.
Here is a description of iterators, to illustrate the parallels:
An object that has an __iter__ method is iterable. It can plug
into the Python 'for' statement. obj.__iter__() returns an
iterator. An iterator is a single-use, forward-only view of a
sequence. 'for' calls __iter__() and uses the resulting
iterator's next() method.
(This is just as complicated as PEP343+changes, but not as
mindboggling, because the terminology is better. Also because
we're used to iterators.)
Now contexts, per PEP 343 with Nick's proposed changes:
An object that has a __with__ method is a context. It can plug
into the Python 'with' statement. obj.__with__() returns a
context manager. A context manager is a single-use object that
manages a single visit into a context. 'with' calls __with__()
and uses the resulting context manager's __enter__() and
__exit__() methods.
A contextmanager is a function that returns a new context manager.
Okay, that last bit is weird. But note that PEP 343 has this oddness
even without the proposed changes. Perhaps either "context manager"
or contextmanager should be renamed, regardless of whether Nick's
changes are accepted.
With the changes, context managers will be (conceptually) single-use.
So maybe a different term might be appropriate. Perhaps "ticket".
"A ticket is a single-use object that manages a single visit into a
context."
-j
Re: [Python-Dev] Extending tuple unpacking
Reinhold Birkenfeld wrote: > Greg Ewing wrote: > >>Guido van Rossum wrote: >> >> >>>BTW, what should >>> >>>[a, b, *rest] = (1, 2, 3, 4, 5) >>> >>>do? Should it set rest to (3, 4, 5) or to [3, 4, 5]? >> >>Whatever type is chosen, it should be the same type, always. >>The rhs could be any iterable, not just a tuple or a list. >>Making a special case of preserving one or two types doesn't >>seem worth it to me. > > > I don't think that > > [a, b, c] = iterable > > is good style right now, so I'd say that > > [a, b, *rest] = iterable > > should be disallowed or be the same as with parentheses. It's not > intuitive that rest could be a list here.

I wonder if something like the following would fulfill the need? This divides a sequence at given indexes by using a divider iterator on it.

class xlist(list):
    def div_at(self, *args):
        """ return a divided sequence """
        return [x for x in self.div_iter(*args)]

    def div_iter(self, *args):
        """ return a sequence divider-iter """
        s = None
        for n in args:
            yield self[s:n]
            s = n
        yield self[n:]

seq = xlist(range(10))

(a, b), rest = seq.div_at(2)
print a, b, rest              # 0 1 [2, 3, 4, 5, 6, 7, 8, 9]

(a, b), c, (d, e), rest = seq.div_at(2, 4, 6)
print seq.div_at(2, 4, 6)     # [[0, 1], [2, 3], [4, 5], [6, 7, 8, 9]]
print a, b, c, d, e, rest     # 0 1 [2, 3] 4 5 [6, 7, 8, 9]

This addresses the issue of repeating the name of the iterable. Cheers, Ron
Re: [Python-Dev] Making Queue.Queue easier to use
[Guido]
> >> Apart from trying to guess the API without reading the docs (:-), what
> >> are the use cases for using put/get with a timeout? I have a feeling
> >> it's not that common.
[Josiah Carlson]
> > With timeout=0, a shared connection/resource pool (perhaps DB, etc., I
> > use one in the tuple space implementation I have for connections to the
> > tuple space).
[Tim Peters]
> Passing timeout=0 is goofy: use {get,put}_nowait() instead. There's
> no difference in semantics.
I understand this, as do many others who use it. However, having both
manually and automatically tuned timeouts myself in certain applications,
the timeout=0 case is useful. Uncommon? Likely, I've not yet seen any
examples of anyone using this particular timeout method at koders.com .
> > Note that technically speaking, Queue.Queue from Pythons
> > prior to 2.4 is broken: get_nowait() may not get an object even if the
> > Queue is full, this is caused by "elif not self.esema.acquire(0):" being
> > called for non-blocking requests. Tim did more than simplify the
> > structure by rewriting it, he fixed this bug.
>
> I don't agree it was a bug, but I did get fatally weary of arguing
> with people who insisted it was ;-) It's certainly easier to explain
> (and the code is easier to read) now.
When getting an object from a non-empty queue fails because some other
thread already had the lock, and it is a fair assumption that the other
thread will release the lock within the next context switch...
Because I still develop on Python 2.3 (I need to support a commercial
codebase made with 2.3), I was working around it by using the timeout
parameter:
try:
    connection = connection_queue.get(timeout=.01)
except Queue.Empty:
    connection = make_new_connection()
With only get_nowait() calls, by the time I hit 3-4 threads, it was
failing to pick up connections even when there were hundreds in the
queue, and I quickly ran into the file handle limit for my platform, not
to mention that the server I was connecting to used asynchronous sockets
and select, which died at the 513th incoming socket.
I have since copied the implementation of 2.4's queue into certain
portions of code which make use of get_nowait() and its variants
(handling the deque reference as necessary).
Any time one needs to work around a "not buggy feature" with some
claimed "unnecessary feature", it tends to smell less than pristine to
my nose.
> > With block=True, timeout=None, worker threads pulling from a work-to-do
> > queue, and even a thread which handles the output of those threads via
> > a result queue.
>
> Guido understands use cases for blocking and non-blocking put/get, and
> Queue always supported those possibilities. The timeout argument got
> added later, and it's not really clear _why_ it was added. timeout=0
> isn't a sane use case (because the same effect can be gotten with
> non-blocking put/get).
def t():
    try:
        #thread state setup...
        while not QUIT:
            try:
                work = q.get(timeout=5)
            except Queue.Empty:
                continue
            #handle work
    finally:
        #thread state cleanup...
Could the above be daemonized? Certainly, but then the thread state
wouldn't be cleaned up. If you can provide me with a way of doing the
above with equivalent behavior, using only get_nowait() and get(), then
put it in the documentation. If not, then I'd say that the timeout
argument is a necessary and useful feature.
[Guido]
> But one lesson we can learn from sockets (or perhaps the reason why
> people kept asking for timeout=0 to be "fixed" :) is that timeout=0 is
> just a different way to spell blocking=False. The socket module makes
> sure that the socket ends up in exactly the same state no matter which
> API is used; and in fact the setblocking() API is redundant.
This would suggest to me that at least for sockets, setblocking() could
be deprecated, as could the block parameter in Queue. I wouldn't vote
for either deprecation, but it would seem to make more sense than to
remove the timeout arguments from both.
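The redundancy Guido describes is visible in the socket API itself: `setblocking(False)` and `settimeout(0.0)` leave the socket in exactly the same state, as a few assertions show:

```python
import socket

s = socket.socket()

s.setblocking(False)            # one spelling of non-blocking mode...
assert s.gettimeout() == 0.0    # ...recorded internally as a zero timeout

s.settimeout(0.0)               # the equivalent spelling
assert s.gettimeout() == 0.0

s.settimeout(None)              # back to blocking mode
assert s.gettimeout() is None
s.close()
```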
- Josiah
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] Autoloading? (Making Queue.Queue easier to use)
Guido van Rossum wrote:
> I see no need. Code that *doesn't* need Queue but does use threading
> shouldn't have to pay for loading Queue.py.

However, it does seem awkward to have a whole module providing just one
small class that logically is so closely related to other threading
facilities.

What we want in this kind of situation is some sort of autoloading
mechanism, so you can import something from a module and have it
trigger the loading of another module behind the scenes to provide it.

Another place I'd like this is in my PyGUI library, where I want all
the commonly-used class names to appear in the top-level package, but
ideally not import the code to implement them until they're actually
used.

There are various ways of hacking up such functionality today, but it
would be nice if there were some kind of language or library support
for it. Maybe something like a descriptor mechanism for lookups in
module namespaces.

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
[EMAIL PROTECTED]                  +--------------------------------------+
Re: [Python-Dev] Proposed changes to PEP 343
Jason Orendorff wrote:
> A contextmanager is a function that returns a new context manager.
>
> Okay, that last bit is weird.

If the name of the decorator is to be 'contextmanager', it really needs
to say something like

    The contextmanager decorator turns a generator into a function
    that returns a context manager.

So maybe the decorator should be called 'contextmanagergenerator'. Or
perhaps not, since that's getting rather too much of an eyeful to
parse...

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
[EMAIL PROTECTED]                  +--------------------------------------+
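[Archive note: the decorator did land under the short name, as `contextlib.contextmanager`. The weird-sounding sentence is accurate: applied to a generator function, calling the decorated function returns a context manager, not the generator itself. A small demonstration (the `tag` example and its `log` list are illustrative):]

```python
from contextlib import contextmanager

log = []

@contextmanager
def tag(name):
    # code before the yield runs on __enter__, code after it on __exit__
    log.append("<%s>" % name)
    yield name
    log.append("</%s>" % name)

with tag("p") as t:       # tag("p") returns a context manager
    log.append("inside " + t)

assert log == ["<p>", "inside p", "</p>"]
```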
Re: [Python-Dev] Fwd: defaultproperty
Nick Coghlan wrote:
> As a location for this, I would actually suggest a module called
> something like "metatools",

-1, too vague and meaningless a name. If "decorator" is the official
term for this kind of function, then calling the module "decorators" is
precise and helpful. Other kinds of meta-level tools should go in their
own suitably-named modules.

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
[EMAIL PROTECTED]                  +--------------------------------------+
Re: [Python-Dev] Extending tuple unpacking
Nick Coghlan wrote:
> I'm also trying to figure out why you would ever write:
>
>    [a, b, c, d] = seq

I think the ability to use square brackets is a holdover from some
ancient Python version where you had to match the type of the thing
being unpacked with the appropriate syntax on the lhs. It was a silly
requirement from the beginning, and it became unworkable as soon as
things other than lists and tuples could be unpacked. In Py3k I expect
that [...] for unpacking will no longer be allowed.

> Indeed. It's a (minor) pain that optional flag variables and variable
> length argument lists are currently mutually exclusive. Although, if
> you had that rule, I'd want to be able to write:
>
>    def f(a, b, *, foo=1, bar=2): pass

Yes, I agree.

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
[EMAIL PROTECTED]                  +--------------------------------------+
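[Archive note: the bare-`*` spelling asked for here is exactly what Python 3 adopted (PEP 3102): names after the `*` can only be supplied by keyword.]

```python
def f(a, b, *, foo=1, bar=2):
    return (a, b, foo, bar)

assert f(1, 2) == (1, 2, 1, 2)
assert f(1, 2, bar=5) == (1, 2, 1, 5)

# foo and bar can no longer be filled positionally:
try:
    f(1, 2, 3)
    rejected = False
except TypeError:
    rejected = True
assert rejected
```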
Re: [Python-Dev] Extending tuple unpacking
Steve Holden wrote:
> So presumably you'd need to be able to say
>
>    ((a, *b), c, *d) = ((1, 2, 3), 4, 5, 6)

Yes.

>    a, *b = [1]
>
> to put the empty list into b, or is that an unpacking error?

Empty sequence in b (of whatever type is chosen).

> does this mean that you'd also like to be able to write
>
>    def foo((a, *b), *c):

That would follow, yes.

> I do feel that for Python 3 it might be better to make a clean
> separation between keywords and positionals: in other words, if the
> function definition specifies a keyword argument then a keyword must
> be used to present it.

But then how would you give a positional arg a default value without
turning it into a keyword arg?

It seems to me that the suggested extension covers all the
possibilities quite nicely. You can have named positional args with or
without default values, optional extra positional args with *, named
keyword-only args with or without default values, and unnamed extra
keyword-only args with **.

The only thing it doesn't give you directly is mandatory
positional-only args, and you can get that by catching them with * and
unpacking them afterwards.

This would actually synergise nicely with * in tuple unpacking:

    def f(*args):
        a, b, *rest = args

And with one further small extension, you could even get that into the
argument list as well:

    def f(*(a, b, *rest)):
        ...

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
[EMAIL PROTECTED]                  +--------------------------------------+
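[Archive note: most of this did land in Python 3 as PEP 3132, though the accepted design settles the open questions differently from Greg's preference: `*rest` always yields a list, and it is allowed in any position, not just at the end. The `def foo((a, *b), *c)` form did not survive, since tuple parameters were removed in Python 3. A few checks:]

```python
# a, *b = [1] puts the empty *sequence* (always a list) into b
a, *b = [1]
assert (a, b) == (1, [])

# nested and trailing stars, as in Steve Holden's example
(first, *middle), c, *d = ((1, 2, 3), 4, 5, 6)
assert (first, middle, c, d) == (1, [2, 3], 4, [5, 6])

# the star may also come first, unlike Greg's end-only preference
*rest, x, y = (1, 2, 3, 4, 5)
assert (rest, x, y) == ([1, 2, 3], 4, 5)

# Greg's "catch with * and unpack afterwards" idiom
def f(*args):
    a, b, *rest = args
    return (a, b, rest)

assert f(1, 2, 3, 4) == (1, 2, [3, 4])
```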
Re: [Python-Dev] Pythonic concurrency
That STM paper isn't the end. There's a Java implementation which seems
to be exactly what we want:

http://research.microsoft.com/~tharris/papers/2003-oopsla.pdf
