Re: Single type for __builtins__ in Py3.0
Collin Winter wrote:
> Hallo all,
> I'd like to propose that in Py3.0 (if not earlier), __builtins__ will
> be the same type regardless of which namespace you're in. Tim Peters
> has said [1] that the reason __builtins__ in __main__ is a module is
> so that "the curious don't get flooded with output when doing vars()
> at the prompt". Based on this, I propose that __builtins__ be a module
> (really, an alias for the __builtin__ module as it is now) in all
> namespaces.
>
> If possible, I'd like to see this go in before 3.0. The reference
> manual currently states [2] that __builtins__ can be either a dict or
> a module, so changing it to always be a module would still be in
> keeping with this. However, I realise that there's probably code out
> there that hasn't been written to deal with both types, so this would
> result in some minor breakage (though it would be easily fixable).
>
> If this gets a good response, I'll kick it up to python-dev.

A few questions: how would this change, if made in a minimal way,
impact the "provide alternate globals() and locals() to eval and exec"
feature? Altering __builtins__ is a half-assed way of providing some
sort of security, but it's probably useful in preventing user-supplied
code from shooting itself in the foot without aiming first.

Secondly, wouldn't this also be a good time to implement modules as
actual objects, so (e.g.) modules could provide a __getattribute__ for
references of the form modname.thing?

If the change can't be made without breaking the altering of
__builtins__ for exec/eval, then I'm -0.5. Otherwise, +1, and the
second bit is probably good for further debate.
--
http://mail.python.org/mailman/listinfo/python-list
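The "alternate globals for eval/exec" feature mentioned above can be sketched as follows (an illustrative example, not a real security boundary; replacing __builtins__ hides the built-in names from evaluated code):

```python
# Passing an explicit globals dict whose __builtins__ entry is empty
# prevents evaluated code from reaching the built-in namespace.
safe_globals = {"__builtins__": {}}

try:
    eval("len([1, 2, 3])", safe_globals)
    result = "no error"
except NameError:
    # len is not reachable: the built-ins were replaced
    result = "NameError"

print(result)  # NameError
```

Plain expressions that need no built-ins still work, e.g. `eval("1 + 1", safe_globals)` returns 2.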
Re: Overloading __init__ & Function overloading
Iyer, Prasad C wrote:
> Thanks a lot for the reply.
> But I want to do something like this
>
> class BaseClass:
>     def __init__(self):
>         # Some code over here
>     def __init__(self, a, b):
>         # Some code over here
>     def __init__(self, a, b, c):
>         # some code here
>
> baseclass1=BaseClass()
> baseclass2=BaseClass(2,3)
> baseclass3=BaseClass(4,5,3)

In my experience, the vast majority of cases where you "want" function
overloading, you really just want sensible default parameters. Since
Python is dynamically typed, the other common use case in
statically-typed languages (to provide f(int,int), f(float,float),
f(double,complex), f(Momma,Poppa) equivalents) is entirely unnecessary.

Try:

class BaseClass:
    def __init__(self, a=None, b=None, c=None):
        if a is None:
            ...

or (if you want to take any number of parameters)

class BaseClass:
    def __init__(self, *args):
        if len(args) == 0:
            ...

Of course, this is assuming that the behaviour is radically different
based on the number of arguments, which is generally Poor Design. You
probably _REALLY_ want:

class BaseClass:
    def __init__(self, a=SensibleDefault1, b=SensibleDefault2,
                 c=SensibleDefault3):
        ...

As a concrete example of this, consider:

class Point:
    def __init__(self, x=0, y=0, z=0):
        ...

Then you can call it with:

originPoint = Point()
pointInX = Point(xloc)
pointInXYPlane = Point(xloc, yloc)
pointIn3DSpace = Point(xloc, yloc, zloc)

Or if the defaults aren't quite so simple, and sensible defaults
depend on previous values, use:

class BaseClass:
    def __init__(self, a=SensibleDefault1, b=None, c=None):
        if b is None:
            b = stuff_involving(a)
        if c is None:
            c = stuff_involving(a, b)
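The Point sketch above can be fleshed out into a minimal runnable version; the __repr__ here is added purely for illustration:

```python
class Point:
    """A 3-D point where omitted coordinates default to the origin."""
    def __init__(self, x=0, y=0, z=0):
        self.x, self.y, self.z = x, y, z

    def __repr__(self):
        return "Point(%r, %r, %r)" % (self.x, self.y, self.z)

# One constructor covers all of the would-be "overloads":
origin = Point()
point_in_x = Point(5)
point_in_xy = Point(5, 7)
point_in_3d = Point(5, 7, 9)

print(point_in_xy)  # Point(5, 7, 0)
```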
Re: Can't extend function type
Diez B. Roggisch wrote:
> Paul Rubin wrote:
>
>> Oh well. I had wanted to be able to define two functions f and g, and
>> have f*g be the composition of f and g.
>>
>> >>> func_type = type(lambda: None)
>> >>> class composable_function(func_type):
>> ...     def __mul__(f, g):
>> ...         def c(*args, **kw):
>> ...             return f(g(*args, **kw))
>> ...         return c
>> ...
>> Traceback (most recent call last):
>>   File "", line 1, in ?
>> TypeError: Error when calling the metaclass bases
>>     type 'function' is not an acceptable base type
>> >>>
>>
>> Seems like a wart to me.
>
> So the only way to achieve this with current semantics is to make f
> and g objects with a __call__ method. In that very moment, you're
> done - as extending from object is no problem :)
>
> class ComposeableFunction(object):
>
>     def __call__(self, *args, **kwargs):
>         return self.second(self.first(*args, **kwargs))

Note, with a little bit of massaging, you can turn ComposeableFunction
into a decorator, for more natural function definition (untested, I'm
not on a system with Py2.4 at the moment):

class Composable(object):
    def __init__(self, f):
        self.callable = f
    def __call__(self, *args, **kwargs):
        return self.callable(*args, **kwargs)
    def __mul__(self, other):
        return Composable(lambda *a, **kwa: self.callable(other(*a, **kwa)))

Usage:

@Composable
def f(x):
    return x**2

@Composable
def g(x):
    return x+1

# And we shouldn't even need a @Composable on the last in the sequence
def h(x):
    return x/2.0

>>> f(1)
1
>>> (f*g)(1)
4
>>> (f*g*h)(2)
4.0

This might not combine neatly with methods, however; the bound/unbound
method magic is still mostly voodoo to me.
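Checked against a current interpreter, the decorator behaves as advertised once the lambda syntax is corrected (a sketch; h uses a float divisor so the result is the same on Python 2 and 3):

```python
class Composable(object):
    """Wrap a callable so that (f * g)(x) == f(g(x))."""
    def __init__(self, f):
        self.callable = f

    def __call__(self, *args, **kwargs):
        return self.callable(*args, **kwargs)

    def __mul__(self, other):
        # other may be a plain undecorated function; it only needs to be callable
        return Composable(lambda *a, **kw: self.callable(other(*a, **kw)))

@Composable
def f(x):
    return x ** 2

@Composable
def g(x):
    return x + 1

def h(x):          # the rightmost factor needs no decoration
    return x / 2.0

print((f * g)(1))      # f(g(1)) == (1 + 1) ** 2 == 4
print((f * g * h)(2))  # f(g(h(2))) == ((2 / 2.0) + 1) ** 2 == 4.0
```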
Re: Python interpreter bug
[EMAIL PROTECTED] wrote:
> No doubt you're right but common sense dictates that membership
> testing would test identity not equality.
> This is one of the rare occasions where Python defeats my common sense

But object identity is almost always a fairly ill-defined concept.
Consider this (Python 2.2, 'cuz that's what I have right now):

Python 2.2.3 (#1, Nov 12 2004, 13:02:04)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> al = "ab"
>>> alp = al + "c"
>>> alphabet = "abc"
>>> al
'ab'
>>> alp
'abc'
>>> alphabet
'abc'
>>> alp is alphabet
0
>>> alp == alphabet
1   # True on Py2.4

The only reliable thing that object identity tells you is "these two
foos occupy the same memory location." Even for identical, immutable
objects, this may not be true (see case above) -- some immutables do
end up occupying the same memory location (try i=1; j=2; k=j-1; i is k),
but this is by no means guaranteed, and indeed only happens sometimes
because of optimizations.
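A modern interpreter shows the same effect. Whether two equal strings are also identical depends on interning, which is an implementation detail; the identity result in the middle line is what CPython happens to do, not a language guarantee:

```python
import sys

al = "ab"
alp = al + "c"        # built at runtime, so not automatically interned
alphabet = "abc"      # a literal, which CPython interns

print(alp == alphabet)   # equality compares contents: True
print(alp is alphabet)   # identity is an implementation detail (False on CPython)

# sys.intern returns the canonical copy, making identity meaningful again
print(sys.intern(alp) is sys.intern(alphabet))  # True
```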
Re: socketServer questions
Paul Rubin wrote:
> rbt <[EMAIL PROTECTED]> writes:
>
>> 1. Do I need to use threads to handle requests, if so, how would I
>> incorporate them? The clients are light and fast never sending more
>> than 270 bytes of data and never connecting for more than 10 seconds
>> at a time. There are currently 500 clients and potentially there
>> could be a few thousand... how high does the current version scale?
>
> open for very long. If you want to have longer-running connections
> open simultaneously, you need some type of concurrency such as
> threads. But then you have to write the code differently, to
> serialize the log recording.
>
> You probably should get a copy of "Python Cookbook" which explains the
> basics of multi-threaded programming, if you have to ask a question

Or take a look at non-threaded ways of doing non-blocking IO; I've
personally used the Twisted libraries and they work decently without
manual thread overhead [indeed, the default reactor uses select, and
there's a version for 'nix systems that uses poll]. Either way will
work, it just depends on how deeply you want to integrate the network
functionality into the code.

As someone else said (paraphrased, and apologies for stealing the
quote; a Google search isn't bringing it up), "You don't use Twisted,
you provide Twisted callbacks to use you."
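The select-based readiness model that such reactors build on can be sketched with the standard library alone (a minimal illustration using a local socket pair, not Twisted itself):

```python
import select
import socket

# A connected pair of sockets stands in for a client/server connection.
server_side, client_side = socket.socketpair()

client_side.sendall(b"270 bytes or fewer of request data")

# select() blocks until at least one watched socket is readable (or the
# timeout expires), which is what lets one thread service many sockets.
readable, _, _ = select.select([server_side], [], [], 1.0)

if server_side in readable:
    request = server_side.recv(1024)
    server_side.sendall(b"ack: " + request)

reply = client_side.recv(1024)
client_side.close()
server_side.close()
```

A real server would keep a list of client sockets and loop over select(); the single round trip above just shows the readiness check.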
Re: Yes, this is a python question, and a serious one at that (moving to Win XP)
Chris Lambacher wrote:
> The shell that comes with MSys (from the MinGW guys) is pretty good,
> although it does have a bit of a problem with stdout output before a
> process exits, ie it will hold back output until the process exits.
>
> As a bonus, the file system is a little more sane, and if you are
> interested in compiling software that is not open source, you are not
> tied to the Cygwin DLL, which is GPLed.

Worth mentioning here that cygwin's gcc does allow a -mno-cygwin
compile-time flag to not link against the cygwin dll. Cygwin's
packaging system also includes the MinGW development libraries as an
easily installable option, for compiling against mingw's stuff; I've
done it for Python extensions, in fact.
Re: Would there be support for a more general cmp/__cmp__
Antoon Pardon wrote:
> It would be better if cmp would give an indication it
> can't compare two objects instead of giving incorrect
> and inconsistent results.

If two objects aren't totally comparable, then using 'cmp' on them is
ill-defined to begin with. The Standard Thing To Do is throw an
exception; see the Highly Obscure Case of the Complex Numbers.

>>> 1 == 1j
False
>>> 1 != 1j
True
>>> 1 < 1j
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: cannot compare complex numbers using <, <=, >, >=
>>> cmp(1j, 1j)
0
>>> cmp(1, 1j)
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: cannot compare complex numbers using <, <=, >, >=

So using the well-known case of complex numbers, the semantics are
already well-defined.

>>> class Incomparable:
...     def __cmp__(self, other):
...         raise TypeError("cannot compare Incomparables using <, <=, >, >=")
...     def __eq__(self, other):
...         return self is other
...     def __ne__(self, other):
...         return self is not other
>>> a1 = Incomparable()
>>> a2 = Incomparable()
>>> a1 == a1
1   # I'm running on python 2.2.3 at the moment, so hence the 1.
>>> a2 == a2
1
>>> a1 == a2
0
>>> a1 < a2
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 3, in __cmp__
TypeError: cannot compare Incomparables using <, <=, >, >=
>>> cmp(a1, a2)
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 3, in __cmp__
TypeError: cannot compare Incomparables using <, <=, >, >=

So your use-case is already well-defined, and rather simple. Make
__cmp__ raise an exception of your choice, and define rich comparators
only for the comparisons that are supported.

If, as you say in another post, "some" pairs in D cross D are
comparable on an operator but not all of them (and further that this
graph is not transitive), then your _ONLY_ choice, no matter your
implementation, is to return some possibly inconsistent result
(a < b == 1, b < c == 1, a < c == 0) or raise an exception for
inapplicable comparisons.
This isn't a Python problem; this is a problem from formal mathematics.
Personally, I'd favor the "raise an exception" case, which is exactly
what will happen if you define rich comparisons and let cmp throw an
exception.

Operations that assume comparable objects, like sort, also almost
always assume a total order -- inconsistent operations can give weird
results, while throwing an exception will at least (usually) give some
sort of error that can be caught.

Another poster mentioned the B-tree example, and that isn't solvable
in this case -- B-trees assume a total order, but by their nature
aren't explicit about checking for it; inserting a "partial plus
exception" order might result in tree failure at weird times. An
inconsistent order, however, is even worse -- it would corrupt the tree
at the same times.
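In modern Python, where cmp and __cmp__ are gone, the same design is expressed directly with rich comparison methods; sorting a list containing incomparable pairs then fails loudly instead of silently producing an inconsistent order (illustrative sketch):

```python
class Incomparable:
    """Objects equal only to themselves; ordering is deliberately undefined."""
    def __eq__(self, other):
        return self is other

    def __ne__(self, other):
        return self is not other

    def __lt__(self, other):
        raise TypeError("cannot order Incomparables")

a1, a2 = Incomparable(), Incomparable()

print(a1 == a1)   # True
print(a1 == a2)   # False

try:
    sorted([a1, a2])    # sort needs __lt__, so the missing total order surfaces
    outcome = "sorted"
except TypeError:
    outcome = "TypeError"
print(outcome)    # TypeError
```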
Re: Would there be support for a more general cmp/__cmp__
Antoon Pardon wrote:
> It *is* a definition of an ordering.
>
> For something to be an ordering it has to be anti symmetric and
> transitive.
>
> The subset relationships on sets conform to these conditions so it is
> a (partial) ordering. Check your mathematics books; why you would
> think this is abuse is beyond me.

Which is exactly why a < b on sets returns True xor False, but
cmp(a,b) throws an exception.

a < b is a local comparison, asking only for the relationship between
two elements. In some bases, like the complex numbers, some
comparisons are ill-defined; in others, like sets, they're well-defined
but don't give a total ordering.

cmp(a,b) asks for their relative rankings in some total ordering. For
a space that does not have a total ordering, cmp(a,b) is meaningless at
best and dangerous at worst. It /should/ throw an exception when the
results of cmp aren't well-defined, consistent, antisymmetric, and
transitive.
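The set behaviour in question is easy to check on a current interpreter; the operators implement the subset relation, so an incomparable pair answers False to both < and >:

```python
a = {1}
b = {1, 2}
c = {3}

# Comparable pair: a is a proper subset of b.
print(a < b)   # True
print(b < a)   # False

# Incomparable pair: neither is a subset of the other.
print(a < c)   # False
print(a > c)   # False
print(a == c)  # False -- a partial order, not a total one
```

Because < on sets is only a partial order, feeding mixed sets to sorted() produces an order that need not be meaningfully "sorted".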
Re: Would there be support for a more general cmp/__cmp__
Antoon Pardon wrote:
> I also think there is the problem that people aren't used to partial
> ordering. There is an ordering over sets, it is just not a total
> ordering. But that some pairs are uncomparable (meaning that neither
> one is smaller or greater) doesn't imply that comparing them is
> ill defined.

It depends on your definition of "comparison." Admittedly, <, =, !=,
and > can be defined for a partial ordering, but classical mathematics,
as understood by most people (even programmers), assumes that unless
a == b, either a > b or a < b. Comparisons defined this way are done
on a totally ordered set, yes. But since most comparisons ever done
-are- done on a totally ordered set (namely the integers or reals),
it's entirely fair to say that "a programmer's expectation" is that
comparisons should work more or less like a totally ordered list. With
that caveat in mind, comparison on a partially ordered domain /is/
ill-defined; it can give inconsistent results from the "a < b or
a > b or a == b" rule.

> Well it is a wrong assumption in general. There is nothing impure
> about partial ordering.

Impure? Probably not. Useless from many perspectives when
"comparisons" are needed, to the point where it's probably safer to
throw an exception than define results.

> That is IMO irrelevant. The subset relationship is an ordering and as
> such has all characteristics of other orderings like "less than",
> except that the subset relationship is partial and the less than
> relationship is total. How it is called "subset" vs "less than" is
> IMO irrelevant. It is about mathematical characteristics.

Still accident. < wouldn't be used for sets if we had a subset symbol
on the standard keyboard, APL fans notwithstanding. Since programmers
familiar with Python understand that < on sets isn't a "real"
comparison (i.e. doesn't provide a total order), they don't expect
unique, consistent results from something like sort (or a tree, for
that matter).
>> By analogy, one can ask, "is the cat inside the box?" and get the
>> answer "No", but this does not imply that therefore the box must be
>> inside the cat.
>
> Bad analogy, this doesn't define a mathematical ordering, the subset
> relationship does.

Yes, it does. Consider "in" as a mathematical operator:

For the set (box, cat-in-box):

box in box: False
box in cat-in-box: False
cat-in-box in box: True
cat-in-box in cat-in-box: False

For the set (box, smart-cat):  # cat's too smart to get in the box

box in box: False
box in smart-cat: False
smart-cat in box: False
smart-cat in smart-cat: False

In both these cases, the "in" operator is irreflexive, asymmetric, and
transitive (extend to mouse-in-cat if you really care about
transitivity), so "in" is a partial order sans equality. A useless
one, but a partial order nonetheless.

>> Notice that L1 and L2 contain the same elements, just in different
>> orders. Sorting those two lists should give the same order, correct?
>
> No. Since we don't have a total ordering, sorting doesn't have to
> give the same result. For as far as sorting is defined on this kind
> of object, the only thing I would expect is that after a sort the
> following condition holds:
>
> for all i, j: if i < j then not L[i] > L[j]

Highly formal, aren't you? Again, common programming allows for
"not a > b" == "a <= b", so your condition becomes the more familiar:

for all i, j in len(list): if i < j then L[i] <= L[j]

This condition is also assumed implicitly by many other assumed
"rules" of sorted lists, namely their uniqueness: "If L is a list of
unique keys, then sort(L) is a unique list of keys in sorted order."
Yes, this assumes that L has a total order. Big whoop -- this is a
matter of programming practicality rather than mathematical purity.
>> Personally, I argue that sorting is something that you do to lists,
>> and that all lists should be sortable regardless of whether the
>> objects within them have a total order or not. If not, the list
>> should impose a sensible ordering on them, such as (perhaps)
>> lexicographical order ("dictionary order").

To reply to the grandparent poster here: that's not always easy to do.
A "sensible order" isn't always easy to define on a set where a
partial order exists, if it includes that order already. Adding
comparison-pairs to a partially ordered set (where incomparable
elements throw an exception, rather than just return False, which is
confusing from sort's perspective) can easily result in a graph that
isn't actually transitive and antisymmetric, i.e.:

A > B > C, but C > D > A for some operator ">"

With this in mind, the only sensible thing for .sort to do when it
encounters an exception is to fall back to its "backup" comparator
(id, for example), and resort /the entire list/ using that comparator.
The results returned will then be valid by sort's comparison, but a
subset of that list containing only "good" objects (like integers) may
not (and probably won't) be.
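One practical way to impose a "sensible ordering" on partially ordered objects, in current Python, is to supply an explicit sort key that is itself totally ordered. Here sets are ranked by size and then by sorted contents; this is an illustrative choice of key, not a canonical one:

```python
sets = [{3}, {1, 2}, {1}, {2}]

# (len(s), sorted(s)) is a totally ordered key, even though the sets
# themselves are only partially ordered by the subset relation.
ranked = sorted(sets, key=lambda s: (len(s), sorted(s)))

print(ranked)  # [{1}, {2}, {3}, {1, 2}]
```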
Re: Would there be support for a more general cmp/__cmp__
Antoon Pardon wrote:
> Op 2005-10-25, Christopher Subich schreef <[EMAIL PROTECTED]>:
>
>> Which is exactly why a < b on sets returns True xor False, but
>> cmp(a,b) throws an exception.
>
> I don't see the connection.
>
> The documentation states that cmp(a,b) will return a negative value
> if a < b. So why should it throw an exception?

Because it's useful that cmp(a1, a2) should either (return a value) or
(throw an exception) for any elements a1, a2 within type(a1) cross
type(a2). If cmp sometimes is okay and sometimes throws an exception,
then it leads to weird borkage in things like trees.

With that in mind, not all sets are comparable. {a} and {b} have no
comparison relationship, as you've pointed out, aside from not-equal.
I'll get to why "not-equal" is a bad idea below.

>> cmp(a,b) asks for their relative rankings in some total ordering.
>
> The documentation doesn't state that. I also didn't find anything in
> the documentation on how the programmer should code in order to
> enforce this.

Most programmers are simply programmers; they haven't had the benefit
of a couple years' pure-math education, so the distinction between
"partial order" and "total order" is esoteric at best. With that in
mind, compare should conform, as best as possible, to "intuitive"
behavior of comparison. Since comparisons are mostly done on numbers,
an extension of comparisons should behave "as much like numbers" as
possible.

> So someone who just uses the rich comparisons to implement his
> partial order will screw up this total order that cmp is somehow
> providing.
>
> And cmp doesn't provide a total ordering anyway. Sets are clearly
> uncomparable using cmp, so cmp doesn't provide a total order.

Cmp isn't supposed to "provide" a total order, it's supposed to
reflect relative standing in one if one already exists for P1 x P2. If
one doesn't exist, I'd argue that it's the best course of action to
throw an exception.
After all, rich comparisons were put in precisely to allow support of
limited comparisons when a total order (or indeed full comparison)
isn't appropriate.

> Maybe the original idea was that cmp provided some total ordering,
> maybe the general idea is that cmp should provide a total ordering,
> but that is not documented, nor is there any documentation helping
> the programmer with this.

I doubt that anyone was thinking about it in such depth. My bet is
that the thought process goes this way:

Og compare numbers. Numbers good, compare good. Grunt grunt.

Language designers: Wouldn't it be nice if we could allow user-defined
objects, such as numbers with units, to compare properly with each
other? This would let us say (1 m) < (.5 mile) pretty easily, eh?

Guido: Let's let classes override a __cmp__ function for comparisons.

In programming language theory, comparisons were firstly about
numbers, and their leading-order behaviour has always stayed about
numbers. Comparing entities which are best represented in an...
interesting formal mathematical way (i.e. partial orders, objects for
which some comparisons are Just Plain Weird) works only as a
side-effect of number-like behavior. The lesson to take home from
this: the less a custom class behaves like a number, the less
intuitively meaningful (or even valid) comparisons will be on it.

> And even if we leave sets aside it still isn't true.
>
> >>> from decimal import Decimal
> >>> Zero = Decimal(0)
> >>> cmp( ( ), Zero)
> -1
> >>> cmp(Zero, 1)
> -1
> >>> cmp(1, ( ) )
> -1

I'd argue that the wart here is that cmp doesn't throw an exception,
not that it returns inconsistent results. This is a classic case of
incomparable objects, and saying that 1 < an empty tuple is bordering
on meaningless.

>> For a space that does not have a total ordering, cmp(a,b) is
>> meaningless at best and dangerous at worst.
>
> The current specs and implementation are.
> I see nothing wrong with a function that would provide four kinds of
> results when given two elements. The same results as cmp gives now
> when it applies, and None or another appropriate value or a raised
> exception when not.
>
> Given how little this functionality differs from the current cmp,
> I don't see why it couldn't replace the current cmp.

My biggest complaint here is about returning None or
IncomparableValue; if that happens, then all code that relies on cmp
returning a numeric result will have to be rewritten. Comparing
incomparables is an exceptional case, and hence it should raise an
exception.
Re: Would there be support for a more general cmp/__cmp__
Antoon Pardon wrote:
> Op 2005-10-25, Christopher Subich schreef <[EMAIL PROTECTED]>:
>
>> My biggest complaint here is about returning None or
>> IncomparableValue; if that happens, then all code that relies on cmp
>> returning a numeric result will have to be rewritten.
>
> I don't know. There are two possibilities.
>
> 1) The user is working with a total order. In that case the None
> or IncomparableValues won't be returned anyway so nothing about
> his code has to change.
>
> 2) The user is working with a partial order. In that case cmp
> doesn't provide consistent results so the use of cmp in this
> case was a bug anyway.

Case 3) The user is working with an unknown class, using duck typing,
and expects a total order. If cmp suddenly returned Incomparable or
None, the code would break in Unexpected Ways, with Undefined
Behavior.

This is a classic "exception versus error code" argument; in this
case, returning None would be the error flag. It's almost always
better to just throw the exception, because then this allows
error-checking at a more natural level.

>> As for saying that cmp should return some times and raise an
>> exception other times, I just find it squicky.
>
> But cmp already behaves this way. The only difference would be that
> the decision would be made based on the values of the objects instead
> of only their class.
>
>> Admittedly, this is entirely up to the class designer, and your
>> proposed guideline wouldn't change cmp's behavior for classes that
>> /are/ totally ordered.
>>
>> Then again, sets excepted, your guideline doesn't seem very
>> applicable in standard library code.
>
> Well AFAIAC this isn't specific to library code.

A change that cmp return a 4th possible "value" (None or Incomparable)
is a fundamental language change. Because this would break large
amounts of existing code, it's a bad idea.
A change that cmp throw an exception iff the two objects, rather than
the two classes, were incomparable (allowing comparisons of (1+0j and
2+0j) and ({a} and {a,b}) but not (1+1j and 2+0j) or ({a} and {b})) is
a stylistic guideline, since it's already possible to write your own
classes this way. The only place this change would matter is in the
standard library code, and in just a few places at that.
Re: textwidget.tag_bind("name", "", self.donothing) not working
shannonl wrote:
> Hi all,
>
> For some reason this bind is calling the donothing function, like it
> should, but is then allowing the text to be inserted into the Text
> widget.
[...]
> This bind does work on the text widget as a whole, but on a
> individual tag, it does not.

You're trying to prevent a user from editing the text -within- a
single tag. Does Tk even support this? Is your bind-applied-to-tag
even firing when the user presses a key within a tag?
Re: Would there be support for a more general cmp/__cmp__
Antoon Pardon wrote:
> If you are concerned about sorting times, I think you should
> be more concerned about Guido's idea of doing away with __cmp__.
> Sure __lt__ is faster. But in a number of cases writing __cmp__
> is of the same complexity as writing __lt__. So if you then
> need a __lt__, __le__, __eq__, __ne__, __gt__ and __ge__ it
> would be a lot easier to write a __cmp__ and have all rich
> comparison methods call this instead of duplicating the code
> about six times. So you would be more or less forced to write
> your class as class cld or cle. This would have a bigger
> impact on sorting times than my suggestion.

Honestly, I don't really mind the idea of __cmp__ going away; for
classes that behave Normally with respect to a single __cmp__ value,
it's easily possible to write a CompareMixin that defines __lt__,
__gt__, etc. for suitable __cmp__ values. Much like DictMixin is part
of the standard library.
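Such a mixin is short to write. This is a hedged sketch: the name CompareMixin and its _cmp hook are illustrative, not a standard library API (the stdlib later gained functools.total_ordering, which fills in rich comparisons from __eq__ plus one ordering method):

```python
class CompareMixin(object):
    """Derive all six rich comparisons from a single _cmp(self, other)
    returning a negative number, zero, or a positive number."""
    def __eq__(self, other): return self._cmp(other) == 0
    def __ne__(self, other): return self._cmp(other) != 0
    def __lt__(self, other): return self._cmp(other) < 0
    def __le__(self, other): return self._cmp(other) <= 0
    def __gt__(self, other): return self._cmp(other) > 0
    def __ge__(self, other): return self._cmp(other) >= 0

class Version(CompareMixin):
    """Example client: dotted version numbers, totally ordered."""
    def __init__(self, *parts):
        self.parts = parts

    def _cmp(self, other):
        # Tuple comparison already gives the order we want.
        return (self.parts > other.parts) - (self.parts < other.parts)

print(Version(1, 2) < Version(1, 10))   # True
print(Version(2, 0) >= Version(2, 0))   # True
```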
Re: syntax question - if 1:print 'a';else:print 'b'
Steve Holden wrote:
>> On Thu, 2005-10-27 at 14:00, Gregory Piñero wrote:
>>
>>> Not quite because if something(3) fails, I still want something(4)
>>> to run.
>
> Then the obvious extension:
>
> for i in range(20):
>     ...
>
> but I get the idea that Gregory was thinking of different statements
> rather than calls to the same function with different arguments.

Sorry for the descendant-reply, but the original hasn't hit my news
server yet (I think).

It sounds like Gregory wants a Python equivalent of "on error resume
next," which is really a bad idea almost everywhere.
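The closest Python idiom wraps each step in its own try block, so a failure in one step doesn't stop the following ones. In this sketch `something` is a hypothetical stand-in for the poster's functions:

```python
def something(i):
    # hypothetical stand-in: fails for one particular argument
    if i == 3:
        raise ValueError("step 3 failed")
    return i * 10

results, errors = [], []
for i in (1, 2, 3, 4):
    try:
        results.append(something(i))
    except ValueError as exc:
        errors.append((i, str(exc)))   # record the failure and keep going

print(results)  # [10, 20, 40] -- step 4 still ran
print(errors)   # [(3, 'step 3 failed')]
```

Unlike "on error resume next", the failure is at least recorded rather than silently discarded.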
Re: Class Variable Access and Assignment
Antoon Pardon wrote:
> Op 2005-11-03, Stefan Arentz schreef <[EMAIL PROTECTED]>:
>> The model makes sense in my opinion. If you don't like it then there
>> are plenty of other languages to choose from that have decided to
>> implement things differently.
>
> And again this argument. Like it or leave it, as if one can't in
> general like the language, without being blind to a number of
> shortcomings.
>
> It is this kind of reaction that makes me think a number of people
> are blindly devoted to the language to the point that any criticism
> of the language becomes intolerable.

No, it's just that a goodly number of people actually -like- the
relatively simple conceptual model of Python. Why /shouldn't/

>>> a.x = foo

correspond exactly to

>>> setattr(a, 'x', foo)   #?

Similarly, why shouldn't

>>> foo = a.x

correspond exactly to

>>> foo = getattr(a, 'x')   #?

With that in mind, the logical action for

>>> a.x = f(a.x)

is

>>> setattr(a, 'x', f(getattr(a, 'x')))   #, and since

>>> a.x += foo

is equal to

>>> a.x = A.__iadd__(a.x, foo)   # (at least for new-style classes
...     # that have __iadd__ defined. Otherwise, it falls back on
...     # __add__(self, other) to return a new object, making this
...     # even more clear), why shouldn't this translate into

>>> setattr(a, 'x', A.__iadd__(getattr(a, 'x'), foo))   #?

Looking at it this way, it's obvious that the setattr and getattr may
do different things, if the programmer understands that "instances
(can) look up class attributes, but (always) set instance attributes."
In fact, it is always the case (so far as I can quickly check) that +=
ends up setting an instance attribute. Consider this code:

>>> class foo:
...     x = [5]
>>> a = foo()
>>> a.x += [6]
>>> a.x
[5, 6]
>>> foo.x
[5, 6]
>>> foo.x = [7]
>>> a.x
[5, 6]

In truth, this all does make perfect sense -- if you consider class
variables mostly good for "setting defaults" on instances.
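The lookup-versus-binding asymmetry can be demonstrated directly on a current interpreter; note how the mutable and immutable cases differ in what "setting an instance attribute" ends up meaning:

```python
class Foo:
    count = 0      # immutable class default
    items = []     # mutable class default (usually a design mistake)

a, b = Foo(), Foo()

a.count += 1       # reads Foo.count, then binds an *instance* attribute
print(a.count, b.count, Foo.count)   # 1 0 0 -- the class default is untouched

a.items += [5]     # list.__iadd__ mutates the shared class-level list...
print(b.items, Foo.items)            # [5] [5] -- b sees the change!
print('items' in a.__dict__)         # True -- ...and a got an instance binding too
```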
Re: Class Variable Access and Assignment
Steven D'Aprano wrote:
> On Thu, 03 Nov 2005 14:13:13 +0000, Antoon Pardon wrote:
>
>> Fine, we have the code:
>>
>>   b.a += 2
>>
>> We found the class variable, because there is no instance variable,
>> then why is the class variable not incremented by two now?
>
> Because b.a += 2 expands to b.a = b.a + 2. Why would you want
> b.a = <something> to correspond to b.__class__.a = <something>?

Small correction: it expands to b.a = type(b.a).__iadd__(b.a, 2),
assuming all relevant quantities are defined. For integers, you're
perfectly right.
Re: Class Variable Access and Assignment
Antoon Pardon wrote:
>> Since ints are immutable objects, you shouldn't expect the value of
>> b.a to be modified in place, and so there is an assignment to b.a,
>> not A.a.
>
> You are now talking implementation details. I don't care about
> whatever explanation you give in terms of implementation details. I
> don't think it is sane that in a language multiple occurrences of
> something like b.a in the same line can refer to different objects.

This isn't an implementation detail; to leading order, anything that
impacts the values of objects attached to names is a specification
issue. An implementation detail is something like when garbage
collection actually happens; what happens to:

b.a += 2

is very much within the language specification. Indeed, the language
specification dictates that an instance variable b.a is created if one
didn't exist before; this is true no matter if type(b.a) == int, or if
b.a is some esoteric mutable object that just happens to define
__iadd__ for an integer argument.
Re: Class Variable Access and Assignment
Antoon Pardon wrote:
> Well I wonder. Would the following code be considered a name binding
> operation:
>
>   b.a = 5

Try it; it's not.

Python 2.2.3 (#1, Nov 12 2004, 13:02:04)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a
Traceback (most recent call last):
  File "", line 1, in ?
NameError: name 'a' is not defined
>>> b = object()
>>> b.a
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'object' object has no attribute 'a'

Once it's attached to an object, it's an attribute, not a bare name.
The distinction is subtle and possibly something that could (should?)
be unified for Py3k, but in cases like this the distinction is
important.
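The same distinction still holds in current Python: unbound names raise NameError, missing attributes raise AttributeError, and a bare object() cannot even accept new attributes because it has no __dict__:

```python
class Thing:
    pass

t = Thing()
t.a = 5            # attribute assignment: a setattr, not a name binding

try:
    object().a = 5     # a bare object() has no __dict__ to hold attributes
    attr_error = None
except AttributeError as exc:
    attr_error = type(exc).__name__

try:
    undefined_name     # looking up a bare name that was never bound
    name_error = None
except NameError as exc:
    name_error = type(exc).__name__

print(t.a, attr_error, name_error)   # 5 AttributeError NameError
```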
Re: Class Variable Access and Assignment
Antoon Pardon wrote:
> Except when your default is a list:
>
> class foo:
>     x = []   # default
>
> a = foo()
> a.x += [3]
>
> b = foo()
> b.x
>
> This results in [3]. So in this case using a class variable x to
> provide a default empty list doesn't work out in combination
> with augmented operators.

This has nothing to do with namespacing at all; it's the Python
idiosyncrasy about operations on mutable types. In this case, +=
mutates an object, while + returns a new one -- as by definition, for
mutables.
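The mutable/immutable split referred to here is easy to see in isolation, without any classes involved:

```python
x = [1, 2]
alias = x

x += [3]            # list.__iadd__ mutates in place; the alias sees the change
print(alias)        # [1, 2, 3]
print(x is alias)   # True

x = x + [4]         # list.__add__ builds a new list and rebinds x
print(alias)        # [1, 2, 3] -- the alias is left behind
print(x is alias)   # False
```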
Re: Class Variable Access and Assignment
Antoon Pardon wrote:
> Well maybe because as far as I understand the same kind of logic
> can be applied to something like
>
>   lst[f()] += foo
>
> in order to decide that this should be equivalent to
>
>   lst[f()] = lst[f()] + foo.
>
> But that isn't the case.

Because, surprisingly enough, Python tends to evaluate expressions
only once each time they're invoked. In this case, [] is being used to
get an item and set an item -- therefore, it /has/ to be invoked twice:
once for __getitem__, and once for __setitem__.

Likewise, lst appears once, and it is used once -- the name gets
looked up once (which leads to "a += 1" problems if a is in an outer
scope). f() also appears once -- so to evaluate it more than one time
is odd, at best.

If you know very much about modern lisps, it's similar to the
difference between a defun and a defmacro.
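That the subscript expression is evaluated exactly once is easy to verify with a counting function:

```python
calls = []

def f():
    calls.append("called")   # record every invocation
    return 0

lst = [10]
lst[f()] += 5   # NOT equivalent to lst[f()] = lst[f()] + 5

print(lst)         # [15]
print(len(calls))  # 1 -- f() ran only once
```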
Re: Class Variable Access and Assignment
Bengt Richter wrote: > > It might be interesting to have a means to push and pop objects > onto/off-of a name-space-shadowing stack (__nsstack__), such that the first > place > to look up a bare name would be as an attribute of the top stack object, i.e., > > name = name + 1 > Don't be that specific; just unify Attributes and Names. Instead of the 'name' X referring to locals()['X'] or globals()['X'], have a hidden "namespace" object/"class", with lookups functioning akin to class inheritance. This would allow, in theory, more uniform namespace behaviour with outer scoping: x = 1 def f(): x += 1 # would work, as it becomes setattr(namespace,'x',getattr(namespace,'x')+1), just like attribute lookup Also, with a new keyword "outer", more rational closures would work: def makeincr(start=0): i = start def inc(): outer i j = i i += 1 return j return inc From a "namespace object" point of view, 'outer i' would declare i to be a descriptor on the namespace object, such that setting actions would set the variable in the inherited scope (getting actions wouldn't actually need modification, since lookup already falls through). At the first level, 'outer' would be exactly the same as 'global' -- indeed, it would be reasonable for the outer keyword to entirely replace global (which is actually module-scope). As it stands, the different behaviours of names and attributes are only a minor quirk, and the fix would definitely break backwards compatibility in the language -- it'd have to be punted to Py3k. -- http://mail.python.org/mailman/listinfo/python-list
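For contrast, a sketch of how the same accumulator gets written in today's Python, without any outer keyword, by mutating a container instead of rebinding a name:

```python
def makeincr(start=0):
    state = [start]  # mutable container stands in for a writable closure cell

    def inc():
        j = state[0]
        state[0] += 1  # mutation, not rebinding, so no scoping error
        return j

    return inc

counter = makeincr(10)
assert counter() == 10
assert counter() == 11
```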
Re: Class Variable Access and Assignment
Bengt Richter wrote: > On Fri, 04 Nov 2005 10:28:52 -0500, Christopher Subich <[EMAIL PROTECTED]> > wrote: >>is very much within the language specification. Indeed, the language >>specification dictates that an instance variable b.a is created if one >>didn't exist before; this is true no matter if type(b.a) == int, or if >>b.a is some esoteric mutable object that just happens to define >>__iadd__(self,type(other) == int). > > But if it is an esoteric descriptor (or even a simple property, which is > a descriptor), the behaviour will depend on the descriptor, and an instance > variable can be created or not, as desired, along with any side effect you > like. Right, and that's also language-specification. Voodoo, yes, but language specification nonetheless. :) -- http://mail.python.org/mailman/listinfo/python-list
Re: Class Variable Access and Assignment
Antoon Pardon wrote: > Op 2005-11-04, Christopher Subich schreef <[EMAIL PROTECTED]>: >>it's the Python >>idiosyncrasy about operations on mutable types. In this case, += >>mutates an object, while + returns a new one -- as by definition, for >>mutables. > > > It is the combination of the two. > > If python had chosen for an approach like function namespaces, the > problem wouldn't have occurred either. What would have happened then > is that the compiler would have noticed the a.x on the right hand > side and based on that fact would then have decided that all a.x > references should be instance references (at least in that function > block). The a.x += ... would then result in an AttributeError being raised. Problem: """ class B: x = 1 classx = B() instx = B() instx.x = 5 def addtox(o): o.x += 1 addtox(instx) print B.x # 1 print instx.x # 6; we both agree on this one addtox(classx) # You argue this should AttributeError print B.x # ?! -- 1 currently, you argue 2 if no error print classx.x # we both agree 2, if no error """ a.x is /not/ a namespace issue at all; it's an attribute issue. .x is not a name, it is an attribute. Python namespaces are lexically scoped, not dynamically scoped; if, as you argue, .x should be a name in a namespace, then you argue above that addtox should work on instx but fail on classx. But this /cannot be determined at compile time/, because the attribute space is attached to the object passed in as the parameter. I repeat: this is not a name issue at all, it is an attribute issue. Python's behaviour is counterintuitive from some angles, but it is the only behaviour that is consistent with attributes in general, given the signature of __iadd__ as-is. > > You may prefer the current behaviour over this, but that is not the > point. The point is that resolution of name spaces does play its > role in this problem. There are no name spaces. > > > It also has little to do with mutable vs immutable types. 
> Someone could implement an immutable type, but take advantage > of some implementation details to change the value in place > in the __iadd__ method. Such an immutable type would show > the same problems. Immutable? I do not think that word means what you think it means. -- http://mail.python.org/mailman/listinfo/python-list
Unifying Attributes and Names (was: Re: Death to tuples!)
Bengt Richter wrote: > If we had a way to effect an override of a specific instance's attribute > accesses > to make certain attribute names act as if they were defined in > type(instance), and > if we could do this with function instances, and if function local accesses > would > check if names were one of the ones specified for the function instance being > called, > then we could define locally named constants etc like properties. > > The general mechanism would be that instance.__classvars__ if present would > make Nah... you're not nearly going far enough with this. I'd suggest a full unification of "names" and "attributes." This would also enhance lexical scoping and allow an "outer" keyword to set values in an outer namespace without doing royally weird stuff. In general, all lexical blocks which currently have a local namespace (right now, modules and functions) would have a __namespace__ variable, containing the current namespace object. Operations to get/set/delete names would be exactly translated to getattr/setattr/delattr calls. Getattrs on a namespace that does not contain the relevant name recurse up the chain of nested namespaces, to the global (module) namespace, which will raise an AttributeError if not found. This allows exact replication of current behaviour, with a couple interesting twists: 1) i = i+1 with "i" in only an outer scope actually works now; it uses the outer scope "i" and creates a local "i" binding. 2) global variables are easily defined by a descriptor: def global_var(name): return property( lambda self: getattr(self.global,name), lambda self, v: setattr(self.global,name,v), lambda self: delattr(self.global,name), "Global variable %s" % name) 3) "outer variables" under write access (outer x, x = 1) are also well-defined by descriptor (exercise left for reader). No more weird machinations involving a list in order to build an accumulator function, for example. Indeed, this is probably the primary benefit. 
4) Generally, descriptor-based names become possible, allowing some rather interesting features[*]: i) "True" constants, which cannot be rebound (mutable objects aside) ii) Aliases, such that 'a' and 'b' actually reference the same bit, so a = 1 -> b == 1 iii) "Deep references", such that 'a' could be a reference to my_list[4]. iv) Dynamic variables, such as a "now_time" that implicitly expands to some function. 5) With redefinition of the __namespace__ object, interesting run-time manipulations become possible, such as redefining a variable used by a function to be local/global/outer. Very dangerous, of course, but potentially very flexible. One case that comes to mind is a "profiling" namespace, which tracks how often variables are accessed -- over-frequented variables might lead to better-optimized code, and unaccessed variables might indicate dead code. [*] -- I'm not saying that any of these examples are particularly good ideas; indeed, abuse of them would be incredibly ugly. It's just that these are the first things that come to mind, because they're also so related to the obvious use-cases of properties. The first reaction to this is going to be a definite "ew," and I'd agree; this would make Python names be non-absolute [then again, the __classvars__ suggestion goes nearly as far anyway]. But this unification does bring all the power of "instance.attribute" down to the level of "local_name". The single biggest practical benefit is an easy definition of an "outer" keyword: lexical closures in Python would then become truly on-par with use of global variables. The accumulator example would become: def make_accum(init): i = init def adder(j): outer i #[1] i += j return i return adder [1] -- note, this 'outer' check will have to require that 'i' be defined in an outer namespace -at the time the definition is compiled-. 
Otherwise, the variable might have to be created at runtime (as can be done now with 'global'), but there's no obvious choice on which namespace to create it in: global, or the immediately-outer one? This implies the following peculiar behaviour (but I think it's for the best): >>> # no i exists >>> def f(): # will error on definition outer i print i >>> def g(): # won't error print i >>> i = 1 >>> f() >>> g() Definitely a Py3K proposal, though. -- http://mail.python.org/mailman/listinfo/python-list
Re: ncurses' Dark Devilry
Jeremy Moles wrote: >>In article <[EMAIL PROTECTED]>, >> Jeremy Moles <[EMAIL PROTECTED]> wrote: >>>I have a focus "wheel" of sorts that allows the user to do input on >>>various wigets and windows and whatnot. However, if I want to quickly >>>call addstr somewhere else in the application I have to: >>> >>> 1. Store the YX coords of the cursor currently >>> 2. Use the cursor in the "current" action >>> 3. Restore the old cursor location >>> > > All of the routines I can find in the ncurses library want to take > control of the "cursor" object. That is: they either want to advance > it's position (addstr) or not (addchstr), but they both certainly grab > "control" of it; at least, visually. > > Basically what I'm looking for is a way to refresh a portion of a > curses-controlled "window" without affecting the current location of the > cursor or having to manually move it and move it back. Why not wrap your 1-3 in a function of your own? More generally, build a 'cursor location stack', probably using a list. Add utility functions push_cur and pop_cur to push and pop the current location of the cursor from that stack (pop_cur actually resets the current cursor location for future printing). Then your "write over there" becomes: push_cur() move_cursor(location) write(text) pop_cur() which can be pretty easily wrapped in a single function. Mind you, I don't use curses myself, but what would prevent this from working? -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Fredrik Lundh wrote: > Harald Karner wrote: >>>python -c "print len('m' * ((2048*1024*1024)-1))" >> >>2147483647 > > > the string type uses the ob_size field to hold the string length, and > ob_size is an integer: > > $ more Include/object.h > ... > int ob_size; /* Number of items in variable part */ > ... > > anyone out there with an ILP64 system? I have access to an itanium system with a metric ton of memory. I -think- that the Python version is still only a 32-bit python, though (any easy way of checking?). Old version of Python, but I'm not the sysadmin and "I want to play around with python" isn't a good enough reason for an upgrade. :) Python 2.2.3 (#1, Nov 12 2004, 13:02:04) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> str = 'm'*2047*1024*1024 + 'n'*2047*1024*1024 >>> len(str) -2097152 Yes, that's a negative length. And I don't really care about rebinding str for this demo. :) >>> str[0] Traceback (most recent call last): File "", line 1, in ? IndexError: string index out of range >>> str[1] Traceback (most recent call last): File "", line 1, in ? IndexError: string index out of range >>> str[-1] Traceback (most recent call last): File "", line 1, in ? SystemError: error return without exception set >>> len(str[:]) -2097152 >>> l = list(str) >>> len(l) 0 >>> l [] The string is actually created -- top reports 4.0GB of memory usage. -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Fredrik Lundh wrote: > Christopher Subich wrote: >> >>I have access to an itanium system with a metric ton of memory. I >>-think- that the Python version is still only a 32-bit python > > > an ILP64 system is a system where int, long, and pointer are all 64 bits, > so a 32-bit python on a 64-bit platform doesn't really qualify. > Did a quick check, and int is 32 bits, while long and pointer are each 64: Python 2.2.3 (#1, Nov 12 2004, 13:02:04) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import struct >>> struct.calcsize('i'),struct.calcsize('l'),struct.calcsize('P') (4, 8, 8) So, as of 2.2.3, there might still be a problem. -- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Dao Language v.0.9.6-beta is release!
Paul McNett wrote: > Having .NET and Java in the world makes me into more of a hero when I > can swoop in and get the real business problem solved using Python. +1QOTW -- http://mail.python.org/mailman/listinfo/python-list
Re: hash()
John Marshall wrote: > I was actually interested in the mathematical/probability > side rather than the empirical w/r to the current > hash function in python. Although I imagine I could do > a brute force test for x-character strings. Hah. No. At least on the version I have handy (Py 2.2.3 on Itanium2), hash returns a 64-bit value. Brute-forcing that in any reasonable length of time is rather impossible. -- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Dao Language v.0.9.6-beta is release!
[EMAIL PROTECTED] wrote: > >>From "The Design of Everyday Things", docs are a sign of poor design. > Even a single word, such as the word "Push" on the face of a door, is > an indication that the design can be improved. Please, rethink the > design instead of trying to compensate with more documentation. This quote, with a naive reading, would seem to imply that "needing documentation is evidence of bad design." I think we can all agree that this interpretation is ludicrous: the only programming language, for example, which does not need documentation is natural language, and that contains so many ambiguities that humans often get instructions wrong. If nothing else, documentation is necessary to explain "why X instead of Y," when both X and Y are perfectly valid, but mutually exclusive choices (CamelCase versus underscore_names). IMO, the correct interpretation of this reduces exactly to the principle of least surprise. If a door needs to have a sign that says "push," it means that a fair number of people have looked at the door and thought it was a pull-door. But they expect it to be a pull-door based on /experience with other doors,/ not some odd Platonic ideal of door-osity. Some surprise, however (especially in Python), is necessary because the same feature can be seen more than one way: see the ever-present discussion about func(arg=default) scoping of default arguments. While "that's the way it is" shouldn't cover up true design flaws, arbitrary replacement with another behaviour doesn't work either: the other way will, ultimately, need the same order of documentation to catch surprises coming from the other direction. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bitching about the documentation...
Fredrik Lundh wrote: > Steven D'Aprano wrote: > > >>"Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo." > > > Did you mean: Badger badger Badger badger badger badger Badger badger > Mushroom! Mushroom! Thank you, I really needed that stuck in my head. :) -- http://mail.python.org/mailman/listinfo/python-list
Re: Calculating Elapsed Time
Peter Hansen wrote: > A few things. > > 1. "Precision" is probably the wrong word there. "Resolution" seems > more correct. > > 2. If your system returns figures after the decimal point, it probably > has better resolution than one second (go figure). Depending on what > system it is, your best bet to determine why is to check the > documentation for your system (also go figure), since the details are > not really handled by Python. Going by memory, Linux will generally be > 1ms resolution (I might be off by 10 there...), while Windows XP has > about 64 ticks per second, so .015625 resolution... One caveat: on Windows systems, time.clock() is actually the high-precision clock (and on *nix, it's an entirely different performance counter). Its semantics for time differentials, IIRC, are exactly the same, so if that's all you're using it for it might be worth wrapping time.time / time.clock as a module-local timer function depending on sys.platform. -- http://mail.python.org/mailman/listinfo/python-list
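That wrapping might look like the following sketch (note: on Python 3.3+ time.perf_counter() makes this dance unnecessary, and time.clock was removed entirely in 3.8, hence the guard):

```python
import sys
import time

# Prefer the high-resolution timer on Windows, wall-clock time elsewhere;
# fall back to perf_counter where time.clock no longer exists.
if sys.platform.startswith('win') and hasattr(time, 'clock'):
    timer = time.clock
elif sys.platform.startswith('win'):
    timer = time.perf_counter
else:
    timer = time.time

t0 = timer()
elapsed = timer() - t0  # only differences of timer() are meaningful
assert elapsed >= 0.0
```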
Re: Calculating Elapsed Time
Fredrik Lundh wrote: > if I run this on the Windows 2K box I'm sitting at right now, it settles > at 100 for time.time, and 1789772 for time.clock. on linux, I get 100 > for time.clock instead, and 262144 for time.time. Aren't the time.clock semantics different on 'nix? I thought, at least on some 'nix systems, time.clock returned a "cpu time" value that measured actual computation time, rather than wall-clock time [meaning stalls in IO don't count]. This is pretty easily confirmed, at least on one particular system (interactive prompt, so the delay is because of typing): Python 2.2.3 (#1, Nov 12 2004, 13:02:04) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import time >>> (c,t) = (time.clock, time.time) >>> (nowc, nowt) = (c(), t()) >>> print (c() - nowc, t() - nowt) (0.00195199989, 7.6953330039978027) So caveat programmer when using time.clock; its meaning is different on different platforms. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bitching about the documentation...
Steven D'Aprano wrote: > S > P > O > I > L > E > R > > S > P > A > C > E > > > > "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo." > > Buffalo from the city of Buffalo, which are intimidated by buffalo > from Buffalo, also intimidate buffalo from Buffalo. And to do a small simplification on it, to illustrate just how painful that sentence really is, the semantically equivalent version: N = buffalo from Buffalo (N [that] N buffalo) buffalo N. The dropping of the [that] is legal, if sometimes ambiguous, in English. > I didn't say it was *good* English, but it is *legal* English. Which is why natural language programming's never going to take off. :) -- http://mail.python.org/mailman/listinfo/python-list
Re: Bitching about the documentation...
Steven D'Aprano wrote: > On Wed, 07 Dec 2005 11:45:04 +0100, Fredrik Lundh wrote: > >>Did you mean: Badger badger Badger badger badger badger Badger badger >>Mushroom! Mushroom! > > > Er... no, I can't parse that. I suffered a Too Much Recursion error about > the third Badger (I only have a limited runtime stack). http://www.badgerbadgerbadger.com/ And now back to your regularly scheduled newsgroup, already in progress. -- http://mail.python.org/mailman/listinfo/python-list
Re: Overloading
Johannes Reichel wrote: > Hi! > > In C++ you can overload functions and constructors. For example if I have a > class that represents a complex number, than it would be nice if I can > write two seperate constructors > > class Complex: Please do note, if you want this for the exact use of a Complex class, Python does have complex arithmetic built-in: print (0 + 1j) The other posted points are still valid (and generally applicable). -- http://mail.python.org/mailman/listinfo/python-list
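A quick illustration of the built-in complex type:

```python
z = 3 + 4j          # literal syntax: the imaginary part gets a j suffix
w = complex(1, -2)  # or the explicit constructor

assert z.real == 3.0 and z.imag == 4.0
assert abs(z) == 5.0     # magnitude of the 3-4-5 triangle
assert z * w == 11 - 2j  # (3+4j)(1-2j) = 3 - 6j + 4j + 8 = 11 - 2j
```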
Re: Text() tags and delete()
Bob Greschke wrote: > Does Text.delete(0.0, END) delete all of the tags too? Everything says it > does not delete marks, but nothing about tags. Note to everyone else: this is a TKinter question. Tags are attached to text ranges, in the Text widget. If you delete all of the text in the widget, all of the text ranges will go poof, so therefore all of the tag attachments will likewise go poof. This will not delete any tags that have been defined for the widget, though, although why you'd want to do that is an open question. -- http://mail.python.org/mailman/listinfo/python-list
DParser binaries on Win32 with Python 2.4?
From the documentation, it looks like DParser-python will do what I need, but I'm having trouble getting it installed properly. I'm using a win32 environment, with official 2.4 Python binaries. The official DParser for Python win32 binaries (staff.washington.edu/sabbey/dy_parser) fail, saying that I don't have Python 2.3 installed. :/ Compiling the source on cygwin (with -mno-cygwin) succeeds, but then attempting to install results in: \Python24\python.exe setup.py install running install running build running build_py creating build creating build\lib.win32-2.4 copying dparser.py -> build\lib.win32-2.4 running build_ext building 'dparser_swigc' extension error: Python was built with version 7.1 of Visual Studio, and extensions need to be built with the same version of the compiler, but it isn't installed. I lack VS, and would prefer to stay with win32 Python rather than cygwin Python, because I got Twisted (which I also use) working on win32 but not cygwin. Any ideas? -- http://mail.python.org/mailman/listinfo/python-list
Re: DParser binaries on Win32 with Python 2.4?
[EMAIL PROTECTED] wrote: > 1) http://mingw.org > 2) python setup.py build --compiler=mingw32 > 3) python setup.py install Thank you very much, it looks like this worked perfectly; it even picked up on the cygwin-mingw32 libraries and compiled with the cygwin compiler and -mno-cygwin. -- http://mail.python.org/mailman/listinfo/python-list
twisted: not doing DNS resolutions?
I'm building an application that makes several user-specified internet connections; twisted meets my needs more or less perfectly. I'm running into a problem, however, in that twisted is not allowing connections (reactor.connectTCP) by hostname, only IP address. [read: connections to IP addresses work fine, hostnames don't] From what I can tell, the problem lies in that Twisted simply isn't performing the DNS resolutions. From the connection factory's startedConnecting method, print connector.getDestination() results in: IPv4Address(TCP, 'hostname', port) That is to say, the port is correct, but the 'hostname' is completely unresolved. Since 'hostname' is a really bad IP address, not being one at all, the connection of course fails. A check via tcpdump on my gateway machine shows that the DNS resolution doesn't occur. The API documentation for version 1.3 (I'm using 2.0.1, but a quick check of twisted source/docstrings shows this to be still true[1]) shows that connectTCP takes "a host name," so by that (and the echo client example that connects to 'localhost') I presume there's supposed to be some sort of resolution going on. I'm running twisted 2.0.1 on win32. Is this a bug in twisted, or is there some configuration that I've gone and borked? [1] -- is there some reason in particular that there's no API reference for twisted 2.0x? The documentation/tutorials are pretty sparse as-is, I think. -- http://mail.python.org/mailman/listinfo/python-list
Re: twisted: not doing DNS resolutions?
Christopher Subich wrote: > From what I can tell, the problem lies in that Twisted simply isn't > performing the DNS resolutions. From the connection factory's > startedConnecting method, print connector.getDestination() results in: > > IPv4Address(TCP, 'hostname', port) Update: after doing some diving in the twisted source, it is supposed to do that. My guess is that either it thinks the hostname is a valid ip address (unlikely), or a callback isn't actually getting called. This confuses me. -- http://mail.python.org/mailman/listinfo/python-list
Re: twisted: not doing DNS resolutions?
Christopher Subich wrote: > Christopher Subich wrote: > >> From what I can tell, the problem lies in that Twisted simply isn't >> performing the DNS resolutions. From the connection factory's ... right, finally figured it out after a very long time at debugging the beast. It's an interaction with IDLE. What happens is that the deferToThread call in twisted's lookup Just Doesn't Run Right (under some circumstances) when it's run under IDLE with tksupport. I finally got the idea to run the application from the command line, and it worked just fine. This is kind of odd, since a trivial test case run from the interactive idle prompt works okay, but it's now 5am and I'm going to sleep. Moral: beware the IDLEs of March. -- http://mail.python.org/mailman/listinfo/python-list
Re: Socket connection to server
Steve Horsley wrote: > There is a higher level socket framework called twisted that everyone > seems to like. It may be worth looking at that too - haven't got round > to it myself yet. I wouldn't say 'like,' exactly. I've cursed it an awful lot (mostly for being nonobvious), but it does a damn fine job at networking, especially if you're comfortable building your own protocols. -- http://mail.python.org/mailman/listinfo/python-list
Re: Splitting string into dictionary
Robert Kern wrote: > David Pratt wrote: > >> I have string text with language text records that looks like this: >> >> 'en' | 'the brown cow' | 'fr' | 'la vache brun' > translations = [x.strip(" '") for x in line.split('|')] > d = dict(zip(translations[::2], translations[1::2])) One caveat is that this assumes the actual text never contains a literal '|'. -- http://mail.python.org/mailman/listinfo/python-list
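A quick demonstration of both the recipe and the caveat (sample strings made up for illustration):

```python
line = "'en' | 'the brown cow' | 'fr' | 'la vache brun'"
parts = [x.strip(" '") for x in line.split('|')]
d = dict(zip(parts[::2], parts[1::2]))  # even slots are keys, odd slots values
assert d == {'en': 'the brown cow', 'fr': 'la vache brun'}

# The caveat: a literal '|' inside a field breaks the naive split
bad = "'en' | 'either | or'"
assert len(bad.split('|')) == 3  # three fragments where two fields were meant
```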
Re: Favorite non-python language trick?
Steven D'Aprano wrote: > On Fri, 01 Jul 2005 12:24:44 -0700, Devan L wrote: > > >>With the exception of reduce(lambda x,y:x*y, sequence), reduce can be >>replaced with sum, and Guido wants to add a product function. > > > How do you replace: > > reduce(lambda x,y: x*y-1/y, sequence) > > with sum? You don't, but an almost equally short replacement works just as well, and doesn't even need the lambda: >>>sequence=range(1,100) >>>_res = 0.0 >>>for x in sequence: _res = _res*x + 1/x >>> >>>_res 9.3326215443944096e+155 Sure, this isn't a sum, but I'd argue that the for loop solution is superior: 1) For single expressions, the guts of the operation is still a single line 2) This completely avoids lambda -- while I myself am ambivalent about the idea of lambda going away, lambda syntax can get hairy for complicated expressions -- the comma changes meaning halfway through the expression, from 'parameter delimiter in lambda' to 'next parameter in reduce' 3) This trivially extends to a block of code, which a lambda doesn't 4) Behavior for zero- and one-length lists is explicit and obvious. There are, of course, a few disadvantages, but I think they're more corner cases. 1) This solution obviously isn't itself an expression (although the result is a single variable), so it can't be used in totality as a component to a larger call. [Rebuttal: When exactly would this be a good thing, anyway? Reduce statements are at least 11 characters long, 13 with a one-character default value. Using this as a parameter to just about anything else, even a function call, seems a bit unreadable to me.] 2) An explicit intermediate/result value is needed. This seems to be more of a 'cleanliness' argument than anything. 
Besides, rewriting this as a for loop actually improves performance:

>>> sequence = range(1,100)
>>> def f1():
...     j = 0.0
...     for x in sequence:
...         j = j*x + 1/x
...     return j
>>> def f2():
...     return reduce(lambda x,y: x*y - 1/y, sequence)
>>> def runtime(f, n):
...     starttime = time.time()
...     for i in xrange(n):
...         f()
...     print time.time() - starttime
>>> runtime(f1,1)
1.3717902
>>> runtime(f2,1)
3.6745232

Making the series bigger results in even worse relative performance (no idea why):

>>> sequence = range(1,1000)
>>> runtime(f1,1)
18.4169998169
>>> runtime(f2,1)
125.491000175

So really, 'reduce' is already useless for large anonymous blocks of code (which can't be defined in lambdas), and it seems slower than 'for .. in' for even simple expressions.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Favorite non-python language trick?
Devan L wrote: > sum(sequence[0] + [1/element for element in sequence[1:]]) > > I think that should work. That won't work, because it misses the x*y part of the expression (x[n]*x[n+1] + 1/x[n+1], for people who haven't immediately read the grandparent). Personally, I think demanding that it be writable as a sum (or product, or any, or all) is a false standard -- nobody's claimed that these would replace all cases of reduce, just the most common ones. -- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Steven D'Aprano wrote:
> comps. But reduce can't be written as a list comp, only as a relatively
> complex for loop at a HUGE loss of readability -- and I've never used
> Lisp or Scheme in my life. I'm surely not the only one.

See my reply to your other post for a more detailed explanation, but I don't think that the for-loop solution is much less readable at all; the additional complexity involved is simply setting the initial value and result for the accumulator. The for-loop solution is even more flexible, because it can include anonymous code blocks and not just expressions.

One caveat that I just noticed, though -- with the for-loop solution, you do need to be careful about whether you're using a generator or a list if you do not set an explicit initial value (and instead use the first value of 'sequence' as the start). The difference is:

_accum = g.next()
for i in g:
    _accum = stuff(_accum, i)

versus

_accum = g[0]
for i in g[1:]:
    _accum = stuff(_accum, i)

The difference is because generators don't support subscripts, while lists don't support .next() iteration. Unless I'm missing something in the language (entirely possible), this suggests a missing feature for same-syntax iteration over the two types.
-- http://mail.python.org/mailman/listinfo/python-list
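[Editor's note: one same-syntax workaround, for what it's worth, is to track the first element with a flag instead of pulling it out ahead of the loop. This sketch (the name my_reduce is invented) runs unchanged on lists and generators:

```python
def my_reduce(stuff, seq):
    # Works for any iterable: no subscripts, no explicit .next() call.
    # (Like reduce, an empty sequence is an error here.)
    first = True
    for i in seq:
        if first:
            _accum, first = i, False
        else:
            _accum = stuff(_accum, i)
    return _accum

print(my_reduce(lambda x, y: x + y, (n for n in range(1, 5))))  # -> 10
```

The same call works with a list argument, e.g. my_reduce(lambda x, y: x + y, [1, 2, 3]).]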
Re: What are the other options against Zope?
Dennis Lee Bieber wrote:
> The Windows registry is "a maze of twisty little passages, all
> alike"

ITYM "a maze of twisty little passages, {058C1536-2201-11D2-BFC1-00805F858323}"

> The registry is a cryptic, bloated system by which M$ can hide
> details about anything they want... Instead of having separate .INI
> files scattered about.

A registry, in general, is a decent idea. It's more robust and more permanent than environment variables, and the centralization is better than INI files scattered about (probably). The problem is that the Windows Registry has passed beyond all mortal ken, probably sometime around when they started indexing things by GUID, losing any hierarchy based on application. That, and the file format definitely isn't robust against the bit-rot that happened too often on FAT16/32 filesystems.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Favorite non-python language trick?
Steven D'Aprano wrote: > On Sun, 03 Jul 2005 00:39:19 -0400, Christopher Subich wrote: >>Personally, I think demanding that it be writable as a sum (or product, >>or any, or all) is a false standard -- nobody's claimed that these would >>replace all cases of reduce, just the most common ones. > > Er, excuse me, but that is EXACTLY what Devan claimed. > > Quote: "With the exception of reduce(lambda x,y:x*y, sequence), reduce can be > replaced with sum, and Guido wants to add a product function." Okay, then... "not many people have claimed that sum is a universal replacement for reduce, only the most common cases." It's further argued that the uncommon cases are more flexible and (again, mostly) anywhere from only slightly less readable to significantly more readable in for-loop form. The only corner case that isn't, so far as I know, is when the reduce() has no default initial value and the sequence/generator might possibly have 0 elements. But that's a TypeError anyway. -- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Carl Banks wrote:
> Listcomps et al. cannot do everything map, lambda, filter, and reduce
> did. Listcomps are inferior for functional programming. But, you see,
> functional is not the point. Streamlining procedural programs is the
> point, and I'd say listcomps do that far better, and without all the
> baroque syntax (from the procedural point of view).

I've heard this said a couple of times now -- how can listcomps not completely replace map and filter? I'd think that:

mapped = [f(i) for i in seq]
filtered = [i for i in seq if f(i)]

The only map case that doesn't cleanly reduce is for multiple sequences of different lengths -- map will extend to the longest one (padding the others with None), while zip (izip) truncates sequences at the shortest. This suggests an extension to (i)zip, possibly (i)lzip ['longest zip'], that does None-padding in the same way that map does.

Reduce can be rewritten easily (if an initial value is supplied) as a for loop:

_accum = initial
for j in seq:
    _accum = f(_accum, j)
result = _accum

(Two lines shorter if the result variable can also be used as the accumulator -- though that would be undesirable if assigning to it can trigger, say, a property function call.)

Lambdas, I agree, can't be replaced easily, and they're the feature I'd probably be least happy to see go, even though I haven't used them very much.
-- http://mail.python.org/mailman/listinfo/python-list
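[Editor's note: a sketch of what such an lzip might look like -- the name and behaviour mirror the suggestion above, and are illustrative rather than an existing function. It takes sequences (not generators), as map historically did:

```python
def lzip(*seqs):
    # Like zip(), but pads the shorter sequences with None, the way
    # map() with multiple sequence arguments does.
    longest = max(len(s) for s in seqs)
    return [tuple(s[i] if i < len(s) else None for s in seqs)
            for i in range(longest)]

print(lzip([1, 2, 3], 'ab'))  # -> [(1, 'a'), (2, 'b'), (3, None)]
```

Later Pythons did grow a lazy version of exactly this in itertools (izip_longest in 2.6, zip_longest in 3.x).]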
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Scott David Daniels wrote:
> egbert wrote:
>> How do you replace
>> map(f1, sequence1, sequence2)
>> especially if the sequences are of unequal length?
>>
>> I didn't see it mentioned yet as a candidate for limbo,
>> but the same question goes for:
>> zip(sequence1, sequence2)
>
> OK, you guys are picking on what reduce "cannot" do.
> The first is [f1(*args) for args in itertools.izip(iter1, iter2)]
> How do _you_ use map to avoid making all the intermediate structures?

Not quite -- zip and izip terminate at the shortest sequence, while map extends the shortest with Nones. This is resolvable by the addition of an lzip (and ilzip) function in Python 2.5 or something. And egbert is Chicken Littling with the suggestion that 'zip' will be removed.
-- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Carl Banks wrote:
> Christopher Subich wrote:
>> I've heard this said a couple times now -- how can listcomps not
>> completely replace map and filter?
> If you're doing heavy functional programming, listcomps are
> tremendously unwieldy compared to map et al.

Interesting; could you post an example of this? Whenever I try to think of one, I come up with unwieldy syntax for the functional case. In purely functional code, the results of map/filter/etc. would probably be used directly as arguments to other functions, which might make the calls longer than I'd consider pretty. This is especially true with lots of lambda-ing to declare temporary expressions.
-- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Peter Hansen wrote: > [str(parrot) for parrot in sequence], for example, tells you much more > about what is going on than str(x) does. > > Exactly what, I have no idea... but it says _so_ much more. ;-) Yarr! Avast! Etc! -- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Carl Banks wrote: > I suspect you're misunderstanding what I mean by heavily functional. > Heavily functional programming is a different mindset altogether. In > heavily functional programming, things like maps and filters and > function applications are actually what you're thinking about. map > isn't an indirect way to do a for loop; it's a direct way to do a map. That's true; I'm more comfortable with procedural programming in general, but I had a few classes that used LISP and understand what you're talking about. That said, Python itself is mostly a procedural language, with the functional tools really being bolted on[1]. When we're talking about Py3K, I think we're really talking about a redesign and rethink of pretty much the entire language -- with list and generator comprehensions, for procedural programming the need for map and lambda goes away. Reduce isn't directly replaced, of course, but a for-loop implementation (for procedural programming) is clearer, more powerful, more explicit, and possibly faster. That said, I very much like the idea of putting map and filter in a functional module. For applications like functional-style programming where map/etc are clearer, that keeps them in the library for efficient use, yet it leaves the native language with OO(g)WTDI [Only One (good) Way to Do It]. [1] -- lambda excepted. I think it's kind of cute, in a baby-mammal kind of way. -- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
[EMAIL PROTECTED] wrote: > concept quickly familiar. But "lambda" has a very clear meaning... it's > a letter of the greek alphabet. The connection between that letter and > anonymous functions is tenuous at best, and fails the test of making > Python read like "executable pseudocode". But 'lambda' does have a very clear meaning in the realm of functional programming, and it means precisely (mostly) what it means in Python: an anonymous function. It might not be the -best- possible name, but anyone who's had a computer science education should have had a class that introduced basic functional programming topics (even if only for academic interest), and so they should be familiar with the keyword name. If not, then it's just a magic word. Kind of like 'def'. -- http://mail.python.org/mailman/listinfo/python-list
Re: (Win32 API) callback to Python, threading hiccups
Francois De Serres wrote:
> - so, on callback, I create a new thread, after checking that the
> previous one has returned already (WaitOnSingleObject(mythread)) so we
> only have one thread involved.

Uh... to me, this looks like a frighteningly inefficient way of doing things. How about using a synchronous queue to post the data to a processing thread? That way, you don't have to create an entirely new thread each time you receive data in the callback.
-- http://mail.python.org/mailman/listinfo/python-list
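[Editor's note: a minimal sketch of the queue approach -- the names are invented and the doubling is a stand-in for real processing:

```python
import queue
import threading

data_q = queue.Queue()
results = []

def worker():
    # One long-lived consumer thread, instead of one thread per callback.
    while True:
        item = data_q.get()
        if item is None:          # sentinel tells the worker to exit
            break
        results.append(item * 2)  # stand-in for real processing

t = threading.Thread(target=worker)
t.start()

# The callback side just posts and returns immediately:
for data in (1, 2, 3):
    data_q.put(data)
data_q.put(None)
t.join()
print(results)  # -> [2, 4, 6]
```

Queue.put/get handle the locking internally, so the callback never blocks on processing and no per-callback thread creation is needed.]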
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Terry Hancock wrote:
> With list comprehensions and generators becoming so integral, I'm
> not sure about "unpythonic". And a syntax just occurred to me --
> what about this:
>
> [y*x for x,y]
>
> ? (that is, [<expression> for <arglist>] -- it's just like the
> beginning of a list comprehension or generator, but without the
> iterator. That implies that one must be given, and the result is
> therefore a callable object.)

As others have mentioned, this looks too much like a list comprehension to be elegant, which also rules out () and {}... but I really do like the infix syntax. Perhaps using angle brackets would be useful? These have no grouping meaning in Python that I'm aware of. For example, I'd also prefer using 'with' rather than 'for' as the keyword -- 'with' doesn't suggest iteration. I also suggest parenthesization of the argument list, since that makes a zero-argument lambda not look weird. To replicate the examples from http://wiki.python.org/moin/AlternateLambdaSyntax:

1. lambda a, b, c: f(a) + o(b) - o(c)  -->  <(a, b, c) with f(a) + o(b) - o(c)>
2. lambda x: x * x                     -->  <(x) with x * x>
3. lambda: x                           -->  <() with x>
4. lambda *a, **k: x.bar(*a, **k)      -->  <(*a, **k) with x.bar(*a, **k)>
5. ((lambda x=x, a=a, k=k: x(*a, **k)) for x, a, k in funcs_and_args_list)
   -->  (<(x=x, a=a, k=k) with x(*a, **k)> for x, a, k in funcs_and_args_list)
-- http://mail.python.org/mailman/listinfo/python-list
Re: Tkinter grid layout
Eric Brunel wrote:
> So you should either make your MainWindow class inherit from Tk, which
> eliminates the unneeded container and the problems it may cause, or make
> sure the pack or grid on your MainWindow instance actually tells the
> container to grow with its container. With pack, it's quite easy: just
> do myWindow.pack(fill=BOTH, expand=1). With grid, it's a bit more
> complicated, since you will have to configure the grid on the container.

To expand on this, the grid method needs a few calls that aren't immediately obvious. Specifically, the containing object must have rowconfigure and columnconfigure called on it:

>>> r = Tk()
>>> g = Text(r)
>>> h = Entry(r)
>>> g.grid(row=1, sticky=N+S+E+W)
>>> h.grid(row=2, sticky=E+W)
>>> r.columnconfigure(0, weight=1)
>>> r.rowconfigure(1, weight=1)
>>> r.mainloop()

This creates a window containing a text widget above an entry widget. Both will resize horizontally to fill the entire window, and the text widget will also resize vertically.
-- http://mail.python.org/mailman/listinfo/python-list
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Ron Adam wrote:
> Christopher Subich wrote:
>> As others have mentioned, this looks too much like a list
>> comprehension to be elegant, which also rules out () and {}... but I
>> really do like the infix syntax.
>
> Why would it rule out ()?

Generator expressions. Mind you, Py3k might want to unify generators and lists in some way anyway, freeing up (). :)

> You need to put a lambda expression in ()'s anyway if you want to use it
> right away.
>
> print (lambda x,y: x+y)(1,2)

Although print <(x,y) with x+y>(1,2) has natural grouping: the lambda itself is effectively a single token. I also like the infix style reminiscent of Python's existing comprehensions. Hell, call it a 'function comprehension' or 'expression comprehension,' and we can pretend we invented the damn thing.

> My choice:
>
> name = (let x,y return x+y)  # easy for beginners to understand
> value = name(a,b)
>
> value = (let x,y return x+y)(a,b)

And what would a zero-argument lambda be (aside from really arcane)? (let return 2)?

> I think the association of (lambda) to [list_comp] is a nice
> distinction. Maybe a {dictionary_comp} would make it a complete set. ;-)

Yeah, dictionary comprehensions would be an interesting feature. :) The syntax might be a bit unwieldy, though, and I doubt they'd be used often enough to be worth implementing, but still neat.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Legacy data parsing
gov wrote:
> Hi,
>
> I've just started to learn programming and was told this was a good
> place to ask questions :)
>
> Where I work, we receive large quantities of data which is currently
> all printed on large, obsolete, dot matrix printers. This is a problem
> because the replacement parts will not be available for much longer.
>
> So I'm trying to create a program which will capture the fixed width
> text file data and convert as well as sort the data (there are several
> different report types) into a different format which would allow it to
> be printed normally, or viewed on a computer.

Are these reports all of the same page-wise format, with fixed-width columns? If so, then the suggestion about a state machine sounds good -- just run a state machine to figure out which line type you're on, then extract the fixed-width fields via slices:

name = line[x:y]

If that doesn't work, then pyparsing or DParser might work for you as a more general-purpose parser.
-- http://mail.python.org/mailman/listinfo/python-list
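[Editor's note: a toy version of that state-machine-plus-slices idea. The separator marker and column positions are invented for illustration; real reports will differ:

```python
def parse_report(lines):
    # State machine: skip header lines until the separator row,
    # then slice fixed-width fields out of each data row.
    records = []
    in_body = False
    for line in lines:
        if line.startswith('-----'):
            in_body = True
        elif in_body and line.strip():
            name = line[0:10].strip()   # assumed columns 0-9
            qty = int(line[10:15])      # assumed columns 10-14
            records.append((name, qty))
    return records

sample = [
    'ACME WIDGET REPORT',
    '-' * 19,
    '%-10s%5d' % ('bolt', 12),
    '%-10s%5d' % ('washer', 7),
]
print(parse_report(sample))  # -> [('bolt', 12), ('washer', 7)]
```

A real version would likely have one state per report type, but the shape of the loop stays the same.]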
Re: decorators as generalized pre-binding hooks
Kay Schluehr wrote: > I think it would be a good idea to pronounce the similarity between > function decorators and metaclasses. Metaclasses were once introduced > as an arcane art of fuzzy bearded hackers or supersmart 'enterprise > architects' that plan at least products of Zope size but not as a tool > for the simple programmer on the street. But maybe they should be and > there should also be librarys of them representing orthogonal > customizations ready for plug in. In which case, I point out the need for better, more accessible documentation. The Python 2.4 tutorial, for example, doesn't mention them at all as far as I've noticed. -- http://mail.python.org/mailman/listinfo/python-list
Re: Help with report
ChrisH wrote: > Oh. The one other thing I forgot to mention is that the data needs to be > already updated every 10 minutes or so automatically. You know, this is the most concise example of feature-creep in a specification that I've ever seen. -- http://mail.python.org/mailman/listinfo/python-list
Re: Fwd: Should I use "if" or "try" (as a matter of speed)?
Dark Cowherd wrote:
> But one advice that he gives which I think is of great value and is
> good practice is
> "Always catch any possible exception that might be thrown by a library
> I'm using on the same line as it is thrown and deal with it
> immediately."

That's fine advice, except for when it's not. Consider the following code:

try:
    f = file('file_here')
    do_setup_code
    do_stuff_with(f)
except IOError:  # File doesn't exist
    error_handle

To me, this code seems very logical and straightforward, yet it doesn't catch the exception on the very next line following its generation. It relies on the behaviour of the rest of the try-block being skipped -- the "implicit goto" that Joel seems to loathe. If we had to catch it on the same line, the only alternative that comes to mind is:

error_flag = 0
try:
    f = file('file_here')
except IOError:  # File doesn't exist
    error_handle
    error_flag = 1
if not error_flag:
    do_setup_code
    do_stuff_with(f)

which hinges on weird, arbitrary error flags, and doesn't seem like good programming to me.
-- http://mail.python.org/mailman/listinfo/python-list
Re: python parser
tuxlover wrote:
> I have to write a verilog parser in python for a class project. I was
> wondering if all you folks could advise me on choosing the right python
> parser module. I am not comfortable with lex/yacc and as a result find
> myself struggling with any module which uses lex/yacc syntax/philosophy.
> pyparsing looks good to me, but before I dive into it, I would really
> appreciate feedback from members of this group

I've had good luck with DParser for Python (http://staff.washington.edu/sabbey/dy_parser/index.html); in fact, it might even be a very easy translation from a premade Verilog grammar to a DParser grammar (Google search if you don't have a BNF for Verilog already). Two caveats come to mind, though: the documentation isn't as newbie-friendly as it could be, and DParser requires a binary library -- it's not Python-only, which might matter for your project.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Slicing every element of a list
Gary Herron wrote:
> Alex Dempsey wrote:
>> for line in lines:
>>     line = line[1:-5]
>>     line = line.split('\"\t\"')
> This, in fact, did do the operation you expected, but after creating the
> new value and assigning it to line, you promptly threw it away. (Because
> the loop then went back to the top and (re)assigned the next thing in
> lines to line, wiping out your nicely sliced computation.) You
> need to *do* something with the value in line before you end the loop --
> but what?

As an intermediate tip, the entire loop can be written as a single list comprehension:

stuff = [li[1:-5].split('"\t"') for li in lines]

(You don't need to escape single quotes inside double-quoted strings, and vice versa.)
-- http://mail.python.org/mailman/listinfo/python-list
Re: Fwd: Should I use "if" or "try" (as a matter of speed)?
Thomas Lotze wrote:
> Neither does it to me. What about
>
> try:
>     f = file('file_here')
> except IOError:  # File doesn't exist
>     error_handle
> else:
>     do_setup_code
>     do_stuff_with(f)
>
> (Not that I'd want to defend Joel's article, mind you...)

That works. I'm still not used to having 'else' available like that. I wonder how Joel advocates managing in C++-likes that don't have try/catch/else semantics.
-- http://mail.python.org/mailman/listinfo/python-list
Re: all possible combinations
rbt wrote:
> Expanding this to 4^4 (256) to test the random.sample function produces
> interesting results. It never finds more than 24 combinations out of the
> possible 256. This leads to the question... how 'random' is sample ;)

sample(population, k): Return a k-length list of unique elements chosen from the population sequence. Used for random sampling without replacement. New in version 2.3.

Working as designed, I'd say. 4! = 24.
-- http://mail.python.org/mailman/listinfo/python-list
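[Editor's note: a quick check of that reading -- the population values here are assumptions about the original test, but the counting argument is the same:

```python
import random

pop = [0, 1, 2, 3]                  # 4 distinct elements
seen = set()
for _ in range(5000):
    # sample() draws without replacement, so an element can never
    # repeat within one draw; only the 4! = 24 orderings can appear.
    seen.add(tuple(random.sample(pop, 4)))

print(len(seen))  # at most 24, never 256
```

Getting all 256 length-4 tuples would require sampling *with* replacement, i.e. four independent random.choice() calls per draw.]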
Re: Help - Classes and attributes
rh0dium wrote:
> Hi all,
>
> I believe I am having a fundamental problem with my class and I can't
> seem to figure out what I am doing wrong. Basically I want a class
> which can do several specific ldap queries. So in my code I would have
> multiple searches. But I can't figure out how to do it without it
> barfing..
[snip]
> File "./ldap-nsc.py", line 40, in search
>     ldap_result_id = l.search_s(baseDN, searchScope, searchAttrs, retrieveAttrs)
> AttributeError: NSCLdap instance has no attribute 'search_s'
>
> The code is also I believe straight forward..

You're going to kick yourself when you see the mistake.

> import ldap
>
> class NSCLdap:
>
>     def __init__(self, server="sc-ldap.nsc.com"):
>         who = ""; cred = ""
>         self.server = server
>         try:
>             print "LDAP Version", ldap.__version__
>             l = ldap.open(server)
              ^^
[big snip]
> if __name__ == '__main__':
>
>     l = NSCLdap()
>     l.search()

> I would love some pointers - clearly my code thinks that search_s is an
> attribute of my class but it's not..

Ah, but l -is- an instance of your class. You want l to refer to the ldap connection, but you forgot to assign it to self.l -- in __init__, you assign l to a mere local variable, which goes poof as soon as __init__ returns. You forgot the self.l throughout both __init__ and search. You get the slightly misleading traceback because there is an "l" defined -- it just happens to be the one in globals(), the l = NSCLdap() that got assigned when you imported/ran the module. Replace l = NSCLdap() with q = NSCLdap() (and l.search with q.search), and you'll get a NameError instead.
-- http://mail.python.org/mailman/listinfo/python-list
Re: threads and sleep?
Jp Calderone wrote:
> On 14 Jul 2005 05:10:38 -0700, Paul Rubin <"http://phr.cx"@nospam.invalid> wrote:
>> Andreas Kostyrka <[EMAIL PROTECTED]> writes:
>>> Basically the current state of art in "threading" programming doesn't
>>> include a safe model. General threading programming is unsafe at the
>>> moment, and there's nothing to do about that. It requires the developer
>>> to carefully add any needed locking by hand.
>>
>> So how does Java do it? Declaring some objects and functions to be
>> synchronized seems to be enough, I thought.
>
> Multithreaded Java programs have thread-related bugs in them too. So it
> doesn't seem to be enough. Like Python's model, Java's is mostly about
> ensuring internal interpreter state doesn't get horribly corrupted. It
> doesn't do anything for application-level state. For example, the

Hrm... this would suggest the possibility of designing a metaclass, perhaps, that would ensure synchronous access to an object. Perhaps "wrap" the class in another that gets and releases a mutex on any external get/set access (except, possibly, for a specified list of "asynchronous" data members and methods). This, of course, wouldn't eliminate deadlocks, but those arise from the interaction of multiple objects, rather than within a single one.
-- http://mail.python.org/mailman/listinfo/python-list
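[Editor's note: a rough sketch of that wrapping idea. The class name is invented, and only the attribute fetch/store is serialized here -- not the body of a fetched method -- so a production version would need considerably more care (special methods, reentrancy, the "asynchronous" exemption list):

```python
import threading

class Synchronized(object):
    """Proxy that serializes attribute access to the wrapped object."""
    def __init__(self, obj):
        # Bypass our own __setattr__ while setting up the proxy state.
        object.__setattr__(self, '_obj', obj)
        object.__setattr__(self, '_lock', threading.RLock())

    def __getattr__(self, name):
        # Called only for names not found on the proxy itself,
        # i.e. everything belonging to the wrapped object.
        with self._lock:
            return getattr(self._obj, name)

    def __setattr__(self, name, value):
        with self._lock:
            setattr(self._obj, name, value)
```

Usage is transparent: wrap an instance once, then read and write its attributes through the proxy as usual.]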
Re: Changing size of Win2k/XP console?
Sheeps United wrote: > I'm far from sure if it's the right one, but I think it could be > SetConsoleScreenBufferSize from Kernel32. Hrr, for some reason I have nasty > feeling in back of my head... That could also be totally wrong way of > approaching. I have the source code to a win32-console program lying around, and it uses SetConsoleScreenBufferSize from C++ to do just that. -- http://mail.python.org/mailman/listinfo/python-list
Re: main window in tkinter app
William Gill wrote:
> O.K. I tried from scratch, and the following snippet produces an
> infinite loop saying:
>
> File "C:\Python24\lib\lib-tk\Tkinter.py", line 1647, in __getattr__
>     return getattr(self.tk, attr)
>
> If I comment out the __init__ method, I get the titled window, and it
> prints out self.var ('1')
>
> import os
> from Tkinter import *
>
> class MyApp(Tk):
>     var = 1
>     def __init__(self):
>         pass
>     def getval(self):
>         return self.var
>
> app = MyApp()
> app.title("An App")
> print app.getval()
> app.mainloop()

You're not calling the parent's __init__ inside your derived class. I would point out where the Python Tutorial says you should do this, but it's not in the obvious place (Classes: Inheritance). Python does -not- automagically call parent-class __init__s for derived classes; you must do that explicitly. Changing the definition of your class to the following works:

>>> class MyApp(Tk):
...     var = 1
...     def __init__(self):
...         Tk.__init__(self)
...     def getval(self):
...         return self.var

It works when you comment out __init__ because of Python's normal name resolution. As you'd logically expect, if you don't define a method in a derived class but call it (such as instance.method()), the method from the base class is called. You just proved that this works for __init__ methods also. When you didn't define __init__ for your derived class, MyApp() called Tk.__init__(), which Does the Right Thing in terms of setting up all the Tkinter-specific members.
-- http://mail.python.org/mailman/listinfo/python-list
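[Editor's note: the explicit-__init__ rule generalizes beyond Tkinter; a minimal sketch with invented class names:

```python
class Base(object):
    def __init__(self):
        self.ready = True        # state the subclass depends on

class Derived(Base):
    def __init__(self):
        Base.__init__(self)      # explicit -- Python will not call this for you
        self.extra = 1

d = Derived()
# Both __init__ bodies ran: d.ready is True and d.extra == 1.
# Omit the Base.__init__ call and d.ready would raise AttributeError.
```
]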
Re: main window in tkinter app
William Gill wrote:
> That does it!, thanks.
>
> Thinking about it, when I created a derived class with an __init__
> method, I overrode the base class's init. It should have been
> intuitive that I needed to explicitly call baseclass.__init__(self); it
> wasn't. It might have hit me if the fault was related to something in
> baseclass.__init__() not taking place, but the recursion loop didn't give
> me a clue. Any idea why failing to init the base class caused the loop?

You never pasted the first part of the traceback:

File "", line 1, in -toplevel-
    app.title('frob')
File "C:\Python24\Lib\lib-tk\Tkinter.py", line 1531, in wm_title
    return self.tk.call('wm', 'title', self._w, string)
File "C:\Python24\Lib\lib-tk\Tkinter.py", line 1654, in __getattr__
    return getattr(self.tk, attr)
File "C:\Python24\Lib\lib-tk\Tkinter.py", line 1654, in __getattr__
    return getattr(self.tk, attr)

When you didn't call Tk.__init__(self), self.tk was never initialized. Further, it's obvious from the traceback that Tk implements a __getattr__ method; from the Python docs:

__getattr__(self, name): Called when an attribute lookup has not found the attribute in the usual places.

Since self.tk doesn't exist, __getattr__(self, 'tk') was called. Without looking at the Tkinter source code, we can surmise that there's a default behaviour of "if self doesn't have it, return the relevant attribute from within self.tk." The problem, of course, arises when self.tk doesn't exist -- the self.tk reference calls self.__getattr__('tk'), which recurses. The infinite recursion, then, is a mild bug in Tkinter that doesn't show itself during normal use. The proper solution would be to replace the self.tk call with either self.__dict__['tk'], or to make Tk a new-style class and use object.__getattribute__(self, 'tk'). (Note: there are probably some fine details that I'm missing in this 'solution', so take it with a potato-sized grain of salt. The general principle still applies, though.)
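[Editor's note: the recursion mechanism can be reproduced without Tkinter at all; this minimal class is hypothetical, not Tkinter's actual code:

```python
class Flawed(object):
    # Mimics Tkinter's delegation: anything not found locally is
    # looked up on self.tk -- but self.tk was never set, so looking
    # it up re-enters __getattr__, forever.
    def __getattr__(self, name):
        return getattr(self.tk, name)

f = Flawed()
try:
    f.anything
except RecursionError:
    print("self.tk lookup re-entered __getattr__ until the stack limit")
```

(RecursionError is the Python 3 name; older Pythons raise it as a plain RuntimeError.)]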
-- http://mail.python.org/mailman/listinfo/python-list
Re: Need to interrupt to check for mouse movement
Peter Hansen wrote: > stringy wrote: > >> I have a program that shows a 3d representation of a cell, depending on >> some data that it receives from some C++. It runs with wx.timer(500), >> and on wx.EVT_TIMER, it updates the the data, and receives it over the >> socket. > > > It's generally inappropriate to have a GUI program do network > communications in the main GUI thread. You should create a worker > thread and communicate with it using Queues and possibly the > AddPendingEvent() or PostEvent() methods in wx. There should be many > easily accessible examples of how to do such things. Post again if you > need help finding them. I'd argue that point; it's certainly inappropriate to do (long-)/blocking/ network communications in a main GUI thread, but that's just the same as any blocking IO. If the main thread is blocked on IO, it can't respond to the user which is Bad. However, instead of building threads (possibly needlessly) and dealing with synchronization issues, I'd argue that the solution is to use a nonblocking network IO package that integrates with the GUI event loop. Something like Twisted is perfect for this task, although it might involve a significant application restructuring for the grandparent poster. Since blocking network IO is generally slow, this should help the grandparent poster -- I am presuming that "the program updating itself" is an IO-bound, rather than processor-bound process. -- http://mail.python.org/mailman/listinfo/python-list
Re: Need to interrupt to check for mouse movement
Jp Calderone wrote: > In the particular case of wxWidgets, it turns out that the *GUI* blocks > for long periods of time, preventing the *network* from getting > attention. But I agree with your position for other toolkits, such as > Gtk, Qt, or Tk. Wow, I'm not familiar with wxWidgets; how's that work? -- http://mail.python.org/mailman/listinfo/python-list
Re: Need to interrupt to check for mouse movement
Paul Rubin wrote: > Huh? It's pretty normal, the gui blocks while waiting for events > from the window system. I expect that Qt and Tk work the same way. Which is why I recommended Twisted for the networking; it integrates with the toolkit event loops so it automagically works: http://twistedmatrix.com/projects/core/documentation/howto/choosing-reactor.html#auto15 I agree, though, that basic socket programming in the same thread as the gui's probably a bad idea. -- http://mail.python.org/mailman/listinfo/python-list
Re: Help with regexp please
Scott David Daniels wrote:
> Felix Collins wrote:
>> I have an "outline number" system like
>> 1
>> 1.2
>> 1.2.3
>> I want to parse an outline number and return the parent.
>
> Seems to me regex is not the way to go:
> def parent(string):
>     return string[: string.rindex('.')]

Absolutely, regex is the wrong solution for this problem. I'd suggest using rsplit, though, since that will Do The Right Thing when a top-level outline number is passed:

def parent(string):
    return string.rsplit('.', 1)[0]

Your solution will throw an exception, which may or may not be the right behaviour.
-- http://mail.python.org/mailman/listinfo/python-list
Re: find a specified dictionary in a list
Odd-R. wrote:
> On 2005-07-22, John Machin <[EMAIL PROTECTED]> wrote:
>> Odd-R. wrote:
>>> I have this list:
>>>
>>> [{'i': 'milk', 'oid': 1}, {'i': 'butter', 'oid': 2}, {'i': 'cake', 'oid': 3}]
>>>
>>> All the dictionaries of this list are of the same form, and all the oids
>>> are distinct. If I have an oid and the list, how is the simplest way of
>>> getting the dictionary that holds this oid?
>>
>> Something like this:
>>
>> def oidfinder(an_oid, the_list):
>>     for d in the_list:
>>         if d['oid'] == an_oid:
>>             return d
>>     return None
>>     # These are not the oids you are looking for.
>
> Thank you for your help, but I was hoping for an even simpler
> solution, as I am supposed to use it in a
-- http://mail.python.org/mailman/listinfo/python-list
Re: Help with regexp please
Terry Hancock wrote: > I think this is the "regexes can't count" problem. When the repetition > count matters, you usually need something else. Usually some > combination of string and list methods will do the trick, as here. Not exactly; regexes are just fine at doing things like "first" and "last." The "regexes can't count" saying applies mostly to activities that reduce to parenthesis matching at arbitrary nesting. The OP's problem could easily be written as a regex substitution, it's just that there's no need to; I believe that the sub would be (completely untested, written without the docs open): re.sub(r'([0-9.]+)\.[0-9]+', r'\1', outline_value) It's just that the string.rsplit call is much more legible, much more intuitive, doesn't do strange things if it's accidentally called on a top-level outline value, and also extends immediately to handle outlines of the form I.1.a.i. -- http://mail.python.org/mailman/listinfo/python-list
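For the record, re.sub takes its arguments in the order (pattern, repl, string); with that order and raw strings for the backslashes, the pattern sketched above does work:

```python
import re

def parent(outline_value):
    # The greedy [0-9.]+ backtracks so group 1 holds everything
    # before the last dot; a top-level value simply doesn't match.
    return re.sub(r'([0-9.]+)\.[0-9]+', r'\1', outline_value)

print(parent('1.2.3'))   # 1.2
print(parent('1.10.4'))  # 1.10
print(parent('1'))       # 1
```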
Re: "Aliasing" an object's __str__ to a different method
ncf wrote: > Well, suffice to say, having the class not inherit from object solved > my problem, as I suspect it may solve yours. ;) Actually, I did a bit of experimenting. If the __str__ reassignment worked as intended, it would just cause an infinite recursion. To paste the class definition again: > class MyClass(object): > > def Edit(self): > return "I, %s, am being edited" % (self) > > def View(self): > return "I, %s, am being viewed" % (self) > > def setEdit(self): > self.__str__ = self.__repr__ = self.Edit > > def setView(self): > self.__str__ = self.__repr__ = self.View Notice the % (self) in Edit and View -- those recursively call str(self), which causes infinite recursion. In the spirit of getting the class working, though, the class-method behavior of __str__ for new-style classes can be fixed with an extremely ugly hack:

class MyClass(object):
    def __init__(self):
        self.__str__ = lambda: object.__str__(self)
    def Edit(self):
        return "I, %s, am being edited"
    def View(self):
        return "I, %s, am being viewed"
    def setEdit(self):
        self.__str__ = self.__repr__ = self.Edit
    def setView(self):
        self.__str__ = self.__repr__ = self.View
    def __str__(self):
        return self.__str__()

(Notice that I've removed the string substitution in Edit and View. This also does not change the __repr__ method; it also acts as a class method. But that's also easy enough to change in the same way.) I also would be interested in knowing why new-style classes treat __str__ as a class method. -- http://mail.python.org/mailman/listinfo/python-list
Re: Location of tk.h
none wrote: > Probably a stupid question, but... > > I was attempting to install the Tkinter 3000 WCK. It blew up trying to > build _tk3draw. The first error is a 'No such file or directory' for > tk.h. I can import and use Tkinter just fine, so I'm not sure what is > what here. You can import and use Tkinter, sure, but the WCK setup is trying to -build- itself. Have you installed the relevant Tk development libraries? -- http://mail.python.org/mailman/listinfo/python-list
Re: return None
Grant Edwards wrote: > Personally, I don't really like the idea that falling off the > botton of a function implicitly returns None. It's just not > explicit enough for me. My preference would be that if the > function didn't execute a "return" statement, then it didn't > return anyting and attempting to use a return value would be an > error. This is a bad idea. Classically, distinguishing between functions that return things and functions that don't return things explicitly divides the "callable object" space. From my CS101 class with its incredibly exciting dive into the world of useless pseudocode, callables that returned things were called 'functions' and callables that didn't were called 'procedures'. Some languages do make this distinction; QBASIC, for example, had 'gosub' separate from function calls. What do you do for an early break from a function that still returns not-even-None [ReallyNone], "return?" That looks and acts like a 'real' return statement, and the distinction between return-without-a-value-so-maybe-except and return-with-a-value is suddenly magnified to real importance. Further, and I consider this a truly damning case, look at decorators. A naive "logging" decorator could be defined like this:

def logger(func):
    def new_func(*args, **kwargs):
        print '%s called with:' % func.__name__, args, kwargs
        retval = func(*args, **kwargs)
        print '%s returns:', retval
        return retval
    return new_func

This logger works without modification for both value and non-value returning functions. Its output isn't quite as pretty for non-value functions, but it works and the implementation is both simple and flexible. With a function-schism, to keep its simple implementation 'logger' would have to be rewritten as 'flogger' (same as current-logger, for use on functions), and 'plogger' (for use on procedures). The downside here is that if the function/method changed to or from a procedure, the decorator would have to be switched. 
Alternatively, the logger decorator could be longer and explicitly catch the possible exception. But why should we have to write like that, for a use-case that doesn't even represent a true error -- arguably not even an exceptional case? Python's definitely not a B&D language, talk of floggers aside. -- http://mail.python.org/mailman/listinfo/python-list
Re: "Aliasing" an object's __str__ to a different method
Paolino wrote: > Little less ugly: > In [12]:class A(object): >: def __str__(self):return self.__str__() >: def str(self):return 'ciao' >: def setStr(self):self.__str__=self.str >: > > In [13]:a=A() > > In [14]:a.setStr() > > In [15]:str(a) > Out[15]:'ciao' Not quite bug-free; by my eye that'll infinitely recur if you call str(A()). -- http://mail.python.org/mailman/listinfo/python-list
Re: return None
Christopher Subich wrote: > print '%s returns:', retval Not that it matters, but this line should be: print '%s returns:' % func.__name__, retval -- http://mail.python.org/mailman/listinfo/python-list
Re: consistency: extending arrays vs. multiplication ?
Soeren Sonnenburg wrote: > On Sat, 2005-07-23 at 23:35 +0200, Marc 'BlackJack' Rintsch wrote: >>Both operate on the lists themselves and not on their contents. Quite >>consistent if you ask me. > But why ?? Why not have them operate on content, like is done on > *arrays ? Because they're lists, not arrays. What do you propose that the following do:

[1,2,3] + [4,5,6]
[1,2] + [3,4,5]
[1,2] + [{3:4,5:6}]
dict_var_1.keys() + dict_var_2.keys()
[g(3) for g in [f1, f2, f3] + [f4, f5, f6]]

I point out that the idiom is <list> + <list>, not <list of numbers> + <list of numbers>. Operations on lists must deal with them as lists, not lists of any specific type. -- http://mail.python.org/mailman/listinfo/python-list
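Where elementwise arithmetic really is wanted, it can be spelled explicitly; a small illustrative sketch of concatenation versus zip-based elementwise addition:

```python
a = [1, 2, 3]
b = [4, 5, 6]

print(a + b)                          # concatenation: [1, 2, 3, 4, 5, 6]
print([x + y for x, y in zip(a, b)])  # elementwise:   [5, 7, 9]
```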
Re: return None
Repton wrote: > 'Well, there's your payment.' said the Hodja. 'Take it and go!' +1: the koan of None "Upon hearing that, the man was enlightened." -- http://mail.python.org/mailman/listinfo/python-list
Re: can list comprehensions replace map?
Andrew Dalke wrote: > Steven Bethard wrote: > >>Here's one possible solution: >> >>py> import itertools as it >>py> def zipfill(*lists): >>... max_len = max(len(lst) for lst in lists) > > > A limitation to this is the need to iterate over the > lists twice, which might not be possible if one of them > is a file iterator. > > Here's a clever, though not (in my opinion) elegant solution
>
> import itertools
>
> def zipfill(*seqs):
>     count = [len(seqs)]
>     def _forever(seq):
>         for item in seq: yield item
>         count[0] -= 1
>         while 1: yield None
>     seqs = [_forever(seq) for seq in seqs]
>     while 1:
>         x = [seq.next() for seq in seqs]
>         if count == [0]:
>             break
>         yield x

I like this solution best (note, it doesn't actually use itertools). My naive solution:

def lzip(*args):
    ilist = [iter(a) for a in args]
    while 1:
        res = []
        count = 0
        for i in ilist:
            try:
                g = i.next()
                count += 1
            except StopIteration:  # End of iter
                g = None
            res.append(g)
        if count > 0:  # At least one iter wasn't finished
            yield tuple(res)
        else:  # All finished
            raise StopIteration

-- http://mail.python.org/mailman/listinfo/python-list
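For comparison: the padding behaviour both hand-rolled versions implement was later added to the standard library as itertools.izip_longest (Python 2.6), spelled zip_longest in Python 3:

```python
from itertools import zip_longest  # izip_longest on Python 2.6/2.7

# Shorter iterables are padded with None (configurable via fillvalue=)
print(list(zip_longest([1, 2, 3], 'ab')))
# [(1, 'a'), (2, 'b'), (3, None)]
```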
Re: A replacement for lambda
Mike Meyer wrote: > My choice for the non-name token is "@". It's already got magic > powers, so we'll give it more rather than introducing another token > with magic powers, as the lesser of two evils. Doesn't work. The crux of your change isn't introducing a meaning to @ (and honestly, I prefer _), it's that you change the 'define block' from a compound_stmt (funcdef) (see www.python.org/doc/current/ref/compound.html) to an expression_stmt (expression). This change would allow some really damn weird things, like:

if def _(x,y):
        return x**2 - y**2
(5,-5):  # ?! How would you immediately call this 'lambda-like'?[1]
    print 'true'
else:
    print 'false'

[1] -- yes, it's generally stupid to, but I'm just pointing out what has to be possible. Additionally, Python's indenting Just Doesn't Work Like That; mandating an indent "after where the def came on the previous line" (as you do in your example, I don't know if you intend for it to hold in your actual syntax) wouldn't parse right -- the tokenizer generates INDENT and DEDENT tokens for whitespace, as I understand it. My personal favourite is to replace "lambda" entirely with an "expression comprehension", using < and > delimiters. It just looks like our existing list and generator comprehensions, and it doesn't use 'lambda' terminology which will confuse any newcomer to Python that has experience in Lisp (at least it did me). For example: g = <x**2 with (x)>; g(1) == 1. Basically, I'd rewrite the Python grammar such that: lambda_form ::= "<" expression "with" parameter_list ">" Biggest change is that parameter_list is no longer optional, so zero-argument expr-comps would be written as <expr with ()>, which makes a bit more sense than <expr>. Since "<" and ">" aren't ambiguous inside the "expression" state, this shouldn't make the grammar ambiguous. The "with" magic word does conflict with PEP-343 (semantically, not syntactically), so "for" might be appropriate if less precise in meaning. -- http://mail.python.org/mailman/listinfo/python-list
Re: A replacement for lambda
Paul Rubin wrote: > Christopher Subich <[EMAIL PROTECTED]> writes: > >>My personal favourite is to replace "lambda" entirely with an >>"expression comprehension", using < and > delimeters. > > > But how does that let you get more than one expression into the > anonymous function? It doesn't. Functionally, it's a direct replacement of lambda as-is. -- http://mail.python.org/mailman/listinfo/python-list
Re: A replacement for lambda
Scott David Daniels wrote: > What kind of shenanigans must a parser go through to translate: > <x**2 with(x)><<x**3 with(x)> > this is the comparison of two functions, but it looks like a left- > shift on a function until the second with is encountered. Then > you need to backtrack to the shift and convert it to a pair of > less-thans before you can successfully translate it. I hadn't thought of that, but after much diving into the Python grammar, the grammar would still work with a greedy tokenizer if "<<" (and also ">>", for identical reasons) were replaced in 'shift_expr' with "<" "<" and ">" ">". That, of course, introduces some weirdness of '''a = 5 < < 3''' being valid. I'm not sure whether that is a wart big enough to justify a special-case rule regarding '>>' and '<<' tokens. We do allow 'def f() :' as-is, so I'm not sure this is too big of a problem. -- http://mail.python.org/mailman/listinfo/python-list
Re: A replacement for lambda
Paolino wrote: > why (x**2 with(x))<(x**3 with(x)) is not taken in consideration? Looks too much like a generator expression for my taste. Also, the <expr with params> syntax could be used with 'for' instead of 'with' if PEP343 poses a problem, whereas (expr for params) is identically a generator expression. > If 'with' must be there (and substitue 'lambda:') then at least the > syntax is clear.IMO Ruby syntax is also clear. I haven't used Ruby myself, but as I understand it that language allows for full anonymous blocks. Python probably doesn't want to do that. -- http://mail.python.org/mailman/listinfo/python-list
Re: A replacement for lambda
Paddy wrote: > Christopher Subich <[EMAIL PROTECTED]> writes: > >>Basically, I'd rewrite the Python grammar such that: >>lambda_form ::= "<" expression "with" parameter_list ">" > > > I do prefer my parameter list to come before the expression. It would > remain consistant with simple function definitions. Stylistic choice; I can appreciate your sentiment, but remember that this isn't exactly a function definition. It's a form of 'delayed expression.' Also, <... with ...> is nearly identical (identical if you replace 'with' with 'for') to existing list and generator comprehensions, so we'd get to stretch that idiom. -- http://mail.python.org/mailman/listinfo/python-list
Re: Thaughts from an (almost) Lurker.
Robert Kern wrote: > My experience with USENET suggests that there is always a steady stream > of newbies, trolls, and otherwise clueless people. In the absence of > real evidence (like traceable headers), I don't think there's a reason > to suspect that there's someone performing psychological experiments on > the denizens of c.l.py. Er... yes! Exactly! These are not the trolls you're looking for, move along. :) -- http://mail.python.org/mailman/listinfo/python-list
Re: Wheel-reinvention with Python
Paul Rubin wrote: > I think my approach is in some sense completely typical: I don't want > to install ANYTHING, EVER. I've described this before. I want to buy > a new computer and have all the software I'll ever need already on the > hard drive, and use it from that day forward. By the time the With all due respect, if you're allergic to installing software then why are you a developer? To me, your view is somewhat akin to that of a woodworker who doesn't want to buy tools, or a painter who doesn't want to buy brushes. Computers can be merely appliances, sure, but that's wasting the general purpose part of computation. Software as separate packaging exists because we (collectively) don't always know what we want the first (or second, or third, or...) time around. And when we do know what we want, we often muck it up when we try it. -- http://mail.python.org/mailman/listinfo/python-list
Re: Standard Threads vs Weightless Threads
yoda wrote: > 1)What is the difference (in terms of performance, scalability,[insert > relevant metric here]) between microthreads and "system" threads? System-level threads are relatively heavyweight. They come with a full call stack, and they take up some level of kernel resources [generally less than a process]. In exchange, they're scheduled by the OS, with the primary benefit (on uniprocessor systems) that if one thread executes a blocking task (like IO writes) another thread will receive CPU attention. The primary disadvantage is that they're scheduled preemptively by the OS. This leads to the concurrency nightmare, where the developer needs to keep track of what blocks of code (and data) need locks to prevent deadlock and race conditions. > > 2)If microthreads really are superior then why aren't they the standard > Python implementation (or at least within the standard library)? (where > my assumption is that they are not the standard implementation and are > not contained within the standard library). Microthreads are very different; they're entirely internal to the Python process, and they're not seen at all by the operating system. Scheduling is done explicitly by the microthread implementation -- multitasking is not preemptive, as with system threads. They're not in the standard library because implementing microthreads has thus far required a very large rewrite of the CPython architecture -- see Stackless Python. -- http://mail.python.org/mailman/listinfo/python-list
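The cooperative, explicitly-scheduled flavour described above can be sketched with plain generators; this toy round-robin scheduler only illustrates the idea and is not how Stackless actually works internally:

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generators; each yield is a voluntary context switch."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()
        try:
            log.append(next(task))
            queue.append(task)  # still alive: reschedule at the back
        except StopIteration:
            pass                # finished: drop it
    return log

def worker(name, steps):
    for i in range(steps):
        yield '%s:%d' % (name, i)

print(scheduler([worker('a', 2), worker('b', 1)]))
# ['a:0', 'b:0', 'a:1']
```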
Re: 2-player game, client and server at localhost
Michael Rybak wrote: > That's the problem - "or a player input comes in". As I've explained, > this happens a dozen of times per second :(. I've even tried not > checking for player's input after every frame, but do it 3 times more > rare (if framecount % 3 == 0 : process_players_input()). Well, I've > already got it that I shouldn't tie this around framerate, but > nevertheless... There's the key. How are you processing network input, specifically retrieving it from the socket? -- http://mail.python.org/mailman/listinfo/python-list
Re: 2-player game, client and server at localhost
Michael Rybak wrote: > CS> There's the key. How are you processing network input, specifically > CS> retrieving it from the socket? > > A "sock" class has a socket with 0.1 timeout, and every time I > want anything, I call it's read_command() method until it returns > anything. read_command() and send_command() transfer user's actions in > special format so that it takes 10 bytes per transfer. So when you want player input, you explicitly halt the program until you receive it, inside a while loop? No wonder your programs aren't behaving well, then. Network processing Is Not Slow. Unless you're sending near the maximum capacity of your line, which you're obviously not[1], the slowness is architectural. [1] - A TCP/IP packet carries roughly 40 bytes of header overhead, so combine that with 10 bytes of data and you're looking at about 50 bytes/packet. A 33.6 modem can handle 4.2kB/sec; cut that in half for a safety margin, leaving 2.1kB/sec. That will handle about 40 updates/second, which you shouldn't be reaching if you're "just sending updates when a player does something." Why are you messing with sockets directly, to begin with? It looks like you want an asynchronous socket interface, so that you don't explicitly loop and wait for data from the network for updates. In addition to making Julienne fries, Twisted is an excellent framework for asynchronous network IO. For a naive, non-threaded implementation, you'd schedule your update code as a timed event, and you'd define a Protocol for handling your network stuff. When you receive data, the protocol would update your application's state, and that would be picked up automagically the next time your update event ran. In a threaded implementation, you'd run your update code in a thread (deferToThread), and your network code would post updates to a synchronous queue, read by your update code. -- http://mail.python.org/mailman/listinfo/python-list
Re: 2-player game, client and server at localhost
Michael Rybak wrote: > As stated above, that's how I'm trying it right now. Still, if doing > it turn-base, I would have to create a new thread every time. >I have some other questions though - please see below. No, you should never need to create a new thread upon receiving input. What you want is inter-thread communication, a synchronous queue. A synchronous queue is a thread-safe queue. You'd push event updates to it from the communication thread, and in the update thread, WHICH IS ALWAYS RUNNING, you'd check the queue each loop to see if there was anything new. > Now, few questions. Do I need to time.sleep(0.xxx) in any of these > while True: loops, not to overwhelm CPU? I can measure the time at > beginning and end of each iteration to make things happen fixed number > of times per second, but should I? And another: do I get it right that > instead of "lock global" you mean: > while global.locked: > time.sleep(0.001) > lock global > And I also wonder how do I make sure that 2 threads don't pass this > "while" loop simultaneously and both try locking global. There is a > probability, not? You have the right idea, that locking's important, but when the grandparent poster said "lock global," he meant "lock global." Locks are low-level primitives in any threading system; they are also called mutexes. Attempting to acquire a lock returns immediately if the lock can be acquired; if it can't (and it's set to block, which is the default) the thread will wait until it -can- acquire the lock -- the entire thrust of your 'time.sleep' loop, only good. See thread.allocate_lock and threading.Lock for Python's built-in locks. > In my yesterday experiment, I have a separate thread for each of 2 > clients, and what I do there is: > > def thr_send_status(player_sock): > while 1: > t, sub_addr = player_sock.recvfrom(128) #player ready to accept > player_sock.sendto(encode_status(g.get_status()), sub_addr) > > I'm reading 1 byte from client every time before sending new update to > him. 
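A minimal sketch of that queue arrangement (all names here are invented for illustration): the communication thread only puts events, and the always-running update loop drains whatever has arrived without blocking:

```python
import queue      # named Queue on Python 2
import threading

events = queue.Queue()

def network_thread():
    # Stand-in for the socket-reading loop: push decoded player events
    for msg in ['move:1,2', 'move:3,4']:
        events.put(msg)

t = threading.Thread(target=network_thread)
t.start()
t.join()  # joined here only to make the demo deterministic

# One iteration of the update loop: drain everything currently queued
pending = []
while True:
    try:
        pending.append(events.get_nowait())
    except queue.Empty:
        break
print(pending)  # ['move:1,2', 'move:3,4']
```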
OK, Ok, I know that's not good, ok. Now, I like your idea much > more, where you say we first measure the processing speed of each > client, and send data to every client as often as he can process it: Just how much data are you sending in each second? Testing client speed and managing updates that way is relatively advanced, and I'd argue that it's only necessary when your data has the potential to swamp a network connection. > While thinking about this, I've decided to go the wrong way, and to > wait for confirmation from client before sending next pack. No, you definitely don't need to do this. TCP is a reliable protocol, and so long as the connection stays up your client will receive data in-order with guaranteed arrival. If you were using UDP, then yes you'd need (possibly) to send confirmation, but you'd probably need a more advanced version to handle missing / out-of-order packets. > I'm also almost sure it's wrong to have separate sockets and threads > for each player, you say I should select()/dispatch instead, but I'm > afraid of that some *thing* being wrong with select() for Windows. > Somehow, I'm doing a lot of thins the wrong way :( Just use the Twisted library. It abstracts that away, and not touching sockets is really much nicer. > Before describing another problem I've encountered, I thought I'd > remind you of what my test game is: each player controls it's ball by > moving mouse pointer, towards which his ball starts moving; that's it. What's the nature of your event update? Do you say "mouse moved N" or "mouse moved to (123,456)?" If it's the latter, then without motion prediction there's no way that either simulation should have the ball overshoot the mouse. Synchronization, however, will still be an issue. -- http://mail.python.org/mailman/listinfo/python-list
Re: HELP:sorting list of outline numbers
Felix Collins wrote: > Using Decorate, Sort , Undecorate... > > works like a charm. As a one-liner, you can also deconstruct and rebuild the outline numbers (converting each component to an int, so that for example 1.10 sorts after 1.9 rather than before 1.2): new_outline = ['.'.join(str(n) for n in v) for v in sorted([int(x) for x in k.split('.')] for k in old_outline)] -- http://mail.python.org/mailman/listinfo/python-list
Re: cut & paste text between tkinter widgets
William Gill wrote: > Is there a simple way to cut and paste from a tkinter text widget to an > entry widget? I know I could create a mouse button event that triggers > a popup (message widget) prompting for cut/paste in each of the widgets > using a temp variable to hold the text, but I don't wnat to reinvent the > wheel if there already is something that does the job. 1) Tkinter text and entry widgets should already have proper event bindings for cut/copy/paste. Test first with your system-default keyboard shortcuts (^C, ^X, ^V on Windows). I haven't tried it myself, but I think those events bind to '<<Cut>>', '<<Copy>>', and '<<Paste>>', so generating them should Do The Right Thing with selected text. 2) If you need to do any processing on the clipboard data, look at widget.selection_get [so named because of the way that X handles its clipboard] -- http://mail.python.org/mailman/listinfo/python-list