Re: Speed ain't bad
Anders J. Munch wrote:

    Another way is the strategy of "it's easier to ask forgiveness than to
    ask permission".  If you replace:

        if not os.path.isdir(zfdir):
            os.makedirs(zfdir)

    with:

        try:
            os.makedirs(zfdir)
        except EnvironmentError:
            pass

    then not only will your script become a micron more robust, but, assuming
    zfdir typically does not exist, you will have saved the call to
    os.path.isdir.

... at the cost of an exception frame setup and an incomplete call to os.makedirs(). It's an open question whether the exception setup and recovery take less time than the call to isdir(), though I'd expect probably not. The exception route definitely makes more sense if the makedirs() call is likely to succeed; if it's likely to fail, then things are murkier.

Since isdir() *is* a disk i/o operation, the exception route is probably preferable anyhow. In either case, one must touch the disk; in the exception case, there will only ever be one disk access (which either succeeds or fails), while in the other case, there may be two disk accesses. If it weren't for the extra disk i/o operation, though, the 'if ...' version might be slightly faster, even though the exception-based route is more Pythonic.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
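To put the two idioms side by side, here is a minimal sketch (the function names are mine, not from the original post):

    import os

    def ensure_dir_lbyl(path):
        # Look Before You Leap: potentially two disk accesses,
        # one for isdir() and one for makedirs()
        if not os.path.isdir(path):
            os.makedirs(path)

    def ensure_dir_eafp(path):
        # Easier to Ask Forgiveness than Permission: a single disk
        # access, which either succeeds or raises
        try:
            os.makedirs(path)
        except EnvironmentError:
            # as in the original, this swallows *any* OS-level failure,
            # not just "directory already exists"
            pass

A more careful version of the EAFP variant would inspect the exception's errno rather than silently swallowing every possible failure.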
Re: what is lambda used for in real code?
Steven Bethard wrote:

    The only ones that make me a little nervous are examples like:

    inspect.py:
        def formatargspec(args, varargs=None, varkw=None, ...
                          formatvarargs=lambda name: '*' + name,
                          formatvarkw=lambda name: '**' + name,
                          formatvalue=lambda value: '=' + repr(value),
                          ...

    where the lambdas are declaring functions as keyword arguments in a def.

At least in this case, a number of these can be handled with curry / partial(), I think --

    ...
    formatvarargs = partial(operator.add, '*'),
    formatvarkw = partial(operator.add, '**'),
    ...

The last is a bit more complicated, since it's got an extra (deferred) function call, so I'm not sure exactly how to deal with that cleanly.

Actually, in this specific case, since these are all creating strings, it'd be pretty trivial to simply do this manipulation inside of the function body rather than inside of the arglist:

    def formatargspec(..., formatvarargs, formatvarkw, formatvalue, ...):
        formatvarargs = '*' + formatvarargs
        formatvarkw = '**' + formatvarkw
        formatvalue = '=' + repr(formatvalue)

This has the disadvantage of having names typed multiple times, which is definitely a minus, but it's arguably a bit more clear to explicitly manipulate the strings within the function body rather than burying that manipulation somewhere in the argument list. Personally I'd call this a wash, though I expect that others will disagree with me. ;) And whatever the merits of this particular case, similar cases may not be so easy to avoid in this fashion...

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
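To make the partial() suggestion concrete, here's a stripped-down sketch of my own (it assumes functools.partial, which landed in Python 2.5, or an equivalent curry recipe; format_arg_demo is an invented stand-in, not inspect's real signature):

    from functools import partial   # assumes Python 2.5's functools.partial
    import operator

    def _format_value(value):
        # the "extra (deferred) function call" case: '=' + repr(value)
        return '=' + repr(value)

    def format_arg_demo(name,
                        formatvarargs=partial(operator.add, '*'),
                        formatvarkw=partial(operator.add, '**'),
                        formatvalue=_format_value):
        # a stripped-down stand-in for formatargspec's callable keyword defaults
        return formatvarargs(name), formatvarkw(name), formatvalue(name)

    print format_arg_demo('args')   # ('*args', '**args', "='args'")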
Re: Help clean up clumsy code
Scott David Daniels wrote:

    Nick Coghlan wrote:

        A custom generator will do nicely:

        Py> def flatten(seq):
        ...     for x in seq:
        ...         if hasattr(x, "__iter__"):
        ...             for y in flatten(x):
        ...                 yield y
        ...         else:
        ...             yield x

    Avoiding LBYL gives you:

        def flatten(seq):
            for x in seq:
                try:
                    for y in flatten(x):
                        yield y
                except TypeError:
                    yield x

If I'm not mistaken, this will result in infinite recursion on strings. 'for x in aString' will iterate over the characters in the string, even if the string is only a single character, so "for y in flatten('a'):" will not give a TypeError. You'd need to add special-case tests to watch for this condition (while being careful not to be too special-case, so that unicode objects still pass). Nick's version works on strings (and unicode objects) because they lack an __iter__() method, even though they follow the (older) sequence protocol.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
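One possible way to keep the EAFP flavour while avoiding the string recursion -- a sketch of my own, not from either quoted post -- is to treat Python 2's basestring as atomic:

    def flatten(seq):
        for x in seq:
            # strings iterate over themselves forever, so treat them
            # (and unicode objects) as atomic values
            if isinstance(x, basestring):
                yield x
                continue
            try:
                for y in flatten(x):
                    yield y
            except TypeError:
                yield x

    print list(flatten([1, [2, ['abc', 3]], 'de']))
    # -> [1, 2, 'abc', 3, 'de']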
Re: Lambda as declarative idiom
Robert Brewer wrote:

Michael Spencer wrote: I believe that this "possibility to postpone" divides into two related but separate concepts: controlling the moment of evaluation, and assembling the arguments required at that moment. They are both species of 'eval', but managing arguments is more specialized, because it includes possibly renaming parameters, assigning default values, processing positional and keyword arguments, and perhaps, in the future, dealing with argument types.

Yes, but the "moment of evaluation" is more complex than just "postponing". In a declarative construct, you probably also want global variables to be bound early, so that the expression does not depend upon *any* free variables. Ditto for closures. A more realistic example:

    term = input("Enter the amount to add")
    e = expr(x): x + term

    ...MUCH code passes, maybe even a new process or thread...

    d = a + e(3)

I see this as simply a combination of both of the aforementioned concepts -- argument control plus moment-of-evaluation control.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 2.4 on Windows XP
DavidHolt wrote: I have a problem that I see on two different machines, one running XP SP1 and one XP SP 2. On both I installed Python 2.4. I can't seem to start IDLE. When I try to start it, I get an hourglass cursor for a short time then nothing more happens. This happens whether I click the IDLE shortcut or click the pythonw.exe directly, or attempt to launch pythonw from a command line. Maybe I'm misinterpreting you, here, but pythonw.exe is *not* IDLE. It is, instead, a console-less version of the Python interpreter, which can run the Python scripts for IDLE (among other things). My version of Python is older, but in %pythondir%/Tools/idle, there is an idle.pyw file. Try running that. If it doesn't work, then copy & paste any error messages (you'll probably need to run it from a command line for this) to your next post here so that we can try to troubleshoot a bit more effectively. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Python 2.4 on Windows XP
It's me wrote: In my case, there is *no* error message of any kind. When I run pythonw.exe from the python23 directory, the screen blinked slightly and goes back to the command prompt. Right -- pythonw.exe is a console-less interpreter. Having no console, it doesn't have an interactive mode, and since you didn't give it a script to run, it simply started, found nothing to do, and then terminated itself. You need to run idle.pyw, *not* pythonw.exe. The idle.pyw script runs inside the pythonw.exe interpreter, but the latter can't do anything without instructions. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: What could 'f(this:that=other):' mean?
Jonathan Fine wrote:

Guido has suggested adding optional static typing to Python. (I hope suggested is the correct word.) http://www.artima.com/weblogs/viewpost.jsp?thread=85551

An example of the syntax he proposes is:

> def f(this:that=other):
>     print this

This means that f() has a 'this' parameter, of type 'that'. And 'other' is the default value.

Hm; so for a slightly more concrete example, one might have

    def fib_sequence(length:int=9): ...

I'm going to suggest a different use for a similar syntax.

In XML, a similar syntax (prefix:name) is used for name spaces. Name spaces allow independent attributes to be applied to an element. For example, 'fo' attributes for fonts and layout. XSLT is of course a big user of namespaces in XML. Namespaces seem to be a key idea in allowing independent applications to apply attributes to the same element.
[...]
Here's an example of how it might work. With f as above:

> f(this:that='value')
{'that': 'value'}

I fail to see how this is a significant advantage over simply using **kwargs. It allows you to have multiple dictionaries instead of just one, that's all. And as you point out, it's trivial to construct your own nested dicts.

Besides, Python already uses the concept of namespaces by mapping them to object attributes. Module references are a namespace, exposed via the attribute-lookup mechanism. This (IMO) fails the "there should be one (and preferably only one) obvious way to do things" test. The functionality already exists, so having yet another way to spell it will only result in more confusion. (The fact that we're borrowing the spelling from XML does little to mollify that confusion.)

3. Granted (2), perhaps function calls are first in the queue for syntactic sugar.

Huh? How much simpler of syntax do you want for calling a function? I'm not sure what you'd want as "sugar" instead of funcname().

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: The Industry choice
Steve Holden wrote: Bulba! wrote: I was utterly shocked. Having grown up in Soviet times I have been used to seeing precious resources wasted by organizations as if resources were growing on trees, but smth like this?! In a shining ideal country of Germany?! Unthinkable. Indeed not. Quite often the brown paper bag is a factor in purchases like this. I wouldn't be at all surprised if somebody with a major input to the decision-making process retired to a nice place in the country shortly afterwards. You appear to be making the mistake of believing that people will act in the larger interest, when sadly most individuals tend to put their own interests first (some would go as far as to define self-interest as the determinant of behavior). Indeed, it is almost expected that those in charge of any large organization (whether government, corporation, trade union, industry association, fan club, or whatever else) are likely to act in their personal interests at the expense of the organization's interests. This is why things like public-disclosure laws and oversight committees exist. As they say, power corrupts. (Of course, this is not at all limited to people in charge; it's just most notable there, since those people can direct the efforts of the rest of the organization for their personal gain, whereas a rank-and-file member can typically only direct their own efforts.) It's also noteworthy to consider that many times, waste happens not because of corruption or self-interest, but simply because of errors of judgement. Humans being as we are, it's inevitable that over time, some "obvious" important details will escape our attention, and the resulting imperfect information will result in poor decisions. This is a simple fact of human nature, and (ob-Python ;) ) it's one of the reasons that Python is designed as it is -- it makes a serious effort to reduce the number of details that might escape detection. (One should also consider that many business failures are a case of simply having played the odds and lost. Many ventures depend on outside events playing in a certain way; when by chance those events happen, the decision-makers are called "bold and insightful", but if things don't work out, they're called foolish or misguided. Often, though, it was not foolishness but shrewd risk-taking -- if you take a one-in-three chance of making a tenfold return on investment, then 66% of the time you'll lose but if you hit those odds just once, you'll come out way ahead.) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: What could 'f(this:that=other):' mean?
Jonathan Fine wrote:

Jeff Shannon wrote:

Jonathan Fine wrote:

Guido has suggested adding optional static typing to Python. (I hope suggested is the correct word.) http://www.artima.com/weblogs/viewpost.jsp?thread=85551

An example of the syntax he proposes is:

> def f(this:that=other):
>     print this

I'm going to suggest a different use for a similar syntax.

In XML, a similar syntax (prefix:name) is used for name spaces. Name spaces allow independent attributes to be applied to an element. For example, 'fo' attributes for fonts and layout. XSLT is of course a big user of namespaces in XML. Namespaces seem to be a key idea in allowing independent applications to apply attributes to the same element.
[...]
Here's an example of how it might work. With f as above:

> f(this:that='value')
{'that': 'value'}

I fail to see how this is a significant advantage over simply using **kwargs. It allows you to have multiple dictionaries instead of just one, that's all. And as you point out, it's trivial to construct your own nested dicts.

This argument could be applied to **kwargs (and to *args). In other words, **kwargs can be avoided using a trivial construction. The use of *args and **kwargs allows functions to take a variable number of arguments. The addition of ***nsargs does not add significantly.

Note that *args and **kwargs should always be used together, because to do otherwise would require the function caller to know details of the function's implementation (i.e. which arguments are expected to be positional and which must be keywords). Since we *want* the caller to not need to know this, ***nsargs would always need to be used together with *args and **kwargs. (A function defined 'def f(***nsargs): ...' could not be called with 'f(1)'.) This means that all you're gaining is an extra bag to put variable numbers of arguments in. The value here is that it maintains a parallel with *args and **kwargs when one allows 'namespaced' arguments -- if one allows that, then ***nsargs is required for consistency's sake, but it does not simplify anything by itself.

So really, we need to look at what gains we'd get from having 'namespaced' arguments. What's the real advantage here? When using 'namespace' arguments instead of standard keyword arguments, the function body would be given a dictionary instead of a set of local variables, right? 'def f1(arg1, arg2, arg3, arg4)' creates four names in locals(), where 'def f2(ns1:arg1, ns1:arg2, ns1:arg3, ns1:arg4)' creates a single dict named ns1 in locals(), which contains four items (keyed on 'arg1', 'arg2', etc.), and 'def f3(ns1:arg1, ns1:arg2, ns2:arg3, ns2:arg4)' creates two dicts (ns1 and ns2) with two entries each.

Okay, now let's take a look at how these functions will be used.

    f1(1, 2, 3, 4)
    f1(arg1=1, arg2=2, arg3=3, arg4=4)

Note that f1 doesn't care which of these methods of calling is employed -- both result in the same situation inside of f1(). So what's the intended way of calling f2()? I'd presume that it shouldn't care whether keywords or namespaces are specified, so that the following should all be equivalent:

    f2(1, 2, 3, 4)
    f2(1, 2, arg3=3, arg4=4)
    f2(1, 2, arg3=3, ns1:arg4=4)

Note that this doesn't *add* any utility. The function caller hasn't gained anything. Since arg4 is unambiguous regardless of whether it's referred to as arg4 or ns1:arg4, the only time that the caller has any reason to specify the namespace is if argnames within different namespaces clash -- that is, if we allow something like 'def f4(ns1:arg1, ns1:arg2, ns2:arg1, ns2:arg2)'.
Now, though, we've lost the ability to specify *only* the argname and not the namespace as well -- that is, you *cannot* call f4 with keywords but not namespaces. From the caller's vantage point, this means that they need to know the full namespace spec of the function, which makes it no different from simply using longer (but unique) keyword names.

So, we can see that allowing namespaces and ***nsargs doesn't add any utility from the caller's perspective. How much does the callee gain from it? Well, the following functions would be equivalent:

    def g1(arg1, arg2, arg3, arg4):
        ns1 = {'arg1':arg1, 'arg2':arg2, 'arg3':arg3, 'arg4':arg4}
        return ns1

    def g2(ns1:arg1, ns1:arg2, ns1:arg3, ns1:arg4):
        return ns1

You might say "Wow, look at all that repetitive typing I'm saving!" But that *only* applies if you need to stuff all of your arguments into dictionaries. I suspect that this is a rather small subset of functions. In most cases, it will be more convenient to use your arguments as local variables than as entries in a dictionary.
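For comparison, here's a rough sketch of my own showing how plain **kwargs can already give the "grouped arguments" effect, using an invented underscore-prefix convention; this is part of why ***nsargs doesn't seem to buy much:

    def group_by_prefix(**kwargs):
        # split keywords like ns1_arg1=... into per-'namespace' dicts;
        # the ns1_/ns2_ prefixes are purely a convention of this example
        namespaces = {}
        for name, value in kwargs.items():
            prefix, rest = name.split('_', 1)
            namespaces.setdefault(prefix, {})[rest] = value
        return namespaces

    def f4(**kwargs):
        groups = group_by_prefix(**kwargs)
        return groups['ns1'], groups['ns2']

    print f4(ns1_arg1=1, ns1_arg2=2, ns2_arg1=3, ns2_arg2=4)
    # -> two dicts, e.g. ({'arg1': 1, 'arg2': 2}, {'arg1': 3, 'arg2': 4})

The prefixes are pure convention, which is really the point: the existing machinery already covers this.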
Re: sorting on keys in a list of dicts
Jp Calderone wrote:

    L2 = [(d[key], i, d) for (i, d) in enumerate(L)]
    L2.sort()
    L = [d for (v, i, d) in L2]

Out of curiosity, any reason that you're including the index? I'd have expected to just do

    L2 = [(d[key], d) for d in L]
    L2.sort()
    L = [d for (v, d) in L2]

I suppose that your version has the virtue that, if the sortkey value is equal, items retain the order that they were in the original list, whereas my version will sort them into an essentially arbitrary order. Is there anything else that I'm missing here?

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
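As an aside, Python 2.4's key= argument makes the decorate-sort-undecorate dance unnecessary altogether, and list.sort() is stable; a minimal sketch:

    import operator

    L = [{'name': 'b', 'rank': 2}, {'name': 'a', 'rank': 1}]

    # key= does the decoration for you, and the sort is guaranteed stable,
    # so dicts with equal keys keep their original relative order
    L.sort(key=operator.itemgetter('rank'))

operator.itemgetter() (also new in 2.4) saves writing a lambda for the key function.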
Re: The Industry choice
Bulba! wrote:

On Thu, 06 Jan 2005 08:39:11 GMT, Roel Schroeven <[EMAIL PROTECTED]> wrote:

That's generally the goal of the Free Software Foundation: they think all users should have the freedom to modify and/or distribute your code.

You have the freedom of having to wash my car then. ;-)

A more accurate analogy would be, "You're free to borrow my car, but if you do, you must wash it and refill the gas tank before you return it."

Note that the so-called 'viral' nature of GPL code only applies to *modifications you make* to the GPL software. The *only* way in which your code can be 'infected' by the GPL is if you copy GPL source. Given the standard usage of closed-source software, you never even have access to the source. If you use GPL software in the same way that you use closed-source software, then the GPL cannot 'infect' anything you do.

The 'infective' nature of the GPL *only* comes into play when you make use of the *extra* privileges that open source grants. So yes, those extra privileges come with a price (which is that you share what you've done); but if you don't want to pay that price, you have the choice of not using those privileges. This does not, in any way, prevent you from using GPL'ed software as a user. (Problems may come if someone licenses a library under the GPL; that's what the LGPL was invented for. But the issue here is not that the GPL is bad, it's that the author used the wrong form of it.)

Personally, I'm not a big fan of the GPL. I'm much more likely to use BSD-ish licenses than [L]GPL. But it still bugs me to see the GPL misrepresented as some plot to steal the effort of hardworking programmers -- it is, instead, an attempt to *encourage* hardworking programmers to share in a public commons, by ensuring that what's donated to the commons remains in the commons.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: Securing a future for anonymous functions in Python
Alan Gauld wrote:

Can I ask what the objection to lambda is? 1) Is it the syntax? 2) Is it the limitation to a single expression? 3) Is it the word itself? I can sympathise with 1 and 2 but the 3rd seems strange, since lambda is a well defined name for an anonymous function used in several programming languages and originating in the lambda calculus in math. Lambda therefore seems like a perfectly good name to choose.

I think that the real objection is a little bit of 1), and something that's kinda close to 2), but has nothing to do with 3). The issue isn't that lambdas are bad because they're limited to a single expression. The issue is that they're an awkward special case of a function, which was added to the language to mollify functional-programming advocates but which GvR never felt really "fit" into Python. Other, more Pythonic functional-programming features have since been added (like list comprehensions and iterators).

It seems to me that in other, less-dynamic languages, lambdas are significantly different from functions in that lambdas can be created at runtime. In Python, *all* functions are created at runtime, and new ones can be defined at any point in execution, so lambdas don't get that advantage. Thus, their advantages are limited to the fact that they're anonymous (but names are treated differently in Python than in most other languages, so this is of marginal utility), and that they can be created inline. This last bit makes them suitable for creating quick closures (wrapping a function and tweaking its parameters/return values) and for creating delayed-execution objects (e.g. callbacks), so there's a lot of pressure to keep them, but they're still a special case, and "special cases aren't special enough to break the rules".

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
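A tiny sketch of the "quick closure / delayed execution" use I mean (the names are invented; no particular GUI toolkit is implied):

    def save_document(filename):
        print 'saving', filename

    # a lambda as a quick inline closure, fixing a parameter in advance...
    on_save_clicked = lambda: save_document('draft.txt')

    # ...is just shorthand for the named equivalent:
    def on_save_clicked_named():
        save_document('draft.txt')

    # either one can be handed to anything expecting a zero-argument callback
    for callback in (on_save_clicked, on_save_clicked_named):
        callback()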
Re: The Industry choice
Bulba! wrote: And note that it was definitely not in his personal interest, whoever that was, a person or group of persons, as he/they risked getting fired for that. This doesn't necessarily follow. The decision-maker in question may have received a fat bonus for having found such a technically excellent manufacturing process, and then moved into a different position (or left the corporation altogether) before construction was complete and the power-cost issue was noticed. That person may even have *known* about the power-cost issue, and forged ahead anyhow due to the likelihood of such a personal bonus, with the intention of no longer being in a bag-holding position once the problem became general knowledge. Of course, this discussion highlights the biggest problem with economics, or with any of the other "social sciences" -- there's simply too many open variables to consider. One can't control for all of them in experiments (what few experiments are practical in social sciences, anyhow), and they make any anecdotal evidence hazy enough to be suspect. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Securing a future for anonymous functions in Python
Paul Rubin wrote: Jeff Shannon <[EMAIL PROTECTED]> writes: It seems to me that in other, less-dynamic languages, lambdas are significantly different from functions in that lambdas can be created at runtime. What languages are those, where you can create anonymous functions at runtime, but not named functions?! That notion is very surprising to me. Hm, I should have been more clear that I'm inferring this from things that others have said about lambdas in other languages; I'm sadly rather language-deficient (especially as regards *worthwhile* languages) myself. This particular impression was formed from a recent-ish thread about lambdas http://groups-beta.google.com/group/comp.lang.python/messages/1719ff05118c4a71,7323f2271e54e62f,a77677a3b8ff554d,844e49bea4c53c0e,c126222f109b4a2d,b1c9627390ee2506,0b40192c36da8117,e3b7401c3cc07939,6eaa8c242ab01870,cfeff300631bd9f2?thread_id=3afee62f7ed7094b&mode=thread (line-wrap's gonna mangle that, but it's all one line...) Looking back, I see that I've mis-stated what I'd originally concluded, and that my original conclusion was a bit questionable to begin with. In the referenced thread, it was the O.P.'s assertion that lambdas made higher-order and dynamic functions possible. From this, I inferred (possibly incorrectly) a different relationship between functions and lambdas in other (static) languages than exists in Python. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: The Industry choice
Paul Rubin wrote:

Jeff Shannon <[EMAIL PROTECTED]> writes: Note that the so-called 'viral' nature of GPL code only applies to *modifications you make* to the GPL software.

Well, only under an unusually broad notion of "modification".

True enough. It can be difficult, in software development, to define a distinction between a situation where two software products are distinct but cooperative, and a situation where one software product is derivative of another. Stallman has chosen a particular definition for use in the GPL; one may debate the value of using this definition over any other possible definition, but the line had to be drawn *somewhere*. (And given Stallman's philosophies, it shouldn't be too surprising that he's drawn it about as broadly as he reasonably could.)

(Problems may come if someone licenses a library under the GPL; that's what the LGPL was invented for. But the issue here is not that the GPL is bad, it's that the author used the wrong form of it.)

The "problem" is not a problem except that in the case of some libraries, simply being able to use a library module is often not enough incentive to GPL a large application if the library module's functionality is available some other way (including by reimplementation). If the library does something really unique and difficult, there's more reason to GPL it instead of LGPL'ing it.

To my mind, the intent of the GPL is "use it, but if you change it or make a derivative, share the changes". With libraries, though, you *can't* use it without hitting the FSF-specified definition of a derivative. The LGPL exists to make it clear that, for libraries, the common meanings of "using" and "changing" are different than they are for applications.

Of course, there's nothing that stops people from insisting that, if you *use* their libraries, anything you use them for must be free-as-in-speech (which is the effect of using the GPL instead of the LGPL); it's the author's choice what restrictions should be put on the software. But the usage restrictions on a library under the GPL are more severe than they are on an application under the GPL. The unfortunate thing, in my opinion, is that a fair number of library authors don't think about that when they GPL their code.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: The Industry choice
Bulba! wrote: On 6 Jan 2005 19:01:46 -0500, [EMAIL PROTECTED] (Aahz) wrote: Note that the so-called 'viral' nature of GPL code only applies to *modifications you make* to the GPL software. The *only* way in which your code can be 'infected' by the GPL is if you copy GPL source. That's not true -- consider linking to a GPL library. Will someone please explain to me in simple terms what's the difference between linking to LGPLed library and linking to GPLed library - obviously in terms of consequences of what happens to _your_ source code? Because if there isn't any, why bother with distinguishing between the two? Releasing a product in which your code is linked together with GPL'ed code requires that your code also be GPL'ed. The GPL goes to some lengths to define what exactly "linked together" means. Releasing a product in which your code is linked together with LGPL'ed code does *not* require that your code also be (L)GPL'ed. Changes to the core library must still be released under (L)GPL, but application code which merely *uses* the library does not. (I've forgotten, now, exactly how LGPL defines this distinction...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: The Industry choice
Alex Martelli wrote: Jeff Shannon <[EMAIL PROTECTED]> wrote: Note that the so-called 'viral' nature of GPL code only applies to *modifications you make* to the GPL software. The *only* way in which your code can be 'infected' by the GPL is if you copy GPL source. ... (Problems may come if someone licenses a library under the GPL; that's what the LGPL was invented for. But the issue here is not that the GPL is bad, it's that the author used the wrong form of it.) Stallman now says that you should use GPL, not Lesser GPL. http://www.gnu.org/licenses/why-not-lgpl.html Specifically, he wants library authors to use GPL to impose the viral nature of GPL on other programs just USING the library -- the very opposite of what you say about "only applies ... if you copy"! Ah, I haven't kept up on Stallman's current opinions, and was speaking from the understanding I had of GPL/LGPL as of a number of years ago (before that article was written). By "copy", above, I meant "use GPL source in your product". The GPL defines what it means to use source in a rather inclusive way. That inclusiveness means that the standard usage of libraries falls under their definition of "using source". This distinction in the normal terms of "usage" is what impelled the FSF to create the LGPL in the first place... So, I think what I said still (mostly) stands, as long as you look at it in terms of whether object code is copied into your executable. ;) It's still true that one can use (in a consumer sense) GPL software for whatever purpose one wishes, and the restrictions only kick in when one includes GPL code in another product. Indeed, I should have used the word "include" rather than "copy"... (It's hardly surprising that Stallman wants to use whatever leverage he can get to encourage FSF-style free software...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Calling Function Without Parentheses!
Kamilche wrote:

Yeah, but still. If they even had the most basic check, like 'an object is being referred to on this line, but you're not doing anything with it' would be handy in catching that. When you use an object like that, usually you're doing something with it, like assigning it to a variable.

In many cases, however, it's not possible to distinguish this.

    def get_pi():
        import math
        return math.pi

    print my_func(get_pi)

Now, am I trying to pass the function object get_pi into my_func(), or do I want to call get_pi() and pass the return value? There are also many times when it's sensible to do nothing with an object reference -- i.e. ignoring the return value of a function which you're calling for its side-effects.

It seems to me that it's reasonable for the Python interpreter to *not* attempt to guess at whether a questionable usage is an error or not. Better to have that done by a developer tool (pychecker) than through runtime checks every time the program is used.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: sorting on keys in a list of dicts
Nick Coghlan wrote: Jeff Shannon wrote: I suppose that your version has the virtue that, if the sortkey value is equal, items retain the order that they were in the original list, whereas my version will sort them into an essentially arbitrary order. Is there anything else that I'm missing here? Stability in sorting is a property not to be sneezed at [...] Agreed. I'd started typing before I realized that it'd provide a stable sort, which pretty much answered my own question, but decided to send it anyhow in case I'd missed anything else... :) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Securing a future for anonymous functions in Python
Paul Rubin wrote: Richard Feynman told a story about being on a review committee for some grade-school science textbooks. One of these book said something about "counting numbers" and it took him a while to figure out that this was a new term for what he'd been used to calling "integers". With all due respect to Richard Feynman, I'd have thought that "counting numbers" would be non-negative integers, rather than the full set of integers... which, I suppose, just goes to show how perilous it can be to make up new, "more natural" terms for things. ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Securing a future for anonymous functions in Python
Jacek Generowicz wrote: "Anna" <[EMAIL PROTECTED]> writes: But first, wouldn't something like: [x+1 for x in seq] be even clearer? I'm glad you mentioned that. [...] As to whether it is clearer. That depends. I would venture to suggest that, given a pool of laboratory rats with no previous exposure to Python, more of them would understand the map-lambda than the list comprehension. I would venture to suggest the exact opposite, that the syntax of a list comprehension is in itself suggestive of its meaning, while 'map' and 'lambda' are opaque until you've researched them. The verb 'to map', in this mathematical sense, is not part of standard usage among anyone that *I* know. Instead, they'd speak of doing something for (or to) each item in a group -- exactly what list comps express. Speaking for *this* laboratory rat, at least, map/lambda was always a nasty puzzle for me and difficult to sort out. But when list comps were introduced, after reading just a sentence or two on how they worked, they were completely clear and understandable -- much more so than map/lambda after many months of exposure. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
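For readers keeping score, the two spellings under discussion, side by side (a trivial sketch, assuming Python 2, where map() returns a list):

    seq = range(5)

    incremented_map = map(lambda x: x + 1, seq)   # the map/lambda spelling
    incremented_lc = [x + 1 for x in seq]         # the list comprehension

    assert incremented_map == incremented_lc == [1, 2, 3, 4, 5]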
Re: Securing a future for anonymous functions in Python
Jacek Generowicz wrote:

Given a population with previous exposure to computer programming, my money is on the map-lambda version. But this last point is mostly irrelevant. The fact is that you cannot program computers without doing a bit of learning ... and lambda, map and friends really do not take any significant learning.

I guess we'll have to agree to disagree, because given the same conditions, I *still* think that a list comprehension expresses its semantics more clearly than map/lambda. I'd also point out that not all Python programmers will have significant prior exposure to programming ideas, and even those who do will not necessarily have prior exposure to lambdas.

It's true that programming requires learning, and that map/lambda aren't a tremendous burden to learn. Still, to my mind they make a program a tiny increment more complicated. (I find that reading a lambda requires mentally pushing a stack frame to parse the lambda and another to translate map() into a loop, whereas a list comp's expression doesn't require such a shift, and a function name works as a good placeholder that makes reading easier.) It's not a big difference in any individual case, but incremental differences build up. From the sounds of it, you may have the opposite experience with reading map/lambda vs. reading list comps, though, so we could go back and forth on this all week without convincing the other. :)

Speaking for *this* laboratory rat, at least, map/lambda was always a nasty puzzle for me and difficult to sort out. But when list comps were introduced, after reading just a sentence or two on how they worked, they were completely clear and understandable -- much more so than map/lambda after many months of exposure.

Forgetting about lambda, map, filter and reduce, do you find that you pass callables around in your Python programs, or is this not typically done in your programs?

Sure, I pass callables around quite a bit. Usually they're GUI callbacks or the like. Usually they're also either complex enough that lambda would be undesirable if not impossible, or they're simple and numerous (e.g. calling a function with different parameters), such that it's easy to write a factory function that returns closures rather than feed the parameter in with a lambda.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
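Here's the sort of factory-function-returning-closures arrangement I have in mind, as a generic sketch (the names are invented):

    def make_setter(target, value):
        # factory returning a closure; the inline-lambda spelling of the
        # same thing would be:  lambda: target.update(value)
        def setter():
            target.update(value)
        return setter

    settings = {}
    callbacks = [make_setter(settings, {'colour': 'red'}),
                 make_setter(settings, {'size': 'large'})]

    for callback in callbacks:
        callback()
    # settings is now {'colour': 'red', 'size': 'large'}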
Re: python3: 'where' keyword
Paul Rubin wrote: Steve Holden <[EMAIL PROTECTED]> writes: [...] and if you think that newbies will have their lives made easier by the addition of ad hoc syntax extensions then you and I come from a different world (and I suspect the walls might be considerably harder in mine than in yours). I'm saying that many proposals for ad hoc extensions could instead be taken care of with macros. Newbies come to clpy all the time asking how to do assignment expressions, or conditional expressions, or call-by-reference. Sometimes new syntax results. Lots of times, macros could take care of it. Personally, given the requests in question, I'm extremely thankful that I don't have to worry about reading Python code that uses them. I don't *want* people to be able to make up their own control-structure syntax, because that means I need to be able to decipher the code of someone who wants to write Visual Basic as filtered through Java and Perl... If I want mental gymnastics when reading code, I'd use Lisp (or Forth). (These are both great languages, and mental gymnastics would probably do me good, but I wouldn't want it as part of my day-to-day requirements...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: reference or pointer to some object?
Torsten Mohr wrote:

Hi, i'd like to pass a reference or a pointer to an object to a function. The function should then change the object and the changes should be visible in the calling function.

There are two possible meanings of "change the object" in Python. One of them will "just work" for your purposes, the other won't work at all. Python can re-bind a name, or it can mutate an object.

Remember, names are just convenient labels that are attached to an object in memory. You can easily move the label from one object to another, and the label isn't affected if the object it's attached to undergoes some sort of change. Passing a parameter to a function just creates a new label on that object, which can only be seen within that function. The object is the same, though. You can't change what the caller's original label is bound to, but you *can* modify (mutate) the object in place.

    >>> def mutate(somedict):
    ...     somedict['foo'] = 'bar'
    ...
    >>> def rebind(somedict):
    ...     somedict = {'foo':'bar'}
    ...
    >>> d = {'a':1, 'b':2}
    >>> rebind(d)
    >>> d
    {'a': 1, 'b': 2}
    >>> mutate(d)
    >>> d
    {'a': 1, 'b': 2, 'foo': 'bar'}
    >>>

In mutate(), we take the object (which is d in the caller, and somedict in the function) and mutate it. Since it's the same object, it doesn't matter where the mutation happened. But in rebind(), we're moving the somedict label to a *new* dict object. Now d and somedict no longer point to the same object, and when the function ends the object pointed to by somedict is garbage-collected, while the object pointed to by d has never changed.

So, to do what you want to do, you simply need to arrange things so that your parameter is an object that can be mutated in-place.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: python guy ide
Kartic wrote: SPE is great, but it stops responding when I try to run my wxPython apps (probably something I am doing!). I don't know about SPE specifically, but this is a common issue with a lot of lower-end IDEs. The IDE is a GUI application, which operates using an event loop. If the IDE runs user code in the same process that it runs in itself, and if that user code also contains some sort of event loop, then the two loops will interfere with each other (or rather, the first loop won't run until the inner loop quits, which tends to make Windows unhappy...) Most commercial IDEs, I believe, run user code in a separate process and thus avoid this problem. I *think* that IDLE now runs user code out-of-process as well, but I'm not sure about that. I can't afford to pay for an IDE for hobby purposes, so when I'm writing GUI apps, I also keep a command shell open. Edit, save, alt-tab to command shell, uparrow-enter to run program... not as convenient as a toolbar button or hotkey, but it works. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: a new Perl/Python a day
Jon Perez wrote: ... or why 'Perl monkey' is an oft-heard term whereas 'Python monkey' just doesn't seem to be appropriate? That's just because pythons are more likely to *eat* a monkey than to be one :) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Refactoring; arbitrary expression in lists
Paul McGuire wrote:

"Frans Englich" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]

    #--
    def detectMimeType( filename ):
        extension = filename[-3:]

You might consider using os.path.splitext() here, instead of always assuming that the last three characters are the extension. That way you'll be consistent even with extensions like .c, .cc, .h, .gz, etc. Note that os.path.splitext() does include the extension separator (the dot), so that you'll need to test against, e.g., ".php" and ".cpp".

Since the majority of your tests will be fairly direct 'extension "XYZ" means mimetype "aaa/bbb"', this really sounds like a dictionary type solution is called for.

I strongly agree with this. The vast majority of your cases seem to be a direct mapping of extension-string to mimetype-string; using a dictionary (i.e. mapping ;) ) for this is ideal. For those cases where you can't key off of an extension string (such as makefiles), you can do special-case processing if the dictionary lookup fails.

    if extension.endswith("cc"):
        return extToMimeDict["cpp"]

If the intent of this is to catch .cc files, it's easy to add an extra entry into the dict to map '.cc' to the same string as '.cpp'.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
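A minimal sketch of the dictionary-based approach (the extension and mimetype entries are abbreviated and purely illustrative; the stdlib's mimetypes module already does much of this for real):

    import os

    # map extension (including the dot, as os.path.splitext returns it)
    # to mimetype; only a few illustrative entries
    ext_to_mime = {
        '.txt': 'text/plain',
        '.py':  'text/x-python',
        '.cpp': 'text/x-c++src',
        '.cc':  'text/x-c++src',   # .cc maps to the same type as .cpp
        '.gz':  'application/x-gzip',
    }

    def detect_mime_type(filename, default='application/octet-stream'):
        root, extension = os.path.splitext(filename)
        try:
            return ext_to_mime[extension.lower()]
        except KeyError:
            # special cases that can't be keyed on extension (e.g. Makefile)
            if os.path.basename(filename).lower().startswith('makefile'):
                return 'text/x-makefile'
            return default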
Re: Securing a future for anonymous functions in Python
Jacek Generowicz wrote: One more question. Imagine that Python had something akin to Smalltalk code blocks. Would something like map([x | x+1], seq) be any better for you than map(lambda x:x+1, seq) ? I'd say that this is very slightly better, but it's much closer (in my mind) to map/lambda than it is to a list comprehension. In this case, at least the code block is visually self-contained in a way that lambdas are not, but I still have to do more mental work to visualize the overall results than I need with list comps. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: reference or pointer to some object?
Torsten Mohr wrote: I still wonder why a concept like "references" was not implemented in Python. I think it is (even if small) an overhead to wrap an object in a list or a dictionary. Because Python uses a fundamentally different concept for variable names than C/C++/Java (and most other static languages). In those languages, variables can be passed by value or by reference; neither term really applies in Python. (Or, if you prefer, Python always passes by value, but those values *are* references.) Python doesn't have lvalues that contain rvalues; Python has names that are bound to objects. Passing a parameter just binds a new name (in the called function's namespace) to the same object. It's also rather less necessary to use references in Python than it is in C et. al. The most essential use of references is to be able to get multiple values out of a function that can only return a single value. Where a C/C++ function would use the return value to indicate error status and reference (or pointer) parameters to communicate data, a Python program will return multiple values (made quick & easy by lightweight tuples and tuple unpacking) and use exceptions to indicate error status. Changing the value of a parameter is a side-effect that complicates reading and debugging code, so Python provides (and encourages) more straightforward ways of doing things. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
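A two-line illustration of the tuple-unpacking idiom I mean, versus the C-style "fill in this out-parameter" approach (the function and its input format are invented for illustration):

    def parse_host_port(text):
        # return multiple values via a tuple instead of filling in
        # caller-supplied "out" parameters; a malformed string raises
        # ValueError instead of returning an error code
        host, port = text.split(':')
        return host, int(port)

    host, port = parse_host_port('example.com:8080')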
Re: Statement local namespaces summary (was Re: python3: 'where' keyword)
Nick Coghlan wrote:

    def f():
        a = 1
        b = 2
        print 1, locals()
        print 3, locals() using:
            a = 2
            c = 3
            print 2, locals()
        print 4, locals()

I think the least surprising result would be:

    1 {'a': 1, 'b': 2}            # Outer scope
    2 {'a': 2, 'c': 3}            # Inner scope
    3 {'a': 2, 'b': 2, 'c': 3}    # Bridging scope
    4 {'a': 1, 'b': 2}            # Outer scope

Personally, I think that the fact that the bridging statement is executed *after* the inner code block guarantees that results will be surprising. The fact that it effectively introduces *two* new scopes just makes matters worse.

It also seems to me that one could do this using a nested function def with about the same results. You wouldn't have a bridging scope with both sets of names as locals, but your nested function would have access to the outer namespace via normal nested scopes, so I'm really not seeing what the gain is... (Then again, I haven't been following the whole using/where thread, because I don't have that much free time and the initial postings failed to convince me that there was any real point...)

Jeff Shannon
Technician/Programmer
Credit International

In that arrangement, the statement with a using clause is executed normally in the outer scope, but with the ability to see additional names in its local namespace. If this can be arranged, then name binding in the statement with the using clause will work as we want it to. Anyway, I think further investigation of the idea is dependent on a closer look at the feasibility of actually implementing it. Given that it isn't as compatible with the existing nested scope structure as I first thought, I suspect it will be both tricky to implement, and hard to sell to the BDFL afterwards :(

Cheers,
Nick.

--
http://mail.python.org/mailman/listinfo/python-list
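For what it's worth, here's roughly the nested-def arrangement I'm thinking of -- a sketch only, since it doesn't reproduce the proposed bridging scope; it just shows the outer names being visible inside the inner block:

    def f():
        a = 1
        b = 2

        def inner():
            # sees the outer a and b through normal nested scopes,
            # plus its own local c; it can't rebind the outer names
            c = 3
            return a + b + c

        result = inner()
        # a and b are untouched out here; only result is new
        return a, b, result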
Re: reference or pointer to some object?
Antoon Pardon wrote: Op 2005-01-12, Jeff Shannon schreef <[EMAIL PROTECTED]>: It's also rather less necessary to use references in Python than it is in C et. al. You use nothing but references in Python, that is the reason why if you assign a mutable to a new name and modify the object through either name, you see the change through both names. Perhaps it would've been better for me to say that the sorts of problems that are solved by (pointers and) references in C/C++ can be better solved in other ways in Python... One can take the position that every variable in Python is a reference; the semantics work out the same. But I find it clearer to view the Python model as conceptually distinct from the "classic" value/reference model. Re-using the old terms is likely to lead to making mistakes based on inapplicable presumptions. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Refactoring; arbitrary expression in lists
Stephen Thorne wrote: As for the overall efficiency concerns, I feel that talking about any of this is premature optimisation. I disagree -- using REs is adding unnecessary complication and dependency. Premature optimization is a matter of using a conceptually more-complicated method when a simpler one would do; REs are, in fairly simple cases such as this, clearly *not* simpler than dict lookups. The optimisation that is really required in this situation is the same as with any large-switch-statement idiom, be it C or Python. First one must do a frequency analysis of the inputs to the switch statement in order to discover the optimal order of tests! But if you're using a dictionary lookup, then the frequency of inputs is irrelevant. Regardless of the value of the input, you're doing a single hash-compute and (barring hash collisions) a single bucket-lookup. Since dicts are unordered, the ordering of the literal (or of a set of statements adding to the dict) doesn't matter. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: What strategy for random accession of records in massive FASTA file?
Chris Lasher wrote:

Given that the information content is 2 bits per character that is taking up 8 bits of storage, there must be a good reason for storing and/or transmitting them this way? I.e., it is easy to think up a count-prefixed compressed format packing 4:1 in subsequent data bytes (except for the last byte, which has less than 4 2-bit codes).

My guess for the inefficiency in storage size is because it is human-readable, and because most in-silico molecular biology is just a bunch of fancy string algorithms. This is my limited view of these things at least.

Yeah, that pretty much matches my guess (not that I'm involved in anything related to computational molecular biology or genetics). Given the current technology, the cost of the extra storage size is presumably lower than the cost of translating into/out of a packed format. Heck, hard drives cost less than $1/GB now. And besides, for long-term archiving purposes, I'd expect that zip et al on a character stream would provide significantly better compression than a 4:1 packed format, and that zipping the packed format wouldn't be all that much more efficient than zipping the character stream.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
Re: finding/replacing a long binary pattern in a .bin file
Bengt Richter wrote:

BTW, I'm sure you could write a generator that would take a file name and oldbinstring and newbinstring as arguments, and read and yield nice os-file-system-friendly disk-sector-multiple chunks, so you could write

    fout = open('mynewbinfile', 'wb')
    for buf in updated_file_stream('myoldbinfile', 'rb',
                                   oldbinstring, newbinstring):
        fout.write(buf)
    fout.close()

What happens when the bytes to be replaced are broken across a block boundary? ISTM that neither half would be recognized. I believe that this requires either reading the entire file into memory, to scan all at once, or else conditionally matching an arbitrary fragment of the end of a block against the beginning of the oldbinstring... Given that the file in question is only a few tens of kbytes, I'd think that doing it in one gulp is simpler. (For a large file, chunking it might be necessary, though...)

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
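For a file of only a few tens of kbytes, the one-gulp version is short enough to spell out in full (a sketch; the byte patterns here are invented examples):

    def replace_in_file(in_name, out_name, oldbinstring, newbinstring):
        # read the whole file, so the pattern can never straddle a
        # block boundary
        f = open(in_name, 'rb')
        data = f.read()
        f.close()

        out = open(out_name, 'wb')
        out.write(data.replace(oldbinstring, newbinstring))
        out.close()

    # invented example patterns, using the filenames from the quoted post
    replace_in_file('myoldbinfile', 'mynewbinfile',
                    '\x12\x34\x56\x78', '\x9a\xbc\xde\xf0')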
Re: reference or pointer to some object?
Torsten Mohr wrote:

But i think my understanding was wrong (though it is not yet clear). If i hand over a large string to a function and the function had the possibility to change it, wouldn't that mean that it is necessary to hand over a _copy_ of the string? Else, how could it be immutable?

Anything which might change the string can only do so by returning a *new* string.

    >>> a = "My first string"
    >>> b = a.replace('first', 'second')
    >>> b
    'My second string'
    >>> a
    'My first string'
    >>>

Saying that strings are immutable means that, when 'a' is pointing to a string, the (string) object that 'a' points to will always be the same. (Unless 'a' is re-bound, or someone uses some deep black magic to change things "behind the scenes"...) No method that I call on 'a', or function that I pass 'a' to, can alter that object -- it can only create a new object based off of the original. (You can demonstrate this by checking the id() of the objects.) Mutable objects, on the other hand, can change in place. In the case of lists, for example, it will stay the same list object, but the list contents can change.

Note, also, that passing a string into a function does not copy the string; it creates a new name binding (i.e. reference) to the same object.

    >>> def func(item):
    ...     print "Initial ID:", id(item)
    ...     item = item.replace('first', 'second')
    ...     print "Resulting ID:", id(item)
    ...
    >>> id(a)
    26278416
    >>> func(a)
    Initial ID: 26278416
    Resulting ID: 26322672
    >>> id(a)
    26278416
    >>>

Thinking about all this, i came to the idea "How would i write a function that changes a string with not much overhead?"

Since strings cannot really be changed, you simply try to minimize the number of new strings created. For example, appending to a string inside of a for-loop creates a new string object each time, so it's generally more efficient to convert the string to a list, append to the list (which doesn't create a new object), and then join the list together into a string.

    def func(s):
        # change s in some way: remove all newlines,
        # replace some characters by others, ...
        return s

    s = func(s)

This seems to be a way to go, but it becomes messy if i hand over lots of parameters and expect some more return values.

This has the advantage of being explicit about s being (potentially) changed. References, in the way that you mean them, are even messier in the case of numerous parameters, because *any* of those parameters might change. By simply returning (new) objects for all changes, the function makes it very clear what's affected and what isn't.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
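The append-to-a-list idiom mentioned above, spelled out as a small sketch:

    def strip_newlines(lines):
        # collect pieces in a list (cheap to append to) and join once,
        # rather than doing s = s + piece on each pass through the loop
        pieces = []
        for line in lines:
            pieces.append(line.replace('\n', ''))
        return ''.join(pieces)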
Re: What strategy for random accession of records in massive FASTA file?
Chris Lasher wrote: And besides, for long-term archiving purposes, I'd expect that zip et al on a character-stream would provide significantly better compression than a 4:1 packed format, and that zipping the packed format wouldn't be all that much more efficient than zipping the character stream. This 105MB FASTA file is 8.3 MB gzip-ed. And a 4:1 packed-format file would be ~26MB. It'd be interesting to see how that packed-format file would compress, but I don't care enough to write a script to convert the FASTA file into a packed-format file to experiment with... ;) Short version, then, is that yes, size concerns (such as they may be) are outweighed by speed and conceptual simplicity (i.e. avoiding a huge mess of bit-masking every time a single base needs to be examined, or a human-(semi-)readable display is needed). (Plus, if this format might be used for RNA sequences as well as DNA sequences, you've got at least a fifth base to represent, which means you need at least three bits per base, which means only two bases per byte (or else base-encodings split across byte-boundaries) That gets ugly real fast.) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: hash patent by AltNet; Python is prior art?
Robert Kern wrote: I don't know the details [...] Neither do I, but... I'm also willing to bet that the patent won't hold up in court because there's quite a lot of prior art with respect to cryptographic hashes, too. The problem with that is that someone needs to be able to *afford* to challenge it in court. Even patents that are blatantly non-original on the face of things can be difficult and expensive to challenge. Most companies would rather just avoid the legal risks involved in making such a challenge, and most individuals can't afford the kind of legal team that'd be necessary. I'll join in encouraging Europeans to do their best to reject these styles of patents. It's a bit too late for the US, but maybe if we have concrete examples of the benefits of limiting patents then there might be hope for the future. And if things get too bad here, I'd like to have somewhere pleasant to emigrate to. ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Fuzzy matching of postal addresses
Andrew McLean wrote:

The problem is looking for good matches. I currently normalise the addresses to ignore some irrelevant issues like case and punctuation, but there are other issues.

I'd do a bit more extensive normalization. First, strip off the city through postal code (e.g. 'Beaminster, Dorset, DT8 3SS' in your examples). In the remaining string, remove any punctuation and words like "the", "flat", etc.

Here are just some examples where the software didn't declare a match (and how each pair would look after the transformation I suggest above):

    1 Brantwood, BEAMINSTER, DORSET, DT8 3SS
    THE BEECHES 1, BRANTWOOD, BEAMINSTER, DORSET DT8 3SS
      -> 1 Brantwood
      -> BEECHES 1 BRANTWOOD

    Flat 2, Bethany House, Broadwindsor Road, BEAMINSTER, DORSET, DT8 3PP
    2, BETHANY HOUSE, BEAMINSTER, DORSET DT8 3PP
      -> 2 Bethany House Broadwindsor Road
      -> 2 BETHANY HOUSE

    Penthouse,Old Vicarage, 1 Clay Lane, BEAMINSTER, DORSET, DT8 3BU
    PENTHOUSE FLAT THE OLD VICARAGE 1, CLAY LANE, BEAMINSTER, DORSET DT8 3BU
      -> Penthouse Old Vicarage 1 Clay Lane
      -> PENTHOUSE OLD VICARAGE 1 CLAY LANE

    St John's Presbytery, Shortmoor, BEAMINSTER, DORSET, DT8 3EL
    THE PRESBYTERY, SHORTMOOR, BEAMINSTER, DORSET DT8 3EL
      -> St Johns Presbytery Shortmoor
      -> PRESBYTERY SHORTMOOR

    The Pinnacles, White Sheet Hill, BEAMINSTER, DORSET, DT8 3SF
    PINNACLES, WHITESHEET HILL, BEAMINSTER, DORSET DT8 3SF
      -> Pinnacles White Sheet Hill
      -> PINNACLES WHITESHEET HILL

Obviously, this is not perfect, but it's closer. At this point, you could perhaps say that if either string is a substring of the other, you have a match. That should work with all of these examples except the last one.

You could either do this munging for all address lookups, or you could do it only for those that don't find a match in the simplistic way. Either way, you can store Database B's pre-munged addresses so that you don't need to constantly recompute them.

I can't say for certain how this will perform in the false-positives department, but I'd expect that it wouldn't be too bad. For more detailed matching, you might look into finding an algorithm to determine the "distance" between two strings and using that to score possible matches.

Jeff Shannon
Technician/Programmer
Credit International

--
http://mail.python.org/mailman/listinfo/python-list
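For the "distance between two strings" idea, the standard library's difflib is one place to start; a rough sketch (the normalisation and noise-word list here are guesses, not tuned values):

    import difflib
    import re

    def normalise(address):
        # lower-case, strip punctuation, drop a couple of noise words;
        # the noise-word list is just a guess
        address = re.sub(r'[^a-z0-9 ]', ' ', address.lower())
        noise = set(['the', 'flat'])
        return ' '.join(word for word in address.split() if word not in noise)

    def address_score(addr_a, addr_b):
        # 0.0 .. 1.0, higher means more similar
        return difflib.SequenceMatcher(None, normalise(addr_a),
                                       normalise(addr_b)).ratio()

    print address_score('1 Brantwood, BEAMINSTER, DORSET, DT8 3SS',
                        'THE BEECHES 1, BRANTWOOD, BEAMINSTER, DORSET DT8 3SS')

Anything scoring above some threshold (which you'd have to tune against known matches) becomes a candidate match.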
Re: macros
Jeremy Bowers wrote: On Tue, 18 Jan 2005 12:59:07 -0800, Robert Brewer wrote: You know, Guido might as well give in now on the Macro issue. If he doesn't come up with something himself, apparently we'll just hack bytecode. I'm not sure that's a gain. I think that this sort of thing is better to have as an explicitly risky hack, than as an endorsed part of the language. The mere fact that this *is* something that one can clearly tell is working around certain deliberate limitations is a big warning sign, and it makes it much less likely to be used extensively. Relatively few people are going to want to use something called "bytecodehacks" in a mission-critical piece of software, compared to the number who'd be perfectly happy to use a language's built-in macro facilities, so at least it keeps the actual usage down to a somewhat more manageable level. To rephrase this a bit more succinctly ;) there's a big difference between having no practical way to prevent something, and actually encouraging it. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Assigning to self
Marc 'BlackJack' Rintsch wrote: Frans Englich wrote: Then I have some vague, general questions which perhaps someone can reason from: what is then the preferred methods for solving problems which requires Singletons? As already mentioned it's similar to a global variable. If I need a "Singleton" I just put it as global into a module. Either initialize it at module level or define a function with the content of your __init__(). If one is determined to both use a Singleton and avoid having a plain module-global variable, one could (ab)use function default parameters:

class __Foo:
    "I am a singleton!"
    pass

def Foo(foo_obj = __Foo()):
    assert isinstance(foo_obj, __Foo)
    return foo_obj

Of course, this suffers from the weakness that one might pass an object as an argument to the factory function and thus temporarily override the Singleton-ness of __Foo... but a determined programmer will work around any sort of protection scheme anyhow. ;) In general, ISTM that if one wants a Singleton, it's best to create it via a factory function (even if that function then masquerades as a class). This gives you pretty good control over the circumstances in which your Singleton will be created and/or retrieved, and it also makes it trivial to replace the Singleton with some other pattern (such as, e.g., a Flyweight or Borg object) should the need to refactor arise.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
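A minimal sketch of that factory-function spelling, with the cached instance hidden behind a module-level name instead of a default argument (the names here are made up):

class _Foo:
    "I am still a singleton!"

_instance = None

def Foo():
    # hand back the one shared _Foo, creating it on first request
    global _instance
    if _instance is None:
        _instance = _Foo()
    return _instance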
Re: Zen of Python
Timothy Fitz wrote: On 19 Jan 2005 15:24:10 -0800, Carl Banks <[EMAIL PROTECTED]> wrote: The gist of "Flat is better than nested" is "be as nested as you have to be, no more," because being too nested is just a mess. Which I agree with, and which makes sense. However your "gist" is a different meaning. It's not that "Flat is better than nested" it's that "Too flat is bad and too flat is nested so be as nested (or as flat) as you have to be and no more." Perhaps Tim Peters is far too concise for my feeble mind Well, the way that the Zen is phrased, it implies a bit more than that. We all agree that there's a balance to be found between "completely flat" and "extremely nested"; the specific phrasing of the Zen conveys that (in the Python philosophy at least) the appropriate balance point is much closer to the "completely flat" side of things. It's not "... as nested (or as flat) as you have to be and no more", it's "... as nested as you have to be and no more, but if you need significant nesting, you might want to re-examine your design". ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: why am I getting a segmentation fault?
Paul McGuire wrote: 4. filename=r[7].split('/')[-1] is not terribly portable. See if there is a standard module for parsing filespecs (I'll bet there is). Indeed there is -- os.path. In particular, os.path.basename() seems to do exactly what that snippet is intending, in a much more robust (and readable) fashion. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: default value in a list
TB wrote: Hi, Is there an elegant way to assign to a list from a list of unknown size? For example, how could you do something like: a, b, c = (line.split(':')) if line could have less than three fields? (Note that you're actually assigning to a group of local variables, via tuple unpacking, not assigning to a list...) One could also do something like this:

>>> l = "a:b:c".split(':')
>>> a, b, c, d, e = l + ([None] * (5 - len(l)))
>>> print (a, b, c, d, e)
('a', 'b', 'c', None, None)
>>>

Personally, though, I can't help but think that, if you're not certain how many fields are in a string, then splitting it into independent variables (rather than, say, a list or dict) *cannot* be an elegant solution. If the fields deserve independent names, then they must have a definite (and distinct) meaning; if they have a distinct meaning (as opposed to being a series of similar items, in which case you should keep them in a list), then which field is it that's missing? Are you sure it's *always* the last fields? This feels to me like the wrong solution to any problem. Hm, speaking of fields makes me think of classes.

>>> class LineObj:
...     def __init__(self, a=None, b=None, c=None, d=None, e=None):
...         self.a = a
...         self.b = b
...         self.c = c
...         self.d = d
...         self.e = e
...
>>> l = "a:b:c".split(':')
>>> o = LineObj(*l)
>>> o.__dict__
{'a': 'a', 'c': 'c', 'b': 'b', 'e': None, 'd': None}
>>>

This is a bit more likely to be meaningful, in that there's almost certainly some logical connection between the fields of the line you're splitting and keeping them as a class demonstrates that connection, but it still seems a bit smelly to me.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Tuple slices
[My newsreader crapped out on sending this; apologies if it appears twice.] George Sakkis wrote: "Terry Reedy" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] Aside from the problem of not being able to delete the underlying object, the view object for a tuple would have to be a new type of object with a new set of methods. It *could*, but it doesn't have to. One can represent a view as essentially an object with a pointer to a memory buffer and a (start,stop,step) triple. Then a "real tuple" is just a "view" with the triple being (0, len(sequence), 1). Except that that's not how Python tuples *are* constructed, and it'd be a pretty big deal to redo the architecture of all tuples to support this relatively special case. Even if this re-architecting were done, you're still constructing a new object -- the difference is that you're creating this (start,stop,step) triple instead of duplicating a set of PyObject* pointers, and then doing math based on those values instead of straightforward pointer access. I'm not at all convinced that this really saves you a significant amount for tuple slices (really, you're still constructing a new tuple anyhow, aren't you?), and it's going to cost a bit in both execution time and complexity in the common case (accessing tuples without slicing). If there's a big cost in object construction, it's probably going to be the memory allocation, and for a reasonable tuple the size of the memory required is not going to significantly affect the allocation time. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Another scripting language implemented into Python itself?
Roy Smith wrote: In article <[EMAIL PROTECTED]>, Quest Master <[EMAIL PROTECTED]> wrote: So, my question is simply this: is there an implementation of another scripting language into Python? Python *is* a scripting language. Why not just let your users write Python modules which you them import and execute via some defined API? Because you cannot make Python secure against a malicious (or ignorant) user -- there's too much flexibility to be able to guard against every possible way in which user-code could harm the system. Parsing your own (limited) scripting language allows much better control over what user-code is capable of doing, and therefore allows (at least some measure of) security against malicious code. To the O.P.: Yes, people have implemented other languages in Python. For example, I believe that Danny Yoo has written a Scheme interpreter in Python (Google tells me it should be at http://hkn.eecs.berkeley.edu/~dyoo/python/pyscheme but I'm getting no response from that host right now), but I don't know whether Scheme counts as a scripting language. ;) However, if you're using a fully-featured language for these user scripts, you'll probably have the same security issues I mentioned for Python. Unless you really need that level of features, you may be better off designing your own limited language. Check into the docs for pyparsing for a starter... Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: is this use of lists normal?
Gabriel B. wrote: My object model ended up as

DataStorageObj
 |-itemsIndex (list, could very well be a set...)
 |   |-[0] = 0
 |   |-[1] = 1
 |   |-[2] = 5
 |   '-[3] = 6
 '-Items (list)
     |-[0] = ['cat food', '12,20']
     |-[1] = ['dog food', '8,00']
     |-[2] = ['dead parrot', '25,00']
     '-[3] = ['friendly white bunny', '12,25']

the list itemsindex has the DB index of the data, and the list items has the data. So if i want something like "SELECT * FROM items WHERE idx=5" i'd use in my program self.items[ self.itemsIndex.index(5) ] i reccon that's not much nice to use when you're gona do /inserts/ but my program will just read the entire data and never change it. Was i better with dictionaries? the tutorial didn't gave me a good impression on them for custom data... Tupples? the tutorial mentions one of it's uses 'employee records from a database' but unfortunatly don't go for it... Yes, I think you'd be better off using dictionaries here. You can spare yourself a level of indirection. Tuples would be a good way to store the individual items -- instead of a list containing a name and a price (or so I presume), you'd use a tuple. Your data storage would then be a dictionary of tuples --

self.items = { 0: ('cat food', '12,20'),
               1: ('dog food', '8,00'),
               5: ('dead parrot', '25,00'),
               6: ('friendly white bunny', '12,25') }

Then your SELECT above would translate to

my_item = self.items[5]

and my_item would then contain the tuple ('dead parrot', '25,00'). Note that the most important difference between tuples and lists, for this example, is conceptual. Tuples generally express "this is a collection of different things that are a conceptual group", whereas lists express "this is a series of similar objects". i think the 'ideal' data model should be something like ({'id': 0, 'desc': 'dog food', 'price': '12,20'}, ...) But i have no idea how i'd find some item by the ID within it withouy using some loops You could use a dictionary for each item, as you show, and then store all of those in a master dictionary keyed by id -- in other words, simply replace the tuples in my previous example with a dict like what you've got here. You could also create a simple class to hold each item, rather than using small dicts. (You'd probably still want to store class instances in a master dict keyed by id.) Generally, any time your problem is to use one piece of information to retrieve another piece (or set) of information, dictionaries are very likely to be the best approach.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
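A minimal sketch of the simple-class variation mentioned above (the attribute names are just guesses at what the two fields mean):

class Item:
    def __init__(self, desc, price):
        self.desc = desc
        self.price = price

items = { 0: Item('cat food', '12,20'),
          1: Item('dog food', '8,00'),
          5: Item('dead parrot', '25,00'),
          6: Item('friendly white bunny', '12,25') }

print items[5].desc    # prints: dead parrot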
Re: Classical FP problem in python : Hamming problem
Bengt Richter wrote: On 25 Jan 2005 08:30:03 GMT, Nick Craig-Wood <[EMAIL PROTECTED]> wrote: If you are after readability, you might prefer this...

def hamming():
    def _hamming():
        yield 1
        for n in imerge(imap(lambda h: 2*h, iter(hamming2)),
                        imerge(imap(lambda h: 3*h, iter(hamming3)),
                               imap(lambda h: 5*h, iter(hamming5)))):
            yield n
    hamming2, hamming3, hamming5, result = tee(_hamming(), 4)
    return result

Are the long words really that helpful?

def hamming():
    def _hamming():
        yield 1
        for n in imerge(imap(lambda h: 2*h, iter(hg2)),
                        imerge(imap(lambda h: 3*h, iter(hg3)),
                               imap(lambda h: 5*h, iter(hg5)))):
            yield n
    hg2, hg3, hg5, result = tee(_hamming(), 4) # four hamming generators
    return result

Well, judging by the fact that shortening the identifiers made it so that you felt the need to add a comment indicating what they were identifying, I'd say that yes, the long words *are* helpful. ;) Comments are good, but self-documenting code is even better.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Tuple slices
George Sakkis wrote: An iterator is perfectly ok if all you want is to iterate over the elements of a view, but as you noted, iterators are less flexible than the underlying sequence. The view should be (or at least appear) identical in functionality (i.e. public methods) with its underlying sequence. So, what problem is it, exactly, that you think you'd solve by making tuple slices a view rather than a copy? As I see it, you get the *possibility* of saving a few bytes (which may go in the other direction) at a cost of complexity and speed. You have greater dependence of internal objects on each other, you can't get rid of the original tuple while any slice views of it exist, you gain nothing in the way of syntax/usage simplicity... so what's the point? To my mind, one of the best things about Python is that (for the most part) I don't have to worry about actual memory layout of objects. I don't *care* how tuples are implemented, they just work. It seems to me that you're saying that they work perfectly fine as-is, but that you have a problem with the implementation that the language tries its best to let you not care about. Is this purely abstract philosophy? Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Another scripting language implemented into Python itself?
Grant Edwards wrote: On 2005-01-25, Rocco Moretti <[EMAIL PROTECTED]> wrote: Bottom line: Don't exec or eval untrusted code. Don't import untrusted modules. I still don't see how that's any different for Python than for any other language. Yep, and one should be careful about executing untrusted C code, too. If you're running a webserver, do you let random users upload executables and then run them? Probably not. The key point here, what I was attempting to say in my earlier post, is that while Python can be useful as an internal scripting language inside of an application, it gives users of that application the same power over your system as any arbitrary C code. That's fine if it's an internal application, or the application can be run with (enforceable) restricted permissions, but it's still risky. How many security alerts has Microsoft issued because of some bug that allowed the execution of arbitrary code? Well, having Python scripting access is essentially the same thing. At best, you can use the external environment to limit the process running Python to its own sandbox (e.g. running as a limited-permission user in a chroot jail), but you still can't prevent one user of that application from screwing with other users of the application, or the application's own internal data. In other words, the only difference is that Python makes it much more tempting to hand over the keys to your server. I confess that I jumped to the (apparently unsupported) conclusion that this was some sort of server-based, internet/shared application. If that's not the case, then concerns about security are not so significant. If the users are running this application on their own machines, then letting them script it in Python is a perfectly valid (and probably quite desirable) approach. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Browsing text ; Python the right tool?
Paul Kooistra wrote: 1. Does anybody now of a generic tool (not necessarily Python based) that does the job I've outlined? 2. If not, is there some framework or widget in Python I can adapt to do what I want? Not that I know of, but... 3. If not, should I consider building all this just from scratch in Python - which would probably mean not only learning Python, but some other GUI related modules? This should be pretty easy. If each record is CRLF terminated, then you can get one record at a time simply by iterating over the file ("for line in open('myfile.dat'): ..."). You can have a dictionary of classes or factory functions, one for each record type, keyed off of the 2-character identifier. Each class/factory would know the layout of that record type, and return a(n) instance/dictionary with fields separated out into attributes/items. The trickiest part would be in displaying the data; you could potentially use COM to insert it into a Word or Excel document, or code your own GUI in Python. The former would be pretty easy if you're happy with fairly simple formatting; the latter would require a bit more effort, but if you used one of Python's RAD tools (Boa Constructor, or maybe PythonCard, as examples) you'd be able to get very nice results. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
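A minimal sketch of the dispatch idea (the record identifiers, field positions, and field names here are invented -- the real layouts would come from your file spec):

def parse_person(line):
    return {'type': 'person',
            'name': line[2:32].strip(),
            'dob': line[32:40]}

def parse_address(line):
    return {'type': 'address',
            'street': line[2:42].strip(),
            'city': line[42:72].strip()}

PARSERS = {'01': parse_person,
           '02': parse_address}

def records(filename):
    # one dict per record, dispatched on the 2-character identifier
    for line in open(filename):
        line = line.rstrip('\r\n')
        parser = PARSERS.get(line[:2])
        if parser is not None:
            yield parser(line)

for rec in records('myfile.dat'):
    print rec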
Re: Help! Host is reluctant to install Python
Daniel Bickett wrote: I've been trying to convince my host to install python/mod_python on his server for a while now, however there are a number of reasons he is reluctant to do so, which I will outline here: 1. His major reason is optimization. He uses Zend's optimization of PHP as an example, and he has stated that python is rather resource consuming. This depends, as all things, on what's being done with it -- it's certainly possible to write resource-hogging Python apps, but it's possible to do that in any language. And I'm not aware of Python being particularly worse in this regard than any other web-scripting language. I suspect this translates to "I'm avoiding anything that I don't already know". And, in light of point #1, I suggested that if there wasn't any optimization immediately available, he could just enable it for my account (thus lessening potential resource consumption at any given time), to which he retorted "Do /you/ know how to do that?", and I must say, he has me cornered ;-) I don't know how to do that offhand... but then, I don't expect people to pay me for web-hosting expertise. I would expect, from the little that I *do* know of Apache configuration, that it wouldn't be too difficult to allow Python CGIs to run out of only one specific directory, that being within your webspace. If you're paying for this service, then I'd agree with everyone else that you should be paying for a different service. There's plenty of webhosts around who *will* do Python. If this is a friend, then point him to the Python Success Stories (http://www.pythonology.com/success) and suggest that if there's that many Python web apps around, it can't be too horrible on resources/management, and that he shouldn't be so afraid to try something new... Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: python without OO
Davor wrote: [...] what I need that Python has and bash&dos don't is: 1. portability (interpreter runs quite a bit architectures) 2. good basic library (already there) 3. modules for structuring the application (objects unnecessary) 4. high-level data structures (dictionaries & lists) 5. no strong static type checking 6. very nice syntax But modules, lists, and dictionaries *are* all objects, and one uses standard object attribute-access behavior to work with them. so initially I was hoping this is all what Python is about, but when I started looking into it it has a huge amount of additional (mainly OO) stuff which makes it in my view quite bloated now... If you're basing your opinion of OO off of C++ and Java, then it's not too surprising that you're wary of it. But really, the OO in Python is as simple and transparent as the rest of the syntax. You don't need to define your own classes if you don't want to -- it's quite easy to write modules that contain only simple functions. A trivial understanding of objects & object attributes is needed to use the OO portions of the standard library. If you really want, you can still dictate that your own project's code be strictly procedural (i.e. you can use objects but not define objects). > anyhow, I guess I'll have to constrain what can be included in the code through different policies rather than language limitations... You mention elsewhere the fear of some developer with a 50-layer inheritance heirarchy. That's not something that normally happens in Python. Indeed, one of the tenets of the Zen of Python is that "flat is better than nested". But more than that, it's just not necessary to do that sort of thing in Python. In statically typed languages like C++ and Java, inheritance trees are necessary so that you can appropriately categorize objects by their type. Since you must explicitly declare what type is to be used where, you may need fine granularity of expressing what type a given object is, which requires complex inheritance trees. In Python, an object is whatever type it acts like -- behavior is more important than declared type, so there's no value to having a huge assortment of potential types. Deep inheritance trees only happen when people are migrating from Java. ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: python without OO
Davor wrote: M.E.Farmer wrote: Wrap your head around Python, don't wrap the Python around your head! This is NOT Java, or C++ or C , it IS Python. that's interesting hypothesis that behavior will vary due to the use of different language ... If using a different language doesn't require/encourage different programming habits, then what's the point of using a different language? "A language that doesn't affect the way you think about programming, is not worth knowing." --Alan Perlis Different languages offer different modes of expression, different ways of approaching the same problem. That's *why* we have so many different programming languages -- because no single approach is the best one for all problems, and knowing multiple approaches helps you to use your favored approach more effectively. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Browsing text ; Python the right tool?
John Machin wrote: Jeff Shannon wrote: [...] If each record is CRLF terminated, then you can get one record at a time simply by iterating over the file ("for line in open('myfile.dat'): ..."). You can have a dictionary of classes or factory functions, one for each record type, keyed off of the 2-character identifier. Each class/factory would know the layout of that record type, This is plausible only under the condition that Santa Claus is paying you $X per class/factory or per line of code, or you are so speed-crazy that you are machine-generating C code for the factories. I think that's overly pessimistic. I *was* presuming a case where the number of record types was fairly small, and the definitions of those records reasonably constant. For ~10 or fewer types whose spec doesn't change, hand-coding the conversion would probably be quicker and/or more straightforward than writing a spec-parser as you suggest. If, on the other hand, there are many record types, and/or those record types are subject to changes in specification, then yes, it'd be better to parse the specs from some sort of data file. The O.P. didn't mention anything either way about how dynamic the record specs are, nor the number of record types expected. I suspect that we're both assuming a case similar to our own personal experiences, which are different enough to lead to different preferred solutions. ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: inherit without calling parent class constructor?
Christian Dieterich wrote: Hi, I need to create many instances of a class D that inherits from a class B. Since the constructor of B is expensive I'd like to execute it only if it's really unavoidable. Below is an example and two workarounds, but I feel they are not really good solutions. Does somebody have any ideas how to inherit the data attributes and the methods of a class without calling it's constructor over and over again? You could try making D a container for B instead of a subclass:

class D(object):
    def __init__(self, ...):
        self._B = None
    def __getattr__(self, attr):
        if self._B is None:
            self._B = B()
        return getattr(self._B, attr)

Include something similar for __setattr__(), and you should be in business. If it will work for numerous D instances to share a single B instance (as one of your workarounds suggests), then you can give D's __init__() a B parameter that defaults to None.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
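A sketch of what the __setattr__() half might look like, routing everything through one lazy accessor. B here is just a stand-in for the expensive class from the original post:

class B(object):
    def __init__(self):
        # stands in for the expensive constructor
        self.size = 42

class D(object):
    def __init__(self):
        object.__setattr__(self, '_B', None)
    def _delegate(self):
        # build the expensive B only when somebody actually touches it
        if self._B is None:
            object.__setattr__(self, '_B', B())
        return self._B
    def __getattr__(self, attr):
        return getattr(self._delegate(), attr)
    def __setattr__(self, attr, value):
        setattr(self._delegate(), attr, value)

d = D()
print d.size    # B gets built here, on first use
d.size = 99     # routed to the same B instance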
Re: Browsing text ; Python the right tool?
John Machin wrote: Jeff Shannon wrote: [...] For ~10 or fewer types whose spec doesn't change, hand-coding the conversion would probably be quicker and/or more straightforward than writing a spec-parser as you suggest. I didn't suggest writing a "spec-parser". No (mechanical) parsing is involved. The specs that I'm used to dealing with set out the record layouts in a tabular fashion. The only hassle is extracting that from a MSWord document or a PDF. The "specs" I'm used to dealing with are inconsistent enough that it's more work to "massage" them into strict tabular format than it is to retype and verify them. Typically it's one or two file types, with one or two record types each, from each vendor -- and of course no vendor uses anything similar to any other, nor is there a standardized way for them to specify what they *do* use. Everything is almost completely ad-hoc. If, on the other hand, there are many record types, and/or those record types are subject to changes in specification, then yes, it'd be better to parse the specs from some sort of data file. "Parse"? No parsing, and not much code at all: The routine to "load" (not "parse") the layout from the layout.csv file into dicts of dicts is only 35 lines of Python code. The routine to take an input line and serve up an object instance is about the same. It does more than the OP's browsing requirement already. The routine to take an object and serve up a correctly formatted output line is only 50 lines of which 1/4 is comment or blank. There's a tradeoff between the effort involved in writing multiple custom record-type classes, and the effort necessary to write the generic loading routines plus the effort to massage/coerce the specifications into a regular, machine-readable format. I suppose that "parsing" may not precisely be the correct term here, but I was using it in parallel to, say, ConfigParser and Optparse. Either you're writing code to translate some sort of received specification into a usable format, or you're manually pushing bytes around to get them into a format that your code *can* translate. I'd say that my creation of custom classes is just a bit further along a continuum than your massaging of specification data -- I'm just massaging it into Python code instead of CSV tables. I suspect that we're both assuming a case similar to our own personal experiences, which are different enough to lead to different preferred solutions. ;) Indeed. You seem to have lead a charmed life; may the wizards and the rangers ever continue to protect you from the dark riders! :-) Hardly charmed -- more that there's so little regularity in what I'm given that massaging it to a standard format is almost as much work as just buckling down and retyping it. My one saving grace is that I'm usually able to work with delimited files, rather than column-width-specified files. I'll spare you the rant about my many job-related frustrations, but trust me, there ain't no picnics here! Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: python without OO
Davor wrote: so you get a nice program with separate data structures and functions that operate on these data structures, with modules as containers for both (again ideally separated). Very simple to do and maintain [...] Replace "modules" with "classes" in the above quote, and you have the very essence of object-oriented programming. (What you describe here *is* object-oriented programming, you're just trying to avoid the 'class' statement and use module-objects where 'traditional' OO would use class instances.) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: how to comment out a block of code
Xah Lee wrote: is there a syntax to comment out a block of code? i.e. like html's <!-- --> or perhaps put a marker so that all lines from there on are ignored? thanks. Of course -- this feature is so important that all computer manufacturers worldwide have made a special button on the computer case just for this! Normally it's a large round button, with perhaps a green backlight. Press the button and hold it in for about 3 seconds, and the rest of your code/writing will be ignored just as it should be. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: subprocess.Popen() redirecting to TKinter or WXPython textwidget???
Ivo Woltring wrote: The output of mencoder is not readable with readlines (i tried it) because after the initial informational lines You don't get lines anymore (you get a linefeed but no newline) The prints are all on the same line (like a status line) something like Pos: 3,1s 96f ( 0%) 42fps Trem: 0min 0mb A-V:0,038 [171:63] Hm, I'm inferring that what you mean is that you get a carriage return (ASCII 0x0D) but no linefeed (ASCII 0x0A) -- CR returns you to the beginning of the current line, and LF advances you to the next line. Rather than using readlines(), you could simply read() a few characters (or a single character) at a time, buffering it yourself and passing it on when you see the CR. You're likely to run into I/O blockage issues no matter how you do this, though -- even if you're reading a single character at a time, read(1) won't return until you've read that character, and if the program on the other end of the pipe isn't writing anything, then your app is stuck. The "simple" way to do this is by using an i/o thread, which does something like this:

buffer = []
while 1:
    char = outpipe.read(1)
    if char == '\x0D':
        notify_gui_thread( ''.join(buffer) )
        buffer = []
    else:
        buffer.append(char)
    if StopEvent.IsSet():
        raise CustomStopException

Note that you don't want to try to update your GUI widgets directly from the worker (i/o) thread -- very few GUI toolkits are threadsafe, so you need to make all GUI calls from a single thread.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
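One way the notify_gui_thread() hand-off might be wired up -- a sketch assuming wxPython, with the worker thread dropping lines into a Queue that the GUI thread drains from a timer (none of these names come from the original post):

import Queue

line_queue = Queue.Queue()

def notify_gui_thread(text):
    # called from the worker (i/o) thread; just queue the status line
    line_queue.put(text)

def on_gui_timer(text_widget):
    # called in the GUI thread, e.g. from a wx.Timer firing a few times
    # per second; text_widget is assumed to be a wx.TextCtrl
    try:
        while True:
            text_widget.AppendText(line_queue.get_nowait() + '\n')
    except Queue.Empty:
        pass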
Re: String Fomat Conversion
Stephen Thorne wrote: On Thu, 27 Jan 2005 00:02:45 -0700, Steven Bethard <[EMAIL PROTECTED]> wrote: By using the iterator instead of readlines, I read only one line from the file into memory at once, instead of all of them. This may or may not matter depending on the size of your files, but using iterators is generally more scalable, though of course it's not always possible. I just did a teensy test. All three options used exactly the same amount of total memory. I would presume that, for a small file, the entire contents of the file will be sucked into the read buffer implemented by the underlying C file library. An iterator will only really save memory consumption when the file size is greater than that buffer's size. Actually, now that I think of it, there's probably another copy of the data at Python level. For readlines(), that copy is the list object itself. For iter and iter.next(), it's in the iterator's read-ahead buffer. So perhaps memory savings will occur when *that* buffer size is exceeded. It's also quite possible that both buffers are the same size... Anyhow, I'm sure that the fact that they use the same size for your test is a reflection of buffering. The next question is, which provides the most *conceptual* simplicity? (The answer to that one, I think, depends on how your brain happens to see things...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: inherit without calling parent class constructor?
Christian Dieterich wrote: On Dé Céadaoin, Ean 26, 2005, at 17:09 America/Chicago, Jeff Shannon wrote: You could try making D a container for B instead of a subclass: Thank you for the solution. I'll need to have a closer look at it. However it seems like the decision whether to do "some expensive calculation" or not is delegated to the exterior of class D. Different classes that instanciate D would need to know what has been going on elsewhere. That might complicate things. True, in the sense that B is instantiated as soon as a message is sent to D that requires B's assistance to answer. If the "decision" is a case of "only calculate this if we actually want to use it", then this lazy-container approach works well. If the decision requires consideration of other factors, then it's a bit more complex -- though one could write that logic into D, and have D throw an exception (or return a sentinel) if it decides not to instantiate B at that time. This then requires a bit more checking on the client side, but the actual logic is encapsulated in D. And really, I don't see where the container approach shifts the decision any more than the descriptor approach does. In either case, the calculation happens as soon as someone requests D.size ... (However, from your other descriptions of your problem, it does sound like a property / descriptor is a better conceptual fit than a contained class is. I mostly wanted to point out that there are other ways to use OO than inheritance...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: inherit without calling parent class constructor?
Christian Dieterich wrote: On Déardaoin, Ean 27, 2005, at 14:05 America/Chicago, Jeff Shannon wrote: the descriptor approach does. In either case, the calculation happens as soon as someone requests D.size ... Agreed. The calculation happens as soon as someone requests D.size. So far so good. Well, maybe I'm just not into it deep enough. As far as I can tell, In your class D the calculation happens for every instantiation of D, right? For my specific case, I'd like a construct that calculates D.size exactly once and uses the result for all subsequent instantiations. Okay, so size (and the B object) is effectively a class attribute, rather than an instance attribute. You can do this explicitly --

class D(object):
    _B = None
    def __getattr__(self, attr):
        if self._B is None:
            myB = B()
            D._B = myB
        return getattr(self._B, attr)

Now, when the B object is first needed, it's created (triggering that expensive calculation) and stored in D's class object. Since all instances of D share the class object, they'll all share the same instance of B. Probably not worth the trouble in this particular case, but maybe in another case... :)

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Question about 'None'
flamesrock wrote: I should also mention that I'm using version 2.0.1 (schools retro solaris machines :( ) At home (version 2.3.4) it prints out 'True' for the above code block. That would explain it -- as /F mentioned previously, the special case for None was added in 2.1. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Basic file operation questions
Caleb Hattingh wrote: Peter Yes, you can even write f = open("data.txt") for line in f: # do stuff with line f.close() This has the additional benefit of not slurping in the entire file at once. Is there disk access on every iteration? I'm guessing yes? It shouldn't be an issue in the vast majority of cases, but I'm naturally curious :) Disk access should be buffered, possibly both at the C-runtime level and at the file-iterator level (though I couldn't swear to that). I'm sure that the C-level buffering happens, though. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: returning True, False or None
Jeremy Bowers wrote: On Fri, 04 Feb 2005 10:48:44 -0700, Steven Bethard wrote: For a given list:
* If all values are None, the function should return None.
* If at least one value is True, the function should return True.
* Otherwise, the function should return False.
Yes, I see the smell, you are searching the list multiple times. You could bail out when you can:

seenFalse = False
for item in list:
    if item: return True
    if item is False: seenFalse = True
if seenFalse: return False
return None

I'd modify this approach slightly...

def tfn(lst):
    answer = None
    for item in lst:
        if item is True: return True
        if item is False: answer = False
    return answer

But yeah, the original, straightforward way is probably enough clearer that I wouldn't bother with anything else unless lists might be long enough for performance to matter.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: returning True, False or None
Jeremy Bowers wrote: On Fri, 04 Feb 2005 16:44:48 -0500, Daniel Bickett wrote: [ False , False , True , None ] False would be returned upon inspection of the first index, even though True was in fact in the list. The same is true of the code of Jeremy Bowers, Steve Juranich, and Jeff Shannon. As for Raymond Hettinger, I can't even be sure ;) Nope. Indeed. Similarly for mine, which was really just a slight transform of Jeremy's (setting a return variable directly, instead of setting a flag that's later used to decide what to return):

>>> def tfn(lst):
...     answer = None
...     for item in lst:
...         if item is True: return True
...         if item is False: answer = False
...     return answer
...
>>> list = [False, False, True, None]
>>> tfn(list)
1
>>> list = [None, False, False, None]
>>> tfn(list)
0
>>> list = [None, None, None, None]
>>> print tfn(list)
None
>>>

The noted logical flaw *has* been present in a number of proposed solutions, however. The key point to note is that one *must* examine the entire list *unless* you find a True; short-circuiting on False means that you may miss a later True.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: changing local namespace of a function
Bo Peng wrote: My function and dictionaries are a lot more complicated than these so I would like to set dict as the default namespace of fun. This sounds to me like you're trying to re-implement object orientation. Turn all of those functions into methods on a class, and instead of creating dictionaries that'll be passed into functions, create class instances.

class MyClass(object):
    def __init__(self, **kwargs):
        for key, val in kwargs.items():
            setattr(self, key, val)
    def fun(self):
        self.z = self.y + self.x

a = MyClass(x=1, y=2)
a.fun()
print a.z

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: [noob] Error!
administrata wrote: Write a Car Salesman program [...] This sounds like homework, and we generally try to avoid solving peoples' homework problems for them, but I can offer a suggestion. error occurs, i think the problem is in actual_price. but, I don't know how to comebine percentage and raw_input. help me... It's hard to be sure since you don't say what the error is, nor anything about what you expect to see and what you actually see. However, if you're pretty sure that the problem is the line where you calculate actual_price, then fire up an interactive interpreter and try fiddling with things. You've got a lot of subexpressions there; pick some values and try each subexpression, one at a time, and take a look at what you get. I bet that it won't take you long to figure out why you're not getting the result you expect. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: variable declaration
Alexander Zatvornitskiy wrote: Another example. Let say you have variable PowerOfGenerator in your program. But, it is only active power, so you have to (1)rename PowerOfGenerator to ActivePowerOfGenerator, (2)calculate ReactivePowerOfGenerator, and (3)calculate new PowerOfGenerator by formula PowerOfGenerator=sqrt(ReactivePowerOfGenerator**2+ActivePowerOfGenerator**2) With var declarations, on step (1) you just rename PowerOfGenerator to ActivePowerOfGenerator in the place of its declaration, and compile your program. Compiler will show you all places where you have to rename variables. After it, on step (3) you can safely and peacefully add new PowerOfGenerator variable. You can also get all places where said variable exists by using grep, or your editor's search feature. I don't see how a var declaration gains you anything over 'grep PowerOfGenerator *.py' ... Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Basic file operation questions
Marc Huffnagle wrote: When you read a file with that method, is there an implied close() call on the file? I assume there is, but how is that handled? [...] for line in file(...): # do stuff As I understand it, the disk file will be closed when the file object is garbage collected. In CPython, that will be as soon as there are no active references to the file; i.e., in the above case, it should happen as soon as the for loop finishes. Jython uses Java's garbage collector, which is a bit less predictable, so the file may not be closed immediately. It *will*, however, be closed during program shutdown if it hasn't happened before then. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
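If you'd rather not depend on when the collector gets around to it, you can always close explicitly -- a minimal sketch (the filename is made up):

f = open('myfile.dat')
try:
    for line in f:
        pass    # do stuff with line
finally:
    f.close()   # closes at a known point, under CPython or Jython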
Re: Big development in the GUI realm
Maciej Mróz wrote: However, imagine simple situation: 1. I write proprietary program with open plugin api. I even make the api itself public domain. Program works by itself, does not contain any GPL-ed code. 2. Later someone writes plugin using the api (which is public domain so is GPL compatible), plugin gets loaded into my software, significantly affecting its functionality (UI, operations, file formats, whatever). 3. Someone downloads the plugin and loads it into my program I believe that in this case, the key is *distribution*. You are not violating the GPL, because you are not distributing a program that is derived (according to the GPL's definition of derived) from GPL code. The plugin author *is* distributing GPL-derived code, but is doing so under a GPL license. That's fine too. The end user is now linking (dynamically) GPL code with your proprietary code. However, he is *not* distributing the linked assemblage. This is allowed under the GPL; its terms only apply when distribution takes place. If the end user is a repackager, and then turns around and distributes both sets of code together, then that would (potentially) violate GPL terms. But as long as they're not distributed together, then it's okay. This should even extend to distributing a basic (proprietary) plugin and including a document describing where & how to get the more-featureful GPL replacement plugin. (Distributing both programs as separate packages on a single installation medium would be a tricky edge case. I suspect it *could* be done in a GPL-acceptable way, but one would need to take care about it.) Of course, this is only my own personal interpretation and opinion -- IANAL, TINLA, YMMV, etc, etc. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: interactive execution
Jive Dadson wrote: How does one execute arbitrary text as code within a module's context? I've got some code that compiles some text and then executes it. When the string is "print 'Hello'", it prints "Hello". I get no exception when I compile and execute "foo = 555". If I then compile and exec "print foo", I get a name error. The variable foo is undefined. My assumption is that the "exec" command created a new namespace, put "foo" in that namespace, and then threw the namespace away. Or something. You can do exec codestring in globaldict, localdict (Or something like that, this is from unused memory and is untested.) The net effect is that exec uses the subsequent dictionaries as its globals and locals, reading from and writing to them as necessary. (Note that this doesn't get you any real security, because malicious code can still get to __builtins__ from almost any object...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: interactive execution
Jive Dadson wrote: Yeah. I got it. exec "foo = 555" in globals(), locals() does the trick. You can do it with your own dicts, too -- but they must already exist, exec doesn't create them out of nowhere.

>>> myglobals = {'a':2, 'b':5}
>>> mylocals = {'c': 3}
>>> exec "d = a * b + c" in myglobals, mylocals
>>> myglobals
{'a': 2, '__builtins__': {...}, 'b': 5}
>>> mylocals
{'c': 3, 'd': 13}
>>>

This gives you some control over what the exec'ed statement actually sees, as well as what happens with the results. (But as I mentioned before, there is no real security here if you're exec'ing arbitrary code -- there's no sandboxing involved, and the exec'ed string *can* use that __builtins__ reference (among other things) to do all sorts of malicious stuff.)

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: newbie question
[EMAIL PROTECTED] wrote: Thanks for the reply. I am trying to convert some C code to python and i was not sure what the equivalent python code would be. I want to postdecrement the value in the while loop. Since i cannot use assignment in while statements is there any other way to do it in python? Roughly speaking, if you have a loop in C like

while (n--) {
    func(n);
}

then the mechanical translation to Python would look like this:

while True:
    n -= 1
    if not n:
        break
    func(n)

However, odds are fairly decent that a mechanical translation is not the best approach, and you may (as just one of many examples) be much better off with something more like:

for i in range(n)[::-1]:
    func(n)

The '[::-1]' iterates over the range in a reverse (decreasing) direction; this may or may not be necessary depending on the circumstances.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Python as capable as Perl for sysadmin work?
Courageous wrote: *checks self to see if self is wearing rose colored glasses* assert(self.glasses.color != 'rose') ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: newbie question
Dennis Lee Bieber wrote: On Wed, 09 Feb 2005 18:10:40 -0800, Jeff Shannon <[EMAIL PROTECTED]> declaimed the following in comp.lang.python: for i in range(n)[::-1]: func(n) Shouldn't that be func(i) (the loop index?) You're right, that's what I *meant* to say. (What, the interpreter doesn't have a "do what I mean" mode yet? ;) ) The '[::-1]' iterates over the range in a reverse (decreasing) direction; this may or may not be necessary depending on the circumstances. Eeee sneaky... (I'm a bit behind on latest syntax additions) I'd probably have coded something like for n1 in range(n): func(n-n1) though, and note that I do admit it here [...] Given a need/desire to avoid extended slicing (i.e. being stuck with an older Python, as I often am), I'd actually do this by changing the input to range(), i.e. for i in range(n, 0, -1): # ... That (IMO) makes the decreasing-integer sequence a bit clearer than doing subtraction in the function parameter list does. Actually, it's possibly clearer than the extended slicing, too, so maybe this would be the better way all around... ;) I haven't done the detailed analysis to properly set the end point... And as Peter Hansen points out, none of the Python versions leave n in the same state that the C loop does, so that's one more way in which an exact translation is not really possible -- and (IMO again) further evidence that trying to do an exact translation would be ill-conceived. Much better to consider the context in which the loop is used and do a looser, idiomatic translation. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: how can I replace a execfile with __import__ in class to use self variables
Wensheng wrote: I just realized I can pass the object itself: like p=__import__("printit") p.pr(self) Leaving no reason not to do *this* part as

import printit
printit.pr(self)

rather than using the internal hook function to do exactly the standard thing.

in printit.py -

def pr(self):
    print self.var

---

(Though frankly I don't see the advantage of having this tiny function in a separate file to begin with...)

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: [N00B] What's %?
Harlin wrote: What good is the modulus operator? What would I ever need it for?
* A quick way of testing whether an integer is even and odd
* For that matter, a quick way of testing whether the variable is a factor of any other arbitrary number.
* In some programs (a weight control program I worked on comes to mind) it's necessary to get a remainder so that you can get the results of a leftover evenly divisible number.
Also, it's a good way to ensure that some number is in a specified range, and "wraps around" to the beginning if it goes out of that range. For a quick & cheesy example, let's say we want to count time for music:

import time

beats = ['one', 'two', 'three', 'four']
n = 0
while True:
    print beats[n]
    n = (n+1) % 4
    time.sleep(0.5)

By using '% 4', I ensure that n is always in the interval [0...4) (including 0 but not including 4). Modulus is useful for all sorts of periodic behavior.

Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: newbie question
Dennis Lee Bieber wrote: On Thu, 10 Feb 2005 09:36:42 -0800, Jeff Shannon <[EMAIL PROTECTED]> declaimed the following in comp.lang.python: And as Peter Hansen points out, none of the Python versions leave n in the same state that the C loop does, so that's one more way in which an exact translation is not really possible -- and (IMO again) further evidence that trying to do an exact translation would be ill-conceived. Much better to consider the context in which the loop is used and do a looser, idiomatic translation. Yeah, though my background tends to be one which considers loop indices to be loop-local, value indeterminate after exit... Well, even though I've programmed mostly in languages where loop indices retain a determinate value after exit, I almost always *treat* them as loop-local -- it just seems safer that way. But not everyone does so, and especially with C while loops, often the point is to keep adjusting the control variable until it fits the requirements of the next section... Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: goto, cls, wait commands
jean-michel wrote: Hi all, I saw a lot of comments saying GOTO is not usefull, very bad, and we should'nt use it because we don't need it. I think that's true, but only if you *create* programs. But if the goal is to provide some kind of converter to automatically take an old application written with an old language (using GOTO), and generating python code, it would certainly a great help to have this (unclean) feature in native python. But an automatic translator is certain to be imperfect. One can no more translate mechanically between computer languages than one can translate mechanically between human languages -- and we've all seen the fun that can be had by machine-translating from language A -> language B -> language A, right? What do you think the effect of that sort of meaning-drift would be on application code? In other words, any translation from one language to another will require significant human attention, by someone familiar with both languages, to ensure that the original meaning is preserved as close as possible. You're going to have to rewrite chunks of code by hand no matter what you do; it'd be silly to *not* take that opportunity to purge things like GOTO. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: [EVALUATION] - E02 - Support for MinGW Open Source Compiler
Pat wrote: I think the same applies to developers. Not every programmer is willing to go through a lot of pain and effort just to get something simple to work. True... but given I.L.'s insistence on a rather stringent set of requirements (fully open-source toolchain to produce closed-source software on proprietary OS), and his attitude ("Why haven't all of you done this for me already? WHY WHY WHY?"), he comes across as someone who's *insisting* that *someone else* should go to a lot of pain and effort on *his* behalf. Indeed, he's insisting that the Python community should provide volunteer effort because it will (supposedly) assist him in his commercial endeavor. Notably, when you've commented in a reasonable manner about having apparently similar needs, several people have offered suggestions as to how to solve your problems. People have also offered I.L. suggestions, but he derides them as not being exactly what he wants and continues to insist that others should perform volunteer work for his benefit. Now, there's nothing wrong with asking (politely) why certain things are the way they are, or suggesting that it'd be nice if someone changed a few things. But the insistence that he's being horribly wronged because people aren't jumping at the chance to assist him is more than a little bit offensive -- especially when he's turning up his nose at solutions that are close (but not exact) matches to his "requirements". Instead of saying "Hey, someone's done half my work for me -- great!", he's saying "Hey, why haven't you done the rest of my work!" Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Variables.
bruno modulix wrote: administrata wrote: I wrote this, It's a bit lame though (snip code - see other answers in this thread) raw_input("\n\\t\t\t- The End -") Why on earth are you using raw_input() here ? This is a fairly common idiom, on Windows at least. If running a console app from Explorer, the console will close as soon as the app terminates. Using raw_input() at the end of the app means that it won't close until the user hits Enter. HELP plz No one can help you if you don't explain your problem. We are not psychic enough to read your mind !-) Indeed -- it looks like this worked perfectly to me, so the issue is in what's expected. :) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: [EVALUATION] - E02 - Support for MinGW Open Source Compiler
Ilias Lazaridis wrote: Adam DePrince wrote: [...] You're on it. You drive a car? You have to treat it right to get what you want, right? Same here. Ask correctly, and you will get your answers. Your interpretation/definition of "asking correctly" is irrelevant to me. "Interpretation is irrelevant. Logic is irrelevant. You will be assimilated." Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Calling a function from module question.
Sean wrote: So what if I have a whole bunch of functions - say 25 of them. Is there a way to do this without naming each function? Yes [1], but it's basically deprecated and you shouldn't use it. Consider refactoring your code. Refactoring my code? Sorry, I am not sure what you mean here. 'Refactoring' is just a fancy way of saying 'reorganizing'. What it means in this case is to look at the reason that you have 25 functions in this other module whose name you don't want to type. Perhaps reassembling those functions into a class or two will let you have fewer names to import, or perhaps there's no compelling reason for them to be in a different module to begin with. (Or, more likely, you should just not worry about using the module name. It's really better to keep track of where all of your names come from, and fully qualified names do that nicely. What do you see as the harm of using it?) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: Variables.
Bruno Desthuilliers wrote: Jeff Shannon wrote: If running a console app from Explorer, the console will close as soon as the app terminates. Using raw_input() at the end of the app means that it won't close until the user hits Enter. So why don't you just open the console before running the app, then ?-) Well, *I* generally do. ;) But for those whose relatively limited computing experience has come mostly through the Windows GUI, the thought of opening a console isn't necessarily an obvious one. (Modern versions of Windows seem to try to hide the console as much as they can.) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: super not working in __del__ ?
Christopher J. Bottaro wrote: 2 Questions... 1) Why does this never happen in C++? Or does it, its just never happened to me? 2) I can understand random destruction of instantiated objects, but I find it weird that class definitions (sorry, bad terminology) are destroyed at the same time. So __del__ can't safely instantiate any classes if its being called as a result of interpreter shutdown? Boo... Keep in mind that in Python, everything is a (dynamically created) object, including class objects. My recall of C/C++ memory organization is pretty weak, but IIRC it gives completely different behavior to code, stack objects, and heap objects. Code never needs to be cleaned up. In Python, everything (including functions and classes) is effectively a heap object, and thus functions and classes can (and indeed must) be cleaned up. Refcounting means that they won't ever (normally) be cleaned up while they're still in use, but during program shutdown refcounting necessarily ceases to apply. The closest that would happen in C++, I believe, would manifest itself as memory leaks and/or access of already-freed memory. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
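A tiny, contrived Python 2 sketch of the sort of surprise this can cause during shutdown:

    class Noisy(object):
        def __del__(self):
            # Fine during normal operation; during interpreter shutdown the
            # module-level name 'message' may already have been cleared, so
            # this may print the string, or None, or an "Exception ... ignored"
            # warning, depending on the order in which things are torn down.
            print "goodbye:", message

    message = "module global"
    obj = Noisy()     # still referenced at shutdown, so __del__ runs only then

The exact behaviour isn't guaranteed -- which is precisely the point.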
Re: [newbie]How to install python under DOS and is there any Wxpython can be installed under dos?
john san wrote: Just want to show "windows" under dos without MsWindows. Also find some difficulty to simply install WxPython under directory(DOS) and then run, which is very good thing if it is just like Java. I don't think you'll have any luck finding wxPython for DOS. A bit of a looksee around the wxWidgets website (wxPython is a wrapper for wxWidgets) mentions being available for Win3.1 and up, as well as various levels of *nix installs (wxGTK, wxMotif, wxX11), but no mention of DOS. I suppose that a very ambitious person could perhaps get the wxUniversal port to run on DOS, but I presume that this would be far from trivial. On the other hand, you probably could find and/or create some sort of basic text-only windowing library. It won't be wxPython, nor anything even close to that level of sophistication, but that's what happens when you insist on using an OS that's been obsolete for a decade or more. ;) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
Re: renaming 'references' to functions can give recursive problems
peter wrote: Hello, nice solution: but it puzzles me :) can anyone tell me why ---correct solution def fA(input): return input def newFA(input, f= fA): return f(input) This saves a reference to the original function in the new function's default argument. -infinite loop- def fA(input): return input def newFA(input): return fA(input) This does not save any reference to the original function; it simply does a run-time lookup of the name, and uses whatever object is currently bound to that name. Since you later rebind the name to this new function, it's simply calling itself. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
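Spelling the two versions out side by side in a small runnable sketch may make the difference clearer:

    def fA(value):
        return value

    def newFA(value, f=fA):      # default evaluated *now*, at def time:
        return f(value)          # f is bound to the original fA object

    fA = newFA                   # rebinding the name doesn't touch that default
    print newFA(42)              # -> 42, no recursion

    def fB(value):
        return value

    def newFB(value):
        return fB(value)         # name 'fB' is looked up at *call* time

    fB = newFB                   # now 'fB' refers to newFB itself...
    # fB(42) would now recurse until it hits the recursion limit (RuntimeError)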
Re: Why doesn't join() call str() on its arguments?
Leo Breebaart wrote: What I can't find an explanation for is why str.join() doesn't automatically call str() on its arguments [...] [...] Presumably there is some counter-argument involved, some reason why people preferred the existing semantics after all. But for the life of me I can't think what that counter-argument might be... One possibility I can think of would be Unicode. I don't think that implicitly calling str() on Unicode strings is desirable. (But then again, I know embarrassingly little about unicode, so this may or may not be a valid concern.) Of course, one could ensure that unicode.join() used unicode() and str.join() used str(), but I can conceive of the possibility of wanting to use a plain-string separator to join a list that might include unicode strings. Whether this is a realistic use-case is, of course, a completely different question... Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
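For what it's worth, the mixed-type case I mentioned mostly works already in Python 2 -- str.join() hands off to unicode when it meets a unicode item -- which is part of why an implicit str() would be a step backwards. A small Python 2 illustration:

    parts = ['plain', u'unic\xf6de']      # a byte string and a unicode string

    # A plain-str separator already promotes the result to unicode, as long
    # as the byte strings involved are pure ASCII:
    print repr(', '.join(parts))          # -> u'plain, unic\xf6de'

    # An implicit str() on each item, by contrast, would blow up on the
    # non-ASCII character:
    # str(u'unic\xf6de')  ->  Unicode encoding error (ordinal not in range(128))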
Re: Imported or executed?
Stephan Schulz wrote: Is there a (portable, standard) way for the program/module to find out if it is imported or executed stand-alone? def fixbb(*filelist): # ... if __name__ == '__main__': # Executed stand-alone fixbb(*sys.argv[1:]) (Obviously, you'd probably want to do more command-line checking than this...) Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
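Fleshed out into something you could actually run (the body of fixbb() is just a placeholder, of course):

    # fixbb.py -- a minimal sketch of the __main__ idiom
    import sys

    def fixbb(*filelist):
        for name in filelist:
            print "fixing bounding box in", name    # placeholder for real work

    if __name__ == '__main__':
        # True only when run as a script ("python fixbb.py a.eps b.eps");
        # False when another module does "import fixbb".
        fixbb(*sys.argv[1:])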
Re: super not working in __del__ ?
Christopher J. Bottaro wrote: So encapsulating your script in a main function fixes the problem, because all the objects instantiated in main() will be deleted when main ends, but before the interpreter shuts down, thus the objects will have access to all the symbols in the module's __dict__ (or however that works). Not necessarily. What about default function parameters? Those can't be cleaned up until the function is deleted. What about class attributes, which can't be cleaned until the class is deleted? What about objects which have had references passed to other modules? What about sets of objects with cyclical references? There are too many corner cases and special cases for this to be reliable. Cutting down on module-global variables will help (and is a good idea anyhow), but it's not perfect. I'm just guessing, here, but I'd imagine that it might be possible to modify the interpreter so that, at shutdown, it carefully builds dependency trees and then walks through them in reverse, deleting objects in the "proper" order, and trying to handle cycles as sanely as possible. You could probably get the __del__ of almost every object to be fairly reliable. But that's a lot of work to go to when 99% of the time all you need to do is flush the entire block of memory. (By 'a lot of work', I mean both in execution time, causing the shutdown of Python to be notably slower, and in developer time writing such a fancy shutdown scheme.) It's much more practical to just say that __del__() is not reliable (especially given some of the other issues with it, such as cyclic references, etc.) and suggest that people write their code in such a way that it isn't required. Python's __del__() is not a C++/Java destructor. Trying to make it into one is unlikely to give an overall benefit. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
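As a practical alternative to relying on __del__() for end-of-run cleanup, the standard library's atexit module lets you register an explicit shutdown hook, which runs before the interpreter starts tearing modules apart. A minimal sketch (save_state() and the 'results.txt' filename are made up for illustration):

    import atexit

    results = []                     # state accumulated while the program runs

    def save_state():
        # Runs at normal interpreter exit, while modules and globals are
        # still intact -- unlike __del__(), whose timing is unpredictable.
        out = open('results.txt', 'w')
        try:
            out.write('\n'.join(results) + '\n')
        finally:
            out.close()

    atexit.register(save_state)

    results.append('one thing we did')
    results.append('another thing we did')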
Re: Font size
Adam wrote: Here's what I'm trying to do. We are running a numbers game at our retirement village and using a roulette wheel to generate the numbers. but this wheel is only about 12 in diameter and is little more than a toy. So we came up with the idea of using a random number generator to generate numbers from 0 to 36 and display them in large figures on my laptop. This is for the benefit of those people who are hard of hearing. They like to see what is happening. I was an RPG programmer before retirement but am new to Python. So I used the following code to generate the numbers but I don't know how to display them in large figures (about 3 ins high) or get rid of the idle text. The problem is that console displays don't support variable font sizes. In order to do this, you're going to need to write a simple GUI program, which will be significantly more complex. A simple program to display a random number in large text should be relatively easy to do in Tkinter, which has the benefit of being bundled with your Python distribution. (It's got a few downsides, too, but most of them won't apply for this project.) I haven't actually looked at it, but EasyGui (recently mentioned here; google should help you find it) may meet your needs and be simpler to use. Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
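To give you a feel for how little code the Tkinter version needs, here's an untested sketch (Python 2 spelling of the module name; the font size is in points, so you may need to adjust it to get figures roughly three inches high on your particular screen):

    import random
    import Tkinter as tk        # spelled 'tkinter' in Python 3

    root = tk.Tk()
    root.title("Number draw")

    number = tk.Label(root, text="--", font=("Helvetica", 250))
    number.pack(padx=40, pady=40)

    def spin():
        # randint() includes both endpoints, so this gives 0 through 36
        number.config(text=str(random.randint(0, 36)))

    tk.Button(root, text="Spin", font=("Helvetica", 24), command=spin).pack(pady=20)

    root.mainloop()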
Re: Why doesn't join() call str() on its arguments?
Roy Smith wrote: What I can't find an explanation for is why str.join() doesn't automatically call str() on its arguments, so that e.g. str.join([1,2,4,5]) would yield "1245", and ditto for e.g. user-defined classes that have a __str__() defined. That would be the wrong thing to do when the arguments are unicodes. Why would it be wrong? I ask this with honest naivete, being quite ignorant of unicode issues. As someone else demonstrated earlier... >>> str(u'ü') Traceback (most recent call last): File "<stdin>", line 1, in ? UnicodeError: ASCII encoding error: ordinal not in range(128) >>> Using str() on a unicode object works... IF all of the unicode characters are also in the ASCII charset. But if you're using non-ASCII unicode characters (and there's no point to using Unicode unless you are, or might be), then str() will throw an exception. The Effbot mentioned a join() implementation that would be smart enough to do the right thing in this case, but it's not as simple as just implicitly calling str(). Jeff Shannon Technician/Programmer Credit International -- http://mail.python.org/mailman/listinfo/python-list
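For the curious, a "smarter" join along those lines might look something like this (Python 2; my own rough sketch, not the Effbot's actual implementation):

    def join_any(items, sep=''):
        def coerce(item):
            # Leave byte strings and unicode objects alone; convert everything
            # else with str(). The join itself then promotes the result to
            # unicode if any unicode items are present (Python 2 semantics).
            if isinstance(item, basestring):
                return item
            return str(item)
        return sep.join([coerce(item) for item in items])

    print repr(join_any([1, 2, 'four', u'f\xfcnf'], ', '))
    # -> u'1, 2, four, f\xfcnf'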
Re: Why doesn't join() call str() on its arguments?
news.sydney.pipenetworks.com wrote: Fredrik Lundh wrote: a certain "princess bride" quote would fit here, I think. I'm not really familiar with it, can you enlighten please. (Taking a guess at which quote /F had in mind...) Vizzini: "Inconceivable!" Inigo: "You keep using that word. I do not think it means what you think it means." Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
Re: Pausing a program - poll/sleep/threads?
Simon John wrote: I'm writing a PyQt network client for XMMS, using the InetCtrl plugin, that on connection receives a track length. [...] So, how would I make a Python program automatically call a function after a preset period of time, without the Python process running in the foreground (effectively single-tasking)? I'm not familiar with Qt/PyQt myself, but the GUI toolkits I *am* familiar with all have a concept of a timer. Basically, you create a timer that, when the specified amount of time has elapsed, will either deliver an event/message to your application's event queue or will directly call the callback function you provide. However, I'd suggest that you may not want to wait for the entire length of the current track, especially if some other process or user (on any machine) may have access to the same XMMS application. What happens when, after the song's been playing for a few seconds, someone skips to the next track? Presumably, you'll want your network client to detect that and update appropriately. This implies that you should check back in with the XMMS "server" every few seconds at least. (You can still use a timer to do this; just have it fire periodically every second or so, rather than only after several minutes.) Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
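I can't vouch for the exact Qt spelling from memory, but with a reasonably recent PyQt the periodic-timer idea sketches out roughly like this (poll_xmms() is a made-up stand-in for whatever InetCtrl query you end up using):

    import sys
    from PyQt5 import QtCore, QtWidgets    # PyQt4 is very similar

    def poll_xmms():
        # Placeholder: query the InetCtrl plugin here and update the GUI
        # (track position, current title, and so on).
        print("polling XMMS...")

    app = QtWidgets.QApplication(sys.argv)

    timer = QtCore.QTimer()
    timer.timeout.connect(poll_xmms)   # called each time the interval elapses
    timer.start(1000)                  # fire roughly once per second

    # ... build and show your main window here ...

    sys.exit(app.exec_())

The event loop keeps running between timer ticks, so the GUI stays responsive -- no blocking sleep() calls needed.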
Re: [newbie]How to install python under DOS and is there any Wxpython can be installed under dos?
Leif B. Kristensen wrote: john san skrev: pure DOS, old pc, used for teaching . want show some "windows" under DOS (under Python). curses is a text-based interface that will let you build windowed applications like you could with the crt unit in Turbo Pascal of those golden days. I've no idea if anyone's compiled it for the 16-bits DOS platform, though. Curses is a *nix interface. There are attempts at a work-alike package for Windows, which by all reports are not very successful. Whether any of those would maintain their already-limited functionality under DOS is questionable. There *are* similar-but-not-compatible libraries for DOS... or perhaps I should say *were*, because I have no idea where one might find such a thing now. (Though I presume that Google would be the best starting place.) One would then need to find/create a Python wrapper for that library... Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
Re: Pausing a program - poll/sleep/threads?
Simon John wrote: As far as querying the server every few seconds, it does make sense (you don't miss events) and is the recommended way of doing things with InetCtrl, but I'd prefer to save the bandwidth/server load than have realtime status updates. The amount of bandwidth and server load that will be used by a once-a-second query is probably pretty trivial (unless you're expecting this to run over internet or dialup networks -- and even then, it's probably not going to be worth worrying about). Even on an old 10Mbit ethernet connection, a couple of extra packets every second will not make a notable difference. This (IMO) is premature optimization. :) The status also updates whenever you send a command (like play/pause). But does the server push events to the client? If there's a filesystem error while a track is playing, how does your client know about it? In addition, what happens if XMMS segfaults, or the server machine loses power? I'm really stuck on how to implement this now One of the big questions here is whether your client will have exclusive access to the XMMS server. That is, will it be possible for more than one such client to connect to the same XMMS, and/or for XMMS to have direct interaction on its host machine? If you have exclusive access, then changes in the status of XMMS will only happen when 1) you change it yourself, or 2) there is an error. In this case, you can check status much less often. (However, you'll still want to deal with the error conditions, which probably means checking at a much shorter interval than expected track length.) If, on the other hand, there may be more than one client/user interacting with XMMS, then you also have to deal with the possibility of your server changing status without your client taking direct action. I really think that you *do* want to do fairly frequent status checks with your server. The cost is small, and the gains in responsiveness and robustness are potentially very significant. Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
Re: How to wrap a class's methods?
Steven Bethard wrote: Grant Edwards wrote: I want to subclass an IMAP connection so that most of the methods raise an exception if the returned status isn't 'OK'. This works, but there's got to be a way to do it that doesn't involve so much duplication: class MyImap4_ssl(imaplib.IMAP4_SSL): def login(*args): s,r = imaplib.IMAP4_SSL.login(*args) if s!='OK': raise NotOK((s,r)) return r def list(*args): s,r = imaplib.IMAP4_SSL.list(*args) if s!='OK': raise NotOK((s,r)) return r def search(*args): s,r = imaplib.IMAP4_SSL.search(*args) if s!='OK': raise NotOK((s,r)) return r [and so on for another dozen methods] You could try something like (Untested!): class Wrapper(object): def __init__(self, func): self.func = func def __call__(*args, **kwargs): self, args = args[0], args[1:] s, r = self.func(*args) if s != 'OK': raise NotOK((s, r)) return r for func_name in ['login', 'list', 'search']: func = Wrapper(getattr(imaplib.IMAP4_SSL, func_name)) setattr(imaplib.IMAP4_SSL, func_name, func) You could probably also do this as a factory function, rather than as a class (also untested!): def Wrapper(func): def wrapped(self, *args, **kwargs): s, r = func(self, *args, **kwargs) if s != 'OK': raise NotOK((s,r)) return r return wrapped I believe that this will be semantically almost equivalent, but conceptually slightly simpler. Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
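Putting the factory-function version together with the setattr loop, the whole thing might look like this (untested, naturally; NotOK here stands in for your own exception class):

    import imaplib

    class NotOK(Exception):
        pass

    def check_ok(func):
        # Wrap an IMAP4_SSL method so that any status other than 'OK'
        # raises NotOK instead of being silently returned.
        def wrapped(self, *args, **kwargs):
            s, r = func(self, *args, **kwargs)
            if s != 'OK':
                raise NotOK((s, r))
            return r
        return wrapped

    class MyImap4_ssl(imaplib.IMAP4_SSL):
        pass

    for name in ['login', 'list', 'search']:   # ...and the rest of the dozen
        setattr(MyImap4_ssl, name,
                check_ok(getattr(imaplib.IMAP4_SSL, name)))

One nice property of doing it on a subclass (rather than patching IMAP4_SSL itself) is that other code using the stock imaplib classes is unaffected.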
Re: super not working in __del__ ?
Christopher J. Bottaro wrote: Jeff Shannon wrote: Python's __del__() is not a C++/Java destructor. Learn something new everyday... What is it then? Excuse my ignorance, but what are you supposed to do if your object needs to clean up when it's no longer used (like close open file handles, etc)? Well, those "open file handles" are presumably wrapped by file objects. Those file objects will close the underlying file handle as they are deallocated. This means that the only time you really need to worry about closing a file handle is if you want to reopen that same file immediately -- and in that case, __del__() can't help you because you can't know whether or not another reference to the file object exists somewhere else, so you'll have to explicitly close() the file anyhow. The same goes for sockets. Are you supposed to make a method called Destroy() or something and require users to call it when the object is about to be deleted? That seems to put the burden of ref counting on the user. Python's refcounting/GC scheme is such that you rarely actually need to explicitly destroy an object. Unlike C++, you're not actually freeing memory, and most of the resources that you might be using are already wrapped by an object that will finalize itself properly without being explicitly destroyed. Thanks to GC and the possibility of needing to break cycles, there's not much that you can guarantee about __del__() ... but there's not much that you *need* to do in __del__(). It just seems like kind of a pain when a C++/Java style destructor would nicely do what is desired. Should I just stop digging and chalk it up to a limitation of Python? Well, I guess you could look at it as a trade-off. In C++, you can count on your destructor getting called, but you have to do your own memory management. (Note that you still can't count on heap objects that your destructor might want to use still being there -- it's just that you can *never* count on such things in C++, and are always expected to ensure these things yourself.) Python will take care of your memory for you, and it will safely handle most OS resources for you, so that you don't have to worry about them... and in return, if you have some type of resource that Python doesn't automatically handle, you need to explicitly take care of it yourself. Now, while I haven't yet done anything terribly complicated nor tried to use extensive persistence, I've essentially never felt a need to use __del__() in Python, nor missed the "proper" destructor of C++. In general, cleanup takes care of itself, and in those cases where I have more demanding needs, well, it's not *that* hard to hook into application shutdown and explicitly save my data. So, I don't think it's so much a "limitation" of Python, as it is simply a different way of handling things. Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
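And "taking care of it yourself" rarely amounts to more than a try/finally block around the resource's use (later Python versions add the with statement for the same purpose), rather than anything __del__()-shaped. A trivial sketch:

    data_file = open('results.txt', 'w')
    try:
        data_file.write('important stuff\n')
        # ... more work that might raise ...
    finally:
        data_file.close()     # runs whether or not an exception occurred,
                              # with no reliance on __del__() timing at all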
Re: global var
Nick Coghlan wrote: Michael Hoffman wrote: raver2046 wrote: How to have a global var in python ? "global var" will give you a global variable named "var". Whether this advice is correct or not depends greatly on what the OP means by 'global' :) Module global, it's right, application global it's wrong. Given the nature of the question, I suspect the latter. And even there, one must be careful. "global var" won't really give you a global variable; it will cause the name "var", when used inside that function, to refer to the module-level binding of "var" (which only comes into existence when something is actually assigned to it). No variables are created by the execution of "global var" itself. Jeff Shannon -- http://mail.python.org/mailman/listinfo/python-list
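A short example of what the statement actually does, for the record:

    counter = 0                # a module-level ("global") name

    def increment():
        global counter         # no variable is created here -- this just says
        counter = counter + 1  # that 'counter' below means the module-level one

    def broken_increment():
        counter = counter + 1  # without the global statement, 'counter' is a
                               # new local name, so calling this raises
                               # UnboundLocalError

    increment()
    print counter              # -> 1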