Re: [RFC] Parametric Polymorphism
In article <[EMAIL PROTECTED]>, Catalin Marinas <[EMAIL PROTECTED]> wrote:
...
> Of course, duck-typing is simple to use but the parametric
> polymorphism is useful when the types are unrelated. Let's say you
> want to implement a colour-print function which should support basic
> types like ints and floats as well as lists and dictionaries. In this
> case, and thank to the function decorations support, the code would be
> clearer.

What you're describing (and implementing) is not what I would call
parametric polymorphism, though. See
http://en.wikipedia.org/wiki/Polymorphism_(computer_science)
You're talking about "ad-hoc" polymorphism.

Personally, I can't agree that, in principle, this practice makes code
clearer. In more common, less formal implementations you get functions
like this --

    def dosome(cmd):
        if type(cmd) == StringType:
            cmd = [cmd]
        ...

Of course the first problem with this is that it unnecessarily
constrains the input type: if your API supports a string parameter,
then in Python you should expect any value to work that supports
string operations. This isn't a hypothetical matter; you can see
relatively major Python applications have trouble with Unicode for
exactly this reason.

Secondly, rather than clarify the API, it confuses it. How many
programmers will observe this usage and erroneously assume that
dosome() _just_ takes a string, when the list parameter may in fact be
the more natural usage? This isn't hypothetical either.

Your example is a fine one, and some kind of table to resolve the
function according to the type of the input argument is a good idea.
I'm just saying that more general application of this idea is best
left to languages like C++.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
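For concreteness, here is a minimal sketch of the "table to resolve the
function according to type" idea endorsed above, applied to the
colour-print example. All function names and the formatting strings are
invented for illustration, not taken from any library:

```python
# Hypothetical per-type handlers; the dispatch table maps exact types
# to functions, which is the ad-hoc polymorphism under discussion.
def print_int(x):
    return "int: %d" % x

def print_float(x):
    return "float: %g" % x

def print_list(xs):
    # recurse through the same dispatcher for each element
    return "[" + ", ".join(colour_print(x) for x in xs) + "]"

DISPATCH = {int: print_int, float: print_float, list: print_list}

def colour_print(value):
    try:
        handler = DISPATCH[type(value)]
    except KeyError:
        raise TypeError("no handler for %r" % type(value))
    return handler(value)

print(colour_print([1, 2.5]))   # → [int: 1, float: 2.5]
```

Note this dispatches on the exact type, so subclasses would need their
own entries -- one of the sharp edges Donn is warning about.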
Re: "no variable or argument declarations are necessary."
In article <[EMAIL PROTECTED]>, Steve Holden <[EMAIL PROTECTED]> wrote:
> Paul Rubin wrote:
> > Antoon Pardon <[EMAIL PROTECTED]> writes:
> >
> > > > Or you just code without declaring, intending to go
> > > > back and do it later, and invariably forget.
> > >
> > > What's the problem, the compilor will allert you
> > > to your forgetfullness and you can then correct
> > > them all at once.
> >
> > Thiat in fact happens to me all the time and is an annoying aspect of
> > Python. If I forget to declare several variables in C, the compiler
> > gives me several warning messages and I fix them in one edit. If I
> > forget to initialize several variables in Python, I need a separate
> > test-edit cycle to hit the runtime error for each one.
>
> Well I hope you aren't suggesting that declaring variables makes it
> impossible to forget to initalise them. So I don;t really see the
> relevance of this remark, since you simply add an extra run to fix up
> the "forgot to declare" problem. After that you get precisely one
> runtime error per "forgot to initialize".

It's hard to say what anyone's suggesting, unless some recent
utterance from GvR has hinted at a possible declaration syntax in
future Pythons. Short of that, it's ... a universe of possibilities,
none of them likely enough to be very interesting.

In the functional language approach I'm familiar with, you introduce a
variable into a scope with a bind --

    let a = expr in
        ... do something with a

and initialization is part of the package. Type is usually inferred.
The kicker though is that the variable is never reassigned. In the
ideal case it's essentially an alias for the initializing expression.
That's one possibility we can probably not find in Python's universe.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: "no variable or argument declarations are necessary."
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Bengt Richter) wrote:
> On Tue, 04 Oct 2005 10:18:24 -0700, Donn Cave <[EMAIL PROTECTED]> wrote:
> [...]
> > In the functional language approach I'm familiar with, you
> > introduce a variable into a scope with a bind -
> >
> >     let a = expr in
> >         ... do something with a
> >
> > and initialization is part of the package. Type is usually
> > inferred. The kicker though is that the variable is never
> > reassigned. In the ideal case it's essentially an alias for
> > the initializing expression. That's one possibility we can
> > probably not find in Python's universe.
>
> how would you compare that with
>     lambda a=expr: ... do something (limited to expression) with a
> ?

OK, the limitations of a Python lambda body do have this effect. But
compare programming in a language like that, to programming with
Python lambdas? Maybe it would be like living in a Zen Monastery,
vs. living in your car.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: new forum -- homework help/chit chat/easy communication
Quoth Lasse_Vgsther_Karlsen <[EMAIL PROTECTED]>:
| Brandon K wrote:
|> Hrm...i find it demeaning to relegate Python to a scripting language
|> while Visual Basic is in the "software development" section. Python so
|> outdoes VB in every way shape and form.
|
| In that respect I would very much like to see a definition of "scripting
| language" as well :)
|
| In other words, what is the difference between a "scripting language"
| and a "programming language".

I could come up with a definition, but I can't come up with _the_
definition. The word is used in such broad and vague ways that to use
it is practically a sign of sloppy thinking.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: new forum -- homework help/chit chat/easy communication
Quoth Mike Meyer <[EMAIL PROTECTED]>:
| Lasse Vågsæther Karlsen <[EMAIL PROTECTED]> writes:
| ...
|> I think that at one time, scripting languages was something that lived
|> within other programs, like Office, and couldn't be used by themselves
|> without running it inside that program, and as thus was a way to add
|> minor functions and things to that program.
|
| That's certainly one kind of scripting language. But I don't think
| it's ever been the only kind - shells have always been stand-alone
| applications. What they have in common with your definition is that
| both types of languages are used to capture user actions for later
| repetition. And that's what makes a scripting language: it's a
| language in which one writes "scripts" that describe actions -
| normally taken by a user - so that a series of them can be performed
| automatically.

I don't think the shell is any exception - I think it's reasonable to
see it as a control+UI language embedded in the UNIX operating system.
It wouldn't really be a very useful stand-alone application on a
computer platform without the same basic properties.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python's Performance
Quoth "Fredrik Lundh" <[EMAIL PROTECTED]>:
| Alex Stapleton wrote
|
| > Except it is interpreted.
|
| except that it isn't. Python source code is compiled to byte code, which
| is then executed by a virtual machine. if the byte code for a module is up
| to date, the Python runtime doesn't even look at the source code.

Fair to say that byte code is interpreted? Seems to require an
application we commonly call an interpreter.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
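The compile-then-interpret pipeline Fredrik describes is easy to see
from Python itself, via the standard dis module. A small sketch (the
exact instruction names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# The function body has already been compiled to byte code; dis shows
# the instructions the virtual machine will interpret when we call it.
dis.dis(add)
```

The output is a listing of VM instructions (a load for each argument, a
binary-add of some form, a return), which is exactly the thing the
"application we commonly call an interpreter" executes.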
Re: Python's Performance
In article <[EMAIL PROTECTED]>, "Fredrik Lundh" <[EMAIL PROTECTED]> wrote:
> Donn Cave wrote:
> > | > Except it is interpreted.
> > |
> > | except that it isn't. Python source code is compiled to byte code, which
> > | is then executed by a virtual machine. if the byte code for a module is up
> > | to date, the Python runtime doesn't even look at the source code.
> >
> > Fair to say that byte code is interpreted? Seems to require an
> > application we commonly call an interpreter.
>
> well, the bytecode isn't Python (the transformation isn't exactly
> straightforward, as we've seen in other posts I've made today).
>
> and even if the bytecode engine used in CPython can be viewed as an
> interpreter, does that make Python an interpreted language? what about
> Python code that uses a runtime that converts it to machine code?
> (e.g. Psyco, IronPython). is it still an interpreter if it generates
> machine code?

Is what an interpreter? I am not very well acquainted with these
technologies, but it sounds like variations on the implementation of
an interpreter, with no really compelling distinction between them.
When a program is deployed as instructions in some form other than a
native code executable, and therefore those instructions need to be
evaluated by something other than the hardware, then that would be
some kind of interpretation.

I agree that there are many shades of grey here, but there's also a
real black that's sharply distinct and easy to find -- real native
code binaries are not interpreted.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: subprocess and non-blocking IO (again)
In article <[EMAIL PROTECTED]>, Marc Carter <[EMAIL PROTECTED]> wrote:
> I am trying to rewrite a PERL automation which started a "monitoring"
> application on many machines, via RSH, and then multiplexed their
> collective outputs to stdout.
>
> In production there are lots of these subprocesses but here is a
> simplified example what I have so far (python n00b alert!)
> - SNIP -
> import subprocess,select,sys
>
> speakers=[]
> lProc=[]
>
> for machine in ['box1','box2','box3']:
>     p = subprocess.Popen( ('echo '+machine+';sleep 2;echo goodbye;'
>                            'sleep 2;echo cruel;sleep 2;echo world'),
>                           stdout=subprocess.PIPE,
>                           stderr=subprocess.STDOUT, stdin=None,
>                           universal_newlines=True )
>     lProc.append( p )
>     speakers.append( p.stdout )
>
> while speakers:
>     speaking = select.select( speakers, [], [], 1000 )[0]
>     for speaker in speaking:
>         speech = speaker.readlines()
>         if speech:
>             for sentence in speech:
>                 print sentence.rstrip('\n')
>             sys.stdout.flush() # sanity check
>         else: # EOF
>             speakers.remove( speaker )
> - SNIP -
> The problem with the above is that the subprocess buffers all its output
> when used like this and, hence, this automation is not informing me of
> much :)

You're using C stdio, through the Python fileobject. This is sort of
subprocess' fault, for returning a fileobject in the first place, but
in any case you can expect your input to be buffered. You're asking
for it, because that's what C stdio does. When you call readlines(),
you're further guaranteeing that you won't go on to the next statement
until the fork dies and its pipe closes, because that's what
readlines() does -- returns _all_ lines of output.

If you want to use select(), don't use the fileobject functions. Use
os.read() to read data from the pipe's file descriptor
(p.stdout.fileno()). This is how you avoid the buffering.

> This topic seems to have come up more than once. I am hoping that
> things have moved on from posts like this:
> http://groups.google.com/group/comp.lang.python/browse_thread/thread/5472ce95eb430002/434fa9b471009ab2?q=blocking&rnum=4#434fa9b471009ab2
> as I don't really want to have to write all that ugly
> fork/dup/fcntl/exec code to achieve this when high-level libraries like
> "subprocess" really should have corresponding methods.

subprocess doesn't have pty functionality. It's hard to say for sure
who said what in that page, after the incredible mess Google has made
of their USENET archives, but I believe that's why you see dup2 there -
the author is using a pty library, evidently pexpect. As far as I
know, things have not moved on in this respect; not sure what kind of
movement you expected to see in the intervening month. I don't think
you need ptys, though, so I wouldn't worry about it.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
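A minimal sketch of the os.read()-plus-select() approach suggested
above, assuming a POSIX system with sh/echo/sleep available (the echo
commands stand in for the real monitoring output):

```python
import os
import select
import subprocess

# Two stand-in children that emit output over time.
procs = [
    subprocess.Popen(["sh", "-c", "echo start %d; sleep 1; echo done %d" % (n, n)],
                     stdout=subprocess.PIPE)
    for n in range(2)
]
fds = {p.stdout.fileno(): p for p in procs}
output = []

while fds:
    # Wait until at least one pipe has data (or EOF) ready.
    ready, _, _ = select.select(list(fds), [], [])
    for fd in ready:
        chunk = os.read(fd, 4096)   # returns as soon as any data is available
        if chunk:
            output.append(chunk)
        else:                       # b'' means EOF: that child closed its pipe
            fds.pop(fd).stdout.close()

for p in procs:
    p.wait()

text = b"".join(output).decode()
print(text)
```

Unlike readlines(), os.read() hands back whatever bytes are currently
in the pipe, so the loop genuinely multiplexes the children as their
output arrives.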
Re: Python's Performance
In article <[EMAIL PROTECTED]>, Mike Meyer <[EMAIL PROTECTED]> wrote:
> Donn Cave <[EMAIL PROTECTED]> writes:
> > I agree that there are many shades of grey here, but there's also a
> > real black that's sharply distinct and easy to find -- real native
> > code binaries are not interpreted.
>
> Except when they are. Many machines are microcoded, which means your
> "real native code binary" is interpreted by a microcode program stored
> in the control store. Most machines don't have a writeable control
> store (WCS), so you generally can't change the interpreter, but that's
> not always true. In the simple case, a WCS lets the vendor fix
> "hardware" bugs by providing a new version of the microcode. In the
> extreme cases, you get OS's in which the control store is part of the
> process state, so different processes can have radically different
> formats for their "native code binaries".
>
> Then there's the Nanodata QM-1, whose microcode was interpreted by
> "nanocode".

Fine -- given a Python Chip computer, Python programs are native code.
It can use microcode, if that helps.

The VAX/11 microcode was just a software extension of the CPU
hardware, implementing some extra instructions, the way I remember it.
I don't recall that it was of any more than academic interest to
anyone using the computer - though it may have been software in a
sense, it was on the hardware side of the wall. On the software side
of the wall, if your program can go over the wall by itself, then it's
native.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: tuple versus list
In article <[EMAIL PROTECTED]>, Bryan <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > In this particular case, it seems that (width,height) looks nicer. But
> > I think otherwise, list constuct is easier to read, even though it is
> > supposed to be slower.
> >
> > With list you can:
> >     [a] + [ x for x in something ]
> >
> > With tuple it looks like this:
> >     (a,) + tuple(x for x in something)
> >
> > I think the list looks cleaner. And since you cannot concat tuple with
> > list, I think unless it looks obvious and natural (as in your case),
> > use list.
>
> i always use the structure analogy. if you view (width, height) as a
> structure, use a tuple. if you view it a sequence, use a list. in this
> example, i view it as a structure, so i would use (width, height) as a
> tuple.

Right, but there's an unfortunate ambiguity in the term "sequence",
since in Python it is defined to include tuples. I gather you meant
more the abstract sense of a data collection whose interesting
properties are of a sequential nature, as opposed to the way we are
typically more interested in positional access to a tuple. Maybe a
more computer-literate reader will have a better word for this, one
that doesn't collide with Python terminology. My semi-formal
operational definition is "a is similar to a[x:y], where x is not 0 or
y is not -1, and `similar' means `could be a legal value in the same
context.'"

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Writing an immutable object in python
In article <[EMAIL PROTECTED]>, "Mapisto" <[EMAIL PROTECTED]> wrote:
> I've noticed that if I initialize list of integers in the next manner:
>
> >>> my_list = [0] * 30
>
> It works just fine, even if I'll try to assign one element:
>
> >>> id( my_list[4] )
> 10900116
> >>> id( my_list[6] )
> 10900116
> >>> my_list[4] = 6
> >>> id( my_list[4] )
> 10900044
> >>> id( my_list[6] )
> 10900116
>
> The change in the poision occurs becouse int() is an immutable object.
>
> if I will do the same with a user-defined object, This reference
> manipulating will not happen. So, every entry in the array will refer
> to the same instance.

Not at all. If you do the same thing,

    class C:
        pass

    c = C()
    a = [c] * 12

... etc., you should observe the same pattern with respect to object
identities. Mutability doesn't really play any role here.

> Is there a way to bypass it (or perhaps to write a self-defined
> immutable object)?

Bypass what? What do you need?

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
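A runnable version of the point being made here: user-defined instances
behave exactly like the ints in the original example, because item
assignment rebinds a slot rather than changing the object it held.

```python
class C:
    pass

c = C()
a = [c] * 3
assert a[0] is a[1] is a[2]   # every slot refers to the same instance

a[1] = C()                    # rebinds one slot; the others are untouched
assert a[1] is not a[0]
assert a[0] is a[2]

nums = [0] * 3                # the same story with "immutable" ints
nums[1] = 6
assert nums == [0, 6, 0]
```

In both cases assignment replaces a reference; mutability of the
elements never enters into it.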
Re: Run process with timeout
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Alex Martelli) wrote:
> Micah Elliott <[EMAIL PROTECTED]> wrote:
[... re problem killing children of shell script ...]
> > Is there any way to enable Python's subprocess module to do (implicit?)
> > group setup to ease killing of all children? If not, is it a reasonable
> > RFE?
>
> Not as far as I know. It might be a reasonable request in suitable
> dialects of Unix-like OSes, though. A setpgrp call (in the callback
> which you can request Popen to perform, after it forks and before it
> execs) might suffice... except that you can't rely on children process
> not to setpgrp's themselves, can you?!

I bet that wouldn't be a problem, though. The subprocess.Popen
constructor takes a preexec_fn parameter that looks like it might be a
suitable place to try this. (Interesting that it's a function
parameter, not a method to be overridden by a subclass.)

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
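A sketch of the preexec_fn idea (POSIX only, and assuming as Alex notes
that the children don't call setpgrp themselves). os.setpgrp runs in
the forked child before exec, so the child and everything it spawns
land in a fresh process group that can be killed as a unit:

```python
import os
import signal
import subprocess
import time

# The shell forks a background child and then execs another sleep,
# simulating a script whose children we also want to kill.
p = subprocess.Popen(["sh", "-c", "sleep 30 & exec sleep 30"],
                     preexec_fn=os.setpgrp)   # child becomes a group leader

time.sleep(0.5)                 # crude: give the shell time to fork
os.killpg(p.pid, signal.SIGTERM)  # group leader's pid == the group id
p.wait()
```

(Later Pythons grew start_new_session=True as a cleaner spelling of the
same idea; the sketch sticks to the preexec_fn mechanism under
discussion.)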
Re: KeyboardInterrupt vs extension written in C
Quoth "Tamas Nepusz" <[EMAIL PROTECTED]>:
| No, that's actually a bit more complicated. The library I'm working on
| is designed for performing calculations on large-scale graphs (~1
| nodes and edges). I want to create a Python interface for that library,
| so what I want to accomplish is that I could just type "from igraph
| import *" in a Python command line and then access all of the
| functionalities of the igraph library. Now it works, except the fact
| that if, for example, I start computing the diameter of a random graph
| of ~10 nodes and ~20 edges, I can't cancel it, because the
| KeyboardInterrupt is not propagated to the Python toplevel (or it isn't
| even generated until the igraph library routine returns). I would like
| to allow the user to cancel the computation with the usual Ctrl-C and
| return to the Python interactive interface.
|
| This whole feature is not vital, but it would mean a big step towards
| user-friendliness.
|
| I have access to the source code of igraph as well (since one of my
| colleagues is developing it), so another possible solution would be to
| inject some calls into the igraph source code which temporarily cancels
| the computation and checks whether there's something waiting in the
| Python interpreter, but I haven't found any function in the API which
| allows me to do this.

Hm, tough question. Even in C, this would be a little awkward. If you
only wanted the program to abort on ^C, then you could just replace
the default sigint handler with SIG_DFL. But then you'd be back to the
shell prompt, not the python prompt.

If you want to catch the signal, but also abort a computation, then as
you say, the computation needs to check periodically. Rather than peek
into Python for this, I'm wondering if your module could set its own
signal handler for SIGINT, which would set a library flag. Then call
PyErr_SetInterrupt(), to emulate the normal signal handler.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Read/Write from/to a process
Quoth "jas" <[EMAIL PROTECTED]>:
| Steve Holden wrote:
|> Look at how you might do it in other languages. Then you'll realise this
|> isn't (just) a Python problem.
|
| Yea your right. However, for example, in Java, one can use the Process
| class, and then read from the stream until its the end (i.e. -1 is
| returned). However, with Python when reading from
| subprocess.Popen.stdout ...I don't know when to stop (except for
| looking for a ">" or something). Is there a standard, like read until
| "-1" or something?

Sure, end of file is '', a string with 0 bytes. That means the other
end of the pipe has closed, usually due to exit of the other process
that was writing to it.

Not much help for you there, nor would the Java equivalent be, I
presume. Even on UNIX, where pipes are a mainstay of ordinary
applications, this one would very likely need a pseudotty device
instead, to prevent C library block buffering, and it would still be
difficult and unreliable. Ironically the best support I've seen came
from a platform that didn't use pipes much at all, VAX/VMS (I put that
in the past tense because for all I know it may have evolved in this
respect.) The pipe-like VMS device was called a "mailbox", and the
interesting feature was that you could be notified when a read had
been queued on the device.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
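A minimal sketch of the read-until-empty-string convention described
above (in Python 3 the sentinel is the empty bytes object b'' rather
than Java's -1):

```python
import subprocess

p = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
chunks = []
while True:
    chunk = p.stdout.read(4096)
    if not chunk:          # b'': the writer exited and the pipe closed
        break
    chunks.append(chunk)
p.wait()
print(b"".join(chunks))    # → b'hello\n'
```

This only tells you the child is finished, of course; it does nothing
for the prompt-detection problem jas actually has, which is Donn's
point.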
Re: popen2
Quoth Pierre Hanser <[EMAIL PROTECTED]>:
| Grant Edwards a écrit :
|> On 2005-10-29, Piet van Oostrum <[EMAIL PROTECTED]> wrote:
|>> "g.franzkowiak" <[EMAIL PROTECTED]> (gf) wrote:
|>>
|>> gf> If starts a process with popen2.popen3('myprogram') and myprogram.exe is
|>> gf> running before, I've a connection to the second process, not to the first.
|>> gf> I can find the process by name before I start a process with popen2...,
|>> gf> but how bcan I connect t this process with a pipe ?
|>>
|>> You have to use a named pipe.
|>
|> That would require that the application know about the named
|> pipe and open it. I don't think there is any way to swap a
|> pipe in for stdin/stdout once a process is running.
|
| in C: freopen

Hello, it seems fairly clear that the stdin/stdout in question belongs
to another process, which cannot be instructed at this point to
execute freopen(). If there's a way to do this, it will be peculiar
to the platform and almost certainly not worth the effort.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Rich __repr__
In article <[EMAIL PROTECTED]>, Ben Finney <[EMAIL PROTECTED]> wrote:
> Erik Max Francis <[EMAIL PROTECTED]> wrote:
> > Ben Finney wrote:
> > > If I want to implement a __repr__ that's reasonably "nice" to the
> > > programmer, what's the Right Way? Are there recipes I should look
> > > at?
> >
> > I tend to use:
> >
> >     def __repr__(self):
> >         if hasattr(self, '__str__'):
> >             return '<%s @ 0x%x (%s)>' % (self.__class__.__name__,
> >                                          id(self), str(self))
> >         else:
> >             return '<%s @ 0x%x>' % (self.__class__.__name__, id(self))
>
> Well that just begs the question: what's a good way (or a Right Way,
> if that exists) to write a __str__ for a complex class?

Well, in my opinion there pretty much isn't a good way. That is, for
any randomly selected complex class, there probably is no worthwhile
string value, hence no good __str__. This dives off into a certain
amount of controversy over what repr and str are ideally supposed to
do, but I think everyone would agree that if there's a "represent
object for the programmer" string value, it's the repr. So the str is
presumably not for the programmer, but rather for the application, and
I'm just saying that for application purposes, not all objects can
usefully be reduced to a string value.

Meanwhile, the code above also raises some questions where str is
already provided. Run it on your subclass-of-str object and give the
object a value of ') hi ('. This is why containers use repr to render
their contents, not str.

> It could be done just by hacking __repr__ with whatever things seem
> appropriate, in some ad-hoc format. Or, as I'm hoping with this
> thread, there may be common practices for outputting object state from
> __repr__ that are concise yet easily standardised and/or recognised.

I guess the best I could suggest is to stick with the format already
used by instances (<__main__.C instance at 0x71eb8>) and augment it
with class-specific information.

    def make_repr(self, special):
        return '<%s instance at 0x%x: %s>' % (
            self.__class__.__name__, id(self), special)

    def __repr__(self):
        return self.make_repr(repr(self.my_favorite_things))

This omits the module qualifier for the class name, but arguably
that's a bit of a nuisance anyway. If there's a best, common practice
way to do it, I wouldn't care to pose as an expert in such things, so
you have to decide for yourself.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
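The two methods above assume a surrounding class with some attribute to
show; a self-contained sketch of the pattern, with an invented class
name and attribute, might look like this:

```python
class Basket:
    def __init__(self, items):
        self.items = items

    def _make_repr(self, special):
        return '<%s instance at 0x%x: %s>' % (
            self.__class__.__name__, id(self), special)

    def __repr__(self):
        # repr() of the contents, not str(), so awkward strings like
        # ') hi (' stay unambiguously quoted in the output
        return self._make_repr(repr(self.items))

b = Basket([') hi ('])
print(repr(b))
```

The output is something like <Basket instance at 0x10a2b3c40: [') hi (']>,
echoing the default instance format while adding the state that matters.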
Re: Class Variable Access and Assignment
In article <[EMAIL PROTECTED]>, Magnus Lycka <[EMAIL PROTECTED]> wrote:
...
> On the other hand:
>
> >>> class C:
> ...     a = [1]
> ...
> >>> b = C()
> >>> b.a += [2]
> >>> b.a
> [1, 2]
> >>> C.a
> [1, 2]
>
> I can understand that Guido was a bit reluctant to introduce
> += etc into Python, and it's important to understand that they
> typically behave differently for immutable and mutable objects.

As far as I know, Guido has never added a feature reluctantly. He can
take full responsibility for this misguided wart.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
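The wart under discussion is worth spelling out: augmented assignment
on an attribute reached through an instance both mutates the shared
class attribute (when it is mutable) and creates a new instance
binding. A runnable sketch:

```python
class C:
    a = [1]

b = C()
b.a += [2]                   # in-place extend of the class's list...
assert C.a == [1, 2]         # ...so the class attribute changed too
assert 'a' in b.__dict__     # ...and b acquired its own binding
assert b.__dict__['a'] is C.a  # bound to the very same list object

class D:
    a = 1

d = D()
d.a += 2                     # immutable int: plain rebinding
assert D.a == 1              # class attribute untouched
assert d.a == 3              # only the instance sees the new value
```

So "typically behave differently for immutable and mutable objects" is
exactly right: the same statement shape gives shared mutation in one
case and private rebinding in the other.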
Re: O_DIRECT on stdin?
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> Here's some text from my open(2) manpage:
>
>     Transfer sizes, and the alignment of user buffer and file offset
>     must all be multiples of the logical block size of the file system.

Does that apply in the example he gave, < /dev/sda1 ? It seems to me
this would not go through any filesystem anyway. That might account
for the "invalid argument" error, but at any rate it would be
irrelevant. Plus it doesn't seem to score very high on portability,
according to the Linux man page I'm looking at -- apparently not a
POSIX or any such standard, just borrowed from Irix in recent Linux
versions, and FreeBSD with slightly different behavior. I don't see
any trace of it in NetBSD or MacOS X.

> It's unlikely that in practice you can get Python's sys.stdin.read() or
> os.read() to reliably use a buffer that fits the alignment restriction.

Though of course os.read() would eliminate one layer of buffering
altogether. Might be worth a try.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Addressing the last element of a list
Quoth Steven D'Aprano <[EMAIL PROTECTED]>:
...
| So when working with ints, strs or other immutable objects, you aren't
| modifying the objects in place, you are rebinding the name to another
| object:
|
| py> spam = "a tasty meat-like food"
| py> alias = spam # both names point to the same str object
| py> spam = "spam spam spam spam" # rebinds name to new str object
| py> print spam, alias
| 'spam spam spam spam' 'a tasty meat-like food'

The semantics of assignment are like that, period. Whether the right
hand side is an int, a string, a class instance, a list, whatever,
doesn't matter at all. The question of mutability at this point can be
a red herring for someone who doesn't already understand these
matters.

Mutability is nothing special, it's just a feature built into the
object type -- in general, the ability to store some state. This is of
course useful in situations where we want to propagate state changes,
so it naturally comes up in this context, but the language per se does
not observe any distinction here so far as I know.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
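The same demonstration with a mutable object makes Donn's point: the
assignment behaves identically; only in-place mutation (a different
operation entirely) is visible through the alias.

```python
spam = ["a", "tasty", "meat-like", "food"]
alias = spam                  # both names refer to one list object
spam = ["spam"] * 4           # rebinds the name; alias is untouched
assert alias == ["a", "tasty", "meat-like", "food"]

spam = alias                  # point both names at the same list again
spam.append("eggs")           # mutation, not assignment
assert alias[-1] == "eggs"    # the change shows through either name
```

So assignment rebinds names the same way for every type; mutability
only matters when you mutate.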
Re: Addressing the last element of a list
In article <[EMAIL PROTECTED]>, Mike Meyer <[EMAIL PROTECTED]> wrote:
...
> Most OO languages do the name/variable thing, but some of the popular
> ones aren't consistent about it, giving some types "special" status,
> so that sometimes "a = b" causes b to be copied onto a, and sometimes
> it causes a to become a pointer to b. I find a consistent approach is
> preferable.

Who wouldn't?

> Most OO languages also have the mutable/immutable object thing. The
> set of which objects are immutable changes from language to
> language. It's really only relevant in this case because the solution
> to "I want to change an alias" issue involves using a mutable object.

Yes, and furthermore it's only vaguely relevant. I mean, it really
requires a particular kind of mutability, where one object can store a
reference to another. That's easy to find in core object types, and of
course it is a kind of mutability, but it isn't the definition of
mutable. So we drag out this terminology, which neither clearly nor
accurately describes the functionality we have in mind, and then we
make some vague or even wrong statement about its relationship to the
issue. It has been going on for years, usually I believe from people
who understand quite well how it really works.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: running functions
Quoth Grant Edwards <[EMAIL PROTECTED]>:
| On 2005-11-18, Scott David Daniels <[EMAIL PROTECTED]> wrote:
|> Gorlon the Impossible wrote:
|>
|>> I have to agree with you there. Threading is working out great for me
|>> so far. The multiprocess thing has just baffled me, but then again I'm
|>> learning. Any tips or suggestions offered are appreciated...
|>
|> The reason multiprocess is easier is that you have enforced
|> separation. Multiple processes / threads / whatever that share
|> reads and writes into shared memory are rife with
|> irreproducible bugs and untestable code.
|
| There can be problems, but you make it sound way worse than it
| really is. I've been doing threaded SW for a lot of years
| (yikes! almost 25), and it's just not that hard to deal with
| shared objects/variables -- especially in Python with its GIL.
|
| I think it's easier and more intuitive than forking.
|
| I've written a lot of (admittedly not huge) Python programs
| using threading (some with 30-40 thread), and I don't remember
| ever tripping over anything significant. I think dealing with
| shared objects is easier than figuring out how to do
| inter-process communications using sockets or Posix shared
| memory or whatnot. It's not difficult if you don't have to do
| any communication between processes, but in that case, shared
| objects aren't a problem either.

To understand the point behind this, it's important to understand what
easy and difficult are about here. Are we talking about easy to get
something done? Easy to get something absolutely reliable? Easy to
screw up? The point is not that it's hard to import the threading
module and fire off a thread to do something. On the contrary, if
anything maybe it's too easy. The thing that's supposed to be
difficult is predictable, reliable execution, in principle because of
a compounding effect on the number of possible states. It should be
scary.

Whether it should be prohibitively scary is the kind of abstract
debate that will never really be resolved here. I got into BeOS when
it came out a few years back, and learned to program in Python to its
native UI, where each window has its own thread. It works fine for me,
and I personally support the decision to do it that way, but later
there was a school of thought, including some ex-Be engineers among
the proponents, that held it to be a big mistake. Apparently this was
after all a common pitfall for application developers, who would
release code that could deadlock or go wrong in various ways due to
unanticipated thread interactions.

All of these application developers were working in C++, and I'm sure
that made them a little more vulnerable. Thank heavens Python isn't
capable of real concurrent execution, for one thing, and also it's
surely easier to put together generic functions for queueing and that
sort of thing, so I could afford to be more disciplined about sharing
data between threads. But in the end there were a lot more of them,
their applications were bigger or more adventurous and more widely
used, so there were a lot of opportunities for things to happen to
them that have never happened to me. As I said, I thought their design
was good, but maybe they just didn't get the word out like they should
have - that threads are scary.

Donn Cave, [EMAIL PROTECTED]
(posting this from a Python/BeOS API newsreader)
--
http://mail.python.org/mailman/listinfo/python-list
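The "generic functions for queueing" discipline mentioned above is easy
to sketch with the standard queue module: the worker never touches
shared state except through the queue and a sentinel.

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        if item is None:          # sentinel: time to shut down
            break
        results.append(item * 2)  # stand-in for real work

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    q.put(n)
q.put(None)                       # tell the worker to exit
t.join()
print(sorted(results))            # → [0, 2, 4, 6, 8]
```

Funneling all communication through the queue keeps the number of
possible interleavings manageable, which is exactly the state-explosion
worry being discussed.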
Re: Adding through recursion
In article <[EMAIL PROTECTED]>, Ben Finney <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > def add(x, y):
> >     if x == 0:
> >         print y
> >         return y
> >     else:
> >         x -= 1
> >         y += 1
> >         add(x, y)
...
> def recursive_add(x, y):
>     if x == 0:
>         print y
>         result = y
>     else:
>         x -= 1
>         y += 1
>         result = recursive_add(x, y)
>     return result
>
> I find this much less error-prone than hiding return statements in
> branches throughout the function; if the only return statement is at
> the very end of the function, it becomes much easier to read.

Well, it's sure clearer where it returns. On the other hand, now you
have to analyze the block structure to know that the assignment on the
third line is still going to be in effect when you get to the return.
That's easy in this case, of course, but make the structure more
complex and add a loop or two, and it can be hard. Whereas if you see
a return statement, you know for sure.

State variables are analogous to goto in a way, with a similar sort of
spaghetti potential. It may or may not help to have all the strands
come out at the same spot, if the route to that spot could be
complicated.

Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
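For reference, the original bug was simply the missing return on the
recursive call; the minimal fix, keeping the early-return style Donn
defends, is:

```python
def add(x, y):
    # Recursively move x down to 0 while moving y up by the same amount.
    if x == 0:
        return y
    return add(x - 1, y + 1)   # the "return" here was what the OP forgot

assert add(3, 4) == 7
assert add(0, 9) == 9
```

Without that return, the recursion computes the right value and then
throws it away, so the top-level call yields None.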
Re: ownership problem?
In article <[EMAIL PROTECTED]>, Jeffrey Schwab <[EMAIL PROTECTED]> wrote: ... > Yes it is. Memory is only one type of resource. There are still files > and sockets to close, pipes to flush, log messages to be printed, GDI > contexts to free, locks to release, etc. In C++, these things are > generally done by destructors, which are called automatically and > deterministically. I am not a Python Guru, but in Perl, Java, and other > languages that have built-in garbage collectors, these tasks have to be > done explicitly. I find that this forces a procedural approach, even in > an otherwise object-oriented program. > > If you want something like automatic garbage collection in C++, I > recommend the use of Factories with destructors that release the > Factories' products. The zeitgeist in c.l.c++.moderated seems to prefer > the use of smart (reference-counted) pointers, which also rely on > destructors to release resources automatically. Plenty of free, > open-source implementations are available. You may be gratified to learn that Python's main storage model is reference counted objects, and when an object falls out of all referenced scopes its finalizers run immediately. This is however true only of the C implementation. The Java implementation naturally has Java's limitations in this matter, so documentation generally avoids the issue. The C implementation has been around for over a decade, wonder if it had any influence on your C++ zeitgeist? Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
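A small sketch of that point about reference counting: in the C implementation (CPython), the finalizer runs the instant the last reference disappears -- an immediacy that tracing garbage collectors, including Java's, don't promise.

```python
log = []

class Resource:
    """Toy resource whose cleanup lives in a finalizer."""
    def __del__(self):
        log.append("released")

r = Resource()
del r       # in CPython, the refcount hits zero and __del__ runs right here
print(log)  # in CPython: ['released']
```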
Re: 2.4.2 on AIX 4.3 make fails on threading
Quoth Paul Watson <[EMAIL PROTECTED]>: | When I try to build 2.4.2 on AIX 4.3, it fails on missing thread | objects. I ran ./configure --without-threads --without-gcc. | | Before using --without-threads I had several .pthread* symbols missing. | I do not have to have threading on this build, but it would be helpful | if it is possible. The machine has the IBM C compiler. Can anyone | suggest a configuration or some change that I can make to cause this to | build correctly? Thanks. | | $ xlc 2>&1|head -1 |VisualAge C++ Professional / C for AIX Compiler, Version 5 In earlier compilers, and I think this one too, "cc_r" (instead of "xlc") gives you the thread options and libraries. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: pipe related question
In article <[EMAIL PROTECTED]>, David Reed <[EMAIL PROTECTED]> wrote: > Is there any way to have one program run another arbitrary program > with input from stdin and display the output as if you had run it in > a shell (i.e., you'd see some of the output followed by the input > they typed in and then a newline because they pressed return followed > by subsequent output, etc.). > > I can't use readline with the pipe because I don't know how much > output the arbitrary program has before it calls an input statement. > I've googled and understand that calling read() will deadlock when > the program is waiting for input. > > When I first write all the input to the input pipe and then call read > on the output pipe it works just the same as if I had run the program > as: program < input_file > > What I'd like to see is the input intermixed with the output as if > the user had typed it in. It sounds like there may be two problems here. You may need to go to some extra lengths to get the arbitrary program to adopt a convenient buffering strategy. I'm sure you have come across plenty of discussion of this problem in your review of the old traffic in this group, since it comes up all the time. The usual answer is to use a pseudotty device instead of a regular pipe. I don't know what's currently popular for Python support of this device, but there's builtin support on some platforms, cf. os.openpty and os.forkpty. It may be convenient in your case to turn ECHO on. The other problem is more intractable. If you want to know for sure when the arbitrary program expects input, well, UNIX doesn't support that. (Can you run your application on VMS?) The only thing I can think of is a select() with timeout, with some compromise value that will allow most outputs to complete without stalling longer than is really convenient. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
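A minimal sketch of both suggestions together, assuming a POSIX platform: run the child on a pseudo-tty (so a typical program line-buffers its output and the tty can echo the "typed" input inline), and use select() with a compromise timeout to decide when the program has gone quiet. Here `cat` stands in for the arbitrary program.

```python
import os
import pty
import select

pid, master = pty.fork()
if pid == 0:
    os.execvp("cat", ["cat"])   # child: the arbitrary program, on the slave tty

os.write(master, b"hello\n")    # "type" a line; the tty echoes it back
chunks = []
while True:
    # select() with a timeout: the compromise for "is it waiting for input?"
    ready, _, _ = select.select([master], [], [], 0.5)
    if not ready:
        break                   # quiet for 0.5s -- assume it wants input or is done
    try:
        data = os.read(master, 1024)
    except OSError:             # reading a hung-up pty raises on some platforms
        break
    if not data:
        break
    chunks.append(data)

os.close(master)
os.waitpid(pid, 0)
print(b"".join(chunks).decode())  # the echoed input, then cat's copy of it
```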
Re: exception KeyboardInterrupt and os.system command
In article <[EMAIL PROTECTED]>, "malv" <[EMAIL PROTECTED]> wrote: > That's also kind of what I expected. > However, I quickly tried: > import os > while 1: > y = os.system("sleep 1") > z = (y >> 8) & 0xFF > print z > > I never get anything in return but 0, hitting c-C or not. > I have used the above code to get exit code returns in the past though. > Would there be anything special with sleep? That algorithm will give you the same thing as os.WEXITSTATUS(), on most platforms, though not necessarily all, so it's better to use the function. On platforms where it works, exit status is of course stored in the 2nd byte from the low end, and signal status is stored separately, in the low byte. So naturally, your right shift discards the signal status and you're left with 0. On the other hand, if you use os.spawnv, signal status will be returned as a negative integer, instead of a positive integer exit status. spawnv() is safer than system() if the command is constructed from data, and it also doesn't block SIGINT in the caller like system does, so it would work for the problem posed in the original post. But it might be just as well to watch the process status for any non-zero value, and then call the graceful exit procedure. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
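The portable spelling of that decoding, sketched (POSIX; the os.WIF* family exists for exactly this job):

```python
import os

status = os.system("exit 3")   # a raw wait status, not an exit code
if os.WIFEXITED(status):
    # normal exit: recover the code the right shift was groping for
    print("exit code:", os.WEXITSTATUS(status))   # 3
elif os.WIFSIGNALED(status):
    # killed by a signal: that lives in the low byte instead
    print("killed by signal:", os.WTERMSIG(status))
```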
Re: python speed
In article <[EMAIL PROTECTED]>, Mike Meyer <[EMAIL PROTECTED]> wrote: > "Harald Armin Massa" <[EMAIL PROTECTED]> writes: > >>Faster than assembly? LOL... :) > > why not? Of course, a simple script like "copy 200 bytes from left to > > right" can be handoptimized in assembler and run at optimum speed. > > Maybe there is even a special processor command to do that. > > Chances are, version 1 of the system doesn't have the command. Version > 2 does, but it's no better than the obvious hand-coded loop. Version 3 > finally makes it faster than the hand-coded loop, if you assume you > have the instruction. If you have to test to see if you can use it, > the hand-coded version is equally fast. Version 4 makes it faster even > if you do the test, so you want to use it if you can. Of course, by > then there'll be a *different* command that can do the same thing,j and > is faster in some conditions. > > Dealing with this in assembler is a PITA. If you're generating code on > the fly, you generate the correct version for the CPU you're running > on, and that's that. It'll run at least as fast as hand-coded > assembler on every CPU, and faster on some. Actually I think the post you quote went on to make a similar point. I read yesterday morning in the paper that the Goto Basic Linear Algebra Subroutines, by a Mr. Kazushige Goto, are still the most efficient library of functions for their purpose for use in supercomputing applications. Apparently hand-optimized assembler for specific processors. http://seattlepi.nwsource.com/business/250070_goto29.html (actually from the NY Times, apparently) Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Quoth [EMAIL PROTECTED]: | Christoph Zwerschke wrote: ... |> Sorry, but I still do not get it. Why is it a feature if I cannot count |> or find items in tuples? Why is it bad program style if I do this? So |> far I haven't got any reasonable explanation and I think there is no. | | I have no idea, I can understand their view, not necessarily agree. And | reasonable explanation is not something I usually find on this group, | for issues like this. It's hard to tell from this how well you do understand it, and of course it's hard to believe another explanation is going to make any difference to those who are basically committed to the opposing point of view. But what the hell. Tuples and lists really are intended to serve two fundamentally different purposes. We might guess that just from the fact that both are included in Python; in fact we hear it from Guido van Rossum, and one might add that other languages also make this distinction (more clearly than Python). As I'm sure everyone still reading has already heard, the natural usage of a tuple is as a heterogeneous sequence. I would like to explain this using the concept of an "application type", by which I mean the set of values that would be valid when applied to a particular context. For example, os.spawnv() takes as one of its arguments a list of command arguments, time.mktime() takes a tuple of time values. A homogeneous sequence is one where a and a[x:y] (where a[x:y] is not the whole sequence) have the same application type. A list of command arguments is clearly homogeneous in this sense - any sequence of strings is a valid input, so any slice of this sequence must also be valid. (Valid in the type sense, obviously the value and thus the result must change.) A tuple of time values, though, must have exactly 9 elements, so it's heterogeneous in this sense, even though all the values are integer. One doesn't count elements in this kind of a tuple, because it's presumed to have a natural predefined number of elements. 
One doesn't search for values in this kind of a tuple, because the occurrence of a value has meaning only in conjunction with its location, e.g., t[4] is how many minutes past the hour, but t[5] is how many seconds, etc. I have to confess that this wasn't obvious to me, either, at first, and in fact probably about half of my extant code is burdened with the idea that a tuple is a smart way to economize on the overhead of a list. Somewhere along the line, I guess about 5 years ago? maybe from reading about it here, I saw the light on this, and since then my code has gotten easier to read and more robust. Lists really are better for all the kinds of things that lists are for -- just for example, [1] reads a lot better than (1,) -- and the savings on overhead is not worth the cost to exploit it. My tendency to seize on this foolish optimization is however pretty natural, as is the human tendency to try to make two similar things interchangeable. So we're happy to see that tuple does not have the features it doesn't need, because it helps in a small way to make Python code better. If only by giving us a chance to have this little chat once in a while. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
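The distinction above, sketched concretely with the two examples already in hand -- a time tuple and an argument list (time.gmtime(0) is the epoch, chosen so the field values are predictable):

```python
import time

# Heterogeneous: every element is an integer, but position carries the meaning.
t = time.gmtime(0)       # the epoch, UTC
print(t[4], t[5])        # minutes past the hour, then seconds: 0 0

# Homogeneous: any slice is still a valid argument list (in type, anyway).
args = ["ls", "-l", "/tmp"]
print(args[1:])          # ['-l', '/tmp'] -- still a list of strings
```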
Re: General question about Python design goals
In article <[EMAIL PROTECTED]>, "Fredrik Lundh" <[EMAIL PROTECTED]> wrote: > Alex Martelli wrote: > > > Steve Holden <[EMAIL PROTECTED]> wrote: > >... > > > Presumably because it's necessary to extract the individual values > > > (though os.stat results recently became addressable by attribute name as > > > well as by index, and this is an indication of the originally intended > > > purpose of tuples). > > > > Yep -- "time tuples" have also become pseudo-tuples (each element can be > > accessed by name as well as by index) a while ago, and I believe there's > > one more example besides stats and times (but I can't recall which one). > > > > Perhaps, if the tuple type _in general_ allowed naming the items in a > > smooth way, that might help users see a tuple as "a kind of > > ``struct''... which also happens to be immutable". There are a few such > > "supertuples" (with item-naming) in the cookbook, but I wonder if it > > might not be worth having such functionality in the standard library > > (for this clarification as well as, sometimes, helping the readability > > of some user code). > > iirc, providing a python-level API to the SequenceStruct stuff > has been proposed before, and rejected. > > (fwiw, I'm not sure the time and stat tuples would have been > tuples if the standard library had been designed today; the C- > level stat struct doesn't have a fixed number of members, and > the C-level time API would have been better off as a light- > weight "time" type (similar to sockets, stdio-based files, and > other C-wrapper types)) Right. After devoting a lengthy post to the defense of tuples as a structured type, I have to admit that they're not a very good one - it's hard to think of many structured values that are ideally expressed by a fixed length vector with elements accessed by integer index. Another theme that occasionally comes up in advice from the learned has been "use a class". 
Of course for values that are related to some external structure, you'd want to provide something to make tuple(a) work, serialization etc., and you'd probably end up with something a lot like StructSequence. Meanwhile losing a significant overhead. I wrote a quickie Python API to SequenceStruct and used it to make an (x, y) coord type, to compare with a Coord.x,y class. A list of a million coords used 1/5 space, and took 1/10 the time to create. Hm. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
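For the record, the standard library did later grow exactly this item-naming as collections.namedtuple (an addition well after this post); a sketch of the (x, y) coord comparison in those terms:

```python
from collections import namedtuple

Coord = namedtuple("Coord", ["x", "y"])   # a tuple underneath, names on top

c = Coord(3, 4)
print(c.x, c.y)       # access by name, like the Coord.x,y class
print(c[0], c[1])     # and by integer index, like a plain tuple
print(tuple(c))       # tuple(c) works for free: (3, 4)
```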
Re: General question about Python design goals
In article <[EMAIL PROTECTED]>, Rocco Moretti <[EMAIL PROTECTED]> wrote: > People who argue that "frozen list" is not needed because we already > have the tuple type, while simultaneously arguing that tuples shouldn't > grow list methods because they are conceptually different from lists > will be bludgeoned to death with a paradox. :) Maybe I can dodge the paradox by noting that the set of things we need, is much larger than the set of things we need enough to do what it takes to get them. So I can agree that frozen lists are needed, without necessarily supporting the effort. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
In article <[EMAIL PROTECTED]>, Mike Meyer <[EMAIL PROTECTED]> wrote: ... > So why the $*@& (please excuse my Perl) does "for x in 1, 2, 3" work? > > Seriously. Why doesn't this have to be phrased as "for x in list((1, > 2, 3))", just like you have to write list((1, 2, 3)).count(1), etc.? How could list(t) work, if for x in t didn't? For me, conceptually, if an object can't be accessed sequentially, then it can't be mapped to a sequence. Anyway, it seems to me that in the end this is about that balance between practicality and purity. Maybe it's more like tuples have a primary intended purpose, and some support for other applications. Not white, but not pure black either. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
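The conceptual bundle spelled out -- anything "for" can walk, list() can collect, which is why the two stand or fall together:

```python
t = (1, 2, 3)
collected = []
for x in t:                   # sequential access...
    collected.append(x)
print(collected == list(t))   # True -- list() is the same walk, collected
```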
Re: General question about Python design goals
In article <[EMAIL PROTECTED]>, Paul Rubin <http://[EMAIL PROTECTED]> wrote: > There's a historical issue too: when tuples started first being > used this way in Python, classes had not yet been introduced. When was that, old-timer? According to Misc/HISTORY, Python was first posted to alt.sources at version 0.9.0, February 1991. It doesn't say 0.9.0 had classes, but at 0.9.3 we see refinements like __dict__, and it's hard to imagine that classes themselves snuck in without notice in the interim. If you got a copy of some older version than this, you have some interesting historical perspectives there, but not much of a historical issue, I'd say, without much going on in the way of a user community. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Why use #!/usr/bin/env python rather than #!python?
In article <[EMAIL PROTECTED]>, Adriano Ferreira <[EMAIL PROTECTED]> wrote: > Hey, that's not fair. In your illustration above, can 'python' be > found in the PATH? That is, > > $ python /tmp/hello.py > > works? If it does, probably > > #!/usr/bin/python > #!/usr/bin/env python > #!python > > would also work if > (1) 'python' is at '/usr/bin/python' (but that's inflexible) > (2) 'python' can be found in the environment variable path (if 'env' > is at '/usr/bin/env') > (3) 'python' can be found in the environment variable path (no need > for 'env' utility) Contrary to popular belief, #! is not intended for the shell, but rather for the execve(2) system call of the UNIX operating system. These two characters form the 16 bit "magic number" of interpreter files. Any executable file must start with a 16 bit field that identifies it so the operating system will know how to execute it. In the case of a #! interpreter file, the operating system expects the rest of that line to be the path to the interpreter. PATH is not searched, and is irrelevant. The only way #!python can work is if the python executable is in the current working directory. Just to help make it confusing, when this mechanism fails and execve(2) returns an error, most shells will go on to try to execute the file themselves, regardless of whether there's a #! or not. csh (the shell language that doesn't look anything like C, Bill Joy's attempt at language design before he started over with Java) does that only if the first line is "#"; otherwise it invokes the Bourne shell. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
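A sketch of the mechanism, kept in Python for convenience, assuming a POSIX system (sys.executable supplies an absolute interpreter path, standing in for /usr/bin/python):

```python
import os
import subprocess
import sys
import tempfile

# Write an executable script whose #! line holds an absolute path --
# the only kind execve(2) will resolve, since PATH is never searched.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("#!%s\nprint('hello')\n" % sys.executable)
    path = f.name
os.chmod(path, 0o755)

# The kernel reads the #! line and runs the interpreter for us.
out = subprocess.run([path], capture_output=True, text=True).stdout
print(out)   # hello
os.unlink(path)
```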
Re: General question about Python design goals
Quoth Mike Meyer <[EMAIL PROTECTED]>: | Donn Cave <[EMAIL PROTECTED]> writes: ... |> For me, conceptually, if an object can't be accessed |> sequentially, then it can't be mapped to a sequence. | | So you're saying that for should implicitly invoke list (or maybe | iter) on any object that it's passed that's not a list or iterator? Not really saying anything so concrete about it -- doesn't make any difference to me how it works, just saying that all these things come together. Convert to list, iterate, for. They're conceptually not just related, but bound together. |> Anyway, it seems to me that in the end this is about |> that balance between practicality and purity. Maybe |> it's more like tuples have a primary intended purpose, |> and some support for other applications. Not white, |> but not pure black either. | | If you do that, you've just weakened the case for not having count | etc. as methods of tuples. | | It really is the dichotomy of "tuples aren't meant to be sequences so | they don't have ..." versus being able to access them sequentially | that gets me. That just doesn't seem right. The case is weakened only if we have been pretending it was really strong. I guess this may be why this issue drives certain people crazy. There's a case for "don't have", but it isn't air-tight, so what do we do? Well, that's the one thing I really like about the winter holiday season -- some particularly flavorful ales come out of the local micro-breweries around this time of year, just in time to ease my worries about tuples. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: spawnle & umask
In article <[EMAIL PROTECTED]>, Yves Glodt <[EMAIL PROTECTED]> wrote: > David Wahler wrote: > > Yves Glodt wrote: > >> It does, I did like this: > >> > >> os.umask(0113) > >> newpid = > >> os.spawnl(os.P_NOWAIT,'/usr/local/bin/wine','/usr/local/bin/wine',executable) > >> > >> But I wanted to use spawnle and its env argument, to avoid setting > >> umask manually... > > > > The umask is not part of the environment, so there's no way to set it > > directly through spawnle. > > ok > > > Why don't you want to use os.umask? > > Only because I thought spawnle could set it through env... > But as it can't I will now go with os.umask. On UNIX, the "spawn" functions are just Python code that wraps up the low level fork and execve system calls. There's no reason you can't write your own version if you like, that does what you need. It does make sense to want to modify umask and who knows what other inheritable context in the fork, so you might be thinking of an API with a function that's called at that time, like spawnve(wait, file, args, env, func) The funny thing is, that's such a good idea that the implementation already has a function with that signature. The only difference is that func() also must call the appropriate execve function. So for example, def execumask113(file, args, env): os.umask(0113) return os.execve(file, args, env) ... os._spawnvef(os.P_NOWAIT, '/usr/local/bin/wine', ['wine', exe], os.environ, execumask113) Now the problem is that this function is evidently not part of the published API for os.py, so it would be unseemly to complain if it were to change in later versions. So I guess the right thing to do is write your own spawn function from the ground up. But at least you have some ideas there about how it might work. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
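A sketch of that write-your-own spawn, from the ground up as suggested (POSIX only; spawn_with_umask is our own name, not anything published in os.py):

```python
import os
import tempfile

def spawn_with_umask(mask, path, args):
    """Fork, set the umask in the child, then exec -- the inheritable-
    context hook that spawnle's env argument can't provide."""
    pid = os.fork()
    if pid == 0:
        os.umask(mask)
        try:
            os.execv(path, args)
        finally:
            os._exit(127)        # only reached if exec failed
    return pid

# Demonstrate: have a shell child report the umask it inherited.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
pid = spawn_with_umask(0o113, "/bin/sh", ["sh", "-c", "umask > " + tmp.name])
os.waitpid(pid, 0)
reported = open(tmp.name).read().strip()
print(reported)   # typically 0113
os.unlink(tmp.name)
```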
Re: The Industry choice
Quoth Hans Nowak <[EMAIL PROTECTED]>: | Paul Rubin wrote: | |> You should write unit tests either way, but in Python you're relying |> on the tests to find stuff that the compiler finds for you with Java. | | As I wrote on my weblog a while ago, I suspect that this effect is | largely psychological. You jump through hoops, declaring types all over | the place, checking exceptions, working around the language's | limitations, etc. So when your code compiles, it *feels* safer. Like | you're at least part of the way towards ensuring correctness. All that | work must be good for *something*, right? Never mind that when writing | unit tests for a dynamic language, you don't check for these things at | all. How often do you explicitly check types in Python unit tests? | IMHO, when using a dynamic language, you don't need most of the checks | that Java, C# and their ilk force upon you. I have been fooling around with strongly, statically typed languages for a couple of years, in my spare time - Objective CAML, Haskell, O'Haskell. This is a little different experience than what you two are talking about - I don't think Java, C# and their ilk are quite as rigorous, nor do they use type inference - but as much as it would probably gag an FP enthusiast to say this, the basic idea is the same. I can only believe that if you think the benefit of static typing is psychological, either something is very different between the way you and I write programs, or you're not doing it right. For me, the effect is striking. I pound out a little program, couple hundred lines maybe, and think "hm, guess that's it" and save it to disk. Run the compiler, it says "no, that's not it - look at line 49, where this expression has type string but context requires list string." OK, fix that, iterate. 
Most of this goes about as fast as I can edit, sometimes longer, but it's always about structural flaws in my program, that got there usually because I changed my mind about something in midstream, or maybe I just mistyped something or forgot what I was doing. Then, when the compiler is happy -- the program works. Not always, but so much more often than when I write them in Python. Now you may repeat here that we all must make thorough unit testing a cornerstone of our Python programming, but don't tell me that the advantage of static typing is psychological. It does substantially improve the odds that a program will be correct when it runs, because I have seen it happen. If unit testing does so as well, then obviously there will be some redundancy there, but of course only where you actually have complete coverage from unit testing, which not everyone can claim and I'm sure even fewer really have. And like the man said, you're doing that work to find a lot of things that the compiler could have found for you. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: The Industry choice
Quoth Paul Rubin <http://[EMAIL PROTECTED]>: | [EMAIL PROTECTED] writes: |> Overall I agree with you and would like to have OPTIONAL static type |> declarations in Python, as has often been discussed. But without |> facilities for generic programming, such as templates in C++, static |> type declarations can force one to duplicate a LOT of code, with one |> sorting routine each for integer, floats, strings, etc. | | I don't see that big a problem. The current Python sorting routine | operates on instances of class "object" and calls the __cmp__ method | to do comparisons. Every class of sortable objects either defines a | __cmp__ method or inherits one from some superclass, and sort calls | those methods. Static type declarations would not require writing any | additional sorting routines. Yes, it would be really weird if Python went that way, and the sort of idle speculations we were reading recently from Guido sure sounded like he knows better. But it's not like there aren't some interesting issues farther on downstream there, in the compare function. cmp(), and str() and so forth, play a really big role in Python's dynamically typed polymorphism. It seems to me they are kind of at odds with static type analysis, especially if you want type inference -- kind of a type laundering system, where you can't tell what was supposed to be there by looking at the code. Some alternatives would be needed, I suppose. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: The Industry choice
Quoth Paul Rubin <http://[EMAIL PROTECTED]>: | "Donn Cave" <[EMAIL PROTECTED]> writes: |> Yes, it would be really weird if Python went that way, and the |> sort of idle speculations we were reading recently from Guido |> sure sounded like he knows better. But it's not like there aren't |> some interesting issues farther on downstream there, in the compare |> function. cmp(), and str() and so forth, play a really big role in |> Python's dynamically typed polymorphism. It seems to me they are |> kind of at odds with static type analysis | | I don't understand that. If I see "str x = str(3)", then I know that | x is a string. Sure, but the dynamically typed polymorphism in that function is about its parameters, not its result. If you see str(x), you can't infer the type of x. Of course you don't need to, in Python style programming this is the whole point, and even in say Haskell there will be a similar effect where most everything derives the Show typeclass. But this kind of polymorphism is pervasive enough in Python's primitive functions that it's an issue for static type analysis, it seems to me, especially of the type inference kind. cmp() is more of a real issue than str(), outside of the type inference question. Is (None < 0) a valid expression, for example? Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
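As it turned out, Python's eventual answer to that closing question was no: Python 3.0 dropped the arbitrary cross-type ordering that 2.x's cmp() machinery provided, which is the sort of alternative the post anticipates.

```python
# In Python 2, None < 0 evaluated to True by an arbitrary cross-type
# rule; in Python 3 it is refused outright.
try:
    None < 0
    ordered = True
except TypeError as e:
    ordered = False
    print("TypeError:", e)
```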
Re: Securing a future for anonymous functions in Python
In article <[EMAIL PROTECTED]>, Jeff Shannon <[EMAIL PROTECTED]> wrote: ... > Hm, I should have been more clear that I'm inferring this from things > that others have said about lambdas in other languages; I'm sadly > rather language-deficient (especially as regards *worthwhile* > languages) myself. This particular impression was formed from a > recent-ish thread about lambdas > > http://groups-beta.google.com/group/comp.lang.python/messages/1719ff05118c4a71 > ,7323f2271e54e62f,a77677a3b8ff554d,844e49bea4c53c0e,c126222f109b4a2d,b1c962739 > 0ee2506,0b40192c36da8117,e3b7401c3cc07939,6eaa8c242ab01870,cfeff300631bd9f2?th > read_id=3afee62f7ed7094b&mode=thread > > (line-wrap's gonna mangle that, but it's all one line...) > > Looking back, I see that I've mis-stated what I'd originally > concluded, and that my original conclusion was a bit questionable to > begin with. In the referenced thread, it was the O.P.'s assertion > that lambdas made higher-order and dynamic functions possible. From > this, I inferred (possibly incorrectly) a different relationship > between functions and lambdas in other (static) languages than exists > in Python. One could easily be led astray by that post. He may be right in some literal sense about "the true beauty of lambda function", inasmuch as beauty is in the eye of the beholder, but in practical terms, functions are functions. I took this up on comp.lang.functional some time back. I rewrote a well-known piece of Haskell (State monad), moving functions from lambda expressions to named function declarations in a "where" clause and I think indisputably making it easier to understand, and I asserted that this is representative of the general case - lambda is a non-essential feature in Haskell. I don't know if anyone was persuaded, but I didn't see any counter-arguments either. But of course even if I'm right about that, it doesn't mean the feature should be stripped from Haskell; that would be an atrocity. 
It may not be essential, but it's eminently useful and natural. Is it useful and natural in Python? Is it worth breaking code over? Why do we even bother to discuss this here? There aren't good answers to those questions. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
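The "functions are functions" point, in Python terms -- the two spellings build indistinguishable objects apart from the name:

```python
f = lambda x: x * 2        # anonymous spelling

def g(x):                  # named spelling of the same function
    return x * 2

print(f(21), g(21))        # 42 42
print(type(f) is type(g))  # True -- both are plain function objects
```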
Re: python3: 'where' keyword
In article <[EMAIL PROTECTED]>, Steven Bethard <[EMAIL PROTECTED]> wrote: > Andrey Tatarinov wrote: > > It would be great to be able to reverse usage/definition parts in > > haskell-way with "where" keyword. Since Python 3 would miss lambda, that > > would be extremely useful for creating readable sources. > > > > Usage could be something like: > > > > >>> res = [ f(i) for i in objects ] where: > > >>> def f(x): > > >>> #do something > > > > or > > > > >>> print words[3], words[5] where: > > >>> words = input.split() > > > > - defining variables in "where" block would restrict their visibility to > > one expression > > How often is this really necessary? Could you describe some benefits of > this? I think the only time I've ever run into scoping problems is with > lambda, e.g. > > [lambda x: f(x) for x, f in lst] > > instead of > > [lambda x, f=f: f(x) for x, f in lst] > > Are there other situations where you run into these kinds of problems? Note that he says "would be extremely useful for creating readable sources", so the "these kinds of problems" he would have been thinking of would be where source was not as readable as it could be. You seem to be concerned about something else. I don't by any means agree that this notation is worth adopting, and in general I think this kind of readability issue is more or less a lost cause for a language with Python's scoping rules, but the motive makes sense to me. One way to look at it might be, if I observe that "words" is assigned to in a where clause, then I know it will not be used elsewhere in the surrounding scope so I can forget about it right away. If the name does occur elsewhere, it evidently refers to something else. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
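The scoping problem the quoted snippets allude to, made concrete -- the classic late-binding pitfall and the default-argument workaround:

```python
# Without the default, every lambda sees the *final* value of i.
fns = [lambda: i for i in range(3)]
late = [f() for f in fns]
print(late)     # [2, 2, 2]

# The i=i default captures each value at definition time instead.
fns = [lambda i=i: i for i in range(3)]
bound = [f() for f in fns]
print(bound)    # [0, 1, 2]
```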
Re: "A Fundamental Turn Toward Concurrency in Software"
Quoth Skip Montanaro <[EMAIL PROTECTED]>: | | Jp> How often do you run 4 processes that are all bottlenecked on CPU? | | In scientific computing I suspect this happens rather frequently. I think he was trying to say more or less the same thing - responding to "(IBM mainframes) ... All those systems ran multiple programs ... My current system has 42 processes running ...", his point was that however many processes on your desktop, on the rare occasion that your CPU is pegged, it will be 1 process. The process structure of a system workload doesn't make it naturally take advantage of SMP. So "there will still need to be language innovations" etc. -- to accommodate scientific computing or whatever. Your 4 processes are most likely not a natural architecture for the task at hand, but rather a complication introduced specifically to exploit SMP. Personally I wouldn't care to predict anything here. For all I know, someday we may decide that we need cooler and more efficient computers more than we need faster ones. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: python3: accessing the result of 'if'
Quoth "Carl Banks" <[EMAIL PROTECTED]>:
...
| As a compromise, how about:
|
| .  if m > 20 where m=something():
| .      do_something_with(m)
|
| In this case, the m=something() is NOT an assignment statement, but
| merely a syntax resembling it. The "where m=something()" is part of
| the if-statement, not the if-expression. It causes m to be visible in
| the if-expression and the if-block.

If m=something() binds the function return to a name "m" for use in other expressions, it sure IS an assignment. If it isn't an assignment statement, it's only inasmuch as assignment has become something other than a statement. Over whose dead body, I wonder.

In case it's of any interest, here's how "where" looks with "if" in Haskell. It would take longer than you might imagine to explain what that "return" is doing there, but part of it is that every "if" must have an "else", and there is no such thing as "elif". Haskell's layout (indent) structure is more flexible than Python's, so there are other ways this could look.

    if a > 10
        then putStrLn (show a)
        else return ()
    where a = 5 + 6

FYI, I suppose the closest it comes to anything like "assignment as an expression" is pattern matching -

    case (regexp_group "^([^:]*): (.*)" line) of
        Nothing -> f1 line
        Just [a, v] -> f2 a v

    -- This "unwraps" the return value of regexp_group, an imaginary
    -- function of type (String -> String -> Maybe [String]). The
    -- Maybe type has two constructors, Maybe a = Nothing | Just a.

| It (or your suggestion) could work with a while-loop too.
|
| .  while line where line=f.readline():
| .      do_something_with(line)
|
| The main problem here (as some would see it) is that you can't do
| something like this:
|
| .  if m > 20 where (def m(): a(); b()):

The way it made sense to me, "where" introduces a block. The whole point is a private scope block. Actually kind of like the reverse of a function, where instead of binding names to input parameters, you in effect bind names to the scope for a sort of return-by-reference effect.
But never mind, the point is that you get a private block, with one or more names exported to the surrounding scope in the left hand side of the where clause. What you're trying to do here seems to have almost nothing to do with that. If Python 3 is going to get assignment-as-expression, it will be because GvR accepts that as a reasonable idea. You won't bootleg it in by trying to hide it behind this "where" notion, and you're not doing "where" any good in trying to twist it this way either. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
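For comparison, the nearest thing Python already has to a private "where" scope is a throwaway function: bindings inside it cannot leak into the surrounding scope. A rough sketch of the effect (the helper name is invented):

```python
def _where():
    # Everything bound here is invisible outside -- the effect the
    # proposed "where" block is after.
    words = "now is the time for all good men".split()
    return words[3], words[5]

third, fifth = _where()
print(third, fifth)  # time all
```

The names exported to the surrounding scope appear only on the left-hand side of the call, much as they would on the left-hand side of a where clause.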
Re: Securing a future for anonymous functions in Python
In article <[EMAIL PROTECTED]>, Jeff Shannon <[EMAIL PROTECTED]> wrote: ... > From the sounds of it, you may have the opposite experience with > reading map/lambda vs. reading list comps, though, so we could go back > and forth on this all week without convincing the other. :) I'm with him. List incomprehensions do not parse well in my eyes. I am reduced to guessing what they mean by a kind of process of elimination. map is simply a function, so it doesn't pose any extra reading problem, and while lambda is awkward it isn't syntactically all that novel. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
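For readers keeping score, the two notations under discussion really are interchangeable; a side-by-side in current Python (where map returns an iterator, hence the list() call):

```python
nums = [1, 2, 3, 4]

# map is simply a function applied to a function and a sequence...
via_map = list(map(lambda x: x * x, nums))

# ...while the list comprehension spells out the same loop inline.
via_listcomp = [x * x for x in nums]

print(via_map)       # [1, 4, 9, 16]
print(via_listcomp)  # [1, 4, 9, 16]
```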
Re: Securing a future for anonymous functions in Python
In article <[EMAIL PROTECTED]>, Jacek Generowicz <[EMAIL PROTECTED]> wrote: > Donn Cave <[EMAIL PROTECTED]> writes: > > > List incomprehensions do not parse well in my eyes. > > Are you familiar with the Haskell syntax for list comprehensions? > > For example: > > http://www.zvon.org/other/haskell/Outputsyntax/listQcomprehension_reference.h I haven't used it more than once or twice in the modest amount of Haskell code I've written, but I've seen it a few times. > Does their striking similarity to mathematical set notation help at > all ? Not a bit. If it's any more obvious than the Python version, I suppose it's the | -- my parser sees [a|b] on the first pass. But it isn't like I ever made any real effort to get comfortable with Python list comprehensions. I was just relaying my (lack of) intuitive grasp of them, compared to map and lambda. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: spawn syntax + os.P_WAIT mode behavior + spawn stdout redirection
Quoth Derek Basch <[EMAIL PROTECTED]>:
| *First question*
|
| If the syntax of spawnl is:
|
|     spawnl(mode, path, ...)
|
| Why does everyone write it like:
|
|     os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null')
|
| or:
|
|     os.spawnl(os.P_WAIT, "/var/www/db/smm/smm_train", "smm_train",
|               "SMMTrainInput.xml")
|
| How is the first 'cp' a path to a file?

As you may have guessed, 'cp' doesn't have to be a path, because the spawnlp() variant finds that file among a list of directories in PATH.

| why does the desired executable have to be named again as the first parameter?

Because you're supplying the "argv" argument list, which for normal programs (i.e., not written in Python) includes argv[0] as specified by the invoker. This would be more obvious if you consider the spawnv() function, where these arguments are supplied as a list. You can look at the implementation in os.py for more insight into how all this works; in particular, see the execve(2) function that is at the bottom of all this.

I was recently quite cheesed to find that the Haskell "executeFile" function supplies its own argv[0], depriving the caller of the occasionally useful opportunity to set this value. Python system interface functions are generally pretty good about not watering down functionality.

| *Second question*
|
| I have a script test.py which calls another script sleep.py using a spawn.
|
| --
| #test.py
| import os
|
| os.spawnv(os.P_WAIT, "/var/www/db/cgi-bin/sleep.py", ["python", "sleep.py"])
| #pid = os.spawnl(os.P_WAIT, 'sh', 'sh', '-cv', 'sleep 10; echo fark >
| /tmp/test.out')
| --
|
| --
| #sleep.py
| import time
|
| time.sleep(10)
| --
|
| I would expect that the test.py script should take 10sec to return. However it
| returns immediately. Perhaps I am calling the sleep.py script incorrectly?
| Shouldn't it take 10sec to execute since the spawn mode argument is os.P_WAIT?

Might want to verify that it's really executing.
I suspect it isn't, since your parameters are wrong (the file to invoke is python, not sleep.py). If you're writing anything important, you need to do what you can to verify that the commands you're executing are actually successful.

| *Third question*
|
| If I uncomment the second spawn call in test.py I do not get any output to
| /tmp/test.out and it also returns immediately. Can anyone tell me why?

Might be a problem finding 'sh', since in this case you call spawnl(), not spawnlp(). Just a guess. Also you ought to know that the return from os.spawnl(os.P_WAIT, ...) will not be a pid, but rather a status that carries a little (very little) information about the problem.

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
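A corrected form of the second-question script, following the advice above: the path argument must name the file actually executed (the interpreter), with the script itself relegated to the argv list. This sketch uses sys.executable and an inline -c script so it is self-contained:

```python
import os
import sys

# With os.P_WAIT, spawnv blocks until the child exits and returns
# its exit status (not a pid). The second argument is the file to
# execute; the script to run appears only in the argv list.
status = os.spawnv(os.P_WAIT, sys.executable,
                   [sys.executable, "-c", "import time; time.sleep(0.2)"])
print(status)  # 0: the child ran and exited normally
```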
Re: debugging os.spawn*() calls
In article <[EMAIL PROTECTED]>, Martin Franklin <[EMAIL PROTECTED]> wrote:
> Skip Montanaro wrote:
> > I have an os.spawnv call that's failing:
> >
> >     pid = os.spawnv(os.P_NOWAIT, "ssh",
> >                     ["ssh", remote,
> >                      "PATH=%(path)s nice -20 make -C %(pwd)s" % locals()])
> >
> > When I wait for it the status returned is 32512, indicating an exit status
> > of 127. Unfortunately, I see no way to collect stdout or stderr from the
> > spawned process, so I can't tell what's going wrong.
> >
> > The "ssh remotehost PATH=$PATH nice -20 make ..." command works fine from a
> > similar shell script.
...
> While not a 'real' answer - I use pexpect to automate my ssh scripts
> these days as I had a few problems using ssh with the os.* family -
> perhaps you may find pexpect a wee bit easier...

Also, a "-n" can improve reliability. By default, ssh assumes that you meant to copy data as input to the remote command, and it does so if it can, very likely at the expense of some other process that you expected would be able to read that data later. -n informs ssh that you don't intend to provide any data to the remote command.

But that's probably not the present problem. (Not that it's really all that "present" by now!) 127 seems to mean that the "ssh" command couldn't be found or couldn't be executed. I guess if something like that were really getting me down, I might rewrite os._spawnvef with a pipe. On entering the except clause I'd format the exception and write it to the pipe; otherwise, I'd just close the pipe. The parent could then read the pipe, even for a NOWAIT case like this, and possibly contrive to re-raise the fork's exception if one showed up. This would account for the class of errors that occurs between the fork and the exec. The _spawnvef I'm looking at doesn't account for these very well - 127 covers a lot of ground, and there wouldn't be much in the way of error output.

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
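The "32512, indicating an exit status of 127" arithmetic is the classic wait(2) status encoding (exit code in the high byte), which the os module can decode directly:

```python
import os

status = 32512  # raw status as returned from os.spawnv / os.waitpid

# Low 7 bits are zero: the child exited normally rather than dying on a signal.
exited = os.WIFEXITED(status)
# High byte holds the exit code; 127 conventionally means "command not found".
code = os.WEXITSTATUS(status)

print(exited, code)  # True 127
```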
Re: managing multiple subprocesses
Quoth Skip Montanaro <[EMAIL PROTECTED]>:
| >>>>> "Marcos" == Marcos <[EMAIL PROTECTED]> writes:
|
|     Marcos> I have tried all sorts of popens / excevs / os.systems /
|     Marcos> commands etc etc.
|
| I think os.spawn* and os.wait will do what you want. I have trouble with
| os.spawn* myself, so may just fall back to fork/exec + os.wait in the
| parent.

That's probably the ticket. There are endless variations on how you can use these functions (especially if you include pipe() and the dup fcntls), and in a way it may be simpler to write your own variation than work with a packaged one that does approximately what you need. As long as it doesn't need to run on Windows.

By the way, I never ever use the *p versions of these functions, and always specify the full path of the executable. I don't use the C library versions, and I don't use the Python versions. The latter aren't actually C library function wrappers; rather, like I think most shells, they contrive to look through PATH themselves, and at any rate it's difficult to deal with the lookup failure in a useful way in the child fork. No doubt there are situations where a path lookup is essential, but it just hasn't been happening to me.

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
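The fork/exec + os.wait recipe endorsed above, sketched with a full executable path (sys.executable here, keeping the example self-contained and Unix-only):

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # Child: note argv[0] is supplied explicitly, just as with spawnv().
    os.execv(sys.executable, [sys.executable, "-c", "raise SystemExit(3)"])
    os._exit(127)  # reached only if execv itself failed

# Parent: wait for the child and decode its status.
done, status = os.waitpid(pid, 0)
print(os.WEXITSTATUS(status))  # 3
```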
Re: Avoiding deadlocks in concurrent programming
In article <[EMAIL PROTECTED]>, Konstantin Veretennicov <[EMAIL PROTECTED]> wrote: > On 22 Jun 2005 17:50:49 -0700, Paul Rubin > <"http://phr.cx"@nospam.invalid> wrote: > > > Even on a multiprocessor > > system, CPython (because of the GIL) doesn't allow true parallel > > threads, ... . > > Please excuse my ignorance, do you mean that python threads are always > scheduled to run on the same single CPU? Or just that python threads > are often blocked waiting for GIL? Any thread may execute "inside" the interpreter, but not concurrently with another. I don't see the original point, though. If you have a C application with no GIL, the queueing model is just as useful -- more, because a GIL avoids the same kind of concurrency problems in your application that it intends to avoid in the interpreter. Rigorous application of the model can be a little awkward, though, if you're trying to adapt it to a basically procedural application. The original Stackless Python implementation had some interesting options along those lines. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
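The queueing model referred to here can be sketched with the standard library: one thread owns the shared state, and all requests to it go through a Queue, so the callers need no locks of their own. Names are invented for illustration:

```python
import queue
import threading

requests = queue.Queue()
replies = queue.Queue()

def owner():
    # The only thread that touches the (imaginary) shared resource.
    while True:
        item = requests.get()
        if item is None:  # sentinel: shut down
            break
        replies.put(item * item)

t = threading.Thread(target=owner)
t.start()
for n in range(5):
    requests.put(n)
requests.put(None)
t.join()

results = [replies.get() for _ in range(5)]
print(results)  # [0, 1, 4, 9, 16]
```

Because only the owner thread ever touches the resource, there is nothing for two threads to deadlock over.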
Re: trouble subclassing str
In article <[EMAIL PROTECTED]>, Steven D'Aprano <[EMAIL PROTECTED]> wrote:
> On Thu, 23 Jun 2005 12:25:58 -0700, Paul McGuire wrote:
>
> > But if you are subclassing str just so that you can easily print your
> > objects, look at implementing the __str__ instance method on your
> > class. Reserve inheritance for true "is-a" relationships. Often,
> > inheritance is misapplied when the designer really means "has-a" or
> > "is-implemented-using-a", and in these cases, the supposed superclass
> > is better referenced using a member variable, and delegating to it.
>
> Since we've just been talking about buzzwords in another thread, and the
> difficulty self-taught folks have in knowing what they are, I don't
> suppose somebody would like to give a simple, practical example of what
> Paul means?
>
> I'm going to take a punt here and guess. Instead of creating a sub-class
> of str, Paul suggests you simply create a class:
>
> class MyClass:
>     def __init__(self, value):
>         # value is expected to be a string
>         self.value = self.mangle(value)
>     def mangle(self, s):
>         # do work on s to make sure it looks the way you want it to look
>         return "*** " + s + " ***"
>     def __str__(self):
>         return self.value
>
> (only with error checking etc for production code).
>
> Then you use it like this:
>
> py> myprintablestr = MyClass("Lovely Spam!")
> py> print myprintablestr
> *** Lovely Spam! ***
>
> Am I close?

That's how I read it, with "value" as the member variable that you delegate to.

Left unexplained is ``true "is-a" relationships''. Sounds like an implicit contradiction -- you can't implement something that truly is something else. Without that, and maybe a more nuanced replacement for "is-implemented-using-a", I don't see how you could really be sure of the point.

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
Re: trouble subclassing str
In article <[EMAIL PROTECTED]>, "Paul McGuire" <[EMAIL PROTECTED]> wrote:
[ ... lots of interesting discussion removed ... ]
> Most often, I see "is-a" confused with "is-implemented-using-a". A
> developer decides that there is some benefit (reduced storage, perhaps)
> of modeling a zip code using an integer, and feels the need to define
> some class like:
>
> class ZipCode(int):
>     def lookupState(self):
>         ...
>
> But zip codes *aren't* integers, they just happen to be numeric - there
> is no sense in supporting zip code arithmetic, nor in using zip codes
> as slice indices, etc. And there are other warts, such as printing zip
> codes with leading zeroes (like they have in Maine).

I agree, but I'm not sure how easily this kind of reasoning can be applied more generally to objects we write. Take for example an indexed data structure that's generally similar to a dictionary but may compute some values. I think it's common practice in Python to implement this just as I'm sure you would propose, with composition. But is that because it fails your "is-a" test? What is-a dictionary, or is-not-a dictionary? If you ask me, there isn't any obvious principle; it's just a question of how we arrive at a sound implementation -- and that almost always militates against inheritance, because of liabilities you mentioned elsewhere in your post, but in the end it depends on the details of the implementation.

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
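The composition alternative for the zip-code example might look like the following sketch; the prefix table is invented purely for illustration, and storing the code as text settles the leading-zero wart in one place:

```python
class ZipCode:
    # Hypothetical prefix -> state table, for illustration only.
    _PREFIXES = {"04": "ME", "98": "WA"}

    def __init__(self, code):
        # A text representation preserves leading zeroes ("04401", Maine).
        self.code = "%05d" % int(code)

    def lookup_state(self):
        return self._PREFIXES.get(self.code[:2], "??")

    def __str__(self):
        return self.code

z = ZipCode(4401)
print(z, z.lookup_state())  # 04401 ME
```

No arithmetic, no slicing with a zip code as an index - only the operations that make sense for the concept are exposed.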
Re: trouble subclassing str
In article <[EMAIL PROTECTED]>, "Paul McGuire" <[EMAIL PROTECTED]> wrote: ... > This reminds me of some maddening O-O discussions I used to > have at a former place of employment, in which one developer cited > similar behavior for not having Square inherit from Rectangle - calling > Square.setWidth() would have to implicitly call setHeight() and vice > versa, in order to maintain its squarishness, and thereby broke Liskov. > I withdrew from the debate, citing lack of context that would have > helped resolve how things should go. At best, you can *probably* say > that both inherit from Shape, and can be drawn, have an area, a > bounding rectangle, etc., but not either inherits from the other. This Squares and Rectangles issue sounds debatable in a language like C++ or Java, where it's important because of subtype polymorphism. In Python, does it matter? As a user of Square, I'm not supposed to ask about its parentage, I just try to be clear what's expected of it. There's no static typing to notice whether Square is a subclass of Rectangle, and if it gets out that I tried to discover this issubclass() relationship, I'll get a lecture from folks on comp.lang.python who suspect I'm confused about polymorphism in Python. This is a good thing, because as you can see it relieves us of the need to debate abstract principles out of context. It doesn't change the real issues - Square is still a lot like Rectangle, it still has a couple of differences, and the difference could be a problem in some contexts designed for Rectangle - but no one can fix that. If you need Square, you'll implement it, and whether you choose to inherit from Rectangle is left as a matter of implementation convenience. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
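The setWidth/setHeight trouble described above is easy to reproduce; a sketch (method names invented) of why code written against Rectangle is surprised by Square:

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w
    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side):
        Rectangle.__init__(self, side, side)
    def set_width(self, w):
        # Maintain squarishness -- and quietly change the height too.
        self.w = self.h = w

def stretch(shape):
    # Written with Rectangle in mind: assumes the height is untouched.
    shape.set_width(10)
    return shape.area()

print(stretch(Rectangle(2, 3)))  # 30
print(stretch(Square(3)))        # 100, not the 30 a Rectangle would give
```

Whether that surprise matters is exactly the "context" question: nothing breaks until some caller relies on the Rectangle behavior.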
Re: map/filter/reduce/lambda opinions and background unscientific mini-survey
Quoth Tom Anderson <[EMAIL PROTECTED]>: ... | I disagree strongly with Guido's proposals, and i am not an ex-Lisp, | -Scheme or -any-other-functional-language programmer; my only other real | language is Java. I wonder if i'm an outlier. | | So, if you're a pythonista who loves map and lambda, and disagrees with | Guido, what's your background? Functional or not? Dysfunctional, I reckon. I think I disagree with the question more than the answer. First, map and lambda are two different things, and it's reasonable to approve of one and abhor the other. Especially if you have a background in a functional language where lambda works like it should. On the other hand, the list comprehension gimmick that replaces some of the "higher order functions" is borrowed from Haskell, as you probably know, so it isn't exactly alien to functional programming. Prelude.hs defines map: map f xs = [ f x | x <- xs ] Secondly, if there's anything I detest about the Python development model, it is the tendency to focus on gimmicks. For 2.X, elimination of these features would be an atrocity, a gratuitous change that would break programs - but I don't think anyone who counts has seriously proposed to do that. With 3.X, we are talking about a different language. May not ever even get off the ground, but if it does, it's supposed to be distinctly different, and we need to know a lot more about it before we can reasonably worry about trivial details like whether map is going to be there. I personally think real FP is seriously hot stuff, but I think Python is a lousy way to do it, with or without map. I suppose there's a remote possibility that 3.X will change all that. Or more likely, there will by then be a really attractive FP language, maybe out of the "links" initiative by Wadler et al. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Polling, Fifos, and Linux
In article <[EMAIL PROTECTED]>, Andreas Kostyrka <[EMAIL PROTECTED]> wrote:
> On Thu, Jul 07, 2005 at 10:21:19PM -0700, Jacob Page wrote:
> > Jeremy Moles wrote:
> > > This is my first time working with some of the more lower-level python
> > > "stuff." I was wondering if someone could tell me what I'm doing wrong
> > > with my simple test here?
> > >
> > > Basically, what I need is an easy way for application in userspace to
> > > simply echo values "down" to this fifo similar to the way proc files are
> > > used. Is my understanding of fifo's and their capabilities just totally
> > > off base?
> >
> > You shouldn't need to use select.poll(), unless I'm missing something.
> > I was able to get the following to work:
>
> Ok, you miss something ;) The program you proposed does busy waiting
> and without a time.sleep call will consume 100% CPU time :(

I don't doubt that it would, but that's because he (like the original poster) opens the file with O_NONBLOCK. From my point of view that's a self-inflicted injury, but if you start from the assumption that O_NONBLOCK is needed for some reason, then the poll makes sense. In normal blocking mode, select+read is identical to plain read for any kind of file that supports select.

> Actually, with a named fifo the situation gets even nastier:
>
> import os, select, time
>
> fifo = os.open("fifo", os.O_RDONLY)
>
> while True:
>     print "SELECT", select.select([fifo],[],[])
>     string = os.read(fifo, 1)
>     if len(string):
>         print string
>     else:
>         nf = os.open("fifo", os.O_RDONLY)
>         os.close(fifo)
>         fifo = nf
>         # Perhaps add a delay under an else
>
> The problem is that select (and poll) show an End-Of-File condition by
> returning ready to read. But on a FIFO, when the first client terminates,
> the reading end goes into an EOF state till somebody else reopens the fifo
> for writing.
>
> [This bit of wisdom comes from Advanced Programming in the UNIX Environment
> by W.R. Stevens, p. 400: 'If we encounter the end of file on a descriptor,
> that descriptor is considered readable by select.']
>
> closing the old descriptor must be done after opening a new one, or else you
> get a tiny moment where a O_WRONLY client is not able to open the file.
> This way there is always a reading client of the fifo.

OK, but in more detail, what happens in these two scenarios? In your version, what happens when the writer opens the pipe that the reader is about to close? Who reads the data? On the other hand, if you close the pipe first, what happens to the writer who happens to try to open the pipe at that moment?

Luckily, as far as I know, we don't have to worry about the first one, since if data could be lost in this way it would be much more complicated to close a file descriptor without running this risk. But I don't see the second one as much of a problem either. The writer blocks - so? Now, what would really be useful is a way for the writer to detect whether open will block, and potentially time out.

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
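As it happens, the wished-for probe exists on most Unixes: opening a FIFO with O_WRONLY|O_NONBLOCK fails with ENXIO while no reader has it open, so a would-be writer can poll and time out on its own terms instead of blocking. A self-contained sketch:

```python
import errno
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(path)

would_block = False
try:
    # Non-blocking open for write fails outright if there is no reader.
    fd = os.open(path, os.O_WRONLY | os.O_NONBLOCK)
    os.close(fd)
except OSError as e:
    if e.errno != errno.ENXIO:
        raise
    would_block = True

print(would_block)  # True: nobody has the FIFO open for reading
os.unlink(path)
```

A writer that wants a timeout can retry this open in a loop with a short sleep, giving up after its deadline.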
PPC floating equality vs. byte compilation
I ran into a phenomenon that seemed odd to me, while testing a build of Python 2.4.1 on BeOS 5.04, on PowerPC 603e. test_builtin.py, for example, fails a couple of tests with errors claiming that apparently identical floating point values aren't equal. But it only does that when imported, and only when the .pyc file already exists. Not if I execute it directly (python test_builtin.py), or if I delete the .pyc file before importing it and running test_main(). For now, I'm going to just write this off as a flaky build. I would be surprised if 5 people in the world care, and I'm certainly not one of them. I just thought someone might find it interesting. The stalwart few who still use BeOS are mostly using Intel x86 hardware, as far as I know, but the first releases were for PowerPC, at first on their own hardware and then for PPC Macs until Apple got nervous and shut them out of the hardware internals. They use a Metrowerks PPC compiler that of course hasn't seen much development in the last 6 years, probably a lot longer. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Defending Python
Quoth Dave Cook <[EMAIL PROTECTED]>: | On 2005-07-08, Charlie Calvert <[EMAIL PROTECTED]> wrote: | |> I perhaps rather foolishly wrote two article that mentioned Python as a |> good alternative language to more popular tools such as C# or Java. I | | Sounds like a really hidebound bunch over there. Good luck. Nah, just normal. Evangelism is always wasted on the majority of listeners, but to the small extent it may succeed it depends on really acute delineation of the pitch. It's very hard for people to hear about something without trying to apply it directly to the nearest equivalent thing in their own familiar context. Say good things about language X, and people will hear you saying "give up using language Y and rewrite everything in language X." Then they will conclude that if you would say that, you don't know very much about their environment. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Reading variables from a forked child (Python C/API)
Quoth MrEntropy <[EMAIL PROTECTED]>:
| I'm having a little trouble getting an idea running. I am writing a C
| program which is really a frontend to a Python program. Now, my C
| program starts up, does some initialisation like initialisation of its
| variables and Py_Initialize() and then it fork()s. After forking the
| child launches the python program with Py_Main() and with the parent I
| want to be able to read the variables of the Python program. I have
| tried many ways but cannot find a solution to do this.

That's because there is no solution. After a fork, the only effect observable in the parent process is the return value of fork, which will have a non-zero value. Subsequent execution in the child process occurs completely independently from the parent, and leaves no trace whatever. So you can't read variables from memory that were set by the child.

Past that, I deleted the rest of your post because it sort of avoids the question of what you're really trying to accomplish and what errors you actually got, but note that just for the sake of avoiding segmentation faults etc., it's a good idea when writing in C to check return values:

    obj = PyMapping_GetItemString(dict, "foo");
    if (obj) {
        ...
    } else {
        ...
    }

Anyway, if you really need a subprocess, you're going to have to communicate with it via some sort of I/O, like UNIX pipes or temporary files or something. You probably don't need to call Python from C; you may as well just invoke python (cf. os.spawnv).

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
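A minimal sketch of the pipe route suggested at the end, written entirely in Python for brevity (the same pattern applies with a C parent around Py_Main): the child must explicitly write out anything the parent should see.

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: its variables are invisible to the parent, so anything
    # the parent should see must be written out explicitly.
    os.close(r)
    answer = 42
    os.write(w, ("answer=%d" % answer).encode())
    os._exit(0)

# Parent: read whatever the child chose to report, then reap it.
os.close(w)
report = os.read(r, 1024).decode()
os.close(r)
os.waitpid(pid, 0)
print(report)  # answer=42
```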
Re: Need to interrupt to check for mouse movement
Quoth Paul Rubin <http://[EMAIL PROTECTED]>: | Christopher Subich <[EMAIL PROTECTED]> writes: | > > In the particular case of wxWidgets, it turns out that the *GUI* | > > blocks for long periods of time, preventing the *network* from | > > getting attention. But I agree with your position for other | > > toolkits, such as Gtk, Qt, or Tk. | > | > Wow, I'm not familiar with wxWidgets; how's that work? | | Huh? It's pretty normal, the gui blocks while waiting for events | from the window system. I expect that Qt and Tk work the same way. In fact anything works that way, that being the nature of I/O. But usually there's a way to add your own I/O source to be dispatched along with the UI events -- the toolkit will for example use select() to wait for X11 socket I/O, so it can also respond to incoming data on another socket, provided along with a callback function by the application. Am I hearing that wxWindows or other popular toolkits don't provide any such feature, and need multiple threads for this reason? Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Something that Perl can do that Python can't?
In article <[EMAIL PROTECTED]>, "Dr. Who" <[EMAIL PROTECTED]> wrote:
> So here it is: handle unbuffered output from a child process.

Your Perl program works the same for me, on MacOS X, as your Python program. That's what we would expect, of course, because the problem is with the (Python) program on the other end - it's buffering output, because the output device is not a terminal.

Donn Cave, [EMAIL PROTECTED]

> Here is the child process script (bufcallee.py):
> import time
> print 'START'
> time.sleep(10)
> print 'STOP'
>
> In Perl, I do:
> open(FILE, "python bufcallee.py |");
> while ($line = <FILE>)
> {
>     print "LINE: $line";
> }
>
> in which case I get
>     LINE: START
> followed by a 10 second pause and then
>     LINE: STOP
>
> The equivalent in Python:
> import sys, os
>
> FILE = os.popen('python bufcallee.py')
> for line in FILE:
>     print 'LINE:', line
>
> yields a 10 second pause followed by
>     LINE: START
>     LINE: STOP
>
> I have tried the subprocess module, the -u on both the original and
> called script, setting bufsize=0 explicitly, but to no avail. I also
> get the same behavior on Windows and Linux.
>
> If anyone can disprove me or show me what I'm doing wrong, it would be
> appreciated.
>
> Jeff
-- http://mail.python.org/mailman/listinfo/python-list
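For the record, the usual cure belongs in the child process: flush stdout explicitly, or run the child under python -u so stdout is unbuffered even when it is a pipe. A self-contained check, with the child script inlined via -c (the timings are chosen just for the demonstration):

```python
import subprocess
import sys
import time

child_src = ("import sys, time\n"
             "print('START'); sys.stdout.flush()\n"
             "time.sleep(2)\n"
             "print('STOP')\n")

# -u makes the child's stdout unbuffered even though it is a pipe.
p = subprocess.Popen([sys.executable, "-u", "-c", child_src],
                     stdout=subprocess.PIPE)
t0 = time.time()
first = p.stdout.readline().strip()
arrived_early = (time.time() - t0) < 1.0  # well before the child's sleep ends
p.wait()
print(first, arrived_early)  # b'START' True
```

Without the flush or -u, the START line would sit in the child's buffer until it exits, which is exactly the behavior reported above.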
Re: socket and os.system
In article <[EMAIL PROTECTED]>, Peter Hansen <[EMAIL PROTECTED]> wrote:
> mfaujour wrote:
> > I HAVE THIS PYTHON PROGRAMM:
> [snip]
> > socket.error: (98, 'Address already in use')
> >
> > DOES SOMEONE HAS AN IDEA ?
>
> PLEASE learn to format your questions more appropriately! Your post is
> simply _awful_ to read. At the very least, ALL CAPS is considered to be
> "shouting", though I can see why you had to use them, since it would have
> been impossible to see the questions amongst all the code.
>
> In any case, assuming I've been able to guess at the specific problem
> based on the above lines, which isn't certain, you need to use a line
> something like this in your code to allow your server socket to bind to
> an address that was previously in use:
>
>     server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
>
> For more background, I suggest a Google search on "python so_reuseaddr".

For heaven's sake, it wasn't that hard to read. Of course the upper case text was an unpardonable violation of people's tender sensibilities, but in this case it does have the virtue of a strong visible distinction between his code and his comments. The good thing is that he did provide an example that clearly illustrates the problem.

Which is not really that he can't reuse the socket address. I mean, it's usually good to take care of that, but ordinarily for reasons having to do with shutdown latency. In the present case, his application is holding the socket open from a fork that inherited it by accident.

I think the current stock answer is "use the subprocess module." If that's not helpful, either because it doesn't provide any feature that allows you to close a descriptor in a fork (I seem to recall it does), or it isn't supported in your version of Python (< 2.4), then you have your choice of two slightly awkward solutions:

1. fcntl F_SETFD FD_CLOEXEC (see man 2 fcntl)
2. implement your own spawn command (see os.py's spawnv()) and close the socket FD in the fork.
Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
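Option 1 from the list above looks like this in Python; once the close-on-exec flag is set, a spawned child no longer inherits the listening socket and so can't hold its address open by accident:

```python
import fcntl
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Set FD_CLOEXEC: the descriptor is closed automatically in any
# exec'd child instead of being inherited by it.
flags = fcntl.fcntl(server.fileno(), fcntl.F_GETFD)
fcntl.fcntl(server.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

is_set = bool(fcntl.fcntl(server.fileno(), fcntl.F_GETFD) & fcntl.FD_CLOEXEC)
print(is_set)  # True
```

(Current Python sets close-on-exec on sockets by default; on the interpreters of this era it had to be done by hand as above.)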
Re: Fat and happy Pythonistas (was Re: Replacement for keyword 'global' good idea? ...)
Quoth Mike Meyer <[EMAIL PROTECTED]>:
| "John Roth" <[EMAIL PROTECTED]> writes:
...
|> It seems to be the consensus on this group anyway: declarative typing
|> does not give enough improvement in program correctness to override
|> more concise programs and TDD. That may, of course, be wishful
|> thinking on the Python community's part.
|
| "The consensus of this group" is a *long* way from "the debate has
| moved on". I agree that it's the consensus of this group - but this is
| a group devoted to a dynamic programming language. If you go to a
| group devoted to a statically typed language, you'll find a different
| consensus. Which means the debate is still very much alive.

Also an OOP group, which tends to mean that experience with static typing will have been with C++ or Java, or similar languages. The ideas I've read for P3000 fortunately show some influence from the type inference systems popular in FP. What people in this group think is frankly irrelevant if they're thinking in terms of Java.

| So we have one (count him, 1) user who complains that it's changing too
| fast. I suspect most readers here would disagree with him.

True, but another statistic that's compromised by a self-selecting population. Earlier in this thread our attention was directed to an article announcing a fairly radical drop in popularity of Python (and other interpreted languages) for new projects outside of North America, citing failure to penetrate the "enterprise" market as a reason. Ask the enterprise world if they think Python is changing fast enough. Maybe they're giving up on Python because they decided they'd never get code blocks. (Ha ha.)

Donn Cave, [EMAIL PROTECTED]
-- http://mail.python.org/mailman/listinfo/python-list
Re: Decline and fall of scripting languages ?
Quoth "Kay Schluehr" <[EMAIL PROTECTED]>: | Paul Rubin wrote: [ ... re where to go from Python ] |> Lately I'm interested in OCAML as a possible step up from Python. It |> has bogosity of its own (much of it syntactic) but it has static |> typing and a serious compiler, from what I understand. I don't think |> I can grok it from just reading the online tutorial; I'm going to have |> to code something in it, once I get a block of time available. Any |> thoughts? | | The whole ML family ( including OCaml ) and languages like Haskell | based on a Hindley-Milnor type system clearly make a difference. I | would say that those languages are also cutting edge in language theory | research. It should be definitely interesting to you. Since there is no | single language implementation you might also find one that supports | concepts you need most e.g. concurrency: | | http://cml.cs.uchicago.edu/ My vote would be Haskell first, then other functional languages. Learning FP with Objective CAML is like learning to swim in a wading pool -- you won't drown, but there's a good chance you won't really learn to swim either. Has an interesting, very rigorous OOP model though. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Decline and fall of scripting languages ?
In article <[EMAIL PROTECTED]>, Paul Rubin <http://[EMAIL PROTECTED]> wrote: > "Donn Cave" <[EMAIL PROTECTED]> writes: > > My vote would be Haskell first, then other functional languages. > > Learning FP with Objective CAML is like learning to swim in a > > wading pool -- you won't drown, but there's a good chance you > > won't really learn to swim either. Has an interesting, very > > rigorous OOP model though. > > I'm not sure what you mean by that about OCAML. That its functional > model is not pure enough? I'd like to look at Haskell as well, but I > have the impression that its implementation is not as serious as > OCaml's, i.e. no native-code compiler. On the contrary, there are a couple. Ghc is probably the leading implementation these days, and by any reasonable measure, it is serious. Objective CAML is indeed not a pure functional language. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Decline and fall of scripting languages ?
In article <[EMAIL PROTECTED]>, Paul Rubin <http://[EMAIL PROTECTED]> wrote: ... > I notice that Haskell strings are character lists, i.e. at least > conceptually, "hello" takes the equivalent of five cons cells. Do > real implementations (i.e. GHC) actually work like that? If so, that's > enough to kill the whole thing right there. Yep. There is a separate packed string type. > > Objective CAML is indeed not a pure functional language. > > Should that bother me? I should say, my interest in Ocaml or Haskell > is not just to try out something new, but also as a step up from both > Python and C/C++ for writing practical code. That is, I'm looking for > something with good abstraction (unlike Java) and type safety (unlike > C/C++), but for the goal of writing high performance code (like > C/C++). I'm even (gasp) thinking of checking out Ada. It's up to you, I'm just saying. Speaking of C++, would you start someone with Python or Java for their first OOPL? Kind of the same idea. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Decline and fall of scripting languages ?
Quoth Paul Rubin <http://[EMAIL PROTECTED]>: | Right now I'm mainly interested in OCaml, Haskell, Erlang, and maybe | Occam. Haskell seems to have the happiest users, which is always a | good thing. Erlang has been used for real-world systems and has | built-in concurrency support. OCaml seems to crush Haskell and Erlang | (and even Java) in performance. I'm sure you're aware that these are all arguable. In particular, shootout figures aren't a really reliable way to find out about performance. | The idea is to use one of those languages for a personal project after | my current work project wraps up pretty soon. This would be both a | learning effort and an attempt to write something useful. I'm | thinking of a web application like a discussion board or wiki, | intended to outperform the existing ones, i.e. able to handle a | Slashdot or Wikipedia sized load (millions of hits/day) on a single | fast PC instead of a rack full. "Single fast PC" will probably soon | come to mean a two-cpu-chip motherboard in a 1U rack box, where each | cpu chip is a dual core P4 or Athlon, so the application should be | able to take advantage of at least 4-way multiprocessing, thus the | interest in concurrency. Oh. Note that the FP world has been historically attracted to the "green" thread model, where threads are implemented in the runtime like (old) Stackless micro-threads, much faster and more tractable for runtime data structures ... but runs on only one CPU at a time. Ocaml & I believe current ghc support native OS threads, Erlang I would guess not but wouldn't know for sure. Don't know about ghc internals, the way I remember it ocaml's native thread system has something like Python's global lock, instead of locks around each memory management function etc. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Decline and fall of scripting languages ?
In article <[EMAIL PROTECTED]>, Michael Hudson <[EMAIL PROTECTED]> wrote: > Donn Cave <[EMAIL PROTECTED]> writes: > > > On the contrary, there are a couple. Ghc is probably the > > leading implementation these days, and by any reasonable > > measure, it is serious. > > > > Objective CAML is indeed not a pure functional language. > > *cough* unsafePerformIO *cough* (Hope that cough isn't anything serious.) The way I understand it, you'd be a fool to use unsafePerformIO in a way that would generally compromise functional purity. It really is "unsafe", inasmuch as it violates central assumptions of the language evaluation model. Some people take "pure" too seriously. In this context, functional purity just means that we know that in principle, the value of an expression is constant - given the same inputs to a function, we always expect the same result. It doesn't mean "free from blemish." unsafePerformIO is a sort of blemish, I suppose, but it's a pure functional language in my book. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Bug on Python2.3.4 [FreeBSD]?
In article <[EMAIL PROTECTED]>, Uwe Mayer <[EMAIL PROTECTED]> wrote: > Friday 12 August 2005 22:12 pm paolino wrote: > [...] > >>>>>f = open('test', 'a+') > >>>>>f.read() > >> > >> '' > >> > >> -> append mode does not read from file, *not ok* > >> > >> > > This is right IMO 'a' is appending so seek(-1) > > True, thank you. > f.tell() shows the file pointer is at EOF. On my Debian Linux (unstable), > Python 2.3.4 +2.3.5, however, the file pointer is at the beginning of the > file. > Is that behaviour intended? I don't think Python pretends to have any intentions here, it has to take what it gets from the C library fopen(3) function. BSD man pages generally say a+ positions the stream at end of file (period.) They claim conformance with the ISO C90 standard. I couldn't dig up a (free) copy of that document, so don't know what it says on this matter. GNU C man pages say it positions the stream at end for write and at beginning for read. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
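A short sketch of the practical consequence (my example, not from the thread): since the initial read position under "a+" differs between C libraries, seek explicitly instead of relying on either behavior. The file path here is a throwaway temp file.

```python
import os
import tempfile

# With mode "a+", C libraries disagree about the initial read position
# (glibc: beginning of file; BSD libc: end of file), so seek explicitly.
path = os.path.join(tempfile.mkdtemp(), "test")
f = open(path, "a+")
f.write("hello\n")      # append mode: writes always go to end of file
f.seek(0)               # make the read position explicit and portable
data = f.read()
f.close()
```

After the explicit seek, `data` holds the whole file contents on every platform.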
Re: Bug on Python2.3.4 [FreeBSD]?
Quoth "Terry Reedy" <[EMAIL PROTECTED]>: | "Donn Cave" <[EMAIL PROTECTED]> wrote in message | news:[EMAIL PROTECTED] | > I don't think Python pretends to have any intentions here, | > it has to take what it gets from the C library fopen(3) | > function. BSD man pages generally say a+ positions the | > stream at end of file (period.) They claim conformance | > with the ISO C90 standard. I couldn't dig up a (free) copy | > of that document, so don't know what it says on this matter. | | STandard C, by Plauger & Brodie says that 'a' plus whatever else means all | writes start at the current end-of-file. Of course, but the question was, where do reads start? I would guess the GNU C library "innovated" on this point. But in the end it doesn't really matter unless Python is going to try to square that all up and make open() consistent across platforms. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Bug on Python2.3.4 [FreeBSD]?
In article <[EMAIL PROTECTED]>, "Terry Reedy" <[EMAIL PROTECTED]> wrote: ... > If there is a hole in the standard, 'innovation' is required. I hope this perspective is a rarity. When you exploit an opportunity to make something work differently while conforming to the existing standards, you're creating the kind of problem standards are there to prevent. In the end I don't care if my software works because someone followed the standards to the letter, or because someone took the trouble to follow existing practice whether it was codified in a standard or not, I just don't want it to work differently on one platform than on another. Holes in standards are at best an excuse for accidental deviations. In the present case, so far I see a strong Berkeley vs. everyone else pattern, so GNU C probably wasn't the culprit after all. Along with already documented FreeBSD, I find MacOS X, NetBSD 2 and Ultrix 4.2 position the read stream to EOF. Linux, AIX and DEC/OSF1 (or whatever it's called these days) position it to 0. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: while c = f.read(1)
Quoth "Greg McIntyre" <[EMAIL PROTECTED]>: | I have a Python snippet: | | f = open("blah.txt", "r") | while True: | c = f.read(1) | if c == '': break # EOF | # ... work on c | | Is some way to make this code more compact and simple? It's a bit | spaghetti. Actually I'd make it a little less compact -- put the "break" on its own line -- but in any case this is fine. It's a natural and ordinary way to express this in Python. ... | But I get a syntax error. | | while c = f.read(1): |^ | SyntaxError: invalid syntax | | And read() doesn't work that way anyway because it returns '' on EOF | and '' != False. If I try: This is the part I really wanted to respond to. Python managed without a False for years (and of course without a True), and if the introduction of this superfluous boolean type really has led to much of this kind of confusion, then it was a bad idea for sure. The condition that we're looking at here, and this is often the way to look at conditional expressions in Python, is basically something vs. nothing. In this and most IO reads, the return value will be something, until at end of file it's nothing. Any type of nothing -- '', {}, [], 0, None - will test "false", and everything else is "true". Of course True is true too, and False is false, but as far as I know they're never really needed. You are no doubt wondering when I'm going to get to the part where you can exploit this to save you those 3 lines of code. Sorry, it won't help with that. | Is this related to Python's expression vs. statement syntactic | separation? How can I be write this code more nicely? Yes, exactly. Don't worry, it's nice as can be. If this is the worst problem in your code, you're far better off than most of us. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
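As an aside not raised in the post: Python does have one spelling of this loop without the explicit break, iter() with a sentinel. StringIO stands in for the file so the sketch is self-contained:

```python
from io import StringIO

# iter(callable, sentinel) calls read(1) repeatedly and stops when the
# result equals the sentinel '' - i.e. at end of file.
f = StringIO("abc")     # stand-in for open("blah.txt", "r")
chars = []
for c in iter(lambda: f.read(1), ''):
    chars.append(c)     # ... work on c
```

It is the same something-vs-nothing test, just folded into the for statement.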
Re: while c = f.read(1)
In article <[EMAIL PROTECTED]>, Antoon Pardon <[EMAIL PROTECTED]> wrote: ... > But '', {}, [] and () are not nothing. They are empty containers. Oh come on, "empty" is all about nothing. > And 0 is not nothing either it is a number. Suppose I have > a variable that is either None if I'm not registered and a > registration number if I am. In this case 0 should be treated > as any other number. > > Such possibilities, make me shy away from just using 'nothing' > as false and writing out my conditionals more explicitly. Sure, if your function's type is "None | int", then certainly you must explicitly check for None. That is not the case with fileobject read(), nor with many functions in Python that reasonably and ideally return a value of a type that may meaningfully test false. In this case, comparison (==) with the false value ('') is silly. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: global interpreter lock
In article <[EMAIL PROTECTED]>, Bryan Olson <[EMAIL PROTECTED]> wrote: > km wrote: > > Hi all, > > > > is true parallelism possible in python ? or atleast in the > > coming versions ? is global interpreter lock a bane in this > > context ? > > No; maybe; and currently, not usually. > > On a uniprocessor system, the GIL is no problem. On multi- > processor/core systems, it's a big loser. I rather suspect it's a bigger winner there. Someone who needs to execute Python instructions in parallel is out of luck, of course, but that has to be a small crowd. I would have to assume that most applications that need the kind of computational support that implies are doing most of the actual computation in C, in functions that run with the lock released. The runnable threads are then 1 interpreter thread, plus N "allow threads" C functions, where N is whatever the OS will bear. Meanwhile, the interpreter's serial concurrency limits the damage. The unfortunate reality is that concurrency is a bane, so to speak -- programming for concurrency takes skill and discipline and a supportive environment, and Python's interpreter provides a cheap and moderately effective support that compensates for most programmers' unrealistic assessment of their skill and discipline. Not that you can't go wrong, but the chances you'll get nailed for it are greatly reduced - especially in an SMP environment. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: global interpreter lock
Quoth Paul Rubin <http://[EMAIL PROTECTED]>: | Mike Meyer <[EMAIL PROTECTED]> writes: |> The real problem is that the concurrency models available in currently |> popular languages are still at the "goto" stage of language |> development. Better models exist, have existed for decades, and are |> available in a variety of languages. | | But Python's threading system is designed to be like Java's, and | actual Java implementations seem to support concurrent threads just fine. I don't see a contradiction here. "goto" is "just fine", too -- you can write excellent programs with goto. 20 years of one very successful software engineering crusade against this feature have made it a household word for brokenness, but most current programming languages have more problems in that vein that pass without question. If you want to see progress, it's important to remember that goto was a workable, useful, powerful construct that worked fine in the right hands - and that wasn't enough. Anyway, to return to the subject, I believe if you follow this subthread back you will see that it has diverged a little from simply whether or how Python could support SMP. Mike, care to mention an example or two of the better models you had in mind there? Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: global interpreter lock
Quoth Mike Meyer <[EMAIL PROTECTED]>: [... wandering from the nominal topic ...] | *) The most difficult task was writing horizontal microcode, which | also had serious concurrency issues in the form of device settling | times. I dealt with that by inventing a programming model that hid | most of the timing details from the programmer. It occasionally lost a | cycle, but the people who used it after me were *very* happy with it | compared to the previous model. My favorite concurrency model comes with a Haskell variant called O'Haskell, and it was last seen calling itself "Timber" with some added support for time as an event source. The most on topic thing about it -- its author implemented a robot controller in Timber, and the robot is a little 4-wheeler called ... "Timbot". Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: while c = f.read(1)
Before leaving this topic, I wanted to make a rare visit to the ruins of Google's USENET archive and pull out this giant post on the subject of True and False, when they were being considered for adoption into Python. There is some stuff to ignore, where she addresses questions that didn't go anywhere, but she goes on to write a well articulated case that makes very interesting reading, and possibly has had some effect on how people think about it around here. http://groups-beta.google.com/group/comp.lang.python/msg/2de5e1c8384c0360 Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: pipes like perl
In article <[EMAIL PROTECTED]>, "max(01)*" <[EMAIL PROTECTED]> wrote: > in perl i can do this: ... > but i do not know how to do it in python, because "if *command*:" gives > syntax error. > > moreover, if i use ... > it doesn't work, since "*do_something*" and *do_something_more* are > always executed (it seems like > > MYPIPE = os.popen("*some_system_command*", "r") > > does not raise any exception even if *some_system_command* does not > exist/work... Just to address this last point -- if you're running 2.4, you can get this through the subprocess module. With its popen equivalent, something like subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout will raise an exception if the command is not found. The command in this case would be specified as an argv list, not a shell command. The basic problem is that you have to fork first, then exec, and by the time the forked interpreter finds out that the exec didn't work, its parent has gone on to do the I/O it's expecting. I think subprocess gets around that with a trick involving an extra pipe, one that would work only on UNIX. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
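A sketch of the difference described above: Popen raises OSError when the executable can't be found, unlike os.popen(), which fails silently until you try the I/O. The command name is deliberately bogus.

```python
import subprocess

# Popen with an argv list raises OSError at spawn time if the
# executable doesn't exist, rather than failing later during I/O.
try:
    pipe = subprocess.Popen(["no_such_command_xyz"],
                            stdout=subprocess.PIPE).stdout
    found = True
except OSError:
    found = False
```

With os.popen("no_such_command_xyz", "r") you would instead get a usable-looking file object and only discover the problem from an empty read and a shell error on stderr.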
Re: named pipe input
In article <[EMAIL PROTECTED]>, "max(01)*" <[EMAIL PROTECTED]> wrote: > i have some problems understanding following behaviour. > > consider this: > $ cat file_input_3.pl > #!/usr/bin/perl > > open MIAPIPE, "una_pipe"; > > while ($riga = <MIAPIPE>) ... > $ cat file_input_3.py > #!/usr/bin/python > > import sys > > MIAPIPE = open("una_pipe", "r") > > for riga in MIAPIPE: ... > BUT if i try to do the same with the python code, something different happens: i have to type ALL the lines on console #2 and complete the cat command (ctrl-d) before seeing the lines echoed on console #1. Seems to me something like this came up here not long ago. It turns out that for line in file: doesn't do the same thing as Perl's while ($line = <MIAPIPE>). If you use file.readline() instead (in a loop, of course), I think you'll get the data one line at a time, but "in file" apparently buffers ahead, reading a large block before yielding anything. That's what I vaguely remember, I don't use it myself. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
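Here is a sketch of the readline() loop being suggested. A regular temp file stands in for the FIFO so the sketch runs anywhere; with a real named pipe, readline() returns each line as soon as the writer produces it, while the "for riga in MIAPIPE" form can sit on its internal read-ahead buffer instead.

```python
import os
import tempfile

# Stand-in for the "una_pipe" FIFO from the original post.
path = os.path.join(tempfile.mkdtemp(), "una_pipe")
with open(path, "w") as out:
    out.write("riga uno\nriga due\n")

lines = []
MIAPIPE = open(path, "r")
while True:
    riga = MIAPIPE.readline()
    if riga == '':          # readline() returns '' only at end of file
        break
    lines.append(riga.strip())
MIAPIPE.close()
```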
Re: Find day of week from month and year
In article <[EMAIL PROTECTED]>, "Laguna" <[EMAIL PROTECTED]> wrote: > I want to find the expiration date of stock options (3rd Friday of the > month) for an any give month and year. I have tried a few tricks with > the functions provided by the built-in module time, but the problem was > that the 9 element tuple need to be populated correctly. Can anyone > help me out on this one? ... > Requirements: > > d0 = expiration(9, 2005) # d0 would be 16 > d1 = expiration(6, 2003) # d1 would be 20 > d2 = expiration(2, 2006) # d2 would be 17 What do you mean by, "the 9 element tuple need to be populated correctly"? Do you need someone to tell you what values it needs? What happens if you use (2005, 9, 1, 0, 0, 0, 0, 0, 0), for example? If you make this tuple with localtime or gmtime, do you know what the 7th (tm[6]) element of the tuple is? What tricks did you try, exactly? Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Find day of week from month and year
In article <[EMAIL PROTECTED]>, "Laguna" <[EMAIL PROTECTED]> wrote: > > What do you mean by, "the 9 element tuple need to be populated > > correctly"? Do you need someone to tell you what values it > > needs? What happens if you use (2005, 9, 1, 0, 0, 0, 0, 0, 0), > > for example? If you make this tuple with localtime or gmtime, > > do you know what the 7th (tm[6]) element of the tuple is? > > What tricks did you try, exactly? > > > >Donn Cave, [EMAIL PROTECTED] > > Thanks for pointing out. tm[6] = weekday, and tm[7] = Julian data, but > I wouldn't know these values when my input values are month and year. > > I will try out the more constructive suggestions from Paul and Robert. > > Following is what I have tried. As you can tell, the results are wrong! > > >>> import time > >>> time.asctime((2003, 9, 1, 0, 0, 0, 0, 0, 0)) > 'Mon Sep 01 00:00:00 2003' > >>> time.asctime((2003, 8, 1, 0, 0, 0, 0, 0, 0)) > 'Mon Aug 01 00:00:00 2003' > >>> time.asctime((2003, 7, 1, 0, 0, 0, 0, 0, 0)) > 'Mon Jul 01 00:00:00 2003' Well, sure, that tm value will certainly not be the 3rd Friday, but it does correctly represent the first day of the month. With localtime() you can find out the day of the week, on the first day of the month. When you know that, the 3rd Friday is simple arithmetic. Since other followups have already spoon-fed you a solution (assuming it works, haven't tried), here's an example of what I mean -

import time
for m in range(1, 13):
    c1 = time.mktime((2005, m, 1, 0, 0, 0, 0, 0, 0))
    d1 = time.localtime(c1)[6]
    if d1 > 4:
        f3 = 26 - d1
    else:
        f3 = 19 - d1
    # f3 = 19 + (d1 // 5) * 7 - d1
    c3 = time.mktime((2005, m, f3, 0, 0, 0, 0, 0, 0))
    print time.ctime(c3)

I don't know if you intend to go on to do much more programming after this, but that's who I normally assume we're talking to here, programmers. 
No one knows everything and misses nothing, certainly not me, but it's nice when people come to comp.lang.python and can account for at least the beginning of some analysis of their problem. When that's missing, it's hard to know what's really constructive. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: read stdout/stderr without blocking
In article <[EMAIL PROTECTED]>, Peter Hansen <[EMAIL PROTECTED]> wrote: > Jacek Popławski wrote: > > Grant Edwards wrote: > > > >> On 2005-09-12, Jacek Popławski <[EMAIL PROTECTED]> wrote: > >> > >>>>ready = select.select(tocheck, [], [], 0.25) ##continues > >>>> after 0.25s > >>>>for file in ready[0]: > >>>>try: > >>>>text = os.read(file, 1024) > >>> > >>> > >>> How do you know here, that you should read 1024 characters? > >>> What will happen when output is shorter? > >> > >> It will return however much data is available. > > > > My tests showed, that it will block. > > Not if you use non-blocking sockets, as I believe you are expected to > when using select(). On the contrary, you need non-blocking sockets only if you don't use select. select waits until a read [write] would not block - it's like "if dict.has_key(x):" instead of "try: val = dict[x] ; except KeyError:". I suppose you knew that, but have read some obscure line of reasoning that makes non-blocking out to be necessary anyway. Who knows, but it certainly isn't in this case. I don't recall the beginning of this thread, so I'm not sure if this is the usual wretched exercise of trying to make this work on both UNIX and Windows, but there are strong signs of the usual confusion over os.read (a.k.a. posix.read), and file object read. Let's hopefully forget about Windows for the moment. The above program looks fine to me, but it will not work reliably if file object read() is substituted for os.read(). In this case, C library buffering will read more than 1024 bytes if it can, and then that data will not be visible to select(), so there's no guarantee it will return in a timely manner even though the next read() would return right away. Reading one byte at a time won't resolve this problem, obviously it will only make it worse. The only reason to read one byte at a time is for data-terminated read semantics, specifically readline(), in an unbuffered file. 
That's what happens -- at the system call level, where it's expensive -- when you turn off stdio buffering and then call readline(). In the C vs. Python example, read() is os.read(), and file object read() is fread(); so of course, C read() works where file object read() doesn't. Use select, and os.read (and UNIX) and you can avoid blocking on a pipe. That's essential if as I am reading it there are supposed to be two separate pipes from the same process, since if one is allowed to fill up, that process will block, causing a deadlock if the reading process blocks on the other pipe. Hope I'm not missing anything here. I just follow this group to answer this question over and over, so after a while it gets sort of automatic. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
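A sketch of the select-plus-os.read pattern under discussion, draining both stdout and stderr from one child without deadlock (UNIX only). The throwaway shell command stands in for the real monitored process.

```python
import os
import select
import subprocess

# Use select() plus os.read() - not the buffered file-object read() -
# so no data hides in a C library buffer invisible to select().
proc = subprocess.Popen(["sh", "-c", "echo out; echo err >&2"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
tocheck = [proc.stdout.fileno(), proc.stderr.fileno()]
collected = b""
while tocheck:
    ready, _, _ = select.select(tocheck, [], [], 0.25)
    for fd in ready:
        data = os.read(fd, 1024)    # whatever is available, up to 1024
        if data:
            collected += data
        else:                       # empty read means EOF on this pipe
            tocheck.remove(fd)
proc.wait()
```

Because neither pipe is ever left to fill up while the reader blocks on the other, the child can't wedge on a full pipe.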
Re: Odd behavior with os.fork and time.sleep
In article <[EMAIL PROTECTED]>, "Yin" <[EMAIL PROTECTED]> wrote: > I am writing a script that monitors a child process. If the child > process dies on its own, then the parent continues on. If the child > process is still alive after a timeout period, the parent will kill the > child process. Enclosed is a snippet of the code I have written. For > some reason, unless I put in two time.sleep(4) commands in the parent, > the process will not sleep. Am I forgetting something? Any reasons > for this strange behavior? ... > signal.signal(signal.SIGCHLD, chldhandler) If you can possibly revise your design to avoid the need for this, by all means do so. The SIGCHLD signal interrupts functions like sleep(), and that's what you're seeing: the parent process didn't return to its sleep after handling the signal. What's worse, it affects other functions in a similar way, such as I/O. Try to read some input from the terminal, instead of sleeping, and you should crash with an EINTR error. Makes it harder to write a reliable program, when you're inviting such trouble. So far this is a low level UNIX issue that isn't peculiar to Python, but Python adds to the difficulties just in the general awkwardness of signal handling in an interpreted language, where handlers may execute somewhat later than you would expect from experience with lower level languages. And then if you decide to add threads to the mix, there are even more issues as signals may be delivered to one thread and handled in another, etc. If you're dispatching on I/O, for example with select, you can use an otherwise unused pipe to notice the child fork's exit -- close the parent's write end right away, and then when the pipe becomes readable it must be because it closed on child exit. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
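A sketch of the pipe trick in that last paragraph (my own example, UNIX only): the parent closes its write end, so when the child exits and its inherited copy closes, the read end hits EOF and select() reports it readable.

```python
import os
import select

r, w = os.pipe()
pid = os.fork()
if pid == 0:                # child: pretend to work, then exit
    os.close(r)
    os._exit(0)             # its write-end copy closes with the process

os.close(w)                 # parent keeps only the read end
# The read end becomes readable (at EOF) exactly when the child exits,
# so this replaces a SIGCHLD handler in a select-based dispatch loop.
ready, _, _ = select.select([r], [], [], 10.0)
child_gone = bool(ready and os.read(r, 1) == b"")
os.waitpid(pid, 0)
os.close(r)
```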
Re: End or Identify (EOI) character ?
In article <[EMAIL PROTECTED]>, "Terry Reedy" <[EMAIL PROTECTED]> wrote: > "Madhusudan Singh" <[EMAIL PROTECTED]> wrote in message > news:[EMAIL PROTECTED] > > Hi > > > > I was wondering how does one detect the above character. It is returned > > by > > an instrument I am controlling via GPIB. > > EOI = chr(n) # where n is ASCII number of the character. > # then whenever later > if gpid_in == EOI: #do whatever Which begs the question, what is the ASCII number of the character? I was curious enough to feed GPIB and EOI into a search engine, and from what I got back, I believe it is not a character, but rather a hardware line that may be asserted or not. GPIB, whatever that is, may support some configuration options where EOI causes a character output, but the actual value depends on configuration. The documentation is probably the place to find out more about this stuff. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Memory Allocation?
In article <[EMAIL PROTECTED]>, "Chris S." <[EMAIL PROTECTED]> wrote: > Is it possible to determine how much memory is allocated by an arbitrary > Python object? There doesn't seem to be anything in the docs about this, > but considering that Python manages memory allocation, why would such a > module be more difficult to design than say, the GC? Sorry, I didn't follow that - such a module as what? Along with the kind of complicated internal implementation details, you may need to consider the possibility that the platform malloc() may reserve more than the allocated amount, for its own bookkeeping but also for alignment. It isn't a reliable guide by any means, but something like this might be at least entertaining -

>>> class A:
...     def __init__(self, a):
...         self.a = a
...
>>> d = map(id, map(A, [0]*32))
>>> d.sort()
>>> k = 0
>>> for i in d:
...     print i - k
...     k = i
...

This depends on the fact that id(a) returns a's storage address. I get very different results from one platform to another, and I'm not sure what they mean, but at a guess, I think you will see a fairly small number, like 40 or 48, that represents the immediate allocation for the object, and then a lot of intervals three or four times larger that represent all the memory allocated in the course of creating it. It isn't clear that this is all still allocated - malloc() doesn't necessarily reuse a freed block right away, and in fact the most interesting thing about this experiment is how different this part looks on different platforms. Of course we're still a bit in the dark as to how much memory is really allocated for overhead. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Kill GIL
Quoth Dave Brueck <[EMAIL PROTECTED]>: ... | Another related benefit is that a lot of application state is implicitly and | automatically managed by your local variables when the task is running in a | separate thread, whereas other approaches often end up forcing you to think in | terms of a state machine when you don't really care* and as a by-product you | have to [semi-]manually track the state and state transitions - for some | problems this is fine, for others it's downright tedious. As you may know, it used to be, in Stackless Python, that you could have both. Your function would suspend itself, the select loop would resume it, for something like serialized threads. (The newer version of Stackless lost this continuation feature, but for all I know there may be new features that regain some of that ground.) I put that together with real OS threads once, where the I/O loop was a message queue instead of select. A message queueing multi-threaded architecture can end up just as much a state transition game. I like threads when they're used in this way, as application components that manage some device-like thing like a socket or a graphic user interface window, interacting through messages. Even then, though, there tend to be a lot of undefined behaviors in events like termination of the main thread, receipt of signals, etc. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: Kill GIL
In article <[EMAIL PROTECTED]>, Dave Brueck <[EMAIL PROTECTED]> wrote: > Donn Cave wrote: [... re stackless inside-out event loop ] > > I put that together with real OS threads once, where the I/O loop was a > > message queue instead of select. A message queueing multi-threaded > > architecture can end up just as much a state transition game. > > Definitely, but for many cases it does not - having each thread represent a > distinct "worker" that pops some item of work off one queue, processes it, > and > puts it on another queue can really simplify things. Often this maps to > real-world objects quite well, additional steps can be inserted or removed > easily (and dynamically), and each worker can be developed, tested, and > debugged > independently. Well, one of the things that makes the world interesting is how many different universes we seem to be coming from, but in mine, when I have divided an application into several thread components, about the second time I need to send a message from one thread to another, the sender needs something back in return, as in T2 = from_thread_B(T1). At this point, our conventional procedural model breaks up along a state fault, so to speak, like ... to_thread_B(T1) return def continue_from_T1(T1, T2): ... So, yeah, now I have a model where each thread pops, processes and pushes messages, but only because my program spent the night in Procrustes' inn, not because it was a natural way to write the computation. In a procedural language, anyway - there are interesting alternatives, in particular a functional language called O'Haskell that models threads in a "reactive object" construct, an odd but elegant mix of state machine and pure functional programming, but it's kind of a research project and I know of nothing along these lines that's really supported today. Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
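For the simple case Dave describes, before the state fault shows up, the worker pattern looks something like this sketch (doubling the item stands in for real processing):

```python
import queue
import threading

# A worker pops items off one queue, processes them, and pushes the
# results onto another - the pattern from Dave's description.
def worker(inq, outq):
    while True:
        item = inq.get()
        if item is None:        # sentinel: time to shut down
            break
        outq.put(item * 2)      # stand-in for the real processing step

inq, outq = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inq, outq))
t.start()
for n in (1, 2, 3):
    inq.put(n)
inq.put(None)
t.join()
results = [outq.get() for _ in range(3)]
```

Additional processing stages can be chained by giving each worker thread its own input and output queue; the trouble Donn describes begins when a stage needs an answer back from a later stage.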
Re: Kill GIL
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Aahz) wrote: > Yes. I just get a bit irritated with some of the standard lines that > people use. Hey, stop me if you've heard this one: "I used threads to solve my problem - and now I have two problems!" Donn Cave, [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: [Fwd: Re: [Uuu-devel] languages] <-- Why Python
Quoth Nick Coghlan <[EMAIL PROTECTED]>:
[... re Python OS interface vs. language-generic model ]

| *Allowing* other languages is one thing, but that shouldn't preclude
| having a 'default' language. On other OS's, the default language is some
| form of shell scripting (i.e. Unix shell scripts, or Windows batch
| files). It would be good to have a real language to fill that role.

I don't know what the Windows version is like, but for all the UNIX
shell's weaknesses, it's very well suited to its role.  The Plan 9 shell
(rc) is similar with much improved syntax, and has a distant relative
"es" that I think is the closest thing I've ever seen to a 1st class
language that works as a shell (though the implementation was only at
the proof of concept level.)  (I'm not forgetting REXX; it's a fairly
respectable effort but not 1st class.)

...
| Still, the builtin shell is going to need *some* form of scripting
| support. And if that looks like IPython's shell mode, so much the
| better.
|
| Anyway, the reason to prefer Python to LISP for something like this, is
| that Python reads much more naturally for most people, whereas LISP
| requires that you write things 'out of order'.
|
| Compare out-of-the-box Python:
|     a = 1 + 2 + 3 + 4
|
| And out-of-the-box Lisp:
|     (setq a (+ 1 2 3 4))
|
| Which language has the lower barrier for entry? That should be a fairly
| important consideration for a language that is going to sit at the heart
| of an OS.

Well, honestly I think that's stretching it.  Your order issue here
seems to apply only to operators, and they don't really figure that
heavily in the kinds of things we normally do with the OS.  The only
operator I can think of in "rc" is ^, an odd sort of string
multiplication thing, and I can't think of any in the original Bourne
shell.  Meanwhile, the idea that barriers to entry are built out of
things like "+ 1 2 3 4" vs. "1 + 2 + 3 + 4" is really quite open to
question.
10 years ago, when BeOS was a little hotter than it is today, there were
a couple of enthusiasts pushing Python as an official language.  A few of
the people following BeOS at that point had come from a strong Amiga
background, and I remember one of them arguing vehemently against Python
because its arcane, complicated syntax was totally unsuited to casual
use.  Compared to, say, REXX.

Now, we Python users know very well that's not true; Python's as clear
as could be.  But theoretically, if you wanted to talk about order
issues, for example ... is it really easier to understand when a
language sometimes expresses a function f over x and y this way

    f(x, y)

sometimes this way (+ is a function, really)

    x f y

and sometimes this way

    x.f(y)

?  I don't know, I'm just thinking that while Python's notation might be
just fine for people who've gotten here the way most of us have, it's
not obvious from this that it's just fine for everyone.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: select + ssl
In article <[EMAIL PROTECTED]>, Ktm <[EMAIL PROTECTED]> wrote:

> I don't have the same behaviour with two codes who are quite the same,
> one using SSL, the other not.  I tested the programs with stunnel and
> telnet, respectively.
[... program source ...]
> The server blocks on recv here.

SSL is a layer on top of the socket.  It reads and writes SSL protocol
data on the socket connection, while its recv() and send() methods
return and accept the unencrypted protocol payload (you already knew
this.)  The select() function does not, however, deal with this layer;
it looks directly at the socket.  It's telling you that recv() won't
block -- but it means the recv(2) that SSL uses, not the
SSL.Connection.recv() that you have to use.

> In both case I don't send anything with the client.  (Perhaps stunnel
> send something that I don't see ?)
>
> Why does the server block ?

Probably you're seeing the initial exchange of data during the SSL
connection - certificates and so forth.  You may find that after this
is done, further exchanges will work OK with select().  Or maybe not --
I really don't know enough about SSL to predict this.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Threading and consuming output from processes
In article <[EMAIL PROTECTED]>, Jack Orenstein <[EMAIL PROTECTED]> wrote:

> I am developing a Python program that submits a command to each node
> of a cluster and consumes the stdout and stderr from each. I want all
> the processes to run in parallel, so I start a thread for each
> node. There could be a lot of output from a node, so I have a thread
> reading each stream, for a total of three threads per node. (I could
> probably reduce to two threads per node by having the process thread
> handle stdout or stderr.)
>
> I've developed some code and have run into problems using the
> threading module, and have questions at various levels of detail.
>
> 1) How should I solve this problem? I'm an experienced Java programmer
> but new to Python, so my solution looks very Java-like (hence the use
> of the threading module). Any advice on the right way to approach the
> problem in Python would be useful.
>
> 2) How many active Python threads is it reasonable to have at one
> time? Our clusters have up to 50 nodes -- is 100-150 threads known to
> work? (I'm using Python 2.2.2 on RedHat 9.)
>
> 3) I've run into a number of problems with the threading module. My
> program seems to work about 90% of the time. The remaining 10%, it
> looks like notify or notifyAll don't wake up waiting threads; or I
> find some other problem that makes me wonder about the stability of
> the threading module. I can post details on the problems I'm seeing,
> but I thought it would be good to get general feedback first.
> (Googling doesn't turn up any signs of trouble.)

One of my colleagues here wrote a sort of similar application in Python,
used threads, and had plenty of troubles with it.  I don't recall the
details.  Some of the problems could be specific to Python.  For
example, there are some extra signal handling issues - but this is not
to say that there are no signal handling issues with a multithreaded C
application.
For my money, you just don't get robust applications when you solve
problems like multiple I/O sources by throwing threads at them.  As
another followup has already mentioned, the classic "pre-threads"
solution to multiple I/O sources is the select(2) function, which allows
a single thread to serially process multiple file descriptors as data
becomes available on them.

When using select(), you should read from the file descriptor directly,
using os.read(fd, size), socketobject.recv(size), etc., to avoid reading
into local buffers as would happen with a file object.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
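[Editorial sketch, not from the original post: the classic shape of the
select() loop described above, reading from several pipe descriptors
with os.read().  The function name and buffer layout are my own.]

```python
import os
import select

def drain(fds):
    """Read from several pipe file descriptors as data becomes available,
    using select() so a single thread serves all of them."""
    buffers = dict((fd, []) for fd in fds)
    remaining = set(fds)
    while remaining:
        readable, _, _ = select.select(list(remaining), [], [])
        for fd in readable:
            chunk = os.read(fd, 4096)   # os.read, not a file object's read
            if chunk:
                buffers[fd].append(chunk)
            else:
                remaining.discard(fd)   # empty read means EOF
    return dict((fd, b''.join(parts)) for fd, parts in buffers.items())
```

With one pair of pipes per cluster node (stdout and stderr), this single
loop replaces the two or three reader threads per node.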
Re: Threading and consuming output from processes
Quoth Jack Orenstein <[EMAIL PROTECTED]>:
[ ... re alternatives to threads ]

| Thanks for your replies. The streams that I need to read contain
| pickled data. The select call returns files that have available input,
| and I can use read(file_descriptor, max) to read some of the input
| data. But then how can I convert the bytes just read into a stream for
| unpickling? I somehow need to take the bytes arriving for a given file
| descriptor and buffer them until the unpickler has enough data to
| return a complete unpickled object.
|
| (It would be nice to do this without copying the bytes from one place
| to another, but I don't even see how to solve the problem with
| copying.)

Note that the file object copies bytes from one place to another, via C
library stdio.  If we could only see the data in those stdio buffers, it
would be possible to use file objects with select() in more applications
(though not with pickle.)  Since input very commonly needs to be
buffered for various reasons, we end up writing our own buffer code, all
because stdio has no standard function that tells you how much data is
in a buffer.

But unpickling consumes an I/O stream, as you observe, so as a network
data protocol by itself, it's unsuitable for use with select().  I think
the only option would be a packet protocol - a count field followed by
the indicated amount of pickle data.  I suppose I would copy the
received data into a StringIO object, and unpickle that when all the
data has been received.

Incidentally, I think I read here yesterday that someone held a book
about Python programming up to some ridicule for suggesting that pickles
would be a good way to send data around on the network.  The problem was
supposed to have something to do with "overloading".  I have no idea
what he was talking about, but you might be interested in that issue.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
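[Editorial sketch, not from the original post: the count-field packet
protocol suggested above.  The 4-byte big-endian length prefix and the
class/function names are my own invention for illustration.]

```python
import pickle
import struct

def pack_message(obj):
    """Frame one pickled object: 4-byte big-endian count, then the data."""
    payload = pickle.dumps(obj)
    return struct.pack('>I', len(payload)) + payload

class Unframer:
    """Buffer bytes arriving in arbitrary chunks from a select() loop and
    hand back complete unpickled objects as they become available."""
    def __init__(self):
        self.buf = b''

    def feed(self, data):
        self.buf += data
        out = []
        while len(self.buf) >= 4:
            (count,) = struct.unpack('>I', self.buf[:4])
            if len(self.buf) < 4 + count:
                break                       # packet not complete yet
            payload = self.buf[4:4 + count]
            self.buf = self.buf[4 + count:]
            out.append(pickle.loads(payload))
        return out
```

Each select()-driven read just calls feed() with whatever os.read()
returned; the framing takes care of partial and coalesced packets.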
Re: Popen3 and capturestderr
Quoth Kenneth Pronovici <[EMAIL PROTECTED]>:
...
| If ignoreStderr=False, I use popen2.Popen4 so that stderr and stdout are
| intermingled.  If ignoreStderr=True, I use popen2.Popen3 with
| capturestderr=True so stderr doesn't appear in the output.  This
| functionality exists so I have an equivalent of command-line redirection
| of stderr, i.e. command 2>/dev/null.
...
| After some digging, I've decided that this behavior probably occurs
| because I am ignoring the pipe.childerr file object.  Indeed, if I call
| pipe.childerr.close() right after opening the pipe, my "ls" command that
| had been hanging completes normally.  However, other commands which
| actually attempt to write to stderr don't seem to like this very much.
|
| What is the right way to discard stderr when working with a pipe?  I
| want to consistently throw it away, and I don't see a good way to do
| this with the popen2 implementation.

Right, popen2 gives you about 3 options, out of probably dozens that you
could get with shell redirections.  On the other hand, the source is
available, and Python is an OOP language, so I assume there is no reason
you can't make a derived class that does just what you want.  In the
present case I guess that would mean something like

    null = os.open('/dev/null', os.O_RDWR)
    os.dup2(null, 0)
    os.dup2(null, 2)    (depending)
    os.close(null)

along with other stuff you can just copy from Popen4.

	Donn
--
http://mail.python.org/mailman/listinfo/python-list
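[Editorial sketch, not from the original post: the dup2() fragment above
fleshed out into a standalone fork/exec helper rather than a Popen
subclass.  The function name is mine; POSIX only.]

```python
import os

def spawn_discard_stderr(argv):
    """Run argv with stdin and stderr tied to /dev/null; wait for the
    child and return its raw exit status (as from os.waitpid)."""
    pid = os.fork()
    if pid == 0:
        null = os.open('/dev/null', os.O_RDWR)
        os.dup2(null, 0)        # child's stdin reads EOF from /dev/null
        os.dup2(null, 2)        # child's stderr is discarded
        os.close(null)
        try:
            os.execvp(argv[0], argv)
        finally:
            os._exit(127)       # exec failed; never fall back into parent code
    _, status = os.waitpid(pid, 0)
    return status
```

In a Popen3 subclass the same dup2() calls would go in the child branch
before the exec, exactly as the post suggests.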
Re: non blocking read()
In article <[EMAIL PROTECTED]>, Uwe Mayer <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I use select() to wait for a file object (stdin) to become readable. In
> that situation I wanted to read everything available from stdin and
> return to the select statement to wait for more.
>
> However, the file object's read method blocks if the number of bytes is
> 0 or negative.
>
> Is there no way to read everything a channel's currently got without
> blocking?

Yes, there is a way - os.read() (also known as posix.read()).

It's better not to mix buffered I/O (like the file object I/O functions)
with select() at all, because select() actually applies to system level
file descriptors and doesn't know anything about the buffer.  Get the
file descriptor with fileno(), and never refer to the file object again
after that.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
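[Editorial sketch, not from the original post: "read everything
currently available" with os.read(), using a zero select() timeout as a
pure poll so the call never blocks.  The function name is mine.]

```python
import os
import select

def read_available(fd, chunk=4096):
    """Return whatever is ready on fd right now, without blocking.
    A zero timeout turns select() into a poll."""
    pieces = []
    while select.select([fd], [], [], 0)[0]:
        data = os.read(fd, chunk)
        if not data:            # EOF
            break
        pieces.append(data)
    return b''.join(pieces)
```

For stdin, the fd would come from sys.stdin.fileno(), per the advice
above, and sys.stdin itself would not be touched again.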
Re: non blocking read()
In article <[EMAIL PROTECTED]>, Gustavo Córdova Avila <[EMAIL PROTECTED]> wrote:

> David Bolen wrote:
>
> > Jp Calderone <[EMAIL PROTECTED]> writes:
> >
> >> def nonBlockingReadAll(fileObj):
> >>     bytes = []
> >>     while True:
> >>         b = fileObj.read(1024)
> >>         bytes.append(b)
> >>         if len(b) < 1024:
> >>             break
> >>     return ''.join(bytes)
> >
> > Wouldn't this still block if the input just happened to end at a
> > multiple of the read size (1024)?
> >
> > -- David
>
> No, it'll read up to 1024 bytes or as much as it can, and
> then return an apropriatly sized string.

Depends.  I don't believe the original post mentioned that the file is a
pipe, socket or similar, but it's kind of implied by the use of select()
also mentioned.  It's also kind of implied by use of the term "block" -
disk files don't block.

If we are indeed talking about a pipe or something that really can
block, and you call fileobject.read(1024), it will block until it gets
1024 bytes.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: non blocking read()
In article <[EMAIL PROTECTED]>, Greg Ewing <[EMAIL PROTECTED]> wrote:

> Donn Cave wrote:
> > If we are indeed talking about a pipe or something that
> > really can block, and you call fileobject.read(1024),
> > it will block until it gets 1024 bytes.
>
> No, it won't, it will block until *some* data is
> available, and then return that (up to a maximum of
> 1024).

You can test this on your platform; I will append a 9 line program.
For me, fileobject.read blocks until all the requested data can be
returned.

> If the fd has just been reported by select() as ready
> for reading, then something is there, so the first
> read() call won't block.  But if there was exactly
> 1024 bytes there, the second read() call *will* block,
> because there are now 0 bytes available (which I think
> is what an earlier poster was hinting at).
>
> For this reason, if you have no way of knowing how
> much data to expect in advance, it's better to avoid
> making more than one read() call on a fd per select().
> If you don't get a whole line (or whatever chunk you're
> looking for), put what you've got into a buffer, and
> go back to select().  When you've built up a complete
> chunk in the buffer, process it.  Keep in mind that
> part of the next chunk may be in the tail of the
> buffer, so be prepared to chop a chunk off the
> beginning of the buffer and leave the rest for later.

Yes, this looks right to me, but I think we're talking about os.read(),
not fileobject.read().

> Another possibility that's been suggested is putting
> the fd into non-blocking mode.  I wouldn't recommend
> that; the last time I tried it (which was quite a long
> time ago) select() and non-blocking I/O didn't mix
> well.  While it may be possible to get it to work, I
> don't think you'd gain much.  You need to understand
> that there's no guaranteed relationship between the
> chunks of data written to one end of a pipe or socket
> and those returned by reading the other end.
> So you'd still need to be prepared to buffer and re-chunk the
> data.  You'd end up doing all of what I outlined above,
> with the extra complication of non-blocking I/O thrown
> in.  I don't see any advantage in it.

Exactly.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
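[The nine-line program mentioned above is not preserved in this archive.
An editorial reconstruction of what it presumably demonstrated, using a
writer thread in place of a second process: on a pipe, a file object's
read(1024) does not return at the first short chunk, it waits until 1024
bytes (or EOF) have arrived.]

```python
import os
import threading
import time

def demonstrate():
    rfd, wfd = os.pipe()

    def writer():
        os.write(wfd, b'x' * 512)     # select() would report readable here
        time.sleep(0.5)
        os.write(wfd, b'y' * 512)     # read(1024) only returns after this
        os.close(wfd)

    threading.Thread(target=writer).start()
    f = os.fdopen(rfd, 'rb')
    start = time.time()
    data = f.read(1024)               # blocks across the writer's pause
    elapsed = time.time() - start
    f.close()
    return len(data), elapsed
```

An os.read(rfd, 1024) at the same point would instead return the first
512 bytes immediately, which is the distinction the thread is about.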
Re: results of division
In article <[EMAIL PROTECTED]>, Brad Tilley <[EMAIL PROTECTED]> wrote:

> > Brad Tilley wrote:
> >
> >> What is the proper way to limit the results of division to only a few
> >> spaces after the decimal? I don't need rocket-science like precision.
> >> Here's an example:
> >>
> >> 1.775 is as exact as I need to be and normally, 1.70 will do
...
> I'm summing up the bytes in use on a hard disk drive and generating a
> report that's emailed based on the percentage of the drive in use.

Still guessing a little about what you're trying to do - you're probably
implicitly or explicitly invoking the "repr" function on these values
(implicitly, for example, via repr() or str() on a sequence of them.)
So,

    a = [1.775, 1.949]
    print a

yields

    [1.7749999999999999, 1.9490000000000001]

You will get something more like what you want with the str() function
instead:

    str(1.775) == '1.775'

    from types import FloatType
    class ClassicFloat(FloatType):
        def __repr__(self):
            return self.__str__()
    print map(ClassicFloat, [1.775, 1.949])

yields

    [1.775, 1.949]

(Seems to me the standard float type behaved like this in Python 1.5.4,
hence "classic".)

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
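[Editorial side note, not from the original post: if the goal is simply
a fixed number of decimal places in the emailed report, ordinary string
formatting gives that directly.  The function name and report wording
are mine.]

```python
def report_line(used, total):
    """Render a disk-usage percentage with two places after the decimal."""
    pct = 100.0 * used / total
    return "disk usage: %.2f%%" % pct
```

The %.2f conversion rounds for display only; the underlying float is
unchanged, which is usually all a report needs.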
Re: subprocess vs. proctools
Keith Dart <[EMAIL PROTECTED]> wrote:

|>> Oh, I forgot to mention that it also has a more user- and
|>> programmer-friendly ExitStatus object that processes can return. This
|>> is directly testable in Python:
|>>
|>> proc = proctools.spawn("somecommand")
|>> exitstatus = proc.wait()
|>>
|>> if exitstatus:
|>>     print "good result (errorlevel of zero)"
|>> else:
|>>     print exitstatus  # prints message with exit value

This is indeed how the shell works, though the actual failure value is
rarely of any interest.  It's also, in a more general sense, how C works
- whether errors turn out to be "true" or "false", in either case you
test for that status (or you don't.)  Python doesn't work that way;
there is normally no such thing as an error return.  An idiomatic Python
interface would be

    try:
        proc = proctools.spawn(command)
        proc.wait()
        print 'good result'
    except proctools.error, ev:
        print >> sys.stderr, '%s: %s' % (proc.name, ev.text)

[... list of features ...]

| You always invoke the spawn* functions with a string. This is parsed by
| a shell-like parser (the shparser module that comes with it), but no
| /bin/sh is invoked. The parser can handle single and double quotes, and
| backslash escapes.

It was sounding good up to here.  A lot depends on the quality of the
parser, but it's so easy to support a list of arguments that gets passed
unmodified to execve(), and such an obvious win in the common case where
the command parameters are already separate values, that an interface
where you "always" have to encode them in a string to be submitted to
your parser seems to be ignoring the progress that os.spawnv and
popen2.Popen3 made on this.  Of course you don't need to repeat their
blunders either and accept either string or list of strings in the same
parameter, which makes for kind of a shabby API, but maybe a keyword
parameter or a separate function would make sense.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: how to start a new process while the other ist running on
In article <[EMAIL PROTECTED]>, Erik Geiger <[EMAIL PROTECTED]> wrote:

> Fredrik Lundh schrieb:
>
> > Erik Geiger wrote:
> > [...]
> >> How to start a shell script without waiting for the exit of that
> >> shell script? It shall start the shell script and immediately execute
> >> the next python command.
> >
> > if you have Python 2.4, you can use the subprocess module:
> >
> >     http://docs.python.org/lib/module-subprocess.html
> >
> > see the spawn(P_NOWAIT) example for how to use it in your case:
> >
> >     http://docs.python.org/lib/node236.html
>
> Thats what I've tried, but it did not work. Maybe it's because I want to
> start something like su -c '/path/to/skript $parameter1 $parameter2' user
> I don't understand the syntax of spawn os.spawnlp(os.P_NOWAIT, "/path/to
> script", "the script again?", "the args for the script?")

Unfortunately this particular case kind of dilutes the advantages of
spawnv.  In the common case, parameter1 et al. would be submitted
directly as the parameter list.  I believe it may be clearer to start by
thinking about the spawnv() function -

    os.spawnv(os.P_NOWAIT, path, [cmdname, parameter1, parameter2])

If one of the parameters is itself another command, then of course it
has to be rendered as a string

    os.spawnv(os.P_NOWAIT, '/bin/su',
              ['su', '-c', '%s %s %s' % (cmd, parameter1, parameter2)])

so you have almost as much work to scan the parameters for shell
metacharacters as you would have with system().

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Optional Static Typing - Haskell?
Quoth [EMAIL PROTECTED] (Alex Martelli):
...
| Haskell's a great language, but beware: its static typing is NOT
| optional -- it's rigorous. It can INFER types for you (just like, say,
| boo), that's a different issue. It also allows bounded genericity at
| compile time (like, say, C++'s templates without the hassles), and
| that's yet another (typeclasses are a great mechanism, btw).

He didn't dwell much on it, but there was some mention of type
inference, kind of as though that could be taken for granted.  I guess
this would necessarily be much more limited in scope than what Haskell
et al. do.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Optional Static Typing - Haskell?
Quoth [EMAIL PROTECTED] (Alex Martelli):
| Donn Cave <[EMAIL PROTECTED]> wrote:
...
| > He didn't dwell much on it, but there was some mention of type
| > inference, kind of as though that could be taken for granted.
| > I guess this would necessarily be much more limited in scope
| > than what Haskell et al. do.
|
| Assuming that by "he" you mean GvR, I think I saw that too, yes. And
| yes, a language and particularly a typesystem never designed to
| facilitate inferencing are hard-to-impossible to retrofit with it in as
| thorough a way as one that's designed around the idea. (Conversely,
| making a really modular system work with static typing and inferencing
| is probably impossible; in practice, the type inferencer must examine
| all code, or a rather copious summary of it... it can't really work
| module by module in a nice, fully encapsulated way...).

Well, I would assume that a module in a static system would present a
typed external interface, and inference would apply only within the
module being compiled.  For example, Objective CAML revised syntax -

    $ cat mod.ml
    module T =
      struct
        type op = [On | Off];
        value print t a =
          match t with
          [ On -> print_string a
          | Off -> () ];
        value decide t a b =
          match t with
          [ On -> a
          | Off -> b ];
      end;
    $ ocamlc -i -pp camlp4r mod.ml
    module T :
      sig
        type op = [ On | Off ];
        value print : op -> string -> unit;
        value decide : op -> 'a -> 'a -> 'a;
      end;

This is fairly obvious, so I'm probably missing the point, but the
compiler here infers types and produces an interface definition.  The
interface definition must be available to any other modules that rely on
this one, so they are relieved of any need to examine code within this
module.  There might be tricky spots, but I imagine the Objective CAML
folks would object to an assertion like "making a really modular system
work with static typing and inferencing is probably impossible"!

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Optional Static Typing - Haskell?
Quoth Mike Meyer <[EMAIL PROTECTED]>:
| [EMAIL PROTECTED] (Alex Martelli) writes:
...
|> But then, the above criticism applies: if interface and implementation
|> of a module are tightly coupled, you can't really do fully modular
|> programming AND static typing (forget type inferencing...).
|
| I beg to differ. Eiffel manages to do this quite well. Then again,
| every Eiffel environment comes with tools to extract the interface
| information from the code. With SmartEiffel, it's a command called
| "short". Doing "short CLASSNAME" is like doing "pydoc modulename",
| except that it pulls routine headers and DbC expressions from the code,
| and not just from comments.

And you probably think Eiffel supports fully modular programming, as I
thought Objective CAML did.  But Alex seems not to agree.  The way I
understand it, his criteria go beyond language-level semantics to
implementation details, like whether a change to a module may require
dependent modules to be recompiled when they don't need to be rewritten.
I don't know whether it's a reasonable standard, but at any rate
hopefully he will explain it better than I did, and you can decide for
yourself whether it's an important one.

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: Optional Static Typing
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Alex Martelli) wrote:

> John Roth <[EMAIL PROTECTED]> wrote:
> ...
> > question: static typing is an answer. What's the question?
> > (That's a paraphrase.)
> >
> > The answer that everyone seems to give is that it
> > prevents errors and clarifies the program.
> ...
> > Most of the kinds of error that static typing is supposed
> > to catch simply don't persist for more than a minute when
> > you do test driven development.
>
> ...which is exactly the point of the famous post by Robert ("Uncle Bob")
> Martin on another artima blog,
> http://www.artima.com/weblogs/viewpost.jsp?thread=4639 .

Wait a minute, isn't he the same fellow whose precious dependency
inversion principle shows us the way to support fully modular
programming?  What would he say about unit testing to catch up with
changes in dependent modules, do you think?  Do we have a combinatorial
explosion potential here?

	Donn Cave, [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list