Tk and raw_input segfault
Python newbie disclaimer: on.

I am running an app with a Tkinter screen in one thread and command-line input in another thread using raw_input(). First question - is this legal, and should it run without issue? If not, can you point me to a description of why?

While updating objects on the screen I get a segfault after an indeterminate number of updates. It doesn't seem to matter how quickly the updates occur, but it does segfault faster when there are more objects on the screen (as I said, failure time seems to have a random factor added to it). Commenting out the raw_input() makes the problem go away: I can schedule as many updates as I wish without error. And it doesn't seem to matter if I actually hit any keys for raw_input(); it can just sit there. I have read other posts about readline library failures with Esc O M sequences and could not recreate those failures.

This happens on 2 separate machines.

1st: development workstation
tk-8.4.6-28
tcl-8.4.6-23
Python 2.3.3 (#1, Feb 5 2005, 16:30:27)
[GCC 3.3.3 (SuSE Linux)] on linux2
Linux 2.6.5-7.151-smp #1 SMP Fri Mar 18 11:31:21 UTC 2005 x86_64 x86_64 x86_64 GNU/Linux

2nd: target machine
tk-8.4.6-37
tcl-8.4.6-26
Python 2.3.3 (#1, Apr 6 2004, 01:47:39)
[GCC 3.3.3 (SuSE Linux)] on linux2
Linux 2.6.4-52-default #1 Wed Apr 7 02:08:30 UTC 2004 i686 i686 i386 GNU/Linux

I have tried to simplify the code as much as possible to make the error more visible (no actual updates of the screen, etc.). I've uncommented the "after" code line so it fails much more rapidly. I know it's ugly repeatedly scheduling the after, but the same code runs without the raw_input, and it shows the error more readily. To see the error, start the code and click the button repeatedly until it segfaults. It will still segfault if you remove the 'command' function and call update directly - it just takes a bit longer and your wrist will probably get tired.

Thanks in advance for any responses.

--

from Tkinter import *
from time import sleep
import thread

class Test(Frame):
    def __init__(self, parent=None):
        Frame.__init__(self, parent, bg='white')
        # Button Definition: CLEAR ALL OUTPUTS
        caB = Button( self, text='CLEAR ALL\nOUTPUTS',
                      #command = (lambda: self.update()) )
                      command = (lambda: self.command()) )
        caB.pack()
        self.updateCount = 0
        self.commanded = 0
        self.update()

    def command( self ):
        self.commanded = 1
        self.update()

    def update( self ):
        self.updateCount += 1
        print 'updatin... num = ', self.updateCount
        self.after( 1, self.update )

def test():
    root = Tk()
    root.geometry('640x480')
    Test().pack()
    root.mainloop()

if __name__ == '__main__':
    scanTID = thread.start_new_thread( test, () )
    sleep(1)
    while True:
        f = raw_input()
        print 'f=', f
        sleep(1)

--
http://mail.python.org/mailman/listinfo/python-list
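A minimal sketch of the arrangement usually recommended instead: keep every Tkinter call in the main thread and let the raw_input() loop run in a worker thread, handing lines over through a Queue that the GUI polls with after(). The widget and function names here (root, label, read_input, poll_queue) are illustrative, not taken from the post above.

import Queue
import thread
from Tkinter import Tk, Label

q = Queue.Queue()

def read_input():
    # worker thread: blocking console input, no Tkinter calls here
    while True:
        q.put(raw_input())

def poll_queue():
    # main (GUI) thread: drain the queue and update the widget
    try:
        while True:
            label.config(text=q.get_nowait())
    except Queue.Empty:
        pass
    root.after(100, poll_queue)

root = Tk()
label = Label(root, text='waiting for input...')
label.pack()
thread.start_new_thread(read_input, ())
poll_queue()
root.mainloop()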
Re: ANN: Cleveland Area Python Interest Group
Is the first meeting on June 6th? I only ask because of the short notice. If so, I'll be there.
--
http://mail.python.org/mailman/listinfo/python-list
improving performance of python webserver running python scripts in cgi-bin
I am using a simple python webserver (see code below) to serve up python scripts located in my cgi-bin directory.

import BaseHTTPServer
import CGIHTTPServer

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ['/cgi-bin']

httpd = BaseHTTPServer.HTTPServer(('', 8000), Handler)
httpd.serve_forever()

This works fine, but now I would like to combine the python scripts into the server program to eliminate starting the python interpreter on each script call. I am new to python, and was wondering if there is a better technique that will be faster.

Also, can someone recommend an alternative approach to httpd.serve_forever()? I would like to perform other python functions (read a serial port, write to an Ethernet port, write to a file, etc.) inside the web server program above. Is there an example of how to modify the code for an event loop style of operation, where the program mostly performs my python I/O functions until an HTTP request comes in, and then it breaks out of the I/O operations to handle the HTTP request?

thanks
Dale
--
http://mail.python.org/mailman/listinfo/python-list
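A rough sketch of the first idea, folding the cgi-bin work into the server itself so no new interpreter is started per request: write a request handler whose do_GET calls ordinary Python functions directly. The get_status function and the /status path are placeholders, not part of the original setup.

import BaseHTTPServer

def get_status():
    # placeholder for whatever a cgi-bin script currently computes
    return "OK"

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/status':
            body = get_status()
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain')
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

httpd = BaseHTTPServer.HTTPServer(('', 8000), Handler)
httpd.serve_forever()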
two brief question about abstractproperty
I've been reading PEP 3119 and the documentation for ABCs in the python documentation. According to the PEP, the following should yield an error, because the abstract property has not been overridden:

import abc

class C:
    __metaclass__ = abc.ABCMeta
    @abc.abstractproperty
    def x(self):
        return 1

c = C()

but an error is not raised, nor for the case where I do:

class D(C):
    pass

d = D()

Have I misunderstood the documentation? Why doesn't this raise an error? I see the same behavior with the @abstractmethod.

Also, why isn't it possible to declare an abstract read/write property with the decorator syntax:

class C:
    __metaclass__ = abc.ABCMeta
    @abc.abstractproperty
    def x(self):
        pass
    @x.setter
    def x(self, val):
        "this is also abstract"

--
http://mail.python.org/mailman/listinfo/python-list
Re: two brief question about abstractproperty
On Mar 12, 11:16 pm, Darren Dale wrote:
> I've been reading PEP 3119 and the documentation for ABCs in the
> python documentation. According to the PEP, the following should yield
> an error, because the abstract property has not been overridden:
>
> import abc
> class C:
>     __metaclass__ = abc.ABCMeta
>     @abc.abstractproperty
>     def x(self):
>         return 1
> c=C()
>
> but an error is not raised

I guess the problem was not using the appropriate syntax for python 3:

class C(metaclass=abc.ABCMeta):
    ...

> Also, why isn't it possible to declare an abstract read/write property
> with the decorator syntax:
>
> class C:
>     __metaclass__ = abc.ABCMeta
>     @abc.abstractproperty
>     def x(self):
>         pass
>     @x.setter
>     def x(self, val):
>         "this is also abstract"

It seems like this syntax should be possible: instantiation would check that if C.x is an abstract property and the x.setter has been specified, then subclasses of C need to specify a setter before they can be instantiated.
--
http://mail.python.org/mailman/listinfo/python-list
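As an aside for later readers: in newer Python 3 releases (3.4 and up for abc.ABC, 3.3 and up for stacking property over abstractmethod), an abstract read/write property can be spelled without abstractproperty at all. A sketch of that later idiom, not something available at the time of the exchange above:

import abc

class C(abc.ABC):
    @property
    @abc.abstractmethod
    def x(self):
        ...

    @x.setter
    @abc.abstractmethod
    def x(self, val):
        ...

class D(C):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, val):
        self._x = val

D()    # fine: both accessors are overridden
# C() raises TypeError: can't instantiate abstract class C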
multiprocessing in subpackage on windows
I have two really simple scripts:

C:\Python27\Scripts\foo
---
if __name__ == '__main__':
    import bar
    bar.main()

C:\Python27\Lib\site-packages\bar.py
---
from multiprocessing import Pool

def task(arg):
    return arg

def main():
    pool = Pool()
    res = pool.apply_async(task, (3.14,))
    print res.get()

if __name__ == '__main__':
    main()

If I run "python C:\[...]bar.py", 3.14 is printed. If I run "python C:\[...]foo", I get a long string of identical errors:

  File "", line 1 in
  File "C:\Python27\lib\multiprocessing\forking.py", line 346, in main
    prepare(preparation_data)
  File "C:\Python27\lib\multiprocessing\forking.py", line 455, in prepare
    file, path_name, etc = imp.find_module(main_name, dirs)
ImportError: No module named foo

This same scheme works on Linux. What step have I missed to allow a script to run code from a subpackage that uses multiprocessing?

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: Grab metadata from images and save to file, batch mode
On 4/1/16 2:20 PM, accessnew...@gmail.com wrote:
> I have a directory (and sub-directories) full of images that I want to
> cycle through and grab certain metadata values and save those values to
> a single row in a csv file. I would like to tack on the full path name
> to the row as a separate value.
>
> Folder
> C:\Images\Family
>     Brother.jpg
>     Sister.jpg
>     Mom.jpg
>
> Keys/Values
>     Original Date/Time
>     User Name
>     File Name
>
> Thus, data might look like this in a Family.csv file
>
> 2014:11:10 13:52:12; BillyBob111; Brother.jpg; C:\Images\Family\Brother.jpg
> 2015:10:54 11:45:34; BillyBob111; Sister.jpg; C:\Images\Family\Sister.jpg
> 2010:10:31 19:22:11; SallySue232; Mom.jpg; C:\Images\Family\Mom.jpg
>
> Big time noob. Much of what I have read cites command line examples
> dealing with single files and no info as to how to extract specific keys
> and their values. What module would some of you users recommend I use
> (I want it to be python as that is what I am trying to learn)? Can you
> give me some coding suggestions to get me going? I haven't found any
> substantive scripts to use as guides. Many thanks in advance

Hi accessnewbie,

I do a fair amount of media processing and automation with python. Look at exiftool <http://www.sno.phy.queensu.ca/~phil/exiftool/>. There are python bindings as well <http://smarnach.github.io/pyexiftool/>.

Dale
--
https://mail.python.org/mailman/listinfo/python-list
Introduction
I just sent my first post; I've been using python for about 12 years to automate media production tasks. Lately I've been adding testing (thanks Ned Batchelder: <http://nedbatchelder.com/text/test0.html>) and documentation with Sphinx/rst.

Thanks

Dale Marvin
digital OutPost
--
https://mail.python.org/mailman/listinfo/python-list
Binding a variable?
Hi everyone,

Is it possible to bind a list member or variable to a variable such that

temp = 5
list = [ temp ]
temp == 6
list

would show

list = [ 6 ]

Thanks in advance!

Paul
--
http://mail.python.org/mailman/listinfo/python-list
Re: Binding a variable?
Thanks everyone for your comments and suggestions! I haven't quite decided which approach I'll take, but it's nice to have some options.

Paul

Tom Anderson wrote:
>On Fri, 21 Oct 2005, Paul Dale wrote:
>
>>Is it possible to bind a list member or variable to a variable such that
>>
>>temp = 5
>>list = [ temp ]
>>temp == 6
>>list
>>
>>would show
>>
>>list = [ 6 ]
>
>As you know by now, no. Like any problem in programming, this can be
>solved with a layer of abstraction: you need an object which behaves a bit
>like a variable, so that you can have multiple references to it. The
>simplest solution is to use a single-element list:
>
>>>>temp = [None] # set up the list
>>>>temp[0] = 5
>>>>list = [temp]
>>>>temp[0] = 6
>>>>list
>[[6]]
>
>I think this is a bit ugly - the point of a list is to hold a sequence of
>things, so doing this strikes me as a bit of an abuse.
>
>An alternative would be a class:
>
>class var:
>    def __init__(self, value=None):
>        self.value = value
>    def __str__(self): # not necessary, but handy
>        return "<<" + str(self.value) + ">>"
>
>>>>temp = var()
>>>>temp.value = 5
>>>>list = [temp]
>>>>temp.value = 6
>>>>list
>[<<6>>]
>
>This is more heavyweight, in terms of both code and execution resources,
>but it makes your intent clearer, which might make it worthwhile.
>
>tom

--
http://mail.python.org/mailman/listinfo/python-list
Re: Redirect os.system output
You might want to try python expect, which gives you a very simple and scriptable interface to a process.

http://pexpect.sourceforge.net/

I've been using it on windows to automate a few things.

Cheers,

Paul

jas wrote:
>Kent,
>  Yes, your example does work. So did os.popen...however, the problem
>is specific to "cmd.exe".
>  Have you tried that yet?
>
>Thanks!
>
>Kent Johnson wrote:
>>jas wrote:
>>>Ok, I tried this...
>>>
>>>C:\>python
>>>Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32
>>>Type "help", "copyright", "credits" or "license" for more information.
>>>>>>import subprocess as sp
>>>>>>p = sp.Popen("cmd", stdout=sp.PIPE)
>>>>>>result = p.communicate("ipconfig")
>>>
>>>'result' is not recognized as an internal or external command,
>>>operable program or batch file.
>>>
>>>basically I was hoping to send the "ipconfig" command to cmd.exe and
>>>store the result in the "result" variable. But you can see there was
>>>an error with result.
>>
>>This works for me:
>>import subprocess as sp
>>p = sp.Popen("ipconfig", stdout=sp.PIPE)
>>result = p.communicate()[0]
>>print result
>>
>>Kent

--
http://mail.python.org/mailman/listinfo/python-list
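If the goal really is to drive cmd.exe itself rather than a single program, the missing piece in the session quoted above is a stdin pipe. A sketch, assuming Windows and Python 2.4+ (untested here):

import subprocess as sp

p = sp.Popen('cmd', stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.STDOUT)
out = p.communicate('ipconfig\r\nexit\r\n')[0]   # feed commands, then exit
print out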
Re: Redirect os.system output
pexpect is POSIX compliant and works under Cygwin. I haven't tried it under pythonw.

Just install cygwin (including python), then follow the standard instructions for pexpect.

There was one small trick I had to do to get cygwin working totally properly on my machine, which was to run a rebaseall. Rebaseall sets the memory addresses for the DLLs or something like that. However, there is a slight problem: the rebaseall runs inside cygwin and uses one of the DLLs. To get around this I changed the rebaseall script to write its commands to a text file and then ran those commands in a DOS cmd shell. After that everything has worked without problem.

Good luck,

Paul

jas wrote:
>Paul,
>  I did check out the PExpect, however, I thought it was not ported for
>Windows. Did you find a ported version? If not, what did you have to
>do to be able to use it?
>
>Thanks
>
>Paul Dale wrote:
>>You might want to try python expect which gives you a very simple and
>>scriptable interface to a process.
>>
>>http://pexpect.sourceforge.net/
>>
>>I've been using it on windows to automate a few things.
>>
>>Cheers,
>>
>>Paul

--
http://mail.python.org/mailman/listinfo/python-list
Re: xml.dom.minidom - parseString - How to avoid ExpatError?
Hi Greg,

Not really an answer to your question, but I've found 4Suite ( http://4suite.org/index.xhtml ) quite useful for my XML work, and the articles linked to from there authored by Uche Ogbuji to be quite informative.

Best,

Paul

Gregory Piñero wrote:
> Thanks, John. That was all very helpful. It looks like one option
> for me would be to put cdata[ around my text with all the weird
> characters. Otherwise running it through one of the SAX utilities
> before parsing might work.
>
> I wonder if the sax utilities would give me a performance hit. I have
> 6000 xml files to parse at 100KB each.
>
> -Greg

--
http://mail.python.org/mailman/listinfo/python-list
Tkinter problem
Hi everybody!

I've recently installed python2.4.2 on Fedora 4 (from downloaded sources), but it appeared that I can't use the Tkinter module:

>>> import Tkinter
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 38, in ?
    import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libBLT24.so: cannot open shared object file: No such file or directory

I tried the solution given in the README file for RH9 (./configure --enable-unicode=ucs4), despite the fact that they wrote the newer version didn't need this hack. This is what I had after the make instruction:

INFO: Can't locate Tcl/Tk libs and/or headers
*** WARNING: renaming "array" since importing it failed: build/lib.linux-i686-2.4/array.so: undefined symbol: PyUnicodeUCS2_FromUnicode
*** WARNING: renaming "_testcapi" since importing it failed: build/lib.linux-i686-2.4/_testcapi.so: undefined symbol: PyUnicodeUCS2_Decode
*** WARNING: renaming "unicodedata" since importing it failed: build/lib.linux-i686-2.4/unicodedata.so: undefined symbol: PyUnicodeUCS2_FromUnicode
*** WARNING: renaming "_locale" since importing it failed: build/lib.linux-i686-2.4/_locale.so: undefined symbol: PyUnicodeUCS2_AsWideChar
*** WARNING: renaming "cPickle" since importing it failed: build/lib.linux-i686-2.4/cPickle.so: undefined symbol: PyUnicodeUCS2_AsUTF8String
*** WARNING: renaming "pyexpat" since importing it failed: build/lib.linux-i686-2.4/pyexpat.so: undefined symbol: PyUnicodeUCS2_DecodeUTF8
*** WARNING: renaming "_multibytecodec" since importing it failed: build/lib.linux-i686-2.4/_multibytecodec.so: undefined symbol: PyUnicodeUCS2_FromUnicode
running build_scripts

It seems that --enable-unicode=ucs4 wasn't the right way. I tried another hack: ./configure --enable-shared, but it still couldn't locate the Tcl/Tk libs and/or headers. Then I installed the RPMs:

[EMAIL PROTECTED] Python-2.4.2]# rpm -q tk
tk-8.4.9-3
[EMAIL PROTECTED] Python-2.4.2]# rpm -q tcl
tcl-8.4.9-3
[EMAIL PROTECTED] Python-2.4.2]# rpm -q tkinter
tkinter-2.4.1-2

and tried to do the hacks above, but it still couldn't find these libs. What can I do?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Tkinter problem
Thanks, but I've got another question:

can't find Tcl configuration script "tclConfig.sh"

This is what I received trying to install TkBLT. What is tclConfig.sh? I did install tcl/tk 8.4.9-3 as I mentioned before. I tried to find this file, but I don't have it in my filesystem. How do I get it?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Tkinter problem
Thanks! At this moment I can see the first python generated Tk window on my screen. It's great ;-))) -- http://mail.python.org/mailman/listinfo/python-list
command line reports
Is there a module somewhere that intelligently deals with reports to the command line? I would like to report the progress of some pretty lengthy simulations, and currently I have the new reports written on a new line rather than overwriting the previous report.

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: command line reports
Peter Hansen wrote:
> Darren Dale wrote:
>> Is there a module somewhere that intelligently deals with reports to the
>> command line? I would like to report the progress of some pretty lengthy
>> simulations, and currently I have the new reports written on a new line
>> rather than overwriting the previous report.
>
> You mean you want sys.stdout.write(report + '\r') instead of "print
> report" ?
>
> It's not really clear what you want. What's a "report" to you?
>
> -Peter

I am printing something like

trial 1 of 100
trial 2 of 100
...

so I get 100 lines by the time the code is finished. I would like to replace the previous report with the current one, so I only use one line by the time the code is finished. I was also hoping that there was a set of tools out there for spinners, meters, etc...
--
http://mail.python.org/mailman/listinfo/python-list
Re: command line reports
Bengt Richter wrote:
> On Thu, 11 Aug 2005 15:43:23 -0400, Darren Dale <[EMAIL PROTECTED]> wrote:
>
>>Peter Hansen wrote:
>>
>>> Darren Dale wrote:
>>>> Is there a module somewhere that intelligently deals with reports to
>>>> the command line? I would like to report the progress of some pretty
>>>> lengthy simulations, and currently I have the new reports written on a
>>>> new line rather than overwriting the previous report.
>>>
>>> You mean you want sys.stdout.write(report + '\r') instead of "print
>>> report" ?
>>>
>>> It's not really clear what you want. What's a "report" to you?
>>>
>>> -Peter
>>
>>I am printing something like
>>
>>trial 1 of 100
>>trial 2 of 100
>>...
>
> Peter's suggestion will work, but it's easy to get something like
>
> >>> import sys, time
> >>> def test():
> ...     for i in xrange(5):
> ...         sys.stdout.write(('trial %s of 5'%(i+1)) + '\r')
> ...         time.sleep(.25)
> ...     print "We're done!"
> ...
> >>> test()
> We're done!

Thanks, I didn't realize that \r is different from \n.
--
http://mail.python.org/mailman/listinfo/python-list
Re: up to date books?
I highly recommend the "Safari" library service from O'Reilly ( http://safari.oreilly.com ). You can check out all of the books listed below and about 10,000 more. The library contains much more than just O'Reilly's books, but they are, of course, all in there.

The first 2 weeks are free; after that it's $20/month. You can check out 10 books at a time and you have to keep them for a month. You can download chapters, print pages, and search all the books in the library, as well as search across books you've checked out. It's a great way to get access to a broad range of technical books.

One thing to be careful of: as the old books are there too, it's possible to grab a first version when you might want a second or third version. Always list by date and make sure you're looking at the new stuff.

Cheers,

Paul

Adriaan Renting wrote:
>I learned Python from the "Learning Python" book that's first on Alessandros
>list. If you have the Second Edition, that includes coverage for Python 2.3, I
>think you have quite a nice introductory book.
>As a reference book "Python in a Nutshell" and of course the Python
>documentation itself are quite good.
>
>Adriaan
>
>>>> Alessandro Bottoni <[EMAIL PROTECTED]> 08/18/05 9:02 am >>>
>John Salerno wrote:
>
>>hi all. are there any recommendations for an intro book to python that
>>is up-to-date for the latest version?
>
>I do not know how much up-to-date they are but I have to suggest you these
>books:
>
>- Learning Python
>By Mark Lutz and David Ascher
>published by O'Reilly
>Most likely the best introductory book on Python
>
>- Python Cookbook
>By Alex Martelli and David Ascher
>published by O'Reilly
>By far the most useful book on Python after your first week of real use of
>this language
>
>Also, the fundamental
>- Programming Python (the 2nd edition ONLY)
>By Mark Lutz
>published by O'Reilly
>Is very useful for understanding the most inner details of Python
>
>>would reading a book from a year or two ago cause me to miss much?
>
>No. Python did not changed too much since rel. 1.5. You can still use a book
>published in 2001 as a introductory book (as I do). The changes are
>exhaustively described both in the official documentation and in the very
>fine "what's new in..." articles written by Andrew Kuchlin for every new
>release (see www.python.org).
>
>CU
>
>---
>Alessandro Bottoni

--
http://mail.python.org/mailman/listinfo/python-list
Nested Regex Conditionals
Hi All,

I know that several of you will probably want to reply "you should write a parser", and I may. For that matter, any tips on theory in that direction would be appreciated. However, if you would indulge me in my regex question I would also be most grateful.

I'm writing an edi parser, and to be in compliance with the specification I need to have conditionals that are dependent on conditionals. In some regular expression implementations this is possible. The code ...

#!/usr/bin/env python

import re

pattern = re.compile(r"""
    (?P<first>(first))
    (?(first)
        (?P<second>(second))
    )
    (?(second)
        (?P<third>(third))
    )
""", re.VERBOSE)

string = 'firstsecondthird'
match = re.match(pattern, string)
print match.group('first', 'second', 'third')

Prints

('first', 'second', None)

and I haven't found any way to have a second conditional, nor any reference to it in any documentation I've found. Am I missing something, and is it possible? Or is it not possible in python? It seems like it might be a bug, as it knows there is a group (None, instead of an IndexError), but it doesn't match ...

Thanks for any help :)

Paul
--
http://mail.python.org/mailman/listinfo/python-list
Re: Should I move to Amsterdam?
>But yes, the Netherlands is a highly civilised country - up there with
>Denmark and Canada, and above the UK, France or Germany, IMNERHO. I'm not
>going to bother comparing it to the US!

How strange that you put Canada so high on your list.
--
http://mail.python.org/mailman/listinfo/python-list
How to tell if an exception has been caught ( from inside the exception )?
Hi everyone,

I'm writing an exception that will open a trouble ticket for certain events, things like network failure. I thought I would like to have it only open a ticket if the exception is not caught. Is there a way to do this inside the Exception?

As far as I can see there are only two events called on the exception, __init__ and __del__, both of which will be called whether or not the exception is caught (right?).

Am I missing something, or is there a way to do this with exceptions?

Thanks!

Paul
--
http://mail.python.org/mailman/listinfo/python-list
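One sketch of a way to act only on exceptions that were never caught: leave the exception class alone and do the work in sys.excepthook, which Python calls only for uncaught exceptions. NetworkFailure and open_trouble_ticket are hypothetical names, not from the post above.

import sys

class NetworkFailure(Exception):
    pass

def open_trouble_ticket(exc):
    # hypothetical: file the ticket here
    print 'ticket opened for', exc

def ticket_hook(exc_type, exc_value, tb):
    if issubclass(exc_type, NetworkFailure):
        open_trouble_ticket(exc_value)
    sys.__excepthook__(exc_type, exc_value, tb)

sys.excepthook = ticket_hook

raise NetworkFailure('link down')   # uncaught, so a ticket is opened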
function namespaces
Hi,

I have a variable saved in a file like this:

#contents of myfile.py:
testvar = [1,2,3,4]

and I am trying to write a function that does something like this:

def myfunction(filename):
    execfile(filename)
    print testvar

The problem I am running into is that the global name testvar is not defined, but I don't understand why. I tried calling dir() in the function, which does list testvar. I tried declaring testvar a global before calling execfile, and that didn't help. If I just run execfile('myfile.py') in the interactive interpreter, testvar is loaded and I can continue my work. What am I doing wrong?
--
http://mail.python.org/mailman/listinfo/python-list
Re: function namespaces
> Generally, I avoid execfile within a function. What's your use case?
> There may be a better way to approach this problem...

I am writing a simulation that loads some predefined constants, depending on the options called by the user. I originally had it set up to parse the file and load the constants explicitly, but then I thought that with the existence of this handy builtin execfile, I could write my constants file in python and just load it. I guess it is not the best approach. Thanks for the advice though (everyone), I learned something.
--
http://mail.python.org/mailman/listinfo/python-list
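For the record, a sketch of the usual way to keep execfile workable inside a function: hand it an explicit dictionary so the constants land somewhere you control instead of in the function's local scope (myfile.py as defined in the original post).

def myfunction(filename):
    namespace = {}
    execfile(filename, namespace)
    print namespace['testvar']

myfunction('myfile.py')    # prints [1, 2, 3, 4]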
question on regular expressions
I'm stuck. I'm trying to make this:

file://C:%5Cfolder1%5Cfolder2%5Cmydoc1.pdf,file://C
%5Cfolderx%5Cfoldery%5Cmydoc2.pdf

(no linebreaks) look like this:

./mydoc1.pdf,./mydoc2.pdf

my regular expression abilities are dismal. I won't list all the unsuccessful things I've tried, in a nutshell, the greedy operators are messing me up, truncating the output to ./mydoc2.pdf. Could someone offer a suggestion?

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: question on regular expressions
Michael Fuhr wrote:
> Darren Dale <[EMAIL PROTECTED]> writes:
>
>> I'm stuck. I'm trying to make this:
>>
>> file://C:%5Cfolder1%5Cfolder2%5Cmydoc1.pdf,file://C
>> %5Cfolderx%5Cfoldery%5Cmydoc2.pdf
>>
>> (no linebreaks) look like this:
>>
>> ./mydoc1.pdf,./mydoc2.pdf
>>
>> my regular expression abilities are dismal.
>
> This works for the example string you gave:
>
> newstring = re.sub(r'[^,]*%5[Cc]', './', examplestring)
>
> This replaces all instances of zero or more non-commas that are
> followed by '%5C' or '%5c' with './'. Greediness causes the pattern
> to replace everything up to the last '%5C' before a comma or the
> end of the string.
>
> Regular expressions aren't the only way to do what you want. Python
> has standard modules for parsing URLs and file paths -- take a look
> at urlparse, urllib/urllib2, and os.path.

Thanks to both of you. I thought re's were appropriate because the string I gave is buried in an xml file. A more representative example is:

[...snip...]file://C:%5Cfolder1%5Cfolder2%5Cmydoc1.pdf[...snip... data]file://C%5Cfolderx%5Cfoldery%5Cmydoc2.pdf[...snip...]
--
http://mail.python.org/mailman/listinfo/python-list
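A sketch of the non-regex route suggested above, applied to the original comma-separated string (it assumes the URLs themselves contain no commas; the XML case would need the URLs extracted first):

import urllib

s = ('file://C:%5Cfolder1%5Cfolder2%5Cmydoc1.pdf,'
     'file://C%5Cfolderx%5Cfoldery%5Cmydoc2.pdf')

names = []
for url in s.split(','):
    path = urllib.unquote(url)               # %5C becomes a backslash
    names.append('./' + path.split('\\')[-1])

print ','.join(names)                         # ./mydoc1.pdf,./mydoc2.pdf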
WeakValueDict and threadsafety
I am using a WeakValueDict in a way that is nearly identical to the example at the end of http://docs.python.org/library/weakref.html?highlight=weakref#example , where "an application can use objects IDs to retrieve objects that it has seen before. The IDs of the objects can then be used in other data structures without forcing the objects to remain alive, but the objects can still be retrieved by ID if they do." My program is multithreaded, so I added the necessary check for liveliness that was discussed at http://docs.python.org/library/weakref.html?highlight=weakref#weak-reference-objects . Basically, I have:

import threading
import weakref

registry = weakref.WeakValueDictionary()
reglock = threading.Lock()

def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data()
            registry[id(data)] = data
    return data

I'm concerned that this is not actually thread-safe. When I no longer hold strong references to an instance of data, at some point the garbage collector will kick in and remove that entry from my registry. How can I ensure the garbage collection process does not modify the registry while I'm holding the lock?

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: WeakValueDict and threadsafety
On Dec 10, 11:19 am, Duncan Booth wrote:
> Darren Dale wrote:
> > I'm concerned that this is not actually thread-safe. When I no longer
> > hold strong references to an instance of data, at some point the
> > garbage collector will kick in and remove that entry from my registry.
> > How can I ensure the garbage collection process does not modify the
> > registry while I'm holding the lock?
>
> You can't, but it shouldn't matter.
>
> So long as you have a strong reference in 'data' that particular object
> will continue to exist. Other entries in 'registry' might disappear while
> you are holding your lock but that shouldn't matter to you.
>
> What is concerning though is that you are using `id(data)` as the key and
> then presumably storing that separately as your `oid` value. If the
> lifetime of the value stored as `oid` exceeds the lifetime of the strong
> references to `data` then you might get a new data value created with the
> same id as some previous value.
>
> In other words I think there's a problem here, but nothing to do with the
> lock.

Thank you for the considered response. In reality, I am not using id(data). I took that from the example in the documentation at python.org in order to illustrate the basic approach, but it looks like I introduced an error in the code. It should read:

def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data(oid)
            registry[oid] = data
    return data

Does that look better? I am actually working on the h5py project (bindings to hdf5), and the oid is an hdf5 object identifier. make_data(oid) creates a proxy object that stores a strong reference to oid. My concern is that the garbage collector is modifying the dictionary underlying WeakValueDictionary at the same time that my multithreaded code is trying to access it, producing a race condition. This morning I wrote a synchronized version of WeakValueDictionary (actually implemented in cython):

class _Registry:

    def __cinit__(self):
        def remove(wr, selfref=ref(self)):
            self = selfref()
            if self is not None:
                self._delitem(wr.key)
        self._remove = remove
        self._data = {}
        self._lock = FastRLock()

    __hash__ = None

    def __setitem__(self, key, val):
        with self._lock:
            self._data[key] = KeyedRef(val, self._remove, key)

    def _delitem(self, key):
        with self._lock:
            del self._data[key]

    def get(self, key, default=None):
        with self._lock:
            try:
                wr = self._data[key]
            except KeyError:
                return default
            else:
                o = wr()
                if o is None:
                    return default
                else:
                    return o

Now that I am using this _Registry class instead of WeakValueDictionary, my test scripts and my actual program are no longer producing segfaults.
--
http://mail.python.org/mailman/listinfo/python-list
Re: WeakValueDict and threadsafety
On Dec 10, 2:09 pm, Duncan Booth wrote:
> Darren Dale wrote:
> > On Dec 10, 11:19 am, Duncan Booth wrote:
> >> Darren Dale wrote:
>
> > def get_data(oid):
> >     with reglock:
> >         data = registry.get(oid, None)
> >         if data is None:
> >             data = make_data(oid)
> >             registry[oid] = data
> >     return data
>
> > Does that look better? I am actually working on the h5py project
> > (bindings to hdf5), and the oid is an hdf5 object identifier.
> > make_data(oid) creates a proxy object that stores a strong reference
> > to oid.
>
> Yes, that looks better.
>
> > Now that I am using this _Registry class instead of
> > WeakValueDictionary, my test scripts and my actual program are no
> > longer producing segfaults.
>
> I think that so far as multi-thread race conditions are concerned Python
> usually tries to guarantee that you won't get seg faults. So if you were
> getting seg faults my guess would be that either you've found a bug in the
> WeakValueDictionary implementation or you've got a bug in some of your code
> outside Python.

Have you seen Alex Martelli's answer at http://stackoverflow.com/questions/3358770/python-dictionary-is-thread-safe ? The way I read that, it seems pretty clear that deleting items from a dict can lead to crashes in threaded code. (Well, he says as long as you don't perform an assignment or a deletion in threaded code, there may be issues, but at least it shouldn't crash.)

> For example if your proxy object has a __del__ method to clean up the
> object it is proxying then you could be creating a new object with the same
> oid as one that is in the process of being destroyed (the object disappears
> from the WeakValueDictionary before the __del__ method is actually called).
>
> Without knowing anything about HDF5 I don't know if that's a problem but I
> could imagine you could end up creating a new proxy object that references
> something in the HDF5 library which you then destroy as part of cleaning up
> a previous incarnation of the object but continue to access through the new
> proxy.

We started having problems when HDF5 began recycling oids as soon as their reference count went to zero, which was why we began using IDProxy and the registry. The IDProxy implementation below does have a __dealloc__ method, which we use to decrease HDF5's internal reference count to the oid. Adding these proxies and the registry dealt with the issue of creating a new proxy that references an old oid (even in non-threaded code), but it created a rare (though common enough) segfault in multithreaded code. This synchronized registry is the best I have been able to do, and it seems to address the problem. Could you suggest another approach?

cdef IDProxy getproxy(hid_t oid):
    # Retrieve an IDProxy object appropriate for the given object identifier
    cdef IDProxy proxy
    proxy = registry.get(oid, None)
    if proxy is None:
        proxy = IDProxy(oid)
        registry[oid] = proxy
    return proxy

cdef class IDProxy:

    property valid:
        def __get__(self):
            return H5Iget_type(self.id) > 0

    def __cinit__(self, id):
        self.id = id
        self.locked = 0

    def __dealloc__(self):
        if self.id > 0 and (not self.locked) and H5Iget_type(self.id) > 0 \
                and H5Iget_type(self.id) != H5I_FILE:
            H5Idec_ref(self.id)

--
http://mail.python.org/mailman/listinfo/python-list
how to test for a dependency
Hello,

I would like to test that latex is installed on a windows, mac or linux machine. What is the best way to do this? This should work:

if os.system('latex -v'):
    print 'please install latex'

but I don't actually want the latex version information to print to screen. I tried redirecting sys.stdout to a file, but that doesn't help. Is there a better way to do this in a cross-platform friendly way?

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: how to test for a dependency
Dennis Benzinger wrote:
> Darren Dale schrieb:
>> Hello,
>>
>> I would like to test that latex is installed on a windows, mac or linux
>> machine. What is the best way to do this? This should work:
>>
>> if os.system('latex -v'):
>>     print 'please install latex'
>>
>> but I don't actually want the latex version information to print to
>> screen. I tried redirecting sys.stdout to a file, but that doesn't help.
>> Is there a better way to do this in a cross-platform friendly way?
>>
>> Thanks,
>> Darren
>
> I didn't try it, but you could use the subprocess module
> <http://python.org/doc/2.4.2/lib/module-subprocess.html>.
> Create a Popen object with stdout = PIPE so that a pipe to the child
> process is created and connected to the client's stdout.

Thanks for the suggestion, that would probably work. Unfortunately, I need to support Python 2.3 for some time to come. I wonder, will this work across platforms?

if os.system('latex -v > temp.log'):
    print 'install latex'

--
http://mail.python.org/mailman/listinfo/python-list
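Since Python 2.3 has to be supported, one sketch of a quiet, cross-platform check is to send both output streams to the platform's null device and look only at the exit status (this assumes an NT-family cmd.exe that understands 2>&1; os.devnull itself only appeared in Python 2.4):

import os

def have_latex():
    if os.name == 'nt':
        null = 'NUL'
    else:
        null = '/dev/null'
    return os.system('latex -v > %s 2>&1' % null) == 0

if not have_latex():
    print 'please install latex'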
Re: how to test for a dependency
Sybren Stuvel wrote:
> Darren Dale enlightened us with:
>> I would like to test that latex is installed on a windows, mac or linux
>> machine. What is the best way to do this? This should work:
>>
>> if os.system('latex -v'):
>>     print 'please install latex'
>
> The downside is that you can only use this to test by executing.
> Perhaps it would be better to make a function that can search the PATH
> environment variable in a cross-platform way. Also make sure you
> include any platform-specific executable postfixes like Window's
> ".exe".

I guess that would work. I was hoping there was a more elegant, batteries included way to do it.

By the way, great Zappa quote.
--
http://mail.python.org/mailman/listinfo/python-list
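A rough cut of the PATH-searching idea, still Python 2.3-friendly; the list of Windows extensions is a simplification rather than a full PATHEXT treatment:

import os
import os.path

def which(program):
    if os.name == 'nt':
        exts = ['.exe', '.bat', '.cmd', '']
    else:
        exts = ['']
    for directory in os.environ.get('PATH', '').split(os.pathsep):
        for ext in exts:
            candidate = os.path.join(directory, program + ext)
            if os.path.isfile(candidate):
                return candidate
    return None

if which('latex') is None:
    print 'please install latex'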
More on Tk event_generate and threads
There have been a number of posts about calling gui methods from other threads. Eric Brunel has recommended calling the gui's .event_generate method with data passed through a queue. This worked great for me until trying to write to the gui from multiple threads. There I had problems: random types of crashes, almost all resulting in seg faults. I thought this info might help anyone trying to do the same, or at least save some time debugging.

I wrote a (hopefully) simple and minimal piece of code to demonstrate the problem, included below. My knowledge of the Tcl/Tk interface is minimal. Another interesting thing is it fails on some systems and not on others. All Linux:

Runs fine on:

Dual Opteron Fedora5
Linux 2.6.16-1.2096_FC5 #1 SMP Wed Apr 19 05:14:26 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux

Single Pentium Fedora4
uname info unavailable right now

Fails on:

Single Xeon SuSE 10.1
Linux 2.6.16.13-4-smp #1 SMP Wed May 3 04:53:23 UTC 2006 i686 i686 i386 GNU/Linux

Single Celeron 400 (Embedded BlueCat)
Linux 2.6.7 #45 i686 i686 i386 (not copy pasted)

Single Celeron 400 SuSE 10.1 - not available

My solution (others have done the same) was to go back to having the gui thread call its own event_generate method with the event string passed in through a queue and using a polling loop with after methods. Would someone please verify that this shouldn't be done for some reason such as thread-safety, or point out what I'm doing wrong? It seems from some of the errors that the event data is getting overwritten. I've included the code; if anyone wants to see some of the errors, I have saved them.

---

from Tkinter import *
import threading
import Queue
from time import sleep
import random

class Thread_0(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        count = 0
        while True:
            count += 1
            hmi.thread_0_update(count)
            sleep(random.random()/100)

class Thread_1(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        count = 0
        while True:
            count += 1
            hmi.thread_1_update(count)
            sleep(random.random()/100)

class HMI:
    def __init__(self):
        self.master = Tk()
        self.master.geometry('200x200+1+1')
        f = Frame(self.master)
        f.pack()
        self.l0 = Label(f)
        self.l0.pack()
        self.l1 = Label(f)
        self.l1.pack()
        self.q0 = Queue.Queue()
        self.q1 = Queue.Queue()
        self.master.bind("<<thread_0_update>>", self.thread_0_update_e)
        self.master.bind("<<thread_1_update>>", self.thread_1_update_e)

    def start(self):
        self.master.mainloop()
        self.master.destroy()

    def thread_0_update(self, val):
        self.q0.put(val)
        self.master.event_generate('<<thread_0_update>>', when='tail')

    def thread_1_update(self, val):
        self.q1.put(val)
        self.master.event_generate('<<thread_1_update>>', when='tail')

    def thread_0_update_e(self, e):
        while self.q0.qsize():
            try:
                val = self.q0.get()
                self.l0.config(text=str(val))
            except Queue.Empty:
                pass

    def thread_1_update_e(self, e):
        while self.q1.qsize():
            try:
                val = self.q1.get()
                self.l1.config(text=str(val))
            except Queue.Empty:
                pass

if __name__ == '__main__':
    hmi = HMI()
    t0 = Thread_0()
    t1 = Thread_1()
    t0.start()
    t1.start()
    hmi.start()

---
--
http://mail.python.org/mailman/listinfo/python-list
Re: what are you using python language for?
hacker1017 wrote: > im just asking out of curiosity. Embedded control system -- http://mail.python.org/mailman/listinfo/python-list
Re: languages with full unicode support
Tim Roberts wrote:
> "Xah Lee" <[EMAIL PROTECTED]> wrote:
>
>> Languages with Full Unicode Support
>>
>> As far as i know, Java and JavaScript are languages with full, complete
>> unicode support. That is, they allow names to be defined using unicode.
>> (the JavaScript engine used by FireFox support this)
>>
>> As far as i know, here's few other lang's status:
>>
>> C ? No.
>
> This is implementation-defined in C. A compiler is allowed to accept
> variable names with alphabetic Unicode characters outside of ASCII.

I don't think it is implementation defined. I believe it is actually required by the spec. The trouble is that so few compilers actually comply with the spec. A few years ago I asked for someone to actually point to a fully compliant compiler and no one could.
--
Dale King
--
http://mail.python.org/mailman/listinfo/python-list
Re: prime number
lostinpython> I'm having trouble writing a program that figures
lostinpython> out a prime number. Does anyone have an idea on how
lostinpython> to write it?

[I can't quite tell from your posts what your level of programming knowledge is, so I've aimed low. If this was wrong, please accept my apologies. --rdh]

It's not quite clear precisely what the problem is. Do you want to find all primes less than some number N, or find the prime factorization of some input number, or something else altogether? Are there any other constraints? Ie, does the program have to be recursive?

I'm going to assume that you want to find all primes less than N, and that there are no constraints on the form of the program.

First, pretend you have a function called "is_prime". It will look something like this:

def is_prime(n):
    # return True if n is prime, False otherwise
    # magic goes here

If we had this function, our main program would look something like this:

N = 20
for n in range(1, N+1):
    if is_prime(n):
        print n

NOTE: range(1, N+1) returns a list of numbers from 1 to N inclusive.

The question now is how to fill in the missing part of is_prime. In your original post, you noted that n>2 is prime if no number between 2 and sqrt(n) inclusive divides into n evenly. This tells us that we have to test a list of possible factors for divisibility into n. Ignoring, for a little while, how we figure out the possible factors, we can now expand is_prime as follows:

def is_prime(n):
    # return 1 if n is prime, 0 otherwise
    from math import sqrt
    for i in possible_factors:
        # if i divides n, n isn't prime
    # if no i divided into n, ie, we fell out of the loop,
    # n is prime

The possible factor i divides into n if n % i == 0. Once we've decided that n is or isn't prime, we can just return True or False respectively. Adding these details:

def is_prime(n):
    # return 1 if n is prime, 0 otherwise
    from math import sqrt
    for i in possible_factors:
        if n % i == 0:
            return True
    return False

The final step is to figure out the list of possible factors. By the definition, this is the list of integers from 2 to the integer value of sqrt(n), inclusive. To get this list, we can use the range function again, using sqrt(n) to compute the square root for us. The list of possible factors is given by

range(2, int(sqrt(n)) + 1)

The final trick is to get the definition of sqrt. It turns out that this function is defined in the math module. Adding the bit of syntax needed to access the sqrt function, we have:

def is_prime(n):
    # return 1 if n is prime, 0 otherwise
    from math import sqrt
    for i in range(2, int(sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

Put together with the main program above, this will print out a list as follows:

1
2
3
5
7
11
13
17
19

However, there's a slight problem here. By definition, 1 is not a prime. We could fix this a couple of ways. Either we could change the main program not to start its range with 1, or we can fix is_prime. The latter is simple:

def is_prime(n):
    # return 1 if n is prime, 0 otherwise
    from math import sqrt
    if n == 1:
        return False
    for i in range(2, int(sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

Another algorithm mentioned in the followups to your posting is the Sieve of Eratosthenes, named for its ancient Greek discoverer. You can find a description of the Sieve in many places on the web. Trying to implement it might be a good next step.

Dale.
--
http://mail.python.org/mailman/listinfo/python-list
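For anyone following along, a bare-bones Sieve of Eratosthenes in the same spirit (it finds every prime up to N at once instead of testing numbers one at a time; this is one possible reading of the exercise, not lostinpython's assignment):

def sieve(N):
    # assume everything is prime, then cross out multiples;
    # 0 and 1 are never prime
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, N + 1, i):
                is_prime[j] = False
    return [n for n in range(N + 1) if is_prime[n]]

print sieve(20)    # [2, 3, 5, 7, 11, 13, 17, 19]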
Re: prime number
Sigh ... one of my intermediate versions of is_prime() returns True if n is *not* prime, and False otherwise. The final version is correct, though.

Dale.
--
http://mail.python.org/mailman/listinfo/python-list
Re: What are OOP's Jargons and Complexities?
David Formosa (aka ? the Platypus) wrote:
> On Tue, 24 May 2005 09:16:02 +0200, Tassilo v. Parseval
> <[EMAIL PROTECTED]> wrote:
>> Also sprach John W. Kennedy:
>
> [...]
>
>> Most often, languages with strong typing can be found on the functional
>> front (such as ML and Haskell). These languages have a dynamic typing
>> system. I haven't yet come across a language that is both statically and
>> strongly typed, in the strictest sense of the words. I wonder whether
>> such a language would be usable at all.
>
> Modula2 claims to be both statically typed and strongly typed. And
> your wonder at its usablity is justified.

I used a variant of Modula-2 and it was one of the best languages I have ever used. That strong, static type checking was a very good thing. It often took a lot of work to get the code to compile without error. Usually those errors were the programmers fault for trying to play fast and loose with data. But once you got it to compile it nearly always worked.
--
Dale King
--
http://mail.python.org/mailman/listinfo/python-list
Re: What are OOP's Jargons and Complexities?
Anno Siegel wrote:
> Tassilo v. Parseval <[EMAIL PROTECTED]> wrote in comp.lang.perl.misc:
>> Also sprach Dale King:
>>
>>> David Formosa (aka ? the Platypus) wrote:
>>>> On Tue, 24 May 2005 09:16:02 +0200, Tassilo v. Parseval
>>>> <[EMAIL PROTECTED]> wrote:
>>>>> [...] I haven't yet come across a language that is both statically and
>>>>> strongly typed, in the strictest sense of the words. I wonder whether
>>>>> such a language would be usable at all.
>>>>
>>>> Modula2 claims to be both statically typed and strongly typed. And
>>>> your wonder at its usablity is justified.
>>>
>>> I used a variant of Modula-2 and it was one of the best languages I have
>>> ever used. That strong, static type checking was a very good thing. It
>>> often took a lot of work to get the code to compile without error.
>>> Usually those errors were the programmers fault for trying to play fast
>>> and loose with data. But once you got it to compile it nearly always worked.
>>
>> I am only familiar with its successor Modula-3 which, as far as I
>> understand, is Modula-2 with uppercased keywords and some OO-notion
>> bolted onto it (I still recall 'BRANDED' references).
>>
>> I have to say that doing anything with this language was not exactly a
>> delight.
>
> I've been through Pascal, Modula2 and Oberon, and I agree.
>
> These languages had an axe to grind. They were designed (by Niklas
> Wirth) at a time of a raging discussion whether structured programming
> (goto-less programming, mostly) is practical. Their goal was to prove
> that it is, and in doing so the restrictive aspects of the language
> were probably a bit overdone.

I fail to see how they were that different in terms of structured programming than C. The main benefit I was talking about had more to do with types. It had types that were not compatible just because they had the same base type. For example, you could have a type inches that was an integer and a type ounces that was also integral. Just because they were both integral did not make them type compatible. You couldn't just assign one to the other without you as the programmer explicitly saying that it was OK (by casting). In the environment I was programming in (engine controls for cars), where safety was a critical thing and a programming bug could kill people, that safety was a very good thing. I think that also has a lot to do with why the government uses Ada.

> In the short run they succeeded. For a number of years, languages of
> that family were widely used, primarily in educational programming
> but also in implementing large real-life systems.
>
> In the long run, the languages have mostly disappeared from the scene.

I've posted before that hardly any language that has ever been somewhat popular has actually died (depending on your definition of that word). When asked for someone to name one once, I got Simula for example (the forerunner of OO languages). Turns out that it continues to actually grow in popularity.

> It has been discovered that "structured programming" is possible in
> about any language. It turns out that programmers prefer the
> self-discipline it takes to do that in a liberal language over the
> enforced discipline exerted by Papa Pascal and his successors.

There are lots of reasons they have not taken over, although Ada is still in wide use. It seems to me that too many people like playing with dangerous power tools without the guards in place.
--
Dale King
--
http://mail.python.org/mailman/listinfo/python-list
python style guide inconsistencies
I was just searching for some guidance on how to name packages and modules, and discovered some inconsistencies on the www.python.org. http://www.python.org/doc/essays/styleguide.html says "Module names can be either MixedCase or lowercase." That page also refers to PEP 8 at http://www.python.org/dev/peps/pep-0008/, which says "Modules should have short, all-lowercase names. ... Python packages should also have short, all-lowercase names ...". Which is most up to date? Is this the right place to point out that one of those pages needs to be updated? Thanks, Darren -- http://mail.python.org/mailman/listinfo/python-list
Re: python style guide inconsistencies
Bjoern Schliessmann wrote:
> Darren Dale wrote:
>
>> I was just searching for some guidance on how to name packages and
>> modules, and discovered some inconsistencies on the
>> www.python.org. http://www.python.org/doc/essays/styleguide.html
>> says "Module names can be either MixedCase or lowercase." That
>> page also refers to PEP 8 at
>> http://www.python.org/dev/peps/pep-0008/, which says "Modules
>> should have short, all-lowercase names. ... Python packages should
>> also have short, all-lowercase names ...".
>>
>> Which is most up to date?
>
> The priority is, IMHO, clear. The old style guide essay says, at the
> beginning:
>
> | This style guide has been converted to several PEPs (Python
> | Enhancement Proposals): PEP 8 for the main text, PEP 257 for
> | docstring conventions. See the PEP index.
>
> So PEP 8 is the most recent.

Then perhaps http://www.python.org/doc/essays/styleguide.html should be updated either to agree with or to simply link to PEPs 8 and 257. What is the point of keeping old, out-of-date essays up on python.org? That beginning comment does not indicate that the essay is any different from the PEPs.
--
http://mail.python.org/mailman/listinfo/python-list
string formatting: engineering notation
Does anyone know if it is possible to represent a number as a string with engineering notation (like scientific notation, but with 10 raised to multiples of 3: 120e3, 12e-6, etc.). I know this is possible with the decimal.Decimal class, but repeatedly instantiating Decimals is inefficient for my application (matplotlib plotting library). If it is not currently possible, do you think the python devs would be receptive to including support for engineering notation in future releases? Thanks, Darren -- http://mail.python.org/mailman/listinfo/python-list
question about class methods
I've run across some code in a class method that I don't understand:

def example(self, val=0):
    if val and not self:
        if self._exp < 0 and self._exp >= -6:

I am unfamiliar with some concepts here:

1) Under what circumstances would "if not self" be True?
2) If "not self" is True, how can self have attributes?

(This is slightly simplified code from the decimal.Decimal.__str__ method, line 826 in python-2.4.4)

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
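To make the "not self" question concrete: an instance tests false whenever its class says so through __nonzero__ (or __len__), and a false instance still has all of its attributes. A toy class rather than the real Decimal code:

class Account(object):
    def __init__(self, balance):
        self.balance = balance
    def __nonzero__(self):        # spelled __bool__ in Python 3
        return self.balance != 0

a = Account(0)
print not a          # True: "not self" holds for a zero-balance account
print a.balance      # 0: the instance still has its attributes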
Re: string formatting: engineering notation
Steve Holden wrote:
> Darren Dale wrote:
>> Does anyone know if it is possible to represent a number as a string with
>> engineering notation (like scientific notation, but with 10 raised to
>> multiples of 3: 120e3, 12e-6, etc.). I know this is possible with the
>> decimal.Decimal class, but repeatedly instantiating Decimals is
>> inefficient for my application (matplotlib plotting library). If it is
>> not currently possible, do you think the python devs would be receptive
>> to including support for engineering notation in future releases?
>
> How close is this:
>
> >>> "%.3e" % 3.14159
> '3.142e+00'

>>> "%.3e" % 31415.9
'3.142e+04'

What I am looking for is '31.4159e+03'

Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: string formatting: engineering notation
[EMAIL PROTECTED] wrote:
> On Mar 14, 1:14 pm, Darren Dale <[EMAIL PROTECTED]> wrote:
>> Does anyone know if it is possible to represent a number as a string with
>> engineering notation (like scientific notation, but with 10 raised to
>> multiples of 3: 120e3, 12e-6, etc.). I know this is possible with the
>> decimal.Decimal class, but repeatedly instantiating Decimals is
>> inefficient for my application (matplotlib plotting library). If it is
>> not currently possible, do you think the python devs would be receptive
>> to including support for engineering notation in future releases?
>
> Do you also consider this to be too inefficient?
>
> import math
>
> for exponent in xrange(-10, 11):
>     flt = 1.23 * math.pow(10, exponent)
>     l = math.log10(flt)
>     if l < 0:
>         l = l - 3
>     p3 = int(l / 3) * 3
>     multiplier = flt / pow(10, p3)
>     print '%e => %fe%d' % (flt, multiplier, p3)

That's a good suggestion. It's probably fast enough. I was hoping that something like '%n'%my_number already existed.
--
http://mail.python.org/mailman/listinfo/python-list
python webserver question
I am working on a task to display wireless network nodes using Google Earth (GE) with KML network links. I am using a simple python webserver (see code below) to serve up the python scripts as KML output to GE for this.

import BaseHTTPServer
import CGIHTTPServer

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ['/cgi-bin']

httpd = BaseHTTPServer.HTTPServer(('', 8000), Handler)
httpd.serve_forever()

This works fine for my initial scripts, but now I am thinking of combining the python scripts into the server program and making things more efficient. The script file currently reads sensor data from different I/O ports (GPS -> serial, network data -> ethernet). I am new to python, and was wondering if there might be a better way to run the server side process than how I am doing it now:

1. GE client requests data through KML network link on a periodic update interval
2. python webserver handles request and executes python script in cgi-bin directory
3. python script reads sensor input from serial and ethernet ports and writes data as KML output to GE client
4. repeat process at update interval

I am thinking it would be better to make the process server-side focussed as opposed to client side: have the server only update sensor data to the client when there has been a change in sensor data, and only send the data that has changed. Has anyone had experience doing this with python that could point me in the right direction?

thx

Dale
--
http://mail.python.org/mailman/listinfo/python-list
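One sketch of an event-loop arrangement for the server above: do the serial/Ethernet work in small chunks and only call handle_request() when select reports a waiting connection (do_io_tasks is a placeholder for the GPS and sensor reads, not part of the original code):

import select
import BaseHTTPServer
import CGIHTTPServer

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ['/cgi-bin']

httpd = BaseHTTPServer.HTTPServer(('', 8000), Handler)

def do_io_tasks():
    # placeholder: read the serial GPS, poll the network sensors, write files
    pass

while True:
    do_io_tasks()
    readable, _, _ = select.select([httpd.socket], [], [], 0.5)
    if readable:
        httpd.handle_request()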
Re: Tabs versus Spaces in Source Code
Iain King wrote:
> Oh God, I agree with Xah Lee. Someone take me out behind the chemical
> sheds...
>
> Xah Lee wrote:

Please don't feed the troll!

And for the record, spaces are 100% portable, tabs are not. That ends the argument for me. Worse than either tabs or spaces, however, is Sun's mixture of the two.
--
Dale King
--
http://mail.python.org/mailman/listinfo/python-list
Re: John Bokma harassment
Xah Lee wrote:
> I'm sorry to trouble everyone. But as you might know, due to my
> controversial writings and style, recently John Bokma lobbied people to
> complaint to my web hosting provider. After exchanging a few emails, my
> web hosting provider sent me a 30-day account cancellation notice last
> Friday.

I'm probably stupid for contributing in this flame fest, but here goes.

The reason that I consider Xah a troll and net abuser has little to do with cross-posting (which is still bad) or the length of his messages (he really should post them on his website and provide a summary and a link). My main problem is that he unloads his crap and then runs away. He doesn't participate in any discussion after that. This shows that he has no actual interest in discussion of the issues, just in using Usenet as a form of publishing.

The mention of free speech was raised. But the fact is that Usenet is not free (as in beer). We all pay for it. Your ISP has to pay for a server, the space for the messages, the bandwidth to download the messages, and the bandwidth to send them to your news reader. In reality the cost is shared among all of us. Therefore you do not have the "right" to do what you want with Usenet. You have a responsibility to use Usenet in a way that benefits the group as a whole (e.g. asking interesting questions that educate others).
--
Dale King
--
http://mail.python.org/mailman/listinfo/python-list
what gives with "'import *' not allowed with 'from .'"?
I know the use of "from foo import *" is discouraged, but I'm writing a package that I hope others may want to integrate as a subpackage of their own projects, I know what I'm doing, and I want to use the "from .bar import *" syntax internally. It works fine with python-2.6, but with python-2.5 I get a SyntaxError: "'import *' not allowed with 'from .'"

Judging from http://bugs.python.org/issue2400 , this issue was fixed back in May 2008, but it is still present with python-2.5.4, which was released in December. Why won't python-2.5 allow this kind of import?
--
http://mail.python.org/mailman/listinfo/python-list
Re: what gives with
On Jan 22, 10:07 pm, Benjamin Peterson wrote:
> Darren Dale gmail.com> writes:
>
> > Judging from http://bugs.python.org/issue2400, this issue
> > was fixed back in May 2008, but it is still present with python-2.5.4,
> > which was released in December. Why won't python-2.5 allow this kind of
> > import?
>
> Allowing that would be a new feature, which is disallowed in bug fix
> releases like 2.5.4.

I was talking about the behavior after doing "from __future__ import absolute_import". I've been developing on python-2.6 using absolute_import for weeks, knowing that I could do "from __future__ import absolute_import" on python-2.5. Now when I try to use python-2.5 I can't import my package. What is the point of providing absolute_import in __future__ if the API is completely different from the implementation in future python versions? It's bizarre.
--
http://mail.python.org/mailman/listinfo/python-list
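For what it is worth, plain relative imports (without the star) do work on python-2.5, so one workaround is to import the submodule and pull the public names across explicitly. A sketch only; bar, Foo and Baz are placeholder names:

# inside a module of the package, on python-2.5 or 2.6
from __future__ import absolute_import
from . import bar        # allowed on 2.5; only "from .bar import *" is not

Foo = bar.Foo            # re-export the names you need, explicitly
Baz = bar.Baz
# or, less explicitly:
# globals().update((k, v) for k, v in vars(bar).items() if not k.startswith('_'))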
how to assert that method accepts specific types
I would like to assert that a method accepts certain types. I have a short example that works:

from functools import wraps

def accepts(*types):
    def check_accepts(f):
        @wraps(f)
        def new_f(self, other):
            assert isinstance(other, types), \
                "arg %r does not match %s" % (other, types)
            return f(self, other)
        return new_f
    return check_accepts

class Test(object):

    @accepts(int)
    def check(self, obj):
        print obj

t = Test()
t.check(1)

but now I want Test.check to accept an instance of Test as well. Does anyone know how this can be accomplished? The following class definition for Test raises a NameError:

class Test(object):

    @accepts(int, Test)
    def check(self, obj):
        print obj

Thanks,
Darren
--
http://mail.python.org/mailman/listinfo/python-list
Re: how to assert that method accepts specific types
On Feb 20, 8:20 pm, Chris Rebert wrote:
> On Fri, Feb 20, 2009 at 5:12 PM, Darren Dale wrote:
> > I would like to assert that a method accepts certain types. I have a
> > short example that works:
>
> > from functools import wraps
>
> > def accepts(*types):
> >     def check_accepts(f):
> >         @wraps(f)
> >         def new_f(self, other):
> >             assert isinstance(other, types), \
> >                 "arg %r does not match %s" % (other, types)
> >             return f(self, other)
> >         return new_f
> >     return check_accepts
>
> > class Test(object):
>
> >     @accepts(int)
> >     def check(self, obj):
> >         print obj
>
> > t = Test()
> > t.check(1)
>
> > but now I want Test.check to accept an instance of Test as well. Does
> > anyone know how this can be accomplished? The following class
> > definition for Test raises a NameError:
>
> > class Test(object):
>
> >     @accepts(int, Test)
> >     def check(self, obj):
> >         print obj
>
> You're going to have to either inject it after the class definition
> somehow, or give the class name as a string and use eval() or similar.
> The class object doesn't exist until the entire class body has
> finished executing, so you can't refer to the class within its own
> body.

That's too bad, thanks for clarifying.
--
http://mail.python.org/mailman/listinfo/python-list
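One concrete way to follow the "inject it after the class definition" suggestion is to keep the accepted types in a mutable container that the wrapper closes over, and extend it once the class exists. A sketch only; the accepted_types attribute is an invented hook, not part of functools:

from functools import wraps

def accepts(*types):
    types = list(types)                     # mutable, so it can grow later
    def check_accepts(f):
        @wraps(f)
        def new_f(self, other):
            assert isinstance(other, tuple(types)), \
                "arg %r does not match %s" % (other, tuple(types))
            return f(self, other)
        new_f.accepted_types = types        # expose the list for injection
        return new_f
    return check_accepts

class Test(object):
    @accepts(int)
    def check(self, obj):
        print obj

# the class object exists now, so it can be added after the fact
Test.check.accepted_types.append(Test)

t = Test()
t.check(1)
t.check(Test())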
Re: Finding the instance reference of an object
On Oct 17, 5:39 pm, Joe Strout <[EMAIL PROTECTED]> wrote:
> On Oct 17, 2008, at 3:19 PM, Grant Edwards wrote:
>
> >> And my real point is that this is exactly the same as in every
> >> other modern language.
>
> > No, it isn't. In many other languages (C, Pascal, etc.), a
> > "variable" is commonly thought of as a fixed location in memory
> > into which one can put values. Those values may be references
> > to objects.
>
> Right, though not in languages like C and Pascal that don't HAVE the
> notion of objects. We really ought to stop bringing up those
> dinosaurs and instead compare Python to any modern OOP language.
>
> > In Python, that's not how it works. There is no
> > "location in memory" that corresponds to a variable with a
> > particular name the way there is in C or Pascal or Fortran or
> > many other languages.
>
> No? Is there any way to prove that, without delving into the Python
> source itself?
>
> If not, then I think you're talking about an internal implementation
> detail.

I think this "uncontrived" example addresses the (C/C++)/Python difference fairly directly.

-- C:

struct {int a;} s1, s2;

int main()
{
    s1.a = 1;
    s2 = s1;
    printf("s1.a %d s2.a %d\n", s1.a, s2.a);
    s1.a = 99;
    printf("s1.a %d s2.a %d\n", s1.a, s2.a);
}

-- Python:

class mystruct:
    pass

s1 = mystruct()
s1.a = 1
s2 = s1
print "s1.a %2d s2.a %2d" % (s1.a, s2.a)
s1.a = 99
print "s1.a %2d s2.a %2d" % (s1.a, s2.a)

--- C OUTPUT:

s1.a 1 s2.a 1
s1.a 99 s2.a 1

Python OUTPUT:

s1.a  1 s2.a  1
s1.a 99 s2.a 99

Note that in C (or C++) the value of s2.a remains unchanged, because the VALUE of s1 (the contents of the memory where s1 resides) was COPIED to the memory location of s2, and subsequently, only the VALUE of s1.a was changed. In Python, s2.a is "changed" (but not really) because it turns out that s2 is just another name for the object that s1 pointed to.

So there is no programmatically accessible "location in memory" that corresponds to s1 or s2 in Python. There is only a location in memory that corresponds to the object that s1 is currently pointing to. In C, by contrast, there are definite locations in memory that correspond to both variables s1 and s2, and those locations remain always separate, distinct and unchanged throughout the execution of the program.

This is not an "implementation detail", it is a fundamental part of each language. As C was an "improvement" on assembler, the variable names have always just been aliases for memory locations (or registers). You can see this in the output of any C/C++ compiler. In Python, variables are just names/aliases for *references* to objects, not names for the objects/values themselves. The variable names themselves do not correspond directly to the objects' memory locations. While yes, technically, it is true that those reference values must be stored somewhere in memory, *that* is the implementation detail. But it is not the *locations* of these references (i.e., the locations of the Python *variables*) that are copied around, it is the references themselves (the locations of the Python *objects*) that are copied.

> > All that exists in Python is a name->object mapping.
>
> And what does that name->object mapping consist of? At some level,
> there has to be a memory location that stores the reference to the
> object, right?

I think this is answered above, but just to drive it home, in Python the memory locations of the variables themselves (an implementation detail), which hold the references to the objects, are inaccessible.
In C/C++, by contrast, variable names correspond directly to memory locations and objects, and you can easily find the addresses of variables, and the addresses do not change, although the values can. In C/C++, if you choose, you may have a variable that is itself a reference/pointer to some other memory/object/array. In C, we would say that the VALUE of that variable is the memory address of another object. But you can, if you need to, get the address of the pointer variable, which points to the *address* of the other object. In Python, a variable is ONLY EVER a reference to an object. You cannot get the address of a Python variable, only of a Python object. Hope this clears things up. dale -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
printf("%x", &foo), and that is different from the address of the object which you get when you say printf("%x", foo). The value &foo cannot be changed. because Python arguments are ALWAYS passed by value. There is no call by reference in Python. Period, end of story, nothing to see here. Yea, BUT... If you tell this to a C++ programer without any further explanation, they will be thoroughly confused and misinformed, unless you point them to this thread or amend that statement with your version of what "pass by value" means. I know what you mean, and you know what I mean, but that's because we've both read through this thread ;-) Look at my example. The ByVal() routine behaves how C++ programmers expect "pass by value" to work. The contents of the caller's object cannot be modified. So, then, what to tell a C++ programmer about how Python passes arguments? You say: tell them Python only passes by value. I disagree, because I think that would confuse them. Rather than try to map C++ conventions onto Python, I think it is more useful to just tell them how it really works. Maybe a few statements like this: All values in Python are objects, from simple integers up to complex user-defined classes. An assignment in Python binds a variable name to an object. The internal "value" of the variable is the memory address of an object, and can be seen with id(var), but is rarely needed in practice. The "value" that gets passed in a Python function call is the address of an object (the id()). When making a function call, myfunc(var), the value of id(var) can never be changed by the function. Not sure if these are the best. To get into much more detail, you have to start explaining mutable and immutable objects and such. dale Cheers, - Joe -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 28, 2:33 am, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote: > En Tue, 28 Oct 2008 01:16:04 -0200, Dale Roberts <[EMAIL PROTECTED]> > escribió: > > > > > So, then, what to tell a C++ programmer about how Python passes > > arguments? You say: tell them Python only passes by value. I disagree, > > because I think that would confuse them. Rather than try to map C++ > > conventions onto Python, I think it is more useful to just tell them how > > it really works. Maybe a few statements like this: > > > All values in Python are objects, from simple integers up to complex > > user-defined classes. > > > An assignment in Python binds a variable name to an object. The > > internal "value" of the variable is the memory address of an object, > > and can be seen with id(var), but is rarely needed in practice. > > > The "value" that gets passed in a Python function call is the address > > of an object (the id()). > > > When making a function call, myfunc(var), the value of id(var) can > > never be changed by the function. > > > Not sure if these are the best. To get into much more detail, you have > > to start explaining mutable and immutable objects and such. > > I don't think the above explanation is desirable, nor needed. Objects in > Python don't have an "address" - it's just a CPython implementation > detail. The fact that id() returns that address is just an implementation > detail too. The calling mechanism should be explained without refering to > those irrelevant details. > > -- > Gabriel Genellina I agree that it was a feeble and ill-advised attempt, and would like to strike those lines from the record... But the rest of the post is strong and accurate, I think, and coming from a mainly C/C++ background, visualizing these object references being passed around does help improve my understanding of Python's *behavior* (and it apparently helps Joe too). [May I refer to you as "Joe The Programmer" in my rhetoric? Are you a "licensed" programmer making over $250K/year? ;-)] If asked by a C++ programmer about Python's argument passing, I will go with the Pass By Object explanation (which is why I called my Python routine above ByObject()). Although there will be more 'splaining to do, at least it will give the C++ programmer pause, and help them realize that there is something a little different that they need to stop and try to understand. If I just say "Python is Pass By Value, Period" without further explanation, they will expect things to work as in my ByValue() example above (where the caller's object contents cannot be changed), and that is incorrect. And they may go and tell someone else, without the detailed explanation, that "Dale says it's Pass By Value", and I'll get blamed when their function surprisingly changes the contents of the caller's object. If I just say "Python is Pass By Reference", that is wrong too. As Joe The Programmer points out, they will expect that the calling *variable* itself can be changed, and that is wrong too. They need to understand that Python does not map neatly, directly, and completely to their C++ experience of Pass By Value (which involves copying a variable), or Pass By Reference (which involves taking the address of a variable). The Key Concept in for a C++ programmer looking at Python is that, unlike in C++, **Variables Cannot "Contain" Values**. All values in Python are objects, and all variables in Python simply point to, or are bound to, or refer to, these objects. 
The variables themselves do not contain the objects - if you must (like Joe the Programmer), you can say that all Python variables *contain* an object reference, and it is these references that are passed around. Unlike C++, an object reference is the ONLY thing that a Python variable can contain. A function parameter is always passed as a reference to an object, not a reference to a variable, or a copy of a variable or object. And that is where the confusion arises. When people say ByRef or ByVal, they usually mean by Reference to (address) or Value of (copy) the *contents* of the passed variable. But, PYTHON VARIABLES DO NOT "CONTAIN" VALUES (sorry for the shouting, but it is the most important point here), so ByRef and ByVal lose their commonly accepted meanings. In C++, ByValue requires a copy (and Python does not copy). In C++, ByReference requires the address of a *variable* (an "lvalue"), and variables do not have accessible addresses in Python. ByObject, in contrast, requires neither (unless, like Joe The Programmer, you consider the "value" of a variable to be the id(), which is not what most
Re: Finding the instance reference of an object
On Oct 28, 11:59 am, Joe Strout <[EMAIL PROTECTED]> wrote: > ... > > There are only the two cases, which Greg quite succinctly and > accurately described above. One is by value, the other is by > reference. Python quite clearly uses by value. Parameters are > expressions that are evaluated, and the resulting value copied into > the formal parameter, pure and simple. The continued attempts to > obfuscate this is pointless and wrong. > > Best, > - Joe 5 + 3 What is the "value" of that expression in Python? Can you tell me? 99.99% of programmers (who do not have this thread as context) will say that the value is 8. But you say the value is the memory address of the resulting object created when the + operator is applied to the 5 object and the 3 object. That is the "value" that is copied. Okay, you can have it that way, but every time you explain to someone that Python passes "By Value", you will have to add the additional baggage that, oh, by the way, there is a completely different meaning for "value" in Python than what you are used to. Then the questions and puzzled looks will start... And when they tell their friend that Joe The Programmer said it's Pass By Value, your additional context may not be present any longer, and the friend will be very confused. In my opinion, best just to head it off and call it something different so as not to confuse. dale -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 28, 11:59 am, Joe Strout <[EMAIL PROTECTED]> wrote:
> ...
>
> There are only the two cases, which Greg quite succinctly and
> accurately described above. One is by value, the other is by
> reference. Python quite clearly uses by value. Parameters are
> expressions that are evaluated, and the resulting value copied into
> the formal parameter, pure and simple. The continued attempts to
> obfuscate this is pointless and wrong.
>
> Best,
> - Joe

Joe, you are being too generous and expansive here. [Play along with me for a minute here...]

Don't you know? There is really only *ONE* case, and, you are right, it is Pass By Value. There is no such thing as Pass By Reference at the physical CPU level at all, right? If there is, show it to me. Pass By Reference is just a silly idiom developed by high-minded CS academics to confuse the rest of us. It has no practical use and should not be given its own name, when we already have a good and proper name for it.

Let me demonstrate with 3 examples of a function definition, and the appropriate calling syntax for that function in C++, all sharing the common "int i" global variable:

int i = 5;

myfunc(int &val) {}   /* CALL: */  myfunc(i);   // "ByRef" (ya, right!)
myfunc(int val)  {}   /* CALL: */  myfunc(i);   // ByVal
myfunc(int *val) {}   /* CALL: */  myfunc(&i);  // Joe's ByVal

The first is what all the fakers call "Pass By Reference" - sheesh, how naive. We all know that what *really* happens internally is that the *address* of val (A VALUE itself, of course) is copied and passed on the stack, right? There couldn't be a more straightforward example of Pass By Value (unless it's an inline function, or optimized away, or possibly when implemented in a VM, or...). It passes the *address* of i by value, then we can access the *value* of i too via indirection. Hmm, did we need to have two definitions of VALUE there? Well, never mind, no one will notice...

The next is obviously pass by value. It's right out there. The value of i (which is what we are talking about, right?) is copied out, and passed right on the stack in plain daylight where we can all see it.

How about the third? Pass By Value, obviously, of course. This is the version you are defending, right? The parameter's value, &i, is evaluated and copied right onto the stack, just like in the first example. In fact, if you compare the assembler output of the first and third examples, you may not even see a difference. Never mind the actual contents of that pesky "i" variable that most people are referring to when they use the term "value". We don't need to dress up example 3 and call it an "idiom" where we are really passing a so-called "reference" of the variable "i". Indeed! Don't insult our intelligence. We can all see that it's an address passed by value, plain and simple.

Pass By Reference? So "postmodern". Who needs it. Show me a so-called "reference". I've looked at the assembler output and have never seen one. There is no such thing. "The continued attempts to obfuscate this is pointless and wrong."

--- I hate to have to add this, but for those not paying close attention: ;-)

dale (tongue back out of cheek now)
--
http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 29, 9:13 pm, Joe Strout <[EMAIL PROTECTED]> wrote: > On Oct 29, 2008, at 4:52 PM, Fuzzyman wrote: > > > You're pretty straightforwardly wrong. In Python the 'value' of a > > variable is not the reference itself. > > That's the misconception that is leading some folks around here into > tangled nots of twisty mislogic, ultimately causing them to make up > new terms for what every other modern language is perfectly happy > calling Call-By-Value. Doesn't this logic also apply to Call By Reference? Isn't that term redundant too? (see my 3 C++ examples above). If not, why not? Are you saying that C++ is capable of using the Call By Reference idiom, but C is not, because C does not have a reference designation for formal function parameters? "Call By Object Reference" is an idiom, just like Call By Reference. It is not a physical description of what is going on internally at the register/stack level (which is always just shuffling values around - or flipping bits, as Steven points out), it is a higher level concept that helps people understand the *intention* (not necessarily the implementation) of the mechanism. You cannot look a C++ programmer straight in the eye and say that "Python uses Call By Value, Period", without also informing them that "Python variables can ONLY EVER hold object references - that is the only "value" they can ever hold". Then the C++ programmer will go "Oh, yea, that makes sense". Instead of having to say all of that, we just give it a new name. Instead of "Call By Value, Where Every Single Value Is Only Ever A Reference To An Object Which Contains The Actual Value That Programmers Usually Refer To", we just say "Call By Object Reference". > ... > 2. Because everything in Python is an object, you're not forced to > think clearly (and more generally) about references as values I think we've shown that we are all in fact thinking clearly about it, and we all (you included, of course!) understand what is going on. It's just a matter of what words we choose to describe it. Using your definition of value, though, I believe that if you want to throw out Call By Object Reference, you also have to throw out Call By Reference. See my 3 C++ examples above. And just for fun I did look at the assembler output, and, indeed, the output for examples 1 and 3 is absolutely identical. They are the same thing, as far as the CPU is concerned. Would you give them different names? dale -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 30, 11:03 am, Joe Strout <[EMAIL PROTECTED]> wrote: > ... >> Are you saying that C++ is capable of using the Call By Reference idiom, >> but C is not, because C does not have a reference designation for formal >> function parameters? > > It's been a LONG time since I did anything in C, but yes, I believe > that reference parameters were an addition that came with C++. Okay, I completely understand you, and I think we will just have to agree to disagree about the best term to use for Python's parameter passing mechanism, and this will likely be my concluding post on this topic (although I have enjoyed it very much and have solidified my own understanding). I even found a few web sites that very strongly support your viewpoint (as it relates to Java): http://www.ibm.com/developerworks/library/j-praxis/pr1.html http://javadude.com/articles/passbyvalue.htm http://www.yoda.arachsys.com/java/passing.html The difference is that I would say that C supports the Pass By Reference idiom using this syntax: myfunc(int *val){} /*CALL:*/ myfunc(&i); which actually passes an address expression (not a variable) by value, but "looks and feels" like a reference to the "i" variable, which contains the real value that we care about - and allows modification of that value. C++ adds a syntactic change for this very commonly used C idiom, but does not add any new capability - the results are absolutely indistinguishable. Python, likewise, in relation to the values we care about (the values contained only in objects, never in variables) behaves like Call by Object Reference. If I tell someone that Python uses only Call By Value (and that is all I tell them), they may come away with the impression that variables contain the values they care about, and/or that the contents of objects are copied, neither of which is the case, even for so-called "simple", immutable objects (indeed, at the start of this thread, you said you believed that, like Java, simple values were contained within Python variables). But Python, unlike Java or most other commonly used languages, can ONLY EVER pass an object reference, and never an actual value I care about, and I think that idiom deserves a different name which distinguishes it from the commonly accepted notion of Pass By Value. Thanks for a thoughtful discussion, dale -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 30, 3:06 pm, Dale Roberts <[EMAIL PROTECTED]> wrote: > ... that idiom deserves a different name which > distinguishes it from the commonly accepted notion of Pass By Value. Bah, what I meant to end with was: Just as the Pass By Reference idiom deserves a unique name to distinguish it from Pass By Value (even though it is often Pass By (address) Value internally), so Pass By Object Reference deserves a unique name (even though it too is Pass By (reference) Value internally). Again, thanks for the discussion, dale -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 31, 3:15 am, greg <[EMAIL PROTECTED]> wrote:
> Dale Roberts wrote:
> > Just as the Pass By Reference idiom deserves a unique name to
> > distinguish it from Pass By Value (even though it is often Pass By
> > (address) Value internally), so Pass By Object Reference deserves a
> > unique name (even though it too is Pass By (reference) Value
> > internally).
>
> Since Python only has one parameter passing mechanism,
> there's no need to give it a name at all. If you're
> having to explain it, just explain it, and don't
> bother naming it!
>
> --
> Greg

But then why bother having any other names at all for other languages that have only one calling mechanism, like Call By Name, Call By Macro Expansion, etc. If it is a different process, it needs a different name. OR, as you suggest, no name at all, just an explanation. But please don't give it the WRONG name!
--
http://mail.python.org/mailman/listinfo/python-list
Re: Finding the instance reference of an object
On Oct 31, 2:27 am, greg <[EMAIL PROTECTED]> wrote: > Dale Roberts wrote: > > Are you > > saying that C++ is capable of using the Call By Reference idiom, but C > > is not, because C does not have a reference designation for formal > > function parameters? > > Call by reference is not an "idiom", it's a *language > feature*. > ... > You can use an idiom in C to get the same effect, but this > is not the same thing as the language having it as a feature. Okay, I'll grant that, but is there a language other than Python that uses the Call By Value feature that does not do it by assigning/ copying the result of an expression into the formal parameter? The terms "result" and "value" are generally understood to refer to the working data of the program, not the internal workings of the interpreter, VM, or compiler. So, yes, internally the C Python runtime does use Call By Value. It's written in C after all - that's all it can do. But Call By Value is not a feature of the Python language. dale [Somebody unplug my network cable! I can't stop!] -- http://mail.python.org/mailman/listinfo/python-list
question about ctrl-d and atexit with threads
I have a function that stops execution of a thread, and this function is registered with atexit.register. A simple example module is included at the end of this post, say it's called test.py. If I do the following in the interactive interpreter, the thread stops executing as I hoped:

>>> from test import my_thread
>>> import sys
>>> sys.exit()

If instead I do the following:

>>> from test import my_thread
>>>

the interpreter hangs up and my_thread continues to execute indefinitely (confirmed by uncommenting the print statement in run). I've seen this behavior on python-2.5 and 2.6 on 64 bit linux systems (gentoo and kubuntu). Can anyone else confirm that invoking ctrl-D hangs up the interactive interpreter with this code? And if so, could anyone explain how ctrl-d is different than sys.exit?

Thank you,
Darren

import atexit
import threading
import time

class MyThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.lock = threading.Lock()
        self.stopEvent = threading.Event()

    def run(self):
        while not self.stopEvent.isSet():
            # print 'running'
            time.sleep(0.1)

    def stop(self):
        self.stopEvent.set()
        self.join()

my_thread = MyThread()

def stop_execution():
    my_thread.stop()

atexit.register(stop_execution)

my_thread.start()
--
http://mail.python.org/mailman/listinfo/python-list
Re: question about ctrl-d and atexit with threads
Actually, this problem can also be seen by running this code as a script: it hangs up if the sys.exit lines are commented, and exits normally if uncommented.

import atexit
import threading
import time

class MyThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.lock = threading.Lock()
        self.stopEvent = threading.Event()

    def run(self):
        while not self.stopEvent.isSet():
            time.sleep(0.1)

    def stop(self):
        self.stopEvent.set()
        self.join()

my_thread = MyThread()

def stop_execution():
    my_thread.stop()

atexit.register(stop_execution)

my_thread.start()

#import sys
#sys.exit()
--
http://mail.python.org/mailman/listinfo/python-list
Re: question about ctrl-d and atexit with threads
On Mar 5, 12:02 pm, s...@pobox.com wrote: > What happens if you simply call > > my_thread.setDaemon(True) > > (or in Python 2.6): > > my_thread.daemon = True > > ? That is the documented way to exit worker threads when you want the > application to exit. From the threading module docs: > > "The entire Python program exits when no alive non-daemon threads are > left." Thank you Skip, that solves the problem. I'm still curious what the difference is between python's handling of sys.exit and EOF, but its academic at this point. Thanks again, Darren -- http://mail.python.org/mailman/listinfo/python-list
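For the record, applying the daemon suggestion to the earlier example reduces it to something like the following sketch (the worker loop is a stand-in for the real thread body):

import threading
import time

class MyThread(threading.Thread):
    def run(self):
        while True:
            time.sleep(0.1)

my_thread = MyThread()
my_thread.setDaemon(True)   # or my_thread.daemon = True on python-2.6
my_thread.start()
# the interpreter can now exit (sys.exit or ctrl-D) without waiting for
# this thread, so no atexit handler is needed to stop it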
Re: question about ctrl-d and atexit with threads
On Mar 5, 6:27 pm, "Gabriel Genellina" wrote: > En Thu, 05 Mar 2009 15:26:18 -0200, Darren Dale > escribió: > > > > > On Mar 5, 12:02 pm, s...@pobox.com wrote: > >> What happens if you simply call > > >> my_thread.setDaemon(True) > > >> (or in Python 2.6): > > >> my_thread.daemon = True > > >> ? That is the documented way to exit worker threads when you want the > >> application to exit. From the threading module docs: > > >> "The entire Python program exits when no alive non-daemon threads > >> are > >> left." > > > Thank you Skip, that solves the problem. I'm still curious what the > > difference is between python's handling of sys.exit and EOF, but its > > academic at this point. > > Some applications open a new window for each document they're handling. > When you close the last window, the application exits ("close" does an > implicit "quit"). Note that this does *not* happen when you close the > first, original document you opened, but when there are no more documents > open. The first document is not special in this regard. > > Python threads work the same way; a thread may finish, but as long as > there are other threads alive, the process continues running. Only after > the last thread has finished, the application quits. The main thread *is* > special sometimes, but not in this aspect, > Setting daemon=True is like telling Python "I don't care about this > thread; don't wait for it if that's the only thing you have to do". > > Calling sys.exit() is an explicit statement: "I want this program to > finish now" (or as soon as possible). It doesn't wait for the remaining > threads (unless you explicitely do so, like in your code). Right, I understand all that. I don't understand how calling sys.exit at the python command line is different from invoking ctrl-D. They should both trigger the same mechanism if they are advertised as equivalent mechanisms for exiting the interpreter, shouldnt they? -- http://mail.python.org/mailman/listinfo/python-list
Re: question about ctrl-d and atexit with threads
On Mar 6, 1:32 pm, rdmur...@bitdance.com wrote: > Darren Dale wrote: > >On Mar 5, 6:27 pm, "Gabriel Genellina" wrote: > >> En Thu, 05 Mar 2009 15:26:18 -0200, Darren Dale > >> escribi : > > >> > On Mar 5, 12:02 pm, s...@pobox.com wrote: > >> >> What happens if you simply call > > >> >> my_thread.setDaemon(True) > > >> >> (or in Python 2.6): > > >> >> my_thread.daemon = True > > >> >> ? That is the documented way to exit worker threads when you want the > >> >> application to exit. From the threading module docs: > > >> >> "The entire Python program exits when no alive non-daemon threads > >> >> are left." > > >> > Thank you Skip, that solves the problem. I'm still curious what the > >> > difference is between python's handling of sys.exit and EOF, but its > >> > academic at this point. > > >> Some applications open a new window for each document they're handling. > >> When you close the last window, the application exits ("close" does an > >> implicit "quit"). Note that this does *not* happen when you close the > >> first, original document you opened, but when there are no more documents > >> open. The first document is not special in this regard. > > >> Python threads work the same way; a thread may finish, but as long as > >> there are other threads alive, the process continues running. Only after > >> the last thread has finished, the application quits. The main thread *is* > >> special sometimes, but not in this aspect, > >> Setting daemon=True is like telling Python "I don't care about this > >> thread; don't wait for it if that's the only thing you have to do". > > >> Calling sys.exit() is an explicit statement: "I want this program to > >> finish now" (or as soon as possible). It doesn't wait for the remaining > >> threads (unless you explicitely do so, like in your code). > > >Right, I understand all that. I don't understand how calling sys.exit > >at the python command line is different from invoking ctrl-D. They > >should both trigger the same mechanism if they are advertised as > >equivalent mechanisms for exiting the interpreter, shouldnt they? > > First, just to make sure we are on the same page, I assume you > understand that 'ctlr-D' at the python interpreter prompt is > completely equivalent to your example file that does not call > sys.exit before the end of the script file. That is, ctrl-D is > "end of file" for the 'script' you are creating at the > interactive interpreter prompt. > > When Gabriel says "as long as there are other threads alive, the > process continues running", that is the key to your question. > The main thread has exited (either the interpreter or your main > script file) but another thread is still running (your child > thread), so Python keeps executing that thread. sys.exit is > _not_ called at the end of the "main" script file, but only after > all threads have exited. > > Unless, that is, you call sys.exit explicitly, or set daemon = True. OK, I understand now. Thank you both for the clarification, I learned something important. -- http://mail.python.org/mailman/listinfo/python-list
Unix programmers and Idle
I wonder if someone could point me at documentation on how to debug some of the standard Unix type things in Idle. I cannot seem to figure out how to set my argument line for the program I am debugging in an Idle window. for example: vlmdeckcheck.py --strict --debug file.dat There must be a way to tell it what the command line args are for the test run but I can't find it so far. signature.asc Description: Digital signature -- http://mail.python.org/mailman/listinfo/python-list
Re: Unix programmers and Idle
On Mon, Mar 30, 2009 at 08:11:10PM -0500, Dave Angel wrote: > I don't know what Idle has to do with it. sys.args contains the command > line arguments used to start a script. > > Dale Amon wrote: >> I wonder if someone could point me at documentation on how to debug >> some of the standard Unix type things >> in Idle. I cannot seem to figure out how to set my >> argument line for the program I am debugging in an Idle >> window. for example: >> >> vlmdeckcheck.py --strict --debug file.dat >> >> There must be a way to tell it what the command line args >> are for the test run but I can't find it so far. >> The line above represent what I want to emulate within idle. If you run idle, select File->Open; then select the program name as above to open; select Debug->Debugger; then start the program with F5... which is lovely but I cannot find a way to tell idle what the args are. idle is really nice but I am stuck doing my debugging in pdb because of this. signature.asc Description: Digital signature -- http://mail.python.org/mailman/listinfo/python-list
Re: Unix programmers and Idle
On Mon, Mar 30, 2009 at 09:47:24PM -0500, Dave Angel wrote:
> See http://docs.python.org/library/idle.html and search for command line
>
> According to that page (for Python 2.6.1), you can set those parameters
> on the command line that starts IDLE itself.
>
> I haven't tried it yet, as I'm using Komodo.

That at least makes some debug with it possible... but you would still have to terminate idle and all your set up just to run a test with a different regression data set for the input file. Or to toggle the switches to test that the code is doing the right thing.

I think one of the others just posted a more definitive answer... the code isn't there but possibly could be. That of course doesn't help me right now, but it might well help lots of other folk in the future!
--
http://mail.python.org/mailman/listinfo/python-list
Re: Unix programmers and Idle
On Mon, Mar 30, 2009 at 10:54:56PM -0700, Niklas Norrthon wrote: > I make sure my scripts are on the form: > > # imports > # global initialization (not depending on sys.argv) > def main(): > # initialization (might depend on sys.argv) > # script logic > # other functions > if __name__ == '__main__': > main() > > Then I have a trivial debug script named debug_whatever.py, which I > use as my entry point during debugging: > > # debug_whatever.py: > import sys > sys.argv[1:] = ['arg1', 'arg2', 'arg3'] > import whatever > whatever.main() I've found this approach very useful in another way as well. I write all my programs, in whatever language, with a perldoc section at the very end of the file where it doesn't get in the way of my seeing the code, but is still at least in the same module. Due to the problems of forward referencing I had not heretofore been able to use it inside the program module as it was obviously not defined when the code ran. However, with your method and with those two lines place at the very end after the perldoc documentation the code execution is always delayed until I can stuff it into __doc__ and thus have it available to print with a --man command line switch which I like to have in all my code. (I just pipe the __doc__ string through perl2man there) Works for me! signature.asc Description: Digital signature -- http://mail.python.org/mailman/listinfo/python-list
Re: Unix programmers and Idle
Just in case anyone else finds it useful, to be precise I use:

if opts.man:
    p1 = Popen(["echo", __doc__], stdout=PIPE)
    p2 = Popen(["pod2man"], stdin=p1.stdout, stdout=PIPE)
    p3 = Popen(["nroff", "-man"], stdin=p2.stdout, stdout=PIPE)
    output = p3.communicate()[0]
    print output

inside the def main().
--
http://mail.python.org/mailman/listinfo/python-list
Computed attribute names
There are a number of things which I have been used to doing in other OO languages which I have not yet figured out how to do in Python, the most important of which is passing method names as args and inserting them into method calls. Here are two cases I have been trying to figure out for a current project.

The first is passing methods to dispatcher methods. In pseudocode, something like this:

def dispatcher(self, methodname):
    self.obj1.methodname()
    self.obj2.methodname()

and another case is selecting behavior of an object by setting a type string, with pseudocode like this:

self.IBM029 = re.compile([^acharset])
self.IBM026 = re.compile([^anothercharset])
self.type = "IBM029"
errs = self.(self.type).findall(aCardImage)

I have yet to find any way to do either, although it appears I could do some of it using a long and roundabout call string using __dict__. What is the Python dialect for this sort of runtime OO?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Computed attribute names
On Wed, Apr 08, 2009 at 09:03:00PM +0200, paul wrote: > I'd say you can use: Thanks. I could hardly ask for a faster response on a HowTo than this! signature.asc Description: Digital signature -- http://mail.python.org/mailman/listinfo/python-list
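The quoted answer arrived trimmed in the archive; the usual Python spelling for both cases is getattr, which looks attributes and methods up by name at run time. A minimal sketch (the class, character sets and helper names here are invented for illustration):

import re

class CardChecker(object):
    def __init__(self):
        self.IBM029 = re.compile(r"[^A-Z0-9 ]")   # placeholder charsets
        self.IBM026 = re.compile(r"[^a-z0-9 ]")
        self.type = "IBM029"

    def check(self, aCardImage):
        pattern = getattr(self, self.type)        # attribute chosen by name
        return pattern.findall(aCardImage)

def dispatch(target, methodname, *args):
    # call a method selected at run time by its name
    return getattr(target, methodname)(*args)

c = CardChecker()
print c.check("HELLO, world")
print dispatch(c, "check", "HELLO, world")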
regexp strangeness
This finds nothing:

import re
import string

card = "abcdef"
DEC029 = re.compile("[^&0-9A-Z/ $*,.\-:#@'=\"[<(+\^!);\\\]%_>?]")
errs = DEC029.findall(card.strip("\n\r"))
print errs

This works correctly:

import re
import string

card = "abcdef"
DEC029 = re.compile("[^&0-9A-Z/ $*,.\-:#@'=\"[<(+\^!)\\;\]%_>?]")
errs = DEC029.findall(card.strip("\n\r"))
print errs

They differ only in the positioning of the quoted backslash. Just in case it is of interest to anyone.
--
http://mail.python.org/mailman/listinfo/python-list
design question, metaclasses?
I am working on a project that provides a high level interface to hdf5 files by implementing a thin wrapper around h5py. I would like to generalize the project so the same API can be used with other formats, like netcdf or ascii files. The format-specific code exists in File, Group and Dataset classes, which I could reimplement for each format. But there are other classes deriving from Group and Dataset which do not contain any format-specific code, and I would like to find a way to implement the functionality once and apply it uniformly across supported formats. This is really abstract, but I was thinking of something along the lines of:

format1.Group        # implementation of group in format1
format2.Group        # ...
Base.DerivedGroup    # base implementation of DerivedGroup, not directly useful

format1.DerivedGroup = Base.DerivedGroup(format1.Group)  # useful
format2.DerivedGroup = Base.DerivedGroup(format2.Group)  # useful

Could anyone please offer a comment: is this an appropriate use of metaclassing, or is there maybe an easier/better alternative?
--
http://mail.python.org/mailman/listinfo/python-list
Re: design question, metaclasses?
On Apr 11, 2:15 pm, Darren Dale wrote:
> I am working on a project that provides a high level interface to hdf5
> files by implementing a thin wrapper around h5py. I would like to
> generalize the project so the same API can be used with other formats,
> like netcdf or ascii files. The format specific code exists in File,
> Group and Dataset classes, which I could reimplement for each format.
> But there are other classes deriving from Group and Dataset which do
> not contain any format-specific code, and I would like to find a way
> to implement the functionality once and apply uniformly across
> supported formats. This is really abstract, but I was thinking of
> something along the lines of:
>
> format1.Group # implementation of group in format1
> format2.Group # ...
> Base.DerivedGroup # base implementation of DerivedGroup, not directly useful
> format1.DerivedGroup = Base.DerivedGroup(format1.Group) # useful
> format2.DerivedGroup = Base.DerivedGroup(format2.Group) # useful
>
> Could anyone please offer a comment, is this an appropriate use of
> metaclassing, or is there maybe an easier/better alternative?

I don't fully understand metaclasses, but I think I have convinced myself that they are not what I was looking for. I think this will do what I want it to:

class Group1(object):

    def origin(self):
        return "Group1"

class Group2(object):

    def origin(self):
        return "Group2"

def _SubGroup(superclass):

    class SubGroup(superclass):
        pass

    return SubGroup

SubGroup = _SubGroup(Group2)
sub_group = SubGroup()

print sub_group.origin()
--
http://mail.python.org/mailman/listinfo/python-list
Re: design question, metaclasses?
On Apr 12, 3:23 pm, Aaron Brady wrote: > On Apr 12, 1:30 pm, Darren Dale wrote: > > > > > On Apr 11, 2:15 pm, Darren Dale wrote: > > _ > > > > format1.Group # implementation of group in format1 > > > format2.Group # ... > > > Base.DerivedGroup # base implementation of DerivedGroup, not directly > > > useful > > > format1.DerivedGroup = Base.DerivedGroup(format1.Group) # useful > > > format2.DerivedGroup = Base.DerivedGroup(format2.Group) # useful > > _ > > > class Group1(object): > > > def origin(self): > > return "Group1" > > > class Group2(object): > > > def origin(self): > > return "Group2" > > > def _SubGroup(superclass): > > > class SubGroup(superclass): > > pass > > > return SubGroup > > > SubGroup = _SubGroup(Group2) > > sub_group = SubGroup() > > > print sub_group.origin() > > You can create new types in one statement: > > SubGroup= type( "SubGroup", ( BaseGroup, ), { } ) But how can I implement the *instance* behavior of SubGroup with this example? In my original example: format1.Group # implementation of group in format1 format2.Group # implementation of group in format2 Base.DerivedGroup # base implementation of DerivedGroup, must subclass a group format1.DerivedGroup = Base.DerivedGroup(format1.Group) # useful format2.DerivedGroup = Base.DerivedGroup(format2.Group) # useful I'm trying to achieve uniform behavior of my derived groups across supported formats. My derived groups are abstracted such that they do not need to be reimplemented for each format, I only need to implement Group for each format. This is a real mind bender for me, even my factory function gets hairy because I have additional classes that derive from DerivedGroup. Maybe what I need is the ability to provide context at import time, is that possible? -- http://mail.python.org/mailman/listinfo/python-list
Re: design question, metaclasses?
On Apr 12, 4:50 pm, Kay Schluehr wrote: > On 11 Apr., 20:15, Darren Dale wrote: > > > I am working on a project that provides a high level interface to hdf5 > > files by implementing a thin wrapper around h5py. > > I would like to > > generalize the project so the same API can be used with other formats, > > like netcdf or ascii files. The format specific code exists in File, > > Group and Dataset classes, which I could reimplement for each format. > > But there are other classes deriving from Group and Dataset which do > > not contain any format-specific code, and I would like to find a way > > to implement the functionality once and apply uniformly across > > supported formats. > > Seems like you are doing it wrong. The classical OO approach is to add > more details / refining classes in subclasses instead of doing it the > other way round and derive the less specific classes from the more > specific ones. I think I am following the classical OO approach, refining details in subclasses. I just want a given subclass implementation describing a complex dataset to be able to work on top of multiple hierarchical file formats (like NetCDF or HDF5) by deriving from either NetCDF or HDF5 base classes that have an identical API. Those base classes encapsulate all the format-specific details, the subclasses allow a uniform image to be handled differently than a nonuniform image with a mask (for example). Maybe I should be delegating rather than subclassing. -- http://mail.python.org/mailman/listinfo/python-list
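A small sketch of the delegation alternative mentioned at the end of the post above: the format-independent behaviour is written once and hands the storage details to whichever backend it is given. All class names here are invented for illustration:

class Hdf5Group(object):
    def origin(self):
        return "hdf5"

class NetcdfGroup(object):
    def origin(self):
        return "netcdf"

class DerivedGroup(object):
    # format-independent logic, written once
    def __init__(self, backend):
        self._backend = backend

    def describe(self):
        return "derived group stored in %s" % self._backend.origin()

print DerivedGroup(Hdf5Group()).describe()
print DerivedGroup(NetcdfGroup()).describe()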
Re: any(), all() and empty iterable
On Apr 14, 8:33 am, Tim Chase wrote: > ... > I still prefer "Return False if any element of the iterable is > not true" or "Return False if any element in the iterable is > false" because that describes exactly what the algorithm does. I agree that the original doc comment is not helpful as it stands (even though the behavior of any() is of course correct!), and prefer Tim's alternative. Since Python is used by programmers (hopefully!), the doc comment should be directed toward that audience. It should be unambiguous, and should not assume everyone has perfect knowledge of mathematical logic operations, or that Python necessarily follows the rules that a logician would expect (take the varying behavior of "modulo" operators in various languages as an example). Pure logic aside, if I was presented with the original comment ('Return True if all elements of the iterable are true.') as a specification, as a programmer I would immediately have to ask what to do in the case of an empty list. It might be that the user hadn't thought about it, or would want to throw an exception, or return False. The doc should speak to the intended audience: programmers, who like to make sure all bases and cases are covered. dale -- http://mail.python.org/mailman/listinfo/python-list
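For reference, the empty-iterable behaviour under discussion is easy to check at the prompt:

>>> all([])          # vacuously true: there is no element that is false
True
>>> any([])          # there is no element that is true
False
>>> all([0]), any([0])
(False, False)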
Re: any(), all() and empty iterable
On Apr 16, 2:27 pm, Tim Chase wrote: > Raymond Hettinger wrote: > > I will not change the sentence to "return false if any element > > of the iterable is false." The negations make the sentence > > hard to parse mentally > > Just as a ribbing, that "return X if any element of the iterable > is X" is of the same form as the original. The negation is only > of the X, not of the sentence structure. > > > I will probably leave the lead-in sentence as-is but may > > add another sentence specifically covering the case for > > an empty iterable. > > as one of the instigators in this thread, I'm +1 on this solution. Yes, I now appreciate the motivation for having the word "all" in the text, and simply adding something like "or the iterable is empty" might head off future confusion. dale -- http://mail.python.org/mailman/listinfo/python-list
send() to a generator in a "for" loop with continue(val)??
I've started using generators for some "real" work (love them!), and I need to use send() to send values back into the yield inside the generator. When I want to use the generator, though, I have to essentially duplicate the machinery of a "for" loop, because the "for" loop does not have a mechanism to send into the generator. Here is a toy example:

def TestGen1():
    for i in xrange(3):
        sendval = yield i
        print " got %s in TestGen()" % sendval

g = TestGen1()
sendval = None
try:
    while True:
        val = g.send(sendval)
        print 'val in "while" loop %d' % val
        sendval = val * 10
except StopIteration:
    pass

I have to explicitly create the generator with an assignment, send an initial None to the generator on the first go, then have to catch the StopIteration exception. In other words, replicate the "for" mechanism, but use send() instead of next(). It would be nice if I could just do this instead:

for val in TestGen1():
    print 'val in "for" loop %d' % val
    continue(val*10)

...or something similar. Is this an old idea? Has it been shot down in the past already? Or is it worth pursuing? I Googled around and saw one hit here: http://mail.python.org/pipermail/python-ideas/2009-February/003111.html, but not much follow-up.

I wonder if people want to keep the idea of an "iterator" style generator (where send() is not used) separate from the idea of a "co-routine" style generator (where send() is used). Maybe combining the two idioms in this way would cause confusion?

What do folks think?

dale
--
http://mail.python.org/mailman/listinfo/python-list
Re: send() to a generator in a "for" loop with continue(val)??
On Apr 17, 10:07 pm, Aaron Brady wrote: > You can do it with a wrapping generator. I'm not sure if it > interferes with your needs. It calls 'next' the first time, then just > calls 'send' on the parameter with the value you send it. Aaron, Thanks for the hint. I'd made a modified version of my generator that was "for loop aware" and had two yields in it, but this seemed very fragile and hackish to me, and left my generator only usable inside a "for" loop. The wrapper method seems to be a much better way to go. dale -- http://mail.python.org/mailman/listinfo/python-list
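A sketch of the wrapping-generator approach described above, so the caller can stay in a plain "for" loop; sendloop and reply are invented names:

def TestGen1():
    for i in xrange(3):
        sendval = yield i
        print " got %s in TestGen()" % sendval

def sendloop(gen, reply):
    # drive a send()-style generator so a plain for loop can consume it;
    # 'reply' computes the value sent back in for each value yielded
    try:
        val = gen.next()
        while True:
            yield val
            val = gen.send(reply(val))
    except StopIteration:
        pass

for val in sendloop(TestGen1(), lambda v: v * 10):
    print 'val in "for" loop %d' % val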
Re: send() to a generator in a "for" loop with continue(val)??
On Apr 19, 6:10 am, Peter Otten <__pete...@web.de> wrote: > ... > I only just started reading Beazley's presentation, it looks interesting. > Thanks for the hint! > > Are you currently using coroutines in Python? If so, what kind of practical > problems do they simplify for you? I thought I'd chime in with an application too. I am using this mechanism to implement a state machine. I read through Beazley's presentation too - wow, lots of ideas in there. For my simple state machine, I am using a very simple "trampoline" function (see his slides starting at about #172). My "run" routine is a bit different, but the idea is similar. I'm using this to present images to a test subject (a person looking at a computer screen), and the person's responses guide the state machine. So I need to get data in (the subject responses) and out (the next image to be presented). So I have violated The Beazley Principle of slide #195: Keeping it Straight • If you are going to use coroutines, it is critically important to not mix programming paradigms together • There are three main uses of yield • Iteration (a producer of data) • Receiving messages (a consumer) • A trap (cooperative multitasking) • Do NOT write generator functions that try to do more than one of these at once ...whoops! But I think this is a valid use of the mechanism, in that it is very localized and self contained to just the few routines that make up the state machine. It works very well, makes it easy to implement the state machine clearly, and is easy to understand and maintain. I can see where it could get very confusing to use this mechanism in a more general way. dale -- http://mail.python.org/mailman/listinfo/python-list
import and package confusion
I am going around in circles right now and have to admit I do not understand what is going on with import of hierarchical packages/modules. Perhaps someone can get me on the road again. Here is a subset of what I am trying to accomplish.

The package directory set up:

VLMLegacy/
    __init__.py
    Reader.py
    Conditions.py
    VLM4997/
        __init__.py
        Conditions.py
    WINGTL/
        __init__.py
        Conditions.py

The inheritance:

object
    Reader
    Conditions
        VLM4997.Conditions
        WINGTL.Conditions

Now how do I use import or from to be able to use these modules? The following is not 'real' code and is only intended to give some idea of what I am trying to accomplish:

import sys
sys.path.extend(['../lib', '../bin'])

import VLMLegacy.VLM4997.Conditions
import VLMLegacy.WINGTL.Conditions

b = VLM4997.Conditions(2)
b.test()
c = WINGTL.Conditions(2)
c.test()

And of course note that both of those must inherit VLMLegacy.Conditions().
--
http://mail.python.org/mailman/listinfo/python-list
Re: import and package confusion
I am trying to get to the heart of what it is I am missing. Is it the case that if you have a module C in a package A:

    A.C

that there is no way to load it such that you can use:

    x = A.C()

in your code? This is just a simpler case of what I'm trying to do now, which has a module C in a sub-package to be imported:

    A.B.C

ie with files:

    mydir/A/B/C.py
    mydir/mymain.py

and executed in mymain.py as:

    x = A.B.C()

I may still choose to do it the way you suggested, but I would still like to understand why this does not work.

--
http://mail.python.org/mailman/listinfo/python-list
Re: import and package confusion
On Wed, Apr 29, 2009 at 01:12:33PM -0700, Scott David Daniels wrote:
> Dale Amon wrote:
>> I am trying to get to the heart of what it is I am
>> missing. Is it the case that if you have a module C in a package A:
>>     A.C
>> that there is no way to load it such that you can use:
>>     x = A.C()
>> in your code?
> OK, here's a simple question. What do you expect from:
>     import sys
>     sys()
> sys is a module, and as such, it is not callable.
> Just because you put a class inside a module, does not mean
> that class magically does something by virtue of having the
> same name as the module.
>
> A module is a namespace to hold classes, functions, etc.
> A package is a namespace to hold modules (possibly more).
>
> I don't understand why you don't use files like:
>
>     VLMLegacy/
>         __init__.py
>         Reader.py
>         VLM4997.py
>         WINGTL.py

Well, it is far more complex than that: I just cut it down to the most minimal case I could.

> But, presuming some kind of rationale,
> put the code you want in VLMLegacy/VLM4997/__init__.py

That doesn't really do it. Perhaps I should try to describe the situation better. There are n different similar systems, each with multiple classes. They could either be implemented as a class at the first level:

    VLMLegacy/
        Conditions.py
        Plan.py
        ...

but in that case each class will be filled with conditionals that try to do the correct thing depending on which system's data they are reading. That approach has already gotten *insane* and I need to objectify things: put all the common code into abstract superclasses, and then create a subclass for each different system (of which there will be an unknown number added over time), ie:

    VLMLegacy/
        Conditions.py       Abstract classes
        Plan.py
        ...
        TYPE1/              Subclasses of above specific to Type 1
            Conditions.py
            Plan.py
            ...
        TYPE2/              Subclasses for Type 2
            Conditions.py
            Plan.py
            ...
        TYPEn/              Subclasses for Type n
            Conditions.py
            Plan.py
            ...

Every VLMLegacy.TYPEn.Conditions (or other class) has exactly the same set of methods; each of those methods inherits much of its basic behavior from VLMLegacy.Conditions. If I give every subclass a unique name, things will rapidly get out of hand, especially when I start adding TYPEn+1, TYPEn+2, etc. So yes, the approach isn't arbitrary, it is a solution to real design problems which even the above does not fully do justice to.

What I would really like to do when executing is more like:

    type = "VLM4997"
    type.Header(args)
    type.Plan(args)
    type.Conditions(args)

Where the type might change from execution to execution or even on different iterations.

--
http://mail.python.org/mailman/listinfo/python-list
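To make the module-versus-class distinction concrete with the layout from earlier in the thread (assuming each Conditions.py defines a class named Conditions, as the original post implies):

    # Conditions here names a *module*; the class inside it is a second,
    # separate name that happens to be spelled the same way.
    import VLMLegacy.VLM4997.Conditions
    b = VLMLegacy.VLM4997.Conditions.Conditions(2)   # package.module.Class

    # or bind the class directly and drop the repetition:
    from VLMLegacy.VLM4997.Conditions import Conditions
    b = Conditions(2)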
Re: import and package confusion
On Wed, Apr 29, 2009 at 04:34:03PM -0400, Dale Amon wrote:
> type = "VLM4997"
> type.Header(args)
> type.Plan(args)
> type.Conditions(args)
> Where the type might change from execution to execution
> or even on different iterations.

Actually let me make that reflect more accurately what is going on:

    obj = Deck(rdr)
    obj.header  = type.Header(rdr)
    obj.plan[0] = type.Plan(rdr)
    obj.plan[1] = type.Plan(rdr)
    obj.cond    = type.Conditions(rdr)
    obj.cond.calcsomething(args)

and so forth through many pages of code...

--
http://mail.python.org/mailman/listinfo/python-list
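When the candidate types are known up front, an ordinary way to get this kind of dispatch is to pick a module object and call through it. This is only a sketch: it assumes each subpackage's __init__.py re-exports its classes, and Deck and rdr are the poster's names; the run-time loading that the rest of the thread settles on is needed for types not known in advance:

    import VLMLegacy.VLM4997 as VLM4997
    import VLMLegacy.WINGTL as WINGTL

    io_systems = {"VLM4997": VLM4997, "WINGTL": WINGTL}

    io = io_systems["VLM4997"]        # the key could come from the command line
    obj = Deck(rdr)
    obj.header = io.Header(rdr)
    obj.cond = io.Conditions(rdr)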
Re: import and package confusion
On Wed, Apr 29, 2009 at 03:06:13PM -0700, Scott David Daniels wrote:
> You did not answer the question above, and I think the answer is the root
> of your misunderstanding. A class and a module are _not_the_same_thing_.
> sys is not a package, it is a module.
>>> Just because you put a class inside a module, does not mean
>>> that class magically does something by virtue of having the
>>> same name as the module.
>>>
>>> A module is a namespace to hold classes, functions, etc
>>> A package is a namespace to hold modules (possibly more).
>>>
>>> I don't understand why you don't use files like:
>>>
>>>     VLMLegacy/
>>>         __init__.py
>>>         Reader.py
>>>         VLM4997.py
>>>         WINGTL.py
> Unlike Java, we are free to have several things in a module:
> several classes, several functions, several constants

These modules would grow to be hundreds of pages long and be difficult to deal with when debugging a problem related to one obscure system without looking at (or potentially screwing up) any of the others. I prefer one class per module. This gets more into philosophy, but I figure any function or method that does not fit on one page is too big, and any source file that is more than 20 pages long should be broken in half. I like my modules in the 5-10 page size range, including the embedded Unix man pages and the cvs history. But that's just my house style.

> Well, "VLM4997" is a _string_, and it has no attributes (nor methods)
> named "Header", "Plan", or "Conditions." And "type" is a perfectly awful
> name for a variable, since it hides the builtin named type. You seem to
> confuse names, files, and classes defined in files (at least in your
> writing).

Actually I'm not. I am simply trying to use pseudocode to explain roughly what is going on. There will be a string that selects which set of classes is to be used on any given iteration, and it will be used to generate the name of the class and/or the name of the module where it is to be found. I'm an old ObjC hacker. I often put the class or method in a variable and do the bindings at runtime. I am already doing some of that sort of thing in this system with the method names and it works nicely.

The point I take away from this is that packages and modules have dotted names, but classes do not, and there is no way to do exactly what I wanted to do. The dot syntax would have been quite nice (I quite like the "::" syntax in Perl) and would have made the code much clearer. The way you suggested, with a 'typename_classname' generated using a from/import statement, will just have to suffice.

--
http://mail.python.org/mailman/listinfo/python-list
Re: import and package confusion
Well, I've managed to get close to what I want, and just so you can see:

    #!/usr/bin/python
    import sys
    sys.path.extend(['../lib', '../bin'])

    from VLMLegacy.CardReader import CardReader
    rdr = CardReader("../example/B767.dat", "PRINTABLE")

    iotypes = ["WINGTL", "VLMPC", "VLM4997"]
    for iotype in iotypes:
        packagename = "VLMLegacy." + iotype + ".Conditions"
        classname = iotype + "_Conditions"
        code = "from %s import Conditions as %s" % (packagename, classname)
        x = compile(code, "foo", "exec")
        exec x
        cls = globals()[classname]
        a = cls(rdr, 2)
        a.test()

--
http://mail.python.org/mailman/listinfo/python-list
Re: import and package confusion
On Wed, Apr 29, 2009 at 04:06:23PM -0700, Scott David Daniels wrote:
> Dale Amon wrote:
>>
>> The point I take away from this is that packages and
>> modules have dotted names, but Classes do not and there
>> is no way to do exactly what I wanted to do.
> Nope. You have not been clear with what you want, and part
> of the lack of clarity is your imprecision about names.
>
> If you insist on having a class per module, you will
> always have redundant-looking class names somewhere.
> You will help yourself out a lot by not sharing the class
> name and the base class name (not the least in error
> messages), but it is possible to have them the same.

That in particular may happen. This has all been a matter of running tests to see how close I could get to the desired concept using Python. With some working test code I have now, the answer is 'close enough'. Your assistance has been useful, regardless of whether it sounded that way or not ;-)

--
http://mail.python.org/mailman/listinfo/python-list
Re: Re: import and package confusion
On Wed, Apr 29, 2009 at 10:02:46PM -0400, Dave Angel wrote:
> The dot syntax works very
> predictably, and quite flexibly. The problem was that by using the same
> name for module and class, you didn't realize you needed to include both.

It is one of the hazards of working in many very different languages. But I see where that confusion lies and that is a useful thing to know.

> And in particular if you simply do the following, you can choose between
> those modules:
>
>     if test:
>         mod = mymodule1
>     else:
>         mod = mymodule2
>     obj = mod.myclass(arg1, arg2)

Not really applicable to the case I have. There can be lots of different ones and the input selection comes from a command line string so...

> Please don't sink to exec or eval to solve what is really a
> straightforward problem.

I do not really see any other way to do what I want. If there is a way to get rid of the exec in the sample code I have used, I would love to know... but I can't see how to import something where part of the name comes from user command line input without interpreting the code via exec. [See the test module I posted.] I'm dealing with something like this:

    myprogram --type WINGTL file.dat

The set of types will have new members added as they are discovered, and I intend to minimize code changes to nothing but creating a subpackage directory with the new modules and dropping it in place. New functionality with no mods to the existing code...

--
http://mail.python.org/mailman/listinfo/python-list
Re: import and package confusion
On Thu, Apr 30, 2009 at 08:32:31AM +0200, Jeroen Ruigrok van der Werven wrote:
> -On [20090430 02:21], Dale Amon (a...@vnl.com) wrote:
>> import sys
>> sys.path.extend (['../lib', '../bin'])
>>
>> from VLMLegacy.CardReader import CardReader
>> rdr = CardReader ("../example/B767.dat","PRINTABLE")
>>
>> iotypes = ["WINGTL","VLMPC","VLM4997"]
>> for iotype in iotypes:
>>     packagename = "VLMLegacy." + iotype + ".Conditions"
>>     classname = iotype + "_Conditions"
>>     code = "from %s import Conditions as %s" \
>>            % (packagename, classname)
>>     x = compile (code,"foo","exec")
>>     exec x
>>     cls = globals()[classname]
>>     a = cls(rdr,2)
>>     a.test()
>
> Right now your code boils down to:
>
> from VLMLegacy.VLM4997.Conditions import Conditions as VLM4997_Conditions
> from VLMLegacy.VLMPC.Conditions import Conditions as VLMPC_Conditions
> from VLMLegacy.WINGTL.Conditions import Conditions as WINGTL_Conditions
>
> And while you are, of course, allowed to do so, it's not the way you would
> want to approach it in Python.
>
> For each subpackage/module you could add an import and __all__ to
> __init__.py to expose Conditions and then shorten it all to:
>
> import VLMLegacy.VLM4997 as VLM4997
> import VLMLegacy.VLMPC as VLMPC
> import VLMLegacy.WINGTL as WINGTL
>
> So that you can do:
>
> a = VLM4997.Conditions(rdr, 2)
> a.test()

If my proof-of-concept test code were actually all there was, you would be correct. But what I wish to accomplish is that a string supplied from the command line does a run-time load of code that is not even explicitly mentioned in the main body of the system:

    myprogram --type NEWTYPE old.dat

where NEWTYPE did not exist when the above code was written and distributed. Think of the following scenario.

* Customer tells me, we have old data decks which are not quite in any of the supported formats.
* I supply a new subpackage NEWTYPE with the variant code.
* They copy it into the package directory, VLMLegacy/NEWTYPE.
* They type the name of that package as a command line arg as above.
* The code I wrote takes their string and dynamically binds and uses the new code without changing anything else in the code base.

Now I agree it is hackish. I don't really want to eval, I just want to import a module whose name is contained in a variable. I'm not unhappy with the second part, where I use globals()[thestring] to get from a string to a class object; I am indeed bothered that I have to exec to do the import, as I have been unable so far to find a way to make it work dynamically. I'd be perfectly happy if something like this worked:

    from VLMLegacy.CardReader import *

    opts, args = p.parse_args()
    iotype = opts.type
    targetname = "Deck"
    packagename = "VLMLegacy." + iotype + "." + targetname
    classname = iotype + "_" + targetname

    # I only wish something like this worked...
    from packagename import targetname as classname

    cls = globals()[classname]
    file = args[0]
    rdr = CardReader(file, opts.punchtype)
    a = cls(rdr, 2)
    a.read()

but it doesn't. Oh, and it gets worse later. The data I'm reading is potentially straight off ancient Fortran decks with no spaces between numbers. ;-)

--
http://mail.python.org/mailman/listinfo/python-list
Re: import and package confusion
On Thu, Apr 30, 2009 at 02:38:03AM -0400, Dave Angel wrote:
> As Scott David Daniels says, you have two built-in choices, depending on
> Python version. If you can use __import__(), then realize that
>
>     mod = __import__("WINGTL")
>
> will do an import, using a string as the import name. I don't have the
> experience to know how it deals with packages, but I believe I've heard
> it does it the same as well.
>
> One more possibility, if you're only trying to get a single package
> hierarchy at a time, it might be possible to arrange them in such a way
> that the sys.path search order gets you the package you want. Rather
> than the top level being a package, it'd be an ordinary directory, and
> you'd add it to the sys.path variable so that when you import a
> subpackage (which would now really just be a package), you'd get the
> right one.

That would be too unpredictable. But I find the first option very interesting. I was looking at __import__ in the Python book and thought it *might* be able to do it, but I was not sure if it was a solution or an enticing event horizon. I'm using 2.5 btw.

--
http://mail.python.org/mailman/listinfo/python-list
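The wrinkle with packages is that __import__ given a dotted name returns the top-level package, not the leaf module; a quick illustration with a stdlib package:

    mod = __import__("os.path")
    print mod.__name__          # prints 'os', the top-level package
    print hasattr(mod, "path")  # True: the submodule hangs off the package

This is why the next reply reaches into sys.modules to get at the leaf module.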
Re: import and package confusion
On Thu, Apr 30, 2009 at 04:33:57AM -0300, Gabriel Genellina wrote:
> En Thu, 30 Apr 2009 03:04:40 -0300, alex23 escribió:
>> Are you familiar with __import__?
>>
>> iotypes = ["WINGTL","VLMPC","VLM4997"]
>> for iotype in iotypes:
>>     packagename = "VLMLegacy." + iotype + ".Conditions"
>>     classname = iotype + "_Conditions"
>>     module = __import__(packagename)
>>     cls = getattr(module, classname)
>>     # etc
>
> (doesn't work as written, because __import__ returns the top package when
> given a dotted name)
> Replace the last three lines with:
>
>     __import__(packagename)
>     module = sys.modules[packagename]
>     cls = getattr(module, "Conditions")

Thanks. That works marvelously. I just knew there had to be a better way.

--
http://mail.python.org/mailman/listinfo/python-list
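For completeness, __import__ can also hand back the leaf module directly when it is given a non-empty fromlist, which avoids the extra lookup in sys.modules; a sketch using the same names as above:

    packagename = "VLMLegacy." + iotype + ".Conditions"
    module = __import__(packagename, {}, {}, ["Conditions"])
    cls = getattr(module, "Conditions")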
Re: import and package confusion
Gabriel gave me the key to a fine solution, so just to put a bow tie on this thread:

    #!/usr/bin/python
    import sys
    sys.path.extend(['../lib', '../bin'])

    from VLMLegacy.CardReader import CardReader
    rdr = CardReader("../example/B767.dat", "PRINTABLE")

    iotypes = ["WINGTL", "VLMPC", "VLM4997"]
    for iotype in iotypes:
        classname = "Conditions"
        packagename = "VLMLegacy." + iotype + "." + classname
        __import__(packagename)
        module = sys.modules[packagename]
        cls = getattr(module, classname)
        a = cls(rdr, 2)
        a.test()

Works like a champ! It would have taken days for me to find that by trial and error and rtfm and google. So thank you all, even if at times I was rather unclear about what I was trying to accomplish.

Now I can move on to parsing those pesky Fortran card images... There wouldn't happen to be a way to take n contiguous slices from a string (card image), where each slice may be a different length, would there? Fortran you know. No spaces between input fields. :-) I know a way to do it, iterating over a list of slice sizes, perhaps in a list comprehension, but some of the august python personages here no doubt know better ways.

--
http://mail.python.org/mailman/listinfo/python-list
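On the closing question about taking several contiguous, differently-sized slices from a card image: struct.unpack can split a fixed-width record in one call. A small sketch with made-up field widths and data, not taken from the poster's decks:

    import struct

    # three invented 8-column fields, with no separator before the second number
    card = "B767    0.123456-1.50000"
    widths = (8, 8, 8)
    fmt = "".join("%ds" % w for w in widths)    # -> '8s8s8s'
    name, x, y = struct.unpack(fmt, card)
    print name, float(x), float(y)              # B767     0.123456 -1.5

Note that struct.unpack requires the string length to match the total of the widths exactly; a plain slicing loop over the width list is the fallback for ragged records.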
windows installers and license agreement
Is it possible to create a windows installer using distutils that includes a prompt for the user to agree to the terms of the license? Thanks, Darren -- http://mail.python.org/mailman/listinfo/python-list
is it possible to add a property to an instance?
Does anyone know if it is possible to add a property to an instance at runtime? I didn't see anything about it in the standard library's new module, and Google hasn't turned up much either. Thanks, Darren -- http://mail.python.org/mailman/listinfo/python-list
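Properties are descriptors, and descriptors are only honoured when found on the class, not on the instance, so the usual workaround is to move the instance onto a one-off subclass that carries the property. A sketch with invented names:

    class Widget(object):
        pass

    def add_property(obj, name, prop):
        # Rebase obj onto a throwaway subclass carrying the property,
        # so other instances of its original class are unaffected.
        cls = obj.__class__
        obj.__class__ = type(cls.__name__, (cls,), {name: prop})

    w = Widget()
    add_property(w, "area", property(lambda self: self.width * self.height))
    w.width, w.height = 3, 4
    print w.area      # 12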