Self-referencing decorator function parameters
Hello,

Originally I posted this as a bug but it was shot down pretty quickly. I am still mildly curious about this as I'm missing a bit of understanding of Python here. Why is it that the following code snippet:

    def decorator( call ):
        def inner(func):
            def application( *args, **kwargs ):
                call(*args,**kwargs)
                func(*args,**kwargs)
            return application
        return inner

    class DecorateMe:
        @decorator( call=DecorateMe.callMe )
        def youBet( self ):
            pass

        def callMe( self ):
            print "Hello!"

    DecorateMe().youBet()

will not run, giving:

    Traceback (most recent call last):
      File "badpython.py", line 10, in
        class DecorateMe:
      File "badpython.py", line 11, in DecorateMe
        @decorator( call=DecorateMe.callMe )
    NameError: name 'DecorateMe' is not defined

where if you change "call=DecorateMe.callMe" to "call=lambda x: DecorateMe.callMe(x)" everything goes along its merry way. Nesting the call in a lambda seems to allow it to recognize the class definition. Any ideas as to what is going on here (other than ugly code)?

Thank you,
Thomas Dimson
--
http://mail.python.org/mailman/listinfo/python-list
Re: Self-referencing decorator function parameters
On Apr 2, 10:31 am, George Sakkis <[EMAIL PROTECTED]> wrote:
> On Apr 2, 8:30 am, Thomas Dimson <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > Originally I posted this as a bug but it was shot down pretty quickly.
> > I am still mildly curious about this as I'm missing a bit of
> > understanding of Python here. Why is it that the following code
> > snippet:
> >
> >     def decorator( call ):
> >         def inner(func):
> >             def application( *args, **kwargs ):
> >                 call(*args,**kwargs)
> >                 func(*args,**kwargs)
> >             return application
> >         return inner
> >
> >     class DecorateMe:
> >         @decorator( call=DecorateMe.callMe )
> >         def youBet( self ):
> >             pass
> >
> >         def callMe( self ):
> >             print "Hello!"
> >
> >     DecorateMe().youBet()
> >
> > Will not compile, giving:
> >
> >     Traceback (most recent call last):
> >       File "badpython.py", line 10, in
> >         class DecorateMe:
> >       File "badpython.py", line 11, in DecorateMe
> >         @decorator( call=DecorateMe.callMe )
> >     NameError: name 'DecorateMe' is not defined
> >
> > Where if you change the "call=DecorateMe.callMe" to "call=lambda x:
> > DecorateMe.callMe(x)" everything goes along its merry way. Nesting the
> > call in a lambda seems to allow it to recognize the class definition.
> > Any ideas as to what is going on here (other than ugly code)?
>
> The error message is pretty obvious; when the
> "@decorator(call=DecorateMe.callMe)" line is reached, the DecorateMe
> class has not been created yet, let alone the DecorateMe.callMe
> method. One way to make it work (for some definition of "work" ;-) is
> the following:
>
>     # use "new-style" classes unless you have a good reason not to:
>     # class DecorateMe(object):
>     class DecorateMe:
>
>         def callMe(self):
>             print "Hello!"
>
>         @decorator(call=callMe)
>         def youBet(self):
>             pass
>
> The reason this works is that at the point where @decorator is
> executed, callMe is already in the temporary namespace to be used for
> creating the DecorateMe class (although the class itself is not built
> yet).
> A subtle point is that in this case callMe is a plain function, not an
> (unbound) method such as DecorateMe.callMe. This may or may not
> matter, depending on what you do with it in the decorator. Some
> decorators that work fine with plain functions break if they are used
> to decorate methods (or vice versa) so it's good to have this in mind
> when writing or debugging a decorator.
>
> George

Thanks George, that was helpful. I guess my real question is: why does wrapping the call to be "call=lambda x: DecorateMe.callMe(x)" somehow fix the issue with this temporary namespace? It seems strange to me that defining an additional function (through lambda) would allow me to see/add more members to the namespace.
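The answer to the follow-up question is timing: a decorator argument is evaluated immediately, while the class body is still executing and the name DecorateMe is unbound, whereas a lambda's body does not run until the wrapped method is actually called, by which point the class exists at module level. A minimal sketch of this (Python 3 syntax; the prints are replaced by list appends so the call order is observable):

```python
# Why the lambda avoids the NameError: the lambda's body runs only when
# the decorated method is finally called, and by then the name
# DecorateMe is bound at module level.
calls = []

def decorator(call):
    def inner(func):
        def application(*args, **kwargs):
            call(*args, **kwargs)
            func(*args, **kwargs)
        return application
    return inner

class DecorateMe:
    # call=DecorateMe.callMe would raise NameError right here: the
    # class name is only bound after the whole class body finishes.
    @decorator(call=lambda self: DecorateMe.callMe(self))
    def youBet(self):
        calls.append("youBet")

    def callMe(self):
        calls.append("callMe")

DecorateMe().youBet()
# calls is now ["callMe", "youBet"]
```

So the lambda does not make extra names visible; it merely postpones the lookup of DecorateMe until after the class has been created.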
Ctypes and C Infinite Callback Loops
Hello,

I have quite a complex issue that is arising with regards to using ctypes to hook into some legacy code. The legacy code is an infinite loop - I can not touch this. It does some listening, and periodically calls a specific callback function. What I would like to be able to do is spawn a Python thread to handle this infinite loop, and continue on my merry way.

This works to an extent, however if I try to raise the SystemExit exception (or any other one) inside of this thread I get an error message of "AssertionError: cannot join current thread". I assume there is some issue with the global interpreter lock or that you can't exit the infinite loop from above Python. Any suggestions on how I can design this so the thread will be able to issue exits/raise exceptions just like a regular thread? Is there a way of terminating this thread from the python interpreter or ctypes.pythonapi?

I have also tried being sneaky by using a pthread in the C code, but I had issues when I tried to create a new thread state using ctypes.pythonapi (well, I had issues swapping it in when I got to the callback). If this is the best solution, how do I create/swap in the thread state from ctypes?
For some cooked up sample code that simulates this:

main.c (main.o -> main.so):

    #include <unistd.h>

    void loop( void (*callback)() )
    {
        while( 1 ) {
            callback();
            sleep(1);
        }
    }

    void testLoop( void (*callback)() )
    {
        loop( callback );
    }

test.py:

    import threading,ctypes,time,sys,os

    soPath = os.path.join( "/home/tdimson/ctypes/main.so" )

    class callLoop( threading.Thread ):
        def callback( self ):
            sys.exit()

        def run( self ):
            ctypes.cdll.LoadLibrary( soPath )
            mainLib = ctypes.CDLL( soPath )
            _callback = ctypes.CFUNCTYPE( None )( self.callback )
            mainLib.testLoop( _callback )

    loopThread = callLoop()
    loopThread.start()

    while 1:
        print "Not blocking"
        time.sleep(10)

Then I execute "python test.py" and get:

    Not blocking
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "/usr/lib/python2.4/atexit.py", line 24, in _run_exitfuncs
        func(*targs, **kargs)
      File "/usr/lib/python2.4/threading.py", line 634, in __exitfunc
        t.join()
      File "/usr/lib/python2.4/threading.py", line 532, in join
        assert self is not currentThread(), "cannot join current thread"
    AssertionError: cannot join current thread
    Error in sys.exitfunc:
    Traceback (most recent call last):
      File "/usr/lib/python2.4/atexit.py", line 24, in _run_exitfuncs
        func(*targs, **kargs)
      File "/usr/lib/python2.4/threading.py", line 634, in __exitfunc
        t.join()
      File "/usr/lib/python2.4/threading.py", line 532, in join
        assert self is not currentThread(), "cannot join current thread"
    AssertionError: cannot join current thread

Thanks for even reading this much :)

-Thomas Dimson
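A related, pure-Python observation (a Python 3 sketch, no C involved): a SystemExit raised inside a worker thread is caught by the threading machinery and ends only that thread; the interpreter as a whole cannot be shut down from a sub-thread this way, which is the underlying limitation the traceback above runs into.

```python
import sys
import threading

result = []

def target():
    sys.exit()                  # raises SystemExit in this thread only
    result.append("unreached")  # never executed

t = threading.Thread(target=target)
t.start()
t.join()                        # joining from the main thread works fine
result.append("main still running")
# result == ["main still running"]: the program did not exit
```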
Re: Ctypes and C Infinite Callback Loops
On Apr 9, 1:24 am, Dennis Lee Bieber <[EMAIL PROTECTED]> wrote:
> On Tue, 8 Apr 2008 16:49:27 -0700 (PDT), Thomas Dimson
> <[EMAIL PROTECTED]> declaimed the following in comp.lang.python:
>
> > I assume there is some issue with the global interpreter lock or that
> > you can't exit the infinite loop from above Python. Any suggestions on
> > how I can design this so the thread will be able to issue exits/raise
> > exceptions just like a regular thread? Is there a way of terminating
> > this thread from the python interpreter or ctypes.pythonapi?
>
> No... as I recall, you can't /EXIT/ Python from a sub-thread...
> Which is what sys.exit() or whatever is trying to do -- shut down the
> entire program, not just the thread. The error you get indicates that
> the thread doing the shutdown wants to wait for the sub-thread to finish
> -- but /it/ IS the sub-thread. Even console interrupts have to be
> delivered to the main program.
>
> The only safe way to terminate a thread is to be able to code it
> such that /it/ responds to an externally set value (a boolean, read a
> message off a Queue, etc.) and for IT to then exit. Based upon the
> sample you showed, that would require the C main loop to be checking for
> a shutdown signal... If the API to that main loop library doesn't
> include a shutdown capability I'd suggest it is a less than complete
> library... And the only thing I'd suggest is not using a Python thread,
> but instead spawning a separate process that somehow communicates to the
> parent process -- and which can be forceably killed using OS specific
> capabilities... "kill -9 pid"
>
> --
> Wulfraed    Dennis Lee Bieber    KD6MOG
> [EMAIL PROTECTED]    [EMAIL PROTECTED]
> HTTP://wlfraed.home.netcom.com/
> (Bestiaria Support Staff: [EMAIL PROTECTED])
> HTTP://www.bestiaria.com/

Thanks for the response, it put me in the right direction (I didn't realize there was no way of exiting the interpreter directly from a non-main thread).
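The cooperative-shutdown pattern Dennis describes can be sketched with a threading.Event standing in for the shutdown flag (a Python 3 sketch; in the real situation the C loop itself would have to poll such a flag, which is exactly what this API lacks):

```python
import threading

stop = threading.Event()
ticks = []

def worker():
    # Stand-in for the legacy loop: do one unit of work per iteration,
    # then check whether shutdown has been requested.
    while not stop.is_set():
        ticks.append("tick")
        stop.wait(0.01)   # sleep, but wake early if the flag is set

t = threading.Thread(target=worker)
t.start()
stop.set()                # main thread requests shutdown...
t.join(timeout=5)         # ...and the worker exits on its own
```

The key point is that the worker terminates itself; nothing tries to kill it from outside, so there is no join-yourself problem.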
If anyone ever has the same problem, the solution I ended up using went like this:

I created a wrapper around the infinite loop call that had a setjmp in it, exiting if setjmp was non-zero. Inside each callback function, I had a try/except statement that caught all exceptions. If it had an exception, it would set a thread-specific exception variable to sys.exc_info() and then call a C function that did a longjmp.

The thread would first call the wrapper to the infinite loop. If the wrapper returns (because of a longjmp), it would check the thread-specific exception variable for a non-None value and raise the very same exception (with the same traceback) if it found it.

A fairly large hack, but it seemed to do the job. Thanks again.
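The exception-ferrying half of that hack can be sketched in pure Python (a Python 3 sketch; the setjmp/longjmp escape from the C loop is elided, and the names here are illustrative, not the actual code): the callback stashes sys.exc_info() instead of letting the exception cross the C frames, and the thread re-raises it after the loop returns.

```python
import sys

pending_exc = None   # the "thread-specific exception variable"

def callback():
    # Every callback catches everything: exceptions must not propagate
    # through the C frames of the infinite loop.
    global pending_exc
    try:
        raise RuntimeError("stop the loop")   # simulated failure
    except Exception:
        pending_exc = sys.exc_info()          # ...then longjmp out (elided)

callback()   # in the real code, the C loop invokes this

# Back in the thread, after the longjmp has unwound the wrapper:
caught = None
if pending_exc is not None:
    try:
        # Re-raise the very same exception with its original traceback.
        raise pending_exc[1].with_traceback(pending_exc[2])
    except RuntimeError as e:
        caught = str(e)
```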
Re: subprocess.Popen() output to logging.StreamHandler()
On Apr 10, 8:11 am, "sven _" <[EMAIL PROTECTED]> wrote:
> Version: Python 2.5.1 (r251:54863, Mar 7 2008, 04:10:12)
>
> My goal is to have stdout and stderr written to a logging handler.
> This code does not work:
>
>     # START
>     import logging, subprocess
>     ch = logging.StreamHandler()
>     ch.setLevel(logging.DEBUG)
>     subprocess.call(['ls', '-la'], 0, None, None, ch, ch)
>     # END
>
>     Traceback (most recent call last):
>       File "log.py", line 5, in
>         subprocess.call(['ls', '-la'], 0, None, None, ch, ch)
>       File "/usr/lib/python2.5/subprocess.py", line 443, in call
>         return Popen(*popenargs, **kwargs).wait()
>       File "/usr/lib/python2.5/subprocess.py", line 586, in __init__
>         errread, errwrite) = self._get_handles(stdin, stdout, stderr)
>       File "/usr/lib/python2.5/subprocess.py", line 941, in _get_handles
>         c2pwrite = stdout.fileno()
>     AttributeError: StreamHandler instance has no attribute 'fileno'
>
> This is because subprocess.Popen() expects file descriptors to write
> to, and logging.StreamHandler() does not supply it. The StreamHandler
> could supply its own stdout file descriptor, but then Popen() would
> write directly to that file, bypassing all the logging fluff.
>
> A possible solution would be to make a named pipe (os.mkfifo()), have
> Popen() write to that, and then have some horrendous hack run select()
> or similar on the fifo to read from it and finally pass it to
> StreamHandler.
>
> Are there better solutions?
>
> sven

What is wrong with doing something like:

    import logging, subprocess

    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)

    s = subprocess.Popen( ['ls','-la'], stdout=subprocess.PIPE )
    while 1:
        ch.info( s.stdout.readline() )
        if s.poll() == None:
            break

Perhaps not the most efficient or clean solution, but that is how I usually do it (note: I didn't test the above code).

-Thomas Dimson
Re: subprocess.Popen() output to logging.StreamHandler()
On Apr 10, 3:05 pm, svensven <[EMAIL PROTECTED]> wrote:
> Vinay Sajip wrote:
> > On Apr 10, 1:11 pm, "sven _" <[EMAIL PROTECTED]> wrote:
> > > My goal is to have stdout and stderr written to a logging handler.
> >
> > Thomas was almost right, but not quite - you can't call info on a
> > Handler instance, only on a Logger instance. The following script:
>
> Yes, but that was easily fixed. Still there seemed to be a problem
> there with the .poll(), since it would think the process ended while
> it was actually running. The result was that only some of the command
> output was shown.
>
> >     import logging
> >     import subprocess
> >
> >     logging.basicConfig(level=logging.INFO) # will log to stderr of this script
> >
> >     s = subprocess.Popen( ['ls','-la'], stdout=subprocess.PIPE )
> >     while 1:
> >         line = s.stdout.readline()
> >         exitcode = s.poll()
> >         if (not line) and (exitcode is not None):
> >             break
> >         line = line[:-1]
> >         logging.info("%s", line)
>
> This works perfectly, as far as I can tell. You seem to use another
> conditional, though.
>
> I'll take a closer look at this tomorrow. Thanks for the clean
> solution, Vinay.
>
> sven

I think what I actually meant was:

    s = subprocess.Popen( ['ls','-la'], stdout=subprocess.PIPE )
    while 1:
        line = s.stdout.readline()
        if not line:
            break
        logging.info( line )

The problem with s.poll() is that the process has probably ended before you have gotten all the text out of the buffer. Readline will return a falsy value when the process ends. Anyway, this post has a lot of responses so I'm sure _something_ works :)
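That final loop, made self-contained (a Python 3 sketch, using a child Python process in place of ls so it runs anywhere): readline() returns an empty string only at EOF, i.e. once the child has closed its end of the pipe, so the poll() race cannot drop output.

```python
import logging
import subprocess
import sys

logging.basicConfig(level=logging.INFO)

# Child that prints two lines; a stand-in for ['ls', '-la'].
p = subprocess.Popen([sys.executable, "-c", "print('a'); print('b')"],
                     stdout=subprocess.PIPE, text=True)

lines = []
while True:
    line = p.stdout.readline()
    if not line:        # "" only at EOF: the child is done writing
        break
    lines.append(line.rstrip("\n"))
    logging.info("%s", line.rstrip("\n"))

p.wait()
# lines == ["a", "b"]
```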