PYO versus PYC Behavior

2009-12-24 Thread Boris Arloff
All the Python docs and descriptions indicate that optimization does not do 
much: a single -O removes assert statements, and -OO additionally strips 
docstrings, so the *.pyo byte-compiled file is left with essentially pure 
executable code.  I am experiencing different behavior than described.
 
I am running Python 2.6.4 and have source code which I pre-compile either into 
pyc or pyo files depending on the optimization switch selected.  The pyo 
version fails to run with the main program module failing to import any other 
modules, such as failing on the "import os" statement (first line 
encountered).  However, the pyc version succeeds and runs correctly.  This is 
with the same code modules, same python VM and same machine.
 
One item I should note is that the Python distribution I am using is not fully 
installed with paths set by the installer.  I unpack the Python tar and compile 
it (i.e. ran configure and make; not make install).  Then I distribute this 
Python VM, with its Lib and Modules dirs, onto my target machine (a Linux 
distro) and collocate my pyc or pyo modules at the root with python.
 
To further experiment, I have also compiled all the Python libraries to either 
pyc or pyo (e.g. os.pyc or os.pyo in the Modules dir).  If I then run 
interactive python, I experience the same effect as when executing from the 
command line: "import os" fails if I distribute with the Python modules 
compiled into pyo, but it succeeds if I distribute pyc modules.
 
This seems to be contrary to the documentation.  If the only difference were 
the removal of docstrings between pyc and pyo, then both versions should behave 
exactly the same way.  There must be some additional modification with a -OO 
compile.
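For anyone checking this on a current interpreter: the documented difference 
between the two optimization levels can be reproduced in-process with the 
optimize argument of compile() (Python 3 shown; the function name here is 
purely illustrative, not from the original code):

```python
# Compile the same source at optimization level 0 (plain) and 2 (like -OO).
source = '''
def greet():
    "docstring that -OO strips"
    assert True, "assert that -O already strips"
    return "hi"
'''

ns0, ns2 = {}, {}
exec(compile(source, '<demo>', 'exec', optimize=0), ns0)  # like plain python
exec(compile(source, '<demo>', 'exec', optimize=2), ns2)  # like python -OO

print(ns0['greet'].__doc__)   # docstring survives at level 0
print(ns2['greet'].__doc__)   # None: level 2 strips docstrings
```

Both compiled functions still run identically; only asserts and docstrings 
differ between the levels.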
 
Can anyone comment, please?
 
Thanks,
Boris
 
 


  -- 
http://mail.python.org/mailman/listinfo/python-list


python 2.x and running shell command

2009-12-24 Thread Boris Arloff
>On Dec 23, 5:22 pm, Sean DiZazzo  wrote:
> On Dec 23, 1:57 pm, tekion  wrote:
>
>
>
> > All,
> > some of the servers I have run python 2.2, which is a drag because I
> > can't use the subprocess module.  My options that I know of is the popen2
> > module.  However, it seems it does not have io blocking
> > capabilities.  So every time I run a command I have to open and close a
> > file handle.  I have a command that requires interactive interaction.  I
> > want to be able to perform the following actions:
> > fout, fin = popen2.popen2(cmd) # open up interactive session
> > fin.write(cmd2);
> > block (input)
> > fout.readline()
> > block output
> > fin.write(cmd2)
> > and so on...
>
> > is this possible with popen2 or do I have to use pexpect for the job?
> > Thanks.
>
> I've never done that with subprocess, but maybe this will
> help: http://www.lysator.liu.se/~astrand/popen5/
>
> ~Sean
>Sean, popen5 is the old name for subprocess.
 
If using a Linux platform, then look into setting up pipes.  This will give you 
a communications channel through which you can interact with your processes.  I 
have used this method many times.
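On a modern Python the same blocking write/read exchange can be sketched with 
subprocess and explicit pipes (the upper-casing child program here is purely 
illustrative):

```python
import subprocess
import sys

# Child process: echo each stdin line back upper-cased.
child_src = ("import sys\n"
             "for line in sys.stdin:\n"
             "    sys.stdout.write(line.upper())\n"
             "    sys.stdout.flush()\n")

proc = subprocess.Popen([sys.executable, "-u", "-c", child_src],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)

proc.stdin.write("hello\n")        # send one command
proc.stdin.flush()
reply = proc.stdout.readline()     # blocks until the child answers
proc.stdin.close()                 # EOF lets the child exit
proc.wait()
print(reply.strip())               # HELLO
```

The same write/flush/readline loop can be repeated for each interactive 
command before closing stdin.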
 




Re: The rap against "while True:" loops

2009-10-10 Thread Boris Arloff
I agree there is no rap against "while True" loops.  As an example, these are 
very useful when receiving continuous data over a queue, pipe, socket, or any 
other connection.  You set it to block, receive data, process the data, and 
finally loop around to wait for the next data segment.  Of course, you should 
protect against problems with try-except wrappers and by handling exit 
conditions (e.g. "break") when appropriate.
 
Is the problem with the specific syntax "while True:" or is it with having 
infinite loop constructs at all?
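As a minimal sketch of that receive/process pattern (using a sentinel value as 
one common way to express the exit condition):

```python
import queue

q = queue.Queue()
for item in ("alpha", "beta", None):   # None acts as the shutdown sentinel
    q.put(item)

results = []
while True:
    item = q.get()          # blocks until the next data segment arrives
    if item is None:        # explicit, well-defined exit condition
        break
    results.append(item.upper())

print(results)              # ['ALPHA', 'BETA']
```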

--- On Sat, 10/10/09, Stephen Hansen  wrote:


From: Stephen Hansen 
Subject: Re: The rap against "while True:" loops
To: python-list@python.org
Date: Saturday, October 10, 2009, 8:30 PM




I use "while True"-loops often, and intend to continue doing this
"while True", but I'm curious to know: how widespread is the
injunction against such loops?


The injunction is nonexistent (save perhaps in people coming from another 
language who insist that Python just /must/ have a "proper" do-while 
construct). "while True" with an exit-test at the end is idiomatic, how you 
spell "do-while" in Python. There's nothing at all wrong with it, and no real 
Python programmer will ever say don't-do-it.


Okay, some people prefer to spell it 'while 1', but it's the same difference.


Yeah, you have to be certain the exit condition is there and properly formed so 
it can exit (unless you're using a generator which never empties, of course). 
But you have to make sure you have a proper exit condition on any looping 
construct anyway.
 
No idea where your charge came across the advice, but it's nonsense.




 Has it reached the status of "best
practice"?



It's simply the correct way to spell a do-while or an intentionally infinite 
loop in Python; always has been.


HTH,


--S







SocketServer

2009-10-12 Thread Boris Arloff
This may not be the correct list for this issue, if so I would appreciate if 
anyone could forward it to the correct list.
 
I had experienced a number of problems with the standard library SocketServer 
when implementing a TCP forking server under Python 2.6.  I fixed every issue, 
including some timing problems (e.g. the socket request being closed too fast, 
before the last packet was grabbed), by overriding or extending methods as 
needed.
 
Nonetheless, the one issue which may require wider attention concerns method 
collect_children() of class ForkingMixIn (used by ForkingTCPServer).  This 
method makes the following os library call:
 
pid, result = os.waitpid(child, options=0)
 
Under some conditions the method breaks on this line with a TypeError 
indicating that an unexpected keyword argument "options" was provided 
(os.waitpid is a built-in function and does not accept keyword arguments).  In 
a continuous run reading thousands of packets from multiple client connections, 
this line seems to fail at times, but not always.  Unfortunately, I did not 
record the specific conditions that caused this "erroneous" error message, 
which happened unpredictably multiple times.
 
To correct the problem, the line of code may be changed by removing the 
keyword:
 
pid, result = os.waitpid(child, 0)
 
This never fails.
 
Nonetheless, I believe that method collect_children() is too cumbersome as 
written, and I did override it with the following simpler strategy.  The 
developers of SocketServer may want to consider it as a replacement for the 
current collect_children() code.
 
    def collect_children(self):
        '''Collect Children - overrides the ForkingTCPServer collect_children
        method.  The method provided in the SocketServer module does not work
        properly for the intended purpose.  This implementation is a complete
        replacement.

        Each new child process id (pid) is added to the active_children list
        by method process_request().  Each time a new connection is created by
        that method, a call is then made here to clean up inactive processes.
        Returns: None
        '''
        child = None
        try:
            # Iterate over a copy so children can be removed while looping.
            for child in list(self.active_children or []):
                try:
                    pid, status = os.waitpid(child, os.WNOHANG)  # non-blocking
                    if pid == 0:                # child has not exited yet
                        time.sleep(0.5)         # give it a moment to finish
                        os.kill(child, signal.SIGKILL)  # make sure it is dead
                    self.active_children.remove(child)  # reaped or killed
                except OSError, err:
                    if err.errno == errno.ECHILD:   # child already gone
                        self.active_children.remove(child)
                    else:
                        msg = '\tOS error attempting to terminate child process {0}.'
                        self.errlog.warning(msg.format(child))
                except:
                    msg = '\tGeneral error attempting to terminate child process {0}.'
                    self.errlog.exception(msg.format(child))
        except:
            msg = '\tGeneral error while attempting to terminate child process {0}.'
            self.errlog.exception(msg.format(child))

Things to note are:
1. os.WNOHANG is used instead of 0 as the options value to os.waitpid, so the 
poll never blocks.
2. A returned pid of 0 means the child has not yet exited.
3. For such still-running children, an os.kill is attempted after a 
time.sleep(0.5); this allows dependent processes to complete their job before 
the request is closed down entirely.
4. OS errors are reported as warnings or exceptions, except errno.ECHILD, 
which simply means the child no longer exists.
This is more succinct code and does the job.  At least it does for me.
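The os.WNOHANG behavior can be checked in isolation (Python 3, POSIX only; 
this sketch forks a throwaway child rather than using SocketServer): while the 
child is still running, os.waitpid returns a pid of 0, and once it has exited 
the call reaps it and returns the real pid.

```python
import os
import time

pid = os.fork()
if pid == 0:              # child: linger briefly, then exit cleanly
    time.sleep(0.2)
    os._exit(0)

# Parent: non-blocking poll while the child is still running returns pid 0.
first, _ = os.waitpid(pid, os.WNOHANG)

time.sleep(0.5)           # let the child finish
# Now the child has exited, so the poll reaps it and returns its pid.
second, _ = os.waitpid(pid, os.WNOHANG)

print(first, second == pid)
```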
 
Thanks,
Boris
 




Metaclasses Demystified

2009-06-10 Thread Boris Arloff
Hi,

I have been studying python metaclasses for a few days now and I believe that 
slowly but surely I am grasping the subject.  The best article I have found on 
this is "Metaclass Demystified", by J. LaCour, Python Magazine, July 2008 
(http://cleverdevil.org/computing/78/).

I replicated the article's Enforcer example in Python 3.0, and while I 
understand its functionality, I have trouble understanding the 
behind-the-scenes behavior.  I created a file containing classes Field, 
EnforcerMeta, Enforcer and Person, in this order.  The file is then imported 
with the Python IDE.  To save space I do not replicate the code here, since it 
is available at the above link.  The following describes the events when the 
file is imported; I hope that someone may offer clarifications on my 
comments/questions:

1.  First, EnforcerMeta's __init__ method executes at import, and its 
namespace (ns) shows it contains the '__module__' and '__setattr__' 
attributes.  I did not expect __init__ to execute at this point, since there 
has not been an instantiation yet.  Does this happen because we inherit from 
type and the Python engine instantiates metaclasses?

2. Second, the for loop of EnforcerMeta checks whether the two attributes are 
instances of class Field in order to add them to the _fields dict.  Since 
neither is a Field instance, they are not added to the dict.  No problem here; 
this is expected.

3.  Then,  class Person declaration is encountered with two class variables 
'name' and 'age' which are defined as Field(str) and Field(int), respectively.  
Hence, we have two instances of class Field each with a corresponding instance 
ftype attribute.  No problem with this either, as expected.

4.  The next events are somewhat puzzling, however.  Class EnforcerMeta's 
__init__ executes again, with an ns containing the attributes 'age', 'name', 
and '__module__'.  The for loop executes, and this time 'age' and 'name' are 
added to the _fields dict, while '__module__' understandably is not.  However,
 
4.a.  What happened to attribute '__setattr__'?  Why is it not present anymore 
in the ns?

4.b.  What kind of magic makes EnforcerMeta instantiate this second time?  I 
did not expect this to happen at all.  I can try to rationalize its 
instantiation in step 1 above, but I cannot come up with any such explanation 
for this second instantiation.  Is it because Enforcer does this by inheriting 
the metaclass, which in turn is inherited by class Person?
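For what it's worth, the double call can be reproduced with a bare-bones 
metaclass (Python 3 syntax; the class names here are made up, not from the 
article): the metaclass's __init__ runs once per class statement whose 
metaclass it is, including every subclass.

```python
class Meta(type):
    created = []                      # record every class this metaclass builds

    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        Meta.created.append(name)

class Enforcer(metaclass=Meta):       # first call: the base class itself
    pass

class Person(Enforcer):               # second call: metaclass is inherited
    name = str
    age = int

print(Meta.created)                   # ['Enforcer', 'Person']
```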

I tested the code by creating an instance of class Person and then assigning 
values to its attributes name and age.  The code works correctly as per the 
article's example.

Any clarifications to the above questions will be greatly appreciated.  I am 
trying to get versed in the black magic of metaclasses and hope to use them in 
personal academic research, whereby I will try to create class objects on the 
fly at runtime out of nothing, or almost nothing.

I can attach my code if necessary, but as indicated it is identical to LaCour's 
in the article with the necessary syntax changes for python 3.0.

Thanks 
Boris






Trying to get ABC to work

2009-08-03 Thread Boris Arloff
Hi,
 
Looking for ideas on getting Abstract Base Classes to work as intended within a 
metaclass.
 
I was wondering if I could use an abc method within a metaclass to force a 
reimplementation when a class is created from the metaclass.  It seems I 
cannot do so.  I implemented the following test case:
 
import abc

class MetaExample(type):
    def __init__(cls, name, bases, ns):
        setattr(cls, 'cls_meth', cls.cls_meth)  # cls method as instance method
        setattr(cls, 'cls_abc', cls.cls_abc)    # abc cls method as instance method

    def cls_meth(cls):
        print('Class method defined stub')

    @abc.abstractmethod
    def cls_abc(cls):
        try:
            print('Class-Abstract method defined stub')
        except NotImplementedError, err:
            print('Must implement cls_abc.')
        except:
            print('General exception at cls_abc method.')
 
Then I create class MyKlass from the metaclass and instantiate it as myklass:
 
class MyKlass(object):
    __metaclass__ = MetaExample

myklass = MyKlass()
myklass.cls_meth()  # --> prints "Class method defined stub"
myklass.cls_abc()   # --> prints "Class-Abstract method defined stub"
 
I was hoping for myklass.cls_abc() to print "Must implement cls_abc."
 
However, this makes sense, since MyKlass gets the cls_abc method implemented 
from the metaclass, so there will never be an abstraction of this method.
 
Any ideas on how to get this done?  Any way I could define an abstract method 
within a metaclass and have it behave with abstraction when the class is 
created off the metaclass?
 
In other words, I want to force an implementation of the cls_abc() method when 
"class MyKlass(object): __metaclass__ = MetaExample" is declared, or else get 
a NotImplementedError exception.
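One sketch of that kind of declaration-time enforcement (Python 3 syntax; 
RequireMeta and its required-method list are made-up names, not a standard 
API): instead of relying on abc, have the metaclass itself reject any subclass 
whose namespace lacks the required method.

```python
class RequireMeta(type):
    '''Reject any class that fails to define the required methods itself.'''
    required = ('cls_abc',)

    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        if bases:  # only check subclasses, not the root class itself
            missing = [m for m in RequireMeta.required if m not in ns]
            if missing:
                raise TypeError('%s must implement: %s'
                                % (name, ', '.join(missing)))

class Root(metaclass=RequireMeta):
    pass

class Good(Root):
    def cls_abc(self):
        return 'ok'

try:
    class Bad(Root):       # defines no cls_abc: rejected at declaration time
        pass
    declaration_failed = False
except TypeError:
    declaration_failed = True

print(declaration_failed)  # True
```

Unlike the abc approach, the error fires when the class statement executes, 
not when an instance is created.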
 
Thanks,
Boris Arloff
  

