Is this a bug?

2013-07-15 Thread Jack Bates

Hello,

Is the following code supposed to raise an UnboundLocalError?
Currently it assigns the value 'bar' to the attribute baz.foo

foo = 'bar'
class baz:
  foo = foo
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is this a bug?

2013-07-16 Thread Jack Bates

On 15/07/13 09:13 AM, Joshua Landau wrote:

On 15 July 2013 16:50, Jack Bates  wrote:

Hello,

Is the following code supposed to raise an UnboundLocalError?
Currently it assigns the value 'bar' to the attribute baz.foo

foo = 'bar'
class baz:
  foo = foo


If so, then no. Assignments inside class bodies are special-cased in
Python. This is because all assignments refer to properties of "self"
on the LHS but external things too on the RHS. This is why you can do
"def x(): ..." instead of "def self.x(): ..." or some other weird
thing. There's also some extra special stuff that goes on.

In order to make this an UnboundLocalError, lots of dramatic and
unhelpful changes would have to take place, hence the current
behaviour. The current behaviour is useful, too.


Ah, thank you Chris Angelico for explaining how this is like what 
happens with default arguments to a function and Joshua Landau for 
pointing out how assignments inside class bodies refer to properties of 
"self" on the LHS. It makes sense now. Only I'm struggling to find where 
the behavior is defined in the language reference. Can someone please 
help point me to where in the language reference this is discussed? I've 
been hunting through the section on naming and binding:



http://docs.python.org/3/reference/executionmodel.html#naming-and-binding
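For reference, the behaviour in question can be demonstrated directly. A minimal sketch (the function comparison is my addition, not from the thread):

```python
# Class bodies use plain name lookup: the RHS of "foo = foo" falls
# back to the global scope, while the LHS binds "foo" in the class
# namespace.
foo = 'bar'

class baz:
  foo = foo  # RHS reads the global; LHS creates the class attribute

assert baz.foo == 'bar'

# Inside a *function*, the same line does raise UnboundLocalError,
# because assignment makes "foo" local to the whole function body.
def f():
  foo = foo  # UnboundLocalError when the RHS is evaluated

try:
  f()
except UnboundLocalError:
  pass
else:
  raise AssertionError('expected UnboundLocalError')
```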


Identical descriptor value, without leaking memory?

2011-07-29 Thread Jack Bates
How can you get a descriptor to return an identical value, each time
it's called with the same "instance" - without leaking memory?


#!/usr/bin/env python

class descriptor:
  class __metaclass__(type):
    def __get__(self, instance, owner):
      ...

class owner:
  descriptor = descriptor

instance = owner()


I want ">>> instance.descriptor is instance.descriptor" to evaluate True

I was thinking of "caching" descriptor return values? Like a dictionary
of instances and descriptor return values? I was thinking of using
weakref.WeakKeyDictionary to avoid leaking memory? But I can't guarantee
that the descriptor return value won't indirectly reference the
instance, in which case weakref.WeakKeyDictionary *does* leak memory?

Does anyone know how to get a descriptor to return an identical value,
each time it's called with the same "instance" - without leaking memory?

Much thanks!
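One workaround (a sketch, not from the thread; the class names are illustrative): a non-data descriptor that caches its value in the instance's own __dict__, so the cache lives and dies with the instance and cannot leak even when the value refers back to the instance.

```python
class cached:
  def __init__(self, factory):
    self.factory = factory
    self.name = factory.__name__

  # Non-data descriptor (no __set__): an entry in the instance
  # __dict__ shadows it after the first access
  def __get__(self, instance, owner):
    if instance is None:
      return self
    value = self.factory(instance)
    instance.__dict__[self.name] = value
    return value

class owner:
  @cached
  def descriptor(self):
    return [self]  # value may reference the instance; still no leak

instance = owner()
assert instance.descriptor is instance.descriptor
```

Because the value is stored on the instance rather than in a side table, no weakref bookkeeping is needed at all.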


Replace all references to one object with references to other

2011-08-05 Thread Jack Bates
I have two objects, and I want to replace all references to the first
object - everywhere - with references to the second object. What can I
try?


Measure the amount of memory used?

2011-08-18 Thread Jack Bates
I wrote a content filter for Postfix with Python,
https://github.com/jablko/cookie

It should get started once, and hopefully run for a long time - so I'm
interested in how it uses memory:

 1) How does the amount of memory used change as it runs?

 2) How does the amount of memory used change as I continue to hack on
it, and change the code?

My naive thought was that I'd periodically append to a file, the virtual
memory size from /proc/[pid]/stat and a timestamp. From this I could
make a graph of the amount of memory used as my content filter runs, and
I could compare two graphs to get a clue whether this amount changed as
I continue to hack

 - but some Googling quickly revealed that measuring memory is actually
quite complicated? Neither the virtual memory size nor the "resident set
size" accurately measure the amount of memory used by a process

Has anyone else measured the memory used by a Python program? How did
you do it?
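For what it's worth, a minimal sketch of the periodic-logging idea (Linux-only; the VmSize/VmRSS field names come from /proc/[pid]/status, and the caveats about what those numbers actually measure still apply):

```python
import time

def parse_vm_fields(status_text):
  """Extract VmSize/VmRSS (in kB) from /proc/[pid]/status content."""
  fields = {}
  for line in status_text.splitlines():
    if line.startswith(('VmSize:', 'VmRSS:')):
      key, value = line.split(':', 1)
      fields[key] = int(value.split()[0])  # value looks like "  1234 kB"
  return fields

# Append a timestamped sample; call this periodically from the filter
try:
  with open('/proc/self/status') as f:
    sample = parse_vm_fields(f.read())
  print(time.time(), sample)
except IOError:
  pass  # /proc not available on this platform
```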


Reliably call code after object no longer exists or is "unreachable"?

2011-04-27 Thread Jack Bates
In Python, how can you reliably call code - but wait until an object no
longer exists or is "unreachable"?

I want to ensure that some code is called (excluding some exotic
situations like when the program is killed by a signal not handled by
Python) but can't call it immediately. I want to wait until there are no
references to an object - or the only references to the object are from
unreachable reference cycles


#!/usr/bin/env python

class Goodbye:
  def __del__(self):
    print 'Goodbye, world!'

ref = Goodbye()


$ ./goodbye
Goodbye, world!
$ 


Python's __del__ or destructor method works (above) - but only in the
absence of reference cycles (below). An object, with a __del__ method,
in a reference cycle, causes all objects in the cycle to be
"uncollectable". This can cause memory leaks and because the object is
never collected, its __del__ method is never called


> Circular references which are garbage are detected when the optional
> cycle detector is enabled (it's on by default), but can only be
> cleaned up if there are no Python-level __del__() methods involved.


#!/usr/bin/env python

class Goodbye:
  def __del__(self):
    print 'Goodbye, world!'

class Cycle:
  def __init__(self, cycle):
    self.next = cycle
    cycle.next = self

Cycle(Goodbye())


$ ./cycle
$ 


In PEP 342 I read that an object, with a __del__ method, referenced by a
cycle but not itself participating in the cycle, doesn't cause objects
to be uncollectable. If the cycle is "collectable" then when it's
eventually collected by the garbage collector, the __del__ method is
called


> If the generator object participates in a cycle, g.__del__() may not
> be called. This is the behavior of CPython's current garbage
> collector. The reason for the restriction is that the GC code needs to
> "break" a cycle at an arbitrary point in order to collect it, and from
> then on no Python code should be allowed to see the objects that
> formed the cycle, as they may be in an invalid state. Objects "hanging
> off" a cycle are not subject to this restriction.


#!/usr/bin/env python

import sys

class Destruct:
  def __init__(self, callback):
    self.__del__ = callback

class Goodbye:
  def __init__(self):
    self.destruct = Destruct(lambda: sys.stdout.write('Goodbye, world!\n'))

class Cycle:
  def __init__(self, cycle):
    self.next = cycle
    cycle.next = self

Cycle(Goodbye())


$ ./dangle
Goodbye, world!
$ 


However it's *extremely* tricky to ensure that the object with a __del__
method doesn't participate in a cycle, e.g. in the example below, the
__del__ method is never called - I suspect because the object with a
__del__ method is reachable from the global scope, and this forms a
cycle with a frame's f_globals reference? "storing a generator object in
a global variable creates a cycle via the generator frame's f_globals
pointer"


#!/usr/bin/env python

import sys

class Destruct:
  def __init__(self, callback):
    self.__del__ = callback

class Goodbye:
  def __init__(self):
    self.destruct = Destruct(lambda: sys.stdout.write('Goodbye, world!\n'))

class Cycle:
  def __init__(self, cycle):
    self.next = cycle
    cycle.next = self

ref = Cycle(Goodbye())


$ ./global
$ 


Faced with the real potential for reference cycles, how can you reliably
call code - but wait until an object no longer exists or is
"unreachable"?
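For comparison, one reliable approach (my sketch, not from the thread, written for modern Python) is a weakref callback: the callback holds no strong reference to the object, so it fires even when the object is freed as part of a collected cycle, provided the weakref itself is kept alive outside the cycle.

```python
import gc
import weakref

messages = []

class Cycle:
  def __init__(self, cycle):
    self.next = cycle
    cycle.next = self

class Goodbye:
  pass

obj = Goodbye()
# Keep the weakref alive outside the cycle, or the callback is dropped
ref = weakref.ref(obj, lambda r: messages.append('Goodbye, world!'))
Cycle(obj)  # obj now participates in a reference cycle
del obj
gc.collect()  # the cycle detector frees the cycle; the callback runs
assert messages == ['Goodbye, world!']
```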


"raise (type, value, traceback)" and "raise type, value, traceback"

2011-05-02 Thread Jack Bates
Hi, anyone know why these two statements aren't equivalent?

raise (type, value, traceback)

raise type, value, traceback
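As I understand it (not confirmed in this thread): the parenthesised version is a single expression, a tuple, and Python 2's raise unwraps a tuple and raises only its first element, discarding the value and traceback; the comma form is the dedicated three-argument syntax. A Python 3 sketch of the equivalent of the three-argument form:

```python
import sys

def reraise():
  """Python 3 spelling of Python 2's "raise type, value, traceback"."""
  try:
    raise ValueError('original')
  except ValueError:
    etype, value, tb = sys.exc_info()
  # Python 2: raise etype, value, tb
  raise value.with_traceback(tb)

try:
  reraise()
except ValueError as exc:
  caught = exc

assert str(caught) == 'original'
```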


Pair of filenos read/write each other?

2013-08-13 Thread Jack Bates
Can anyone suggest a way to get a pair of file descriptor numbers such 
that data written to one can be read from the other and vice versa?


Is there anything like os.pipe() where you can read/write both ends?

Thanks!
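A sketch of one candidate for comparison with os.pipe(): socket.socketpair() (historically Unix-only) returns two connected endpoints whose filenos are each readable and writable.

```python
import os
import socket

# Each end's fileno() can be both read from and written to
a, b = socket.socketpair()

os.write(a.fileno(), b'ping')
received = os.read(b.fileno(), 4)

os.write(b.fileno(), b'pong')
echoed = os.read(a.fileno(), 4)

a.close()
b.close()
assert received == b'ping' and echoed == b'pong'
```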


Re: Pair of filenos read/write each other?

2013-08-15 Thread Jack Bates
On Wed, Aug 14, 2013 at 01:17:40AM +0100, Rhodri James wrote:
> On Wed, 14 Aug 2013 00:10:41 +0100, Jack Bates   
> wrote:
> 
> > Can anyone suggest a way to get a pair of file descriptor numbers such  
> > that data written to one can be read from the other and vice versa?
> >
> > Is there anything like os.pipe() where you can read/write both ends?
> 
> Sockets?  It depends a bit on what you're trying to do, exactly.  If you  
> give us a bit more context, we might be able to give you better advice.

Thanks, I am writing a fixture to test an app. The app normally uses the Python
wrapper for the GnuTLS library to handshake with a service that I want to
mock up in my fixture. The app passes a file descriptor number to
gnutls_transport_set_ptr()

When I test the app, the app and fixture run in the same process. I want a file
descriptor number I can foist on the app, and in the fixture access what is
read/written to it by the GnuTLS wrapper.


Re: Pair of filenos read/write each other?

2013-08-15 Thread Jack Bates
On Wed, Aug 14, 2013 at 01:55:38AM +0100, Chris Angelico wrote:
> On Wed, Aug 14, 2013 at 1:17 AM, Rhodri James
>  wrote:
> > On Wed, 14 Aug 2013 00:10:41 +0100, Jack Bates 
> > wrote:
> >
> >> Can anyone suggest a way to get a pair of file descriptor numbers such
> >> that data written to one can be read from the other and vice versa?
> >>
> >> Is there anything like os.pipe() where you can read/write both ends?
> >
> >
> > Sockets?  It depends a bit on what you're trying to do, exactly.  If you
> > give us a bit more context, we might be able to give you better advice.
> 
> Specific questions:
> 
> 1) Do you need different processes to do the reading/writing?

No, only one process is involved.

> 2) Do you need to separate individual writes (message mode)?

No, data can be split into smaller writes or merged into a bigger write, so
long as it arrives in order.

> ChrisA

Thank you for your help!


Re: Pair of filenos read/write each other?

2013-08-15 Thread Jack Bates
On Wed, Aug 14, 2013 at 08:34:36AM +, Antoine Pitrou wrote:
> Nobody  nowhere.com> writes:
> > On Tue, 13 Aug 2013 16:10:41 -0700, Jack Bates wrote:
> > > Is there anything like os.pipe() where you can read/write both ends?
> > 
> > There's socket.socketpair(), but it's only available on Unix.
> > 
> > Windows doesn't have AF_UNIX sockets, and anonymous pipes (like the ones
> > created by os.pipe()) aren't bidirectional.
> 
> I'm not sure I understand the problem: you can just create two pair of pipes
> using os.pipe().
> If that's too low-level, you can wrap the fds using BufferedRWPair:
> http://docs.python.org/3.3/library/io.html#io.BufferedRWPair
> 
> (actual incantation would be:
>  r1, w1 = os.pipe()
>  r2, w2 = os.pipe()
> 
>  end1 = io.BufferedRWPair(io.FileIO(r1, 'r'), io.FileIO(w2, 'w'))
>  end2 = io.BufferedRWPair(io.FileIO(r2, 'r'), io.FileIO(w1, 'w'))
> 
>  end1.write(b"foo")
>  end1.flush()
>  end2.read(3)  # -> return b"foo"
> )
> 
> An alternative is to use multiprocessing.Pipe():
> http://docs.python.org/3.3/library/multiprocessing.html#multiprocessing.Pipe
> 
> In any case, Python doesn't lack facilities for doing what you want.

Thank you for your help, I need to satisfy an interface that requires a single
file descriptor number that can be both read from and written to. Is it
possible with any of the solutions you pointed out to get a single file
descriptor number for each end?


buffer() as argument to ctypes function which expects c_void_p?

2011-09-08 Thread Jack Bates
How do you pass a Python buffer() value as an argument to a ctypes
function, which expects a c_void_p argument? I keep getting TypeError:

ctypes.ArgumentError: argument 2: : wrong type
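For reference, a sketch of one way that does work (using libc's memcpy as a stand-in for the real function): wrap the buffer in a ctypes array without copying, then pass its address, which a c_void_p parameter accepts as a plain integer. The original read-only buffer() would need from_buffer_copy instead; a writable bytearray is used here.

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library('c') or 'libc.so.6')
libc.memcpy.restype = ctypes.c_void_p
libc.memcpy.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_size_t]

src = bytearray(b'hello')
dst = bytearray(len(src))

# Wrap each buffer in a ctypes array (no copy), then pass its address
src_arr = (ctypes.c_char * len(src)).from_buffer(src)
dst_arr = (ctypes.c_char * len(dst)).from_buffer(dst)
libc.memcpy(ctypes.addressof(dst_arr), ctypes.addressof(src_arr), len(src))

assert bytes(dst) == b'hello'
```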


ImportError: cannot import name dns

2011-09-13 Thread Jack Bates
Why is the following ImportError raised?

$ ./test
Traceback (most recent call last):
  File "./test", line 3, in 
from foo import dns
  File "/home/jablko/foo/dns.py", line 1, in 
from foo import udp
  File "/home/jablko/foo/udp.py", line 1, in 
from foo import dns
ImportError: cannot import name dns
$

I reproduce this error with the following four files and five lines:

== foo/dns.py ==
from foo import udp

== foo/udp.py ==
from foo import dns

== foo/__init__.py ==
(empty)

== test ==
#!/usr/bin/env python

from foo import dns


Re: ImportError: cannot import name dns

2011-09-14 Thread Jack Bates
> It is a circular dependency. Dns will try to import udp which will in turn 
> import dns (again) in an endless cycle; instead an ImportError is raised.
>
> Circular dependency is a Bad Thing.

According to this documentation:

http://docs.python.org/reference/simple_stmts.html#grammar-token-import_stmt

http://effbot.org/zone/import-confusion.htm

 - I thought Python would do something like:

1. check for "dns" in sys.modules (initially not found)
2. create new empty module, add it to sys.modules as "dns"
3. execute dns.py in new module namespace (executes "from foo import udp")
4. check for "udp" in sys.modules (not found)
5. create new empty module, add it to sys.modules as "udp"
6. execute udp.py in new module namespace (executes "from foo import dns")
7. check for "dns" in sys.modules (found!)
8. done executing udp.py
9. done executing dns.py

So I'd expect attempting to access symbols from "dns" while executing
udp.py to fail, because dns.py isn't done executing at this point.
However I don't attempt to access any symbols from "dns" - so I don't
expect this ImportError

What is my mistake?
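A sketch that isolates the difference (using a hypothetical temp-dir package): "import foo.dns" only needs the sys.modules entry, while "from foo import dns" needs the dns attribute on the foo package object, which isn't set until dns.py finishes executing. Rewriting the inner imports in the plain form breaks the deadlock:

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'foo'))
open(os.path.join(root, 'foo', '__init__.py'), 'w').close()
with open(os.path.join(root, 'foo', 'dns.py'), 'w') as f:
  f.write('import foo.udp\n')  # plain import of the sibling
with open(os.path.join(root, 'foo', 'udp.py'), 'w') as f:
  f.write('import foo.dns\n')  # succeeds: "foo.dns" is in sys.modules

sys.path.insert(0, root)
importlib.invalidate_caches()
from foo import dns  # fine here: dns.py has finished by now

assert sys.modules['foo.dns'] is dns
```

(Note: since CPython 3.7, "from package import module" also falls back to sys.modules when the attribute is missing, so the original traceback may not reproduce on modern versions.)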


method-to-instance binding, callable generator decorator

2011-01-26 Thread Jack Bates
Am struggling to understand Python method-to-instance binding

Anyone know why this example throws a TypeError?

> #!/usr/bin/env python
> 
> import functools
> 
> # Take a generator function (i.e. a callable which returns a generator) and
> # return a callable which calls .send()
> class coroutine:
>   def __init__(self, function):
>     self.function = function
> 
>     functools.update_wrapper(self, function)
> 
>   def __call__(self, *args, **kwds):
>     try:
>       return self.generator.send(args)
> 
>     except AttributeError:
>       self.generator = self.function(*args, **kwds)
> 
>       return self.generator.next()
> 
> # Each time we're called, advance to next yield
> @coroutine
> def test():
>   yield 'call me once'
>   yield 'call me twice'
> 
> # Works like a charm : )
> assert 'call me once' == test()
> assert 'call me twice' == test()
> 
> class Test:
> 
>   # Each time we're called, advance to next yield
>   @coroutine
>   def test(self):
>     yield 'call me once'
>     yield 'call me twice'
> 
> test = Test()
> 
> # TypeError, WTF?
> assert 'call me once' == test.test()
> assert 'call me twice' == test.test()

https://gist.github.com/797019

Am trying to write a decorator such that each time I call a function, it
advances to the next "yield" - I plan to use functions like this as
fixtures in tests

Does a decorator like this already exist in the Python standard library?