Re: Killing worker threads
Maybe the following recipes are useful:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/496960
http://sebulba.wikispaces.com/recipe+thread2
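Both recipes revolve around CPython's PyThreadState_SetAsyncExc API, which asks the interpreter to raise an exception in another thread. A minimal sketch of that core idea (the function name and error handling here are mine, loosely adapted from the recipes; the target thread only sees the exception the next time it runs Python bytecode, so this cannot interrupt blocking C calls):

import ctypes

def async_raise(tid, exctype):
    # Ask the interpreter to raise exctype in the thread whose ident is tid.
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(tid), ctypes.py_object(exctype))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res > 1:
        # More than one thread state was touched: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), 0)
        raise SystemError("PyThreadState_SetAsyncExc failed")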
Re: Magic function
[EMAIL PROTECTED] wrote:
> Hi all,
>
> I'm part of a small team writing a Python package for a scientific
> computing project. The idea is to make it easy to use for relatively
> inexperienced programmers. As part of that aim, we're using what we're
> calling 'magic functions', and I'm a little bit concerned that they
> are dangerous code. I'm looking for advice on what the risks are (e.g.
> possibility of introducing subtle bugs, code won't be compatible with
> future versions of Python, etc.).
>
> Quick background: Part of the way our package works is that you create
> a lot of objects, and then you create a new object which collects
> together these objects and operates on them. We originally were
> writing things like:
>
> obj1 = Obj(params1)
> obj2 = Obj(params2)
> ...
> bigobj = Bigobj(objects=[obj1, obj2])
> bigobj.run()
>
> This is fine, but we decided that for clarity of these programs, and
> to make it easier for inexperienced programmers, we would like to be
> able to write something like:
>
> obj1 = Obj(params1)
> obj2 = Obj(params2)
> ...
> run()
>
> The idea is that the run() function inspects the stack, looks for
> objects which are instances of class Obj, creates a Bigobj with those
> objects and calls its run() method.

Well, I would do it this way: no fancy stuff, all standard and fast.

from weakref import ref

class bigobject(set):

    def __iter__(self):
        for obj in set.__iter__(self):
            yield obj()

    def run(self):
        for obj in self:
            print obj.value

class foo(object):
    """ the weakref doesn't prevent garbage collection
        once the last real reference is gone """
    __instances__ = bigobject()

    def __init__(self, value):
        foo.__instances__.add(ref(self, foo.__instances__.remove))
        self.value = value

if __name__ == "__main__":
    obj1 = foo("obj1")
    obj2 = foo("obj2")
    obj3 = foo("obj3")
    obj4 = foo("obj4")
    foo.__instances__.run()
    print "test garbage collection."
    del obj1, obj2, obj3, obj4
    foo.__instances__.run()
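For comparison with the weakref registry above, the stack-inspecting run() that the original poster describes might look roughly like this. This is a hedged sketch only (Obj and Bigobj are the names from the quoted message, the frame handling is my guess), and it only sees instances bound to names in the immediate caller's namespace:

import sys

class Obj(object):
    def __init__(self, params):
        self.params = params

class Bigobj(object):
    def __init__(self, objects):
        self.objects = objects
    def run(self):
        for obj in self.objects:
            print obj.params

def run():
    # Collect every Obj bound to a local name in the calling frame.
    caller = sys._getframe(1)
    objs = [v for v in caller.f_locals.values() if isinstance(v, Obj)]
    Bigobj(objects=objs).run()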
Re: Magic function
[EMAIL PROTECTED] wrote:
> Hi Rüdiger,
>
> Thanks for your message. I liked your approach and I've been trying
> something along exactly these sorts of lines, but I have a few
> problems and queries.
>
> The first problem is that the id of the frame object can be re-used,
> so for example this code (where I haven't defined InstanceTracker and
> getInstances, but they are very closely based on the ideas in your
> message):
>
> class A(InstanceTracker):
>     gval = 0
>     def __init__(self):
>         self.value = A.gval   # each time you make a new object, give
>         A.gval += 1           # it a value one larger
>     def __repr__(self):
>         return str(self.value)
>
> def f2():
>     a = A()   # objects 0 and 2
>     return getInstances(A)
>
> def f3():
>     a = A()   # object 1
>     return f2()
>
> inst2 = f2()
> inst3 = f3()
> print inst2
> print inst3
>
> The output is:
>
> [0]
> [0, 2]
>
> The A-variable with value 0 is not being garbage collected because
> it's saved in the variable inst2, but it's also being returned by the
> second call to getInstances because the frame of f2 is the same each
> time (which makes sense, but may be implementation specific?).

Yes and no. id() basically returns the memory address of an object, and yes,
that is implementation specific. As far as I know a stack frame is of constant
size in CPython, so you always get the same id for the same call level, just
as you would always get the same number from your instance tracker.

But no, the A instance with value 0 is reported the second time because it was
created at the same call level __and__ it is still accessible from that call
level. If you want such objects to be destroyed, you must not create hard
references to them. That may be hard for your users. However, you could still
do something like:

def f2():
    InstanceTracker.prepare()   # <-- delete previously created entries here,
                                #     or calculate some magic hash value
                                #     or random number
    a = A()   # objects 0 and 2
    return getInstances(A)

or

@managedInstance   # <-- see above
def f2():
    a = A()   # objects 0 and 2
    return getInstances(A)

> The same problem doesn't exist when you use the stack searching method
> because from f2's point of view, the only bound instance of A is the
> one in that particular call of f2. If you had at the end, instead of
> the inst2, inst3 stuff:
>
> print f2()
> print f3()
>
> The output is:
>
> [0]
> [2]

You are basically guessing here how a user would write his program. What if
your users write code like this?

>>> my_global_dict = dict()
>>> my_global_list = list()
>>>
>>> def f2():
...     my_global_dict["a"] = object()
...     my_global_list.append(object())
...     print locals()
...
>>> f2()
{}
>>>

You would not find such references by inspecting the stack.

> Again, I guess this is because A with value 0 is being garbage collected
> between print f2() and print f3(), but again I think this is
> implementation specific? You don't have a guarantee that this object
> will be garbage collected straight away do you?

Yes, inspecting the stack is pure guesswork. You don't know anything about
your users' program structures, and inspecting the stack won't tell you.

> So my concern here is that this approach is actually less safe than
> the stack based approach because it depends on implementation specific
> details in a non-straightforward way. That said, I very much like the
> fact that this approach works if I write:
>
> a = [A()]
> a = [[A()]]
> etc.
>
> To achieve the same thing with the stack based approach you have to
> search through all containers to (perhaps arbitrary) depth.
Yes, and as pointed out above you would also have to search the global
namespace and all available memory, because an instance could have been
created by Psyco, ctypes, SWIG, assembly code...

> I also have another problem which is that I have a function decorator
> which returns a callable object (a class instance not a function).
> Unfortunately, the frame in which the callable object is created is
> the frame of the decorator, not the place where the definition is.
> I've written something to get round this, but it seems like a bit of a
> hack.
>
> Can anyone suggest an approach that combines the best of both worlds,
> the instance tracking approach and the stack searching approach? Or do
> I need to just make a tradeoff here?

Well, that's my last example. I hope it will help.

from weakref import ref
from random import seed, randint

seed()

class ExtendedRef(ref):
    def __init__(self, ob, callback=None, **annotations):
        super(ExtendedRef, self).__init__(ob, callback)
        self.__id = 0

class WeakSet(set):
    __inst__ = 0

    def add(self, value):
        wr = ExtendedRef(value, self.remove)
        wr.__id = WeakSet.__inst__
        set.add(self, wr)

    def get(self, _id=None):
        _id = _id if _id else WeakSet.__inst__
        return [ _() for _ in
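The example above breaks off mid-expression in the archive. What follows is a self-contained sketch of the same "tag instances with the current generation / call context" idea; the names (TaggedRef, InstanceTracker, prepare, add_instance, get) and the details are mine, so treat it as a guess at the intended behaviour rather than the poster's actual code:

from weakref import ref

class TaggedRef(ref):
    # Subclass only so extra attributes can be attached to the weak reference.
    pass

class InstanceTracker(set):
    generation = 0

    @classmethod
    def prepare(cls):
        # Start a new generation; get() only reports instances created after this.
        cls.generation += 1

    def add_instance(self, obj):
        wr = TaggedRef(obj, self.discard)
        wr.generation = InstanceTracker.generation
        self.add(wr)

    def get(self, generation=None):
        if generation is None:
            generation = InstanceTracker.generation
        return [r() for r in set.__iter__(self)
                if r.generation == generation and r() is not None]

class A(object):
    _registry = InstanceTracker()
    def __init__(self):
        A._registry.add_instance(self)

def f2():
    A._registry.prepare()      # forget instances from earlier call levels
    a = A()
    return A._registry.get()   # reports only the instances created since prepare()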
Re: [x for x in <> while <>]?
urikaluzhny wrote:
> It seems that I rather frequently need a list or iterator of the form
> [x for x in <> while <>]
> And there is no one like this.
> May be there is another short way to write it (not as a loop). Is
> there?
> Thanks

I usually have the same problem, and I came up with a solution like this:

from operator import ne

def test(iterable, value, op=ne):
    _n = iter(iterable).next
    while True:
        _x = _n()
        if op(_x, value):
            yield _x
        else:
            raise StopIteration

l = range(6)
print [x for x in test(l, 4)]

[EMAIL PROTECTED]:~/tmp> python test18.py
[0, 1, 2, 3]
Re: [x for x in <> while <>]?
Paul Hankin wrote:
> This is better written using takewhile...
>
> itertools.takewhile(lambda x: x != value, iterable)
>
> But if you really need to reinvent the wheel, perhaps this is simpler?
>
> def test(iterable, value, op=operator.ne):
>     for x in iterable:
>         if not op(x, value):
>             return
>         yield x

Yes, you are right, it is. However, as I mentioned in my post, I came up with
a solution 'like' that. In fact my original code was too complex to post, and
while simplifying it I overlooked the obvious solution. For special cases
where you need to do more complex tests, the best solution is IMHO to hide
them in a generator function like the one above.
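For completeness, the takewhile spelling from the quoted message, applied to the same data as the earlier test18.py example (this is stock itertools, nothing version specific):

from itertools import takewhile

l = range(6)
print list(takewhile(lambda x: x != 4, l))    # prints [0, 1, 2, 3]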
why is self not passed to id()?
Hello!

Executing the following little program gives me a TypeError. What makes me
wonder is that foo's hash function does get an argument passed while bar's
doesn't. Can anyone explain why?

Thanks, Ruediger

class foo(list):
    __hash__ = lambda x: id(x)

class bar(list):
    __hash__ = id

_s_ = set()
_s_.add(foo())
_s_.add(bar())

[EMAIL PROTECTED]:~> python test01.py
Traceback (most recent call last):
  File "test01.py", line 9, in <module>
    _s_.add(bar())
TypeError: id() takes exactly one argument (0 given)
Re: why is self not passed to id()? < solved >
castironpi wrote:
>
> The answer is fairly technical. For member functions to be bound to
> instances, they are required to have a __get__ method (which takes
> instance and owner as parameters). 'id' does not.
>
> (Why does 'id' not have a __get__ method?)
>
> By contrast,
>
> >>> set.add
> <method 'add' of 'set' objects>
> >>> dir(_)
> ['__call__', '__class__', '__delattr__', '__doc__', '__get__',
>  '__getattribute__', '__hash__', '__init__', '__name__', '__new__',
>  '__objclass__', '__reduce__', '__reduce_ex__', '__repr__',
>  '__setattr__', '__str__']
>
> 'set.add' does.

Thank you for the quick response. However, it gives me less hope that the
little performance hack I had in mind will ever work.
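A quick way to see the difference castironpi describes is to test for __get__ directly: ordinary functions (including lambdas) implement the descriptor protocol and therefore get bound as methods, while built-ins such as id do not. This is a small illustrative check of my own, not from the original thread; the behaviour shown is CPython 2.x, which this thread is about:

hash_via_lambda = lambda x: id(x)

print hasattr(hash_via_lambda, '__get__')   # True  -> bound to the instance, so x gets passed
print hasattr(id, '__get__')                # False -> stored as a plain attribute, never bound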
Re: why is self not passed to id()?
Fredrik Lundh wrote:
> >>> id
> <built-in function id>
> >>> lambda x: id(x)
> <function <lambda> at 0x00C07C30>
>
> any special reason why you're not using Python to write Python programs,
> btw?

I am aware that id is a built-in function; why shouldn't I use it? Replacing
the lambda with id was intended as a performance hack. Profiling showed that
the lambda itself takes more than twice as much CPU time as id alone
(profile shortened):

         3610503 function calls in 22.451 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   960096    4.593    0.000    6.702    0.000 test14.py:33(<lambda>)
        1    0.003    0.003   22.451   22.451 {execfile}
   960096    2.109    0.000    2.109    0.000 {id}

However, keeping the lambda seemed pointless to me, since id already takes an
argument and wrapping it in a Python function simply has no real purpose.
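The overhead being measured here is simply the extra Python-level call frame of the lambda. A rough way to reproduce the comparison on any machine (numbers will differ; this is only a sketch):

import timeit

setup = "obj = object(); wrapped = lambda x: id(x)"
direct  = timeit.Timer("id(obj)", setup).timeit(1000000)
wrapped = timeit.Timer("wrapped(obj)", setup).timeit(1000000)
print "direct id(): %.3fs   lambda wrapper: %.3fs" % (direct, wrapped)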
Re: why is self not passed to id()?
I found the following solution to the problem. Instead of assigning id
directly to __hash__, it has to be wrapped in an instancemethod object. It is
somehow strange that this doesn't happen automatically, and it is also strange
that instancemethod isn't exposed in the types module. However, it can easily
be done, and it speeds things up by almost a factor of 2.

Thanks again for all the help.

Rüdiger

**

class foo(list):
    __hash__ = lambda x: id(x)

instancemethod = type(foo.__hash__)

class bar(list):
    pass

bar.__hash__ = instancemethod(id, None, bar)

def test0(obj):
    _s_ = set()
    _s_add = _s_.add
    _s_pop = _s_.pop
    for _i_ in xrange(100):
        _s_add(obj())
        _s_pop()

def test1():
    return test0(foo)

def test2():
    return test0(bar)

if __name__ == '__main__':
    test1()
    test2()
    pass

**

python -m cProfile test01.py

         610 function calls in 30.547 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   30.547   30.547 <string>:1(<module>)
        1    0.000    0.000   30.547   30.547 test01.py:1(<module>)
        1    0.000    0.000    0.000    0.000 test01.py:1(foo)
        2   10.784    5.392   30.547   15.273 test01.py:10(test0)
        1    0.000    0.000   19.543   19.543 test01.py:18(test1)
      100    4.554    0.000    6.700    0.000 test01.py:2(<lambda>)
        1    0.000    0.000   11.003   11.003 test01.py:20(test2)
        1    0.000    0.000    0.000    0.000 test01.py:6(bar)
        1    0.001    0.001   30.547   30.547 {execfile}
      100    2.146    0.000    2.146    0.000 {id}
      200    8.626    0.000   15.327    0.000 {method 'add' of 'set' objects}
      200    4.436    0.000    4.436    0.000 {method 'pop' of 'set' objects}
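A side note on the "not exposed in the types module" remark: in Python 2 the instancemethod type obtained above is in fact available as types.MethodType, so the same binding can be written without the throw-away foo class. This is a hedged sketch (Python 2 only; Python 3 dropped unbound methods and changed the MethodType signature), and baz is just a fresh example class of my own:

import types

class baz(list):
    pass

# Same effect as instancemethod(id, None, baz) from the post above.
baz.__hash__ = types.MethodType(id, None, baz)

s = set()
s.add(baz())     # works: hash() now dispatches to id() via the bound method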