Memory Allocation?
Is it possible to determine how much memory is allocated by an arbitrary Python object? There doesn't seem to be anything in the docs about this, but considering that Python manages memory allocation, why would such a module be more difficult to design than, say, the GC? -- http://mail.python.org/mailman/listinfo/python-list
Re: Memory Allocation?
Gerrit Holl wrote: Chris S. wrote: Is it possible to determine how much memory is allocated by an arbitrary Python object? There doesn't seem to be anything in the docs about this, but considering that Python manages memory allocation, why would such a module be more difficult to design than say, the GC? Why do you want it? It would seem desirable to know how the components of one's program occupy memory. -- http://mail.python.org/mailman/listinfo/python-list
Re: Memory Allocation?
M.E.Farmer wrote: Hello Chris, I am sure there are many inaccuracies in this story, but hey, you asked instead of seeking your own answers, so: in general you need not worry about memory allocation. To be more specific, objects have a size, and most of them are known (at least to a wizard named Tim), but it doesn't really matter, because it doesn't work like that in Python. The CPython interpreter (I have never read a lick of the source; this is all from late-night memory) just grabs a chunk of memory and uses it as it sees fit. Jython uses Java's GC, and so on. Now tell me, do you really want to take out the garbage, or just look at it? Python does it for you so you don't have to.

Using similar logic, we shouldn't need access to the Garbage Collector or Profiler. After all, why would anyone need to know how fast their program is running, or whether or not their garbage has been collected? Python takes care of it. -- http://mail.python.org/mailman/listinfo/python-list
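To be fair to Chris's point, the standard library already exposes that kind of visibility: the gc module lets you trigger a collection and inspect what the collector is tracking. A small sketch using only documented gc calls (the numbers will of course vary by interpreter and workload):

    import gc

    # Force a full collection and report how many unreachable objects were found.
    print("unreachable objects found: %d" % gc.collect())

    # Objects the collector could not free (e.g. cycles involving __del__)
    # are parked in gc.garbage for inspection.
    print("uncollectable objects: %d" % len(gc.garbage))

    # Everything the cyclic collector is currently tracking.
    print("tracked objects: %d" % len(gc.get_objects()))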
Re: Memory Allocation?
Donn Cave wrote: In article <[EMAIL PROTECTED]>, "Chris S." <[EMAIL PROTECTED]> wrote: Is it possible to determine how much memory is allocated by an arbitrary Python object? There doesn't seem to be anything in the docs about this, but considering that Python manages memory allocation, why would such a module be more difficult to design than say, the GC? Sorry, I didn't follow that - such a module as what? GC == Garbage Collector (http://docs.python.org/lib/module-gc.html)

Along with the kind of complicated internal implementation details, you may need to consider the possibility that the platform malloc() may reserve more than the allocated amount, for its own bookkeeping but also for alignment. It isn't a reliable guide by any means, but something like this might be at least entertaining -

>>> class A:
...     def __init__(self, a):
...         self.a = a
...
>>> d = map(id, map(A, [0]*32))
>>> d.sort()
>>> k = 0
>>> for i in d:
...     print i - k
...     k = i
...

This depends on the fact that id(a) returns a's storage address. I get very different results from one platform to another, and I'm not sure what they mean, but at a guess, I think you will see a fairly small number, like 40 or 48, that represents the immediate allocation for the object, and then a lot of intervals three or four times larger that represent all the memory allocated in the course of creating it. It isn't clear that this is all still allocated - malloc() doesn't necessarily reuse a freed block right away, and in fact the most interesting thing about this experiment is how different this part looks on different platforms. Of course we're still a bit in the dark as to how much memory is really allocated for overhead. Donn Cave, [EMAIL PROTECTED]

Are you referring to Python's general method of memory management? I was under the impression that the ISO C specification for malloc() dictates allocation of a fixed amount of memory. free(), not malloc(), handles deallocation. Am I wrong? Does Python use a custom non-standard implementation of malloc()? -- http://mail.python.org/mailman/listinfo/python-list
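For what it's worth, later Python versions grew a more direct (if shallow) probe for the original question: sys.getsizeof(), added in 2.6, reports the size in bytes of the object itself, not counting anything it refers to. A tiny sketch along those lines:

    import sys

    class A(object):
        def __init__(self, a):
            self.a = a

    obj = A(0)
    print(sys.getsizeof(obj))           # the instance itself
    print(sys.getsizeof(obj.__dict__))  # its attribute dict is counted separately
    print(sys.getsizeof([0] * 32))      # list header plus pointer slots, not the elements

Summing a whole object graph still takes a manual traversal (for example via gc.get_referents()), so the caveats in this thread about what "allocated" really means still apply.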
Re: how to drop all thread ??
Leon wrote: If class A (using the threading/thread modules) creates 100 threads, how do I drop all of them (all 100 threads) while they are running? As Roggisch suggests, the cleanest way is for each thread to kill itself once it is signaled by an exit condition. However, there is an unorthodox way of pseudo-forcibly killing threads by having a trace function raise an exception inside the thread. This method is summed up in Connelly Barnes's informal KThread module: http://www.google.com/groups?q=KThread+group:comp.lang.python&hl=en&lr=&selm=mailman.225.1083634398.25742.python-list%40python.org&rnum=1 Note it won't work in all cases, as it can't kill a thread that has made a blocking system call. However, it may come in useful. -- http://mail.python.org/mailman/listinfo/python-list
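For reference, the idea behind that recipe (this is a rough sketch of the same trick, not the KThread module itself; the names below are my own) is to wrap threading.Thread so that a per-thread trace function raises SystemExit once a kill flag has been set:

    import sys
    import threading
    import time

    class KillableThread(threading.Thread):
        # Thread that can be asked to die via a trace-function hack.

        def __init__(self, *args, **kwargs):
            threading.Thread.__init__(self, *args, **kwargs)
            self.killed = False

        def start(self):
            # Swap in a wrapper so the new thread installs its own trace hook.
            self._run_backup = self.run
            self.run = self._patched_run
            threading.Thread.start(self)

        def _patched_run(self):
            sys.settrace(self._globaltrace)   # trace functions are per-thread
            self._run_backup()
            self.run = self._run_backup

        def _globaltrace(self, frame, why, arg):
            if why == 'call':
                return self._localtrace
            return None

        def _localtrace(self, frame, why, arg):
            if self.killed and why == 'line':
                raise SystemExit()            # unwinds the thread at its next line event
            return self._localtrace

        def kill(self):
            self.killed = True

    def worker():
        while True:
            time.sleep(0.1)   # stand-in for real work; a blocked system call cannot be interrupted

    t = KillableThread(target=worker)
    t.start()
    t.kill()
    t.join()

The cooperative approach Roggisch describes (a worker loop that periodically checks a threading.Event and returns on its own) is simpler and is the better choice whenever the loop can be structured that way.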
Quick Question regarding Frames
Hello All, Just starting out with Python and wxPython. I have two Frames, FrameA and FrameB. FrameA opens FrameB when a button on FrameA is clicked. I can get this. Now I want a button on FrameB to update a control on FrameA. I am having an issue with this. Can anyone point me in the right direction? Thanks. Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: Quick Question regarding Frames
A little further clarification. FrameA and FrameB are in different modules. Thanks. Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: Quick Question regarding Frames
Hi Dave, Thanks for the reply. I am a bit confused by this piece of code:

    class FrameB(wx.Frame):
        def __init__(self, frameA, ...):
            self.frameA = frameA

What is frameA in the __init__ definition? Do I need to create something called frameA in order to pass it to that __init__ function? Presently I would call FrameB as

    w2 = FrameB(None, -1, "")
    w2.Show()

Where would I put the reference to frameA? Thanks. Chris -- http://mail.python.org/mailman/listinfo/python-list
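Reading between the lines of Dave's suggestion, the key is that FrameA passes itself (self) when it constructs FrameB, and FrameB stores that reference so its event handlers can reach controls living on FrameA. Here is a small sketch of one way to wire it up; the control names, handlers, and layout are my own guesses, not Dave's code:

    import wx

    class FrameA(wx.Frame):
        def __init__(self, parent):
            wx.Frame.__init__(self, parent, -1, "Frame A")
            self.text = wx.TextCtrl(self)   # the control FrameB will update
            self.button = wx.Button(self, -1, "Open Frame B")
            self.button.Bind(wx.EVT_BUTTON, self.on_open)
            sizer = wx.BoxSizer(wx.VERTICAL)
            sizer.Add(self.text, 0, wx.EXPAND)
            sizer.Add(self.button, 0)
            self.SetSizer(sizer)

        def on_open(self, event):
            w2 = FrameB(self)   # pass this frame in as frameA
            w2.Show()

    class FrameB(wx.Frame):
        def __init__(self, frameA):
            # frameA is simply the FrameA instance handed in above; keeping it
            # lets handlers here call methods on controls that live on FrameA.
            wx.Frame.__init__(self, frameA, -1, "Frame B")
            self.frameA = frameA
            self.button = wx.Button(self, -1, "Update Frame A")
            self.button.Bind(wx.EVT_BUTTON, self.on_update)

        def on_update(self, event):
            self.frameA.text.SetValue("updated from FrameB")

    if __name__ == "__main__":
        app = wx.App(False)   # wx.PySimpleApp() on the wxPython of that era
        FrameA(None).Show()
        app.MainLoop()

The same pattern works when FrameA and FrameB live in different modules: FrameB's module never needs to import FrameA's module, because it only ever receives an already-constructed instance.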