Re: memory recycling/garbage collecting problem
On Feb 16, 11:21 pm, Yuanxin Xi wrote:
> Could anyone please explain why this happens?  It seems some memory
> are not freed.

There is a "bug" in versions of Python prior to 2.5 where memory really
isn't released back to the OS.  Python 2.5 contains a new object allocator
that is able to return memory to the operating system, which fixes this
issue.  Here's an explanation:

http://evanjones.ca/python-memory-part3.html

What version of Python are you using?  I have a machine running several
long-running processes, each of which occasionally spikes up to 500 MB of
memory usage, although normally they only require about 25 MB.  Prior to
2.5, those processes never released that memory back to the OS and I would
need to periodically restart them.  With 2.5, this is no longer a problem.
I don't always see memory usage drop back down immediately, but the OS
does recover the memory eventually.  Make sure you use 2.5 if this is an
issue for you.

--David
--
http://mail.python.org/mailman/listinfo/python-list
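A minimal sketch (not from the original post) for confirming at runtime
whether a process is on an interpreter that has the arena-freeing
allocator; the 2.5 cutoff is the version discussed above:

    import sys

    # The allocator that returns freed obmalloc arenas to the OS landed
    # in Python 2.5; earlier interpreters keep the arenas until exit.
    if sys.version_info < (2, 5):
        print "Python %s: freed arenas are never returned to the OS" % \
              sys.version.split()[0]
    else:
        print "Python %s: freed arenas can be returned to the OS" % \
              sys.version.split()[0]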
Re: memory recycling/garbage collecting problem
Tim Peters showed a way to demonstrate the fix in
http://mail.python.org/pipermail/python-dev/2006-March/061991.html

> For simpler fun, run this silly little program, and look at memory
> consumption at the prompts:
>
> """
> x = []
> for i in xrange(1000000):
>     x.append([])
> raw_input("full ")
> del x[:]
> raw_input("empty ")
> """
>
> For example, in a release build on WinXP, VM size is about 48MB at the
> "full" prompt, and drops to 3MB at the "empty" prompt.  In the trunk
> (without this patch), VM size falls relatively little from what it is
> at the "full" prompt (the contiguous vector holding a million
> PyObject* pointers is freed, but the obmalloc arenas holding a
> million+1 list objects are never freed).
>
> For more info about the patch, see Evan's slides from _last_ year's
> PyCon:
>
> http://evanjones.ca/memory-allocator.pdf

I'm not sure what deleting a slice accomplishes (del x[:]); the behavior
is the same whether I do del x or del x[:].  Any ideas?

--David
--
http://mail.python.org/mailman/listinfo/python-list
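For what it's worth, the visible difference between the two forms shows up
when something else still references the list.  A small sketch (my own,
not from the thread): del x only unbinds the name, while del x[:] empties
the list object in place.

    x = [[] for i in xrange(3)]
    y = x          # y refers to the same list object as x

    del x[:]       # empties the list in place; y sees the change
    print y        # []

    x = [[] for i in xrange(3)]
    y = x

    del x          # only removes the name "x"; the list survives via y
    print y        # [[], [], []]

In Tim's demo nothing else holds a reference, so either form makes the
million inner lists garbage and the memory drop looks the same.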
Re: default behavior
[Oops, now complete...]

Peter Otten <__pete...@web.de> wrote:
>
> >>> 1 .conjugate()

This is a syntax I never noticed before.  My built-in compiler (eyes) took
one look and said: "that doesn't work."  Has this always worked in Python
but I never noticed?  I see other instance examples also work.

>>> '1' .zfill(2)
'01'
>>> 1.0 .is_integer()
True

and properties

>>> 1.0 .real
1.0

Curiously, a float literal works without a space

>>> 1.0.conjugate()
1.0

but not an int.

>>> 1.conjugate()
  File "<stdin>", line 1
    1.conjugate()
              ^
SyntaxError: invalid syntax

Anyway, I didn't realize int has methods you can call.

--David
--
http://mail.python.org/mailman/listinfo/python-list
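A small sketch of the workarounds (not from the original post): the
tokenizer reads "1." as the start of a float literal, so either a space or
parentheses are needed to end the integer literal before the dot.

    >>> (1).conjugate()    # parentheses end the literal before the dot
    1
    >>> 1 .conjugate()     # a space does the same job
    1
    >>> 1.0.conjugate()    # "1.0" is already a complete float literal
    1.0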
Re: Clarity vs. code reuse/generality
I remember in college taking an intro programming class (C++) where the
professor started us off writing a program to factor polynomials; he
probably also incorporated binary search into an assignment.  But people
don't generally use Python to implement binary search or factor
polynomials, so maybe you should start with a problem more germane to
typical novice users (and less algorithm-y).  Wouldn't starting them off
with string processing or simple calculations be a practical way to get
comfortable with the language?

--David

On Jul 3, 9:05 am, kj wrote:
> I will be teaching a programming class to novices, and I've run
> into a clear conflict between two of the principles I'd like to
> teach: code clarity vs. code reuse.  I'd love your opinion about it.
>
> The context is the concept of a binary search.  In one of their
> homeworks, my students will have two occasions to use a binary
> search.  This seemed like a perfect opportunity to illustrate the
> idea of abstracting commonalities of code into a re-usable function.
> So I thought that I'd code a helper function, called _binary_search,
> that took five parameters: a lower limit, an upper limit, a
> one-parameter function, a target value, and a tolerance (epsilon).
> It returns the value of the parameter for which the value of the
> passed function is within the tolerance of the target value.
>
> This seemed straightforward enough, until I realized that, to be
> useful to my students in their homework, this _binary_search function
> had to handle the case in which the passed function was monotonically
> decreasing in the specified interval...
>
> The implementation is still very simple, but maybe not very clear,
> particularly to programming novices (docstring omitted):
>
> def _binary_search(lo, hi, func, target, epsilon):
>     assert lo < hi
>     assert epsilon > 0
>     sense = cmp(func(hi), func(lo))
>     if sense == 0:
>         return None
>     target_plus = sense * target + epsilon
>     target_minus = sense * target - epsilon
>     while True:
>         param = (lo + hi) * 0.5
>         value = sense * func(param)
>         if value > target_plus:
>             hi = param
>         elif value < target_minus:
>             lo = param
>         else:
>             return param
>
>         if lo == hi:
>             return None
>
> My question is: is the business with sense and cmp too "clever"?
>
> Here's the rub: the code above is more general (hence more reusable)
> by virtue of this trick with the sense parameter, but it is also
> a bit harder to understand.
>
> This is not an unusual situation.  I find that the process of
> abstracting out common logic often results in code that is harder
> to read, at least for the uninitiated...
>
> I'd love to know your opinions on this.
>
> TIA!
>
> kj
--
http://mail.python.org/mailman/listinfo/python-list
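To make the "sense" trick concrete, here is a small usage sketch (my own
examples, not from kj's post; f_up and f_down are hypothetical names)
showing the same helper handling both an increasing and a decreasing
function:

    def f_up(x):        # monotonically increasing on [0, 2]
        return x * x

    def f_down(x):      # monotonically decreasing on [0, 10]
        return 10.0 - x

    # Find x with x*x within 1e-6 of 2, i.e. an approximation of sqrt(2).
    print _binary_search(0.0, 2.0, f_up, 2.0, 1e-6)     # ~1.414213...

    # Same helper, decreasing function: find x where 10 - x is near 3.
    print _binary_search(0.0, 10.0, f_down, 3.0, 1e-6)  # ~7.0

The cmp(func(hi), func(lo)) call flips the comparison for the decreasing
case, which is exactly the extra generality the post weighs against
clarity.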