Facundo Batista <[EMAIL PROTECTED]> added the comment:

So, 0.0 would be cached, and the 414m+384m would be from the list itself,
right? I tried:

>>> data = [(1.0/i) for i in xrange(1, 100000000)]

and the memory consumption was the big one. Grant, the 800 MB is not taken
by ONE 0.0, but by a list with a zillion positions. Furthermore, I did:

>>> for x in xrange(100000000):
...     i = random()

and the memory didn't increase. Grant, note that there's no gc issue here:
the numbers stay alive because the list itself is pointing to them.

Closing this as invalid.

----------
resolution:  -> invalid
status: open -> closed

_______________________________________
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3063>
_______________________________________
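The effect described above can be sketched as follows. This is a minimal illustration, not code from the issue: it uses Python 3 (`range` instead of `xrange`) and a much smaller element count than the report's 100000000, and `sys.getsizeof` only approximates real process memory. Each `1.0/i` is a distinct float object kept alive by the list's reference to it, whereas rebinding a name in a loop keeps at most one float alive at a time.

```python
import sys

n = 100_000  # far smaller than the report's 100000000, same effect

# Every 1.0/i is a distinct float object; the list keeps all of
# them alive by holding a reference to each one.
data = [1.0 / i for i in range(1, n)]

# The list's own buffer only stores pointers; the float objects
# themselves account for most of the memory.
list_bytes = sys.getsizeof(data)
float_bytes = sum(sys.getsizeof(x) for x in data)
print("list buffer:", list_bytes, "bytes; floats:", float_bytes, "bytes")

# Rebinding a single name in a loop does NOT accumulate objects:
# each previous float becomes garbage as soon as i is rebound.
for x in range(n):
    i = x * 0.5
```

Running this shows the float objects dominating the list's pointer array, which is why a list comprehension over distinct floats grows memory while the bare loop does not.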