I have a large (10 GB) data file for which I want to parse each line into an object and then append that object to a list for sorting and further processing. I have noticed, however, that as the length of the list increases, the rate at which objects are added to it decreases dramatically. My first thought was that the slowdown had something to do with the list itself, but that did not appear to be the case.
The answer, it turns out, is the garbage collector. When I disable the garbage collector before the loop that loads the data into the list and then re-enable it after the loop, the program runs without issue.
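For concreteness, the loading loop looks roughly like this (parse_line and load_objects are placeholders, not my real code):

import gc

def parse_line(line):
    # placeholder parser; the real object construction happens here
    return line.split()

def load_objects(path):
    items = []
    gc.disable()                 # suspend cyclic garbage collection during the bulk load
    try:
        with open(path) as f:
            for line in f:
                items.append(parse_line(line))
    finally:
        gc.enable()              # re-enable collection even if parsing raises
    return items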
This raises a question, though: can the logic of the garbage collector be changed so that it does not cause this kind of slowdown in the first place?
I am using Python 2.6.2, so this may no longer be a problem in newer versions.
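One alternative I have seen suggested, though I have not benchmarked it myself, is raising the collector's generation-0 threshold with gc.set_threshold instead of turning collection off entirely; a sketch:

import gc

# get_threshold() returns the (gen0, gen1, gen2) collection thresholds;
# raising the first one makes collections run far less often while many
# objects are being allocated. 100000 is illustrative, not a tuned value.
gen0, gen1, gen2 = gc.get_threshold()
gc.set_threshold(100000, gen1, gen2)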
I am open to using another data type, but the way I read the documentation, array.array only supports numeric types, not arbitrary objects. I also tried playing around with numpy arrays, albeit only for a short time, and it seems that although they can be made to hold arbitrary objects, they did not look like an obvious win for this kind of append-heavy loading.
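To show what I mean (these snippets are illustrative, not my actual code; the numpy part assumes a dtype=object array, which as far as I understand is how arbitrary objects would have to be stored):

import array
import numpy as np

# array.array requires a numeric type code; it cannot hold arbitrary objects
nums = array.array('d', [1.0, 2.0, 3.0])

# numpy can hold arbitrary Python objects via dtype=object, but such an array
# is essentially an array of pointers, so it buys little over a plain list here
objs = np.empty(3, dtype=object)
objs[0] = {'parsed': 'line one'}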
I have been programming in Python for a while now and by and large love it. One thing I don't love, though, is that as far as I know iterators have no has_next type functionality. As a result, if I want to iterate until an element that might or might not be present is found, I either have to wrap a while loop in a try/except block that catches StopIteration, or fall back on some other workaround.
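To make that concrete, the try/except version looks roughly like this (a sketch with made-up values):

items = iter([2, 2, 2, 2, 2, 2, 1, 3, 3, 3, 3])

# with no has_next, exhaustion has to be detected by catching StopIteration
try:
    while True:
        current = next(items)    # next() is a builtin from Python 2.6 onward
        if current != 2:
            break                # current now holds the first non-2 value
except StopIteration:
    current = None               # the iterator ran out before a non-2 appeared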
The example I have in mind is a list like [2,2,2,2,2,2,1,3,3,3,3] where you want to loop until you see something that is not a 2, and then loop until you see something that is not a 3. In this situation you cannot use a for loop as follows:
foo_list_iter = iter([2,2,2,2,2,2,1,3,3,3,3])
for foo_item in foo_list_iter:
    if foo_item != 2:
        break

because once this loop finishes you have to inspect foo_item (or use for/else) to work out whether a non-2 was actually found or the iterator simply ran out, which is exactly the bookkeeping a has_next would avoid.
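One workaround I have seen suggested is itertools.dropwhile combined with the two-argument form of next(), which hands back the first failing element instead of losing it; a rough sketch of that idea:

from itertools import dropwhile

foo_list_iter = iter([2,2,2,2,2,2,1,3,3,3,3])

# skip the leading 2s; dropwhile yields the first non-2 value rather than dropping it
after_twos = dropwhile(lambda x: x == 2, foo_list_iter)
marker = next(after_twos, None)        # 1 here; None would mean only 2s were present

# then skip the 3s the same way
after_threes = dropwhile(lambda x: x == 3, after_twos)
leftover = next(after_threes, None)    # None for this list, since nothing follows the 3s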