On 11/12/13 23:54, Steven D'Aprano wrote:
I have some code which produces a list from an iterable, using at least
one temporary list and a Decorate-Sort-Undecorate idiom. The algorithm
looks something like this (simplified):

table = sorted([(x, i) for i, x in enumerate(iterable)])
table = [i for x, i in table]
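
To make the intended result concrete, here is a tiny example at the
interactive prompt (the undecorated list is just the indices that would
sort the data):

>>> iterable = ['b', 'c', 'a']
>>> table = sorted([(x, i) for i, x in enumerate(iterable)])
>>> [i for x, i in table]
[2, 0, 1]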

The problem here is that for large iterables, say 10 million items or so,
this is *painfully* slow, as my system has to page memory like mad to fit
two large lists into memory at once. So I came up with an in-place
version that saves (approximately) two-thirds of the memory needed.

table = [(x, i) for i, x in enumerate(iterable)]
table.sort()
for j, (x, i) in enumerate(table):
    table[j] = i

For giant iterables (ten million items), this version is a big
improvement, about three times faster than the list comp version. Since
we're talking about the difference between 4 seconds and 12 seconds (plus
an additional 40-80 seconds of general slow-down as the computer pages
memory into and out of virtual memory), this is a good, solid
optimization.

Except that for more reasonably sized iterables, it's a pessimization.
With one million items, the ratio is the other way around: the list comp
version is 2-3 times faster than the in-place version. For smaller lists,
the ratio varies, but the list comp version is typically around twice as
fast. A good example of trading memory for time.
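
For anyone who wants to reproduce the comparison, a small harness along
these lines will do; random floats stand in for the real iterable, and
the absolute numbers will of course vary from machine to machine.

import random
import timeit

def listcomp_version(iterable):
    # Decorate, sort, then undecorate into a second list.
    table = sorted([(x, i) for i, x in enumerate(iterable)])
    return [i for x, i in table]

def inplace_version(iterable):
    # Decorate, sort, then undecorate by overwriting the same list.
    table = [(x, i) for i, x in enumerate(iterable)]
    table.sort()
    for j, (x, i) in enumerate(table):
        table[j] = i
    return table

data = [random.random() for _ in range(10**6)]
for func in (listcomp_version, inplace_version):
    best = min(timeit.repeat(lambda: func(data), number=1, repeat=3))
    print("%s: %.2f seconds" % (func.__name__, best))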

So, ideally I'd like to write my code like this:


table = [(x, i) for i, x in enumerate(iterable)]
table.sort()
if len(table) < ?????:
    table = [i for x, i in table]
else:
    for j, (x, i) in enumerate(table):
        table[j] = i

where ????? no doubt will depend on how much memory is available in one
contiguous chunk.
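
Just to illustrate what I mean, a dynamic guess (standing in for the
len(table) < ????? test) might look something like the sketch below; it
assumes the third-party psutil package is acceptable, and the
bytes-per-entry figure is only a rough approximation.

import sys
import psutil  # third-party; assuming it is available

def prefer_listcomp(table):
    # Hypothetical helper: build a second list only if a rough
    # estimate of its size fits comfortably in available memory.
    per_entry = sys.getsizeof(0) + 8   # int object plus list slot (rough)
    estimated = sys.getsizeof(table) + len(table) * per_entry
    return 2 * estimated < psutil.virtual_memory().available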

Is there any way to determine which branch I should run, apart from hard-
coding some arbitrary and constant cut-off value?


I had a slightly similar problem a while ago. I actually wanted to
process data from a large file in sorted order. In the end I read
chunks of data from the file, sorted them, then wrote each chunk of
data to a temporary file. Then I used heapq.merge to merge the data in
the temporary files. It vastly reduced memory consumption, and was
'quick enough'. It was based on Guido's solution for sorting a million
32-bit integers in 2MB of RAM
(http://neopythonic.blogspot.co.uk/2008/10/sorting-million-32-bit-integers-in-2mb.html).

Cheers.

Duncan
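
P.S. In case it helps, here is a rough sketch of that chunk-and-merge
approach (not my actual code; the file name, chunk size and other
details are just placeholders):

import heapq
import tempfile

def sorted_lines(path, chunk_bytes=50 * 1024 * 1024):
    # Read roughly chunk_bytes worth of lines at a time, sort each
    # chunk in memory, spill it to a temporary file, then lazily
    # merge the sorted chunks.  Assumes every line ends with '\n'.
    chunks = []
    with open(path) as f:
        while True:
            lines = f.readlines(chunk_bytes)
            if not lines:
                break
            lines.sort()
            tmp = tempfile.TemporaryFile(mode='w+')
            tmp.writelines(lines)
            tmp.seek(0)
            chunks.append(tmp)
    return heapq.merge(*chunks)

# The result is an iterator, so the whole file is never held in memory.
for line in sorted_lines('data.txt'):
    pass  # process each line in sorted order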
