I suppose I can, but it won't be very efficient. I could use a smaller hashtable, process the records whose keys are already in it, and set aside the ones that are not for another round of processing. A chunked hashtable won't work that well on its own, though, because you don't know whether a key also exists in other chunks. To make it work, I'd need a rule that partitions the data into chunks so that each key can only ever land in one of them. So this is more work in general.
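Just to sketch the kind of partition rule I mean (this is only an assumption about the data - I'm pretending it arrives as tab-separated key/value lines, and NUM_CHUNKS and handle_chunk are made-up names, not anything from the original post):

import os

NUM_CHUNKS = 16  # assumed chunk count -- tune so one chunk's dict fits in RAM

def partition(input_path, chunk_dir):
    # Scatter tab-separated key/value lines into chunk files by key hash.
    # Every occurrence of a given key hashes to the same chunk, so no
    # chunk ever needs to consult another one.
    outputs = [open(os.path.join(chunk_dir, "chunk%d" % i), "w")
               for i in range(NUM_CHUNKS)]
    try:
        for line in open(input_path):
            key = line.split("\t", 1)[0]
            outputs[hash(key) % NUM_CHUNKS].write(line)
    finally:
        for f in outputs:
            f.close()

def handle_chunk(table):
    # placeholder for whatever per-chunk work you actually need to do
    print(len(table), "entries in this chunk")

def process_chunks(chunk_dir):
    # Load one chunk at a time into an in-memory dict and process it.
    for i in range(NUM_CHUNKS):
        table = {}
        for line in open(os.path.join(chunk_dir, "chunk%d" % i)):
            key, value = line.rstrip("\n").split("\t", 1)
            table[key] = value
        handle_chunk(table)

The point of hashing the key is that all records with the same key end up in the same chunk, so each chunk's dict is complete for its own keys and can be processed independently. But as I said, it's an extra pass over the data and more bookkeeping than just using one big dictionary.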
"kaens" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On 5/25/07, Jack <[EMAIL PROTECTED]> wrote: >> I need to process large amount of data. The data structure fits well >> in a dictionary but the amount is large - close to or more than the size >> of physical memory. I wonder what will happen if I try to load the data >> into a dictionary. Will Python use swap memory or will it fail? >> >> Thanks. >> >> >> -- >> http://mail.python.org/mailman/listinfo/python-list >> > > Could you process it in chunks, instead of reading in all the data at > once? -- http://mail.python.org/mailman/listinfo/python-list