In article <[email protected]>, forrest yang <[email protected]> wrote:
>
>I'm trying to load a big file of about 9,000,000 lines into a dict.
>The file looks something like:
>
>1 2 3 4
>2 2 3 4
>3 4 5 6
>
>The code:
>
>d = {}
>for line in open(path):
>    arr = line.strip().split('\t')
>    d[arr[0]] = arr
>
>The dict gets slower and slower as I load more data into memory, even
>though the Mac I use has 16 GB of RAM.  Is this caused by poor dict
>performance when it has to grow, or by some other reason?
Try gc.disable() before the loop and gc.enable() afterward; a sketch
follows below.  CPython's cyclic garbage collector is triggered by
allocation counts, and a loop that creates millions of small lists makes
it run again and again without ever finding any garbage to free, so
switching it off for the duration of the load can speed things up
dramatically.
--
Aahz ([email protected])           <*>         http://www.pythoncraft.com/

"If you think it's expensive to hire a professional to do the job, wait
until you hire an amateur."  --Red Adair
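Something along these lines -- a minimal sketch, where the filename
'big.txt' and the variable names are stand-ins for whatever the real
script uses:

    import gc

    table = {}

    gc.disable()            # suspend the cyclic collector during the load
    try:
        with open('big.txt') as f:          # 'big.txt' is a placeholder path
            for line in f:
                fields = line.strip().split('\t')
                table[fields[0]] = fields   # key on the first column
    finally:
        gc.enable()         # restore collection even if the load fails

Reference counting still frees objects as usual while the collector is
disabled, so nothing leaks; the try/finally just guarantees the collector
comes back on no matter what happens during the load.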
