I'm trying to load a big file into a dict. The file is about 9,000,000 lines of tab-separated integers, something like:

    1 2 3 4
    2 2 3 4
    3 4 5 6
The code:

    d = {}  # use a name other than the builtin dict
    with open(path) as f:
        for line in f:
            arr = line.strip().split('\t')
            d[arr[0]] = arr
But the dict gets really slow as I load more data into memory. By the way, the Mac I'm using has 16 GB of RAM. Is this caused by the cost of the dict growing (rehashing) as it expands, or by some other reason?
Can anyone suggest a better solution?
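
One idea I'm considering is storing each row as a tuple of ints keyed by its first field, instead of a list of strings, since millions of short strings seem heavy. A rough sketch of what I mean ('data.txt' is just a placeholder for my real file, and it assumes every field parses as an int):

    d = {}
    with open('data.txt') as f:
        for line in f:
            # split on whitespace (covers tabs), convert fields to ints
            row = tuple(int(x) for x in line.split())
            d[row[0]] = row  # note: the key becomes an int, not a string
    print(len(d), 'rows loaded')

I haven't measured this, but tuples of ints should take less memory than lists of strings. Would this help, or is there a better approach?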
