Carl Banks wrote:
> When building a very large structure like you're doing, the cyclic
> garbage collector can be a bottleneck. Try disabling the cyclic
> garbage collector before building the large dictionary, and
> re-enabling it afterwards.
>
> import gc
> gc.disable()
> try:
>     for line in file:
>         # ... build the dictionary ...
> finally:
>     gc.enable()
Steven D'Aprano wrote:
> per wrote:
>> currently, this is very slow in python, even if all i do is break up
>> each line using split() and store its values in a dictionary,
>> indexing by one of the tab separated values in the file.
>
> If that's the problem, the solution is: get more memory.
>
Or maybe think about an algorithm which needs less memory... My
experience tells me that whenever you want to store a lot of data in a
dict (or other structure) in order to analyze it afterwards, you can
usually find a way not to store so much data at once.
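As a rough sketch of that idea, one can aggregate while streaming instead of keeping every row; the summarize helper and the sample rows below are made up for illustration:

```python
from collections import defaultdict

def summarize(lines):
    """Aggregate while streaming instead of storing every row.

    Hypothetical example: count rows and sum one numeric column per key,
    so memory use is proportional to the number of distinct keys, not rows.
    """
    counts = defaultdict(int)
    totals = defaultdict(float)
    for line in lines:
        key, value = line.rstrip('\n').split('\t')[:2]
        counts[key] += 1
        totals[key] += float(value)
    return counts, totals

rows = ["a\t1.5\n", "b\t2.0\n", "a\t0.5\n"]
counts, totals = summarize(rows)
# counts["a"] == 2, totals["a"] == 2.0
```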
hi all,

i have a program that essentially loops through a textfile thats
about 800 MB in size containing tab separated data... my program
parses this file and stores its fields in a dictionary of lists.

for line in file:
    split_values = line.strip().split('\t')
    # do stuff with split_values

currently, this is very slow in python, even if all i do is break up
each line using split() and store its values in a dictionary, indexing
by one of the tab separated values in the file.
me> reader = csv.reader(open(fname, "rb"))
me> for row in reader:
me> ...
duh... How about
reader = csv.reader(open(fname, "rb"), delimiter='\t')
for row in reader:
...
S
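For what it's worth, on Python 3 the file should be opened in text mode with newline='' rather than "rb" (the quoted snippet is Python 2 usage). A minimal runnable sketch, using an in-memory file in place of a real fname:

```python
import csv
import io

# io.StringIO stands in for open(fname, newline='') here, purely for illustration.
data = io.StringIO("a\t1\t2\nb\t3\t4\n")
reader = csv.reader(data, delimiter='\t')
rows = list(reader)
# rows == [['a', '1', '2'], ['b', '3', '4']]
```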
--
http://mail.python.org/mailman/listinfo/python-list