In <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:

> I need to process a really huge text file (4GB) and this is what I
> need to do. It takes forever to complete this. I read somewhere that
> "list comprehension" can speed things up. Can you point out how to
> do it in this case?

None that I can see here; a list comprehension won't make this faster.

> f = open('file.txt','r')
> for line in f:
>         db[line.split(' ')[0]] = line.split(' ')[-1]
>         db.sync()

You can avoid splitting the same line twice, and pass the `maxsplit`
argument to `split()` and `rsplit()` so the line isn't split at *every*
space character.
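
Untested sketch, assuming `db` is a dict-like store with a `sync()`
method; I'm using `shelve` and made-up file names just for illustration:

    import shelve

    db = shelve.open('file.db')          # stand-in for the OP's `db`
    f = open('file.txt', 'r')
    for line in f:
        key = line.split(' ', 1)[0]      # split at the first space only
        value = line.rsplit(' ', 1)[-1]  # split at the last space only
        db[key] = value                  # value keeps '\n', as before
        db.sync()                        # still synced per line; see below
    db.close()
    f.close()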

And if the name is any hint, `db.sync()` is a potentially expensive
operation.  Try to call it less often if possible.
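
Again an untested sketch; syncing every 10000 records instead of on
every record (the interval is an arbitrary example value, tune it):

    import shelve

    db = shelve.open('file.db')
    f = open('file.txt', 'r')
    for i, line in enumerate(f):
        db[line.split(' ', 1)[0]] = line.rsplit(' ', 1)[-1]
        if i % 10000 == 0:
            db.sync()   # flush every 10000 lines instead of every line
    db.sync()           # final sync for the remaining records
    db.close()
    f.close()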

Ciao,
        Marc 'BlackJack' Rintsch
-- 
http://mail.python.org/mailman/listinfo/python-list