On 12 Dec, 00:08, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
> Note that all the above (as any operation involving a whole *column*)
> requires reading the whole file in memory. Working by rows, on the other
> hand, only requires holding ONE row at a time. For big files this is
> significant.
>
> An example of writing data given in columns:
>
> id = [1,2,3,4]
> name = ['Moe','Larry','Curly','Shemp']
> hair = ['black','red',None,'black']
> writer = csv.writer(...)
> writer.writerows(itertools.izip(id, name, hair))
>
> I think your problem is not with the csv module, but lack of familiarity
> with the Python language itself and how to use it efficiently.
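If I follow your example correctly, a complete runnable version would be something like this sketch (Python 2, matching your izip; the file name is my own invention, and I renamed id to ids so it doesn't shadow the builtin):

import csv
import itertools

ids = [1, 2, 3, 4]
names = ['Moe', 'Larry', 'Curly', 'Shemp']
hair = ['black', 'red', None, 'black']

# izip pairs the columns up lazily, yielding one row at a time, so no
# transposed copy of the data is ever built in memory.
# csv.writer turns the None into an empty field.
f = open('stooges.csv', 'wb')   # 'b' mode, as the csv docs recommend on Python 2
writer = csv.writer(f)
writer.writerows(itertools.izip(ids, names, hair))
f.close()

# Going the other way -- from the file back to columns -- looks like the
# mirror image, but the * unpacking reads every row into memory first,
# which is exactly the whole-column cost you describe:
f = open('stooges.csv', 'rb')
ids, names, hair = itertools.izip(*csv.reader(f))
f.close()
# Note: everything comes back as strings, and the None is now ''.

Does that look right?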
Maybe my problem really is lack of familiarity. As stated at the beginning, I am not a professional programmer; I am a scientist using Python at work. I have used it for years and I love it, but I surely miss many nuances. For example, I had never looked into itertools, and I am not very familiar with iterators either. itertools looks fantastic, and I'll definitely study it, but I can't help feeling it is a bit strange that someone wanting quick csv parsing/writing has to dig into such apparently unrelated tools.

> > (Btw: who is using csv to read >10**6 lines of data?)
>
> Me, and many others AFAIK. 1M lines is not so big, btw.

It's clear that I was thinking of completely different uses for CSV than most people in this thread. I use csv to export and import columns of numerical data to and from spreadsheets; that's why I found 1M lines a lot. I didn't know csv had other uses; now I see more clearly why the module is the way it is.

Thanks for your tips, I've learned quite a lot.

m.