On Nov 2, 11:50 pm, Terry Reedy <tjre...@udel.edu> wrote:
> On 11/2/2011 7:06 PM, Dennis Lee Bieber wrote:
>
> > On Wed, 2 Nov 2011 14:13:34 -0700 (PDT), Matt<macma...@gmail.com>
> > declaimed the following in gmane.comp.python.general:
>
> >> I have a few hundred .csv files, and to each file, I want to
> >> manipulate the data, then save back to the original file.
>
> That is dangerous. Better to replace the file with a new one of the same
> name.
>
> > Option 1: Read the file completely into memory (your example is
> > reading line by line); close the reader and its file; reopen the
> > file for "wb" (delete, create new); open CSV writer on that file;
> > write the memory contents.
>
> and lose data if your system crashes or freezes during the write.
>
> > Option 2: Open a temporary file "wb"; open a CSV writer on the file;
> > for each line from the reader, update the data, send to the writer;
> > at end of reader, close reader and file; delete original file;
> > rename temporary file to the original name.
>
> This works best if new file is given a name related to the original
> name, in case rename fails. Alternative is to rename original x to
> x.bak, write or rename new file, then delete .bak file.
>
> --
> Terry Jan Reedy
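For concreteness, Dennis's Option 2 (plus Terry's .bak variation) might
look roughly like this - just a sketch, assuming Python 2.7, with a
hypothetical update_row() callable standing in for whatever per-row
manipulation is actually wanted:

import csv
import os

def rewrite_csv(path, update_row):
    # update_row is a placeholder: it takes a list of fields and
    # returns the manipulated list.
    tmp_path = path + '.tmp'
    with open(path, 'rb') as src, open(tmp_path, 'wb') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow(update_row(row))
    # Terry's dance: keep the original around as x.bak until the new
    # file is in place, then drop the backup.
    os.rename(path, path + '.bak')
    os.rename(tmp_path, path)
    os.remove(path + '.bak')

The original file is never itself opened for writing, so a crash
mid-run can't destroy data - at worst you're left with a .bak or .tmp
file to recover from.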
To the OP, I agree with Terry - hence the sketch above - but will add
my 2p. What is this meant to achieve?

>>> row = range(10)
>>> print ">", row[0], row[4], "\n", row[1], "\n", ">", row[2], "\n", row[3]
> 0 4
1
> 2
3

Is something meant to read this afterwards?

I'd personally create a subdir called db, create a sqlite3 db in it, and
load all the required fields into it (with an extra column for the
filename). That load will either work or fail as a whole; only if it
succeeds would I start overwriting the originals. A plain "select * from
some_table" will do, using itertools.groupby on the filename column to
split the rows back into their files, changing the open() request to
write mode, etc.

just my 2p mind you,

Jon.
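P.S. A rough sketch of the sqlite3 staging idea, in case it helps - the
db path, table name and three-column layout are all made up for
illustration:

import csv
import glob
import itertools
import os
import sqlite3

if not os.path.isdir('db'):
    os.mkdir('db')
conn = sqlite3.connect('db/staging.sqlite')
conn.execute('CREATE TABLE data (filename TEXT, f1 TEXT, f2 TEXT, f3 TEXT)')

# Stage every csv, tagging each row with the file it came from.
for name in glob.glob('*.csv'):
    with open(name, 'rb') as f:
        for row in csv.reader(f):
            conn.execute('INSERT INTO data VALUES (?, ?, ?, ?)',
                         [name] + row)
conn.commit()

# Only once everything is staged does any original get overwritten.
# groupby needs its input sorted on the key, hence the ORDER BY.
rows = conn.execute('SELECT * FROM data ORDER BY filename')
for name, group in itertools.groupby(rows, key=lambda r: r[0]):
    with open(name, 'wb') as f:  # the open() now in write mode
        csv.writer(f).writerows(row[1:] for row in group)

The actual manipulation can then happen on the way in, on the way out,
or as plain UPDATE statements in between.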