"Guy Lateur" <[EMAIL PROTECTED]> writes: > To be honest, I don't really understand what it means to have the same file > open for writing by several processes. You don't want to modify data which > is already being modified by someone else, do you? I mean, how do you > determine what changes to apply first, and to what version? Or is the file > just constantly being overwritten on a first-come-first-served basis?
Unix believes (well, it used to...) that programmers should be given all
the rope they feel they need. If that means they get enough rope to hang
themselves, so be it. So when two processes write to a file, the writes
land in the file in the order they occur, and a later process can well
overwrite what an earlier process wrote.

On the other hand, it doesn't have to work out that way. If the file has
a binary record structure of some kind, then it's perfectly reasonable
for two processes to update different records in the file "at the same
time". utmp and lastlog are standard Unix accounting files where this
happens every time someone logs in. Requiring each user who logs in to
wait until all the previous login processes had finished and closed the
file would needlessly delay logins on a multi-user system.

I've done this kind of thing with dbm files, but later recanted and used
a real database.

        <mike
--
Mike Meyer <[EMAIL PROTECTED]>    http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
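
[Editor's note: a minimal sketch of that utmp-style pattern in modern
Python, appended for illustration. The file name, record layout, and
helper function are all invented here; os.pwrite needs Python 3.3+ on a
Unix-like system. Two processes have the same file open for writing at
once, each overwriting a different fixed-size record.]

    import os
    import struct
    from multiprocessing import Process

    # Hypothetical fixed-size record: a uid and a login count, packed as
    # two native ints -- a stand-in for the records utmp/lastlog hold.
    RECORD_FMT = "=ii"
    RECORD_SIZE = struct.calcsize(RECORD_FMT)
    PATH = "records.dat"  # illustrative file name

    def update_record(index, uid, count):
        """Overwrite record `index` in place, leaving its neighbours alone."""
        fd = os.open(PATH, os.O_WRONLY)
        try:
            # os.pwrite writes at an absolute offset, so two processes
            # updating *different* records never clobber each other.
            os.pwrite(fd, struct.pack(RECORD_FMT, uid, count),
                      index * RECORD_SIZE)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        # Pre-size the file to hold four zeroed records.
        with open(PATH, "wb") as f:
            f.write(b"\0" * (RECORD_SIZE * 4))
        # Both workers write to the file concurrently; neither waits for
        # the other to finish and close it.
        workers = [Process(target=update_record, args=(0, 1000, 1)),
                   Process(target=update_record, args=(3, 1001, 7))]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

If two writes do land on the same record, the last writer wins -- which
is exactly the rope Unix hands you.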