On Fri, Aug 03, 2007 at 10:13:10PM +0200, Helmut Waitzmann wrote:
> I think, it's a kernel bug.
I don't know. Why should it be forbidden, for example, to create a file system that spends idle time scanning for all-zero blocks in the background and turning them into sparse blocks, thereby saving disk space?

However, I still believe testing ctime is a bad idea, and I have new arguments for this. ctime is updated in several cases where nothing that tar should care about happens. For example, ctime is updated if you chmod a file to exactly the same permissions it had before; you still want tar to proceed and succeed. Another example is creating or removing a hard link to a file. Since by default Linux and, AFAIK, most other Unix systems allow a user to create a hard link to another user's files, this even implies a DoS-style security problem: I can cause another user's tar process to fail while he is archiving his files, without having write access to those files!

So I think checking ctime is a very bad idea. I have two proposals instead. First: check mtime only. Second: compare all the elements of struct stat that are relevant to tar (user, group, access mode and so on), but not ctime, the number of blocks, or the hard link count.

> GNU tar maintainers, if you change GNU tar's behavior, then, please, do
> it by providing an option to switch this new behavior on (and perhaps, an
> option to switch it off again), not by changing its default behavior.

Can you see any particular situation where you would need the current broken behavior? I see no reason to keep backward-compatibility bugs in software.

bye,

Egmont
