Michael Stone <[EMAIL PROTECTED]> writes:
> On Wed, Mar 15, 2006 at 09:23:08PM +0000, Roger Leigh wrote:
>> 1) The file can be unlinked by another process in between the stat and
>>    the unlink, causing the unlink to fail.  ln should be aware this
>>    can happen, and not treat this as a fatal error.
>
> Why?
Because otherwise the command fails when it should not:
  process1        process2
  stat            stat
                  unlink
  unlink                      <- fails: file already gone
                  symlink
  FAIL
>>2) Another process can create file between unlink and symlink, leading
>> to symlink creation failing. Again, ln should be aware creation
>> may fail. If the error is EEXIST, it should repeat the unlink and
>> try again.
>
> Again, why?
Because otherwise the command fails when it should not:
  process1        process2
  stat
  unlink
                  stat
                  symlink
  symlink                     <- fails: EEXIST
  FAIL
In both these cases, the -f switch tells ln to unlink and replace any
preexisting file.  If the file appears or disappears while ln is
running, that should not affect its behaviour.  As it is, ln does not
behave deterministically.
> Is there any practical purpose to changing any of this? I'm not sure I
> care too much about what happens if you're running a tight endless
> loop like that, because the outcome is going to be random (dependent
> on execute order) anyway.
This example was an artificial test case. Its only purpose is to
demonstrate the problem in an efficient manner.
This *is* a real problem, which we are seeing in the sbuild part of
buildd.  It contains (Perl):
system "/bin/ln -sf $main::pkg_logfile $conf::build_dir/current-$main::distribution";
system "/bin/ln -sf $main::pkg_logfile $conf::build_dir/current";
When buildd schedules builds concurrently, sbuild is showing the
problem intermittently.
Regards,
Roger
--
Roger Leigh
Printing on GNU/Linux? http://gutenprint.sourceforge.net/
Debian GNU/Linux http://www.debian.org/
GPG Public Key: 0x25BFB848. Please sign and encrypt your mail.