I disagree with the measurements here.  The granularity default should
not be based on the popularity of a file system or on how Ant happens
to be used today.  Ant must work in a predictable, reliable manner, and
maintain consistent behavior from release to release.  As Ant tries to
get intelligent and make assumptions on its own, it stops being useful
as a build tool.

I don't know the back-story to granularity, nor do I have a file
system where I can observe the need for granularity with my own hands.
But consider the failure modes.  I have a Windows 2000 system where
one file is measurably younger than a second file.  A DIR shows it,
Java shows it, and pre-1.6.2 Ant shows it.  Yet Ant now fails to
perform the copy.

Imagine the case where you actually need granularity.  There, I
presume, a DIR and Java would show that the two files had the same
timestamp, or there would be an obvious discrepancy.  I could easily
observe the failure of a copy, determine its cause, demonstrate it to
my staff, and come up with a work-around using existing Ant tasks and
targets, as sketched below.
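
For instance, a minimal sketch of such a work-around, using the
granularity and overwrite attributes that <copy> already accepts (the
target and file names here are illustrative):

  <!-- Work-around: compare timestamps with 0 ms of slack, so a file
       that DIR and Java agree is newer always gets copied.  Setting
       overwrite="true" would be the blunter alternative: copy
       unconditionally, ignoring timestamps altogether. -->
  <target name="copy-exact">
    <copy file="src/app.properties" todir="build" granularity="0"/>
  </target>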

Ant now guesses granularity by setting a default of 2000 ms whenever
the path separator contains a semicolon and the OS is not NetWare.
This is exactly the kind of hack that belongs inside the build.xml
file (with a bunch of conditions, targets, and if/unless attributes;
see the sketch below).  The test should not be hard-coded in Ant's
Java source just because lots of people happen to use FAT.
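
If a FAT heuristic is genuinely wanted, the same test could live in
the build file where everyone can see it.  A sketch, assuming the
standard <condition>, <contains>, and <os> elements (the property and
file names are illustrative):

  <!-- Reproduce Ant's hidden test in the open: allow 2000 ms of slack
       only when the path separator contains a semicolon and the OS is
       not NetWare.  Because <property> is immutable, the second
       assignment acts as the 0 ms default on every other platform. -->
  <condition property="fs.granularity" value="2000">
    <and>
      <contains string="${path.separator}" substring=";"/>
      <not><os family="netware"/></not>
    </and>
  </condition>
  <property name="fs.granularity" value="0"/>

  <copy file="src/app.properties" todir="build"
        granularity="${fs.granularity}"/>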

I also think you are too focused on my example.  While it may be
unusual, the degree of "unusualness" should not be a relevant metric
here.  And we can certainly come up with more usual examples, if you
like.

Bruce Atherton wrote:
>Xxx Yyy wrote:
>
>>Thanks for your consideration.  I think you are testing the wrong thing
>>-- an OS test is not a substitute for an FS test.
>>
>That is true, but there is no FS test. This code is not trying to be an
>FS test, it is trying to be smart about guessing a default value using
>whatever information is available. It is a heuristic.
>
>Now, is this the right guess to make? Are we correct more than 50% of
>the time? Nowadays, probably not. But more importantly, do we "do no
>harm" more frequently with this default? I'd guess probably so, since it
>is unusual to do copies from and to the same file within two seconds.
>
>So absent some file system test you can point us to, someone is going to
>bear the burden of specifying granularity. By going with the default we
>have, I think that burden is borne by the fewest number of people. It is
>just unfortunate that in this instance, you are among that fewest number.
>
>>  This whole thing has a
>>"bad smell" to it because the granularity is worming its way into Unix
>>special cases, per your note, and Ant users now need to compensate for
>>the cases at the build-file level (with flags such as overwrite and
>>granularity for each and every <copy>; and I think this also appears in
>>other tasks such as <zip>).  I wish that an alternative approach had been
>>taken to granularity (e.g. explicit granularity file-system override
>>pragmas at the beginning of the build-file, or some such).
>>
>Could you not use <presetdef>?
>
>Granularity is not on <zip>, since ZIP files have a 2 second granularity
>themselves that is part of the file format. You are probably thinking of
>the roundup attribute.
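
For what it is worth, the <presetdef> suggestion above would
presumably look something like this (the task and file names are
illustrative):

  <!-- Pin the granularity once, then use <exact.copy> everywhere
       instead of repeating the attribute on each <copy>. -->
  <presetdef name="exact.copy">
    <copy granularity="0"/>
  </presetdef>

  <exact.copy file="src/app.properties" todir="build"/>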
