On 9 May 2007, at 16:30, Joseph M Gwinn wrote:
> In fact, POSIX "Seconds Since the Epoch" is effectively TAI minus an
> unspecified offset because POSIX counts ~SI seconds regardless of
> astronomy and thus leap anything.
I think the specs ignore the issue, so it is only accurate to within a
couple of tens of seconds. I figure a typical system just ignores the
leap seconds from the epoch, and adjusts its internal clock on the first
lookup after the time server has changed. It is these jumps in the
internal clock that may pose a problem: it is hard to tell which
computers have adjusted and which have not.
If one followed the suggestion of using a TAI-JD the way I did, one
would end up with a system that ignores leap seconds in the internal
second count from the epoch. That means one must have time servers
that do not introduce a jump when a leap second appears. Instead, when
a human-readable time stamp is needed, one makes a lookup in a file
that adjusts the time accordingly.
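Such a lookup could be sketched roughly as below. This is only a minimal Python illustration of the idea; the leap-second table entries, the epoch, and the function name are my own assumptions for the example, not values from this thread or from any standard:

```python
import datetime

# Sketch: label an internal, leap-free (TAI-like) second count with a
# human-readable UTC time by subtracting the leap seconds accumulated
# since the epoch. The table is a hypothetical excerpt, illustrative only:
# (count at which a leap second took effect, TAI-UTC offset after it).
LEAP_TABLE = [
    (0, 10),          # offset in force at the chosen epoch
    (78796810, 11),   # hypothetical leap-second boundary
    (94694411, 12),   # another hypothetical boundary
]

def tai_count_to_utc(count, epoch=datetime.datetime(1972, 1, 1)):
    """Convert a leap-free second count since `epoch` to a UTC label."""
    offset = LEAP_TABLE[0][1]
    for boundary, off in LEAP_TABLE:   # table is sorted by boundary
        if count >= boundary:
            offset = off
    # Subtract only the leap seconds inserted after the epoch.
    leaps_since_epoch = offset - LEAP_TABLE[0][1]
    return epoch + datetime.timedelta(seconds=count - leaps_since_epoch)
```

The internal count itself never jumps; only the file-driven labelling changes when a new leap second is announced.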
> (The fact that ordinary computer clock hardware isn't nearly as
> accurate as that collection of caesium beam clocks is neither here
> nor there - it's the semantics of the timescale, not its accuracy,
> that counts here.)
Right. The idea is not to impose a system on current hardware, but to
allow future hardware with more accurate clocks to adjust as needed.
This need not apply only to file time stamps, but also to, say,
distributed ronoting, or something.
> POSIX time cannot actually be TAI because not all POSIX systems have
> access to (or need for) time that's accurate with respect to any
> external timescale. Think isolated networks with no access to the sky.
A completely isolated system only needs adjustment of its one and only
clock. But a distributed setting, be it over the network or by radio
broadcasts, needs access to time servers which do not introduce a leap
second into the count.
> As for the choice of the one true clock, the original and still a core
> reason for POSIX to care about time is to support causal ordering of
> file updates by comparison of timestamps.
>
> The granularity issue has always been with us. While it is known that
> no finite-resolution timestamp scheme can ensure causal order, the
> alternative (a central guaranteed-sequence hardware utility) is
> usually impractical, so people have always used timestamps. (IBM
> sells such a utility box for use in their transaction systems.) What
> one can most easily do is to require much better timestamp resolution
> as technology progresses, thus reducing the window of non-causality
> in such things as make.
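The window of non-causality can be made concrete with a small sketch. The function names and timestamps below are illustrative, not taken from any real make implementation:

```python
import math

# Sketch of make-style causal ordering via timestamp comparison, and of
# the window of non-causality that coarse granularity opens.

def out_of_date(target_ts, source_ts):
    """A target needs rebuilding if a source is at least as new."""
    return source_ts >= target_ts

def truncate(ts, resolution):
    """Reduce a timestamp to a coarser granularity (e.g. whole seconds)."""
    return math.floor(ts / resolution) * resolution

src = 100.6   # source file written first...
tgt = 100.9   # ...then the target built from it, 0.3 s later

# At full resolution the causal order is recoverable: target is up to date.
assert not out_of_date(tgt, src)

# Truncated to 1-second granularity, both events collapse to t = 100 and
# the comparison can no longer establish the order: a spurious rebuild.
assert out_of_date(truncate(tgt, 1), truncate(src, 1))
```

Finer resolution shrinks the window in which two updates collapse like this, but no finite resolution eliminates it.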
Different systems will require different granularities. The interesting
thing is that a high-performance system distributed around planet
Earth might in principle have an accuracy of 10^-7 seconds.
Hans Aberg
_______________________________________________
Bug-make mailing list
Bug-make@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-make