Hi all,

here is a straightforward patch for the intrinsic procedure SYSTEM_CLOCK. It does two things:

1) It reduces the resolution of the int8 version from 1 nanosecond to 1 microsecond (COUNT_RATE = 1000000).
2) It adds an int16 version with nanosecond precision.
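For illustration, here is a minimal sketch (not the attached test case, which may look different) of how the kind-dependent COUNT_RATE could be exercised after the patch. The integer(16) part assumes a target with 128-bit integer support; nanosecond resolution additionally needs -lrt.

! Sketch only: prints COUNT_RATE for the different integer kinds.
! Assumes the patched behavior described above; the attached
! system_clock_1.f90 may differ.
program sc_kinds
  implicit none
  integer(4)  :: c4,  rate4,  max4
  integer(8)  :: c8,  rate8,  max8
  integer(16) :: c16, rate16, max16   ! requires 128-bit integer support

  call system_clock(c4,  rate4,  max4)
  call system_clock(c8,  rate8,  max8)
  call system_clock(c16, rate16, max16)

  print *, 'int4  COUNT_RATE:', rate4    ! unchanged by this patch
  print *, 'int8  COUNT_RATE:', rate8    ! 1000000 (microseconds)
  print *, 'int16 COUNT_RATE:', rate16   ! 1000000000 (nanoseconds)
end program sc_kinds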
The motivation for item #1 is mainly that the actual precision is usually not better than 1 microsecond anyway (unless linking with -lrt), so SYSTEM_CLOCK returns values whose last three digits are always zero. One can argue that this is not a dramatic misbehavior, but it has disadvantages for certain applications, e.g. using SYSTEM_CLOCK to initialize the random seed of a Monte Carlo simulation. In general, I would say that COUNT_RATE should not be larger than the actual precision of the clock used. Moreover, the microsecond resolution for int8 arguments has the advantage of being compatible with ifort's behavior, and I think a resolution of 1 microsecond is sufficient for most applications. Anyone who really needs more can now use the int16 version (and link with -lrt).

Regtested on x86_64-unknown-linux-gnu (although we don't actually seem to have any test cases for SYSTEM_CLOCK yet). Ok for trunk?

Btw, does it make sense to also add an int2 version? If yes, with which resolution? Note that most other compilers seem to have an int2 version of SYSTEM_CLOCK ...

Cheers,
Janus


2012-12-01  Janus Weil  <ja...@gcc.gnu.org>

	PR fortran/55548
	* gfortran.map (GFORTRAN_1.5): Add _gfortran_system_clock_16.
	* intrinsics/system_clock.c (system_clock_8): Change resolution
	to one microsecond.
	(system_clock_16): New function (with nanosecond resolution).

2012-12-01  Janus Weil  <ja...@gcc.gnu.org>

	PR fortran/55548
	* intrinsic.texi (SYSTEM_CLOCK): Update documentation of
	SYSTEM_CLOCK.

2012-12-01  Janus Weil  <ja...@gcc.gnu.org>

	PR fortran/55548
	* gfortran.dg/system_clock_1.f90: New test case.
Attachments: pr55548.diff, system_clock_1.f90