George Bosilca <bosi...@icl.utk.edu> writes:

> MPI_Wtick is not about the precision but about the resolution of the
> underlying timer (aka. the best you can hope to get).
What's the distinction here?  (clock_getres(2) says "resolution
(precision)".)  My point (like JH's?) is that it doesn't generally
return the interval between ticks, so it doesn't seem either very
useful or to obey the spec -- whether or not the spec is reasonable or
the clock has reasonable resolution.  For instance, on a (particular?)
sandybridge system, the tick interval of CLOCK_MONOTONIC is
experimentally ~30 ns, not the 1 ns that clock_getres reports;
clock_getres itself isn't accurate.  On a particular core2 system, the
interval appears to be an order of magnitude bigger, but the clock
does tick at least once per call in that case.

> Thus, the measured
> time will certainly be larger, but, and this is almost a certainty, it will
> hardly be smaller. As a result, I am doubtful that an MPI implementation
> will provide an MPI_Wtime with a practical resolution smaller that whatever
> the corresponding MPI_Wtick returns.

I don't think the lack of a lower bound on the resolution is an issue,
but isn't that the case anyway with the non-Linux high-resolution
timers OMPI uses now?  My technique when gettimeofday turns up as the
timer is to replace it on Linux with clock_gettime via an LD_PRELOAD
shim, which seems legitimate.

Not understanding this can definitely lead to bogus results on the
happy occasions when users and others are actually prepared to make
measurements, and, despite the general practice, measurements without
good error estimates are pretty meaningless.  (No apologies for an
experimental physics background!)  The example which got me looking at
current OMPI was link latency tests which suggested there was
something badly wrong with the fabric.
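
For concreteness, this is the kind of check I mean -- a rough sketch
rather than anything from a real benchmark, and the file and program
names are made up.  It spins on the timer, records the smallest
nonzero increment it actually observes, and compares that with what
MPI_Wtick() and clock_getres() claim:

/* wtick_granularity.c -- spin on the timer and record the smallest
   nonzero increment actually observed, for comparison with what
   MPI_Wtick() and clock_getres() claim.
   Rough build line:  mpicc -O2 wtick_granularity.c -o wtick_granularity
   (add -lrt for clock_gettime on older glibc) */
#include <mpi.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

/* Smallest nonzero step seen across back-to-back MPI_Wtime() calls. */
static double observed_wtime_step(void)
{
    double min = 1.0;
    double prev = MPI_Wtime();
    int i;

    for (i = 0; i < ITERS; i++) {
        double now = MPI_Wtime();
        if (now > prev && now - prev < min)
            min = now - prev;
        prev = now;
    }
    return min;
}

/* The same for clock_gettime(CLOCK_MONOTONIC); result in seconds. */
static double observed_monotonic_step(void)
{
    struct timespec prev, now;
    double min = 1.0;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (i = 0; i < ITERS; i++) {
        double d;
        clock_gettime(CLOCK_MONOTONIC, &now);
        d = (now.tv_sec - prev.tv_sec) + (now.tv_nsec - prev.tv_nsec) * 1e-9;
        if (d > 0.0 && d < min)
            min = d;
        prev = now;
    }
    return min;
}

int main(int argc, char **argv)
{
    struct timespec res;

    MPI_Init(&argc, &argv);
    clock_getres(CLOCK_MONOTONIC, &res);

    printf("MPI_Wtick():                   %.3e s\n", MPI_Wtick());
    printf("observed MPI_Wtime() step:     %.3e s\n", observed_wtime_step());
    printf("clock_getres(CLOCK_MONOTONIC): %.3e s\n",
           res.tv_sec + res.tv_nsec * 1e-9);
    printf("observed CLOCK_MONOTONIC step: %.3e s\n",
           observed_monotonic_step());

    MPI_Finalize();
    return 0;
}

The ~30 ns figure above is the observed CLOCK_MONOTONIC step from that
sort of loop; if MPI_Wtick is meant to be the number of seconds
between successive clock ticks, the observed MPI_Wtime step is roughly
what it should be reporting.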
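
The LD_PRELOAD replacement is nothing more elaborate than the
following sort of shim (a sketch of the technique, not my actual one;
the names are invented).  It only catches calls that reach libc's
gettimeofday symbol -- statically linked binaries, or code hitting the
vDSO or raw syscall directly, won't be affected -- and the timeval
interface still limits you to microseconds; the gain is mainly that
CLOCK_MONOTONIC isn't stepped by NTP or settimeofday in the middle of
a measurement.  Since the monotonic clock has an arbitrary epoch,
anything using gettimeofday for real dates will see nonsense, so it's
strictly for benchmarking runs:

/* gtod_preload.c -- sketch of a gettimeofday interposer */
#define _POSIX_C_SOURCE 200809L   /* keep glibc's gettimeofday prototype
                                     (void * second argument) compatible
                                     with the definition below */
#include <sys/time.h>
#include <time.h>

/* Redirect gettimeofday() to the monotonic clock, truncated to the
   microsecond resolution of struct timeval. */
int gettimeofday(struct timeval *tv, void *tz)
{
    struct timespec ts;

    (void) tz;                    /* obsolete timezone argument, ignored */
    if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
        return -1;

    tv->tv_sec  = ts.tv_sec;
    tv->tv_usec = ts.tv_nsec / 1000;
    return 0;
}

Used along these lines (again, names made up):

  gcc -O2 -fPIC -shared -o libgtod.so gtod_preload.c    # -lrt on older glibc
  LD_PRELOAD=$PWD/libgtod.so mpirun ... ./latency_test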