Hello Kuroda-san and Takatsuka-san,

24.07.2025 03:49, TAKATSUKA Haruka wrote:
{snip}
>> Maybe you could try tools.syncTime = "0" by any chance?
> It has been already tools.syncTime = "0" so far.
> I confirmed the following GUI setting.
> ...


23.07.2025 09:15, Hayato Kuroda (Fujitsu) wrote:
> It looks like for me that we measured the execution time of the function in
> millisecond but it was "zero", right?

Yes, my understanding is the same.

>> So I think we could observe such anomalies if, say, the OS kernel can't
>> read system clock in time (stalls for a millisecond when accessing it)...
> I also feel like that. But if so, how should we fix tests? We must remove all
> stuff which assumes the time is monotonic?

From what Takatsuka-san shared about hamerkop's configuration, I still
suspect there could be some platform specifics at play. I've found another
interesting document on the subject, which describes the effects of CPU
pressure and mentions other low-level parameters, e.g.
monitor_control.virtual_rdtsc: [1].

Perhaps some experiments could be performed there to measure the maximum
timer resolution (e.g. with the simple program attached below).
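
For example, here is a minimal sketch (just an illustration, not something I
have run on hamerkop) of one more experiment: checking how fine-grained
QueryPerformanceCounter() is under the hypervisor, given that [1] also
describes how the TSC, which typically backs that counter, is virtualized:

/*
 * Sketch: print the QueryPerformanceCounter() frequency and the smallest
 * step the counter can actually be observed to advance by.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t1, t2;

    QueryPerformanceFrequency(&freq);   /* counter ticks per second */
    QueryPerformanceCounter(&t1);

    /* Spin until the counter advances, to catch its smallest visible step. */
    do {
        QueryPerformanceCounter(&t2);
    } while (t2.QuadPart == t1.QuadPart);

    printf("QPC frequency: %lld Hz, smallest step: %lld ticks (%.9f sec)\n",
           (long long) freq.QuadPart,
           (long long) (t2.QuadPart - t1.QuadPart),
           (double) (t2.QuadPart - t1.QuadPart) / (double) freq.QuadPart);
    return 0;
}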

I also observed a failure of pg_stat_statements on an ARMv7 device in the past:
--- .../contrib/pg_stat_statements/expected/entry_timestamp.out	2024-04-11 07:20:32.563588101 +0300
+++ .../contrib/pg_stat_statements/results/entry_timestamp.out	2024-04-15 11:16:00.217396694 +0300
@@ -45,7 +45,7 @@
 WHERE query LIKE '%STMTTS%';
  total | minmax_plan_zero | minmax_exec_zero | minmax_stats_since_after_ref | stats_since_after_ref
 -------+------------------+------------------+------------------------------+-----------------------
-     2 |                0 |                0 |                            0 |                     0
+     2 |                0 |                1 |                            0 |                     0
 (1 row)

with clocksource = 32k_counter, which gave me a maximum resolution of
0.030517 sec.
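
Just for reference: a number like that can be obtained on Linux with a sketch
along the following lines (this is only an illustration, not the exact
program I used back then):

/*
 * Sketch: print the resolution the kernel reports for CLOCK_MONOTONIC and
 * the smallest clock step actually observable, which depends on the
 * selected clocksource (e.g. 32k_counter vs. the architected timer).
 */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, t1, t2;

    clock_getres(CLOCK_MONOTONIC, &res);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Spin until the clock advances, to see its effective granularity. */
    do {
        clock_gettime(CLOCK_MONOTONIC, &t2);
    } while (t1.tv_sec == t2.tv_sec && t1.tv_nsec == t2.tv_nsec);

    printf("reported resolution: %ld.%09ld sec, observed step: %.9f sec\n",
           (long) res.tv_sec, res.tv_nsec,
           (t2.tv_sec - t1.tv_sec) + (t2.tv_nsec - t1.tv_nsec) / 1e9);
    return 0;
}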

So if we choose to fix the tests, it's not clear to me what lowest timer
resolution should be considered acceptable.

[1] https://www.vmware.com/docs/vmware_timekeeping

Best regards,
Alexander

/*
 * A simple program to measure the effective resolution of
 * GetSystemTimePreciseAsFileTime() on Windows.  An optional command-line
 * argument sets the number of iterations of the busy loop executed between
 * the two clock readings; running it with increasing values shows the
 * smallest interval the clock can actually resolve.
 */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define FILETIME_UNITS_PER_SEC  10000000L
#define FILETIME_UNITS_PER_USEC 10

int main(int argc, char *argv[])
{
    FILETIME ft1, ft2;
    ULARGE_INTEGER uli1, uli2;
    int r = 0;

    long n = (argc > 1) ? atol(argv[1]) : 0;

    GetSystemTimePreciseAsFileTime(&ft1);
    for (long i = 0; i < n; i++)
        r += i;                 /* busy work between the two clock readings */
    GetSystemTimePreciseAsFileTime(&ft2);

    /* Extract the microsecond parts of the two FILETIME readings. */
    uli1.LowPart = ft1.dwLowDateTime;
    uli1.HighPart = ft1.dwHighDateTime;
    long usec1 = (long) ((uli1.QuadPart % FILETIME_UNITS_PER_SEC)
                         / FILETIME_UNITS_PER_USEC);

    uli2.LowPart = ft2.dwLowDateTime;
    uli2.HighPart = ft2.dwHighDateTime;
    long usec2 = (long) ((uli2.QuadPart % FILETIME_UNITS_PER_SEC)
                         / FILETIME_UNITS_PER_USEC);

    printf("usec1: %ld, usec2: %ld, usec2 - usec1: %ld, uli2 - uli1: %ld, r: %d\n",
           usec1, usec2, usec2 - usec1,
           (long) (uli2.QuadPart - uli1.QuadPart), r);
    return 0;
}
