Svante Signell, on Tue 01 Jul 2014 14:40:44 +0200, wrote:
> > > > $ ./test
> > > > start = 3870
> > > > end = 3910
> > > > cpu_time_used = 0.000040
> > >
> > > I get:
> > > gcc -g -Wall test_clock.c
> > > ./a.out
> > > start = 0
> > > end = 0
> > > cpu_time_used = 0.000000
> >
> > Well, yes, as I said sleep() doesn't consume CPU while sleeping, so
> > clock() would only account for the small overhead of starting the sleep,
> > which is very small. Since the granularity is 1/100th of a second on the
> > Hurd, that eventually rounds down to zero.
>
> Why are the integers start and end zero?
For the same reason: the program doesn't even need 1/100th of a second to
start, so the CPU consumption is basically zero.

> cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
>
> and on Hurd:
> start = 0
> end = 423
> cpu_time_used = 0.000423

It seems there is an inconsistency between the value returned by clock()
and CLOCKS_PER_SEC. See the implementation of clock() on the Hurd in
./sysdeps/mach/hurd/clock.c: it really counts in 1/100ths of a second.
I guess unsubmitted-clock_t_centiseconds.diff should probably also fix
CLOCKS_PER_SEC.

Samuel

Archive: https://lists.debian.org/20140701124737.gk6...@type.ens-lyon.fr