Alexander Belopolsky added the comment:

> 1. .. It's preferable to cast into a clock_t immediately rather than
> doing a conversion for each of the ensuing divisions.
If that's your motivation, then you should cast to double instead. However, I would leave micro-optimizations like these to the compiler. I am not aware of a standard that says clock_t must be wider than long. I agree that it is unlikely to produce wrong results, given that we are realistically talking about the 50-1000 range, but on some platforms you may see a warning.

> 2. .. Is that -1 return value documented somewhere?

Yes, see "man sysconf" on a sufficiently Unix-like system, or
http://www.opengroup.org/onlinepubs/009695399/functions/sysconf.html

> 4. You're right about the overhead, but someone (amk?) measured it and
> it's only 5% compared to the old buggy behaviour. It's still possible to
> do a million calls to os.times() from Python in a second, which is
> plenty fast enough. Clearly the speed could be improved, but it
> doesn't appear worth the complications to me.

5% is a lot, and IIRC os.times is used by some standard Python profilers, so a 5% slowdown will affect people. What I suggest is a simpler solution than your patch:

(1) Define USE_SYSTEM_HZ in config.h (this will require some autoconf hacking).

(2) Define Py_HZ as the system HZ on systems where HZ can be trusted. (Some systems already define HZ as sysconf(_SC_CLK_TCK).) Fix the definition appropriately where necessary.

(3) Replace HZ with Py_HZ in posixmodule.c.

The advantage is that systems where os.times is not broken will not be affected.

BTW, does anyone know whether the sysconf(_SC_CLK_TCK) result can change during the lifetime of a process?

_____________________________________
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1040026>
_____________________________________