Hallvard B Furuseth <h.b.furus...@usit.uio.no> added the comment:

Mark Dickinson writes:
> Thanks for the report; I agree that there's a potential issue here, and
> I also think that all these definitions *should* be preprocessor
> defines.
Indeed, my suggestion to castify everything for uniformity was silly:
my own PY_TIMEOUT_MAX fix illustrates why that won't promote
portability.  It breaks code that #ifdefs between using LONG_MAX and
PY_LLONG_MAX.

> (Idle question: does C99 require that LONG_MAX and friends are
> usable in the preprocessor? ...)

Yes: 5.2.4.2.1p1, Sizes of integer types <limits.h>.

> Can you suggest a suitable fix for the PY_ULLONG_MAX and PY_LLONG_MAX
> defines? (...)

As far as I can tell, PC/pyconfig.h already solves it for Windows.
For pyport.h, since you do #define SIZEOF_LONG_LONG:

#define PY_LLONG_MAX \
    (1 + 2 * ((Py_LL(1) << (CHAR_BIT*SIZEOF_LONG_LONG-2)) - 1))
#define PY_ULLONG_MAX (PY_LLONG_MAX * 2ULL + 1)

(A standalone check of this formula against <limits.h> is sketched at
the end of this message.)

You could check PY_ULLONG_MAX with a compile-time assertion if you want:

#ifndef __cplusplus /* this requires different magic in C++ */
/* Compile-time assertion -- max one per post-preprocessed line */
#define Py_static_assert(expr) Py_static_assert1_(expr, __LINE__)
#define Py_static_assert1_(expr, line) Py_static_assert2_(expr, line)
#define Py_static_assert2_(expr, line) struct Py_static_assert##line { \
    int Assert1_[(expr) ? 9 : -9]; int Assert2_: (expr) ? 9 : -9; }
Py_static_assert(PY_ULLONG_MAX == (unsigned long long)-1);
#endif /* __cplusplus */

> BTW, do you know of any modern non-Windows platforms that don't define
> LLONG_MIN and LLONG_MAX? It may well be that the "two's complement"
> fallback hasn't been exercised in recent years.

Anything compiled in strict ANSI pre-C99 mode, e.g. gcc -ansi, which you
do have a workaround for.  But gcc isn't the only compiler that has been
slow to move to C99.  And unfortunately, even if Python itself is built
without a compiler's equivalent of -ansi, a user embedding Python might
be compiling with it.  Beyond that: no, I know of none, but then I don't
know many platforms anyway.

>> Incidentally, the "two's complement" comment is wrong.
>> It also relies on unsigned long long being the widest type with no
>> padding bits, and -LLONG_MAX-1 not being a trap representation.
>
> Agreed---that comment needs to be better. I think it's fine, though,
> for practical purposes to assume an absence of padding bits and no trap
> representation; IIRC there are places internally (e.g., in the bitwise
> operators section of the 'int' type implementation) that already assume
> two's complement + no padding bits + no trap representation.

I expect so, yes.  It's easy to find breakage with non-two's-complement
machines: just grep the C code for '~'.  I just get peeved when people
get this wrong and then document and promote the errors. :)
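P.S.  Here is the promised standalone check, for anyone who wants to try
the shift-based formula locally.  It is only an illustrative sketch, not
part of any patch: it assumes a C99 <limits.h> with LLONG_MAX/ULLONG_MAX,
substitutes plain 1LL for Py_LL(1) and CHAR_BIT * sizeof(long long) for
CHAR_BIT * SIZEOF_LONG_LONG, and the DEMO_* names are made up for the
demo, not anything in Python:

/* Illustrative check only: compare the shift-based expressions with
 * <limits.h> on a C99 compiler.  Assumes long long has no padding
 * bits, as discussed above. */
#include <limits.h>
#include <stdio.h>

#define DEMO_LLONG_MAX \
    (1 + 2 * ((1LL << (CHAR_BIT * (int)sizeof(long long) - 2)) - 1))
#define DEMO_ULLONG_MAX (DEMO_LLONG_MAX * 2ULL + 1)

int main(void)
{
    /* Each pair should print the same value. */
    printf("formula    LLONG_MAX  = %lld\n", (long long)DEMO_LLONG_MAX);
    printf("<limits.h> LLONG_MAX  = %lld\n", LLONG_MAX);
    printf("formula    ULLONG_MAX = %llu\n", (unsigned long long)DEMO_ULLONG_MAX);
    printf("<limits.h> ULLONG_MAX = %llu\n", ULLONG_MAX);
    return 0;
}

The sizeof substitution only works here because the demo evaluates the
expression in ordinary code; the point of spelling it with CHAR_BIT and
SIZEOF_LONG_LONG in pyport.h is that sizeof is not usable in #if
directives, while those two are.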