Hi,

On Thu 06 Oct 2016 22:49, Jens Bauer <jens-guile-...@plustv.dk> writes:
> I get the following warnings, when building on Mac OS X.
> (It should show up for all platforms, though):
>
> In file included from /Users/jens/open-source/Source/guile-2.0.12/libguile/numbers.c:9731:
> /Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c: In function 'scm_to_int8':
> /Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type
> /Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type

These are not really bugs.  I mean, we shouldn't produce warnings, but
GCC doesn't warn on these, so clearly there is a heuristic that clang has
set differently; the actual code is fine.  In your investigations below
there are some errors; I include a couple of inline comments for your
enjoyment.

> In file included from /Users/jens/open-source/Source/guile-2.0.12/libguile/numbers.c:9747:
> /Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c: In function 'scm_to_int16':
> /Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type
> /Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type
>
> Notice that it's only from line 94, which reads...
> if (n >= TYPE_MIN && n <= TYPE_MAX)
>
> ... looking at the top of the file, it says: "It is only for signed types",
> so I look in ...
>
> numbers.c:9731
> numbers.c:9747
> ... which is int8 and int16 (signed integers); this should be as intended.
>
> The variable 'n' is declared as scm_t_signed_bits, which is a scm_t_intptr,
> which is an intptr_t, which is just a 'long'.
>
> So my guess is that the problem must be with TYPE_MIN and TYPE_MAX.
>
> In numbers.c, line 9742, they're defined as follows:
> #define TYPE scm_t_int16
> #define TYPE_MIN SCM_T_INT16_MIN
> #define TYPE_MAX SCM_T_INT16_MAX
>
> ... looks good to me, but where's the definition of SCM_T_INT16_MIN and
> SCM_T_INT16_MAX ?
> -It seems to be in __scm.h:
>
> #define SCM_I_UTYPE_MAX(type) ((type)-1)
> #define SCM_I_TYPE_MAX(type,umax) ((type)((umax)/2))
> #define SCM_I_TYPE_MIN(type,umax) (-((type)((umax)/2))-1)
>
> #define SCM_T_UINT8_MAX SCM_I_UTYPE_MAX(scm_t_uint8)
> #define SCM_T_INT8_MIN SCM_I_TYPE_MIN(scm_t_int8,SCM_T_UINT8_MAX)
> #define SCM_T_INT8_MAX SCM_I_TYPE_MAX(scm_t_int8,SCM_T_UINT8_MAX)
>
> #define SCM_T_UINT16_MAX SCM_I_UTYPE_MAX(scm_t_uint16)
> #define SCM_T_INT16_MIN SCM_I_TYPE_MIN(scm_t_int16,SCM_T_UINT16_MAX)
> #define SCM_T_INT16_MAX SCM_I_TYPE_MAX(scm_t_int16,SCM_T_UINT16_MAX)
>
> Now, this is where things get interesting. The macros are cool, but I think
> the use seems to be incorrect.
>
> Let's try an example (SCM_T_INT16_MIN):
> SCM_T_INT16_MIN = SCM_I_TYPE_MIN(scm_t_int16,SCM_T_UINT16_MAX)
> Expands to ...
> SCM_T_INT16_MIN = (-((scm_t_int16)((-1)/2))-1)

SCM_T_UINT16_MAX expands to ((scm_t_uint16)-1), which evaluates to the
uint16_t value 0xffff.  (These intermediate expansions have a type in
addition to a value.)  SCM_T_INT16_MIN is therefore -(0xffff/2)-1, which
is (int16_t) -0x8000, i.e. -32768.
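If you want to convince yourself, here is a quick standalone check.  It
uses <stdint.h> typedefs in place of Guile's scm_t_* types, so treat it
as a sketch of what the real headers do rather than the headers
themselves:

/* Standalone check of the __scm.h limit macros, with <stdint.h>
   typedefs standing in for Guile's scm_t_* types.  */
#include <stdio.h>
#include <stdint.h>

typedef int16_t  scm_t_int16;
typedef uint16_t scm_t_uint16;

#define SCM_I_UTYPE_MAX(type) ((type)-1)
#define SCM_I_TYPE_MAX(type,umax) ((type)((umax)/2))
#define SCM_I_TYPE_MIN(type,umax) (-((type)((umax)/2))-1)

#define SCM_T_UINT16_MAX SCM_I_UTYPE_MAX(scm_t_uint16)
#define SCM_T_INT16_MIN SCM_I_TYPE_MIN(scm_t_int16,SCM_T_UINT16_MAX)
#define SCM_T_INT16_MAX SCM_I_TYPE_MAX(scm_t_int16,SCM_T_UINT16_MAX)

int
main (void)
{
  /* (scm_t_uint16)-1 is the unsigned value 0xffff, not the int value -1,
     so the division in the macros operates on 65535.  */
  printf ("SCM_T_UINT16_MAX = %u\n", (unsigned) SCM_T_UINT16_MAX);
  printf ("SCM_T_INT16_MIN  = %d\n", (int) SCM_T_INT16_MIN);
  printf ("SCM_T_INT16_MAX  = %d\n", (int) SCM_T_INT16_MAX);
  return 0;
}

This should print 65535, -32768 and 32767; the latter two are exactly the
TYPE_MIN and TYPE_MAX that the check at conv-integer.i.c:94 uses for
scm_to_int16.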
> ... which can be cleaned up ...
>
> SCM_T_INT16_MIN = (-(((-1)/2))-1)
>
> A signed integer of value -1 divided by 2, is the same as bitshifting to
> the right by using ASR; the result will be -1.
>
> SCM_T_INT16_MIN = (-(((-1)))-1)
> SCM_T_INT16_MIN = (-((-1))-1)
> SCM_T_INT16_MIN = (-(-1)-1)
> SCM_T_INT16_MIN = (+1-1)
> SCM_T_INT16_MIN = (0)
>
> ... Ehm ... Did I do something wrong ?
> I expected the value -32768, but got 0.
>
> Wouldn't it be correct to typecast as scm_t_uint16 instead of scm_t_int16
> (and thus scm_t_uint8 instead of scm_t_int8) ?

Happy hacking,

Andy
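P.S.  If it helps to pinpoint where the by-hand expansion above diverges,
here is a hypothetical side-by-side of the INT16_MIN arithmetic with and
without the unsigned cast, again with <stdint.h> types standing in for
the scm_t_* typedefs:

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  /* With the cast: -(0xffff/2)-1, i.e. -(32767)-1.  */
  int with_cast = -((int16_t) (((uint16_t) -1) / 2)) - 1;
  /* Without the cast: -((-1)/2)-1; C integer division truncates toward
     zero, so (-1)/2 is 0 and the result is -1.  */
  int without_cast = -((-1) / 2) - 1;

  printf ("with the cast:    %d\n", with_cast);     /* -32768 */
  printf ("without the cast: %d\n", without_cast);  /* -1 */
  return 0;
}

(The quoted derivation got 0 instead of -1 because it assumed (-1)/2 is
-1, as an arithmetic shift would give; C truncates toward zero instead.
Either way, the expansion only reaches -32768 when the (scm_t_uint16)
cast is kept.)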