I get the following warnings when building on Mac OS X (they should show up on all platforms, though):
In file included from /Users/jens/open-source/Source/guile-2.0.12/libguile/numbers.c:9731:
/Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c: In function 'scm_to_int8':
/Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type
/Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type
In file included from /Users/jens/open-source/Source/guile-2.0.12/libguile/numbers.c:9747:
/Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c: In function 'scm_to_int16':
/Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type
/Users/jens/open-source/Source/guile-2.0.12/libguile/conv-integer.i.c:94: warning: comparison is always true due to limited range of data type

Notice that the warnings all come from line 94, which reads:

  if (n >= TYPE_MIN && n <= TYPE_MAX)

Looking at the top of the file, it says "It is only for signed types", so I look at the two include sites:

  numbers.c:9731
  numbers.c:9747

... which are int8 and int16 (signed integers), so that part looks as intended.

The variable 'n' is declared as scm_t_signed_bits, which is a scm_t_intptr, which is an intptr_t, which is just a 'long'. So my guess is that the problem must be with TYPE_MIN and TYPE_MAX. In numbers.c, line 9742, they're defined as follows:

  #define TYPE scm_t_int16
  #define TYPE_MIN SCM_T_INT16_MIN
  #define TYPE_MAX SCM_T_INT16_MAX

That looks good to me, but where are SCM_T_INT16_MIN and SCM_T_INT16_MAX defined? They seem to be in __scm.h:

  #define SCM_I_UTYPE_MAX(type)      ((type)-1)
  #define SCM_I_TYPE_MAX(type,umax)  ((type)((umax)/2))
  #define SCM_I_TYPE_MIN(type,umax)  (-((type)((umax)/2))-1)

  #define SCM_T_UINT8_MAX   SCM_I_UTYPE_MAX(scm_t_uint8)
  #define SCM_T_INT8_MIN    SCM_I_TYPE_MIN(scm_t_int8,SCM_T_UINT8_MAX)
  #define SCM_T_INT8_MAX    SCM_I_TYPE_MAX(scm_t_int8,SCM_T_UINT8_MAX)

  #define SCM_T_UINT16_MAX  SCM_I_UTYPE_MAX(scm_t_uint16)
  #define SCM_T_INT16_MIN   SCM_I_TYPE_MIN(scm_t_int16,SCM_T_UINT16_MAX)
  #define SCM_T_INT16_MAX   SCM_I_TYPE_MAX(scm_t_int16,SCM_T_UINT16_MAX)

Now, this is where things get interesting. The macros are neat, but I think they are being used incorrectly. Let's try an example, SCM_T_INT16_MIN:

  SCM_T_INT16_MIN = SCM_I_TYPE_MIN(scm_t_int16,SCM_T_UINT16_MAX)

... which expands to ...

  SCM_T_INT16_MIN = (-((scm_t_int16)((-1)/2))-1)

... which can be cleaned up ...

  SCM_T_INT16_MIN = (-(((-1)/2))-1)

A signed integer of value -1 divided by 2 is the same as bitshifting it to the right with ASR; the result will be -1.

  SCM_T_INT16_MIN = (-(((-1)))-1)
  SCM_T_INT16_MIN = (-((-1))-1)
  SCM_T_INT16_MIN = (-(-1)-1)
  SCM_T_INT16_MIN = (+1-1)
  SCM_T_INT16_MIN = (0)

Ehm... did I do something wrong? I expected the value -32768, but got 0. Wouldn't it be correct to cast to scm_t_uint16 instead of scm_t_int16 (and likewise scm_t_uint8 instead of scm_t_int8)?

Love
Jens
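
P.S. In case it helps anyone poke at this without building all of Guile: below is a minimal standalone sketch (my own, not from the Guile tree) that copies the three macros from __scm.h verbatim and prints what they evaluate to next to the <stdint.h> limits. The typedefs for scm_t_int16/scm_t_uint16 are my assumption of what scmconfig.h resolves them to (int16_t/uint16_t). Comparing its output against my hand expansion above should show quickly whether the macros or my expansion are off.

  /* Standalone sketch, not part of Guile: re-declares the __scm.h
     limit macros and prints their values next to the <stdint.h> ones.
     Assumes scm_t_int16/scm_t_uint16 are int16_t/uint16_t. */
  #include <stdio.h>
  #include <stdint.h>

  typedef int16_t  scm_t_int16;
  typedef uint16_t scm_t_uint16;

  /* Copied verbatim from __scm.h */
  #define SCM_I_UTYPE_MAX(type)      ((type)-1)
  #define SCM_I_TYPE_MAX(type,umax)  ((type)((umax)/2))
  #define SCM_I_TYPE_MIN(type,umax)  (-((type)((umax)/2))-1)

  #define SCM_T_UINT16_MAX  SCM_I_UTYPE_MAX(scm_t_uint16)
  #define SCM_T_INT16_MIN   SCM_I_TYPE_MIN(scm_t_int16,SCM_T_UINT16_MAX)
  #define SCM_T_INT16_MAX   SCM_I_TYPE_MAX(scm_t_int16,SCM_T_UINT16_MAX)

  int main (void)
  {
    /* Print what the compiler computes vs. the <stdint.h> limits. */
    printf ("SCM_T_UINT16_MAX = %u   (UINT16_MAX = %u)\n",
            (unsigned) SCM_T_UINT16_MAX, (unsigned) UINT16_MAX);
    printf ("SCM_T_INT16_MIN  = %d   (INT16_MIN  = %d)\n",
            (int) SCM_T_INT16_MIN, (int) INT16_MIN);
    printf ("SCM_T_INT16_MAX  = %d   (INT16_MAX  = %d)\n",
            (int) SCM_T_INT16_MAX, (int) INT16_MAX);
    return 0;
  }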