On Thu, 24 Oct 2024, Jakub Jelinek wrote:

> +	  if (TYPE_UNSIGNED (TREE_TYPE (arg2))
> +	      || sanitize_flags_p (SANITIZE_SHIFT))
> +	    arg2 = build2_loc (loc, TRUNC_MOD_EXPR, TREE_TYPE (arg2),
> +			       arg2, build_int_cst (TREE_TYPE (arg2),
> +						    prec));
> +	  else
> +	    {
> +	      /* When not sanitizing and second argument is signed,
> +		 just do the modulo in unsigned type, that results
> +		 in better generated code (for power of 2 precisions
> +		 bitwise AND).  */
> +	      tree utype = c_common_unsigned_type (TREE_TYPE (arg2));
> +	      arg2 = build2_loc (loc, TRUNC_MOD_EXPR, utype,
> +				 fold_convert (utype, arg2),
> +				 build_int_cst (utype, prec));
> +	    }
If sanitizing makes sense for these built-in functions, surely it should
check for all negative shifts, including those that are multiples of the
width (and there should be tests for it in the testsuite).  So sanitizing
would require more complicated logic to avoid reducing a negative shift
modulo the width at all.  Or, in the absence of support for sanitizing
these functions, the logic for modulo reduction shouldn't need to depend
on whether sanitizing is enabled.

> @@ -409,7 +416,9 @@ c_fully_fold_internal (tree expr, bool i
>  	    warning_at (loc, OPT_Wshift_count_overflow,
>  			(code == LSHIFT_EXPR
>  			 ? G_("left shift count >= width of type")
> -			 : G_("right shift count >= width of type")));
> +			 : code == RSHIFT_EXPR
> +			 ? G_("right shift count >= width of type")
> +			 : G_("rotate count >= width of type")));

This shouldn't be reachable, since larger rotate counts are valid and
should have been reduced modulo the width.

-- 
Joseph S. Myers
josmy...@redhat.com