On Tue, Mar 26, 2013 at 6:55 PM, Frederic Riss <frederic.r...@gmail.com> wrote:
> While working on having divisions by constants optimized in my GCC
> port, I realized that whatever *muldi3_highpart pattern my backend
> provides, it would never be used because of the bounds checks that
> expmed.c does on the cost arrays. For example:
>
>    choose_multiplier (abs_d, size, size - 1,
>       &mlr, &post_shift, &lgup);
>    ml = (unsigned HOST_WIDE_INT) INTVAL (mlr);
>    if (ml < (unsigned HOST_WIDE_INT) 1 << (size - 1))
>      {
>         rtx t1, t2, t3;
>
> =>    if (post_shift >= BITS_PER_WORD
> =>        || size - 1 >= BITS_PER_WORD)
>           goto fail1;
>
>        extra_cost = (shift_cost[speed][compute_mode][post_shift]
>              + shift_cost[speed][compute_mode][size - 1]
>              + add_cost[speed][compute_mode]);
>
> According to the commit log where these checks were added, they only
> serve to avoid overflowing the cost arrays below. Even though a backend
> is fully capable of DImode shifts and multiplies, they won't be
> considered because of this check. The cost arrays are filled up to
> MAX_BITS_PER_WORD, so as a temporary workaround I have defined
> MAX_BITS_PER_WORD to 64, and I have softened the checks to fail only
> above MAX_BITS_PER_WORD. This allows my 32-bit backend to specify that
> it wants these optimizations to take place for 64-bit arithmetic.
>
> What do people think about this approach? Does it make sense?

Another approach would be to simply use the cost of a BITS_PER_WORD
shift for bigger shifts.  Adjusting MAX_BITS_PER_WORD sounds like a hack
to me.

Note that on trunk I see the cost arrays are now inline functions, so things
may have changed for the better already.

Richard.

> Many thanks,
> Fred
