Kenneth Zadeck <zad...@naturalbridge.com> writes:
> However, I would also say that it would be nice if it were actually
> done right.  There are (as far as I know) actually three ways of
> handling shift amounts that have been used in the past:
>
> (1) mask the shift amount by (bitsize - 1).  This is the most popular
> and is what is done if you set SHIFT_COUNT_TRUNCATED.
> (2) mask the shift amount by ((2 * bitsize) - 1).  There are a couple
> of architectures that do this, including the ppc.  However, gcc has
> never done it correctly for these platforms.
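(To make the difference concrete, here is a rough C model of the three
conventions for a 32-bit logical left shift.  It is only an
illustration, not code taken from GCC or from any particular port:

#include <stdint.h>

/* (1) Count masked by (bitsize - 1): a shift by 33 behaves like a
   shift by 1.  This is the behaviour SHIFT_COUNT_TRUNCATED describes.  */
uint32_t
shift_mask31 (uint32_t x, unsigned int count)
{
  return x << (count & 31);
}

/* (2) Count masked by ((2 * bitsize) - 1): counts 32..63 shift
   everything out and give 0; only counts of 64 and up wrap around.  */
uint32_t
shift_mask63 (uint32_t x, unsigned int count)
{
  count &= 63;
  return count < 32 ? x << count : 0;
}

/* (3) No truncation at all: any count >= 32 gives 0 for a logical
   left shift.  */
uint32_t
shift_any (uint32_t x, unsigned int count)
{
  return count < 32 ? x << count : 0;
}
)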
There's TARGET_SHIFT_TRUNCATION_MASK for the (2) cases.  It applies
only to the shift instruction patterns though, not to any random rtl
shift expression.

Part of the reason for this mess is that the things represented as rtl
shifts didn't necessarily start out as shifts.  E.g. during combine,
extractions are rewritten as shifts, optimised, then converted back.
So since SHIFT_COUNT_TRUNCATED applies to all rtl shift expressions,
it also has to apply to all bitfield operations, since bitfield
operations temporarily appear as shifts (see tm.texi).

TARGET_SHIFT_TRUNCATION_MASK deliberately limits the scope to the
shift instruction patterns because that's easier to contain.  E.g.
knowing that shift counts are truncated to a certain mask allows you
to generate more efficient doubleword shift sequences, which was why
TARGET_SHIFT_TRUNCATION_MASK was added in the first place.

> (3) assume that the shift amount can be a big number.  The only
> machine that I know of that ever did this was Knuth's MMIX, and
> AFAIK, it has never "known" silicon.  (It turns out that this is
> almost impossible to build efficiently.)  And yet, we support this
> if you do not set SHIFT_COUNT_TRUNCATED.

Well, we support it in the sense that we make no assumptions about
what happens, at least at the rtl level.  We just punt on any
out-of-range shift and leave it to be evaluated at run time.

Thanks,
Richard
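P.S. In case the doubleword point is easier to see in code, here is a
rough sketch (mine, not lifted from GCC or from a real port) of the
kind of branchless DImode shift you can build from SImode shifts once
you know that the hardware truncates shift counts to 6 bits, i.e. to
the ((2 * bitsize) - 1) mask of case (2).  slw32 and srw32 just model
that hardware behaviour in plain C:

#include <stdint.h>

/* A 32-bit shift whose count is truncated to 6 bits: counts 32..63
   shift everything out and yield 0.  */
static uint32_t
slw32 (uint32_t x, uint32_t count)
{
  count &= 63;
  return count < 32 ? x << count : 0;
}

static uint32_t
srw32 (uint32_t x, uint32_t count)
{
  count &= 63;
  return count < 32 ? x >> count : 0;
}

/* (HI:LO) << COUNT for 0 <= COUNT < 64, with no compare-and-branch
   on COUNT.  The sub-shifts whose counts go "out of range" (e.g.
   LO >> (32 - COUNT) when COUNT > 32) end up with counts in 32..63
   after the unsigned wrap-around, so they contribute 0 rather than
   junk.  That only works because the counts are known to be
   truncated to 6 bits, which is the sort of property
   TARGET_SHIFT_TRUNCATION_MASK exposes.  */
uint64_t
shl64 (uint32_t hi, uint32_t lo, uint32_t count)
{
  uint32_t new_hi = slw32 (hi, count)
                    | srw32 (lo, 32 - count)
                    | slw32 (lo, count - 32);
  uint32_t new_lo = slw32 (lo, count);
  return ((uint64_t) new_hi << 32) | new_lo;
}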