On 1/11/22 02:01, Richard Biener wrote:
On Tue, Jan 11, 2022 at 12:28 AM Andrew MacLeod via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
This test case demonstrates an unnoticed exponential situation in range-ops.

We end up unrolling the loop, and the pattern of code creates a set of
cascading multiplies which we can evaluate precisely with sub-ranges.

For instance, we calculated:

_38 = int [8192, 8192][24576, 24576][40960, 40960][57344, 57344]

so _38 has 4 sub-ranges, and then we calculate:

_39 = _38 * _38;

we do 16 sub-range multiplications and end up with:

int [67108864, 67108864][201326592, 201326592][335544320, 335544320]
    [469762048, 469762048][603979776, 603979776][1006632960, 1006632960]
    [1409286144, 1409286144][1677721600, 1677721600][+INF, +INF]

This feeds other multiplies (_39 * _39), and the number of sub-ranges
in subsequent operations rapidly blows up.
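
For anyone who wants to see it concretely, here is a small standalone
program -- just an illustration, not GCC's irange code -- that
reproduces the _39 result above by multiplying every pair of singleton
sub-ranges of _38 and unioning the products, with anything past the
int maximum collapsing to [+INF, +INF]:

  #include <cstdint>
  #include <iostream>
  #include <set>

  int
  main ()
  {
    const int64_t vals[] = { 8192, 24576, 40960, 57344 };
    std::set<int64_t> products;   // std::set gives us the union for free
    bool overflow = false;

    for (int64_t a : vals)
      for (int64_t b : vals)      // 4 x 4 = 16 sub-range multiplies
        {
          int64_t p = a * b;
          if (p > INT32_MAX)      // two of the products overflow int
            overflow = true;
          else
            products.insert (p);
        }

    for (int64_t p : products)
      std::cout << "[" << p << ", " << p << "]";
    if (overflow)
      std::cout << "[+INF, +INF]";
    std::cout << "\n";
  }

The 16 products collapse to 8 distinct finite values plus the
overflow, which matches the 9 sub-ranges shown above.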

Folding of sub-ranges is an O(n*m) process: we perform the operation
on each pair of sub-ranges and union the results.  Values like
_38 * _38 that keep feeding each other quickly become exponential.

Combining that with the union operation (which is linear in the number
of sub-ranges) at each step of the way adds a quadratic factor on top
of the exponential growth.
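
To make that cost model concrete, the fold has roughly this shape.
The sub_range type and helper names below are hypothetical stand-ins,
heavily simplified from what range-ops actually does:

  #include <algorithm>
  #include <vector>

  // Hypothetical stand-in -- not the real irange representation.
  struct sub_range { long lb, ub; };

  // Multiply one pair of sub-ranges (overflow ignored for brevity).
  static sub_range
  mult_pair (const sub_range &l, const sub_range &r)
  {
    long c[4] = { l.lb * r.lb, l.lb * r.ub, l.ub * r.lb, l.ub * r.ub };
    return { *std::min_element (c, c + 4), *std::max_element (c, c + 4) };
  }

  // Union R into RES, coalescing overlapping or adjacent sub-ranges.
  // This step is linear in the current number of sub-ranges.
  static void
  union_into (std::vector<sub_range> &res, sub_range r)
  {
    std::vector<sub_range> keep;
    for (const sub_range &s : res)
      if (s.ub + 1 < r.lb || r.ub + 1 < s.lb)
        keep.push_back (s);             // disjoint: keep unchanged
      else
        {
          r.lb = std::min (r.lb, s.lb); // overlaps/adjacent: absorb
          r.ub = std::max (r.ub, s.ub);
        }
    keep.push_back (r);
    std::sort (keep.begin (), keep.end (),
               [] (const sub_range &a, const sub_range &b)
               { return a.lb < b.lb; });
    res = keep;
  }

  // The O(n*m) part: every LH sub-range against every RH sub-range,
  // so the result can hold up to n*m sub-ranges before coalescing.
  std::vector<sub_range>
  fold_mult (const std::vector<sub_range> &lh,
             const std::vector<sub_range> &rh)
  {
    std::vector<sub_range> result;
    for (const sub_range &l : lh)
      for (const sub_range &r : rh)
        union_into (result, mult_pair (l, r));
    return result;
  }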

This patch adjusts the wi_fold routine to recognize when the
calculation is moving in an exponential direction and simply produce a
summary result instead of a precise one.  The attached patch does this
if (#LH sub-ranges * #RH sub-ranges > 12)... then it just performs the
operation with the lower and upper bounds instead.  We could choose a
different number, but that one seems to keep things under control and
still allows us to process up to a 3x4 operation for precision (there
is a testcase in the testsuite for this combination,
gcc.dg/tree-ssa/pr61839_2.c).  Longer term we might want to adjust
this routine to be slightly smarter than that, but this is a virtually
zero-risk solution this late in the release cycle.
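
In sketch form the guard looks something like this -- a paraphrase of
the description above, not the patch text itself (see the attachment
for the actual code):

  // Past the limit, fold a single summary range from the outer
  // bounds rather than all n*m sub-range pairs.
  if (lh.num_pairs () * rh.num_pairs () > 12)
    {
      wi_fold (r, type, lh.lower_bound (), lh.upper_bound (),
	       rh.lower_bound (), rh.upper_bound ());
      return true;
    }
  // ... otherwise perform the precise pairwise fold as before.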
I'm not sure we can do smarter in a good way other than maybe having
a range helper that reduces an N-component range to M components
while maintaining as much precision as possible?  Like for [1, 1] u
[3, 3] u [100, 100] and requesting at most 2 elements, merge [1, 1]
and [3, 3] and not [100, 100].  That should eventually be doable in
O(n log n).
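
Such a helper might look like the following -- purely a sketch of the
idea, nothing like this exists in GCC today.  It keeps merging the two
neighbours separated by the smallest gap until only M sub-ranges
remain; a heap over the gaps would get it to O(n log n), but the
sketch uses a simple linear scan per merge for clarity:

  #include <cstddef>
  #include <vector>

  // Hypothetical helper type -- not the real irange.  Sub-ranges are
  // assumed sorted and pairwise disjoint, and MAX_PAIRS >= 1.
  struct sub_range { long lb, ub; };

  // Reduce R to at most MAX_PAIRS sub-ranges, each merge picking the
  // neighbouring pair with the smallest gap so the least precision is
  // lost: [1,1][3,3][100,100] with MAX_PAIRS == 2 gives [1,3][100,100].
  void
  compress (std::vector<sub_range> &r, size_t max_pairs)
  {
    while (r.size () > max_pairs)
      {
        // Find the neighbouring pair with the smallest gap.
        size_t best = 0;
        long best_gap = r[1].lb - r[0].ub;
        for (size_t i = 1; i + 1 < r.size (); i++)
          if (r[i + 1].lb - r[i].ub < best_gap)
            {
              best_gap = r[i + 1].lb - r[i].ub;
              best = i;
            }
        // Absorb the gap: [a,b][c,d] -> [a,d], losing only b..c.
        r[best].ub = r[best + 1].ub;
        r.erase (r.begin () + best + 1);
      }
  }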
Yeah, similar to my line of thought.  It may also be worth considering
something similar after we have calculated a range: if the resulting
range has more than N sub-ranges, check whether it is worthwhile
trying to compress it at that point too.  Something for the next
stage-1 to consider.

Andrew
