As described in the covering note, one of the big differences for SVE is that
things like mode sizes, offsets, and numbers of vector elements can depend
on a runtime parameter.  This message describes how the SVE patches handle
that and how they deal with vector constants in which the number of elements
isn't fixed at compile time.


Mode sizes and numbers of elements
==================================

Having runtime mode sizes and numbers of elements means for example that:

  GET_MODE_SIZE
  GET_MODE_BITSIZE
  GET_MODE_PRECISION
  GET_MODE_NUNITS
  TYPE_VECTOR_SUBPARTS

are now runtime invariants rather than compile-time constants.  The first
question is what the representation of these runtime invariants should be.
Two obvious choices are:

  (1) Make them tree or rtl expressions (as appropriate for the IR
      they're part of).
  (2) Use a new representation.

One of the main problems with (1) is that it's much more general than
we need.  If we made something like GET_MODE_SIZE an rtx, it would be
hard to enforce statically that the value has a suitable form.  It would
also slow down the compiler, including for targets that don't need runtime
sizes.

We therefore went for approach (2).  The idea is to add a new
"polynomial integer" (poly_int) class that represents a general:

  C0 + C1 * X1 + ... + Cn * Xn

where each coefficient Ci is a compile-time constant and where each
indeterminate Xi is a nonnegative runtime parameter.  The class takes
"n" and the coefficient type as template parameters, so unlike (1) it
can continue to occupy less memory than a pointer where appropriate.

The value of "n" for mode sizes and offsets depends on the target.
For all targets except AArch64, "n" is 1 (a single coefficient C0,
with no runtime indeterminates) and the class degenerates to a
compile-time constant.
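As a rough sketch of the idea (poly2 and its members are invented names
for illustration, not GCC's actual implementation), a two-coefficient
polynomial value might look like:

```cpp
#include <cassert>

/* Illustrative stand-in for a poly_int with n == 2: the value is
   coeffs[0] + coeffs[1] * x for a nonnegative runtime parameter x.  */
struct poly2
{
  long coeffs[2];

  /* Evaluate the polynomial once the runtime parameter is known.  */
  long eval (long x) const { return coeffs[0] + coeffs[1] * x; }

  /* With n == 1 the class would degenerate to just coeffs[0].  */
  bool is_constant () const { return coeffs[1] == 0; }
};
```

For SVE-style sizes, { 2, 2 } would represent the 2 + 2X size used in
the example below; a target without runtime sizes only ever has the
coeffs[0] term.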

One difficulty with using runtime sizes is that some common questions
might not be decidable at compile time.  E.g. if mode A has size 2 + 2X
and mode B has size 4, the condition:

  GET_MODE_SIZE (A) <= GET_MODE_SIZE (B)

is true for X<=1 and false for X>=2.  It's therefore no longer possible
for target-independent code to use these kinds of comparison for modes
that might be vectors.  Instead it needs to ask "might the size be <=?"
or "must the size be <=?".

If a target only has constant sizes, it would be silly for target-specific
code to have to make the distinction between "may" and "must", since the
target knows that they amount to the same thing.  poly_int therefore
provides an implicit conversion to a constant if "n" is 1 and if we're
compiling target-specific code.  Whether this conversion is available
is controlled by a new TARGET_C_FILE macro.

The idea is to allow current targets to compile as-is with very few
changes while at the same time ensuring that people working on target-
independent code can be reasonably confident of "doing the right thing"
for runtime sizes without having to test SVE specifically.

However, even with SVE, all non-vector modes still have a compile-time size.
In these cases we had two options: use may/must operations anyway, or add
static type checking to enforce the fact that the mode isn't a vector.
The latter seemed better in most cases.  The patches therefore add the
following classes to wrap a machine mode enum:

  scalar_int_mode: modes that satisfy SCALAR_INT_MODE_P
  scalar_float_mode: modes that satisfy SCALAR_FLOAT_MODE_P
  scalar_mode: modes that hold some kind of scalar
  complex_mode: modes that hold a complex value

These wrappers have other benefits too.  They replace some runtime asserts
with static type checking and also make sure that the size or precision of
a vector mode isn't accidentally used instead of the size or precision of
an element.  (This sometimes happened when handling vector shifts by a
scalar amount, for example.)

We reused the is_a<>, as_a<> and dyn_cast<> operators for machine modes.
E.g.:

  is_a <scalar_mode> (M)

tests whether M is scalar and:

  as_a <scalar_int_mode> (M)

forcibly converts M to a scalar_int_mode, asserting if it isn't one.
We also used:

  is_a <scalar_int_mode> (M, &RES)

as a convenient way of testing whether M is a scalar_int_mode and
storing it as one in RES if so.  This helps with various multi-line
"if" statements, particularly in simplification routines.
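The test-and-capture pattern can be sketched in miniature outside GCC
(these are simplified stand-ins for the real machmode.h classes and the
templated is_a<> operator):

```cpp
#include <cassert>

/* Simplified stand-ins: the real enum and wrapper classes live in
   machmode.h and cover many more modes.  */
enum machine_mode_enum { E_SImode, E_DImode, E_SFmode };

struct scalar_int_mode
{
  machine_mode_enum mode;
};

static bool
scalar_int_mode_p (machine_mode_enum m)
{
  return m == E_SImode || m == E_DImode;
}

/* Analogue of is_a <scalar_int_mode> (M, &RES): test whether M is a
   scalar integer mode and capture it with the more specific type
   in one step.  */
static bool
is_a_scalar_int_mode (machine_mode_enum m, scalar_int_mode *res)
{
  if (!scalar_int_mode_p (m))
    return false;
  res->mode = m;
  return true;
}
```

This mirrors uses like "if (is_a <scalar_int_mode> (mode, &int_mode)
&& ...)" in simplification code, where the capture avoids a separate
as_a conversion on a following line.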

For consistency, the patches make machine_mode itself a wrapper class
and rename the enum to machine_mode_enum.  FOOmode identifiers have
the most specific type appropriate to them, so for example DImode is a
scalar_int_mode and DFmode is a scalar_float_mode.  The raw enum values
are still available with the E_ prefix (e.g. E_DImode) and are useful
for things like case statements.

I've attached the implementation of poly_int.  It contains a big block
comment at the start describing the approach and summarising the
available operations.

I've also attached the new version of machmode.h, with the wrapper
classes described above.

One thing we haven't done but should is add self-tests for the
poly_int class.  A lot of this code was written before self tests
were available, but one of the reasons for making "n" a template
parameter was precisely to allow n==2 to be tested on targets that
don't need runtime parameters.

Note that many things besides the macros above need to become polynomial.
Other examples include SUBREG_BYTE, frame offsets, frame sizes, and the
values returned by get_inner_reference.


Representing runtime parameters in the IR
=========================================

Even though we used polynomials rather than IR to encode things like
mode sizes, we still need a way of representing the runtime parameters
in IR.  This is used when incrementing vector ivs and allocating stack
frames, for example.

There were two ways we considered doing this in rtl:

(1) Add a new rtl code for the poly_ints themselves.  This would give
    constants like:

      (const_poly_int [(const_int C0)
                       (const_int C1)
                       ...
                       (const_int Cn)])

    (although the coefficients could be const_wide_ints instead
    of const_ints where appropriate).  The runtime value would be:

      C0 + C1 * X1 + ... + Cn * Xn

(2) Add a new rtl code for the polynomial indeterminates Xi,
    then use them in const wrappers.  A constant like C0 + C1 * X1
    would then look like:

      (const:M (plus:M (mult:M (const_param:M X1)
                               (const_int C1))
                       (const_int C0)))

There didn't seem to be that much to choose between them.  However,
DWARF location expressions that depend on the SVE vector length use
a pseudo register to encode that length.  This is very similar to the
const_param used in expression (2), and the DWARF expression would use
similar arithmetic operations to construct the full polynomial constant.
We therefore went for (2).

Most uses of rtx polynomial constants use helper functions that abstract
the underlying representation, so it would be easy to change to (1) (or
to a third approach) in future.

Unlike rtl, trees have no established practice of wrapping arbitrary
arithmetic in a const-like wrapper, so (1) seemed like the best approach.
The patches therefore add a new POLY_CST node that holds one INTEGER_CST
per coefficient.  Again, the actual representation is usually hidden
behind accessor functions; very little code operates on POLY_CSTs directly.


Constructing variable-length vectors
====================================

Currently both tree and rtl vector constants require the number of
elements to be known at compile time and allow the elements to be
arbitrarily different from one another.  SVE vector constants instead
have a variable number of elements and require the constant to have
some inherent structure, so that the values remain predictable however
long the vector is.  In practice there are two useful types of constant:

(a) a duplicate of a single value to all elements.

(b) a linear series in which element E has the value BASE + E * STEP,
    for some given BASE and STEP.

For integers, (a) could simply be seen as a special form of (b) in
which the step is zero.  However, we've deliberately not defined (b)
for floating-point modes, in order to avoid specifying whether element
E should really be calculated as BASE + E * STEP or as BASE with STEP
added E times (which would round differently).  So treating (a) as a
separate kind of constant from (b) is useful for floating-point types.

We need to support the same operations for non-constant vectors as well
as constant ones.  Both operations have direct analogues in SVE.

rtl already supports (a) for variables via vec_duplicate.  For constants
we simply wrapped such vec_duplicates in a (const ...), so for example:

  (const:VnnHI (vec_duplicate:VnnHI (const_int 10)))

represents a vector constant in which each element is the 16-bit value 10.

For (b) we created a new vec_series rtl code that takes the base and step
as operands.  A vec_series is constant if it has constant operands, in which
case it too can be wrapped in a (const ...).  For example:

  (const:VnnSI (vec_series:VnnSI (const_int 1) (const_int 3)))

represents the constant { 1, 4, 7, 10, ... }.
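As a quick sanity check on the semantics (plain C++ for illustration,
nothing to do with GCC's internals), element E of such a series is just
BASE + E * STEP however long the vector turns out to be:

```cpp
#include <cassert>

/* Compute element E of a vec_series-style constant.  The real
   constant has a runtime number of elements; this just evaluates
   one element for illustration.  */
static long
vec_series_elt (long base, long step, unsigned int e)
{
  return base + (long) e * step;
}
```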

We only use constant vec_duplicate and vec_series when the number of
elements is variable.  Vectors with a constant number of elements
continue to use const_vector.  It might be worth considering using
vec_duplicate across the board in future though, since it's significantly
more compact when the number of elements is large.

In both vec_duplicate and vec_series constants, the value of the element
can be any constant that is valid for the element's mode; it doesn't have
to be a const_int.

The patches take a similar approach for trees.  A new VEC_DUPLICATE_EXPR
returns a vector in which every element is equal to operand 0, while a new
VEC_SERIES_EXPR creates a linear series, taking the same two operands as the
rtl code.  The trees are TREE_CONSTANT if the operands are TREE_CONSTANT.

The new trees are valid gimple values iff they are TREE_CONSTANT.
This means that the constant forms can be used in a very similar way
to VECTOR_CST, rather than always requiring a separate gimple assignment.


Variable-length permutes
========================

SVE has a similar set of permute instructions to Advanced SIMD: it has
a TBL instruction for general permutes and specific instructions like
TRN1 for certain common operations.  Although it would be possible to
construct variable-length masks for all the special-purpose permutes,
the expression to construct things like "interleave high" would be
relatively complex.  It seemed better to add optabs and internal
functions for certain kinds of well-known operation, specifically:

  - IFN_VEC_INTERLEAVE_HI
  - IFN_VEC_INTERLEAVE_LO
  - IFN_VEC_EXTRACT_EVEN
  - IFN_VEC_EXTRACT_ODD
  - IFN_VEC_REVERSE

These names follow existing target-independent terminology rather than
the usual AArch64 scheme.

Thanks,
Richard

/* Polynomial integer classes.
   Copyright (C) 2014-2016 Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

/* This file provides a representation of sizes and offsets whose exact
   values depend on certain runtime properties.  The motivating example
   is the ARM SVE ISA, in which the number of vector elements is only
   known at runtime.

   Overview
   ========

   We define indeterminates x1...xn whose values are only known at
   runtime and use polynomials of the form:

     c0 + c1 * x1 + ... + cn * xn

   to represent a size or offset whose value might depend on one
   of these indeterminates.  The coefficients c0...cn are always
   known at compile time, with the c0 term being the "constant" part
   that doesn't depend on any runtime value.

   The number of indeterminates n is a fixed property of the target
   and at the moment is usually 0 (since most targets don't have such
   runtime values).  When n is 0, the class degenerates to a single
   compile-time constant c0.

   We make the simplifying requirement that each indeterminate must be a
   nonnegative integer.  An indeterminate value of 0 should usually
   represent the minimum possible runtime value, with c0 indicating
   the value in that case.

   E.g., for SVE, the single indeterminate represents the number of
   128-bit blocks in a vector _beyond the minimum width of 128 bits_.
   Thus the number of 64-bit doublewords in a vector is 2 + 2 * x1.
   If an aggregate has a single SVE vector and 16 additional bytes,
   its total size is 32 + 16 * x1 bytes.

   Challenges
   ==========

   The two main problems with using polynomial sizes and offsets are that:

   (1) there's no total ordering at compile time, and
   (2) some operations have results that can't be expressed as a compile-time
       polynomial.

   For example, we can't tell at compile time whether:

      3 + 4x <= 1 + 5x

   since the condition is false for x <= 1 and true for x >= 2.
   Similarly we can't compute:

      (3 + 4x) * (1 + 5x)
   or
      (3 + 4x) / (1 + 5x)

   and represent the result as the same kind of polynomial.  These problems
   are addressed below.

   Ordering
   ========

   In general we need to compare sizes and offsets in two situations:
   those in which there is no definite link between the two values,
   and those in which there is a definite link.  An example of the
   former is bounds checking: we might want to check whether two
   arbitrary memory references overlap.  An example of the latter is the
   relationship between the inner and outer sizes of a subreg, where we
   must know at compile time whether the subreg is paradoxical, partial,
   or complete.

   Referring back to the example polynomials above, it makes sense to
   ask whether a memory reference of size 3 + 4x overlaps one of size
   1 + 5x, but it doesn't make sense to have a subreg in which the outer
   mode has 3 + 4x bytes and the inner mode has 1 + 5x bytes (or vice
   versa).  Such subregs are always invalid and should trigger an ICE
   if formed.

   Ordering without a definite link
   ================================

   In the case where there is no definite link between two sizes,
   we can usually make a conservatively-correct assumption.  E.g.
   for alias analysis the conservative assumption is that two references
   might alias.  For these situations the file provides routines for
   checking whether a particular relationship "may" (= might) hold:

      may_lt may_le may_eq may_ge may_gt
                    may_ne

   Note that these relations are not transitive.  For example:

      may_lt (a, b) && may_lt (b, c) does not imply may_lt (a, c)

   (e.g. take a == 5, b == 3 + 4x and c == 4: the first two hold,
   at x >= 1 and x == 0 respectively, but the third never does).

   may_lt, may_gt and may_ne are irreflexive, so for example:

      !may_lt (a, a)

   is always true.  may_le, may_eq and may_ge are reflexive, so for example:

      may_le (a, a)

   is always true.  may_eq and may_ne are symmetric, so:

      may_eq (a, b) == may_eq (b, a)
      may_ne (a, b) == may_ne (b, a)

   In addition:

      may_le (a, b) == may_lt (a, b) || may_eq (a, b)
      may_ge (a, b) == may_gt (a, b) || may_eq (a, b)
      may_lt (a, b) == may_gt (b, a)
      may_le (a, b) == may_ge (b, a)

   However:

      may_le (a, b) && may_le (b, a) does not imply !may_ne (a, b)
      may_ge (a, b) && may_ge (b, a) does not imply !may_ne (a, b)

   One example where this doesn't hold is again a == 3 + 4x and
   b == 1 + 5x, where may_le (a, b), may_ge (a, b) and may_ne (a, b)
   all hold.  may_le and may_ge are therefore not antisymmetric and so
   don't form a partial order.

   One way of checking whether [begin1, end1) might overlap [begin2, end2)
   using these relations is:

      may_gt (end1, begin2) && may_gt (end2, begin1)

   (but see the description of the range-checking predicates below).

   For readability, the file also provides "must" inverses of the above:

      must_lt (a, b) == !may_ge (a, b)
      must_le (a, b) == !may_gt (a, b)
      must_eq (a, b) == !may_ne (a, b)
      must_ge (a, b) == !may_lt (a, b)
      must_gt (a, b) == !may_le (a, b)
      must_ne (a, b) == !may_eq (a, b)

   An alternative way of writing the range check above is therefore:

      !(must_le (end1, begin2) || must_le (end2, begin1))
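   The same check can be sketched for two-coefficient polynomials
   (poly2 and must_le here are illustrative stand-ins, not the real
   implementation):

```cpp
#include <cassert>

/* Illustrative polynomial c0 + c1 * X with X >= 0.  */
struct poly2 { long c0, c1; };

/* a <= b for every X >= 0.  */
static bool must_le (poly2 a, poly2 b)
{
  return a.c0 <= b.c0 && a.c1 <= b.c1;
}

/* [begin1, end1) might overlap [begin2, end2) unless one range
   certainly ends no later than the other starts.  */
static bool ranges_may_overlap (poly2 begin1, poly2 end1,
                                poly2 begin2, poly2 end2)
{
  return !(must_le (end1, begin2) || must_le (end2, begin1));
}
```

   For example, [0, 3 + 4x) might overlap a range starting at 1 + 5x,
   but cannot overlap one starting at 4 + 4x.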

   All "must" relations except "must_ne" are transitive.  must_lt,
   must_ne and must_gt are irreflexive.  must_le, must_eq and must_ge
   are reflexive.  Also:

      must_lt (a, b) == must_gt (b, a)
      must_le (a, b) == must_ge (b, a)
      must_lt (a, b) implies !must_lt (b, a)   [asymmetry]
      must_gt (a, b) implies !must_gt (b, a)
      must_le (a, b) && must_le (b, a) == must_eq (a, b) [== !may_ne (a, b)]
      must_ge (a, b) && must_ge (b, a) == must_eq (a, b) [== !may_ne (a, b)]

   must_le and must_ge are therefore antisymmetric and are partial orders.
   However:

      must_le (a, b) does not imply must_lt (a, b) || must_eq (a, b)
      must_ge (a, b) does not imply must_gt (a, b) || must_eq (a, b)

   For example, must_le (4, 4 + 4x) holds due to the requirements on x,
   but neither must_lt (4, 4 + 4x) nor must_eq (4, 4 + 4x) hold.

   Ordering with a definite link
   =============================

   In cases where there is a definite link between values, such as the
   outer and inner sizes of subregs, we usually require the sizes to be
   ordered by the must_le partial order.  ordered_p (a, b) checks
   whether this is true for a given a and b.

   For example, if a subreg has an outer mode of size OUTER and an
   inner mode of size INNER:

   - the subreg is complete if must_eq (INNER, OUTER)
   - otherwise, the subreg is paradoxical if must_le (INNER, OUTER)
   - otherwise, the subreg is partial if must_le (OUTER, INNER)
   - otherwise, the subreg is ill-formed

   If the sizes are already known to be valid then:

   - the subreg is complete if must_eq (INNER, OUTER)
   - the subreg is paradoxical if may_lt (INNER, OUTER)
   - the subreg is partial if may_lt (OUTER, INNER)

   The file also provides the following utility functions for
   ordered values:

   ordered_min (a, b)
      Asserts that a and b are ordered by must_le and returns the
      minimum of the two.

   ordered_max (a, b)
      Asserts that a and b are ordered by must_le and returns the
      maximum of the two.

   These routines should only be used if there is a definite link
   between the two values.  See the bounds routines below for routines
   that can help with other cases.

   Marker values
   =============

   Sometimes it's useful to have a special marker value that isn't
   supposed to be taken literally.  For example, some code uses a size
   of -1 to represent an unknown size, rather than having to carry around
   a separate boolean to say whether the size is known.

   The best way of checking whether something is a marker value
   is must_eq.  Conversely the best way of checking whether something
   _isn't_ a marker value is may_ne.

   Thus in the size example just mentioned, must_eq (size, -1) would
   check for an unknown size and may_ne (size, -1) would check for a
   known size.

   Range checks
   ============

   As well as the core comparisons described above, the file provides
   utilities for various kinds of range check.  In each case the range
   is represented by a start position and a length rather than a start
   position and an end position.  (This is because the former is used
   much more often than the latter in GCC.)  Also, the sizes can be
   -1 (or all-1s for unsigned sizes) to indicate a range with a known
   start position but an unknown size.

   ranges_may_overlap_p (pos1, size1, pos2, size2)
      Return true if the range described by pos1 and size1 might overlap
      the range described by pos2 and size2.

   known_subrange_p (pos1, size1, pos2, size2)
      Return true if the range described by pos1 and size1 is known to
      be contained in the range described by pos2 and size2.

   known_in_range_p (val, pos, size)
      Return true if value val is known to be in the range described
      by pos and size.

   maybe_in_range_p (val, pos, size)
      Return true if value val might be in the range described by pos
      and size (i.e. if it isn't known to be outside that range).

   Arithmetic
   ==========

   Addition, subtraction, negation and bit inverse work normally.
   Multiplication by a constant multiplier and left shifting by
   a constant shift amount also work normally.  General multiplication
   of two polynomials isn't supported and isn't useful in practice.

   These arithmetic operators handle integer ranks in a similar way to C.
   The main difference is that everything narrower than HOST_WIDE_INT
   promotes to HOST_WIDE_INT, whereas in C everything narrower than int
   promotes to int.  For example:

       poly_uint16     + int          -> poly_int64  (both narrower than HWI)
       unsigned int    + poly_uint16  -> poly_int64  (likewise)
       poly_int64      + int          -> poly_int64  (highest rank wins)
       poly_int32      + poly_uint64  -> poly_uint64 (likewise)
       uint64          + poly_int64   -> poly_uint64 (likewise)
       poly_offset_int + int32        -> poly_offset_int (likewise)
       offset_int      + poly_uint16  -> poly_offset_int (likewise)

   If one of the operands is wide_int or poly_wide_int, the rules
   are the same as for wide_int arithmetic.

   Division of polynomials is possible for certain inputs.  The functions
   for division return true if the operation is possible and in most cases
   return the results by pointer.  The routines are:

   multiple_p (a, b[, &quotient])
      Return true if a is an exact multiple of b, storing the result
      in quotient if so.  There are overloads for various combinations
      of polynomial and constant a, b and quotient.

   constant_multiple_p (a, b[, &quotient])
      Test multiple_p and also test whether the multiple is a
      compile-time constant.

   can_div_trunc_p (a, b, &quotient[, &remainder])
      Return true if we can calculate trunc (a / b) at compile time,
      storing the result in quotient and remainder if so.

   can_div_away_from_zero_p (a, b, &quotient)
      Return true if we can calculate a / b at compile time,
      rounding away from zero.  Store the result in quotient if so.

      Note that this is possible iff can_div_trunc_p is possible.
      The only difference is in the way the result is rounded.
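   For a compile-time-constant divisor, the idea behind multiple_p can
   be sketched as follows (a simplified stand-in that only handles a
   constant b; the real routine accepts polynomial divisors too):

```cpp
#include <cassert>

/* Illustrative polynomial c0 + c1 * X with X >= 0.  */
struct poly2 { long c0, c1; };

/* Sketch of multiple_p with a constant divisor: c0 + c1*X is an
   exact multiple of b for every X iff each coefficient is, and the
   quotient is then the coefficient-wise division.  */
static bool
multiple_p (poly2 a, long b, poly2 *quotient)
{
  if (a.c0 % b != 0 || a.c1 % b != 0)
    return false;
  quotient->c0 = a.c0 / b;
  quotient->c1 = a.c1 / b;
  return true;
}
```

   E.g. a vector size of 16 + 16x bytes divides exactly by 8 to give
   2 + 2x doublewords, whereas 6 + 4x is not always a multiple of 4.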

   The file also provides an asserting form of division:

   exact_div (a, b)
      Assert that a is a multiple of b and return a / b.  The result
      is a polynomial if a is a polynomial.

   The file provides similar routines for other operations:

   can_ior_p (a, b, &result):
      Return true if we can calculate a | b at compile time, storing the
      result in the given location if so.

   Please add any others that you find to be useful.

   In addition, the following overflow-checking wi:: routines are also
   supported for polynomials:

   - wi::add
   - wi::sub
   - wi::neg
   - wi::mul

   These routines just check whether overflow occurs on any individual
   coefficient; it isn't possible to know at compile time whether the
   final runtime value would overflow.

   The following miscellaneous wi:: routines are also supported:

   - wi::sext

   Alignment
   =========

   The file provides various routines for aligning values and for querying
   misalignments.  In each case the alignment must be a power of 2.

   can_align_p (value, align)
      Return true if we can align the given value to the given alignment
      boundary at compile time.

   can_align_up (value, align, &aligned)
      Return true if we can align the given value upwards at compile time,
      storing the result in aligned if so.

   can_align_down (value, align, &aligned)
      Return true if we can align the given value downwards at compile time,
      storing the result in aligned if so.

   known_equal_after_align_up (a, b, align)
      Return true if we can align a and b up at compile time and if the
      two results are equal.

   known_equal_after_align_down (a, b, align)
      Return true if we can align a and b down at compile time and if the
      two results are equal.

   aligned_lower_bound (value, align)
      Return a result that is no greater than value and that is aligned
      to the given alignment boundary.  The result will be the closest
      aligned value for some indeterminate values but not necessarily
      for all.

      For example, this function can be used to calculate a new stack
      offset for a downward-growing stack after allocating an object
      with value bytes that needs to be aligned to align bytes.

   aligned_upper_bound (value, align)
      Likewise return a result that is no less than value and that is
      aligned to the given alignment boundary.  This is the routine
      that would be used for upward-growing stacks in the scenario
      described.

   known_misalignment (value, align, &misalign)
      Return true if we can calculate the misalignment of value
      with respect to align at compile time, storing the result in
      misalign if so.

   known_alignment (value)
      Return the minimum alignment that the given value is known to have
      (in other words, the largest alignment that can be guaranteed
      whatever the values of the indeterminates turn out to be).
      Return 0 if value is known to be 0.

   force_align_up (value, align)
      Assert that value can be aligned up at compile time and return the
      result.  When using this routine, please add a comment explaining
      why the assertion is known to hold.

   force_align_down (value, align)
      As above, but aligning down.

   force_align_up_and_div (value, align)
      Divide the result of force_align_up by align.  Again, please add
      a comment explaining why the assertion in force_align_up is known
      to hold.

   force_align_down_and_div (value, align)
      Likewise for force_align_down.

   force_get_misalignment (value, align)
      Assert that we can calculate the misalignment of value with
      respect to align at compile time and return the misalignment.
      When using this function, please add a comment explaining why
      the assertion is known to hold.
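   Two of the queries above can be sketched for two-coefficient
   polynomials (poly2 and these helpers are illustrative stand-ins,
   not GCC's implementation):

```cpp
#include <cassert>

/* Illustrative polynomial c0 + c1 * X with X >= 0.  */
struct poly2 { long c0, c1; };

/* Sketch of can_align_down: the result is computable at compile time
   iff the X coefficient is already a multiple of the (power-of-2)
   alignment, in which case only c0 needs rounding down.  */
static bool
can_align_down (poly2 value, long align, poly2 *aligned)
{
  if (value.c1 % align != 0)
    return false;
  aligned->c0 = value.c0 & -align;
  aligned->c1 = value.c1;
  return true;
}

/* Sketch of known_alignment: the largest power of 2 dividing
   c0 + c1*X for every X >= 0 is the lowest set bit of (c0 | c1),
   or 0 if the value is known to be 0.  */
static unsigned long
known_alignment (unsigned long c0, unsigned long c1)
{
  unsigned long bits = c0 | c1;
  return bits & -bits;
}
```

   The real routines work on poly_int values of any coefficient type;
   the coefficient-wise reasoning is the point of the sketch.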

   Computing bounds
   ================

   The file also provides routines for calculating lower and upper bounds:

   constant_lower_bound (a)
      Assert that a is nonnegative and return the smallest value it can have.

   upper_bound (a, b)
      Return a value that must be greater than or equal to both a and b.
      It will be the least such value for some indeterminate values
      but not necessarily for all.

   Other utilities
   ===============

   common_multiple (a, b)
      Return a value that is a multiple of both a and b.  It will be
      the least common multiple for some indeterminate values but not
      necessarily for all.

   compare_sizes_for_sort (a, b)
      Compare a and b in reverse lexicographical order.  This can be
      useful when sorting data structures, since it has the effect of
      separating constant and non-constant values.  The values do not
      necessarily end up in numerical order; for example, 1 + 1x
      would come after 100 in the sort order, but may well be less
      than 100 at run time.

   print_dec (value, file, sgn)
      A polynomial version of the wide-int routine.  */
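/* The reverse-lexicographical order used by compare_sizes_for_sort can
   be sketched for two-coefficient polynomials as follows (poly2 and
   this helper are illustrative stand-ins, not the real code):

```cpp
#include <cassert>

/* Illustrative polynomial c0 + c1 * X with X >= 0.  */
struct poly2 { long c0, c1; };

/* Compare the highest-indexed coefficient first, so that constants
   (c1 == 0) sort before every X-dependent size.  Returns <0, 0 or >0
   in the usual qsort style.  */
static int
compare_sizes_for_sort2 (poly2 a, poly2 b)
{
  if (a.c1 != b.c1)
    return a.c1 < b.c1 ? -1 : 1;
  if (a.c0 != b.c0)
    return a.c0 < b.c0 ? -1 : 1;
  return 0;
}
```

   As in the example above, 1 + 1x sorts after 100 even though it may
   be numerically smaller at run time.  */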

#ifndef HAVE_POLY_INT_H
#define HAVE_POLY_INT_H

/* int_traits<T1>::rank is less than int_traits<T2>::rank if T1 can
   promote to T2.

   For C-like types the rank is:

     (2 * number of bytes) + (unsigned ? 1 : 0)

   wide_ints don't have a normal rank and so use a value of INT_MAX.
   Any fixed-width integer should be promoted to wide_int if possible
   and lead to an error otherwise.

   int_traits<T>::precision is the number of bits that T can hold.

   int_traits<T>::signedness is:
      0 if T is unsigned
      1 if T is signed
     -1 if T has no inherent sign (as for wide_int).

   int_traits<T>::result is a type that can hold results of operations
   on T.  This is different from T itself in cases where T is the result
   of an accessor like wi::to_offset.  */
template<typename T, wi::precision_type = wi::int_traits<T>::precision_type>
struct int_traits;

template<typename T>
struct int_traits<T, wi::FLEXIBLE_PRECISION>
{
  typedef T result;
  static const int signedness = (T (0) >= T (-1));
  static const int precision = sizeof (T) * CHAR_BIT;
  static const int rank = sizeof (T) * 2 + !signedness;
};

template<typename T>
struct int_traits<T, wi::VAR_PRECISION>
{
  typedef T result;
  static const int signedness = -1;
  static const int precision = WIDE_INT_MAX_PRECISION;
  static const int rank = INT_MAX;
};

template<typename T>
struct int_traits<T, wi::CONST_PRECISION>
{
  typedef WI_UNARY_RESULT (T) result;
  static const int signedness = 1;
  static const int precision = wi::int_traits<T>::precision;
  /* These types are always signed.  */
  static const int rank = precision * 2 / CHAR_BIT;
};

/* SFINAE class that leads to substitution failure if T2 can't represent
   all the values in T1.  Either:

   - T2 should be a type with the same signedness as T1 and no less precision.
     This allows things like int16_t -> int16_t and uint32_t -> uint64_t.

   - T1 should be unsigned, T2 should be signed, and T1 should be
     narrower than T2.  This allows things like uint16_t -> int32_t.

   This rules out cases where T2 has less precision than T1 or where
   the conversion would reinterpret the top bit.  E.g. int16_t -> uint32_t
   can be dangerous and should have an explicit cast if deliberate.  */
template<typename T1, typename T2,
	  bool good = (int_traits<T1>::signedness
		       == int_traits<T2>::signedness
		       ? (int_traits<T1>::precision
			  <= int_traits<T2>::precision)
		       : (int_traits<T1>::signedness == 0
			  && int_traits<T2>::signedness == 1
			  && (int_traits<T1>::precision
			      < int_traits<T2>::precision)))>
struct if_lossless;

template<typename T1, typename T2>
struct if_lossless<T1, T2, true>
{
  typedef bool bool_type;
};

/* A base POD class for polynomial integers.  The polynomial has N
   coefficients of type C.

   Most of these functions are ALWAYS_INLINE to speed up compilers
   built at -O0.  The functions are heavily used and not interesting
   as function calls even in debug builds.  */
template<unsigned int N, typename C>
class poly_int_pod
{
public:
  typedef C t;

  template<typename Ca>
  poly_int_pod &operator = (const poly_int_pod<N, Ca> &);
  poly_int_pod &operator = (const C &);

  template<typename Ca>
  poly_int_pod &operator += (const poly_int_pod<N, Ca> &);
  poly_int_pod &operator += (const C &);

  template<typename Ca>
  poly_int_pod &operator -= (const poly_int_pod<N, Ca> &);
  poly_int_pod &operator -= (const C &);

  poly_int_pod &operator *= (const C &);

  poly_int_pod &operator <<= (unsigned int);

  bool is_constant () const;

  template<typename T>
  typename if_lossless<C, T>::bool_type is_constant (T *) const;

  C to_constant () const;

  template<typename Ca>
  static poly_int_pod from (const poly_int_pod<N, Ca> &,
			    unsigned int, signop);
  template<typename Ca>
  static poly_int_pod from (const poly_int_pod<N, Ca> &, signop);
  bool to_shwi (poly_int_pod<N, HOST_WIDE_INT> *) const;
  bool to_uhwi (poly_int_pod<N, unsigned HOST_WIDE_INT> *) const;
  poly_int_pod<N, HOST_WIDE_INT> force_shwi () const;

#if POLY_INT_CONVERSION
  operator C () const;
#endif

  C coeffs[N];
};

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator = (const poly_int_pod<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = a.coeffs[0];
  if (N == 2)
    this->coeffs[1] = a.coeffs[1];
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator = (const C &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = a;
  if (N == 2)
    /* Easy way of propagating the precision of a wide_int to the
       second coefficient.  */
    this->coeffs[1] = this->coeffs[0] & 0;
  return *this;
}

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator += (const poly_int_pod<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] += a.coeffs[0];
  if (N == 2)
    this->coeffs[1] += a.coeffs[1];
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator += (const C &a)
{
  this->coeffs[0] += a;
  return *this;
}

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator -= (const poly_int_pod<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] -= a.coeffs[0];
  if (N == 2)
    this->coeffs[1] -= a.coeffs[1];
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator -= (const C &a)
{
  this->coeffs[0] -= a;
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator *= (const C &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] *= a;
  if (N == 2)
    this->coeffs[1] *= a;
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int_pod<N, C>&
poly_int_pod<N, C>::operator <<= (unsigned int a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = wi::lshift (this->coeffs[0], a);
  if (N == 2)
    this->coeffs[1] = wi::lshift (this->coeffs[1], a);
  return *this;
}

/* Return true if the polynomial value is a compile-time constant.  */

template<unsigned int N, typename C>
ALWAYS_INLINE bool
poly_int_pod<N, C>::is_constant () const
{
  STATIC_ASSERT (N <= 2);
  return N == 1 || this->coeffs[1] == 0;
}

/* Return true if the polynomial value is a compile-time constant,
   storing its value in CONST_VALUE if so.  */

template<unsigned int N, typename C>
template<typename T>
ALWAYS_INLINE typename if_lossless<C, T>::bool_type
poly_int_pod<N, C>::is_constant (T *const_value) const
{
  if (is_constant ())
    {
      *const_value = this->coeffs[0];
      return true;
    }
  return false;
}

/* Return the value of a polynomial that is already known to be a
   compile-time constant.

   NOTE: When using this function, please add a comment above the call
   explaining why we know the value is constant in that context.  */

template<unsigned int N, typename C>
ALWAYS_INLINE C
poly_int_pod<N, C>::to_constant () const
{
  gcc_checking_assert (is_constant ());
  return this->coeffs[0];
}

/* Convert A to a wide-int-based polynomial in which each coefficient
   has BITSIZE bits.  If A's coefficients are smaller than BITSIZE,
   extend them according to SGN.  */

template<unsigned int N, typename C>
template<typename Ca>
inline poly_int_pod<N, C>
poly_int_pod<N, C>::from (const poly_int_pod<N, Ca> &a,
			  unsigned int bitsize, signop sgn)
{
  poly_int_pod<N, C> r;
  for (unsigned int i = 0; i < N; i++)
    r.coeffs[i] = C::from (a.coeffs[i], bitsize, sgn);
  return r;
}

/* Convert A to a fixed-wide-int-based polynomial, extending according
   to SGN.  */

template<unsigned int N, typename C>
template<typename Ca>
inline poly_int_pod<N, C>
poly_int_pod<N, C>::from (const poly_int_pod<N, Ca> &a, signop sgn)
{
  poly_int_pod<N, C> r;
  for (unsigned int i = 0; i < N; i++)
    r.coeffs[i] = C::from (a.coeffs[i], sgn);
  return r;
}

/* Return true if the coefficients of this wide-int-based polynomial can
   be represented as signed HOST_WIDE_INTs.  Store the HOST_WIDE_INT
   representation in *R if so.

   If this polynomial is conceptually wider than HOST_WIDE_INT bits,
   the coefficients in the returned value should be sign-extended
   to the full width.  */

template<unsigned int N, typename C>
inline bool
poly_int_pod<N, C>::to_shwi (poly_int_pod<N, HOST_WIDE_INT> *r) const
{
  for (unsigned int i = 0; i < N; i++)
    if (!wi::fits_shwi_p (this->coeffs[i]))
      return false;
  for (unsigned int i = 0; i < N; i++)
    r->coeffs[i] = this->coeffs[i].to_shwi ();
  return true;
}

/* Return true if the coefficients of this wide-int-based polynomial can
   be represented as unsigned HOST_WIDE_INTs.  Store the unsigned
   HOST_WIDE_INT representation in *R if so.

   If this polynomial is conceptually wider than unsigned HOST_WIDE_INT
   bits, the coefficients in the returned value should be zero-extended
   to the full width.  */

template<unsigned int N, typename C>
inline bool
poly_int_pod<N, C>::to_uhwi (poly_int_pod<N, unsigned HOST_WIDE_INT> *r) const
{
  for (unsigned int i = 0; i < N; i++)
    if (!wi::fits_uhwi_p (this->coeffs[i]))
      return false;
  for (unsigned int i = 0; i < N; i++)
    r->coeffs[i] = this->coeffs[i].to_uhwi ();
  return true;
}

/* Force a wide-int-based polynomial to HOST_WIDE_INT coefficients,
   truncating if necessary.  */

template<unsigned int N, typename C>
poly_int_pod<N, HOST_WIDE_INT>
poly_int_pod<N, C>::force_shwi () const
{
  poly_int_pod<N, HOST_WIDE_INT> r;
  for (unsigned int i = 0; i < N; i++)
    r.coeffs[i] = this->coeffs[i].to_shwi ();
  return r;
}

#if POLY_INT_CONVERSION
/* Provide a conversion operator to constants.  */

template<unsigned int N, typename C>
ALWAYS_INLINE
poly_int_pod<N, C>::operator C () const
{
  STATIC_ASSERT (N == 1);
  return this->coeffs[0];
}
#endif

/* The main class for polynomial integers.  The class provides
   constructors that are necessarily missing from the POD base.  */
template<unsigned int N, typename C>
class poly_int : public poly_int_pod<N, C>
{
public:
  ALWAYS_INLINE poly_int () {}

  template<typename Ca>
  poly_int (const poly_int<N, Ca> &);
  template<typename Ca>
  poly_int (const poly_int_pod<N, Ca> &);
  template<typename C0>
  poly_int (const C0 &);
  template<typename C0, typename C1>
  poly_int (const C0 &, const C1 &);

  template<typename Ca>
  poly_int &operator += (const poly_int_pod<N, Ca> &);
  poly_int &operator += (const C &);

  template<typename Ca>
  poly_int &operator -= (const poly_int_pod<N, Ca> &);
  poly_int &operator -= (const C &);

  poly_int &operator *= (const C &);

  poly_int &operator <<= (unsigned int);
};

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE
poly_int<N, C>::poly_int (const poly_int<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = a.coeffs[0];
  if (N == 2)
    this->coeffs[1] = a.coeffs[1];
}

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE
poly_int<N, C>::poly_int (const poly_int_pod<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = a.coeffs[0];
  if (N == 2)
    this->coeffs[1] = a.coeffs[1];
}

template<unsigned int N, typename C>
template<typename C0>
ALWAYS_INLINE
poly_int<N, C>::poly_int (const C0 &c0)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = c0;
  if (N == 2)
    /* Easy way of propagating the precision of a wide_int to the
       second coefficient.  */
    this->coeffs[1] = this->coeffs[0] & 0;
}

template<unsigned int N, typename C>
template<typename C0, typename C1>
ALWAYS_INLINE
poly_int<N, C>::poly_int (const C0 &c0, const C1 &c1)
{
  STATIC_ASSERT (N == 2);
  this->coeffs[0] = c0;
  this->coeffs[1] = c1;
}

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE poly_int<N, C>&
poly_int<N, C>::operator += (const poly_int_pod<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] += a.coeffs[0];
  if (N == 2)
    this->coeffs[1] += a.coeffs[1];
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int<N, C>&
poly_int<N, C>::operator += (const C &a)
{
  this->coeffs[0] += a;
  return *this;
}

template<unsigned int N, typename C>
template<typename Ca>
ALWAYS_INLINE poly_int<N, C>&
poly_int<N, C>::operator -= (const poly_int_pod<N, Ca> &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] -= a.coeffs[0];
  if (N == 2)
    this->coeffs[1] -= a.coeffs[1];
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int<N, C>&
poly_int<N, C>::operator -= (const C &a)
{
  this->coeffs[0] -= a;
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int<N, C>&
poly_int<N, C>::operator *= (const C &a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] *= a;
  if (N == 2)
    this->coeffs[1] *= a;
  return *this;
}

template<unsigned int N, typename C>
ALWAYS_INLINE poly_int<N, C>&
poly_int<N, C>::operator <<= (unsigned int a)
{
  STATIC_ASSERT (N <= 2);
  this->coeffs[0] = wi::lshift (this->coeffs[0], a);
  if (N == 2)
    this->coeffs[1] = wi::lshift (this->coeffs[1], a);
  return *this;
}

/* SFINAE class to force T to be a non-polynomial arithmetic type.  */
template<typename T>
struct if_nonpoly
{
  typedef bool bool_type;
  typedef T t;
};
template<unsigned int N, typename T> struct if_nonpoly<poly_int_pod<N, T> > {};
template<unsigned int N, typename T> struct if_nonpoly<poly_int<N, T> > {};

/* Likewise for two types T1 and T2.  */
template<typename T1, typename T2,
	 typename T3 = typename if_nonpoly<T1>::t,
	 typename T4 = typename if_nonpoly<T2>::t>
struct if_nonpoly2
{
  typedef bool bool_type;
};

/* SFINAE class to force T to be a polynomial type.  */
template<typename T> struct if_poly {};
template<unsigned int N, typename T>
struct if_poly<poly_int_pod<N, T> >
{
  typedef bool bool_type;
  typedef poly_int_pod<N, T> t;
};
template<unsigned int N, typename T>
struct if_poly<poly_int<N, T> >
{
  typedef bool bool_type;
  typedef poly_int<N, T> t;
};

/* poly_result<T1, T2>::t gives the result type for T1 + T2.  The intention
   is to provide normal C-like rules for integer ranks, except that
   everything smaller than HOST_WIDE_INT promotes to HOST_WIDE_INT.  */
#define RANK(X) int_traits<X>::rank
template<unsigned int N, typename T1, typename T2 = T1,
	  int sel = ((RANK (T1) < RANK (HOST_WIDE_INT)
		      && RANK (T2) < RANK (HOST_WIDE_INT))
		     ? 0 : RANK (T1) < RANK (T2) ? 1 : 2)>
struct poly_result;
#undef RANK

/* Promote pair to HOST_WIDE_INT.  */
template<unsigned int N, typename T1, typename T2>
struct poly_result<N, T1, T2, 0>
{
  typedef poly_int<N, HOST_WIDE_INT> t;
};

/* T1 promotes to T2.  */
template<unsigned int N, typename T1, typename T2>
struct poly_result<N, T1, T2, 1>
{
  typedef poly_int<N, typename int_traits <T2>::result> t;
};

/* T2 promotes to T1.  */
template<unsigned int N, typename T1, typename T2>
struct poly_result<N, T1, T2, 2>
{
  typedef poly_int<N, typename int_traits <T1>::result> t;
};

#define POLY_POLY_RESULT(N, T1, T2) typename poly_result<N, T1, T2>::t
#define POLY_SCALAR_RESULT(N, T1, T2) \
  typename poly_result<N, T1, typename if_nonpoly<T2>::t>::t
#define SCALAR_POLY_RESULT(N, T1, T2) \
  typename poly_result<N, typename if_nonpoly<T1>::t, T2>::t

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE POLY_POLY_RESULT (N, Ca, Cb)
operator + (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a.coeffs[0]) + b.coeffs[0];
  if (N == 2)
    r.coeffs[1] = C (a.coeffs[1]) + b.coeffs[1];
  return r;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE POLY_SCALAR_RESULT (N, Ca, Cb)
operator + (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  typedef POLY_SCALAR_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a.coeffs[0]) + b;
  if (N == 2)
    r.coeffs[1] = C (a.coeffs[1]);
  return r;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE SCALAR_POLY_RESULT (N, Ca, Cb)
operator + (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  typedef SCALAR_POLY_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a) + b.coeffs[0];
  if (N == 2)
    r.coeffs[1] = C (b.coeffs[1]);
  return r;
}

namespace wi {
/* Poly version of wi::add, with the same interface.  */

template<unsigned int N, typename C>
poly_int<N, C>
add (const poly_int_pod<N, C> &a, const poly_int_pod<N, C> &b,
     signop sgn, bool *overflow)
{
  poly_int<N, C> r;
  *overflow = false;
  bool suboverflow;
  for (unsigned int i = 0; i < N; ++i)
    {
      r.coeffs[i] = wi::add (a.coeffs[i], b.coeffs[i], sgn, &suboverflow);
      *overflow |= suboverflow;
    }
  return r;
}
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE POLY_POLY_RESULT (N, Ca, Cb)
operator - (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a.coeffs[0]) - b.coeffs[0];
  if (N == 2)
    r.coeffs[1] = C (a.coeffs[1]) - b.coeffs[1];
  return r;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE POLY_SCALAR_RESULT (N, Ca, Cb)
operator - (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  typedef POLY_SCALAR_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a.coeffs[0]) - b;
  if (N == 2)
    r.coeffs[1] = C (a.coeffs[1]);
  return r;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE SCALAR_POLY_RESULT (N, Ca, Cb)
operator - (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  typedef SCALAR_POLY_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a) - b.coeffs[0];
  if (N == 2)
    r.coeffs[1] = -C (b.coeffs[1]);
  return r;
}

namespace wi {
/* Poly version of wi::sub, with the same interface.  */

template<unsigned int N, typename C>
poly_int<N, C>
sub (const poly_int_pod<N, C> &a, const poly_int_pod<N, C> &b,
     signop sgn, bool *overflow)
{
  poly_int<N, C> r;
  *overflow = false;
  bool suboverflow;
  for (unsigned int i = 0; i < N; ++i)
    {
      r.coeffs[i] = wi::sub (a.coeffs[i], b.coeffs[i], sgn, &suboverflow);
      *overflow |= suboverflow;
    }
  return r;
}
}

template<unsigned int N, typename Ca>
ALWAYS_INLINE POLY_POLY_RESULT (N, Ca, Ca)
operator - (const poly_int_pod<N, Ca> &a)
{
  typedef POLY_POLY_RESULT (N, Ca, Ca)::t C;
  poly_int<N, C> r;
  for (unsigned int i = 0; i < N; ++i)
    r.coeffs[i] = -C (a.coeffs[i]);
  return r;
}

namespace wi {
/* Poly version of wi::neg, with the same interface.  */

template<unsigned int N, typename C>
poly_int<N, C>
neg (const poly_int_pod<N, C> &a, bool *overflow)
{
  poly_int<N, C> r;
  *overflow = false;
  bool suboverflow;
  for (unsigned int i = 0; i < N; ++i)
    {
      r.coeffs[i] = wi::neg (a.coeffs[i], &suboverflow);
      *overflow |= suboverflow;
    }
  return r;
}

/* Poly version of wi::sext, with the same interface.  */

template<unsigned int N, typename C>
inline POLY_POLY_RESULT (N, C, C)
sext (const poly_int_pod<N, C> &a, unsigned int precision)
{
  POLY_POLY_RESULT (N, C, C) r;
  for (unsigned int i = 0; i < N; ++i)
    r.coeffs[i] = wi::sext (a.coeffs[i], precision);
  return r;
}
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE POLY_SCALAR_RESULT (N, Ca, Cb)
operator * (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  typedef POLY_SCALAR_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a.coeffs[0]) * b;
  if (N == 2)
    r.coeffs[1] = C (a.coeffs[1]) * b;
  return r;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE SCALAR_POLY_RESULT (N, Ca, Cb)
operator * (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  typedef SCALAR_POLY_RESULT (N, Ca, Cb)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = C (a) * b.coeffs[0];
  if (N == 2)
    r.coeffs[1] = C (a) * b.coeffs[1];
  return r;
}

namespace wi {
/* Poly version of wi::mul, with the same interface.  */

template<unsigned int N, typename C>
poly_int<N, C>
mul (const poly_int_pod<N, C> &a, const C &b,
     signop sgn, bool *overflow)
{
  poly_int<N, C> r;
  *overflow = false;
  bool suboverflow;
  for (unsigned int i = 0; i < N; ++i)
    {
      r.coeffs[i] = wi::mul (a.coeffs[i], b, sgn, &suboverflow);
      *overflow |= suboverflow;
    }
  return r;
}
}

template<unsigned int N, typename Ca>
ALWAYS_INLINE POLY_POLY_RESULT (N, Ca, Ca)
operator << (const poly_int_pod<N, Ca> &a, unsigned int b)
{
  typedef POLY_POLY_RESULT (N, Ca, Ca)::t C;
  STATIC_ASSERT (N <= 2);
  poly_int<N, C> r;
  r.coeffs[0] = wi::lshift (C (a.coeffs[0]), b);
  if (N == 2)
    r.coeffs[1] = wi::lshift (C (a.coeffs[1]), b);
  return r;
}

/* Return true if a0 + a1 * x might equal b0 + b1 * x for some nonnegative
   integer x.  */

template<typename Ca, typename Cb>
inline bool
may_eq_2 (const Ca &a0, const Ca &a1, const Cb &b0, const Cb &b1)
{
  if (a1 != b1)
     /*      a0 + a1 * x == b0 + b1 * x
       ==> (a1 - b1) * x == b0 - a0
       ==>             x == (b0 - a0) / (a1 - b1)

       We need to test whether that's a valid value of x.
       (b0 - a0) and (a1 - b1) must not have opposite signs
       and the result must be integral.  */
    return ((a1 < b1 ? b0 <= a0 : b0 >= a0)
	    && (b0 - a0) % (a1 - b1) == 0);
  return a0 == b0;
}

/* Return true if a0 + a1 * x might equal b for some nonnegative
   integer x.  */

template<typename Ca, typename Cb>
inline bool
may_eq_2 (const Ca &a0, const Ca &a1, const Cb &b)
{
  if (a1 != 0)
     /*      a0 + a1 * x == b
       ==>             x == (b - a0) / a1

       We need to test whether that's a valid value of x.
       (b - a0) and a1 must not have opposite signs and the
       result must be integral.  For the latter test we use
       "a0 - b" rather than "b - a0" in order to cope with
       cases in which a0 is a wide_int.  */
    return ((a1 < 0 ? b <= a0 : b >= a0)
	    && (a0 - b) % a1 == 0);
  return a0 == b;
}

/* Return true if A might equal B for some indeterminate values.  */

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE bool
may_eq (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2)
    return may_eq_2 (a.coeffs[0], a.coeffs[1], b.coeffs[0], b.coeffs[1]);
  return a.coeffs[0] == b.coeffs[0];
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Cb>::bool_type
may_eq (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2)
    return may_eq_2 (a.coeffs[0], a.coeffs[1], b);
  return a.coeffs[0] == b;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Ca>::bool_type
may_eq (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2)
    return may_eq_2 (b.coeffs[0], b.coeffs[1], a);
  return a == b.coeffs[0];
}

template<typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly2<Ca, Cb>::bool_type
may_eq (const Ca &a, const Cb &b)
{
  return a == b;
}

/* Return true if A might not equal B for some indeterminate values.  */

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE bool
may_ne (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && a.coeffs[1] != b.coeffs[1])
    return true;
  return a.coeffs[0] != b.coeffs[0];
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Cb>::bool_type
may_ne (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && a.coeffs[1] != 0)
    return true;
  return a.coeffs[0] != b;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Ca>::bool_type
may_ne (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && 0 != b.coeffs[1])
    return true;
  return a != b.coeffs[0];
}

template<typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly2<Ca, Cb>::bool_type
may_ne (const Ca &a, const Cb &b)
{
  return a != b;
}

/* Return true if A must be equal to B.  */
#define must_eq(A, B) (!may_ne (A, B))

/* Return true if A must be unequal to B.  */
#define must_ne(A, B) (!may_eq (A, B))

/* Return true if A might be less than or equal to B for some
   indeterminate values.  */

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE bool
may_le (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && a.coeffs[1] < b.coeffs[1])
    return true;
  return a.coeffs[0] <= b.coeffs[0];
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Cb>::bool_type
may_le (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && a.coeffs[1] < 0)
    return true;
  return a.coeffs[0] <= b;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Ca>::bool_type
may_le (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && 0 < b.coeffs[1])
    return true;
  return a <= b.coeffs[0];
}

template<typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly2<Ca, Cb>::bool_type
may_le (const Ca &a, const Cb &b)
{
  return a <= b;
}

/* Return true if A might be less than B for some indeterminate values.  */

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE bool
may_lt (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && a.coeffs[1] < b.coeffs[1])
    return true;
  return a.coeffs[0] < b.coeffs[0];
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Cb>::bool_type
may_lt (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && a.coeffs[1] < 0)
    return true;
  return a.coeffs[0] < b;
}

template<unsigned int N, typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly<Ca>::bool_type
may_lt (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);
  if (N == 2 && 0 < b.coeffs[1])
    return true;
  return a < b.coeffs[0];
}

template<typename Ca, typename Cb>
ALWAYS_INLINE typename if_nonpoly2<Ca, Cb>::bool_type
may_lt (const Ca &a, const Cb &b)
{
  return a < b;
}

/* Return true if A may be greater than or equal to B.  */
#define may_ge(A, B) may_le (B, A)

/* Return true if A may be greater than B.  */
#define may_gt(A, B) may_lt (B, A)

/* Return true if A must be less than or equal to B.  */
#define must_le(A, B) (!may_gt (A, B))

/* Return true if A must be less than B.  */
#define must_lt(A, B) (!may_ge (A, B))

/* Return true if A must be greater than B.  */
#define must_gt(A, B) (!may_le (A, B))

/* Return true if A must be greater than or equal to B.  */
#define must_ge(A, B) (!may_lt (A, B))

/* Return true if A and B are ordered by the partial ordering must_le.  */

template<typename T1, typename T2>
inline bool
ordered_p (const T1 &a, const T2 &b)
{
  return must_le (a, b) || must_le (b, a);
}

/* Given that A and B are known to be ordered, return the minimum
   of the two.  */

template<unsigned int N, typename Ca, typename Cb>
inline POLY_POLY_RESULT (N, Ca, Cb)
ordered_min (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  if (must_le (a, b))
    return a;
  else
    {
      gcc_checking_assert (must_le (b, a));
      return b;
    }
}

template<unsigned int N, typename Ca, typename Cb>
inline SCALAR_POLY_RESULT (N, Ca, Cb)
ordered_min (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  if (must_le (a, b))
    return a;
  else
    {
      gcc_checking_assert (must_le (b, a));
      return b;
    }
}

template<unsigned int N, typename Ca, typename Cb>
inline POLY_SCALAR_RESULT (N, Ca, Cb)
ordered_min (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  if (must_le (a, b))
    return a;
  else
    {
      gcc_checking_assert (must_le (b, a));
      return b;
    }
}

/* Given that A and B are known to be ordered, return the maximum
   of the two.  */

template<unsigned int N, typename Ca, typename Cb>
inline POLY_POLY_RESULT (N, Ca, Cb)
ordered_max (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  if (must_le (a, b))
    return b;
  else
    {
      gcc_checking_assert (must_le (b, a));
      return a;
    }
}

template<unsigned int N, typename Ca, typename Cb>
inline SCALAR_POLY_RESULT (N, Ca, Cb)
ordered_max (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  if (must_le (a, b))
    return b;
  else
    {
      gcc_checking_assert (must_le (b, a));
      return a;
    }
}

template<unsigned int N, typename Ca, typename Cb>
inline POLY_SCALAR_RESULT (N, Ca, Cb)
ordered_max (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  if (must_le (a, b))
    return b;
  else
    {
      gcc_checking_assert (must_le (b, a));
      return a;
    }
}

/* Return a constant lower bound on the value of A, which is known
   to be nonnegative.  */

template<unsigned int N, typename Ca>
inline Ca
constant_lower_bound (const poly_int_pod<N, Ca> &a)
{
  gcc_checking_assert (must_ge (a, Ca (0)));
  return a.coeffs[0];
}

/* Return a value that is known to be no less than A and B, both of
   which are known to be nonnegative.  This will be the least upper
   bound for some indeterminate values but not necessarily for all.  */

template<unsigned int N, typename Ca, typename Cb>
inline POLY_SCALAR_RESULT (N, Ca, Cb)
upper_bound (const poly_int_pod<N, Ca> &a, const Cb &b)
{
  typedef POLY_SCALAR_RESULT (N, Ca, Cb)::t C;
  gcc_checking_assert (must_ge (a, Ca (0)));
  gcc_checking_assert (b >= Cb (0));
  poly_int<N, C> r;
  r.coeffs[0] = MAX (C (a.coeffs[0]), C (b));
  for (unsigned int i = 1; i < N; ++i)
    r.coeffs[i] = C (a.coeffs[i]);
  return r;
}

/* Return a value that is known to be no less than A and B, both of
   which are known to be nonnegative.  This will be the least upper
   bound for some indeterminate values but not necessarily for all.  */

template<unsigned int N, typename Ca, typename Cb>
inline POLY_POLY_RESULT (N, Ca, Cb)
upper_bound (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;
  gcc_checking_assert (must_ge (a, Ca (0)));
  gcc_checking_assert (must_ge (b, Cb (0)));
  poly_int<N, C> r;
  for (unsigned int i = 0; i < N; ++i)
    r.coeffs[i] = MAX (C (a.coeffs[i]), C (b.coeffs[i]));
  return r;
}

/* Return the greatest common divisor of all nonzero coefficients, or zero
   if all coefficients are zero.  */

template<unsigned int N, typename Ca>
inline Ca
coeff_gcd (const poly_int_pod<N, Ca> &a)
{
  /* Find the highest nonzero coefficient, stopping at index 0 whatever
     happens.  */
  unsigned int i;
  for (i = N - 1; i > 0; --i)
    if (a.coeffs[i] != 0)
      break;
  Ca r = a.coeffs[i];
  for (unsigned int j = 0; j < i; ++j)
    if (a.coeffs[j] != 0)
      r = gcd (r, a.coeffs[j]);
  return r;
}

/* Return a value that is a multiple of both A and B.  This will be the
   least common multiple for some indeterminate values but not
   necessarily for all.  */

template<unsigned int N, typename Ca, typename Cb>
POLY_SCALAR_RESULT (N, Ca, Cb)
common_multiple (const poly_int_pod<N, Ca> &a, Cb b)
{
  Ca xgcd = coeff_gcd (a);
  return a * (least_common_multiple (xgcd, b) / xgcd);
}

/* Likewise, but for two polynomial values.  */

template<unsigned int N, typename Ca, typename Cb>
POLY_POLY_RESULT (N, Ca, Cb)
common_multiple (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  STATIC_ASSERT (N <= 2);

  if (b.is_constant ())
    return common_multiple (a, b.coeffs[0]);

  if (a.is_constant ())
    return common_multiple (b, a.coeffs[0]);

  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;

  C lcm = least_common_multiple (a.coeffs[1], b.coeffs[1]);
  C amul = lcm / a.coeffs[1];
  C bmul = lcm / b.coeffs[1];
  gcc_checking_assert (a.coeffs[0] * amul == b.coeffs[0] * bmul);

  return a * amul;
}

/* Compare A and B for sorting purposes, returning -1 if A should come
   before B, 0 if A and B are identical, and 1 if A should come after B.
   This is a lexicographical compare of the coefficients in reverse order.

   A consequence of this is that all constant sizes come before all
   non-constant ones, regardless of magnitude.  This is what most
   callers want.  E.g. when laying data out on the stack, it's better to
   keep all the constant-sized data together so that it can be accessed
   as a constant offset from a single base.  */

template<unsigned int N, typename Ca, typename Cb>
inline int
compare_sizes_for_sort (const poly_int_pod<N, Ca> &a,
			const poly_int_pod<N, Cb> &b)
{
  for (unsigned int i = N; i-- > 0; )
    if (a.coeffs[i] != b.coeffs[i])
      return a.coeffs[i] < b.coeffs[i] ? -1 : 1;
  return 0;
}

/* Return true if we can calculate VALUE & (ALIGN - 1) at compile time.  */

template<unsigned int N, typename Ca, typename Cb>
inline bool
can_align_p (const poly_int_pod<N, Ca> &value, Cb align)
{
  for (unsigned int i = 1; i < N; i++)
    if ((value.coeffs[i] & (align - 1)) != 0)
      return false;
  return true;
}

/* Return true if we can align VALUE up to the smallest multiple of
   ALIGN that is >= VALUE.  Store the aligned value in *ALIGNED if so.  */

template<unsigned int N, typename Ca, typename Cb>
inline bool
can_align_up (const poly_int_pod<N, Ca> &value, Cb align,
	      poly_int<N, Ca> *aligned)
{
  if (!can_align_p (value, align))
    return false;
  *aligned = value + (-value.coeffs[0] & (align - 1));
  return true;
}

/* Return true if we can align VALUE down to the largest multiple of
   ALIGN that is <= VALUE.  Store the aligned value in *ALIGNED if so.  */

template<unsigned int N, typename Ca, typename Cb>
inline bool
can_align_down (const poly_int_pod<N, Ca> &value, Cb align,
		poly_int<N, Ca> *aligned)
{
  if (!can_align_p (value, align))
    return false;
  *aligned = value - (value.coeffs[0] & (align - 1));
  return true;
}

/* Return true if we can align A and B to the smallest multiples of
   ALIGN that are >= A and B respectively, and if doing so gives the
   same value.  */

template<unsigned int N, typename Ca, typename Cb, typename Cc>
inline bool
known_equal_after_align_up (const poly_int_pod<N, Ca> &a,
			    const poly_int_pod<N, Cb> &b,
			    Cc align)
{
  poly_int<N, Ca> aligned_a;
  poly_int<N, Cb> aligned_b;
  return (can_align_up (a, align, &aligned_a)
	  && can_align_up (b, align, &aligned_b)
	  && must_eq (aligned_a, aligned_b));
}

/* Return true if we can align A and B to the largest multiples of
   ALIGN that are <= A and B respectively, and if doing so gives the
   same value.  */

template<unsigned int N, typename Ca, typename Cb, typename Cc>
inline bool
known_equal_after_align_down (const poly_int_pod<N, Ca> &a,
			      const poly_int_pod<N, Cb> &b,
			      Cc align)
{
  poly_int<N, Ca> aligned_a;
  poly_int<N, Cb> aligned_b;
  return (can_align_down (a, align, &aligned_a)
	  && can_align_down (b, align, &aligned_b)
	  && must_eq (aligned_a, aligned_b));
}

/* Align VALUE up to the smallest multiple of ALIGN that is >= VALUE.

   NOTE: When using this function, please add a comment above the call
   explaining why we know the non-constant coefficients must already
   be a multiple of ALIGN.  */

template<unsigned int N, typename Ca, typename Cb>
inline poly_int<N, Ca>
force_align_up (const poly_int_pod<N, Ca> &value, Cb align)
{
  gcc_checking_assert (can_align_p (value, align));
  return value + (-value.coeffs[0] & (align - 1));
}

/* Align VALUE down to the largest multiple of ALIGN that is <= VALUE.

   NOTE: When using this function, please add a comment above the call
   explaining why we know the non-constant coefficients must already
   be a multiple of ALIGN.  */

template<unsigned int N, typename Ca, typename Cb>
inline poly_int<N, Ca>
force_align_down (const poly_int_pod<N, Ca> &value, Cb align)
{
  gcc_checking_assert (can_align_p (value, align));
  return value - (value.coeffs[0] & (align - 1));
}

/* Return a value <= VALUE that is a multiple of ALIGN.  It will be the
   greatest such value for some indeterminate values but not necessarily
   for all.  */

template<unsigned int N, typename Ca, typename Cb>
inline poly_int<N, Ca>
aligned_lower_bound (const poly_int_pod<N, Ca> &value, Cb align)
{
  gcc_checking_assert (ordered_p (value, Ca (0)));
  poly_int<N, Ca> r;
  for (unsigned int i = 0; i < N; ++i)
    r.coeffs[i] = value.coeffs[i] & -Ca (align);
  return r;
}

/* Return a value >= VALUE that is a multiple of ALIGN.  It will be the
   least such value for some indeterminate values but not necessarily
   for all.  */

template<unsigned int N, typename Ca, typename Cb>
inline poly_int<N, Ca>
aligned_upper_bound (const poly_int_pod<N, Ca> &value, Cb align)
{
  gcc_checking_assert (ordered_p (value, Ca (0)));
  poly_int<N, Ca> r;
  for (unsigned int i = 0; i < N; ++i)
    r.coeffs[i] = value.coeffs[i] + (-value.coeffs[i] & Ca (align - 1));
  return r;
}

/* Align VALUE down to the largest multiple of ALIGN that is <= VALUE,
   then divide by ALIGN.

   NOTE: When using this function, please add a comment above the call
   explaining why we know the non-constant coefficients must already
   be a multiple of ALIGN.  */

template<unsigned int N, typename Ca, typename Cb>
inline poly_int<N, Ca>
force_align_down_and_div (const poly_int_pod<N, Ca> &value, Cb align)
{
  gcc_checking_assert (can_align_p (value, align));
  poly_int<N, Ca> r;
  r.coeffs[0] = (value.coeffs[0] - (value.coeffs[0] & (align - 1))) / align;
  for (unsigned int i = 1; i < N; ++i)
    r.coeffs[i] = value.coeffs[i] / align;
  return r;
}

/* Align VALUE up to the smallest multiple of ALIGN that is >= VALUE,
   then divide by ALIGN.

   NOTE: When using this function, please add a comment above the call
   explaining why we know the non-constant coefficients must already
   be a multiple of ALIGN.  */

template<unsigned int N, typename Ca, typename Cb>
inline poly_int<N, Ca>
force_align_up_and_div (const poly_int_pod<N, Ca> &value, Cb align)
{
  gcc_checking_assert (can_align_p (value, align));
  poly_int<N, Ca> r;
  r.coeffs[0] = (value.coeffs[0] + (-value.coeffs[0] & (align - 1))) / align;
  for (unsigned int i = 1; i < N; ++i)
    r.coeffs[i] = value.coeffs[i] / align;
  return r;
}

/* Return true if we know at compile time the difference between VALUE
   and the equal or preceding multiple of ALIGN.  Store the value in
   *MISALIGN if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cm>
inline bool
known_misalignment (const poly_int_pod<N, Ca> &value, Cb align, Cm *misalign)
{
  gcc_checking_assert (align != 0);
  for (unsigned int i = 1; i < N; ++i)
    if ((value.coeffs[i] & (align - 1)) != 0)
      return false;
  *misalign = value.coeffs[0] & (align - 1);
  return true;
}

/* Return A & (ALIGN - 1), asserting that this value is known.  Please add
   a comment above calls to this function to explain why the condition
   is known to hold.  */

template<unsigned int N, typename Ca, typename Cb>
inline Ca
force_get_misalignment (const poly_int_pod<N, Ca> &a, Cb align)
{
  gcc_checking_assert (can_align_p (a, align));
  return a.coeffs[0] & (align - 1);
}

/* Return the maximum alignment that A is known to have.  Return 0
   if A is known to be zero.  */

template<unsigned int N, typename Ca>
inline Ca
known_alignment (const poly_int_pod<N, Ca> &a)
{
  /* Calculate the minimum alignment for each coefficient individually.
     Take the minimum of the nonzero values, all of which will be powers
     of 2.  */
  Ca r = a.coeffs[0] & -a.coeffs[0];
  for (unsigned int i = 1; i < N; ++i)
    {
      Ca part = a.coeffs[i] & -a.coeffs[i];
      r = (r & (part - 1)) | (part & (r - 1));
    }
  return r;
}

/* Return true if we can compute A | B at compile time, storing the
   result in RES if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cr>
inline typename if_nonpoly<Cb>::bool_type
can_ior_p (const poly_int_pod<N, Ca> &a, Cb b, Cr *result)
{
  STATIC_ASSERT (N <= 2);
  /* Coefficient 1 must be a multiple of something greater than B.  */
  if (N == 2
      && a.coeffs[1] != 0
      && (a.coeffs[1] & -a.coeffs[1]) < b)
    return false;
  *result = a;
  result->coeffs[0] |= b;
  return true;
}

/* Return true if A is a constant multiple of B, storing the
   multiple in *MULTIPLE if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cm>
inline typename if_nonpoly<Cb>::bool_type
constant_multiple_p (const poly_int_pod<N, Ca> &a, Cb b, Cm *multiple)
{
  /* Do the modulus before the constant check, to catch divide by
     zero errors.  */
  if (a.coeffs[0] % b != 0 || !a.is_constant ())
    return false;
  *multiple = a.coeffs[0] / b;
  return true;
}

/* Return true if A is a constant multiple of B, storing the multiple
   in *MULTIPLE if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cm>
inline bool
constant_multiple_p (const poly_int_pod<N, Ca> &a,
		     const poly_int_pod<N, Cb> &b, Cm *multiple)
{
  STATIC_ASSERT (N <= 2);
  if (b.is_constant ())
    return constant_multiple_p (a, b.coeffs[0], multiple);

  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;

  if (a.coeffs[1] % b.coeffs[1] != 0)
    return false;

  C r = a.coeffs[1] / b.coeffs[1];
  if (a.coeffs[0] % b.coeffs[0] != 0 || a.coeffs[0] / b.coeffs[0] != r)
    return false;

  *multiple = r;
  return true;
}

/* Return true if A is a (polynomial) multiple of B.  */

template<unsigned int N, typename Ca, typename Cb>
inline typename if_nonpoly<Cb>::bool_type
multiple_p (const poly_int_pod<N, Ca> &a, Cb b)
{
  for (unsigned int i = 0; i < N; ++i)
    if (a.coeffs[i] % b != 0)
      return false;
  return true;
}

/* Return true if B is a constant and A is a (constant) multiple of B.  */

template<unsigned int N, typename Ca, typename Cb>
inline typename if_nonpoly<Ca>::bool_type
multiple_p (Ca a, const poly_int_pod<N, Cb> &b)
{
  /* Do the modulus before the constant check, to catch a potential
     division by zero.  */
  return a % b.coeffs[0] == 0 && b.is_constant ();
}

/* Return true if A is a (polynomial) multiple of B.  This handles cases
   where either B is constant or the multiple is constant.  */

template<unsigned int N, typename Ca, typename Cb>
inline bool
multiple_p (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  if (b.is_constant ())
    return multiple_p (a, b.coeffs[0]);
  Ca tmp;
  return constant_multiple_p (a, b, &tmp);
}

/* Return true if A is a (constant) multiple of B, storing the
   multiple in *MULTIPLE if so.  */

template<typename Ca, typename Cb, typename Cm>
inline typename if_nonpoly2<Ca, Cb>::bool_type
multiple_p (Ca a, Cb b, Cm *multiple)
{
  if (a % b != 0)
    return false;
  *multiple = a / b;
  return true;
}

/* Return true if A is a (polynomial) multiple of B, storing the
   multiple in *MULTIPLE if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cm>
inline typename if_nonpoly<Cb>::bool_type
multiple_p (const poly_int_pod<N, Ca> &a, Cb b, poly_int_pod<N, Cm> *multiple)
{
  if (!multiple_p (a, b))
    return false;
  for (unsigned int i = 0; i < N; ++i)
    multiple->coeffs[i] = a.coeffs[i] / b;
  return true;
}

/* Return true if B is a constant and A is a (constant) multiple of B,
   storing the multiple in *MULTIPLE if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cm>
inline typename if_nonpoly<Ca>::bool_type
multiple_p (Ca a, const poly_int_pod<N, Cb> &b, Cm *multiple)
{
  /* Do the modulus before the constant check, to catch a potential
     division by zero.  */
  if (a % b.coeffs[0] != 0 || !b.is_constant ())
    return false;
  *multiple = a / b.coeffs[0];
  return true;
}

/* Return true if A is a (polynomial) multiple of B, storing the
   multiple in *MULTIPLE if so.  This handles cases where either
   B is constant or the multiple is constant.  */

template<unsigned int N, typename Ca, typename Cb, typename Cm>
inline bool
multiple_p (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b,
	    poly_int<N, Cm> *multiple)
{
  if (b.is_constant ())
    return multiple_p (a, b.coeffs[0], multiple);
  return constant_multiple_p (a, b, multiple);
}

/* Return A / B, given that A is known to be a multiple of B.  */

template<unsigned int N, typename Ca, typename Cb>
inline POLY_SCALAR_RESULT (N, Ca, Cb)
exact_div (const poly_int_pod<N, Ca> &a, Cb b)
{
  typedef POLY_SCALAR_RESULT (N, Ca, Cb)::t C;
  poly_int<N, C> r;
  for (unsigned int i = 0; i < N; ++i)
    {
      gcc_checking_assert (a.coeffs[i] % b == 0);
      r.coeffs[i] = a.coeffs[i] / b;
    }
  return r;
}

/* Return A / B, given that A is known to be a multiple of B.  */

template<unsigned int N, typename Ca, typename Cb>
inline typename if_nonpoly<Ca>::t
exact_div (const Ca &a, const poly_int_pod<N, Cb> &b)
{
  gcc_checking_assert (b.is_constant () && a % b.coeffs[0] == 0);
  return a / b.coeffs[0];
}

/* Return A / B, given that A is known to be a multiple of B.  */

template<unsigned int N, typename Ca, typename Cb>
inline POLY_POLY_RESULT (N, Ca, Cb)
exact_div (const poly_int_pod<N, Ca> &a, const poly_int_pod<N, Cb> &b)
{
  if (a.is_constant ())
    return exact_div (a.coeffs[0], b);

  if (b.is_constant ())
    return exact_div (a, b.coeffs[0]);

  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;

  gcc_checking_assert (a.coeffs[0] % b.coeffs[0] == 0);
  C r = C (a.coeffs[0]) / b.coeffs[0];
  for (unsigned int i = 1; i < N; ++i)
    gcc_checking_assert (a.coeffs[i] % b.coeffs[i] == 0
			 && a.coeffs[i] / b.coeffs[i] == r);
  return r;
}

/* Return true if there is some constant Q and polynomial r such that:

     (1) a = b * Q + r
     (2) 0 <= |b * Q| <= |a|
     (3) |r| < |b|

   Store the value Q in *QUOTIENT if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cq>
inline typename if_nonpoly2<Cb, Cq>::bool_type
can_div_trunc_p (const poly_int_pod<N, Ca> &a, Cb b, Cq *quotient)
{
  /* Do the division before the constant check, to catch divide by
     zero errors.  */
  Cq q = a.coeffs[0] / b;
  if (!a.is_constant ())
    return false;
  *quotient = q;
  return true;
}

/* Return true if there is some constant Q and polynomial r such that:

     (1) a = b * Q + r
     (2) 0 <= |b * Q| <= |a|
     (3) |r| < |b|

   Store the value Q in *QUOTIENT if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cq>
inline typename if_nonpoly<Cq>::bool_type
can_div_trunc_p (const poly_int_pod<N, Ca> &a,
		 const poly_int_pod<N, Cb> &b,
		 Cq *quotient)
{
  STATIC_ASSERT (N <= 2);
  if (b.is_constant ())
    return can_div_trunc_p (a, b.coeffs[0], quotient);

  /* For simplicity we only handle B that are always < 0 or always > 0
     (i.e. are ordered wrt 0).  This means that both coefficients of B
     are > 0 or both coefficients are < 0.

     If trunc (ai / bi) is the same for both coefficient indices i,
     the conditions hold with Q equal to that value.

     Sketch of a proof:

     If Q = trunc (ai / bi) then the three conditions hold for
     ai and bi:

       (1a) ai = bi * Q + ri
       (2a) 0 <= |bi * Q| <= |ai|
       (3a) |ri| < |bi|

     where ri == ai % bi.  From this we need to derive:

       (1z) (a0 + a1 * x) = (b0 + b1 * x) * Q + (r0 + r1 * x)
       (2z) 0 <= |(b0 + b1 * x) * Q| <= |a0 + a1 * x|
       (3z) |r0 + r1 * x| < |b0 + b1 * x|

     (1z) is a simple consequence of (1a).  In general:

       (a) |y + z| <= |y| + |z|

     And if y and z are known not to have opposite signs:

       (b) |y + z| = |y| + |z|

     Since x >= 0:

       (c) A * x <= B * x  if  A < B
       (d) x = |x| (and so |A| * x = |A * x| from distribution)

     so (3a) gives:

                    |r1| * x <= |b1| * x                 (from c)
             |r0| + |r1| * x <  |b0| + |b1| * x          (combining 3a)
             |r0| + |r1 * x| <  |b0| + |b1 * x|          (from d)
             |r0| + |r1 * x| <  |b0 + b1 * x|            (from b)
       |r0 + r1 * x| <= "  " <  |b0 + b1 * x|            (from a)

     for (3z).  If Q is zero then (2z) is trivially true.  If it's nonzero
     then ai must have the same sign as bi, otherwise (1a) and (2a) cannot
     both be true.  This means that a0 and a1 cannot have opposite signs,
     since b0 and b1 don't.  Then (2a) gives (2z) from similar steps:

                  0 <= |b1 * Q| * x <= |a1| * x          (from c)
       0 <= |b0 * Q| + |b1 * Q| * x <= |a0| + |a1| * x   (combining 2a)
       0 <= |b0 * Q| + |b1 * Q * x| <= |a0| + |a1 * x|   (from d)
         0 <= |b0 * Q + b1 * Q * x| <= |a0 + a1 * x|     (from b)  */
  typedef POLY_POLY_RESULT (N, Ca, Cb)::t C;

  C q = a.coeffs[1] / b.coeffs[1];
  if (a.coeffs[0] / b.coeffs[0] != q)
    return false;

  if (!ordered_p (b, Cb (0)))
    return false;

  *quotient = q;
  return true;
}

/* Likewise, but also store r in *REMAINDER.  */

template<unsigned int N, typename Ca, typename Cb, typename Cq, typename Cr>
inline typename if_nonpoly<Cq>::bool_type
can_div_trunc_p (const poly_int_pod<N, Ca> &a,
		 const poly_int_pod<N, Cb> &b,
		 Cq *quotient, Cr *remainder)
{
  if (!can_div_trunc_p (a, b, quotient))
    return false;
  *remainder = a - *quotient * b;
  return true;
}

/* Return true if there is some polynomial q and constant R such that:

     (1) a = B * q + R
     (2) 0 <= |B * q| <= |a|
     (3) |R| < |B|

   Store the value q in *QUOTIENT if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cq>
inline typename if_nonpoly<Cb>::bool_type
can_div_trunc_p (const poly_int_pod<N, Ca> &a, Cb b,
		 poly_int_pod<N, Cq> *quotient)
{
  /* The remainder must be constant.  */
  for (unsigned int i = 1; i < N; ++i)
    if (a.coeffs[i] % b != 0)
      return false;
  for (unsigned int i = 0; i < N; ++i)
    quotient->coeffs[i] = a.coeffs[i] / b;
  return true;
}

/* Likewise, but also store R in *REMAINDER.  */

template<unsigned int N, typename Ca, typename Cb, typename Cq, typename Cr>
inline typename if_nonpoly<Cb>::bool_type
can_div_trunc_p (const poly_int_pod<N, Ca> &a, Cb b,
		 poly_int_pod<N, Cq> *quotient, Cr *remainder)
{
  if (!can_div_trunc_p (a, b, quotient))
    return false;
  *remainder = a.coeffs[0] % b;
  return true;
}

/* Return true if there is some constant Q and polynomial r such that:

     (1) a = b * Q + r
     (2) 0 <= |a| <= |b * Q|
     (3) |r| < |b|

   Store the value Q in *QUOTIENT if so.  */

template<unsigned int N, typename Ca, typename Cb, typename Cq>
inline typename if_nonpoly<Cq>::bool_type
can_div_away_from_zero_p (const poly_int_pod<N, Ca> &a,
			  const poly_int_pod<N, Cb> &b,
			  Cq *quotient)
{
  if (!can_div_trunc_p (a, b, quotient))
    return false;
  if (may_ne (*quotient * b, a))
    *quotient += (*quotient < 0 ? -1 : 1);
  return true;
}

/* Use print_dec to print VALUE to FILE, where SGN is the sign
   of the values.  */

template<unsigned int N, typename C>
void
print_dec (const poly_int_pod<N, C> &value, FILE *file, signop sgn)
{
  if (value.is_constant ())
    print_dec (value.coeffs[0], file, sgn);
  else
    {
      fprintf (file, "[");
      for (unsigned int i = 0; i < N; ++i)
	{
	  print_dec (value.coeffs[i], file, sgn);
	  fputc (i == N - 1 ? ']' : ',', file);
	}
    }
}

#undef POLY_SCALAR_RESULT
#undef SCALAR_POLY_RESULT
#undef POLY_POLY_RESULT

/* Return true if the two ranges [POS1, POS1 + SIZE1) and [POS2, POS2 + SIZE2)
   might overlap.  SIZE1 and/or SIZE2 can be the special value -1, in which
   case the range is open-ended.  */

template<typename T1, typename T2, typename T3, typename T4>
static inline bool
ranges_may_overlap_p (const T1 &pos1, const T2 &size1,
		      const T3 &pos2, const T4 &size2)
{
  /* The checks are written this way so that we can cope with signed
     offsets and unsigned sizes.  The "+ (x - x)"s avoid warnings about
     comparisons between signed and unsigned in that case.  */
  if (may_ge (pos1, pos2)
      && (must_eq (size2, (size2 - size2) - 1)
	  || may_lt (pos1 - pos2 + (size2 - size2), size2)))
    return true;
  if (may_ge (pos2, pos1)
      && (must_eq (size1, (size1 - size1) - 1)
	  || may_lt (pos2 - pos1 + (size1 - size1), size1)))
    return true;

  return false;
}

/* Return true if range [POS1, POS1 + SIZE1) is known to be a subrange of
   [POS2, POS2 + SIZE2).  SIZE1 and/or SIZE2 can be the special value -1,
   in which case the range is open-ended.  */

template<typename T1, typename T2, typename T3, typename T4>
static inline bool
known_subrange_p (const T1 &pos1, const T2 &size1,
		  const T3 &pos2, const T4 &size2)
{
  return (may_ne (size1, (size1 - size1) - 1)
	  && may_ne (size2, (size2 - size2) - 1)
	  && must_ge (pos1, pos2)
	  && must_le (pos1 - pos2 + size1 + (size2 - size2), size2));
}

/* Return true if range [POS, POS + SIZE) is known to include VAL.
   SIZE can be the special value -1, in which case the range is
   open-ended.  */

template<typename T1, typename T2, typename T3>
static inline bool
known_in_range_p (const T1 &val, const T2 &pos, const T3 &size)
{
  return (may_ne (size, (size - size) - 1)
	  && must_ge (val, pos)
	  && must_lt (val - pos + (size - size), size));
}

/* Return true if range [POS, POS + SIZE) might include VAL.
   SIZE can be the special value -1, in which case the range is
   open-ended.  */

template<typename T1, typename T2, typename T3>
static inline bool
maybe_in_range_p (const T1 &val, const T2 &pos, const T3 &size)
{
  return (must_eq (size, (size - size) - 1)
	  || (may_ge (val, pos) && may_lt (val, pos + size)));
}

template<unsigned int N, typename C>
void
gt_ggc_mx (poly_int_pod<N, C> *)
{
}

template<unsigned int N, typename C>
void
gt_pch_nx (poly_int_pod<N, C> *)
{
}

template<unsigned int N, typename C>
void
gt_pch_nx (poly_int_pod<N, C> *, void (*) (void *, void *), void *)
{
}

#endif
/* Machine mode definitions for GCC; included by rtl.h and tree.h.
   Copyright (C) 1991-2016 Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

#ifndef HAVE_MACHINE_MODES
#define HAVE_MACHINE_MODES

extern CONST_MODE_SIZE poly_uint16_pod mode_size[NUM_MACHINE_MODES];
extern CONST_MODE_PRECISION poly_uint16_pod mode_precision[NUM_MACHINE_MODES];
extern CONST_MODE_NUNITS poly_uint16_pod mode_nunits[NUM_MACHINE_MODES];
extern CONST_MODE_UNIT_SIZE unsigned char mode_unit_size[NUM_MACHINE_MODES];
extern const unsigned char mode_wider[NUM_MACHINE_MODES];
extern const unsigned char mode_2xwider[NUM_MACHINE_MODES];

/* Get the name of mode MODE as a string.  */

extern const char * const mode_name[NUM_MACHINE_MODES];
#define GET_MODE_NAME(MODE)  mode_name[MODE]

/* Mode classes.  */

#include "mode-classes.def"
#define DEF_MODE_CLASS(M) M
enum mode_class { MODE_CLASSES, MAX_MODE_CLASS };
#undef DEF_MODE_CLASS
#undef MODE_CLASSES

/* Get the general kind of object that mode MODE represents
   (integer, floating, complex, etc.)  */

extern const unsigned char mode_class[NUM_MACHINE_MODES];
#define GET_MODE_CLASS(MODE)  ((enum mode_class) mode_class[MODE])

/* Nonzero if MODE is an integral mode.  */
#define INTEGRAL_MODE_P(MODE)			\
  (GET_MODE_CLASS (MODE) == MODE_INT		\
   || GET_MODE_CLASS (MODE) == MODE_PARTIAL_INT \
   || GET_MODE_CLASS (MODE) == MODE_COMPLEX_INT \
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_BOOL \
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_INT)

/* Nonzero if MODE is a floating-point mode.  */
#define FLOAT_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_FLOAT	\
   || GET_MODE_CLASS (MODE) == MODE_DECIMAL_FLOAT \
   || GET_MODE_CLASS (MODE) == MODE_COMPLEX_FLOAT \
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_FLOAT)

/* Nonzero if MODE is a complex mode.  */
#define COMPLEX_MODE_P(MODE)			\
  (GET_MODE_CLASS (MODE) == MODE_COMPLEX_INT	\
   || GET_MODE_CLASS (MODE) == MODE_COMPLEX_FLOAT)

/* Nonzero if MODE is a vector mode.  */
#define VECTOR_MODE_P(MODE)				\
  (GET_MODE_CLASS (MODE) == MODE_VECTOR_BOOL		\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_INT		\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_FLOAT	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_FRACT	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_UFRACT	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_ACCUM	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_UACCUM)

/* Nonzero if MODE is a scalar integral mode.  */
#define SCALAR_INT_MODE_P(MODE)			\
  (GET_MODE_CLASS (MODE) == MODE_INT		\
   || GET_MODE_CLASS (MODE) == MODE_PARTIAL_INT)

/* Nonzero if MODE is a scalar floating point mode.  */
#define SCALAR_FLOAT_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_FLOAT		\
   || GET_MODE_CLASS (MODE) == MODE_DECIMAL_FLOAT)

/* Nonzero if MODE is a decimal floating point mode.  */
#define DECIMAL_FLOAT_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_DECIMAL_FLOAT)

/* Nonzero if MODE is a scalar fract mode.  */
#define SCALAR_FRACT_MODE_P(MODE)	\
  (GET_MODE_CLASS (MODE) == MODE_FRACT)

/* Nonzero if MODE is a scalar ufract mode.  */
#define SCALAR_UFRACT_MODE_P(MODE)	\
  (GET_MODE_CLASS (MODE) == MODE_UFRACT)

/* Nonzero if MODE is a scalar fract or ufract mode.  */
#define ALL_SCALAR_FRACT_MODE_P(MODE)	\
  (SCALAR_FRACT_MODE_P (MODE) || SCALAR_UFRACT_MODE_P (MODE))

/* Nonzero if MODE is a scalar accum mode.  */
#define SCALAR_ACCUM_MODE_P(MODE)	\
  (GET_MODE_CLASS (MODE) == MODE_ACCUM)

/* Nonzero if MODE is a scalar uaccum mode.  */
#define SCALAR_UACCUM_MODE_P(MODE)	\
  (GET_MODE_CLASS (MODE) == MODE_UACCUM)

/* Nonzero if MODE is a scalar accum or uaccum mode.  */
#define ALL_SCALAR_ACCUM_MODE_P(MODE)	\
  (SCALAR_ACCUM_MODE_P (MODE) || SCALAR_UACCUM_MODE_P (MODE))

/* Nonzero if MODE is a scalar fract or accum mode.  */
#define SIGNED_SCALAR_FIXED_POINT_MODE_P(MODE)	\
  (SCALAR_FRACT_MODE_P (MODE) || SCALAR_ACCUM_MODE_P (MODE))

/* Nonzero if MODE is a scalar ufract or uaccum mode.  */
#define UNSIGNED_SCALAR_FIXED_POINT_MODE_P(MODE)	\
  (SCALAR_UFRACT_MODE_P (MODE) || SCALAR_UACCUM_MODE_P (MODE))

/* Nonzero if MODE is a scalar fract, ufract, accum or uaccum mode.  */
#define ALL_SCALAR_FIXED_POINT_MODE_P(MODE)	\
  (SIGNED_SCALAR_FIXED_POINT_MODE_P (MODE)	\
   || UNSIGNED_SCALAR_FIXED_POINT_MODE_P (MODE))

/* Nonzero if MODE is a scalar/vector fract mode.  */
#define FRACT_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_FRACT	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_FRACT)

/* Nonzero if MODE is a scalar/vector ufract mode.  */
#define UFRACT_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_UFRACT	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_UFRACT)

/* Nonzero if MODE is a scalar/vector fract or ufract mode.  */
#define ALL_FRACT_MODE_P(MODE)		\
  (FRACT_MODE_P (MODE) || UFRACT_MODE_P (MODE))

/* Nonzero if MODE is a scalar/vector accum mode.  */
#define ACCUM_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_ACCUM	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_ACCUM)

/* Nonzero if MODE is a scalar/vector uaccum mode.  */
#define UACCUM_MODE_P(MODE)		\
  (GET_MODE_CLASS (MODE) == MODE_UACCUM	\
   || GET_MODE_CLASS (MODE) == MODE_VECTOR_UACCUM)

/* Nonzero if MODE is a scalar/vector accum or uaccum mode.  */
#define ALL_ACCUM_MODE_P(MODE)		\
  (ACCUM_MODE_P (MODE) || UACCUM_MODE_P (MODE))

/* Nonzero if MODE is a scalar/vector fract or accum mode.  */
#define SIGNED_FIXED_POINT_MODE_P(MODE)		\
  (FRACT_MODE_P (MODE) || ACCUM_MODE_P (MODE))

/* Nonzero if MODE is a scalar/vector ufract or uaccum mode.  */
#define UNSIGNED_FIXED_POINT_MODE_P(MODE)	\
  (UFRACT_MODE_P (MODE) || UACCUM_MODE_P (MODE))

/* Nonzero if MODE is a scalar/vector fract, ufract, accum or uaccum mode.  */
#define ALL_FIXED_POINT_MODE_P(MODE)		\
  (SIGNED_FIXED_POINT_MODE_P (MODE)		\
   || UNSIGNED_FIXED_POINT_MODE_P (MODE))

/* Nonzero if CLASS modes can be widened.  */
#define CLASS_HAS_WIDER_MODES_P(CLASS)         \
  (CLASS == MODE_INT                           \
   || CLASS == MODE_PARTIAL_INT                \
   || CLASS == MODE_FLOAT                      \
   || CLASS == MODE_DECIMAL_FLOAT              \
   || CLASS == MODE_COMPLEX_FLOAT              \
   || CLASS == MODE_FRACT                      \
   || CLASS == MODE_UFRACT                     \
   || CLASS == MODE_ACCUM                      \
   || CLASS == MODE_UACCUM)

/* Nonzero if MODE is a pointer bounds mode.  */
#define POINTER_BOUNDS_MODE_P(MODE)      \
  (GET_MODE_CLASS (MODE) == MODE_POINTER_BOUNDS)

/* Return the base GET_MODE_SIZE value for MODE.  */

ALWAYS_INLINE poly_uint16
mode_to_bytes (machine_mode_enum mode)
{
#if GCC_VERSION >= 4001
  return __builtin_constant_p (mode) ?
	 mode_size_inline (mode) : mode_size[mode];
#else
  return mode_size[mode];
#endif
}

/* Return the base GET_MODE_BITSIZE value for MODE.  */

ALWAYS_INLINE poly_uint16
mode_to_bits (machine_mode_enum mode)
{
  return mode_to_bytes (mode) * BITS_PER_UNIT;
}

/* Return the base GET_MODE_PRECISION value for MODE.  */

ALWAYS_INLINE poly_uint16
mode_to_precision (machine_mode_enum mode)
{
  return mode_precision[mode];
}

/* Return the base GET_MODE_NUNITS value for MODE.  */

ALWAYS_INLINE poly_uint16
mode_to_nunits (machine_mode_enum mode)
{
#if GCC_VERSION >= 4001
  return __builtin_constant_p (mode) ?
         mode_nunits_inline (mode) : mode_nunits[mode];
#else
  return mode_nunits[mode];
#endif
}

/* An optional T (i.e. a T or nothing), where T is some form of mode
   class.  The dereference operator (*) returns the T value.  */
template<typename T>
class opt_mode
{
public:
  ALWAYS_INLINE opt_mode () : m_mode (E_VOIDmode) {}
  ALWAYS_INLINE opt_mode (const T &m) : m_mode (m) {}
  machine_mode_enum else_void () const;
  machine_mode_enum else_blk () const;
  T operator * () const;

  /* Return true if the object contains a T rather than nothing.  */
  ALWAYS_INLINE bool exists () const { return m_mode != E_VOIDmode; }
  template<typename U> bool exists (U *) const;

private:
  machine_mode_enum m_mode;
};

/* If the object contains a T, return its enum value, otherwise return
   E_VOIDmode.  */

template<typename T>
ALWAYS_INLINE machine_mode_enum
opt_mode<T>::else_void () const
{
  return m_mode;
}

/* If the T exists, return its enum value, otherwise return E_BLKmode.  */

template<typename T>
inline machine_mode_enum
opt_mode<T>::else_blk () const
{
  return m_mode == E_VOIDmode ? E_BLKmode : m_mode;
}

/* Assert that the object contains a T and return it.  */

template<typename T>
inline T
opt_mode<T>::operator * () const
{
  gcc_checking_assert (m_mode != E_VOIDmode);
  return T::from_int (m_mode);
}

/* Return true if the object contains a T, storing it in *MODE if so.  */

template<typename T>
template<typename U>
inline bool
opt_mode<T>::exists (U *mode) const
{
  if (m_mode != E_VOIDmode)
    {
      *mode = T::from_int (m_mode);
      return true;
    }
  return false;
}

/* A POD version of mode class T.  */

template<typename T>
struct pod_mode
{
  typedef typename T::measurement_type measurement_type;

  machine_mode_enum m_mode;
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }
  ALWAYS_INLINE operator T () const { return T::from_int (m_mode); }
  ALWAYS_INLINE pod_mode &operator = (const T &m) { m_mode = m; return *this; }
};

/* Return true if mode M has type T.  */

template<typename T>
inline bool
is_a (machine_mode_enum m)
{
  return T::includes_p (m);
}

/* Assert that mode M has type T, and return it in that form.  */

template<typename T>
inline T
as_a (machine_mode_enum m)
{
  gcc_checking_assert (T::includes_p (m));
  return T::from_int (m);
}

/* Convert M to an opt_mode<T>.  */

template<typename T>
inline opt_mode<T>
dyn_cast (machine_mode_enum m)
{
  if (T::includes_p (m))
    return T::from_int (m);
  return opt_mode<T> ();
}

/* Return true if mode M has type T, storing it as a T in *RESULT
   if so.  */

template<typename T, typename U>
inline bool
is_a (machine_mode_enum m, U *result)
{
  if (T::includes_p (m))
    {
      *result = T::from_int (m);
      return true;
    }
  return false;
}

/* Represents a machine mode that is known to be a SCALAR_INT_MODE_P.  */
class scalar_int_mode
{
public:
  typedef unsigned short measurement_type;

  ALWAYS_INLINE scalar_int_mode () {}
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }

  static bool includes_p (machine_mode_enum);
  static scalar_int_mode from_int (int);

protected:
  ALWAYS_INLINE scalar_int_mode (machine_mode_enum m) : m_mode (m) {}

  machine_mode_enum m_mode;
};

/* Return true if M is a scalar_int_mode.  */

inline bool
scalar_int_mode::includes_p (machine_mode_enum m)
{
  return SCALAR_INT_MODE_P (m);
}

/* Return M as a scalar_int_mode.  This function should only be used by
   utility functions; general code should use as_a<T> instead.  */

ALWAYS_INLINE scalar_int_mode
scalar_int_mode::from_int (int i)
{
  return machine_mode_enum (i);
}

/* Represents a machine mode that is known to be a SCALAR_FLOAT_MODE_P.  */
class scalar_float_mode
{
public:
  typedef unsigned short measurement_type;

  ALWAYS_INLINE scalar_float_mode () {}
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }

  static bool includes_p (machine_mode_enum);
  static scalar_float_mode from_int (int i);

protected:
  ALWAYS_INLINE scalar_float_mode (machine_mode_enum m) : m_mode (m) {}

  machine_mode_enum m_mode;
};

/* Return true if M is a scalar_float_mode.  */

inline bool
scalar_float_mode::includes_p (machine_mode_enum m)
{
  return SCALAR_FLOAT_MODE_P (m);
}

/* Return M as a scalar_float_mode.  This function should only be used by
   utility functions; general code should use as_a<T> instead.  */

ALWAYS_INLINE scalar_float_mode
scalar_float_mode::from_int (int i)
{
  return machine_mode_enum (i);
}

/* Represents a machine mode that is known to be scalar.  All properties
   (size, precision, etc.) are compile-time constants.  */
class scalar_mode
{
public:
  typedef unsigned short measurement_type;

  ALWAYS_INLINE scalar_mode () {}
  ALWAYS_INLINE scalar_mode (const scalar_int_mode &m) : m_mode (m) {}
  ALWAYS_INLINE scalar_mode (const scalar_float_mode &m) : m_mode (m) {}
  ALWAYS_INLINE scalar_mode (const scalar_int_mode_pod &m) : m_mode (m) {}
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }

  static bool includes_p (machine_mode_enum);
  static scalar_mode from_int (int);

protected:
  ALWAYS_INLINE scalar_mode (machine_mode_enum m) : m_mode (m) {}

  machine_mode_enum m_mode;
};

/* Return true if M represents some kind of scalar value.  */

inline bool
scalar_mode::includes_p (machine_mode_enum m)
{
  switch (GET_MODE_CLASS (m))
    {
    case MODE_INT:
    case MODE_PARTIAL_INT:
    case MODE_FRACT:
    case MODE_UFRACT:
    case MODE_ACCUM:
    case MODE_UACCUM:
    case MODE_FLOAT:
    case MODE_DECIMAL_FLOAT:
    case MODE_POINTER_BOUNDS:
      return true;
    default:
      return false;
    }
}

/* Return M as a scalar_mode.  This function should only be used by
   utility functions; general code should use as_a<T> instead.  */

ALWAYS_INLINE scalar_mode
scalar_mode::from_int (int i)
{
  return machine_mode_enum (i);
}

/* Represents a machine mode that is known to be a COMPLEX_MODE_P.  */
class complex_mode
{
public:
  typedef unsigned short measurement_type;

  ALWAYS_INLINE complex_mode () {}
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }

  static bool includes_p (machine_mode_enum);
  static complex_mode from_int (int);

protected:
  ALWAYS_INLINE complex_mode (machine_mode_enum m) : m_mode (m) {}

  machine_mode_enum m_mode;
};

/* Return true if M is a complex_mode.  */

inline bool
complex_mode::includes_p (machine_mode_enum m)
{
  return COMPLEX_MODE_P (m);
}

/* Return M as a complex_mode.  This function should only be used by
   utility functions; general code should use as_a<T> instead.  */

ALWAYS_INLINE complex_mode
complex_mode::from_int (int i)
{
  return machine_mode_enum (i);
}

/* Represents a general machine mode (scalar or non-scalar).  */
class machine_mode
{
public:
  typedef poly_uint16 measurement_type;

  ALWAYS_INLINE machine_mode () {}
  template<typename T>
  ALWAYS_INLINE machine_mode (const T &m) : m_mode (m) {}
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }

  static ALWAYS_INLINE bool includes_p (machine_mode_enum) { return true; }
  static machine_mode from_int (int i);

protected:
  machine_mode_enum m_mode;
};

ALWAYS_INLINE machine_mode
machine_mode::from_int (int i)
{
  return (machine_mode_enum) i;
}

/* Represents a machine mode that must have a fixed size.  The main
   use of this class is to represent the modes of objects that always
   have static storage duration, such as constant pool entries.
   (No current target supports the concept of variable-size static data.)  */
class fixed_size_mode
{
protected:
  ALWAYS_INLINE fixed_size_mode (machine_mode_enum m) : m_mode (m) {}

  machine_mode_enum m_mode;

public:
  typedef unsigned short measurement_type;

  ALWAYS_INLINE fixed_size_mode () {}
  ALWAYS_INLINE fixed_size_mode (const scalar_mode &m) : m_mode (m) {}
  ALWAYS_INLINE fixed_size_mode (const scalar_int_mode &m) : m_mode (m) {}
  ALWAYS_INLINE fixed_size_mode (const scalar_float_mode &m) : m_mode (m) {}
  ALWAYS_INLINE fixed_size_mode (const scalar_mode_pod &m) : m_mode (m) {}
  ALWAYS_INLINE fixed_size_mode (const scalar_int_mode_pod &m) : m_mode (m) {}
  ALWAYS_INLINE fixed_size_mode (const complex_mode &m) : m_mode (m) {}
  ALWAYS_INLINE operator machine_mode_enum () const { return m_mode; }

  static bool includes_p (machine_mode_enum);
  static fixed_size_mode from_int (int i);
};

/* Return true if MODE has a fixed size.  */

inline bool
fixed_size_mode::includes_p (machine_mode_enum mode)
{
  return mode_to_bytes (mode).is_constant ();
}

/* Return M as a fixed_size_mode.  This function should only be used by
   utility functions; general code should use as_a<T> instead.  */

ALWAYS_INLINE fixed_size_mode
fixed_size_mode::from_int (int i)
{
  return machine_mode_enum (i);
}

/* Get the size in bytes of an object of mode MODE.  */

#if POLY_INT_CONVERSION
#define GET_MODE_SIZE(MODE) ((unsigned short) mode_to_bytes (MODE).coeffs[0])
#else
ALWAYS_INLINE poly_uint16
GET_MODE_SIZE (machine_mode_enum mode)
{
  return mode_to_bytes (mode);
}

template<typename T>
ALWAYS_INLINE typename if_poly<typename T::measurement_type>::t
GET_MODE_SIZE (const T &mode)
{
  return mode_to_bytes (mode);
}

template<typename T>
ALWAYS_INLINE typename if_nonpoly<typename T::measurement_type>::t
GET_MODE_SIZE (const T &mode)
{
  return mode_to_bytes (mode).coeffs[0];
}
#endif

/* Get the size in bits of an object of mode MODE.  */

#if POLY_INT_CONVERSION
#define GET_MODE_BITSIZE(MODE) ((unsigned short) mode_to_bits (MODE).coeffs[0])
#else
ALWAYS_INLINE poly_uint16
GET_MODE_BITSIZE (machine_mode_enum mode)
{
  return mode_to_bits (mode);
}

template<typename T>
ALWAYS_INLINE typename if_poly<typename T::measurement_type>::t
GET_MODE_BITSIZE (const T &mode)
{
  return mode_to_bits (mode);
}

template<typename T>
ALWAYS_INLINE typename if_nonpoly<typename T::measurement_type>::t
GET_MODE_BITSIZE (const T &mode)
{
  return mode_to_bits (mode).coeffs[0];
}
#endif

/* Get the number of value bits of an object of mode MODE.  */

#if POLY_INT_CONVERSION
#define GET_MODE_PRECISION(MODE) \
  ((unsigned short) mode_to_precision (MODE).coeffs[0])
#else
ALWAYS_INLINE poly_uint16
GET_MODE_PRECISION (machine_mode_enum mode)
{
  return mode_to_precision (mode);
}

template<typename T>
ALWAYS_INLINE typename if_poly<typename T::measurement_type>::t
GET_MODE_PRECISION (const T &mode)
{
  return mode_to_precision (mode);
}

template<typename T>
ALWAYS_INLINE typename if_nonpoly<typename T::measurement_type>::t
GET_MODE_PRECISION (const T &mode)
{
  return mode_to_precision (mode).coeffs[0];
}
#endif

/* Get the number of integral bits of an object of mode MODE.  */
extern CONST_MODE_IBIT unsigned char mode_ibit[NUM_MACHINE_MODES];
#define GET_MODE_IBIT(MODE) mode_ibit[MODE]

/* Get the number of fractional bits of an object of mode MODE.  */
extern CONST_MODE_FBIT unsigned char mode_fbit[NUM_MACHINE_MODES];
#define GET_MODE_FBIT(MODE) mode_fbit[MODE]

/* Get a bitmask containing 1 for all bits in a word
   that fit within mode MODE.  */

extern const unsigned HOST_WIDE_INT mode_mask_array[NUM_MACHINE_MODES];

#define GET_MODE_MASK(MODE) mode_mask_array[MODE]

/* Return the mode of the basic parts of MODE.  For vector modes this is the
   mode of the vector elements.  For complex modes it is the mode of the real
   and imaginary parts.  For other modes it is MODE itself.  */

extern const unsigned char mode_inner[NUM_MACHINE_MODES];
#if GCC_VERSION >= 4001
#define GET_MODE_INNER(MODE) \
  (scalar_mode::from_int (__builtin_constant_p (MODE) \
			   ? mode_inner_inline (MODE) : mode_inner[MODE]))
#else
#define GET_MODE_INNER(MODE) \
  (scalar_mode::from_int (machine_mode_enum (mode_inner[MODE])))
#endif

/* Get the size in bytes or bits of the basic parts of an
   object of mode MODE.  */

extern CONST_MODE_UNIT_SIZE unsigned char mode_unit_size[NUM_MACHINE_MODES];
#if GCC_VERSION >= 4001
#define GET_MODE_UNIT_SIZE(MODE) \
  ((unsigned char) (__builtin_constant_p (MODE) \
		   ? mode_unit_size_inline (MODE) : mode_unit_size[MODE]))
#else
#define GET_MODE_UNIT_SIZE(MODE) mode_unit_size[MODE]
#endif

#define GET_MODE_UNIT_BITSIZE(MODE) \
  ((unsigned short) (GET_MODE_UNIT_SIZE (MODE) * BITS_PER_UNIT))

extern const unsigned short mode_unit_precision[NUM_MACHINE_MODES];
#if GCC_VERSION >= 4001
#define GET_MODE_UNIT_PRECISION(MODE) \
  ((unsigned short) (__builtin_constant_p (MODE) \
		    ? mode_unit_precision_inline (MODE) \
		    : mode_unit_precision[MODE]))
#else
#define GET_MODE_UNIT_PRECISION(MODE) mode_unit_precision[MODE]
#endif

/* Get the number of units in an object of mode MODE.  This is 2 for
   complex modes and the number of elements for vector modes.  */

#if POLY_INT_CONVERSION
#define GET_MODE_NUNITS(MODE) (mode_to_nunits (MODE).coeffs[0])
#else
ALWAYS_INLINE poly_uint16
GET_MODE_NUNITS (machine_mode_enum mode)
{
  return mode_to_nunits (mode);
}

template<typename T>
ALWAYS_INLINE typename if_poly<typename T::measurement_type>::t
GET_MODE_NUNITS (const T &mode)
{
  return mode_to_nunits (mode);
}

template<typename T>
ALWAYS_INLINE typename if_nonpoly<typename T::measurement_type>::t
GET_MODE_NUNITS (const T &mode)
{
  return mode_to_nunits (mode).coeffs[0];
}
#endif

/* Get the next wider natural mode (eg, QI -> HI -> SI -> DI -> TI).  */

template<typename T>
inline opt_mode<T>
GET_MODE_WIDER_MODE (const T &m)
{
  machine_mode_enum wider = (machine_mode_enum) mode_wider[m];
  if (wider != E_VOIDmode)
    return T::from_int (wider);
  return opt_mode<T> ();
}

/* For scalars, this is a mode with twice the precision.  For vectors,
   this is a mode with the same inner mode but with twice the elements.  */

template<typename T>
inline opt_mode<T>
GET_MODE_2XWIDER_MODE (const T &m)
{
  machine_mode_enum wider = (machine_mode_enum) mode_2xwider[m];
  if (wider != E_VOIDmode)
    return T::from_int (wider);
  return opt_mode<T> ();
}

/* Get the complex mode from the component mode.  */
extern const unsigned char mode_complex[NUM_MACHINE_MODES];
#define GET_MODE_COMPLEX_MODE(MODE) \
  (machine_mode ((machine_mode_enum) mode_complex[MODE]))

/* Wrapper for mode arguments to target macros, so that if a target
   doesn't need polynomial-sized modes, its header file can continue
   to treat everything as fixed_size_mode.  This should go away once
   macros are moved to target hooks.  It shouldn't be used in other
   contexts.  */
#if NUM_POLY_INT_COEFFS == 1
#define MACRO_MODE(MODE) (as_a <fixed_size_mode> (MODE))
#else
#define MACRO_MODE(MODE) (MODE)
#endif

/* Return the mode for data of a given size SIZE and mode class CLASS.
   If LIMIT is nonzero, then don't use modes bigger than MAX_FIXED_MODE_SIZE.
   The value is BLKmode if no other mode is found.  */

extern machine_mode mode_for_size (poly_int64, enum mode_class, int);

/* As above, but for specific mode classes.  */

extern opt_scalar_int_mode int_mode_for_size (poly_int64, int);
extern opt_scalar_float_mode float_mode_for_size (poly_int64);

/* Similar to mode_for_size, but find the smallest mode for a given width.  */

extern machine_mode smallest_mode_for_size (poly_int64, enum mode_class);

/* As above, but for specific mode classes.  */

extern scalar_int_mode smallest_int_mode_for_size (poly_int64);

/* Return an integer mode of exactly the same size as the input mode.  */

extern opt_scalar_int_mode int_mode_for_mode (machine_mode);

extern machine_mode bitwise_mode_for_mode (machine_mode);

/* Return a mode that is suitable for representing a vector,
   or BLKmode on failure.  */

extern machine_mode mode_for_vector (scalar_mode, poly_int64);

/* A class for iterating through possible bitfield modes.  */
class bit_field_mode_iterator
{
public:
  bit_field_mode_iterator (HOST_WIDE_INT, HOST_WIDE_INT,
			   poly_int64, poly_int64,
			   unsigned int, bool);
  bool next_mode (scalar_int_mode *);
  bool prefer_smaller_modes ();

private:
  opt_scalar_int_mode m_mode;
  /* We use signed values here because the bit position can be negative
     for invalid input such as gcc.dg/pr48335-8.c.  */
  HOST_WIDE_INT m_bitsize;
  HOST_WIDE_INT m_bitpos;
  poly_int64 m_bitregion_start;
  poly_int64 m_bitregion_end;
  unsigned int m_align;
  bool m_volatilep;
  int m_count;
};

/* Find the best mode to use to access a bit field.  */

extern bool get_best_mode (int, int, poly_uint64, poly_uint64, unsigned int,
			   unsigned HOST_WIDE_INT, bool, scalar_int_mode *);

/* Determine alignment, 1<=result<=BIGGEST_ALIGNMENT.  */

extern CONST_MODE_BASE_ALIGN unsigned char mode_base_align[NUM_MACHINE_MODES];

extern unsigned get_mode_alignment (machine_mode);

#define GET_MODE_ALIGNMENT(MODE) get_mode_alignment (MODE)

/* For each class, get the narrowest mode in that class.  */

extern const unsigned char class_narrowest_mode[MAX_MODE_CLASS];
#define GET_CLASS_NARROWEST_MODE(CLASS) \
  (machine_mode ((machine_mode_enum) class_narrowest_mode[CLASS]))

/* The narrowest full integer mode available on the target.  */

#define NARROWEST_INT_MODE \
  (scalar_int_mode::from_int (class_narrowest_mode[MODE_INT]))

/* Return the narrowest mode in T's class.  */

template<typename T>
inline T
get_narrowest_mode (T mode)
{
  return T::from_int (class_narrowest_mode[GET_MODE_CLASS (mode)]);
}

/* Define the integer modes whose sizes are BITS_PER_UNIT and BITS_PER_WORD
   and the mode whose class is Pmode and whose size is POINTER_SIZE.  */

extern scalar_int_mode byte_mode;
extern scalar_int_mode word_mode;
extern scalar_int_mode ptr_mode;

/* Target-dependent machine mode initialization - in insn-modes.c.  */
extern void init_adjust_machine_modes (void);

#define TRULY_NOOP_TRUNCATION_MODES_P(MODE1, MODE2) \
  TRULY_NOOP_TRUNCATION (GET_MODE_PRECISION (MACRO_MODE (MODE1)), \
			 GET_MODE_PRECISION (MACRO_MODE (MODE2)))

/* Return true if MODE is a scalar integer mode that fits in a
   HOST_WIDE_INT.  */

inline bool
HWI_COMPUTABLE_MODE_P (machine_mode mode)
{
  machine_mode_enum mme = mode;
  return (SCALAR_INT_MODE_P (mme)
	  && mode_to_precision (mme).coeffs[0] <= HOST_BITS_PER_WIDE_INT);
}

inline bool
HWI_COMPUTABLE_MODE_P (scalar_int_mode mode)
{
  return GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT;
}

struct int_n_data_t {
  /* These parts are initialized by genmodes output.  */
  unsigned int bitsize;
  scalar_int_mode_pod m;
  /* RID_* is RID_INTN_BASE + index into this array */
};

/* This is also in tree.h.  genmodes.c guarantees they're sorted from
   smallest bitsize to largest bitsize.  */
extern bool int_n_enabled_p[NUM_INT_N_ENTS];
extern const int_n_data_t int_n_data[NUM_INT_N_ENTS];

/* Return true if MODE has class MODE_INT, storing it as a scalar_int_mode
   in *INT_MODE if so.  */

template<typename T>
inline bool
is_int_mode (machine_mode mode, T *int_mode)
{
  if (GET_MODE_CLASS (mode) == MODE_INT)
    {
      *int_mode = scalar_int_mode::from_int (mode);
      return true;
    }
  return false;
}

/* Return true if MODE has class MODE_FLOAT, storing it as a
   scalar_float_mode in *FLOAT_MODE if so.  */

template<typename T>
inline bool
is_float_mode (machine_mode mode, T *float_mode)
{
  if (GET_MODE_CLASS (mode) == MODE_FLOAT)
    {
      *float_mode = scalar_float_mode::from_int (mode);
      return true;
    }
  return false;
}

/* Return true if MODE has class MODE_COMPLEX_INT, storing it as
   a complex_mode in *CMODE if so.  */

template<typename T>
inline bool
is_complex_int_mode (machine_mode mode, T *cmode)
{
  if (GET_MODE_CLASS (mode) == MODE_COMPLEX_INT)
    {
      *cmode = complex_mode::from_int (mode);
      return true;
    }
  return false;
}

/* Return true if MODE is a scalar integer mode with a precision
   smaller than LIMIT's precision.  */

inline bool
is_narrower_int_mode (machine_mode mode, scalar_int_mode limit)
{
  scalar_int_mode int_mode;
  return (is_a <scalar_int_mode> (mode, &int_mode)
	  && GET_MODE_PRECISION (int_mode) < GET_MODE_PRECISION (limit));
}

namespace mode_iterator
{
  template<typename T>
  inline void
  start (opt_mode<T> *iter, enum mode_class mclass)
  {
    if (GET_CLASS_NARROWEST_MODE (mclass) == E_VOIDmode)
      *iter = opt_mode<T> ();
    else
      *iter = as_a<T> (GET_CLASS_NARROWEST_MODE (mclass));
  }

  inline void
  start (machine_mode *iter, enum mode_class mclass)
  {
    *iter = GET_CLASS_NARROWEST_MODE (mclass);
  }

  template<typename T>
  inline bool
  iterate_p (opt_mode<T> *iter)
  {
    return iter->exists ();
  }

  inline bool
  iterate_p (machine_mode *iter)
  {
    return *iter != E_VOIDmode;
  }

  template<typename T>
  inline void
  get_wider (opt_mode<T> *iter)
  {
    *iter = GET_MODE_WIDER_MODE (**iter);
  }

  inline void
  get_wider (machine_mode *iter)
  {
    *iter = GET_MODE_WIDER_MODE (*iter).else_void ();
  }

  template<typename T>
  inline void
  get_known_wider (T *iter)
  {
    *iter = *GET_MODE_WIDER_MODE (*iter);
  }

  template<typename T>
  inline void
  get_2xwider (opt_mode<T> *iter)
  {
    *iter = GET_MODE_2XWIDER_MODE (**iter);
  }

  inline void
  get_2xwider (machine_mode *iter)
  {
    *iter = GET_MODE_2XWIDER_MODE (*iter).else_void ();
  }
}

#define FOR_EACH_MODE_IN_CLASS(ITERATOR, CLASS)  \
  for (mode_iterator::start (&(ITERATOR), CLASS); \
       mode_iterator::iterate_p (&(ITERATOR)); \
       mode_iterator::get_wider (&(ITERATOR)))

#define FOR_EACH_MODE(ITERATOR, START, END) \
  for ((ITERATOR) = (START); \
       (ITERATOR) != (END); \
       mode_iterator::get_known_wider (&(ITERATOR)))

#define FOR_EACH_MODE_FROM(ITERATOR, START) \
  for ((ITERATOR) = (START); \
       mode_iterator::iterate_p (&(ITERATOR)); \
       mode_iterator::get_wider (&(ITERATOR)))

#define FOR_EACH_MODE_UNTIL(ITERATOR, END) \
  FOR_EACH_MODE (ITERATOR, get_narrowest_mode (END), END)

#define FOR_EACH_WIDER_MODE(ITERATOR, START) \
  for ((ITERATOR) = (START), mode_iterator::get_wider (&(ITERATOR)); \
       mode_iterator::iterate_p (&(ITERATOR)); \
       mode_iterator::get_wider (&(ITERATOR)))

#define FOR_EACH_2XWIDER_MODE(ITERATOR, START) \
  for ((ITERATOR) = (START), mode_iterator::get_2xwider (&(ITERATOR)); \
       mode_iterator::iterate_p (&(ITERATOR)); \
       mode_iterator::get_2xwider (&(ITERATOR)))

template<typename T>
void
gt_ggc_mx (pod_mode<T> *)
{
}

template<typename T>
void
gt_pch_nx (pod_mode<T> *)
{
}

template<typename T>
void
gt_pch_nx (pod_mode<T> *, void (*) (void *, void *), void *)
{
}

#endif /* not HAVE_MACHINE_MODES */
