Quoting Ian Lance Taylor <i...@google.com>:

I'm not sure why you are singling me out.

You seemed to be actively working on the branch, and the C++ enum type
checks provide a motivation to make changes.  Also, this issue should
be considered in general when people change their coding habits to make
the code C++-ready.

If we decide that a
multi-targeted gcc is a goal we want to support, it's certainly fine
with me to change those enums to int and add casts where appropriate.

OK.  This is also the route I have followed so far.

(I'm not personally convinced that a multi-targeted gcc is particularly
useful, though I don't object if there is a general desire to support
it.)

As the number of cores integrated into each computing device increases, I
think it is only natural that we'll see the emergence of specialized cores.
E.g. we already see low-power cores for base load/standby,
superscalar out-of-order cores for sequential processing, and massively
parallel architectures for media processing.  Having a single compiler that
can generate code for all target architectures in a computing device makes
it possible to shift (parts of) functions from one architecture to another
depending on the nature of the processing task.  This also increases the
leverage of profile-based feedback and machine learning.

It is perhaps worth noting that the natural way to handle the target
vector in C++ is to make a Target class with a set of virtual methods.
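
A minimal sketch of that idea (class names hypothetical;
eh_return_filter_mode is one of the hooks discussed below):

    // Hypothetical sketch: an abstract Target class with one virtual
    // member function per target hook, and one subclass per back end.
    class Target {
    public:
      virtual ~Target() {}
      virtual int eh_return_filter_mode() const = 0;
    };

    class SHTarget : public Target {
    public:
      // Returns the mode as an int wide enough for any target's enum.
      virtual int eh_return_filter_mode() const { return 0; }
    };

    // Machine-independent code would call through a single pointer,
    // much as it indexes the targetm structure of function pointers today.
    extern Target *the_target;  // hypothetical name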

Yes, but unfortunately that requires ferrying around the this pointer.
If we had virtual static member functions, that would be different.

And you'd still have to decide on one function signature for each virtual
function - having one target return a pointer to an 8-bit enum and another
one return a pointer to a 16-bit enum by the same name just won't do.

Then methods like eh_return_filter_mode would use a covariant return
type in specific implementations.  Where the mode is a parameter it
would still have to be changed to int.

The signature of these virtual functions would have to be defined with
a type that is wide enough to fit all targets' enums.
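
For reference, C++ covariance applies only to pointers and references to
class types, so a sketch like this (with a hypothetical Mode class
hierarchy) compiles, while the differently-sized enum variants above
would not:

    // Covariant return types require pointer/reference-to-class returns.
    struct Mode { virtual ~Mode() {} };
    struct SHMode : Mode {};

    struct Target {
      virtual Mode *eh_return_filter_mode() = 0;
    };

    struct SHTarget : Target {
      // OK: SHMode* is covariant with Mode*.
      virtual SHMode *eh_return_filter_mode() { return new SHMode; }
      // Ill-formed by contrast: overriding "virtual enum mode8 f()" with
      // "virtual enum mode16 f()" - enum return types are not covariant,
      // and parameters are never covariant, hence the change to int there.
    };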

And you couldn't fix a hook like ira_cover_classes this way.  The caller
has to know how wide each element is in the array whose first element the
returned pointer points to.

I think you need to take a step back.  What is a natural way to
represent a register class in the machine independent code when writing
in C++?  I don't think it is to use an enum type.

Having all register classes in one enum gives you some things that you don't
get with a class in the compiler's implementation language for each register
class.  E.g. you can easily iterate over all register classes, and have
bitmasks which act as sets of register classes.  You can use them as indices
in lookup tables.
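
A sketch of those idioms (value names invented; GCC's actual enum
reg_class and N_REG_CLASSES follow the same pattern):

    #include <stdio.h>

    // One enum covering every register class enables iteration,
    // set-as-bitmask, and direct table indexing.
    enum reg_class { NO_REGS, GENERAL_REGS, FP_REGS, ALL_REGS,
                     N_REG_CLASSES };

    // Lookup table indexed directly by the enum.
    static const char *const reg_class_names[N_REG_CLASSES]
      = { "NO_REGS", "GENERAL_REGS", "FP_REGS", "ALL_REGS" };

    void dump_reg_classes() {
      // Iterate over all register classes.
      for (int rc = 0; rc < N_REG_CLASSES; rc++)
        printf("%s\n", reg_class_names[rc]);

      // A bitmask acting as a set of register classes.
      unsigned int allocatable = (1u << GENERAL_REGS) | (1u << FP_REGS);
      if (allocatable & (1u << FP_REGS))
        printf("FP_REGS is in the set\n");
    }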

A register class is
basically an object which implements methods like
    bool is_regno_in_class(unsigned int);
    HARD_REG_SET registers_in_class();
    bool equals(Reg_class);
    bool is_superset_of(Reg_class);
    bool is_subset_of(Reg_class);
Obviously that is a long way off in gcc.

Indeed.  Expressing this as an actual C++ class would be incompatible with
the current goal of being able to build GCC with a C compiler.

Another enum problem is enum attr_cpu / enum processor_type in the
sh machine description.  We need a variable (sh_cpu aka sh_cpu_attr)
with this type to be visible inside the attributes generated by genattrtab.
Choices to solve this kind of problem would be:
- introduce another target header which is guaranteed to be included only
  after the enum definitions from generator files are in effect.
- allow the GCC extension of statement expressions to be used in
  target descriptions, so that a macro could provide an extern declaration
  of a variable before using its value (see the sketch after this list).
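
The second choice might look roughly like this (a sketch; the macro name
is made up):

    /* GNU statement expression: the macro carries its own block-scope
       extern declaration, so the value can be read at a point where no
       declaration of sh_cpu_attr is otherwise visible.  enum attr_cpu
       must already be complete here, as it is in genattrtab output.  */
    #define SH_CPU_ATTR_VALUE \
      ({ extern enum attr_cpu sh_cpu_attr; sh_cpu_attr; })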

A simple approach would be to have a way to say that the enum values for
an attribute were already declared.  Then instead of listing the values
in the define_attr, we would just have a standard mapping from constants
of the attribute to the enum names to use.  We would have less error
checking in the generator programs, but the errors would be caught when
the generated code was compiled.

I don't see how this would work.  Remember, genattrtab needs to know about
every value in the enumeration so that it can perform its optimizations.
I also don't see how your proposal is simpler than adding another header
file.
FWIW, the current approach of using two enums in parallel also works
(awkward as it is) for C++.
It just needed adjustments because the two sets of enum constants are no
longer interchangeable, and a cast is now required when going from one enum
to the other.
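
Roughly like this (a sketch modeled on the SH case, with shortened value
lists):

    /* Two enums maintained in parallel: processor_type from the target
       headers, attr_cpu generated by genattrtab.  */
    enum processor_type { PROCESSOR_SH1, PROCESSOR_SH2 };
    enum attr_cpu { CPU_SH1, CPU_SH2 };

    extern enum processor_type sh_cpu;

    enum attr_cpu current_cpu_attr() {
      /* In C this conversion was implicit; under C++ the two sets of
         constants no longer interconvert, so the cast is required.  */
      return (enum attr_cpu) sh_cpu;
    }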
