On Thu, 1 Oct 2015, Ulrich Weigand wrote:

> The _DecimalN types are already supported by DWARF using a base type with
> encoding DW_ATE_decimal_float and the appropriate DW_AT_byte_size.

Which doesn't actually say whether the DPD or BID encoding is used, but 
as long as each architecture uses only one that's not a problem in 
practice.

> For the interchange type, it seems one could define a new encoding,
> e.g. DW_ATE_interchange_float, and use this together with the
> appropriate DW_AT_byte_size to identify the format.

It's not clear to me that distinguishing, for example, float and 
_Float32 (other than by name) is useful in DWARF.  And if you change 
float from DW_ATE_float to DW_ATE_interchange_float, that would affect 
old debuggers - is the idea to use DW_ATE_interchange_float only for the 
new types, not for old types with the same encodings, so for _Float32 
but not float?  But it's true that if you say it's an interchange type 
then together with size and endianness that uniquely determines the 
encoding.
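To make that concrete, here is a small C sketch (assuming a GCC target 
with decimal floating-point support and a compiler providing the 
proposed _FloatN types).  The DWARF attributes in the comments are what 
such a compiler might plausibly emit under the scheme discussed above; 
they are illustrative assumptions, not the output of any particular 
compiler, and DW_ATE_interchange_float is the proposed encoding, not one 
existing debuggers know about:

    /* Illustrative only: the base type DIEs sketched in the comments
       are assumptions about what a compiler might emit.  */

    _Decimal64 d;   /* DW_TAG_base_type
                       DW_AT_name      "_Decimal64"
                       DW_AT_encoding  DW_ATE_decimal_float
                       DW_AT_byte_size 8
                       Says nothing about DPD vs. BID; that is implied
                       by the target.  */

    float f;        /* DW_TAG_base_type
                       DW_AT_name      "float"
                       DW_AT_encoding  DW_ATE_float
                       DW_AT_byte_size 4
                       Unchanged, so old debuggers keep working.  */

    _Float32 f32;   /* DW_TAG_base_type
                       DW_AT_name      "_Float32"
                       DW_AT_encoding  DW_ATE_interchange_float (proposed)
                       DW_AT_byte_size 4
                       Size plus endianness then uniquely identifies
                       the IEEE binary32 interchange format.  */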
> I'm not sure how to handle an extended decimal format that does not
> match any of the decimal interchange formats.  Does this occur in
> practice at all?

I don't know, but I doubt it.

> Well, complex types have their own encoding (DW_ATE_complex_float), so we'd
> have to define the corresponding variants for those as well, e.g.
> DW_ATE_complex_interchange_float or the like.

And DW_ATE_imaginary_interchange_float, I suppose.

-- 
Joseph S. Myers
jos...@codesourcery.com