Re: [PATCH] Fix coroutine tests for libstdc++ gnu-version-namespace mode

2023-10-11 Thread Iain Sandoe
Hi François,

> On 11 Oct 2023, at 05:49, François Dumont  wrote:
> On 08/10/2023 15:59, Iain Sandoe wrote:
>>> On 23 Sep 2023, at 21:10, François Dumont  wrote:
>>> 
>>> I'm finally fixing those tests the same way we handle this problem in 
>>> the libstdc++ testsuite.
>>> 
>>>testsuite: Add optional libstdc++ version namespace in expected 
>>> diagnostic
>>> 
>>> When libstdc++ is built with --enable-symvers=gnu-versioned-namespace, 
>>> diagnostics show this namespace, currently __8.
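As an illustration of the fix pattern (the test name and diagnostic text below are hypothetical, not copied from the affected files), the dg directives gain a regex alternative so the match accepts an optional __8 inline namespace:

```cpp
// Hypothetical sketch; the diagnostic text is illustrative only.
//
// Before: matches only an unversioned libstdc++.
// { dg-error "'promise_type' is not a member of 'std::coroutine_traits'" }
//
// After: also matches gnu-versioned-namespace builds, where members of
// std:: are printed as std::__8::.
// { dg-error "'promise_type' is not a member of 'std::(__8::)?coroutine_traits'" }
```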
>>> 
>>> gcc/testsuite/ChangeLog:
>>> 
>>> 	* g++.dg/coroutines/coro-bad-alloc-00-bad-op-new.C: Add optional
>>> 	'__8' version namespace in expected diagnostic.
>>> 	* g++.dg/coroutines/coro-bad-alloc-01-bad-op-del.C: Likewise.
>>> 	* g++.dg/coroutines/coro-bad-alloc-02-no-op-new-nt.C: Likewise.
>>> 	* g++.dg/coroutines/coro-bad-grooaf-01-grooaf-expected.C: Likewise.
>>> 	* g++.dg/coroutines/pr97438.C: Likewise.
>>> 	* g++.dg/coroutines/ramp-return-b.C: Likewise.
>>> 
>>> Tested under Linux x86_64.
>>> 
>>> I'm contributing to libstdc++ so I already have write access.
>>> 
>>> Ok to commit ?
>> As author of the tests, this LGTM as a suitable fix for now (at least, once 
>> the main
>> patch to fix versioned namespaces lands).
> 
> I just realized it was a "go", no? Then why wait for the main patch?
> 
> The "main patch" does not fix the versioned namespace. It just makes it 
> adopt the cxx11 ABI.
> 
> This patch fixes a problem that is as old as the tests and that is totally 
> unrelated to the main one. I just wanted to improve the situation so that 
> versioned-namespace mode does not look more buggy than necessary when 
> someone (like you) runs those tests.

Maybe a misunderstanding on my part.  I was under the impression that 
versioned-namespace was currently unusable because it forces the old string 
ABI.  If that is not the case, then I guess the changes are OK now.

I am pretty concerned about the maintainability of this though, hence this …

>> However, IMO, this could become quite painful as more g++ tests make use of 
>> std headers
>> (which is not really optional for facilities like this that are 
>> tightly-coupled between the FE and
>> the library).
>> 
>> For the future, it does seem that a more complete solution might be to 
>> introduce a
>> testsuite-wide definition for the C++ versioned std:: introducer, so that we 
>> can update it in one
>> place as the version changes.
>> 
>> So (as a thought experiment):
>>  - we’d have something of the form “CXX_STD” as a tcl global
>>  - we’d add the presence/absence of versioning to the relevant site.exp 
>> (which
>>means recognising the versioning choice also in the GCC configure)
>>  - we’d migrate tests to using ${CXX_STD} instead of "std::__N”  in matches
>> 
>> … I guess an alternative could be to cook up some alternate warning/error/etc
>>match functions that cater for arbitrary inline namespaces but that seems 
>> like a much
>>more tricky and invasive testsuite change.
>> 
>> thoughts?
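A minimal sketch of that thought experiment, with hypothetical wiring (the site.exp variable and its use below are invented for illustration; they are not existing testsuite code):

```tcl
# site.exp fragment (generated by configure, hypothetically): record the
# versioned-namespace choice once, in one place.
if { $versioned_namespace } {
    set CXX_STD "std::__8::"
} else {
    set CXX_STD "std::"
}

# Tests would then build their match strings from the global, e.g.
#   { dg-error "'promise_type' is not a member of '${CXX_STD}coroutine_traits'" }
# so a future version bump only touches the definition above.
```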
> 
> I considered amending gcc/testsuite/lib/prune.exp to simply remove the 
> version from the diagnostic. But the reply about why that approach does 
> not work scared me off, hence this patch.
> 
> https://gcc.gnu.org/pipermail/gcc/2023-September/242526.html

Ah, I didn’t see that mail - will try to take a look at the weekend.
Iain



Re: C89 question: Do we need to accept -Wint-conversion warnings

2023-10-11 Thread David Brown via Gcc

On 10/10/2023 18:30, Jason Merrill via Gcc wrote:

On Tue, Oct 10, 2023 at 7:30 AM Florian Weimer via Gcc 
wrote:


Are these code fragments valid C89 code?

   int i1 = 1;
   char *p1 = i1;

   char c;
   char *p2 = &c;
   int i2 = p2;

Or can we generate errors for them even with -std=gnu89?

(It will still be possible to override this with -fpermissive or
-Wno-int-conversion.)



Given that C89 code is unlikely to be actively maintained, I think we
should be permissive by default in that mode.  People compiling with an old
-std flag are presumably doing it to keep old code compiling, and it seems
appropriate to respect that.



That is - unfortunately, IMHO - not true.

In particular, in the small-systems embedded development world (and that 
is a /big/ use-case for C programming), there is still a lot done in 
C89/C90.  It is the dominant variety of C for things like RTOS's (such 
as FreeRTOS and ThreadX), network stacks (like LWIP), microcontroller 
manufacturers' SDK's and libraries, and so on.  There are also still 
some microcontrollers for which the main toolchains (not GCC, obviously) 
do not have full C99 support, and there is a significant proportion of 
embedded C programmers who write all their code in C90, even for new 
projects.  There is a "cult" within C coders who think "The C 
Programming Language" is the "Bible", and have never learned anything 
since then.


The biggest target device in this field is the 32-bit ARM Cortex-M 
family, and the most used compiler is gcc.


Taking numbers out of thin air, but not unrealistically I believe, there 
are millions of devices manufactured every day running code compiled by 
gcc -std=gnu89 or -std=c89 (or an equivalent).


Add to that the libraries on "big" systems that are written to C89/C90 
standards.  After all, that is the lowest common denominator of the 
C/C++ world - with a bit of care, the code will be compatible with all 
other C and C++ standards.  It is not just old code, though a great 
deal of modern library code has roots back to pre-C99 days, but it is 
also cross-platform code.  It is only relatively recently that 
Microsoft's development tools have had reasonable support for C99 - many 
people writing code to work in both the *nix world and the Windows world 
stick to C89/C90 if they want a clear standard (rather than "the subset 
of C99 supported by the MSVC version they happen to have").


Now, pretty much all of that code could also be compiled with -std=c99 
(or -std=gnu99).  And in a great many cases, it /is/ compiled as C99. 
But for those that want to be careful about their coding, and many do, 
the natural choice here is "-std=c90 -pedantic-errors".



So IMHO (and as I am not a code contributor to GCC, my opinion really is 
humble) it is better to be stricter than permissive, even in old 
standards.  It is particularly important for "-std=c89", while 
"-std=gnu89" is naturally more permissive.  (I have seen more than 
enough terrible code in embedded programs - I don't want to make it 
easier for them to write even worse code!)




I'm also (though less strongly) inclined to be permissive in C99 mode, and
only introduce the new strictness by default for C11/C17 modes.



I suspect (again with numbers taken from thin air) that the proportion 
of C programmers or projects that actively choose C11 or C17 modes, as 
distinct from using the compiler defaults, will be less than 1%.  C99 
(or gnu99) is the most commonly chosen standard for small-systems 
embedded programming, combining C90 libraries, stacks, and RTOS's with 
user code in C99.  So again, my preference is towards stricter control, 
not more permissive tools.


I am aware, however, that I personally am a lot fussier than most 
programmers.  I run gcc with lots of additional warnings and 
-Wfatal-errors, and want ever-stricter tools.  I don't think many people 
would be happy with the choices /I/ would prefer for default compiler 
flags!


I am merely a happy GCC user, not a contributor, much less anyone 
involved in decision making.  But I hope it is helpful to you to hear 
other opinions here, especially about small-systems embedded 
programming, at least in my own experience.


David








Re: C89 question: Do we need to accept -Wint-conversion warnings

2023-10-11 Thread Florian Weimer via Gcc
* David Brown:

> So IMHO (and as I am not a code contributor to GCC, my opinion really
> is humble) it is better to be stricter than permissive, even in old
> standards.  It is particularly important for "-std=c89", while
> "-std=gnu89" is naturally more permissive.  (I have seen more than
> enough terrible code in embedded programs - I don't want to make it
> easier for them to write even worse code!)

We can probably make (say) -std=gnu89 -fno-permissive work, in a way
that is a bit less picky than -std=gnu89 -pedantic-errors today.

And of course there's still -Werror, that's not going to go away.  So if
you are using -Werror=implicit-function-declaration today (as you
probably should 8-), nothing changes for you in GCC 14.

> I suspect (again with numbers taken from thin air) that the proportion
> of C programmers or projects that actively choose C11 or C17 modes, as
> distinct from using the compiler defaults, will be less than 1%.  C99
> (or gnu99) is the most commonly chosen standard for small-systems
> embedded programming, combining C90 libraries, stacks, and RTOS's with
> user code in C99.  So again, my preference is towards stricter
> control, not more permissive tools.

I don't think the estimate is accurate.  Several upstream build systems
I've seen enable -std=gnu11 and similar options once they are supported.
Usually, it's an attempt to upgrade to newer language standards that
hasn't aged well, not a downgrade.  It's probably quite a bit more than
1%.

Thanks,
Florian



Re: Clarification regarding various classes DIE's attribute value class

2023-10-11 Thread Richard Biener via Gcc
On Tue, 10 Oct 2023, Rishi Raj wrote:

> Hello,
> I am working on a project to produce the LTO object file from the compiler
> directly. So far, we have
> correctly outputted .symtab along with various .debug sections. The only
> thing remaining is to
> correctly output attributes and their corresponding values in the
> .debug_info section. This is done by the output_die function in
> dwarf2out.cc based on the value's class. However, the same
> function is used in dwarf2out_finish  as well as dwarf2out_early_finish, so
> I suspect that not every value class is being used in dwarf2out_early_finish
> (mainly I am interested in -flto mode). As there is little documentation on
> the same, I experimented by commenting out the various cases of value class
> in output_die. I found that classes such as dw_val_class_addr,
> dw_val_class_high_pc, and dw_val_class_vms_delta aren't being used during
> the early_finish of LTO mode. I might be wrong, as my observation is based
> on commenting out and testing a few pieces of code, which might not be
> complete. So, can anyone please tell me which of these 30 classes are
> relevant to dwarf2out_early_finish in LTO mode, or at least point out some
> documentation if it exists?

There's no documentation.  The constraint is that early debug should not
have relocations to .text, thus it doesn't have location lists for 
example.

I believe you should be able to mostly hook into the dwarf2asm hooks
that perform the output (but those also add labels and label references).

If leaving out support for some value classes makes your life easier,
I suggest handling them with a gcc_unreachable () handler so you'll
get ICEs whenever one turns out to be required.

> enum dw_val_class
> {
>   dw_val_class_none,
>   dw_val_class_addr,
>   dw_val_class_offset,
>   dw_val_class_loc,
>   dw_val_class_loc_list,
>   dw_val_class_range_list,
>   dw_val_class_const,
>   dw_val_class_unsigned_const,
>   dw_val_class_const_double,
>   dw_val_class_wide_int,
>   dw_val_class_vec,
>   dw_val_class_flag,
>   dw_val_class_die_ref,
>   dw_val_class_fde_ref,
>   dw_val_class_lbl_id,
>   dw_val_class_lineptr,
>   dw_val_class_str,
>   dw_val_class_macptr,
>   dw_val_class_loclistsptr,
>   dw_val_class_file,
>   dw_val_class_data8,
>   dw_val_class_decl_ref,
>   dw_val_class_vms_delta,
>   dw_val_class_high_pc,
>   dw_val_class_discr_value,
>   dw_val_class_discr_list,
>   dw_val_class_const_implicit,
>   dw_val_class_unsigned_const_implicit,
>   dw_val_class_file_implicit,
>   dw_val_class_view_list,
>   dw_val_class_symview
> };
> 
> --
> Rishi
> 

-- 
Richard Biener 
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)


Re: C89 question: Do we need to accept -Wint-conversion warnings

2023-10-11 Thread David Brown via Gcc




On 11/10/2023 10:10, Florian Weimer wrote:

* David Brown:


So IMHO (and as I am not a code contributor to GCC, my opinion really
is humble) it is better to be stricter than permissive, even in old
standards.  It is particularly important for "-std=c89", while
"-std=gnu89" is naturally more permissive.  (I have seen more than
enough terrible code in embedded programs - I don't want to make it
easier for them to write even worse code!)


We can probably make (say) -std=gnu89 -fno-permissive work, in a way
that is a bit less picky than -std=gnu89 -pedantic-errors today.



The gcc manual has "-fpermissive" under "C++ Dialect Options".  Are you 
planning to have it for C as well?  That sounds like a good idea 
(perhaps with some examples in the documentation?).  Ideally (and I 
realise I like stricter checking than many people) some long-obsolescent 
features like non-prototype function declarations could be marked as 
errors unless "-fpermissive" were used, even in C89 standards.


(As a side note, I wonder if "-fwrapv" and "-fno-strict-aliasing" should 
be listed under "C Dialect Options", as they give specific semantics to 
normally undefined behaviour.)




And of course there's still -Werror, that's not going to go away.  So if
you are using -Werror=implicit-function-declaration today (as you
probably should 8-), nothing changes for you in GCC 14.


I have long lists of explicit warnings and flags in my makefiles, so I 
am not concerned for my own projects.  But I always worry about the less 
vigilant users - the ones who don't know the details of the language or 
the features of the compiler, and don't bother finding out.  I don't 
want default settings to be less strict for them, as it means higher 
risks of bugs escaping out to released code.





I suspect (again with numbers taken from thin air) that the proportion
of C programmers or projects that actively choose C11 or C17 modes, as
distinct from using the compiler defaults, will be less than 1%.  C99
(or gnu99) is the most commonly chosen standard for small-systems
embedded programming, combining C90 libraries, stacks, and RTOS's with
user code in C99.  So again, my preference is towards stricter
control, not more permissive tools.


I don't think the estimate is accurate.  Several upstream build systems
I've seen enable -std=gnu11 and similar options once they are supported.
Usually, it's an attempt to upgrade to newer language standards that
hasn't aged well, not a downgrade.  It's probably quite a bit more than
1%.



Fair enough.  My experience is mostly within a particular field that is 
probably more conservative than a lot of other areas of programming.


David






Re: Register allocation cost question

2023-10-11 Thread Andrew Stubbs




On 10/10/2023 20:09, Segher Boessenkool wrote:

Hi Andrew,

On Tue, Oct 10, 2023 at 04:11:18PM +0100, Andrew Stubbs wrote:

I'm also seeing wrong-code bugs when I allow more than 32 new registers,
but that might be an unrelated problem. Or the allocation is broken? I'm
still analyzing this.


It could be connected.  Both things should not happen.


If it matters, ... the new registers can't be used for general purposes,


What does this mean?  I think you mean they *can* be used for anything,
you just don't want to (maybe it is slow)?  If you make it allocatable
registers, they *will* be allocated for anything the compiler deems a 
good idea.


Nope, the "Accelerator VGPR" registers are exclusively for the use of 
the new matrix multiply instructions that we don't support (yet).


The compiler is free to use them for storing data, but there are no real 
instructions to do anything else with them.



so I'm trying to set them up as a temporary spill destination. This
means they're typically not busy. It feels like it shouldn't be this
hard... :(


So what did you do, put them later in the allocation order?  Make their
register_move_cost higher than for normal registers (but still below
memory_move_cost)?  Or what?  TARGET_SPILL_CLASS maybe?


We put them in a new register class, with a new constraint, and 
implemented the move instructions (only) with new alternatives for the 
new class. Then implemented TARGET_SPILL_CLASS in the obvious way.


All this is working just fine as long as there are only 32 new registers 
unfixed (a0-a31); the code even runs correctly and I can see the 
spilling happening correctly.


If I enable register a32 then it prefers that, and I get wrong code. 
Using that register ought to be logically correct, albeit suboptimal, so 
I don't understand that either.


Andrew


Re: Register allocation cost question

2023-10-11 Thread Andrew Stubbs

On 11/10/2023 07:54, Chung-Lin Tang wrote:



On 2023/10/10 11:11 PM, Andrew Stubbs wrote:

Hi all,

I'm trying to add a new register set to the GCN port, but I've hit a
problem I don't understand.

There are 256 new registers (each 2048 bit vector register) but the
register file has to be divided between all the running hardware
threads; if you can use fewer registers you can get more parallelism,
which means that it's important that they're allocated in order.

The problem is that they're not allocated in order. Somehow the IRA pass
is calculating different costs for the registers within the class. It
seems to prefer registers a32, a96, a160, and a224.

The internal regno are 448, 512, 576, 640. These are not random numbers!
They all have zero for the 6 LSB.

What could cause this? Did I overrun some magic limit? What target hook
might I have miscoded?

I'm also seeing wrong-code bugs when I allow more than 32 new registers,
but that might be an unrelated problem. Or the allocation is broken? I'm
still analyzing this.

If it matters, ... the new registers can't be used for general purposes,
so I'm trying to set them up as a temporary spill destination. This
means they're typically not busy. It feels like it shouldn't be this
hard... :(


Have you tried experimenting with REG_ALLOC_ORDER? I see that the GCN port 
currently isn't using this target macro.


The default definition is 0,1,2,3,4 and is already the desired 
behaviour.


Andrew


Re: Function multiversioning ABI issues

2023-10-11 Thread Florian Weimer via Gcc
* Andrew Carlotti via Gcc:

> I've also seen the GCC documentation for the ifunc attribute [1].
> This states that "the indirect function needs to be defined in the
> same translation unit as the resolver function".  This is not how
> function multiversioning is currently implemented.  Instead, the
> resolver functions are added to the translation units of every caller.

I don't see how this can happen.  Do you have a declaration of the
resolver function in a shared header, by chance?

Thanks,
Florian



Re: Function multiversioning ABI issues

2023-10-11 Thread Andrew Carlotti via Gcc
On Wed, Oct 11, 2023 at 10:59:10AM +0200, Florian Weimer wrote:
> * Andrew Carlotti via Gcc:
> 
> > I've also seen the GCC documentation for the ifunc attribute [1].
> > This states that "the indirect function needs to be defined in the
> > same translation unit as the resolver function".  This is not how
> > function multiversioning is currently implemented.  Instead, the
> > resolver functions are added to the translation units of every caller.
> 
> I don't see how this can happen.  Do you have a declaration of the
> resolver function in a shared header, by chance?
> 
> Thanks,
> Florian

I haven't explicitly declared a separate function. I just included the normal
function multiversioning attributes on the function declarations.

If you don't include the attributes on the declarations, then you will get one
of two issues:

- For target_clones, you would need to ensure that the multiversioned function
  has a caller in the same translation unit as the implementations. If you 
  don't do this, then no resolver will be generated, with all of the callers
  referencing a non-existent symbol.

- For target versions, any caller that cannot see the function multiversioning
  attributes will only ever use the default version of the function. This also
  applies to the current aarch64 specification for target_clones and
  target_version.


Re: C89 question: Do we need to accept -Wint-conversion warnings

2023-10-11 Thread Florian Weimer via Gcc
* David Brown:

> On 11/10/2023 10:10, Florian Weimer wrote:
>> * David Brown:
>> 
>>> So IMHO (and as I am not a code contributor to GCC, my opinion really
>>> is humble) it is better to be stricter than permissive, even in old
>>> standards.  It is particularly important for "-std=c89", while
>>> "-std=gnu89" is naturally more permissive.  (I have seen more than
>>> enough terrible code in embedded programs - I don't want to make it
>>> easier for them to write even worse code!)
>> We can probably make (say) -std=gnu89 -fno-permissive work, in a way
>> that is a bit less picky than -std=gnu89 -pedantic-errors today.
>> 
>
> The gcc manual has "-fpermissive" under "C++ Dialect Options".  Are you
> planning to have it for C as well?

Yes, I've got local patches on top of Jason's permerror enhancement:

  [PATCH v2 RFA] diagnostic: add permerror variants with opt
  



> That sounds like a good idea (perhaps with some examples in the
> documentation?).  Ideally (and I realise I like stricter checking than
> many people) some long-obsolescent features like non-prototype
> function declarations could be marked as errors unless "-fpermissive"
> were used, even in C89 standards.

For some of such declarations, this falls out of the implicit-int
removal.

C23 changes the meaning of extern foo(); to match the C++ interpretation
of extern foo(void);.  I don't think we should warn about that.  If we
warn, it would be at the call site.

> (As a side note, I wonder if "-fwrapv" and "-fno-strict-aliasing"
> should be listed under "C Dialect Options", as they give specific
> semantics to normally undefined behaviour.)

They are code generation options, too.

>> And of course there's still -Werror, that's not going to go away.  So if
>> you are using -Werror=implicit-function-declaration today (as you
>> probably should 8-), nothing changes for you in GCC 14.
>
> I have long lists of explicit warnings and flags in my makefiles, so I
> am not concerned for my own projects.  But I always worry about the
> less vigilant users - the ones who don't know the details of the
> language or the features of the compiler, and don't bother finding
> out.  I don't want default settings to be less strict for them, as it
> means higher risks of bugs escaping out to released code.

We have a tension regarding support for legacy software, and ongoing
development.  I think we should draw the line at C99.  That's the first
language standard that removes most of these obsolescent features, after
all.

Thanks,
Florian



Re: Register allocation cost question

2023-10-11 Thread Richard Earnshaw (lists) via Gcc
On 11/10/2023 09:58, Andrew Stubbs wrote:
> On 11/10/2023 07:54, Chung-Lin Tang wrote:
>>
>>
>> On 2023/10/10 11:11 PM, Andrew Stubbs wrote:
>>> Hi all,
>>>
>>> I'm trying to add a new register set to the GCN port, but I've hit a
>>> problem I don't understand.
>>>
>>> There are 256 new registers (each 2048 bit vector register) but the
>>> register file has to be divided between all the running hardware
>>> threads; if you can use fewer registers you can get more parallelism,
>>> which means that it's important that they're allocated in order.
>>>
>>> The problem is that they're not allocated in order. Somehow the IRA pass
>>> is calculating different costs for the registers within the class. It
>>> seems to prefer registers a32, a96, a160, and a224.
>>>
>>> The internal regno are 448, 512, 576, 640. These are not random numbers!
>>> They all have zero for the 6 LSB.
>>>
>>> What could cause this? Did I overrun some magic limit? What target hook
>>> might I have miscoded?
>>>
>>> I'm also seeing wrong-code bugs when I allow more than 32 new registers,
>>> but that might be an unrelated problem. Or the allocation is broken? I'm
>>> still analyzing this.
>>>
>>> If it matters, ... the new registers can't be used for general purposes,
>>> so I'm trying to set them up as a temporary spill destination. This
>>> means they're typically not busy. It feels like it shouldn't be this
>>> hard... :(
>>
>> Have you tried experimenting with REG_ALLOC_ORDER? I see that the GCN port 
>> currently isn't using this target macro.
> 
> The default definition is 0,1,2,3,4 and is already the desired behaviour.
> 
> Andrew

You may need to define HONOR_REG_ALLOC_ORDER though.
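Concretely, that could be sketched in the port header roughly as follows (hypothetical; the order shown is just the identity order, and the fragment has not been tested against the GCN port):

```c
/* gcn.h (hypothetical sketch): enumerate every hard register in the
   order the allocator should prefer; here simply 0, 1, 2, ...  */
#define REG_ALLOC_ORDER { 0, 1, 2, 3 /* ... up to FIRST_PSEUDO_REGISTER - 1 */ }

/* Tell IRA that the order above is a genuine preference, so it should
   not re-rank registers within a class using its own cost estimates.  */
#define HONOR_REG_ALLOC_ORDER 1
```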


Re: C89 question: Do we need to accept -Wint-conversion warnings

2023-10-11 Thread David Brown via Gcc




On 11/10/2023 12:17, Florian Weimer wrote:

* David Brown:


On 11/10/2023 10:10, Florian Weimer wrote:

* David Brown:


So IMHO (and as I am not a code contributor to GCC, my opinion really
is humble) it is better to be stricter than permissive, even in old
standards.  It is particularly important for "-std=c89", while
"-std=gnu89" is naturally more permissive.  (I have seen more than
enough terrible code in embedded programs - I don't want to make it
easier for them to write even worse code!)

We can probably make (say) -std=gnu89 -fno-permissive work, in a way
that is a bit less picky than -std=gnu89 -pedantic-errors today.



The gcc manual has "-fpermissive" under "C++ Dialect Options".  Are you
planning to have it for C as well?


Yes, I've got local patches on top of Jason's permerror enhancement:

   [PATCH v2 RFA] diagnostic: add permerror variants with opt
   




That sounds like a good idea (perhaps with some examples in the
documentation?).  Ideally (and I realise I like stricter checking than
many people) some long-obsolescent features like non-prototype
function declarations could be marked as errors unless "-fpermissive"
were used, even in C89 standards.


For some of such declarations, this falls out of the implicit-int
removal.


Yes.



C23 changes the meaning of extern foo(); to match the C++ interpretation
of extern foo(void);.  I don't think we should warn about that.  If we
warn, it would be at the call site.


I'm not sure I fully agree.  "extern foo();" became invalid when 
implicit int was removed in C99.  But "extern T foo();", where "T" is 
void or any type, has changed meaning between C17 (and before) and C23.


With C23, it means the same as "extern T foo(void);", like in C++ (and 
like all C standards if it is part of the definition of the function). 
However, prior to C23, a declaration of "T foo();" that is not part of 
the definition of the function declares the function and "specifies that 
no information about the number or types of the parameters is supplied". 
This use was obsolescent from C90.


To my mind, this is very different.  I think it is fair to suppose that 
for many cases of pre-C23 declarations with empty parentheses, the 
programmer probably meant "(void)".  But the language standards have 
changed the meaning of the declaration.


IMHO I think calling "foo" with parameters should definitely be a 
warning, enabled by default, for at least -std=c99 onwards - it is 
almost certainly a mistake.  (Those few people that use it as a feature 
can ignore or disable the warning.)  I would also put warnings on the 
declaration itself at -Wall, or at least -Wextra (i.e., 
"-Wstrict-prototypes").  I think that things that change between 
standards, even subtly, should be highlighted.  Remember, this concerns 
a syntax that was marked obsolescent some 35 years ago, because the 
alternative (prototypes) was considered "superior to the old style on 
every count".


It could be reasonable to consider "extern T foo();" as valid in 
"-std=gnu99" and other "gnu" standards - GCC has an established history 
of "back-porting" useful features of newer standards to older settings. 
But at least for "-std=c99" and other "standard" standards, I think it 
is best to warn about the likely code error.





(As a side note, I wonder if "-fwrapv" and "-fno-strict-aliasing"
should be listed under "C Dialect Options", as they give specific
semantics to normally undefined behaviour.)


They are code generation options, too.


I see them as semantic extensions to the language, and code generation 
differences are a direct result of that (even if they historically arose 
as code generation options and optimisation flags respectively). 
Perhaps they could be mentioned or linked to in the C dialect options 
page?  Maybe it would be clearer to have new specific flags for the 
dialect options, which are implemented by activating these flags? 
Perhaps that would be confusing.





And of course there's still -Werror, that's not going to go away.  So if
you are using -Werror=implicit-function-declaration today (as you
probably should 8-), nothing changes for you in GCC 14.


I have long lists of explicit warnings and flags in my makefiles, so I
am not concerned for my own projects.  But I always worry about the
less vigilant users - the ones who don't know the details of the
language or the features of the compiler, and don't bother finding
out.  I don't want default settings to be less strict for them, as it
means higher risks of bugs escaping out to released code.


We have a tension regarding support for legacy software, and ongoing
development.  


Agreed, and I fully understand that there is no easy answer here.  On 
the one hand, you don't want to break existing code bases or build 
setups, and on the other hand you want to help developers write good 
code (and avoid bad code) going forwards.



I think

Re: C89 question: Do we need to accept -Wint-conversion warnings

2023-10-11 Thread Florian Weimer via Gcc
* David Brown:

>> C23 changes the meaning of extern foo(); to match the C++
>> interpretation of extern foo(void);.  I don't think we should warn
>> about that.  If we warn, it would be at the call site.
>
> I'm not sure I fully agree.  "extern foo();" became invalid when
> implicit int was removed in C99.  But "extern T foo();", where "T" is
> void or any type, has changed meaning between C17 (and before) and
> C23.

My concern is that the warning would not really be actionable.
Encouraging programmers to change foo() to foo(void) in declarations
seems merely busywork.  C++ doesn't need this, and future C won't need
it, either.

> IMHO I think calling "foo" with parameters should definitely be a
> warning, enabled by default, for at least -std=c99 onwards - it is
> almost certainly a mistake.  (Those few people that use it as a
> feature can ignore or disable the warning.)

It's possible to disable this warning in C23 by declaring foo as "extern
T foo(...);".  Not sure if this has ABI implications.

> I would also put warnings on the declaration itself at -Wall, or at
> least -Wextra (i.e., "-Wstrict-prototypes").  I think that things that
> change between standards, even subtly, should be highlighted.
> Remember, this concerns a syntax that was marked obsolescent some 35
> years ago, because the alternative (prototypes) was considered
> "superior to the old style on every count".

I still think the declaration is quite harmless if we warn at call
sites.

Thanks,
Florian



Using "--enable-standard-branch-protection" flag for arm64 cause test failures

2023-10-11 Thread Yash Shinde via Gcc
Hi,

I am using Yocto to build and run an ARM64 toolchain (SDK).

GCC was configured with the "--enable-standard-branch-protection" flag for
arm64.

It gave better performance while running some benchmarks.

However, it resulted in many testsuite regressions, as it generates BTI
instructions.

Testcases expecting specific assembly are failing.

Can you please let us know whether it's valid to configure arm64 with the "
--enable-standard-branch-protection" flag?

Regards,
Yash.


Re: Register allocation cost question

2023-10-11 Thread Andrew Stubbs

On 10/10/2023 20:09, Segher Boessenkool wrote:

Hi Andrew,

On Tue, Oct 10, 2023 at 04:11:18PM +0100, Andrew Stubbs wrote:

I'm also seeing wrong-code bugs when I allow more than 32 new registers,
but that might be an unrelated problem. Or the allocation is broken? I'm
still analyzing this.


It could be connected.  Both things should not happen.


This is now confirmed to be unrelated: the instruction moving values 
from the new registers to the old must be followed by a no-op in certain 
instruction combinations due to GCN having only partial hardware 
dependency detection.


The register allocation is therefore valid (at least in the testcases 
I've been looking at).


The question of why it prefers registers with round numbers remains open 
(and important for optimization reasons).


Andrew


Re: Clarification regarding various classes DIE's attribute value class

2023-10-11 Thread Jan Hubicka via Gcc
> Hello,
> I am working on a project to produce the LTO object file from the compiler
> directly. So far, we have
> correctly outputted .symtab along with various .debug sections. The only
> thing remaining is to
> correctly output attribute values and their corresponding values in the
> .debug_info section. This is done by the output_die function in
> dwarf2out.cc based on the value's class. However, the same
> function is used in dwarf2out_finish  as well as dwarf2out_early_finish, so
> I suspect that not every value class is being used in dwarfout_early_finish
> (mainly I am interested in -flto mode). As there is little documentation on
> the same, I experimented by commenting out the various cases of value class
> in output die. I found that the classes such as dw_val_class_addr,
> dw_val_class_high_pc, and dw_val_class_vms_delta aren't being used during
> the early_finish of LTO mode. I might be wrong, as my observation is based
> on commenting out and testing a few pieces of code that might need to be
> completed. So, can anyone please tell out of these 30 classes that are
> relevant to dwarf2out_early_finish in LTO mode or at least point out some
> documentation if it exists?

You can probably do gcc_assert (!in_lto_p && flag_lto && !flag_fat_lto_objects)
on the parts of output_die which you think are unused, then run make check
and, if it passes, also make bootstrap.  That should probably catch all
relevant cases.

Honza
> enum dw_val_class
> {
>   dw_val_class_none,
>   dw_val_class_addr,
>   dw_val_class_offset,
>   dw_val_class_loc,
>   dw_val_class_loc_list,
>   dw_val_class_range_list,
>   dw_val_class_const,
>   dw_val_class_unsigned_const,
>   dw_val_class_const_double,
>   dw_val_class_wide_int,
>   dw_val_class_vec,
>   dw_val_class_flag,
>   dw_val_class_die_ref,
>   dw_val_class_fde_ref,
>   dw_val_class_lbl_id,
>   dw_val_class_lineptr,
>   dw_val_class_str,
>   dw_val_class_macptr,
>   dw_val_class_loclistsptr,
>   dw_val_class_file,
>   dw_val_class_data8,
>   dw_val_class_decl_ref,
>   dw_val_class_vms_delta,
>   dw_val_class_high_pc,
>   dw_val_class_discr_value,
>   dw_val_class_discr_list,
>   dw_val_class_const_implicit,
>   dw_val_class_unsigned_const_implicit,
>   dw_val_class_file_implicit,
>   dw_val_class_view_list,
>   dw_val_class_symview
> };
> 
> --
> Rishi


Re: [PATCH] Fix coroutine tests for libstdc++ gnu-version-namespace mode

2023-10-11 Thread François Dumont via Gcc

Hi Iain

On 11/10/2023 09:30, Iain Sandoe wrote:

Hi François,


On 11 Oct 2023, at 05:49, François Dumont  wrote:
On 08/10/2023 15:59, Iain Sandoe wrote:

On 23 Sep 2023, at 21:10, François Dumont  wrote:

I'm finally fixing those tests the same way we manage this problem in the 
libstdc++ testsuite.

testsuite: Add optional libstdc++ version namespace in expected diagnostic

 When libstdc++ is built with --enable-symvers=gnu-versioned-namespace, 
diagnostics show this namespace, currently __8.

 gcc/testsuite/ChangeLog:

 * testsuite/g++.dg/coroutines/coro-bad-alloc-00-bad-op-new.C: Add 
optional
 '__8' version namespace in expected diagnostic.
 * testsuite/g++.dg/coroutines/coro-bad-alloc-01-bad-op-del.C: 
Likewise.
 * testsuite/g++.dg/coroutines/coro-bad-alloc-02-no-op-new-nt.C: 
Likewise.
 * 
testsuite/g++.dg/coroutines/coro-bad-grooaf-01-grooaf-expected.C: Likewise.
 * testsuite/g++.dg/coroutines/pr97438.C: Likewise.
 * testsuite/g++.dg/coroutines/ramp-return-b.C: Likewise.

Tested under Linux x86_64.

I'm contributing to libstdc++ so I already have write access.

Ok to commit ?

As author of the tests, this LGTM as a suitable fix for now (at least, once the 
main
patch to fix versioned namespaces lands).

I just realized it was a "go", no? Then why after the main patch?

The "main patch" does not fix the versioned namespace. It just makes it adopt 
the cxx11 ABI.

This patch fixes a problem that is as old as the tests and is totally 
unrelated to the main one. I just wanted to improve the situation so that 
versioned-namespace mode does not look buggier than necessary when someone 
(like you) runs those tests.

Maybe a misunderstanding on my part.  I was under the impression that 
versioned-namespace was currently unusable because it forces the old string 
ABI.  If that is not the case, then I guess the changes are OK now.


Put that way, it does make this mode's usability quite limited.

It's only functional enough to (almost) pass make check-c++ :-)



I am pretty concerned about the maintainability of this though, hence this …


However, IMO, this could become quite painful as more g++ tests make use of std 
headers
(which is not really optional for facilities like this that are tightly-coupled 
between the FE and
the library).

For the future, it does seem that a more complete solution might be to 
introduce a
testsuite-wide definition for the C++ versioned std:: introducer, so that we 
can update it in one
place as the version changes.

So (as a thought experiment):
  - we'd have something of the form "CXX_STD" as a tcl global
  - we'd add the presence/absence of versioning to the relevant site.exp (which
means recognising the versioning choice also in the GCC configure)
  - we'd migrate tests to using ${CXX_STD} instead of "std::__N" in matches

… I guess an alternative could be to cook up some alternate warning/error/etc
match functions that cater for arbitrary inline namespaces but that seems 
like a much
more tricky and invasive testsuite change.
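As a concrete illustration of the directive style involved (a hypothetical test fragment, not a file from the patch), the fix makes the inline version namespace optional in the expected-diagnostic pattern:

```cpp
// Hypothetical g++ coroutine test fragment; the pattern accepts both the
// plain and the versioned-namespace diagnostic (std::suspend_always vs.
// std::__8::suspend_always):
// { dg-error {std::(__8::)?suspend_always} "" { target *-*-* } }
//
// Under the CXX_STD thought experiment above, the same line might instead
// use a testsuite-wide tcl global, updated in one place when the version
// namespace changes:
// { dg-error "${CXX_STD}::suspend_always" "" { target *-*-* } }
```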

thoughts?

I considered amending gcc/testsuite/lib/prune.exp to simply remove the version 
from the diagnostic. But the reply explaining why it was not working scared me 
off, hence this patch.

https://gcc.gnu.org/pipermail/gcc/2023-September/242526.html

Ah, I didn’t see that mail - will try to take a look at the weekend.


Ok, I'll instead chase for the patches on libstdc++ side then.

Moreover, adopting the cxx11 ABI in versioned-namespace mode will imply a 
version bump, which would force us to patch those files again if we do not 
find another approach before then.


Thanks,

François




Sourceware Open Office, Friday October 13, 18:00 UTC

2023-10-11 Thread Mark Wielaard
Every second Friday of the month is the Sourceware Overseers Open
Office hour in #overseers on irc.libera.chat from 18:00 till 19:00
UTC. That is this Friday October 13th.

Please feel free to drop by with any Sourceware services and hosting
questions. Some specific topics we will likely discuss:

- Mailinglists

  Last month we changed the settings on various project patches
  mailinglists to avoid From rewriting. We also got various requests
  for special security and code of conduct lists. If your project
  needs an extra mailinglist, or you have questions about settings,
  admin or moderation, please reach out.

- Online mini BoFs

  One item that came up at Cauldron last month was about meeting more
  frequently in smaller (virtual) groups.

  Thanks to our fiscal sponsor, Software Freedom Conservancy, all
  Sourceware projects can use their Big Blue Button instance to
  organize online mini BoFs or for having periodic meetings for their
  project.

  For this meeting we will also have a BBB meeting room:
  https://bbb.sfconservancy.org/b/mar-aom-dmo-fko

  Please participate to test it out. You can also create your own
  account at https://bbb.sfconservancy.org/b/signup which we can then
  activate for you. Note: Anyone is able to join a meeting, accounts
  are only required to create new meetings.

- Integrate builder CI, Full and Try builds and patchwork PreCommit-CI

  https://builder.sourceware.org/ runs CI, Full and Try builders for
  various projects, https://patchwork.sourceware.org/ collects patches
  and can run Pre-Commit CI for various projects. It should be
  possible to combine the two so the buildbot CI, Full and Try
  builders can also be triggered by new patchwork patches and
  patchwork receives Checks from builder for completed builds with all
  test results going into the bunsen database.

Of course you are welcome to drop into the #overseers channel at any
time and we can also be reached through email and bugzilla:
https://sourceware.org/mission.html#organization

If you aren't already and want to keep up to date on Sourceware
infrastructure services then please also subscribe to the overseers
mailinglist. https://sourceware.org/mailman/listinfo/overseers

We are also on the fediverse these days:
https://fosstodon.org/@sourceware

The Sourceware Project Leadership Committee also meets once a month to
discuss all community input. The committee sets priorities, decides
how to spend any funds, negotiates with hardware and service
partners, creates budgets together with the Conservancy, and decides
when a new fundraising campaign is needed. Up till now we have been
able to add new services without needing to use any of the collected
funds. Our hardware partners have also been very generous with
providing extra servers when requested. The current committee includes
Frank Ch. Eigler, Christopher Faylor, Ian Kelling, Ian Lance Taylor,
Tom Tromey, Jon Turney, Mark J. Wielaard and Elena Zannoni.