aaron.ballman added a comment.

In D122983#3426661 <https://reviews.llvm.org/D122983#3426661>, @paulwalker-arm 
wrote:

> Thanks for this.

Any time, thank you for speaking up when I was about to drop test coverage by 
accident!

In D122983#3426716 <https://reviews.llvm.org/D122983#3426716>, @erichkeane 
wrote:

> In D122983#3426661 <https://reviews.llvm.org/D122983#3426661>, 
> @paulwalker-arm wrote:
>
>> Thanks for this.  I can see about cleaning up the tests, but I'm still not 
>> sure what the advantage is.  The affected RUN lines are already 
>> `-fsyntax-only` tests.  Is it about where the test files live, or is there 
>> something else I should be considering?  The benefit of the current tests is 
>> that it's easy to spot holes, because a single function tests both 
>> requirements.  My fear is that separating the tests based on Sema/CodeGen 
>> could mean we'll miss something and never know.
>
> There are two big problems with the way these are split up.  First, the 
> 'list' of functions that you expect not to be declared under a flag doesn't 
> exist in one place, and it ends up being difficult to follow and track down 
> because it is spread out as much as it is.  Second, the functions are in 
> tests that have a "REQUIRES" clause, which means they are not caught in many 
> configurations.  We typically avoid using -verify in CodeGen tests (unless 
> the diagnostic is ACTUALLY emitted in CodeGen) as a matter of course.  The 
> diagnostic you're using to validate these is also likely a poor choice (it 
> is a warning based on a non-standard extension), which is what makes this so 
> difficult for folks trying to modernize Clang to follow the standards.  IMO, 
> if these had been C++ tests (which would have produced 'not found' 
> failures), or had validated the existence of these functions through some 
> other mechanism, we likely would never have noticed except via an audit.

+1 to this.
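
To make the concern concrete, here is a minimal sketch of the pattern being 
described (the builtin name, triple, and diagnostic text are made up for 
illustration, not taken from the actual tests): a test gated by a REQUIRES 
line that also carries -verify checks, so the diagnostic checks silently 
never run in builds where that target isn't registered.

  // REQUIRES: aarch64-registered-target
  // RUN: %clang_cc1 -triple aarch64-none-linux-gnu -fsyntax-only -verify %s

  // 'some_target_builtin' is a placeholder.  The expected-warning below is
  // only ever checked in builds that register the AArch64 target, so a
  // change to the warning text can go unnoticed everywhere else.
  void test(int *p) {
    some_target_builtin(p); // expected-warning {{implicit declaration of function}}
  }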

It's also about expectations -- the test directories are set up to largely 
reflect the libraries that comprise Clang, and the contents of each directory 
are intended to test that particular component and not others. Mixing codegen 
and sema tests is often a surprise when working on diagnostics, because 
running the *whole* test suite is expensive while running just the Sema tests 
is quite quick. The usual workflow is to iterate against the Sema tests and 
then run the whole suite once locally after the diagnostic changes are done, 
just in case something was missed. So codegen tests with diagnostic checks 
already come as a surprise under that workflow. But as Erich points out, 
these particular tests don't run everywhere to begin with, so it came as a 
rather unpleasant surprise that ~200 more tests are impacted by changing the 
diagnostic. It would have been far less painful if these were in a single 
file in Sema -- we'd only have to fix it once instead of ~200 times, and we'd 
catch the diagnostic regardless of which targets are registered.
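
For contrast, here is an equally hedged sketch (again with made-up function 
names) of what a single consolidated file under clang/test/Sema could look 
like: no REQUIRES line, so it runs in every configuration, and every function 
we expect to be undeclared under the flag is checked in one place.

  // RUN: %clang_cc1 -fsyntax-only -verify %s

  // All of the functions we expect to be gated live in this one file, so a
  // change to the diagnostic only has to be fixed up here, and the checks
  // run regardless of which targets are registered.
  void caller(void) {
    feature_func_a(); // expected-warning {{implicit declaration of function}}
    feature_func_b(); // expected-warning {{implicit declaration of function}}
  }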


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D122983/new/

https://reviews.llvm.org/D122983
