carlosgalvezp added inline comments.

================
Comment at: clang-tools-extra/clang-tidy/ClangTidyDiagnosticConsumer.h:61-64
-  unsigned errorsIgnored() const {
+  int errorsIgnored() const {
     return ErrorsIgnoredNOLINT + ErrorsIgnoredCheckFilter +
            ErrorsIgnoredNonUserCode + ErrorsIgnoredLineFilter;
   }
----------------
carlosgalvezp wrote:
> aaron.ballman wrote:
> > carlosgalvezp wrote:
> > > aaron.ballman wrote:
> > > > The type changes overflow here from well-defined behavior to undefined 
> > > > behavior. I don't think that's a serious concern (we have error limits 
> > > > already), but in general, I don't think we should be changing types 
> > > > like this without stronger rationale. This particular bit feels like 
> > > > churn without benefit -- what do we gain by introducing the possibility 
> > > > for UB here? (I'm not strongly opposed, but I get worried when 
> > > > well-defined code turns into potentially undefined code in the name of 
> > > > an NFC cleanup.)
> > > Do we have coding guidelines for when to use signed vs unsigned? I have 
> > > quite ugly past experiences using unsigned types for arithmetic and as 
> > > "counters" so these kinds of things really catch my eye quickly. This 
> > > goes in line with e.g. the [[ 
> > > https://google.github.io/styleguide/cppguide.html#Integer_Types | Google 
> > > style guide ]].
> > > 
> > > Yes, you are correct that signed int overflow is UB, but would that then 
> > > be an argument for banning signed ints in the codebase at all? IMO what's 
> > > more important is to know the domain in which this is being applied. Do 
> > > we expect people to have more than 2^31 linter warnings? If we do, would 
> > > 2^32 really solve the problem, or just delay it? Maybe the proper 
> > > solution is to go to 64-bit ints if that's an expected use case.
> > > 
> > > Anyway if this is a concern I'm happy to revert and take the discussion 
> > > outside the patch. We used to be over-defensive in this topic as well (UB 
> > > is not to be taken lightly) but realized that perhaps the problem lies 
> > > somewhere else.
> > > Do we have coding guidelines for when to use signed vs unsigned?
> > 
> > We don't, and I don't think we could even if we wanted to. There are times 
> > when unsigned is correct and there are times when signed is correct, and no 
> > blanket rule covers all the cases.
> > 
> > > Yes, you are correct that signed int overflow is UB, but would that then 
> > > be an argument for banning signed ints in the codebase at all? 
> > 
> > No. My point about the change to UB wasn't "I think this is dangerous" -- 
> > as I mentioned, we have limits on the number of errors that can be emitted, 
> > so I don't believe we can hit the UB in practice. It's more that changing 
> > types is rarely an NFC change without further validation and it wasn't 
> > clear if this validation had happened. Changing types is a nontrivial (and 
> > rarely NFC) change in C++.
> > 
> > > Anyway if this is a concern I'm happy to revert and take the discussion 
> > > outside the patch. We used to be over-defensive in this topic as well (UB 
> > > is not to be taken lightly) but realized that perhaps the problem lies 
> > > somewhere else.
> > 
> > Personally, I'm not super motivated to see a change here, but I'm also not 
> > opposed to it. It mostly just feels like churn. If we're solving a problem, 
> > I'm happy to see the changes, but we should call out what problem we're 
> > solving in the patch summary. But this seems more a style choice and I 
> > certainly wouldn't want to see these changes used as a precedent to go 
> > switch more unsigned ints to be signed ints in the name of style.
> Thanks, I see how this change is not as NFC as I thought would be. Reverting!
> But this seems more a style choice

PS: IMHO signed vs unsigned is not just a style choice. Using unsigned types 
for the wrong purpose (e.g. doing arithmetic) can lead to nasty logical bugs, 
especially when implicit conversions happen under the hood:

https://godbolt.org/z/5qox7b1nW

The above code is obvious, but in a real-world case you might not keep in the 
back of your head that you are operating on an "unsigned" variable in the 
middle of a 100-line function.

In the end, it's all about where you place the "singularity". With unsigned 
types, wraparound happens right next to 0. With signed types, UB happens at 
min()/max(). In real-world applications, we usually operate much closer to 0 
than to min/max.

That's my 2 cents; I don't mean to change or promote any guidelines. If you 
think this is a topic of interest I can bring it to the mailing list, otherwise 
I'll just drop it :)


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D113847/new/

https://reviews.llvm.org/D113847

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits