Hi Linus,

On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> [ Coming back to this after a week of trying to clean up the disaster
> that is my inbox after the merge window ]
>
> On Sun, 3 May 2026 at 04:35, Willy Tarreau <[email protected]> wrote:
> >
> > The use of automated tools to find bugs in random locations of the kernel
> > induces a raise of security reports even if most of them should just be
> > reported as regular bugs. This patch is an attempt at drawing a line
> > between what qualifies as a security bug and what does not, hoping to
> > improve the situation and ease decision on the reporter's side.
>
> I actually think we may want to go further than this.
>
> I think we should simply make it a rule that "a 'security' bug that is
> found by AI is public".
This would definitely help us a lot on [email protected], but...

> Now, I may be influenced by that "my inbox is a disaster during the
> merge window" thing, but I do think this is pretty fundamental: if
> somebody finds a bug with more or less standard AI tools (ie we're not
> talking magical special hardware and nation-state level efforts), then
> that bug pretty much by definition IS NOT SECRET.

I think it's only 99.9% true. I mean, I've used such tools myself to
find bugs that were not found otherwise, and I know that:
  - interactions with the tools count a lot
  - luck counts even more

There remains a faint possibility that the reporter has worked a lot
with their tool to be able to find the problem, i.e. the user helped
the LLM and not the opposite. In that case it might indeed not be
public.

But clearly, from what we've seen over the last few weeks, the number
of duplicates has exploded, with up to 3 reports for the same issue
within 2 days, so it's clear that those are not in the category I
mention above.

Maybe we should leave some rope for "if you are fairly confident that
the work you did is unlikely to have been replicated by anyone else,
then you can report it here", but I think we'll both agree that for
now, most reporters really think they did something exceptional while
we all saw it was not the case (or they all do the same exceptional
thing). So I'm a bit torn on that.

> So why should we consider it special and have it be on the security list?
>
> Yes, yes, I know - some people think that "security bugs are special".
> And I've been on the record before calling that opinion special - in
> the short bus sense.
>
> Bugs are bugs. And not having them in public only makes them harder to
> deal with.
>
> Do we want to make bugs with potential security impact harder to deal
> with? No. No, we really don't.
>
> So I claim that the only reason for a security list is the non-public
> nature of the bug and the whole "responsible disclosure" argument.
As you probably guess, I totally agree with these points. I'm just
trying to leave the door open for the rare exceptions without having
to accept the whole flood.

> But that argument is complete and utter garbage in the face of some
> mostly automated AI discovery (now, that argument is mostly a fiction
> in the first place, but I am not going to argue with people who have
> vested interests in making their special patches "security bugs").
>
> To recap - I think this "document the scope of security bugs" is good,

Thanks for the feedback.

> but I think we should go even further, and just document the fact that
> anything found by regular AI tools should just always go to public
> lists and is simply not special.

I'm fine with that, but I'd like to add an "except..." clause, though
I don't know how to phrase it. If you have any idea, we can write
something for a start and see how it goes.

It looks like these tools are pretty good at swallowing our doc
updates intended to help reporters, so the good news is that we can
now write instructions in the process docs that are mostly followed ;-)

Willy

