On Wed, May 06, 2026 at 06:02:15PM +0200, Willy Tarreau wrote:
> Hi Linus,
>
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > [ Coming back to this after a week of trying to clean up the
> >   disaster that is my inbox after the merge window ]
> >
> > On Sun, 3 May 2026 at 04:35, Willy Tarreau <[email protected]> wrote:
> > >
> > > The use of automated tools to find bugs in random locations of the
> > > kernel induces a rise in security reports even though most of them
> > > should just be reported as regular bugs. This patch is an attempt
> > > at drawing a line between what qualifies as a security bug and
> > > what does not, hoping to improve the situation and ease decisions
> > > on the reporter's side.
> >
> > I actually think we may want to go further than this.
> >
> > I think we should simply make it a rule that "a 'security' bug that
> > is found by AI is public".
>
> This would definitely help us a lot on [email protected], but...
>
> > Now, I may be influenced by that "my inbox is a disaster during the
> > merge window" thing, but I do think this is pretty fundamental: if
> > somebody finds a bug with more or less standard AI tools (ie we're
> > not talking magical special hardware and nation-state level
> > efforts), then that bug pretty much by definition IS NOT SECRET.
>
> I think it's only 99.9% true. I mean, I've used such tools myself to
> find bugs that were not found otherwise, and I know that:
>   - interactions with the tools count a lot
>   - luck counts even more
Thinking more about it, there's still something that doesn't quite add
up:

  - people have always been looking for vulnerabilities, sometimes for
    fun, and often to proudly show a CVE on their resume; we've been
    dealing with that for many years.

  - now they can do the same using AI with much less effort, but their
    approach still stems from actively searching for a vulnerability.

  - when they find something, they're certain it's a vulnerability,
    because it's what they asked for (hence the threat model addition).

  - if we tell them "don't report this to [email protected]", they will
    simply send their reports directly to the maintainers, who are even
    less accustomed to the process and will benefit neither from the
    security team's experience in triaging nor from its support in
    saying "no". And we all know how stressful a vulnerability report
    can be for a developer who instantly has to drop everything and
    start looking at it just in case it is valid.

For these reasons I'd rather propose that we say something along these
lines:

    Note that the security team will generally consider AI-assisted
    findings as public and will often ask you to repost your report to
    public lists.

Another point is that for many vulns there are two types of
adversaries:

  - criminals
  - script kiddies

The former must be assumed to have also discovered the same vuln,
possibly earlier, and to be actively exploiting it. The latter,
however, will just use whatever published exploit to say "look mum,
I'm root". Public reports containing too many details will speed up
usability for this group, and that's not good for users. And we *know*
that some reports contain working PoCs that need very little
modification. Passing them through [email protected] for triaging feels
safer than directing them to public lists with no early validation.

So in short, I think that:

  - AI reports should be considered public, but not necessarily well
    known yet
  - AI reports often contain repros that shouldn't be posted publicly
  - the wording of AI reports can be intimidating to developers not
    used to receiving these things

  -> the security team should remain the first filtering layer for new
     reporters, even if it means continuing to see some noise.

I think that instead it's the 3rd patch, about the threat model, that
should help us receive less noise by explaining what is not a
vulnerability. I can rework that part a bit to reflect this.

Willy

