On Fri, May 08, 2026 at 11:54:32AM -0400, Joshua Peisach wrote:
> On Fri May 8, 2026 at 11:35 AM EDT, Greg KH wrote:
> > On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > > [ Coming back to this after a week of trying to clean up the disaster
> > > that is my inbox after the merge window ]
> > >
> > > On Sun, 3 May 2026 at 04:35, Willy Tarreau <[email protected]> wrote:
> > > >
> > > > The use of automated tools to find bugs in random locations of the
> > > > kernel induces a rise in security reports even if most of them should
> > > > just be reported as regular bugs. This patch is an attempt at drawing
> > > > a line between what qualifies as a security bug and what does not,
> > > > hoping to improve the situation and ease the decision on the
> > > > reporter's side.
> > >
> > > I actually think we may want to go further than this.
> > >
> > > I think we should simply make it a rule that "a 'security' bug that is
> > > found by AI is public".
>
> Whether my opinion is cared about or not, I feel it should be put in here:
>
> Yes, *in theory* the bug is public. Anyone can find it. But just like bugs
> sitting in open source code repositories, anyone can look for it if they
> try.
>
> The only difference is that an LLM is making it more apparent and
> noticeable to people, if you ask it to.
>
> The choice to then decide "therefore we can disclose it immediately", in
> my opinion, is not great. Because then you are bringing attention to a bug
> that nobody, or at most relatively few people, knew about (even in small
> circles) to a broader audience.
It's no longer needed, please trust us. Last week we saw about one
duplicate every day, and some bugs had up to 2 duplicates. I tried myself
to ask my *local* LLM to find bugs in a certain class over the whole net
tree, and it found one of the recently published ones without me having to
give it any hint about this. Really, these days LLMs can swallow huge
amounts of data and correlate complex patterns very easily over an immense
context. You don't need to ask them to analyze a patch anymore, nor to
work on this or that file. An issue found by an LLM is just a proof that
this issue CAN BE FOUND by an LLM, thus a good indication that someone
else will find it, and very likely that someone else might already be
using it.

> Take Dirty Frag - even though the embargo is said to have been broken,
> and all parties agreed to release the disclosure, it was put on GitHub.
> Of course, information that is public, is public. But putting it on
> GitHub and then buying the domain dirtyfrag.io makes it easy to bring
> attention to the bug that was disclosed **with no patch or CVE.**

It's really not how it works, nor how it worked.

> Even if the mitigation is "just disable the module", I still think that
> by giving up the embargo entirely, we are creating more attention, and
> more opportunity for exploitation. Even if it's a PoC and not an exploit
> for malicious purposes.

What is important is that we insist on no longer sharing PoCs publicly.
This will slow down script kiddies (who are the ones doing the most damage
because they don't need the bug for their business, yet they cause harm
using it). Criminals are probably already playing with it and might have
been for weeks or months already.

> > After the past 2 weeks, and the past 2 months, I am going to violently
> > agree with you here. We've seen so many "duplicate" bug reports it's
> > not funny. All of the modern LLMs are feeding the output back into the
> > model for future runs, which makes the data totally public.
> > Even if not, the output is being monitored by external companies at the
> > very least.
>
> I think that's more "irresponsible disclosure" - maybe there is some way
> that LLM emails can be filtered?

No :-( If you saw the flood we're receiving on [email protected], it
feels like taking a shower under Niagara Falls. Sometimes we just say
"trim this and repost it, it's too long, we can't read it".

> And again, yes, the data is being trained on. But **you have to look for
> it.** It is still a needle in a haystack, but it's not a black hole
> absorbing said haystack.

No, really not at all. Not since the last month at least.

> > > So why should we consider it special and have it be on the security
> > > list?
> >
> > I don't think we should anymore.
> >
> > Yes, having a full reproducer in public is not good, but the general
> > "this is a bug" comments we should start redirecting to public lists
> > more. That's the only way we are going to handle this influx, as our
> > "normal" bug workflow works very well, especially when it comes with a
> > fix, as these LLM tools can provide very easily.
>
> Could this at least be temporary? There are only a finite number of bugs
> that can exist in a codebase.

We regularly update the doc based on circumstances, so we don't need to
worry about that now. It looks like AI-based reports consume the doc, and
this will make them less painful to maintainers, which is already a great
thing.

Willy

