On Fri May 8, 2026 at 11:35 AM EDT, Greg KH wrote:
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > [ Coming back to this after a week of trying to clean up the disaster
> > that is my inbox after the merge window ]
> >
> > On Sun, 3 May 2026 at 04:35, Willy Tarreau <[email protected]> wrote:
> > >
> > > The use of automated tools to find bugs in random locations of the kernel
> > > induces a rise in security reports even though most of them should just be
> > > reported as regular bugs. This patch is an attempt at drawing a line
> > > between what qualifies as a security bug and what does not, hoping to
> > > improve the situation and ease the decision on the reporter's side.
> > I actually think we may want to go further than this.
> >
> > I think we should simply make it a rule that "a 'security' bug that is
> > found by AI is public".
Whether or not my opinion carries any weight, I feel it should be said here:
Yes, *in theory* the bug is public. Anyone can find it. But just like any bug
sitting in an open source repository, anyone can find it if they look.
The only difference is that an LLM makes it more apparent and noticeable,
if you ask it to.
The choice to then decide "therefore we can disclose it immediately" is, in my
opinion, not great, because it brings a bug that nobody, or at most relatively
few people (even in small circles), knew about to a much broader audience.
Take Dirty Frag: even though the embargo is said to have been broken, and
all parties agreed to release the disclosure, it was put on GitHub. Of course,
information that is public is public. But putting it on GitHub and then
buying the domain dirtyfrag.io makes it easy to draw attention to a bug
that was disclosed **with no patch or CVE.**
Even if the mitigation is "just disable the module", I still think that by
giving up the embargo entirely we are creating more attention, and more
opportunity for exploitation, even if what gets published is a PoC rather
than an exploit written for malicious purposes.
> > Now, I may be influenced by that "my inbox is a disaster during the
> > merge window" thing, but I do think this is pretty fundamental: if
> > somebody finds a bug with more or less standard AI tools (ie we're not
> > talking magical special hardware and nation-state level efforts), then
> > that bug pretty much by definition IS NOT SECRET.
Yes. I agree. But in theory that person did not need AI to find the bug,
so by that logic the bug was already discoverable by anyone.
> After the past 2 weeks, and the past 2 months, I am going to violently
> agree with you here. We've seen so many "duplicate" bug reports it's
> not funny. All of the modern LLMs are feeding their output back into the
> model for future runs, which makes the data totally public. Even if
> not, the output is being monitored by external companies at the very
> least.
I think that's more "irresponsible disclosure" - maybe there is some way
that LLM-generated emails can be filtered?
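For what it's worth, a naive version of such a filter - a purely hypothetical
sketch, where the marker phrases, `llm_score`, and the threshold are all my
invention and not any real vger tooling - might just score a report body for
boilerplate common in LLM-generated submissions:

```python
# Hypothetical heuristic, not a real mailing-list filter: score an
# incoming report for stock phrases typical of LLM-generated
# "security" submissions, and redirect high scorers to a public list.
LLM_MARKERS = [
    "as an ai language model",
    "potential security vulnerability",
    "i hope this helps",
    "critical severity",
]

def llm_score(body: str) -> int:
    """Count occurrences of marker phrases in a report body."""
    text = body.lower()
    return sum(text.count(marker) for marker in LLM_MARKERS)

def should_redirect(body: str, threshold: int = 2) -> bool:
    """True if the report looks machine-generated enough to bounce
    from security@ toward the normal public bug workflow."""
    return llm_score(body) >= threshold
```

Of course, keyword scoring like this is trivial to evade, so at best it would
catch the low-effort bulk, not a determined reporter.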
And again, yes, the data is being trained on. But **you have to look for it.**
It is still a needle in a haystack, not a black hole absorbing said haystack.
> So why should we consider it special and have it go to the security list?
> I don't think we should anymore.
>
> Yes, having a full reproducer in public is not good, but the general
> "this is a bug" comments we should start redirecting to public lists
> more. That's the only way we are going to handle this influx, as our
> "normal" bug workflow works very well, especially when a report comes
> with a fix, as these LLM tools can provide very easily.
Could this at least be temporary? There are only a finite number of bugs
that can exist in a codebase.
-Josh