Hi Richard,

On Sun, 2026-03-15 at 11:46 +0100, Richard Biener via Gcc wrote:
> On Sat, Mar 14, 2026 at 7:59 PM Jerry D <[email protected]> wrote:
> > Some of the various LLM services available appear to be getting very
> > good at generating bug fixes. I realize that one must be careful as
> > these tools can at times do things that may be superfluous to the
> > actual fix. By superfluous I mean lines of code that are not relevant
> > to the lines that fix it.
> >
> > I saw some discussions of this subject for gcc somewhere and wanted
> > to know if we have a specific policy established / documented
> > somewhere regarding this.
>
> There are legal issues affecting yourself if contributing under the DCO
> (you'll be liable) or with assigning to the FSF (I suppose you're liable
> towards the FSF in this case). A (Co-)AuthoredBy: LLM doesn't fix this.
The binutils policy does take care of that:
https://sourceware.org/binutils/wiki/LLM_Generated_Content

elfutils adopted something similar:
https://inbox.sourceware.org/elfutils-devel/CAJDtP-Sz6R+=hsXS5=29DX3=f3yfoewxvxkeqed_loaels7...@mail.gmail.com/

See also how qemu and gnulib handle this:
https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-generated-content
https://cgit.git.savannah.gnu.org/cgit/gnulib.git/tree/HACKING#n226

> There's another bit of it with LLM agents possibly hammering on
> sourceware infrastructure (I'm thinking of bugzilla mostly).

I wish there were some way to make these AI scraper bots behave... But it
isn't our contributors (or even our real users) that do this. We are
lucky to have Anubis now, and that we can use it in non-javascript mode
to slow down the bots. It seems to work (for now?).

But yeah, this AI bubble cannot pop soon enough. Sigh.

Cheers,

Mark
