Generally, +1 to Jim's sentiments here.

Jim Porter <[email protected]> writes:

> On 3/7/2026 11:45 AM, Ihor Radchenko wrote:
>> I do not see this being a problem with LLMs. If someone is pushing
>> changes carelessly, that's not acceptable. With or without LLMs.
LLMs seem to have a bigger impact than just another neutral tool. For
one thing, they seem to be doing for software patches what e-mail did
for unsolicited advertising: changing the economics to advantage
spammers. By spammers, I mean people or entities who contribute large
volumes of carelessly generated patches to projects not because they
care about them but to burnish their credentials as contributors.
Unlike e-mail, it does still have a marginal cost (tokens), and like
e-mail we'll eventually get blockers to keep it manageable.

Jim Porter <[email protected]> writes:

> One issue here is review burden [...] With LLM-generated code, the
> patch is often cleaner (at least superficially), which in my
> experience requires much closer attention from the reviewer.

I lack experience in reviewing code, but this resonates with my
experience in my own field (translation and copy-editing).

> To some degree, this is unavoidable. People may post LLM-generated
> patches even if we tell them not to. However, in projects where
> LLM-generated patches are banned, reviewers are more able to reject
> the patch and move on, rather than expending the extra energy to suss
> out all the lurking bugs.

I think this is an important consideration in setting policy, and it
argues for a restrictive policy, at least initially.

> However, in a social sense, I believe that we (maintainers,
> contributors, etc.) have a responsibility to help cultivate these
> freedoms in others. The very reason we can evaluate the merit of
> LLM-generated code is because we've had the time to hone our skills.
> Those who haven't had those years of practice deserve our attention
> and guidance so that they too can have those skills. So that they've
> developed the *positive* freedom to study and change how a program
> works. I don't think there's any better way to do this than to learn
> by doing alongside the experts.
Ihor's outlined LLM-receptive policy springs from the same pedagogical
ethos, I think, and seeks pragmatically to channel new LLM-using
contributors accordingly. I appreciate that, but amid all the AI hype,
I do think deliberately LLM-restrictive communities are needed to
preserve and foster these skills and freedoms -- and also to offer an
attractive breathing space for some of us.

These are general feelings, not concrete feedback on Ihor's outlined
policy, but perhaps that's OK for this subthread.

Regards,
Christian
