Jim Porter <[email protected]> writes:

> On 3/7/2026 11:45 AM, Ihor Radchenko wrote:
>> I do not see this being a problem with LLMs. If someone is pushing
>> changes carelessly, that's not acceptable. With or without LLMs.
>
> One issue here is review burden; if I'm reviewing sloppy human-generated 
> code, it's usually *very* obvious. Even so, I can often still see the 
> intent behind it, so it's worth some extra effort to guide an 
> inexperienced contributor toward writing an acceptable patch. With 
> LLM-generated code, the patch is often cleaner (at least superficially), 
> which in my experience requires much closer attention from the reviewer.

I can only see this being true for very poor human-written patches.
Maybe we are lucky here, but such patches are uncommon.
Most of the time, closer attention is needed even without LLMs.
I generally allow myself to be less rigorous only with patches from
regular contributors, who have proven their ability to write good code
consistently.

> To some degree, this is unavoidable. People may post LLM-generated 
> patches even if we tell them not to. However, in projects where 
> LLM-generated patches are banned, reviewers are more able to reject the 
> patch and move on, rather than expending the extra energy to suss out 
> all the lurking bugs.

Given that large non-trivial LLM contributions have to be prohibited for
now, this is a non-issue. For smaller patches, lurking bugs are less
common.

>> I get it that some people may be overconfident with LLMs, but that's
>> simply a sign of limited experience. Once you work with LLMs long
>> enough, it becomes very clear that blindly trusting the generated code
>> is a very, very poor idea.
>
> I fear this is too optimistic. Projects like OpenClaw and Gas Town were 
> made by experienced developers who, by now, have certainly used LLMs 
> extensively. Despite that, the authors rarely (if ever) even look at the 
> generated code. While not everyone goes that deep down the rabbit hole, 
> evidently some developers find it irresistible.

... and they quickly get punished for not looking.
I, too, have been punished for not looking into LLM-generated code in
the past. So now I always look. Being a good developer does not equate
to being good with LLMs. I guess my "enough" above is not how you
read it; I will need to clarify.

> More broadly though, I'm concerned that LLM-generated contributions 
> undermine the social basis of free software. The first essential freedom 
> is "The freedom to study how the program works, and change it so it does 
> your computing as you wish." In an individualistic sense, LLMs don't 
> hurt this, and could even be seen as helping; now, non-programmers can 
> change a program more easily.
>
> However, in a social sense, I believe that we (maintainers, 
> contributors, etc) have a responsibility to help cultivate these 
> freedoms in others. The very reason we can evaluate the merit of 
> LLM-generated code is because we've had the time to hone our skills. 
> Those who haven't had those years of practice deserve our attention and 
> guidance so that they too can have those skills. So that they've 
> developed the *positive* freedom to study and change how a program 
> works. I don't think there's any better way to do this than to learn by 
> doing alongside the experts.

> I'd much rather my limited time and energy go towards building up the 
> next generation of free software hackers than to reviewing the output of 
> a statistical model so I can root out all the highly-plausible but 
> nevertheless incorrect bits.
>
> (I've tried to keep this somewhat brief, so I hope the above doesn't 
> omit some essential part of my reasoning.)

I get your reasoning. I also believe that learning is an important part
of participating in the libre software community.
However, do not underestimate the entry barriers.
LLMs can lower those barriers quite substantially and help people get
involved: first with some help from LLMs; later, on their own.
Here is a recent example of my own. At work, I had to use Google Docs,
which forced me out of Emacs for writing. But I had no idea how to build
a good Emacs integration. With an LLM, I quickly prototyped something in
Python to communicate with Google Docs and fetch documents into Org
mode. Doing it from scratch would have taken me a weekend; with the LLM,
half an hour. From there, I rewrote the whole thing in Elisp, with the
working Python example in front of me.
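For the curious, the core of such a prototype is just walking the JSON
that the Docs API `documents.get` call returns and emitting Org text. A
minimal sketch (the function name and the HEADING_n-to-stars mapping are
illustrative, not my actual code; the JSON shape matches the public Docs
API document resource):

```python
# Sketch: convert a Google Docs API "documents.get" response to Org text.
# Assumes the document JSON has already been fetched; only paragraphs are
# handled here (tables, section breaks, etc. are skipped in this sketch).

HEADING_STARS = {f"HEADING_{n}": "*" * n for n in range(1, 7)}

def doc_to_org(doc):
    """Return Org-mode text for a Docs API document resource (a dict)."""
    lines = []
    for element in doc.get("body", {}).get("content", []):
        para = element.get("paragraph")
        if para is None:
            continue  # not a paragraph; ignored in this sketch
        # Concatenate the text runs that make up the paragraph.
        text = "".join(
            run.get("textRun", {}).get("content", "")
            for run in para.get("elements", [])
        ).rstrip("\n")
        style = para.get("paragraphStyle", {}).get("namedStyleType",
                                                   "NORMAL_TEXT")
        stars = HEADING_STARS.get(style)
        lines.append(f"{stars} {text}" if stars else text)
    return "\n".join(lines) + "\n"
```

Fetching the document itself would go through google-api-python-client,
roughly `build("docs", "v1", credentials=creds).documents()
.get(documentId=doc_id).execute()`, with OAuth credentials set up
beforehand.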

-- 
Ihor Radchenko // yantar92,
Org mode maintainer,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>
