Hi,

> First, with some help from LLMs; later themselves.

Sorry, Ihor, but this sounds quite naive... My experience is different:

You are normally slapped in the face by the LLM's "This code is way too
old... so here is something new and shiny", which is too large to
actually get under control. And since it seems to work, you are happy
with it.

Once people start being 'helped', they get used to 'getting something
done in 10 minutes with minimum or no effort' and to being recognised
for the product (not the production line), and they go further and
further down the LLM path without learning anything.

Like many other things, the promise of LLMs is to get more done with
less effort, and they appeal to layers of your brain so deep you don't
even know they exist.

You talk about your experience, but have you ever considered that your
approach to LLMs (which, by the way, should be an example for many
others) might just be the exception to the rule?


My 2 cents, /PA

PS: I was once confronted by a person because a piece of software I
needed him to use was "so old, the LLM doesn't know what to do".



On Tue, 10 Mar 2026 at 20:48, Ihor Radchenko <[email protected]> wrote:

> Jim Porter <[email protected]> writes:
>
> > On 3/7/2026 11:45 AM, Ihor Radchenko wrote:
> >> I do not see this being a problem with LLMs. If someone is pushing
> >> changes carelessly, that's not acceptable. With or without LLMs.
> >
> > One issue here is review burden; if I'm reviewing sloppy human-generated
> > code, it's usually *very* obvious. Even so, I can often still see the
> > intent behind it, so it's worth some extra effort to guide an
> > inexperienced contributor toward writing an acceptable patch. With
> > LLM-generated code, the patch is often cleaner (at least superficially),
> > which in my experience requires much closer attention from the reviewer.
>
> I can only see it being true for very poor human-written patches.
> Maybe we are lucky here, but such patches are uncommon.
> Most of the time, closer attention is needed even without LLM.
> I generally allow myself to be less rigorous only for patches from
> regular contributors, who have proven the ability to write good code
> regularly.
>
> > To some degree, this is unavoidable. People may post LLM-generated
> > patches even if we tell them not to. However, in projects where
> > LLM-generated patches are banned, reviewers are more able to reject the
> > patch and move on, rather than expending the extra energy to suss out
> > all the lurking bugs.
>
> Given that large non-trivial LLM contributions have to be prohibited for
> now, this is a non-issue. For smaller patches, lurking bugs are less
> common.
>
> >> I get it that some people may be overconfident with LLMs, but that's
> >> simply a sign of limited experience. Once you work with LLMs long
> >> enough, it becomes very clear that blindly trusting the generated code
> >> is a very, very poor idea.
> >
> > I fear this is too optimistic. Projects like OpenClaw and Gas Town were
> > made by experienced developers who, by now, have certainly used LLMs
> > extensively. Despite that, the authors rarely (if ever) even look at the
> > generated code. While not everyone goes that deep down the rabbit hole,
> > evidently some developers find it irresistible.
>
> ... and they get punished very quickly for not looking.
> I also got punished for not looking into LLM-generated code in the
> past. So now I always look. Being a good developer does not equate to
> being good with LLMs. I guess my "enough" above is not how you read
> it. I will need to clarify.
>
> > More broadly though, I'm concerned that LLM-generated contributions
> > undermine the social basis of free software. The first essential freedom
> > is "The freedom to study how the program works, and change it so it does
> > your computing as you wish." In an individualistic sense, LLMs don't
> > hurt this, and could even be seen as helping; now, non-programmers can
> > change a program more easily.
> >
> > However, in a social sense, I believe that we (maintainers,
> > contributors, etc) have a responsibility to help cultivate these
> > freedoms in others. The very reason we can evaluate the merit of
> > LLM-generated code is because we've had the time to hone our skills.
> > Those who haven't had those years of practice deserve our attention and
> > guidance so that they too can have those skills. So that they've
> > developed the *positive* freedom to study and change how a program
> > works. I don't think there's any better way to do this than to learn by
> > doing alongside the experts.
>
> > I'd much rather my limited time and energy go towards building up the
> > next generation of free software hackers than to reviewing the output of
> > a statistical model so I can root out all the highly-plausible but
> > nevertheless incorrect bits.
> >
> > (I've tried to keep this somewhat brief, so I hope the above doesn't
> > omit some essential part of my reasoning.)
>
> I get your reasoning. I also believe that learning is an important part
> of participating in libre software community.
> However, do not underestimate the entry barriers.
> LLMs can lower the barriers quite substantially and can help people get
> involved. First, with some help from LLMs; later themselves.
> Here is a recent example from me - I struggled with a need to use Google
> Docs at work that involved getting out of Emacs for writing. But I had
> no idea how to do a good integration with Emacs. With LLM, I quickly
> prototyped something in Python to communicate with Google Docs and fetch
> documents into Org mode. Doing it from scratch would have taken me a
> weekend. With an LLM, half an hour. From there, I rewrote the whole thing
> in Elisp, having a working example in Python in front of me.
>
> --
> Ihor Radchenko // yantar92,
> Org mode maintainer,
> Learn more about Org mode at <https://orgmode.org/>.
> Support Org development at <https://liberapay.com/org-mode>,
> or support my work at <https://liberapay.com/yantar92>
>
>

-- 
Questions are not there to be answered;
questions are there to be asked.
Georg Kreisler

"Sagen's Paradeiser" (ORF: Als Radiohören gefährlich war) => write BE!
Year 2 of the New Koprocracy
