Ihor Radchenko <[email protected]> writes:
> I'm leaning towards a more receptive attitude.
> Preliminary ideas are the following:
> 1. First-time contributors should be discouraged from using LLMs.
> 2. The only exception to (1) is when they declare that
>    a. they are experienced LLM users
>    b. they confirm that they have reviewed the LLM-generated code and
>       *also the code it changes*
> 3. Contributors who wrote their own patches in the past may use LLMs
>    for smallish patches. No new substantial features.
> 4. Regular contributors may be trusted to use LLM assistance for new
>    features. They are probably experienced enough to review the
>    generated code and make sure that it is reasonable.
Reading this list, I infer that you are looking at a purely technical
evaluation, ...
> Let me know if any of the above smells disaster.
... which brings me to this point. Today *any and every* discussion about LLMs revolves
not only around the technical aspects but also around their second-order side effects and how
these tools affect the world we live in:
- how they affect free/open-source projects: assaulted by "contributors" pushing code that is
  neither reviewed nor tested, accompanied by made-up claims. Sometimes maintainers are not even
  speaking to a human, because their review comments are piped straight into an LLM
- how they affect service hosting: scraping content and bringing servers to
  their knees
- how they affect developers' mindset: people "unlearning" how to think about writing software
  and relying on proprietary, pay-per-token services to write FOSS code
- how they affect the environment: climate impact, etc.
- how they affect labor: people tagging datasets for training
The bottom line is that I have serious concerns about the state of LLMs as of _today_, and I
believe any discussion should not shy away from a comprehensive view and evaluation.