Hi Ludo,

On Fri, 20 Feb 2026 at 18:11, Ludovic Courtès <[email protected]> wrote:
> Cayetano Santos via "Development of GNU Guix and the GNU System 
> distribution." <[email protected]> skribis:
>
>> If your contribution is not important enough for you to bother writing
>> it by yourself, don’t expect me to read it, even less expend some time
>> doing a serious review.
>
> On this topic, LLVM has an interesting piece in their AI Tool Policy:
> they refer to this as “extractive contributions”.
>
>   https://github.com/llvm/llvm-project/blob/main/llvm/docs/AIToolPolicy.md

Some comments on this LLVM policy, in case it’s used as a base.  From my
point of view, this policy is about how to deal with spam, i.e. an
unsolicited query that brings nothing; an LLM is just a means of
generating such spam.

      My comments are indented with 6 spaces.

## Policy

LLVM's policy is that contributors can use whatever tools they would like to
craft their contributions, but there must be a **human in the loop**.
**Contributors must read and review all LLM-generated code or text before they
ask other project members to review it.**

      Because it’s unverifiable, this reads as advice, IMHO.
      Therefore, I would write: Contributors SHOULD read and review all
      LLM-generated code.

                                          The contributor is always the author
and is fully accountable for their contributions. Contributors should be
sufficiently confident that the contribution is high enough quality that asking
for a review is a good use of scarce maintainer time, and they should be **able
to answer questions about their work** during review.

      I would write: Contributors […] and they MUST be able to answer
      questions about their work during review.

      This part is a verifiable criterion, more or less.

We expect that new contributors will be less confident in their contributions,
and our guidance to them is to **start with small contributions** that they can
fully understand to build confidence. We aspire to be a welcoming community
that helps new contributors grow their expertise, but learning involves taking
small steps, getting feedback, and iterating. Passing maintainer feedback to an
LLM doesn't help anyone grow, and does not sustain our community.

Contributors are expected to **be transparent and label contributions that
contain substantial amounts of tool-generated content**. Our policy on
labelling is intended to facilitate reviews, and not to track which parts of
LLVM are generated. Contributors should note tool usage in their pull request
description, commit message, or wherever authorship is normally indicated for
the work.

      I would write: Contributors MUST note LLM tool usage in…

      This is a practice we want to encode, and this information might
      have legal implications later, e.g. regarding copyright.

          For instance, use a commit message trailer like Assisted-by: <name of
code assistant>. This transparency helps the community develop best practices
and understand the role of these new tools.

This policy includes, but is not limited to, the following kinds of
contributions:

- Code, usually in the form of a pull request
- RFCs or design proposals
- Issues or security vulnerabilities
- Comments and feedback on pull requests

## Details

[…]

## Extractive Contributions

[…]

## Handling Violations

If a maintainer judges that a contribution doesn't comply with this policy,

      Here is my main concern: **judgement** based on what?

      Concretely, if we receive a message like this:
      https://lists.gnu.org/archive/html/bug-hurd/2026-02/msg00168.html
      and suppose the message didn’t open with “This is X's AI assistant”
      and close with “Claude”, how could we judge beforehand?

      As this other message explains:
      https://lists.gnu.org/archive/html/bug-hurd/2026-02/msg00169.html
      « Claude had proposed fixes that Samuel didn't accept because he
      wasn't seeing anything wrong without them. »

      Therefore, it would mean the maintainer needs to invest time in
      order to be able to judge, which defeats the policy itself.

      Or, if the maintainer does not invest time in judging, it paves
      the way for arbitrary or unfair decisions.

      In other words, if a user triggers some LLM thing that
      automatically sends stuff without a human in the loop, the
      real issue isn’t the LLM thing but the user^W spam.  And somehow
      this policy becomes about spam and how to deal with it, and not
      really about LLMs.

they should paste the following response to request changes:

[...]

The best ways to make a change less extractive and more valuable are to reduce
its size or complexity or to increase its usefulness to the community. These
factors are impossible to weigh objectively, and our project policy leaves this
determination up to the maintainers of the project, i.e. those who are doing
the work of sustaining the project.

If or when it becomes clear that a GitHub issue or PR is off-track and not
moving in the right direction, maintainers should apply the `extractive` label
to help other reviewers prioritize their review time.

      Well, this appears to me to be already almost covered by our
      section “Reviewing the Work of Others”.

      Maybe this section could be extended with some items clarifying
      “Extractive Contributions”, independently of the question of
      LLMs.

      https://guix.gnu.org/manual/devel/en/guix.html#Reviewing-the-Work-of-Others

If a contributor fails to make their change meaningfully less extractive,
maintainers should escalate to the relevant moderation or admin team for the
space (GitHub, Discourse, Discord, etc) to lock the conversation.

## Copyright

[…]

## Examples

[…]


Cheers,
simon
