I'm glad this was raised. Thank you, Huaxin.
I originally suggested adding a comment
<https://github.com/apache/iceberg/pull/15213#discussion_r2790681513> in
the AI Guidelines PR <https://github.com/apache/iceberg/pull/15213> to
address this issue. We should add a version of that comment to the
guidelines explicitly, along with whatever consequences the community
decides on for contributors who don't follow them.

On Mon, Mar 9, 2026 at 5:53 PM huaxin gao <[email protected]> wrote:

> Hi everyone,
>
> Some recent PRs look like they were made entirely by AI: finding issues,
> writing code, opening PRs, and replying to review comments, with no human
> review and no disclosure.
>
> Our guidelines already say contributors are expected to understand their
> code, verify AI output before submitting, and disclose AI usage. The
> problem is they say nothing about what happens when someone ignores them.
>
> Should we define consequences? For example:
>
>
>    - Closing PRs that were clearly not reviewed by a human before
>    submitting
>    - Limiting contributions from people who repeatedly ignore these
>    guidelines
>
> It's OK to use AI to help write code, but submitting AI output without
> looking at it and leaving it to maintainers to catch the problems is not
> OK.
>
> What do you all think?
>
> Thanks,
>
> Huaxin
>