Hi Huaxin,

Thank you for bringing up this important topic. Since reviewer time is a limited resource, it is crucial that we protect it from low-quality AI submissions.

Committers should have the authority to close PRs when the guidelines are not followed, or when it becomes clear that the author does not deeply understand the changes they are submitting. In such cases, the reason for closing the PR should be stated clearly.

Limiting contributions from those who repeatedly ignore the guidelines may be difficult to manage at scale. However, we should definitely find a way to prevent bots from flooding the repository with automated PRs. I wonder if GitHub or other services have mechanisms to help with this.

~ Anurag

On Mon, Mar 9, 2026 at 5:53 PM huaxin gao <[email protected]> wrote:
> Hi everyone,
>
> Some recent PRs look like they were made entirely by AI: finding issues,
> writing code, opening PRs, and replying to review comments, with no human
> review and no disclosure.
>
> Our guidelines already say contributors are expected to understand their
> code, verify AI output before submitting, and disclose AI usage. The
> problem is there's nothing about what happens when someone ignores them.
>
> Should we define consequences? For example:
>
> - Closing PRs that were clearly not reviewed by a human before submitting
> - Limiting contributions from people who repeatedly ignore these guidelines
>
> It's OK to use AI to help write code, but submitting AI output without
> looking at it and leaving it to maintainers to catch the problems is not OK.
>
> What do you all think?
>
> Thanks,
>
> Huaxin
