On Tue, Mar 10, 2026 at 8:53 AM huaxin gao <[email protected]> wrote:
>
> Hi everyone,
>
> Some recent PRs look like they were made entirely by AI: finding issues,
> writing code, opening PRs, and replying to review comments, with no human
> review and no disclosure.
>
> Our guidelines already say contributors are expected to understand their
> code, verify AI output before submitting, and disclose AI usage. The problem
> is there's nothing about what happens when someone ignores them.
>
> Should we define consequences? For example:
>
> Closing PRs that were clearly not reviewed by a human before submitting
> Limiting contributions from people who repeatedly ignore these guidelines
>
> It's OK to use AI to help write code, but submitting AI output without
> looking at it and leaving it to maintainers to catch the problems is not OK.
>
> What do you all think?
Agreed. I'm not sure whether we could use a bot to detect and close such
PRs automatically. Having maintainers close them manually can be annoying,
but either way, we should enforce the AI contribution guidelines.

>
> Thanks,
>
> Huaxin

--
Regards
Junwang Zhao
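For what a bot could check: one low-effort option is a required AI-disclosure section in the PR template, with the bot flagging PRs that delete it or leave it untouched. A rough sketch of that check — the "## AI usage" heading and placeholder text here are hypothetical, not something our templates actually contain:

```python
import re

# Hypothetical template section; assumes authors must fill it in or write "None".
DISCLOSURE_HEADING = re.compile(r"^##\s*AI usage\s*$", re.IGNORECASE | re.MULTILINE)
TEMPLATE_PLACEHOLDER = "<!-- Describe any AI assistance, or write 'None' -->"

def missing_disclosure(pr_body: str) -> bool:
    """Return True if the PR body has no filled-in AI-usage section."""
    match = DISCLOSURE_HEADING.search(pr_body)
    if match is None:
        return True  # the section was deleted entirely
    # Look at the text after the heading, ignoring the untouched placeholder.
    rest = pr_body[match.end():].replace(TEMPLATE_PLACEHOLDER, "").strip()
    return not rest  # nothing was written under the heading

# A bot (e.g. a scheduled job calling the GitHub API for open PRs) could
# comment on, label, or close PRs where missing_disclosure(...) is True.
```

This wouldn't catch undisclosed AI use, of course — only PRs that skip the disclosure step — but it would give maintainers a mechanical reason to close them without arguing case by case.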
