Hi everyone,

Some recent PRs appear to have been produced entirely by AI: finding the
issue, writing the code, opening the PR, and replying to review comments,
with no human review and no disclosure.

Our guidelines already say contributors are expected to understand their
code, verify AI output before submitting, and disclose AI usage. The
problem is that they say nothing about what happens when someone ignores
them.

Should we define consequences? For example:

   - Closing PRs that were clearly not reviewed by a human before submitting
   - Limiting contributions from people who repeatedly ignore these
   guidelines

Using AI to help write code is fine. Submitting AI output without
reviewing it, and leaving maintainers to catch the problems, is not.

What do you all think?

Thanks,

Huaxin
