Extending the issue template is an option, but let's be aware of the downsides and make sure we believe it's a net positive (also outside the current situation).
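For context, GitHub issue forms can make such checkboxes required, so the form cannot be submitted until they are ticked - which answers the "do we auto-reject?" question, though not the bot problem. A minimal sketch (the file path and wording here are illustrative, not an actual proposal):

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (illustrative path)
name: Bug report
description: Report a problem with the project
body:
  - type: checkboxes
    id: attestation
    attributes:
      label: Attestation
      options:
        # required: true blocks submission until the box is ticked
        - label: I am a human being and am not creating AI-generated issues.
          required: true
        - label: I accept that if I post AI-generated issues, my GitHub account MAY be terminated.
          required: true
  - type: textarea
    id: description
    attributes:
      label: What happened?
    validations:
      required: true
```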
Some people (and bots) will overlook the checkboxes. If "I am a human" is
not checked, do we auto-reject the issue?
Some people will notice the checkboxes and spend extra time filling them
in (ticking checkboxes in an issue template is non-trivial for beginners
with low Markdown skills).
Some bots will tick the checkbox anyway (agents need to know how to fill
in issue templates), voiding the gain.

On Fri, 31 Jan 2025 at 14:20, Steve Loughran <ste...@cloudera.com.invalid> wrote:

> What about extending the issue templates?
>
> Because of a growing problem with worthless LLM-generated issues, GitHub
> MAY terminate any account doing this to our project.
>
> [ ] I am a human being and am not creating AI-generated issues.
> [ ] I accept that if I am posting AI-generated issues, my GitHub account
> MAY be terminated.
>
> Note the use of "MAY", as per RFC 2119 - as SHALL is not used, it is
> technically correct.
>
> On Fri, 31 Jan 2025 at 11:40, Jarek Potiuk <ja...@potiuk.com> wrote:
>
>> Hey,
>>
>> I am at FOSDEM now, but I have some progress and more information
>> about the whole stuff.
>>
>> Some facts first (without any judgment from my side):
>>
>> * My posts on social media reached many, many people (thanks to you -
>> posting it - and to people from other foundations). The reach is
>> amazing - more than 200,000 views - and Outlier.ai and Scale.ai
>> noticed.
>> * Scale.ai reached out to me. They apologized for what they have done
>> (in private so far) and admitted it has been their fault entirely.
>> They admitted their instructions were poorly written and led the
>> people they hired to submit issues where they were never supposed to.
>> * We had a call where we discussed why and how it happened, and they
>> explained that they have since changed those instructions with a
>> bright, flashy "do not submit issues" and added explicit instructions
>> there.
>> * They could not (or would not) really explain what exactly the
>> project and the issue generation were for - initially they mentioned
>> training agents to behave like humans and create issues with a higher
>> probability of being accepted, but they backed out later, saying it's
>> only about "assessing" issues.
>> * They promised to come back to me with more details and answers about
>> the extent of the project and what they want to achieve.
>> * I was contacted (thanks to an unnamed ASF member who put me in
>> contact) by a journalist from The New Stack writing a story about it.
>> I had a call with her, and I also put her in contact with Scale.ai so
>> that they can share their side of the story as well.
>> * GitHub is - faster and faster, it seems - reacting and blocking
>> those users based on our reports. They got some understanding of the
>> situation, started to see the patterns, and block those users when
>> reported. Even today I got 3 confirmations about blocked accounts.
>>
>> My take (and this is speculation on my side, not fact):
>>
>> * They WERE training agent AI to be able to submit issues (even if
>> they now deny it).
>> * This is a VERY bad idea that will make our lives harder when put
>> into practice - the probabilistic / hallucinating nature of it will be
>> counter-productive and create a lot of overhead for us, the
>> maintainers.
>> * We should work with "good" local AI partners to build systems that
>> detect and prevent those kinds of issues and "help" maintainers battle
>> them - there will be more and more of these cases.
>> * We should continue reporting those cases to GitHub - this eventually
>> works as well.
>>
>> This shows the power of the community! I will keep you posted.
>>
>> J.
>>
>> On Thu, Jan 30, 2025 at 5:26 PM Ryan Hatter
>> <ryan.hat...@astronomer.io.invalid> wrote:
>>
>>> Here's a boilerplate response that I'm going to start using moving
>>> forward:
>>>
>>> > Hello,
>>> >
>>> > This Issue appears to be AI-generated spam. If this is a mistake,
>>> > please let us know - otherwise, any further spam Issues may lead us
>>> > to report your account to GitHub.
>>>
>>> On Mon, Jan 27, 2025 at 10:16 AM Jarek Potiuk <ja...@potiuk.com> wrote:
>>>
>>> > FYI. My friend - who is an engineering manager at GitHub in the
>>> > "AI" space - saw my post and reached out to their internal team
>>> > fighting spam. I hope they will be able to do something about it as
>>> > well.
>>> >
>>> > On Mon, Jan 27, 2025 at 8:21 AM Jarek Potiuk <ja...@potiuk.com> wrote:
>>> >
>>> > > > Given what I read about the company, they will most likely
>>> > > > ignore you. Some people think the company is a scam, and they
>>> > > > seem to engage in unethical business practices.
>>> > >
>>> > > Yeah, but that absolutely does not mean we should give up and do
>>> > > nothing.
>>> > >
>>> > > Hopefully that will give at least some people pause when they
>>> > > apply for Outlier.ai jobs.
>>> > > There are a few people there who seem to have a "real-ish"
>>> > > history of contribution, and they actually came back after our
>>> > > reaction and apologised, as they were tricked into doing it by
>>> > > the instructions from Outlier.
>>> > >
>>> > > So at least we can raise awareness of the whole situation. That's
>>> > > already something. Even if they shrug off all the reaction (which
>>> > > is amazing - I did not know we have so many people caring about
>>> > > maintainers).
>>> > >
>>> > > And hopefully GitHub (I have a few friends from GitHub reacting
>>> > > to it) will also take notice, make things harder for "new"
>>> > > accounts, and maybe even detect such spam on their own.
>>> > > Eventually - and this is just the beginning - we are in an arms
>>> > > race where we will have to employ AI to battle the AI created by
>>> > > "rogue" players. It's inevitable. It will happen, absolutely no
>>> > > doubt about it. So we have to strengthen our efforts to make
>>> > > people aware of those dangers and prepare to react to them.
>>> > >
>>> > > We (and a few other Apache projects) already work with Dosu - a
>>> > > great "open-source friendly" AI company. For quite a few months
>>> > > their AI has been classifying our issues and putting the right
>>> > > labels on them (and they do a great job of that), so it's only a
>>> > > matter of their focus (and I spoke to Devin, the creator of Dosu)
>>> > > on being able to analyse those issues and maybe even
>>> > > automatically mark them as "AI spam". In fact, their algorithms
>>> > > already learn from us marking issues with the "AI spam" label,
>>> > > and if they enable this label and we have enough cases, their AI
>>> > > will likely help the "AI spam" label be set automatically.
>>> > >
>>> > > So ...
>>> > > I think being active, sharing things like this, and thinking
>>> > > about how to battle those cases is something we all should learn
>>> > > to do. It will be needed. This is just a canary in the coal mine.
>>> > >
>>> > > J.
>>> > >
>>> > > On Mon, Jan 27, 2025 at 12:57 AM Justin Mclean
>>> > > <jus...@classsoftware.com> wrote:
>>> > >
>>> > >> Hi,
>>> > >>
>>> > >> Given what I read about the company, they will most likely
>>> > >> ignore you. Some people think the company is a scam, and they
>>> > >> seem to engage in unethical business practices.
>>> > >>
>>> > >> Kind Regards,
>>> > >> Justin
>>> > >>
>>> > >> ---------------------------------------------------------------------
>>> > >> To unsubscribe, e-mail: dev-unsubscr...@airflow.apache.org
>>> > >> For additional commands, e-mail: dev-h...@airflow.apache.org