Hi,

On 1/13/25 03:46, M. Zhou wrote:
> So what I was talking about is simply a choice between the two: 1. A contributor who needs help can leverage an LLM for its immediate response and help, even if it is only correct 30% of the time. It requires the contributor to have the knowledge and skill to properly use this new technology.
The "skill required" aspect is an important one. I'm willing to review packages and provide feedback so learning can occur.
LLMs interrupt the learning process here: a submitted solution leads me to assume the contributor already has a specific mental model, and my feedback is designed to fine-tune that model.
When the contributor submits LLM-generated code, my feedback is no longer helpful for learning -- at best it can be used to iteratively arrive at a correct solution, but playing a game of telephone is a very inefficient way to get there, and it does not prepare the new contributor to work unsupervised. LLMs thus disrupt our intake pipeline in much the same way they are disrupting commercial software development.
Simon