On Sun, 2025-01-12 at 16:56 +0000, Colin Watson wrote:
> 
> (I have less fixed views on locally-trained models, but I see no very
> compelling need to find more things to spend energy on even if the costs
> are lower.)

Locally-trained models are not practical at the current stage. State-of-the-art
models can only be trained by the wealthiest organizations with large GPU
clusters. Training and deploying smaller models, such as 1-billion-parameter
ones, can lead to very wrong impressions and conclusions about what those
models can do.

Based on the comments, what I see is that adopting LLMs as an organization is
too radical for Debian. In that sense, leaving this new technology to
individuals' personal evaluation and use is more reasonable.

So what I was talking about is simply a choice between two options:
 1. A contributor who needs help can leverage an LLM for its immediate response
    and help, even if it is correct only 30% of the time. This requires the
    contributor to have the knowledge and skill to use this new technology
    properly.
 2. A contributor who needs help has to wait for a real human for an indefinite
    period of time, but the correctness is above 99%.

The existing voices chose the second option. I want to mention that "waiting
for a real human for help on XXX for an indefinite time" was a bad experience
when I was a newcomer. That the community does not agree on using this new
technology to ease such a pain point seems understandable to me.
