On 1/12/25 7:46 PM, M. Zhou wrote:
> So what I was talking about is simply a choice between the two:
>  1. A contributor who needs help can leverage an LLM for its immediate
>     response and help, even if it is only correct 30% of the time. This
>     requires the contributor to have the knowledge and skill to use this
>     new technology properly.
>  2. A contributor who needs help has to wait for a real human for an
>     indefinite time period, but the correctness is above 99%.
> 
> The existing voice chose the second one. I want to mention that "waiting
> for a real human for help on XXX for an indefinite time" was a bad
> experience when I was a newcomer. The community not agreeing to use this
> new technology to ease such a pain point seems understandable to me.

No one is stopped from using any of the free offerings. I don't think we
need our own chatbot. Of course that means, in turn, that we give up on
feeding it domain-specific knowledge and our own prompt. But that's...
probably fine?

If those LLMs support that, one could still produce a guide on how to
feed more interesting data into them - or provide a LoRA. It's not like
inference requires a GPU.
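
For the sake of illustration, here is a minimal sketch of what such a
guide could describe, assuming a Hugging Face-style stack (transformers
plus peft); the base model and the "debian/assistant-lora" adapter name
are placeholders I made up, not real artifacts:

    # Hypothetical sketch: CPU-only inference with a community-provided
    # LoRA adapter via the transformers and peft libraries.
    # "debian/assistant-lora" is a placeholder, not a real adapter.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = "mistralai/Mistral-7B-Instruct-v0.2"  # any open base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)  # loads on CPU by default
    model = PeftModel.from_pretrained(model, "debian/assistant-lora")

    prompt = "How do I request sponsorship for a new package?"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Slow on a CPU, certainly, but workable for an asynchronous help channel.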

But then again, saying things like "oh, look, I could easily answer the
NM templates with this" is the context you want to put this work in.

Kind regards
Philipp Kern
