On 2024-02-29 00:43, Jean-Christophe Helary wrote:
LLM output costs "nothing", which means that individual users
already have access to it. In fact, I argued exactly that to the Linux
Foundation JA office yesterday. Providing LLM-based translation is not
doing users a service. It is also dangerous, because LLM output is
strangely false in weird and unexpected places, and short of a human
review service that I doubt the GNU project would be willing to provide,
there is nothing to keep those errors from spreading in the wild,
at a real cost that you can’t imagine.

LLMs do *not* provide “more up to date and varied translations”. They
provide “probable strings that they do not understand, but it looks
human enough that a human can be tricked into thinking that a human who
understands the subject matter actually wrote that”.

Very well written.  While using LLMs may seem like a quick and easy
solution, it can be very dangerous in the long term.
