Huji wrote:
> Even free, lightweight LLMs (like LLaMa) could be helpful

LLaMa itself is not under a free license. Let's call it an "almost-free
license". So, I'm not sure if it would be acceptable to run it, given the
requirement that
> All code in the Tools project must be published under an OSI approved
open source license

I think the debate would be whether the model (in this case LLaMa) counts
as "code" or "data". Arguments could be made both ways.

Separately, regarding the resources point mentioned by Siddhart:
> LLMs might also require significant memory/CPU resources and/or system
software not available to tools

LLMs are very memory-hungry, but what they would benefit from most is GPU
memory, which we probably don't have any of in Cloud (*). The ideal setup
would probably be a dedicated host in Cloud running an LLM service and
offering it to tools and VMs.
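
As a rough sketch of what a tool could do if such a service existed (the
hostname, endpoint, and response format below are entirely made up for
illustration; nothing like this exists in Cloud today):

import requests

# Hypothetical shared LLM host inside Cloud; placeholder URL.
LLM_SERVICE_URL = "http://llm.example.wmcloud.org/v1/generate"

def generate(prompt, max_tokens=256):
    """Send a prompt to the shared LLM service and return its completion."""
    resp = requests.post(
        LLM_SERVICE_URL,
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["text"]

print(generate("Summarize the following talk page discussion: ..."))

That way the model weights and (ideally) the GPU live on one host, and
tools only need a lightweight HTTP client.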

(*) It's still possible to run LLMs without a GPU and get acceptable
results, although the time required and the number of requests that can be
fulfilled (as well as the number of models that can be loaded) would be
much more limited.
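
For instance, with the llama-cpp-python bindings a quantized model can be
run entirely on CPU (the model path below is just a placeholder):

from llama_cpp import Llama

# n_gpu_layers=0 keeps all layers on the CPU; expect each request to
# take seconds to minutes depending on model size and prompt length.
llm = Llama(
    model_path="/data/models/llama-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=0,
)

out = llm("Q: Is this edit summary abusive? A:", max_tokens=64)
print(out["choices"][0]["text"])

Even so, a single quantized 7B model already needs several GB of RAM, so
the number of concurrent requests a shared host could serve would be small.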