Hi,

On 2025-01-31 17:33, M. Zhou wrote:
> @ckk is planning to package llama.cpp within debian deep learning
> team (debian-ai@l.d.o). Maybe you want to discuss with the team
> whether you want to deal with the embedded copy of llama.cpp inside
> ollama source tree?

llama.cpp will land this weekend. I'm almost finished; I still need to
tweak some corner cases and run some final tests.

Funnily enough, this package is trickier than one might initially
believe. The library needs to be private; multiple versions need to be
built (one or more for each backend); the library abstraction isn't
that strict; and it requires non-free data to properly test and
benchmark.

I've solved most of these problems, so perhaps my approach can help once
it's done.
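For readers unfamiliar with the multiple-backend issue: upstream llama.cpp
selects its compute backend at build time via CMake options, so shipping
more than one backend means building the library more than once. A rough
sketch of what that looks like (these are upstream CMake invocations, not
the actual Debian packaging; the choice of CPU and Vulkan variants here is
just an illustrative assumption):

```shell
# Build a portable CPU-only variant into its own build tree.
# GGML_NATIVE=OFF avoids -march=native, which a distro package must not use.
cmake -B build-cpu -DCMAKE_BUILD_TYPE=Release -DGGML_NATIVE=OFF
cmake --build build-cpu

# Build a second variant with the Vulkan backend enabled, in a
# separate build tree, to be installed as a separate private library.
cmake -B build-vulkan -DCMAKE_BUILD_TYPE=Release -DGGML_NATIVE=OFF \
      -DGGML_VULKAN=ON
cmake --build build-vulkan
```

Each build tree then yields its own copy of the library, which the package
installs into a private directory rather than the public library path.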

Best,
Christian
