Hi Simon,

Are you still working on packaging Ollama?

I have been looking at Ollama, Jan.ai, LocalAI (https://localai.io/)
and GPT4All (nomic.ai/gpt4all). It would be nice to have at least one
of these available in Debian so people can run them natively, without
AppImages and the like, and with the assurance of the traceable supply
chain that Debian offers.

Like the others, Ollama uses llama.cpp as its main backend. llama.cpp
is in Debian but never entered testing
(https://tracker.debian.org/pkg/llama.cpp) because its dependency ggml
never entered testing (https://tracker.debian.org/pkg/ggml), due to
hard dependencies on AMD ROCm libraries (even though many users would
run llama.cpp on the CPU, and only a minority own AMD GPUs).

Relatedly, I also noticed these:

#1092958 ITP: llm-ollama -- LLM plugin providing access to models
running on an Ollama server

#1108419 RFP: alpaca -- An Ollama client made with GTK4 and Adwaita
