Hi all,

On 2026-03-25 12:53, Otto Kekäläinen wrote:
> Like others, Ollama has llama.cpp as the main backend, which is in
> Debian but never entered testing
> (https://tracker.debian.org/pkg/llama.cpp) because its dependency ggml
> never entered testing (https://tracker.debian.org/pkg/ggml) due to
> hard dependencies on AMD ROCm libraries (even though many would end up
> running llama.cpp specifically on CPU and only a minority own AMD
> GPUs).

FWIW, the ROCm blocker is actually caused by the documentation build,
which is blocked by a dependency four layers down:

  rocblas -> doxysphinx -> sphinx-needs -> python-memray -> libunwind

The underlying cause in libunwind appears to have been fixed, but I now
see a regression on riscv64...

Alternatively, the rocblas/hipblas maintainers could temporarily disable
the documentation build.

Best,
Christian