Package: libggml0
Version: 0.9.7-1
Severity: normal

Please consider changing the libggml0-backend-* packages from Suggests to Recommends. The original reporter also has other ideas for solving the issue.

https://bugs.launchpad.net/ubuntu/+source/llama.cpp/+bug/2141980
--------------------- Original Bug Report ---------------------------
Right now installing llama.cpp from resolute-proposed (8064+dfsg-1) will not pull in any libggml backends.

This is because all backends are Suggests for libggml0.

# apt depends libggml0
libggml0
  Depends: libc6 (>= 2.38)
  Depends: libgcc-s1 (>= 3.3.1)
  Depends: libgomp1 (>= 4.9)
  Depends: libstdc++6 (>= 11)
  Breaks: libggml
  Breaks: <libggml0-backend-cpu>
  Suggests: libggml0-backend-blas
  Suggests: libggml0-backend-cuda
  Suggests: libggml0-backend-hip
  Suggests: libggml0-backend-vulkan
  Replaces: libggml
  Replaces: <libggml0-backend-cpu>

This means that someone who runs `apt install llama.cpp` will get only the CPU backend by default and might not even know that other libggml0 backends exist which support their hardware.

There are a few ways to solve this that I can think of:
1) llama.cpp metapackages (e.g. llama.cpp-rocm, llama.cpp-cuda, etc.)
2) Have the vendor metapackages do it (e.g. `apt install rocm` has a Recommends on libggml0-backend-hip)
3) Add Recommends for all backends to llama.cpp.

I personally like #3 the most: when a user runs `apt install llama.cpp`, they get maximum compatibility across their hardware. If a user doesn't want all the backends, they can remove them, since they are only Recommends.
---------------------------------------------------------------------
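A minimal sketch of what the requested change could look like in the libggml0 stanza of debian/control, assuming the backend package names from the `apt depends` output above (the exact stanza layout is an assumption, not taken from the actual packaging):

```
Package: libggml0
Recommends: libggml0-backend-blas,
            libggml0-backend-cuda,
            libggml0-backend-hip,
            libggml0-backend-vulkan
```

Since these would be Recommends rather than Depends, users who want a slim install could still opt out at install time with `apt install --no-install-recommends llama.cpp`, or remove individual backend packages afterwards.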

Thanks
