On 2019-08-31 06:02, Konstantin Shalygin wrote:
On 8/31/19 3:42 AM, Zoltan Arnold Nagy wrote:
Originally our osd tree looked like this:
ID   CLASS  WEIGHT      TYPE NAME       STATUS  REWEIGHT  PRI-AFF
 -1         2073.15186  root default
-14          176.63100      rack s01-rack
-19
Hi all,
Am I missing the Ceph buster build from ceph.com?
http://download.ceph.com/debian-nautilus/dists/
Should I be using the Croit supplied builds?
Thanks!
Chad.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email
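For context on what the question above is asking: the usual way to consume these packages is an apt sources entry pointing at download.ceph.com for a given Debian codename. A minimal sketch of what that entry would look like, assuming a buster build existed (the codename and release names here are from the thread, not a confirmation that the build is published):

```shell
# Hypothetical sketch: compose the apt sources.list entry for a
# ceph.com release repo and a Debian codename. Whether that codename
# actually exists under dists/ is exactly what the poster is asking.
codename="buster"
release="nautilus"

# The sources.list line that would be needed if a buster build existed:
entry="deb https://download.ceph.com/debian-${release}/ ${codename} main"
echo "${entry}"
```

One can check which codenames are actually built by browsing the dists/ index linked above before adding any such entry.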
Hi all,
I am also looking for buster packages at https://eu.ceph.com/debian-luminous.
The current stretch version is incompatible with buster and breaks curl
and libcurl4 (it is probably built against libcurl3).
Can we get a luminous build for buster, please?
The following additional packages will be inst
Hi,
I recently upgraded my cluster from 12.2 to 14.2 and I'm having some
trouble getting the mgr dashboards for Grafana working.
I set up Prometheus and Grafana per
https://docs.ceph.com/docs/nautilus/mgr/prometheus/#mgr-prometheus
However, for the osd disk performance statistics graphs on the ho
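For anyone following along, the basic wiring described in those docs is: enable the prometheus mgr module, then point Prometheus at the mgr's exporter endpoint. A minimal sketch, where the hostname is a placeholder and 9283 is the module's default port:

```shell
# Sketch of the mgr/prometheus wiring. "mgr-host.example.com" is a
# placeholder; 9283 is the prometheus module's default listen port.

# 1) Enable the prometheus module on the mgr (needs a live cluster,
#    so shown here as a comment):
#    ceph mgr module enable prometheus

# 2) Add a scrape job for the mgr exporter to the Prometheus config:
cat > prometheus-ceph.yml <<'EOF'
scrape_configs:
  - job_name: 'ceph'
    static_configs:
      - targets: ['mgr-host.example.com:9283']
EOF
grep -q "job_name: 'ceph'" prometheus-ceph.yml && echo "scrape config written"
```

With that in place, Grafana dashboards are pointed at the Prometheus data source; if the OSD disk-performance panels stay empty, the scrape target and port are the first things worth checking.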
On 8/31/19 4:14 PM, Zoltan Arnold Nagy wrote:
Could you elaborate a bit more? Upmap is used to map specific PGs to
specific OSDs
in order to deal with CRUSH inefficiencies.
Why would I want to add a layer of indirection when the goal is to
remove the bucket
entirely?
As I understood you wan