[ceph-users] Re: removing/flattening a bucket without data movement?

2019-08-31 Thread Zoltan Arnold Nagy

On 2019-08-31 06:02, Konstantin Shalygin wrote:

On 8/31/19 3:42 AM, Zoltan Arnold Nagy wrote:


Originally our osd tree looked like this:

ID   CLASS WEIGHT     TYPE NAME          STATUS REWEIGHT PRI-AFF
 -1        2073.15186 root default
-14         176.63100     rack s01-rack
-19         176.63100         host s01
-15         171.29900     rack s02-rack
-20         171.29900         host s02


etc. You get the idea. It's a legacy layout, as we've been upgrading this
cluster since probably Firefly and started with far less hardware.

The crush rule was set up like this originally:

    step take default
    step chooseleaf firstn 0 type rack

which we have modified to

    step take default
    step chooseleaf firstn 0 type host

taking advantage of chooseleaf's behavior (i.e. it searches in depth rather
than at just a single level).

Now we thought we could get rid of the rack buckets simply by moving the
host buckets to the root using "ceph osd crush move s01 root=default";
however, this resulted in a bunch of data movement.

Swapping the IDs manually in the crushmap seems to work (verified via
crushtool's --compare), e.g. changing the ID of s01 to s01-rack's and vice
versa, including all shadow trees.
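
Roughly, the workflow we are using looks like this (a sketch only; the edit
step is the manual ID swap described above):

    # dump and decompile the current crushmap
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # manually swap the IDs of s01 and s01-rack (plus their shadow trees)
    # in crushmap.txt, then recompile and check what would be remapped
    crushtool -c crushmap.txt -o crushmap.new
    crushtool -i crushmap.new --compare crushmap.bin

    # only if the comparison looks clean, inject the edited map
    ceph osd setcrushmap -i crushmap.new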

Looking around I saw that there is a swap-bucket command, but it swaps only
the bucket contents, not the IDs, so it would result in data movement.

Other than manually editing the crushmap, is there a better way to achieve
this? Is this approach optimal?



If you are on a Luminous+ version: upmap.


Could you elaborate a bit more? upmap is used to map specific PGs to
specific OSDs in order to deal with CRUSH inefficiencies.

Why would I want to add a layer of indirection when the goal is to remove
the bucket entirely?


[ceph-users] official ceph.com buster builds?

2019-08-31 Thread Chad W Seys
Hi all,
   Am I missing the Ceph buster build from ceph.com?
http://download.ceph.com/debian-nautilus/dists/
   Should I be using the Croit-supplied builds?

Thanks!
Chad.


[ceph-users] Re: official ceph.com buster builds? [https://eu.ceph.com/debian-luminous buster]

2019-08-31 Thread Jelle de Jong

Hi all,

I am also looking for a buster build of https://eu.ceph.com/debian-luminous

The current stretch packages are incompatible with buster and break curl
and libcurl4 (they are probably built against libcurl3).

Can we get a luminous build for buster, please? This is what apt wants to
do on buster with the stretch packages installed:

The following additional packages will be installed:
  curl
The following packages will be REMOVED:
  ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd
  libcurl3 librgw2 python-rgw radosgw
The following NEW packages will be installed:
  libcurl4
The following packages will be upgraded:
  curl
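
For reference, the apt source line we would like to be able to use, assuming
the same layout as the other ceph.com repositories (it does not exist yet,
hence this request):

    deb https://eu.ceph.com/debian-luminous buster main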

Kind regards,

Jelle de Jong

On 8/31/19 6:19 PM, Chad W Seys wrote:

Hi all,
Am I missing the Ceph buster build from ceph.com?
http://download.ceph.com/debian-nautilus/dists/
Should I be using the Croit-supplied builds?

Thanks!
Chad.



[ceph-users] trouble with grafana dashboards in nautilus

2019-08-31 Thread Rory Schramm
Hi,

I recently upgraded my cluster from 12.2 to 14.2 and I'm having some
trouble getting the mgr dashboards for grafana working.

I set up Prometheus and Grafana per
https://docs.ceph.com/docs/nautilus/mgr/prometheus/#mgr-prometheus

However, for the OSD disk performance statistics graphs on the Host Details
dashboard I'm getting the following error:

"found duplicate series for the match group {device="dm-5",
instance=":9100"} on the right hand-side of the operation:
[{name="ceph_disk_occupation", ceph_daemon="osd.13", db_device="/dev/dm-8",
device="dm-5", instance=":9100", job="ceph"}, {name="ceph_disk_occupation",
ceph_daemon="osd.15", db_device="/dev/dm-10", device="dm-5",
instance=":9100", job="ceph"}];many-to-many matching not allowed: matching
labels must be unique on one side"

This also happens on the following graphs:

Host Overview/AVG Disk Utilization
Host Details/OSD Disk Performance Statistics/*

Also the following graphs show no data points:
OSD Details/Physical Device Performance/*

prometheus version: 2.12.0
node exporter: 0.15.2
grafana version: 6.3.3


Note that my OSDs all have separate data and RocksDB devices. I have also
run ceph-bluestore-tool repair on all the OSDs as part of the Nautilus upgrade.
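
For what it's worth, the duplication can be seen straight from the mgr
exporter output (rough sketch; host and port are taken from my scrape
targets below):

    # list the disk-occupation series; devices that appear under more than
    # one OSD on the same host are what break the dashboard's join
    curl -s http://nas-osd-01:9283/metrics | grep '^ceph_disk_occupation' | sort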

Any idea what's needed to fix this?

Thanks

Below are the Prometheus config files.

prometheus.yml

global:
  scrape_interval: 5s
  evaluation_interval: 5s

scrape_configs:
  - job_name: 'node'
    file_sd_configs:
      - files:
          - node_targets.yml
  - job_name: 'ceph'
    honor_labels: true
    file_sd_configs:
      - files:
          - ceph_targets.yml
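
(For completeness, a quick way to sanity-check this file, assuming promtool
from the matching Prometheus release is installed:)

    promtool check config prometheus.yml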



node_targets.yml:
[
  {
    "targets": [ "nas-osd-01:9100" ],
    "labels": {
      "instance": "nas-osd-01"
    }
  },
  {
    "targets": [ "nas-osd-02:9100" ],
    "labels": {
      "instance": "nas-osd-02"
    }
  },
  {
    "targets": [ "nas-osd-02:9100" ],
    "labels": {
      "instance": "nas-osd-03"
    }
  }
]

---

ceph_targets.yml:

[
  {
    "targets": [ "nas-osd-01:9283" ],
    "labels": {}
  },
  {
    "targets": [ "nas-osd-02:9283" ],
    "labels": {}
  },
  {
    "targets": [ "nas-osd-03:9283" ],
    "labels": {}
  }
]


[ceph-users] Re: removing/flattening a bucket without data movement?

2019-08-31 Thread Konstantin Shalygin

On 8/31/19 4:14 PM, Zoltan Arnold Nagy wrote:
Could you elaborate a bit more? upmap is used to map specific PGs to
specific OSDs in order to deal with CRUSH inefficiencies.

Why would I want to add a layer of indirection when the goal is to remove
the bucket entirely?


As I understood it, you want to make huge CRUSH map changes without huge
data movement.

Upmap can help with this: you map your current PGs to the OSDs that already
hold those PGs.
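
A rough sketch of the idea (the PG id and OSD numbers below are made-up
placeholders; all clients must be Luminous or newer for upmap):

    # upmap requires Luminous-or-newer clients
    ceph osd set-require-min-compat-client luminous

    # after the CRUSH change, list the PGs that CRUSH now wants to move
    ceph pg ls remapped

    # for each remapped PG, map the newly chosen OSD back to the OSD that
    # already holds the data (pairs of <from-osd> <to-osd>)
    ceph osd pg-upmap-items 2.7 231 34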





k