Re: [ceph-users] How to reset compat weight-set changes caused by PG balancer module?

2019-10-22 Thread Konstantin Shalygin
Apparently the PG balancer crush-compat mode adds some crush bucket weights. Those cause major havoc in our cluster; our PG distribution is all over the place. Seeing things like this:... 97 hdd 9.09470 1.0 9.1 TiB 6.3 TiB 6.3 TiB 32 KiB 17 GiB 2.8 TiB 69.03 1.08 28 up 98 hd
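The crush-compat balancer stores its adjusted weights in a separate "compat" weight-set; removing it makes CRUSH fall back to the plain bucket weights. A minimal sketch, assuming you want to discard the compat weights entirely (e.g. before switching to upmap mode):

    ceph osd crush weight-set dump        # inspect the compat weight-set the balancer created
    ceph osd crush weight-set rm-compat   # remove it; CRUSH reverts to the real bucket weights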

Re: [ceph-users] ceph balancer do not start

2019-10-23 Thread Konstantin Shalygin
root@ceph-mgr:~# ceph balancer mode upmap root@ceph-mgr:~# ceph balancer optimize myplan root@ceph-mgr:~# ceph b
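The preview is cut off; for context, a typical upmap session with the mgr balancer module looks roughly like this (plan name "myplan" taken from the thread):

    ceph balancer status            # current mode and plans
    ceph balancer mode upmap
    ceph balancer optimize myplan   # compute a plan
    ceph balancer show myplan       # review the proposed pg-upmap-items
    ceph balancer execute myplan    # apply once, or run continuously with:
    ceph balancer on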

Re: [ceph-users] ceph balancer do not start

2019-10-24 Thread Konstantin Shalygin
Hi, ceph features { "mon": { "group": { "features": "0x3ffddff8eeacfffb", "release": "luminous", "num": 3 } }, "osd": { "group": { "features": "0x3ffddff8eeacfffb", "release": "luminous",
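The preview truncates before the "client" section, which is the part that matters for upmap; a full `ceph features` output ends with a client group of the same shape (values here illustrative, not from the thread):

    "client": {
        "group": {
            "features": "0x3ffddff8eeacfffb",
            "release": "luminous",
            "num": 42
        }
    }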

Re: [ceph-users] ceph balancer do not start

2019-10-24 Thread Konstantin Shalygin
connections coming from qemu VM clients. It's generally easy to upgrade. Just switch your Ceph yum repo from jewel to luminous. Then update `librbd` on your hypervisors and migrate your VMs. It's fast and involves no downtime for your VMs. k
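A hedged sketch of that procedure, assuming yum-based hypervisors using the upstream download.ceph.com repo file (paths and repo names are assumptions):

    sed -i 's/rpm-jewel/rpm-luminous/' /etc/yum.repos.d/ceph.repo
    yum clean metadata && yum update librbd1
    # live-migrate each VM so QEMU reopens its images with the new librbd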

Re: [ceph-users] changing set-require-min-compat-client will cause hiccup?

2019-10-30 Thread Konstantin Shalygin
Hi, I need to change set-require-min-compat-client to use upmap mode for the PG balancer. Will this cause a disconnect of all clients? We're talking CephFS and RBD images for VMs. Or is it safe to switch that live? It is safe. k
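A minimal sketch of the switch itself; the mon checks the features of the currently connected clients and refuses the change if any are older than the requested release, so existing sessions are not cut:

    ceph features                                     # verify connected clients first
    ceph osd set-require-min-compat-client luminous   # required before enabling upmap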

Re: [ceph-users] changing set-require-min-compat-client will cause hiccup?

2019-10-31 Thread Konstantin Shalygin
On 10/31/19 2:12 PM, Philippe D'Anjou wrote: Hi, it is NOT safe. All clients fail to mount RBDs now :( Are your clients upmap compatible? k
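A hedged way to see what the connected clients actually advertise (run the daemon form on a monitor node; deriving the mon id from the hostname is an assumption):

    ceph features                             # per-group release and client count
    ceph daemon mon.$(hostname -s) sessions   # per-session feature bits of each client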

Re: [ceph-users] Strange CEPH_ARGS problems

2019-11-15 Thread Konstantin Shalygin
I found a typo in my post: of course I tried export CEPH_ARGS="-n client.rz --keyring=" and not export CEPH_ARGS=="-n client.rz --keyring=". Try `export CEPH_ARGS="--id rz --keyring=..."` k
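Sketch of the suggested fix, assuming the keyring sits at the conventional path for client.rz (the path is an assumption); note that `--id` takes the name without the "client." prefix, while `-n`/`--name` takes the full name:

    export CEPH_ARGS="--id rz --keyring=/etc/ceph/ceph.client.rz.keyring"
    ceph -s    # now authenticates as client.rz without extra flags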

Re: [ceph-users] Impact of a small DB size with Bluestore

2019-11-25 Thread Konstantin Shalygin
I have a Ceph cluster which was designed for FileStore. Each host has 5 write-intensive SSDs of 400 GB and 20 HDDs of 6 TB, so each HDD has a 5 GB WAL on SSD. If I want to put BlueStore on this cluster, I can only allocate ~75 GB of WAL and DB on SSD for each HDD, which is far below the 4% limit
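The arithmetic behind those numbers, for reference (the ~75 GB usable figure is the poster's, after partitioning overhead):

    5 SSDs x 400 GB = 2000 GB of flash shared by 20 HDDs -> 100 GB raw per OSD
    4% guideline: 0.04 x 6 TB = 240 GB of DB per OSD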

Re: [ceph-users] rbd image size

2019-11-25 Thread Konstantin Shalygin
Hello, I use Ceph as block storage in Kubernetes. I want to get the RBD usage with the command "rbd diff image_id | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'", but I found it is a lot bigger than the value which I got by the command "df -h" in the pod. I do not know the reason and need you
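`rbd diff` sums every allocated extent, including blocks the guest filesystem has deleted but never discarded, so it will normally exceed `df` inside the pod. A sketch of ways to compare (the pool/image spec and mount point are placeholders):

    rbd du pool/image_id    # provisioned vs actually allocated size per image
    fstrim /mount/point     # in the guest; needs the volume mapped with discard enabled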

Re: [ceph-users] Is a scrub error (read_error) on a primary osd safe to repair?

2019-12-04 Thread Konstantin Shalygin
I tried to dig in the mailing list archives but couldn't find a clear answer to the following situation: Ceph encountered a scrub error resulting in HEALTH_ERR. Two PGs are active+clean+inconsistent. When investigating the PGs I see a "read_error" on the primary OSD. Both PGs are replicated PGs
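For a read_error on a replicated pool the usual sequence is the following (the pgid is a placeholder; on replicated PGs, repair rewrites the bad copy from a healthy replica):

    rados list-inconsistent-obj <pgid> --format=json-pretty   # identify the bad shard
    ceph pg repair <pgid>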

Re: [ceph-users] Use telegraf/influx to detect problems is very difficult

2019-12-10 Thread Konstantin Shalygin
But it is very difficult/complicated to make simple queries because, for example, I have osd up and osd total but no osd down metric. To determine how many OSDs are down you don't need a special metric, because you already have the osd_up and osd_in metrics. Just use math. k
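A sketch of that math in InfluxQL, wrapped in the influx CLI; the measurement and field names (ceph_osdmap, num_osds, num_up_osds) are assumptions based on telegraf's ceph input with cluster stats enabled, so adjust to whatever your setup actually emits:

    influx -database telegraf -execute '
      SELECT last("num_osds") - last("num_up_osds") AS osds_down
      FROM "ceph_osdmap" WHERE time > now() - 5m'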

Re: [ceph-users] Install specific version using ansible

2020-01-09 Thread Konstantin Shalygin
Hello all! I'm trying to install a specific version of Luminous (12.2.4). In group_vars/all.yml I can specify the Luminous release, but I didn't find a place where I can be more specific about the version. Ansible installs the latest version (12.2.12 at this time). I'm using ce
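ceph-ansible itself selects only the release (ceph_stable_release: luminous in group_vars/all.yml), not a point release; a common workaround, sketched here under the assumption of yum-based hosts, is to lock the package version on each node before running the playbook:

    yum install -y yum-plugin-versionlock
    yum versionlock add 'ceph-*-12.2.4-*'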

Re: [ceph-users] block db sizing and calculation

2020-01-14 Thread Konstantin Shalygin
I'm planning to split the block DB onto a separate flash device, which I also would like to use as an OSD for erasure-coding metadata for RBD devices. If I want to use 14x 14TB HDDs per node, https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing recommends a minimum size
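The linked guideline's arithmetic for that layout:

    0.04 x 14 TB = 560 GB of DB per OSD
    14 OSDs x 560 GB = ~7.8 TB of flash per node just for DB/WAL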
