[ceph-users] Re: Nautilus upgrade HEALTH_WARN legacy tunables

2020-07-05 Thread Jim Forde
Thanks for the help. That page sort of had the answer. I had tried "ceph osd crush tunables optimal" earlier, but I got an error: "Error EINVAL: new crush map requires client version jewel but require_min_compat_client is firefly". The links for help on that page are dead, but I did end up findin
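For reference, a likely sequence here (just a sketch, not taken verbatim from the thread; check connected clients first, since Ceph will refuse to raise the minimum compat level while older clients are still connected) would be:

    # See which client releases/features are currently connected
    ceph features

    # Raise the minimum required client release, then apply the newer tunables
    ceph osd set-require-min-compat-client jewel
    ceph osd crush tunables optimal

Note that changing the tunables profile will trigger data movement, so it is best done during a quiet window.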

[ceph-users] Nautilus slow using "ceph tell osd.* bench"

2020-08-05 Thread Jim Forde
I have 2 clusters. Cluster 1 started on Hammer and has been upgraded through the versions all the way to Nautilus 14.2.10 (Luminous to Nautilus in July 2020). Cluster 2 started on Luminous and is now Nautilus 14.2.2 (upgraded in September 2019). The clusters are basically identical: 5 OSD nodes with 6

[ceph-users] Re: Nautilus slow using "ceph tell osd.* bench"

2020-08-06 Thread Jim Forde
SOLUTION FOUND! Reweight the osd to 0, then set it back to where it belongs.

    ceph osd crush reweight osd.0 0.0

Original:

    ceph tell osd.0 bench -f plain
    bench: wrote 1 GiB in blocks of 4 MiB in 4.03434 sec at 254 MiB/sec 63 IOPS

After reweight of osd.0:

    ceph tell osd.0 bench -f plain
    bench: wrote 1
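Spelled out end to end for one OSD, the workaround is roughly the following (a sketch; ORIGINAL_WEIGHT is a placeholder for the weight shown in "ceph osd tree" before zeroing, and each reweight should be allowed to finish backfilling before the next step):

    # Note the OSD's current CRUSH weight before changing anything
    ceph osd tree

    # Drop the OSD out of CRUSH placement and wait for the rebalance to finish
    ceph osd crush reweight osd.0 0.0

    # Restore the original weight and wait for the rebalance again
    ceph osd crush reweight osd.0 ORIGINAL_WEIGHT

    # Re-run the benchmark
    ceph tell osd.0 bench -f plain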

[ceph-users] Re: Nautilus slow using "ceph tell osd.* bench"

2020-08-07 Thread Jim Forde
I have set it to 0.0 and let it re-balance, then set it back and let it re-balance again. I have a fairly small cluster, and while it is in production, it is not getting much use because of the pandemic, so it is a good time to do some of these things. Because of that I have been re-balancing the OSDs in grou
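To avoid babysitting each step, the wait in between can be scripted with something like this (just a sketch; it assumes the cluster returns to HEALTH_OK once backfill finishes and nothing else is holding it in HEALTH_WARN, and the 60-second poll interval is arbitrary):

    # Poll until recovery/backfill has settled before touching the next OSD
    until ceph health | grep -q HEALTH_OK; do
        sleep 60
    done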

[ceph-users] Re: Nautilus slow using "ceph tell osd.* bench"

2020-08-14 Thread Jim Forde
Solution Failed! I rebalanced all of the OSDs to 0.0 and then back to their original weights, and got back to roughly my original ~269 IOPS. It has been about 5 days since I completed the re-balance and performance is degrading again! There is still a bit of improvement, but not back to where it was in Mimic.