[ceph-users] Re: Octopus 15.2.1

2020-04-11 Thread Gert Wieberdink
Hi Jeff, Thank you for your quick and clear answer. I was not aware of the ceph-el8 repo. This is great! Installing Ceph on CentOS 8 now succeeds without any missing dependencies. rgds, -gw On Fri, 2020-04-10 at 13:45 -0400, Jeff Bailey wrote: > Leveldb is currently in epel-testing and should be move
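For anyone else hitting the missing leveldb dependency on CentOS 8, a rough sketch of pulling it from epel-testing until the package moves to stable EPEL (repo and package names assumed from the thread, not verified here):

    dnf install -y epel-release
    # leveldb is still staged in epel-testing, so enable that repo just for this install
    dnf --enablerepo=epel-testing install -y leveldb
    # with the dependency in place, the Octopus packages from the el8 repo should install cleanly
    dnf install -y ceph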

[ceph-users] Re: How to fix 1 pg stale+active+clean

2020-04-11 Thread Marc Roos
I had just one OSD go down (31); why is Ceph not auto-healing in this 'simple' case? -Original Message- To: ceph-users Subject: [ceph-users] How to fix 1 pg stale+active+clean How to fix 1 pg marked as stale+active+clean: pg 30.4 is stuck stale for 175342.419261, current state stal
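A stale PG means no OSD in its acting set is currently reporting to the monitors; with a single failed OSD and a stale+active+clean PG, one common suspect is a pool with size 1, which leaves no surviving replica to recover from. A minimal set of checks, using pg 30.4 and osd.31 from the post (the pool name is a placeholder):

    ceph health detail              # shows the stuck PG and its last acting set
    ceph pg map 30.4                # which OSDs the PG maps to (up and acting)
    ceph osd tree | grep -w 31      # state of the downed OSD
    ceph osd pool get <pool> size   # size 1 would explain the lack of self-healing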

[ceph-users] radosgw garbage collection seems stuck and mannual gc process didn't work

2020-04-11 Thread 346415320
Ceph version: Mimic 13.2.4. The cluster has been running steadily for more than a year; recently I found that cluster usage is growing faster than usual, and we figured out the problem is garbage collection. 'radosgw-admin gc list' shows millions of objects to gc. The earliest tag time is 2019-09, but
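To get a sense of the backlog and try draining it by hand, something along these lines with the standard radosgw-admin subcommands (the tag count is only a rough proxy for the number of pending entries):

    radosgw-admin gc list --include-all | grep -c '"tag"'   # rough count of pending gc entries, expired or not
    radosgw-admin gc process                                 # run a collection pass manually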

[ceph-users] Possible to "move" an OSD?

2020-04-11 Thread Jarett DeAngelis
This is an edge case and probably not something that would be done in production, so I suspect the answer is “lol, no,” but here goes: I have three nodes running Nautilus courtesy of Proxmox. One of them is a self-built Ryzen 5 3600 system, and the other two are salvaged i5 Skylake desktops tha

[ceph-users] Re: Possible to "move" an OSD?

2020-04-11 Thread Adam Tygart
As far as Ceph is concerned, as long as there are no separate journal/blockdb/wal devices, you absolutely can transfer OSDs between hosts. If there are separate journal/blockdb/wal devices, you can still do it, provided they move with the OSDs. With Nautilus and up, make sure the osd bootstrap key
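A minimal sketch of the move for collocated BlueStore OSDs managed by ceph-volume; osd.31 is only an example ID and this is not a procedure confirmed in the thread:

    # on the old host: stop the daemon cleanly before pulling the drive
    systemctl stop ceph-osd@31
    # physically move the drive, then on the new host:
    ceph-volume lvm activate --all   # scans LVM tags and brings up any OSDs it finds
    ceph osd tree                    # with osd_crush_update_on_start (default true), the OSD re-homes under the new host bucket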

[ceph-users] Re: radosgw garbage collection seems stuck and mannual gc process didn't work

2020-04-11 Thread Matt Benjamin
An issue presenting exactly like this was fixed in spring of last year, for certain on nautilus and higher. Matt On Sat, Apr 11, 2020, 12:04 PM <346415...@qq.com> wrote: > Ceph Version : Mimic 13.2.4 > > The cluster has been running steadily for more than a year, recently I > found cluster usage

[ceph-users] Re: Understanding monitor requirements

2020-04-11 Thread Brian Topping
Hi again. After all, this appears to be an MTU issue. Baseline: 1) Two of the nodes have straight Ethernet with a 1500 MTU; the third (problem) node is on a WAN tunnel with a restricted MTU. It appears that the MTUs were not set up correctly, so no surprise some software has problems. 2) I deci
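For anyone chasing a similar problem, a do-not-fragment ping sized just under the expected MTU is a quick way to confirm whether the path between monitors really passes full-size frames (interface name and host below are placeholders):

    ip link show dev eth0 | grep -o 'mtu [0-9]*'   # configured MTU on the interface carrying monitor traffic
    ping -M do -s 1472 mon-host-over-tunnel        # 1472 B payload + 28 B IP/ICMP headers = 1500 B on the wire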

[ceph-users] Re: Understanding monitor requirements

2020-04-11 Thread Brian Topping
> On Apr 11, 2020, at 5:54 PM, Anthony D'Atri wrote: > > Dumb question, can’t you raise the MTU of the tunnel? I’m good with any question; that got it, thank you! I’m not exactly sure what happened; I believe an MTU setting I tried didn’t actually take, or the CNI software was somehow not rel

[ceph-users] Re: radosgw garbage collection seems stuck and mannual gc process didn't work

2020-04-11 Thread Peter Parker
Thanks a lot. I'm not sure if the PR is https://github.com/ceph/ceph/pull/26601? And it has been backported to Mimic in https://github.com/ceph/ceph/pull/27796. It seems the cluster needs to be upgraded to 13.2.6 or higher. After the upgrade, what else should I do? Like manually execute the gc process
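After the upgrade the backlog should start draining on its own, but it can also be nudged by hand; a hedged sketch, assuming the release in use carries the --include-all option for gc process:

    radosgw-admin gc process --include-all                  # force a pass over entries that have not yet expired
    radosgw-admin gc list --include-all | grep -c '"tag"'   # watch the pending count shrink over time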