Re: [ceph-users] Fwd: down+peering PGs, can I move PGs from one OSD to another

2018-08-04 Thread Bryan Henderson
> You can export and import PGs using ceph_objectstore_tool, but if the osd
> won't start you may have trouble exporting a PG.

I believe the very purpose of ceph-objectstore-tool is to manipulate OSDs while they aren't running. If the crush map says these PGs that are on the broken OSD belong on a
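
A minimal sketch of that workflow, assuming the failed OSD is osd.21 (per the log path elsewhere in this thread) and treating osd.12 and pgid 2.1f as placeholders. ceph-objectstore-tool operates directly on the OSD's store, so both the source and destination daemons must be stopped:

    # On the broken host: export the PG from the stopped OSD's filestore
    ceph-objectstore-tool --op export \
        --data-path /var/lib/ceph/osd/ceph-21 \
        --journal-path /var/lib/ceph/osd/ceph-21/journal \
        --pgid 2.1f \
        --file /tmp/pg.2.1f.export

    # On a healthy host: stop that OSD, import the PG, then restart it
    ceph-objectstore-tool --op import \
        --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --file /tmp/pg.2.1f.export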

Re: [ceph-users] Fwd: down+peering PGs, can I move PGs from one OSD to another

2018-08-03 Thread Sean Patronis
Forgive the wall of text; I shortened it a little. Here is the osd log when I attempt to start the osd:

2018-08-04 03:53:28.917418 7f3102aa87c0  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-21) detect_feature: extsize is disabled by conf
2018-08-04 03:53:28.977564 7f3102aa87c0  0 filestore(
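
If the log cuts off before the actual failure, one way to capture the full error (a sketch, assuming the daemon is osd.21 as in the log path) is to run the OSD in the foreground with verbose logging:

    # Stop any init-managed instance first, then run in the foreground
    ceph-osd -i 21 -f --debug-osd 20 --debug-filestore 20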

Re: [ceph-users] Fwd: down+peering PGs, can I move PGs from one OSD to another

2018-08-03 Thread Sean Redmond
Hi,

You can export and import PGs using ceph_objectstore_tool, but if the osd won't start you may have trouble exporting a PG. It may be useful to share the errors you get when trying to start the osd.

Thanks

On Fri, Aug 3, 2018 at 10:13 PM, Sean Patronis wrote:
> Hi all.
>
> We have an i
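
As a sketch of the first step before any export, you can confirm which PGs are actually present on the down OSD (paths assume the /var/lib/ceph/osd/ceph-21 filestore layout shown in the log elsewhere in this thread; the OSD must be stopped):

    # List the PGs stored on the stopped OSD
    ceph-objectstore-tool --op list-pgs \
        --data-path /var/lib/ceph/osd/ceph-21 \
        --journal-path /var/lib/ceph/osd/ceph-21/journal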

[ceph-users] Fwd: down+peering PGs, can I move PGs from one OSD to another

2018-08-03 Thread Sean Patronis
Hi all.

We have an issue with some down+peering PGs (I think); when I try to mount or access data, the requests are blocked:

114891/7509353 objects degraded (1.530%)
887 stale+active+clean
1 peering
54 active+recovery_wait
19609
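
One way to pin down which PGs are stuck and which OSDs they are waiting on (a sketch; the pgid 2.1f is a placeholder) is:

    ceph health detail          # lists the PGs that are down/peering/stale
    ceph pg dump_stuck stale    # stuck PGs and the OSDs they map to
    ceph pg 2.1f query          # peering state and blocking OSDs for one PG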