Re: [ceph-users] OSD crashed while repairing inconsistent PG luminous

2017-10-19 Thread Ana Aviles
So it points to: rbd_data.30732d3238f3535.(0x12)f7e3 > This is weird, as we cannot find any reference to "rbd_data.30732d3238f3535.f7e3"; this rbd prefix does not exist on our system (when running rbd ls and requesting info
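The thread is trying to map an rbd_data object prefix back to the image that owns it. A minimal sketch (not from the thread) of one way to do that with standard rbd commands; the pool name "rbd" is an assumption:

    # Assumption: pool named "rbd"; substitute your pool name.
    # "rbd info" prints a "block_name_prefix: rbd_data.<id>" line for each
    # image, so scanning all images maps a data-object prefix to its image.
    for img in $(rbd ls rbd); do
        prefix=$(rbd info rbd/"$img" | awk '/block_name_prefix/ {print $2}')
        [ "$prefix" = "rbd_data.30732d3238f3535" ] && echo "owned by: $img"
    done
    # No match (as reported in this thread) suggests the image was deleted
    # while some of its data objects were left behind.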

Re: [ceph-users] OSD crashed while repairing inconsistent PG luminous

2017-10-18 Thread Ana Aviles
On 10/17/2017 11:57 PM, Gregory Farnum wrote: > On Tue, Oct 17, 2017 at 9:51 AM Ana Aviles <mailto:a...@greenhost.nl> wrote: > Hello all, We had an inconsistent PG on our cluster. While performing PG repair

[ceph-users] OSD crashed while repairing inconsistent PG luminous

2017-10-17 Thread Ana Aviles
Hello all, We had an inconsistent PG on our cluster. While performing the PG repair operation, the OSD crashed. The OSD was not able to start again, and there was no hardware failure on the disk itself. This is the log output: 2017-10-17 17:48:55.771384 7f234930d700 -1 log_channel(cluster) log
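Before re-running a repair after a crash like this, the scrub-error detail can be inspected first. A minimal sketch with placeholder ids, since the preview does not name the PG:

    # <pool> and <pgid> are placeholders for your values.
    rados list-inconsistent-pg <pool>                        # which PGs report inconsistencies
    rados list-inconsistent-obj <pgid> --format=json-pretty  # per-object shard errors
    ceph pg repair <pgid>                                    # only after reviewing the detail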

[ceph-users] Inconsistent PG, is safe pg repair? or manual fix?

2016-11-24 Thread Ana Aviles
Hello, We have had a cluster in HEALTH_ERR state for a while now. We are trying to figure out how to solve it without needing to remove the affected rbd image.
ceph -s
    cluster e94277ae-3d38-4547-8add-2cf3306f3efd
     health HEALTH_ERR
            1 pgs inconsistent
            5 scrub errors
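For the "pg repair or manual fix" question, one common pre-Luminous approach was to locate the object replicas on the acting OSDs and compare them by hand before repairing. A sketch under the assumption of FileStore OSDs with the default data path; <pgid>, <id>, and <object> are placeholders:

    ceph health detail    # names the inconsistent PG and the scrub errors
    ceph pg map <pgid>    # shows the acting OSDs, e.g. [11,4]
    # On each acting OSD host (FileStore on-disk layout assumed):
    find /var/lib/ceph/osd/ceph-<id>/current/<pgid>_head/ -name '*<object>*'
    md5sum <path-to-object-file>   # compare checksums across the replicas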

[ceph-users] Removing OSD after fixing PG-inconsistent brings back PG-inconsistent state

2016-07-29 Thread Ana Aviles
Hello, We have a cluster with HEALTH_ERR due to an inconsistent PG.
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 2.ae is active+clean+inconsistent, acting [11,4]
1 scrub errors
We have run ceph pg repair on the problematic pg and health went back to OK. I checked the two osds acting on that pg (
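Since the inconsistency reappeared after removing one of the acting OSDs, a fresh deep scrub of the backfilled copies shows whether the earlier repair actually converged. A minimal sketch (not from the thread) using the PG id from the message:

    ceph pg deep-scrub 2.ae                                 # re-check the PG's replicas
    ceph health detail | grep 2.ae                          # see the scrub result
    rados list-inconsistent-obj 2.ae --format=json-pretty   # which shard differs, if the error returns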