[ceph-users] Failed to repair pg

2019-03-07 Thread Herbert Alexander Faleiros
Hi,

# ceph health detail
HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 3 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.2bb is active+clean+inconsistent, acting [36,12,80]

# ceph pg repair 2.2bb
instructing pg 2.2bb on osd.36 to repair [...]
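[Reader's note, not part of the original message: a minimal way to see what the scrub actually flagged, before or after the repair attempt, is to ask for the inconsistency report. Sketch only, using the pg id 2.2bb from above; the report is only available once a deep scrub has completed:

# ceph pg deep-scrub 2.2bb
# rados list-inconsistent-obj 2.2bb --format=json-pretty

The JSON output lists the objects and the per-shard errors behind the "3 scrub errors" in the health summary.]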

Re: [ceph-users] Failed to repair pg

2019-03-07 Thread Herbert Alexander Faleiros
On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote:
> Hi,
>
> # ceph health detail
> HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent
> OSD_SCRUB_ERRORS 3 scrub errors
> PG_DAMAGED Possible data damage: 1 pg inconsistent
> pg [...]
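[Reader's note, a hedged aside: when ceph pg repair does not make the errors go away, the cluster log normally records why each object was flagged. Assuming default log paths on a monitor host, something like

# grep 'ERR' /var/log/ceph/ceph.log | grep 2.2bb

surfaces the per-object scrub messages (read errors, size or digest mismatches, missing clones) hiding behind the one-line health summary.]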

Re: [ceph-users] Failed to repair pg

2019-03-08 Thread Herbert Alexander Faleiros
Hi, thanks for the answer.

On Thu, Mar 07, 2019 at 07:48:59PM -0800, David Zafman wrote:
> See what results you get from this command.
>
> # rados list-inconsistent-snapset 2.2bb --format=json-pretty
>
> You might see this, so nothing interesting. If you don't get json, then
> re-run a scrub [...]
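[Reader's note: on a healthy snapset that command normally prints an essentially empty report, along these lines (illustrative shape only, not output from this cluster):

{
    "epoch": 12345,
    "inconsistents": []
}

A damaged clone instead shows up as an entry under "inconsistents" describing the affected object. If no JSON comes back at all, the scrub report has expired or never ran, which is why the reply suggests re-running a scrub first.]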

Re: [ceph-users] Failed to repair pg

2019-03-08 Thread Herbert Alexander Faleiros
Hi,

[...]
> Now I have:
>
> HEALTH_ERR 5 scrub errors; Possible data damage: 1 pg inconsistent
> OSD_SCRUB_ERRORS 5 scrub errors
> PG_DAMAGED Possible data damage: 1 pg inconsistent
> pg 2.2bb is active+clean+inconsistent, acting [36,12,80]
>
> Jumped from 3 to 5 scrub errors now.

did the same [...]
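[Reader's note: one way to dig further at this point (a sketch only; the pool and object names are placeholders, since the truncated message does not show which object is involved) is to compare the snapset report with the clones the cluster currently knows about:

# rados list-inconsistent-snapset 2.2bb --format=json-pretty
# rados -p <pool> listsnaps <object-name>

The first command, already suggested upthread, names the object and clone ids being complained about; the second lists that object's clones as RADOS sees them, which helps distinguish an orphaned clone from a genuinely missing one.]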

[ceph-users] Too many PGs during filestore=>bluestore migration

2019-03-15 Thread Herbert Alexander Faleiros
Hi,

I'm migrating my OSDs to bluestore (Luminous 12.2.10) by recreating the OSDs, and everything looks good (just a few OSDs left). But yesterday a weird thing happened after I set a wrong weight on a newly migrated OSD: it should be 2 but I put 6 (hardcoded in my salt state, oops). I got slow requests, when [...]
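[Reader's note for anyone who makes the same slip: the usual way to put a mis-set CRUSH weight back (the osd id below is a placeholder, since the message does not name it) is simply

# ceph osd crush reweight osd.<id> 2.0
# ceph osd df tree

The first command restores the intended weight of 2 and lets backfill drain the extra PGs off the device; the second is a sanity check that the CRUSH weight column now matches what the salt state should have set.]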

Re: [ceph-users] Too many PGs during filestore=>bluestore migration

2019-03-15 Thread Herbert Alexander Faleiros
On Fri, Mar 15, 2019 at 10:23:31AM -0300, Herbert Alexander Faleiros wrote:
[...]
> Looking into why, I found the same kind of log, and then something I don't
> understand:
>
> # ceph osd pool get pg_num
> pg_num: 4096
>
> *but* counting the PGs (using ceph osd df or the mgr dashboard) [...]
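[Reader's note on comparing those numbers: pg_num counts each PG in the pool once, while the PGS column of ceph osd df counts every PG instance mapped to an OSD, so summing it across OSDs gives roughly pg_num times the replication size, plus any PGs still held by old OSDs while backfill from the filestore-to-bluestore migration is in flight. To count the PGs of one pool directly (pool name is a placeholder):

# ceph pg ls-by-pool <pool> | wc -l

That figure, give or take a header line, should stay at the configured pg_num (4096 above) even while replicas are being shuffled around.]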