Hi,
# ceph health detail
HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 3 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.2bb is active+clean+inconsistent, acting [36,12,80]
# ceph pg repair 2.2bb
instructing pg 2.2bb on osd.36 to repair
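
As a side note, one way to see exactly which object copies a scrub has
flagged, before or after issuing the repair (a sketch, assuming pg 2.2bb
as above; if no scrub information is reported, trigger a deep scrub
first with the second command):

# rados list-inconsistent-obj 2.2bb --format=json-pretty
# ceph pg deep-scrub 2.2bb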
On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote:
> Hi,
>
> # ceph health detail
> HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent
> OSD_SCRUB_ERRORS 3 scrub errors
> PG_DAMAGED Possible data damage: 1 pg inconsistent
> pg 2.2bb is active+clean+inconsistent, acting [36,12,80]
Hi,
thanks for the answer.
On Thu, Mar 07, 2019 at 07:48:59PM -0800, David Zafman wrote:
> See what results you get from this command.
>
> # rados list-inconsistent-snapset 2.2bb --format=json-pretty
>
> You might see this, so nothing interesting. If you don't get json, then
> re-run a scrub
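
For reference, re-running the scrub and repeating the check could look
roughly like this (a sketch, again for pg 2.2bb; the deep scrub has to
finish before the listing is refreshed, progress shows up in ceph -s):

# ceph pg deep-scrub 2.2bb
# ceph -s
# rados list-inconsistent-snapset 2.2bb --format=json-pretty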
Hi,
[...]
> Now I have:
>
> HEALTH_ERR 5 scrub errors; Possible data damage: 1 pg inconsistent
> OSD_SCRUB_ERRORS 5 scrub errors
> PG_DAMAGED Possible data damage: 1 pg inconsistent
> pg 2.2bb is active+clean+inconsistent, acting [36,12,80]
>
> Jumped from 3 to 5 scrub errors now.
did the same
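
One way to see what the extra errors refer to is the primary OSD's log
from the scrub (a sketch, assuming osd.36 is still the primary, as in
the acting set above, and the default log location):

# grep 2.2bb /var/log/ceph/ceph-osd.36.log | grep -i scrub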
Hi,
I'm migrating my OSDs to bluestore (Luminous 12.2.10), recreating the
OSDs. Everything looks good (just a few OSDs left), but yesterday a
weird thing happened after I set a wrong weight on a newly migrated
OSD: it should be 2 but I put 6 (hardcoded in my salt state, oops). I
got slow requests, when
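
For the record, the weight itself is easy to put back once noticed,
roughly like this (a sketch, assuming it is the CRUSH weight that was
set wrong; osd.99 is a placeholder for the affected OSD's id):

# ceph osd crush reweight osd.99 2.0
# ceph osd df tree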
On Fri, Mar 15, 2019 at 10:23:31AM -0300, Herbert Alexander Faleiros wrote:
[...]
> Looking into why, I found the same kind of log, and then I found something
> I don't understand:
>
> # ceph osd pool get pg_num
> pg_num: 4096
>
> *but* counting the PGs (using ceph osd df or the mgr dashboard)
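
A rough way to compare the configured pg_num with the PGs that actually
exist is to count them per pool (a sketch; <pool> is a placeholder for
the pool name, and ceph pg ls-by-pool prints one header line above the
per-PG lines):

# ceph osd pool get <pool> pg_num
# ceph pg ls-by-pool <pool> | wc -l
# ceph osd pool ls detail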