Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-16 Thread Massimo Sgaravatto
And I confirm that a repair is not useful. As far as I can see it simply "cleans" the error (without modifying the big object), but the error of course reappears when the deep scrub runs again on that PG. Cheers, Massimo On Thu, Jan 16, 2020 at 9:35 AM Massimo Sgaravatto < massimo.sgarava...@gmail.
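
The repair/deep-scrub cycle being described looks roughly like this (a sketch; <pgid> is a placeholder, not a value from the thread):

    # per the thread, this clears the inconsistency but leaves the oversized object untouched
    ceph pg repair <pgid>
    # the size_too_large error comes back once the PG is deep-scrubbed again
    ceph pg deep-scrub <pgid>
    ceph health detail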

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-16 Thread Massimo Sgaravatto
In my cluster I saw that the problematic objects have been uploaded by a specific application (onedata), which I think used to upload the files doing something like: rados --pool <poolname> put <objname> <file>. Now (since Luminous?) the default max object size is 128MB, but if I am not wrong it was 100GB before. This would e
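
For context, a sketch of how the relevant pieces can be checked on a Nautilus cluster (pool and object names are placeholders):

    # the per-object limit that the deep scrub now checks (128MB by default on recent releases, per the thread)
    ceph config get osd osd_max_object_size
    # size of a suspect object
    rados -p <poolname> stat <objname>
    # a single whole-file put like this is what produces such an object
    rados -p <poolname> put <objname> <largefile>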

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-15 Thread Liam Monahan
I just changed my max object size to 256MB and scrubbed and the errors went away. I’m not sure what can be done to reduce the size of these objects, though, if it really is a problem. Our cluster has dynamic bucket index resharding turned on, but that sharding process shouldn’t help it if non-
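
One way to make the change being described here, assuming a Nautilus cluster with the centralized config database (the pg id is a placeholder):

    # 256MB expressed in bytes
    ceph config set osd osd_max_object_size 268435456
    # deep-scrub the affected PG again so the size_too_large check is re-evaluated
    ceph pg deep-scrub <pgid>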

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-15 Thread Massimo Sgaravatto
I never changed the default value for that attribute. I am missing why I have such big objects around. I am also wondering what a pg repair would do in such a case. On Wed, 15 Jan 2020, 16:18 Liam Monahan wrote: > Thanks for that link. > > Do you have a default osd max object size of 128M? I’m

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-15 Thread Liam Monahan
Thanks for that link. Do you have a default osd max object size of 128M? I’m thinking about doubling that limit to 256MB on our cluster. Our largest object is only about 10% over that limit. > On Jan 15, 2020, at 3:51 AM, Massimo Sgaravatto > wrote: > > I guess this is coming from: > > ht

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-15 Thread Massimo Sgaravatto
I guess this is coming from https://github.com/ceph/ceph/pull/30783, introduced in Nautilus 14.2.5. On Wed, Jan 15, 2020 at 8:10 AM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > As I wrote here: > > > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/037909.html > >

Re: [ceph-users] PG inconsistent with error "size_too_large"

2020-01-14 Thread Massimo Sgaravatto
As I wrote here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/037909.html I saw the same after an update from Luminous to Nautilus 14.2.6. Cheers, Massimo On Tue, Jan 14, 2020 at 7:45 PM Liam Monahan wrote: > Hi, > > I am getting one inconsistent object on our cluster with

Re: [ceph-users] PG inconsistent, "pg repair" not working

2018-09-25 Thread Brad Hubbard
On Tue, Sep 25, 2018 at 7:50 PM Sergey Malinin wrote: > > # rados list-inconsistent-obj 1.92 > {"epoch":519,"inconsistents":[]} It's likely the epoch has changed since the last scrub and you'll need to run another scrub to repopulate this data. > > September 25, 2018 4:58 AM, "Brad Hubbard" wro
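
In other words, something along these lines (a sketch, reusing the pg id from the thread):

    # trigger a fresh deep scrub so the inconsistency data is repopulated
    ceph pg deep-scrub 1.92
    # once the scrub has finished, the list should no longer be empty
    rados list-inconsistent-obj 1.92 --format=json-pretty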

Re: [ceph-users] PG inconsistent, "pg repair" not working

2018-09-25 Thread Sergey Malinin
# rados list-inconsistent-obj 1.92 {"epoch":519,"inconsistents":[]} September 25, 2018 4:58 AM, "Brad Hubbard" wrote: > What does the output of the following command look like? > > $ rados list-inconsistent-obj 1.92

Re: [ceph-users] PG inconsistent, "pg repair" not working

2018-09-25 Thread Marc Roos
And where is the manual for bluestore? -Original Message- From: mj [mailto:li...@merit.unu.edu] Sent: Tuesday, 25 September 2018 9:56 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] PG inconsistent, "pg repair" not working Hi, I was able to solve a similar is

Re: [ceph-users] PG inconsistent, "pg repair" not working

2018-09-25 Thread mj
Hi, I was able to solve a similar issue on our cluster using this blog: https://ceph.com/geen-categorie/ceph-manually-repair-object/ It does help if you are running a 3/2 config. Perhaps it helps you as well. MJ On 09/25/2018 02:37 AM, Sergey Malinin wrote: Hello, During normal operation ou
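
For reference, the procedure in that blog post is roughly the following filestore-era sequence (osd id, pg id and object path are placeholders; it relies on two healthy replicas remaining, hence the 3/2 remark):

    # identify the bad replica from the primary's log, then stop the OSD holding it
    systemctl stop ceph-osd@<id>
    ceph-osd -i <id> --flush-journal
    # move the broken copy of the object out of the PG directory
    mv /var/lib/ceph/osd/ceph-<id>/current/<pgid>_head/<object> /root/
    # restart the OSD and let repair restore the object from a good replica
    systemctl start ceph-osd@<id>
    ceph pg repair <pgid>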

Re: [ceph-users] pg inconsistent, scrub stat mismatch on bytes

2018-06-20 Thread David Turner
As a part of the repair operation it runs a deep-scrub on the PG. If it showed active+clean after the repair and deep-scrub finished, then the next run of a scrub on the PG shouldn't change the PG status at all. On Wed, Jun 6, 2018 at 8:57 PM Adrian wrote: > Update to this. > > The affected pg

Re: [ceph-users] pg inconsistent, scrub stat mismatch on bytes

2018-06-06 Thread Adrian
Update to this. The affected pg didn't seem inconsistent: [root@admin-ceph1-qh2 ~]# ceph health detail HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent OSD_SCRUB_ERRORS 1 scrub errors PG_DAMAGED Possible data damage: 1 pg inconsistent pg 6.20 is active+clean+inconsistent, act

Re: [ceph-users] pg inconsistent

2018-03-08 Thread Harald Staub
Hi Brad, Thank you very much for your attention. On 07.03.2018 23:46, Brad Hubbard wrote: On Thu, Mar 8, 2018 at 1:22 AM, Harald Staub wrote: "ceph pg repair" leads to: 5.7bd repair 2 errors, 0 fixed Only an empty list from: rados list-inconsistent-obj 5.7bd --format=json-pretty Inspired by

Re: [ceph-users] pg inconsistent

2018-03-07 Thread Brad Hubbard
On Thu, Mar 8, 2018 at 1:22 AM, Harald Staub wrote: > "ceph pg repair" leads to: > 5.7bd repair 2 errors, 0 fixed > > Only an empty list from: > rados list-inconsistent-obj 5.7bd --format=json-pretty > > Inspired by http://tracker.ceph.com/issues/12577 , I tried again with more > verbose logging a
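
A sketch of the "more verbose logging" step (the osd id is a placeholder; 5.7bd is the pg from the thread):

    # raise debug levels on the PG's primary OSD
    ceph tell osd.<N> injectargs '--debug-osd 20 --debug-ms 1'
    # re-run the repair and follow the primary's log
    ceph pg repair 5.7bd
    tail -f /var/log/ceph/ceph-osd.<N>.log | grep 5.7bd
    # restore the default levels afterwards
    ceph tell osd.<N> injectargs '--debug-osd 1/5 --debug-ms 0/5'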

Re: [ceph-users] pg inconsistent and repair doesn't work

2017-10-25 Thread Wei Jin
I found it is similar to this bug: http://tracker.ceph.com/issues/21388. And I fixed it with a rados command. The pg inconsistent info is like the following; I wish it could be fixed in the future. root@n10-075-019:/var/lib/ceph/osd/ceph-27/current/1.fcd_head# rados list-inconsistent-obj 1.fcd --format=json-pretty {

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-14 Thread Marc Roos
(Quoted OSD log and strace output: bdev close on /var/lib/ceph/osd/ceph-9/block, io_getevents and futex calls; truncated in this preview.)

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-09 Thread Marc Roos
-12.1.1/src/rocksdb/db/db_impl.cc:343] Shutdown complete 2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount 2017-08-09 11:41:25.705389 7f26db8ae100 1 bdev(0x7f26de472e00 /var/lib/ceph/osd/ceph-0/block) close 2017-08-09 11:41:25.944548 7f26db8ae100 1 bdev(0x7f26de2b3a00 /var/lib/ceph/osd/cep

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-08 Thread Sage Weil
(Quoted strace output: futex and madvise calls from the OSD process; truncated in this preview.)

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-08 Thread Brad Hubbard
(Quoted strace output: a series of madvise(MADV_DONTNEED) calls; truncated in this preview.)

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-08 Thread Marc Roos
(Quoted strace output: further madvise(MADV_DONTNEED) calls; truncated in this preview.)

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-06 Thread Brad Hubbard
On Sat, Aug 5, 2017 at 1:21 AM, Marc Roos wrote: > > I have got a placement group inconsistency, and saw some manual where > you can export and import this on another osd. But I am getting an > export error on every osd. > > What does this export_files error -5 actually mean? I thought 3 copies
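
The export/import referred to here is done offline with ceph-objectstore-tool; a sketch (osd ids, pg id and file name are placeholders, the OSDs must be stopped first, and with filestore a --journal-path may also be needed):

    systemctl stop ceph-osd@<id>
    # export the PG from an OSD that still holds a copy
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --pgid <pgid> --op export --file /tmp/<pgid>.export
    # import it into another (also stopped) OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<other-id> --op import --file /tmp/<pgid>.export
    systemctl start ceph-osd@<id>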

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread Marc Roos
:52 To: Marc Roos; ceph-users Subject: Re: [ceph-users] Pg inconsistent / export_files error -5 It _should_ be enough. What happened in your cluster recently? Power Outage, OSD failures, upgrade, added new hardware, any changes at all. What is your Ceph version? On Fri, Aug 4, 2017 at 11:22 AM

Re: [ceph-users] Pg inconsistent / export_files error -5

2017-08-04 Thread David Turner
It _should_ be enough. What happened in your cluster recently? Power Outage, OSD failures, upgrade, added new hardware, any changes at all. What is your Ceph version? On Fri, Aug 4, 2017 at 11:22 AM Marc Roos wrote: > > I have got a placement group inconsistency, and saw some manual where > you

Re: [ceph-users] pg inconsistent : found clone without head

2013-11-26 Thread Laurent Barbe
Hello, log [INF] : 3.136 repair ok, 0 fixed. Thank you Greg, I did it like that, it worked well. Laurent On 25/11/2013 19:10, Gregory Farnum wrote: On Mon, Nov 25, 2013 at 8:10 AM, Laurent Barbe wrote: Hello, Since yesterday, scrub has detected an inconsistent pg :( : # ceph health detai

Re: [ceph-users] pg inconsistent : found clone without head

2013-11-25 Thread Gregory Farnum
On Mon, Nov 25, 2013 at 8:10 AM, Laurent Barbe wrote: > Hello, > > Since yesterday, scrub has detected an inconsistent pg :( : > > # ceph health detail (ceph version 0.61.9) > HEALTH_ERR 1 pgs inconsistent; 9 scrub errors > pg 3.136 is active+clean+inconsistent, acting [9,1] > 9 scrub errors >