Yeah, this happens all the time during backfilling since Mimic and is
some kind of bug.
It will always resolve itself, but it's still quite annoying.
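For reference, a quick sanity check that no OSD is actually anywhere near the backfillfull threshold could look like this (a sketch using standard Ceph CLI commands; the ratios in the comment are the stock defaults, not values from this cluster):

  # which PGs are flagged, and on which OSDs
  ceph health detail

  # per-OSD utilisation; compare %USE against the backfillfull ratio
  ceph osd df tree

  # configured nearfull / backfillfull / full thresholds (defaults 0.85 / 0.90 / 0.95)
  ceph osd dump | grep -i ratio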
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
I've just seen this when *removing* an OSD too.
Issue resolved itself during recovery. OSDs were not full, not even
close; there's virtually nothing on this cluster.
Mimic 13.2.4 on RHEL 7.6. OSDs are all Bluestore HDD with SSD DBs.
Everything is otherwise default.
cluster:
id: MY ID
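To watch the affected PGs drain out while recovery runs, something along these lines should work (a sketch, assuming recent Luminous/Mimic tooling):

  # list only the PGs currently flagged backfill_toofull
  ceph pg ls backfill_toofull

  # or just follow overall recovery progress
  watch -n 5 ceph -s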
Is there no one who can pay attention to the letter and say something?
From: "Fyodor Ustinov"
To: "Caspar Smit"
Cc: "Jan Kasprzak" , "ceph-users"
Sent: Thursday, 31 January, 2019 16:50:24
Subject: Re: [ceph-users] backfill_toofull after adding new OSDs
Hi!
I saw the same several times when I added a new OSD to the cluster. One or two PGs
in "backfill_toofull" state.
Jan Kasprzak wrote:
: : - Original Message -
: : From: "Caspar Smit"
: : To: "Jan Kasprzak"
: : Cc: "ceph-users"
: : Sent: Thursday, 31 January, 2019 15:43:07
: : Subject: Re: [ceph-users] backfill_toofull after adding new OSDs
: :
: : Hi Jan,
: [...] the data reshuffle.
: 13.2.4 on CentOS 7.
: -Yenya
- Original Message -
From: "Caspar Smit"
To: "Jan Kasprzak"
Cc: "ceph-users"
Sent: Thursday, 31 January, 2019 15:43:07
Subject: Re: [ceph-users] backfill_toofull after adding new OSDs
Hi Jan,
You might be hitting the same issue as Wido here:
https://www.spinics.net/lists/ceph-users/msg50603.html
Kind regards,
Caspar
On Thu, 31 Jan 2019 at 14:36, Jan Kasprzak wrote:
Hello, ceph users,
I see the following HEALTH_ERR during cluster rebalance:
Degraded data redundancy (low space): 8 pgs backfill_toofull
Detailed description:
I have upgraded my cluster to mimic and added 16 new bluestore OSDs
on 4 hosts. The hosts are in a separate region in my CRUSH map.
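As an aside, if an OSD really were sitting just above the backfillfull threshold, the ratio can be raised temporarily until the reshuffle finishes and then restored afterwards; the 0.92 below is purely an illustrative value:

  # inspect utilisation of the OSDs backing the affected PGs
  ceph osd df

  # temporarily raise the backfillfull threshold (default 0.90) ...
  ceph osd set-backfillfull-ratio 0.92

  # ... and put it back once the rebalance is done
  ceph osd set-backfillfull-ratio 0.90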