The backfill_toofull state means that one PG which tried to backfill
couldn’t do so because the *target* for backfilling didn’t have the amount
of free space necessary (with a large buffer so we don’t screw up!). It
doesn’t indicate anything about the overall state of the cluster, and will
often resolve on its own as recovery proceeds.
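If you want to see which PG it is and how full the backfill target actually
is, something like the following should work on a recent release (treat the
exact invocations as a sketch and check them against your version):

  ceph health detail
  ceph pg ls backfill_toofull
  ceph osd df                   # per-OSD utilization (%USE) and weights
  ceph osd dump | grep ratio    # full / backfillfull / nearfull thresholds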
well.. it repaired itself.. hmm.. still.. strange.. :-)
[INF] Health check cleared: PG_DEGRADED_FULL (was: Degraded data redundancy (low space): 1 pg backfill_toofull)
On Sat, Jul 28, 2018 at 12:03 PM Sinan Polat wrote:
> Ceph has tried to (re)balance your data; backfill_toofull means there is
> no available space to move data to, but you have plenty of space.
i set up my test cluster many years ago with only 3 OSDs and never
increased the PGs :-) I plan on doing so after it's healthy again... it's
long overdue... maybe 512 :-)
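if I understand the docs right, the bump itself would be something like
this (the pool name below is just a placeholder for my actual pool, and
pgp_num has to be raised along with pg_num):

  ceph osd pool set <pool-name> pg_num 512
  ceph osd pool set <pool-name> pgp_num 512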
and yes, that's what i thought too.. it should have more than enough space
to move data.. hmm...
i wouldn't be surprised if i
Ceph has tried to (re)balance your data; backfill_toofull means there is
no available space to move data to, but you have plenty of space.
Why do you have so few PGs? I would increase the number of PGs, but before
doing so let's see what others say.
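For reference, the usual rule of thumb from the Ceph docs is roughly
(number of OSDs × 100) / replica count, rounded to a power of two. Assuming
you now have 7-8 OSDs and size 3 pools (my assumption, adjust for your real
numbers), that is about (8 × 100) / 3 ≈ 267, so 256 or 512 are both
reasonable targets.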
Sinan
> On 28 Jul 2018 at 11:50, Sebastian wrote:
Hi,
i added 4 more OSDs on my 4-node test cluster and now i'm in HEALTH_ERR
state. Right now it's still recovering, but still, should this happen? None
of my OSDs are full. Maybe i need more PGs? But since my %USE is < 40%, it
should still be ok to recover without HEALTH_ERR?
  data:
    pools: