Folks,
I am building a new Ceph cluster and currently have 8 OSDs in total (in
the future I am going to add more).
Based on the official documentation, I should calculate the total PG count as follows:
8 * 100 / 3 ≈ 266 (rounded up to the nearest power of 2: 512)
Now I have 2 pools in my Ceph cluster at present (images & vms), so per
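For what it's worth, here is a minimal sketch of that calculation in Python. The 100-PGs-per-OSD target and the replica count of 3 come from the formula above; how the total gets split across pools is my own assumption, since the question is cut off at that point:

import math

# Rough helper for the PG formula quoted above:
#   total = (num_osds * target_pgs_per_osd) / replica_count,
# then rounded up to the nearest power of two.
def total_pg_count(num_osds, replica_count=3, target_pgs_per_osd=100):
    raw = num_osds * target_pgs_per_osd / replica_count
    return 2 ** math.ceil(math.log2(raw))

# One common way to split the total across pools of similar size
# (each pool's pg_num should itself end up a power of two).
def pg_per_pool(num_osds, num_pools, replica_count=3):
    return total_pg_count(num_osds, replica_count) // num_pools

print(total_pg_count(8))     # 8 * 100 / 3 ~= 266 -> 512
print(pg_per_pool(8, 2))     # 512 split across 2 pools -> 256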
The backfill_toofull state means that one PG which tried to backfill
couldn't do so because the *target* for backfilling didn't have the amount
of free space necessary (with a large buffer so we don't screw up!). It
doesn't indicate anything about the overall state of the cluster, and will
often resolve itself.
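If you want to see which PGs are currently in that state and how full the OSDs are relative to the backfillfull ratio, something like this is enough (just a thin wrapper around the ceph CLI, assuming a working ceph.conf and a keyring with read access):

import subprocess

def run(cmd):
    # Shell out to the ceph CLI and return its text output.
    return subprocess.check_output(cmd, shell=True, text=True)

# "ceph health detail" names the PGs that are backfill_toofull.
print(run("ceph health detail | grep -i backfill_toofull || true"))

# "ceph osd df" shows per-OSD utilisation; compare %USE against the
# backfillfull/nearfull/full ratios reported by "ceph osd dump".
print(run("ceph osd df"))
print(run("ceph osd dump | grep -i ratio"))

On Luminous and later the threshold can be raised temporarily with "ceph osd set-backfillfull-ratio" while data is still moving, but treat that as a stop-gap rather than a fix.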
Strange...
- I wouldn't swear to it, but I'm pretty sure v13.2.0 was working OK before
- so what do others say/see?
- is no one on v13.2.1 so far (hard to believe), OR
- do others just not have this "systemctl ceph-osd.target" problem, and everything just works?
If you also __MIGRATED__ from Luminous (say ~ v12.2.5 or older)
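In case it helps anyone compare notes: here is a quick way to check whether the upgrade left the OSD units disabled on a host. This is my assumption about what the problem looks like, and it presumes systemd-managed OSDs with the default "ceph" cluster name:

import os
import subprocess

def systemctl(*args):
    # Read-only systemctl queries only; nothing is changed here.
    return subprocess.run(["systemctl", *args],
                          capture_output=True, text=True).stdout.strip()

# OSD ids present on this host (one "ceph-<id>" directory per OSD).
osd_ids = [d.split("-", 1)[1] for d in os.listdir("/var/lib/ceph/osd")
           if d.startswith("ceph-")]

print("ceph-osd.target:", systemctl("is-enabled", "ceph-osd.target"),
      systemctl("is-active", "ceph-osd.target"))
for osd in osd_ids:
    unit = "ceph-osd@{}.service".format(osd)
    print(unit, systemctl("is-enabled", unit), systemctl("is-active", unit))
    # If a unit shows up disabled here, "systemctl enable --now <unit>"
    # would bring it back -- deliberately not automated in this sketch.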