[ceph-users] pg calculation question

2018-07-29 Thread Satish Patel
Folks, I am building new ceph storage and i have currently 8 OSD total (in future i am going to add more) Based on official document I should do following to calculate Total PG 8 * 100 / 3 = 266 ( nearest power of 2 is 512 ) Now i have 2 pool at present in my ceph cluster (images & vms) so per

Re: [ceph-users] Degraded data redundancy (low space): 1 pg backfill_toofull

2018-07-29 Thread Gregory Farnum
The backfill_toofull state means that one PG which tried to backfill couldn’t do so because the *target* for backfilling didn’t have the amount of free space necessary (with a large buffer so we don’t screw up!). It doesn’t indicate anything about the overall state of the cluster, will often resolv
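
To make the "target did not have enough free space" point concrete, here is a toy sketch of that kind of check, assuming a plain comparison of projected usage against the backfillfull ratio (0.90 by default in recent releases); the real OSD reservation logic is more involved than this:

```python
# Toy illustration of the backfill-target free-space check described above.
# Assumes a simple "projected usage vs. backfillfull ratio" comparison;
# the actual OSD reservation code does more than this.

BACKFILLFULL_RATIO = 0.90  # default backfillfull ratio

def target_can_accept_backfill(used_bytes: int,
                               total_bytes: int,
                               incoming_pg_bytes: int,
                               ratio: float = BACKFILLFULL_RATIO) -> bool:
    """Return True if the backfill target still has room for the incoming PG."""
    projected = (used_bytes + incoming_pg_bytes) / total_bytes
    return projected < ratio

# Example: a 4 TB OSD that is 85% full cannot take a 300 GB PG without
# crossing the 90% buffer, so the PG would show up as backfill_toofull.
print(target_can_accept_backfill(used_bytes=int(3.4e12),
                                 total_bytes=int(4e12),
                                 incoming_pg_bytes=int(3e11)))
```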

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-29 Thread Nathan Cutler
Strange... I wouldn't swear to it, but I'm pretty sure v13.2.0 was working OK before. So what do others say/see? Is nobody on v13.2.1 so far (hard to believe), OR do others just not have this "systemctl ceph-osd.target" problem and everything just works? If you also __MIGRATED__ from Luminous (say ~ v12.2.5 or older

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-29 Thread ceph . novice
Strange... I wouldn't swear to it, but I'm pretty sure v13.2.0 was working OK before. So what do others say/see? Is nobody on v13.2.1 so far (hard to believe), OR do others just not have this "systemctl ceph-osd.target" problem and everything just works? If you also __MIGRATED__ from Luminous (say ~ v12.2.5 or older)
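
For anyone comparing notes after the Luminous -> Mimic upgrade, a small helper sketch for checking whether the per-OSD systemd units behind ceph-osd.target are still enabled and active on a node (the unit names assume the standard ceph-osd@<id>.service naming; adjust the OSD ids to your host):

```python
# List ceph-osd@<id> units and whether each is enabled/active via systemctl.
import subprocess

def unit_state(unit: str, query: str) -> str:
    """Run `systemctl is-enabled` or `systemctl is-active` and return the answer."""
    result = subprocess.run(["systemctl", query, unit],
                            capture_output=True, text=True)
    return result.stdout.strip() or result.stderr.strip()

def check_osd_units(osd_ids):
    for osd_id in osd_ids:
        unit = f"ceph-osd@{osd_id}.service"
        print(unit, unit_state(unit, "is-enabled"), unit_state(unit, "is-active"))

# Example: check the OSDs hosted on this node.
check_osd_units([0, 1, 2])
```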